# Order (algebraic number theory)

In algebraic number theory, an order of a number field $K$ is a subring of $K$ that acts (via multiplication) as an endomorphism ring on certain subgroups of $(K,+)$, the lattices; at the same time, the order itself is a special lattice. The notions of order and lattice play a role in the investigation of questions of divisibility in number fields and in the generalization of the fundamental theorem of arithmetic to number fields. These ideas and concepts go back to Richard Dedekind. The more specific definitions in the first part of the article follow Leutbecher (1996); afterwards, a generalization of the notion of order following Silverman (1986) is described. To distinguish them from more general and deviating notions, the specific concepts are also referred to as Dedekind lattice and Dedekind order.

## Definitions

- A number field $K$ is here an extension field of the field $\mathbb{Q}$ of rational numbers that has finite dimension $n$ over $\mathbb{Q}$; this dimension is called the degree of the field extension.
- Every finitely generated subgroup $M$ of $(K,+)$ that contains a $\mathbb{Q}$-basis of $K$ is called a lattice in the number field $K$. Equivalently, the lattices in $K$ are exactly the free subgroups of $(K,+)$ of rank $[K:\mathbb{Q}]$.
- Two lattices $M$ and $N$ are called equivalent (in the wider sense) if there is a number $\lambda \in K^{\times}$ with $\lambda \cdot M = N$, and equivalent in the narrower sense if such a $\lambda$ even exists in $\mathbb{Q}$.
- The order of a lattice $M$ is $\mathcal{O} = \mathcal{O}(M) = \{\omega \in K : \omega \cdot M \subseteq M\}$.
Equivalently: every lattice $G$ that is at the same time a subring of $K$ is an order (at least of itself as a lattice, but also of all equivalent lattices).

## Properties

- Equivalent lattices have the same order.
- Every order is itself a lattice.
- Every order is a subring of $K$.
- Every element of an order is an algebraic integer.
- If $\alpha \in K$ is an algebraic integer and $\mathcal{O}$ is an order, then $\mathcal{O}[\alpha]$ is also an order.
- There exists a maximal order of $K$ in the sense of inclusion, the main order or maximal order $\mathcal{O}_K$ of $K$.
- The main order consists of exactly the algebraic integers in $K$; that is, the terms ring of integers and main order denote the same subset of $K$.

## Connection with geometric lattices

The choice of the word "lattice" indicates a connection with lattices in Euclidean spaces that actually exists: the number field $K$ is an $n$-dimensional vector space over $\mathbb{Q}$. This vector space can be embedded in an $n$-dimensional real vector space, and in that vector space the Dedekind lattices are special geometric lattices. Dedekind lattices are never "flat" (that is, contained in a proper real subspace), since they always contain a $\mathbb{Q}$-basis of $K$ and thus an $\mathbb{R}$-basis of the real vector space.

The intuitive picture of a lattice in $n$-dimensional space can be useful for understanding. For example, for an integer $k > 1$, the Dedekind lattice $k \cdot M$ is a lattice of "wider mesh" than the Dedekind lattice $M$.
The lattices $M$ and $k \cdot M$ can be mapped onto one another by a central dilation.

Caution should be exercised with proofs referring to the embedding described above. If, for example, a number field $K$ contains the algebraic number $\sqrt{2}$ and this is multiplied as a vector by the real scalar $\sqrt{2}$, the result is not $2$. To distinguish the different multiplications, one has to formalize this embedding as a tensor product $\mathcal{O} \to K \otimes \mathbb{R}$ (see the next section).

## Generalization

If, more generally, $A$ is a finite-dimensional, not necessarily commutative $\mathbb{Q}$-algebra, then a subring $\mathcal{O} \subset A$ is called an order in $A$ if

- $\mathcal{O}$ is a finitely generated $\mathbb{Z}$-module and
- the canonical homomorphism $\mathcal{O} \otimes \mathbb{Q} \to A$ is an isomorphism.

This notion generalizes the concept of an order in a number field defined above. Examples of orders in quaternion algebras over $\mathbb{Q}$ are the endomorphism rings of supersingular elliptic curves.

## Literature

- Armin Leutbecher: Number Theory. An Introduction to Algebra. Springer, Berlin et al. 1996, ISBN 3-540-58791-8.
- Joseph H. Silverman: The Arithmetic of Elliptic Curves (= Graduate Texts in Mathematics, Vol. 106). Springer, New York NY 1986, ISBN 3-540-96203-4; orders, especially in quaternion algebras: III §9; supersingular elliptic curves: V §3.

## Sources

1. P. G. Lejeune Dirichlet: Lectures on Number Theory. Edited and with additions by R. Dedekind. 4th, revised and enlarged edition. Vieweg, Braunschweig 1894.
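As a concrete illustration of the definition $\mathcal{O}(M) = \{\omega \in K : \omega \cdot M \subseteq M\}$, the membership test can be carried out with exact rational arithmetic in a quadratic field. The following is a minimal sketch, not part of the article's sources; the field $\mathbb{Q}(\sqrt{2})$ and the lattice $M = \mathbb{Z} + \mathbb{Z}\sqrt{2}$ are chosen purely for illustration:

```python
from fractions import Fraction

# Elements of K = Q(sqrt(2)) are represented as pairs (a, b) meaning a + b*sqrt(2),
# with rational a, b. Multiplication uses sqrt(2)^2 = 2.
def mul(x, y):
    a, b = x
    c, d = y
    return (a * c + 2 * b * d, a * d + b * c)

def in_lattice(x):
    # M = Z*1 + Z*sqrt(2): membership means both coordinates are integers.
    a, b = x
    return a.denominator == 1 and b.denominator == 1

def maps_lattice_into_itself(omega):
    # omega * M is a subset of M iff omega * g lies in M for each basis vector g.
    basis = [(Fraction(1), Fraction(0)), (Fraction(0), Fraction(1))]  # 1 and sqrt(2)
    return all(in_lattice(mul(omega, g)) for g in basis)

sqrt2 = (Fraction(0), Fraction(1))
half_sqrt2 = (Fraction(0), Fraction(1, 2))

print(maps_lattice_into_itself(sqrt2))       # sqrt(2) lies in the order of M
print(maps_lattice_into_itself(half_sqrt2))  # sqrt(2)/2 does not
```

Here $M = \mathbb{Z}[\sqrt{2}]$ is itself a ring, so by the equivalence stated above it is its own order.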
BEAM FORMULAS WITH SHEAR AND MOMENT DIAGRAMS

A simply supported beam is the most simple arrangement of the structure: the beam is supported at each end, and the load is distributed along its length. Below are the beam formulas and their respective SFDs and BMDs (cf. Beam Diagrams and Formulas, Table 3-23: Shears, Moments and Deflections). Draw the shear force and bending moment diagram of a simply supported beam carrying a point load. Draw the SF and BM diagrams for a simply supported beam of length l carrying a uniformly distributed load w per unit length across the whole beam. For symmetric loading, both of the reactions will be equal.

Fig. 1: Formulas for design of a simply supported beam with a linearly varying (triangular) distributed load.

Simply supported beam with gradually varying load: a simply supported beam AB of length l carrying a gradually varying load from zero at B to w per unit length at A, …

Bending moment & shear force calculator for a uniformly varying load (maximum on the left side) on a simply supported beam.

5. simple beam – uniform load partially distributed at one end; 6. simple beam – uniform load partially distributed at each end.

Example: the maximum stress in a "W 12 x 35" steel wide-flange beam, 100 inches long, moment of inertia 285 in⁴, modulus of elasticity 29,000,000 psi, with a center load of 10,000 lb, can be calculated as

σ_max = y_max F L / (4 I) = (6.25 in)(10,000 lb)(100 in) / (4 × 285 in⁴) ≈ 5482 psi

Let us know in the comments what you think about the concepts in this article!
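As a quick check, the W 12 x 35 example above can be reproduced in a few lines. The formula σ_max = y_max·F·L/(4I) combines the midspan moment M = FL/4 of a center-loaded simply supported beam with the flexure formula σ = M·y/I; a minimal sketch:

```python
def max_stress_center_load(y_max, F, L, I):
    """Maximum bending stress for a simply supported beam with a central
    point load: M_max = F*L/4 at midspan, then sigma = M*y/I."""
    M_max = F * L / 4.0          # maximum bending moment, lb-in
    return M_max * y_max / I     # bending stress, psi

sigma = max_stress_center_load(y_max=6.25, F=10000.0, L=100.0, I=285.0)
print(round(sigma))  # ≈ 5482 psi, matching the worked example
```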
Structural beam deflection, stress formulas and calculators: the following web pages contain engineering design calculators that determine the amount of deflection and stress a beam of known cross-section geometry will undergo under the specified load and distribution. Those who require more advanced studies may also apply Macaulay's method to the solution of encastré (fixed-end) beams.

BEAM DEFLECTION FORMULAS (beam type; slope at ends; deflection at any section in terms of x; maximum and center deflection):

Cantilever beam – couple moment M at the free end:

θ = ML/(EI);  y = Mx²/(2EI);  δ_max = ML²/(2EI)

Beam simply supported at ends – uniformly varying load, maximum intensity ω₀ (N/m):

θ₁ = 7ω₀l³/(360EI);  θ₂ = ω₀l³/(45EI)

y = ω₀x(7l⁴ − 10l²x² + 3x⁴)/(360lEI)

δ_max = 0.00652 ω₀l⁴/(EI) at x = 0.519l;  δ = 0.00651 ω₀l⁴/(EI) at the center

The bending moment of a simply supported beam with uniformly varying load can be calculated as Bending Moment = 0.1283 × (total uniformly varying load) × length; the bending moment is the reaction induced in a structural element when an external force or moment is applied to the element, causing it to bend. A simply supported beam cannot have any translational displacements at its support points, but no restriction is placed on rotations at the supports. For a symmetrically loaded beam, R1 = R2 = W/2 (= 1000 kg in the worked example below).
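The tabulated coefficients for the triangular load case, δ_max = 0.00652 ω₀l⁴/EI at x = 0.519l and δ = 0.00651 ω₀l⁴/EI at the center, can be recovered numerically from the deflection curve y(x) = ω₀x(7l⁴ − 10l²x² + 3x⁴)/(360lEI). A sketch with unit values for ω₀, l, E, and I, so that y(x) directly gives the coefficient of ω₀l⁴/(EI):

```python
# Check the triangular-load deflection curve against its tabulated maxima.
w0 = L = E = I = 1.0  # unit values

def y(x):
    return w0 * x * (7 * L**4 - 10 * L**2 * x**2 + 3 * x**4) / (360 * L * E * I)

# Scan the span for the location and value of the maximum deflection.
xs = [i / 100000 for i in range(100001)]
x_max = max(xs, key=y)
print(round(x_max, 3), round(y(x_max), 5))  # ≈ 0.519 and 0.00652
print(round(y(0.5), 5))                     # center deflection ≈ 0.00651
```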
BEAM FIXED AT ONE END, SUPPORTED AT OTHER – CONCENTRATED LOAD AT CENTER

Loads acting downward are taken as negative whereas upward loads are taken as positive. We have already seen the terminology used in the deflection of beams in recent posts; here we calculate the deflection and slope of a simply supported beam carrying a uniformly distributed load throughout its length.

Simply supported beam with uniformly distributed load: for a simply supported beam AB with a uniformly distributed load w per unit length, the maximum deflection occurs at the midpoint C and is given by δ_max = 5wL⁴/(384EI).

Simply supported beam with point load example: a simply supported beam (E = 12 GPa) carrying a uniformly distributed load q = 125 N/m and a point load P = 200 N at midspan; the beam has a rectangular cross section.

Problem 827 | Continuous Beam by Three-Moment Equation.

Fig. 1: Formulas for design of a simply supported beam having uniformly distributed load are shown at the right.
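The midspan value 5wL⁴/(384EI) follows from the standard textbook elastic curve for a simply supported beam under a full-span UDL, y(x) = wx(L³ − 2Lx² + x³)/(24EI); a short exact-arithmetic check:

```python
from fractions import Fraction

# Standard elastic curve for a simply supported beam of span L under UDL w:
#   y(x) = w*x*(L^3 - 2*L*x^2 + x^3) / (24*E*I)
# At midspan this should reduce to 5*w*L^4/(384*E*I).
w = E = I = 1          # unit values, exact arithmetic
L = Fraction(1)

def y(x):
    return w * x * (L**3 - 2 * L * x**2 + x**3) / (24 * E * I)

print(y(L / 2))  # Fraction(5, 384): the midspan coefficient 5/384
```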
Cantilever beam – uniformly varying load, maximum intensity ω₀ at the fixed end:

θ = ω₀l³/(24EI)

y = ω₀x²(10l³ − 10l²x + 5lx² − x³)/(120lEI)

δ_max = ω₀l⁴/(30EI)

Simple beam – uniformly increasing load to one end.

Please note that some of these calculators use the section modulus ("Z") of the beam's cross-section geometry. Beam calculator inputs: length of beam L; load on beam W; point of interest x; Young's modulus E; moment of inertia I. Results: resultant R1 = V1; resultant R2 = V2 (max); shear at x, Vx; … Distance "x" of the section is measured from the origin taken at support A. You will also learn and apply Macaulay's method to the solution for beams with a combination of loads.

In our previous topics we have seen important concepts such as the deflection and slope of a simply supported beam with a point load, of a simply supported beam carrying a uniformly distributed load, and of a cantilever beam with a point load …

Example: find the reactions of a simply supported beam when point loads of 1000 kg and 800 kg along with a uniformly distributed load of 200 kg/m act on it, as shown in the figure below. Take moments about point D to find reaction R1.

Problem 827: see Figure P … Example – beam with a single center load.
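The reaction calculation in the example above (moments about one support, then vertical equilibrium) can be sketched in a few lines. Since the original figure is not reproduced here, the span (6 m) and the positions of the two point loads (2 m and 4 m from support A) are assumed values for illustration only:

```python
def reactions_simply_supported(L, point_loads, udl):
    """Reactions of a simply supported beam with supports A (x=0) and D (x=L).
    point_loads: list of (magnitude, position from A); udl: intensity over the
    full span. Sum of moments about D gives R1; vertical equilibrium gives R2."""
    W_udl = udl * L
    # Moments about the right support D (x = L); the UDL acts at midspan.
    M_D = sum(P * (L - a) for P, a in point_loads) + W_udl * (L / 2)
    R1 = M_D / L
    R2 = sum(P for P, _ in point_loads) + W_udl - R1
    return R1, R2

# Loads from the worked example: 1000 kg and 800 kg point loads plus a
# 200 kg/m UDL. Span and load positions are assumed, not from the figure.
R1, R2 = reactions_simply_supported(6.0, [(1000.0, 2.0), (800.0, 4.0)], 200.0)
print(R1, R2)
```

Whatever positions are used, the check R1 + R2 = total applied load must hold.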
Beam simply supported at ends – concentrated load P …

AF&PA is the national trade association of the forest, paper, and wood products … BEAM DESIGN FORMULAS WITH SHEAR AND MOMENT DIAGRAMS, American Forest & Paper Association, Design Aid No.

The tables below give equations for the deflection, slope, shear, and moment along straight beams for different end conditions and loadings. In the following table, the formulas describing the static response of the simple beam under a linearly varying (triangular) distributed load, ascending from the left to the right, are presented. This calculator uses standard formulae for slope and deflection. First find the reactions of the simply supported beam.

Simply-supported beam with lateral load of varying intensity.

3. simple beam – load increasing uniformly to center; 4. simple beam – uniform load partially distributed.

Simply Supported Beam With Uniformly Distributed Load Formula (November 20, 2018, by Arfan).
Load cases covered: uniformly distributed load; uniform load partially distributed; uniform load partially distributed at one end; uniform load partially distributed at each end; load increasing uniformly to one end; load increasing uniformly to center; concentrated load at center; concentrated load at any point; two equal concentrated loads symmetrically placed; two … Support loads, stresses and deflections.

Fig. 1: Formulas for design of simply supported beam having uniformly distributed load are shown at the right.
Fig. 2: Shear force & bending moment diagram for uniformly distributed load on simply supported beam.
Fig. 3: Formulas for design of simply supported beam having uniformly distributed load at its midspan.
Fig. 4: SFD and BMD for simply supported beam carrying UDL at midspan.
Fig. 5: Shear force and bending moment diagram for simply supported beam with uniformly distributed load at left support.
Fig. 6: Formulas for finding moments and reactions at different sections of a simply supported beam having UDL at right support.
Fig. 8: Formulas for analysis of beam having SFD and BMD at both ends.
Fig. 9: Collection of formulas for analyzing a simply supported beam having uniformly varying load along its whole length.
Fig. 10: Shear force diagram and bending moment diagram for simply supported beam having UVL along its span.
Fig. 11: SFD and BMD for simply supported beam having UVL from the midspan to both ends.
Fig. 12: Formulas for calculating moments and reactions on simply supported beam having UVL from the midspan to both ends.

Since the beam is symmetrical, both reactions are equal.
This calculator provides the result for the bending moment and shear force at a distance "x" from the left support of a simply supported beam carrying a uniformly varying (increasing from right to left) load on a portion of the span. What is the detailed method by which the formula is derived for finding the shear force and bending moment on a simply supported beam with uniformly varying loads?

Beam diagrams and formulas (Waterman): 1. simple beam – uniformly distributed load; 2. simple beam – load increasing uniformly to one end.

Problem 842 | Continuous Beams with Fixed Ends: for the propped beam shown in Fig. P-842, determine the wall moment and the reaction at the prop support.

Taking moments about point D: $$\sum M_{D} = 0$$, i.e. clockwise moments = counterclockwise moments.

How to draw shear force and bending moment diagrams for a simply supported beam: examples (Engineering Intro).

I hope you like the article "Different Types of Beams and Loads"; I have covered almost all the relevant topics here. Please comment and share, follow us on Facebook and Instagram for more updates, and for video lectures follow our YouTube channel "Basic Mech IN".
You can find comprehensive tables in references such as Gere, Lindeburg, and Shigley; however, the tables below cover most of the common cases.

AMERICAN WOOD COUNCIL: The American Wood Council (AWC) is part of the wood products group of the American Forest & Paper Association (AF&PA).

Simply supported UDL beam formulas and equations: this calculator is for finding the slope and deflection at a section of a simply supported beam subjected to a uniformly varying load (UVL) over the full span.

7.11 Simply-supported beam with non-uniformly distributed load.
# 10.7: Collisions of Extended Bodies in Two Dimensions

Contributed by OpenStax, General Physics at OpenStax CNX.

Bowling pins are sent flying and spinning when hit by a bowling ball: angular momentum as well as linear momentum and energy have been imparted to the pins (see Figure $$\PageIndex{1}$$). Many collisions involve angular momentum. Cars, for example, may spin and collide on ice or a wet surface. Baseball pitchers throw curves by putting spin on the baseball. A tennis player can put a lot of top spin on the tennis ball, which causes it to dive down onto the court once it crosses the net. We now take a brief look at what happens when objects that can rotate collide.

Consider the relatively simple collision shown in Figure $$\PageIndex{2}$$, in which a disk strikes and adheres to an initially motionless stick nailed at one end to a frictionless surface. After the collision, the two rotate about the nail. There is an unbalanced external force on the system at the nail. This force exerts no torque, because its lever arm $$r$$ is zero. Angular momentum is therefore conserved in the collision. Kinetic energy is not conserved, because the collision is inelastic. It is possible that linear momentum is not conserved either, because the force at the nail may have a component in the direction of the disk's initial velocity. Let us examine a case of rotation in a collision in Example $$\PageIndex{1}$$.

Figure $$\PageIndex{1}$$: The bowling ball causes the pins to fly, some of them spinning violently. (credit: Tinou Bao, Flickr)

Figure $$\PageIndex{2}$$: (a) A disk slides toward a motionless stick on a frictionless surface. (b) The disk hits the stick at one end and adheres to it, and they rotate together, pivoting around the nail. Angular momentum is conserved for this inelastic collision because the surface is frictionless and the unbalanced external force at the nail exerts no torque.
Example $$\PageIndex{1}$$: Rotation in a Collision

Suppose the disk in Figure $$\PageIndex{2}$$ has a mass of 50.0 g and an initial velocity of 30.0 m/s when it strikes the stick, which is 1.20 m long and has a mass of 2.00 kg.

1. What is the angular velocity of the two after the collision?
2. What is the kinetic energy before and after the collision?
3. What is the total linear momentum before and after the collision?

Strategy for (a)

We can answer the first question using conservation of angular momentum as noted. Because angular momentum is $$I\omega$$, we can solve for angular velocity.

Solution for (a)

Conservation of angular momentum states $L = L',$ where primed quantities stand for conditions after the collision and both momenta are calculated relative to the pivot point. The initial angular momentum of the stick-disk system is that of the disk just before it strikes the stick. That is, $L = I\omega,$ where $$I$$ is the moment of inertia of the disk and $$\omega$$ is its angular velocity around the pivot point. Now, $$I = mr^2$$ (taking the disk to be approximately a point mass) and $$\omega = v/r$$, so that $L = mr^2\dfrac{v}{r} = mvr.$ After the collision, $L' = I'\omega'.$ It is $$\omega'$$ that we wish to find. Conservation of angular momentum gives $I'\omega' = mvr.$ Rearranging the equation yields $\omega' = \dfrac{mvr}{I'},$ where $$I'$$ is the moment of inertia of the stick and disk stuck together, which is the sum of their individual moments of inertia about the nail. The moment of inertia of a rod rotating around one end is $$I = Mr^2/3$$.
Thus, $I' = mr^2 + \dfrac{Mr^2}{3} = \left(m + \dfrac{M}{3}\right)r^2.$ Entering known values into this equation yields $I' = (0.0500 \, kg + 0.667 \, kg)(1.20 \, m)^2 = 1.032 \, kg \cdot m^2.$ The value of $$I'$$ is now entered into the expression for $$\omega'$$, which yields $\omega' = \dfrac{mvr}{I'} = \dfrac{(0.0500 \, kg)(30.0 \, m/s)(1.20 \, m)}{1.032 \, kg \cdot m^2}$ $= 1.744 \, rad/s \approx 1.74 \, rad/s.$

Strategy for (b)

The kinetic energy before the collision is the incoming disk's translational kinetic energy, and after the collision it is the rotational kinetic energy of the two stuck together.

Solution for (b)

First, we calculate the translational kinetic energy by entering the given values for the mass and speed of the incoming disk: $KE = \dfrac{1}{2}mv^2 = (0.500)(0.0500 \, kg)(30.0 \, m/s)^2 = 22.5 \, J.$ After the collision, the rotational kinetic energy can be found because we now know the final angular velocity and the final moment of inertia. Entering the values into the rotational kinetic energy equation gives $KE' = \dfrac{1}{2} I'\omega'^2 = (0.5)(1.032 \, kg \cdot m^2)(1.744 \, rad/s)^2$ $= 1.57 \, J.$

Strategy for (c)

The linear momentum before the collision is that of the disk. After the collision, it is the sum of the disk's momentum and that of the center of mass of the stick.

Solution for (c)

Before the collision, the linear momentum is $p = mv = (0.0500 \, kg)(30.0 \, m/s) = 1.50 \, kg \cdot m/s.$ After the collision, the disk and the stick's center of mass move in the same direction. The total linear momentum is that of the disk moving at a new velocity $$v' = r\omega'$$ plus that of the stick's center of mass, which moves at half this speed because $$v_{CM} = \left(\frac{r}{2}\right)\omega' = \frac{v'}{2}$$.
Thus, $p' = mv' + Mv_{CM} = mv' + \dfrac{Mv'}{2}.$ Gathering similar terms yields $p' = \left(m + \dfrac{M}{2}\right)v',$ so that $p' = \left(m + \dfrac{M}{2}\right)r\omega'.$ Substituting known values, $p' = (1.050 \, kg)(1.20 \, m)(1.744 \, rad/s) = 2.20 \, kg \cdot m/s.$

Discussion

First note that the kinetic energy is less after the collision, as predicted, because the collision is inelastic. More surprising is that the momentum after the collision is actually greater than before the collision. This result can be understood if you consider how the nail affects the stick and vice versa. Apparently, the stick pushes backward on the nail when first struck by the disk. The nail's reaction (consistent with Newton's third law) is to push forward on the stick, imparting momentum to it in the same direction in which the disk was initially moving, thereby increasing the momentum of the system.

The above example has other implications. For example, what would happen if the disk hit very close to the nail? Obviously, a force would be exerted on the nail in the forward direction. So, when the stick is struck at the end farthest from the nail, a backward force is exerted on the nail, and when it is hit at the end nearest the nail, a forward force is exerted on the nail. Thus, striking it at a certain point in between produces no force on the nail. This intermediate point is known as the percussion point.

An analogous situation occurs in tennis, as seen in Figure $$\PageIndex{3}$$. If you hit a ball with the end of your racquet, the handle is pulled away from your hand. If you hit a ball much farther down, for example on the shaft of the racquet, the handle is pushed into your palm. And if you hit the ball at the racquet's percussion point (what some people call the "sweet spot"), then little or no force is exerted on your hand, and there is less vibration, reducing the chances of tennis elbow. The same effect occurs for a baseball bat.
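The arithmetic in parts (a) through (c) of the example can be reproduced in a few lines. This is just a numeric restatement of the calculation above, using the given values:

```python
import math

# Values given in the example
m = 0.0500   # kg, disk mass
v = 30.0     # m/s, disk initial speed
r = 1.20     # m, stick length (nail to impact point)
M = 2.00     # kg, stick mass

# Moment of inertia about the nail: point-mass disk plus rod about one end
I_prime = (m + M / 3) * r**2

# Conservation of angular momentum: m*v*r = I' * omega'
omega_prime = m * v * r / I_prime

# Kinetic energy before (translational) and after (rotational)
KE_before = 0.5 * m * v**2
KE_after = 0.5 * I_prime * omega_prime**2

# Linear momentum before and after
p_before = m * v
p_after = (m + M / 2) * r * omega_prime

print(round(I_prime, 3), round(omega_prime, 3))  # 1.032 1.744
print(round(KE_before, 1), round(KE_after, 2))   # 22.5 1.57
print(round(p_before, 2), round(p_after, 2))     # 1.5 2.2
```

Changing where the disk hits (the value of r for the disk term only) is an easy way to explore the percussion-point discussion above.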
Figure $$\PageIndex{3}$$: A disk hitting a stick is compared to a tennis ball being hit by a racquet. (a) When the ball strikes the racquet near the end, a backward force is exerted on the hand. (b) When the racquet is struck much farther down, a forward force is exerted on the hand. (c) When the racquet is struck at the percussion point, no force is delivered to the hand.

Exercise $$\PageIndex{1}$$

Is rotational kinetic energy a vector? Justify your answer.

Solution

No, energy is always a scalar whether motion is involved or not. No form of energy has a direction in space, and you can see that rotational kinetic energy does not depend on the direction of motion, just as linear kinetic energy is independent of the direction of motion.

# Summary

• Angular momentum $$L$$ is analogous to linear momentum and is given by $$L = I\omega$$.
• Angular momentum is changed by torque, following the relationship $$net \, \tau = \frac{\Delta L}{\Delta t}$$.
• Angular momentum is conserved if the net torque is zero: $$L = constant \, (net \, \tau = 0)$$ or $$L = L' \, (net \, \tau = 0)$$. This equation is known as the law of conservation of angular momentum; angular momentum may be conserved in collisions.

## Contributors

Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
# EE369 POWER SYSTEM ANALYSIS

## Presentation on theme: "EE369 POWER SYSTEM ANALYSIS"— Presentation transcript:

EE369 POWER SYSTEM ANALYSIS, Lecture 2: Complex Power, Reactive Compensation, Three Phase. Tom Overbye and Ross Baldick. HW 1 is Problems 2.2, 2.3, 2.4, 2.5, 2.6, 2.8, 2.12, 2.14, 2.17, 2.19, 2.24, 2.25 and Case Study Questions A., B., C., D. from the text; due Thursday 9/5.

Review of Phasors: The goal of phasor analysis is to simplify the analysis of constant-frequency AC systems: v(t) = Vmax cos(ωt + θv), i(t) = Imax cos(ωt + θI), where: v(t) and i(t) are the instantaneous voltage and current as a function of time t; ω is the angular frequency (2πf, with f the frequency in Hertz); Vmax and Imax are the magnitudes of the voltage and current sinusoids; θv and θI are angular offsets of the peaks of the sinusoids from a reference waveform. Root Mean Square (RMS) voltage of sinusoid.

Phasor Representation. Phasor Representation, cont'd. (Note: Some texts use "boldface" type for complex numbers, or "bars on the top".) Also note that the convention in power engineering is that the magnitude of the phasor is the RMS voltage of the waveform: this contrasts with circuit analysis. (Note: Z is a complex number but not a phasor.) RL Circuit Example. Complex Power. Complex Power, cont'd. (Note: S is a complex number but not a phasor.)

Conservation of Power: At every node (bus) in the system, the sum of real power into the node must equal zero and the sum of reactive power into the node must equal zero. This is a direct consequence of Kirchhoff's current law, which states that the total current into each node must equal zero. Conservation of real power and conservation of reactive power follow since S = VI*.
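The relation S = VI* is easy to try out with Python's built-in complex numbers. The source voltage and impedance below (100 V RMS at 30° driving Z = 4 + j3 Ω) are my guess at the RL circuit example whose slide content did not survive extraction; they are assumptions, chosen because they reproduce the worked figures that appear later in this transcript:

```python
import cmath
import math

# Assumed values (not legible in the transcript): 100 V RMS source at 30
# degrees driving a series R-L impedance of 4 + j3 ohms.
V = cmath.rect(100, math.radians(30))   # voltage phasor, RMS volts
Z = 4 + 3j                              # load impedance, ohms

I = V / Z                # current phasor
S = V * I.conjugate()    # complex power S = V I*

print(round(abs(I), 1), round(math.degrees(cmath.phase(I)), 1))  # 20.0 -6.9
print(round(S.real, 1), round(S.imag, 1))                        # 1600.0 1200.0
print(round(S.real / abs(S), 2))                                 # power factor 0.8
```

The real part of S is the real power in watts and the imaginary part is the reactive power in VAr, which is exactly the split the conservation-of-power slides rely on.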
Conservation of Power Example: power flowing from source to load at a bus. Earlier we found I = 20∠−6.9° amps, giving S = 1600 W + j1200 VAr.

Power Consumption in Devices. Example: first solve the basic circuit for I, then re-solve assuming that the load voltage is maintained at 40 kV. A higher source voltage is needed to maintain the load voltage magnitude when a reactive power load is added to the circuit, and the current is higher.

Power System Notation: Power system components are usually shown as "one-line diagrams." The previous circuit is redrawn: arrows are used to show loads, transmission lines are shown as a single line, and generators are shown as circles.

Reactive Compensation: The key idea of reactive compensation is to supply reactive power locally. In the previous example this can be done by adding a 16 MVAr capacitor at the load. The compensated circuit is identical to the first example with just the real power load. The supply voltage magnitude and line current are lower with compensation.

Reactive Compensation, cont'd: Reactive compensation decreased the line flow from 564 Amps to 400 Amps. This has advantages: line losses, which are equal to I²R, decrease; the lower current allows use of smaller wires, or alternatively supplying more load over the same wires; and the voltage drop on the line is less. Reactive compensation is used extensively throughout transmission and distribution systems. Capacitors can be used to "correct" a load's power factor to an arbitrary value. Power Factor Correction Example. Distribution System Capacitors.

Balanced 3-Phase (3φ) Systems: A balanced 3φ system has three voltage sources with equal magnitude but with angles shifted by 120°, equal loads on each phase, and equal impedance on the lines connecting the generators to the loads. Bulk power systems are almost exclusively 3φ. Single phase is used primarily only in low-voltage, low-power settings, such as residential and some commercial. Single-phase transmission is used for electric trains in Europe.
Balanced 3 -- Zero Neutral Current Advantages of 3 Power Can transmit more power for same amount of wire (twice as much as single phase). Total torque produced by 3 machines is constant, so less vibration. Three phase machines use less material for same power rating. Three phase machines start more easily than single phase machines. Three Phase - Wye Connection There are two ways to connect 3 systems: Wye (Y), and Delta (). Wye Connection Line Voltages Van Vcn Vbn Vab Vca Vbc -Vbn (α = 0 in this case) Line to line voltages are also balanced. Wye Connection, cont’d We call the voltage across each element of a wye connected device the “phase” voltage. We call the current through each element of a wye connected device the “phase” current. Call the voltage across lines the “line-to-line” or just the “line” voltage. Call the current through lines the “line” current. Delta Connection Ica Ic Iab Ibc Ia Ib Three Phase Example Assume a -connected load, with each leg Z = 10020W, is supplied from a 3 13.8 kV (L-L) source Three Phase Example, cont’d Delta-Wye Transformation Delta-Wye Transformation Proof + Delta-Wye Transformation, cont’d Three Phase Transmission Line
If r₁ and r₂ are the radii of two non-intersecting, non-enclosing circles and d is the distance between their centres:

Length of the direct common tangent = √(d² − (r₁ − r₂)²)

Length of the transverse common tangent = √(d² − (r₁ + r₂)²)

Two circles are said to be concentric if they have the same centre. As is obvious, here the circle with smaller radius lies completely within the circle with bigger radius.

Arcs and Sectors

Fig. 4.37 An arc is a segment of a circle. In Fig. 4.37, ACB is called the minor arc and ADB is called the major arc. In general, if we talk of an arc AB, we refer to the minor arc. ∠AOB is called the angle formed by the arc AB (at the centre of the circle). The angle subtended by an arc at the centre is double the angle subtended by the arc in the remaining part of the circle. In Fig. 4.37, ∠AOB = 2∠AXB = 2∠AYB. Angles in the same segment are equal: in Fig. 4.37, ∠AXB = ∠AYB.

Fig. 4.38 The angle between a tangent and a chord through the point of contact of the tangent is equal to the angle made by the chord in the alternate segment (i.e., the segment of the circle on the side other than the side of location of the angle between the tangent and the chord). This is normally referred to as the "alternate segment theorem." In Fig. 4.38, PQ is a tangent to the circle at the point T and TS is a chord drawn at the point of contact. Considering ∠PTS, which is the angle between the tangent and the chord, the angle ∠TRS is the angle in the "alternate segment". So, ∠PTS = ∠TRS. Similarly, ∠QTS = ∠TUS.

Fig. 4.39 We have already seen in quadrilaterals that the opposite angles of a cyclic quadrilateral are supplementary and that the external angle of a cyclic quadrilateral is equal to the interior opposite angle. The angle in a semicircle (or the angle the diameter subtends in a semicircle) is a right angle.
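The two tangent-length formulas were garbled by extraction; read right to left they are the standard results √(d² − (r₁ − r₂)²) for the direct and √(d² − (r₁ + r₂)²) for the transverse common tangent, where d is the distance between centres. A small sketch under that reading:

```python
import math

def direct_common_tangent(d, r1, r2):
    # Direct common tangent of two non-intersecting, non-enclosing circles
    return math.sqrt(d**2 - (r1 - r2)**2)

def transverse_common_tangent(d, r1, r2):
    # Transverse common tangent; requires d > r1 + r2
    return math.sqrt(d**2 - (r1 + r2)**2)

# Example: radii 3 and 2, centres 13 apart
print(direct_common_tangent(13, 3, 2))       # ~12.96
print(transverse_common_tangent(13, 3, 2))   # 12.0
```

Note that both expressions are only defined when the circles are far enough apart for the tangent in question to exist, which matches the "non-intersecting, non-enclosing" condition in the text.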
The converse of the above is also true and is very useful in a number of cases: in a right-angled triangle, a semicircle with the hypotenuse as the diameter can be drawn passing through the third vertex (refer to Fig. 4.39).

Fig. 4.40 The area formed by an arc and the two radii at the two end points of the arc is called a sector. In Fig. 4.40, the shaded figure AOB is called the minor sector.

AREAS OF PLANE FIGURES

Mensuration is the branch of geometry that deals with the measurement of length, area and volume. We have looked at properties of plane figures till now. Here, in addition to areas of plane figures, we will also look at surface areas and volumes of "solids." Solids are objects which have three dimensions (plane figures have only two dimensions).

Triumphant Institute of Management Education Pvt. Ltd. (T.I.M.E.) HO: 95B, 2nd Floor, Siddamsetty Complex, Secunderabad – 500 003. Tel: 040–27898195 Fax: 040–27847334. SM1001908/51
# Algebra/Statistics

## Percentages

Percentages are another way of representing rational numbers. Rational numbers can be represented as decimals, fractions or percentages. Percent, when you actually break the word apart, consists of two words: per and cent. Per is a small but extremely powerful word: it means "to divide". Cent is a Latin word for 100. Percent, then, means "divide by 100". To convert from a fraction to a percentage, we simply multiply the fraction by 100. For example, $\frac{1}{4} = \left(\frac{1}{4} \times 100\right)\% = 25\%$. To convert from a decimal to a percentage, we first convert the decimal into a fraction, and then proceed with the approach outlined above. For example, $0.15 = \frac{15}{100} = \left(\frac{15}{100} \times 100\right)\% = 15\%$.

## Mean, Median, Mode

The following three numbers represent 3 different ways to think about the average value of your set.

Mean - This is what we usually think of as the "average" of a data set. The mean can be found by summing all the values in the data set and dividing by the size of the data set (that is, the number of elements in the set). In mathematical notation, $\bar{x} = \left(\sum_{i=1}^n x_i\right) \div n = (1/n)(x_1 + x_2 + x_3 + \cdots + x_n)$, where $\bar{x}$ is the arithmetic mean and n is the number of elements in the data set. All values of x together constitute the sample space for the data set. For example: suppose 1, 2, 4, 6, 8, 9 is our data set; then the sum is 1 + 2 + 4 + 6 + 8 + 9 = 30 and there are 6 elements in the data set, so the mean is 30/6 = 5.

The mean, while a very useful statistic, has its flaws. Notably, its value may be heavily influenced by outliers - numbers in a data set which are significantly higher or lower than the majority of the data. It is often preferable to use the median instead to describe such data sets.

Median - This is the middle of our data set. To find the median you must first put your data values in numerical order (say, from smallest to largest).
If you have an odd number of elements in your data set there will be exactly one number in the middle; this number is the median. If you have an even number of elements in your data set then the median is the average of the middle two numbers. For example, suppose 2, 2, 3, 4, 4, 5, 6, 7, 8, 9, 12, 13, 16, 22 is our data set. Since it has an even number of elements, we have to take the mean of the middle two, in this case 6 and 7, so the median is 6.5.

Mode - Mode refers to how many times a number or numbers occur in a data set. Since mean, median, and mode often are confused with each other, an easy way to remember mode is 'most often'. The first two letters in mode are 'm' and 'o'; imagine this stands for 'most often' to help you remember. In the case that two or more different values are tied for the most number of repeats, then that data set is said to have multiple modes. If you're asked to find the mode of a data set with multiple modes, then all of the modes should be listed. If no element of the data repeats, then there is no mode. For example, suppose 1, 2, 2, 2, 3, 3, 4, 5, 5, 5, 7 is our data set; then the mode would be both 2 and 5. They both occur three times, and three is the maximum number of repeats in our data set.

The following quantity tells us how spread out our data set is.

Range - The difference between the largest and smallest numbers in our data set. Notice this means the range is never negative.

### Examples

Mean

Data Values: 10, 13, 4, 7, 9 so n = 5
10 + 13 + 4 + 7 + 9 = 43
43 / 5 = 8.6
Mean = 8.6

Median

Case 1:
Data Values: 10, 13, 4, 7, 8 so n = 5
Numerical Order: 4, 7, 8, 10, 13
Since 8 is the middle number, Median = 8

Case 2:
Data Values: 10, 13, 4, 7, 8, 10 so n = 6
Numerical Order: 4, 7, 8, 10, 10, 13
Middle Numbers: 8 and 10
Find Mean: 8 + 10 = 18; 18 / 2 = 9
Median = 9

Mode

Data Values: 10, 13, 4, 7, 8, 10
10 is in the data set twice.
Mode = 10 Data Values: 4, 9, 13, 18, 4, 2, 9, 4, 13, 8, 9 4 and 9 both have three data values. Mode = 4, 9 Range Data Values: 10, 13, 4, 7, 8 Numerical Order: 4, 7, 8, 10, 13 Difference of last and first: 13 - 4 = 9 Range = 9 ### Practice Problems Find the mean, median, mode, and range of the following data sets: 1.) 5, 8, 12, 4, 8, 9, 11, 2 2.) 24, 26, 37, 24, 16, 44, 26, 34, 24 3.) 15, 48, 89, 74, 25, 36, 57, 51, 17, 22 4.) 2, 6, 8, 7, 8, 2, 2, 9, 10
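The worked examples in this section can be cross-checked with Python's statistics module (multimode needs Python 3.8 or later):

```python
from statistics import mean, median, multimode

data = [10, 13, 4, 7, 8, 10]       # data set from the Median Case 2 example

print(mean([10, 13, 4, 7, 9]))     # 8.6
print(median([10, 13, 4, 7, 8]))   # 8
print(median(data))                # 9.0
print(multimode(data))             # [10]
print(max(data) - min(data))       # 9 (the range)
```

multimode returns every tied value, matching the text's convention of listing all modes; an empty tie across the whole set corresponds to "no mode."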
Pareto Chart

Also called: Pareto diagram, Pareto analysis
Variations: weighted Pareto chart, comparative Pareto charts

A Pareto chart is a bar graph. The lengths of the bars represent frequency or cost (time or money), and are arranged with the longest bars on the left and the shortest to the right. In this way the chart visually depicts which situations are more significant.

When to Use a Pareto Chart

• When analyzing data about the frequency of problems or causes in a process.
• When there are many problems or causes and you want to focus on the most significant.
• When analyzing broad causes by looking at their specific components.
• When communicating with others about your data.

Pareto Chart Procedure

1. Decide what categories you will use to group items.
2. Decide what measurement is appropriate. Common measurements are frequency, quantity, cost and time.
3. Decide what period of time the Pareto chart will cover: One work cycle? One full day? A week?
4. Collect the data, recording the category each time. (Or assemble data that already exist.)
5. Subtotal the measurements for each category.
6. Determine the appropriate scale for the measurements you have collected. The maximum value will be the largest subtotal from step 5. (If you will do optional steps 8 and 9 below, the maximum value will be the sum of all subtotals from step 5.) Mark the scale on the left side of the chart.
7. Construct and label bars for each category. Place the tallest at the far left, then the next tallest to its right and so on. If there are many categories with small measurements, they can be grouped as "other."

Steps 8 and 9 are optional but are useful for analysis and communication.

8. Calculate the percentage for each category: the subtotal for that category divided by the total for all categories. Draw a right vertical axis and label it with percentages. Be sure the two scales match: for example, the left measurement that corresponds to one-half should be exactly opposite 50% on the right scale.
9. Calculate and draw cumulative sums: Add the subtotals for the first and second categories, and place a dot above the second bar indicating that sum. To that sum add the subtotal for the third category, and place a dot above the third bar for that new sum. Continue the process for all the bars. Connect the dots, starting at the top of the first bar. The last dot should reach 100 percent on the right scale.

Pareto Chart Examples

Example #1 shows how many customer complaints were received in each of five categories. Example #2 takes the largest category, "documents," from Example #1, breaks it down into six categories of document-related complaints, and shows cumulative values. If all complaints cause equal distress to the customer, working on eliminating document-related complaints would have the most impact, and of those, working on quality certificates should be most fruitful.
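Steps 5, 8 and 9 of the procedure (subtotal, percentages, cumulative sums) amount to a short computation. The category counts below are invented for illustration; they are not the figures from the article's examples:

```python
# Hypothetical complaint counts per category (step 5 subtotals)
complaints = {"documents": 52, "billing": 18, "delivery": 14,
              "packaging": 10, "other": 6}

# Step 7: order categories from largest to smallest
ordered = sorted(complaints.items(), key=lambda kv: kv[1], reverse=True)
total = sum(complaints.values())

# Steps 8-9: percentage of total and running cumulative percentage
cumulative = 0
for category, count in ordered:
    cumulative += count
    print(category, count, f"{100 * cumulative / total:.0f}%")
# documents 52 52%
# billing 18 70%
# delivery 14 84%
# packaging 10 94%
# other 6 100%
```

The printed cumulative column is exactly the line drawn over the bars in step 9; the last value reaching 100% corresponds to the last dot on the right-hand scale.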
Tag Info

Hot answers tagged electrostatics

17 Gauss's law is always fine. It is one of the tenets of electromagnetism, as one of Maxwell's equations, and as far as we can tell they always agree with experiment. The problem you've uncovered is simply that "a uniform charge density of infinite extent" is not actually physically possible, and it turns out that (i) it is not possible to express it as the ...

17 The force does not change instantaneously; the correct electromagnetic field of (and thus the force exerted by) a moving electric charge is given by the Liénard-Wiechert potential, where one can see that the effect of the charge does not travel faster than light.

16 To add to ACuriousMind's answer on the Liénard-Wiechert potentials, you can put these formulas into an even more wonderfully descriptive form, since you can derive Feynman's formula from them for the radiation from a moving charge: $$\vec{E} = ...

16 You smell ozone ($\mathrm{O_3}$, from the Greek word ozein for "smell"), and maybe nitrous oxide - the reaction product of oxygen and $\mathrm{N_2}$. There is a nice description of the formation and action of ozone at this link. Briefly: Oxygen molecules ($\mathrm{O_2}$) can be dissociated (broken into atoms or ions) by either UV light, or electrical ...

11 The statement "electric field inside a conductor is zero" is true only after charges have distributed themselves in the most optimal way on the surface - it is an electrostatic result. Starting with an arbitrary charge distribution, there will be forces that cause a redistribution of the charge until, for a sphere, they are distributed uniformly. At that ...

11 You want a gas so you don't need to expend energy vaporising the propellant. You also want the gas to be as dense as possible so you can get as much impulse per unit volume of propellant as possible. It's also nice if the gas is inert and non-corrosive so you don't need to worry about it degrading or corroding whatever you're storing it in.
Finally it's nice ...

11 You are correct when you concluded that two classical point electrons could never touch each other. It would take infinite energy.

8 There is another 'infinity' (among others) lurking in classical electrodynamics, which is evident when one calculates the electrostatic energy $W$ of a uniform spherical charge distribution of radius $a$ and total charge $Q$: $$W = \frac{3}{5}\frac{Q^2}{4\pi \epsilon_0 a}$$ Thus, by this result, a point (zero radius) particle of charge $Q$ has 'infinite' ...

8 Ideally, a test charge should not affect the charge distribution of the source. An infinitesimal charge will ensure, for example, that the electric field it produces does not redistribute charges on any conductors in your system. A large test charge would polarize nearby objects, thus affecting the field you're trying to measure in the first place.

8 If you have an excess of electrons in your body, your hair might stand on end and you might feel a bit negative (I couldn't help that pun), and you should probably avoid touching people or metal objects if you don't want a static shock, but other than that, it's mostly harmless. The real danger comes from flowing electrons. Because the body basically runs on ...

8 You have ignored the mobile charges in the conductor. In your plot the field lines are not perpendicular to the surface, particularly near the charges. That will cause the conduction electrons to move. The positive charges will attract electrons until the field inside the conductor is zero. This means that the whole conductor, including the inner ...

8 This may help you; it comes from Rutherford scattering, by which they determined that the atom has a hard core. It is positive alphas against a positive nucleus, but the math is the same. Determining the closest approach to the nucleus amounts to calculating the minimum distance for the hyperbolic orbit which is produced by the Coulomb repulsive force. ...
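The self-energy formula quoted above is easy to evaluate numerically, and doing so makes the divergence as the radius shrinks concrete. The charge and radius below are illustrative choices (one elementary charge at a roughly nuclear length scale), not values taken from any of the answers:

```python
import math

# W = (3/5) * Q^2 / (4 * pi * eps0 * a) for a uniformly charged sphere
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
Q = 1.602176634e-19        # C, one elementary charge (illustrative)
a = 1e-15                  # m, roughly nuclear scale (illustrative)

W = (3 / 5) * Q**2 / (4 * math.pi * eps0 * a)
print(W)   # on the order of 1e-13 J; W grows without bound as a -> 0
```

Halving a doubles W, which is the point the answer is making: the formula blows up for a point (zero-radius) charge.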
7 If you are talking about point charges then, as explained above, the answer is no. But in the case of non-uniform charge distributions, it is possible for same-charge particles to attract, if they are sufficiently close. As an example, the following two particles are identical, each having a net charge of -1. Plotted below them is their potential energy as ...

7 The physical meaning of the capacitance is precisely given by $\mathrm{d}Q=C\cdot \mathrm{d}V$: $C$ tells you how much charge there will be in the capacitor per voltage applied. For all capacitors, the linearity holds fairly well. Generally speaking, capacitance is given by a $Q$-$V$ curve, which may consist of a linear region, a saturation region and a ...

6 Typically this is explained by the saying, "current kills." It's not the charge (or potential above ground) that a body attains that hurts biological systems, it's the current that flows through them and either 1) heats them or 2) disrupts important electrical signals in the body. Heating damage occurs and can "cook" (cause 1st, 2nd, or 3rd degree burns ...

6 The vanishing of closed line integrals means that the field is conservative. Since $\oint \vec E \cdot \mathrm{d}\vec l = 0$ is equivalent to $\vec \nabla \times \vec E = 0$, the "physical interpretation" is that the electric field is irrotational, i.e. it has no "vortices". The, more valuable, mathematical implication is that there is a scalar potential whose ...

6 The reason is the same as why the electric field inside a conductor is zero: if it isn't zero, the free electrons undergo a force and move (rearrange) until they don't feel a force anymore. If the electrons don't feel a force, the electric field must be zero. At the surface of a conductor, the free electrons feel a force perpendicular to the surface, but ...

6 Electrostatic refers to the case where the fields are not time dependent.
In that case Maxwell's equations reduce to: $$\nabla \cdot \vec E = \frac{\rho}{\epsilon_0} \\ \nabla \times \vec E = 0 \implies \vec E = -\nabla \phi$$ and then $$\nabla \cdot \nabla \phi = \nabla^2 \phi = -\frac{\rho}{\epsilon_0}.$$ The solution to the last equation is: $$\phi = ...

5 Now, do the constants C1, C2, C3 appearing when we separate variables in Laplace's equation for the electrostatic potential have some physical meaning? If they do, what is it? The constants are related to the square of the spatial (angular) frequency or to a spatial growth/decay constant. For an example of spatial frequency, let $$X(x) = A \sin (k_xx) + B ...

5 ...why isn't the work done against the net force due to the system considered, instead of simply adding up the work done against separate forces caused by individual charges? They're both equivalent, due to the principle of superposition. Basically, the net force is what you get when you add up the separate forces from the individual charges acting on ...

5 The problem here is that you've failed to specify a boundary condition. Consider an electrostatics problem where you're given a charge distribution $\rho(\mathbf{r})$ and asked to find the electric field $\mathbf{E}(\mathbf{r})$. The electric field is the solution to the set of differential equations $\nabla \times \mathbf{E} = 0$ and $\nabla \cdot ...

5 I want to complete the other answer by addressing the difference between a charge distribution made up statistically of a huge number of electrons, and what an electron means: At the level of elementary particles, one of which is the electron, there are no charge distributions, as elementary particles are point particles and charge is a quantized quantity ...

5 Can anyone provide me with a physical interpretation? For an electric field satisfying the equation, the work associated with (slowly) moving a test charge around a closed path is zero.
To see this, recall that the electric force on a charge is $$\vec F = q\vec E$$ The work associated with moving a particle along a closed path is $$W = \oint \vec ...

5 Noble gases have the advantage of being chemically inert, so that they are less likely to react with atoms in the electrostatic grids. Since ion thrusters to date have been deployed only on unmanned transports, regular maintenance is not an option. Because of that, noble gases are favoured over, say, hydrogen. One reason to pick Xenon over Argon, though, ...

5 For $H\gg R,L$ and for $L\gg R,H$ you get pretty much the same thing. First off, $(H+L)^2\sim H^2$ and the same goes for $(H-L)^2$. That means that $(H+L)^2+R^2\approx (H+L)^2$. However, $(H+L)\not\approx H$, which means that $\sqrt{R^2+(H+L)^2}\approx H+L$. This makes the first approximation have $\frac{1}{H+L}-\frac{1}{H-L}$ in it. The second ...

5 So in the first case, when you talk about a plain circular ring, I assume you mean an annular ring, with a well-defined inner radius and a different well-defined outer radius. With a positive charge at the center of the annular ring, positive charges will be repelled outward and negative charges attracted inward. Incidentally, not all the positive charge will go ...

5 There's no minimum distance. Yet, as the two particles get closer to each other, they will either scatter off each other (in the quantum mechanical sense of interacting via Feynman diagrams) or form a bound system - if we're talking electron-positron (which is as close to point charges as it gets), they might become positronium, but that won't last long, ...

5 The electric potential $\phi:\mathbb{R}^3\to\mathbb{R}$ is the solution to Laplace's equation and therefore a harmonic function. Harmonic functions enjoy several nice properties, some of them listed on the Wikipedia page. Concerning OP's second point, let us mention that there is a theorem similar to Liouville's theorem from complex analysis that a bounded ...
5 The integrand $\vec E \cdot d\vec r$ is $E\,dr$, not $-E\,dr$. The evaluation of the dot product is sort of done for you when you specify the curve on which you are integrating (i.e., your limits of integration in this case). You've double-accounted for the relative directions of $\vec E$ and $d\vec r$. I suspect the underlying confusion is that you are ... Only top voted, non community-wiki answers of a minimum length are eligible
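The "zero work around a closed path" claim in the answers above can be checked numerically. The sketch below is my own (the loop, units, and tolerance are choices I made, not part of the original answers): it integrates the point-charge field $\vec E = \hat r / r^2$ around a closed loop and confirms the line integral vanishes.

```python
import numpy as np

# Numeric check of the "zero work around a closed path" claim for an
# electrostatic field. Field of a unit point charge at the origin
# (constants dropped): E = r / |r|^3, restricted to the x-y plane.
t = np.linspace(0.0, 2 * np.pi, 20001)
# a circular loop of radius 1 centred at (3, 0), away from the charge at the origin
path = np.stack([3.0 + np.cos(t), np.sin(t)], axis=1)

E = path / np.linalg.norm(path, axis=1, keepdims=True) ** 3
dl = np.diff(path, axis=0)
# trapezoidal approximation of the line integral of E . dl around the loop
W = float(np.sum(0.5 * (E[:-1] + E[1:]) * dl))

print(abs(W) < 1e-5)  # True: the closed-path work vanishes to numerical precision
```

The same check passes for any closed loop that avoids the charge itself, which is exactly the conservative-field statement $\nabla \times \vec E = 0$.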
# Prove that $f(x) + g(x) = 6$ Calculus theory problem I'm belaboring to solve... Tried it for an hour only spinning my wheels. Any hints/nudges in the right direction would be greatly appreciated. Here's the problem: $f$ and $g$ are functions that are differentiable for all real x. They have the following properties: i) $f'(x) = f(x) - g(x)$ ii) $g'(x) = g(x) - f(x)$ iii) $f(0) = 5$ iv) $g(0) = 1$ Prove that $f(x) + g(x) = 6$ for all $x$. Good luck. - You're given information about the derivatives. Can you turn the statement you're trying to prove into a statement about derivatives? – Qiaochu Yuan Aug 29 '10 at 13:32 Sorry I read it incorrectly, for a hint: try doing something with the first 2 equations to learn something about the sum of their derivatives. If you're still stuck, let me know and I'll post a solution with commentary. – WWright Aug 29 '10 at 13:38 If you're already familiar with the Fundamental Theorem of Calculus, what happens when you integrate derivatives? – J. M. Aug 29 '10 at 13:40 Are you setting this problem because it's fun or because you are stuck trying to solve it? – anon Aug 29 '10 at 13:57 @muad, a bit of both – Bob Parr Aug 29 '10 at 14:22 As a follow up to Dario... If we let $s(x) := f(x) + g(x)$, $s'(x) = f'(x)+g'(x)$, which by (i) and (ii) is equal to $f(x)-g(x)+g(x)-f(x) = 0$. Since $s(0) = 6$ and $s'(x) = 0$, $s(x)$ is constant; $s(x) = 6$ for all $x$, therefore $f(x)+g(x) = 6$. - Thanks so much! – Bob Parr Aug 29 '10 at 14:21 Now that you've gotten the answer, here is a little bit of a post mortem clarifying the essential points, as may come up again in future problems. (In general, once you've solved a problem, it's a really good idea to ask for the "moral(s)". If you can't identify one, how are you better off now than before you solved it?) 
In this case, I think the most important single step is the recognition that the conclusion "$f(x) + g(x) = 6$ for all $x$" can be more usefully rephrased as "$f(x) + g(x)$ is a constant function, with constant value $6$." This is much more suggestive, because a function $f$ defined on an interval is constant if and only if its derivative is constantly equal to zero. (Stationarity is equivalent to identically zero velocity.) Note that half of this statement is obvious, but the other half is not: it is a consequence of the Mean Value Theorem that should be emphasized both in the text and by your instructor. Thus you are clued in to the fact that the main thing you want to show is that $(f+g)' = f' + g' \equiv 0$ (i.e., constantly zero). Since a constant function is determined by plugging in any point, once you know that $f+g$ is constant, seeing that it's constantly equal to SOMETHING -- in this case $6$ -- shouldn't be a problem. So now you look and see how to get from what you're given to the conclusion that $f' + g' \equiv 0$, and you see that you're being given an expression for $f'$ and $g'$ separately. So certainly you want to add them together and hope to get $0$; in this case, that hope is immediately fulfilled. - Let's introduce their sum function $$s(x) := f(x) + g(x)$$ The statement to be proven thus becomes $s(x) = 6$. We know $$s(0) = 5 + 1 = 6$$ So now what derivative $s'$ do we need such that $s$ never changes its constant value? - HINT $$\frac{d}{dx} \bigl( f(x) + g(x) \bigr) = ?$$ - 1. Differentiation is linear: in fact, if $f$ and $g$ are differentiable on $\mathbb{R}$ and $\alpha , \beta \in \mathbb{R}$ then $(\alpha f + \beta g)' = \alpha f' + \beta g'$ 2.
The derivative is a rate of change; this is more subtle. The essential fact is that if $f$ is differentiable on $\mathbb{R}$ and $f'(x) = 0$ for all $x \in \mathbb{R}$, then the rate of change of $f$ is null on all of $\mathbb{R}$ and the value of $f$ does not change on all of $\mathbb{R}$ (pay attention: this is not always true if you break the domain of $f$ into pieces). This is a consequence of a more general and important result, the mean value theorem. Observe that i) and ii) in the question give you global information on $f$ and $g$ (i.e., they refer to all $x \in \mathbb{R}$), while iii) and iv) give you local information (at the point $x = 0$). Your question is about a global property of the function $f+g$, so you should try to combine local and global information to reach the solution.
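As a sanity check on the accepted argument, one can also write down the explicit solutions and verify all four properties symbolically. The closed forms below are my own addition, obtained by also solving $(f-g)' = 2(f-g)$; they are not part of the original thread:

```python
import sympy as sp

x = sp.symbols('x')
# closed-form solutions consistent with i)-iv): f - g = 4*exp(2x), f + g = 6
f = 3 + 2 * sp.exp(2 * x)
g = 3 - 2 * sp.exp(2 * x)

assert sp.simplify(f.diff(x) - (f - g)) == 0   # i)   f' = f - g
assert sp.simplify(g.diff(x) - (g - f)) == 0   # ii)  g' = g - f
assert f.subs(x, 0) == 5                        # iii) f(0) = 5
assert g.subs(x, 0) == 1                        # iv)  g(0) = 1

print(sp.simplify(f + g))  # 6
```

The exponential terms cancel in the sum, which is the symbolic counterpart of $s'(x) = 0$.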
## The Goodness of Fit or Chi-Squared Distribution

The $\chi^2$ test can be used to test the goodness of fit of a data set to a hypothesised probability distribution. The observed data is sorted into frequency classes, and for each frequency class the expected number of observations that would fall into that class is calculated. The difference between each observed frequency $O_i$ and expected frequency $E_i$ is calculated, squared and divided by the expected frequency. This calculation is performed for each frequency class and the results are all added to give a single number, the test statistic,
$$\chi^2 = \sum_i \frac{(O_i - E_i)^2}{E_i}.$$
If the expected frequency for a frequency class is less than 5, then that class is combined with other frequency classes so that all classes have expected frequencies of at least 5. The significance of the test statistic depends on the number of degrees of freedom of the data, which is equal to the number of frequency classes AFTER any classes have been combined, $c$, minus 1. The test statistic is then compared with the $\chi^2_\alpha$ value drawn from the $\chi^2$ tables, where $\alpha$ is some set significance level, to draw a conclusion.

Example: A die is rolled 600 times and the frequency of each score recorded.

Score: 1, 2, 3, 4, 5, 6
Observed Frequency: 86, 98, 108, 114, 92, 102

Test whether the die is fair at the 1% level of significance.

First state the null and alternative hypotheses, $H_0$ and $H_1$ respectively.

$H_0$: The die is fair and the probability of each score is 1/6.
$H_1$: The die is not fair and the probability of each score is not 1/6.

The expected frequencies are all 1/6 × 600 = 100, so the test statistic is
$$\chi^2 = \frac{(86-100)^2 + (98-100)^2 + (108-100)^2 + (114-100)^2 + (92-100)^2 + (102-100)^2}{100} = \frac{528}{100} = 5.28.$$
There are $6 - 1 = 5$ degrees of freedom, since the total is a linear equation connecting the frequencies and is fixed. From tables we see that $\chi^2_{1\%}(5) = 15.086 > 5.28$, so our result is not significant. We do not reject $H_0$ and conclude that the die is not unfair.
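The die example can be reproduced with SciPy. The calls below follow the documented `scipy.stats` API; treat the script as a cross-check of the hand arithmetic rather than part of the original text:

```python
from scipy.stats import chisquare, chi2

observed = [86, 98, 108, 114, 92, 102]
expected = [100] * 6  # 600 rolls of a fair die: 600/6 = 100 per score

stat, p = chisquare(observed, f_exp=expected)
critical = chi2.ppf(0.99, df=5)  # 1% significance level, c - 1 = 5 degrees of freedom

print(round(stat, 2))   # 5.28
print(stat > critical)  # False: the statistic is below ~15.09, do not reject H0
```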
# The molecular mass of albumin is 60 kDa. What is the concentration of a 1.2 % m/m solution of albumin in milligrams per millilitre? What is the molar concentration?

Jan 18, 2017

The concentration of albumin is a) 12 mg/mL and b) 2.0 × 10⁻⁴ mol/dm³.

#### Explanation:

A dalton (Da) is simply another name for a unified atomic mass unit (u), and 1 kDa = 1000 Da. Thus, an atomic mass of 60 Da is the same as a mass of 60 u, and a molecular mass of 60 kDa is the same as a mass of 60 000 u. The question is really saying that the molar mass of albumin is 60 000 g/mol.

a) Concentration in milligrams per millilitre

A 1.2 % (m/m) solution contains 1.2 g of albumin in 100 g of solution. Since the solution is so dilute, we can assume that its density is 1.00 g/mL.

$$\text{Concentration} = \frac{1.2\ \text{g albumin}}{100\ \text{g solution}} \times \frac{1000\ \text{mg albumin}}{1\ \text{g albumin}} \times \frac{1\ \text{g solution}}{1\ \text{mL solution}} = \frac{12\ \text{mg albumin}}{1\ \text{mL solution}}$$

The concentration of albumin is 12 mg/mL.

b) Concentration in terms of molarity

$$\text{Concentration} = \frac{1.2\ \text{g albumin}}{100\ \text{cm}^3\ \text{solution}} \times \frac{1\ \text{mol albumin}}{60\,000\ \text{g albumin}} \times \frac{1000\ \text{cm}^3\ \text{solution}}{1\ \text{dm}^3\ \text{solution}} = \frac{2.0 \times 10^{-4}\ \text{mol albumin}}{1\ \text{dm}^3\ \text{solution}}$$

The concentration of albumin is 2.0 × 10⁻⁴ mol/dm³.
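The two unit conversions are easy to script. The variable names are mine, and the 1.00 g/mL density assumption is carried over from the answer:

```python
mass_percent = 1.2    # g albumin per 100 g of solution
molar_mass = 60_000   # g/mol, from the 60 kDa molecular mass
# assume density 1.00 g/mL, so 100 g of solution occupies 100 mL

mg_per_mL = mass_percent / 100 * 1000                 # g/mL converted to mg/mL
mol_per_dm3 = mass_percent / 100 / molar_mass * 1000  # mol/mL converted to mol/dm^3

print(round(mg_per_mL, 1))    # 12.0
print(round(mol_per_dm3, 6))  # 0.0002
```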
# Algebra II Homework Week 13

Monday (11/26) –
1. Graph y = 3(1/2)^x. Then state the domain and range.
2. Determine whether each function represents exponential growth or decay.
   a. y = 2(1/6)^x   b. y = 0.4(7)^x   c. y = 3.4(0.1)^x
3. Solve the equation 3^(n−2) = 27
4. Solve (1/7)^(y−3) = 343
5. Solve 32^(5p+2) ≥ 16
6. Solve 49^x = 7^(x²−15)

Tuesday (11/27) –
1. Write each equation in logarithmic form.
   a. 8^3 = 512   b. 5^(−3) = 1/125   c. 100^(1/2) = 10
2. Write each equation in exponential form.
   a. log_5 125 = 3   b. log_13 169 = 2   c. log_8 4 = 2/3
3. Evaluate each expression.
   a. log_2 16   b. log_2 (1/32)   c. log_5 5^7
4. Solve each equation or inequality.
   a. log_9 x = 2   b. log_64 y ≤ 1/2   c. log_6 (2x − 3) = log_6 (x + 2)

Wednesday (11/28) –
1. Use a calculator to evaluate each expression to four decimal places.
   a. log 19   b. log 0.75
2. EARTHQUAKES In 1906, San Francisco experienced a major earthquake of magnitude 8.3. In 1989, another major quake hit the area with a magnitude of 7.1. The amount of energy E, in ergs, an earthquake releases is related to its Richter scale magnitude M by the equation log E = 11.8 + 1.5M. How much more energy did the 1906 quake release than the 1989 quake?
3. Solve 2^(a+3) = 34.
4. Solve 5^(b−1) ≥ 10^(6−b).
5. Express log_7 58 in terms of common logarithms. Then approximate its value to four decimal places.

Thursday (11/29) –
1. Use a calculator to evaluate each expression to four decimal places.
   a. e^2.5   b. e^(−4.6)
2. Use a calculator to evaluate each expression to four decimal places.
   a. ln 15   b. ln 0.75
3. Write an equivalent exponential or logarithmic equation.
   a. e^(−x) = 2   b. ln e^(2a) − 5
4. Solve −2e^(5x) + 10 = 6.
5. Solve 6e^(−x) < 15.
6. Solve each equation or inequality.
   a. ln (6x − 3) + 3 = 10   b. ln (3x + 2) < 5

Friday (11/30) –
1. RECREATION A particular chemical must be added to a swimming pool at regular intervals because it is released from the water into the air at the rate of 5% per hour.
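For the earthquake item, the model log E = 11.8 + 1.5M implies the energy ratio between two quakes depends only on the difference in magnitudes, since the 11.8 cancels. A quick check (my own arithmetic, not part of the worksheet):

```python
# log10(E) = 11.8 + 1.5*M  =>  E1/E2 = 10**(1.5*(M1 - M2))
ratio = 10 ** (1.5 * (8.3 - 7.1))
print(round(ratio))  # 63: the 1906 quake released about 63 times more energy
```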
Sixteen ounces of the chemical is added at 8 am. At what time will three-fourths of this chemical be gone from the pool?
2. Exponential Decay of the Form y = ae^(−kt). CHEMISTRY The half-life of a radioactive substance is the time it takes for half of the atoms of the substance to decay. Each element has a unique half-life. Radon-222 has a half-life of about 3.8 days, while thorium-234 has a half-life of about 24 days. Find the value of k for each element and compare their equations for decay.
3. In 1980, the value of farming land in a region of Wyoming was $150 per acre. Since then, the value has increased by exactly 0.75% per year. If the land continues to increase in value at this rate, what will the approximate value of the land per acre be in 2005?
   A. $610   B. $330   C. $181   D. $175   E. $1098
4. Exponential Growth of the Form y = ae^(kt). SAVINGS Sue invests $1000 at 5% interest compounded continuously and Norma invests $1250 at 3.5% interest compounded continuously. When interest is compounded continuously, the amount A in an account after t years is found using the formula A = Pe^(rt), where P is the amount of principal and r is the annual interest rate. In how many years will Sue's account be greater than Norma's account?
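The savings question reduces to setting 1000e^(0.05t) = 1250e^(0.035t) and solving for t. A short check of that algebra (my own solution, not the worksheet's answer key):

```python
import math

# 1000*e^(0.05 t) > 1250*e^(0.035 t)  <=>  e^(0.015 t) > 1.25
t = math.log(1250 / 1000) / (0.05 - 0.035)
print(round(t, 2))  # 14.88: Sue's account overtakes Norma's after about 14.9 years
```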
## Tag Archives: precalculus ### Basic Algebra for IITJEE Main and RMO More basic algebra for you guys who are thirsting for more…The following is a nice problem indicating some basic concepts or tricks in problems involving logarithms/powers. Solve for x: $4^{x}-3^{x-(1/2)}=3^{x+(1/2)}-2^{2x-1}$ (IITJEE 1978) Solution: Writing $4^{x}$ as $2^{2x}$ and bringing powers of the same number on the same side we get, $2^{2x}+2^{2x-1}=3^{x+(1/2)}+3^{x-(1/2)}$ The first term on the LHS can be written as $2^{2x-1} \times 2$, and hence, a common factor of $2^{2x-1}$ comes out from the terms on the LHS. As for RHS, we can rewrite the first term as $3^{x-(1/2)} \times 3$ and then the factor $3^{x-(1/2)}$ comes out as common. So, we get $2^{2x-1}(2+1)=3^{x-(1/2)}(3+1)$, that is, $3 \times 2^{2x-1}=4 \times 3^{x-(1/2)}$ Bringing all powers of 2 to  the left and all powers of 3 to the right, we get $2^{2x-3}=3^{x-(3/2)}$ By inspection, $x=3/2$ is a solution. But, how do we arrive at it systematically? Also, how do we know that there is no other solution? It is tempting to try to do this by saying that a power of 2 can equal a power of 3 only when are both are equal to 1. (Such a reasoning is indeed useful in solving equations in Number Theory where we mostly deal with positive integers and their factorization into integers). But, here it is inapplicable because we do not  know that the exponents are integers. Instead, let us express both the sides as the power of the same number. One way to do this  is to write 3 as $2^{\log_{2}3}$ in the RHS. Then, we can get $2^{2x-3}=2^{(\log_{2}3)(x-(3/2)}$ As the bases are the same, the equality of powers implies that of the exponents. So, we have $2x-3=(\log_{2}3)(x-(3/2))$ This can be solved easily to give $x=\frac{3-(3/2)(\log_{2}3)}{2-\log_{2}3}$ which simply equals $3/2$. Hence, $x=3/2$ is the only solution.
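A quick numerical check of the solution above (the helper functions and variable names below are mine, not part of the original write-up):

```python
# verify that x = 3/2 satisfies 4^x - 3^(x - 1/2) = 3^(x + 1/2) - 2^(2x - 1)
def lhs(x):
    return 4**x - 3**(x - 0.5)

def rhs(x):
    return 3**(x + 0.5) - 2**(2*x - 1)

print(lhs(1.5), rhs(1.5))  # 5.0 5.0

# and the closed form x = (3 - (3/2) log2(3)) / (2 - log2(3)) collapses to 3/2
from math import log2
x = (3 - 1.5 * log2(3)) / (2 - log2(3))
print(round(x, 10))  # 1.5
```

The cancellation in the closed form mirrors the algebra at the end of the solution: the numerator is exactly 3/2 times the denominator.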
Year 9 Interactive Maths - Second Edition

## Problem Solving

#### Example 22

A customer is allowed a discount of 10% on the purchase of a table lamp. The discount is $7.40. What is the marked price of the table lamp?

##### Solution:

Let the marked price of the table lamp be $x. Now, the discount of $7.40 is equal to 10% of the marked price, so 0.10x = 7.40, which gives x = 7.40/0.10 = 74.

So, the marked price is $74.

#### Example 23

##### Solution:

So, his sales amount to $2000.
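Example 22 as a two-line computation (variable names are mine):

```python
discount = 7.40
rate = 0.10  # the discount is 10% of the marked price

marked_price = discount / rate
print(round(marked_price, 2))  # 74.0
```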
# Ordinal Numbers

Ordinal numbers let you describe positions within arbitrarily large ordered sets, and they generalize the everyday idea of first, second, third.

## 1st

The ordinal is a basic concept of mathematics: a number that defines the position of an object in an ordered collection of objects. In everyday use an ordinal number usually falls between first and twentieth. Although ordinal numbers serve many functions, they are most often used to indicate the order of items within a list.

Ordinal numbers can be depicted with words, numerals, or charts, and can also illustrate how a set of pieces is arranged. Ordinal numbers fall into two main categories: transfinite ordinals, conventionally written with lowercase Greek letters, and finite ordinals, written with Arabic numerals. By the axiom of choice, every well-ordered set corresponds to an ordinal. For instance, the highest grade could be assigned to the first member of a class and the next-highest to the contest's runner-up.

## Combinational ordinal numbers

Compound ordinal numbers, which may have multiple digits, take their suffix from the last digit. They are usually used for ranking and for dates. Unlike cardinal numbers, they do not have a unique ending for each number. Ordinal numbers identify the order in which elements are located in a collection and can also be used to name items within the collection.

Ordinal words come in regular and suppletive forms. Regular ordinals are created by attaching a suffix to the cardinal number, which is then written out as a word, with a hyphen added where needed. There are several suffixes: "nd" is used for numbers ending in 2, and "th" for numbers ending in 4 through 9; suppletive ordinals are formed with other endings and are used for counting words.

## Ordinal limit

A limit ordinal is a nonzero ordinal that is not the successor of any ordinal; equivalently, it is the order type of a non-empty well-ordered set with no maximum element. Limit ordinals are what make definitions by transfinite recursion work. In the von Neumann model, every infinite cardinal number is also a limit ordinal, and a limit ordinal equals the supremum of all the ordinals beneath it. A limit ordinal can be reached either arithmetically or as the limit of an increasing sequence of smaller ordinals, such as the natural numbers.

Ordinal numbers are used to organize data: they give an object's numerical place. They are often utilized in set theory and arithmetic, and although they extend the natural numbers, the transfinite ones are not natural numbers themselves.

The Church–Kleene ordinal is a limit ordinal of a different kind: it is the supremum of all the recursive ordinals, that is, of the order types of computable well-orderings.

## Examples of ordinal numbers in stories

Ordinal numbers are often used to show the order of objects and entities. They are crucial for organizing, counting, and ranking, and they can show both the order of items and the position of objects. An ordinal number is usually marked by the suffix "th"; sometimes "nd" or another suffix is used instead. Ordinal numbers commonly appear in the titles of books. Even though ordinal numbers are most often employed in lists, they can still be written as words, numerals, or abbreviations.

With a little practice, through games, coloring exercises, and similar activities, ordinals become easy to read and use, and working with them is a good way to sharpen basic arithmetic skills.
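The suffix rules sketched above ("nd" for numbers ending in 2, "th" for 4 through 9, and so on) are easy to encode. The helper below is my own illustration; it also handles the 11th/12th/13th exceptions, which the text does not mention:

```python
def ordinal(n: int) -> str:
    """Attach the English ordinal suffix to a positive integer."""
    if 10 <= n % 100 <= 13:          # 11th, 12th, 13th are irregular
        suffix = "th"
    else:
        suffix = {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
    return f"{n}{suffix}"

print([ordinal(n) for n in (1, 2, 3, 4, 11, 21, 102)])
# ['1st', '2nd', '3rd', '4th', '11th', '21st', '102nd']
```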
Topic: The nature of gravity
haroldj.l.jones@gmail.com (Posts: 67, Registered: 3/17/12)
Re: The nature of gravity, posted: Feb 2, 2013 11:58 AM

WHY IS THERE A RYDBERG LIMIT? THE ANSWER IS IN THE SURFACE GRAVITY.

When the proton starts life as a primordial black hole it has a Gm product of 29.690606 and a gravitational radius of 6.60705x10^-16 m, which is half the Compton wavelength of the proton, 1.32141x10^-15 m. The surface gravity formula, Gm/r^2, makes the primordial black hole's surface g = 6.801486493x10^31 ms. In the above posts it was established that the gravity of the Planck black hole was 1.109439913x10^51 ms and that was the gravitational limit equivalent to c.

1.10943991x10^51 divided by 6.801486493x10^31 equals 1.6311727x10^19. The square root of 1.6311727x10^19 is 4.038778144x10^9. 1.32141x10^-15 m, the proton Compton wavelength, divided by 4.038778144x10^9 is 3.2718065x10^-25 m, and it is at this diameter that the surface gravity of a Gm product of 29.6906036 becomes 1.10943991x10^51 ms, gravitational c. This means that the radius of the structure must undergo Lorentzian contraction. The limit of this contraction is the Planck limit. Nothing less. The rate of this contraction will be, again, 4.038778144x10^9, which adds up to c multiplied by the quantum adjustor, 3.62994678, then multiplied by 29.6906036 and then divided by 8. Incidentally, (32/3.62994678)/(c^3) = 3.2718065x10^-25 m. When the Gm product, 29.6906036, collapses to the Planck limit (radius) it becomes the proton. The rate of contraction four-dimensionally is (4.038778144x10^9)^4, which is 2.660724996x10^38.
Multiply this by the proton mass, 1.672623x10^-27 kg, and you have 4.4503898x10^11 kg, which is the original mass of the primordial black hole otherwise known as 29.6906036. 1.10943991x10^51 divided by c is equal to 3.70069319x10^42, the Planck frequency or the number of Planck lengths that go into c. It will depend on which measure of G you use but the result will be close to mine either way. This means that 1.10943991x10^51 times G will equal 2(3.700793x10^40). And the reason why we know that is because we know the energy of the proton: mc^2, where m is the proton's mass, is equal to 1.503278563x10^-10 J.

You may remember the Rydberg multiplier, 2.9567625x10^32. This is simply the Rydberg frequency, 3.289842731x10^15, multiplied by c^2. Well, if we want the analogue to that represented by the energy of the proton then all we do is divide the energy, 1.503278563x10^-10 J, by (h/c^2) and we get a multiplier of 2.039034x10^40. Multiply by the quantum adjustor, 3.62994678, then divide by 2 and we have 3.700793x10^40. You can see now how Rydberg offers a microscopic insight into the mechanics of the atom.

Take the proton energy, 1.503278563x10^-10 J, and divide it by the Rydberg energy, 2.179873869x10^-18 J, and you have 6.895617221x10^7. Divide this into c and you have 4.347229866. Divide this by our quantum adjustor and we have 1.197601707, the Rydberg adjustor. Now divide the Rydberg multiplier, 2.9567625x10^32, by the Rydberg adjustor, 1.197601707, and you have 2.468903x10^32; then divide this by the quantum adjustor and you have 6.801486x10^31, the surface g of the proton. So the numerical representation of the inner mechanics of the proton will be g x QA x RA x (h/c^2) = Rydberg energy, 2.179873869x10^-18 J. Here, g is surface gravity, QA is the quantum adjustor and RA is the Rydberg adjustor. You see now how there is no distinction, at this level of interaction, between the gravitational force and any other force.
But there is another link in all this that goes way beyond the proton nucleus. Take the above figure, 2.468903x10^32. This number is the Rydberg multiplier divided by the Rydberg adjustor. It has several other roles in physics. 2.468903x10^32 multiplied by c/G equals 1.109439x10^51 ms, the limit in surface gravity. So here you can see not only why there is a Rydberg limit but how simply and gracefully it is entwined in some of the most familiar parameters in physics.

But there is another feature. (2.468903x10^32)^2 equals 6.095482038x10^64, and this number divided by G becomes 9.136653278x10^74, and this number divided by G becomes 1.369513x10^85, and the square root of this becomes 3.700693179x10^42, which is the Planck frequency. Multiply this by c and you have 1.109439x10^51. You're back where you started.

The primordial black hole starts off with an event horizon diameter equal to 1.32141x10^-15 m, with a surface g of 6.801486x10^31 ms; it has an inner horizon where g is equal to 1.109439x10^51 ms. The Lorentz contraction reduces that to the Planck limit, 4.05049049x10^-35 m. The surface g on this side of the surface reverts back to 6.80148x10^31 ms, but on the other side of the surface is that other dimensional world cut off from this one, where nothing can go: the limit.
# Free Identify Angles Homework Extension Year 4 Properties of Shape

## Step 1: Identify Angles Homework Extension Year 4 Summer Block 5

Identify Angles Homework provides additional questions which can be used as homework or an in-class extension for the Year 4 Identify Angles Resource Pack. These are differentiated for Developing, Expected and Greater Depth.

More resources for Summer Block 5 Step 1.

### What's included in the pack?

This pack includes:
• Identify Angles Homework Extension with answers for Year 4 Summer Block 5.

#### National Curriculum Objectives

Mathematics Year 4: (4G4) Identify acute and obtuse angles and compare and order angles up to two right angles by size

Differentiation:

Questions 1, 4 and 7 (Varied Fluency)
Developing Draw a line from each angle to the correct angle type. All angles presented with a horizontal base line and facing one direction. Angle tester used as pictorial support.
Expected Draw a line from each angle to the correct angle type. Most angles presented with a horizontal base line, facing any direction. Angle tester used as pictorial support in some questions.
Greater Depth Draw a line from each angle to the correct angle type. Angles presented on any plane and facing any direction. No use of angle tester for pictorial support.

Questions 2, 5 and 8 (Varied Fluency)
Developing Draw angles on a given line to match the label. All angles presented with a horizontal base line and facing one direction. Angle tester used as pictorial support.
Expected Draw angles on a given line to match the label. Most angles presented with a horizontal base line, facing any direction.
Angle tester used as pictorial support in some questions. Greater Depth Draw angles on a given line to match the label. Angles presented on any plane and facing any direction. No use of angle tester for pictorial support. Questions 3, 6 and 9 (Reasoning and Problem Solving) Developing Explain if a given statement is correct or not. All angles presented with a horizontal base line and facing one direction. Angle tester used as pictorial support. Expected Explain if a given statement is correct or not. Most angles presented with a horizontal base line, facing any direction. Angle tester used as pictorial support in some questions. Greater Depth Explain if a given statement is correct or not. Angles presented on any plane and facing any direction. No use of angle tester for pictorial support.
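The vocabulary the (4G4) objective targets, acute versus obtuse relative to the right angle, can be mirrored in a tiny classifier. This helper is purely illustrative and not part of the resource pack:

```python
def classify(angle_deg: float) -> str:
    # vocabulary from the Year 4 objective: acute < 90 degrees,
    # obtuse between 90 and 180 degrees (up to two right angles)
    if angle_deg < 90:
        return "acute"
    if angle_deg == 90:
        return "right"
    if angle_deg < 180:
        return "obtuse"
    return "straight" if angle_deg == 180 else "reflex"

print([classify(a) for a in (30, 90, 120, 180, 270)])
# ['acute', 'right', 'obtuse', 'straight', 'reflex']
```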
# If n is a positive integer, what is the value of the hundreds digit of 30^n ?

Posted: 23 Aug 2004

If n is a positive integer, what is the value of the hundreds digit of 30^n ?

(1) 30^n > 1000
(2) n is a multiple of 3

Reply:

n=1, 30^1 = 30 (hundreds digit = 0)
n=2, 30^2 = 900 (hundreds digit = 9)
n=3, 30^3 = 27000 (hundreds digit = 0)
n=4, 30^4 = 810000 (hundreds digit = 0)
For any further n, hundreds digit = 0.

I. 30^n > 1000, so hundreds digit = 0.
II. n is a multiple of 3, so hundreds digit = 0.

So, D.
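The case analysis in the reply can be brute-forced. The sketch below is mine; the key fact is that for n ≥ 3, 30^n is a multiple of 1000, so its hundreds digit is always 0:

```python
# hundreds digit of 30**n for the first few positive integers n
digits = {n: (30**n // 100) % 10 for n in range(1, 8)}
print(digits)  # {1: 0, 2: 9, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0}

# statement (1): 30^n > 1000 forces n >= 3, where the digit is always 0
# statement (2): n a multiple of 3 also forces n >= 3
# either statement alone suffices, hence answer D
```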
CIV100 – Mechanics
Module 4: Centroid and Moments of Inertia
by: Jinyue Zhang, 2011/10/13

Module Objective – by the end of this Module you should be able to:
– Understand the underlying concept of centroids
– Understand how moments relate to the concept of centroid
– Know how to find centroids of composite shapes
– Know how to calculate the fluid pressure
– Understand the underlying concept of moments of inertia
– Know how to calculate the moment of inertia for composite areas
– Know how to use the parallel axis theorem

Today's Objective:
• Understand the concept of centroids
• Be clear about the shapes for which you need to know the location of the centroid
• Be able to find the centroid of composite areas – Examples

Center of Gravity and Centroid
• Center of gravity – a point which locates the weight of a body; the balance-point of a body
• Centroid – geometric center of an object (an area or a volume)
• The centroid coincides with the center of gravity for a uniform or homogeneous body – the density or specific weight is constant throughout the body
• Symmetric area/volume – the centroid must be on the line of symmetry!

Areas You Need to Know (from the figures): for a right triangle of base L and height H, the centroid lies at (L/3, H/3); for an isosceles triangle of base L and height H, at (L/2, H/3); for a rectangle, at (L/2, H/2); for semicircular and quarter-circular areas of radius r, the centroid lies 4r/(3π) from each flat edge.

Volumes You Need to Know
• All you need to know is prismatic volumes!
– Find the centroid of a cross section
– Then draw a line passing through the centroid of that cross section and parallel to the z-axis (the height)
– The centroid of this prismatic volume is on this line at half the height!
Centroid of a Composite Area
• Finding the centroid of a composite area is similar to finding an equivalent force system:
– Divide the composite area into simpler shapes with known centroids
– Choose any reference x-y coordinate system (be smart)
– Locate the centroid of each simple shape
– Represent the centroid of the composite area by two unknowns (x, y)
– Imagine each shape has a weight of 1 N per unit area (for example 1 N/m²)
– Calculate the sum of moments of all weights about the x and y axes
– The total weight of the composite area must create the same moment about the x and y axes respectively; you can then solve for the coordinates (x, y) of the centroid

We are balancing the moment of areas at the centroid; as such, the centroid is also called the moment of area!

Example 1: Locate the centroid of the plate.

Please note: the imagination of having a weight of 1 N per unit area is only to demonstrate the method of finding the centroid of composite areas. This weight cancels when equating the moment of the original divided system about the x and y axes with the moment of the equivalent system (the weight acting at the centroid of the composite area) about the same axes, so we only need the areas themselves.
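The moment-balancing recipe above can be sketched in a few lines. The plate dimensions below are hypothetical (the notes' Example 1 figure did not survive extraction), chosen only to exercise the method:

```python
# Composite-area centroid by balancing moments of area.
# Hypothetical plate: a 4 x 2 rectangle with a right triangle (base 4, height 3)
# sitting on top of it, right angle at the triangle's lower-left corner.
shapes = [
    # (area, x of centroid, y of centroid)
    (4.0 * 2.0,       2.0,       1.0),          # rectangle: centroid at (L/2, H/2)
    (0.5 * 4.0 * 3.0, 4.0 / 3,   2.0 + 3.0 / 3) # triangle: centroid at (L/3, H/3), shifted up by 2
]

A = sum(a for a, _, _ in shapes)
x_bar = sum(a * x for a, x, _ in shapes) / A  # moment about the y axis / total area
y_bar = sum(a * y for a, _, y in shapes) / A  # moment about the x axis / total area

print(round(x_bar, 3), round(y_bar, 3))  # 1.714 1.857
```

Note that the fictitious "1 N per unit area" weight from the notes cancels out, which is why only areas appear in the sums.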
# 3.2 Vector addition and subtraction: graphical methods (Page 2/15)

Step 1. Draw an arrow to represent the first vector (9 blocks to the east) using a ruler and protractor.

Step 2. Now draw an arrow to represent the second vector (5 blocks to the north). Place the tail of the second vector at the head of the first vector.

Step 3. If there are more than two vectors, continue this process for each vector to be added. Note that in our example, we have only two vectors, so we have finished placing arrows tip to tail.

Step 4. Draw an arrow from the tail of the first vector to the head of the last vector. This is the resultant, or the sum, of the other vectors.

Step 5. To get the magnitude of the resultant, measure its length with a ruler. (Note that in most calculations, we will use the Pythagorean theorem to determine this length.)

Step 6. To get the direction of the resultant, measure the angle it makes with the reference frame using a protractor. (Note that in most calculations, we will use trigonometric relationships to determine this angle.)

The graphical addition of vectors is limited in accuracy only by the precision with which the drawings can be made and the precision of the measuring tools. It is valid for any number of vectors.

## Adding vectors graphically using the head-to-tail method: a woman takes a walk

Use the graphical technique for adding vectors to find the total displacement of a person who walks the following three paths (displacements) on a flat field. First, she walks 25.0 m in a direction $49.0º$ north of east. Then, she walks 23.0 m heading $15.0º$ north of east. Finally, she turns and walks 32.0 m in a direction $68.0º$ south of east.

Strategy

Represent each displacement vector graphically with an arrow, labeling the first $\mathbf{A}$, the second $\mathbf{B}$, and the third $\mathbf{C}$, making the lengths proportional to the distance and the directions as specified relative to an east-west line.
The head-to-tail method outlined above will give a way to determine the magnitude and direction of the resultant displacement, denoted $\mathbf{\text{R}}$.

Solution

(1) Draw the three displacement vectors.

(2) Place the vectors head to tail, retaining both their initial magnitude and direction.

(3) Draw the resultant vector, $\mathbf{\text{R}}$.

(4) Use a ruler to measure the magnitude of $\mathbf{\text{R}}$, and a protractor to measure its direction. While the direction of the vector can be specified in many ways, the easiest way is to measure the angle between the vector and the nearest horizontal or vertical axis. Since the resultant vector is south of the eastward pointing axis, we flip the protractor upside down and measure the angle between the eastward axis and the vector.

In this case, the total displacement $\mathbf{\text{R}}$ is seen to have a magnitude of 50.0 m and to lie in a direction $7.0º$ south of east. By using its magnitude and direction, this vector can be expressed as $R=\text{50.0 m}$ and $\theta =7.0º$ south of east.

Discussion

The head-to-tail graphical method of vector addition works for any number of vectors. It is also important to note that the resultant is independent of the order in which the vectors are added. Therefore, we could add the vectors in any order as illustrated in [link] and we will still get the same solution.
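The walk can also be cross-checked analytically (a sketch using trigonometric components rather than ruler and protractor; the graphical answer of 50.0 m at 7.0° is read off a drawing, so the computed values come out slightly different, about 50.8 m at 5.5°):

```python
import math

# Analytic cross-check of the woman's walk: resolve each displacement
# into east (x) and north (y) components, then add the components.
legs = [
    (25.0,  49.0),   # 49.0 degrees north of east
    (23.0,  15.0),   # 15.0 degrees north of east
    (32.0, -68.0),   # 68.0 degrees south of east
]
x = sum(d * math.cos(math.radians(a)) for d, a in legs)
y = sum(d * math.sin(math.radians(a)) for d, a in legs)

magnitude = math.hypot(x, y)             # Pythagorean theorem (Step 5)
angle = math.degrees(math.atan2(-y, x))  # degrees south of east (Step 6)
print(f"R = {magnitude:.1f} m at {angle:.1f} deg south of east")
```

The small discrepancy with the measured 50.0 m and 7.0° illustrates the accuracy limit of the graphical method mentioned above.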
open-web-math/open-web-math
# The heat of combustion of ethylene at $18^\circ C$ and at constant volume is -335.8 Kcal when water is obtained in the liquid state. Calculate the heat of combustion at constant pressure and at $18^\circ C$.

Hint: Try to figure out the combustion reaction of ethylene. Combustion is nothing but heating in the presence of air. Air mainly contains oxygen and nitrogen. Think about what changes between the constant-volume condition and the constant-pressure condition.

We know that a combustion reaction is nothing but burning the given substance in the presence of air. In air, nitrogen and oxygen are present in major quantities. Nitrogen is unreactive due to the high bond dissociation energy of the nitrogen molecule, so oxygen reacts with the given substance during combustion. The combustion reaction of ethylene is:

$C_2H_{4(g)} + 3O_{2(g)} \rightarrow 2CO_{2(g)} + 2H_2O_{(l)}$

Given that the reaction carried out at constant volume has a heat of combustion of -335.8 Kcal, we need to calculate the heat of combustion at constant pressure.

The heat at constant volume equals the internal-energy change: $q_V = \Delta U$.

The enthalpy change at constant pressure follows from $H = U + PV$: $\Delta H = \Delta U + \Delta n_g RT$

$\Delta n_g$ for the given reaction is 2 - 4 = -2.

Given the heat of combustion of ethylene at $18^\circ C$ and at constant volume: $\Delta U = -335.8\,\text{Kcal}$

R is the universal gas constant, with value 1.99 $cal\,K^{-1}\,mol^{-1}$; T is the temperature of the gas in kelvin, here 273 + 18 = 291 K.

Enthalpy change at constant pressure:

$\Delta H = \Delta U + \Delta n_g RT = -335.8\,\text{Kcal} + (-2)\times 1.99\times 291\,\text{cal} = -335.8\,\text{Kcal} - 1158\,\text{cal} = -336.96\,\text{Kcal}$

Therefore, the heat of combustion at constant pressure and at $18^\circ C$ is -336.96 Kcal.

Note: At constant volume there is no expansion of the gas, so no expansion work is done by the gas. Combustion in the presence of air is oxidation. In this reaction the number of gaseous moles decreases, so the change in gaseous moles is negative.
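A quick numeric check of the constant-volume to constant-pressure conversion, using the thermodynamic identity $\Delta H = \Delta U + \Delta n_g RT$ that follows from $H = U + PV$ (values from the problem; with $\Delta n_g = -2$ the constant-pressure heat comes out slightly more negative than $\Delta U$):

```python
# dU is the heat of combustion at constant volume; the correction term
# dn_gas * R * T comes out in calories because R is in cal/(K*mol).
dU_kcal = -335.8          # heat of combustion at constant volume, kcal
dn_gas = 2 - 4            # 2 mol CO2(g) minus (1 C2H4(g) + 3 O2(g))
R = 1.99                  # cal K^-1 mol^-1
T = 273 + 18              # 18 degrees C in kelvin

dH_kcal = dU_kcal + dn_gas * R * T / 1000.0   # convert cal -> kcal
print(f"dH = {dH_kcal:.2f} kcal")
```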
# Solving Thiele's differential equation.

Consider Thiele's differential equation for $$t\in[0,\infty)$$ (all the other functions are continuous on $$[0,\infty)$$ too):

\begin{align} V'(t)&=\Big(\phi(t)+\lambda(t)\Big)V(t)+\pi(t)-\lambda(t)A(t)\\ V(0)&=0 \end{align}

I am reading a proof that the unique solution is

$$V(t)=\int_0^t \big(\pi(s)-\lambda(s)A(s)\big)\exp\Big(\int_s^t \big(\phi(u)+\lambda(u)\big)du\Big)ds$$

The first thing happening in the proof is that the author solves the homogeneous equation $$V'(t)=\big(\phi(t)+\lambda(t)\big)V(t)$$ and finds the full solution by variation of constants afterwards. The homogeneous equation is equivalent to $$\frac{V'(t)}{V(t)}=\phi(t)+\lambda(t)$$ and therefore $$\int_0^t\frac{V'(s)}{V(s)}ds=\int_0^t \big(\phi(s)+\lambda(s)\big)ds$$

Now he states something I do not understand: $$\log V(t)=\int_0^t \big(\phi(s)+\lambda(s)\big)ds + c$$

In my opinion, using that $$\log' V(t)= \frac{V'(t)}{V(t)}$$, it should be $$\log V(t) -\log V(0)=\int_0^t \big(\phi(s)+\lambda(s)\big)ds,$$ which seems to be not well defined, since $$V(0)=0$$. Is this some sort of method to solve this equation, or is it just wrong? What is the procedure here? Thanks in advance for any help!

• The method is not wrong in itself, but dividing by $V(t)$ requires additional explanation. It is much better to multiply both sides of $V'(t)-(\phi(t)+\lambda(t))V(t)=\pi(t)-\lambda(t)A(t)$ by the integrating factor $\exp(-\int\limits_s^t(\phi(u)+\lambda(u))\,\mathrm{d}u)$ and notice that the LHS is just the derivative of $V(t)\exp(-\int\limits_s^t(\phi(u)+\lambda(u))\,\mathrm{d}u)$. Perhaps you could take another textbook? – user539887 May 1 at 6:40

## 1 Answer

You can't apply $$V(0)=0$$ because this boundary condition applies to the full equation, not the equation without the forcing term. Instead, solve this auxiliary equation with an arbitrary constant, use that solution to obtain a solution to the full equation, and only then apply the boundary condition.
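The closed-form solution can be sanity-checked numerically. Here is a minimal sketch with constant sample coefficients of my own choosing ($\phi+\lambda \equiv 0.04$ and $\pi - \lambda A \equiv 90$), for which the integral formula evaluates in closed form and can be compared with a direct Runge-Kutta integration of the ODE:

```python
import math

# Sample constants (my choice, not from the post):
phi_plus_lam = 0.04   # phi(t) + lambda(t)
forcing = 90.0        # pi(t) - lambda(t) * A(t)

def closed_form(t):
    """The integral formula; for constants it integrates to
    (pi - lam*A)/(phi + lam) * (exp((phi+lam)*t) - 1)."""
    return forcing / phi_plus_lam * (math.exp(phi_plus_lam * t) - 1.0)

def rk4(t_end, n=10000):
    """Integrate V' = (phi+lam)*V + forcing, V(0) = 0, with classical RK4."""
    h = t_end / n
    v, t = 0.0, 0.0
    f = lambda t, v: phi_plus_lam * v + forcing
    for _ in range(n):
        k1 = f(t, v)
        k2 = f(t + h / 2, v + h / 2 * k1)
        k3 = f(t + h / 2, v + h / 2 * k2)
        k4 = f(t + h, v + h * k3)
        v += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return v

print(closed_form(10.0), rk4(10.0))
```

The two values agree to high precision, consistent with the uniqueness claim (for these constant coefficients).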
# Nakai-Moishezon theorem for abelian varieties

In Birkenhake and Lange's book, they prove a version of the Nakai-Moishezon theorem for complex abelian varieties that says that if $L_0$ is an ample line bundle on a complex abelian variety $X$ of dimension $g$, then a line bundle $L$ is ample if and only if $(L^\nu\cdot L_0^{g-\nu})>0$ for $\nu=1,\ldots,g$ (Corollary 4.3.3 on page 77 of the second edition). I would love for this to be true in arbitrary characteristic. The problem is that the proof over $\mathbb{C}$ explicitly uses the Hermitian forms associated to the line bundles, and I don't see how this could be adapted to arbitrary characteristic. Does anyone know if this is true in arbitrary characteristic? Are there any references or at least some idea for a proof?

## 1 Answer

It is enough to show that a multiple of $L$ is effective. By Riemann-Roch and the theorem on page 32 (Example 1.2.31: Higher cohomology of nef divisors, I) in Lazarsfeld's "Positivity in alg. geom.," this will follow if we show that $L$ is nef, and Kleiman's theorem thus reduces the problem to checking $L.C \geq 0$ on every curve $C$ in $X$.

Let me restrict to the case $\dim{X} = 2$ "for simplicity;" this should give an idea for the general case. Curves in abelian varieties are in any case nef, and if $a := -(L.C) > 0$, the divisor $H := aL_0 + (L_0.L)C$ would be ample. But $L$ has a positive self-intersection and is orthogonal to $H$, contradicting the Hodge index theorem on the surface $X$.

• Ok, nice. Now how would this argument go in higher dimensions? We would want to intersect with curves to use Kleiman's Theorem, but I don't see how we can generalize what you wrote. If $(L\cdot C)<0$, we could take the 1-class $-(L\cdot C)L_0^{n-1}+(L_0^{n-1}\cdot L)C$ and this intersected with $L$ would be 0. But I'm not sure what else could be done...
Aug 28, 2013 at 18:57 • In higher dimension, since $L_0$ is ample, you can take a surface $Y \subset X$ through $C$ in the linear equivalence class $L_0^{g-2} \in \mathrm{CH}^2(X)$ (i.e., linearly equivalent to a multiple of a $2$-plane in the appropriate projective embedding). Since $L_0^{g-2}.L^2 > 0$ we still have $L_{|Y}^2 > 0$, and the same argument shows that $L.C < 0$ contradicts the Hodge index theorem on the surface $Y$. Aug 28, 2013 at 23:31 • One last question: Why can we assume that $C$ is contained in $L_0^{g-2}$? This doesn't seem obvious to me... Aug 29, 2013 at 13:53 • I'm sorry, I didn't write it cleanly in my last comment. I meant to say that $C$ lies on a surface which is linearly equivalent to a multiple of $c_1(L_0)^{g-2}$ in the Chow group $\mathrm{CH}_2$ of $2$-cycles mod rational equivalence. To fix ideas, say $g = \dim{X} = 3$. Because $L_0$ is ample, the sheaf $L_0^{\otimes n}(-C)$ has non-zero sections for $n \gg 0$, whose zero loci give surfaces through $C$ linearly equivalent to $nL_0$. Aug 29, 2013 at 14:16
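The orthogonality step in the surface case can be written out explicitly. With $a := -(L\cdot C) > 0$ and $H := aL_0 + (L_0\cdot L)C$ as in the answer, bilinearity of the intersection form gives

$$L\cdot H \;=\; a\,(L\cdot L_0) + (L_0\cdot L)\,(L\cdot C) \;=\; a\,(L\cdot L_0) - (L_0\cdot L)\,a \;=\; 0,$$

so $L^2 > 0$ together with $L\cdot H = 0$ and $H$ ample would violate the Hodge index theorem, which allows only one positive direction in the Neron-Severi lattice of a surface.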
## Problem 2: Pairwise Cipher In war time, messages sent by radio can be heard by the enemy. It is therefore important that the message is in a secret code, so that no one but friends, who have the key, can decode the message. The key is a secret phrase that only you and your friends know: It is used to scramble the alphabet. The scrambled alphabet is next used to find the letters to substitute in the message. There are therefore two steps in coding the message: first creating the scrambled alphabet, and second, replacing the letters of the message, using the scrambled alphabet. Suppose the key is: "Mon Oncle et ma tante" and the message is: "Fish are birds without wings and birds are fish without fins" First the alphabet is scrambled by means of a key as follows: Line up the letters of the key together with the alphabet: `MONONCLEETMATANTEABCDEFGHIJKLMNOPQRSTUVWXYZ` Now pick out all the letters, one at a time, and if a letter occurs a second time, it is deleted: We now have the scrambled alphabet: `MONCLETABDFGHIJKPQRSUVWXYZ` Next the message itself is prepared: 1. The letter "x" (and "X") is replaced by "ks". 2. The spaces are replaced by the letter "X". 3. All letters are capitalized. 4. All other characters are ignored. `FISHXAREXBIRDSXWITHOUTXWINGSXANDXBIRDSXAREXFISHXWITHOUTXFINS` Encoding the message is done next. Group all the letters in the message by twos, adding the letter X at the end of the message if necessary to make a final pair. `FI SH XA RE XB IR DS XW IT HO UT XW IN GS XA ND XB IR DS XA RE XF IS HX WI TH OU TX FI NS` Now think of each letter as having a left and a right mate, according to the scrambled alphabet. The letter L has a left mate: (C) and a right mate: (E). Even the letter M has a right mate, (O) and a left mate: (Z). In the same way, Z has a left mate (Y) and a right mate (M). 
Now substitute each letter of the pairs by the following rule, using the left and right mates in: `MONCLETABDFGHIJKPQRSUVWXYZ` Translating `FI`: take the right mate of `I` `(J)`, followed by the left mate of `F` `(D)`: `JD` Translating `SH`: take the right mate of `H` `(I)`, followed by the left mate of `S` `(R)`: `IR` Translating `XA`: take the right mate of `A` `(B)`, followed by the left mate of `X` `(W)`: `BW` and so on, to give you these new pairs: `JD IR BW TQ DW SH UB XW AH NG AS XW CH UF BW FO DW SH UB BW TQ GW UH YG JV IE VM YE JD UO` And so the final message reads: `JDIRBWTQDWSHUBXWAHNGASXWCHUFBWFODWSHUBBWTQGWUHYGJVIEVMYEJDUO` You must write a program that will decode the messages that are received from your friends. Your program must read 5 sets of data (a total of 10 lines). Each set of data is made up of two lines: a key phrase and an encoded message. The lines are never larger than 80 characters. You must print out both the scrambled alphabet and the decoded message. You may expect only capital letters in the encoded message (line two of each pair) and no spaces. However, the first line in each pair may contain spaces and lower case characters. ### Sample Input: (only 2 of 5 sets of data) ```nothing is new in this world GCIGBVWWCVLHELRVHHTTHQRVOHEIBVAZCVLHELBVWWJVEHYTGEIOVNYOGCEZ Zorro strikes again! BZABVWRMYGYEKSALKWYGALTIDTYZRGYSRRMWBZYANEYZ``` ### Sample Output: ```NOTHIGSEWRLDABCFJKMPQUVXYZ FISH ARE BIRDS WITHOUT WINGS AND BIRDS ARE FISH WITHOUT FINS ZORSTIKEAGNBCDFHJLMPQUVWXY ONCE UPON A TIME IN MEXICO NOT SO LONG AGO``` Point Value: 10 Time Limit: 2.00s Memory Limit: 16M
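A decoder for the rule above can be sketched in Python (the function names are mine; the KS-to-X restoration and the trailing-padding cleanup are inferred from the second sample):

```python
def scramble(key):
    """Build the scrambled alphabet: uppercase key letters followed by
    A-Z, keeping only the first occurrence of each letter."""
    seen, out = set(), []
    for ch in key.upper() + "ABCDEFGHIJKLMNOPQRSTUVWXYZ":
        if ch.isalpha() and ch not in seen:
            seen.add(ch)
            out.append(ch)
    return "".join(out)

def decode(key, cipher):
    alpha = scramble(key)
    succ = {alpha[i]: alpha[(i + 1) % 26] for i in range(26)}  # right mate
    pred = {alpha[i]: alpha[(i - 1) % 26] for i in range(26)}  # left mate
    # Encoding maps a pair (p1, p2) to (succ(p2), pred(p1)); invert that.
    plain = []
    for i in range(0, len(cipher), 2):
        c1, c2 = cipher[i], cipher[i + 1]
        plain.append(succ[c2])   # recovers p1
        plain.append(pred[c1])   # recovers p2
    text = "".join(plain)
    # Undo the message preparation: X was a space, KS was a literal X.
    # (Order matters; a genuine KS in the plaintext would be mangled,
    # an ambiguity built into the cipher itself.)
    return text.replace("X", " ").replace("KS", "X").rstrip()

print(scramble("nothing is new in this world"))
print(decode("nothing is new in this world",
             "GCIGBVWWCVLHELRVHHTTHQRVOHEIBVAZCVLHELBVWWJVEHYTGEIOVNYOGCEZ"))
```

Wrapping this in a loop over the 5 input pairs and printing both the scrambled alphabet and the decoded message gives the required output format.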
## 50842

50,842 (fifty thousand eight hundred forty-two) is an even five-digit composite number following 50841 and preceding 50843. In scientific notation, it is written as 5.0842 × 10^4. The sum of its digits is 19. It has a total of 3 prime factors and 8 positive divisors. There are 23,100 positive integers (up to 50842) that are relatively prime to 50842.

## Basic properties

- Is prime? No
- Number parity: Even
- Number length: 5
- Sum of digits: 19
- Digital root: 1

## Name

Short name: 50 thousand 842. Full name: fifty thousand eight hundred forty-two.

## Notation

Scientific notation: 5.0842 × 10^4. Engineering notation: 50.842 × 10^3.

## Prime factorization of 50842

The prime factorization of 50,842 is 2 × 11 × 2311. Since it has a total of 3 prime factors, 50,842 is a composite number.

- ω(n) = 3 (total number of distinct prime factors)
- Ω(n) = 3 (total number of prime factors)
- rad(n) = 50842 (product of the distinct prime numbers)
- λ(n) = (-1)^Ω(n) = -1 (the parity of Ω(n))
- μ(n) = -1 (n is square-free with an odd number of prime factors)
- Λ(n) = 0 (returns log(p) if n is a power p^k of a prime p for some k ≥ 1, else 0)

## Divisors of 50842

The number 50,842 has 8 positive divisors: 1, 2, 11, 22, 2311, 4622, 25421, 50842 (4 even and 4 odd).

- τ(n) = 8 (total number of positive divisors)
- σ(n) = 83232 (sum of all positive divisors)
- s(n) = 32390 (aliquot sum: sum of the proper positive divisors)
- A(n) = 10404 (σ(n) divided by τ(n), the average divisor)
- G(n) ≈ 225.482 (the τ(n)-th root of the product of the divisors)
- H(n) ≈ 4.88677 (τ(n) divided by the sum of the reciprocals of the divisors)
The sum of these divisors (counting 50,842) is 83,232; the average is 10,404.

## Other arithmetic functions (n = 50842)

- φ(n) = 23100 (total number of positive integers not greater than n that are coprime to n)
- λ(n) = 2310 (Carmichael lambda: smallest positive k such that a^k ≡ 1 (mod n) for all a coprime to n)
- π(n) ≈ 5208 (total number of primes less than or equal to n)
- r2(n) = 0 (number of ways n can be represented as the sum of 2 squares)

There are 23,100 positive integers (less than 50,842) that are coprime with 50,842, and there are approximately 5,208 prime numbers less than or equal to 50,842.

## Divisibility of 50842

| m | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| n mod m | 0 | 1 | 2 | 2 | 4 | 1 | 2 | 1 |

The number 50,842 is divisible by 2.

## Classification of 50842

- Arithmetic
- Deficient
- Polite
- Non-hypotenuse
- Square free
- Sphenic

## Base conversion (50842)

| Base | System | Value |
| --- | --- | --- |
| 2 | Binary | 1100011010011010 |
| 3 | Ternary | 2120202001 |
| 4 | Quaternary | 30122122 |
| 5 | Quinary | 3111332 |
| 6 | Senary | 1031214 |
| 8 | Octal | 143232 |
| 10 | Decimal | 50842 |
| 12 | Duodecimal | 2550a |
| 20 | Vigesimal | 6722 |
| 36 | Base36 | 138a |

## Basic calculations (n = 50842)

- Multiplication n×2, n×3, n×4, n×5: 101684, 152526, 203368, 254210
- Division n⁄2, n⁄3, n⁄4, n⁄5: 25421, 16947.3, 12710.5, 10168.4
- Exponentiation n², n³, n⁴, n⁵: 2584908964, 131421941547688, 6681754352167553296, 339713754772902744675232
- Nth root ²√n, ³√n, ⁴√n, ⁵√n: 225.482, 37.046, 15.016, 8.73463

## 50842 as geometric shapes

- Circle (radius n): diameter 101684, circumference 319450, area 8.12073e+09
- Sphere (radius n): volume 5.50499e+14, surface area 3.24829e+10, circumference 319450
- Square (side n): perimeter 203368, area 2.58491e+09, diagonal 71901.4
- Cube (side n): surface area 1.55095e+10, volume 1.31422e+14, space diagonal 88060.9
- Equilateral triangle (side n): perimeter 152526, area 1.1193e+09, height 44030.5
- Triangular pyramid (side n): surface area 4.47719e+09, volume 1.54882e+13, height 41512.3

## Cryptographic hash functions

- md5: 9fe5f22f40fa63795a7f70e81cb9ebb7
- c0963a82e66ddbc32994ecc8b7b784d156659b23
- 5ea93e6155356a8c35720a4f28fc2c8b4f9f2a4543ff3a636ec874783ea53228
- 54d3609822e98e9c8ab071f46eef9511cdca5a32205d6fa151d578f9d5260dabc4216d9527e07a93ca8e3da17a33e6572d81e2e9d978e93b4d4b2644ddc8e99f
- e2dbfb3d84ed2d128d07e5f544b29c807b2aaea7
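The factorization and the multiplicative functions listed above can be re-derived with a short standard-library script (the helper names are mine; trial division is perfectly adequate at this size):

```python
# Re-derive the arithmetic functions for n = 50842 from its
# prime factorization 2 x 11 x 2311.
n = 50842

def factorise(m):
    """Prime factorization as a dict {prime: exponent} by trial division."""
    factors, d = {}, 2
    while d * d <= m:
        while m % d == 0:
            factors[d] = factors.get(d, 0) + 1
            m //= d
        d += 1
    if m > 1:
        factors[m] = factors.get(m, 0) + 1
    return factors

f = factorise(n)

sigma = phi = tau = 1
for p, k in f.items():
    sigma *= (p ** (k + 1) - 1) // (p - 1)   # sum of divisors, sigma(n)
    phi *= p ** (k - 1) * (p - 1)            # Euler totient, phi(n)
    tau *= k + 1                             # number of divisors, tau(n)

print(f, sigma, phi, tau)
```

The computed values match the page: σ(n) = 83232, φ(n) = 23100, τ(n) = 8, and the aliquot sum σ(n) − n = 32390 is less than n, confirming that 50,842 is deficient.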
# how to find eigenvalues of a matrix

An eigenvector of a square matrix $A$ is a nonzero vector $v$ that, when multiplied against the matrix, yields back itself times a multiple: $Av = \lambda v$. The scalar $\lambda$ is the corresponding eigenvalue: the factor by which the eigenvector is scaled. A simple way to picture it is that an eigenvector does not change direction in the transformation.

## Steps to find eigenvalues of a matrix

1. Make sure the given matrix $A$ is a square matrix, and write down the identity matrix $I$ of the same order.
2. Form the matrix $A - \lambda I$, where $\lambda$ is a scalar.
3. Solve the characteristic equation $\det(A - \lambda I) = 0$. For a square matrix of order $n$ this is a polynomial equation of degree $n$, so to find eigenvalues all we need to do is solve a polynomial. In principle, finding an eigenvalue is the same problem as finding a root of a polynomial equation, which is why it can even be solved numerically with a root-finder such as MS Excel's Goal Seek. Counting repeated roots, an $n \times n$ matrix has $n$ eigenvalues; that's generally not too bad provided we keep $n$ small.

## Finding the eigenvectors

A number $\lambda$ is an eigenvalue of $A$ if and only if there exists a non-zero vector $C$ such that $AC = \lambda C$. Once the (exact) value of an eigenvalue is known, the corresponding eigenvectors are the nonzero solutions of the eigenvalue equation, which becomes a system of linear equations with known coefficients. In order to find the associated eigenvectors, we do the following steps:

1. Write down the associated linear system $(A - \lambda I)X = 0$.
2. Solve the system.
3. Rewrite the unknown vector $X$ as a linear combination of known vectors.

Example: find the eigenvalues and eigenvectors of the matrix

$$A = \begin{pmatrix} 1 & -3 & 3 \\ 3 & -5 & 3 \\ 6 & -6 & 4 \end{pmatrix}$$

## Properties of eigenvalues

Let $A$ be a square matrix of order $n$ and let $\lambda$ be an eigenvalue of $A$. Then:

- $\lambda^m$ is an eigenvalue of $A^m$ for every positive integer $m$.
- $A$ is not invertible if and only if $0$ is an eigenvalue of $A$; if $A$ is invertible, then $1/\lambda$ is an eigenvalue of $A^{-1}$.
- Any square matrix $A$ has the same eigenvalues as its transpose $A^T$.
- The determinant of a triangular matrix is easy to find: it is simply the product of the diagonal elements. Hence the eigenvalues of a triangular matrix are its diagonal entries, and this result is valid for any diagonal matrix of any size; depending on the values on the diagonal, there may be one eigenvalue, two eigenvalues, or more. Beware, however, that row-reducing to row-echelon form changes the eigenvalues, so the triangular matrix obtained by row-reduction does not give you the eigenvalues of the original matrix.
- The only eigenvalues of a projection matrix $P$ are 0 and 1. The eigenvectors for 0 (which means $Pv = 0v$) fill up the nullspace, which is projected to zero; the eigenvectors for 1 (which means $Pv = v$) fill up the column space, which projects onto itself.
- A symmetric matrix has the special property that its eigenvalues are always real numbers (not complex). More generally, if $A$ is Hermitian, then all of its eigenvalues are real and its eigenvectors are orthogonal.
- Every square matrix satisfies its own characteristic equation; this is known as the Cayley-Hamilton theorem.

## Multiplicities

The algebraic multiplicity of an eigenvalue is the number of times it appears as a root of the characteristic polynomial (i.e., the polynomial whose roots are the eigenvalues of the matrix). The geometric multiplicity of an eigenvalue is the dimension of the linear space of its associated eigenvectors (i.e., its eigenspace).

## Laplacian matrices

For the Laplacian matrix of a graph, the smallest eigenvalue is $\lambda_1 = 0$, and the second smallest eigenvalue is the algebraic connectivity of the graph.

## Complex eigenvalues

A real matrix can have complex eigenvalues. The standard way to understand the geometry of $2 \times 2$ and $3 \times 3$ matrices with a complex eigenvalue is to learn to recognize a rotation-scaling matrix and to compute by how much the matrix rotates and scales.

## Software

- In R, the eigen() function is used to calculate eigenvalues and eigenvectors of a matrix. Syntax: eigen(x), where the parameter x is a matrix.
- In Python, the numpy library provides routines for such operations on arrays.
- In MATLAB, [V,D,W] = eig(A,B) solves the generalized eigenvalue problem, which is to determine the solutions of the equation Av = λBv, where A and B are n-by-n matrices, v is a column vector of length n, and λ is a scalar. The values of λ that satisfy the equation are the generalized eigenvalues, and the columns of the full matrix W are the corresponding left eigenvectors, so that W'*A = D*W'*B.
open-web-math/open-web-math
# How to use the IMSUB Function in Excel

A complex number (inumber) in Excel is a mathematical number having real and imaginary coefficients. In mathematics, the imaginary coefficient is called the coefficient of i or j (iota): i = (-1)^(1/2). The square root of a negative number is not possible, so for calculation purposes -1 is named imaginary and its root is called iota (i or j). For the calculation of a term like the one shown below:

A = 2 + (-25)^(1/2)
A = 2 + (-1 * 25)^(1/2)
A = 2 + (-1 * 5 * 5)^(1/2)
A = 2 + 5 * (-1)^(1/2)
X + iY = 2 + 5i

This equation is a complex number (inumber) having 2 different parts, called the real part and the imaginary part. The coefficient of iota (i), which is 5, is called the imaginary part, and the other part, 2, is called the real part of the complex number. A complex number (inumber) is written in the X + iY format.

Complex SUBTRACTION of the complex numbers ( X1 + iY1 ) and ( X2 + iY2 ) is given by:

( X1 + iY1 ) - ( X2 + iY2 ) = ( X1 - X2 ) + i ( Y1 - Y2 )

Here X and Y are the coefficients of the real and imaginary parts of the complex numbers (inumbers). The IMSUB function returns the SUBTRACTION of two complex numbers (inumbers) having both real and imaginary parts.

Syntax:
=IMSUB (inumber1, inumber2)
inumber1 : First complex number, to be subtracted from.
inumber2 : Second complex number, to be subtracted from the first complex number.

Let's understand this function by using it in an example. Here we have values where we need to get the complex SUBTRACTION of the input complex numbers (inumbers).

Use the formula:
=IMSUB (A2, B2)
A2 : inumber1 (complex number) provided as cell reference.
B2 : inumber2 (complex number) provided as cell reference.

As you can see, the subtraction operation is performed on the complex numbers. The mathematical formulation is performed as shown below:

5 + i - ( 4 - i ) = (5 - 4) + ( i + i ) = 1 + 2i

Now copy the formula to the other remaining cells using the Ctrl + D shortcut key. As you can see, the IMSUB function formula gives results just fine.

Note:
1. The formula returns the #NUM!
error if the complex number doesn't have a lower case i or j (iota).
2. The formula returns the #VALUE! error if the complex number is not in a correct complex number format.

Hope you understood how to use the IMSUB function and cell references in Excel. Explore more articles on Excel mathematical functions here. Please feel free to state your query or feedback for the above article.
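For readers outside Excel, the same subtraction rule can be checked with Python's built-in complex type. This is an illustrative sketch: the `imsub` helper below is a hypothetical name coined here, not an Excel or library function, and it only handles the plain `X+Yi` spelling.

```python
# Python's complex type implements the same rule IMSUB applies:
# (X1 + iY1) - (X2 + iY2) = (X1 - X2) + i(Y1 - Y2).
# Python spells the imaginary unit "j" rather than "i".
a = 5 + 1j          # corresponds to "5+i" in the worked example
b = 4 - 1j          # corresponds to "4-i"
print(a - b)        # (1+2j)

def imsub(s1, s2):
    """Subtract two complex numbers given as "X+Yi" strings,
    mimicking =IMSUB(s1, s2)."""
    to_c = lambda s: complex(s.replace("i", "j"))
    return to_c(s1) - to_c(s2)

print(imsub("5+1i", "4-1i"))   # (1+2j)
```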
HuggingFaceTB/finemath
+0
how do i find the area of the triangle?

find the area of a triangle with A=29 degrees, b=49 and c=50
Guest Apr 20, 2017

#1
find the area of a triangle with A=29 degrees, b=49 and c=50

Formula:
$$\begin{array}{|rcll|} \hline \text{Area} &=& \frac12 \cdot b\cdot c\cdot \sin(A) \\ \hline \end{array}$$

$$\begin{array}{|rcll|} \hline \text{Area} &=& \frac12 \cdot 49\cdot 50\cdot \sin(29^{\circ}) \\ &=& \frac12 \cdot 49\cdot 50\cdot 0.48480962025 \\ &=& 593.891784802 \\ \hline \end{array}$$

heureka Apr 20, 2017
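The arithmetic in the answer above can be reproduced in a few lines of Python (a sketch of the ½·b·c·sin(A) formula; not part of the original forum thread):

```python
import math

def triangle_area(A_deg, b, c):
    """Area = (1/2) * b * c * sin(A), where the angle A (in degrees)
    is enclosed between the two known sides b and c."""
    return 0.5 * b * c * math.sin(math.radians(A_deg))

print(round(triangle_area(29, 49, 50), 2))   # 593.89
```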
HuggingFaceTB/finemath
p201f04-homework01-sol

Version One – Homework 1 – Heinz – 81204 – Aug 30, 2004

This print-out should have 14 questions. Multiple-choice questions may continue on the next column or page – find all choices before answering. The due time is Central time.

Serway CP 01 03
01:04, trigonometry, multiple choice, > 1 min, fixed.

001 (part 1 of 1) 10 points
Consider the expression 2π·sqrt(l/g), where l is length and g is gravitational acceleration in units of length divided by time squared. Evaluate its units.
1. m/s
2. s/m
3. m²
4. s²
5. m
6. s (correct)
7. m/s²
8. s/m²

Explanation: The unit of length (l) is m and the unit of gravitational acceleration (g) is m/s². 2 and π are constants, so the unit of sqrt(l/g) is
sqrt(m ÷ (m/s²)) = sqrt(m · s²/m) = sqrt(s²) = s.
τ = 2π·sqrt(l/g) is the formula for the period of a pendulum (the time for one complete oscillation).

Beach Sand
01:05, trigonometry, numeric, > 1 min, normal.

002 (part 1 of 2) 5 points
Grains of fine beach sand are assumed to be spheres of radius 35 μm. These grains are made of silicon dioxide, which has a density of 2600 kg/m³.
a) What is the mass of each grain of sand?
Correct answer: 4.66945 × 10⁻¹⁰ kg.
Explanation: Since ρ = m/V, each grain has a mass of
m = ρV = ρ · (4/3)πr³ = (2600 kg/m³) · (4/3)π · (35 μm)³ · (1 m / 10⁶ μm)³ = 4.66945 × 10⁻¹⁰ kg.

003 (part 2 of 2) 5 points
Consider a cube whose sides are 0.15 m long.
b) How many kg of sand would it take for the total surface area of all the grains of sand to equal the surface area of the cube?
Correct answer: 0.004095 kg.
Explanation: Each grain of sand has a surface area of A_g = 4πr², and the cube has a surface area of A_c = 6a², so the required number of grains of sand is
n = A_c / A_g = 6a² / (4πr²).
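Both numeric answers above can be verified with a short Python script (a sketch using the constants from the problem; the variable names are mine):

```python
import math

rho = 2600.0        # kg/m^3, density of silicon dioxide
r = 35e-6           # m, grain radius
a = 0.15            # m, cube edge length

# (a) mass of one spherical grain: m = rho * (4/3) * pi * r^3
m = rho * (4 / 3) * math.pi * r**3
print(f"{m:.5e}")             # 4.66945e-10 (kg)

# (b) number of grains whose total surface area equals the cube's:
#     n = A_cube / A_grain = 6 a^2 / (4 pi r^2); total mass = n * m
n = 6 * a**2 / (4 * math.pi * r**2)
print(round(n * m, 6))        # 0.004095 (kg)
```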
HuggingFaceTB/finemath
# Finding location of turning points. Potential Energy?

• Patdon10

In summary, the roller coaster will continue on its uphill trajectory until reaching a point where kinetic energy is converted to potential energy. This point, at (41.39 m, 23.89 m), is where the track should be changed to utilize the potential energy and prevent the roller coaster from slipping backwards.

Patdon10

## Homework Statement

On the segment of roller coaster track shown in the figure, a cart of mass 233.7 kg moves from left to right and arrives at x = 0 with a speed of 16.5 m/s. Assuming that dissipation of energy due to friction is small enough to be ignored, where is the turning point of this trajectory? x = ? y = ?

## Homework Equations

Ke_i + Pe_i = Ke_f + Pe_f

## The Attempt at a Solution

This problem is really tough, and I really have no idea where to start. I know the turning point is where the derivative of the function is at a max or min. The only point where that happens looks to be about (17 m, 4 m). I don't know how I would incorporate kinetic energy into this, but I'm pretty sure I have to. Can anyone push me in the right direction?

In an ideal situation the roller coaster will continue on its uphill trajectory until it loses all kinetic energy and it is converted to potential energy. This is the point where you will want to change the track so that the roller coaster uses its potential energy and doesn't slip backwards down the track.
x = 41.39 m
y = 23.89 m

## 1. What is the significance of finding the location of turning points?

Finding the location of turning points in potential energy is important because it allows us to understand the behavior and stability of a system. It can also provide insights into the forces acting on the system and help us make predictions about its future behavior.

## 2. How is the location of turning points determined?

The location of turning points can be determined by taking the derivative of the potential energy function and setting it to 0.
The resulting value(s) represent the points where the potential energy changes from increasing to decreasing or vice versa.

## 3. Can turning points be found for any type of potential energy function?

Yes, turning points can be found for any type of potential energy function, whether it is linear, quadratic, or more complex. The method for finding them may vary, but the concept remains the same.

## 4. How do turning points relate to equilibrium points?

Turning points and equilibrium points are closely related. In fact, equilibrium points are a type of turning point where the potential energy is at a minimum or maximum. The location of turning points can help us determine the stability of an equilibrium point.

## 5. Can finding the location of turning points help in practical applications?

Yes, finding the location of turning points can be useful in practical applications such as engineering and physics. It can help in designing stable structures and predicting the behavior of systems under different conditions.
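A hedged numeric sketch of the energy argument in the thread above: the mass cancels out of (1/2)mv² = mg·Δh, so only the height gain follows from energy conservation; the x-coordinate still requires the track profile, which is not reproduced here.

```python
# With friction ignored, the cart climbs until (1/2) m v^2 = m g * dh,
# i.e. until it has risen dh = v^2 / (2 g) above the point where its
# speed was v.  The mass (233.7 kg in the problem) cancels; the
# x-coordinate has to be read off the (unknown here) track shape.
g = 9.81            # m/s^2
v = 16.5            # m/s at x = 0
dh = v**2 / (2 * g)
print(round(dh, 2))   # 13.88 (m of climb above the starting height)
```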
HuggingFaceTB/finemath
# Cuboid

The volume of the cuboid is 245 cm³. Each cuboid edge length can be expressed by an integer greater than 1 cm. What is the surface area of the cuboid?

Result: S = 238 cm²

#### Solution:

The volume factors as 245 = 5 · 7 · 7, and these are the only integers greater than 1 whose product is 245, so the edges must measure 5 cm, 7 cm and 7 cm. The surface area is therefore S = 2 · (5·7 + 7·7 + 7·5) = 2 · 119 = 238 cm².

#### To solve this example, you need the following knowledge from mathematics:

Tip: Our volume units converter will help you with conversion of volume units.

## Next similar examples:

1. Cuboid enlargement: By how many percent does the volume of a cuboid increase if every dimension increases by 30%?
2. Juice box 2: A box of juice has the shape of a cuboid. The internal dimensions are 15 cm, 20 cm and 32 cm. If the box stands on the smallest base, the juice level reaches 4 cm below the upper base. How much of the internal volume of the box does the juice fill? How many cm below the top of the
3. Digging: A pit is dug in the shape of a cuboid with dimensions 10 m × 8 m × 3 m. The earth taken out is spread evenly on a rectangular plot of land with dimensions 40 m × 30 m. What is the increase in the level of the plot?
4. Volume increase: By how many percent will the volume of water in a pool 50 m long and 15 m wide increase if the level rises from 1 m to 150 cm?
5. Cuboid: Find the cuboid that has the same surface area as the volume.
6. Cuboid - edges: The sum of all edges of a cuboid is 8 meters. The width is half the length, and the height is seven times the width. Determine the dimensions of the cuboid.
7. Cube volume: The cube has a surface of 384 cm². Calculate its volume.
8. Cone area and side: Calculate the surface area and volume of a rotating cone with a height of 1.25 dm and a side of 17.8 dm.
9. Cylinder surface area: The volume of a cylinder whose height is equal to the radius of the base is 678.5 dm³. Calculate its surface area.
10. Triangular prism: Calculate the surface area and volume of a triangular prism with a right-triangle base, if a = 3 cm, b = 4 cm, c = 5 cm and the height of the prism is h = 12 cm.
11.
Truncated cone: A truncated cone has base radii of 40 cm and 10 cm and a height of 25 cm. Calculate its surface area and volume.
12. Numbers: Write the smallest three-digit number which, when divided by 5 and by 7, gives a remainder of 2.
13. Toy cars: Pavel has a collection of toy cars. He wanted to regroup them, but when dividing them into groups of three, four, six, or eight, there was always one left over. Only when he formed groups of seven did the division come out even. How many toy cars are in the collection?
14. Divisible by 5: How many three-digit odd numbers divisible by 5 have the digit 3 in the tens place?
15. Primes 2: By which prime numbers is the number 2025 divisible?
16. Fit ball: What is the surface area of a Gymball (FIT ball) with a diameter of 65 cm?
17. Banknotes: In how many different ways can the cashier pay out €310 using only 50- and 20-euro banknotes? Find all solutions.
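The cuboid problem at the top of this page can also be settled by brute force. This short Python sketch (not part of the original page) searches all integer edge lengths greater than 1:

```python
V = 245  # cm^3

# Enumerate ordered triples a <= b <= c of integers > 1 with a*b*c = V.
solutions = []
for a in range(2, V + 1):
    if V % a:
        continue
    for b in range(a, V + 1):
        if (V // a) % b:
            continue
        c = V // (a * b)
        if c >= b:
            solutions.append((a, b, c))

print(solutions)                      # [(5, 7, 7)]
a, b, c = solutions[0]
print(2 * (a*b + b*c + c*a))          # 238  (cm^2)
```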
HuggingFaceTB/finemath
1. ## Help with logs please

We just started doing some work on logs and I can't work out this problem. I think it's quite easy, but I'm not great at these.

Two curves: y = a^x and y = 2b^x. Prove that the x-coordinate of the point of intersection is 1 / (log_2(a) - log_2(b)).

Help would be appreciated.

2. y = a^x and y = 2b^x

y = a^x ----(1)
y = 2(b^x) ----(2)

At their point of intersection, the two exponential curves have the same coordinates. So, y from (1) = y from (2):

a^x = 2(b^x) ----(3)

Since we are looking for the x-coordinate of the intersection, we solve for x. Take the logs (log to the base 10) of both sides of (3):

x*log(a) = log(2) + x*log(b)

Collect the x-terms:

x*log(a) - x*log(b) = log(2)
x[log(a) - log(b)] = log(2)

Divide both sides by [log(a) - log(b)]:

x = log(2) / [log(a) - log(b)] -----(4)

(4) is in base 10. Since you want x in base 2, we transform the logs in (4) into logs to the base 2. You still remember how to change bases for logarithms? log(base a) to log(base b):

log(base a)(N) = [log(base b)(N)] / [log(base b)(a)] ---***

So,
>>> log(base 10)(2) = [log(base 2)(2)] / [log(base 2)(10)] = 1 / [log(base 2)(10)]
>>> log(base 10)(a) = [log(base 2)(a)] / [log(base 2)(10)]
>>> log(base 10)(b) = [log(base 2)(b)] / [log(base 2)(10)]

Substitute those into (4):

x = log(base 10)(2) / [log(base 10)(a) - log(base 10)(b)]
x = {1 / log(base 2)(10)} / {[log(base 2)(a) / log(base 2)(10)] - [log(base 2)(b) / log(base 2)(10)]}

Combine the denominator into one fraction only:

x = {1 / log(base 2)(10)} / {[log(base 2)(a) - log(base 2)(b)] / log(base 2)(10)}

Invert the denominator and multiply it with the numerator:

x = {1 / log(base 2)(10)} * {log(base 2)(10) / [log(base 2)(a) - log(base 2)(b)]}

The log(base 2)(10) cancels out:

x = 1 / [log(base 2)(a) - log(base 2)(b)] ----answer.

===========

Or, from (3), we can take the logs to the base 2 directly----without passing through the logs to the base 10.
a^x = 2(b^x) ----(3) log(base 2)(a^x) = log(base 2)(2 b^x) x*log(base 2)(a) = log(base 2)(2) +x*log(base 2)(b) x*log(base 2)(a) = 1 +x*log(base 2)(b) x*log(base 2)(a) -x*log(base 2)(b) = 1 x[log(base 2)(a) -log(base 2)(b)] = 1 x = 1 / [log(base 2)(a) -log(base 2)(b)] ----answer. 3. Ah ok! Thanks very much for your help
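A quick numeric check of the final formula, with sample values a = 8 and b = 2 chosen here purely for illustration:

```python
import math

# Check x = 1 / (log2(a) - log2(b)) at the intersection of
# y = a**x and y = 2*b**x.
a, b = 8.0, 2.0
x = 1 / (math.log2(a) - math.log2(b))
print(x)                                   # 0.5
print(math.isclose(a**x, 2 * b**x))        # True: both curves meet here
```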
HuggingFaceTB/finemath
# Homogeneous Differential Equations ## Introduction Differential Equations are equations involving a function and one or more of its derivatives. For example, the differential equation below involves the function $y$ and its first derivative $\dfrac{dy}{dx}$. Let's consider an important real-world problem that probably won't make it into your calculus text book: A plague of feral caterpillars has started to attack the cabbages in Gus the snail's garden. Gus observes that the cabbage leaves are being eaten at the rate $\dfrac{d \text{cabbage}}{dt} = \dfrac{\text{cabbage}}{t}$, where $t$ is the time in days after the initial infestation. He knows that $6$ cabbage leaves were eaten on the first day after the initial infestation, and wants to know how many cabbage leaves will be eaten on the second day after the infestation. Gus has written down a homogeneous differential equation. We'll find out what this means in a minute. To work out how many cabbage leaves will be eaten on day $2$, he needs to solve his differential equation and plug in $t=2$. But how can he solve the equation? ## What are Homogeneous Differential Equations? A first order differential equation is homogeneous if it can be written in the form: $\dfrac{dy}{dx} = f(x,y),$ where the function $f(x,y)$ satisfies the condition that $f(kx,ky) = f(x,y)$ for all real constants $k$ and all $x,y \in \mathbb{R}$. We will eventually solve homogeneous equations using separation of variables, but we need to do some work to turn them into separable differential equations first. It might be useful to look back at the article on separable differential equations before reading on. You can also read some more about Gus' battle against the caterpillars there. For example, the differential equation $\dfrac{dy}{dx} = \dfrac{x^2}{y^2},$ where $f(x,y) = \dfrac{x^2}{y^2}$ is a homogeneous differential equation. 
For any real number $k$: $f(kx,ky) = \dfrac{(kx)^2}{(ky)^2} = \dfrac{k^2 x^2}{k^2 y^2} = \dfrac{x^2}{y^2} = f(x,y).$ ## Solving Homogeneous Differential Equations We need to transform these equations into separable differential equations. We begin by making the substitution $y = vx$. Differentiating gives $\dfrac{dy}{dx} = v\; \dfrac{dx}{dx} + x \; \dfrac{dv}{dx} = v + x \; \dfrac{dv}{dx}$ by the product rule. Let's see this in action on an example. ### Example Solve the differential equation $\dfrac{dy}{dx} = \dfrac{y(x + y)}{xy}$ First, check that it is homogeneous. Let $k$ be a real number. Then \begin{align*} \dfrac{ky(kx + ky)}{(kx)(ky)} = \dfrac{k^2(y(x + y))}{k^2 xy} = \dfrac{y(x + y)}{xy}. \end{align*} So, it is homogenous. Next, do the substitution $y = vx$ and $\dfrac{dy}{dx} = v + x \; \dfrac{dv}{dx}$: \begin{align*} v + x\;\dfrac{dv}{dx} &= \dfrac{xy + y^2}{xy}\\ &= \dfrac{x(vx) + (vx)^2}{x(vx)}\\ &= \dfrac{vx^2 + v^2 x^2 }{vx^2}\\ &= 1 + v \end{align*} Rearranging gives \begin{align*} v + x \; \dfrac{dv}{dx} &= 1 + v\\ x\; \dfrac{dv}{dx} &= 1, \end{align*} which is a separable differential equation! Now, apply separation of variables: Step 1: Separate the variables by moving all the terms in $v$, including $dv$, to one side of the equation and all the terms in $x$, including $dx$, to the other. $dv = \dfrac{1}{x} \; dx$ Step 2: Integrate both sides of the equation. \begin{align*} \int \;dv &= \int \dfrac{1}{x} \; dx\\ v &= \ln (x) + C \end{align*} Step 3: There's no need to simplify this equation. Now substitute $y = vx$, or $v = \dfrac{y}{x}$ back into the equation: $\dfrac{y}{x} = \ln (x) + C$ and multiply both sides by $x$ to make $y$ the subject of the equation: $y = x\ln (x) + Cx.$ There, we've solved our first homogeneous differential equation! Let's try another. ### Example Solve the differential equation $\dfrac{dy}{dx} = \dfrac{x(x - y)}{x^2}$ First, check that it is homogeneous. Let $k$ be a real number. 
Then \begin{align*} \dfrac{kx(kx - ky)}{(kx)^2} = \dfrac{k^2(x(x - y))}{k^2 x^2} = \dfrac{x(x - y)}{x^2}. \end{align*} So, it is homogeneous. Next, do the substitution $y = vx$ and $\dfrac{dy}{dx} = v + x \; \dfrac{dv}{dx}$ to convert it into a separable equation: \begin{align*} v + x\;\dfrac{dv}{dx} &= \dfrac{x^2 - xy}{x^2}\\ &= \dfrac{x^2 - x(vx)}{x^2}\\ &= \dfrac{x^2 - v x^2 }{x^2}\\ &= 1 - v \end{align*} Rearranging gives \begin{align*} v + x \; \dfrac{dv}{dx} &= 1 - v\\ x\; \dfrac{dv}{dx} &= 1 - 2v, \end{align*} which is a separable differential equation! Now, apply separation of variables: Step 1: Separate the variables by moving all the terms in $v$, including $dv$, to one side of the equation and all the terms in $x$, including $dx$, to the other. $\dfrac{1}{1 - 2v}\;dv = \dfrac{1}{x} \; dx$ Step 2: Integrate both sides of the equation. \begin{align*} \int \dfrac{1}{1 - 2v}\;dv &= \int \dfrac{1}{x} \; dx\\ -\dfrac{1}{2} \ln (1 - 2v) &= \ln (x) + C \end{align*} Step 3: Simplify this equation. First, write $C = \ln(k)$, and then take exponentials of both sides to get rid of the logs: \begin{align*} -\dfrac{1}{2} \ln (1 - 2v) &= \ln (x) + \ln(k)\\ -\dfrac{1}{2} \ln (1 - 2v) &= \ln (kx)\\ \ln (1 - 2v)^{-\dfrac{1}{2}} &= \ln (kx)\\ (1 - 2v)^{-\dfrac{1}{2}} &= kx\\ \dfrac{1}{\sqrt{1 - 2v}} &= kx \end{align*} Square both sides and take reciprocals: \begin{align*} \dfrac{1}{1 - 2v} &= k^2x^2\\ 1 - 2v &= \dfrac{1}{k^2x^2} \end{align*} Finally, plug in $v = \dfrac{y}{x}$ and simplify: \begin{align*} 1 - \dfrac{2y}{x} &= \dfrac{1}{k^2 x^2}\\ -\dfrac{2y}{x} &= \dfrac{1}{k^2 x^2} - 1\\ -2y &= \dfrac{1}{k^2 x} - x\\ y &= \dfrac{x}{2} - \dfrac{1}{2 k^2 x} \end{align*} We finally have a solution: $y = \dfrac{x}{2} - \dfrac{1}{2 k^2 x}$ I think it's time to deal with the caterpillars now. ### Solving Gus' Feral Caterpillar Problem If you recall, Gus' garden has been infested with caterpillars, and they are eating his cabbages.
He's modelled the situation using the differential equation: $\dfrac{d \text{cabbage}}{dt} = \dfrac{ \text{cabbage}}{t},$ where $t$ is the time in days after the initial infestation. He knows that $6$ cabbage leaves were eaten on the first day after the infestation, and wants to know how many cabbage leaves will be eaten on day $2$. He needs to solve the differential equation, work out the value of the constant of integration, and plug in $t = 2$. First, we need to check that Gus' equation is homogeneous. Let $k$ be a real number. Then $\dfrac{k\text{cabbage}}{kt} = \dfrac{\text{cabbage}}{t},$ so it certainly is! Next do the substitution $\text{cabbage} = vt$, so $\dfrac{d \text{cabbage}}{dt} = v + t \; \dfrac{dv}{dt}$: $v + t \; \dfrac{dv}{dt} = \dfrac{vt}{t} = v$ Subtracting $v$ from both sides gives: $t \; \dfrac{dv}{dt} = 0,$ which is a bit of a let-down, really. So either $t = 0$, or $\dfrac{dv}{dt} = 0$. Since time doesn't stand still, we must have $\dfrac{dv}{dt} = 0$. This differential equation is particularly easy to solve. It just says that $v = C$ for some constant $C$. Now, let's substitute back in for $v$. Since $\text{cabbage} = vt$, $v = \dfrac{\text{cabbage}}{t}$, and so the solution is: \begin{align*} \dfrac{\text{cabbage}}{t} &= C\\ \text{cabbage} &= Ct. \end{align*} Finally, plug in the initial condition to find the value of $C$. We plug in $t = 1$, as we know that $6$ leaves were eaten on day $1$: $\text{cabbage}(1) = 6 = C(1) = C$ So $C = 6$, and the particular solution to Gus' equation is $\text{cabbage}(t) = 6t.$ That will eventually be a lot of cabbage. No wonder Gus is so worried! On day $2$ after the infestation, the caterpillars will eat $\text{cabbage}(2) = 6(2) = 12 \text{ leaves}.$ Poor Gus!
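Both the first worked solution, $y = x\ln(x) + Cx$, and Gus' particular solution, $\text{cabbage}(t) = 6t$, can be spot-checked numerically. The following Python sketch (with an arbitrary sample constant $C$) compares a central-difference derivative against the right-hand side of each equation:

```python
import math

def dnum(f, x, h=1e-6):
    """Central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

C = 1.7                                          # arbitrary sample constant
y = lambda x: x * math.log(x) + C * x            # should solve y' = 1 + y/x
cabbage = lambda t: 6 * t                        # should solve c' = c/t

for x in (0.5, 1.0, 2.0, 5.0):
    assert math.isclose(dnum(y, x), 1 + y(x) / x, rel_tol=1e-6)
    assert math.isclose(dnum(cabbage, x), cabbage(x) / x, rel_tol=1e-6)
print("both solutions satisfy their equations")
```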
open-web-math/open-web-math
Home » Powers of 1 » 1 to the 96th Power

# 1 to the 96th Power

Welcome to 1 to the 96th power, our post about the mathematical operation exponentiation of 1 to the power of 96. If you have been looking for 1 to the ninety-sixth power, or if you have been wondering about 1 exponent 96, then you have also come to the right place. 🙂 The number 1 is called the base, and the number 96 is called the exponent. In this post we are going to answer the question what is 1 to the 96th power.

## What is 1 to the 96th Power?

1 to the 96th power is conventionally written as 1^96, with a superscript for the exponent, but the notation using the caret symbol ^ can also be seen frequently: 1^96. 1^96 stands for the mathematical operation exponentiation of one by the power of ninety-six. As the exponent is a positive integer, exponentiation means a repeated multiplication:

1 to the 96th power = 1 × 1 × … × 1 (96 factors) = 1

The exponent of the number 1, 96, also called index or power, denotes how many times to multiply the base (1). Thus, we can answer what is 1 to the 96th power as 1 to the power of 96 = 1^96 = 1. If you have come here in search of an exponentiation different to 1 to the ninety-sixth power, or if you like to experiment with bases and indices, then use our calculator above. To stick with 1 to the power of 96 as an example, insert 1 for the base and enter 96 as the index, also known as exponent or power. 1 to the 96th power is an exponentiation which belongs to the category powers of 1. Ahead is more info related to 1 to the 96 power, along with instructions how to use the search form, located in the sidebar or at the bottom, to obtain a number like 1^96.

## 1 to the Power of 96

Reading all of the above, you already know most about 1 to the power of 96, except for its inverse, which is discussed a bit further below in this section.
Using the aforementioned search form you can look up many numbers, including, for instance, 1 to the power 96, and you will be taken to a result page with relevant posts. Now, we would like to show you what the inverse operation of 1 to the 96th power, (1^96)^(1/96), is. The inverse is the 96th root of 1^96, and the math goes as follows:

(1^96)^(1/96) = 96th root of 1^96 = 96th root of 1 = 1

Because the index of 96 is a multiple of 2, which is even, in contrast to odd numbers, the operation produces two results, ±1; the positive value is the principal root. Make sure to understand that exponentiation is not commutative, which means that 1^96 ≠ 96^1, and also note that (1^96)^(1/96) ≠ 1^(-96), the root and reciprocal of 1^96, respectively. You already know what 1 to the power of 96 equals, but you may also be interested in learning what 1 to the negative 96th power stands for. Next is the summary of our content.

## One to the Ninety-sixth Power

You have reached the concluding section of one to the ninety-sixth power = 1^96. One to the ninety-sixth power is, for example, the same as 1 to the power 96 or 1 to the 96 power. Exponentiations like 1^96 make it easier to write multiplications and to conduct math operations as numbers get either big or small, such as in the case of decimal fractions with lots of trailing zeroes. If you have been looking for 1 power 96, what is 1 to the 96 power, 1 exponent 96 or 96 power of 1, then it's safe to assume that you have found your answer as well. If our explanations have been useful to you, then please hit the like button to let your friends know about our site and this post, 1 to the 96th power. And don't forget to bookmark us.

## Conclusion

In summary, if you like to learn more about exponentiation, the mathematical operation conducted in 1^96, then check out the articles which you can locate in the header menu of our site.
HuggingFaceTB/finemath
# Perpendicular Bisector of Triangle is Altitude of Medial Triangle

## Theorem

Let $\triangle ABC$ be a triangle. Let $\triangle DEF$ be the medial triangle of $\triangle ABC$. Let a perpendicular bisector be constructed on $AC$ at $F$ to intersect $DE$ at $P$. Then $FP$ is an altitude of $\triangle DEF$.

## Proof

Consider the triangles $\triangle ABC$ and $\triangle DBE$. By construction: $BA : BD = 2 : 1$ and $BC : BE = 2 : 1$. Hence $\triangle ABC$ and $\triangle DBE$ are similar, and so: $AC \parallel DE$. From Alternate Angles it follows that: $\angle AFP = \angle FPE$. By construction, $\angle AFP$ is a right angle. Thus $\angle FPE$ is also a right angle. That is, $FP$ is perpendicular to $DE$. By construction, $FP$ passes through the vertex $F$ of $\triangle DEF$. The result follows by definition of altitude of a triangle. $\blacksquare$
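A coordinate computation illustrates the theorem on a concrete triangle (the coordinates below are arbitrary choices, not part of the proof):

```python
# Triangle ABC, its medial triangle DEF, and the point P where the
# perpendicular erected on AC at its midpoint F meets line DE.
A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
mid = lambda P, Q: ((P[0] + Q[0]) / 2, (P[1] + Q[1]) / 2)
D, E, F = mid(A, B), mid(B, C), mid(A, C)

ac = (C[0] - A[0], C[1] - A[1])
perp = (ac[1], -ac[0])                 # direction perpendicular to AC
de = (E[0] - D[0], E[1] - D[1])        # direction of line DE

# Solve F + t*perp = D + s*de (a 2x2 linear system) for t by Cramer's rule.
det = perp[0] * (-de[1]) - (-de[0]) * perp[1]
rx, ry = D[0] - F[0], D[1] - F[1]
t = (rx * (-de[1]) - (-de[0]) * ry) / det
P = (F[0] + t * perp[0], F[1] + t * perp[1])

# FP . DE = 0, i.e. FP is perpendicular to DE: it is an altitude of DEF.
fp = (P[0] - F[0], P[1] - F[1])
print(abs(fp[0] * de[0] + fp[1] * de[1]) < 1e-9)   # True
```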
HuggingFaceTB/finemath
# Describing three-dimensional structures with spherical and Cartesian coordinates

The geometry of a geologic structure is expressed in terms of angles: a plane's orientation is described by its strike and dip, and a line's attitude is given as its trend and plunge. It is clear that directional information — the strike of a plane, and the trend of a line — is inherently "circular", as we measure these quantities with a compass. The dip and plunge are also circular, describing angles that sweep downward from horizontal. As pairs, the strike and dip or the trend and plunge define the three-dimensional geometry of a geologic structure. So, these data are expressed using spherical coordinates. Spherical coordinates are convenient for recording and representing linear and planar data, but calculations relating and operating on these data can often be better handled by converting to a different representation — what are called Cartesian (north, east, down) coordinates.

Let's start with a line that plunges to the northeast. This could represent a lineation, or it could represent the pole to a plane (dealing with poles to planes in Cartesian coordinates is simpler than with the planes themselves). First, let's take a look at this line from above, as though we were measuring its trend with a compass (left figure; 3D perspective view on right): At the top and right sides of this compass face, we see a familiar N and E, denoting north and east, respectively. At the center of the compass face, we see a D, showing the trace of the "down" axis (it points straight down, into the earth). The gray line is the linear feature of interest projected upward to the compass face (a line of zero plunge). Keep in mind, however, that the linear feature itself is actually plunging (see right figure).
The angles α, β, and γ represent the angles made by the linear feature with the North, East, and Down axes, respectively. While the spherical coordinates of the linear feature are described as angles with respect to north and horizontal, the Cartesian coordinates can be expressed as linear units that are trigonometric functions of those angles. First, we can relate the trend and plunge to the depicted angles as:

• α = trend
• β = (90 – trend)
• γ = (90 – plunge)

(the plunge is expressed with respect to the horizontal, whereas γ is the angle expressed with respect to the down axis, which is vertical). To define the Cartesian coordinates, we want to find out the vector components of the linear feature along the N, E, and D axes. We treat the linear feature as a unit vector (bold black line) with length = 1 and direction given by its trend and plunge. First, let's determine the D component of the vector:

D = cos(γ) = cos(90 – plunge) = sin(plunge).

Note from the figure above that the D component of the vector, treating the vector itself as the hypotenuse of a right triangle within the gray vertical plane, is just the sine of the plunge (recall that the vector is of length = 1). Similarly, we can express the projection of the vector onto a horizontal plane as the cosine of the plunge (i.e., the "adjacent" limb of the right triangle we just used). Using this horizontal projection of the plunging line (the gray vector on the diagram above), we can determine the vector components along the N and E axes:

N = cos(α) = cos(plunge)*cos(trend)
E = cos(β) = cos(plunge)*cos(90 – trend) = cos(plunge)*sin(trend).

Poles to planes

We generally describe the geometry of a plane using its strike and dip, but we could alternatively uniquely describe its orientation using the pole to the plane.
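Before moving on to poles, the line-component formulas just derived can be collected into a small function. This Python sketch (the function name is mine) takes trend and plunge in degrees and returns the (N, E, D) unit-vector components:

```python
import math

def ned_from_trend_plunge(trend_deg, plunge_deg):
    """Unit-vector (N, E, D) components of a line from its trend and
    plunge in degrees, using N = cos(p)cos(t), E = cos(p)sin(t),
    D = sin(p) as derived in the text."""
    t, p = math.radians(trend_deg), math.radians(plunge_deg)
    return (math.cos(p) * math.cos(t),   # N
            math.cos(p) * math.sin(t),   # E
            math.sin(p))                 # D

# A line trending due east (090) with a 30-degree plunge:
N, E, D = ned_from_trend_plunge(90, 30)
print(round(N, 6), round(E, 6), round(D, 6))   # 0.0 0.866025 0.5
```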
As noted above, in carrying out mathematical operations that relate multiple planes, it is often simpler to describe those planes using the trend and plunge of their poles as opposed to the strike and dip of the planes themselves. The trend of a plane's pole is the dip direction of that plane: 90° to the strike of the plane, pointing in the direction of dip. The plunge of a pole to a geologic plane is 90° to the dip of the plane. Using the same relationships described above, the N, E, D components of the unit vector defining the pole to a plane can be defined in terms of the plane's strike and dip as:

N = cos(α) = sin(strike)*sin(dip)
E = cos(β) = –cos(strike)*sin(dip)
D = cos(γ) = cos(dip).

Application: Determining rake

A linear structure appearing within a plane — for example, a slickenline marking the direction of fault slip — can be characterized by measuring its trend and plunge directly, or by measuring its rake within the plane. What the rake represents is the angle between the linear feature and a line defining the strike of the plane, that is, a line whose trend is the strike and whose plunge is zero. When the lineation's trend and plunge are measured, the rake can be determined easily using Cartesian coordinates and the concept of a dot product, or scalar product. The dot product multiplies the magnitudes of two vectors. The dot product of two vectors, U and V, each defined by N, E, D components, can be thought of as the magnitude (length) of U times the length of V projected onto U. This projection is achieved by multiplying the length of U times the length of V times the cosine of the angle between the vectors, θ. In our example, this angle is what we want to know: the rake. Expressed mathematically, the dot product U·V is given as:

U·V = |U||V|cos θ = U_N V_N + U_E V_E + U_D V_D.
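The pole formulas translate directly into code. One caution: which side of strike the plane dips toward depends on the convention in use, so treat the sign of the E component in this sketch as convention-dependent (it simply mirrors the text's formulas):

```python
import math

def pole_ned(strike_deg, dip_deg):
    """(N, E, D) components of the pole to a plane, using the formulas
    in the text: N = sin(s)sin(d), E = -cos(s)sin(d), D = cos(d)."""
    s, d = math.radians(strike_deg), math.radians(dip_deg)
    return (math.sin(s) * math.sin(d),    # N
            -math.cos(s) * math.sin(d),   # E
            math.cos(d))                  # D

# For a horizontal plane (dip = 0) the pole points straight down,
# whatever the strike:
print(pole_ned(0, 0))     # (0.0, -0.0, 1.0)
```

For any strike and dip the result is a unit vector, since sin²(d)(sin²s + cos²s) + cos²(d) = 1.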
To return to our original problem, we could determine the rake of a lineation by calculating the Cartesian components of vector U, the strike line, and vector V, the lineation. We can then determine the rake, θ, by rearranging the above equation:

θ = cos⁻¹[(U_N·V_N + U_E·V_E + U_D·V_D) / (|U||V|)].

Because we use unit vectors to describe the orientation information, the lengths of the vectors, |U| and |V|, are both 1.

Application: Determining apparent dip

The dip of a plane is the maximum slope of that plane, and the dip direction is always 90° to the strike of the plane. If you were to construct a geologic cross section in a plane perpendicular to strike (i.e., in the dip direction), then the beds would appear in that cross section at their dip angle. However, if the cross section is drawn in any other plane, not perpendicular to strike, then the beds appear in the cross section with what is called an apparent dip. The intersection of two planes — in this case, a bedding plane and the plane depicting the cross section — is a line. This line can of course be described with a trend and plunge. In our example, the trend of this line is equal to the strike of the cross section plane, and the plunge of the line gives the apparent dip of the bedding plane. The apparent dip is always less than the true dip, but we need to determine what that value is in order to depict the bedding planes in the cross section.

Converting the two planes of interest — again, the bedding plane and the cross section plane — into vectors described by Cartesian coordinates provides a straightforward approach for determining apparent dip. The first step is to determine the poles to our two planes. The second step is to express the pole vectors in Cartesian coordinates. (Alternatively — and this is how we will proceed — we can use the formulae above that give the N, E, D components directly in terms of strike and dip.) The third step involves the calculation of a cross product, or vector product.
The output of the cross product defines the N, E, D components of a vector that is perpendicular to the two input vectors. Because the two input vectors are poles to planes, they are perpendicular to those planes. Therefore, a third vector that is perpendicular to both poles lies within (is contained by) both planes. The only vector to lie within both planes defines the intersection between those planes. As discussed above, it is this vector whose plunge describes the apparent dip of a given bedding plane within a given cross-section plane.

The cross product can be visualized using your right hand. Point the fingers of your right hand in the direction of the first vector, below called U. Now curl your fingers in the direction pointing to the other vector, V, within the plane defined by the two vectors (in this case, a horizontal plane). The cross product, U×V, is the vector in the direction your thumb points. Note that U×V = –V×U. Specifically for vectors in the N, E, D coordinate system, the [N, E, D] components of the cross product are defined as:

U×V = [U_E·V_D – U_D·V_E , U_D·V_N – U_N·V_D , U_N·V_E – U_E·V_N].

With this vector's Cartesian components calculated, the final step is to convert the N, E, D components of a unit vector along the intersection line back to spherical coordinates to give its plunge, and hence the apparent dip of the bedding plane in the cross-section plane. This is done using some straightforward trigonometry:

Trend = tan⁻¹(E/N)
Plunge = tan⁻¹(D/√(E² + N²))

(Consult the diagram on Page 2 to see these relationships.) Note that if the cross-product vector defines the pole to a plane, the strike and dip of the plane can be easily determined.
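Putting the steps together, here is a sketch of the apparent-dip calculation in Python, assuming the pole formulas given earlier and treating a vertical cross-section plane of a given strike as a plane dipping 90° (function names are my own):

```python
import math

def cross_ned(u, v):
    """Cross product in the right-handed (N, E, D) system, matching the formula above."""
    un, ue, ud = u
    vn, ve, vd = v
    return (ue * vd - ud * ve, ud * vn - un * vd, un * ve - ue * vn)

def apparent_dip(strike, dip, section_strike):
    """Apparent dip (degrees) of a plane within a vertical cross section.

    Poles are computed from strike and dip with the formulas in the text;
    the cross section is modelled as a plane of the given strike dipping 90 degrees.
    """
    def pole(s_deg, d_deg):
        s, d = math.radians(s_deg), math.radians(d_deg)
        return (math.sin(s) * math.sin(d), -math.cos(s) * math.sin(d), math.cos(d))

    # intersection line = cross product of the two poles
    n, e, dd = cross_ned(pole(strike, dip), pole(section_strike, 90.0))
    # plunge of the intersection line = apparent dip
    return abs(math.degrees(math.atan2(dd, math.hypot(n, e))))
```

For a bed striking due north and dipping 30°, a cross section perpendicular to strike recovers the full 30° dip, while a section parallel to strike shows an apparent dip of 0°.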
# derivative of sinx cosx tanx

The derivative of a function shows how the function changes when one of its variables changes. If y = f(x) is a function and x changes by Δx, then y changes by Δy, and the derivative is the limit of Δy/Δx as Δx → 0. The most commonly needed trigonometric derivatives are those of sin x, cos x and tan x:

d/dx sin x = cos x
d/dx cos x = −sin x
d/dx tan x = sec²x

These must be memorized, but they can also be derived. The derivative of sin x follows from first principles using the limit of sin x/x as x approaches 0 (which equals 1 when x is measured in radians); the derivative of cos x can be derived in the same way, or more easily from the result for sin x. The derivative of tan x then follows from the quotient rule applied to tan x = sin x/cos x:

d/dx tan x = (cos x·cos x − sin x·(−sin x))/cos²x = (cos²x + sin²x)/cos²x = 1/cos²x = sec²x.

Products and compositions are handled with the product and chain rules. For y = sin x cos x, the product rule gives dy/dx = cos²x − sin²x = cos 2x. For y = sin²x, the chain rule gives dy/dx = 2 sin x cos x = sin 2x. And y = (sin x cos x)/tan x simplifies to cos²x, whose derivative is −2 sin x cos x = −sin 2x. A closely related integral is ∫tan x dx = ∫(sin x/cos x) dx = −ln|cos x| + C.

Worked example: for y = (sin x + cos x)/(sin x − cos x), dividing the numerator and denominator by cos x gives y = (tan x + 1)/(tan x − 1) = −(tan x + tan 45°)/(1 − tan 45° tan x) = −tan(x + 45°), so dy/dx = −sec²(x + 45°).
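The standard derivatives of sin, cos and tan quoted above can be checked numerically with a central finite difference; a small Python sketch (the test point and step size h are my own choices):

```python
import math

def numerical_derivative(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7  # an arbitrary test point, in radians
assert abs(numerical_derivative(math.sin, x) - math.cos(x)) < 1e-8          # (sin x)' = cos x
assert abs(numerical_derivative(math.cos, x) + math.sin(x)) < 1e-8          # (cos x)' = -sin x
assert abs(numerical_derivative(math.tan, x) - 1 / math.cos(x) ** 2) < 1e-8 # (tan x)' = sec^2 x
```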
# Show that $\lim_{x \to \infty} \left( \sqrt{x^2 + x + 1} - x \right) \neq \lim_{x \to \infty} \left( \sqrt{x^2 + 1} - x \right)$ - Mathematics

Show that $\lim_{x \to \infty} \left( \sqrt{x^2 + x + 1} - x \right) \neq \lim_{x \to \infty} \left( \sqrt{x^2 + 1} - x \right)$

#### Solution

LHS: $\lim_{x \to \infty} \left( \sqrt{x^2 + x + 1} - x \right)$ $\left[ \text{of the form } \infty - \infty \right]$

Rationalising the numerator:

$\lim_{x \to \infty} \left[ \frac{\left( \sqrt{x^2 + x + 1} - x \right) \left( \sqrt{x^2 + x + 1} + x \right)}{\sqrt{x^2 + x + 1} + x} \right] = \lim_{x \to \infty} \left[ \frac{\left( x^2 + x + 1 \right) - x^2}{\sqrt{x^2 + x + 1} + x} \right] = \lim_{x \to \infty} \left[ \frac{x + 1}{\sqrt{x^2 + x + 1} + x} \right]$

Dividing the numerator and the denominator by $x$:

$\lim_{x \to \infty} \left[ \frac{1 + \frac{1}{x}}{\sqrt{1 + \frac{1}{x} + \frac{1}{x^2}} + 1} \right]$

When $x \to \infty$, $\frac{1}{x} \to 0$, so the limit equals $\frac{1}{\sqrt{1} + 1} = \frac{1}{2}$.

RHS: $\lim_{x \to \infty} \left( \sqrt{x^2 + 1} - x \right)$ $\left[ \text{also of the form } \infty - \infty \right]$

Rationalising the numerator:

$\lim_{x \to \infty} \left[ \frac{\left( \sqrt{x^2 + 1} - x \right) \left( \sqrt{x^2 + 1} + x \right)}{\sqrt{x^2 + 1} + x} \right] = \lim_{x \to \infty} \left[ \frac{x^2 + 1 - x^2}{\sqrt{x^2 + 1} + x} \right] = \lim_{x \to \infty} \left[ \frac{1}{\sqrt{x^2 + 1} + x} \right]$

As $x \to \infty$, the denominator $\sqrt{x^2 + 1} + x \to \infty$, so the limit is $0$.

$\therefore \lim_{x \to \infty} \left( \sqrt{x^2 + x + 1} - x \right) \neq \lim_{x \to \infty} \left( \sqrt{x^2 + 1} - x \right)$
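As a quick numerical sanity check, evaluating both expressions at a large value of x (my own probe value) shows them approaching 1/2 and 0 respectively:

```python
import math

x = 1e6  # a large value of x, to probe both limits numerically
lhs = math.sqrt(x**2 + x + 1) - x   # should be close to 1/2
rhs = math.sqrt(x**2 + 1) - x       # should be close to 0
# direct evaluation suffers some floating-point cancellation, so compare loosely
```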
#### APPEARS IN RD Sharma Class 11 Mathematics Textbook Chapter 29 Limits Exercise 29.6 | Q 22 | Page 39
Calculate its distance x 'as the crow flies' from the starting point.

a² = c² − q² + b² − 2bq + q² = c² + b² − 2bq

## There is also an important relationship between the three sides of a general triangle and the cosine of one of its angles

Figure 30 shows the path of a ship that sailed 30 km due east, then turned through 120° and sailed a further 40 km.

One of the interior angles of a triangle is 120°. If the sides adjacent to this angle are of length 4 m and 5 m, use the cosine rule to find the length of the side opposite the given angle.

## 4.2 Trigonometric identities

A great deal of applicable mathematics is concerned with equations. It is generally the case that these equations are only true when the variables they contain take certain specific values; for example, 4x = 4 is only true when x = 1. However, we sometimes write down equations that are true for all values of the variables, such as (x + 1)² = x² + 2x + 1. Equations of this latter type, i.e. ones that are true irrespective of the values of the variables they contain, are properly called identities. There are a great many trigonometric identities, i.e. relationships between trigonometric functions that are independent of the specific values of the variables they involve. These have various applications and it is useful to have a list of them for easy reference. The most important are given below – you have already met the first seven (in slightly different forms) earlier in the module and others occur at various points throughout FLAP. Note that α and β may represent any numbers or angular values, unless their values are restricted by the definitions of the functions concerned.

The abbreviations asin, acos and atan, or alternatively sin⁻¹, cos⁻¹ and tan⁻¹, are sometimes used for the inverse trigonometric functions.

Pythagoras's theorem states that the square of the hypotenuse in a right-angled triangle is equal to the sum of the squares of the other two sides.

The angles 180° and 90° correspond to a rotation through half and one-quarter of a circle, respectively. An angle of 90° is known as a right angle. A line at 90° to a given line (or surface) is said to be perpendicular or normal to the original line (or surface).

Since 2π = 6.2832 (to four decimal places) it follows that 1 radian = 57.30°, as claimed earlier. Table 1 gives some angles measured in both degrees and radians. As you can see from this table, many commonly-used angles are simple fractions or multiples of π radians, but note that angles expressed in radians are not always expressed in terms of π. Do not make the common mistake of thinking that π is itself an angular unit; it is simply a number.

However, the area of the large square can also be found by adding the area of the smaller square, h², to the areas of the four corner triangles. Each triangle has an area xy/2 (each is half a rectangle of sides x and y), so the area of the large square is h² + 2xy.

The study of right-angled triangles is called trigonometry, and the three distinct ratios of pairs of sides are collectively known as the trigonometric ratios. They are called the sine, cosine and tangent of the angle θ – abbreviated to sin, cos and tan, respectively – and defined as follows:

One of the main reasons why trigonometric ratios are of interest to physicists is that they make it possible to determine the lengths of all the sides of a right-angled triangle from a knowledge of a single side length and one interior angle (other than the right angle).

The ratio definitions of the sine, cosine and tangent (i.e. Equations 5, 6 and 7) only make sense for angles in the range 0 to π/2 radians, since they involve the sides of a right-angled triangle. In this subsection we will define three trigonometric functions, also known as sine, cosine and tangent, and denoted sin(θ), cos(θ) and tan(θ), respectively. These functions will enable us to attach a meaning to the sine and cosine of any angle, and to the tangent of any angle that is not an odd multiple of π/2. Like the trigonometric ratios that they generalize, these trigonometric functions are of great importance in physics.

Of course, it is not only the signs of the trigonometric functions that change as θ increases or decreases and P moves around the circle in Figure 16. The values of x and y, and consequently of sin(θ), cos(θ) and tan(θ), also vary.

In the simplest such motion, simple harmonic motion, the changing position x of a mass oscillating on the end of a spring may be represented by x = A cos(ωt + φ). Despite appearances, none of the quantities inside the bracket is an angle (though they may be given angular interpretations); t is the time and is measured in seconds, ω is a constant known as the angular frequency, which is related to the properties of the mass and spring and is measured in hertz (1 Hz = 1 s⁻¹), and φ, the phase constant, is a number, usually in the range 0 to 2π.

Clearly, adding a positive constant, π/2, to the argument of the function has the effect of shifting the graph to the left by π/2. In crude terms, the addition has boosted the argument and makes everything happen earlier (i.e. further to the left).

If cos(θ) = x, where 0 ≤ θ ≤ π and −1 ≤ x ≤ 1, then arccos(x) = θ (Eqn 26b)
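The cosine-rule exercise above (included angle 120°, adjacent sides 4 m and 5 m) can be checked numerically; a minimal Python sketch (variable names are my own):

```python
import math

# Cosine rule: a^2 = b^2 + c^2 - 2*b*c*cos(A)
b, c = 4.0, 5.0           # sides adjacent to the known angle, in metres
A = math.radians(120.0)   # the included angle
a = math.sqrt(b**2 + c**2 - 2 * b * c * math.cos(A))
# cos(120 deg) = -1/2, so a^2 = 16 + 25 + 20 = 61 and a = sqrt(61) ~ 7.81 m
```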
# Quick Answer: How Do You Find The Shortest Path?

## How do you find the path between two nodes on a graph?

Approach: Either Breadth First Search (BFS) or Depth First Search (DFS) can be used to find a path between two vertices. Take the first vertex as the source in BFS (or DFS) and follow the standard BFS (or DFS). If the second vertex is found in our traversal, then return true; else return false.

## How do you find the shortest path in a weighted graph?

The shortest path problem is about finding a path between vertices in a graph such that the total sum of the edge weights is minimum. This problem could be solved easily using BFS if all edge weights were equal, but here weights can take any value.

## Does BFS always give shortest path?

Technically, Breadth-first search (BFS) by itself does not let you find the shortest path, simply because BFS is not looking for a shortest path: BFS describes a strategy for searching a graph, but it does not say that you must search for anything in particular.

## Does BFS always find shortest path?

In an unweighted graph, the one-edge path to any adjacent node is always a shortest path. Therefore, any unvisited node discovered through already-discovered nodes is reached along a shortest path. … Hence, BFS discovers every node via a shortest path (in terms of edge count).

## How do you find the shortest path between two points?

One way of finding the shortest path between two locations is Dijkstra’s algorithm (DIKE-stra). In fact we will see that this algorithm does one better, and can actually find the shortest path from the starting location to any other location, not just the desired destination.

## Which is best shortest path algorithm?

Dijkstra finds the shortest path from only one vertex; Floyd–Warshall finds it between all of them. Use the Floyd–Warshall algorithm if you want to find the shortest path between all pairs of vertices; note that it has a (far) higher running time than Dijkstra’s algorithm.

## What is Dijkstra shortest path algorithm?
Dijkstra’s algorithm (or Dijkstra’s Shortest Path First algorithm, SPF algorithm) is an algorithm for finding the shortest paths between nodes in a graph, which may represent, for example, road networks. It was conceived by computer scientist Edsger W. Dijkstra in 1956 and published three years later.

## Does Dijkstra visit all nodes?

Dijkstra’s algorithm finds the shortest path from a source vertex to all the other vertices in a graph, so yes. If you modify the algorithm to find the shortest path to a specific destination you probably won’t need to visit all vertices.

## What is the path between two points called?

Distance vs. displacement: distance is the length of the path between two points. Displacement is the direction from the starting point and the length of a straight line from the starting point to the ending point.

## Is Dijkstra BFS or DFS?

If you think BFS is about expanding nodes in order of their number of hops from the source vertex, then Dijkstra’s is not really a BFS algorithm. … In fact, when you run Dijkstra’s on an unweighted graph, it will always visit nodes in an order consistent with BFS, and likely inconsistent with what DFS would do.

## What are different algorithms available to find shortest path?

The most important algorithms for solving this problem are: Dijkstra’s algorithm solves the single-source shortest path problem with non-negative edge weights. Bellman–Ford algorithm solves the single-source problem if edge weights may be negative.

## Can we use DFS to find shortest path?

No, you cannot use DFS to find the shortest path in an unweighted graph. That said, it is not the case that finding the shortest path between two nodes is exclusively solved by BFS.

## What is single source shortest path?

The single-source shortest path algorithm (for arbitrary positive or negative weights), also known as the Bellman–Ford algorithm, is used to find the minimum distance from a source vertex to any other vertex.
… At first it finds those distances which have only one edge in the path.

## What is the procedure to calculate the shortest path?

Dijkstra’s Algorithm:

1. Mark the ending vertex with a distance of zero. Designate this vertex as current.
2. Find all vertices leading to the current vertex. Calculate their distances to the end. …
3. Mark the current vertex as visited. …
4. Mark the vertex with the smallest distance as current, and repeat from step 2.

## How do you find the shortest path between two vertices?

Algorithm to find the shortest path between two vertices in an undirected graph:

1. Input the graph.
2. Input the source and destination nodes.
3. Find the paths between the source and the destination nodes.
4. Find the number of edges in all the paths and return the path having the minimum number of edges.

## What is a path on a graph?

The length of a path is the number of edges it contains. For a simple graph, a path is equivalent to a trail and is completely specified by an ordered sequence of vertices. For a simple graph G, a Hamiltonian path is a path that includes all vertices of G (and whose endpoints are not adjacent).

## Which is better BFS or DFS?

BFS is better when the target is closer to the source. DFS is better when the target is far from the source. As BFS considers all neighbours, it is not suitable for the decision trees used in puzzle games. DFS is more suitable for decision trees.

## What do you mean by shortest path?

Definition: The problem of finding the shortest path in a graph from one vertex to another. “Shortest” may be the least number of edges, least total weight, etc. … See also Dijkstra’s algorithm, Bellman–Ford algorithm, DAG shortest paths, all pairs shortest path, single-source shortest-path problem, kth shortest path.

## Is Dijkstra a BFS?

You can implement Dijkstra’s algorithm as BFS with a priority queue (though it’s not the only implementation). Dijkstra’s algorithm relies on the property that the shortest path from s to t is also the shortest path to any of the vertices along the path.
This is exactly what BFS does.

## How do you solve a single source shortest path problem?

Dijkstra’s algorithm solves the single-source shortest-paths problem on a directed weighted graph G = (V, E), where all the edges are non-negative (i.e., w(u, v) ≥ 0 for each edge (u, v) ∈ E). In the following algorithm, we will use one function Extract-Min(), which extracts the node with the smallest key.
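The answers above describe Dijkstra’s algorithm as “BFS with a priority queue” built around an Extract-Min operation. A minimal Python sketch using the standard-library heapq (the graph representation and names are my own):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest distances on a graph with non-negative edge weights.

    graph: dict mapping vertex -> list of (neighbour, weight) pairs.
    Returns a dict of shortest distances from source to every reachable vertex.
    """
    dist = {source: 0}
    pq = [(0, source)]           # the priority queue plays the role of Extract-Min
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)  # Extract-Min: closest unsettled vertex
        if u in visited:
            continue              # stale queue entry; this vertex is already settled
        visited.add(u)
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd      # relax the edge (u, v)
                heapq.heappush(pq, (nd, v))
    return dist
```

For example, with g = {'a': [('b', 1), ('c', 4)], 'b': [('c', 2)], 'c': []}, dijkstra(g, 'a') returns {'a': 0, 'b': 1, 'c': 3}: the route a→b→c (total weight 3) beats the direct edge a→c (weight 4).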
# Difference between revisions of "Steady-state nonisothermal reactor design - 2013"

Class date(s): 18 March

Download video: Link (plays in Google Chrome) [399 M]
Download video: Link (plays in Google Chrome) [266 M]
Download video: Link (plays in Google Chrome) [392 M]
Download video: Link (plays in Google Chrome) [342 M]
Download video: Link (plays in Google Chrome) [355 M]
Download video: Link (plays in Google Chrome) [392 M]
Download video: Link (plays in Google Chrome) [331 M]
Download video: Link (plays in Google Chrome) [556 M]
Download video: Link (plays in Google Chrome) [180 M]

## Textbook references

• F2011: Chapter 11 and 12
• F2006: Chapter 8

## Suggested problems

| F2011 | F2006 |
| --- | --- |
| Problem 12-6 | Problem 8-5 |
| Problem 12-15 (a) | Problem 8-16 (a) |
| Problem 12-16 (a) | Problem 8-18 (a) |
| Problem 12-24 (set up equations) | Problem 8-26 (set up equations) |

## Source code

### Example on 21 March (class 10C)

pfr_example.m

```
function [d_depnt__d_indep, CA] = pfr_example(indep, depnt, param)

% Dynamic balance for the reactor
%
% indep: the independent ODE variable, such as time or length
% depnt: a vector of dependent variables
%
% Returns d(depnt)/d(indep) = a vector of ODEs

% Assign some variables for convenience of notation
X = depnt(1);

% Constants. Make sure to use SI for consistency
FT0 = param.FT0;    % mol/s. Note how we use the "struct" variable to access the total flow
FA0 = 0.9 * FT0;    % mol/s
T1 = 360;           % K
T2 = 333;           % K
E = 65700;          % J/mol
R = 8.314;          % J/(mol.K)
HR = -6900;         % J/(mol of n-butane)
CA0 = 9300;         % mol/m^3
k_1 = 31.1/3600;    % 1/s (was 1/hr originally)
K_Cbase = 3.03;     % [-]

% Equations
T = 43.3*X + param.T_0;                   % derived in class, from the heat balance
k1 = k_1 * exp(E/R*(1/T1 - 1/T));         % temperature dependent rate constant
KC = K_Cbase * exp(HR/R*(1/T2 - 1/T));    % temperature dependent equilibrium constant
k1R = k1 / KC;                            % reverse reaction rate constant

CA = CA0 * (1 - X);    % from the stoichiometry (differs from Fogler, but we get same result)
CB = CA0 * (0 + X);

r1A = -k1 * CA;        % rate expressions derived in class
r1B = -r1A;
r2B = -k1R * CB;
r2A = -r2B;
rA = r1A + r2A;        % total reaction rate for species A

n = numel(depnt);
d_depnt__d_indep = zeros(n,1);
d_depnt__d_indep(1) = -rA / FA0;
```

driver_ode.m

```
% Integrate the ODE
% -----------------

% The independent variable always requires an initial and final value:
indep_start = 0.0;    % m^3
indep_final = 5.0;    % m^3

% Set initial condition(s): for integrating variables (dependent variables)
X_depnt_zero = 0.0;   % i.e. X(V=0) = 0.0

% Other parameters. The "param" is just a variable in MATLAB.
% It is called a structured variable, or just a "struct".
% It can have an arbitrary number of sub-variables attached to it.
% In this example we have two of them.
param.T_0 = 330;            % feed temperature [K]
param.FT0 = 163000/3600;    % mol/s (was kmol/hour originally)

% Integrate the ODE(s):
[indep, depnt] = ode45(@pfr_example, [indep_start, indep_final], ...
                       [X_depnt_zero], odeset(), param);

% Deal with the integrated output to show interesting plots
X = depnt(:,1);
T = 43.3.*X + param.T_0;              % what was the temperature profile?
rA_over_FA0 = zeros(numel(X), 1);     % what was the rate profile?
C_A = zeros(numel(X), 1);             % what was the concentration profile?
for i = 1:numel(X)
    [rA_over_FA0(i), C_A(i)] = pfr_example([], X(i), param);
end
% the above can be done more efficiently in a single line, but the code would be too confusing

% Plot the results
f = figure;
set(f, 'Color', [1,1,1])

subplot(2, 2, 1)
plot(indep, X); grid
xlabel('Volume, V [kg]', 'FontWeight', 'bold')
ylabel('Conversion, X [-]', 'FontWeight', 'bold')

subplot(2, 2, 2)
plot(indep, T); grid
xlabel('Volume, V [kg]', 'FontWeight', 'bold')
ylabel('Temperature profile [K]', 'FontWeight', 'bold')

subplot(2, 2, 3)
plot(indep, C_A); grid
xlabel('Volume, V [kg]', 'FontWeight', 'bold')
ylabel('Concentration C_A profile [K]', 'FontWeight', 'bold')

subplot(2, 2, 4)
plot(indep, rA_over_FA0); grid
xlabel('Volume, V [kg]', 'FontWeight', 'bold')
ylabel('(Reaction rate/FA0) profile [1/m^3]', 'FontWeight', 'bold')

% Now plot one of the most important figures we saw earlier in the course:
% F_A0 / (-rA) on the y-axis, against conversion X on the x-axis. This plot
% is used to size various reactors.

% The material leaves the reactor at equilibrium; let's not plot
% that far out, because it distorts the scale. So plot to 95% of
% equilibrium.
f = figure;
set(f, 'Color', [1,1,1])
index = find(X > 0.95 * max(X), 1);
plot(X(1:index), 1./rA_over_FA0(1:index)); grid
xlabel('Conversion, X [-]', 'FontWeight', 'bold')
ylabel('FA0/(-r_A) profile [m^3]', 'FontWeight', 'bold')

% Updated code to find the optimum inlet temperature:
temperatures = 300:3:400;
conv_at_exit = zeros(size(temperatures));
i = 1;
for T = temperatures
    param.T_0 = T;    % feed temperature [K]
    [indep, depnt] = ode45(@pfr_example, [indep_start, indep_final], ...
                           [X_depnt_zero], odeset(), param);
    conv_at_exit(i) = depnt(end, 1);
    i = i + 1;
end

plot(temperatures, conv_at_exit); grid;
xlabel('Inlet temperature')
ylabel('Conversion at reactor exit')
```
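For a rough cross-check of the MATLAB model outside MATLAB, the same mole and heat balances can be integrated with a fixed-step Euler loop in plain Python (the step count and function names are my own; ode45's adaptive stepping is more accurate, but the exit conversion should be similar since the stream leaves close to equilibrium):

```python
import math

def pfr_rate(X, T_0):
    """dX/dV for the adiabatic reversible reaction, using the constants above."""
    FT0 = 163000 / 3600.0         # mol/s
    FA0 = 0.9 * FT0               # mol/s
    T1, T2 = 360.0, 333.0         # K
    E, R = 65700.0, 8.314         # J/mol, J/(mol.K)
    HR = -6900.0                  # J/mol of n-butane
    CA0 = 9300.0                  # mol/m^3
    k_1, K_Cbase = 31.1 / 3600.0, 3.03

    T = 43.3 * X + T_0                                 # heat balance
    k1 = k_1 * math.exp(E / R * (1 / T1 - 1 / T))      # forward rate constant
    KC = K_Cbase * math.exp(HR / R * (1 / T2 - 1 / T)) # equilibrium constant
    rA = -k1 * CA0 * (1 - X) + (k1 / KC) * CA0 * X     # net rate of A
    return -rA / FA0

def integrate_euler(T_0=330.0, V_final=5.0, n=20000):
    """Fixed-step Euler integration of dX/dV from V = 0 to V_final."""
    X, dV = 0.0, V_final / n
    for _ in range(n):
        X += pfr_rate(X, T_0) * dV
    return X

conversion = integrate_euler()   # exit conversion for a 330 K feed
```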
6.4: Inverse Trigonometric Functions

Learning Objectives

In this section, you will:

• Understand and use the inverse sine, cosine, and tangent functions.
• Find the exact value of expressions involving the inverse sine, cosine, and tangent functions.
• Use a calculator to evaluate inverse trigonometric functions.
• Find exact values of composite functions with inverse trigonometric functions.

For any right triangle, given one other angle and the length of one side, we can figure out what the other angles and sides are. But what if we are given only two sides of a right triangle? We need a procedure that leads us from a ratio of sides to an angle. This is where the notion of an inverse to a trigonometric function comes into play. In this section, we will explore the inverse trigonometric functions.

Understanding and Using the Inverse Sine, Cosine, and Tangent Functions

In order to use inverse trigonometric functions, we need to understand that an inverse trigonometric function "undoes" what the original trigonometric function "does," as is the case with any other function and its inverse. In other words, the domain of the inverse function is the range of the original function, and vice versa, as summarized in Figure 1.

Figure 1

For example, if $f(x)=\sin x$, then we would write $f^{-1}(x)=\sin^{-1}x$. Be aware that $\sin^{-1}x$ does not mean $\frac{1}{\sin x}$. The following examples illustrate the inverse trigonometric functions:

• Since $\sin\left(\frac{\pi}{6}\right)=\frac{1}{2}$, then $\frac{\pi}{6}=\sin^{-1}\left(\frac{1}{2}\right)$.
• Since $\cos(\pi)=-1$, then $\pi=\cos^{-1}(-1)$.
• Since $\tan\left(\frac{\pi}{4}\right)=1$, then $\frac{\pi}{4}=\tan^{-1}(1)$.

In previous sections, we evaluated the trigonometric functions at various angles, but at times we need to know what angle would yield a specific sine, cosine, or tangent value. For this, we need inverse functions. Recall that, for a one-to-one function, if $f(a)=b$, then an inverse function would satisfy $f^{-1}(b)=a$. Bear in mind that the sine, cosine, and tangent functions are not one-to-one functions. The graph of each function would fail the horizontal line test. In fact, no periodic function can be one-to-one, because each output in its range corresponds to at least one input in every period, and there are an infinite number of periods. As with other functions that are not one-to-one, we will need to restrict the domain of each function to yield a new function that is one-to-one. We choose a domain for each function that includes the number 0.

Figure 2 shows the graph of the sine function limited to $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$ and the graph of the cosine function limited to $[0,\pi]$.

Figure 2 (a) Sine function on a restricted domain of $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$; (b) Cosine function on a restricted domain of $[0,\pi]$

Figure 3 shows the graph of the tangent function limited to $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$.

Figure 3 Tangent function on a restricted domain of $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$

These conventional choices for the restricted domain are somewhat arbitrary, but they have important, helpful characteristics. Each domain includes the origin and some positive values, and most importantly, each results in a one-to-one function that is invertible. The conventional choice for the restricted domain of the tangent function also has the useful property that it extends from one vertical asymptote to the next, instead of being divided into two parts by an asymptote.

On these restricted domains, we can define the inverse trigonometric functions.

• $y=\sin^{-1}x$ has domain $[-1,1]$ and range $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$
• $y=\cos^{-1}x$ has domain $[-1,1]$ and range $[0,\pi]$
• $y=\tan^{-1}x$ has domain $(-\infty,\infty)$ and range $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$

The graphs of the inverse functions are shown in Figure 4, Figure 5, and Figure 6. Notice that the output of each of these inverse functions is a number, an angle in radian measure. We see that $\sin^{-1}x$ has domain $[-1,1]$ and range $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$, $\cos^{-1}x$ has domain $[-1,1]$ and range $[0,\pi]$, and $\tan^{-1}x$ has domain of all real numbers and range $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$. To find the domain and range of inverse trigonometric functions, switch the domain and range of the original functions. Each graph of the inverse trigonometric function is a reflection of the graph of the original function about the line $y=x$.

Figure 4 The sine function and inverse sine (or arcsine) function

Figure 5 The cosine function and inverse cosine (or arccosine) function

Figure 6 The tangent function and inverse tangent (or arctangent) function

Relations for Inverse Sine, Cosine, and Tangent Functions

For angles in the interval $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$, if $\sin y=x$, then $\sin^{-1}x=y$.

For angles in the interval $[0,\pi]$, if $\cos y=x$, then $\cos^{-1}x=y$.

For angles in the interval $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$, if $\tan y=x$, then $\tan^{-1}x=y$.

Example 1 Writing a Relation for an Inverse Function

Given $\sin\left(\frac{5\pi}{12}\right)\approx 0.96593$, write a relation involving the inverse sine.

Solution

Use the relation for the inverse sine. If $\sin y=x$, then $\sin^{-1}x=y$. In this problem, $x=0.96593$ and $y=\frac{5\pi}{12}$, so

$\sin^{-1}(0.96593)\approx\frac{5\pi}{12}$

Try It #1

Given $\cos(0.5)\approx 0.8776$, write a relation involving the inverse cosine.

Finding the Exact Value of Expressions Involving the Inverse Sine, Cosine, and Tangent Functions

Now that we can identify inverse functions, we will learn to evaluate them. For most values in their domains, we must evaluate the inverse trigonometric functions by using a calculator, interpolating from a table, or using some other numerical technique.
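Example 1's relation is easy to check numerically. A short Python sketch using only the standard library, with `math.sin` and `math.asin` playing the roles of $\sin$ and $\sin^{-1}$:

```python
import math

y = 5 * math.pi / 12   # an angle in [-pi/2, pi/2]
x = math.sin(y)        # its sine

print(round(x, 5))                    # 0.96593, matching Example 1
print(math.isclose(math.asin(x), y))  # True: the inverse sine recovers the angle
```

Because $\frac{5\pi}{12}$ already lies in the restricted interval, the inverse sine returns exactly the angle we started from.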
Just as we did with the original trigonometric functions, we can give exact values for the inverse functions when we are using the special angles, specifically $\frac{\pi}{6}$ (30°), $\frac{\pi}{4}$ (45°), and $\frac{\pi}{3}$ (60°), and their reflections into other quadrants.

How To: Given a "special" input value, evaluate an inverse trigonometric function.

1. Find angle $x$ for which the original trigonometric function has an output equal to the given input for the inverse trigonometric function.
2. If $x$ is not in the defined range of the inverse, find another angle $y$ that is in the defined range and has the same sine, cosine, or tangent as $x$, depending on which corresponds to the given inverse function.

Example 2 Evaluating Inverse Trigonometric Functions for Special Input Values

Evaluate each of the following.

1. $\sin^{-1}\left(\frac{1}{2}\right)$
2. $\sin^{-1}\left(-\frac{\sqrt{2}}{2}\right)$
3. $\cos^{-1}\left(-\frac{\sqrt{3}}{2}\right)$
4. $\tan^{-1}(1)$

Solution

1. Evaluating $\sin^{-1}\left(\frac{1}{2}\right)$ is the same as determining the angle that would have a sine value of $\frac{1}{2}$. In other words, what angle $x$ would satisfy $\sin(x)=\frac{1}{2}$? There are multiple values that would satisfy this relationship, such as $\frac{\pi}{6}$ and $\frac{5\pi}{6}$, but we know we need the angle in the interval $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$, so the answer will be $\sin^{-1}\left(\frac{1}{2}\right)=\frac{\pi}{6}$. Remember that the inverse is a function, so for each input, we will get exactly one output.
2. To evaluate $\sin^{-1}\left(-\frac{\sqrt{2}}{2}\right)$, we know that $\frac{5\pi}{4}$ and $\frac{7\pi}{4}$ both have a sine value of $-\frac{\sqrt{2}}{2}$, but neither is in the interval $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$. For that, we need the negative angle coterminal with $\frac{7\pi}{4}$: $\sin^{-1}\left(-\frac{\sqrt{2}}{2}\right)=-\frac{\pi}{4}$.
3. To evaluate $\cos^{-1}\left(-\frac{\sqrt{3}}{2}\right)$, we are looking for an angle in the interval $[0,\pi]$ with a cosine value of $-\frac{\sqrt{3}}{2}$. The angle that satisfies this is $\cos^{-1}\left(-\frac{\sqrt{3}}{2}\right)=\frac{5\pi}{6}$.
4. Evaluating $\tan^{-1}(1)$, we are looking for an angle in the interval $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$ with a tangent value of 1. The correct angle is $\tan^{-1}(1)=\frac{\pi}{4}$.

Try It #2

Evaluate each of the following.

1. $\sin^{-1}(-1)$
2. $\tan^{-1}(-1)$
3. $\cos^{-1}(-1)$
4. $\cos^{-1}\left(\frac{1}{2}\right)$

Using a Calculator to Evaluate Inverse Trigonometric Functions

To evaluate inverse trigonometric functions that do not involve the special angles discussed previously, we will need to use a calculator or other type of technology. Most scientific calculators and calculator-emulating applications have specific keys or buttons for the inverse sine, cosine, and tangent functions. These may be labeled, for example, SIN$^{-1}$, ARCSIN, or ASIN.

In the previous chapter, we worked with trigonometry on a right triangle to solve for the sides of a triangle given one side and an additional angle. Using the inverse trigonometric functions, we can solve for the angles of a right triangle given two sides, and we can use a calculator to find the values to several decimal places. In these examples and exercises, the answers will be interpreted as angles and we will use $\theta$ as the independent variable. The value displayed on the calculator may be in degrees or radians, so be sure to set the mode appropriate to the application.

Example 3 Evaluating the Inverse Sine on a Calculator

Evaluate $\sin^{-1}(0.97)$ using a calculator.

Solution

Because the output of the inverse function is an angle, the calculator will give us a degree value if in degree mode and a radian value if in radian mode. Calculators also use the same domain restrictions on the angles as we are using. In radian mode, $\sin^{-1}(0.97)\approx 1.3252$. In degree mode, $\sin^{-1}(0.97)\approx 75.93°$. Note that in calculus and beyond we will use radians in almost all cases.

Try It #3

Evaluate $\cos^{-1}(-0.4)$ using a calculator.

How To: Given two sides of a right triangle like the one shown in Figure 7, find an angle.

Figure 7

1. If one given side is the hypotenuse of length $h$ and the side of length $a$ adjacent to the desired angle is given, use the equation $\theta=\cos^{-1}\left(\frac{a}{h}\right)$.
2. If one given side is the hypotenuse of length $h$ and the side of length $p$ opposite to the desired angle is given, use the equation $\theta=\sin^{-1}\left(\frac{p}{h}\right)$.
3. If the two legs (the sides adjacent to the right angle) are given, then use the equation $\theta=\tan^{-1}\left(\frac{p}{a}\right)$.

Example 4 Applying the Inverse Cosine to a Right Triangle

Solve the triangle in Figure 8 for the angle $\theta$.

Figure 8

Solution

Because we know the hypotenuse and the side adjacent to the angle, it makes sense for us to use the cosine function.

$\cos\theta=\frac{9}{12}$
$\theta=\cos^{-1}\left(\frac{9}{12}\right)$  Apply definition of the inverse.
$\theta\approx 0.7227$, or about $41.4096°$  Evaluate.

Try It #4

Solve the triangle in Figure 9 for the angle $\theta$.

Figure 9

Finding Exact Values of Composite Functions with Inverse Trigonometric Functions

There are times when we need to compose a trigonometric function with an inverse trigonometric function. In these cases, we can usually find exact values for the resulting expressions without resorting to a calculator. Even when the input to the composite function is a variable or an expression, we can often find an expression for the output. To help sort out different cases, let $f(x)$ and $g(x)$ be two different trigonometric functions belonging to the set $\{\sin(x),\cos(x),\tan(x)\}$ and let $f^{-1}(y)$ and $g^{-1}(y)$ be their inverses.
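Example 4's computation can be verified in any language that exposes an `acos` routine; a short Python sketch with the standard `math` module:

```python
import math

# Hypotenuse 12, adjacent side 9, as in Example 4
theta = math.acos(9 / 12)

print(round(theta, 4))                 # 0.7227 (radians)
print(round(math.degrees(theta), 4))   # 41.4096 (degrees)
```

This matches the calculator result in the worked example; `math.degrees` performs the radian-to-degree conversion that a calculator's mode switch handles implicitly.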
Evaluating Compositions of the Form $f(f^{-1}(y))$ and $f^{-1}(f(x))$

For any trigonometric function, $f(f^{-1}(y))=y$ for all $y$ in the proper domain for the given function. This follows from the definition of the inverse and from the fact that the range of $f$ was defined to be identical to the domain of $f^{-1}$. However, we have to be a little more careful with expressions of the form $f^{-1}(f(x))$.

Compositions of a trigonometric function and its inverse:

$\sin(\sin^{-1}x)=x$ for $-1\le x\le 1$
$\cos(\cos^{-1}x)=x$ for $-1\le x\le 1$
$\tan(\tan^{-1}x)=x$ for $-\infty<x<\infty$

$\sin^{-1}(\sin x)=x$ only for $-\frac{\pi}{2}\le x\le\frac{\pi}{2}$
$\cos^{-1}(\cos x)=x$ only for $0\le x\le\pi$
$\tan^{-1}(\tan x)=x$ only for $-\frac{\pi}{2}<x<\frac{\pi}{2}$

Q&A: Is it correct that $\sin^{-1}(\sin x)=x$?

No. This equation is correct if $x$ belongs to the restricted domain $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$, but sine is defined for all real input values, and for $x$ outside the restricted interval, the equation is not correct because its inverse always returns a value in $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$. The situation is similar for cosine and tangent and their inverses. For example, $\sin^{-1}\left(\sin\left(\frac{3\pi}{4}\right)\right)=\frac{\pi}{4}$.

How To: Given an expression of the form $f^{-1}(f(\theta))$ where $f(\theta)=\sin\theta$, $\cos\theta$, or $\tan\theta$, evaluate.

1. If $\theta$ is in the restricted domain of $f$, then $f^{-1}(f(\theta))=\theta$.
2. If not, then find an angle $\phi$ within the restricted domain of $f$ such that $f(\phi)=f(\theta)$. Then $f^{-1}(f(\theta))=\phi$.

Example 5 Using Inverse Trigonometric Functions

Evaluate the following:

1. $\sin^{-1}\left(\sin\left(\frac{\pi}{3}\right)\right)$
2. $\sin^{-1}\left(\sin\left(\frac{2\pi}{3}\right)\right)$
3. $\cos^{-1}\left(\cos\left(\frac{2\pi}{3}\right)\right)$
4. $\cos^{-1}\left(\cos\left(-\frac{\pi}{3}\right)\right)$

Solution

1. $\frac{\pi}{3}$ is in $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$, so $\sin^{-1}\left(\sin\left(\frac{\pi}{3}\right)\right)=\frac{\pi}{3}$.
2. $\frac{2\pi}{3}$ is not in $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$, but $\sin\left(\frac{2\pi}{3}\right)=\sin\left(\frac{\pi}{3}\right)$, so $\sin^{-1}\left(\sin\left(\frac{2\pi}{3}\right)\right)=\frac{\pi}{3}$.
3. $\frac{2\pi}{3}$ is in $[0,\pi]$, so $\cos^{-1}\left(\cos\left(\frac{2\pi}{3}\right)\right)=\frac{2\pi}{3}$.
4. $-\frac{\pi}{3}$ is not in $[0,\pi]$, but $\cos\left(-\frac{\pi}{3}\right)=\cos\left(\frac{\pi}{3}\right)$ because cosine is an even function. $\frac{\pi}{3}$ is in $[0,\pi]$, so $\cos^{-1}\left(\cos\left(-\frac{\pi}{3}\right)\right)=\frac{\pi}{3}$.

Try It #5

Evaluate $\tan^{-1}\left(\tan\left(\frac{\pi}{8}\right)\right)$ and $\tan^{-1}\left(\tan\left(\frac{11\pi}{9}\right)\right)$.

Evaluating Compositions of the Form $f^{-1}(g(x))$

Now that we can compose a trigonometric function with its inverse, we can explore how to evaluate a composition of a trigonometric function and the inverse of another trigonometric function. We will begin with compositions of the form $f^{-1}(g(x))$. For special values of $x$, we can exactly evaluate the inner function and then the outer, inverse function. However, we can find a more general approach by considering the relation between the two acute angles of a right triangle where one is $\theta$, making the other $\frac{\pi}{2}-\theta$. Consider the sine and cosine of each angle of the right triangle in Figure 10.

Figure 10 Right triangle illustrating the cofunction relationships

Because $\cos\theta=\frac{b}{c}=\sin\left(\frac{\pi}{2}-\theta\right)$, we have $\sin^{-1}(\cos\theta)=\frac{\pi}{2}-\theta$ if $0\le\theta\le\pi$. If $\theta$ is not in this domain, then we need to find another angle that has the same cosine as $\theta$ and does belong to the restricted domain; we then subtract this angle from $\frac{\pi}{2}$. Similarly, $\sin\theta=\frac{a}{c}=\cos\left(\frac{\pi}{2}-\theta\right)$, so $\cos^{-1}(\sin\theta)=\frac{\pi}{2}-\theta$ if $-\frac{\pi}{2}\le\theta\le\frac{\pi}{2}$. These are just the function-cofunction relationships presented in another way.
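Example 5's evaluations can be reproduced numerically; a short Python sketch using the standard `math` module:

```python
import math

third_pi = math.pi / 3

# Part 1: pi/3 lies in [-pi/2, pi/2], so the round trip returns the angle itself
print(math.isclose(math.asin(math.sin(third_pi)), third_pi))          # True

# Part 2: 2*pi/3 lies outside, so asin(sin(2*pi/3)) collapses to pi/3
print(math.isclose(math.asin(math.sin(2 * third_pi)), third_pi))      # True

# Part 3: 2*pi/3 lies inside [0, pi], so acos(cos(2*pi/3)) returns it unchanged
print(math.isclose(math.acos(math.cos(2 * third_pi)), 2 * third_pi))  # True
```

The middle line is the numerical face of the restriction: the library's `asin` always lands in $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$, exactly like the textbook's $\sin^{-1}$.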
How To: Given functions of the form $\sin^{-1}(\cos x)$ and $\cos^{-1}(\sin x)$, evaluate them.

1. If $x$ is in $[0,\pi]$, then $\sin^{-1}(\cos x)=\frac{\pi}{2}-x$.
2. If $x$ is not in $[0,\pi]$, then find another angle $y$ in $[0,\pi]$ such that $\cos y=\cos x$. Then $\sin^{-1}(\cos x)=\frac{\pi}{2}-y$.
3. If $x$ is in $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$, then $\cos^{-1}(\sin x)=\frac{\pi}{2}-x$.
4. If $x$ is not in $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$, then find another angle $y$ in $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$ such that $\sin y=\sin x$. Then $\cos^{-1}(\sin x)=\frac{\pi}{2}-y$.

Example 6 Evaluating the Composition of an Inverse Sine with a Cosine

Evaluate $\sin^{-1}\left(\cos\left(\frac{13\pi}{6}\right)\right)$

1. by direct evaluation.
2. by the method described previously.

Solution

1. Here, we can directly evaluate the inside of the composition.
$\cos\left(\frac{13\pi}{6}\right)=\cos\left(\frac{\pi}{6}+2\pi\right)=\cos\left(\frac{\pi}{6}\right)=\frac{\sqrt{3}}{2}$
Now, we can evaluate the inverse function as we did earlier: $\sin^{-1}\left(\frac{\sqrt{3}}{2}\right)=\frac{\pi}{3}$.
2. We have $x=\frac{13\pi}{6}$, $y=\frac{\pi}{6}$, and
$\sin^{-1}\left(\cos\left(\frac{13\pi}{6}\right)\right)=\frac{\pi}{2}-\frac{\pi}{6}=\frac{\pi}{3}$

Try It #6

Evaluate $\cos^{-1}\left(\sin\left(-\frac{11\pi}{4}\right)\right)$.

Evaluating Compositions of the Form $f(g^{-1}(x))$

To evaluate compositions of the form $f(g^{-1}(x))$, where $f$ and $g$ are any two of the functions sine, cosine, or tangent and $x$ is any input in the domain of $g^{-1}$, we have exact formulas, such as $\sin(\cos^{-1}x)=\sqrt{1-x^2}$. When we need to use them, we can derive these formulas by using the trigonometric relations between the angles and sides of a right triangle, together with the use of Pythagoras's relation between the lengths of the sides. We can use the Pythagorean identity, $\sin^2 x+\cos^2 x=1$, to solve for one when given the other. We can also use the inverse trigonometric functions to find compositions involving algebraic expressions.
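The exact formula $\sin(\cos^{-1}x)=\sqrt{1-x^2}$ can be spot-checked numerically before it is derived; a Python sketch using only the standard library:

```python
import math

# sin(acos(x)) should equal sqrt(1 - x^2) for every x in [-1, 1]
for x in (-0.9, -0.25, 0.0, 0.4, 1.0):
    lhs = math.sin(math.acos(x))
    rhs = math.sqrt(1 - x * x)
    assert math.isclose(lhs, rhs, abs_tol=1e-12)

print("identity verified on sample points")
```

The check passes for negative $x$ as well, because $\cos^{-1}x$ always lands in $[0,\pi]$, where the sine is non-negative, which is exactly why the formula needs no $\pm$ sign.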
Example 7 Evaluating the Composition of a Sine with an Inverse Cosine

Find an exact value for $\sin\left(\cos^{-1}\left(\frac{4}{5}\right)\right)$.

Solution

Beginning with the inside, we can say there is some angle such that $\theta=\cos^{-1}\left(\frac{4}{5}\right)$, which means $\cos\theta=\frac{4}{5}$, and we are looking for $\sin\theta$. We can use the Pythagorean identity to do this.

$\sin^2\theta+\cos^2\theta=1$  Use our known value for cosine.
$\sin^2\theta+\left(\frac{4}{5}\right)^2=1$  Solve for sine.
$\sin^2\theta=1-\frac{16}{25}$
$\sin\theta=\pm\sqrt{\frac{9}{25}}=\pm\frac{3}{5}$

Since $\theta=\cos^{-1}\left(\frac{4}{5}\right)$ is in quadrant I, $\sin\theta$ must be positive, so the solution is $\frac{3}{5}$. See Figure 11.

Figure 11 Right triangle illustrating that if $\cos\theta=\frac{4}{5}$, then $\sin\theta=\frac{3}{5}$

We know that the inverse cosine always gives an angle on the interval $[0,\pi]$, so we know that the sine of that angle must be positive; therefore $\sin\left(\cos^{-1}\left(\frac{4}{5}\right)\right)=\sin\theta=\frac{3}{5}$.

Try It #7

Evaluate $\cos\left(\tan^{-1}\left(\frac{5}{12}\right)\right)$.

Example 8 Evaluating the Composition of a Sine with an Inverse Tangent

Find an exact value for $\sin\left(\tan^{-1}\left(\frac{7}{4}\right)\right)$.

Solution

While we could use a technique similar to the one in Example 7, we will demonstrate a different technique here. From the inside, we know there is an angle such that $\tan\theta=\frac{7}{4}$. We can envision this as the opposite and adjacent sides on a right triangle, as shown in Figure 12.

Figure 12 A right triangle with two sides known

Using the Pythagorean Theorem, we can find the hypotenuse of this triangle:

$4^2+7^2=\text{hypotenuse}^2$
$\text{hypotenuse}=\sqrt{65}$

Now, we can evaluate the sine of the angle as the opposite side divided by the hypotenuse: $\sin\theta=\frac{7}{\sqrt{65}}$. This gives us our desired composition:

$\sin\left(\tan^{-1}\left(\frac{7}{4}\right)\right)=\sin\theta=\frac{7}{\sqrt{65}}=\frac{7\sqrt{65}}{65}$

Try It #8

Evaluate $\cos\left(\sin^{-1}\left(\frac{7}{9}\right)\right)$.

Example 9 Finding the Cosine of the Inverse Sine of an Algebraic Expression

Find a simplified expression for $\cos\left(\sin^{-1}\left(\frac{x}{3}\right)\right)$ for $-3\le x\le 3$.

Solution

We know there is an angle $\theta$ such that $\sin\theta=\frac{x}{3}$.

$\sin^2\theta+\cos^2\theta=1$  Use the Pythagorean Theorem.
$\left(\frac{x}{3}\right)^2+\cos^2\theta=1$  Solve for cosine.
$\cos^2\theta=1-\frac{x^2}{9}$
$\cos\theta=\pm\sqrt{\frac{9-x^2}{9}}=\pm\frac{\sqrt{9-x^2}}{3}$

Because we know that the inverse sine must give an angle on the interval $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$, we can deduce that the cosine of that angle must be positive.

$\cos\left(\sin^{-1}\left(\frac{x}{3}\right)\right)=\frac{\sqrt{9-x^2}}{3}$

Try It #9

Find a simplified expression for $\sin\left(\tan^{-1}(4x)\right)$ for $-\frac{1}{4}\le x\le\frac{1}{4}$.

Media

Access this online resource for additional instruction and practice with inverse trigonometric functions.

6.3 Section Exercises

Verbal

1. Why do the functions $f(x)=\sin^{-1}x$ and $g(x)=\cos^{-1}x$ have different ranges?
2. Since the functions $y=\cos x$ and $y=\cos^{-1}x$ are inverse functions, why is $\cos^{-1}\left(\cos\left(-\frac{\pi}{6}\right)\right)$ not equal to $-\frac{\pi}{6}$?
3. Explain the meaning of $\frac{\pi}{6}=\arcsin(0.5)$.
4. Most calculators do not have a key to evaluate $\sec^{-1}(2)$. Explain how this can be done using the cosine function or the inverse cosine function.
5. Why must the domain of the sine function, $\sin x$, be restricted to $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$ for the inverse sine function to exist?
6. Discuss why this statement is incorrect: $\arccos(\cos x)=x$ for all $x$.
7. Determine whether the following statement is true or false and explain your answer: $\arccos(-x)=\pi-\arccos x$.

Algebraic

For the following exercises, evaluate the expressions.

8. $\sin^{-1}\left(\frac{\sqrt{2}}{2}\right)$
9. $\sin^{-1}\left(-\frac{1}{2}\right)$
10. $\cos^{-1}\left(\frac{1}{2}\right)$
11. $\cos^{-1}\left(-\frac{\sqrt{2}}{2}\right)$
12. $\tan^{-1}(1)$
13. $\tan^{-1}(-\sqrt{3})$
14. $\tan^{-1}(-1)$
15. $\tan^{-1}(\sqrt{3})$
16. $\tan^{-1}\left(\frac{-1}{\sqrt{3}}\right)$

For the following exercises, use a calculator to evaluate each expression. Express answers to the nearest hundredth.

17. $\cos^{-1}(-0.4)$
18. $\arcsin(0.23)$
19. $\arccos\left(\frac{3}{5}\right)$
20. $\cos^{-1}(0.8)$
21. $\tan^{-1}(6)$

For the following exercises, find the angle $\theta$ in the given right triangle. Round answers to the nearest hundredth.

22.
23.

For the following exercises, find the exact value, if possible, without a calculator. If it is not possible, explain why.

24. $\sin^{-1}(\cos(\pi))$
25. $\tan^{-1}(\sin(\pi))$
26. $\cos^{-1}\left(\sin\left(\frac{\pi}{3}\right)\right)$
27. $\tan^{-1}\left(\sin\left(\frac{\pi}{3}\right)\right)$
28. $\sin^{-1}\left(\cos\left(\frac{-\pi}{2}\right)\right)$
29. $\tan^{-1}\left(\sin\left(\frac{4\pi}{3}\right)\right)$
30. $\sin^{-1}\left(\sin\left(\frac{5\pi}{6}\right)\right)$
31. $\tan^{-1}\left(\sin\left(\frac{-5\pi}{2}\right)\right)$
32. $\cos\left(\sin^{-1}\left(\frac{4}{5}\right)\right)$
33. $\sin\left(\cos^{-1}\left(\frac{3}{5}\right)\right)$
34. $\sin\left(\tan^{-1}\left(\frac{4}{3}\right)\right)$
35. $\cos\left(\tan^{-1}\left(\frac{12}{5}\right)\right)$
36. $\cos\left(\sin^{-1}\left(\frac{1}{2}\right)\right)$

For the following exercises, find the exact value of the expression in terms of $x$ with the help of a reference triangle.

37. $\tan\left(\sin^{-1}(x-1)\right)$
38. $\sin\left(\cos^{-1}(1-x)\right)$
39. $\cos\left(\sin^{-1}\left(\frac{1}{x}\right)\right)$
40. $\cos\left(\tan^{-1}(3x-1)\right)$
41. $\tan\left(\sin^{-1}\left(x+\frac{1}{2}\right)\right)$

Extensions

For the following exercises, evaluate the expression without using a calculator. Give the exact value.

42. $\frac{\sin^{-1}\left(\frac{1}{2}\right)-\cos^{-1}\left(\frac{\sqrt{2}}{2}\right)+\sin^{-1}\left(\frac{\sqrt{3}}{2}\right)-\cos^{-1}(1)}{\cos^{-1}\left(\frac{\sqrt{3}}{2}\right)-\sin^{-1}\left(\frac{\sqrt{2}}{2}\right)+\cos^{-1}\left(\frac{1}{2}\right)-\sin^{-1}(0)}$

For the following exercises, find the function if $\sin t=\frac{x}{x+1}$.

43. $\cos t$
44. $\sec t$
45. $\cot t$
46. $\cos\left(\sin^{-1}\left(\frac{x}{x+1}\right)\right)$
47. $\tan^{-1}\left(\frac{x}{\sqrt{2x+1}}\right)$

Graphical

48. Graph $y=\sin^{-1}x$ and state the domain and range of the function.
49. Graph $y=\arccos x$ and state the domain and range of the function.
50. Graph one cycle of $y=\tan^{-1}x$ and state the domain and range of the function.
51. For what value of $x$ does $\sin x=\sin^{-1}x$? Use a graphing calculator to approximate the answer.
52. For what value of $x$ does $\cos x=\cos^{-1}x$? Use a graphing calculator to approximate the answer.

Real-World Applications

53. Suppose a 13-foot ladder is leaning against a building, reaching to the bottom of a second-floor window 12 feet above the ground. What angle, in radians, does the ladder make with the building?
54. Suppose you drive 0.6 miles on a road so that the vertical distance changes from 0 to 150 feet. What is the angle of elevation of the road?
55. An isosceles triangle has two congruent sides of length 9 inches. The remaining side has a length of 8 inches. Find the angle that a side of 9 inches makes with the 8-inch side.
56. Without using a calculator, approximate the value of $\arctan(10{,}000)$. Explain why your answer is reasonable.
57. A truss for the roof of a house is constructed from two identical right triangles. Each has a base of 12 feet and height of 4 feet. Find the measure of the acute angle adjacent to the 4-foot side.
58. The line $y=\frac{3}{5}x$ passes through the origin in the x,y-plane. What is the measure of the angle that the line makes with the positive x-axis?
59. The line $y=\frac{-3}{7}x$ passes through the origin in the x,y-plane. What is the measure of the angle that the line makes with the negative x-axis?
60. What percentage grade should a road have if the angle of elevation of the road is 4 degrees? (The percentage grade is defined as the change in the altitude of the road over a 100-foot horizontal distance. For example, a 5% grade means that the road rises 5 feet for every 100 feet of horizontal distance.)
61. A 20-foot ladder leans up against the side of a building so that the foot of the ladder is 10 feet from the base of the building. If specifications call for the ladder's angle of elevation to be between 35 and 45 degrees, does the placement of this ladder satisfy safety specifications?
62. Suppose a 15-foot ladder leans against the side of a house so that the angle of elevation of the ladder is 42 degrees. How far is the foot of the ladder from the side of the house?

This page titled 6.4: Inverse Trigonometric Functions is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by OpenStax via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
# Solving an Algebraic Linear Equation with One Variable

by Ron Kurtus (revised 17 August 2012)

A linear equation with one variable consists of numbers or constants and multiples of a variable. The standard form of such an equation is ax + b = 0, where a and b are constants and x is the variable. Often, the equation is in a more complex form. The solution of the equation is found by operating on both sides of the equation to get it into the form x = −b/a.

Questions you may have include:

• How do you operate on the equation?
• How do you solve for x?
• What happens if the equation is in a more complex form?

This lesson will answer those questions.

## Rules for solution

When you have a linear equation with one variable, your goal is to manipulate the expressions such that you end up with the variable x on the left side of the equal sign and the constants on the right side. That is solving the equation. For example, the solution of the equation 4a = 3 − x is x = 3 − 4a.

### Basic rule

The basic rule used in solving equations in Algebra is: what you do on the left side of the equal sign, you must do on the right side. If you add a term on the left side, you must add the same term on the right side. If you multiply by a term on the left side, you must multiply by the same term on the right side.

### Examples

In the equation 4a = 3 − x, you want to get the x on the left side and the other items on the right side. You perform the following operations:

Add x to both sides of the equation.
4a + x = 3 − x + x
4a + x = 3

Subtract 4a from both sides of the equation.
4a − 4a + x = 3 − 4a
x = 3 − 4a, which is the solution to the equation.

## Solving by combining like terms

You can solve an equation like 2x + 3 = −4x − 7 by first getting all the x terms on the left side and all the constant terms on the right side. Next, you combine like terms. Then you divide by the coefficient of x to get your solution.
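The recipe (collect the x terms, collect the constants, then divide) can be written down once for the general form ax + b = cx + d. Here is a minimal Python sketch with exact fractions; the helper name `solve_linear` is ours, not from the article:

```python
from fractions import Fraction

def solve_linear(a, b, c, d):
    """Solve a*x + b = c*x + d by combining like terms: x = (d - b) / (a - c)."""
    a, b, c, d = (Fraction(v) for v in (a, b, c, d))
    if a == c:
        # The x terms cancel entirely, so there is no unique solution.
        raise ValueError("x terms cancel: no unique solution")
    return (d - b) / (a - c)

# The article's example: 2x + 3 = -4x - 7
print(solve_linear(2, 3, -4, -7))   # -5/3
```

Using `Fraction` keeps the answer exact, matching the article's x = −5/3 rather than a rounded decimal.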
### Example

Consider the equation:
2x + 3 = −4x − 7

Add 4x to both sides.
2x + 4x + 3 = −4x + 4x − 7

Combine like terms.
6x + 3 = −7

Subtract 3 from both sides.
6x + 3 − 3 = −7 − 3

Combine like terms.
6x = −10

Divide both sides by 6.
6x/6 = −10/6

Simplify the fraction.
x = −5/3 or x = −1 2/3

Note: It is a good idea to go step-by-step instead of trying to do several things at once or to do things in your head.

### Another example

Consider the equation:
2x/3 + 3 − x = 2(x + 2) − 5

Multiply out to get rid of the parentheses.
2x/3 + 3 − x = 2x + 4 − 5
2x/3 + 3 − x = 2x − 1

Get rid of the fraction by multiplying both sides by 3.
3(2x/3 + 3 − x) = 3(2x − 1)

Multiply out to get rid of the parentheses.
2x + 9 − 3x = 6x − 3

Combine like terms.
9 − x = 6x − 3

Subtract 9 from both sides.
−x = 6x − 12

Subtract 6x from both sides.
−7x = −12

Divide by −7.
x = 12/7 or x = 1 5/7

## Variable in a fraction

There are equations where the x term is part of a denominator. In such a case, you multiply both sides of the equation by that denominator, so that the equation no longer contains variable fractions. Likewise, you remove any other fractions by multiplying by their denominators.

### Example

Consider the equation:
2x/(x + 1) = 7/12

Multiply both sides by (x + 1).
2x(x + 1)/(x + 1) = 7(x + 1)/12

Simplify, since (x + 1)/(x + 1) = 1.
2x = 7(x + 1)/12

Multiply both sides by 12.
24x = 7(x + 1)

Multiply out with the distributive law to get rid of the parentheses.
24x = 7x + 7

Subtract 7x from both sides.
24x − 7x = 7x − 7x + 7

Combine like terms.
17x = 7

Divide by 17 to get the solution of the equation.
x = 7/17

### Another example

Consider the equation:
1/(5x − 3) = 3/x

Multiply both sides by (5x − 3).
1 = 3(5x − 3)/x

Multiply both sides by x.
x = 3(5x − 3)

Note that sometimes these two steps are combined and called "cross multiplying" the equation. One problem is that shortcutting can result in mistakes.
Also, it is better to know what you are doing and why, for better understanding.

Multiply with the distributive law (remove parentheses).

x = 15x − 9

Subtract 15x from both sides.

−14x = −9

Divide both sides by −14.

x = 9/14

## Summary

A linear equation with one variable consists of numbers or constants and multiples of a variable. The standard form of such an equation is ax + b = 0, where a and b are constants and x is a variable. Often, the equation is in a more complex form.

The solution of the equation is found by operating on the equation to get it into a form similar to x = −b/a. In other words, you want the x alone on the left side and the other items on the right side of the equation. The rule is: what you do on the left side, you do on the right side. Go step-by-step.
## Introduction Metabolic phenotyping seeks to identify biomarkers for diagnosis, prognosis or therapy and holds great promise to improve clinical practice and especially, precision medicine1,2. Despite considerable progress with respect to the sensitive and parallel analysis of metabolites in metabolomics/metabonomics studies3,4,5,6,7 and by mass spectrometry (MS)8,9, the successful implementation of metabolites as biomarkers in the clinical setting still represents a major challenge10,11,12. This is illustrated by the strong individual and physiological background variability2 and individual differences in ADME properties, the latter impacting significantly on drug responses13,14. To the best of our knowledge, current techniques of metabolic phenotyping are largely focussed on generating static diagnostic pictures because the commonly used biological fluids (e.g. plasma, urine)15,16,17 or tissues do not routinely allow for time-course studies. The implementation of dynamic metabolic responses as a biomarker strategy may be desirable, but requires a considerable number of data points on a single individual. Clearly, a non-invasive method from an alternative biological fluid is required to enable frequent sampling of the same individual in order to obtain dynamic metabolic patterns in the frame of metabolic phenotyping. While fingerprints—the pattern of the ridge details left on a surface—have been used for the identification of individuals since the late 19th century18, their relevance for detecting metabolites, as well as drugs and their metabolites has only recently been discovered19,20. While drug substances detected in the fingerprint may originate from accidental dermal contact, the detection of drug-specific metabolites implies that the drug was ingested, metabolised and subsequently excreted from sweat glands. Thus, we hypothesised that sweat from the skin surface may represent a promising source for metabolic biomonitoring. 
Sweat is a hypotonic, slightly acidic biofluid secreted by the eccrine, apocrine and apoeccrine glands located on the skin surface21,22. Eccrine sweat from the fingertips is mainly composed of water (~99%), but contains electrolytes, urea, lactate, amino acids, metal ions23,24 and a variety of endogenous metabolites, including peptides, organic acids, carbohydrates, lipids, lipid-derived metabolites, as well as xenobiotics21,22,25,26,27. Sweat composition is highly dynamic, changes significantly with pathological states and may reveal habits of diet, metabolic conditions or use of drugs and supplements17,24,28. In fact, the analysis of sweat has already been reported to assess individual metabolic characteristics29,30. Clinical assays based on the analysis of sweat exist and include the screening of newborn children for elevated chloride and sodium levels to confirm cystic fibrosis via pilocarpine stimulated iontophoresis or forensic and criminal investigations to test for illicit drug use17,22,31,32,33. Furthermore, it has already been successfully demonstrated that the analysis of proteins contained in sweat enables not only the diagnosis of active tuberculosis but can also be used to screen for lung cancer16,34,35, highlighting the potential of sweat analysis for precision medicine36. Real-time monitoring of biomarkers was demonstrated with wearable sweat sensors for uric acid and tyrosine37, interleukin-6 and cortisol38 or electrolytes such as sodium, ammonium ions and lactate39. However, these studies typically assessed a small number of metabolites and relied on elaborate methods to collect sweat, including sweat patches or artificially forcing sweat production17,22,30. This was necessary because the detection methods required relatively large absolute amounts of these metabolites. It is known that eccrine glands on the fingertips produce sweat at a rate of 50–500 nL cm−2 min−140. 
Thus, the analysis of metabolites from sweat of the fingertips may be achieved with sufficiently sensitive instrumentation, for example MS41. Sample collection using sweat from fingertips requires no patient pre-treatment or trained personnel, is safe and fast. Upon optimising the entire workflow for the analysis of sweat from the fingertips, we analysed 1792 samples from 40 participants, which underlines its potential as a high-throughput metabolic technology. Proof-of-principle studies based on the consumption of coffee or ingestion of a caffeine capsule were designed to assess metabolic time-series of each participant and provided evidence of the feasibility of this approach. Fluctuations in the rate of sweat production were accounted for by mathematical modelling of the conversion of xenobiotics to their catabolic products (e.g. caffeine to paraxanthine). In this study, we show that metabolic phenotyping using sweat from fingertips combined with mathematical network modelling may have far-reaching relevance for precision medicine, because it allows us to obtain dynamic metabolic responses of individuals.

## Results

### Sweat from the fingertips is a rich source for metabolic phenotyping

A straightforward workflow was established for sampling and processing sweat samples from fingertips. In short, hands are washed without soap and dried with a disposable paper towel prior to each sampling time-point. For sweat collection, a circular sampling unit standardised to 1.15 cm diameter was then held between thumb and index finger for 1 min and was transferred with clean tweezers into an empty tube for storage (Fig. 1a). The metabolites were extracted from the sampling units using aqueous conditions and the resulting solution was directly introduced into the liquid chromatography-mass spectrometry (LC-MS) system for analysis. Sample collection and processing required ~13 min per sample.
Sampling can be performed by untrained personnel in a highly frequent manner and the non-invasive nature of the sampling facilitates patient compliance. Data acquisition requires a further 7.5 min, which gives a total of ~20 min for the entire workflow per sample. Based on the known rates of sweat production in eccrine glands on the fingertips29,40, the median sweat volume collected using this method can be estimated at around 200–2000 nL (2 min × 2 cm2 × 50–500 nL min−1 cm−2) sweat per sample. High-resolution MS using a Q Exactive HF orbitrap hyphenated with an ultrahigh-performance liquid chromatography (UHPLC) system proved suitable for metabolic phenotyping from sweat samples (see methods). Initially, three participants were sampled multiple times in an observational study in order to evaluate the metabolic profile obtained from sweat of the fingertips of each individual. In detail, the participants collected sweat samples seven times per day at different intervals on 2 consecutive days and using both hands (see methods, study A). A total of 250 metabolites were identified and verified by external standards (Supplementary Data 1). Actually, many known as well as previously unknown endogenous and exogenous metabolites were identified in the sweat samples with high confidence (Fig. 1b, c). We detected not only a number of amino acid-related metabolites (e.g. tyrosine, leucine or citrulline), but also hormones (e.g. melatonin or progesterone). Newly identified metabolites include dopamine, progesterone and melatonin amongst others. Interestingly, we observed many coffee-derived metabolites, including caffeine and the related dimethyl– and methylxanthines. Principal component analysis (PCA) using those metabolites revealed that the samples clustered according to individuals (Fig. 1d). This indicated that the molecular composition of sweat associated with a given individual dominated the variances derived from multiple sampling. 
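The PCA step described above can be illustrated with a few lines of code. The following Python sketch is our own and uses simulated numbers standing in for metabolite intensities, not the study data; it only demonstrates the mechanics of projecting samples onto the first two principal components.

```python
import numpy as np

# Toy illustration of PCA-based clustering by individual. The matrix
# below is simulated (random numbers standing in for metabolite AUCs),
# NOT the study data: three "individuals" with distinct baseline
# profiles, each providing 14 sweat samples.
rng = np.random.default_rng(0)
n_metabolites = 250
profiles = rng.normal(size=(3, n_metabolites))
samples = np.vstack([p + 0.1 * rng.normal(size=(14, n_metabolites))
                     for p in profiles])

# PCA via singular value decomposition of the mean-centred matrix;
# rows of `scores` are the samples projected onto the first two PCs.
centred = samples - samples.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ vt[:2].T
print(scores.shape)  # (42, 2)
```

Plotting the two score columns would show three tight clusters, one per simulated individual, analogous to the per-participant clustering in Fig. 1d.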
Interestingly, the principal components were strongly determined by the endogenous metabolites histamine, tryptophan, tyrosine and arginine (Supplementary Fig. 1). Moreover, we did not find notable differences of the sweat composition between the left and right hand from a given individual (Fig. 1d).

### Sampling sweat from the fingertips is reliable and robust

Biomolecules are characterised by LC-MS according to retention time (RT), the accurate mass of the molecular ion derived from the full mass spectrum (MS1) and the fragmentation pattern determined by tandem mass spectrometry (MS2). The experimentally determined mass-to-charge ratios of 15 representative metabolites showed mass deviations below 2 ppm, which are typical for Q Exactive HF instruments (Supplementary Table 1). The coefficient of variation (CV) of the RT determined for the internal standard caffeine-(trimethyl-D9) was found to be 1% across 636 injections (Fig. 2a, see methods, study A and C). Caffeine-(trimethyl-D9) was injected with every sample at 10 pg on column. The CV of the areas under the curve (AUCs) across the same sample set was 11% (n = 636). The CV improved slightly when considering study A only (CV = 7%, n = 186), but remained constant for study C (CV = 10%, n = 450). This indicated that the performance of the LC-MS system was robust across each sample set. MS2 spectra were of good quality and provided high matching factors, which supported the identification of previously known and newly identified metabolites found in sweat, e.g. tryptophan42 and dopamine, respectively (Fig. 2b). Caffeine and its three main metabolites paraxanthine, theobromine and theophylline were spiked onto sampling units in the range of 1–100 pg µL−1. These samples were processed according to the above-mentioned procedures and linear calibration curves were obtained with associated R2 > 0.997 (Fig. 2c).
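The quality metrics used in this section, the coefficient of variation and the calibration R2, follow their standard definitions. A minimal Python sketch with invented example numbers (the AUC values below are hypothetical, not study data):

```python
import statistics

def cv_percent(values):
    """Coefficient of variation in percent: sd / mean * 100."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

def r_squared(xs, ys):
    """R^2 of an ordinary least-squares line fitted to (xs, ys)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical calibration data: spiked concentration (pg/uL) vs AUC.
conc = [0.1, 1, 5, 10, 15, 25, 50, 100]
auc = [0.9, 10.2, 49.0, 101.5, 148.0, 252.0, 498.0, 1003.0]
print(r_squared(conc, auc) > 0.997)

# Hypothetical internal-standard AUCs across repeated injections.
istd = [100, 95, 108, 102, 97, 99]
print(round(cv_percent(istd), 1))  # CV in percent
```

The same blank-based convention described in the next paragraph (mean blank AUC plus 3 or 10 standard deviations for LOD and LLOQ, respectively) can be computed with the identical mean/stdev building blocks.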
At concentrations of 100 fg µL−1, these molecules were still detected with signal-to-noise ratios >100 on the Q Exactive HF. Comparison of a spiked and processed caffeine standard (10 pg µL−1) to a directly injected caffeine standard (10 pg µL−1) yielded an extraction efficiency of 93%. The lower limit of quantification (LLOQ) was determined from the calibration curves as the mean AUC plus ten times the standard deviation of caffeine and its metabolites found in blank sampling units. This resulted in an LLOQ of 0.2 pg µL−1 for caffeine, 0.1 pg µL−1 for paraxanthine and 1.7 pg µL−1 for theobromine (see Source Data). The AUCs for theophylline in filter blanks and caffeine in tap water and paper towels were below the limit of detection (LOD), which was calculated as the mean AUC plus three times the standard deviation.

### Coffee consumption revealed coffee-specific xenobiotics in finger sweat

After confirming sweat from the fingertips to contain endogenous metabolites, as well as xenobiotics mainly related to coffee consumption, we designed an intervention study with 11 participants who consumed a standardised amount of coffee after a 12 h fasting period with regard to caffeine-containing food (see methods, study B). Two additional volunteers were sampled, who did not consume coffee, thus representing the control group. Sweat samples were collected before coffee consumption and subsequently after 15, 30, 45, 60, 90 and 120 min. Caffeine is a widely used stimulant of the central nervous system and features an excellent oral bioavailability43,44. Since the ingestion of an equivalent of a double espresso was already shown to have systemic effects by affecting sleep behaviour45,46,47, we expected to find caffeine and related xenobiotics upon coffee consumption in sweat from the fingertips.
The metabolite levels of the participants before coffee consumption (0 min) revealed negligible amounts of chlorogenic acid, trigonelline and caffeine, while the primary metabolites of caffeine showed significant background levels (e.g. paraxanthine, theobromine and theophylline). The control group featured stable metabolite levels over time with small variations probably stemming from fluctuations in the rate of sweat excretion (Supplementary Fig. 2). Strikingly, the sweat from the fingertips 15 min post consumption revealed 35 xenobiotics of 121 metabolites (29%) contained in coffee presently identified by us from aqueous extracts of the roasted coffee beans used for this study, including among others caffeine, theobromine, theophylline, paraxanthine, methylxanthines, chlorogenic acid, trigonelline, methylsuccinic acid, quinic acid and iditol (Supplementary Data 2). The AUCs of caffeine, chlorogenic acid and trigonelline increased significantly in all volunteers as early as 15 min after coffee consumption (Fig. 3a). The time-dependent sampling revealed differences in kinetic properties of the coffee-specific xenobiotics, especially regarding absorption and clearance rates. For example, the AUCs of caffeine and chlorogenic acid peaked after 15 min, followed by rapid clearance, while the AUCs of the dimethylxanthines increased steadily over time on top of a pre-existing pool (Fig. 3b). Several coffee-specific metabolites displayed a number of isomers in their extracted ion chromatograms. For example, chlorogenic acid (m/z 355.1024, RT = 3.05 min) showed at least five isomers (Fig. 3c) as verified on MS2 level. The ratio of the relative peak intensities of chlorogenic acid and its isomers was conserved when comparing coffee extracts and sweat from the fingertip. This indicated that these isomers are equally distributed into the water-soluble body compartment and are equally cleared from body on a rapid timescale. 
Chlorogenic acid and its isomers were not observed prior to coffee consumption. Such a comparative analysis strategy may be used to discover other xenobiotics distributed to sweat glands in a systemic fashion as indicated by the yet unidentified feature detected at m/z 337.0920 (Supplementary Fig. 3). These findings provide evidence that ingested xenobiotics may be robustly detected in the sweat from the fingertips, and their time-dependence mirrors their pharmacokinetic properties.

### Finger sweat enables the elucidation of individual metabolic traits

The metabolism of caffeine by different hepatic enzymes is well known48, and the catabolic products were successfully identified in sweat from fingertips after coffee consumption (Fig. 4a, Supplementary Table 1). However, dimethyl- and methylxanthines may originate from both coffee beans and from endogenous hepatic metabolism. Additionally, we observed significant background levels of these metabolites in sweat from the fingertips before coffee consumption. In order to monitor the physiological conversion of caffeine into dimethylxanthines by hepatic enzyme activity, we designed a study in which participants refrained from consuming caffeine-containing products for at least 48 h before ingesting a single caffeine capsule (200 mg). The caffeine capsule and the longer fasting time were chosen to minimise background contributions from catabolic products of caffeine. Forty volunteers were enrolled in this study and sweat from the fingertips was sampled repeatedly over 27 h with up to 20 sample collections per volunteer (see methods, study C.1 and C.2). Six individuals participated in both the coffee consumption study (study B) and the caffeine capsule study (study C.1). Indeed, their prolonged fasting featured an improved baseline and revealed a significant decrease of dimethylxanthines to negligible levels after the 48 h fasting period compared to the 12 h fasting period (Fig. 4b).
Ingestion of the caffeine capsule significantly increased the abundance of caffeine in sweat from fingertips in all volunteers already after 15 min, in accordance with coffee consumption. The caffeine abundance remained elevated for at least 480 min in all volunteers and returned close to baseline after 24 h (Supplementary Fig. 4). The abundance of the primary metabolite paraxanthine increased more slowly and peaked between 360 and 480 min post ingestion (Fig. 4c). Individual metabolic time-courses revealed rather striking differences regarding caffeine metabolism (Fig. 4d). For example, volunteer profile 1 displayed a sharp increase in caffeine abundance, which remained relatively constant over 480 min, while paraxanthine abundance increased steadily during this time period. In contrast, volunteer profile 2 featured a similar increase in caffeine abundance, but started with an elevated theobromine baseline, which also represented the main metabolite of caffeine. These findings suggest that sampling sweat from the fingertips may be of particular interest for characterising personalised metabolic traits. Cytochrome P450 enzymes are key players in the hepatic metabolism and several isoforms are known to process xenobiotics at different rates48. Thus, xenobiotics like caffeine may be subjected to variable metabolisms depending on the individual expression of these enzymes. This may reveal individual physiological responses to xenobiotic exposure that may serve as proxies for hepatic metabolic activity. Therefore, the influence of the metabolic turnover of caffeine depending on the expression of cytochrome P450 enzymes was investigated in vitro using HepG2 cells (Supplementary Information, Supplementary note 1). Indeed, we found that HepG2 cells would increase the metabolic turnover of caffeine to its primary metabolites upon chemical induction of cytochrome P450 enzymes with benzo-[a]-pyrene (Supplementary Fig. 6). 
Moreover, the induction of these enzymes also affected the relative ratios of the primary metabolites significantly. This supports the conclusion that the individual enzymatic activity status may modulate the formation of metabolites subsequently detected in sweat from the fingertips. Statistical analysis of the metabolites reproducibly detected in all 47 (study C.1 + C.2) or 27 (study C.2) volunteer profiles revealed the significant upregulation of caffeine, paraxanthine and theophylline, as well as adenosine 4 h post ingestion. Theophylline and paraxanthine reflected the metabolic turnover of caffeine within each volunteer profile, while adenosine was identified as an endogenous metabolite upregulated upon caffeine ingestion (Fig. 4e, f). Another endogenous metabolite, dopamine, was significantly induced 5 h after consuming a caffeine capsule in 27 participants (study C.2, Fig. 4f, Supplementary Fig. 5). Adenosine and dopamine are not directly related to caffeine metabolism.

### Mathematical modelling quantifies individual dynamic metabolic responses

Fluctuations in the rate of sweat excretion cause significant variance in the collected sweat volumes. This represents a fundamental challenge for the time-course analysis of sweat from the fingertips. For example, the apparent down-regulation of all analytes at 120 min in volunteer profile 2 (Fig. 4d, arrow) strongly suggests that at that time-point less sweat was collected in comparison to the adjacent measurements (see Fig. 5e, arrow). Moreover, the magnitude of this effect on the apparent concentration is unknown. We used dynamic metabolic network modelling to discern the effects of the sweat volume on the measured time-series of caffeine catabolism in the body (see methods). In brief, caffeine uptake and clearance via its major metabolic products paraxanthine, theobromine and theophylline can be described by first order kinetics (Fig. 5a)49,50.
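Such a first-order scheme can be written as a small system of ordinary differential equations and integrated numerically. The Python sketch below is our own illustration with invented rate constants (not the fitted values from the study) and without the sweat-volume scaling; it reproduces the qualitative behaviour that caffeine peaks early while paraxanthine peaks hours later.

```python
import numpy as np

# Minimal numerical sketch of a first-order scheme like Fig. 5a:
# caffeine is absorbed from the gut (k1) and either converted to
# paraxanthine (k2), theobromine (k3) or theophylline (k4), or
# eliminated directly (k5); metabolites are cleared with a common
# constant ke. All rate constants are invented round numbers for
# illustration, NOT the fitted values from the study.
k1, k2, k3, k4, k5, ke = 3.0, 0.25, 0.05, 0.05, 0.10, 0.08  # per hour

def simulate(t_end=24.0, dt=0.01):
    gut, caf, par, tb, tp = 1.0, 0.0, 0.0, 0.0, 0.0  # dose normalised to 1
    ts = np.arange(0.0, t_end, dt)
    caf_curve = np.empty_like(ts)
    par_curve = np.empty_like(ts)
    for i in range(ts.size):            # explicit Euler integration
        caf_curve[i], par_curve[i] = caf, par
        d_gut = -k1 * gut
        d_caf = k1 * gut - (k2 + k3 + k4 + k5) * caf
        d_par = k2 * caf - ke * par
        d_tb = k3 * caf - ke * tb
        d_tp = k4 * caf - ke * tp
        gut += d_gut * dt
        caf += d_caf * dt
        par += d_par * dt
        tb += d_tb * dt
        tp += d_tp * dt
    return ts, caf_curve, par_curve

ts, caf, par = simulate()
# Caffeine peaks within the first hour, paraxanthine only hours later.
print(ts[np.argmax(caf)] < ts[np.argmax(par)])  # True
```

Fitting such a model to the measured AUCs, with the per-time-point sweat volumes as additional free parameters, is the essence of the normalisation strategy described in the text.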
Due to fasting we can set the initial caffeine concentration at time 0 min to zero (Fig. 4b). Additionally, we consider the sweat volume to be a function of time, but assume that at every time-point the sweat volume is constant across all metabolites. The assumption holds if the modelled metabolites are not reabsorbed during sweating. The resulting mathematical model was fitted to each volunteer. We estimated the kinetic constants, the initial concentrations of paraxanthine, theobromine and theophylline and the sweat volumes at each time-point, as exemplified for volunteer profiles 1 and 2 (Fig. 5b, c, e, f and Supplementary Table 2). In both cases our model described individual caffeine metabolism with good accuracy (goodness of fit R2adjusted > 0.90). Besides the possibility to estimate the rate of sweat excretion by means of this modelling approach, the shape of the curves visualises the dynamic metabolic patterns of each individual. Interestingly, the kinetic constant for uptake (k1) of caffeine is within the standard deviation, while the constants of conversion (k2, k3, k4) are approximately half of the literature values of population averages for blood plasma (Supplementary Table 2)51. Whereas the fractional conversion of caffeine to the main metabolic product paraxanthine in volunteer profile 1 is similar to what is described as population average51,52, we saw substantial differences for volunteer profile 2, who displayed theobromine as the main metabolic product of caffeine (Supplementary Table 3). We found individual differences to be robust over time. In Fig. 5d a two-dimensional PCA plot of the fitted conversion constants of caffeine (k2, k3, k4, k5) is shown. Individuals who generated at least two volunteer profiles (i.e. independent time-series) are marked with large symbols. Their respective colour indicates the month of sampling. Not only do two profiles of the same volunteer within one month cluster close to each other (e.g.
star symbols), but also the ones that were sampled more than 1.5 years apart are in close proximity (e.g. diamond symbols). The biggest difference of volunteer profiles from one participant was found for profiles 4 and 26 (big circles). For volunteer profile 26, however, we observed an overall poor fit (R2adjusted of 0.56 compared to 0.984 for profile 4). On another note, the original axes in Fig. 5d show that the catabolism of caffeine into paraxanthine (k2) and direct elimination (k5) is negatively correlated, whereas the catabolism of caffeine into theobromine (k3) and theophylline (k4) is positively correlated. This correlation is known in the literature and is likely due to common hepatic cytochrome P450 enzymes catalysing the conversion of caffeine to theobromine and theophylline (Fig. 4a)53.

### Targeted assays can be established for clinical implementation

The described metabolic phenotyping approach represents a powerful discovery tool for endogenous and xenobiotic compounds found in sweat of the fingertips. In order to evaluate the feasibility of clinical implementation, we established a targeted assay for caffeine and the primary metabolites theobromine, theophylline and paraxanthine on a triple quadrupole MS using multiple reaction monitoring (MRM, see Supplementary Information, Supplementary note 2 and Supplementary Table 4). For this purpose, five participants consumed a standardised coffee on 3 independent days after a 12 h caffeine-free fasting period and samples were collected at different time intervals in analogy to study B. The assay was validated and revealed linear ranges between 0.5 and 300 pg µL−1 of the respective metabolites (0.25–150 pg on column, Supplementary Fig. 9). LOD values were <0.2 pg µL−1 per collected sweat sample.
The overall process efficiencies were generally >88% and the precision of 25 pg µL−1 spiked metabolite was <1% (Supplementary Table 5), while the overall CV of the AUC of caffeine 5 h after coffee consumption of all volunteers over 3 independent days was 22% (Supplementary Fig. 10). This suggests that targeted assays based on the analysis of sweat from the fingertips can be successfully established directly from metabolic phenotyping.

## Discussion

The present study provides evidence that sweat from the fingertips can be used for dynamic metabolic phenotyping. The sample collection is non-invasive, safe and can be accomplished by untrained personnel, supporting patient compliance47. Other minimally to non-invasive approaches, such as microneedle patches or sweat patches, require longer collection periods of several minutes up to days, aiming to collect sweat at a single time-point17,54,55. In our approach, time-course analyses with frequent sampling can be performed due to the facile collection procedure. Our setup allows the analysis of unstimulated sweat in contrast to published approaches where sweat production was induced with pilocarpine iontophoresis (coupled with the Macroduct sweat collector) or physical exercise. Such stimuli were shown to alter the physiological sweat composition, which may introduce bias into the analysis17,56. The entire workflow can be accomplished within 20 min per sample, and has the potential to support large-scale longitudinal metabolic studies. However, metabotyping the small amounts of sweat requires sufficiently sensitive analytical equipment. Although our approach centres on metabolic profiling using dedicated high-resolution instrumentation, we demonstrated the successful transfer to a targeted assay. Targeted MS is now routinely implemented in the clinical laboratory57. Sweat from the fingertips represents a rich source for metabolic phenotyping.
Considering that a given metabolite may be represented in an LC-MS experiment by several features due to different adducts and charge states58, it may be estimated that several thousand distinct metabolites can potentially be identified in sweat from the fingertips using this methodology. So far, we have verified 250 metabolites with external standards (Supplementary Data 1). The analysis is robust and sensitive with limits of detection of metabolites found in the sub-picogram range per sweat sample. Indeed, the detection limits found in this study showed improved sensitivity compared to previously used methodologies59. As a result, numerous endogenous metabolites were identified, which have not yet been described in sweat, including dopamine, progesterone and melatonin (Fig. 1). This highlights the potential of this approach to successfully identify low-abundant metabolites, which are challenging to detect in other biofluids due to matrix effects (e.g. melatonin in blood or plasma)59,60,61. Analysis of the area under the curve of the internal standard revealed an overall coefficient of variation of 11% across 636 samples and indicated acceptable precision (Fig. 2). Proof-of-principle intervention studies were successfully carried out and support the applicability of the method. In two separate studies, participants were asked to consume a standardised cup of coffee or ingest a caffeine capsule after a caffeine– and theobromine-free diet for 12–72 h. After ingestion, sweat samples were collected up to 20 times within 27 h per volunteer. Sampling intervals of 15 min were feasible. Coffee consumption led to a significant upregulation of caffeine, chlorogenic acid and trigonelline within 15 min in all participants (Fig. 3, study B). This suggested a fast absorption and distribution of these xenobiotics, which also displayed distinct absorption and excretion kinetics (Fig. 3c). 
Altogether, 35 metabolites originating from coffee were detected in sweat from the fingertips. The observation of significant background levels of dimethylxanthines after coffee consumption in study B pointed towards a confounding problem with respect to the origin of these caffeine metabolites. In fact, their temporal increase may have been due to their absorption from consumed coffee and hepatic caffeine metabolism. In order to resolve this question, we designed an additional study in which participants ingested a caffeine capsule (200 mg) only and adhered to a longer caffeine– and theobromine-free fasting regime. Of note, the longer fasting periods (48–72 h) significantly reduced the background levels of the primary metabolites (Fig. 4b, study C) compared to 12 h fasting (study B). Interestingly, statistical analysis of the metabolic profiling data from study C, involving the caffeine capsule, revealed a significant upregulation of caffeine and of the metabolic products theophylline and paraxanthine across all participants after 480 min (Fig. 4). Moreover, participants featured significantly increased levels of dopamine after 5 h. Being an endogenous metabolite, it is plausible to assume that this upregulation corresponded to a physiological response to caffeine ingestion. Increased dopamine levels were already observed upon caffeine62, as well as coffee consumption by others63. Adenosine was significantly induced 4 h post ingestion of a caffeine capsule. Caffeine exerts most of its biological actions such as countering sleep pressure via antagonising adenosine receptors64. It has been demonstrated that caffeine increases plasma adenosine concentration potentially via receptor-mediated regulation of the plasma adenosine concentration65 and this finding seems to extend to sweat from the fingertips. We have previously described individual opposing responses with regard to anti-inflammatory effects after coffee consumption66. 
Such studies required the collection of blood from volunteers and this could now be facilitated by analysing sweat from the fingertips. Adenosine is also known to be an anti-inflammatory mediator that may regulate neutrophils, macrophages and lymphocytes through interacting with surface receptors of these cells67. It is important to note that sweat from the fingertips may not only reveal ingested xenobiotics, but also endogenously produced metabolic products and physiological responses to bioactive xenobiotics. Individual metabolic traits were then investigated by analysing the time-dependent metabolic evolution of caffeine upon ingesting a caffeine capsule (Fig. 4d). We found that sweat from the fingertips may be successfully used for the personalised assessment of such metabolic activities. Importantly, this strategy may be extended to other xenobiotics or drugs and their causally related metabolic products in order to obtain insight into specific processes of human metabolism in an individualised manner. Moreover, by inducing cytochrome P450 enzymes in HepG2 cells in vitro (Supplementary note 1), we were able to modulate the metabolic turnover of caffeine and the formation of specific catabolic products. This suggests that the relative ratios of caffeine to its primary metabolites may reflect hepatic activity, since the physiological hepatic metabolism of caffeine relies on a similar set of enzymes as in HepG2 cells. Variations in the sweat volume over the course of the study represented a major challenge for normalisation and quantification. Mathematical modelling overcame this issue by addressing molecular constraints of substrate-product relations of enzymatically linked metabolites. Successful modelling has two central prerequisites: firstly, the measurement of at least two metabolites with known dynamics and, secondly, a linear relationship of said metabolites to the sweat rate. 
Importantly, this allowed us to compute a sweat volume that is proportional to all metabolites at each time-point. This approach was capable of delivering estimates of individual rate constants for drug uptake, metabolism and clearance, and therefore allows us to model dynamic metabolic patterns in individuals (Fig. 5). Sampling sweat from the fingertips enables time-course studies, which are evaluated by means of conversion rates of metabolically related substance classes. Their observed robustness suggests that the development of personalised tests via finger sweat measurements is feasible. For example, caffeine elimination was shown to be a proxy for liver function68, and we hypothesise that a future study using an experimental setup identical to the caffeine capsule study could differentiate between patients with cirrhotic and normal livers. Additionally, we argue that the method presented here provides a convenient solution to the normalisation problem of finger sweat, which has previously been tackled only by employing microcapillaries69. However, microcapillaries require large volumes of sweat and thus need either long sampling times or physical exercise; both are detrimental when measuring fast pharmacokinetics, for example of caffeine. Our modelling approach, in contrast, circumvents the requirement for absolute quantitative information from a single measurement. In summary, metabolic phenotyping from sweat of the fingertips in conjunction with mathematical modelling is a promising approach to obtain dynamic metabolic patterns from individuals that may overcome the limitations of conventional composition biomarkers. Further research is currently being performed in order to consolidate the potential of sampling sweat from the fingertips for applications in precision medicine. ## Methods ### Reagents and chemicals LC-MS grade methanol, water, acetonitrile and formic acid used during sample preparation and LC-MS/MS analysis were purchased from VWR chemicals (Vienna, AT).
Xenobiotic and metabolite standards (caffeine, theobromine, theophylline, paraxanthine, 1-methylxanthine, 3-methylxanthine, 7-methylxanthine, 1-methyluric acid, 3-methyluric acid, 1,7-dimethyluric acid, 3,7-dimethyluric acid and 1,3,7-trimethyluric acid, chlorogenic acid, xanthine, 5-acetylamino-6-formylamino-3-methyluracil, dopamine and proteinogenic amino acids) were either purchased from Sigma–Aldrich (Vienna, AT) or Honeywell Fluka (GER). Caffeine capsules were bought from Mach dich wach! GmbH (GER). Sampling units were made from filter papers (precision wipes, number = 7552, white, 11 × 21 cm, Kimtech Science, Kimberly-Clark Professional, USA) using a circular puncher of 1 cm². ### Standard solutions and calibration samples Stock solutions of 1 mg mL−1 of the analytical standards and the internal deuterated standards caffeine-(trimethyl-D9) and N-acetyl-tryptophan in methanol were prepared and stored at 4 °C. For caffeine, paraxanthine, theobromine and theophylline, calibration curves were generated by spiking onto sampling units at the following concentrations: 0.1, 1, 5, 10, 15, 25, 50 and 100 pg µL−1. The internal deuterated standards were prepared at a concentration of 1 pg µL−1 in an aqueous solution containing 0.2% formic acid, which served as the extraction solution for all samples. ### Cohort design Altogether, 21 males and 19 females with ages between 20 and 55 years and a BMI of 21 ± 8 kg m−2 were enrolled in this study. Participants had different dietary habits regarding the consumption of coffee, ranging from rare to regular consumption. Prior to sampling, participants were required to abstain from caffeinated food (e.g. chocolate) and drinks (e.g. coffee, tea and energy drinks) for a period of 12–72 h. Sweat samples from the fingertips were collected at different time intervals and in the presence or absence of an intervention (see Table 1, studies A–C).
Study B involved the consumption of a standardised coffee (equivalent to a double espresso), while studies C.1 and C.2 involved the ingestion of a caffeine capsule (200 mg). Seven volunteers participated in more than one study, giving a total of 47 volunteer profiles for study C. It was ensured that the volunteers did not touch the prepared coffee with their fingers. ### Collection of sweat from the fingertips Sampling units of 1 cm² circular surface were pre-wetted with 3 µL water and provided in 0.5 mL Eppendorf tubes. For each sweat collection, volunteers cleaned their hands using warm tap water and dried them with disposable paper towels. Volunteers kept their hands open in the air at room temperature for 1 min. Then, the sampling unit was placed between thumb and index finger using clean tweezers and held for 1 min. Sweat formation was not forced. Filters were transferred back to labelled 0.5 mL Eppendorf tubes using clean tweezers and stored at 4 °C until sample preparation. ### Sample preparation Coffee extracts were prepared by taking a 1 mL aliquot of the 250 mL coffee cup used for studies A and B, which was centrifuged for 10 min at 15,000 × g. The supernatant was diluted 1:100, 1:1000 and 1:10000 with the extraction solution consisting of an aqueous solution of caffeine-(trimethyl-D9) (1 pg µL−1) with 0.2% formic acid. The dilutions were again centrifuged before analysis by LC-MS/MS. For the extraction of metabolites from the sampling units, 120 µL of the same extraction solution was added into the 0.5 mL Eppendorf tube containing the sampling unit. The metabolites were extracted by pipetting up and down 15 times. The sampling unit was pelleted at the bottom of the tube and the supernatant was transferred into HPLC vials equipped with a 200 µL V-shape glass insert (both Macherey-Nagel GmbH & Co. KG) and analysed by LC-MS/MS.
Additionally, 10 unused filters, 10 paper towels and 10 tap water blanks were extracted similarly to determine potential contaminants and metabolite background levels. ### LC-MS/MS analysis A Q Exactive HF (Thermo Fisher Scientific) mass spectrometer coupled to a Vanquish UHPLC System (Thermo Fisher Scientific) was employed for this study. Chromatography was performed using a Kinetex XB-C18 column (100 Å, 2.6 µm, 100 × 2.1 mm, Phenomenex Inc.). Mobile phase A consisted of water with 0.2% formic acid, mobile phase B of methanol with 0.2% formic acid, and the following gradient program was run: 1–5% B in 0.3 min, then 5–40% B from 0.3–4.5 min, followed by a column washing phase of 1.4 min at 80% B and a re-equilibration phase of 1.6 min at 1% B, resulting in a total runtime of 7.5 min. The flow rate was set to 500 µL min−1, the column temperature to 40 °C, the injection volume was 10 µL and the injection peak was found at RT = 0.3 min. All samples were analysed in technical duplicates. An untargeted mass spectrometric approach was applied for compound identification. Electrospray ionisation was performed in positive and negative ionisation mode. The MS scan range was m/z 100–1000 and the resolution was set to 60,000 (at m/z 200). The four most abundant ions of the full scan were selected for HCD fragmentation applying 30 eV collision energy. Fragments were analysed at a resolution of 15,000 (at m/z 200). Dynamic exclusion was applied for 6 s. The instrument was controlled using Xcalibur software (Thermo Fisher Scientific). ### Data analysis Raw files generated by the Q Exactive HF instrument were analysed using the Compound Discoverer Software 3.1 (Thermo Fisher Scientific). Identified compounds were manually reviewed using Xcalibur 4.0 Qual browser and Freestyle (version 1.3.115.19) (both Thermo Fisher Scientific) and the obtained MS2 spectra were compared to reference spectra retrieved from mzCloud (Copyright © 2013–2020 HighChem LLC, Slovakia).
The match factor cut-off from mzCloud was 80, while the mass tolerances were 5 and 10 ppm on the MS1 and MS2 levels, respectively. Moreover, the identity of compounds suggested by Compound Discoverer was verified by analysing purchased standards using the same LC-MS method. The Tracefinder Software 4.1 (Thermo Fisher Scientific) was used for peak integration and calculation of peak areas. The generated batch table was exported and further processed with Microsoft Excel (version 1808), GraphPad Prism (version 6.07) and the Perseus software (version 1.6.12.0)70, the latter being used for the principal component analysis. Untargeted metabolic profiling by mass spectrometry delivered more than 50,000 reproducible sweat-specific features per analysis. Microsoft PowerPoint (version 1808) was used for creating figures. ### Statistical analysis D'Agostino–Pearson tests as well as Kolmogorov–Smirnov tests with the Dallal–Wilkinson–Lilliefors p-value were performed to test whether values came from a Gaussian distribution. Two-tailed, paired t-tests or Wilcoxon signed-rank tests were performed for mass spectrometry data using GraphPad Prism (version 6.07) to evaluate the significance of the abundance increase/decrease of compounds and their metabolites. For Fig. 4b, a Kolmogorov–Smirnov test using the Dallal–Wilkinson–Lilliefors p-value was used to check whether values came from a Gaussian distribution. A two-tailed paired t-test (6 participants × 2 time-points) was performed for caffeine (p-value = 0.1033, t = 51.990, df = 5), paraxanthine (p-value = 0.0297, t = 3.012, df = 5), theobromine (p-value = 0.0203, t = 3.353, df = 5) and theophylline (p-value = 0.0118, t = 3.866, df = 5). Means and standard deviations are, for caffeine, 25 ± 25 for 12 h fasting and 4.8 ± 2.7 for 48 h fasting; for paraxanthine, 6.6 ± 5.5 for 12 h fasting and 2.2 ± 2.1 for 48 h fasting; for theobromine, 4.2 ± 2.0 for 12 h fasting and 1.5 ± 1.0 for 48 h fasting; and for theophylline, 1.1 ± 0.8 for 12 h fasting and 0.5 ± 0.5 for 48 h fasting.
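As a concrete illustration of the paired design described above, the t statistic for such a before/after comparison can be computed in a few lines. This is a hedged sketch with made-up intensities (the study itself used GraphPad Prism; the numbers below are not the measured data):

```python
import math
from statistics import mean, stdev

# Hypothetical metabolite intensities for 6 participants at two fasting
# durations (illustrative values only, one pair per participant).
fasting_12h = [55.0, 3.1, 18.0, 40.2, 9.5, 24.0]
fasting_48h = [4.1, 2.0, 6.3, 7.9, 3.2, 5.4]

def paired_t(a, b):
    """Paired t statistic and degrees of freedom for two matched samples."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Pairing matters: each participant serves as their own control, so the
# test is computed on the within-participant differences.
t_stat, df = paired_t(fasting_12h, fasting_48h)
```

The two-tailed p-value would then be read off a t distribution with `df` degrees of freedom (e.g. via `scipy.stats`); when the normality checks fail, the Wilcoxon signed-rank test is the non-parametric alternative used in the text.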
For Fig. 4f, normality of the data was checked with the D'Agostino–Pearson test. A two-tailed Wilcoxon signed-rank test was performed for adenosine (n = 47, sum of positive ranks = 1020, sum of negative ranks = −17.00, sum of signed ranks = 1003, p-value ≤ 0.0001). A two-tailed t-test was performed for dopamine (p-value ≤ 0.0001, t = 5.416, df = 26). The means and standard deviations are the following: for adenosine 0.1 ± 0.2 at 0 h and 0.7 ± 1.2 at 4 h, for dopamine 0.1 ± 0.1 at 0 h and 0.2 ± 0.1 at 5 h. Volcano plots were obtained using the Perseus Software70, setting the false discovery rate (FDR) to 0.05 and the minimal fold change (s0) to 0.1. For Fig. 4e, the −log p-value for caffeine is 19.02, for paraxanthine 14.48, for theophylline 9.16 and for adenosine 1.14. Shared-control plots were generated with an R script71. ### Mathematical modelling The model describes the concentration time-series of the ingested free caffeine and four sweat metabolites (caffeine, paraxanthine, theobromine, theophylline) within the constraints of the following assumptions (Fig. 5a): • caffeine metabolism can be described by mass-action kinetics in a one-compartment body model49,50, • the uptake of external caffeine is instantaneous (i.e. no lag time between ingestion and absorption into the body), • the steady-state volume of distribution of caffeine, paraxanthine, theobromine and theophylline is instantaneously reached and time independent50,51, • concentration enrichment due to an increase in the water fraction from blood to sweat and dilution through the inability of bound caffeine to diffuse cancel each other out72, • apparent metabolite concentrations are proportional to the sweat volume (see Supplementary Fig. 8, Eq. (1)), and finally, • sweat volumes are time dependent, but the same for all metabolites at one time-point. A mathematical formulation of the problem of fluctuating sweat volumes is given in Eq.
(1), where $$\widetilde{{{\bf{M}}}}(t)$$ is the measured mass vector of the internal metabolites and C(t) is the underlying concentration vector. Vsweat(t) is a time-dependent volume that represents the sampled sweat volume. The resulting mathematical model is explained in detail in Supplementary Note 3: Mathematical Model. Briefly, we describe the kinetics of caffeine metabolism with a system of ordinary differential equations (Supplementary Information, Supplementary Note 3: Mathematical Model, Eq. (2)). Subsequently, we connect the solution of this equation via the sweat volume to the concentrations measured in the caffeine capsule study. Our model only contains variables that are either known and thus fixed (volume of distribution, bioavailability, and ingested dose of caffeine) or have a concrete physical meaning but are unknown and need to be fitted (kinetic parameters, initial concentrations of paraxanthine, theobromine, and theophylline, sweat volumes). It allows us to estimate absolute concentrations of tri- and dimethylxanthines in the finger sweat. Note that Vsweat(t) is not constant over time and is unknown, and is thus a unique fitting parameter at each sampled time-point. Therefore, the number of parameters that need to be fitted for the model is equal to the number of time-points (one Vsweat value per time-point) plus the number of parameters of the kinetic model. This requires the simultaneous fitting of the kinetics of multiple metabolites upon assuming that, at each time-point, Vsweat(t) is the same for all metabolites (Eq. (2)). By doing so, the number of data points that can be used for fitting is multiplied by the number of metabolites while the number of parameters for Vsweat(t) stays constant. Thus (as long as the kinetic model is not overly complex) the system is sufficiently determined and data fitting is feasible.
$${\widetilde{{{\bf{M}}}}}(t)={V}_{{{\rm{sweat}}}}(t)\,{{{\bf{C}}}}(t)$$ (1) $${V}_{{\rm{sweat}}}={V}_{{\rm{sweat}}}^{{\rm{caffeine}}}={V}_{{\rm{sweat}}}^{{\rm{paraxanthine}}}={V}_{{\rm{sweat}}}^{{\rm{theobromine}}}={V}_{{\rm{sweat}}}^{{\rm{theophylline}}}$$ (2) Caffeine and its major catabolic products paraxanthine, theobromine and theophylline were modelled subject to the following constraints: first-order kinetics for all reactions (k1 to k8) with 0 ≤ k1 ≤ 10 h−1 and 0 ≤ k2–8 ≤ 0.2 h−1; an initial concentration of 0 for caffeine and 0 ≤ $${C}_{0}^{i}$$ ≤ 1 µg L−1 for the dimethylxanthines; and variability of Vsweat between 0.05 ≤ Vsweat(t) ≤ 4 µL. Generally, literature values of kinetic constants and sweat rates (without exercise) are well within the bounds of the model40,51,73,74. Finally, Supplementary Eq. (15) was used to fit the experimental data of each volunteer of the caffeine capsule study, normalised by the machine standard, individually. Fitting was performed in Python 3.7 with the SciPy package (version 1.6.1) using the curve_fit function and the integrated trust region reflective algorithm with default numerical tolerances (10−8)75. To find optimal settings for the fitting procedure, we performed a systematic investigation of the hyperparameters in Supplementary Note 4: Sensitivity Analysis. There, our implementation of the generalised, adaptive robust-loss function76, in combination with Monte Carlo sampling of initial parameters 100 times and selecting the solution with the lowest overall loss, resulted in the smallest errors. Therefore, the same settings were adopted for this study. Moreover, with the estimated CVs associated with the fitting procedure from the sensitivity analysis we calculated confidence intervals (n = 120, df = 93), which are shown as error bars in Fig. 5b, c, e, and f (and Supplementary Table 2).
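The role of Eq. (2) can be made concrete with a minimal, self-contained sketch: because the same Vsweat(t) multiplies every metabolite at a given time-point, ratios of measured masses are independent of the unknown sweat volume, which is what makes the simultaneous fit tractable. All values below (dose, volume of distribution, rate constants, the fixed metabolite ratio) are illustrative assumptions, not fitted study values:

```python
import math
import random

# One-compartment, first-order (Bateman-type) concentration curve for an
# oral dose; k1 = absorption rate, k2 = elimination rate. Parameter values
# are illustrative assumptions only.
def caffeine_conc(t, dose=200.0, v_d=36.0, k1=3.0, k2=0.12):
    return (dose / v_d) * k1 / (k1 - k2) * (math.exp(-k2 * t) - math.exp(-k1 * t))

random.seed(0)
times = [0.5, 1, 2, 4, 8]  # h after ingestion

# Eq. (1): measured mass = concentration x an unknown, fluctuating sweat
# volume; Eq. (2): that volume is shared by all metabolites at a time-point.
v_sweat = [random.uniform(0.05, 4.0) for _ in times]      # µL, unknown in practice
paraxanthine = [0.6 * caffeine_conc(t) for t in times]    # toy coupled metabolite
m_caf = [v * caffeine_conc(t) for v, t in zip(v_sweat, times)]
m_par = [v * c for v, c in zip(v_sweat, paraxanthine)]

# V_sweat(t) cancels in the ratio, so substrate/product ratios of measured
# masses are volume-independent at every time-point.
ratios = [mp / mc for mp, mc in zip(m_par, m_caf)]
```

In the real model the paraxanthine curve follows its own differential equation rather than a fixed ratio, but the cancellation of Vsweat(t) works the same way and is what allows one shared volume parameter per time-point in the curve_fit problem.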
Finally, we performed a PCA of the standard-scaled kinetic constants of caffeine degradation (k2, k3, k4, k5) of all volunteer profiles (Fig. 5d). ### Programs for mathematical modelling PCA of kinetic parameters (Fig. 5d) was performed with Python 3.7 and scikit-learn (version 0.23.2). The Levene test in the sensitivity analysis was performed with Python 3.7 and scipy (version 1.6.1). The mathematical modelling and sensitivity analysis were performed with Python 3.7, relying heavily on the packages scipy (version 1.6.1) and robust-loss-pytorch (version 0.0.2). ### Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article.
# 2.4: Power and Sum Rules for Derivatives In the next few sections, we’ll get the derivative rules that will let us find formulas for derivatives when our function comes to us as a formula. This is a very algebraic section, and you should get lots of practice. When you tell someone you have studied calculus, this is the one skill they will expect you to have.
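Before diving into the rules, it can help to see numerically what they will predict. This short sketch (plain Python, illustrative values only) approximates the derivative of $$f(x)=x^2$$ at $$x=3$$ with secant slopes; the rules in this section give the exact answer, 6:

```python
# Secant slopes (f(x+h) - f(x)) / h for shrinking h approach the
# derivative; for f(x) = x**2 at x = 3 the limit is 2*3 = 6.
def f(x):
    return x ** 2

x = 3.0
slopes = [(f(x + h) - f(x)) / h for h in (0.1, 0.01, 0.001)]
# slopes are approximately [6.1, 6.01, 6.001], closing in on 6
```

The derivative rules below let us skip this limiting process and write down the answer directly.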
## Building Blocks These are the simplest rules – rules for the basic functions. We won't prove these rules; we'll just use them. But first, let's look at a few so that we can see they make sense. ##### Example $$\PageIndex{1}$$ Find the derivative of $$y=f(x)=mx+b$$. Solution This is a linear function, so its graph is its own tangent line! The slope of the tangent line, the derivative, is the slope of the line: $f'(x)=m\nonumber$ ##### Rule The derivative of a linear function is its slope. ##### Example $$\PageIndex{2}$$ Find the derivative of $$f(x)=135$$. Solution Think about this one graphically, too. The graph of $$f(x)$$ is a horizontal line. So its slope is zero: $f'(x)=0\nonumber$ ##### Rule The derivative of a constant is zero. ##### Example $$\PageIndex{3}$$ Find the derivative of $$f(x)=x^2$$. Solution Recall the formal definition of the derivative: $f'(x)=\lim\limits_{h\to 0} \frac{f(x+h)-f(x)}{h}.\nonumber$ Using our function $$f(x)=x^2$$, $$f(x+h)=(x+h)^2=x^2+2xh+h^2$$. Then \begin{align*} f'(x) & = \lim\limits_{h\to 0} \frac{f(x+h)-f(x)}{h}\\ & = \lim\limits_{h\to 0} \frac{x^2+2xh+h^2-x^2}{h}\\ & = \lim\limits_{h\to 0} \frac{2xh+h^2}{h}\\ & = \lim\limits_{h\to 0} \frac{h(2x+h)}{h}\\ & = \lim\limits_{h\to 0} (2x+h)\\ & = 2x \end{align*} \nonumber From all that, we find that $$f'(x)=2x$$. Luckily, there is a handy rule we use to skip using the limit: ##### Power Rule The derivative of $$f(x)=x^n$$ is $f'(x)=nx^{n-1}.\nonumber$ ##### Example $$\PageIndex{4}$$ Find the derivative of $$g(x)=4x^3$$. Solution Using the power rule, we know that if $$f(x)=x^3$$, then $$f'(x)=3x^2$$. Notice that $$g$$ is 4 times the function $$f$$. Think about what this change means to the graph of $$g$$ – it’s now 4 times as tall as the graph of $$f$$. If we find the slope of a secant line, it will be $$\frac{\Delta g}{\Delta x}= \frac{4\Delta f}{\Delta x} =4\frac{\Delta f}{\Delta x}$$; each slope will be 4 times the slope of the secant line on the $$f$$ graph. 
This property will hold for the slopes of tangent lines, too: $\frac{d}{dx}\left(4x^3\right)=4\frac{d}{dx}\left(x^3\right)=4\cdot 3x^2=12x^2.\nonumber$ ##### Rule Constants come along for the ride, i.e., $$\frac{d}{dx}\left( kf\right)=kf'.$$ Here are all the basic rules in one place. ##### Derivative Rules: Building Blocks In what follows, $$f$$ and $$g$$ are differentiable functions of $$x$$. #### Constant Multiple Rule $\frac{d}{dx}\left( kf\right)=kf'\nonumber$ #### Sum and Difference Rule $\frac{d}{dx}\left(f\pm g\right)=f' \pm g'\nonumber$ #### Power Rule $\frac{d}{dx}\left(x^n\right)=nx^{n-1}\nonumber$ Special cases: $\frac{d}{dx}\left(k\right)=0 \quad \text{(Because $$k=kx^0$$.)}\nonumber$ $\frac{d}{dx}\left(x\right)=1 \quad \text{(Because $$x=x^1$$.)}\nonumber$ #### Exponential Functions $\frac{d}{dx}\left(e^x\right)=e^x\nonumber$ $\frac{d}{dx}\left(a^x\right)=\ln(a)\,a^x\nonumber$ #### Natural Logarithm $\frac{d}{dx}\left(\ln(x)\right)=\frac{1}{x}\nonumber$ The sum, difference, and constant multiple rule combined with the power rule allow us to easily find the derivative of any polynomial. ##### Example $$\PageIndex{5}$$ Find the derivative of $$p(x)=17x^{10}+13x^8-1.8x+1003$$. Solution \begin{align*} \frac{d}{dx}\left( 17x^{10}+13x^8-1.8x+1003 \right) & = \frac{d}{dx}\left( 17x^{10} \right)+\frac{d}{dx}\left( 13x^8 \right)-\frac{d}{dx}\left( 1.8x \right)+\frac{d}{dx}\left( 1003 \right)\\ & = 17\frac{d}{dx}\left( x^{10} \right)+13\frac{d}{dx}\left( x^8 \right)-1.8\frac{d}{dx}\left( x \right)+\frac{d}{dx}\left( 1003 \right)\\ & = 17\left(10x^9\right)+13\left(8x^7\right)-1.8\left(1\right)+0\\ & = 170x^9+104x^7-1.8 \end{align*} \nonumber You don't have to show every single step. Do be careful when you're first working with the rules, but pretty soon you’ll be able to just write down the derivative directly: ##### Example $$\PageIndex{6}$$ Find $$\frac{d}{dx}\left( 17x^2-33x+12 \right)$$. 
Solution Writing out the rules, we'd write $\frac{d}{dx}\left( 17x^2-33x+12 \right)=17(2x)-33(1)+0=34x-33.\nonumber$ Once you're familiar with the rules, you can, in your head, multiply the 2 times the 17 and the 33 times 1, and just write $\frac{d}{dx}\left( 17x^2-33x+12 \right)=34x-33.\nonumber$ The power rule works even if the power is negative or a fraction. In order to apply it, first translate all roots and basic rational expressions into exponents: ##### Example $$\PageIndex{7}$$ Find the derivative of $$y=3\sqrt{t}-\frac{4}{t^4}+5e^t$$. Solution The first step is to translate into exponents: $y=3\sqrt{t}-\frac{4}{t^4}+5e^t=3t^{1/2}-4t^{-4}+5e^t\nonumber$ Now you can take the derivative: \begin{align*} \frac{d}{dt}\left( 3t^{1/2}-4t^{-4}+5e^t \right) & = 3\left(\frac{1}{2}t^{-1/2}\right)-4\left(-4t^{-5}\right)+5\left(e^t\right) \\ & = \frac{3}{2}t^{-1/2}+16t^{-5}+5e^t \end{align*} \nonumber If there is a reason to, you can rewrite the answer with radicals and positive exponents: $y'= \frac{3}{2}t^{-1/2}+16t^{-5}+5e^t= \frac{3}{2\sqrt{t}}+\frac{16}{t^5}+5e^t\nonumber$ Be careful when finding the derivatives with negative exponents. We can immediately apply these rules to solve the problem we started the chapter with - finding a tangent line. ##### Example $$\PageIndex{8}$$ Find the equation of the line tangent to $$g(t)=10-t^2$$ when $$t = 2$$. Solution The slope of the tangent line is the value of the derivative. We can compute $$g'(t)=-2t$$. To find the slope of the tangent line when $$t = 2$$, evaluate the derivative at that point. The slope of the tangent line is -4. To find the equation of the tangent line, we also need a point on the tangent line. Since the tangent line touches the original function at $$t = 2$$, we can find the point by evaluating the original function: $$g(2)=10-2^2=6$$. The tangent line must pass through the point (2, 6). Using the point-slope equation of a line, the tangent line will have equation $$y-6=-4(t-2)$$.
Simplifying to slope-intercept form, the equation is $$y=-4t+14$$. Graphing, we can verify that this line is indeed tangent to the curve. We can also use these rules to help us find the derivatives we need to interpret the behavior of a function. ##### Example $$\PageIndex{9}$$ In a memory experiment, a researcher asks the subject to memorize as many words from a list as possible in 10 seconds. Recall is tested, then the subject is given 10 more seconds to study, and so on. Suppose the number of words remembered after $$t$$ seconds of studying could be modeled by $$W(t)=4t^{2/5}$$. Find and interpret $$W'(20)$$. Solution $$W'(t)=4\cdot \frac{2}{5}t^{-3/5}=\frac{8}{5}t^{-3/5}$$, so $$W'(20)=\frac{8}{5}(20)^{-3/5}\approx 0.2652$$. Since $$W$$ is measured in words, and $$t$$ is in seconds, $$W'$$ has units words per second. $$W'(20)\approx 0.2652$$ means that after 20 seconds of studying, the subject is learning about 0.27 more words for each additional second of studying. Next we will delve more deeply into some business applications. To do that, we first need to review some terminology. Suppose you are producing and selling some item. The profit you make is the amount of money you take in minus what you have to pay to produce the items. Both of these quantities depend on how many you make and sell. (So we have functions here.) Here is a list of definitions for some of the terminology, together with their meaning in algebraic terms and in graphical terms. ##### Cost Your cost is the money you have to spend to produce your items. ##### Fixed Cost The Fixed Cost (FC) is the amount of money you have to spend regardless of how many items you produce. FC can include things like rent, purchase costs of machinery, and salaries for office staff. You have to pay the fixed costs even if you don’t produce anything. ##### Total Variable Cost The Total Variable Cost (TVC) for $$q$$ items is the amount of money you spend to actually produce them.
TVC includes things like the materials you use, the electricity to run the machinery, gasoline for your delivery vans, maybe the wages of your production workers. These costs will vary according to how many items you produce. ##### Total Cost The Total Cost (TC, or sometimes just C) for $$q$$ items is the total cost of producing them. It’s the sum of the fixed cost and the total variable cost for producing $$q$$ items. ##### Average Cost The Average Cost (AC) for $$q$$ items is the total cost divided by $$q$$, or $AC(q) = \frac{TC}{q}\nonumber$ You can also talk about the average fixed cost, $$\frac{FC}{q}$$, or the average variable cost, $$\frac{TVC}{q}$$. ##### Marginal Cost The Marginal Cost (MC) at $$q$$ items is the cost of producing the next item. Really, it’s $MC(q) = TC(q + 1) - TC(q).\nonumber$ In many cases, though, it’s easier to approximate this difference using calculus (see Example 1 below). And some sources define the marginal cost directly as the derivative, $MC(q) = TC'(q).\nonumber$ In this course, we will use both of these definitions as if they were interchangeable. The units on marginal cost are cost per item. For the purposes of this course, if a question asks for marginal cost, revenue, profit, etc., compute it using the derivative if possible, unless specifically told otherwise. Why is it okay that there are two definitions for Marginal Cost (and Marginal Revenue, and Marginal Profit)? We have been using slopes of secant lines over tiny intervals to approximate derivatives. In this example, we’ll turn that around – we’ll use the derivative to approximate the slope of the secant line.
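The claim that the one-unit secant slope and the derivative nearly agree is easy to check numerically. Here is a sketch with a made-up cost function (a hypothetical example, not one from this section):

```python
# Hypothetical total cost function, in dollars (illustrative only).
def total_cost(q):
    return 0.5 * q ** 2 + 3 * q + 100

q = 10
# "Cost of the next item" definition: a secant slope over a 1-unit interval.
mc_secant = total_cost(q + 1) - total_cost(q)
# Derivative definition: TC'(q) = q + 3, by the power and sum rules.
mc_derivative = q + 3
# mc_secant is 13.5 and mc_derivative is 13: close enough that the two
# definitions of marginal cost are used interchangeably.
```

The half-dollar gap between the two numbers shrinks (relatively) as the cost function flattens or as $$q$$ grows, which is why the distinction rarely matters in practice.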
Notice that the “cost of the next item” definition is actually the slope of a secant line, over an interval of 1 unit: $MC(q) = C(q + 1) - C(q) = \frac{C(q+1)-C(q)}{1}.\nonumber$ So this is approximately the same as the derivative of the cost function at q: $MC(q) \approx C'(q).\nonumber$ In practice, these two numbers are so close that there’s no practical reason to make a distinction. For our purposes, the marginal cost is the derivative, which in turn is (approximately) the cost of the next item. ##### Example $$\PageIndex{10}$$ The table shows the total cost (TC) of producing $$q$$ items.

| Items, $$q$$ | TC |
|---|---|
| 0 | $20,000 |
| 100 | $35,000 |
| 200 | $45,000 |
| 300 | $53,000 |

1. What is the fixed cost? 2. When 200 items are made, what is the total variable cost? The average variable cost? 3. When 200 items are made, estimate the marginal cost. Solution 1. The fixed cost is $20,000, the cost even when no items are made. 2. When 200 items are made, the total cost is $45,000. Subtracting the fixed cost, the total variable cost is $45,000 - $20,000 = $25,000. The average variable cost is the total variable cost divided by the number of items, so we would divide the $25,000 total variable cost by the 200 items made: $25,000/200 = $125. On average, each item had a variable cost of $125. 3. We need to estimate the value of the derivative, or the slope of the tangent line at $$q = 200$$. Finding the secant line from $$q=100$$ to $$q=200$$ gives a slope of $\frac{45,000-35,000}{200-100}=100.\nonumber$ Finding the secant line from $$q=200$$ to $$q=300$$ gives a slope of $\frac{53,000-45,000}{300-200}=80.\nonumber$ We could estimate the tangent slope by averaging these secant slopes, giving us an estimate of $90/item. This tells us that after 200 items have been made, it will cost about $90 to make one more item. ##### Example $$\PageIndex{11}$$ The cost to produce $$x$$ items is $$C(x) = \sqrt{x}$$ hundred dollars. 1. What is the cost for producing 100 items? 101 items? What is the cost of the 101st item? 2.
Calculate $$C '(x)$$ and evaluate $$C '$$ at $$x = 100$$. How does $$C '(100)$$ compare with the last answer in Part a? Solution 1. $$C(100) = 10$$ hundred dollars = $1000 and $$C(101) = 10.0499$$ hundred dollars = $1004.99, so it costs $4.99 for that 101st item. Using this definition, the marginal cost is $4.99. 2. $$C'(x)=\frac{1}{2}x^{-1/2} = \frac{1}{2\sqrt{x}}$$, so $$C'(100)=\frac{1}{2\sqrt{100}}=\frac{1}{20}$$ hundred dollars = $5.00. Note how close these answers are! This shows (again) why it’s OK that we use both definitions for marginal cost. ##### Demand Demand is the functional relationship between the price $$p$$ and the quantity $$q$$ that can be sold (that is, demanded). Depending on your situation, you might think of $$p$$ as a function of $$q$$, or of $$q$$ as a function of $$p$$. ##### Revenue Your revenue is the amount of money you actually take in from selling your products. ##### Total Revenue The Total Revenue (TR, or just R) for $$q$$ items is the total amount of money you take in for selling $$q$$ items. Total Revenue is price multiplied by quantity, $TR = p \cdot q.\nonumber$ ##### Average Revenue The Average Revenue (AR) for $$q$$ items is the total revenue divided by $$q$$, or $\frac{TR}{q}.\nonumber$ ##### Marginal Revenue The Marginal Revenue (MR) at $$q$$ items is the revenue from producing the next item, $MR(q) = TR(q + 1) - TR(q).\nonumber$ Just as with marginal cost, we will use both this definition and the derivative definition: $MR(q) = TR'(q).\nonumber$ ##### Profit Your profit is what’s left over from total revenue after costs have been subtracted. The Profit (P) for $$q$$ items is $TR(q) - TC(q),\nonumber$ the difference between total revenue and total costs.
The average profit for $$q$$ items is $\frac{P}{q}.\nonumber$ The marginal profit at $$q$$ items is $P(q + 1) - P(q),\nonumber$ or $P'(q).\nonumber$ ## Graphical Interpretations of the Basic Business Math Terms #### Illustration Here are the graphs of TR and TC for producing and selling a certain item. The horizontal axis is the number of items, in thousands. The vertical axis is the number of dollars, also in thousands. First, notice how to find the fixed cost and variable cost from the graph here. FC is the $$y$$-intercept of the TC graph. ($$FC = TC(0)$$.) The graph of TVC would have the same shape as the graph of TC, shifted down. ($$TVC = TC - FC$$.) $$MC(q) = TC(q + 1) - TC(q)$$, but that’s impossible to read on this graph. How could you distinguish between TC(4022) and TC(4023)? On this graph, that interval is too small to see, and our best guess at the secant line is actually the tangent line to the TC curve at that point. (This is the reason we want to have the derivative definition handy.) $$MC(q)$$ is the slope of the tangent line to the TC curve at $$(q, TC(q))$$. $$MR(q)$$ is the slope of the tangent line to the TR curve at $$(q, TR(q))$$. Profit is the distance between the TR and TC curves. If you experiment with a clear ruler, you’ll see that the biggest profit occurs exactly when the tangent lines to the TR and TC curves are parallel. This is the rule that profit is maximized when $$MR = MC$$, which we'll explore later in the chapter. ##### Example $$\PageIndex{12}$$ The demand, $$D$$, for a product at a price of $$p$$ dollars is given by $$D(p)=200-0.2p^2$$. Find the marginal revenue when the price is $10. Solution First we need to form a revenue equation.
Since Revenue = Price $$\times$$ Quantity, and the demand equation shows the quantity of product that can be sold, we have $R(p)=D(p)\cdot p=\left(200-0.2p^2\right)p=200p-0.2p^3.\nonumber$ Now we can find marginal revenue by finding the derivative: $R'(p)=200(1)-0.2(3p^2)=200-0.6p^2\nonumber$ At a price of $10, $$R'(10)=200-0.6(10)^2=140$$. Notice the units for $$R'$$ are $$\frac{\text{dollars of Revenue}}{\text{dollar of price}}$$, so $$R'(10)=140$$ means that when the price is $10, the revenue will increase by $140 for each dollar that the price was increased.

This page titled 2.4: Power and Sum Rules for Derivatives is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Shana Calaway, Dale Hoffman, & David Lippman (The OpenTextBookStore) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
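The marginal-revenue computation in this example is easy to check numerically. The sketch below is my own illustration (the function names are not from the text): it recomputes $$R(p)$$ and $$R'(p)$$ and compares the derivative against a finite-difference estimate.

```python
def demand(p):
    # D(p) = 200 - 0.2 p^2, the quantity demanded at price p
    return 200 - 0.2 * p**2

def revenue(p):
    # R(p) = p * D(p) = 200p - 0.2 p^3
    return p * demand(p)

def marginal_revenue(p):
    # R'(p) = 200 - 0.6 p^2
    return 200 - 0.6 * p**2

# at p = $10: R'(10) = 200 - 60 = 140, matching the worked example
assert abs(marginal_revenue(10) - 140.0) < 1e-9

# a finite-difference slope of R(p) near p = 10 should agree closely
fd = (revenue(10 + 1e-6) - revenue(10)) / 1e-6
assert abs(fd - marginal_revenue(10)) < 1e-2
```

This mirrors the point made earlier about marginal cost: the "next item" (here, "next dollar of price") difference and the derivative give nearly the same number.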
# Class 7 Maths NCERT Solutions for Chapter 1 Integers Chapter 1 Integers EX – 1.2 ## Integers Question 1. Write down a pair of integers whose: (a) The sum is – 7 (b) Difference is -10 (c) Sum is 0 Solution. (a) A pair of integers whose sum is – 7 can be (- 1) and (- 6). ∵ (- 1) + (- 6) = – 7 (b) A pair of integers whose difference is -10 can be (- 11) and (- 1) ∵ – 11 – (- 1) = – 11 + 1 = – 10 (c) A pair of integers whose sum is 0 can be 1 and (- 1). ∵ (- 1) + (1) = 0. Question 2. (a) Write a pair of negative integers whose difference gives 8. (b) Write a negative integer and a positive integer whose sum is – 5. (c) Write a negative integer and a positive integer whose difference is – 3. Solution. (a) A pair of negative integers whose difference gives 8 can be – 12 and – 20. ∵ (-12) – (- 20) = -12 + 20 = 8 . (b) A negative integer and a positive integer whose sum is -5 can be – 13 and 8. ∵ (- 13) + 8 = -13 + 8 = – 5 (c) A negative integer and a positive integer whose difference is -3 can be – 1 and 2. ∵ (- 1) – 2 = – 1 – 2 = – 3 Question 3. In a quiz, team A scored – 40, 10, 0, and Team B scored 10, 0, – 40 in three successive rounds. Which team scored more? Can we say that we can add integers in any order? Solution. Total scores of team A = (- 40) + 10 + 0 = – 40 + 10 + 0 = – 30 and, total scores of team B = 10 + 0 + (- 40) = 10 + 0 – 40 = – 30 Since the total scores of each team are equal. ∴ No team scored more than the other but each has an equal score. Yes, integers can be added in any order and the result remains unaltered. For example, 10 + 0 + (- 40) = – 30 = – 40 + 0 + 10 Question 4. Fill in the blanks to make the following statements true: (i) (- 5) + (- 8) = (- 8) + (………) (ii) – 53 + ……. = – 53 (iii) 17 + …… = 0 (iv) [13 + (- 12)] + (……) = 13 + [(- 12) + (- 7)] (v) (- 4) + [15 + (- 3)] = [- 4 + 15] + …… Solution. 
(i) (- 5) + (- 8) = (- 8) + (- 5) (ii) – 53 + 0 = – 53 (iii) 17 + (-17) = 0 (iv) [13 + (- 12)] + (- 7) = (13) + [(- 12) + (- 7)] (v) (- 4) + [15 + (- 3)] = [(- 4 ) + 15] + (- 3)
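The observation in Question 3 — that integers can be added in any order — and the identities filled in for Question 4 are easy to verify by brute force. This small sketch (the variable names are mine) checks both.

```python
from itertools import permutations

rounds = [-40, 10, 0]   # the three round scores; both teams played these values

# every ordering of the three rounds gives the same total, -30
totals = {sum(order) for order in permutations(rounds)}
assert totals == {-30}

# the fill-in-the-blank identities from Question 4
assert (-5) + (-8) == (-8) + (-5)
assert -53 + 0 == -53
assert 17 + (-17) == 0
assert (13 + (-12)) + (-7) == 13 + ((-12) + (-7))
assert (-4) + (15 + (-3)) == ((-4) + 15) + (-3)
```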
• Sep 15th 2009, 09:17 PM Dida
Hi I'm new here and I need some help please.
1. S=3/1-r if r= -3/4 , S=?
2. t=16(-1/2) ^x-1 if n=8
3. T= -1/2(2)^n-1 if t=-32 , n=?
4. write the quadratic equation whose roots are 2/3 and -1
5. What are the restrictions on the graph of y= x+2/ (x-a)(bx-1)
6. How many solutions are there for the following non-linear system? y= 2/x and y= (x+1)^2 -2
If anyone could help me with these it would be great! Thanks a lot!
• Sep 16th 2009, 01:53 AM Finley
There are quite a few questions here. I believe the moderation team likes members to limit the number of questions to around 3 per post. However, here is assistance with the first four:
1. S=3/1-r if r= -3/4 , S=? $S=\frac{3}{1}-r$ By subbing in the value of r: $S=\frac{3}{1}+\frac{3}{4}$ Can you solve it from there?
2. $t=16(-1/2)^x-1$ Actually, this question makes no sense. No n value exists in the original equation, so you're unable to substitute n=8 anywhere.
3. $T= -1/2(2)^n-1$ - I will assume the -1 is not part of the exponent. $-32= -1/2(2)^n-1$ $-31= -1/2(2)^n$ $62=(2)^n$ $ln(62)=n*ln(2)$ Can you take it from there?
4. write the quadratic equation whose roots are 2/3 and -1 By using the null factor rule: $\frac{2}{3}=x$ $3x-2=0$ $-1=x$ $x + 1=0$ Therefore, $0=(x+1)(3x-2)$ Can you solve it from there?
• Sep 16th 2009, 11:47 AM Dida
Thanks for the help, but for #1-3 I'll try to explain the question better.
1. S= 3 ________ if r= -3/4 , S=? 1-r
2. the x is the n sorry, so it's like: t= 16(-1/2)^n-1 (yes the -1 is part of the exponent)
3. the -1 is part of the exponent, but that wouldn't make a difference right? since you're using ln anyway. Couldn't I use log?
4. I get 4 but that's the answer right? Because you can't really do anything more to it.
Thanks again!
• Sep 16th 2009, 06:05 PM Dida
• Sep 16th 2009, 10:34 PM Finley
1. S= 3 ________ if r= -3/4 , S=? 1-r $S=\frac{3}{1-r}$ $S=\frac{3}{1-(-3/4)}$ $S=\frac{3}{1.75}$.... Can you solve that?
2.
$t=16(-1/2)^{(n-1)}$ $t=16(-1/2)^{(8-1)}$ $t=16(-1/2)^7$ $t=16 \times -\frac{1}{128}$.. can you finish it? 3. $T= -1/2(2)^{(n-1)}$ $-32= -1/2(2)^{(n-1)}$ $64= (2)^{(n-1)}$ $2^6= (2)^{(n-1)}$ $6= n-1$... sooo n=?? 4. It depends what form you want the answer in. I'd suggest expanding out into 'standard form'.
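For reference, the four worked answers can be checked numerically. The sketch below is my own, using the values as given in the thread (r = -3/4, n = 8, t = -32, roots 2/3 and -1).

```python
from math import log2

# 1. S = 3 / (1 - r) with r = -3/4; the denominator is 1 - (-3/4) = 1.75
S = 3 / (1 - (-3/4))
assert abs(S - 12/7) < 1e-12

# 2. t = 16(-1/2)^(n-1) at n = 8: 16 * (-1/128)
t = 16 * (-1/2) ** (8 - 1)
assert t == -0.125

# 3. solve -32 = -1/2 * 2^(n-1)  =>  2^(n-1) = 64  =>  n = 7
n = log2(64) + 1
assert n == 7

# 4. quadratic with roots 2/3 and -1: (3x - 2)(x + 1) = 3x^2 + x - 2
for x in (2/3, -1):
    assert abs(3 * x**2 + x - 2) < 1e-12
```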
# What has only one base and triangular lateral faces?

It is a pyramid.

Related questions

### Has only one base and triangular lateral faces?
The description given could fit that of a pyramid.

### How many lateral faces does a triangular prism have?
3. You wanna know how?? The two triangular ends are the bases, and the 3 rectangles joining them are the lateral faces. :)

### Can any face of a pyramid be considered a base of the pyramid?
Any face of a tetrahedron can be considered a base. A pyramid, in general, has one polygonal face (with n sides) and n triangular faces. In such a pyramid, the triangular faces would normally not be considered bases - only the n-gon would.

### How do you find the base of a square pyramid when you only have the volume and height?
The base of a square pyramid is the only face that is a square - all the others are triangular in shape. So you do not need any measurements to determine which is the base.

### How many faces are on a square pyramid?
There would be one face for each side of the square. The answer is four if you are only counting the triangular faces. It would be five if you are including the square base. * * * * * One square base and four triangular faces (that meet at an apex above the base).

### How is a triangular prism different from a triangular pyramid?
A triangular prism has two parallel, congruent triangular bases joined by three rectangular lateral faces; a triangular pyramid has a single triangular base and three triangular lateral faces that meet at an apex. So a triangular pyramid is not a prism, and vice versa.

### How many faces, edges and vertices does a cone have?
A face is a polygon. There are none on a cone. There is only one vertex at the top of the cone. Although the cone does not have a face it does have a lateral side and a base.
### Does a pyramid have at least two congruent parallel bases?
Not true. They have only one base and several (3 or more) lateral triangular faces. A pyramid has a single vertex over a base - there are no parallel faces in any pyramid.

### How many faces are on a square pyramid?
There would be one face for each side of the square. The answer is four if you are only counting the triangular faces. It would be five if you are including the square base. * * * * * One square base and four triangular faces (that meet at an apex above the base).

A tetrahedron.

### What is a sphere's base shape?
A sphere has no base, and since it has no base, it has only a surface area and no lateral area.
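The face counts in these answers can be cross-checked with Euler's polyhedron formula, F + V - E = 2. This check is not part of the Q&A above; the counts below are standard values supplied for illustration.

```python
# (faces, vertices, edges) for some of the solids discussed above
solids = {
    "triangular prism": (5, 6, 9),                   # 2 triangular bases + 3 rectangles
    "triangular pyramid (tetrahedron)": (4, 4, 6),   # 1 base + 3 lateral triangles
    "square pyramid": (5, 5, 8),                     # 1 square base + 4 lateral triangles
}

# Euler's formula for convex polyhedra: F + V - E = 2
for name, (f, v, e) in solids.items():
    assert f + v - e == 2
```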
# Pension Calculator

Article by Harsh Katara

## Pension Calculator

Pension funds are contributed by employers to fund the retirement of their employees, and this pension calculator is useful for working out the amount an employee is eligible for.

The formula for calculating a pension is as below:

AS * F * N

Wherein,

• AS is the average salary per company rules
• F is the factor in terms of percentage, which is decided by the company
• N is the number of years worked in that company

Pensions are retirement plans that are funded by the company and are paid to employees either as a lump sum or in installments until death in case the plan is sold to an insurance company. The pension amount varies from company to company, and a pension calculator is used to compute the amount that the employee will receive, which depends upon the number of years he has served the company. The percentage amount, or factor, also varies from company to company, and that determines the amount calculated. A pension is usually paid in one of two ways: as a lump sum, or as an annuity if the employee so desires.

### How to Calculate using the Pension Calculator?

One needs to follow the below steps in order to calculate the amount of pension.

Step #1: Determine the average salary of the employee. The salary that will be used in the calculation is the salary the employee would be eligible to receive just before retirement.

Step #2: The average salary will mostly be based on the highest-paid years, generally an average of 3 years or more, depending upon the policy of the company.
Step #3: Add the salary for those years and divide the sum by the number of years taken into consideration.

Step #4: Now, determine the factor, or the percentage, that shall be paid by the company as a pension.

Step #5: Multiply the value arrived at in step 3 by the factor determined in step 4.

Step #6: Determine the number of years served by the employee in the company, for which he shall be eligible to receive a pension.

Step #7: Multiply the value arrived at in step 5 by the number of years determined in step 6 to get the amount of the pension.

Step #8: Divide the value arrived at in step 7 by 12 to get the monthly pre-tax pension amount.

You can download this Pension Calculator Excel Template here – Pension Calculator Excel Template

### Example #1

Mr. A, aged 58 years, has been working in a company for around 25 years and will be retiring at the age of 60. The company has the policy of contributing towards pensions for those employees who have worked a minimum of 10 years in the company, and it has determined to pay 0.05 percent of the average of the 3 highest-paid years' salary during his entire tenure.

Below are the details of Mr. A’s top 5 years’ salary and bonus that he received during his tenure (per annum basis):

Based on the given information and the policy of the company, you are required to calculate the pension amount that Mr. A is eligible to receive.

Solution:

We are given the factor as 0.05 percent, and the number of years served by him would be 25 years plus the 2 years remaining in his employment, which equals 27 years.

However, we now need to determine the average salary of Mr. A, and for that, we need to take the highest three of the below salaries and take the average of the same.

Hence, we shall take an average of year 2, year 3 and year 5

• = (125,000 + 121,000 + 121,000) / 3
• = \$122,333.33

Now, we can use the below formula to calculate the pension amount.
Pension formula = AS * F * N

• = 122,333.33 x 0.05% x 27 years
• = \$1,651.50

Therefore, the monthly pension amount would be

• = \$1,651.50 / 12
• = \$137.63

### Example #2

Company MNC has a policy of funding the pension of those employees who have served the company for at least 5 years, and the factor that it uses to determine the pension is 2%, based upon the estimated average salary drawn just before retirement.

Mr. Crane has been serving the company for around 8 years, and currently, he is drawing an average salary of \$10,000 per annum. The actuary determines that the average salary for the department in which Mr. Crane is working would be around \$32,000 per annum and estimates that the person will work for an average of 20 years.

Based on the given information, you are required to calculate the pension amount that Mr. Crane could be eligible for.

Solution:

We are given the factor as 2 percent, and the number of years served by him would be 20 years. The average salary determined by the actuary is \$32,000.

Now, we can use the below formula to calculate the pension amount.

Pension = AS * F * N

• = 32,000 x 2.00% x 20 years
• = 12,800

Therefore, the monthly pension amount would be

• = 12,800 / 12
• = 1,066.67

### Conclusion

The pension calculator, as discussed above, can be used to calculate the amount of pension that an eligible employee would receive at the time of retirement. Decisions related to a pension generally cannot be reversed after retirement, hence they need to be considered carefully. Also, the amounts can be transferred to IRA accounts without any tax consideration.

### Recommended Articles

This has been a guide to the Pension Calculator. Here we provide you with the calculator that is used to calculate the eligible amount of the retirement pension that the employee will receive, along with some examples. You may also take a look at the following useful articles –
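Both worked examples follow directly from the AS × F × N formula. The sketch below (the function and variable names are my own) recomputes them, with the factor expressed as a decimal (0.05% = 0.0005, 2% = 0.02).

```python
def pension(average_salary, factor, years):
    """Annual pension: AS * F * N, with the factor given as a decimal."""
    return average_salary * factor * years

# Example 1: average of the three highest-paid years
avg = (125_000 + 121_000 + 121_000) / 3      # ~122,333.33
annual = pension(avg, 0.0005, 27)            # 0.05% factor, 27 years of service
monthly = annual / 12

assert abs(avg - 122_333.33) < 0.01
assert abs(annual - 1_651.50) < 0.01
assert abs(monthly - 137.63) < 0.01          # the monthly pre-tax amount

# Example 2: 32,000 x 2% x 20 years
assert abs(pension(32_000, 0.02, 20) - 12_800) < 1e-6
```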
If you're seeing this message, it means we're having trouble loading external resources on our website. If you're behind a web filter, please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked. Main content # Simplifying rational expressions: grouping CCSS.Math: ## Video transcript simplify the rational expression and state the domain so once again we have a a trinomial over trinomial to see if we can simplify we need to factor both of them and that's also going to help us figure out the domain the domain is essentially figuring out all of the valid X's that we can put into this expression and not get something that's undefined so let's factor the numerator and the denominator so let's start with the numerator let's start with the numerator there and since we have a 2 out front since we have a 2 out front factoring by grouping will probably be the best way to go so let's just rewrite it here I'm just working on the numerator right now - x squared plus 13x plus 20 so we need to find two numbers a and B that if I multiply them a times B needs to be equal to let me write it over here on the right a times B needs to be equal to 2 times 20 so it has to be equal to positive 40 and then a plus B has to be equal to 13 and the numbers that jump out at me immediately are five and eight alright five times eight is forty five plus eight is 13 so we can break this 13 X into a 5x and an 8x and so we can rewrite this as 2x squared and it will break up the 13 X into and I'm going to write the 8x first I'm going to write 8x plus 5x and the reason why I wrote the 8x first is because the eight shares common factors with the two so maybe we can factor out a 2x here it'll simplify a little bit five shares factors with the 20 so let's see where this goes and then we finally have a plus 20 here and now we can group them that's the whole point of factoring by grouping so you group these first two characters right here let's factor out a 2x so this would become 2x 
times well 2x squared divided by 2x is just going to be X 8x divided by 2x is going to be plus 4 and then let's factor let's factor out these or let's group these two characters and if we factor out a 5 what do we get we get plus 5 Plus 5 times X + 4 5 x divided by 5 is X 20 divided by 5 is 4 and we have an x + 4 in both cases so we can factor that out right we have x + 4 x 2 terms we could undistribute it so this thing this thing over here will be x + 4 times x let me do that same color - x + 5 - x + 5 and we factored this numerator expression right there now let's do the same thing with the denominator expression I'll do that in a different let me see I want to run out of color so the denominator right over here let's do the same exercise with it so we have 2x squared + 17 x + 30 let's look for an A and a B when I multiply them I get 2 times 30 which is 60 and a plus a B when I add them I get 17 and once again let's see 5 and 12 seem to work so let's split this up let's split this up into 2 x squared so we're going to split up the 17 X into a into a what was it 12 X + a 5 X right that adds up to 17 X and when you multiply 12 times 5 you get 60 and then plus 30 plus 30 and then on this first group right here this first group right here we can factor out a 2x so if you factor out a 2x you get 2x times X plus 6 and in that second group in that second group we can factor out a 5 so you get plus 5 times X plus 6 and now we can factor out an X plus 6 and we get we get x + 6 x times 2x plus 5 2x plus 5 so we've now factored the numerator the denominator let's rewrite both of these expressions or write this entire rational expression with the numerator and the denominator factored so the numerator so this is going to be equal to X plus 4 times 2x plus 5 we figure that out right there and then the denominator the denominator is X plus 6 times 2x plus 5 now you might already might already jump out at you that you have a 2x plus 5 of the numerator in the denominator we can 
cancel them out and we will cancel them out but before we do that let's work on the second part of this question state the domain so what are the valid x-values that we could put in here or I get some a more interesting question what are the x-values that will make this this rational expression undefined what's the x-values that will make the denominator equal to 0 and when will the denominator equal to 0 well either when X plus 6 is equal to 0 or when 2x plus 5 is equal to 0 and we could just solve for X here subtract 6 from both sides you get X is equal to negative 6 and if you subtract 5 from both sides you get 2x is equal to negative 5 divide both sides by 2 you get X is equal to negative 5 halves so if X so we could say that the domain let me write this over here the domain is all real numbers all real numbers other than or except except X is equal to negative 6 and X is equal to negative 5 halves and the reason why we have to exclude those is those would make this denominator either way you're right it's going to make the denominator equal to zero and it would make the entire rational expression undefined so we've stated the domain now let's just simplify the rational expression we've already said that X cannot be equal to negative 5 halves or negative 6 so let's just divide the numerator in the denominator by 2x plus 5 or just look at the 2x plus 5 we know that 2x plus five won't be 0 because x won't be equal to negative 5 halves so we can cancel those out and the simplified rational expression is just X plus 4 over X +6
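The factoring worked out in the transcript can be sanity-checked numerically. This sketch (my own, not part of the video) compares the original rational expression with its simplified form and confirms the excluded domain values.

```python
def original(x):
    # (2x^2 + 13x + 20) / (2x^2 + 17x + 30)
    return (2 * x**2 + 13 * x + 20) / (2 * x**2 + 17 * x + 30)

def simplified(x):
    # the reduced form after cancelling the common (2x + 5) factor
    return (x + 4) / (x + 6)

# the two expressions agree wherever both are defined
for x in (-3, 0, 1, 2.5, 10):
    assert abs(original(x) - simplified(x)) < 1e-12

# the excluded x-values (x = -6 and x = -5/2) zero out the original denominator
for x in (-6, -5 / 2):
    assert 2 * x**2 + 17 * x + 30 == 0
```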
### Author Topic: Method of Continuation  (Read 4038 times)

#### Dana Kayes
• Jr. Member
• Posts: 6
• Karma: 0

##### Method of Continuation
« on: October 05, 2012, 11:09:44 AM »
Could anyone help me understand the method of continuation? The notes for Lecture 8 are just a little too complex for me to follow. What are the steps, and what is the aim? I think that if I can better understand what we're trying to achieve by using it, I'll be able to follow the notes better. Thanks

#### Vitaly Shemet
• Jr. Member
• Posts: 10
• Karma: 1

##### Re: Method of Continuation
« Reply #1 on: October 06, 2012, 09:29:29 AM »
I understood the even/odd "trick" as a classification of what to do in each case, but I can't understand the reasoning and the idea behind applying it, which leaves me unable to apply the method to other types of equations (not the 1D wave equation)

#### Victor Ivrii

##### Re: Method of Continuation
« Reply #2 on: October 06, 2012, 10:16:32 AM »
A method of continuation is a cheap trick to reduce certain BVP to those we already know how to solve. In its easiest form we looked at it in the lectures.

Consider a BVP with one "special" variable $x$ (there could be other variables). This $x$ runs from $0$ to $+\infty$ (there could be other cases). Consider the same problem but with $x$ running from $-\infty$ to $\infty$, thus dropping boundary condition(s) at $x=0$. Assume that

1) plugging $-x$ instead of $x$ leaves this new boundary problem unchanged. F.e.
it happens when we consider equations with constant coefficients containing only even order derivatives by $x$;

Good: $u_{t}+u_{xx}$, $u_{yxx}+ u_{y}-u_{xx}$
Bad: $u_{tx}+u_{xx}$, $u_t+u_{xxx}$

Variable coefficients can affect this situation:
Also good: $u_{tt}- xu_{xxx}$
Bad: $u_t + xu_{xx}$

So far we applied the method of continuation to wave and heat equations:
$$u_{tt}-c^2u_{xx}=f, \qquad u|_{t=0}=g, \qquad u_t|_{t=0}=h$$
and
$$u_{t}-ku_{xx}=f, \qquad u|_{t=0}=g.$$

2) Assume that the boundary conditions contain only terms with odd order derivatives with respect to $x$ and are homogeneous: $u_x|_{x=0}=0$ or $(u_x-u_{xxx})|_{x=0}=0$ fit the bill. Note that even functions satisfy these boundary conditions automatically. Then: we continue all known functions to $x<0$ as even functions and solve the extended problem (ignoring boundary condition(s) at $x=0$).

2*) Alternatively, assume that the boundary conditions contain only terms with even order derivatives with respect to $x$ and are homogeneous: $u|_{x=0}=0$ or $(u-u_{xx})|_{x=0}=0$ fit the bill. Note that odd functions satisfy these boundary conditions automatically. Then: we continue all known functions to $x<0$ as odd functions and solve the extended problem (ignoring boundary condition(s) at $x=0$).

#### Aida Razi
• Sr. Member
• Posts: 62
• Karma: 15

##### Re: Method of Continuation
« Reply #3 on: October 14, 2012, 05:39:40 PM »
The best explanation for the method of continuation. Thank you professor,
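To make the odd-continuation case concrete: for the wave equation with a Dirichlet condition $u|_{x=0}=0$ and zero initial velocity, continuing the initial displacement as an odd function makes d'Alembert's formula satisfy the boundary condition automatically. The following is my own numerical illustration, not from the thread; the particular odd $g$ and the wave speed are assumptions.

```python
import math

c = 2.0  # wave speed, arbitrary for this check

def g(x):
    # initial displacement, chosen to be odd: g(-x) = -g(x)
    return x * math.exp(-x**2)

def u(x, t):
    # d'Alembert solution of u_tt = c^2 u_xx with u(x,0) = g, u_t(x,0) = 0
    return 0.5 * (g(x + c * t) + g(x - c * t))

# the odd continuation enforces u(0, t) = 0 for every t:
# u(0,t) = (g(ct) + g(-ct)) / 2 = (g(ct) - g(ct)) / 2 = 0
for t in (0.1, 0.5, 1.3, 4.0):
    assert abs(u(0.0, t)) < 1e-15
```

An even continuation of $g$ would, in the same way, enforce the Neumann condition $u_x|_{x=0}=0$ instead.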
# 1310-pop-16-key(1)

Math 1310 Daily Poppers 18

1. If you know that $x = 1$ is a zero of the function $f(x) = x^3 + 5x^2 - x - 5$, find the remaining zeros.
A. -1, 5
B. -1, -5
C. -1, 5, -5
D. 0, -5
E. None of the above

2. What is the remainder? $x^3 + 6x^2 + 5$ Hint: you must use long division.
A. -5x
B. -5x - 6
C. 1
D. -5x + 6
E. None of the above

3. Find the zeros of the function: $P(x) = 9x^3 + 16x$
A. $\pm \frac{4}{3} i$
B. $\pm \frac{4}{3}$
C. $0, \pm \frac{4}{3} i$
D. $0, \pm \frac{4}{3}$
E. None of the above

4. If some of the zeros of a polynomial function are $2 + 8i$, $4$, $3 - 7i$ and $0$, what is the smallest degree the polynomial can have?
A. 7
B. 5
C. 6
D. 8

5. If the sum of the zeros of a quadratic function is 4 and the product of the zeros is 29, which of these could be the quadratic function?
A. $f(x) = x^2 + 4x + 29$
B. $f(x) = x^2 + 29x + 4$
C. $f(x) = x^2 - 4x + 29$
D. $f(x) = x^2 - 29x + 4$

Suppose $f(x) = 2(x - 1)^3(x + 1)^3(2 - x)$.

6. What are the zeros?
A. -1, 1, 2, -2
B. -1, 1, -2
C. -1, 1, 2, 0
D. -1, 1, 2
E. -1, 1, -2, 0

Suppose $f(x) = 2(x - 1)^3(x + 1)^3(2 - x)$.

7. What is the y intercept?
A. 2
B. -2
C. 4
D. -4
E. 0

Suppose $f(x) = 2(x - 1)^3(x + 1)^3(2 - x)$.

8. What is the degree of the polynomial?
A. 6
B. 7
C. 8
D. 9

Suppose $f(x) = 2(x - 1)^3(x + 1)^3(2 - x)$.

9. Which of these is the end behavior of the graph of the function?
A. ↙↘
B. ↙↗
C. ↖↗
D. ↖↘

Suppose $f(x) = 3(x + 1)^2(x - 2)^3(1 - x)$.

10. Which of these describes the behavior of the graph of the function where $x = -1$?
A. It’s like a parabola that opens downward.
B. It’s like a line that rises from left to right.
C. It’s like a line that falls from left to right.
D. It’s like a parabola that opens upward.

## This note was uploaded on 02/22/2012 for the course MATH 1310 taught by Professor Marks during the Summer '08 term at University of Houston.
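A quick way to find the remaining zeros once one zero is known (as in Question 1) is synthetic division. The sketch below is my own illustration, assuming the cubic is $f(x) = x^3 + 5x^2 - x - 5$ with known zero $x = 1$; the helper name is hypothetical.

```python
def synthetic_division(coeffs, r):
    # Divide a polynomial (coefficients, highest degree first) by (x - r).
    # Returns (quotient coefficients, remainder).
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])
    return out[:-1], out[-1]

# f(x) = x^3 + 5x^2 - x - 5, known zero x = 1
quotient, remainder = synthetic_division([1, 5, -1, -5], 1)
assert remainder == 0            # confirms x = 1 really is a zero
assert quotient == [1, 6, 5]     # x^2 + 6x + 5 = (x + 1)(x + 5)

# so the remaining zeros are -1 and -5
for z in (-1, -5):
    assert z**3 + 5 * z**2 - z - 5 == 0
```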
The old version, the simple discount calculator, is also available.

# Discount Calculator

By Mateusz Mucha and Hanna Pamuła, PhD candidate

This discount calculator allows you to find the reduced price of a product and the amount of money you save. You can also use it in reverse and calculate the size of the discount or the original price. As a shopper, you can also use it as a sale price calculator to help you negotiate the price. Got a coupon? Find out what the final price will be after you factor in that 15% off discount that you have. These are just a few of the situations this calculator will help you with. If you are on the other side of these transactions, that is, if you are a salesperson, you might want to find out what your sale price will be (our profit margin with discount or markdown calculator may also be handy). Read on to find out how to calculate discount and what the discount formula is.

## How to calculate discount and sale price?

Just follow these few simple steps:

1. Find the original price (for example `\$90`)
2. Get the discount percentage (for example `20%`)
3. Calculate the savings: `20% of \$90 = \$18`
4. Subtract the savings from the original price to get the sale price: `\$90 - \$18 = \$72`
5. You're all set!

## Discount formula

The formula for discount is exactly the same as the percentage decrease formula:

`discounted_price = original_price - (original_price * discount / 100)`

## Other considerations

Depending on your needs, our sale price calculator goes well with our double and triple discount calculators. The commission calculator works the other way around - it determines the salesman's bonus for selling a product. Finally, the percent off calculator does... the same thing, but some people use different wording and are more likely to find it this way.

## FAQ

### What are the types of discount?
These are the three most common types of discounts.

• Quantity discounts - where you receive a discount based on the number of units you purchase. Thank you, economies of scale!
• Trade discounts - discounts provided by a supplier to distributors. This discount allows distributors to vary their own prices, so that all items can be sold.
• Promotional discounts - a useful sales promotion technique, and the most common type of discount for consumers. You've surely seen one in the form of a 20% off sale, or a buy one get one free offer.

### How do I calculate discount percentage?

To calculate the percentage discount between two prices, follow these steps:

1. Subtract the post-discount price from the pre-discount price.
2. Divide this new number by the pre-discount price.
3. Multiply the resultant number by 100.
4. Be proud of your mathematical abilities.

### What are fake discounts?

Fake discounts, or fictitious pricing, is a disingenuous practice that some retailers take part in, where the supposed 'pre-sale price' of an item is drastically inflated, or the 'post-sale price' of an item is actually its market price. The effect of this is to deceive the consumer into believing they are getting a bargain, making them more likely to purchase an item.

### How do I calculate a 10% discount?

1. Take the original price.
2. Divide the original price by 100 and multiply it by 10.
3. Alternatively, move the decimal one place to the left.
4. Subtract this new number from the original one.
5. This will give you the discounted value.
6. Spend the money you've saved!

### How do I take 20% off a price?

1. Take the original price.
2. Divide the original price by 5.
3. Alternatively, divide the original price by 100 and multiply it by 20.
4. Subtract this new number from the original one.
5. The number you calculated is the discounted value.

### How do I calculate 30 percent off?

1. Take the pre-sale price.
2. Divide the original price by 100 and multiply it by 30.
3.
Take this new number away from the original one. 4. The new number is your discounted value. 5. Laugh at how much money you're saving! ### Why do clearance sales happen? Fashion is seasonal. Nobody is going to buy a light summer shirt in the middle of winter. To keep all of this unsold stock from clogging up their warehouses, shops will very often choose to sell their products at a highly discounted rate at the end of the season to make room for a new batch of seasonal stock. ### How do I calculate a discount rate in Excel? While it's easier to use the Omni Discount Calculator, here are the steps to calculate discount rate in Excel: 1. Input the pre-sale price (for example into cell A1). 2. Input the post-sale price (for example into cell B1). 3. Subtract the post-sale price from the pre-sale price (In C1, input =A1-B1) and label it “discount amount”. 4. Divide the new number by the pre-sale price and multiply it by 100 (In D1, input =(C1/A1)*100) and label it “discount rate”. 5. Right click on the final cell and select Format Cells. 6. In the Format Cells box, under Number, select Percentage and specify your desired number of decimal places. ### How do I find the original price? To calculate the original price of an object when you only have its post-sale price and the percentage discount, follow these steps: 1. Divide the discount by 100. 2. Subtract this number from 1. 3. Divide the post-sale price by this new number. 4. Marvel at what you could have been paying! ### What is percentage discount? Percentage discount is a discount that is given to a product or service that is given as an amount per hundred. For example, a percentage discount of 20% would mean that an item that originally cost \$100 would now cost \$80. This is common with promotional and seasonal sales, as a way of encouraging consumers to buy an item at a reduced cost. Mateusz Mucha and Hanna Pamuła, PhD candidate
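The discount formula above, and the reverse percentage-discount calculation from the FAQ, translate directly into code. This is a small sketch with names of my own choosing.

```python
def discounted_price(original_price, pct_off):
    # the discount formula: original - original * discount / 100
    return original_price - original_price * pct_off / 100

def discount_pct(pre, post):
    # reverse direction: percentage discount between two prices
    return (pre - post) / pre * 100

# the $90 with 20% off example: savings of $18, sale price $72
assert discounted_price(90, 20) == 72
assert abs(discount_pct(90, 72) - 20) < 1e-12
```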
# 94.7 kg to lbs - 94.7 kilograms to pounds

Do you need to learn how much 94.7 kg is in lbs and how to convert 94.7 kg to lbs? You are in the right place. This whole article is dedicated to kilogram to pound conversion - both theoretical and practical. We also want to highlight that this whole article is devoted to one specific amount of kilograms: 94.7 kilograms. So if you want to learn more about 94.7 kg to pound conversion - keep reading.

Before we get to the practice - that is, the 94.7 kg how much lbs calculation - we will give you a little theoretical information about these two units - kilograms and pounds. So let's move on.

How to convert 94.7 kg to lbs? 94.7 kilograms is equal to 208.777762114 pounds, so 94.7 kg equals 208.777762114 lbs.

## 94.7 kgs in pounds

We will start with the kilogram. The kilogram is a unit of mass. It is a base unit in the metric system, formally known as the International System of Units (abbreviated SI). Sometimes the kilogram is written as kilogramme. The symbol of the kilogram is kg.

The kilogram was first defined in 1795, as the mass of one liter of water. This definition was simple but difficult to use. Then, in 1889, the kilogram was defined by the International Prototype of the Kilogram (abbreviated IPK). The IPK was made of 90% platinum and 10% iridium. The IPK was used until 2019, when it was replaced by a new definition.

The new definition of the kilogram is based on physical constants, especially the Planck constant. Here is the official definition: “The kilogram, symbol kg, is the SI unit of mass. It is defined by taking the fixed numerical value of the Planck constant h to be 6.62607015×10−34 when expressed in the unit J⋅s, which is equal to kg⋅m2⋅s−1, where the metre and the second are defined in terms of c and ΔνCs.”

One kilogram is 0.001 tonne. It can also be divided into 100 decagrams and 1000 grams.
## 94.7 kilogram to pounds

You have learned some facts about the kilogram, so now let's move on to the pound. The pound is also a unit of mass. We want to point out that there is more than one kind of pound. What are we talking about? For instance, there is also the pound-force. In this article we concentrate only on the pound-mass.

The pound is used in the British and United States customary systems of measurement. To be honest, this unit is in use in other systems as well. The symbol of the pound is lb.

There is no descriptive definition of the international avoirdupois pound. It is exactly 0.45359237 kilograms. One avoirdupois pound is divided into 16 avoirdupois ounces and 7000 grains. The avoirdupois pound was enacted in the Weights and Measures Act 1963. The definition of this unit was placed in the first section of this act: "The yard or the metre shall be the unit of measurement of length and the pound or the kilogram shall be the unit of measurement of mass by reference to which any measurement involving a measurement of length or mass shall be made in the United Kingdom; and- (a) the yard shall be 0.9144 metre exactly; (b) the pound shall be 0.45359237 kilogram exactly."

### How many lbs is 94.7 kg?

94.7 kilograms is equal to 208.777762114 pounds. If you want to convert kilograms to pounds, multiply the kilogram value by 2.2046226218.

### 94.7 kg in lbs

The most theoretical part is already behind us. In the next part we will tell you how much 94.7 kg is in lbs. Just see: 94.7 kilograms = 208.777762114 pounds. This is an exact result of how much 94.7 kg is in pounds. You may also round off the result. After rounding, the outcome is as follows: 94.7 kg = 208.78 lbs.

You know how many lbs 94.7 kg is, so have a look at how many kg 94.7 lbs is: 94.7 pounds = 42.955197439 kilograms. Of course, in this case you can also round off the result. After rounding, the result is: 94.7 lb = 42.96 kg.
We are also going to show you the 94.7 kg to pounds and 94.7 pounds to kg outcomes in charts. We want to begin with a table for how much 94.7 kg is in pounds.

### 94.7 Kilograms to Pounds conversion table

Kilograms (kg) | Pounds (lb) | Pounds (lbs) (rounded off to two decimal places)
94.7 | 208.777762114 | 208.78

Now look at a chart for how many kilograms 94.7 pounds is.

Pounds | Kilograms | Kilograms (rounded off to two decimal places)
94.7 | 42.955197439 | 42.96

Now you know how many pounds 94.7 kg is and how many kilograms 94.7 pounds is, so it is time to move on to the 94.7 kg to lbs formula.

### 94.7 kg to pounds

To convert 94.7 kg to US lbs a formula is needed. We are going to show you the formula in two different versions. Let's begin with the first one:

Number of kilograms * 2.20462262 = 208.777762114, the outcome in pounds

The first formula gives you the most exact result. In some cases even the smallest difference can matter. So if you need an accurate result, the first formula will be the best option to find how many pounds are equivalent to 94.7 kilograms.

So let's move on to the second formula, which also lets you find out how much 94.7 kilograms is in pounds. The shorter version of the formula is as follows:

Number of kilograms * 2.2 = the result in pounds

As you can see, this formula is simpler. It can be a better solution if you want to convert 94.7 kilograms to pounds quickly, for instance, while shopping. You only need to remember that the final result will not be as exact.

Now we are going to show you how to use these two versions of the formula in practice. But before we convert 94.7 kg to lbs, we want to show you another way to find out how many lbs 94.7 kg is, totally effortlessly.

### 94.7 kg to lbs converter

Another way to check what 94.7 kilograms is equal to in pounds is to use a 94.7 kg to lbs calculator. What is a kg to lb converter? A converter is an application.
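The two versions of the formula above can be written as a short Python sketch. This is an illustration, not part of the article's converter; the function names are made up, and the factor 2.20462262 is the one used throughout this article.

```python
KG_TO_LB = 2.20462262  # conversion factor used throughout this article

def kg_to_lbs_exact(kg: float) -> float:
    """Full-precision formula: kilograms * 2.20462262."""
    return kg * KG_TO_LB

def kg_to_lbs_quick(kg: float) -> float:
    """Shorter mental-math formula: kilograms * 2.2 (less exact)."""
    return kg * 2.2

print(round(kg_to_lbs_exact(94.7), 2))  # about 208.78 lbs
print(round(kg_to_lbs_quick(94.7), 2))  # about 208.34 lbs
```

Comparing the two printed values shows the price of the shortcut: the quick formula is off by roughly 0.44 lbs for 94.7 kg, and the gap grows with the number of kilograms.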
The calculator is based on the longer version of the formula which we showed you above. Thanks to the 94.7 kg to pound calculator you can quickly convert 94.7 kg to lbs. You only need to enter the number of kilograms which you want to convert and click the 'calculate' button. The result will be shown in a flash.

So try to calculate 94.7 kg into lbs using the 94.7 kg vs pound converter. We entered 94.7 as the number of kilograms. Here is the result: 94.7 kilograms = 208.777762114 pounds. As you can see, this 94.7 kg vs lbs converter is very simple to use.

Now let's move on to our primary issue: how to convert 94.7 kilograms to pounds on your own.

#### 94.7 kg to lbs conversion

We will start the "94.7 kilograms equals how many pounds" conversion with the first version of the formula to get the most exact outcome. A quick reminder of the formula:

Number of kilograms * 2.20462262 = 208.777762114, the result in pounds

So what do you have to do to check how many pounds equal 94.7 kilograms? Just multiply the number of kilograms, in this case 94.7, by 2.20462262. It gives 208.777762114. So 94.7 kilograms is equal to 208.777762114 pounds. You can also round it off, for example, to two decimal places. Then 94.7 kilograms = 208.78 pounds.

It is high time for an example from everyday life. Let's convert 94.7 kg of gold into pounds. So 94.7 kg is equal to how many lbs? As in the previous example, multiply 94.7 by 2.20462262. It equals 208.777762114. So the equivalent of 94.7 kilograms in pounds, when it comes to gold, is 208.777762114. In this case it is also possible to round off the result. This is the outcome after rounding off, this time to one decimal place: 94.7 kilograms = 208.8 pounds.

Now we can move on to examples converted with the short version of the formula.

#### How many 94.7 kg to lbs

Before we show you an example, a quick reminder of the shorter formula:

Number of kilograms * 2.2 = 208.34, the result in pounds

So 94.7 kg is equal to how many lbs? As in the previous example, you need to multiply the number of kilograms, in this case 94.7, by 2.2.
See: 94.7 * 2.2 = 208.34. So 94.7 kilograms is about 208.34 pounds.

Let's do another conversion with the shorter formula. Now convert something from everyday life, for example, 94.7 kg of strawberries. So let's convert: 94.7 kilograms of strawberries * 2.2 = 208.34 pounds of strawberries. So 94.7 kg in pound mass is about 208.34.

Now that you know how much 94.7 kilograms weighs in pounds and can convert it with two different formulas, let's move on. We are going to show you all the outcomes in charts.

#### Convert 94.7 kilogram to pounds

We are aware that outcomes presented in tables are much clearer for most of you. We understand this, so we gathered all these outcomes in charts for your convenience. Thanks to this you can easily compare the 94.7 kg equivalent in lbs outcomes.

Let's begin with a 94.7 kg equals lbs table for the first version of the formula:

Kilograms | Pounds | Pounds (after rounding off to two decimal places)
94.7 | 208.777762114 | 208.78

And now have a look at the 94.7 kg to pounds chart for the second formula:

Kilograms | Pounds
94.7 | 208.34

As you can see, the two results for how much 94.7 kilograms is in pounds are close but not identical. The bigger the number, the more considerable the difference. Please keep this in mind when you want to convert a number bigger than 94.7 kilograms into pounds.

#### How many kilograms 94.7 pound

Now you know how to calculate how many pounds 94.7 kilograms is, but we want to show you something more. Do you want to know what it is? What about a 94.7 kilograms to pounds and ounces calculation? We are going to show you how to do it step by step. Let's begin.

How much is 94.7 kg in lbs and oz? The first thing you need to do is multiply the number of kilograms, in this case 94.7, by 2.20462262. So 94.7 * 2.20462262 = 208.777762114. The integer part is the number of pounds. So in this case there are 208 pounds.
To get how much 94.7 kilograms is in pounds and ounces, you then have to multiply the fractional part by 16. So multiply 0.777762114 by 16. It equals 12.444193824 ounces. So your result is 208 pounds and 12.444193824 ounces. You can also round off the ounces, for example, to two decimal places. Then your outcome is 208 pounds and 12.44 ounces. As you can see, the calculation of 94.7 kilograms in pounds and ounces is quite simple.

The last calculation which we will show you is the conversion of 94.7 foot pounds to kilogram meters. Both of them are units of work. To convert foot pounds to kilogram meters another formula is needed. Before we give it to you, have a look:

• 94.7 kilogram meters = 684.966411595 foot pounds,
• 94.7 foot pounds = 13.092743765 kilogram meters.

Now look at the formula:

Number of foot pounds * 0.13825495 = the outcome in kilogram meters

So to convert 94.7 foot pounds to kilogram meters you need to multiply 94.7 by 0.13825495. It gives 13.092743765. So 94.7 foot pounds is equal to 13.092743765 kilogram meters. It is also possible to round off this result, for instance, to two decimal places. Then 94.7 foot pounds will be about 13.09 kilogram meters.

We hope that this calculation was as easy as the 94.7 kilograms into pounds conversions.

This article was a big compendium about the kilogram, the pound and the 94.7 kg to lbs conversion. Thanks to this conversion you know what 94.7 kilograms is equivalent to in pounds. We showed you not only how to convert 94.7 kilograms to pounds but also two other calculations: how much 94.7 kg is in pounds and ounces, and how many kilogram meters 94.7 foot pounds is.

We also showed you another way to do the 94.7 kilograms to pounds conversion, that is, using the 94.7 kg to pound converter. This is the best solution for those of you who do not like converting on your own at all, or need to make such calculations in a quicker way.
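The pounds-and-ounces split described above (integer part = whole pounds, fractional part × 16 = ounces) can be sketched in Python. This is an illustrative snippet, not the article's converter, and it uses the same 2.20462262 factor as the rest of the article.

```python
KG_TO_LB = 2.20462262  # pounds per kilogram, as used in this article

def kg_to_lbs_oz(kg: float) -> tuple:
    """Split a mass in kilograms into whole pounds and remaining ounces."""
    total_lbs = kg * KG_TO_LB
    pounds = int(total_lbs)               # integer part: whole pounds
    ounces = (total_lbs - pounds) * 16    # fractional part, times 16 ounces per pound
    return pounds, ounces

lbs, oz = kg_to_lbs_oz(94.7)
print(lbs, round(oz, 2))  # 208 pounds and about 12.44 ounces
```

Because an avoirdupois pound is exactly 16 ounces, multiplying the leftover fraction of a pound by 16 converts it into ounces directly.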
We hope that now all of you are able to do the "94.7 kilograms equals how many pounds" calculation, on your own or with the use of our 94.7 kg to pounds calculator.

Don't wait! Convert 94.7 kilograms of mass to pounds in the way you like.

Do you want to do a calculation other than 94.7 kilograms to pounds? For example, for 5 kilograms? Check our other articles! We guarantee that the calculations for other numbers of kilograms are as simple as for 94.7 kilograms.

### How much is 94.7 kg in pounds

To quickly sum up this topic, that is, how much 94.7 kg is in pounds, we prepared one additional section for you. Here you can see the most important information about how much 94.7 kg is in lbs and how to convert 94.7 kg to lbs. Have a look.

What does the kilogram to pound conversion look like? It is just multiplying two numbers. What does the 94.7 kg to pound conversion formula look like? Have a look:

The number of kilograms * 2.20462262 = the result in pounds

Now you can see the result of the conversion of 94.7 kilograms to pounds. The exact result is 208.777762114 lbs.

It is also possible to calculate how much 94.7 kilograms is in pounds with the other, shortened version of the equation. Check it down below:

The number of kilograms * 2.2 = the result in pounds

So this time, 94.7 kg is equal to how many lbs? The answer is about 208.34 lbs.

How to convert 94.7 kg to lbs in a few seconds? You can also use the 94.7 kg to lbs converter, which will do all the calculations for you and give you an exact result.

#### Kilograms [kg]

The kilogram, or kilogramme, is the base unit of mass in the metric system. It is approximately the mass of a cube of water 10 centimeters on a side.

#### Pounds [lbs]

A pound is a unit of mass commonly used in the United States and the British Commonwealth. A pound is defined as exactly 0.45359237 kilograms.
Tuesday, January 16, 2018

Mathematics - Python - Linear Algebra - Determinants - Computing determinants (4th-order square matrices, complex numbers, equations)

1.

$$\det\begin{pmatrix}x&1&0&0\\0&x&1&0\\0&0&x&1\\1&0&0&x\end{pmatrix}=\det\begin{pmatrix}0&1&0&0\\-x^{2}&x&1&0\\0&0&x&1\\1&0&0&x\end{pmatrix}=\det\begin{pmatrix}1&0&0&0\\0&x^{2}&1&0\\0&0&x&1\\0&-1&0&x\end{pmatrix}=x^{4}-1$$

$$x^{4}-1=0$$
$$(x^{2}+1)(x^{2}-1)=0$$
$$(x^{2}+1)(x+1)(x-1)=0$$
$$x=\pm 1,\ \pm i$$

2.

$$\det\begin{pmatrix}x&1&0&x\\0&x&x&1\\1&x&x&0\\x&0&1&x\end{pmatrix}=\det\begin{pmatrix}0&1&0&0\\-x^{2}&x&x&1-x^{2}\\1-x^{2}&x&x&-x^{2}\\x&0&1&x\end{pmatrix}=\det\begin{pmatrix}x^{2}&x&1-x^{2}\\x^{2}-1&x&-x^{2}\\-x&1&x\end{pmatrix}$$

$$=\det\begin{pmatrix}2x^{2}&x&1-2x^{2}\\2x^{2}-1&x&-2x^{2}\\0&1&0\end{pmatrix}=(1-2x^{2})(2x^{2}-1)-(-4x^{4})=-4x^{4}+4x^{2}-1+4x^{4}=4x^{2}-1$$

$$4x^{2}-1=0$$
$$(2x+1)(2x-1)=0$$
$$x=\pm\frac{1}{2}$$

Code (Emacs), Python 3:

```python
#!/usr/bin/env python3
from sympy import pprint, symbols, Matrix, solve

x = symbols('x')
A = Matrix([x, 1, 0, 0,
            0, x, 1, 0,
            0, 0, x, 1,
            1, 0, 0, x]).reshape(4, 4)
B = Matrix([x, 1, 0, x,
            0, x, x, 1,
            1, x, x, 0,
            x, 0, 1, x]).reshape(4, 4)

for i, M in enumerate([A, B]):
    print(f'({chr(ord("a") + i)})')
    for t in [M, M.det(), solve(M.det())]:
        pprint(t)
        print()
    print()
```

```
$ ./sample5.py
(a)
⎡x  1  0  0⎤
⎢          ⎥
⎢0  x  1  0⎥
⎢          ⎥
⎢0  0  x  1⎥
⎢          ⎥
⎣1  0  0  x⎦

 4
x  - 1

[-1, 1, -ⅈ, ⅈ]

(b)
⎡x  1  0  x⎤
⎢          ⎥
⎢0  x  x  1⎥
⎢          ⎥
⎢1  x  x  0⎥
⎢          ⎥
⎣x  0  1  x⎦

   2
4⋅x  - 1

[-1/2, 1/2]
```
List of accepted papers

Mingbo Zhang and Yong Luo. Factorization of Differential Operators with Ordinary Differential Polynomial Coefficients
Abstract: In this paper, we present an algorithm to factor a differential operator $L=\sigma^n+c_{n-1}\sigma^{n-1}+\cdots+c_1\sigma+c_0$ with coefficients $c_i$ in $\K\{y\}$, where $\K$ is a constant field and $\K\{y\}$ is the ordinary differential polynomial ring over $\K$. Also, we discuss the applications of the algorithm in decomposing nonlinear differential polynomials and factoring differential operators with coefficients in the extension field of $\K$.

Martin Albrecht. The M4RIE library for dense linear algebra over small fields with even characteristic
Abstract: We describe algorithms and implementations for linear algebra with dense matrices over GF(2^e) for 2 <= e <= 10. Our main contributions are: (a) the notion of Newton-John tables to avoid scalar multiplications in Gaussian elimination and matrix multiplication, (b) an efficient implementation of Karatsuba-style multiplication for matrices over extension fields of GF(2) and (c) a description of an open-source library - called M4RIE - providing the fastest known implementation of dense linear algebra over GF(2^e) for 2 <= e <= 10.

Shaoshi Chen, Manuel Kauers and Michael Singer. Telescopers for Rational and Algebraic Functions via Residues
Abstract: We show that the problem of constructing telescopers for rational functions of m variables is equivalent to the problem of constructing telescopers for algebraic functions of m - 1 variables and present a new algorithm to construct telescopers for algebraic functions of two variables. These considerations are based on analyzing the residues of the input. According to experiments, the resulting algorithm for rational functions of three variables is faster than known algorithms, at least in some examples of combinatorial interest.
The algorithm for algebraic functions implies a new bound on the order of the telescopers.

Peter Scheiblechner. Effective de Rham Cohomology - The Hypersurface Case
Abstract: We prove an effective bound for the degrees of generators of the algebraic de Rham cohomology of smooth affine hypersurfaces. In particular, we show that the de Rham cohomology of a smooth hypersurface of degree d in C^n can be generated by differential forms of degree O(n 2^n d^{n^2}). This result is relevant for the algorithmic computation of the cohomology, but is also motivated by questions in the theory of ordinary differential equations related to the infinitesimal Hilbert 16th problem.

Wei Zhou, George Labahn and Arne Storjohann. Computing Minimal Nullspace Bases
Abstract: In this paper we present a deterministic algorithm for the computation of a minimal nullspace basis of an $m\times n$ input matrix of univariate polynomials over a field $\mathbb{K}$ with $m\le n$. This algorithm computes a minimal nullspace basis of a degree $d$ input matrix with a cost of $O^{\sim}\left(n^{\omega}\left\lceil md/n\right\rceil \right)$ field operations in $\mathbb{K}$. The same algorithm also works in the more general situation of computing a shifted minimal nullspace basis, with a given degree shift $\vec{s}\in\mathbb{Z}^{n}$ whose entries bound the corresponding column degrees of the input matrix. If $\rho$ is the sum of the $m$ largest entries of $\vec{s}$, then an $\vec{s}$-minimal right nullspace basis can be computed with a cost of $O^{\sim}(n^{\omega}\rho/m)$ field operations.

Xiaodong Ma, Yao Sun, Dingkang Wang and Yang Zhang. A Signature-Based Algorithm for Computing Gröbner Bases in Solvable Polynomial Algebras
Abstract: Signature-based algorithms, including F5, F5C, G2V and GVW, are efficient algorithms for computing Gröbner bases in polynomial rings.
A signature-based algorithm is presented in the current paper to compute Gröbner bases in solvable polynomial rings, which include the usual commutative polynomial rings and some non-commutative polynomial rings like the Weyl algebra. The generalized rewritten criterion (proposed in Sun and Wang 2011) is used to construct this new algorithm. When this new algorithm uses the order implied by GVW, its termination is proved without special assumptions on the computing order of critical pairs. Data structures similar to F5 can be used to speed up this new algorithm, and Gröbner bases for the corresponding syzygy modules can be obtained from the outputs in polynomial time. Experimental data shows that most redundant computations can be avoided in this new algorithm.

Ana Romero and Francis Sergeraert. Programming before Theorizing, a case study
Abstract: This paper relates how a "simple" result in combinatorial homotopy finally led to a totally new understanding of basic theorems in Algebraic Topology, namely the Eilenberg-Zilber theorem, the twisted Eilenberg-Zilber theorem, and finally the Eilenberg-MacLane correspondence between the Classifying Space and Bar constructions. In the last case, it was an amazingly lucky consequence of computations based on conjectures not yet proved. The key new tool used in this context is Robin Forman's Discrete Vector Fields theory.

Evelyne Hubert and George Labahn. Rational invariants of scalings from Hermite normal forms
Abstract: Scalings form a class of group actions on affine spaces that have both theoretical and practical importance. A scaling is accurately described by an integer matrix. Tools from integer linear algebra are exploited to compute a minimal generating set of rational invariants, trivial rewriting and rational sections for such a group action. The primary tools used are Hermite normal forms and their unimodular multipliers.
With the same line of ideas, a complete solution to the scaling symmetry reduction of a polynomial system is also presented.

Vikram Sharma and Chee Yap. Near Optimal Tree Size Bounds on a Simple Real Root Isolation Algorithm
Abstract: The problem of isolating all real roots of a square-free integer polynomial $f(X)$ inside any given interval $I_0$ is a fundamental problem. EVAL is a simple and practical exact numerical algorithm for this problem: it recursively bisects $I_0$, and any subinterval $I\subseteq I_0$, until a certain numerical predicate $C_0(I)\lor C_1(I)$ holds on each $I$. We prove that the size of the recursive bisection tree is $$O(d(L+r+\log d))$$ where $f$ has degree $d$, its coefficients have absolute values $<2^L$, and $I_0$ contains $r$ roots of $f$. In the range $L\ge d$, our bound is the sharpest known, and provably optimal. Our results are closely paralleled by recent bounds on EVAL by Sagraloff-Yap (ISSAC 2011) and Burr-Krahmer (2012). In the range $L\le d$, our bound is incomparable with those of Sagraloff-Yap or Burr-Krahmer. Similar to the Burr-Krahmer proof, we exploit the technique of "continuous amortization" from Burr-Krahmer-Yap (2009), namely to bound the tree size by an integral $\int_{I_0} G(x)dx$ over a suitable "charging function" $G(x)$. The introduction of the output-size parameter $r$ seems new. We give an application of this feature to the problem of ray-shooting (i.e., finding the smallest root in a given interval).

Adam Strzeboński. Solving Polynomial Systems over Semialgebraic Sets Represented by Cylindrical Algebraic Formulas
Abstract: Cylindrical algebraic formulas are an explicit representation of semialgebraic sets as finite unions of cylindrically arranged disjoint cells bounded by graphs of algebraic functions.
We present a version of the Cylindrical Algebraic Decomposition (CAD) algorithm customized for solving systems of polynomial equations and inequalities over semialgebraic sets given in this representation. The algorithm can also be used to solve conjunctions of polynomial conditions in an incremental manner. We show application examples and give an empirical comparison of incremental and direct CAD computation.

Andre Galligo and Maria Emilia Alonso. A Root Isolation Algorithm for Sparse Univariate Polynomials
Abstract: We consider a univariate polynomial $f$ with real coefficients having a high degree $N$ but a rather small number $d+1$ of monomials, with $d<<N$. Such a sparse polynomial has a number of real roots smaller than or equal to $d$. Our target is to find for each real root of $f$ an interval isolating this root from the others. The usual subdivision methods, relying either on Sturm sequences or on the Moebius transform followed by Descartes' rule of signs, destroy the sparse structure. Our approach relies on the generalized Budan-Fourier theorem of Coste, Lajous, Lombardi, Roy [CLLR:2005] and the techniques developed in Galligo [Gal:2011]. To such an $f$ is associated a set of $n$ differentiation operators called $\f$-derivations. The Budan-Fourier function $V_f(x)$ counts the sign changes in the sequence of $\f$-derivatives of $f$ evaluated at $x$. The values at which this function jumps are called the $\f$-virtual roots of $f$; these include the real roots of $f$. We also consider the augmented $\f$-virtual roots of $f$ and introduce a genericity property which eases our study and its presentation. We present a fast root isolation method and an algorithm which has been implemented in Maple. We rely on an improved generalized Budan-Fourier count applied to both the input polynomial and its reciprocal, together with Newton-Halley approximation steps.

Michael Sagraloff.
When Newton meets Descartes: A Simple and Fast Algorithm to Isolate the Real Roots of a Polynomial
Abstract: We introduce a novel algorithm denoted NEWDSC to isolate the real roots of a univariate square-free polynomial f with integer coefficients. The algorithm iteratively subdivides an initial interval which is known to contain all real roots of f and performs exact operations on the coefficients of f in each step. For the subdivision strategy, we combine Descartes' Rule of Signs and Newton iteration. More precisely, instead of using a fixed subdivision strategy such as bisection in each iteration, a Newton step based on the number of sign variations for an actual interval is considered, and, only if the Newton step fails, we fall back to bisection. Following this approach, our analysis shows that, for most iterations, quadratic convergence towards the real roots is achieved. In terms of complexity, our method induces a recursion tree of almost optimal size O(n\log(n\tau)), where n denotes the degree of the polynomial and \tau the bitsize of its coefficients. The latter bound constitutes an improvement by a factor of \tau upon all existing subdivision methods for the task of isolating the real roots. In addition, we provide a bit complexity analysis showing that NEWDSC needs only \tilde{O}(n^3\tau) bit operations to isolate all real roots of f. This matches the best bound known for this fundamental problem. However, in comparison to the significantly more involved numerical algorithms by V. Pan and A. Schönhage, which achieve the same bit complexity for the task of isolating all complex roots, NEWDSC focuses on real root isolation, and is much easier to access and to implement.

Pavel Emeliyanenko and Michael Sagraloff. On the Complexity of Solving a Bivariate Polynomial System
Abstract: We study the complexity of computing the real solutions of a bivariate polynomial system using the recently presented algorithm BISOLVE~\cite{bes-bisolve-2011}.
BISOLVE is a classical elimination method which first projects the solutions of a system onto the x- and y-axes and then selects the actual solutions from the so-induced candidate set. However, unlike similar algorithms, BISOLVE requires no genericity assumption on the input, nor does it need any change of the coordinate system. Furthermore, extensive benchmarks from~\cite{bes-bisolve-2011} confirm that the algorithm outperforms state-of-the-art approaches by a large factor. In this paper, we show that, for two polynomials f,g\in\mathbb{Z}[x,y] of total degree at most $n$ with integer coefficients bounded in absolute value by 2^\tau, BISOLVE computes isolating boxes for all real solutions of the system f=g=0 using \Otilde(n^8+n^7\tau) bit operations, thereby improving the previous record bound by four orders of magnitude.

Akos Seress. 2-closed Majorana representations
Abstract: The sporadic simple group Monster acts on the Conway-Griess-Norton (CGN) algebra, which is a real algebra $V_M$ of dimension 196,884, equipped with a positive definite scalar product and a bilinear, commutative, and non-associative algebra product. Certain properties of idempotents in $V_M$ that correspond to 2A involutions in the Monster have been axiomatized by Ivanov as the Majorana representation of the Monster. The axiomatization enables us to talk about Majorana representations of arbitrary groups $G$ that are generated by involutions. In general, a Majorana representation may or may not exist, but if $G$ is isomorphic to a subgroup of the Monster and a representation is isomorphic to the corresponding subalgebra of $V_M$, then we say that the Majorana representation is based on an embedding of $G$ in the Monster. In this paper, we describe a generic theoretical procedure to construct Majorana representations, and a GAP computer program that implements the procedure.
It turns out that in many cases the representations are based on embeddings in the Monster, thereby providing a valuable tool for studying subalgebras of the CGN algebra that were inaccessible in the 196,884-dimensional setting.

Shaoshi Chen and Manuel Kauers. Order-Degree Curves for Hypergeometric Creative Telescoping
Abstract: Creative telescoping applied to a bivariate proper hypergeometric term produces linear recurrence operators with polynomial coefficients, called telescopers. We provide bounds for the degrees of the polynomials appearing in these operators. Our bounds are expressed as curves in the (r,d)-plane which assign to every order r a bound on the degree d of the telescopers. These curves are hyperbolas, which reflect the phenomenon that higher order telescopers tend to have lower degree, and vice versa.

Jean-François Biasse and Claus Fieker. A polynomial time algorithm for computing the HNF of a module over the integers of a number field
Abstract: We present a variation of the modular algorithm for computing the pseudo-HNF of an $O_K$-module presented by Cohen, where $O_K$ is the ring of integers of a number field K. The modular strategy was conjectured to run in polynomial time by Cohen, but so far no such proof was available in the literature. In this paper, we provide a new method to prevent the coefficient explosion and we rigorously assess the complexity with respect to the size of the input and the invariants of the field K.

Alin Bostan, Frédéric Chyzak, Ziming Li and Bruno Salvy. Fast Computation of Common Left Multiples of Linear Ordinary Differential Operators
Abstract: We study tight bounds and fast algorithms for LCLMs of several linear differential operators with polynomial coefficients. We analyse the worst-case arithmetic complexity of existing algorithms for LCLMs, as well as the size of their outputs.
We propose a new algorithm that reduces the LCLM computation to a linear algebra problem on a polynomial matrix. The new algorithm yields sharp bounds on the coefficient degrees of the LCLM, improving by two orders of magnitude the previously known bounds. The complexity of the new algorithm is almost optimal, in the sense that it nearly matches the arithmetic size of the output.

James McCarron. Small Homogeneous Quandles
Abstract: We derive an algorithm for computing all the homogeneous quandles of a given order n, provided that a list of the transitive permutation groups of degree n is known. We discuss the implementation of the algorithm, and use it to enumerate the number of isomorphism classes of homogeneous quandles up to order 23 and compute representatives for each class. We also completely determine the homogeneous quandles of prime order. As a by-product, we are able to replicate an earlier calculation of the connected quandles of order at most 30 and, based on this, to compute the number of isomorphism classes of simple quandles up to the same order.

Mustafa Elsheikh, Mark Giesbrecht, Andy Novocin and B. David Saunders. Fast Computation for Smith Forms of Sparse Matrices Over Local Rings
Abstract: We present algorithms to compute the Smith normal form of matrices over two families of local rings. The algorithms use the black box model, which is suitable for sparse and structured matrices. The algorithms depend on a number of tools, such as matrix rank computation over finite fields, for which the best-known time- and memory-efficient algorithms are probabilistic. For an n by n matrix A over the ring F[z]/(f^e), where f^e is a power of an irreducible polynomial f in F[z] of degree d, our algorithm requires O(μ de^2 n) operations in F, where our black box is assumed to require O(μ) operations in F to compute a matrix-vector product by a vector over F[z]/(f^e) (and μ is assumed greater than den).
The algorithm only requires additional storage for O(den) elements of F. In particular, if μ=O(den), then our algorithm requires only O~(n^2 d^2 e^3) operations in F, which is an improvement on previous methods for small d and e. For the ring Z/p^eZ, where p is a prime, we give an algorithm which is time- and memory-efficient when the number of nontrivial invariant factors is small. We describe a method for dimension reduction while preserving the invariant factors. The runtime is essentially linear in neμ log(p), where μ is the cost of black-box evaluation (assumed greater than n). To avoid the practical cost of conditioning, we give a Monte Carlo certificate which, at low cost, provides either a high probability of success or a proof of failure. The quest for a time- and memory-efficient solution without the restriction on the number of nontrivial invariant factors remains open. We offer a conjecture which may contribute toward that end.

Feng Guo, Erich L. Kaltofen and Lihong Zhi. Certificates of Impossibility of Hilbert-Artin Representations of a Given Degree for Definite Polynomials and Functions
Abstract: We deploy numerical semidefinite programming and conversion to exact rational inequalities to certify that, for a positive semidefinite input polynomial or rational function, any representation as a fraction of sums-of-squares of polynomials with real coefficients must contain polynomials in the denominator of degree no less than a given input lower bound. By Artin's solution to Hilbert's 17th problem, such representations always exist for some denominator degree. Our certificates of infeasibility are based on the generalization of Farkas' Lemma to semidefinite programming. The literature has many famous examples of impossibility of SOS representability, including Motzkin's, Robinson's, Choi's and Lam's polynomials, and Reznick's lower degree bounds on uniform denominators, e.g., powers of the sum-of-squares of each variable.
Our work on exact certificates for positive semidefiniteness allows for nonuniform denominators, which can have lower degree and are often easier to convert to exact identities. Here we demonstrate our algorithm by computing certificates of impossibilities for an arbitrary sum-of-squares denominator of degree 2 and 4 for some symmetric sextics in 4 and 5 variables, respectively. We can also certify impossibility of base polynomials in the denominator of restricted term structure, for instance as in Landau’s reduction by one less variable. 41. [+] Francesco Biscani. Parallel sparse polynomial multiplication on modern hardware architectures 42. Abstract: We present a high performance algorithm for the parallel multiplication of sparse multivariate polynomials on modern computer architectures. The algorithm is built on three main concepts: a cache-friendly hash table implementation for the storage of polynomial terms in distributed form, a statistical method for the estimation of the size of the multiplication result, and the use of Kronecker substitution as a homomorphic hash function. The algorithm achieves high performance by promoting data access patterns that favour temporal and spatial locality of reference. We present benchmarks comparing our algorithm to routines of other computer algebra systems, both in sequential and parallel mode. 43. [+] Jérémy Berthomieu and Romain Lebreton. Relaxed p-adic Hensel lifting for algebraic systems 44. Abstract: In a previous article, an implementation of lazy p-adic integers with a multiplication of quasi-linear complexity, the so-called relaxed product, was presented. Given a ring R and an element p in R, we design a relaxed Hensel lifting for algebraic systems from R/(p) to the p-adic completion R_p of R. Thus, any root of linear and algebraic regular systems can be lifted with a quasi-optimal complexity. 
We report our implementations in C++ within the computer algebra system Mathemagix and compare them with Newton operator. As an application, we solve linear systems over the integers and compare the running times with Linbox and IML. 45. [+] Sergei Abramov and Denis Khmelnov. On valuations of meromorphic solutions of arbitrary-order linear difference systems with polynomial coefficients 46. Abstract: Algorithms for computing lower bounds on valuations (e.g., orders of the poles) of the components of meromorphic solutions of arbitrary-order linear difference systems with polynomial coefficients are considered. In addition to algorithms based on ideas which have been already utilized in computer algebra for treating normal first-order systems, a new algorithm using "tropical" calculations is proposed. It is shown that the latter algorithm is rather fast, and produces the bounds with good accuracy. 47. [+] Adam Strzeboński and Elias Tsigaridas. Univariate real root isolation in multiple extension fields 48. Abstract: We present algorithmic, complexity and implementation results for the problem of isolating the real roots of a univariate polynomial in $\Ba \in L[y]$, where $L=\QQ(\alpha_1, \dots, \alpha_{\ell})$ is an algebraic extension of the rational numbers. Our bounds are single exponential in $\ell$ and match the ones presented in \cite{st-issac-2011} for the case $\ell=1$. We consider two approaches. The first, indirect approach, using multivariate resultants, computes a univariate polynomial with integer coefficients, among the real roots of which are the real roots of $\Ba$. The Boolean complexity of this approach is $\sOB(N^{4\ell+4})$, where $N$ is the maximum of the degrees and the coefficient bitsize of the involved polynomials. The second, direct approach, tries to solve the polynomial directly, without reducing the problem to a univariate one. 
We present an algorithm that generalizes Sturm algorithm from the univariate case, and modified versions of well known solvers that are either numerical or based on Descartes' rule of sign. We achieve a Boolean complexity of $\sOB(\min\set{N^{4\ell + 7},N^{2\ell^2+6}})$ and $\sOB( \max\set{N^{\ell+5}, N^{2\ell+3}})$, respectively. We implemented the algorithms in \func{C} as part of the core library of MATHEMATICA and we illustrate their efficiency over various data sets. 49. [+] Toshinori Oaku. An algorithm to compute the differential equations for the logarithm of a polynomial 50. Abstract: We present an algorithm to compute the annihilator of (i.e., the linear differential equations for) the multi-valued analytic function $f^\lambda(\log f)^m$ in the ring $D_n$ of differential operators for a given non-constant polynomial $f$, a non-negative integer $m$, and a complex number $\lambda$. This algorithm consists in the differentiation with respect to $s$ of the annihilator of $f^s$ in the ring $D_n[s]$ and ideal quotient computation in $D_n$. The obtained differential equations constitute what is called a holonomic system in $D$-module theory. Hence combined with the integration algorithm for $D$-modules, this enables us to compute a holonomic system for the integral of a function involving the logarithm of a polynomial with respect to some variables. 51. [+] Moulay Barkatou, Thomas Cluzeau, Carole El Bacha and Jacques-Arthur Weil. Computing Closed Form Solutions of  Integrable Connections 52. Abstract: We present algorithms for computing rational and hyperexponential solutions of linear $D$-finite partial differential systems written as integrable connections. We show that these types of solutions can be computed recursively by adapting existing algorithms handling ordinary linear differential systems. We provide an arithmetic complexity analysis of the algorithms that we develop. A Maple implementation is available and some examples and applications are given. 
53. [+] Romain Lebreton and Éric Schost. Algorithms for the universal decomposition algebra 54. Abstract: Let k be a field and let f be a polynomial of degree n in k [T]. The universal decomposition algebra A is the quotient of k [X_1, ..., X_n] by the ideal of symmetric relations (those polynomials that vanish on all permutations of the roots of f). We show how to obtain efficient algorithms to compute in A. We use a univariate representation of A, i.e. an isomorphism of the form A = k [T]/Q(T), since in this representation, arithmetic operations in A are known to be quasi-optimal. We give details for two related algorithms, to find the isomorphism above, and to compute the characteristic polynomial of any element of A. 55. [+] Alin Bostan, Muhammad F. I. Chowdhury, Romain Lebreton, Bruno Salvy and Éric Schost. Power Series Solutions of Singular (q)−Differential Equations 56. Abstract: We provide algorithms computing power series solutions of a large class of differential or q-differential equations. Their number of arithmetic operations grows linearly with the precision, up to logarithmic terms. 57. [+] Paolo Lella. An efficient implementation of the algorithm computing the Borel-fixed points of a Hilbert scheme 58. Abstract: Borel-fixed ideals play a key role in the study of Hilbert schemes. Indeed each component and each intersection of components of a Hilbert scheme contains at least one Borel-fixed point, i.e. a point corresponding to a subscheme defined by a Borel-fixed ideal. Moreover Borel-fixed ideals have good combinatorial properties, which make them very interesting in an algorithmic perspective. In this paper, we propose an implementation of the algorithm computing all the saturated Borel-fixed ideals with number of variables and Hilbert polynomial assigned, introduced from a theoretical point of view in the paper "Segment ideals and Hilbert schemes of points", Discrete Mathematics 311 (2011). 59. [+] Stavros Garoufalidis and Christoph Koutschan. 
Twisting q-holonomic sequences by complex roots of unity 60. Abstract: A sequence $f_n(q)$ is $q$-holonomic if it satisfies a nontrivial linear recurrence with coefficients polynomials in $q$ and $q^n$. Our main theorem states that $q$-holonomicity is preserved under twisting, i.e., replacing $q$ by $\omega q$ where $\omega$ is a complex root of unity. Our proof is constructive, works in the multivariate setting of $\partial$-finite sequences and is implemented in the Mathematica package HolonomicFunctions. Our results are illustrated by twisting natural $q$-holonomic sequences which appear in quantum topology, namely the colored Jones polynomial of pretzel knots and twist knots. The recurrence of the twisted colored Jones polynomial can be used to compute the asymptotics of the Kashaev invariant of a knot at an arbitrary complex root of unity. 61. [+] Jules Svartz and Jean-Charles Faugère. Solving Polynomial Systems Globally Invariant Under an Action of the Symmetric Group and Application to the Equilibria of N vortices in the Plane. 62. Abstract: \begin{abstract} We propose an efficient algorithm to solve polynomial systems of which equations are \emph{globally} invariant under an action of the symmetric group $$\mathfrak{S}_N$$ where it acts on the variable $$x_{i}$$ where the number of variables is a multiple of $$N$$. For instance, we can assume that swapping two variables (or two pairs of variables) in one equation give rise to another equation of the system (perhaps changing the sign). The idea is to apply many times divided difference operators to the original system in order to derive a new system of equations involving only the symmetric functions of a subset of the variables. The next step is to solve the system using Gröbner techniques; this is usually several order faster than computing the Gröbner basis of the original system since the number of solutions of the corresponding ideal has been divided by at least $$N!$$. 
To illustrate the algorithm and to demonstrate its efficiency, we apply the method to a well known physical problem called equilibria positions of vortices. This problem has been studied for almost 150 years and goes back to work by Lord Kelvin. Assuming that all vortices have same vorticity, the problem can be reformulated as a system polynomial equations invariant under an action of $\mathfrak{S}_N$. Using numerical methods, physicists have been able to compute solutions up to $N\leq 7$ but it was an open challenge to check whether the set of solution is complete. Direct naive approach of Gröbner bases techniques give rise to hard-to-solve polynomial system: for instance, when $$N=5$$, it take several hours to compute the Gröbner basis and the number of solutions is $$2060$$. By contrast, applying the new algorithm to the same problem give rise to a system of $$17$$ solutions that can be solved in less than $$0.1$$ sec. Moreover, we are able to compute \emph{all} equilibria when $$N\leq8$$ (the case $$N=8$$ being completely new). \end{abstract} 63. [+] Bjarke Hammersholt Roune and Michael Stillman. Practical Groebner Basis Computation 64. Abstract: We report on our experiences exploring state of the art Groebner basis computation. We investigate signature based algorithms in detail. We also introduce new practical data structures and computational techniques for use in both signature based Groebner basis algorithms and more traditional variations of the classic Buchberger algorithm. Our conclusions are based on experiments using our new freely available open source standalone C++ library. 65. [+] Olivier Bournez, Daniel Graça and Amaury Pouly. On the complexity of solving polynomial initial value problems 66. 
Abstract: In this paper we prove that computing the solution to a initial-value problem of the form $\dot{y}=p(y)$ with initial condition $y(t_0)=y_0\in\R^d$ at time $t_0+T$ with precision $e^{-\mu}$ where $p$ is a vector of polynomial can be done in time polynomial in the value of $T$, $\mu$ and $Y=\sup_{t_0\leqslant u\leqslant T}\infnorm{y(u)}$. Contrary to existing results, our algorithm works for any vector of polynomial $p$ over any bounded or unbounded domain and has a guaranteed complexity and precision. In particular we do not assume $p$ to be fixed, or the solution to lie in a compact domain, nor we assume that $p$ has a Lipschitz constant. 67. [+] Joris van der Hoeven and Gregoire Lecerf. On the complexity of multivariate blockwise polynomial multiplication 68. Abstract: In this article, we study the problem of multiplying two multivariate polynomials which are somewhat but not too sparse, typically like polynomials with convex supports. We design and analyze an algorithm which is based on blockwise decomposition of the input polynomials, and which performs the actual multiplication in an FFT model or some other more general so called "evaluated model". If the input polynomials have total degrees at most d, then, under mild assumptions on the coefficient ring, we show that their product can be computed with O(s^1.5337) ring operations, where s denotes the number of all the monomials of total degree at most 2d. 69. [+] Jean-Charles Faugère, Mohab Safey El Din and Pierre-Jean Spaenlehauer. Critical Points and Grobner Bases: the Unmixed Case 70. Abstract: We consider the problem of computing critical points of the restriction of a polynomial map to an algebraic variety. This is of first importance since the global minimum of such a map is reached at a critical point. Thus, these points appear naturally in non-convex polynomial optimization which occurs in a wide range of scientific applications (control theory, chemistry, economics,...). 
Critical points also play a central role in recent algorithms of effective real algebraic geometry. Experimentally, it has been observed that Gröbner basis algorithms are efficient to compute such points. Therefore, recent software based on the so-called Critical Point Method are built on Gröbner bases engines. Let $f_1, \ldots, f_p$ be polynomials in $\Q[x_1, \ldots, x_n]$ of degree $D$, $V\subset\C^n$ be their complex variety and $\pi_1$ be the projection map $(x_1,\ldots, x_n)\mapsto x_1$. The critical points of the restriction of $\pi_1$ to $V$ are defined by the vanishing of $f_1, \ldots, f_p$ and some maximal minors of the Jacobian matrix associated to $f_1, \ldots, f_p$. Such a system is algebraically structured: the ideal it generates is the sum of a determinantal ideal and the ideal generated by $f_1,\ldots, f_p$. We provide the first complexity estimates on the computation of Gröbner bases of such systems defining critical points. We prove that under genericity assumptions on $f_1,\ldots, f_p$, the complexity is polynomial in the generic number of critical points, i.e. $D^p(D-1)^{n-p}{{n-1}\choose{p-1}}$. More particularly, in the quadratic case $D=2$, the complexity of such a Gröbner basis computation is polynomial in the number of variables $n$ and exponential in $p$. We also give experimental evidence supporting these theoretical results. 71. [+] Philippe Trebuchet and Bernard Mourrain. Border basis representation of general quotient algebra 72. Abstract: In this paper, we generalized the construction of border bases to non-zero dimensional ideals for normal forms compatible with the degree, tackling the remaining obstacle for a general application of border basis methods. First, we give conditions to have a border basis up to a given degree. Next, we describe a new stopping criteria to determined when the reduction with respect to the leading terms is a normal form. 
This test based on the persistence and regularity theorems of Gotzmann yields a new algorithm for computing a border basis of any ideal, which proceeds incrementally degree by degree until its regularity. We detail it, prove its correctness, present its implementation and report some experimentations which illustrates its practical good behavior. 73. [+] Danko Adrovic and Jan Verschelde. Computing Puiseux Series for Algebraic Surfaces 74. Abstract: In this paper we outline an algorithmic approach to compute Puiseux series expansions for algebraic surfaces. The series expansions originate at the intersection of the surface with as many coordinate planes as the dimension of the surface. Our approach starts with a polyhedral method to compute cones of normal vectors to the Newton polytopes of the given polynomial system that defines the surface. If as many vectors in the cone as the dimension of the surface define an initial form system that has isolated solutions, then those vectors are potential tropisms for the initial term of the Puiseux series expansion. Our preliminary methods produce exact representations for solution sets of the cyclic $n$-roots problem, for $n = m^2$, corresponding to a result of Backelin. 75. [+] Colton Pauderis and Arne Storjohann. Deterministic unimodularity certification 76. Abstract: The asymptotically fastest algorithms for many linear algebra problems on integer matrices, including solving a system of linear equations and computing the determinant, use high-order lifting. Currently, high-order lifting requires the use of a randomized shifted number system to detect and avoid error-producing carries. By interleaving quadratic and linear lifting, we devise a new algorithm for high-order lifting that allows us to work in the usual symmetric range modulo $p$, thus avoiding randomization. As an application, we give a deterministic algorithm to assay if an $n \times n$ integer matrix $A$ is unimodular. 
The cost of the algorithm is $O((\log n) n^{\omega}\, \M(\log n + \log ||A||))$ bit operations, where $||A||$ denotes the largest entry in absolute value, and $\M(t)$ is the cost multiplying two integers bounded in bit length by $t$. 77. [+] Moulay A. Barkatou and Clemens G. Raab. Solving Linear Ordinary Differential Systems in Hyperexponential Extensions 78. Abstract: Let F be a differential field generated from the rational functions over some constant field by one hyperexponential extension. We present an algorithm to compute the solutions in F^n of systems of n first order linear ODEs. Solutions in F of a scalar ODE of higher order can be determined by an algorithm of Bronstein and Fredet. Our approach avoids reduction to the scalar case. We also give examples how this can be applied to integration. 79. [+] Ambros Gleixner, Dan Steffy and Kati Wolter. Improving the Accuracy of Linear Programming Solvers with Iterative Refinement 80. Abstract: We describe an iterative refinement procedure for computing extended precision or exact solutions to linear programming problems (LPs). Arbitrarily precise solutions can be computed by solving a sequence of closely related LPs with limited precision arithmetic. The LPs solved at iterations of this algorithm share the same constraint matrix as the original problem instance and are transformed only by modification of the objective function, right-hand side, and variable bounds. Exact computation is used to compute and store the exact representation of the transformed problems, while numeric computation is used for computing approximate LP solutions and applying iterations of the simplex algorithm. At all steps of the algorithm the LP bases encountered in the transformed problems correspond directly to LP bases in the original problem description. 
We demonstrate that this algorithm is effective in practice for computing extended precision solutions and that this leads to direct improvement of the best known methods for solving LPs exactly over the rational numbers. A proof-of-concept implementation is done within the SoPlex LP solver. 81. [+] Luk Bettale, Jean-Charles Faugère and Ludovic Perret. Solving Polynomial Systems over Finite Fields: Improved Analysis of the Hybrid Approach 82. Abstract: The Polynomial System Solving (PoSSo) problem is a fundamental NP-Hard problem in computer algebra. Among many others, PoSSo have applications in area such as coding theory and cryptology. Typically, the security of cryptographic multivariate public-key schemes (MPKC) such as the UOV cryptosystem of Kipnis, Shamir and Patarin is directly related to the hardness of PoSSo over finite fields. The goal of this paper is to further understand the influence of finite fields on the hardness of PoSSo. To this end, we consider the so-called {\it hybrid approach}. This is a polynomial system solving method dedicated to finite fields proposed by Bettale, Faug\`ere and Perret (Journal of Mathematical Cryptography, 2009). The idea is to combine exhaustive search with Gröbner bases. The efficiency of the hybrid approach is related to the choice of a trade-off between the two methods. We propose here an improved complexity analysis dedicated to quadratic systems. Whilst the principle of the hybrid approach is simple, its careful analysis leads to rather surprising and somehow unexpected results. We first prove that the best trade-off (i.e. number of variables to be fixed) allowing to minimize the complexity is achieved by fixing a number of variables proportional to the number of variables $n$ of the system considered. 
Under some natural algebraic assumption, we then show that the asymptotic complexity of the hybrid approach is $2^{(3.31-3.62\,\log_2\left(q\right)^{-1})\, n}$, where $q$ is the size of the field (under the condition in particular that $\log(q)\ll n$). This is to date, the best complexity for solving PoSSo over finite fields (when $q>2$). Indeed, we have been able to quantify the gain provided by the hybrid approach compared to a direct Gröbner basis method. For quadratic systems, we show (assuming a natural algebraic assumption) that this gain is exponential in the number of variables. Asymptotically, the gain is $2^{1.49\,n}$ when both $n$ and $q$ grow to infinity and $\log(q)\ll n$. 83. [+] Matthew Comer, Erich Kaltofen and Clément Pernet. Sparse Polynomial Interpolation and Berlekamp/Massey Algorithms That Correct Outlier Errors in Input Values 84. Abstract: We propose algorithms performing sparse interpolation with errors, based on Prony's / Ben-Or's & Tiwari's algorithm, using a Berlekamp/Massey algorithm with early termination. First, we give a randomized algorithm that can determine a t -sparse polynomial f, where f has exactly t non-zero terms, from a bound T >= t and a sequence of N= (2T+1)(e+1) evaluations f(p^i), where i=1,2,3,...,N and p a field element, in the presence of <= e wrong evaluations in the sequence, that are spoiled either with random or misleading errors. We also investigate the problem of recovering the minimal linear generator from a sequence of field elements that are linearly generated but where again <= e elements are erroneous. We show that there exist sequences of < 2t(2e+1) elements, such that two distinct generators of length t satisfy the linear recurrence up to <= e faults, at least if the field has a characteristic unequal 2. Uniqueness can be proven (for any field characteristic) for length >= 2t(2e+1) of the sequence with <= e errors. 
Finally, we present the Majority Rule Berlekamp/Massey algorithm, which can recover the unique minimal linear generator of degree t when given bounds T >= t and E >= e and the initial sequence segment of 2T(2E+1) elements. The latter yields a unique sparse interpolant for the first problem. This research is motivated by the sparse interpolation algorithms with numeric noise, into which we now can bring outlier errors in the values. 85. [+] Masao Ishikawa and Christoph Koutschan. Zeilberger's Holonomic Ansatz for Pfaffians 86. Abstract: A variation of Zeilberger's holonomic ansatz for symbolic determinant evaluations is proposed which is tailored to deal with Pfaffians. The method is also applicable to determinants of skew-symmetric matrices, for which the original approach does not work. As Zeilberger's approach is based on the Laplace expansion (cofactor expansion) of the determinant, we derive our approach from the cofactor expansion of the Pfaffian. To demonstrate the power of our method, we prove, using computer algebra algorithms, some conjectures proposed in the paper "Pfaffian decomposition and a Pfaffian analogue of $q$-Catalan Hankel determinants" by Ishikawa, Tagawa, and Zeng. A minor summation formula related to partitions and Motzkin paths follows as a corollary. 87. [+] Raoul Blankertz, Joachim von Zur Gathen and Konstantin Ziegler. Compositions and collisions at degree $p^2$ 88. Abstract: A univariate polynomial $f$ over a field is decomposable if $f= g \circ h= g(h)$ for nonlinear polynomials $g$ and $h$. In order to count the decomposables, one has to know the number of equal-degree collisions, that is $f = g \circ h = g^* \circ h^*$ with $(g,h) \neq (g^{*}, h^{*})$ and $\deg g = \deg g^*$. Such collisions only occur in the wild case, where the field characteristic $p$ divides $\deg f$. Reasonable bounds on the number of decomposables over a finite field are known, but they are less sharp in the wild case, in particular for degree $p^2$. 
We provide a classification of all polynomials of degree $p^2$ with a collision. This yields the exact number of decomposable polynomials of degree $p^{2}$ over a finite field of characteristic $p$. We also present an algorithm that determines whether a given polynomial of degree $p^{2}$ has a collision or not. 89. [+] Yue Ma and Lihong Zhi. Computing Real Solutions of Polynomial Systems via Low-Rank Moment Matrix Completion 90. Abstract: In this paper, we propose a new algorithm for computing real roots of polynomial equations or a subset of real roots in a given semi-algebraic set described by additional polynomial inequalities. The algorithm is based on using modified fixed point continuation method for solving Lasserre's hierarchy of moment relaxations. We establish convergence properties for our algorithm. For a large-scale polynomial system with only few real solutions in a given area, we can extract them quickly. Moreover, for a polynomial system with an infinite number of real solutions, our algorithm can also be used to find some isolated real solutions or real solutions on the manifolds. 91. [+] Vlad Slavici, Daniel Kunkle, Gene Cooperman and Stephen Linton. An Efficient Programming Model for Memory-Intensive Recursive Algorithms using Parallel Disks 92. Abstract: In order to keep up with the demand for solutions to problems with ever-increasing data sets, both academia and industry have embraced commodity computer clusters with locally attached disks or SANs as an inexpensive alternative to supercomputers. With the advent of tools for parallel disks programming, such as MapReduce, STXXL and Roomy --- that allow the developer to focus on higher-level algorithms --- the programmer productivity for memory-intensive programs has increased many-fold. However, such parallel tools were primarily targeted at iterative programs. We propose a programming model for migrating recursive RAM-based legacy algorithms to parallel disks. 
Many memory-intensive symbolic algebra algorithms are most easily expressed as recursive algorithms. In this case, the programming challenge is multiplied, since the developer must re-structure such an algorithm with two criteria in mind: converting a naturally recursive algorithm into an iterative algorithm, while simultaneously exposing any potential data parallelism (as needed for parallel disks). This model alleviates the large effort going into the design phase of an external memory algorithm. Research in this area over the past 10 years has focused on per-problem solutions, without providing much insight into the connection between legacy algorithms and out-of-core algorithms. Our method shows how legacy algorithms employing recursion and non-streaming memory access can be more easily translated into efficient parallel disk-based algorithms. We demonstrate the ideas on a largest computation of its kind: the determinization via subset construction and minimization of very large nondeterministic finite set automata (NFA). To our knowledge, this is the largest subset construction reported in the literature. Determinization for large NFA has long been a large computational hurdle in the study of permutation classes defined by token passing networks. The programming model was used to design and implement an efficient NFA determinization algorithm that solves the next stage in analyzing token passing networks representing two stacks in series.
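The Berlekamp/Massey algorithm underlying the sparse-interpolation abstract above (item 84) recovers the shortest linear recurrence generating a sequence. As a rough illustration of the error-free base case those authors extend, here is a minimal sketch over a prime field GF(p); the function name and structure are mine, not taken from the paper, and no outlier-error correction is attempted:

```python
def berlekamp_massey(s, p):
    """Return (L, C): the length L and connection polynomial C (C[0] = 1)
    of the shortest linear recurrence satisfied by s over GF(p), i.e.
    s[n] + C[1]*s[n-1] + ... + C[L]*s[n-L] == 0 (mod p) for all valid n."""
    C, B = [1], [1]    # current and previous connection polynomials
    L, m, b = 0, 1, 1  # L = recurrence length, b = last nonzero discrepancy
    for n in range(len(s)):
        # discrepancy: how far C fails to predict s[n]
        d = s[n] % p
        for i in range(1, L + 1):
            d = (d + C[i] * s[n - i]) % p
        if d == 0:
            m += 1
            continue
        T = C[:] if 2 * L <= n else None   # milestone: length must grow
        coef = d * pow(b, p - 2, p) % p    # d / b mod p
        C = C + [0] * (len(B) + m - len(C))
        for i, bi in enumerate(B):         # C <- C - coef * x^m * B
            C[i + m] = (C[i + m] - coef * bi) % p
        if T is not None:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return L, C

# Fibonacci mod 101 satisfies s[n] = s[n-1] + s[n-2]:
L, C = berlekamp_massey([1, 1, 2, 3, 5, 8, 13, 21], 101)
# L == 2 and C == [1, 100, 100], i.e. s[n] - s[n-1] - s[n-2] = 0 mod 101
```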
## Green Eyes

### September 29, 2009

We haven’t done a math problem in a while. This one comes from my daughter’s high-school math class. She is never quite sure whether or not to ask me for help; sometimes she gets much more help than she really wants.

In a group of twenty-seven people, eleven have blue eyes, thirteen have brown eyes, and three have green eyes. If three people are randomly selected from the group, what is the probability that exactly one of them will have green eyes?

Your task is to find the probability. When you are finished, you are welcome to read or run a suggested solution, or to post your solution or discuss the exercise in the comments below.

### 5 Responses to “Green Eyes”

1. P. Riva said

   I’d say we have P = 3/27 * 24/27 * 24/27 * 3 =~ 26.34%

2. P. Riva said

   OPS!!! I was a bit in a hurry: 3/27 * 24/26 * 23/25 * 3 =~ 28.31%

3. 92 / 325

4. Peter said

   It may be easier to conceptualize as choosing one child w/ green eyes and two children with eyes of any other color. Your numerator is P(choosing one green of the 3 greens)*P(choosing 2 not greens of the 24 not greens). The denominator is the total number of combinations of choosing three children out of the 27.

   Number of ways of:
   - Choosing 1 green out of the 3 greens: 3C1 = 3
   - Choosing 2 non-greens out of the 24 non-greens: 24C2 = 276
   - Choosing 3 kids out of 27 kids: 27C3 = 2925

   So, the probability of choosing only one green eyed kid out of the three selected is: (3C1*24C2)/(27C3) = (3*276)/2925 = 0.283076923

5. Graham said

```
from operator import mul

def falling_factorial(n, k):
    """(n)_k = n! / (n - k)! = n * (n - 1) * ... * (n - k + 1)"""
    return reduce(mul, xrange(n - k + 1, n + 1), 1)

def binom(n, k):
    """n! / (k! * (n-k)!) = (n)_k / k!; k! = (k)_k"""
    return falling_factorial(n, k) / falling_factorial(k, k)

if __name__ == "__main__":
    print 100 * binom(24, 2) * binom(3, 1) / float(binom(27, 3))
```
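Peter’s counting argument translates directly to modern Python 3, where `math.comb` replaces a hand-rolled `binom` and `Fraction` keeps the answer exact; a quick sketch:

```python
from fractions import Fraction
from math import comb

# exactly one of the 3 green-eyed people among 3 chosen from 27:
# favorable = C(3,1) * C(24,2), total = C(27,3)
p = Fraction(comb(3, 1) * comb(24, 2), comb(27, 3))
print(p, float(p))  # 92/325, about 0.2831
```

This agrees with both the 92/325 and the 0.283076923 answers in the comments.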
# RD Sharma Solutions for Class 8 Chapter 7 Factorization Exercise 7.3

Students can refer to RD Sharma Solutions for Class 8 Maths Exercise 7.3 Chapter 7 Factorization, which are available here. The solutions are worked step by step for a better understanding of the concepts, which helps students prepare for their exams with ease. Expert tutors at BYJU’S have prepared them with this in mind, to help students crack difficult problems. Students can download the pdf of RD Sharma Solutions from the links provided below. Exercise 7.3 of Chapter 7 Factorization is based on the factorization of algebraic expressions when a binomial is a common factor.

## Download the pdf of RD Sharma For Class 8 Maths Exercise 7.3 Chapter 7 Factorization

### Access Answers to RD Sharma Solutions for Class 8 Maths Exercise 7.3 Chapter 7 Factorization

Factorize each of the following algebraic expressions:

1. 6x (2x – y) + 7y (2x – y)

Solution:
We have, 6x (2x – y) + 7y (2x – y)
By taking (2x – y) as common we get,
(6x + 7y) (2x – y)

2. 2r (y – x) + s (x – y)

Solution:
We have, 2r (y – x) + s (x – y)
By taking (-1) as common we get,
-2r (x – y) + s (x – y)
By taking (x – y) as common we get,
(x – y) (-2r + s)
(x – y) (s – 2r)

3. 7a (2x – 3) + 3b (2x – 3)

Solution:
We have, 7a (2x – 3) + 3b (2x – 3)
By taking (2x – 3) as common we get,
(7a + 3b) (2x – 3)

4. 9a (6a – 5b) – 12a² (6a – 5b)

Solution:
We have, 9a (6a – 5b) – 12a² (6a – 5b)
By taking (6a – 5b) as common we get,
(9a – 12a²) (6a – 5b)
3a(3 – 4a) (6a – 5b)

5. 5 (x – 2y)² + 3 (x – 2y)

Solution:
We have, 5 (x – 2y)² + 3 (x – 2y)
By taking (x – 2y) as common we get,
(x – 2y) [5 (x – 2y) + 3]
(x – 2y) (5x – 10y + 3)

6. 16 (2l – 3m)² – 12 (3m – 2l)

Solution:
We have, 16 (2l – 3m)² – 12 (3m – 2l)
By taking (-1) as common we get,
16 (2l – 3m)² + 12 (2l – 3m)
By taking 4(2l – 3m) as common we get,
4(2l – 3m) [4 (2l – 3m) + 3]
4(2l – 3m) (8l – 12m + 3)

7. 3a (x – 2y) – b (x – 2y)

Solution:
We have, 3a (x – 2y) – b (x – 2y)
By taking (x – 2y) as common we get,
(3a – b) (x – 2y)

8. a² (x + y) + b² (x + y) + c² (x + y)

Solution:
We have, a² (x + y) + b² (x + y) + c² (x + y)
By taking (x + y) as common we get,
(a² + b² + c²) (x + y)

9. (x – y)² + (x – y)

Solution:
We have, (x – y)² + (x – y)
By taking (x – y) as common we get,
(x – y) (x – y + 1)

10. 6 (a + 2b) – 4 (a + 2b)²

Solution:
We have, 6 (a + 2b) – 4 (a + 2b)²
By taking (a + 2b) as common we get,
[6 – 4 (a + 2b)] (a + 2b)
(6 – 4a – 8b) (a + 2b)
2(3 – 2a – 4b) (a + 2b)

11. a (x – y) + 2b (y – x) + c (x – y)²

Solution:
We have, a (x – y) + 2b (y – x) + c (x – y)²
By taking (-1) as common we get,
a (x – y) – 2b (x – y) + c (x – y)²
By taking (x – y) as common we get,
[a – 2b + c(x – y)] (x – y)
(x – y) (a – 2b + cx – cy)

12. -4 (x – 2y)² + 8 (x – 2y)

Solution:
We have, -4 (x – 2y)² + 8 (x – 2y)
By taking 4(x – 2y) as common we get,
[-(x – 2y) + 2] 4(x – 2y)
4(x – 2y) (-x + 2y + 2)

13. x³ (a – 2b) + x² (a – 2b)

Solution:
We have, x³ (a – 2b) + x² (a – 2b)
By taking x² (a – 2b) as common we get,
(x + 1) [x² (a – 2b)]
x² (a – 2b) (x + 1)

14. (2x – 3y) (a + b) + (3x – 2y) (a + b)

Solution:
We have, (2x – 3y) (a + b) + (3x – 2y) (a + b)
By taking (a + b) as common we get,
(a + b) [(2x – 3y) + (3x – 2y)]
(a + b) [2x – 3y + 3x – 2y]
(a + b) [5x – 5y]
(a + b) 5(x – y)

15. 4(x + y) (3a – b) + 6(x + y) (2b – 3a)

Solution:
We have, 4(x + y) (3a – b) + 6(x + y) (2b – 3a)
By taking (x + y) as common we get,
(x + y) [4(3a – b) + 6(2b – 3a)]
(x + y) [12a – 4b + 12b – 18a]
(x + y) [-6a + 8b]
(x + y) 2(-3a + 4b)
(x + y) 2(4b – 3a)
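Factorizations like these can be sanity-checked numerically by evaluating the original expression and its factored form at random points; if two polynomials agree at enough random points they are almost certainly identical. The helper below is my own sketch (the name `equal_as_polynomials` is not from the solutions), using only the standard library:

```python
from random import uniform

def equal_as_polynomials(f, g, vars_count, trials=100, tol=1e-6):
    """Spot-check that two polynomial expressions agree by
    evaluating both at random points."""
    for _ in range(trials):
        pt = [uniform(-10, 10) for _ in range(vars_count)]
        if abs(f(*pt) - g(*pt)) > tol * (1 + abs(f(*pt))):
            return False
    return True

# Problem 1: 6x(2x - y) + 7y(2x - y) = (6x + 7y)(2x - y)
assert equal_as_polynomials(
    lambda x, y: 6*x*(2*x - y) + 7*y*(2*x - y),
    lambda x, y: (6*x + 7*y)*(2*x - y), 2)

# Problem 10: 6(a + 2b) - 4(a + 2b)**2 = 2(3 - 2a - 4b)(a + 2b)
assert equal_as_polynomials(
    lambda a, b: 6*(a + 2*b) - 4*(a + 2*b)**2,
    lambda a, b: 2*(3 - 2*a - 4*b)*(a + 2*b), 2)
```

This is a probabilistic check, not a proof, but it catches sign and coefficient slips quickly.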
# 5.5: Trees Another useful special class of graphs: Definition: acyclic tree A connected graph $$G$$ is a tree if it is acyclic, that is, it has no cycles. More generally, an acyclic graph is called a forest. Two small examples of trees are shown in figure 5.1.5. Note that the definition implies that no tree has a loop or multiple edges. Theorem 5.5.2 Every tree $$T$$ is bipartite. Proof. Since $$T$$ has no cycles, it is true that every cycle of $$T$$ has even length. By corollary 5.4.3, $$T$$ is bipartite. $$\square$$ Definition: pendant vertex A vertex of degree one is called a pendant vertex, and the edge incident to it is a pendant edge. Theorem 5.5.4 Every tree on two or more vertices has at least one pendant vertex. Proof We prove the contrapositive. Suppose graph $$G$$ has no pendant vertices. Starting at any vertex $$v$$, follow a sequence of distinct edges until a vertex repeats; this is possible because the degree of every vertex is at least two, so upon arriving at a vertex for the first time it is always possible to leave the vertex on another edge. When a vertex repeats for the first time, we have discovered a cycle. $$\square$$ This theorem often provides the key step in an induction proof, since removing a pendant vertex (and its pendant edge) leaves a smaller tree. Theorem 5.5.5 A tree on $$n$$ vertices has exactly $$n-1$$ edges. Proof A tree on 1 vertex has 0 edges; this is the base case. If $$T$$ is a tree on $$n\ge 2$$ vertices, it has a pendant vertex. Remove this vertex and its pendant edge to get a tree $$T'$$ on $$n-1$$ vertices.
By the induction hypothesis, $$T'$$ has $$n-2$$ edges; thus $$T$$ has $$n-1$$ edges. $$\square$$ Theorem 5.5.6 A tree with a vertex of degree $$k\ge 1$$ has at least $$k$$ pendant vertices. In particular, every tree on at least two vertices has at least two pendant vertices. Proof The case $$k=1$$ is obvious. Let $$T$$ be a tree with $$n$$ vertices, degree sequence $$\ds\{d_i\}_{i=1}^n$$, and a vertex of degree $$k\ge2$$, and let $$l$$ be the number of pendant vertices. Without loss of generality, $$1=d_1=d_2=\cdots=d_l$$ and $$d_{l+1}=k$$. Then $$2(n-1) = \sum_{i=1}^n d_i = l+k+\sum_{i=l+2}^n d_i \ge l+k+2(n-l-1).$$ This reduces to $$l\ge k$$, as desired. If $$T$$ is a tree on two vertices, each of the vertices has degree 1. If $$T$$ has at least three vertices it must have a vertex of degree $$k\ge 2$$, since otherwise $$2(n-1)=\sum_{i=1}^n d_i = n$$, which implies $$n=2$$. Hence it has at least $$k\ge2$$ pendant vertices. $$\square$$ Trees are quite useful in their own right, but also for the study of general graphs. Definition: spanning trees If $$G$$ is a connected graph on $$n$$ vertices, a spanning tree for $$G$$ is a subgraph of $$G$$ that is a tree on $$n$$ vertices. Theorem 5.5.8 Every connected graph has a spanning tree. Proof By induction on the number of edges. If $$G$$ is connected and has zero edges, it is a single vertex, so $$G$$ is already a tree. Now suppose $$G$$ has $$m\ge1$$ edges. If $$G$$ is a tree, it is its own spanning tree. Otherwise, $$G$$ contains a cycle; remove one edge of this cycle. The resulting graph $$G'$$ is still connected and has fewer edges, so it has a spanning tree; this is also a spanning tree for $$G$$. $$\square$$ In general, spanning trees are not unique, that is, a graph may have many spanning trees. It is possible for some edges to be in every spanning tree even if there are multiple spanning trees. 
For example, any pendant edge must be in every spanning tree, as must any edge whose removal disconnects the graph (such an edge is called a bridge). Corollary 5.5.9 If $$G$$ is connected, it has at least $$n-1$$ edges; moreover, it has exactly $$n-1$$ edges if and only if it is a tree. Proof If $$G$$ is connected, it has a spanning tree, which has $$n-1$$ edges, all of which are edges of $$G$$. If $$G$$ has $$n-1$$ edges, which must be the edges of its spanning tree, then $$G$$ is a tree. $$\square$$ Theorem 5.5.10 $$G$$ is a tree if and only if there is a unique path between any two vertices. Proof if: Since every two vertices are connected by a path, $$G$$ is connected. For a contradiction, suppose there is a cycle in $$G$$; then any two vertices on the cycle are connected by at least two distinct paths, a contradiction. only if: If $$G$$ is a tree it is connected, so between any two vertices there is at least one path. For a contradiction, suppose there are two different paths from $$v$$ to $$w$$: $$v=v_1,v_2,\ldots,v_k=w \quad\hbox{and}\quad v=w_1,w_2,\ldots,w_l=w.$$ Let $$i$$ be the smallest integer such that $$v_i\not= w_i$$. Then let $$j$$ be the smallest integer greater than or equal to $$i$$ such that $$w_j=v_m$$ for some $$m$$, which must be at least $$i$$. (Since $$w_l=v_k$$, such an $$m$$ must exist.) Then $$v_{i-1},v_i,\ldots,v_m=w_j,w_{j-1},\ldots,w_{i-1}=v_{i-1}$$ is a cycle in $$G$$, a contradiction. See figure 5.5.1. $$\square$$ Figure 5.5.1. Distinct paths imply the existence of a cycle. Definition 5.5.11 A cutpoint in a connected graph $$G$$ is a vertex whose removal disconnects the graph. Theorem 5.5.12 Every connected graph has a vertex that is not a cutpoint. Proof Remove a pendant vertex in a spanning tree for the graph. $$\square$$
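Two of the counting facts above, that a tree on $$n$$ vertices has $$n-1$$ edges (Theorem 5.5.5) and that a tree on at least two vertices has at least two pendant vertices (Theorem 5.5.6), are easy to confirm on a small example. The sketch below uses plain adjacency sets, with no graph library assumed:

```python
from collections import defaultdict

edges = [(0, 1), (1, 2), (1, 3), (3, 4), (3, 5)]  # a small tree on 6 vertices
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

n = len(adj)
assert len(edges) == n - 1  # Theorem 5.5.5: a tree has n - 1 edges

pendant = [v for v in adj if len(adj[v]) == 1]
assert len(pendant) >= 2    # Theorem 5.5.6: at least two pendant vertices
print(sorted(pendant))      # the degree-1 vertices: [0, 2, 4, 5]
```

This checks the theorems on one instance only, of course; the proofs above cover the general case.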
# 32096 in Words 32096 in words is written as thirty-two thousand and ninety-six. We can easily convert numbers into words using the place value system. For example, if you gave Rs. 32096 for charity, you can write it as “I have Rs. Thirty-two thousand and ninety-six for charity”. Also, 32096 is a cardinal number as it specifies the value. 32096 in Words: Thirty-two Thousand and Ninety-six. Thirty-two Thousand and Ninety-six in Numerical Form: 32096. ## How to Write 32096 in Words? As the number 32096 has five digits, the place value table up to 5 digits is given below. Ten-thousands Thousands Hundreds Tens Ones 3 2 0 9 6 The expanded form of 32096 is as follows: = 3 × Ten thousand + 2 × Thousand + 0 × Hundred + 9 × Ten + 6 × One = 3 × 10000 + 2 × 1000 + 0 × 100 + 9 × 10 + 6 × 1 = 30000 + 2000 + 90 + 6 = 32096 = Thirty-two thousand and ninety-six Hence, 32096 in words is thirty-two thousand and ninety-six. 32096 in words – Thirty-two thousand and ninety-six Is 32096 an odd number? – No Is 32096 an even number? – Yes Is 32096 a perfect square number? – No Is 32096 a perfect cube number? – No Is 32096 a prime number? – No Is 32096 a composite number? – Yes ## Frequently Asked Questions on 32096 in Words ### Write 32096 in words? 32096 in words is thirty-two thousand and ninety-six. ### Simplify 32000 + 96, and express it in words. Simplifying 32000 + 96, we get 32096. Hence, 32096 in words is thirty-two thousand and ninety-six. ### Is 32096 a prime number? No, 32096 is not a prime number.
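The place-value expansion shown above can be automated. The function below is an illustrative sketch of mine (the name is not from the page); it returns the non-zero place-value terms of a number, largest first:

```python
def expanded_form(n):
    """Return the non-zero place-value terms of n, largest first,
    e.g. 32096 -> [30000, 2000, 90, 6]."""
    digits = [int(d) for d in str(n)]
    places = [10 ** (len(digits) - 1 - i) for i in range(len(digits))]
    return [d * p for d, p in zip(digits, places) if d != 0]

print(expanded_form(32096))               # [30000, 2000, 90, 6]
assert sum(expanded_form(32096)) == 32096
```

Summing the terms recovers the original number, which is exactly the check performed in the "Simplify 32000 + 96" question.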
# Thread: Trig. Identities; Tricky one :S 1. ## Trig. Identities; Tricky one :S Prove: $\displaystyle tanU/(secU+1) = (secU-1)/tanU$ i tried this many times, but i cant seem to prove them. I really need help with this, plz if can prove it, plz show your steps b/c I want to learn to do it myself for later questions. Thx 2. $\displaystyle LHS = \frac{tanx}{secx+1} = \frac{\frac{sinx}{cosx}}{\frac{1}{cosx}+1} = \frac{sinx}{1+cosx} = LHS$ $\displaystyle RHS = \frac{secx-1}{tanx} = \frac{\frac{1}{cosx}-1}{\frac{sinx}{cosx}} = \frac{\frac{1-cosx}{cosx}}{\frac{sinx}{cosx}}=$ $\displaystyle = \frac{1-cosx}{sinx} * \color{red}\frac{(1+cosx)}{(1+cosx)} \color{black}= \frac{1-cos^2x}{sinx+sinxcosx} = \frac{sin^2x}{sinx(1+cosx)} = \frac{sinx}{1+cosx} = LHS$ The key for solving this and other similar identities is the red part. Hope that helps! 3. ## woah...what just happned? $\displaystyle LHS = \frac{tanx}{secx+1} = \frac{\frac{sinx}{cosx}}{\frac{1}{cosx}+1} = \frac{sinx}{1+cosx} = LHS$ $\displaystyle RHS = \frac{secx-1}{tanx} = \frac{\frac{1}{cosx}-1}{\frac{sinx}{cosx}} = \frac{\frac{1-cosx}{cosx}}{\frac{sinx}{cosx}}=$ Originally Posted by Referos $\displaystyle = \frac{1-cosx}{sinx} * \color{red}\frac{(1+cosx)}{(1+cosx)} \color{black}= \frac{1-cos^2x}{sinx+sinxcosx} = \frac{sin^2x}{sinx(1+cosx)} = \frac{sinx}{1+cosx} = LHS$ The key for solving this and other similar identities is the red part. Hope that helps! why would u just times it by 1+cosx? and would that not change the whole thing, and make it a different answer? so they will not equal each other. could you plz expain why you did that. 4. Originally Posted by 2shoes $\displaystyle LHS = \frac{tanx}{secx+1} = \frac{\frac{sinx}{cosx}}{\frac{1}{cosx}+1} = \frac{sinx}{1+cosx} = LHS$ $\displaystyle RHS = \frac{secx-1}{tanx} = \frac{\frac{1}{cosx}-1}{\frac{sinx}{cosx}} = \frac{\frac{1-cosx}{cosx}}{\frac{sinx}{cosx}}=$ why would u just times it by 1+cosx? 
and would that not change the whole thing, and make it a different answer? so they will not equal each other. could you plz expain why you did that. No, because $\displaystyle \frac{1 + \cos{x}}{1 + \cos{x}} = 1$, and multiplying anything by 1 leaves it unchanged. This is the trick that's used in mathematics all the time - multiplying by a cleverly disguised 1 to change how something looks. 5. ## Thx No, because $\displaystyle \frac{1 + \cos{x}}{1 + \cos{x}} = 1$, and multiplying anything by 1 leaves it unchanged. This is the trick that's used in mathematics all the time - multiplying by a cleverly disguised 1 to change how something looks. haha thx, nice trick, never saw it before...again thx a lot 6. Hello, 2shoes! Prove: .$\displaystyle \frac{\tan x}{\sec x+1} \:=\:\frac{\sec x-1}{\tan x}$ Multiply by $\displaystyle \frac{\sec x - 1}{\sec x - 1}$ . . $\displaystyle \frac{\tan x}{\sec x + 1}\cdot\frac{\sec x - 1}{\sec x - 1} \;=\;\frac{\tan x(\sec x - 1)}{\underbrace{\sec^2\!x - 1}_{\text{This is }\tan^2\!x}} \;=\;\frac{\tan x(\sec x-1)}{\tan^2\!x} \;=\;\frac{\sec x - 1}{\tan x}$
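Alongside the algebraic proofs in the thread, a quick numeric spot-check of the identity tan(u)/(sec(u)+1) = (sec(u)-1)/tan(u) can build confidence (it is not a proof). This is a sketch of mine, sampling points where neither tan(u) nor sec(u)+1 vanishes:

```python
from math import tan, cos, isclose

def sec(u):
    return 1 / cos(u)

# sample u values avoiding tan(u) = 0 and sec(u) + 1 = 0
for u in [0.3, 1.0, 2.0, -0.7, 4.0]:
    lhs = tan(u) / (sec(u) + 1)
    rhs = (sec(u) - 1) / tan(u)
    assert isclose(lhs, rhs, rel_tol=1e-9), (u, lhs, rhs)
print("identity holds at all sampled points")
```

Agreement at scattered points is consistent with both sides reducing to sin(u)/(1+cos(u)), as shown in the thread.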
# Direct Variation ### Popular Tutorials in Direct Variation • #### How Do You Find the Constant of Variation from a Direct Variation Equation? The constant of variation is the number that relates two variables that are directly proportional or inversely proportional to one another. Watch this tutorial to see how to find the constant of variation for a direct variation equation. Take a look! • #### How Do You Write an Equation for Direct Variation Given a Point? Looking for some practice with direct variation? Watch this tutorial, and get that practice! This tutorial shows you how to take given information and turn it into a direct variation equation. Then, see how to use that equation to find the value of one of the variables. • #### How Do You Write an Equation for Direct Variation from a Table? Looking for some practice with direct variation? Watch this tutorial, and get that practice! This tutorial shows you how to take a table of values and describe the relation using a direct variation equation. • #### What's the Constant of Variation? The constant of variation is the number that relates two variables that are directly proportional or inversely proportional to one another. But why is it called the constant of variation? This tutorial answers that question, so take a look! • #### What Does Direct Variation Look Like on a Graph? Want to know what a direct variation looks like graphically? Basically, it's a straight line that goes through the origin. To get a better picture, check out this tutorial! • #### How Do You Solve a Word Problem Using the Direct Variation Formula? Word problems allow you to see math in action! Take a look at this word problem involving an object's weight on Earth compared to its weight on the Moon. See how the formula for direct variation plays an important role in finding the solution. Then use that formula to see how much you would weigh on the Moon! • #### What's the Direct Variation or Direct Proportionality Formula? 
Ever heard of two things being directly proportional? Well, a good example is speed and distance. The bigger your speed, the farther you'll go over a given time period. So as one variable goes up, the other goes up too, and that's the idea of direct proportionality. But you can express direct proportionality using equations, and that's an important thing to do in algebra. See how to do that in the tutorial! • #### How Do You Use the Formula for Direct Variation? If two things are directly proportional, you can bet that you'll need to use the formula for direct variation to solve! In this tutorial, you'll see how to use the formula for direct variation to find the constant of variation and then solve for your answer.
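The procedure the tutorials describe, finding the constant of variation k in y = kx from one point and then using it to predict another value, can be sketched in a few lines. The Earth/Moon weights below are illustrative numbers of mine, not taken from the tutorials:

```python
def constant_of_variation(x, y):
    """k such that y = k * x (direct variation)."""
    return y / x

# Moon weight varies directly with Earth weight; suppose a
# 120 lb person weighs 20 lb on the Moon.
k = constant_of_variation(120, 20)
print(round(k, 4))        # ≈ 0.1667
print(round(k * 180, 1))  # predicted Moon weight of a 180 lb person: 30.0
```

Note the graph interpretation from the tutorials: y = kx is a straight line through the origin with slope k.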
## Conversion formula The conversion factor from liters to pints is 2.1133764099325, which means that 1 liter is equal to 2.1133764099325 pints: 1 L = 2.1133764099325 pt To convert 2 liters into pints we have to multiply 2 by the conversion factor in order to get the volume amount from liters to pints. We can also form a simple proportion to calculate the result: 1 L → 2.1133764099325 pt 2 L → V(pt) Solve the above proportion to obtain the volume V in pints: V(pt) = 2 L × 2.1133764099325 pt V(pt) = 4.2267528198649 pt The final result is: 2 L → 4.2267528198649 pt We conclude that 2 liters is equivalent to 4.2267528198649 pints: 2 liters = 4.2267528198649 pints ## Alternative conversion We can also convert by utilizing the inverse value of the conversion factor. In this case 1 pint is equal to 0.2365882375 × 2 liters. Another way is saying that 2 liters is equal to 1 ÷ 0.2365882375 pints. ## Approximate result For practical purposes we can round our final result to an approximate numerical value. We can say that two liters is approximately four point two two seven pints: 2 L ≅ 4.227 pt An alternative is also that one pint is approximately zero point two three seven times two liters. ## Conversion table ### liters to pints chart For quick reference purposes, below is the conversion table you can use to convert from liters to pints liters (L) pints (pt) 3 liters 6.34 pints 4 liters 8.454 pints 5 liters 10.567 pints 6 liters 12.68 pints 7 liters 14.794 pints 8 liters 16.907 pints 9 liters 19.02 pints 10 liters 21.134 pints 11 liters 23.247 pints 12 liters 25.361 pints
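The proportion above reduces to a one-line function. The sketch below starts from the exact US liquid-pint definition (1 pt = 0.473176473 L); the page's factor 2.1133764099325 agrees with the reciprocal of that definition to about seven decimal places, so results match the page's values to the precision shown:

```python
LITERS_PER_PINT = 0.473176473          # exact US liquid pint in liters
PINTS_PER_LITER = 1 / LITERS_PER_PINT  # ≈ 2.1133764

def liters_to_pints(liters):
    return liters * PINTS_PER_LITER

def pints_to_liters(pints):
    return pints * LITERS_PER_PINT

print(liters_to_pints(2))  # ≈ 4.2267528
```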
# Worksheet: Sound Wave Power and Intensity In this worksheet, we will practice relating sound intensity level in decibels to sound wave intensity, amplitude, and pressure changes that waves produce. Q1: The energy of a ripple on a pond is proportional to the amplitude squared. If the amplitude of the ripple is 0.100 cm at a distance from the source of 6.0 meters, what was the amplitude at a distance of 2.0 meters from the source? Q2: The low-frequency speaker of a stereo set has a surface area of and produces 1 W of acoustical power. What is the intensity at the speaker? If the speaker projects sound uniformly in all directions, at what distance from the speaker is the intensity 0.1 W/m2? Q3: A microphone receiving a pure sound tone is connected to an oscilloscope, producing a wave on its screen. The sound intensity is originally mW/m2 but is increased until the amplitude shown by the oscilloscope increases by . What is the intensity of the sound after the amplitude increases? • A W/m2 • B W/m2 • C W/m2 • D W/m2 • E W/m2 Q4: Ten cars in a circle at a boom box competition produce a 120 dB sound intensity level at the center of the circle. What is the average sound intensity level produced there by each stereo, assuming interference effects can be neglected? Q5: If a woman needs an amplification of times the threshold intensity to enable her to hear at all frequencies, what is her overall hearing loss in decibels? Q6: A tuning fork of frequency 250 Hz is struck. The sound intensity at a distance of 1.00 m from the fork is W/m2. What is the sound intensity in terms of at a distance of 4.00 m from the fork? • A • B • C • D • E At what distance from fork is the sound intensity W/m2? Q7: A sound wave in air producing a sound level of 0 dB at a frequency of 1,000 Hz corresponds to a maximum gauge pressure of atm. What is the maximum gauge pressure if the sound level is 60 dB? 
• A atm • B atm • C atm • D atm • E atm What is the maximum gauge pressure if the sound level is 120 dB? • A atm • B atm • C atm • D atm • E atm Q8: Suppose that the sound level from a source is 75.0 dB and then drops to 52.0 dB, with a frequency of 600 Hz. The air temperature is and the air density is 1.184 kg/m3. Determine the initial sound intensity. Determine final sound intensity. Determine the initial sound wave amplitude. Determine the final sound wave amplitude. Q9: Sound is more effectively transmitted by solid substances than through the air, and perceived sound is intensified if it is concentrated onto the small area of the eardrum. Sound is transmitted into a stethoscope 100 times as effectively as transmitted though air. What is the gain in decibels produced by a stethoscope that has a sound gathering area of 15.0 cm2 and concentrates the sound onto two eardrums with a total area of 0.900 cm2, with an efficiency of ? Q10: A sound has a level of 90.0 dB. What is the decibel level of a sound that is twice as intense? What is the decibel level of a sound that is one-fifth as intense? Q11: What is the intensity of a sound that has a level 4.00 dB lower than a sound of intensity W/m2? • A W/m2 • B W/m2 • C W/m2 • D W/m2 • E W/m2 Q12: An exposure to a sound intensity level of 170 dB for s may cause hearing damage. What energy in joules is transmitted to an eardrum of diameter 0.600 cm exposed to such a dose of sound? • A J • B J • C J • D J • E J Q13: The amplitude of a sound wave is measured in terms of its maximum gauge pressure. By what factor does the amplitude of a sound wave increase if the sound intensity level goes up by 25.0 dB? Q14: The threshold of human hearing is approximately −8.00 dB. What is the intensity of a sound at this level? (Take the threshold intensity of hearing is ) • A W/m2 • B W/m2 • C W/m2 • D W/m2 • E W/m2 Q15: A sound wave traveling in air has a pressure amplitude of 0.35 Pa. Find the intensity of the wave. 
Use a value of 343 m/s for the speed of sound. Take the density of the air to be 1.225 kg/m3. • A W/m2 • B W/m2 • C W/m2 • D W/m2 • E W/m2 Q16: A person has a hearing threshold 10 dB above normal at 100 Hz and 50 dB above normal at 4,000 Hz. How many times more intense must a 100 Hz tone be than a 4,000 Hz tone if they are both barely audible to this person? Q17: What is the sound intensity level produced by earphones that create an intensity of W/m2? Q18: What is the intensity of a 145 dB sound? Q19: If a large housefly that is 4.0 m away from you makes a 30.0 dB noise, what is the noise level of 1,000 flies at a 4.0 m distance, assuming interference has a negligible effect? Q20: The warning tag on a lawn mower states that it produces noise at a level of 91.0 dB. What is this in watts per meter squared? • A W/m2 • B W/m2 • C W/m2 • D W/m2 • E W/m2 Q21: Loudspeakers can produce intense sounds with surprisingly small energy input in spite of their low efficiencies. What is the power input needed to produce a 110.0 dB sound intensity level for a 6.000-centimeter-diameter speaker that has an efficiency of ? • A W • B W • C W • D W • E W Q22: Ultrasound of intensity W/m2 is produced by the rectangular head of a medical imaging device measuring 4.00 cm by 7.00 cm. What is its power output? Q23: A 2.50 m diameter university communications satellite dish receives TV signals that have a maximum electric field strength of 7.50 µV/m for one channel. What is the intensity of this wave? • A W/m2 • B W/m2 • C W/m2 • D W/m2 • E W/m2 What is the power received by the antenna? • A W • B W • C W • D W • E W If the orbiting satellite broadcasts uniformly over an area of m2 (a large fraction of North America), how much power does it radiate? Q24: A sound wave traveling in air has a pressure amplitude of 0.50 dB. What intensity level does this correspond to?
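Most of the decibel questions above rely on the standard relation between intensity and sound intensity level, β = 10 log₁₀(I/I₀) with I₀ = 10⁻¹² W/m². A small sketch of both directions of that conversion:

```python
from math import log10

I0 = 1e-12  # threshold of hearing, W/m^2

def level_db(intensity):
    """Sound intensity level in dB for an intensity in W/m^2."""
    return 10 * log10(intensity / I0)

def intensity_from_db(beta):
    """Intensity in W/m^2 for a sound intensity level in dB."""
    return I0 * 10 ** (beta / 10)

print(level_db(1e-12))         # 0.0 dB at the threshold of hearing
print(intensity_from_db(120))  # ≈ 1.0 W/m^2 for a 120 dB sound
```

Doubling the intensity adds 10·log₁₀(2) ≈ 3.0 dB, which is the idea behind Q10.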
# Unit 2 ## Electricity and Thermal Physics Solutions to Assessment Questions 1 Resistance Power Current 1 R 4 [5] [4] [2] ## Since in parallel, same potential difference across each resistor Total current I = I1 + I2 + I3 ## so that V/R = V/R1 + V/R2 + V/R3 [2] [3] First network: 1 R = 4 10 = 2.5 Second network: 1 R = 10 + (2 10 ) + 10 = 25 Third network: 1 R = 2 (10 + 10 ) = 10 Ammeter reading = 25 mA (as current splits equally ... both paths have same resistance) ## Voltmeter V1 = IbranchRbranch = 0.025 A 10 1 1 = 0.25 V Voltmeter V2 = ItotalRtotal = 0.050 A 25 1 1 = 1.25 V ## As resistance decreases, the current (or ammeter reading) increases At 20C, R = 1.4(2) k = 4.2 103 [3] 103 ## NAS Physics Teachers Guide 2005 Nelson Thornes Ltd. [2] 1 1 [5] [3]
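The worked resistor networks above are badly garbled by OCR, but the relation they start from, 1/R = 1/R1 + 1/R2 + ..., together with the quoted answers (2.5 Ω, 25 Ω, 10 Ω) suggests they are combinations of 10 Ω resistors. The sketch below reconstructs those three networks under that assumption; treat the circuit layouts as my reading of the garbled text, not as the original figures:

```python
def parallel(*rs):
    """Equivalent resistance of resistors in parallel: 1/R = sum(1/Ri)."""
    return 1 / sum(1 / r for r in rs)

print(parallel(10, 10, 10, 10))    # four 10 Ω in parallel -> 2.5 Ω
print(10 + parallel(10, 10) + 10)  # 10 Ω + (10 Ω ∥ 10 Ω) + 10 Ω -> 25.0 Ω
print(parallel(10 + 10, 10 + 10))  # two series pairs in parallel -> 10.0 Ω
```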
lamp prevents short circuit lamp means that there is still some resistance in circuit lamp prevents current from becoming too large lamp prevents large current damaging ammeter 10 p = F/A F per wheel = 12 000 N/4 = 3000 N Area of contact = F/p = 3000 N/(3.0 105 N m2) = 0.01 m2 ## Since p/T is a constant p2 = p1T2/T1 = 3.0 = 3.2 105 105 [2] 1 N m2 303 K/(283 K) m2 1 1 Axes labelled pressure and area or Axes labelled pressure and 1/area ## Downward line or Line with positive gradient Concave curve not touching either axis or Straight line through origin 11 Temperature of gas [3] [3] [2] 31 [3] ## Diagram showing any three from: Trapped gas Scale to measure volume Method of varying pressure Instrument for measuring the pressure Record pressure and volume ## Plot p against 1/V or Plot V against 1/p 12 Brownian motion of smoke in air or Diffusion of a coloured gas with another gas ## Particles subjected to collisions from air molecules or Mixture occurs due to random motion Change of state to gas involves a large increase in volume or Diffusion faster in gases Implies molecules are further apart or More space for molecules to pass through ## NAS Physics Teachers Guide 2005 Nelson Thornes Ltd. 
[2] [2] [2] Unit 2 Electricity and Thermal Physics Solutions to Assessment Questions 13 Polished underside: reflected downwards ## Polished upper surface: Reduces loss of thermal energy ## to the air above the hood P = 14.4 kW = 14.4 103 t = 16 h = 16 60 60 s = 57 600 s W = Pt = 14.4 103 [4] W 57 600 s [3] = 0.55 8.3 108 J = 4.6 108 J ## 14 Q = mcT = 0.70 kg 4200 J kg1 K1 (100 20) K = 2.35 105 (J) = 235 (kJ) [3] [2] 1 1 [2] P = Q/t t = Q/P 1 = 2.35 105 J/(2.2 103 W) = 107 s 1 1 Rate of temperature rise is initially slow, then it increases and then it decreases [3] [3] ## Temperature reaches 100C after 144 s Efficiency = 107 s/(144 s) 1 1 = 0.74 [2] ## Any two from: Energy flows out at same rate due to heat pump (or motor) 21 3] U = 0 ## as the temperature of the contents remains constant Q is the net energy flow into (or out of) the contents ## because of any temperature differences [2] No work is done on (or by) the contents or W must be zero since both U and Q are zero [1] ## NAS Physics Teachers Guide 2005 Nelson Thornes Ltd. 
[2] Unit 2 Electricity and Thermal Physics Solutions to Assessment Questions 16 A device that uses some of the (thermal) energy that flows ## from a hot source to a cold sink to do mechanical work = 0.57 [3] [3] 21 [2] ## Any two from: Work done against friction Loss of energy because of convection through gap at side of paddle Loss of energy to (stone) floor Loss of energy to cook food Loss of energy to heat paddle For an ideal gas, kinetic energy T [2] 21 [2] 21 [2] 11 [1] ## A1 Any two from: No background lighting or No street lighting or No light pollution Less dust or Clearer air Less twinkling due to refraction (or density variations) A2 Any two similarities from: Both penetrate Earths atmosphere Both travel at same speed Both travel through vacuum Both are transverse waves Both are electromagnetic waves Any one difference from: They have different wavelengths (or frequencies) Light is scattered by dust in the atmosphere while radio waves are not A3 max = 2.9 103 m K/T For Ori, max = 2.9 103 m K/(11 000 K) = 2.6 107 ## For Cet, max = 2.9 P= 1 1 m = 260 nm 103 m K/(3600 K) = 8.1 107 m = 810 nm [3] AT4 = 8.3 108 (W m2) 1 1 ## Cet graph peaks at ~810 nm Area under Ori graph >> area under Cet graph (at least 4 peak height) Ori overlaps with blue end of spectrum while Cet overlaps with red ## NAS Physics Teachers Guide 2005 Nelson Thornes Ltd. 
[2] [3] [2] Unit 2 Electricity and Thermal Physics Solutions to Assessment Questions A4 A light year is the distance travelled by light in one year 1 year = (365 24 60 60) s = 3.15 107 [1] ## Distance = speed time = 3.00 108 m s1 3.15 107 s = 9.5 1015 (m) [2] 41 [4] [1] [1] Any four of the following shown clearly on diagram (see Figure A8(b) on page 65): Light from distant star labelled (distant can be implied by parallel lines) Light from nearby star Earth orbiting the Sun or Earth shown in January-June positions Angles and labelled Tan [( )/2] = 1 AU/D For distance stars ( ) is too small to measure A5 All 3 stars correctly marked [xA upper right, xB middle bottom, xC slightly left of middle on main sequence] A is a red giant B is a white dwarf R = (2.7 1024 m2/4) = 4.6 1011 [3] [5] 51 [5] ## A6 Any five from: Quality of written communication A star which suddenly becomes very bright (or luminous) Fusion (or Hydrogen burning) ceases Collapse of core or Collapse of star or Implosion Outer layers bounce off core or Idea of a shock wave Blowing away outer layers Neutron star 1 Black hole 1 [2] ## B1 Stress = F/A = 8.0 N/(1.5 107 m2) = 5.3 107 Pa (or N m2) [2] Extension = 0.67 mm Strain = x/l = 0.67 103 = 2.6 104 m/(2.6 m) 1 107 Pa/(2.6 104) = 2.1 1011 Pa ## NAS Physics Teachers Guide 2005 Nelson Thornes Ltd. 
[4] Unit 2 Electricity and Thermal Physics Solutions to Assessment Questions 1 W= 1 Fx 2 1  2 4.8 N 0.4 = 9.6 104 103 Approximately 1  2 as steep ## Hookes law marked up to a stress of 2.8 1010 m2 1 Energy density = 2 stress strain 1 = 2 2.5 108 N m2 0.02 = 2.5 106 J m3 Volume of wire = Al = 8.8 107 m2 2.5 m = 2.2 106 m3 2.2 106 m3 [2] 1010 [3] [2] [3] [1] 1 1 106 m3 = 5.5 J ## B3 Bubbles represent atoms (or ions) Dislocation identified by diagonal line (or circle) 1 1 [4] [1] [1] ## B4 Drawing of horizontal concrete beam with: Compression region labelled along inside of top surface (allow just above) Tension region labelled along inside of bottom surface (allow just below) [2] 31 [3] ## Any three from: Quality of written communication Concrete is strong in compression but weak in tension Cracks in upper surface tend to close up or Cracks in lower surface tend to widen Cracks propagate across beam leading to fracture or No dislocations in concrete to blunt cracks B5 Atom Molecule/chain of atoms ## Spaghetti-like arrangement with more than one strand Individual strand labelled molecule (or chain of atoms) or Blob labelled atom ## Thermoplastic softens on heating or Thermoplastic can be remoulded or Thermoplastic melts ## Thermoset does not soften on heating or Thermoset decomposes (or burns) [2] Perspex is a thermoplastic [1] ## NAS Physics Teachers Guide 2005 Nelson Thornes Ltd. 
Unit 2 Electricity and Thermal Physics: Solutions to Assessment Questions

## B6
Material A is weak and stiff [2]
Material A is polythene, material B is nylon [2]
Material C is CFRP [1]

## B7
Fibre composite consists of strands of something inside another material; laminate consists of layers of different materials [3]

Any three from (quality of written communication):
Crack grows (or moves, or travels) until it reaches the boundary between layers or until it reaches the matrix material
Tip of crack blunted as it spreads along boundary, or tip of crack blunted as matrix yields [3]

## C1
r = r0 A^(1/3), so r is proportional to A^(1/3)
r_Ag / r_N = (108/14)^(1/3) = 7.71^(1/3) = 1.98 [3]

## C2
Any one similarity from:
Both nuclear decay products
Both charged, or both ionise (or damage) tissue
Both have momentum
Both deflected by electric (or magnetic) fields [1]

Any two differences from:
Beta is a fundamental particle, alpha is not
Mass of alpha much greater than mass of beta
Alpha is positive; beta can be either positive or negative
Alpha is a helium ion; beta is either an electron or a positron
Beta is a lepton, alpha is composed of hadrons [2]

## C3
Delta-m = 1.008 665 u − (1.007 276 u + 0.000 549 u) = 8.4 × 10⁻⁴ u
E = 8.4 × 10⁻⁴ u × 930 MeV u⁻¹ = 0.78 MeV [3]
Total final momentum = total initial momentum = 0, or momentum of proton is equal (and opposite) to momentum of electron
Mass of proton >> mass of electron (so electron will move much faster)
See Figure NP11 on page 93: shape roughly correct; cuts KE axis at 0.78 MeV [3]

Any three from (quality of written communication):
Energy per decay is constant
All beta particles should have the same energy
Beta particles have a range of energies, so some other particle must take the missing energy [3]

## C4
Same mass
Different charge, or different baryon number, or one consists of quarks, the other of antiquarks [2]

Any two pairs from: e⁻ and e⁺, μ⁻ and μ⁺ [2]
They annihilate, or a burst of energy (or photons or gamma rays) is produced [1]

## C5
Neutral [1]
uud: charge = (+2/3) + (+2/3) + (−1/3) = +1 [2]
They annihilate [1]
(i) sd or ds (ii) cd or cs [2]
(iii) uds or uss or cds or css [3]

## C6
Δ++ interaction: charge is conserved, so possible [2]
Gluon [1]
p is uud; n is udd

NAS Physics Teachers Guide 2005 Nelson Thornes Ltd.

## D1
Iodine-123 [1]
Iodine-131 emits β [1]
Only lets through rays travelling perpendicular to it, or gets rid of scattered (or reflected) rays [2]
Amount transferred to thermal energy = 99.2% of 65 × 10³ V × 0.12 A = 7740 W [3]
Rotating anode, or anode cooled with circulating fluid, or anode part of large copper block [1]
Diagnosis: can distinguish bones from flesh, or gives good contrast [3]

## D4
For diagnosis: typical accelerating voltage = 100 kV
Strongly dependent on proton number
For therapy: typical accelerating voltage = 1 MV
so dose concentrated at a particular place, or to remove low-energy X-rays while giving less to surrounding tissue, or which increase dosage without assisting the therapy [3]

## D5
Any three from (quality of written communication):
The higher the frequency, the shorter the wavelength
High frequency gives better resolution of the (structure of the) eye (as less diffraction)
Low frequency is more penetrating, so better for abdomen [3]
e.g. 0.5 mm (< 2 mm, i.e. smaller than pupil diameter, to reduce diffraction)
f = c/λ = 1.5 × 10³ m s⁻¹ / (0.5 × 10⁻³ m) = 3 × 10⁶ Hz [3]
Two peaks, separated by 30 small squares [2]
So that reflected pulse returns well before a new pulse is transmitted [1]

## D6
kg m⁻³ (unit of density) × m s⁻¹ (unit of speed) = kg m⁻² s⁻¹ [2]
Any one from list in D7 [1]

Any five from (quality of written communication):
(Z₂ − Z₁)²/(Z₂ + Z₁)² used, or appreciation that difference in Zs is very large, to give 0.999
Most (or all, or 99.9%) of ultrasound is reflected at boundary between (air in) lungs and tissue [5]

## D7
Any two from:
Can measure depth, or can produce a 3-D image
Does not destroy tissue (or cells), or less damaging to tissue (or cells)
Can distinguish different types of soft tissue
Allows real-time imaging, or can be used to investigate moving surfaces [2]
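The arithmetic in solutions C1 and C3 above can be spot-checked with a short script (an added check, not part of the mark scheme; it uses the 930 MeV per u conversion quoted in the solutions):

```python
# Check of C1 and C3 arithmetic from the solutions above.
ratio = (108 / 14) ** (1 / 3)          # r_Ag / r_N with r = r0 * A**(1/3)
print(round(ratio, 2))                  # -> 1.98

dm = 1.008665 - (1.007276 + 0.000549)   # mass difference in u
E = dm * 930                            # MeV, using 930 MeV per u as in the solutions
print(round(E, 2))                      # -> 0.78
```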
# Complex Conjugate Proof

Posted on by Miles Steele

We will set out to prove the equation `(a - b)* = a* - b*` where `*` means complex conjugate. As a refresher, the complex conjugate of a complex number is the same number but with a negated imaginary component. For example, the complex conjugate of `4 + 3i` is `4 - 3i`. We will do this by exploring through programming.

First, we will tell python what a complex number is by creating a class to store the real and imaginary components, and we will teach it to manipulate those numbers, with operations such as addition and subtraction implemented as methods on the complex number class.

Python actually has support for complex numbers built in. This is fantastic, but we are not going to use its built in support. The reason is two-fold. First, it is fun to see the inner workings of the complex number class. Secondly, writing our own will make it easier to debug, understand, and ensure the validity of the funny business we will be doing later, which will involve shoveling things that are not quite numbers through the complex number class machinery.

You can grab a copy of the python source for this at any point, or clone the github repository.

```
class ComplexNumber(object):
    """
    Class for manipulating complex numbers.
    This example is limited to things which might be useful for the
    exercise and thus does not include multiplication or division.
    """
    def __init__(self, real, imag):
        """ Complex numbers have a real and imaginary part. """
        self.real = real
        self.imag = imag

    def conj(self):
        """
        Complex conjugate of the number.
        """
        return ComplexNumber(self.real, -self.imag)

    def __add__(left, right):
        """ Addition operator: x + y """
        return ComplexNumber(left.real + right.real, left.imag + right.imag)

    def __sub__(left, right):
        """ Subtraction operator: x - y """
        return left + (-right)

    def __pos__(self):
        """ Unary plus operator: (+x) """
        return self

    def __neg__(self):
        """ Unary minus operator: -x """
        return ComplexNumber(-self.real, -self.imag)

    def __eq__(left, right):
        """ Equality testing: x == y """
        return (left.real == right.real) and (left.imag == right.imag)

    def __ne__(left, right):
        """ Inequality testing: x != y """
        return not left == right

    def __repr__(self):
        """
        This tells python how to represent ComplexNumbers as strings.
        This way, when we print a ComplexNumber, it shows something nice
        like "8 + 3i"
        """
        return "{} + {}i".format(self.real, self.imag)
```

## Numerical Tests

Now that we have our complex number class, let’s test it out on a few examples.

```
print "\nComplex number examples:"
x = ComplexNumber(8, 5)
y = ComplexNumber(3, 2)
print x                          # -> 8 + 5i
print y                          # -> 3 + 2i
print -x                         # -> -8 + -5i
print x + y                      # -> 11 + 7i
print x - y                      # -> 5 + 3i
print -y + x                     # -> 5 + 3i
print x == y                     # -> False
print x == ComplexNumber(8, 5)   # -> True
print x.conj()                   # -> 8 + -5i
```

Looks good so far. The printing for negative imaginary components is a little funky, but it’s readable enough. Now let’s try the equation in question with a few examples. Remember, the equation we’re testing is `(a - b)* = a* - b*`.

```
print "\nEquation example:"
a = ComplexNumber(8, 5)
b = ComplexNumber(3, 2)
# left side of the equation
print (a - b).conj()                          # -> 5 + -3i
# right side of the equation
print a.conj() - b.conj()                     # -> 5 + -3i
print (a - b).conj() == a.conj() - b.conj()   # -> True
```

Great, it looks like the equation holds for those values. How about a few more?
```
print "\nMore equation examples:"
az = [ComplexNumber(8, 5),
      ComplexNumber(-2, 3),
      ComplexNumber(-9, -3),
      ComplexNumber(1027, -304)]
bz = [ComplexNumber(9, -2),
      ComplexNumber(8, 4),
      ComplexNumber(6, 9),
      ComplexNumber(0, 0)]
for a, b in zip(az, bz):
    print (a - b).conj() == a.conj() - b.conj()  # -> True every time!
```

But this does not prove the equation true, you say. Well, what if we try 10000 different randomly generated examples?

```
print "\nRandomly generated examples:"
from random import randint
for _ in xrange(10000):
    a = ComplexNumber(randint(-1000, 1000), randint(-1000, 1000))
    b = ComplexNumber(randint(-1000, 1000), randint(-1000, 1000))
    truth = (a - b).conj() == a.conj() - b.conj()  # -> ... True
    if not truth:
        print "It was false!"
print "\nIf nothing to the contrary is printed above, "\
    + "then all the examples checked out."
```

Those all worked! At least for me. You can give it a shot if you want. The equation seems to hold up. It seems very unlikely for there to be holes in the coverage of this equation. Is that good enough? No? Ok. Well let’s try to show it more generally then.

## A General Approach

In order to prove the equation generally, we will have to stop plugging in actual complex numbers. Every time we plug in an actual complex number, we doom ourselves to a loss of generality. But does that mean that all of our hard work in creating the machinery of the ComplexNumber class will go to waste? Certainly not.

We will now create a class to represent something that is not quite a number. It will behave a lot like a number, but without ever having an actual value. We will call such nebulous artifacts “variables” for now. I’m not sure that is quite the right word; you can decide for yourself. We will put Variables inside ComplexNumbers as real and imaginary components. So Variables will need to be able to do everything that a number does.

```
class Variable(object):
    """
    Variables are like numbers, but they have no value.
    Variables must be able to do everything that numbers can do.

    Each Variable we create will have its own unique identity. It will be
    distinct from every other Variable that exists. However, we must make
    this so WITHOUT assigning a value to the variable.
    """
    def __init__(self):
        """
        Variables have an empty constructor because they do not have any
        value to remember.
        """
        pass

    def __add__(left, right):
        """ The sum of two variables is an object representing just that. """
        return VariableSum(left, right)

    def __sub__(left, right):
        """
        The difference of two variables is just the sum where the right
        one is negated. (a - b) = (a + (-b))
        """
        return VariableSum(left, -right)

    def __pos__(self):
        """ +v is the same as v """
        return self

    def __neg__(self):
        """ -v is the negated version of the variable v """
        return NegatedVariable(self)

    def __eq__(left, right):
        """
        How should we determine whether two variables are equal?

        Well first off, the simple case: if both the left and right sides
        of our comparison are the same variable then they should be equal.
        However, if the left and right are different variables, they could,
        in some world, have the same value. So it wouldn't be quite fair to
        return False for such a comparison. For this sort of ambiguous
        answer, we will just throw an exception and hope this never happens
        in our proof.

        Another complication arises when we run across a comparison between
        a variable and the negated form of some variable. These could have
        the same value, but we can't be sure, even if they are the positive
        and negated form of the same variable. So, here we will throw an
        exception as well.
        """
        if isinstance(right, NegatedVariable):
            # The right variable is negated, but the left is not.
            # This is an ambiguous comparison.
            # The values of the variables could be equal, but there is no way
            # for us to answer definitively with a True or False
            raise RuntimeError("Ambiguous Equality")
        else:
            # Both variables are non-negated.
            # The "is" comparison tests whether the left and right sides
            # are the same instance of the variable class.
            return left is right

    def __ne__(left, right):
        return not (left == right)
```

Whew, that’s pretty weird. We seem to have referenced a bunch of classes in the methods of Variable which do not exist yet. Let’s go ahead and create those classes.

```
class VariableSum(object):
    """ A VariableSum represents the result of adding two Variables """
    def __init__(self, left, right):
        """ A variable sum stores the left and right elements in the addition """
        self.left = left
        self.right = right

    def __neg__(self):
        """ To negate a sum like -(x + y), just negate its parts (-x + -y) """
        return VariableSum(-self.left, -self.right)

    def __eq__(left, right):
        """
        Check whether a sum of variables is the same as another sum of
        variables.
        """
        # Yes, this line is confusing.
        if left.left == right.left and left.right == right.right:
            return True
        else:
            # There are other ways that sums could be equal.
            # For example, (a + b) = (b + a)
            # We are not going to implement every equality comparison, so
            # instead we will throw a NotImplementedError to indicate that
            # if you wanted to use a comparison that currently throws an
            # error in your proof, then you might consider writing more
            # code here.
            raise NotImplementedError("Comparison not fully implemented.")

    def __ne__(left, right):
        return not (left == right)
```

```
class NegatedVariable(Variable):
    """
    Notice that Variable appears inside the parentheses instead of object.
    This denotes that a NegatedVariable is really a kind of Variable.
    Technically, NegatedVariable inherits all of the methods of the
    Variable class. So NegatedVariables know how to do all the same tricks,
    like participating in addition and subtraction. The exception is
    negation of a NegatedVariable. NegatedVariable has its own special
    definition of negation, which you will see below.
    """
    def __init__(self, variable):
        """
        A NegatedVariable must remember what it is the negation of.
        """
        self.variable = variable

    def __neg__(self):
        """ A twice negated variable is just the original variable. -(-v) = v """
        return self.variable

    def __eq__(left, right):
        if isinstance(right, NegatedVariable):
            # Both variables are negated.
            return -left == -right
        else:
            # The left variable is negated, but the right is not.
            # This is an ambiguous comparison.
            # The values of the variables could be equal, but there is no way
            # for us to answer definitively with a True or False
            raise RuntimeError("Ambiguous Equality")

    def __ne__(left, right):
        return not (left == right)
```

## Final Test

Now that all of the pieces are in place we should be able to test whether `(a - b)*` really does equal `a* - b*`. We will compose the complex numbers `a` and `b` from two variables each: one for the real component, and another for the imaginary component.

```
print "\nGeneral evaluation:"
a = ComplexNumber(Variable(), Variable())
b = ComplexNumber(Variable(), Variable())
print (a - b).conj() == a.conj() - b.conj()  # -> True
```

It’s true! We have shown that `(a - b)* = a* - b*`. But don’t take my word for it, you can run this program yourself. Here is the python source. That’s assuming there were no bugs in the program. Did you see any bugs? Fix them or play with the code on github.
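As a quick cross-check (an addition to the post, not part of it), Python's built-in complex type, which we deliberately avoided above, agrees on the same identity:

```python
# Spot-check with Python's built-in complex type; conjugate() is the
# built-in method, independent of the ComplexNumber class above.
a = 8 + 5j
b = 3 + 2j
print((a - b).conjugate() == a.conjugate() - b.conjugate())  # -> True
```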
# A spring with a constant of 5 (kg)/s^2 is lying on the ground with one end attached to a wall. An object with a mass of 6 kg and speed of 6 m/s collides with and compresses the spring until it stops moving. How much will the spring compress?

Jan 11, 2018

6.57 meters

#### Explanation:

Let's assume that after the collision the spring gets compressed by x meters. By the law of conservation of energy, the kinetic energy of the object is stored as elastic potential energy of the spring, i.e.

$\frac{1}{2} m {v}^{2} = \frac{1}{2} k {x}^{2}$

where k is the spring constant, m is the mass of the object, and v is the velocity with which it hit the spring.

Solving the equation for x gives $x = \sqrt{\frac{m {v}^{2}}{k}} = \sqrt{\frac{6 \times {6}^{2}}{5}} \approx 6.57$ meters.
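The rearrangement above, x = √(mv²/k), can be evaluated directly (a quick check of the arithmetic, using the values from the problem):

```python
import math

m, v, k = 6.0, 6.0, 5.0        # kg, m/s, kg/s^2 (i.e. N/m)
x = math.sqrt(m * v**2 / k)    # from (1/2)mv^2 = (1/2)kx^2
print(round(x, 2))             # -> 6.57
```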
# How Do You Derive the Surface Area Formula for a Sphere Using Integration?

• sanitykey

In summary, the conversation is discussing the method for finding the general formula for the surface area of a sphere using integrals. The conversation covers the use of Cartesian and polar coordinates, as well as the correct limits for integration and the necessary trigonometric substitutions. The final solution is given as the integral of r^2sin(\phi) with the limits \theta from 0 to 2\pi and \phi from 0 to \pi.

sanitykey

I've been looking at this method here: http://planetmath.org/encyclopedia/6668.html

I was wondering at the last step before the "Note on multi-valuedness" if you wanted to obtain the general formula for the surface area of a sphere $$4 \times \pi \times r^2$$ with a radius of well r what limits would you use for each of the integrations?

Well i say each of the integrations it looks like only one there (only one integration sign) but with the dx and dy after it does that mean it can be split up into two integrations? If so I don't understand where the $$\pi$$ comes from? I mean I'm guessing the f(x,y) and z cancel leaving just the r is that right?

Sorry if this is a silly question!

Yes, it's a double integral over the region x^2+y^2<=r^2 in the xy plane. You can figure out what the bounds are. If all you want is the area, just take f=1 (or f=2 to account for the upper and lower hemispheres). The pi will come from doing the integral, which will involve a trig substitution.

Oh i sort of get it, not just randomly, it's because of your reply (thanks :D) just drew a quick sketch i understand why x = rcos(a) and y = rsin(a) unless those aren't right in which case i don't understand it :P

I tried putting the first integral limits as r and 0 and the second integral limits as r and $$(r^2 - y^2)^\frac{1}{2}$$ with the second one being dx and the first being dy and then i tried substituting y = rsin(a) but yeah hasn't quite worked out yet.
But is that sort of close to the right track?

Well, either use cartesian (x,y) or polar coordinates, not both. If you're going to use cartesian coordinates, then you're close with the limits you have there, but not quite right. y ranging between 0 and r will only cover half the circle, and x ranging from $\sqrt{r^2-y^2}$ to r will cover a region outside the circle (specifically, right now you're integrating over the half of the region outside the circle and inside a square that it's inscribed in). On the other hand, if you want to do polar coordinates, you'll have to use the jacobian for the transformation, as they mention in that article.

Sorry to be such a bother i just can't figure out what the limits are here's what i did:

$$\int\int r dxdy$$

$$\int r dx$$ upper limit = $$\sqrt{r^2-y^2}$$ lower limit = 0

$$\int \sqrt{r^2-y^2} \times r dy$$ upper limit = r lower limit = 0

Which comes out with an answer of $\frac{1}{4} \times \pi \times r^3$ i think. I know if these were the right limits that this would only cover part of the sphere so i'd have to multiply by some number (16? shouldn't it be 8?) to get it over the whole sphere but the $r^3$ is throwing me. I mean if i only want the surface area of this part of the sphere are these limits better or am i still outside the circle? Thanks for your continued support :)

Projecting the sphere $x^2+ y^2+ z^2= r^2$ into the xy-plane, z= 0, gives the circle $x^2+ y^2= r^2$. The point of the limits of integration is to "cover" that disk. If you want to use x as the outer variable of integration then x will have to vary from -r to r. For each x, then, since $x^2+ y^2= r^2$, $y= \pm\sqrt{r^2- x^2}$. The limits you give will only cover the first quadrant of the circle (because of symmetry you can multiply by 4.) The integral will be

$$\int_{x=-r}^r\int_{y= -\sqrt{r^2- x^2}}^{\sqrt{r^2- x^2}} f(x,y)dydx$$

Of course, you have to have the correct "differential of surface area".
One way to do that is this: Think of $F(x,y,z)= x^2+ y^2+ z^2= r^2$ as a level surface for the function F(x,y,z). Then the gradient, $\nabla F(x,y,z)= 2x\vec{i}+ 2y\vec{j}+ 2z\vec{k}$, is a vector perpendicular to the sphere at each point and is a "vector differential of area". Since you want to integrate in the xy-plane, "normalize" by making the z component 1: divide the vector by 2z to get $\frac{x}{z}\vec{i}+ \frac{y}{z}\vec{j}+ \vec{k}$. Now find the length of that:

$$\sqrt{\frac{x^2}{z^2}+ \frac{y^2}{z^2}+ 1}= \sqrt{\frac{x^2+ y^2+ z^2}{z^2}}= \frac{r}{z}$$

The differential of surface area is $\frac{r}{z}dydx$. Since $z= \pm\sqrt{r^2- x^2- y^2}$, use the positive z and multiply by 2:

$$2\int_{x=-r}^r\int_{y=-\sqrt{r^2- x^2}}^{\sqrt{r^2-x^2}}\frac{r}{\sqrt{r^2- x^2- y^2}} dydx$$

You are going to need a couple of complicated trig substitutions to do that. (Added: Well, one complicated trig substitution and then everything reduces nicely!)

A better way is to use parametric equations for the surface of the sphere. Use polar coordinates with $\rho$ set to the constant r: $x= rcos(\theta)sin(\phi)$, $y= rsin(\theta)sin(\phi)$, $z= rcos(\phi)$. Then the position vector of a point on the surface of the sphere is

$$rcos(\theta)sin(\phi)\vec{i}+rsin(\theta)sin(\phi)\vec{j}+rcos(\phi)\vec{k}$$

Differentiate that with respect to $\theta$:

$$-rsin(\theta)sin(\phi)\vec{i}+rcos(\theta)sin(\phi)\vec{j}$$

and with respect to $\phi$:

$$rcos(\theta)cos(\phi)\vec{i}+ rsin(\theta)cos(\phi)\vec{j}- rsin(\phi)\vec{k}$$

The "fundamental vector product" of the surface is the cross product of those two vectors:

$$r^2cos(\theta)sin^2(\phi)\vec{i}+r^2sin(\theta)sin^2(\phi)\vec{j}+r^2sin(\phi)cos(\phi)\vec{k}$$

and the length of that is $r^2sin(\phi)$. The "differential of surface area" in those parameters is $r^2sin(\phi)d\theta d\phi$. To cover the entire surface, $\theta$ must vary from 0 to $2\pi$ and $\phi$ must vary from 0 to $\pi$.
The surface area is: $$r^2\int_{\theta= 0}^{2\pi}\int_{\phi=0}^\pi sin(\phi) d\phi d\theta$$ a much simpler integral. Thanks for your reply i just followed through your steps and although I'm not very good at visualising i can understand what you've shown me :D Again thanks StatusX and HallsofIvy :) ## What is the formula for calculating the surface area of a sphere? The formula for finding the surface area of a sphere is 4πr², where r is the radius of the sphere. ## How do you measure the radius of a sphere? The radius of a sphere can be measured by taking the distance from the center of the sphere to any point on its surface. ## Can the surface area of a sphere be calculated if only the diameter is given? Yes, the radius can be found by dividing the diameter by 2. Then, the surface area formula of 4πr² can be used to calculate the surface area. ## Why is the surface area of a sphere important in science? The surface area of a sphere is important in science because it helps us understand the properties and behavior of objects in the natural world. It is also used in various mathematical calculations and equations. ## Are there any real-life applications of the surface area of a sphere? Yes, the surface area of a sphere is used in many real-life applications such as calculating the volume of a spherical container, determining the amount of paint needed to cover a spherical object, and even in the design of architectural structures such as domes and geodesic spheres. 
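As a numerical cross-check of the thread's final integral (an addition, not part of the thread), a simple midpoint-rule sum over $\phi$ reproduces $4\pi r^2$:

```python
import math

def sphere_area(r, n=1000):
    """Midpoint-rule evaluation of r^2 * int_0^{2pi} int_0^{pi} sin(phi) dphi dtheta."""
    dphi = math.pi / n
    # The theta integral contributes a constant factor of 2*pi,
    # so only the phi integral needs to be summed numerically.
    s = sum(math.sin((i + 0.5) * dphi) for i in range(n)) * dphi
    return r * r * 2 * math.pi * s

r = 3.0
print(abs(sphere_area(r) - 4 * math.pi * r**2) < 1e-3)  # -> True
```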
# Geometry Part 8: Area and Perimeter (cont’d)

by C. Elkins, OK Math and Reading Lady

This post features 3 more area and perimeter misconceptions students often have. I have included some strategies using concrete and pictorial models to reinforce the geometry and measurement standards. Refer to Geometry Part 7 for 2 other common misconceptions. Also, check out some free resources at the end of this post!!

Misconception #3:  A student only sees 2 given numbers on a picture of a rectangle and doesn’t know whether to add them or multiply them.

• Problem:  The student doesn’t know the properties of a rectangle that apply to this situation — that opposite sides are equal in measurement.
• Problem:  The student doesn’t see how counting squares can help calculate the area as well as the perimeter.

Ideas:

• Give the correct definition of a rectangle:  A quadrilateral (4 sides) with 4 right angles and opposite sides are equal.
• Give the correct definition of a square:  A quadrilateral (4 sides) with 4 right angles and all sides are equal. From this, students should note that squares are considered a special kind of rectangle.  Yes, opposite sides are equal – but in this case all sides are equal.
• Using square tiles and graph paper (concrete experience), prove that opposite sides of a rectangle and square are equal.
• Move to the pictorial stage by making drawings of rectangles and squares. Give 2 dimensions (length and width) and have students tell the other 2 dimensions.  Ask, “How do you know?” You want them to be able to repeat “Opposite sides of a rectangle are equal.” With this information, students can now figure the area as well as the perimeter.
• Move to the abstract stage by using story problems such as this:  Mr. Smith is making a garden. It will be 12 feet in length and have a width of 8 feet.  How much fence would he need to put around it? (perimeter) How much land will be used for the garden? (area).
• Measure rectangular objects in the classroom with some square units.  Show how to use them to find the perimeter as well as the area using just 2 dimensions.  Ask, “Do I need to fill it all the way in to determine the answer?”  At the beginning – YES (so students can visualize the point you are trying to make). Later, they will learn WHY they only need to know 2 of the dimensions to figure the area or perimeter. # Geometry Part 3: Composing and Decomposing by C. Elkins, OK Math and Reading Lady Composing and decomposing geometric shapes (2D and 3D) should be centered around concrete and pictorial methods. In this and upcoming posts, I will illustrate some methods using various manipulatives and line drawings which help students take a shape apart or put shapes together. If you refer back to  Geometry Part 1: The Basics, all grade levels KG-5th have standards dealing with this issue. Some of the experiences I plan to share will also help students relate to multiplication, division, fractions, area, and other geometry concepts (such as rotations, reflections, slides). Refer to Geometry Part 2: van Hiele levels to determine if the activities you are choosing are appropriate for Level 0, 1, or 2 students. One Inch Color Tiles: 1.  Can you make a larger square out of several individual squares? • Level 0 students will be using the visual aspect of making it look like a square. • Level 1 students will be checking properties to see if their squares are indeed squares (with the same number of tiles on each side). • Level 2 students will be noticing they are creating an array (ex: 3 x 3 = 9) and perhaps learning about squared numbers. 3 squared = 9. They might be able to predict the total number of tiles needed when given just the length of one side. 2.  How many rectangles can you make using 2 or more squares? (Level 0-1) • Level 1:  Are the green and blue rectangles the same size (using properties to determine)? # Geometry Part 1: The Basics by C. 
Elkins, OK Math and Reading Lady For many schools, it seems as if Geometry and Measurement standards remain some of the lowest scored. This has always puzzled me because it’s the one area in math that is (or should be) the most hands-on — which is appealing and more motivating to students. Who doesn’t like creating with pattern blocks, making 2 and 3D shapes with various objects, using measurement tools, and getting the chance to leave your seat to explore all the classroom has to offer regarding these standards? So what is it about geometry and measurement that is stumping our students? Here are some of my thoughts – please feel free to comment and add your own: • Vocabulary? (segment, parallel, trapezoid, perpendicular, volume, area, perimeter, etc.) • Lack of practical experience? Not all homes have materials or provide opportunities for students to apply their knowledge (like blocks, Legos, measuring cups for cooking, tape measures for building, etc.). • Background knowledge about the size of actual objects? We take it for granted students know a giraffe is taller than a pickup truck. But if students have not had the chance to go to a zoo, then when they are presented a picture of the two objects they might not really know which is taller / shorter. Think of all of the examples of how we also expect students to know the relative weights of objects. Without background knowledge or experience, this could impede them regarding picture type assessments. • Standards keep getting pushed to lower grades when students may not have reached the conservation stage? If they think a tall slender container must hold more than a shorter container with a larger diameter, or they think a sphere of clay is less than the same size sphere flattened out, they may have difficulty with many of the geometry and measurement standards. In this post, I will focus on Geometry. 
Here is a basic look at the geometry continuum (based on OK Stds.): KG:  Recognize and sort basic 2D shapes (circle, square, rectangle, triangle). This includes composing larger shapes using smaller shapes (with an outline available). 1st:  Recognize, compose, and decompose 2D and 3D shapes. The new 2D shapes are hexagon and trapezoid. 3D shapes include cube, cone, cylinder, sphere. 2nd: Analyze attributes of 2D figures. Compose 2D shapes using triangles, squares, hexagons, trapezoids and rhombi. Recognize right angles and those larger or smaller than right angles. 3rd:  Sort 3D shapes based on attributes. Build 3D figures using cubes. Classify angles: acute, right, obtuse, straight. 4th:  Name, describe, classify and construct polygons and 3D figures.  New vocabulary includes points, lines, segments, rays, parallel, perpendicular, quadrilateral, parallelogram, and kite. 5th: Describe, classify, and draw representations of 2D and 3D figures. Vocabulary includes edge, face, and vertices. Specific triangles include equilateral, right, scalene, and isosceles. Here are a couple of guides that might help you with definitions of the various 2D shapes. The 2D shapes guide is provided FREE here in a PDF courtesy of math-salamander.com.  I included a b/w version along with my colored version. The Quadrilateral flow chart I created will help you see that some shapes can have more than one name. Click on the link for a free copy (b/w and color) of the flow chart. Read below for more details about understanding the flow chart. # Math Art Part 2: Decomposing and composing squares and triangles I wanted to show you another example of math art, this time using squares and triangles. This project also falls under the standards dealing with decomposing and composing shapes. With this project, students can create some unique designs while learning about squares, triangles, symmetry, fractions, and elements of art such as color and design. 
It would be a great project for first grade (using 2 squares) or for higher grades using 3 to 4 squares. A great literature connection to this project is the book “The Greedy Triangle” by Marilyn Burns. (Click link to connect to Amazon.) The triangle in this book isn’t content with being 3-sided and transforms himself into other shapes (with the help of the Shapeshifter). Lots of great pictures showing real objects in the shape of triangles, squares, pentagons, hexagons, and more. Marilyn Burns is a great math educator to check out, if you haven’t already. She has a company called Math Solutions (check out MathSolutions.com). Marilyn and her consultants have wonderful resources and advocate for constructivist views regarding math education. She is also the author of Number Talks and many math and literature lesson ideas. ### The 4 Triangle Investigation Materials needed: • Pre-cut squares 3″, 4″ or 5″ (I used brightly colored cardstock.) • Scissors and glue • Background paper to glue shapes to Directions 1. Model how to cut a square in half (diagonally) to make two right triangles. (I advocate folding it first so that the two resulting triangles are as equivalent as possible.) 2. Guide students into showing different ways to put two triangles together to form another shape. Rule: Sides touching each other must be the same length. Let students practice making these shapes on their desk top (no gluing needed). 3. Help students realize they may need to use these actions: • Slide the shape into place • Flip it over to get a mirror image • Rotate it around in a circular motion to align the edges 4. Students are then given 2 squares (to be cut into 4 triangles) and investigate different shapes they can make following the above rule. Here are some possibilities: 5. As the teacher,  you can decide how many creations you want each student to attempt. 6. These shapes can be glued onto construction paper (and cut out if desired). 7. 
As an extension, shapes can be sorted according to various attributes: • # of sides • symmetry • # of angles • regular polygons vs. irregular
# 0-30V 20A High current adjustable voltage regulator circuit

If you are looking for a high-current adjustable voltage regulator circuit, this may be a good choice for you. It can deliver an output current of 20A (about 400 watts), adjustable from 4 to 20V, and the same approach can be applied to a 0 to 30V design. It performs well, is durable, and comes with a PCB layout. It is suitable for electronic telecommunication, high-power radio transmitters, etc.

This project uses few components: four LM338 5A voltage regulators and a 741 op-amp (both popular parts), operating as a linear power supply. Try building it; you will like it!

## How it works

The LM338K is a DC voltage regulator of the floating type. Its basic application is shown in Figure 1.

Figure 1: How to use the LM338 IC in a basic circuit. In normal conditions, the voltage between the ADJ pin and the output pin is a stable 1.25V, so the current flowing through R1 and R2 is constant as well. The output voltage equals the voltage at the ADJ pin plus 1.25 volts, calculated as follows:

Vo = 1.25 (R1 + R2) / R1

## High current with parallel LM338

A single LM338 can supply up to 5 amps; to reach a maximum load current of 20 amps, we connect four of them in parallel. What to watch out for when paralleling several ICs is that the load current must be shared equally among them. The easiest way is to connect a small resistor to the output pin of each IC, as shown in Figure 2. The value of these resistors, Rs, is much less than R1. Based on the circuit, we can write:

Io·Rs = 1.25 – Vo(R1/(R1+R2))

and, from the lower regulator of the circuit:

Ii·Rs = 1.25 – Vo(R1/(R1+R2))

Since these two equations are identical, Io = Ii; in other words, the currents through the two LM338 ICs are equal.

Connecting LM338s in parallel. In practice, this simple circuit is not used on its own, since the voltage drop across Rs
The voltage drop across Rs changes with the current flowing through the load, and the reference voltages of the individual ICs also differ slightly from one another.

## External LM338 control using a uA741

Therefore, we need an external control circuit to control the voltage at the Adj pin, as shown in Figure 3. From the circuit, the negative input pin of the op-amp sees half of the output voltage, and the positive input pin sees the reference voltage, which is set by a constant current flowing through the transistor into Rs and P1. By the usual feedback behavior of an op-amp circuit, the output is regulated until the two input pins are at the same voltage. So the voltage at the base of transistor Q1 equals the voltage at the negative input pin of the IC. This voltage changes the effective resistance of the transistor, causing the voltage at the reference point to change. The resistance of the transistor is inversely proportional to the output voltage, which compensates for the voltage lost across Rs when the load currents are not equal.

High power DC regulator 4-20 volts 20 amps by LM338

• From all the principles above, we get the application circuit shown in Figure 4. If you want more output current, you can add more LM338 ICs.
• Use a transformer that can supply at least 30 amperes, with a secondary voltage of not less than 18 volts. To optimize the circuit, a 20,000 µF capacitor is better for C2.
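The basic output-voltage formula from the "How it works" section is easy to check numerically. The sketch below is only an illustration of Vo = 1.25 (R1 + R2) / R1; the resistor values here are hypothetical examples, not the values used on this PCB (where the output is set by the op-amp control loop instead):

```python
def lm338_vout(r1, r2, vref=1.25):
    """Basic LM338 regulator output: Vo = Vref * (R1 + R2) / R1."""
    return vref * (r1 + r2) / r1

def lm338_r2(r1, vout, vref=1.25):
    """Solve the same formula for R2, given R1 and a target output voltage."""
    return r1 * (vout / vref - 1)

print(lm338_vout(150, 1500))  # 13.75 V with these example values
print(lm338_r2(150, 13.75))   # 1500 ohms
```

With R1 fixed, the second helper tells you what R2 sets a desired output voltage.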
## How to make a high current adjustable voltage regulator

Parts list

IC1: LM741
IC2–IC5: LM338K or LM338P
Q1: BD140
D1: Bridge diode, 35 A
D2: 1N4148, 75 V 150 mA diode
R1: 150Ω resistor, 0.5 W
R2: 100Ω resistor, 0.5 W
R3, R4: 4.7K resistors, 1/2 W
R5–R8: 0.3Ω resistors, 5 W
C1: 0.01 µF 200 V polyester capacitor
C2, C5: 4,700 µF 50 V electrolytic capacitors
C3: 0.1 µF 63 V polyester capacitor
C4: 10 µF 25 V tantalum
C6: 47 µF 35 V electrolytic capacitor

PCB of high power DC regulator 4-20 volts 20 amps

## Build 20A High current adjustable power supply

• Solder all the components onto the PCB as shown in Figure 5. If you increase the value of the input capacitor C2, you will have to mount it off the PCB.
• The bridge diode must be attached to a heatsink properly, to help extend its life and durability.
• The LM338 ICs need to be installed on a big heatsink as well. Be careful not to short the bodies of the ICs to the heatsink.
• When all the soldering is finished, apply AC input power to test the project.
• Then adjust VR1 until the output voltage is as needed; connect the load and check that the output voltage stays unchanged.

### 47 thoughts on “0-30V 20A High current adjustable voltage regulator circuit”

1. It's very useful. Please tell me, can I add more LM338s for more current in this circuit? Thanks.

3. Ahmed, remember that if the current increases you should also consider the size of the transformer.

4. Hi, will a 110 VAC transformer work with this project as long as it provides at least 18 VAC and 30 A? Thanks, Randy

5. “To optimize the circuits for the capacitor-C2 should better use of 20000MF” Where can I get a 20000MF? Does MF stand for Mega-Farad?

• 20,000 µF, microfarads. It is impossible to make mega-farads in one capacitor.😊😊😊

6.
Hi, would a 15 VAC, 12 A transformer work with three LM338s? Would the circuit and PCB work with fewer resistors and fewer LM338s? Thanks for the help.

7. Hi, is 5 watts enough for the 0.3 ohm resistors? If 5 A (peak 7 A) flows, what should the voltage be? 1 V? I don't understand how this circuit works, and I don't want to just copy and paste — I must know how it works, because it will be the voltage regulator of my school project and I have to write a report. So, can you explain the circuit? Thanks.

8. Hello, I am looking for the schematic of a regulated, stabilized power supply, adjustable from 24 V to 30 V at 40 amperes; would you have that? Thank you.

9. For the LM338K circuit: for 5 V at 20 amps, what R1 and R2 values, please?

10. I built this and it worked flawlessly. I used a 10,000 µF capacitor on Vin. Thx.

11. Can I attach a 42 V, 25 A transformer? Can the LM338K handle up to 50 V? And what modification will I have to make to the capacitor?

12. No Jaydeep, it will not work properly with 50 V. I built a power supply recently and the transformer was 24 + 24 V, which rectified and filtered was giving around 52 V; the regulation worked but the ability to provide current was zero. When I modified the input to be 24 V, the LM338K magically started delivering current properly.

13. Ty Mario… it didn't work at 50 V; I had to reduce it to 28 V… but then it worked and was delivering up to 10 amps, as my requirement.

14. Can I use this transformer?

15. Hello, I've seen your site and it is quite interesting, and I have a question: can you design me (not for free, of course) a circuit to step down the voltage for car headlamps (a dimmer), from 12–14 V to 8 V, using an LMC555CM and an IRF1405? Best regards, Dorin

16. How can I make a CV & CC mode power supply from this LM338K circuit?

17. Hello. I just want to confirm with you: is this circuit capable of delivering 0 to 30 V? And lastly, can I buy this PCB — if yes, from where please? Thanks

18.
Hi, can you please give me a solution for my question: how can I produce 7 amps of current as an output from a DC voltage input? One important thing — without using a transformer. Please help me, guys.

19. Details of the bridge diode please. It's urgent.

20. Can I use this as a battery charger? Thanks.

21. Can I confirm whether this circuit uses a single-layer PCB?

22. Hello, can I use a TIP3055 or another power transistor instead of the LM338? If the transformer secondary voltage is 24 volts, what is the effect on the circuit (such as resistor and capacitor values)?

• Hi, Kris. I am not sure about the PCB size. I suggest you print it with a laser printer at 300 dpi; it may come out at real size. I am sorry, I never built this circuit myself. It is a very old circuit, but it may be useful or give you a good idea. Have a good day!

23. How can I make it auto cut off with a full-charge indicator using a relay? Here's my email: [email protected]. Anyway, thank you so much for the effort, it's really much appreciated.

24. Hello, could you share the data on the PCB track pattern and the arrangement of components? Please.

• Thanks for your feedback.

25. Good evening, I am looking for a 30 V / 30 A power supply schematic. Your presentation is interesting; two additional LM338s would need to be added. I would also like to add a current adjustment, but I do not see how to implement it. I think it could be done with an additional LM741, a few components, and an adjustment potentiometer. My question is: would you have a proposal or an adaptation for this circuit? Regards, Gilou

26. Hello Gilou, have you found your solution?

• I have not found the solution. I am trying to build a circuit with a 723 and the high-current schematic above, plus the current adjustment from the Elektor magazine using a 741. I do not know if it will work. Regards, Gilou

27. Hello, I have an ATX power supply unit (PSU), so I think it is possible to connect it to the circuit in Figure 4…
But I have two questions. Q1: Are the two capacitors C2 and C3 needed? Q2: The positive of the PSU is connected to the IN line of the LM338, OK… but where is the negative (ground) connected? Is it possible to complete Figure 4? (Sorry, my English is poor.)

28. What is the use of the 0.3 ohm 5 W resistors? If they are necessary, please suggest an alternative too.

• Hello Atul Kumar, I would like to express my opinion. When we connect devices together in parallel, we often add a small resistor in series with each, to balance the current and protect those devices. You may be able to use 0.33 ohms, 5 W. Although this circuit is difficult (for me), I believe you can do it, and hopefully you will benefit from it. Don't forget to share with me too. Thanks, Apichet

29. I need an adjustable DC power supply from 5 to 60 VDC that provides current up to 20 A. Could I use the 4–20 VDC 20 A adjustable power supply circuit above and replace the 20 A transformer? Please help. Thx

Thanks for your visit. How are you? In truth, I haven't built this circuit yet, but I like it and I think it will be helpful to others. I think it cannot be used at 60 V; the LM338 can take a 37 V maximum input only. However, I believe you can make it — and have fun with electronics. Thanks, Apichet

30. In the above circuits, can I replace the LM317 with an LM317HVT to obtain output voltages up to 57 VDC, provided the transformer secondary is 40 Vrms? All the regulators must be mounted on heatsinks; what would the calculations be to obtain the dimensions and characteristics of the heatsinks?

31. Hi, I have two identical 12 V / 300 VA toroidal transformers. Can I use them in “parallel” with two bridges, so I can have a 24 V / 300 VA output to feed this circuit?

• Hello Theodore Alexopoulos, first, thanks for visiting my site. How are you? If you are interested in this circuit, you are a good person. It is a big circuit, and I'm not quite sure
I understand your question well enough, but I would like to express my opinion. Normally, when we connect transformers in parallel, the current capacity increases but the voltage stays the same, so you would get a 12 V 600 VA output. You may connect them in series instead; that will give 24 V 300 VA. https://www.eleccircuit.com/multiples-ways-to-increase-the-power-supply-current/ Thanks, Apichet

• As the author said, you can do that, but you need to connect the transformers in series.

32. Where can I buy a PCB for the high power DC regulator 4–20 volts 20 amps by LM338?

• Hello oekkedakke, thanks for your visit. I am so happy that you are interested in this circuit. It is a big circuit for me and I have never built it. And I am sorry, I do not know where to buy a PCB for this circuit. However, I think you can build it. Thanks, Apichet
# 5.4. Sorting Algorithms

This lesson introduces sorting algorithms, a frequently used class of algorithms in programming for processing data sets. It is also an introduction to the relative efficiencies of different algorithms that solve the same problem.

### Professional Development

The Student Lesson: Complete the activities for Mobile CSP Unit 5 Lesson 5.4: Sorting Algorithms.

### Materials

• Decks of cards - can use 13 cards of one suit per student group, although one deck would be ideal.
• Projection system
• Videos

## 5.4.1. Learning Activities

### Estimated Length: 45 minutes

• Hook/Motivation (5 minutes):
• Sorting "Contest": Students should form groups of 2-4. Ask the students to discuss in their groups the fastest way to sort a deck of cards. Then distribute one deck of cards to each group (or one suit from a deck), leaving them face down. Start a timer and see which group can get their deck sorted the fastest. Once all the groups have completed, have each group share their strategy for sorting. Emphasize the point that there are different algorithms to solve the same problem, but that each has a different efficiency. (It might be helpful to have the students describe their sort in the form of a pseudocode algorithm -- i.e., step by step.)
• Alternative Hook: Ask the students to sort themselves by their birthday (month and day). Have them make a line in class. Once completed, ask them to talk about strategies they used to sort themselves. It might be helpful here to ask the students whether their algorithm required them to compare their birthdays with each other (bubble sort, merge sort) or whether they could do it without comparisons (bucket sort).
• Experiences and Exploration (30 minutes):
• Bubble Sort (10 minutes): Play the video demonstrating the bubble sort and ask the students to hypothesize about how it's being solved. After the video, review the interactive question and the pseudocode for the bubble sort.
• Merge Sort (10 minutes): Play the video demonstrating the merge sort and ask the students to hypothesize about how it's being solved. After the video, review the interactive question and the pseudocode for the merge sort.
• Bucket and Radix Sort (10 minutes): Play the video demonstrating the bucket and radix sort and ask the students to hypothesize about how it's being solved. After the video, review the interactive question and the pseudocode for the bucket and radix sort.
• Rethink, Reflect, and Revise (10 minutes):
• Practice: Have the students practice each sort in their groups with the deck of cards.
• Note: Bucket Sort Exercise: Sort By Rank and Suit - Suppose you wanted to sort the deck of cards by both rank and suit, so that all the clubs come before all the diamonds, which come before all the hearts, which come before all the spades. How would you do this? Answer: Once the deck has been sorted by rank, you could sort it into suits using 4 buckets, one for each suit. Try it!
• Wrap-Up: Have the students complete the interactive exercises and portfolio reflections for the lesson.

### AP Classroom

The College Board's AP Classroom provides a question bank and Topic Questions. You may create a formative assessment quiz in AP Classroom, assign the quiz (a set of questions), and then review the results in class to identify and address any student misunderstandings. The following are suggested topic questions that you could assign once students have completed this lesson.

Suggested Topic Questions:

### Assessment Opportunities and Solutions

Solutions Note: Solutions are only available to verified educators who have joined the Teaching Mobile CSP Google group/forum in Unit 1.

Assessment Opportunities

You can examine students’ work on the interactive exercise and their reflection portfolio entries to assess their progress on the following learning objectives. If students are able to do what is listed there, they are ready to move on to the next lesson.
• Interactive Exercises:
• Portfolio Reflections: LO X.X.X - Students should be able to ...
• In the XXX App, look for:

### Differentiation: More Practice

If students are struggling with lesson concepts, have them review the following resources:

### Differentiation: Enrichment

#### Optional In-class Activity

This could be a nice way to tie together algorithms and data representation. In particular, it provides a practical example of using base-4 arithmetic.

Introduction: In the video of the Korean kids on the track, they are performing a radix sort of 3-digit numbers. The algorithm first sorts the numbers into buckets by their 1s digit, then by their 10s digit, and then by their 100s digit. This is an example of a base-10 radix sort.

Activity: After watching and understanding that example, have the class watch this video (1:29) of a radix sort with 13 cards and try to figure out together how it works. This is an example of a base-4 radix sort. The trick here is that base-4 arithmetic is being used. So the cards are numbered as follows:

| Card | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | J | Q | K | A |
|------|----|----|----|----|----|----|----|----|----|----|----|----|----|
| Base 4 | 02 | 03 | 10 | 11 | 12 | 13 | 20 | 21 | 22 | 23 | 30 | 31 | 32 |

In the video, the cards are first sorted into buckets by the base-4 1s digit, then by the base-4 4s digit.

1. On the board, put up 4 buckets, labeled 0, 1, 2, and 3 in the following arrangement, which corresponds to the arrangement in the video: 0 1 2 3
2. Watch the video, pausing where necessary, and observe what buckets the dealer puts the cards into and then write their decimal values (J=10, Q=11, K=12, A=13) under the bucket numbers.
3. You should see that on the first pass the cards are arranged modulo 4 -- i.e., by the remainder of dividing their numeric values by 4.
4. Do the same for the second pass. This time the cards are arranged div 4 -- i.e., by the quotient of dividing their numeric values by 4.
5. Now propose that you renumber the cards in base-4 and perform the sort by their 1s digit and then by their 4s digit.
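The two-pass base-4 radix sort described in the activity can be sketched in a few lines of code (an illustration, not part of the original lesson; the card values here are just decimal numbers small enough for two base-4 digits):

```python
def radix_sort_base4(values):
    """Two-pass base-4 LSD radix sort; enough for values 0..15,
    like the 13 card values used in the activity."""
    for digit in (0, 1):                    # 1s digit first, then the 4s digit
        buckets = [[] for _ in range(4)]
        for v in values:
            buckets[(v // 4 ** digit) % 4].append(v)   # stable distribution
        values = [v for bucket in buckets for v in bucket]
    return values

cards = [14, 2, 9, 13, 5, 11, 7, 3, 12, 4, 10, 6, 8]
print(radix_sort_base4(cards))  # cards in ascending order
```

Because each pass distributes the cards into buckets in order (stably), two passes fully sort any values with at most two base-4 digits.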
Challenge Question: Can you sort a deck using some other base? Yes. A nice class exercise now is to work out the sort using, say, base 5 to represent the cards -- or any other base.

### Background Knowledge: Sorting Algorithms

These resources provide more information on sorting algorithms, including ones not covered in the lesson. Many of the visualizations are interactive or include pseudocode to help you understand them better.

## 5.4.2. Professional Development Reflection

Discuss the following questions with other teachers in your professional development program.

I am confident I can teach this lesson to my students.

• 1. Strongly Agree
• 2. Agree
• 3. Neutral
• 4. Disagree
• 5. Strongly Disagree
Optics/Total internal reflection Total internal reflection is a phenomenon that occurs when light travels from a more optically dense medium (or a medium with higher refractive index ${\displaystyle {n_{1}}}$) to a less optically dense one (lower index ${\displaystyle {n_{2}}}$), such as glass to air or water to air. When light travels from an optically dense medium to a less optically dense medium, the light refracts away from the normal. If the angle of incidence is gradually increased, one will notice that at a certain point, the refracted ray deviates so far away from the normal that it reflects rather than refracts. This results whenever the refracted angle predicted by Snell's Law becomes greater than 90 degrees. The critical angle ${\displaystyle \theta _{c}}$ is defined as the angle of incidence (inside the higher-index material) for which Snell's Law predicts a 90-degree angle of refraction -- this would mean the light follows the surface rather than entering the low-index material. One can calculate the critical angle using Snell's Law: ${\displaystyle n_{1}\sin \theta _{1}=n_{2}\sin \theta _{2}\,\!}$ ${\displaystyle n_{1}\sin \theta _{c}=n_{2}\sin 90^{\circ }=n_{2}}$ ${\displaystyle \sin \theta _{c}={\frac {n_{2}}{n_{1}}}}$ ${\displaystyle \theta _{c}=\sin ^{-1}\left({\frac {n_{2}}{n_{1}}}\right)}$ Total internal reflection will occur for any incident angle greater than θc. Because the reflected ray never leaves the higher-index material, Snell's Law for total internal reflection becomes ${\displaystyle n_{1}\sin \theta _{1}=n_{1}\sin \theta _{2}}$, in other words ${\displaystyle \theta _{1}=\theta _{2}}$, which is the law of reflection. There is always a fraction of the energy reflected at a boundary between materials of different refractive index, but it is usually quite small (typically a few percent). In the case of total internal reflection, however, 100% of the beam energy reflects (hence the name "total"). 
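The critical-angle formula above is straightforward to compute. Here is a small sketch (the indices 1.33 for water and 1.00 for air are typical textbook values):

```python
import math

def critical_angle_deg(n1, n2):
    """Critical angle in degrees for light going from index n1 into index n2 (requires n1 > n2)."""
    if n2 >= n1:
        raise ValueError("total internal reflection requires n1 > n2")
    return math.degrees(math.asin(n2 / n1))

print(critical_angle_deg(1.33, 1.00))  # water -> air: about 48.8 degrees
```

For incidence angles larger than this, Snell's Law would demand sin θ₂ > 1, which is impossible, so the ray reflects instead.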
You can easily observe total internal reflection for yourself next time you are in a swimming pool. If you are under water looking up, you see the sky. As you turn your head to look closer to horizontal, you see a reflection of the bottom (or side) of the pool. These rays from the pool bottom hit the water-to-air boundary at an angle of incidence greater than the critical angle, so they are reflected to you. How do I know that the reflected ray isn't just the refracted ray? The second law of reflection states that "the incident ray, the reflected ray, and the normal are coplanar." Therefore, the refracted ray must be in the second medium. If it is in the same medium that it originated from, then it is the reflected ray. The reflected angle is also the same as the incident angle, which agrees with the first law of reflection.
# Converting a decimal to a mixed number and an improper fraction in simplest form: Basic

Rules to convert a decimal to a mixed number and an improper fraction in simplest form:

• We read the decimal as whole number part, tenths, hundredths and so on, and write it as a mixed number.
• Then we simplify the proper fraction of the mixed number and write it in lowest terms.
• Using the algorithm, we convert the mixed number to an improper fraction.

Convert 6.8 to a mixed number and an improper fraction in simplest form.

### Solution

Step 1: The decimal 6.8 is read as 6 and 8 tenths. So, it can be written as a mixed number $6\frac{8}{10}$.

Step 2: The mixed number has a whole number part 6 and a fractional part $\frac{8}{10}$, which can be reduced to lowest terms as $\frac{4}{5}$. So, $6\frac{8}{10} = 6\frac{4}{5}$.

Step 3: The same mixed number can be converted into an improper fraction as follows. The denominator 5 is multiplied by the whole number 6 and the product is added to the numerator 4 to give 6 × 5 + 4 = 34.

Step 4: This becomes the numerator of the improper fraction and 5 is retained as the denominator of the improper fraction. We get $\frac{34}{5}$.

So, $6.8 = 6\frac{4}{5} = \frac{34}{5}$ in simplest form.

Convert 15.25 to a mixed number and an improper fraction in simplest form.

### Solution

Step 1: The decimal 15.25 is read as 15 and 25 hundredths. So, it is written as a mixed number $15\frac{25}{100}$.

Step 2: The mixed number has a whole number part 15 and a fractional part $\frac{25}{100}$, which is reduced to simplest form as $\frac{1}{4}$. So, $15\frac{25}{100} = 15\frac{1}{4}$.

Step 3: The same mixed number can be converted into an improper fraction as follows. The denominator 4 is multiplied by the whole number 15 and the product is added to the numerator 1 to give 15 × 4 + 1 = 61.
Step 4: This becomes the numerator of the improper fraction and 4 is retained as the denominator of the improper fraction. We get $\frac{61}{4}$ Step 5: So, $15\frac{25}{100} = 15\frac{1}{4} = \frac{61}{4}$ in simplest form
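The same conversion can be checked with Python's `fractions` module (a quick illustration, not part of the original lesson; passing the decimal as a string keeps the arithmetic exact):

```python
from fractions import Fraction

def decimal_to_mixed(text):
    """Convert a decimal string to (whole, numerator, denominator) plus the improper fraction."""
    improper = Fraction(text)   # e.g. Fraction('15.25') == 61/4, already in lowest terms
    whole, num = divmod(improper.numerator, improper.denominator)
    return whole, num, improper.denominator, improper

print(decimal_to_mixed('6.8'))    # (6, 4, 5, Fraction(34, 5))
print(decimal_to_mixed('15.25'))  # (15, 1, 4, Fraction(61, 4))
```

`Fraction` reduces to lowest terms automatically, and `divmod` splits out the whole-number part, matching Steps 1–4 above.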
# What Math Skills do I need for a Game Design Degree? Game design is one of the most popular subjects in computer science, and while it requires most of the same math concepts as other areas of programming, there are a few concepts related to graphics and problem solving that game developers need more than others. By far, the most important math skills needed for game design are related to 3-D graphics and animation, and these skills are based on matrix math and linear algebra, as well as logic and discrete math. Because time and motion are involved in a game loop, programmers also use continuous math concepts from calculus and trigonometry. These continuous math skills also apply to the physics needed to make animations look realistic. ### Most Important Math Concepts for Game Programmers All areas of math are useful to game programmers, but one skill that is particularly useful deals with graph algorithms. A graph is a set of objects, called nodes, that are related to one another by the edges that form paths between them. For example, a graph can be used to model a street grid. In this case, the nodes represent locations on the map, and the edges represent the streets connecting the locations. The most important graph algorithms for game designers are the search functions that find the shortest path between two nodes. These functions are used to find paths for game characters on a tile-based map when there are moving objects blocking some paths. They’re also used to pre-process a map and store the shortest paths in memory so that retrieving them doesn’t require a looping algorithm and can be performed in what’s called constant time. In game design college, students learn about the same data structures and algorithms as computer science students, but they focus more on graphics programming and game production. The math skills needed for game design aren’t necessarily difficult, but it’s up to the student to become proficient in these areas. 
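The shortest-path search described above can be sketched with a simple breadth-first search on a tile map (a minimal illustration; production games more often use A* with a heuristic, but the graph idea is the same):

```python
from collections import deque

def shortest_path(grid, start, goal):
    """BFS shortest path on a tile map; grid[r][c] == 1 means the tile is blocked."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}            # also serves as the visited set
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:          # walk back through prev to rebuild the path
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None                     # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(shortest_path(grid, (0, 0), (2, 0)))  # routes around the blocked middle row
```

Because BFS explores tiles in order of distance from the start, the first time it reaches the goal it has found a shortest path; precomputing and caching these paths is what allows constant-time retrieval at runtime.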
Linear algebra is one of the easier college math subjects, but students who are really interested in using the concepts taught in linear algebra will be more imaginative when applying them to games. The basic way linear algebra is used in 3-D programming is by rotating, scaling and moving objects in a 3-D world or performing these transformations on the camera itself. It’s also used to transform the 3-D world to the flat coordinate system of the computer screen. A matrix is sort of like a plane or a cage that remaps all the points of an object according to how it’s bent or shaped. ### Advanced Math for Game Design Some of the more difficult concepts are related to discrete math and algorithm design. It’s not necessary to have a deep, profound understanding of algorithm design unless you want to be a researcher, but being able to analyze the time complexity of a function is a fundamental skill. Discrete and continuous math concepts are very useful in this area, especially the math related to summations, recursion and integral calculus, which allows you to approximate the sum of a series. With mobile and Web-based games becoming more and more popular, there is a growing demand for game designers with solid skills. If you have an interest in designing the physics and graphics for video games, keep learning about the math skills needed for game design.
# Question of counting and probability

Alexsandro

Could someone help me with this question?

There are two locks on the door and the keys are among the six different ones you carry in your pocket. In a hurry you dropped one somewhere.
a) What is the probability that you can still open the door?
b) What is the probability that the first two keys you try will open the door?

a) 2/3
b) 4!/6!

eNathan

Why do you need help if you gave the answer?

Staff Emeritus, Gold Member

If you want help on the problem, you're going to have to show us what you've done.

Alexsandro

doubt

eNathan said: "Why do you need help if you gave the answer?"

I found this question in a book of probability. I haven't been able to model the problem so as to arrive at the indicated answer. Could you help me answer this question or give me a hint?

Homework Helper

Imagine yourself in the situation. You have 5 keys for 2 locks (you have to open Lock 1 AND Lock 2 simultaneously). I guess it is possible that the same key can open both locks. As soon as both locks are opened you can stop and not try the rest of the keys.

Homework Helper

I would start with a tree diagram. The first node ("the root") is the first key on Lock 1; you have 2 branches: opened L1, did not open L1. Coming out of each of these branches you have 2 new branches: opened L2, did not open L2. Then you move on to the 2nd key, K2. If K1 opened L1 then you don't need to try K2 on L1. If K1 opened L2 then you don't need to try K2 on L2. Now you can make new branches for K2. And so on.

Homework Helper

Alexsandro said: "a) What is the probability that you can still open the door?"
Let K1 and K2 denote events defined as follows:

K1 = the key you dropped is the key for lock #1
K2 = the key you dropped is the key for lock #2

Here are two solutions:

solution #1

P(can still open door) = P(~K1 AND ~K2) = P(~K1)*P(~K2|~K1) = (5/6)*(4/5) = 2/3

where the symbol '~' is the NOT operator (e.g., ~A = NOT A), and P(A|B) is the conditional probability of A given that B has occurred. Recall that

P(A and B) = P(A)*P(B|A)

For independent events this reduces to P(A and B) = P(A)*P(B), since P(B|A) = P(B) if A and B are independent events.

solution #2

P(can still open door) = P(~K1 AND ~K2)
= P(~(K1 OR K2)) by de Morgan's Law
= 1 - P(K1 OR K2) complement rule
= 1 - [ P(K1) + P(K2) ] since K1 and K2 are mutually exclusive
= 1 - [ 1/6 + 1/6 ]
= 1 - 1/3
= 2/3

Note that for these calculations I have assumed that no key unlocks both locks, and that each lock can be unlocked by only one key. For de Morgan's Laws go to http://mathworld.wolfram.com/deMorgansLaws.html

Alexsandro said: "b) What is the probability that the first two keys you try will open the door?"

Let U1 and U2 denote events defined as follows:

U1 = the first key you try unlocks lock #1
U2 = the second key you try unlocks lock #2

A solution is:

P(first two keys open door) = P(can still open door AND U1 AND U2)
= P(can still open door)*P((U1 AND U2) | can still open door)
= (2/3)*P(U1 | can still open door)*P(U2 | (can still open door AND U1))
= (2/3)*(1/5)*(1/4)
= 1/30

where P(can still open door) = 2/3 from the answer to part a) is used, and P(A and B) = P(A)*P(B|A) has been generalized to

P(A and B and C) = P(A)*P(B|A)*P(C|(A and B))

This method of solution, however, does not appear to be the intended one, as the cited answer is 4!/6! = 1/30.
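A quick Monte Carlo check (not from the thread) agrees with both analytic answers. Part b) is modeled here as "the first key tried opens lock #1 and the second opens lock #2", with the surviving five keys tried in random order:

```python
import random

def simulate(trials=200_000, seed=1):
    """Estimate P(can still open door) and P(first two keys tried open the door).
    Keys 0 and 1 open locks #1 and #2; one of the six keys is dropped at random."""
    random.seed(seed)
    still_open = first_two = 0
    for _ in range(trials):
        keys = list(range(6))
        keys.remove(random.randrange(6))   # drop one key at random
        if 0 in keys and 1 in keys:
            still_open += 1
            random.shuffle(keys)           # try the remaining keys in random order
            if keys[0] == 0 and keys[1] == 1:
                first_two += 1
    return still_open / trials, first_two / trials

pa, pb = simulate()
print(pa, pb)  # close to 2/3 ~ 0.667 and 1/30 ~ 0.033
```

The estimates converge on 2/3 and (2/3)·(1/5)·(1/4) = 1/30, matching the cited 4!/6!.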
Alexsandro

Thanks
# Playing with GF(3^2)

(Last Mod: 27 November 2010 21:38:01 )

## Introduction

In most applications where a Galois field is employed it is of the form GF(2^n). The main reason is convenience. Since math on the polynomial coefficients is done in GF(2), the operations can be converted to binary logic circuits very readily. Furthermore, since there are exactly 2^n symbols, those symbols can be represented by the 2^n possible bit patterns in an n-bit word. However, these conveniences can mask some of the subtler points of Galois fields, and so this page will work with a slightly more complex Galois field, namely GF(3^2), in order to make some of those points more apparent.

It is assumed that you have either worked through the page on Polynomial Arithmetic in GF(p^n) or you will be doing so in parallel with this page. To make this easier, this page is laid out in a similar fashion.

## Establish a suitable finite field for GF(3^2)

Recall from the discussion of finite fields that a finite field is an algebra consisting of a set of elements and two operators that act on those elements. We are free to define the elements and the operators in any way we choose provided the properties required of a finite field are satisfied. Those requirements are:

1. The set must be an abelian group with respect to the first operator.
2. The set, less the identity element for the first operator, must be an abelian group with respect to the second operator.
3. The second operator must be distributive over the first operator.

Further recall that to be an abelian group, the following properties must hold with respect to the operator:

1. The set must be closed.
2. The set must be commutative.
3. The set must be associative.
4. The set must possess an identity element.
5. The set must possess an inverse for every element in the set.
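For a small set like the integers mod p, these requirements can be checked by brute force. The sketch below (an illustration, not from the original page) tests the inverse and distributivity requirements for arithmetic mod p; closure, commutativity, and associativity hold automatically for modular arithmetic:

```python
def is_field(p):
    """Brute-force check that the integers mod p form a field under + and *."""
    elems = range(p)
    add = lambda a, b: (a + b) % p
    mul = lambda a, b: (a * b) % p
    for a in elems:
        # every element needs an additive inverse (0 is the additive identity)
        if not any(add(a, b) == 0 for b in elems):
            return False
        # every non-zero element needs a multiplicative inverse (1 is the identity)
        if a != 0 and not any(mul(a, b) == 1 for b in elems):
            return False
        # the second operator must distribute over the first
        for b in elems:
            for c in elems:
                if mul(a, add(b, c)) != add(mul(a, b), mul(a, c)):
                    return False
    return True

print(is_field(3), is_field(4))  # True False -- 2 has no multiplicative inverse mod 4
```

This is exactly why the field GF(3) used for the coefficients below works, while plain arithmetic mod a composite number does not.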
### The elements of GF(3^2)

There are nine symbols in GF(3^2), which we can represent as the following polynomials:

Elementary Polynomials in GF(3^2)

| Symbol | Polynomial |
|--------|------------|
| 00     | 0          |
| 01     | 1          |
| 02     | 2          |
| 10     | D          |
| 11     | D + 1      |
| 12     | D + 2      |
| 20     | 2D         |
| 21     | 2D + 1     |
| 22     | 2D + 2     |

### The primitive polynomials in GF(3^2)

We also need a "primitive polynomial" which, for GF(p^n), is a polynomial of degree n that, among other requirements examined below, cannot be evenly divided by a polynomial of degree less than n. We do not consider the cases of dividing the candidate polynomial by 0 or 1 for obvious reasons (division by 0 is undefined and division by 1 is always possible). Since primitive polynomials must be irreducible, the high-order coefficient must be one and the constant coefficient must be non-zero. In GF(3^2) this leaves us with only the following six candidates.

Candidate Primitive Polynomials in GF(3^2)

| Symbol | Polynomial   |
|--------|--------------|
| 101    | D^2 + 1      |
| 102    | D^2 + 2      |
| 111    | D^2 + D + 1  |
| 112    | D^2 + D + 2  |
| 121    | D^2 + 2D + 1 |
| 122    | D^2 + 2D + 2 |

We could examine each of the candidate polynomials in turn and divide it by each of the potentially irreducible polynomials of degree less than n, of which there are only two.

Lower-Order Irreducible Polynomials (degree < 2) in GF(3^2)

| Symbol | Polynomial |
|--------|------------|
| 11     | D + 1      |
| 12     | D + 2      |

With so few candidates for primitive polynomials and so few lower-order irreducible polynomials to work with, an exhaustive search of the space is very straightforward. We will examine three candidates using this approach before turning to the other approach.

#### Example: Is 101 (D^2 + 1) a primitive polynomial in GF(3^2)?

- Dividing D^2 + 1 by D + 1 yields D + 2 with a remainder of 2.
- Dividing D^2 + 1 by D + 2 yields D + 1 with a remainder of 2.

Since both have non-zero remainders, D^2 + 1 is an irreducible polynomial. The question now is whether D^2 + 1 is primitive. One way to show this is that the smallest value of m for which D^2 + 1 evenly divides D^m - 1 is m = p^n - 1 = 8. A comparable way is to show that D^k generates all of the non-zero polynomials in the set. We will pursue this latter course.
| k | D^k | Polynomial | Symbol |
|---|-----|------------|--------|
| 0 | D^0 | 1          | 01     |
| 1 | D^1 | D          | 10     |
| 2 | D^2 | 2          | 02     |
| 3 | D^3 | 2D         | 20     |
| 4 | D^4 | 1          | 01     |
| 5 | D^5 | D          | 10     |
| 6 | D^6 | 2          | 02     |
| 7 | D^7 | 2D         | 20     |
| 8 | D^8 | 1          | 01     |

As can be seen, not all of the polynomials in the set are generated, as evidenced by D^4 generating 1 again and hence creating a cycle of generated polynomials with period 4. So, while D^2 + 1 divides D^8 - 1, it also divides D^4 - 1. Therefore, 101 (D^2 + 1) is not a primitive polynomial in GF(3^2).

#### Example: Is 102 (D^2 + 2) a primitive polynomial in GF(3^2)?

- Dividing D^2 + 2 by D + 1 yields D + 2 with no remainder.

Hence D^2 + 2 is reducible and need not be explored further. Therefore, 102 (D^2 + 2) is not a primitive polynomial in GF(3^2).

#### Example: Is 112 (D^2 + D + 2) a primitive polynomial in GF(3^2)?

- Dividing D^2 + D + 2 by D + 1 yields D with a remainder of 2.
- Dividing D^2 + D + 2 by D + 2 yields D + 2 with a remainder of 1.

Since both have non-zero remainders, D^2 + D + 2 is an irreducible polynomial. The question now is whether D^2 + D + 2 is primitive. As before, we will see if D^k generates the entire set under this modulus.

| k | D^k | Polynomial | Symbol |
|---|-----|------------|--------|
| 0 | D^0 | 1          | 01     |
| 1 | D^1 | D          | 10     |
| 2 | D^2 | 2D + 1     | 21     |
| 3 | D^3 | 2D + 2     | 22     |
| 4 | D^4 | 2          | 02     |
| 5 | D^5 | 2D         | 20     |
| 6 | D^6 | D + 2      | 12     |
| 7 | D^7 | D + 1      | 11     |
| 8 | D^8 | 1          | 01     |

As can be seen, all of the polynomials in the set are generated, as evidenced by D^8 being the first time (after D^0) that 1 is generated. Therefore, 112 (D^2 + D + 2) is a primitive polynomial in GF(3^2).

#### Primitive polynomials by elimination

The other approach to finding the irreducible polynomials is to first produce a list of all of the candidate degree-n primitive polynomials, then multiply all of the lower-degree polynomials together to produce all of the non-irreducible polynomials of degree n, and eliminate these from the list.
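The generate-the-powers check is easy to automate. The sketch below is my own (the pair representation and the helper name `powers_of_D` are not from the original page): a polynomial c1*D + c0 over GF(3) is stored as the pair (c1, c0), and each multiplication by D reduces D^2 using the modulus D^2 + m1*D + m0, i.e. D^2 = -(m1*D + m0).

```python
def powers_of_D(m1, m0, p=3):
    """List (c1, c0) pairs for D^0, D^1, D^2, ... modulo D^2 + m1*D + m0
    over GF(p), stopping just before the cycle returns to 1."""
    c1, c0 = 0, 1                      # D^0 = 1
    powers = [(c1, c0)]
    while True:
        # multiply by D: (c1*D + c0)*D = c1*D^2 + c0*D, then reduce D^2
        c1, c0 = (c0 - c1 * m1) % p, (-c1 * m0) % p
        if (c1, c0) == (0, 1):         # cycled back to 1
            return powers
        powers.append((c1, c0))

# 101 = D^2 + 1: the cycle closes after only 4 distinct powers, so not primitive.
print(len(powers_of_D(0, 1)))   # 4
# 112 = D^2 + D + 2 and 122 = D^2 + 2D + 2 each generate all 8 non-zero
# elements, confirming they are primitive.
print(len(powers_of_D(1, 2)))   # 8
print(len(powers_of_D(2, 2)))   # 8
```

The generated pairs reproduce the tables above, e.g. under 112 the power D^2 comes out as (2, 1), i.e. 2D + 1, symbol 21.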
The initial list is the same as in the previous approach, namely:

Candidate Primitive Polynomials in GF(3^2)

| Symbol | Polynomial   |
|--------|--------------|
| 101    | D^2 + 1      |
| 102    | D^2 + 2      |
| 111    | D^2 + D + 1  |
| 112    | D^2 + D + 2  |
| 121    | D^2 + 2D + 1 |
| 122    | D^2 + 2D + 2 |

In the case of n = 2, the polynomials of degree less than n can be written in the form aD + b. Multiplying two such polynomials together, our candidate primitive polynomials may be written in the form

P = (aD + b)(cD + d) = (ac)D^2 + (ad + bc)D + (bd)

If we were to approach this blindly, we would have to examine 81 possibilities. However, we note that since primitive polynomials must be of degree n, neither a nor c may be zero. This immediately reduces the search space to 36. We also know that the degree-n coefficient must be exactly 1, which means that a and c must be multiplicative inverses of each other in GF(3). But we have also established that if the high-order coefficient is anything greater than 1 the polynomial is reducible, hence we can restrict our examination to lower-order polynomials having a high-order coefficient of 1. This means that a and c must both be exactly 1, which simplifies the form of our candidate polynomials to

P = (D + b)(D + d) = D^2 + (b + d)D + (bd)

This collapses our search space to only 9 entries, and if we impose the requirement that primitive polynomials must have a non-zero constant term then we have reduced it to 4. While this is already quite manageable, we can reduce it even further by noting that multiplication in any field is commutative, and therefore if we order the elements of our set (in any way we choose) then we can limit ourselves to only evaluating those expressions in which the second factor is greater than or equal to the first factor. The result is that we now only have to evaluate 3 expressions. Notice that a little bit of thought up front has saved us 96% of the work.
Candidate Primitive Polynomials that are Reducible in GF(3^2)

| b | d | b+d | bd | P            | Symbol |
|---|---|-----|----|--------------|--------|
| 1 | 1 | 2   | 1  | D^2 + 2D + 1 | 121    |
| 1 | 2 | 0   | 2  | D^2 + 2      | 102    |
| 2 | 2 | 1   | 1  | D^2 + D + 1  | 111    |

After eliminating these from the original list, we now have the complete list of polynomials of degree n that are irreducible. There are now only three remaining candidates for being primitive polynomials in GF(3^2):

Irreducible Candidates for Primitive Polynomials in GF(3^2)

| Symbol | Polynomial   |
|--------|--------------|
| 101    | D^2 + 1      |
| 112    | D^2 + D + 2  |
| 122    | D^2 + 2D + 2 |

The only remaining task is to see which of these three is/are actually primitive. We have the same tools as with the prior approach, and might as well note that two of the three (101 and 112) were evaluated previously, with the result that 101 is not primitive while 112 is. At this point we could either go ahead and check 122 to see if it is primitive, or we could check how many primitive polynomials there are in GF(3^2) and let that guide us. Let's do both.

In GF(3^2), the number of primitive polynomials is exactly given by

N = totient(p^n - 1)/n = totient(8)/2 = 4/2 = 2

Since there are exactly two primitive polynomials, and since we have identified only one and have only one candidate remaining, we can conclude that 122 should be a primitive polynomial; if it isn't, then we know we have made an error somewhere. We will now verify that 122 is, indeed, primitive by seeing if it generates the entire set of polynomials under this modulus.

| k | D^k | Polynomial | Symbol |
|---|-----|------------|--------|
| 0 | D^0 | 1          | 01     |
| 1 | D^1 | D          | 10     |
| 2 | D^2 | D + 1      | 11     |
| 3 | D^3 | 2D + 1     | 21     |
| 4 | D^4 | 2          | 02     |
| 5 | D^5 | 2D         | 20     |
| 6 | D^6 | 2D + 2     | 22     |
| 7 | D^7 | D + 2      | 12     |
| 8 | D^8 | 1          | 01     |

### Multiplicative inverses in GF(3^2)

Now that we have found at least one primitive polynomial with which to define multiplication in our finite field, let us turn our attention to determining the multiplicative inverses of each non-zero element in our set. For convenience, since the table is directly above, let's use 122 as our primitive polynomial.
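The count N = totient(p^n - 1)/n can be checked with a few lines of code. This is a quick sketch of my own; `totient` here is a naive implementation of Euler's totient function.

```python
from math import gcd

def totient(m):
    # Euler's totient: count of 1 <= k <= m with gcd(k, m) == 1
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

p, n = 3, 2
num_primitive = totient(p**n - 1) // n   # totient(8)/2 = 4/2
print(num_primitive)  # 2
```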
Since each polynomial maps to an integer power of D, and since D^8 is 1, we can use the fact that two polynomials are multiplicative inverses if the exponents in their exponential representations add to 8. Hence:

Multiplicative Inverses in GF(3^2) mod 122

| k | D^k | Polynomial | Symbol | (D^k)^-1 | Polynomial | Symbol |
|---|-----|------------|--------|----------|------------|--------|
| 0 | D^0 | 1          | 01     | D^8      | 1          | 01     |
| 1 | D^1 | D          | 10     | D^7      | D + 2      | 12     |
| 2 | D^2 | D + 1      | 11     | D^6      | 2D + 2     | 22     |
| 3 | D^3 | 2D + 1     | 21     | D^5      | 2D         | 20     |
| 4 | D^4 | 2          | 02     | D^4      | 2          | 02     |
| 5 | D^5 | 2D         | 20     | D^3      | 2D + 1     | 21     |
| 6 | D^6 | 2D + 2     | 22     | D^2      | D + 1      | 11     |
| 7 | D^7 | D + 2      | 12     | D^1      | D          | 10     |
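The inverse table can be verified by direct multiplication. The sketch below is my own: it represents c1*D + c0 as the pair (c1, c0), multiplies two such pairs in GF(3)[D] modulo 122 (D^2 + 2D + 2, so D^2 is replaced by -(2D + 2)), and checks that every pair of powers whose exponents sum to 8 multiplies to 1.

```python
P = 3
M1, M0 = 2, 2  # modulus 122: D^2 + 2D + 2, so D^2 = -(M1*D + M0)

def mul(a, b):
    """Multiply two degree-<2 polynomials (pairs) modulo the primitive polynomial."""
    a1, a0 = a
    b1, b0 = b
    # (a1*D + a0)(b1*D + b0) = (a1*b1)D^2 + (a1*b0 + a0*b1)D + a0*b0
    d2, d1, d0 = a1 * b1, a1 * b0 + a0 * b1, a0 * b0
    # reduce: D^2 = -(M1*D + M0)
    return ((d1 - d2 * M1) % P, (d0 - d2 * M0) % P)

# successive powers of D under modulus 122
powers = [(0, 1)]
for _ in range(8):
    powers.append(mul(powers[-1], (1, 0)))

# exponents summing to 8 give inverse pairs
for k in range(9):
    assert mul(powers[k], powers[8 - k]) == (0, 1)
print("all inverse pairs check out")
```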
# 1031. Maximum Sum of Two Non-Overlapping Subarrays

## Problem Link

https://leetcode.com/problems/maximum-sum-of-two-non-overlapping-subarrays/

## Problem Description

```
Given an array A of non-negative integers, return the maximum sum of elements in two non-overlapping (contiguous) subarrays, which have lengths L and M. (For clarification, the L-length subarray could occur before or after the M-length subarray.)

Formally, return the largest V for which V = (A[i] + A[i+1] + ... + A[i+L-1]) + (A[j] + A[j+1] + ... + A[j+M-1]) and either:

0 <= i < i + L - 1 < j < j + M - 1 < A.length, or
0 <= j < j + M - 1 < i < i + L - 1 < A.length.

Example 1:

Input: A = [0,6,5,2,2,5,1,9,4], L = 1, M = 2
Output: 20
Explanation: One choice of subarrays is [9] with length 1, and [6,5] with length 2.

Example 2:

Input: A = [3,8,1,3,2,1,8,9,0], L = 3, M = 2
Output: 29
Explanation: One choice of subarrays is [3,8,1] with length 3, and [8,9] with length 2.

Example 3:

Input: A = [2,1,5,6,0,9,5,0,3,8], L = 4, M = 3
Output: 31
Explanation: One choice of subarrays is [5,6,0,9] with length 4, and [3,8] with length 3.

Note:

L >= 1
M >= 1
L + M <= A.length <= 1000
0 <= A[i] <= 1000
```

## Code

### Approach #1: Prefix Sums

```java
class Solution {
    public int maxSumTwoNoOverlap(int[] A, int L, int M) {
        int[] prefixSum = new int[A.length + 1];
        for (int i = 0; i < A.length; i++) {
            prefixSum[i + 1] = prefixSum[i] + A[i];
        }
        return Math.max(maxSum(prefixSum, L, M), maxSum(prefixSum, M, L));
    }

    // Maximum total with an L-length subarray strictly before an M-length subarray.
    private int maxSum(int[] p, int L, int M) {
        int ans = 0;
        int maxL = 0; // best L-length sum ending at or before index i - M
        for (int i = L + M; i < p.length; i++) {
            maxL = Math.max(maxL, p[i - M] - p[i - M - L]);
            int m = p[i] - p[i - M]; // M-length sum ending at index i
            ans = Math.max(ans, maxL + m);
        }
        return ans;
    }
}
```

### Approach #2: Sliding Window

```java
class Solution {
    public int maxSumTwoNoOverlap(int[] A, int L, int M) {
        return Math.max(maxSum(A, L, M), maxSum(A, M, L));
    }

    // Maximum total with an L-length window strictly before an M-length window.
    private int maxSum(int[] A, int L, int M) {
        int sumL = 0, sumM = 0;
        for (int i = 0; i < L + M; i++) {
            if (i < L) {
                sumL += A[i];
            } else {
                sumM += A[i];
            }
        }
        int ans = sumM + sumL;
        int maxL = sumL;
        for (int i = L + M; i < A.length; i++) {
            sumM += A[i] - A[i - M];
            sumL += A[i - M] - A[i - L - M];
            maxL = Math.max(maxL, sumL);
            ans = Math.max(ans, maxL + sumM);
        }
        return ans;
    }
}
```
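As a cross-check, here is a direct Python translation of the prefix-sum idea from Approach #1 (my own sketch, not part of the original solution), run on the three examples from the problem statement:

```python
def max_sum_two_no_overlap(A, L, M):
    # prefix[i] = sum of A[:i]
    prefix = [0]
    for x in A:
        prefix.append(prefix[-1] + x)

    def best(L, M):
        # best total with an L-length window strictly before an M-length window
        ans, best_L = 0, 0
        for i in range(L + M, len(prefix)):
            best_L = max(best_L, prefix[i - M] - prefix[i - M - L])
            ans = max(ans, best_L + prefix[i] - prefix[i - M])
        return ans

    return max(best(L, M), best(M, L))

print(max_sum_two_no_overlap([0,6,5,2,2,5,1,9,4], 1, 2))    # 20
print(max_sum_two_no_overlap([3,8,1,3,2,1,8,9,0], 3, 2))    # 29
print(max_sum_two_no_overlap([2,1,5,6,0,9,5,0,3,8], 4, 3))  # 31
```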
# Thread: Converting an equation in one variable to a function

1. ## Converting an equation in one variable to a function

This is probably a really stupid question but it's been bugging me. The problem in the textbook says: "Use a graphing utility to approximate the solutions to the equation $x = 2\sin(x)$ on the interval $[-\pi, \pi]$." It then says: "Begin by graphing the function $y = x - 2\sin(x)$". I would have tried to graph $y = 2\sin(x) - x$ by subtracting x from both sides of the original equation. Why did it convert from the equation to the function the opposite way? The solutions for x come out the same either way, but the y values of the two functions are mirror images (negatives) of each other. If I had to submit the graph as part of the answer, would it be wrong? Am I trying to make this too complex?
HuggingFaceTB/finemath
## Convert decajoule to dekajoule

How many decajoule in 1 dekajoule? The answer is 1. We assume you are converting between decajoule and dekajoule. You can view more details on each measurement unit: decajoule or dekajoule. The SI derived unit for energy is the joule. 1 joule is equal to 0.1 decajoule, or 0.1 dekajoule. Note that rounding errors may occur, so always check the results. Use this page to learn how to convert between decajoules and dekajoules.

## Quick conversion chart of decajoule to dekajoule

1 decajoule to dekajoule = 1 dekajoule

5 decajoule to dekajoule = 5 dekajoule

10 decajoule to dekajoule = 10 dekajoule

20 decajoule to dekajoule = 20 dekajoule

30 decajoule to dekajoule = 30 dekajoule

40 decajoule to dekajoule = 40 dekajoule

50 decajoule to dekajoule = 50 dekajoule

75 decajoule to dekajoule = 75 dekajoule

100 decajoule to dekajoule = 100 dekajoule

## Definition: Decajoule

The SI prefix "deca" represents a factor of 10^1, or in exponential notation, 1E1. So 1 decajoule = 10^1 joules. The definition of a joule is as follows: The joule (symbol J, also called newton meter, watt second, or coulomb volt) is the SI unit of energy and work. The unit is pronounced to rhyme with "tool", and is named in honor of the physicist James Prescott Joule (1818-1889).

## Definition: Dekajoule

The SI prefix "deka" represents a factor of 10^1, or in exponential notation, 1E1. So 1 dekajoule = 10^1 joules. The definition of a joule is as follows: The joule (symbol J, also called newton meter, watt second, or coulomb volt) is the SI unit of energy and work. The unit is pronounced to rhyme with "tool", and is named in honor of the physicist James Prescott Joule (1818-1889).
# Legislation for Personal Data: Magna Carta or Highway Code?

Karl Popper is perhaps one of the most important thinkers of the 20th century, not purely for his philosophy of science, but for giving a definitive answer to a common conundrum: "Which comes first, the chicken or the egg?". He says that they were simply preceded by an 'earlier type of egg'. I take this to mean that the answer is neither: they actually co-evolved. What do I mean by co-evolved? Well, broadly speaking, there once were two primordial entities which weren't very chicken-like or egg-like at all; over time small changes occurred, supported by natural selection, rendering those entities unrecognisable from their origins and turning them into two of our most familiar foodstuffs of today. I find the process of co-evolution remarkable, and to some extent unimaginable, or certainly it seems to me difficult to visualise the intermediate steps. Evolution occurs by natural selection: selection by the 'environment'. But when we refer to co-evolution we are clarifying that this is a complex interaction. The primordial entities affect the environment around them, therefore changing the 'rules of the game' as far as survival is concerned. In such a convolved system certainties about the right action disappear very quickly. What use are chickens and eggs when talking about personal data? Well, Popper used the question to illustrate a point about scientific endeavour. He was talking about science and reflecting on how scientific theories co-evolve with experiments. However, that's not the point I'd like to make here. Co-evolution is very general; one area it arises is when technological advance changes society to such an extent that existing legislative frameworks become inappropriate. Tim Berners-Lee has called for a Magna Carta for the digital age, and I think this is a worthy idea, but is it the right idea? A digital bill of rights may be the right idea in the longer run, but I don't think we are ready to draft it yet.
My own research is machine learning, the main technology underpinning the current AI revolution. A combination of machine learning, fast computers, and interconnected data means that the technological landscape is changing so fast that it is affecting society around us in ways that no one envisaged twenty years ago. Even if we were to start with the primordial entities that presaged the chicken and the egg, and we knew all about the process of natural selection, could we have predicted or controlled the animal of the future that would emerge? We couldn't have done. The chicken exists today as the product of its environmental experience, an experience that was unique to it. The end point we see is one that is highly sensitive to very small perturbations that could have occurred at the beginning. So should we be writing legislation today which ties down the behaviour of future generations? There is precedent for this from the past. Before the printing press was introduced, no one would have begrudged the monks' right to laboriously transcribe the books of the day. Printing meant it was necessary to protect the "copy rights" of the originator of the material. No one could have envisaged that those copyright laws would also be used to protect software, or digital music. In the industrial revolution the legal mechanism of 'letters patent' evolved to protect creative insight. Patents became protection of intellectual property, ensuring that inventors' ideas could be shared under license. These mechanisms also protect innovation in the digital world. In some jurisdictions they are now applied to software and even user interface designs. Of course even this legislation is stretched in the face of digital technology and may need to evolve, as it has done in the past. The new legislative challenge is not in protecting what is innovative about people, but what is commonplace about them.
The new value is in knowing the nature of people: predicting their needs and fulfilling them. This is the value of interconnection of personal data. It allows us to make predictions about an individual by comparing him or her to others. It is the mainstay of the modern internet economy: targeted advertising and recommendation systems. It underpins my own research ideas in personalisation of health treatments and early diagnosis of disease. But it leads to potential dangers, particularly where the uncontrolled storage and flow of an individual's personal information is concerned. We are reaching the point where some studies are showing that computer prediction of our personality is more accurate than that of our friends and relatives. How long before an objective computer prediction of our personality can outperform our own subjective assessment of ourselves? Some argue those times are already upon us. It feels dangerous for such power to be wielded unregulated by a few powerful groups. So what is the answer? New legislation? But how should it come about? In the long term, I think we need to develop a set of rules and legislation that includes principles protecting our digital rights. I think we need new models of ownership that allow us to control our private data. One idea that appeals to me is extending data protection legislation with the right not only to view data held about us, but also to ask for it to be deleted. However, I can envisage many practical problems with that idea, and these need to be resolved so we can also enjoy the benefits of these personalised predictions. As wonderful as some of the principles in the Magna Carta are, I don't think it provides a good model for the introduction of modern legislation. It was actually signed under duress: under a threat of violent revolution. The revolution was threatened by a landed gentry, although the consequences would have been felt by all. Revolutions don't always end well.
They occur because people can become deadlocked: they envisage different futures for themselves and there is no way to agree on a shared path to different end points. The Magna Carta was also a deal between the king and his barons. Those barons were asking for rights that they had no intention of extending within their fiefdoms. These two characteristics, redistribution of power amongst a powerful minority with significant potential consequences for a disenfranchised majority, make the Magna Carta, for me, a poor analogy for how we would like things to proceed. The chicken and the egg remind us that the actual future will likely be more remarkable than any of us can currently imagine. Even if we all seek a particular version of the future, this version of the future is unlikely to ever exist in the form that we imagine. Open, receptive and ongoing dialogue between the interested and informed parties is more likely to bring about a societal consensus. But can this happen in practice? Could we really evolve a set of rights and legislative principles which lets us achieve all our goals? I'd like to propose that rather than taking as our example a mediaeval document, written on vellum, we look to more recent changes in society and how they have been handled. In England, the Victorians may have done more than anyone to promote our romantic notion of the Magna Carta, but I think we can learn more by looking at how they dealt with their own legislative challenges. I live in Sheffield, and cycle regularly in the Peak District national park. Enjoyment of the Peak Park is not restricted to our era. At 10:30 on Easter Monday in 1882 a Landau carriage, rented by a local cutler, was heading on a day trip from Sheffield to the village of Tideswell, in the White Peak. They'd left Sheffield via Ecclesall Road, and as they began to descend the road beneath Froggatt Edge, just before the Grouse Inn they encountered a large traction engine towing two trucks of coal.
The Landau carriage had two horses and had been moving at a brisk pace of four and a half miles an hour. They had already passed several engines on the way out of Sheffield. However, as they moved out to pass this one, it let out a continuous blast of steam and began to turn across their path into the entrance of the inn. One of the horses took fright, pulling the carriage up a bank and throwing Ben Deakin Littlewood and Mary Coke Smith from the carriage and under the wheels of the traction engine. I cycle to work past their graves every day. The event was remarkable at the time, so much so that it is chiselled into the inscription on Ben's grave. The traction engine was preceded, as legislation since 1865 had dictated, by a boy waving a red flag. It was restricted to two and a half miles an hour. However, the boy's role was to warn oncoming traffic. The traction engine driver had turned without checking whether the road was clear of overtaking traffic. It's difficult to blame the driver though. I imagine that there was quite a lot involved in driving a traction engine in 1882. It turned out that the driver was also preoccupied with a broken wheel on one of his carriages. He was turning into the Grouse to check the wheel before descending the road. This example shows how legislation can sometimes be extremely restrictive, but still not achieve the desired outcome. Codification of the manner in which a vehicle should be overtaken came later, at a time when vehicles were travelling much faster. The Landau carriage was overtaking about 100 metres after a bend. The driver of the traction engine didn't check over his shoulder immediately before turning, although he claimed he'd looked earlier. Today both drivers' responsibilities are laid out in the "Highway Code". There was no "Mirror, Signal, Manoeuvre" in 1882. That came later alongside other regulations such as road markings and turn indicators.
The shared use of our road network, and the development of the right legislative framework, might be a good analogy for how we should develop legislation for protecting our personal privacy. No analogy is ever perfect, but it is clear that our society both gained and lost through the introduction of motorised travel. Similarly, the digital revolution will bring advantages but new challenges. We need to have mechanisms that allow for negotiated solutions. We need to be able to argue about the balance of current legislation and how it should evolve. Those arguments will be driven by our own personal perspectives. Our modern rules of the road are in the Highway Code. It lists responsibilities of drivers, motorcyclists, cyclists, mobility scooters, pedestrians and even animals. It gives legal requirements and standards of expected behaviour. The Highway Code co-evolved with transport technology: it has undergone 15 editions and is currently being rewritten to accommodate driverless cars. Even today we still argue about the balance of this document. In the long term, when technologies have stabilised, I hope we will be able to distill our thinking to a bill of rights for the internet. But such a document has a finality about it which seems inappropriate in the face of technological uncertainty. Calls for a Magna Carta provide soundbites that resonate and provide rallying points. But they can polarise, presaging unhelpful battles. Between the Magna Carta and the foundation of the United States, the balance between the English monarch and his subjects was reassessed through the English Civil War and the American Revolution. I don't think we can afford such discord when drafting the rights of the digital age. We need mechanisms that allow for open debate, rather than open battle. Before a bill of rights for the internet, I think we need a different document.
I'd like to sound the less resonant call for a document that allows for dialogue, reflecting concerns as they emerge. It could summarise current law and express expected standards of behaviour. With regular updating it would provide an evolving social contract between all the users of the information highway: people, governments, businesses, hospitals, scientists, aid organisations. Perhaps instead of a Magna Carta for the internet we should start with something more humble: the rules of the digital road.

This blog post is an extended version of an article written for the Guardian's media network: "Let's learn the rules of the digital road before talking about a web Magna Carta"

# Proceedings of Machine Learning Research

Back in 2006, when the wider machine learning community was becoming aware of Gaussian processes (mainly through the publication of the Rasmussen and Williams book), Joaquin Quinonero Candela, Anton Schwaighofer and I organised the Gaussian Processes in Practice workshop at Bletchley Park. We planned a short proceedings for the workshop, but when I contacted Springer's LNCS proceedings, a rather dismissive note came back with an associated prohibitive cost. Given that the ranking of LNCS wasn't (and never has been) that high, this seemed a little presumptuous on their part. In response I contacted JMLR and asked if they'd ever considered a proceedings track. The result was that I was asked by Leslie Pack Kaelbling to launch the proceedings track. JMLR isn't just open access: there is no charge to authors. It is hosted by servers at MIT and managed by the community. We launched the proceedings in March 2007 with the first volume from the Gaussian Processes in Practice workshop. Since then there have been 38 volumes, including two volumes in the pipeline. The proceedings publishes several leading conferences in machine learning, including AISTATS, COLT and ICML.
From the start we felt that it was important to share the branding of JMLR with the proceedings, to show that the publication was following the same ethos as JMLR. However, this led to the rather awkward name: JMLR Workshop and Conference Proceedings, or JMLR W&CP. Following discussion with the senior editorial board of JMLR we now feel the time is right to rebrand with the shorter “Proceedings of Machine Learning Research”. As part of the rebranding process the editorial team for the Proceedings of Machine Learning Research (which consists of Mark Reid and myself) is launching a small consultation exercise looking for suggestions on how we can improve the service for the community. Please feel free to leave comments on this blog post or via Facebook or Twitter to let us have feedback! # Beware the Rise of the Digital Oligarchy The Guardian’s media network published a short article I wrote for them on 5th March. They commissioned an article of about 600 words, that appeared on the Guardian’s site, but the original version I wrote was around 1400. I agreed a week’s exclusivity with the Guardian, but now that’s up, the longer version is below (it’s about twice as long). On a recent visit to Genova, during a walk through the town with my colleague Lorenzo, he pointed out what he said was the site of the world’s first commercial bank. The bank of St George, located just outside the city’s old port, grew to be one of the most powerful institutions in Europe, it bankrolled Charles V and governed many of Genova’s possessions on the republic’s behalf. The trust that its clients placed in the bank is shown in records of its account holders. There are letters from Christopher Columbus to the bank instructing them in the handling of his affairs. The influence of the bank was based on the power of accumulated capital. Capital they could accumulate through the trust of a wealthy client base. 
The bank was so important in the medieval world that Machiavelli wrote that "if even more power was ceded by the Genovan republic to the bank, Genova would even outshine Venice amongst the Italian city states." The Bank of St George was once one of the most influential private institutions in Europe. Today the power wielded by accumulated capital can still dominate international affairs, but a new form of power is emerging, that of accumulated data. Like Hansel and Gretel trailing breadcrumbs into the forest, we now leave a trail of data-crumbs wherever we travel: supermarket loyalty cards, text messages, credit card transactions, web browsing and social networking. The power of this data emerges, like that of capital, when it's accumulated. Data is the new currency. I'm a professor of machine learning. Machine learning is the main technique at the heart of the current revolution in artificial intelligence. A major aim of our field is to develop algorithms that better understand data: that can reveal the underlying intent or state of health behind the information flow. Already machine learning techniques are used to recognise faces or make recommendations; as we develop better algorithms that better aggregate data, our understanding of the individual also improves. What do we lose by revealing so much of ourselves? How are we exposed when so much of our digital soul is laid bare? Have we engaged in a Faustian pact with the internet giants? Similar to Faust, we might agree to the pact in moments of levity, or despair, perhaps weakened by poor health. My father died last year, but there are still echoes of him online. Through his account on Facebook I can be reminded of his birthday or told of common friends. Our digital souls may not be immortal, but they certainly outlive us. What we choose to share also affects our family: my wife and I may be happy to share information about our genetics, perhaps for altruistic reasons, or just out of curiosity.
But by doing so we are also sharing information about our children's genomes. Using a supermarket loyalty card gains us discounts on our weekly shop, but also gives the supermarket detailed information about our family diet. In this way we'd expose both the nature and nurture of our children's upbringing. Will our decisions to make this information available haunt our children in the future? Are we equipped to understand the trade-offs we make by this sharing? There have been calls from Elon Musk, Stephen Hawking and others to regulate artificial intelligence research. They cite fears about autonomous and sentient artificial intelligence that could self-replicate beyond our control. Most of my colleagues believe that such breakthroughs are beyond the horizon of current research. Sentient intelligence is still not at all well understood. As Ryan Adams, a friend and colleague based at Harvard, tweeted: "Personally, I worry less about the machines, and more about the humans with enhanced powers of data access." After all, most of our historic problems seem to have come from humans wielding too much power, either individually or through institutions of government or business. Whilst sentient AI does seem beyond our horizons, one aspect of it is closer to our grasp. An aspect of sentient intelligence is 'knowing yourself', predicting your own behaviour. It does seem to me plausible that through accumulation of data computers may start to 'know us' even better than we know ourselves. I think that one concern of Musk and Hawking is that the computers would act autonomously on this knowledge. My more immediate concern is that our fellow humans, through the modern equivalents of the Bank of St George, will be exploiting this knowledge, leading to a form of data-oligarchy. And in the manner of oligarchies, the power will be in the hands of very few but wielded to the effect of many. How do we control for all this?
Firstly, we need to consider how to regulate the storage of data. We need better models of data-ownership. There was no question that Columbus was the owner of the money in his accounts. He gave it under license, and he could withdraw it at his pleasure. For the data repositories we interact with we have no right of deletion. We can withdraw from the relationship, and in Europe data protection legislation gives us the right to examine what is stored about us. But we don’t have any right of removal. We cannot withdraw access to our historic data if we become concerned about the way it might be used. Secondly, we need to increase transparency. If an algorithm makes a recommendation for us, can we know on what information in our historic data that prediction was based? In other words, can we know how it arrived at that prediction? The first challenge is a legislative one, the second is both technical and social. It involves increasing people’s understanding of how data is processed and what the capabilities and limitations of our algorithms are. There are opportunities and risks with the accumulation of data, just as there were (and still are) for the accumulation of capital. I think there are many open questions, and we should be wary of anyone who claims to have all the answers. However, two directions seem clear: we need both to increase the power of the people and to develop their understanding of the processes. It is likely to be a fraught process, but we need to form a data-democracy: data governance for the people, by the people and with the people’s consent. Neil Lawrence is a Professor of Machine Learning at the University of Sheffield. He is an advocate of “Open Data Science” and an advisor to a London-based startup, CitizenMe, that aims to allow users to “reclaim their digital soul”. # Blogs on the NIPS Experiment There are now quite a few blog posts on the NIPS experiment, and I just wanted to put together a place where I could link to them all.
It’s a great set of posts from community mainstays, newcomers and those outside our research fields. Just as a reminder, Corinna and I were extremely open about the entire review process, with a series of posts about how we engaged the reviewers and processed the data. All that background can be found through a separate post here. At the time of writing there is also still quite a lot of twitter traffic on the experiment. List of Blog Posts What an exciting series of posts and perspectives! For those of you that couldn’t make the conference, here’s what it looked like. And that’s just one of five or six poster rows! # The NIPS Experiment Just back from NIPS, where it was really great to see the results of all the work everyone put in. I really enjoyed the program and thought the quality of all presented work was really strong. Both Corinna and I were particularly impressed by the work oral presenters put in to make their material accessible to such a large and diverse audience. We also released some of the figures from the NIPS experiment, and there was a lot of discussion at the conference about what the result meant. As we announced at the conference, the consistency figure was 25.9%. I just wanted to confirm that, in the spirit of openness that we’ve pursued across the entire conference process, Corinna and I will provide a full write-up of our analysis and conclusions in due course! Some of the commentary in the existing debate is missing some of the background information we’ve tried to generate, so I just wanted to write a post that summarises that information to highlight its availability. ### Scicast Question With the help of Nicolo Fusi, Charles Twardy and the entire Scicast team we launched a Scicast question a week before the results were revealed. The comment thread for that question already had a good deal of interesting comment before the conference.
Just for informational purposes, before we began reviewing Corinna forecast this figure would be 25% and I forecast it would be 20%. The box plot summary of predictions from Scicast is below. ### Comment at the Conference There was also a good deal of debate at the conference about what the results mean; a few attempts to answer this question (based only on the inconsistency score and the expected accept rate for the conference) are available here in this little Facebook discussion and on this blog post. ### Background Information on the Process To recap, previous posts on this year’s conference are listed below: ### Software on Github And finally there is a large amount of code available on a github site for allowing our processes to be recreated. A lot of it is tidied up, but the last sections on the analysis are not yet done, because it was always my intention to finish those when the experimental results were fully released. # NIPS: Decision Time Thursday 28th August In the last two days I’ve spent nearly 20 hours in teleconferences; my last scheduled conference will start in about half an hour. Given the available 25 minutes it seemed to make sense to try and put down some thoughts about the decision process. The discussion period has been constant: there is a stream of incoming queries from Area Chairs, requests for advice on additional reviewers, or how to resolve deadlocked or conflicting reviews. Corinna has handled many of these. Since the author rebuttal period all the papers have been distributed to google spreadsheet lists which are updated daily. They contain paper titles, reviewer names, quality scores, calibrated scores, a probability of accept (under our calibration model), a list of bot-compiled potential issues, as well as columns for accept/reject and poster/spotlight. Area chairs have been working in buddy pairs, ensuring that a second set of eyes can rest on each paper.
For those papers around the borderline, or with contrasting reviews, the discussion period really can have an effect, as we see when calibrating the reviewer scores: over time the reviewer bias is reducing and the scores are becoming more consistent. For this reason we allowed this period to go on a week longer than originally planned, and we’ve been compressing our teleconferences into the last few days. Most teleconferences consist of two buddy pairs coming together to discuss their papers. Perhaps ideally the pairs would have a similar subject background, but constraints of time zone and the fact that there isn’t a balanced number of subject areas mean that this isn’t necessarily the case. Corinna and I have been following a similar format: listing the papers from highest scoring to lowest, and starting at the top. For each paper, if it is a confident accept, we try and identify whether it might be a talk or a spotlight. This is where the opinion of a range of Area Chairs can be very useful. For uncontroversial accepts that aren’t nominated for orals we spend very little time. This proceeds until we start reaching borderline papers, those in the ‘grey area’: typically papers with an average score around 6. They fall broadly into two categories: those where the reviewers disagree (e.g. scores of 8,6,4), or those where the reviews are consistent but the reviewers, perhaps, feel underwhelmed (scores of 6,6,6). Area chairs will often work hard to try and get one of the reviewers to ‘champion’ a paper: it’s a good sign if a reviewer has been prepared to argue the case for a paper in the discussion. However, the decisions in this region are still difficult. It is clear that we are rejecting some very solid papers, for reasons of space and because of the overall quality of submissions. It’s hard for everyone to be on the ‘distributing’ end of this system, but at the same time, we’ve all been on the receiving end of it too.
In this difficult ‘grey area’ for acceptance, we are looking for sparks in a paper that push it over the edge to acceptance. So what sort of thing catches an area chair’s eye? A new direction is always welcome, but often leads to higher variance in the reviewer scores. Not all reviewers are necessarily comfortable with the unfamiliar. But if an area chair feels a paper is taking the machine learning field somewhere new, then even if the paper has some weaknesses (e.g. in evaluation, context or detailed derivations) we might be prepared to overlook them. We look at the borderline papers in some detail, scanning the reviews, looking for words like ‘innovative’, ‘new directions’ or ‘strong experimental results’. If we see these then as program chairs we definitely become more attentive. We all remember papers presented at NIPS in the past that led to revolutions in the way machine learning is done. Both Corinna and I would love to have such papers at ‘our’ NIPS. A paper in a more developed area, by contrast, will be expected to have done a more rounded job in terms of setting the context and performing the evaluation, and to hit a higher level in terms of its standards. It is often helpful to have an extra pair of eyes (or even two pairs) run through the paper. Each teleconference call normally ends with a few follow-up actions for a different area chair to look through a paper or clarify a particular point. Sometimes we also call in domain experts, who may have already produced four formal reviews of other papers, just to get clarification on a particular point. This certainly doesn’t happen for all papers, but those with scores around 7,6,6 or 6,6,6 or 8,6,4 often get this treatment. Much depends on the discussion and content of the existing reviews, but there are still, often, final checks that need carrying out.
From a program chair’s perspective, the most important thing is that the Area Chair is comfortable with the decision, and I think most of the job is acting as a sounding board for the Area Chair’s opinion, which I try to reflect back to them. In the same manner as rubber duck debugging, just vocalising the issues sometimes causes them to crystallise in the mind. Ensuring that Area Chairs are calibrated to each other is also important. The global probabilities of accept from the reviewer calibration model really help here. As we go through papers I keep half an eye on those, not to influence the decision on a particular paper so much as to ensure that at the end of the process we don’t have a surplus of accepts. At this stage all decisions are tentative, but we hope not to have to come back to too many of them. Monday 1st September Corinna finished her last video conference on Friday; Saturday, Sunday and Monday (Labor Day) were filled with making final decisions on accepts, then talks and finally spotlights. Accepts were hard: we were unable to take all the papers that were possible accepts, as we would have gone way over our quota of 400. We had to make a decision on duplicated papers where the decisions were in conflict; more details of this to come at the conference. Remembering what a pain it was to do the schedule after the acceptances, and also following advice from Leon Bottou that the talk program should emerge to reflect the accepted posters, we finalized the talk and spotlight program whilst putting talks and spotlights directly into the schedule. We had to hone the talks down to 20 from about 40 candidates, and for spotlights we squeezed in 62 from over a hundred suggestions. We spent three hours in teleconference each day, as well as preparation time, across Labor Day weekend putting together the first draft of the schedule.
It was particularly impressive how quickly area chairs responded to any of our follow-up queries to our notes from the teleconferences, particularly those in the US who were enjoying the traditional last weekend of summer. Tuesday 2nd September I had an all-day meeting in Manchester for a network of researchers focussed on mental illness. It was really good to have a day discussing research, my first in a long time. I thought very little about NIPS until, on the train home, I thought to have a little look at the conference shape. I actually ended up looking at a lot of the papers we rejected, many from close colleagues and friends. I found it a little depressing. I have no doubt there is a lot of excellent work there, and I know how disappointed my friends and colleagues will be to receive those rejections. We did an enormous amount to ensure that the process was right, and I have every confidence in the area chairs and reviewers. But at the end of the day, you know that you will be rejecting a lot of good work. It brought to mind a thought I had at the allocation stage. When we had the draft allocation to each area chair, I went through several of them sanity checking the quality of the allocation. Naturally, I checked those associated with area chairs who are closer to my own areas of expertise. I looked through the paper titles, and I couldn’t help but think what a good workshop each of those allocations would make. There would be some great ideas, some partially developed ideas. There would be some really great experiments and some weaker experiments. But there would be a lot of debate at such a workshop. None or very few of the papers would be uninteresting: there would certainly be errors in papers, but that’s one of the charms of a workshop, there’s still a lot more to be said about an idea when it’s presented at a workshop. Friday 5th September Returning from an excellent two day UCL-Duke workshop.
There is a lot of curiosity about the NIPS experiment, but Corinna and I have agreed to keep the results embargoed until the conference. Saturday 6th September Area chairs had until Thursday to finalise their reviews in the light of the final decisions, and also to raise any concerns they had about the final decisions. My own experience of area chairing is that you can have doubts about your reasoning when you are forced to put pen to paper and write the meta review. We felt it was important not to rush the final process, to allow any of those doubts to emerge. In the end, the final program has 3 or 4 changes from the draft we first distributed on Monday night, so there may be some merit in this approach. We had a further 3 hour teleconference today to go through the meta-reviews, with a particular focus on those for papers around the decision boundary. Other issues such as comments in the wrong place (the CMT interface can be fairly confusing; 3% of meta reviews were actually placed in the box meant for notes to the program chairs) were also covered. Our big concern was whether the area chairs had written a review consistent with our final verdict. A handy learning task would have been to build a sentiment model to predict accept/reject from the meta review. Monday 8th September Our plan had been to release reviews this morning, but we were still waiting for a couple of meta-reviews to be tidied up and had an outstanding issue on one paper. I write this with CMT ‘loaded’ and ready to distribute decisions. However, when I preview the emails the variable fields are not filled in (if I hit ‘send’ I would send 5,000 emails that start “Dear $RecipientFirstName$,”, which sounds somewhat impersonal … although perhaps more critical is that the authors would be informed of the fate of paper “$Title$”, which may lead to some confusion). CMT are on a different time zone, 8 hours behind.
Fortunately, it is late here, so there is a good chance they will respond in time … Tuesday 9th September I was wide awake at 6:10 despite going to sleep at 2 am. I always remember that when I was Area Chair with John Platt he would be up late answering emails and then out of bed again 4 hours later doing it again. A few final checks and the all clear: everything is there. Pressed the button at 6:22 … emails are still going out and it is 10:47. 3854 of the 5615 emails have been sent … one reply, which was an out-of-office email from China. Time to make a coffee … Final Statistics 1678 submissions, 414 papers accepted: 20 papers for oral, 62 for spotlight, 331 for poster; 19 rejected without review. Epilogue to Decision Mail: So what was wrong with those variable names? I particularly like the fact that something different was wrong with each one. $RecipientFirstName$ and $RecipientEmail$ are not available in the “Notification Wizard”, whereas they are in the normal email sending system. Then I got the other variables wrong, $Title$->$PaperTitle$ and $PaperId$->$PaperID$, but since neither of the two I knew to be right were working I assumed there was something wrong with the whole variable substitution system … rather than it being that (at least) two of the variable types just happen to be missing from this wizard … CMT responded nice and quickly though … that’s one advantage of working late. Epilogue on Acceptances: At the time of the conference there were only 411 papers presented because three were withdrawn. Withdrawals were usually due to some deeper problem authors had found in their own work, perhaps triggered by comments from reviewers. So in the end there were 411 papers accepted and 328 posters. Author Concerns So the decisions have been out for a few days now, and of course we have had some queries about our processes.
Everyone has been pretty reasonable, and their frustration is understandable when three reviewers have argued for accept but the final decision is to reject. This is an issue with ‘space-constrained’ conferences. Whether a paper gets through in the end can depend on subjective judgements about the paper’s qualities. In particular, we’ve been looking for three components to this: novelty, clarity and utility. Papers with borderline scores (and borderline here might be that the average score is in the weak accept range) are examined closely. The decision about whether the paper is accepted at this point necessarily comes down to judgement, because for a paper to get scores this high the reviewers won’t have identified a particular problem with the paper. The things that come through are how novel the paper is, how useful the idea is, and how clearly it’s presented. Several authors seem to think that the latter should be downplayed. As program chairs, we don’t necessarily agree. It’s true that it is a great shame when a great idea is buried in poor presentation, but it’s also true that the objective of a conference is communication, and therefore clarity of presentation definitely plays a role. However, it’s clear that all three of these criteria are a matter of academic judgement: that of the reviewers, the area chair and the quad groups in the teleconferences. All the evidence we’ve seen is that reviewers and area chairs did weigh these aspects carefully, but that doesn’t mean that all their decisions can be shown to be right, because they are often a matter of perspective. Naturally authors are upset when what feels like a perfectly good paper is rejected on more subjective grounds. Most of the queries are on papers where this is felt to be the case. There has also been one query on process, and whether we did enough to evaluate on these criteria, for those papers in the borderline area, before author rebuttal.
Authors are naturally upset when the area chair raises such issues in the final decision’s meta review, but those points weren’t raised before. Personally I sympathise with both authors and area chairs in this case. We made some effort to encourage authors to identify such papers before rebuttal (we sent out attention reports that highlighted probable borderline papers) but our main efforts at the time were chasing missing, inappropriate or insufficient reviews. We compressed a lot into a fairly short time, and it was also a period when many are on holiday. We were very pleased with the performance of our area chairs, but I think it’s also unsurprising if an area chair didn’t have time to carefully think through these aspects before author rebuttal. My own feeling is that the space constraint on NIPS is rather artificial, and a lot of these problems would be avoided if it wasn’t there. However, there is a counter argument that suggests that to be a top quality conference NIPS has to have a high reject rate. NIPS is used in tenure cases within the US, and these statistics are important there. Whilst I reject these ideas (I don’t think the role of a conference is to allow people to get promoted in a particular country, nor is that the role of a journal: both are involved in the communication and debate of scientific ideas), I do not view the program chair’s role as reforming the conference ‘in their own image’. You have to also consider what NIPS means to the different participants. NIPS as Christmas # Reviewer Calibration for NIPS One issue that can occur for a conference is differences in interpretation of the reviewing scale. For a number of years (dating back to at least NIPS 2002) mis-calibration between reviewers has been corrected for with a model. Area chairs see not just the actual scores of the paper, but also ‘corrected scores’. Both are used in the decision making process.
Reviewer calibration at NIPS dates back to a model first implemented in 2002 by John Platt when he was an area chair. It’s a regularized least squares model that Chris Burges and John wrote up in 2012. They’ve kindly made their write-up available here. Calibrated scores are used alongside original scores to help in judging the quality of papers. We also knew that Zoubin and Max had modified the model last year, along with their program manager Hong Ge. However, before going through the previous work we first approached the question independently. The model we came up with turned out to be pretty much identical to that of Hong, Zoubin and Max, and the approach we are using to compute probability of accept was also identical. The model is a probabilistic reinterpretation of the Platt and Burges model: one that treats the bias parameters and quality parameters as latent variables that are normally distributed. Marginalizing out the latent variables leads to an ANOVA-style description of the data. ### The Model Our assumption is that the score from the $j$th reviewer for the $i$th paper is given by $y_{i,j} = f_i + b_j + \epsilon_{i, j}$ where $f_i$ is the objective quality of paper $i$ and $b_j$ is an offset associated with reviewer $j$. $\epsilon_{i,j}$ is a subjective quality estimate which reflects how a specific reviewer’s opinion differs from other reviewers (such differences in opinion may be due to differing expertise or perspective). The underlying ‘objective quality’ of the paper is assumed to be the same for all reviewers and the reviewer offset is assumed to be the same for all papers. If we have $n$ papers and $m$ reviewers then this implies $n$ + $m$ + $nm$ values need to be estimated. Of course, in practice, the matrix is sparse, and we have no way of estimating the subjective quality for paper-reviewer pairs where no assignment was made.
However, we can firstly assume that the subjective quality is drawn from a normal density with variance $\sigma^2$ $\epsilon_{i, j} \sim N(0, \sigma^2 \mathbf{I})$ which reduces us to $n$ + $m$ + 1 parameters. The Platt-Burges model then estimated these parameters by regularized least squares. Instead, we follow Zoubin, Max and Hong’s approach of treating these values as latent variables. We assume that the objective quality, $f_i$, is also normally distributed with mean $\mu$ and variance $\alpha_f$, $f_i \sim N(\mu, \alpha_f)$ this now reduces us to $m$+3 parameters. However, we only have approximately $4m$ observations (4 papers per reviewer) so parameters may still not be that well determined (particularly for those reviewers that have only one review). We therefore also assume that the reviewer offset is a zero mean normally distributed latent variable, $b_j \sim N(0, \alpha_b),$ leaving us only four parameters: $\mu$, $\sigma^2$, $\alpha_f$ and $\alpha_b$. When we combine these assumptions together we see that our model assumes that any given review score is a combination of 3 normally distributed factors: the objective quality of the paper (variance $\alpha_f$), the subjective quality of the paper (variance $\sigma^2$) and the reviewer offset (variance $\alpha_b$). The a priori marginal variance of a reviewer-paper assignment’s score is the sum of these three components. Cross-correlations between reviewer-paper assignments occur if either the reviewer is the same (when the cross covariance is given by $\alpha_b$) or the paper is the same (when the cross covariance is given by $\alpha_f$). 
With a constant mean coming from the mean of the ‘subjective quality’, this gives us a joint model for reviewer scores as follows: $\mathbf{y} \sim N(\mu \mathbf{1}, \mathbf{K})$ where $\mathbf{y}$ is a vector of stacked scores, $\mathbf{1}$ is the vector of ones and the elements of the covariance function are given by $k(i,j; k,l) = \delta_{i,k} \alpha_f + \delta_{j,l} \alpha_b + \delta_{i, k}\delta_{j,l} \sigma^2$ where $i$ and $j$ are the index of the paper and reviewer in the rows of $\mathbf{K}$ and $k$ and $l$ are the index of the paper and reviewer in the columns of $\mathbf{K}$. It can be convenient to reparameterize slightly into an overall scale $\alpha_f$, and normalized variance parameters, $k(i,j; k,l) = \alpha_f(\delta_{i,k} + \delta_{j,l} \frac{\alpha_b}{\alpha_f} + \delta_{i, k}\delta_{j,l} \frac{\sigma^2}{\alpha_f})$ which we rewrite in terms of two ratios: the offset/objective-quality ratio $\hat{\alpha}_b$ and the subjective/objective-quality ratio $\hat{\sigma}^2$. $k(i,j; k,l) = \alpha_f(\delta_{i,k} + \delta_{j,l} \hat{\alpha}_b + \delta_{i, k}\delta_{j,l} \hat{\sigma}^2)$ The advantage of this parameterization is that it allows us to optimize $\alpha_f$ directly through maximum likelihood (with a fixed point equation). This leaves us with two free parameters, which we can explore on a grid. We expect both $\mu$ and $\alpha_f$ to be very well determined due to the number of observations in the data. The negative log likelihood is $\frac{|\mathbf{y}|}{2}\log2\pi\alpha_f + \frac{1}{2}\log \left|\hat{\mathbf{K}}\right| + \frac{1}{2\alpha_f}\mathbf{y}^\top \hat{\mathbf{K}}^{-1} \mathbf{y}$ where $|\mathbf{y}|$ is the length of $\mathbf{y}$ (i.e. the number of reviews) and $\hat{\mathbf{K}}=\alpha_f^{-1}\mathbf{K}$ is the scale-normalised covariance.
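The covariance elements above translate directly into code: a shared paper contributes $\alpha_f$, a shared reviewer contributes $\alpha_b$, and the same paper-reviewer assignment additionally contributes $\sigma^2$. A small sketch in C (the function name is my own):

```c
/* Covariance between score (paper i, reviewer j) and score (paper k, reviewer l).
 * Kronecker deltas become equality tests on the integer indices. */
double k_entry(int i, int j, int k, int l,
               double alpha_f, double alpha_b, double sigma2) {
    double v = 0.0;
    if (i == k) v += alpha_f;             /* same paper */
    if (j == l) v += alpha_b;             /* same reviewer */
    if (i == k && j == l) v += sigma2;    /* same assignment */
    return v;
}
```

Filling an assignment-by-assignment matrix with `k_entry` gives the $\mathbf{K}$ used in the likelihood above.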
This negative log likelihood is easily minimized to recover $\alpha_f = \frac{1}{|\mathbf{y}|} \mathbf{y}^\top \hat{\mathbf{K}}^{-1} \mathbf{y}$ A Bayesian analysis of the $\alpha_f$ parameter is possible with gamma priors, but it would merely show that this parameter is extremely well determined (the degrees of freedom parameter of the associated Student-$t$ marginal likelihood scales with the number of reviews, which will be around $|\mathbf{y}| \approx 6,000$ in our case). We can set these parameters by maximum likelihood and then remove the offset from the model by computing the conditional distribution over the paper scores with the bias removed, $s_{i,j} = f_i + \epsilon_{i,j}$. This conditional distribution is found as $\mathbf{s}|\mathbf{y}, \alpha_f,\alpha_b, \sigma^2 \sim N(\boldsymbol{\mu}_s, \boldsymbol{\Sigma}_s)$ where $\boldsymbol{\mu}_s = \mathbf{K}_s\mathbf{K}^{-1}\mathbf{y}$ and $\boldsymbol{\Sigma}_s = \mathbf{K}_s - \mathbf{K}_s\mathbf{K}^{-1}\mathbf{K}_s$ and $\mathbf{K}_s$ is the covariance associated with the quality terms only, with elements given by $k_s(i,j;k,l) = \delta_{i,k}(\alpha_f + \delta_{j,l}\sigma^2)$. We now use $\boldsymbol{\mu}_s$ (which is both the mode and the mean of the posterior over $\mathbf{s}$) as the calibrated quality score. ### Analysis of Variance The model above is a type of Gaussian process model with a specific covariance function (or kernel). The variances are highly interpretable though, because the covariance function is made up of a sum of effects. Studying these variances is known as analysis of variance in statistics, and is commonly used to model batch effects; the result is an ANOVA model. It is easy to extend this model to include batch effects such as whether or not the reviewer is a student or whether or not the reviewer has published at NIPS before. We will conduct these analyses in due course.
Last year, Zoubin, Max and Hong explored whether the reviewer confidence could be included in the model, but they found it did not help with performance on hold-out data. Scatter plot of Quality Score vs Calibrated Quality Score ### Probability of Acceptance To predict the probability of acceptance of any given paper, we sample from the multivariate normal that gives the posterior over $\mathbf{s}$. Within each sample the papers are ranked according to the values of $\mathbf{s}$, and the top-scoring papers are considered to be accepts. We draw 1000 such samples, and the probability of acceptance for each paper is the fraction of those samples in which it received a positive outcome.
# Analyzing the Odds on a Sports Betting Website ## The Basics of Sports Betting Sports betting has become increasingly popular among sports enthusiasts as a way to enhance their enjoyment of the game and have a chance to win some extra cash. With the advent of online sports betting websites, placing bets has never been more convenient or accessible. However, before diving into the world of sports betting, it’s important to understand how odds work and how to analyze them effectively. ## Understanding Odds Formats When you visit a sports betting website, you will often come across different odds formats. The three main formats are American odds, decimal odds, and fractional odds. American odds are presented as either positive or negative numbers, with positive numbers indicating the amount you would profit on a \$100 bet, and negative numbers indicating how much you need to bet to win \$100. Decimal odds represent the total amount you would receive per unit staked if you won, including your original stake, and fractional odds show the potential profit you would make relative to your stake. It is important to familiarize yourself with these different formats to ensure you fully understand the odds presented to you on a sports betting website. Being able to quickly convert between formats will enable you to make more informed betting decisions. ## Evaluating the Implied Probability Every set of odds represents an implied probability of an event occurring. By calculating the implied probability, you can determine if the odds provided by a sports betting website are favorable or not.
To calculate the implied probability, you can use the following formulas: • American odds (positive): implied probability = 100 / (odds + 100) • American odds (negative): implied probability = |odds| / (|odds| + 100) • Decimal odds: implied probability = 1 / decimal odds • Fractional odds: implied probability = denominator / (numerator + denominator) For example, if a basketball team is given American odds of +200, the implied probability of them winning would be 100 / (200 + 100) = 0.3333, or 33.33%. By comparing this implied probability with your own assessment of the team’s chances of winning, you can determine if there is value in placing a bet. ## Considering the Bookmaker’s Margin It’s important to remember that sports betting websites are in the business of making money. They do this by offering odds that incorporate a margin, also known as vigorish or “vig”. The margin is the built-in profit for the bookmaker that ensures they make money regardless of the outcome of a particular event. Understanding the concept of the bookmaker’s margin is crucial when analyzing the odds on a sports betting website. By comparing the implied probability you have calculated with your own assessment of the event’s likelihood, you can determine if the odds offered by the bookmaker have sufficient value. If the implied probability is significantly lower than your own assessment, it may be a good opportunity to place a bet, as the odds are offering value even after the bookmaker’s margin. ## The Importance of Line Movement When analyzing the odds on a sports betting website, it’s important to pay attention to line movement. Line movement refers to how the odds change over time leading up to an event. Significant line movement can indicate the direction in which bettors are leaning and can provide valuable insights into where the smart money is going. For example, if the odds for a particular team have significantly shifted from +150 to +120, it could indicate that a large amount of money has been placed on that team.
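The implied-probability formulas above are simple enough to sketch in C (probabilities expressed as fractions; multiply by 100 for a percentage):

```c
/* Implied probability from American odds: positive odds are the profit
 * on a $100 stake; negative odds are the stake needed to win $100. */
double implied_american(double odds) {
    if (odds > 0) return 100.0 / (odds + 100.0);
    return -odds / (-odds + 100.0);
}

/* Decimal odds are the total return per unit staked. */
double implied_decimal(double d) { return 1.0 / d; }

/* Fractional odds num/den give profit relative to stake. */
double implied_fractional(double num, double den) { return den / (num + den); }
```

For the +200 example, `implied_american(200.0)` gives 100/300 ≈ 0.333, matching the 33.33% above. Summing the implied probabilities over all outcomes of an event gives a total above 1; that excess is the bookmaker's margin discussed below.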
This could suggest that experienced bettors or insiders believe the team has a higher chance of winning than initially anticipated. By monitoring line movement, you can potentially identify betting opportunities or avoid bets that are no longer favorable.

## The Role of Statistical Analysis

Lastly, statistical analysis plays a crucial role in successfully analyzing the odds on a sports betting website. By studying historical data, team performance, player statistics, and other relevant factors, you can make more informed predictions about the outcome of an event.

There are countless statistical models and strategies that can be utilized to analyze odds effectively. Some popular approaches include regression analysis, machine learning algorithms, and trend analysis. By incorporating statistical analysis into your betting strategy, you can gain a competitive edge and improve your chances of making profitable bets.

## Conclusion

Analyzing the odds on a sports betting website requires a combination of knowledge, skill, and intuition. By understanding the different odds formats, calculating the implied probability, considering the bookmaker’s margin, monitoring line movement, and utilizing statistical analysis, you can make more informed betting decisions and increase your chances of success. Remember to approach sports betting responsibly and never bet more than you can afford to lose. With the right approach, sports betting can be an exciting and potentially rewarding hobby.
In this tutorial blog we will see how to write a C program for the AVL tree data structure.

## What is an AVL tree?

An AVL tree is a self-balancing binary search tree such that:

• The subtrees of every node differ in height by at most one; that is, the difference between the heights of the left and right subtrees of any node of the binary search tree lies in the range -1, 0, or +1.
• Every subtree is itself an AVL tree.
• The difference between the heights of the left and right subtrees is known as the balance factor. In an AVL tree the balance factor of every node must be 1, 0, or -1.

For example, in the figure below, the BST on the left side is an AVL tree and the BST on the right side is not.

## How to write a C program for the AVL tree data structure

• To implement an AVL tree and some operations on it, we first need to decide on a data structure.
• Then we need operations like the ones below.

## Implementing the insert operation

During the implementation of the insert operation on an AVL tree, we need to keep the following details in mind:

1. Insertion proceeds as in an ordinary binary search tree.
2. After each insertion we need to find the balance factor of the ancestor nodes.
3. If the balance factor is not in the range -1, 0, or 1, we need to rebalance the AVL tree. The balance factor is calculated as (height of left subtree) – (height of right subtree).
4. Let us see how to balance. For example, let us insert 1, 3, and 5.

From the tree we can see that the balance factor of the node with value 1 is disturbed and violates the AVL tree property. From the diagram we can also see that the right side is heavier than the left side. To balance the tree we need a left rotation.

• Again, let us insert 5, 4, and 3 into a new AVL tree.

From the tree we can see that the balance factor of the node with value 5 is disturbed and violates the AVL tree property. From the diagram we can also see that the left side is heavier than the right side. To balance the tree we need a right rotation.
The above two examples are the cases where a single right or left rotation is enough to balance the AVL tree. Let us now see the cases where we need a double rotation.

• Left-Right case: (insert 5, 3, and 4)
• Right-Left case: (insert 3, 5, and 4)

1) Left-Left case
2) Right-Right case

### Sample code for the AVL tree data structure

Ref: http://home.iitj.ac.in/~ramana/avl-trees.pdf
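The sample-code section above has no listing in this copy of the post. As a stand-in, here is a compact sketch of insertion with rebalancing covering all four rotation cases. The tutorial targets C, but Python is used here only to keep the rotation logic short and runnable; the names (`Node`, `insert`, `rotate_left`, `rotate_right`) are my own, not from the original post.

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.height = 1  # height of the subtree rooted at this node

def height(node):
    return node.height if node else 0

def balance_factor(node):
    # (height of left subtree) - (height of right subtree), as in the text
    return height(node.left) - height(node.right)

def update_height(node):
    node.height = 1 + max(height(node.left), height(node.right))

def rotate_right(y):
    x = y.left
    y.left = x.right
    x.right = y
    update_height(y)
    update_height(x)
    return x  # new subtree root

def rotate_left(x):
    y = x.right
    x.right = y.left
    y.left = x
    update_height(x)
    update_height(y)
    return y  # new subtree root

def insert(node, key):
    # 1. Plain BST insertion
    if node is None:
        return Node(key)
    if key < node.key:
        node.left = insert(node.left, key)
    else:
        node.right = insert(node.right, key)
    # 2. Update height and check the balance factor on the way back up
    update_height(node)
    bf = balance_factor(node)
    # 3. Rebalance: four cases
    if bf > 1 and key < node.left.key:    # Left-Left: single right rotation
        return rotate_right(node)
    if bf < -1 and key > node.right.key:  # Right-Right: single left rotation
        return rotate_left(node)
    if bf > 1:                            # Left-Right: double rotation
        node.left = rotate_left(node.left)
        return rotate_right(node)
    if bf < -1:                           # Right-Left: double rotation
        node.right = rotate_right(node.right)
        return rotate_left(node)
    return node
```

Inserting 1, 3, 5 triggers the left rotation from the first example (3 becomes the root), and inserting 5, 4, 3 triggers the right rotation from the second (4 becomes the root).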
(Greetings from The On-Line Encyclopedia of Integer Sequences!)

A080712: a(0) = 4; for n>0, a(n) is taken to be the smallest positive integer greater than a(n-1) which is consistent with the condition "n is a member of the sequence if and only if a(n) is a multiple of 3".

4, 5, 7, 8, 9, 12, 13, 15, 18, 21, 22, 23, 24, 27, 28, 30, 31, 32, 33, 34, 35, 36, 39, 42, 45, 46, 47, 48, 51, 52, 54, 57, 60, 63, 66, 69, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 84, 87, 90, 91, 92, 93, 96, 97, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111

OFFSET: 0,1

LINKS: B. Cloitre, N. J. A. Sloane and M. J. Vandermast, Numerical analogues of Aronson's sequence, J. Integer Seqs., Vol. 6 (2003), #03.2.2. Also available as math.NT/0305308.

FORMULA: a(a(n)) = 3*(n+3).

PROG: (PARI) {a=4; m=[4]; for(n=1, 67, print1(a, ", "); a=a+1; if(m[1]==n, while(a%3>0, a++); m=if(length(m)==1, [], vecextract(m, "2..")), if(a%3==0, a++)); m=concat(m, a))}

CROSSREFS: Cf. A079000, A003605, A079253, A080711, A080710.

KEYWORD: nonn,easy

AUTHOR: N. J. A. Sloane, Mar 05 2003

EXTENSIONS: More terms and PARI code from Klaus Brockhaus, Mar 06 2003

STATUS: approved
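The greedy rule can be ported from the PARI program in the entry: keep a queue of terms already produced, and when n equals the head of the queue, n is "in the sequence", so a(n) must be a multiple of 3; otherwise it must not be. The function name below is my own.

```python
def a080712(count):
    """Return the first `count` terms of A080712 (offset 0)."""
    terms = [4]          # a(0) = 4
    a = 4
    queue = [4]          # pending terms: n is in the sequence iff it reaches the head
    while len(terms) < count:
        n = len(terms)   # index of the term about to be produced
        a += 1
        if queue and queue[0] == n:
            while a % 3 != 0:   # n is in the sequence: a(n) must be 0 mod 3
                a += 1
            queue.pop(0)
        elif a % 3 == 0:        # n is not in the sequence: a(n) must not be
            a += 1
        queue.append(a)
        terms.append(a)
    return terms
```

The generated terms match the data above, and the FORMULA line a(a(n)) = 3*(n+3) can be checked directly for small n.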
# 233618 (number)

233,618 (two hundred thirty-three thousand six hundred eighteen) is an even six-digit composite number following 233617 and preceding 233619. In scientific notation, it is written as 2.33618 × 10⁵. The sum of its digits is 23. It has a total of 5 prime factors and 32 positive divisors. There are 86,400 positive integers (up to 233618) that are relatively prime to 233618.

## Basic properties

• Is prime? No
• Number parity: Even
• Number length: 6
• Sum of digits: 23
• Digital root: 5

## Name

• Short name: 233 thousand 618
• Full name: two hundred thirty-three thousand six hundred eighteen

## Notation

• Scientific notation: 2.33618 × 10⁵
• Engineering notation: 233.618 × 10³

## Prime factorization of 233618

Prime factorization: 2 × 7 × 11 × 37 × 41 (composite number)

| Function | Value | Meaning |
|---|---|---|
| ω(n) | 5 | Total number of distinct prime factors |
| Ω(n) | 5 | Total number of prime factors |
| rad(n) | 233618 | Product of the distinct prime numbers |
| λ(n) | -1 | Parity of Ω(n): λ(n) = (-1)^Ω(n) |
| μ(n) | -1 | 1 if n is square-free with an even number of prime factors, -1 if odd, 0 if n has a squared prime factor |
| Λ(n) | 0 | log(p) if n is a power p^k of a prime p (k ≥ 1), else 0 |

The prime factorization of 233,618 is 2 × 7 × 11 × 37 × 41. Since it has a total of 5 prime factors, 233,618 is a composite number.
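The factorization quoted above is easy to confirm with a small trial-division sketch (no external libraries; the function name is my own):

```python
def factorize(n):
    """Return the prime factorization of n as a {prime: exponent} dict."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:            # whatever remains is a prime factor
        factors[n] = factors.get(n, 0) + 1
    return factors
```

`factorize(233618)` returns `{2: 1, 7: 1, 11: 1, 37: 1, 41: 1}`, matching 2 × 7 × 11 × 37 × 41; since every exponent is 1, the number is square-free, which also explains μ(n) = λ(n) = -1 (an odd number, 5, of prime factors).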
## Divisors of 233618

233,618 has 32 divisors: 16 even and 16 odd.

| Function | Value | Meaning |
|---|---|---|
| τ(n) | 32 | Total number of positive divisors of n |
| σ(n) | 459648 | Sum of all positive divisors of n |
| s(n) | 226030 | Sum of the proper positive divisors of n (aliquot sum) |
| A(n) | 14364 | Arithmetic mean of the divisors: σ(n) / τ(n) |
| G(n) | 483.34 | Geometric mean of the divisors: the τ(n)-th root of their product |
| H(n) | 16.2641 | Harmonic mean of the divisors: τ(n) divided by the sum of the reciprocals of the divisors |

The number 233,618 can be divided by 32 positive divisors (out of which 16 are even and 16 are odd). The sum of these divisors (counting 233,618) is 459,648; the average is 14,364.

## Other arithmetic functions (n = 233618)

| Function | Value | Meaning |
|---|---|---|
| φ(n) | 86400 | Euler totient: total number of positive integers not greater than n that are coprime to n |
| λ(n) | 360 | Carmichael lambda: smallest positive number such that a^λ(n) ≡ 1 (mod n) for all a coprime to n |
| π(n) | ≈ 20662 | Total number of primes less than or equal to n |
| r₂(n) | 0 | Number of ways n can be represented as the sum of 2 squares |

There are 86,400 positive integers (less than 233,618) that are coprime with 233,618. And there are approximately 20,662 prime numbers less than or equal to 233,618.

## Divisibility of 233618

| m | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|
| n mod m | 0 | 2 | 2 | 3 | 2 | 0 | 2 | 5 |

The number 233,618 is divisible by 2 and 7.
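τ(n), σ(n), and φ(n) follow directly from the prime factorization via the standard multiplicative formulas. A small sketch (the function name is my own), fed the factorization 2 × 7 × 11 × 37 × 41 as a literal dict:

```python
def divisor_stats(factors):
    """Given {prime: exponent}, return (tau, sigma, phi).

    tau   = prod(e + 1)                    -- number of divisors
    sigma = prod((p^(e+1) - 1) / (p - 1))  -- sum of divisors
    phi   = prod((p - 1) * p^(e - 1))      -- Euler totient
    """
    tau = sigma = phi = 1
    for p, e in factors.items():
        tau *= e + 1
        sigma *= (p ** (e + 1) - 1) // (p - 1)
        phi *= (p - 1) * p ** (e - 1)
    return tau, sigma, phi
```

For 233618 this reproduces τ = 32, σ = 459,648, and φ = 86,400 from the tables above, and the aliquot sum follows as s(n) = σ(n) - n = 226,030.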
233,618 is:

• Arithmetic
• Deficient
• Polite
• Square free

## Base conversion (233618)

| Base | System | Value |
|---|---|---|
| 2 | Binary | 111001000010010010 |
| 3 | Ternary | 102212110112 |
| 4 | Quaternary | 321002102 |
| 5 | Quinary | 24433433 |
| 6 | Senary | 5001322 |
| 8 | Octal | 710222 |
| 10 | Decimal | 233618 |
| 12 | Duodecimal | b3242 |
| 20 | Vigesimal | 1940i |
| 36 | Base36 | 509e |

## Basic calculations (n = 233618)

### Multiplication

n×2 = 467236, n×3 = 700854, n×4 = 934472, n×5 = 1168090

### Division

n÷2 = 116809, n÷3 ≈ 77872.7, n÷4 = 58404.5, n÷5 = 46723.6

### Exponentiation

n² = 54577369924, n³ = 12750256006905032, n⁴ = 2978689307821139765776, n⁵ = 695875438714559029801057568

### Nth root

√n ≈ 483.34, ∛n ≈ 61.5889, ⁴√n ≈ 21.985, ⁵√n ≈ 11.8495

## 233618 as geometric shapes

### Circle (radius = n)

Diameter 467236, circumference ≈ 1.46787 × 10⁶, area ≈ 1.7146 × 10¹¹

### Sphere (radius = n)

Volume ≈ 5.34081 × 10¹⁶, surface area ≈ 6.85839 × 10¹¹, circumference ≈ 1.46787 × 10⁶

### Square (side = n)

Perimeter 934472, area ≈ 5.45774 × 10¹⁰, diagonal ≈ 330386

### Cube (side = n)

Surface area ≈ 3.27464 × 10¹¹, volume ≈ 1.27503 × 10¹⁶, space diagonal ≈ 404638

### Equilateral triangle (side = n)

Perimeter 700854, area ≈ 2.36327 × 10¹⁰, height ≈ 202319

### Triangular pyramid (side = n)

Surface area ≈ 9.45308 × 10¹⁰, volume ≈ 1.50263 × 10¹⁵, height ≈ 190748

## Cryptographic hash functions

md5: 7dfd398c6bf19737b90fdbd35170700c

7dd325175971dc46427bb6ee7f315af133ff4f18

5438510e654cd42fb4501b18450618a579f0b0f3b27bd8d4a73e75d2600f96ba

ca00fec4807097e28523a9b13ce175fbd18d4510f07724f3fd5e73c3ffdaa81c367557263a5f18571e8608af0586b98d7524155ac3aa2fbde7f17603c8f1b07d

5b7ebaa63ab17021b2a87a987e37c7e600643005
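The base-conversion table can be reproduced with ordinary positional conversion (repeated `divmod` by the base). A sketch, with names of my own choosing:

```python
DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz"

def to_base(n, base):
    """Render a non-negative integer in the given base (2..36), lowercase."""
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, base)   # peel off the least-significant digit
        out.append(DIGITS[r])
    return "".join(reversed(out))
```

For example, `to_base(233618, 12)` gives `"b3242"` and `to_base(233618, 36)` gives `"509e"`, matching the duodecimal and Base36 rows of the table.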
# Solving a Bubble Puzzle: Understanding Surface Tension Force

• Vibhor

In summary, the conversation discusses the surface tension force on a bubble at the point where it is about to rise. The direction of the surface tension force is shown to be tangential to the bubble's surface and perpendicular to the line of contact with the base of the vessel. The conversation also addresses the issue of whether this force is an internal force or not, and concludes that it is both an internal force on the wall of the bubble and a force on the vessel base. The analysis of the problem leads to a relationship between the pressure of the gas and the surrounding liquid, and the question of where the bubble is about to rise is raised.

Vibhor

## The Attempt at a Solution

Pressure inside bubble = 2T/R

Buoyant force on bubble = ##\frac{4}{3} \pi R^3 \rho_w g##

But I do not understand how surface tension is exerting force on the bubble. Also, I do not understand the direction of the surface tension force.

Thanks

[Attachment: bubble.PNG]

At the contact with the base, surface tension acts tangentially to the bubble surface and perpendicularly to the line of contact (i.e. to the tangent to the circle of contact). Consider vertical force components.

haruspex said: At the contact with the base, surface tension acts tangentially to the bubble surface and perpendicularly to the line of contact (i.e. to the tangent to the circle of contact). Consider vertical force components.

Sorry, I did not understand.

Vibhor said: Sorry, I did not understand.

The surface tension should result in a force on the base of the bubble, pulling it down onto the base of the vessel. On any short section of the circle of contact, the tension should act tangentially to the bubble, not straight down. We want the integral of the force around the circle.
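For a feel of the magnitudes, the two formulas in the attempt can be evaluated numerically. The values below are assumptions of mine (water at room temperature, a 1 mm bubble), not from the thread, and — as discussed later in the thread — the full-sphere buoyancy overstates the force, since no water lies under the contact circle.

```python
import math

T = 0.072        # surface tension of water, N/m (assumed)
R = 1e-3         # bubble radius, m (assumed)
rho_w = 1000.0   # water density, kg/m^3
g = 9.81         # m/s^2

# Excess (Laplace) pressure inside a gas bubble in liquid: one interface.
excess_pressure = 2 * T / R                          # Pa

# Buoyant force on a full sphere of displaced water.
buoyant_force = (4 / 3) * math.pi * R**3 * rho_w * g  # N
```

With these numbers the excess pressure is 144 Pa and the full-sphere buoyant force is on the order of 4 × 10⁻⁵ N.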
By symmetry, only the vertical component of the force on any short section will be left uncancelled, so we can take that vertical component and simply multiply it by the circumference of the circle. However, I am not able to arrive at any of the offered answers. I have started a thread in the Advisors' forum to see if anyone else has ideas.

Here's what I posted on the Advisors' forum:

-----
The buoyant force of the water on the bubble will not be ##\frac 43\pi R^3\rho g## because there is no water under the (almost) sphere. It will fall short by ##P_w\pi r^2##, where ##P_w## is the pressure in the water at the base of the vessel. Correspondingly, the gas pressure should exert a net upward force on the bubble of ##P_g\pi r^2##, where ##P_g## is the pressure of the gas. The two pressures should be related by ##P_g=P_w+\frac {2T}R##, but that's taking the pressure in the water to be uniform around the bubble, which cannot be true for Archimedes' principle to work. So I correct this to take the water pressure around the bubble as averaging ##P_w-R\rho g##. I.e. ##P_g=P_w-R\rho g+\frac {2T}R##

Finally, I take the surface tension along the contact with the base of the vessel as the force holding the bubble down. Allowing for the presumed angle of contact (taking the bubble to be a sphere with a cap removed), the net force from this is ##2\pi rT\frac rR##.

Pulling all this together, I find T disappears leaving ##r=R\sqrt{\frac 23}##.

Throwing all that away and crudely writing ##\frac 43\pi R^3\rho g=2\pi rT\frac rR## I do not get any of the offered answers. I would get one of them if I used ##4\pi rT\frac rR##, as would be appropriate for a bubble out of water (so double shelled).
------

I got a response from TSny, a contributor for whom I have the utmost respect. He agrees with my analysis.

I agree with your analysis also.

Please see the attached image.
Do you agree that the green vector correctly shows the direction of the surface tension acting on the bubble along the circumference of the circular part of the bubble in contact with the vessel?

haruspex said: The surface tension should result in a force on the base of the bubble, pulling it down onto the base of the vessel.

The surface tension force is due to the flat circular part of the bubble acting on the large spherical part. Isn't this an internal force as far as the complete sphere is concerned? How can we consider this force while doing the force balance on the sphere? In other words, how can an internal force pull down the bubble?

[Attachment: angle.PNG]

Vibhor said: Please see the attached image. Do you agree that the green vector correctly shows the direction of the surface tension acting on the bubble along the circumference of the circular part of the bubble in contact with the vessel? The surface tension force is due to the flat circular part of the bubble acting on the large spherical part. Isn't this an internal force as far as the complete sphere is concerned? How can we consider this force while doing the force balance on the sphere? In other words, how can an internal force pull down the bubble?

Yes, you have the vector correctly. There is no flat circular part to the bubble wall. On that area, the gas is in direct contact with the dry base of the vessel. I agree you have to be consistent as to what is considered the "free body". You can take it to be the wall of the bubble (i.e. the layer of liquid in contact with gas) or that plus the gas. You should get the same result either way. I chose wall+gas. The surface tension does two things. Over the general spherical curve of the bubble wall, it is an internal force of the wall+gas but leads to the pressure difference between gas and surrounding liquid. This pressure difference does act on the wall+gas over the circular dry area.
It also pulls directly on the vessel base around the perimeter of the circle.

Chestermiller said: I agree with your analysis also.

Thanks Chet. I just noticed something, though. The question specifies the point at which the bubble is about to rise. Nothing in my analysis requires that, it is just a general relationship that must obtain for the bubble to be stable. I would think that the borderline instability condition should lead to an expression that fixes R, not just relates it to r.

haruspex said: There is no flat circular part to the bubble wall. On that area, the gas is in direct contact with the dry base of the vessel.

Ok, I was getting it wrong.

haruspex said: I agree you have to be consistent as to what is considered the "free body". You can take it to be the wall of the bubble (i.e. the layer of liquid in contact with gas) or that plus the gas. You should get the same result either way. I chose wall+gas.

I think it should be only "gas"; "wall of the bubble" doesn't look meaningful. Forgive me if I do not make sense. I still do not understand properly how this surface tension force is acting as an "external force on wall + gas"? It makes some sense if I consider only the gas as a system.

haruspex said: This pressure difference does act on the wall+gas over the circular dry area.

I am so sorry, but this surface tension has baffled me. I am also feeling quite confused why this force should be pulling the bubble down instead of pulling it up (i.e. at 180° opposite to the green vector shown in the image in post #7)? Why should it be pulling up the vessel?

Vibhor said: I am so sorry, but this surface tension has baffled me. I am also feeling quite confused why this force should be pulling the bubble down instead of pulling it up (i.e. at 180° opposite to the green vector shown in the image in post #7)? Why should it be pulling up the vessel?
You do need to think of the bubble wall as a thin skin under tension. Let's start with something simpler: a light rope over a pulley with a weight W on each end. The external forces on the rope are the weights and the normal force from the pulley. The tension in the rope is an internal force, but it leads to the force the rope exerts on the pulley and to the force it exerts on the weights. These are analogous to the tension in the bubble wall leading to the force it exerts on the gas inside the bubble, and to the force exerted on the vessel base around the circle perimeter, respectively. Surface tension is much like a two-dimensional version of tension in an elastic string, except that it is constant instead of being proportional to extension, so the potential energy is proportional to the area instead of to the square of the area.

Am I the only one who finds it extremely counterintuitive that the surface tension doesn't play a role in the final result (according to haruspex's analysis)?

Judging by the level of learning this exercise requires, I think what the book expects one to do is what haruspex says in the last two lines of post #5 ("Throwing all that away and crudely writing..."). It isn't 100% correct, as haruspex's detailed analysis probably is, but many books expect you to make some simplifying assumptions in order to deal with the problem more easily.

Delta² said: Am I the only one who finds it extremely counterintuitive that the surface tension doesn't play a role in the final result (according to haruspex's analysis)?

+1

Delta² said: Judging by the level of learning this exercise requires, I think what the book expects one to do is what haruspex says in the last two lines of post #5 ("Throwing all that away and crudely writing...").

Yes. This is what is to be done. Strangely, in this problem, none of the options are correct.

Vibhor said: Yes. This is what is to be done. Strangely, in this problem, none of the options are correct.
Maybe it is just a typo; a 2 may be missing from the numerator in choice 3).

Maybe.

Chestermiller said: I agree with your analysis also.

Chet, acknowledging haruspex's nice analysis and efforts to make me understand things, I am still unsure about a couple of things. The problem is solved, but could you help me understand these two conceptual issues I am having?

1) How is the force due to surface tension acting as an external force on the bubble as far as the force balance is concerned (post #7)?
2) How is the force due to surface tension pulling down the bubble (post #11)?

Thanks

haruspex said: Thanks Chet. I just noticed something, though. The question specifies the point at which the bubble is about to rise. Nothing in my analysis requires that, it is just a general relationship that must obtain for the bubble to be stable. I would think that the borderline instability condition should lead to an expression that fixes R, not just relates it to r.

I think it must be the combination of the two. Anyway, in my judgment, this is not a very well-defined problem. If the water pressure is changing with depth and the pressure within the bubble is virtually constant, the bubble shape can't be perfectly spherical. I also don't think that the shape at the base is going to be close to spherical, with a slope the same as if you just cut off a sphere at that cross section.

Vibhor said: Chet, acknowledging haruspex's nice analysis and efforts to make me understand things, I am still unsure about a couple of things. The problem is solved, but could you help me understand these two conceptual issues I am having? 1) How is the force due to surface tension acting as an external force on the bubble as far as the force balance is concerned (post #7)? 2) How is the force due to surface tension pulling down the bubble (post #11)? Thanks

Surface tension acts as if there is an ideal thin stretched membrane between the two materials at the interface.
That model should help to answer both these questions.

Delta² said: It isn't 100% correct

If my analysis is correct, then it's a lot worse than merely not 100% correct. The approximations made are so inappropriate as to lead to a completely invalid result. My own analysis still makes approximations that are too crude. It shows that the assumption that r<<R leads to a contradiction. So it is wrong to ignore the fact that the displaced volume of water is not a complete sphere. As Chet mentions, since the pressure is uniform inside the bubble but not outside, it will not even be spherical. Finally, as I mentioned, all of this analysis is towards finding the relationship between r and R which applies for any stable bubble. The question specifies that the bubble is on the verge of detaching. That extra condition will almost certainly lead to a way to fix both r and R, though one might need to know extra physical details, such as the kinetic properties of the gas.

Chestermiller said: I think it must be the combination of the two. Anyway, in my judgment, this is not a very well-defined problem. If the water pressure is changing with depth and the pressure within the bubble is virtually constant, the bubble shape can't be perfectly spherical. I also don't think that the shape at the base is going to be close to spherical, with a slope the same as if you just cut off a sphere at that cross section.

There's also the issue of "angle of contact". Water likes to wet surfaces, i.e. tends to have a very low angle of contact.

## 1. What is surface tension?

Surface tension is the force that causes the molecules at the surface of a liquid to cling together, creating a sort of "skin" that allows the liquid to maintain its shape and resist external forces.

## 2. How does surface tension affect bubble puzzles?

In bubble puzzles, surface tension is the force that holds the bubbles together and allows them to form stable structures.
Without surface tension, the bubbles would pop and the puzzle would not hold its shape.

## 3. What factors influence surface tension force?

The surface tension force is influenced by the type of liquid, temperature, and the presence of other substances in the liquid. For example, adding soap to water can decrease surface tension, making it easier for bubbles to form and pop.

## 4. How can we solve a bubble puzzle by understanding surface tension force?

By understanding the role of surface tension in bubble puzzles, we can manipulate the bubbles to create different structures and patterns. We can also use our knowledge of surface tension to predict how the bubbles will behave and adjust our strategies accordingly.

## 5. Are there any real-world applications of understanding surface tension force?

Yes, surface tension plays a crucial role in many natural phenomena such as the formation of raindrops, the floating of insects on water, and the shape of soap bubbles. It also has practical applications in industries such as pharmaceuticals, where surface tension is used to measure the purity of liquids.
# Introductory Econometrics Examples

## Introduction

This vignette contains examples from every chapter of Introductory Econometrics: A Modern Approach, 6e by Jeffrey M. Wooldridge. Each example illustrates how to load data, build econometric models, and compute estimates with R. In addition, the Appendix cites good sources on using R for econometrics.

Now, install and load the wooldridge package and let's get started!

install.packages("wooldridge")
library(wooldridge)

## Chapter 2: The Simple Regression Model

### Example 2.10: A Log Wage Equation

Load the wage1 data and check out the documentation.

data("wage1")
?wage1

The documentation indicates these are data from the 1976 Current Population Survey, collected by Henry Farber when he and Wooldridge were colleagues at MIT in 1988.

$$wage$$: average hourly earnings

$$educ$$: years of education

First, make a scatter-plot of the two variables and look for possible patterns in the relationship between them. It appears that, on average, more years of education leads to higher wages.

The example in the text investigates what the percentage change between wages and education might be, so we must use $$log(wage)$$. Build a linear model to estimate the relationship between the log of wage (lwage) and education (educ).

$\widehat{log(wage)} = \beta_0 + \beta_1educ$

log_wage_model <- lm(lwage ~ educ, data = wage1)

Print the summary of the results.

summary(log_wage_model)

| | Dependent variable: lwage |
|---|---|
| educ | 0.08274*** (0.00757) |
| Constant | 0.58377*** (0.09734) |
| Observations | 526 |
| R2 | 0.18581 |
| Adjusted R2 | 0.18425 |
| Residual Std. Error | 0.48008 (df = 524) |
| F Statistic | 119.58160*** (df = 1; 524) |

Note: *p<0.1; **p<0.05; ***p<0.01

Plot $$log(wage)$$ vs educ, adding a line representing the least squares fit.
## Chapter 3: Multiple Regression Analysis: Estimation

### Example 3.2: Hourly Wage Equation

Check the documentation for variable information.

?wage1

$$lwage$$: log of the average hourly earnings

$$educ$$: years of education

$$exper$$: years of potential experience

$$tenure$$: years with current employer

Plot the variables against lwage and compare their distributions and the slope ($$\beta$$) of the simple regression lines.

Estimate the model regressing educ, exper, and tenure against log(wage).

$\widehat{log(wage)} = \beta_0 + \beta_1educ + \beta_2exper + \beta_3tenure$

hourly_wage_model <- lm(lwage ~ educ + exper + tenure, data = wage1)

Print the estimated model coefficients:

coefficients(hourly_wage_model)

| Coefficient | Value |
|---|---|
| (Intercept) | 0.2844 |
| educ | 0.0920 |
| exper | 0.0041 |
| tenure | 0.0221 |

Plot the coefficients, representing the percentage impact of each variable on $$log(wage)$$, for a quick comparison.

## Chapter 4: Multiple Regression Analysis: Inference

### Example 4.1: Hourly Wage Equation

Using the same model estimated in example 3.2, examine and compare the standard errors associated with each coefficient. As in the textbook, these are contained in parentheses next to each associated coefficient.

summary(hourly_wage_model)

| | Dependent variable: lwage |
|---|---|
| educ | 0.09203*** (0.00733) |
| exper | 0.00412** (0.00172) |
| tenure | 0.02207*** (0.00309) |
| Constant | 0.28436*** (0.10419) |
| Observations | 526 |
| R2 | 0.31601 |
| Adjusted R2 | 0.31208 |
| Residual Std. Error | 0.44086 (df = 522) |
| F Statistic | 80.39092*** (df = 3; 522) |

Note: *p<0.1; **p<0.05; ***p<0.01

For the years of experience variable, exper, use the coefficient and standard error to compute the $$t$$ statistic:

$t_{exper} = \frac{0.004121}{0.001723} = 2.391$

Fortunately, R includes $$t$$ statistics in the summary of model diagnostics.

summary(hourly_wage_model)$coefficients

| | Estimate | Std. Error | t value | Pr(>\|t\|) |
|---|---|---|---|---|
| (Intercept) | 0.28436 | 0.10419 | 2.72923 | 0.00656 |
| educ | 0.09203 | 0.00733 | 12.55525 | 0.00000 |
| exper | 0.00412 | 0.00172 | 2.39144 | 0.01714 |
| tenure | 0.02207 | 0.00309 | 7.13307 | 0.00000 |

Plot the $$t$$ statistics for a visual comparison:

### Example 4.7: Effect of Job Training on Firm Scrap Rates

Load the jtrain data set.

data("jtrain")
?jtrain

From H. Holzer, R. Block, M. Cheatham, and J. Knott (1993), Are Training Subsidies Effective? The Michigan Experience, Industrial and Labor Relations Review 46, 625-636. The authors kindly provided the data.

$$year$$: 1987, 1988, or 1989

$$union$$: =1 if unionized

$$lscrap$$: Log(scrap rate per 100 items)

$$hrsemp$$: (total hours training) / (total employees trained)

$$lsales$$: Log(annual sales, $)

$$lemploy$$: Log(number of employees at plant)

First, use the subset function and its argument of the same name to return observations which occurred in 1987 and are not union. At the same time, use the select argument to return only the variables of interest for this problem.

jtrain_subset <- subset(jtrain, subset = (year == 1987 & union == 0), select = c(year, union, lscrap, hrsemp, lsales, lemploy))

Next, test for missing values. One can "eyeball" these with R Studio's View function, but a more precise approach combines the sum and is.na functions to return the total number of observations equal to NA.

sum(is.na(jtrain_subset))
## 156

While R's lm function will automatically remove missing NA values, eliminating these manually will produce more clearly proportioned graphs for exploratory analysis. Call the na.omit function to remove all missing values and assign the new data.frame object the name jtrain_clean.

jtrain_clean <- na.omit(jtrain_subset)

Use jtrain_clean to plot the variables of interest against lscrap. Visually observe the respective distributions for each variable, and compare the slope ($$\beta$$) of the simple regression lines.
Now create the linear model regressing hrsemp (total hours training / total employees trained), lsales (log of annual sales), and lemploy (log of the number of employees) against lscrap (the log of the scrap rate).

$lscrap = \alpha + \beta_1 hrsemp + \beta_2 lsales + \beta_3 lemploy$

linear_model <- lm(lscrap ~ hrsemp + lsales + lemploy, data = jtrain_clean)

Finally, print the complete summary diagnostics of the model.

summary(linear_model)

| | Dependent variable: lscrap |
|---|---|
| hrsemp | -0.02927 (0.02280) |
| lsales | -0.96203** (0.45252) |
| lemploy | 0.76147* (0.40743) |
| Constant | 12.45837** (5.68677) |
| Observations | 29 |
| R2 | 0.26243 |
| Adjusted R2 | 0.17392 |
| Residual Std. Error | 1.37604 (df = 25) |
| F Statistic | 2.96504* (df = 3; 25) |

Note: *p<0.1; **p<0.05; ***p<0.01

## Chapter 5: Multiple Regression Analysis: OLS Asymptotics

### Example 5.1: Housing Prices and Distance From an Incinerator

Load the hprice3 data set.

data("hprice3")

$$lprice$$: Log(selling price)

$$ldist$$: Log(distance from house to incinerator, feet)

$$larea$$: Log(square footage of house)

Graph the prices of housing against distance from an incinerator.

Next, model $$log(price)$$ against $$log(dist)$$ to estimate the percentage relationship between the two.

$price = \alpha + \beta_1 dist$

price_dist_model <- lm(lprice ~ ldist, data = hprice3)

Create another model that controls for "quality" variables, such as square footage area per house.

$price = \alpha + \beta_1 dist + \beta_2 area$

price_area_model <- lm(lprice ~ ldist + larea, data = hprice3)

Compare the coefficients of both models. Notice that adding area improves the quality of the model, but also reduces the coefficient size of dist.

summary(price_dist_model)
summary(price_area_model)

| | (1) lprice | (2) lprice |
|---|---|---|
| ldist | 0.31722*** (0.04811) | 0.19623*** (0.03816) |
| larea | | 0.78368*** (0.05358) |
| Constant | 8.25750*** (0.47383) | 3.49394*** (0.49065) |
| Observations | 321 | 321 |
| R2 | 0.11994 | 0.47385 |
| Adjusted R2 | 0.11718 | 0.47054 |
| Residual Std. Error | 0.41170 (df = 319) | 0.31883 (df = 318) |
| F Statistic | 43.47673*** (df = 1; 319) | 143.19470*** (df = 2; 318) |

Note: *p<0.1; **p<0.05; ***p<0.01

Graphing illustrates the larger coefficient for area.

## Chapter 6: Multiple Regression: Further Issues

### Example 6.1: Effects of Pollution on Housing Prices, standardized

Load the hprice2 data and view the documentation.

data("hprice2")
?hprice2

Data from Hedonic Housing Prices and the Demand for Clean Air, by Harrison, D. and D.L. Rubinfeld, Journal of Environmental Economics and Management 5, 81-102. Diego Garcia, a former Ph.D. student in economics at MIT, kindly provided these data, which he obtained from the book Regression Diagnostics: Identifying Influential Data and Sources of Collinearity, by D.A. Belsey, E. Kuh, and R. Welsch, 1990. New York: Wiley.

$$price$$: median housing price.

$$nox$$: Nitrous Oxide concentration; parts per million.

$$crime$$: number of reported crimes per capita.

$$rooms$$: average number of rooms in houses in the community.

$$dist$$: weighted distance of the community to 5 employment centers.

$$stratio$$: average student-teacher ratio of schools in the community.

$price = \beta_0 + \beta_1nox + \beta_2crime + \beta_3rooms + \beta_4dist + \beta_5stratio + \mu$

Estimate the usual lm model.
housing_level <- lm(price ~ nox + crime + rooms + dist + stratio, data = hprice2)

Estimate the same model, but standardize the coefficients by wrapping each variable with R’s scale function:

$\widehat{zprice} = \beta_1znox + \beta_2zcrime + \beta_3zrooms + \beta_4zdist + \beta_5zstratio$

housing_standardized <- lm(scale(price) ~ 0 + scale(nox) + scale(crime) + scale(rooms) + scale(dist) + scale(stratio), data = hprice2)

Compare the results of the two models:

summary(housing_level)
summary(housing_standardized)

Dependent variable: price scale(price)
(1) (2)
nox -2,706.43300*** (354.08690)
crime -153.60100*** (32.92883)
rooms 6,735.49800*** (393.60370)
dist -1,026.80600*** (188.10790)
stratio -1,149.20400*** (127.42870)
scale(nox) -0.34045*** (0.04450)
scale(crime) -0.14328*** (0.03069)
scale(rooms) 0.51389*** (0.03000)
scale(dist) -0.23484*** (0.04298)
scale(stratio) -0.27028*** (0.02994)
Constant 20,871.13000*** (5,054.59900)
Observations 506 506
R2 0.63567 0.63567
Adjusted R2 0.63202 0.63203
Residual Std. Error 5,586.19800 (df = 500) 0.60601 (df = 501)
F Statistic 174.47330*** (df = 5; 500) 174.82220*** (df = 5; 501)
Note: *p<0.1; **p<0.05; ***p<0.01

### Example 6.2: Effects of Pollution on Housing Prices, Quadratic Interactive Term

Modify the housing model from example 4.5, adding a quadratic term in rooms:

$log(price) = \beta_0 + \beta_1log(nox) + \beta_2log(dist) + \beta_3rooms + \beta_4rooms^2 + \beta_5stratio + \mu$

housing_model_4.5 <- lm(lprice ~ lnox + log(dist) + rooms + stratio, data = hprice2)
housing_model_6.2 <- lm(lprice ~ lnox + log(dist) + rooms + I(rooms^2) + stratio, data = hprice2)

Compare the results with the model from example 6.1.
summary(housing_model_4.5)
summary(housing_model_6.2)

Dependent variable: lprice
(1) (2)
lnox -0.95354*** (0.11674) -0.90168*** (0.11469)
log(dist) -0.13434*** (0.04310) -0.08678** (0.04328)
rooms 0.25453*** (0.01853) -0.54511*** (0.16545)
I(rooms^2) 0.06226*** (0.01280)
stratio -0.05245*** (0.00590) -0.04759*** (0.00585)
Constant 11.08386*** (0.31811) 13.38548*** (0.56647)
Observations 506 506
R2 0.58403 0.60281
Adjusted R2 0.58071 0.59884
Residual Std. Error 0.26500 (df = 501) 0.25921 (df = 500)
F Statistic 175.85520*** (df = 4; 501) 151.77040*** (df = 5; 500)
Note: *p<0.1; **p<0.05; ***p<0.01

Estimate the turning point at which the effect of rooms changes from negative to positive:

$x^* = \left| \frac{\hat{\beta_1}}{2\hat{\beta_2}} \right|$

beta_1 <- summary(housing_model_6.2)$coefficients["rooms",1]
beta_2 <- summary(housing_model_6.2)$coefficients["I(rooms^2)",1]
turning_point <- abs(beta_1 / (2*beta_2))
print(turning_point)
## 4.37763

Compute the percent change across a range of average rooms. Include the smallest, the turning point, and the largest.

Rooms <- c(min(hprice2$rooms), 4, turning_point, 5, 5.5, 6.45, 7.5, max(hprice2$rooms))
Percent.Change <- 100*(beta_1 + 2*beta_2*Rooms)
kable(data.frame(Rooms, Percent.Change))

Rooms Percent.Change
3.56000 -10.181324
4.00000 -4.702338
4.37763 0.000000
5.00000 7.749903
5.50000 13.976023
6.45000 25.805651
7.50000 38.880503
8.78000 54.819367

Graph the log of the selling price against the number of rooms. Superimpose a simple model as well as a quadratic model and examine the difference.

## Chapter 7: Multiple Regression Analysis with Qualitative Information

### Example 7.4: Housing Price Regression, Qualitative Binary Variable

This time, use the hprice1 data.

data("hprice1")
?hprice1

Data collected from the real estate pages of the Boston Globe during 1990. These are homes that sold in the Boston, MA area.
$$lprice:$$ Log(house price, $1000s)

$$llotsize:$$ Log(size of lot in square feet)

$$lsqrft:$$ Log(size of house in square feet)

$$bdrms:$$ number of bedrooms

$$colonial:$$ =1 if home is colonial style

$\widehat{log(price)} = \beta_0 + \beta_1log(lotsize) + \beta_2log(sqrft) + \beta_3bdrms + \beta_4colonial$

Estimate the coefficients of the above linear model on the hprice1 data set.

housing_qualitative <- lm(lprice ~ llotsize + lsqrft + bdrms + colonial, data = hprice1)
summary(housing_qualitative)

Dependent variable: lprice
llotsize 0.16782*** (0.03818)
lsqrft 0.70719*** (0.09280)
bdrms 0.02683 (0.02872)
colonial 0.05380 (0.04477)
Constant -1.34959** (0.65104)
Observations 88
R2 0.64907
Adjusted R2 0.63216
Residual Std. Error 0.18412 (df = 83)
F Statistic 38.37846*** (df = 4; 83)
Note: *p<0.1; **p<0.05; ***p<0.01

## Chapter 8: Heteroskedasticity

### Example 8.9: Determinants of Personal Computer Ownership

$\widehat{PC} = \beta_0 + \beta_1hsGPA + \beta_2ACT + \beta_3parcoll$

Christopher Lemmon, a former MSU undergraduate, collected these data from a survey he took of MSU students in Fall 1994. Load gpa1 and create a new variable, parcoll, combining fathcoll and mothcoll. This new column indicates if either parent went to college.

data("gpa1")
gpa1$parcoll <- as.integer(gpa1$fathcoll==1 | gpa1$mothcoll==1)

GPA_OLS <- lm(PC ~ hsGPA + ACT + parcoll, data = gpa1)

Calculate the weights and then pass them to the weights argument.

weights <- GPA_OLS$fitted.values * (1-GPA_OLS$fitted.values)

GPA_WLS <- lm(PC ~ hsGPA + ACT + parcoll, data = gpa1, weights = 1/weights)

Compare the OLS and WLS model in the table below:

Dependent variable: PC
(1) (2)
hsGPA 0.06539 (0.13726) 0.03270 (0.12988)
ACT 0.00056 (0.01550) 0.00427 (0.01545)
parcoll 0.22105** (0.09296) 0.21519** (0.08629)
Constant -0.00043 (0.49054) 0.02621 (0.47665)
Observations 141 141
R2 0.04153 0.04644
Adjusted R2 0.02054 0.02556
Residual Std.
Error (df = 137) 0.48599 1.01624
F Statistic (df = 3; 137) 1.97851 2.22404*
Note: *p<0.1; **p<0.05; ***p<0.01

## Chapter 9: More on Specification and Data Issues

### Example 9.8: R&D Intensity and Firm Size

$rdintens = \beta_0 + \beta_1sales + \beta_2profmarg + \mu$

From Businessweek R&D Scoreboard, October 25, 1991. Load the data and estimate the model.

data("rdchem")
all_rdchem <- lm(rdintens ~ sales + profmarg, data = rdchem)

Plotting the data reveals an outlier on the far right of the plot, which will skew the results of our model. To gain a better understanding of how sales and profmarg describe rdintens for most firms, we can re-estimate the model without that data point. Use the subset argument of the linear model function to restrict estimation to observations with sales below the maximum.

smallest_rdchem <- lm(rdintens ~ sales + profmarg, data = rdchem, subset = (sales < max(sales)))

The table below compares the results of both models side by side. By removing the outlier firm, $$sales$$ becomes a more significant determinant of R&D expenditures.

Dependent variable: rdintens
(1) (2)
sales 0.00005 (0.00004) 0.00019** (0.00008)
profmarg 0.04462 (0.04618) 0.04784 (0.04448)
Constant 2.62526*** (0.58553) 2.29685*** (0.59180)
Observations 32 31
R2 0.07612 0.17281
Adjusted R2 0.01240 0.11372
Residual Std. Error 1.86205 (df = 29) 1.79218 (df = 28)
F Statistic 1.19465 (df = 2; 29) 2.92476* (df = 2; 28)
Note: *p<0.1; **p<0.05; ***p<0.01

## Chapter 10: Basic Regression Analysis with Time Series Data

### Example 10.2: Effects of Inflation and Deficits on Interest Rates

$\widehat{i3} = \beta_0 + \beta_1inf_t + \beta_2def_t$

Data from the Economic Report of the President, 2004, Tables B-64, B-73, and B-79.

data("intdef")
tbill_model <- lm(i3 ~ inf + def, data = intdef)

Dependent variable: i3
inf 0.60587*** (0.08213)
def 0.51306*** (0.11838)
Constant 1.73327*** (0.43197)
Observations 56
R2 0.60207
Adjusted R2 0.58705
Residual Std.
Error 1.84316 (df = 53)
F Statistic 40.09424*** (df = 2; 53)
Note: *p<0.1; **p<0.05; ***p<0.01

### Example 10.11: Seasonal Effects of Antidumping Filings

C.M. Krupp and P.S. Pollard (1999), Market Responses to Antidumping Laws: Some Evidence from the U.S. Chemical Industry, Canadian Journal of Economics 29, 199-227. Dr. Krupp kindly provided the data. They are monthly data covering February 1978 through December 1988.

data("barium")
barium_imports <- lm(lchnimp ~ lchempi + lgas + lrtwex + befile6 + affile6 + afdec6, data = barium)

Estimate a new model, barium_seasonal, which accounts for seasonality by adding dummy variables contained in the data.

barium_seasonal <- lm(lchnimp ~ lchempi + lgas + lrtwex + befile6 + affile6 + afdec6 + feb + mar + apr + may + jun + jul + aug + sep + oct + nov + dec, data = barium)

Compare both models:

Dependent variable: lchnimp
(1) (2)
lchempi 3.11719*** (0.47920) 3.26506*** (0.49293)
lgas 0.19635 (0.90662) -1.27812 (1.38901)
lrtwex 0.98302** (0.40015) 0.66305 (0.47130)
befile6 0.05957 (0.26097) 0.13970 (0.26681)
affile6 -0.03241 (0.26430) 0.01263 (0.27869)
afdec6 -0.56524* (0.28584) -0.52130* (0.30195)
feb -0.41771 (0.30444)
mar 0.05905 (0.26473)
apr -0.45148* (0.26839)
may 0.03331 (0.26924)
jun -0.20633 (0.26925)
jul 0.00384 (0.27877)
aug -0.15707 (0.27799)
sep -0.13416 (0.26766)
oct 0.05169 (0.26685)
nov -0.24626 (0.26283)
dec 0.13284 (0.27142)
Constant -17.80300 (21.04537) 16.77877 (32.42865)
Observations 131 131
R2 0.30486 0.35833
Adjusted R2 0.27123 0.26179
Residual Std. Error 0.59735 (df = 124) 0.60121 (df = 113)
F Statistic 9.06365*** (df = 6; 124) 3.71190*** (df = 17; 113)
Note: *p<0.1; **p<0.05; ***p<0.01

Now, compute the ANOVA between the two models.

barium_anova <- anova(barium_imports, barium_seasonal)

Statistic N Mean St. Dev.
Min Pctl(25) Pctl(75) Max
Res.Df 2 118.50000 7.77817 113 115.8 121.2 124
RSS 2 42.54549 2.40642 40.84390 41.69469 43.39629 44.24709
Df 1 11.00000 11.00000 11.00000 11.00000 11.00000
Sum of Sq 1 3.40319 3.40319 3.40319 3.40319 3.40319
F 1 0.85594 0.85594 0.85594 0.85594 0.85594
Pr(> F) 1 0.58520 0.58520 0.58520 0.58520 0.58520

## Chapter 11: Further Issues in Using OLS with Time Series Data

### Example 11.7: Wages and Productivity

$\widehat{log(hrwage_t)} = \beta_0 + \beta_1log(outphr_t) + \beta_2t + \mu_t$

Data from the Economic Report of the President, 1989, Table B-47. The data are for the non-farm business sector.

data("earns")
wage_time <- lm(lhrwage ~ loutphr + t, data = earns)
wage_diff <- lm(diff(lhrwage) ~ diff(loutphr), data = earns)

Dependent variable: lhrwage diff(lhrwage)
(1) (2)
loutphr 1.63964*** (0.09335)
t -0.01823*** (0.00175)
diff(loutphr) 0.80932*** (0.17345)
Constant -5.32845*** (0.37445) -0.00366 (0.00422)
Observations 41 40
R2 0.97122 0.36424
Adjusted R2 0.96971 0.34750
Residual Std. Error (df = 38) 0.02854 0.01695
F Statistic 641.22430*** (df = 2; 38) 21.77054*** (df = 1; 38)
Note: *p<0.1; **p<0.05; ***p<0.01

## Chapter 12: Serial Correlation and Heteroskedasticity in Time Series Regressions

### Example 12.4: Prais-Winsten Estimation in the Event Study

data("barium")
barium_model <- lm(lchnimp ~ lchempi + lgas + lrtwex + befile6 + affile6 + afdec6, data = barium)

Load the prais package and use the prais_winsten function to estimate.
library(prais) barium_prais_winsten <- prais_winsten(lchnimp ~ lchempi + lgas + lrtwex + befile6 + affile6 + afdec6, data = barium) ## Iteration 0: rho = 0 ## Iteration 1: rho = 0.2708 ## Iteration 2: rho = 0.291 ## Iteration 3: rho = 0.293 ## Iteration 4: rho = 0.2932 ## Iteration 5: rho = 0.2932 ## Iteration 6: rho = 0.2932 ## Iteration 7: rho = 0.2932 Dependent variable: lchnimp lchempi 3.11719*** (0.47920) lgas 0.19635 (0.90662) lrtwex 0.98302** (0.40015) befile6 0.05957 (0.26097) affile6 -0.03241 (0.26430) afdec6 -0.56524* (0.28584) Constant -17.80300 (21.04537) Observations 131 R2 0.30486 Adjusted R2 0.27123 Residual Std. Error 0.59735 (df = 124) F Statistic 9.06365*** (df = 6; 124) Note: p<0.1; p<0.05; p<0.01 barium_prais_winsten ## ## Call: ## prais_winsten(formula = lchnimp ~ lchempi + lgas + lrtwex + befile6 + ## affile6 + afdec6, data = barium) ## ## Coefficients: ## (Intercept) lchempi lgas lrtwex befile6 ## -37.07770 2.94095 1.04638 1.13279 -0.01648 ## affile6 afdec6 ## -0.03316 -0.57681 ## ## AR(1) Coefficient rho: 0.2932 ### Example 12.8: Heteroskedasticity and the Efficient Markets Hypothesis These are Wednesday closing prices of value-weighted NYSE average, available in many publications. Wooldridge does not recall the particular source used when he collected these data at MIT, but notes probably the easiest way to get similar data is to go to the NYSE web site, www.nyse.com. $return_t = \beta_0 + \beta_1return_{t-1} + \mu_t$ data("nyse") ?nyse return_AR1 <-lm(return ~ return_1, data = nyse) $\hat{\mu^2_t} = \beta_0 + \beta_1return_{t-1} + residual_t$ return_mu <- residuals(return_AR1) mu2_hat_model <- lm(return_mu^2 ~ return_1, data = return_AR1\$model) Dependent variable: return return_mu2 (1) (2) return_1 0.05890 (0.03802) -1.10413*** (0.20140) Constant 0.17963** (0.08074) 4.65650*** (0.42768) Observations 689 689 R2 0.00348 0.04191 Adjusted R2 0.00203 0.04052 Residual Std. 
Error (df = 687) 2.11040 11.17847
F Statistic (df = 1; 687) 2.39946 30.05460***
Note: *p<0.1; **p<0.05; ***p<0.01

### Example 12.9: ARCH in Stock Returns

$\hat{\mu^2_t} = \beta_0 + \beta_1\hat{\mu^2_{t-1}} + residual_t$

We still have return_mu in the working environment, so we can use it to create $$\hat{\mu^2_t}$$ (mu2_hat) and $$\hat{\mu^2_{t-1}}$$ (mu2_hat_1). Notice the use of R’s vector subset operations to perform the lag operation: to create mu2_hat, we drop the first observation of return_mu and square the result; to create mu2_hat_1, we remove the last observation, using the subtraction operator combined with a call to the NROW function on return_mu. Now both contain $$688$$ observations, and we can estimate a standard linear model.

mu2_hat <- return_mu[-1]^2
mu2_hat_1 <- return_mu[-NROW(return_mu)]^2
arch_model <- lm(mu2_hat ~ mu2_hat_1)

Dependent variable: mu2_hat
mu2_hat_1 0.33706*** (0.03595)
Constant 2.94743*** (0.44023)
Observations 688
R2 0.11361
Adjusted R2 0.11231
Residual Std. Error 10.75907 (df = 686)
F Statistic 87.92263*** (df = 1; 686)
Note: *p<0.1; **p<0.05; ***p<0.01

## Chapter 13: Pooling Cross Sections across Time: Simple Panel Data Methods

### Example 13.7: Effect of Drunk Driving Laws on Traffic Fatalities

Wooldridge collected these data from two sources, the 1992 Statistical Abstract of the United States (Tables 1009, 1012) and A Digest of State Alcohol-Highway Safety Related Legislation, 1985 and 1990, published by the U.S. National Highway Traffic Safety Administration.

$\widehat{\Delta{dthrte}} = \beta_0 + \beta_1\Delta{open} + \beta_2\Delta{admin}$

data("traffic1")
?traffic1

DD_model <- lm(cdthrte ~ copen + cadmn, data = traffic1)

Dependent variable: cdthrte
copen -0.41968** (0.20559)
cadmn -0.15060 (0.11682)
Constant -0.49679*** (0.05243)
Observations 51
R2 0.11867
Adjusted R2 0.08194
Residual Std.
Error 0.34350 (df = 48)
F Statistic 3.23144** (df = 2; 48)
Note: *p<0.1; **p<0.05; ***p<0.01

## Chapter 14: Advanced Panel Data Methods

### Example 14.1: Effect of Job Training on Firm Scrap Rates

In this section, we will estimate a linear panel model using the plm function from the plm: Linear Models for Panel Data package. See the bibliography for more information.

library(plm)
data("jtrain")
scrap_panel <- plm(lscrap ~ d88 + d89 + grant + grant_1, data = jtrain, index = c('fcode','year'), model = 'within', effect = 'individual')

Dependent variable: lscrap
d88 -0.08022 (0.10948)
d89 -0.24720* (0.13322)
grant -0.25231* (0.15063)
grant_1 -0.42159** (0.21020)
Observations 162
R2 0.20105
Adjusted R2 -0.23684
F Statistic 6.54259*** (df = 4; 104)
Note: *p<0.1; **p<0.05; ***p<0.01

## Chapter 15: Instrumental Variables Estimation and Two Stage Least Squares

### Example 15.1: Estimating the Return to Education for Married Women

T.A. Mroz (1987), The Sensitivity of an Empirical Model of Married Women’s Hours of Work to Economic and Statistical Assumptions, Econometrica 55, 765-799. Professor Ernst R. Berndt, of MIT, kindly provided the data, which he obtained from Professor Mroz.

$log(wage) = \beta_0 + \beta_1educ + \mu$

data("mroz")
?mroz

wage_educ_model <- lm(lwage ~ educ, data = mroz)

$\widehat{educ} = \beta_0 + \beta_1fatheduc$

We run the typical linear model, but notice the use of the subset argument. inlf is a binary variable in which a value of 1 means the woman is “In the Labor Force”. By sub-setting the mroz data.frame to observations in which inlf==1, only working women will be in the sample.

fatheduc_model <- lm(educ ~ fatheduc, data = mroz, subset = (inlf==1))

In this section, we will perform an Instrumental-Variable Regression, using the ivreg function in the AER (Applied Econometrics with R) package. See the bibliography for more information.
library("AER")
wage_educ_IV <- ivreg(lwage ~ educ | fatheduc, data = mroz)

Dependent variable: lwage educ lwage
OLS OLS instrumental variable
(1) (2) (3)
educ 0.10865*** (0.01440) 0.05917* (0.03514)
fatheduc 0.26944*** (0.02859)
Constant -0.18520 (0.18523) 10.23705*** (0.27594) 0.44110 (0.44610)
Observations 428 428 428
R2 0.11788 0.17256 0.09344
Adjusted R2 0.11581 0.17062 0.09131
Residual Std. Error (df = 426) 0.68003 2.08130 0.68939
F Statistic (df = 1; 426) 56.92892*** 88.84076***
Note: *p<0.1; **p<0.05; ***p<0.01

### Example 15.2: Estimating the Return to Education for Men

Data from M. Blackburn and D. Neumark (1992), Unobserved Ability, Efficiency Wages, and Interindustry Wage Differentials, Quarterly Journal of Economics 107, 1421-1436. Professor Neumark kindly provided the data, of which Wooldridge uses the data for 1980.

$\widehat{educ} = \beta_0 + \beta_1sibs$

data("wage2")
?wage2

educ_sibs_model <- lm(educ ~ sibs, data = wage2)

$\widehat{log(wage)} = \beta_0 + \beta_1educ$

Again, estimate the model using the ivreg function in the AER (Applied Econometrics with R) package.

library("AER")
educ_sibs_IV <- ivreg(lwage ~ educ | sibs, data = wage2)

Dependent variable: educ lwage
OLS instrumental variable
(1) (2) (3)
sibs -0.22792*** (0.03028)
educ 0.12243*** (0.02635) 0.05917* (0.03514)
Constant 14.13879*** (0.11314) 5.13003*** (0.35517) 0.44110 (0.44610)
Observations 935 935 428
R2 0.05726 -0.00917 0.09344
Adjusted R2 0.05625 -0.01026 0.09131
Residual Std. Error 2.13398 (df = 933) 0.42330 (df = 933) 0.68939 (df = 426)
F Statistic 56.66715*** (df = 1; 933)
Note: *p<0.1; **p<0.05; ***p<0.01

### Example 15.5: Return to Education for Working Women

$\widehat{log(wage)} = \beta_0 + \beta_1educ + \beta_2exper + \beta_3exper^2$

Use the ivreg function in the AER (Applied Econometrics with R) package to estimate.
data("mroz") wage_educ_exper_IV <- ivreg(lwage ~ educ + exper + expersq | exper + expersq + motheduc + fatheduc, data = mroz) Dependent variable: lwage educ 0.06140* (0.03144) exper 0.04417*** (0.01343) expersq -0.00090** (0.00040) Constant 0.04810 (0.40033) Observations 428 R2 0.13571 Adjusted R2 0.12959 Residual Std. Error 0.67471 (df = 424) Note: p<0.1; p<0.05; p<0.01 ## Chapter 16: Simultaneous Equations Models ### Example 16.4: INFLATION AND OPENNESS Data from D. Romer (1993), Openness and Inflation: Theory and Evidence, Quarterly Journal of Economics 108, 869-903. The data are included in the article. $inf = \beta_{10} + \alpha_1open + \beta_{11}log(pcinc) + \mu_1$ $open = \beta_{20} + \alpha_2inf + \beta_{21}log(pcinc) + \beta_{22}log(land) + \mu_2$ ### Example 16.6: INFLATION AND OPENNESS $\widehat{open} = \beta_0 + \beta_{1}log(pcinc) + \beta_{2}log(land)$ data("openness") ?openness open_model <-lm(open ~ lpcinc + lland, data = openness) $\widehat{inf} = \beta_0 + \beta_{1}open + \beta_{2}log(pcinc)$ Use the ivreg function in the AER (Applied Econometrics with R) package to estimate. library(AER) inflation_IV <- ivreg(inf ~ open + lpcinc | lpcinc + lland, data = openness) Dependent variable: open inf OLS instrumental variable (1) (2) open -0.33749** (0.14412) lpcinc 0.54648 (1.49324) 0.37582 (2.01508) lland -7.56710*** (0.81422) Constant 117.08450*** (15.84830) 26.89934* (15.40120) Observations 114 114 R2 0.44867 0.03088 Adjusted R2 0.43873 0.01341 Residual Std. Error (df = 111) 17.79559 23.83581 F Statistic 45.16537*** (df = 2; 111) Note: p<0.1; p<0.05; p<0.01 ## Chapter 18: Advanced Time Series Topics ### Example 18.8: FORECASTING THE U.S. UNEMPLOYMENT RATE Data from Economic Report of the President, 2004, Tables B-42 and B-64. data("phillips") ?phillips $\widehat{unemp_t} = \beta_0 + \beta_1unem_{t-1}$ Estimate the linear model in the usual way and note the use of the subset argument to define data equal to and before the year 1996. 
phillips_train <- subset(phillips, year <= 1996)

unem_AR1 <- lm(unem ~ unem_1, data = phillips_train)

$\widehat{unemp_t} = \beta_0 + \beta_1unem_{t-1} + \beta_2inf_{t-1}$

unem_inf_VAR1 <- lm(unem ~ unem_1 + inf_1, data = phillips_train)

Dependent variable: unem
(1) (2)
unem_1 0.73235*** (0.09689) 0.64703*** (0.08381)
inf_1 0.18358*** (0.04118)
Constant 1.57174*** (0.57712) 1.30380** (0.48969)
Observations 48 48
R2 0.55397 0.69059
Adjusted R2 0.54427 0.67684
Residual Std. Error 1.04857 (df = 46) 0.88298 (df = 45)
F Statistic 57.13184*** (df = 1; 46) 50.21941*** (df = 2; 45)
Note: *p<0.1; **p<0.05; ***p<0.01

Now, use the subset argument to create our testing data set, containing observations after 1996. Next, pass both the model objects and the test set to the predict function. Finally, cbind or “column bind” both forecasts as well as the year and unemployment rate of the test set.

phillips_test <- subset(phillips, year >= 1997)

AR1_forecast <- predict.lm(unem_AR1, newdata = phillips_test)
VAR1_forecast <- predict.lm(unem_inf_VAR1, newdata = phillips_test)

kable(cbind(phillips_test[ ,c("year", "unem")], AR1_forecast, VAR1_forecast))

year unem AR1_forecast VAR1_forecast
50 1997 4.9 5.526452 5.348468
51 1998 4.5 5.160275 4.896451
52 1999 4.2 4.867333 4.509137
53 2000 4.0 4.647627 4.425175
54 2001 4.8 4.501157 4.516062
55 2002 5.8 5.087040 4.923537
56 2003 6.0 5.819394 5.350271

# Appendix

### Using R for Introductory Econometrics

This is an excellent open-source complementary text to “Introductory Econometrics” by Jeffrey M. Wooldridge and should be your number one resource. This excerpt is from the book’s website:

This book introduces the popular, powerful and free programming language and software package R with a focus on the implementation of standard tools and methods used in econometrics. Unlike other books on similar topics, it does not attempt to provide a self-contained discussion of econometric models and methods.
Instead, it builds on the excellent and popular textbook “Introductory Econometrics” by Jeffrey M. Wooldridge.

Heiss, Florian. Using R for Introductory Econometrics. ISBN: 978-1-523-28513-6, CreateSpace Independent Publishing Platform, 2016, Dusseldorf, Germany.

### Applied Econometrics with R

From the publisher’s website:

This is the first book on applied econometrics using the R system for statistical computing and graphics. It presents hands-on examples for a wide range of econometric models, from classical linear regression models for cross-section, time series or panel data and the common non-linear models of microeconometrics such as logit, probit and tobit models, to recent semiparametric extensions. In addition, it provides a chapter on programming, including simulations, optimization, and an introduction to R tools enabling reproducible econometric research. An R package accompanying this book, AER, is available from the Comprehensive R Archive Network (CRAN) at https://CRAN.R-project.org/package=AER.

Kleiber, Christian and Achim Zeileis. Applied Econometrics with R. ISBN 978-0-387-77316-2, Springer-Verlag, 2008, New York. https://www.springer.com/us/book/9780387773162

## Bibliography

Croissant Y, Millo G (2008). “Panel Data Econometrics in R: The plm Package.” Journal of Statistical Software, 27(2), 1-43. doi: 10.18637/jss.v027.i02 (URL: http://doi.org/10.18637/jss.v027.i02).

Marek Hlavac (2018). stargazer: Well-Formatted Regression and Summary Statistics Tables. R package version 5.2.1. URL: https://CRAN.R-project.org/package=stargazer

R Core Team (2018). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL: https://www.R-project.org/.

Christian Kleiber and Achim Zeileis (2008). Applied Econometrics with R. New York: Springer-Verlag. ISBN 978-0-387-77316-2. URL: https://CRAN.R-project.org/package=AER

Franz X. Mohr (2018). prais: Prais-Winsten Estimator for AR(1) Serial Correlation.
R package version 1.0.0. URL: https://CRAN.R-project.org/package=prais

Hadley Wickham. testthat: Get Started with Testing. The R Journal, vol. 3, no. 1, pp. 5–10, 2011. URL: https://CRAN.R-project.org/package=testthat

Jeffrey M. Wooldridge (2016). Introductory Econometrics: A Modern Approach, 6th edition. ISBN-13: 978-1-305-27010-7. Mason, Ohio: South-Western Cengage Learning.

Yihui Xie (2018). knitr: A General-Purpose Package for Dynamic Report Generation in R. R package version 1.20. URL: https://CRAN.R-project.org/package=knitr
# Piecewise

In mathematics, a piecewise-defined function (also called a piecewise function or a hybrid function) is a function which is defined by multiple sub-functions, each sub-function applying to a certain interval of the main function's domain (a sub-domain). Piecewise is actually a way of expressing the function, rather than a characteristic of the function itself, but with additional qualification, it can describe the nature of the function. For example, a piecewise polynomial function is a function that is a polynomial on each of its sub-domains, but possibly a different one on each.

The word piecewise is also used to describe any property of a piecewise-defined function that holds for each piece but does not necessarily hold for the whole domain of the function. A function is piecewise differentiable or piecewise continuously differentiable if each piece is differentiable throughout its subdomain, even though the whole function may not be differentiable at the points between the pieces. In convex analysis, the notion of a derivative may be replaced by that of the subderivative for piecewise functions. Although the "pieces" in a piecewise definition need not be intervals, a function is not called "piecewise linear" or "piecewise continuous" or "piecewise differentiable" unless the pieces are intervals.

## Notation and interpretation

Graph of the absolute value function, y = |x|.

Piecewise functions are defined using the common functional notation, where the body of the function is an array of functions and associated subdomains. Crucially, in most settings, there must only be a finite number of subdomains, each of which must be an interval, in order for the overall function to be called "piecewise". For example, consider the piecewise definition of the absolute value function:

$$|x| = \begin{cases} -x & \text{if } x < 0 \\ x & \text{if } x \geq 0 \end{cases}$$

For all values of x less than zero, the first function (−x) is used, which negates the sign of the input value, making negative numbers positive.
For all values of x greater than or equal to zero, the second function (x) is used, which evaluates trivially to the input value itself. Consider the piecewise function f(x) = |x| evaluated at certain values of x:

| x | f(x) | Function used |
|---|------|---------------|
| −3 | 3 | −x |
| −0.1 | 0.1 | −x |
| 0 | 0 | x |
| 1/2 | 1/2 | x |
| 5 | 5 | x |

Thus, in order to evaluate a piecewise function at a given input value, the appropriate subdomain needs to be chosen in order to select the correct function and produce the correct output value.

## Continuity

A piecewise function comprising different quadratic functions on either side of .

A piecewise function is continuous on a given interval if the following conditions are met:

• it is defined throughout that interval
• its constituent functions are continuous on that interval
• there are no discontinuities at the endpoints of the subdomains within that interval.

The pictured function, for example, is piecewise continuous throughout its subdomains, but is not continuous on the entire domain. The pictured function contains a jump discontinuity at .
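The selection rule described above can be sketched as a short Python function (an illustrative helper of our own, not part of the article):

```python
def piecewise_abs(x):
    """Evaluate |x| from its piecewise definition."""
    if x < 0:
        return -x  # first sub-function: negate on the subdomain x < 0
    return x       # second sub-function: identity on the subdomain x >= 0

# Each input selects the sub-function whose subdomain contains it.
print([piecewise_abs(v) for v in (-3, -0.1, 0, 0.5, 5)])  # [3, 0.1, 0, 0.5, 5]
```

The branch condition plays the role of the subdomain test: exactly one sub-function applies to any given input.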
# Category: Transformation (function) Rotation matrix In linear algebra, a rotation matrix is a transformation matrix that is used to perform a rotation in Euclidean space. For example, using the convention below, the matrix rotates points in the xy plan Translation (geometry) In Euclidean geometry, a translation is a geometric transformation that moves every point of a figure, shape or space by the same distance in a given direction. A translation can also be interpreted a Glide reflection In 2-dimensional geometry, a glide reflection (or transflection) is a symmetry operation that consists of a reflection over a line and then translation along that line, combined into a single operatio Response modeling methodology Response modeling methodology (RMM) is a general platform for statistical modeling of a linear/nonlinear relationship between a response variable (dependent variable) and a linear predictor (a linear Geometric transformation In mathematics, a geometric transformation is any bijection of a set to itself (or to another such set) with some salient geometrical underpinning. More specifically, it is a function whose domain and Translation of axes In mathematics, a translation of axes in two dimensions is a mapping from an xy-Cartesian coordinate system to an x'y'-Cartesian coordinate system in which the x' axis is parallel to the x axis and k Transformation matrix In linear algebra, linear transformations can be represented by matrices. 
- **Transformation matrix**: If T is a linear transformation mapping Rⁿ to Rᵐ and x is a column vector with n entries, then T(x) = Ax for some m×n matrix A, called the transformation matrix of T.
- **Homothety**: In mathematics, a homothety (or homothecy, or homogeneous dilation) is a transformation of an affine space determined by a point S called its center and a nonzero number k called its ratio, which sends each point M to a point M′ with SM′ = k · SM (as vectors).
- **Linear map**: In mathematics, and more specifically in linear algebra, a linear map (also called a linear mapping, linear transformation, vector space homomorphism, or in some contexts linear function) is a mapping between two vector spaces that preserves the operations of vector addition and scalar multiplication.
- **Scaling (geometry)**: In affine geometry, uniform scaling (or isotropic scaling) is a linear transformation that enlarges (increases) or shrinks (diminishes) objects by a scale factor that is the same in all directions.
- **Infinitesimal transformation**: In mathematics, an infinitesimal transformation is a limiting form of small transformation. For example, one may talk about an infinitesimal rotation of a rigid body in three-dimensional space.
- **Householder transformation**: In linear algebra, a Householder transformation (also known as a Householder reflection or elementary reflector) is a linear transformation that describes a reflection about a plane or hyperplane containing the origin.
- **Motion (geometry)**: In geometry, a motion is an isometry of a metric space. For instance, a plane equipped with the Euclidean distance metric is a metric space in which a mapping associating congruent figures is a motion.
- **Affine transformation**: In Euclidean geometry, an affine transformation or affinity (from the Latin, affinis, "connected with") is a geometric transformation that preserves lines and parallelism, but not necessarily Euclidean distances and angles.
- **Rotation of axes**: In mathematics, a rotation of axes in two dimensions is a mapping from an xy-Cartesian coordinate system to an x′y′-Cartesian coordinate system in which the origin is kept fixed and the x′ and y′ axes are obtained by rotating the x and y axes through an angle.
- **Late-stage functionalization**: Late-stage functionalization (LSF) is a desired, chemical or biochemical, chemoselective transformation on a complex molecule to provide at least one analog in sufficient quantity and purity.
- **Helmert transformation**: The Helmert transformation (named after Friedrich Robert Helmert, 1843–1917) is a geometric transformation method within a three-dimensional space. It is frequently used in geodesy to produce datum transformations.
- **Homography**: In projective geometry, a homography is an isomorphism of projective spaces, induced by an isomorphism of the vector spaces from which the projective spaces derive. It is a bijection that maps lines to lines.
- **Vector projection**: The vector projection of a vector a on (or onto) a nonzero vector b (also known as the vector component or vector resolution of a in the direction of b) is the orthogonal projection of a onto a straight line parallel to b.
- **Locally finite operator**: In mathematics, a linear operator is called locally finite if the space is the union of a family of finite-dimensional invariant subspaces. In other words, there exists a family of finite-dimensional subspaces, each invariant under the operator, whose union is the whole space.
- **Reflection (mathematics)**: In mathematics, a reflection (also spelled reflexion) is a mapping from a Euclidean space to itself that is an isometry with a hyperplane as a set of fixed points; this set is called the axis (in dimension 2) or plane (in dimension 3) of the reflection.
- **Transformation (function)**: In mathematics, a transformation is a function f, usually with some geometrical underpinning, that maps a set X to itself, i.e. f : X → X. Examples include linear transformations of vector spaces and geometric transformations.
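Several of the entries above (homothety, uniform scaling) boil down to the same affine recipe; here is a minimal sketch, assuming 2-D points represented as plain tuples (the function name is illustrative, not taken from any of the listed articles):

```python
# A homothety with center s and ratio k sends a point x to s + k*(x - s).
# With the center at the origin this reduces to uniform scaling by k.
def homothety(s, k, x):
    return tuple(si + k * (xi - si) for si, xi in zip(s, x))

print(homothety((0.0, 0.0), 2.0, (2.0, 3.0)))  # -> (4.0, 6.0)
print(homothety((1.0, 1.0), 2.0, (2.0, 3.0)))  # -> (3.0, 5.0)
```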
HuggingFaceTB/finemath
## Calculus: Early Transcendentals (2nd Edition)

$8\textbf{w} + \textbf{v} - 6\textbf{u} = \langle -28, 82 \rangle$

$8\textbf{w} + \textbf{v} - 6\textbf{u} = 8\langle 0,8\rangle + \langle -4,6\rangle - 6\langle 4,-2\rangle = \langle 0,64\rangle + \langle -4,6\rangle + \langle-24, 12 \rangle = \langle 0-4-24, 64+6+12 \rangle = \langle -28, 82 \rangle$
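The same arithmetic can be checked mechanically; a small sketch with tuples standing in for the vectors (the helper name `combine` is just for illustration):

```python
# Compute 8w + v - 6u for u = <4, -2>, v = <-4, 6>, w = <0, 8>.
def combine(*terms):
    """Sum scalar-vector products given as (scalar, vector) pairs."""
    return tuple(sum(c * vec[i] for c, vec in terms) for i in range(2))

u, v, w = (4, -2), (-4, 6), (0, 8)
print(combine((8, w), (1, v), (-6, u)))  # -> (-28, 82)
```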
A crisp relation is defined over the Cartesian product of two crisp sets. Suppose A and B are two crisp sets. Then their Cartesian product, denoted A × B, is a collection of ordered pairs, such that

A × B = { (a, b) | a ∈ A and b ∈ B }

Note:
• A × B ≠ B × A (in general)
• |A × B| = |A| × |B|
• A × B provides a mapping from a ∈ A to b ∈ B

Relations are a very useful concept in many fields: logic, pattern recognition, control systems, classification, etc. A crisp relation is a set of ordered pairs (a, b) from the Cartesian product A × B such that a ∈ A and b ∈ B. Relations basically represent the mapping of the sets; they define the interaction or association of variables. The strength of the relationship between ordered pairs of elements in each universe is measured by the characteristic function, denoted by χ, where a value of unity is associated with complete relationship and a value of zero is associated with no relationship:

\chi_{R}(a,b) = \begin{cases} 1, & \text{if } (a,b) \in R \\ 0, & \text{if } (a,b) \notin R \end{cases}

## Example: Crisp relation

Consider two crisp sets: C = {1, 2, 3} and D = {4, 5, 6}.

• Find the Cartesian product C × D.
• Also find the relation R over this Cartesian product such that R = { (c, d) | d = c + 2, (c, d) ∈ C × D }.

Solution:

C × D = { (1, 4), (1, 5), (1, 6), (2, 4), (2, 5), (2, 6), (3, 4), (3, 5), (3, 6) }

From the pairs above, it is easy to infer that R = { (2, 4), (3, 5) }.

## Representation of crisp relation:

We can represent a crisp relation in various ways. One way is the functional form described above. Two other popular representations are the sagittal (pictorial) representation, an arrow diagram from elements of A to elements of B, and the matrix representation. For the relation R defined over sets A and B as

A = {1, 2, 3}, B = {4, 5, 6}, R = { (2, 4), (3, 5) }

the matrix representation (rows indexed by A, columns by B) is

        4  5  6
    1 [ 0  0  0 ]
    2 [ 1  0  0 ]
    3 [ 0  1  0 ]

and the sagittal representation draws an arrow from 2 to 4 and from 3 to 5.

## Some special relations:

Let us discuss some special types of relations.
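The example can be reproduced in a few lines of Python (a sketch; `chi` is an illustrative name for the characteristic function):

```python
from itertools import product

C = {1, 2, 3}
D = {4, 5, 6}

# Cartesian product C x D as a set of ordered pairs.
CxD = set(product(C, D))
assert len(CxD) == len(C) * len(D)   # |C x D| = |C| * |D|

# Relation R = {(c, d) | d = c + 2} picked out of C x D.
R = {(c, d) for (c, d) in CxD if d == c + 2}
print(sorted(R))  # -> [(2, 4), (3, 5)]

# Characteristic function of R.
def chi(pair, relation):
    return 1 if pair in relation else 0

print(chi((2, 4), R), chi((1, 4), R))  # -> 1 0
```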
Null relation: there is no mapping of elements from universe X to universe Y.

Complete relation: all the elements of universe X are mapped to universe Y.

Universal relation: the universal relation on A is defined as U_A = A × A = A². Let A = {0, 1, 2}; then U_A = A × A = {(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2)}.

Identity relation: the identity relation on A is defined as I_A = { (a, a) | a ∈ A }. Let A = {0, 1, 2}; then I_A = {(0, 0), (1, 1), (2, 2)}.

## Operations on crisp relation:

Like operations on crisp sets, we can also perform operations on crisp relations. Suppose R(x, y) and S(x, y) are two relations defined over two crisp sets, where x ∈ A and y ∈ B. We will discuss various operations for two crisp relations R and S.

### Union of crisp relation:

χ_{R ∪ S}(x, y) = max( χ_R(x, y), χ_S(x, y) )

### Intersection of crisp relation:

χ_{R ∩ S}(x, y) = min( χ_R(x, y), χ_S(x, y) )

### Complement of crisp relation:

χ_{Rᶜ}(x, y) = 1 − χ_R(x, y)

### Containment:

R ⊆ S if and only if χ_R(x, y) ≤ χ_S(x, y) for all (x, y). The relation R above is not contained within S; but for a relation T with χ_R(x, y) ≤ χ_T(x, y) everywhere, R is contained within T.

## Cardinality of crisp relation:

Cardinality defines the number of elements in a given set. Let A and B be crisp sets with cardinality n and m, respectively.
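The source's matrices for R and S are not shown, so the sketch below uses two illustrative 3×3 characteristic matrices over A = {1, 2, 3} and B = {4, 5, 6} to demonstrate the operations:

```python
# Characteristic (0/1) matrices; R and S are illustrative choices.
R = [[0, 1, 0],
     [1, 0, 0],
     [0, 0, 1]]
S = [[1, 1, 0],
     [0, 0, 0],
     [0, 1, 1]]

def elementwise(op, X, Y):
    return [[op(x, y) for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

union        = elementwise(max, R, S)             # chi = max(chi_R, chi_S)
intersection = elementwise(min, R, S)             # chi = min(chi_R, chi_S)
complement_R = [[1 - x for x in row] for row in R]
# Containment: R is contained in S iff chi_R <= chi_S everywhere.
contained = all(x <= y for rx, ry in zip(R, union) for x, y in zip(rx, ry))

print(union[0], intersection[0], complement_R[0], contained)
# -> [1, 1, 0] [0, 1, 0] [1, 0, 1] True
```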
The cardinality of the Cartesian product A × B is n × m, which is also the maximum cardinality of any crisp relation defined over A × B.

Example: Let A = {1, 2} and B = {3, 4, 5}. Here, n = |A| = 2 and m = |B| = 3, so n × m = 6. The Cartesian product of both sets is A × B = { (1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5) }, and its cardinality is |A × B| = 6 = n × m.

## Composition of crisp relation:

The composition of relations R and S is denoted R ∘ S:

R ∘ S = { (x, z) | there exists y ∈ Y such that (x, y) ∈ R and (y, z) ∈ S }

Composition of relations can be computed in two different ways (max-min composition and max-product composition). For crisp relations, both methods yield identical results. For fuzzy relations, they give different results.
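A minimal sketch of composition on relations stored as sets of ordered pairs (the element names are illustrative):

```python
# Max-min composition of crisp relations stored as sets of ordered pairs.
R = {("x1", "y1"), ("x2", "y2")}
S = {("y1", "z1"), ("y1", "z2"), ("y2", "z1")}

def compose(R, S):
    """(x, z) is in R o S iff some y links x to z through R then S."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

print(sorted(compose(R, S)))
# -> [('x1', 'z1'), ('x1', 'z2'), ('x2', 'z1')]
```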
1 00:00:17,070 --> 00:00:23,610 Hello, in the last class we have discussed some general methods for stability aspects 2 00:00:23,610 --> 00:00:27,740 of multi step method. So, for example Schur criterion 3 00:00:27,740 --> 00:00:32,480 we have discussed. So, given a multi step method, how to find out the stability 4 00:00:32,480 --> 00:00:35,450 regions using the Schur criterion? . 5 00:00:35,450 --> 00:00:41,930 So, let us continue ahead with stability aspects of multi step method. So, let us discuss 6 00:00:41,930 --> 00:00:54,350 some further methods. So, one of the popular method is Routh-Hurwitz criterion. So, 7 00:00:54,350 --> 00:01:29,479 consider the mapping of the open 8 00:01:29,479 --> 00:01:57,670 unit disk of the complex plane to the open left half 9 00:01:57,670 --> 00:02:14,460 plane, that is real z less than 0 of the complex z plane. So, what this mapping does? This 10 00:02:14,460 --> 00:02:33,750 maps unit disc mod r less than 1 to the left of plane in z plane, then the inverse of the 11 00:02:33,750 --> 00:02:53,149 mapping is given by inverse of this mapping. So, this is the inverse of the mapping. 12 00:02:53,149 --> 00:02:54,149 .. 13 00:02:54,149 --> 00:03:27,320 So, under the transformation, under this transformation the function. So, what is this? 14 00:03:27,320 --> 00:03:58,790 This is our stability polynomial. So, this is this becomes, so this polynomial becomes 15 00:03:58,790 --> 00:04:24,040 this. Now, say this is a star, multiply star by 1 minus z power k, we have. So, we are 16 00:04:24,040 --> 00:04:46,790 multiplying this by 1 minus z power k, where k is the order of multi step method. So, we 17 00:04:46,790 --> 00:05:09,200 get this and our aim is to identify this as a polynomial in z. So, what was our aim? So, 18 00:05:09,200 --> 00:05:11,420 this is order of the method. . 19 00:05:11,420 --> 00:05:28,790 .So, we are multiplying and this we are expressing as polynomial in z.
So, then the roots 20 00:05:28,790 --> 00:06:13,620 of the stability polynomial lies inside the open unit disk, if and only if 21 00:06:13,620 --> 00:06:20,250 the roots of the 22 00:06:20,250 --> 00:06:56,300 polynomial double star, so that is of this lie in the open left half plane. So, the roots 23 00:06:56,300 --> 00:07:01,210 of the stability polynomial in the original variable 24 00:07:01,210 --> 00:07:07,270 r lie inside the open unit disc if and only if the roots of the polynomial double star. 25 00:07:07,270 --> 00:07:08,270 . 26 00:07:08,270 --> 00:07:38,850 This is this lie in the open left half plane. So, then the criterion 27 00:07:38,850 --> 00:08:06,160 the roots of double star lie in the open left half plane if 28 00:08:06,160 --> 00:08:45,250 and only if the leading principal minors of the k cross 29 00:08:45,250 --> 00:09:21,130 k matrix. So, this is a, the roots of double 30 00:09:21,130 --> 00:09:38,050 star so that double star is of the form. So, lie in 31 00:09:38,050 --> 00:09:45,900 the open left half plane if and only if the leading principal minors of this matrix are 32 00:09:45,900 --> 00:10:04,610 positive, and a 0 greater than 0 are positive leading principal minors of this matrix are 33 00:10:04,610 --> 00:10:09,950 positive and a naught greater than 0. 34 00:10:09,950 --> 00:10:10,950 .. 35 00:10:10,950 --> 00:10:37,330 We assume that a j is 0, if j is greater than this, in particular for k equals to 2 the 36 00:10:37,330 --> 00:11:38,740 condition leads to this and k equals to 3 and k equals to 4. These represent the necessary 37 00:11:38,740 --> 00:12:19,820 and sufficient conditions for ensuring that all roots lie in the left half plane. Of course, 38 00:12:19,820 --> 00:12:34,709 open. So, this is these are the necessary and sufficient conditions. So, to start with, 39 00:12:34,709 --> 00:12:39,620 so we have used a transformation, what was the transformation?
40 00:12:39,620 --> 00:12:50,459 z equals to r minus 1 and the inverse of this is this. 41 00:12:50,459 --> 00:12:51,459 . 42 00:12:51,459 --> 00:12:56,160 .So, under this transformation the stability polynomial reduces to this, then we have to 43 00:12:56,160 --> 00:13:01,880 multiply by this factor. As a result of the multi step method, the linear multi step method 44 00:13:01,880 --> 00:13:09,180 of order k we are getting this factor. So, then multiplied by that we get a polynomial 45 00:13:09,180 --> 00:13:14,670 z. So, then the Routh-Hurwitz criterion states 46 00:13:14,670 --> 00:13:24,839 that the roots of this polynomial lie in the open left half plane if and only if the leading 47 00:13:24,839 --> 00:13:31,450 principal minors of this matrix are positive. And a 0 is greater than 0 and in particular 48 00:13:31,450 --> 00:13:37,270 we have the these being the necessary and sufficient conditions for ensuring that all 49 00:13:37,270 --> 00:13:41,860 the roots of the polynomial in z to lie in the 50 00:13:41,860 --> 00:13:44,430 open left half plane, ok? . 51 00:13:44,430 --> 00:14:15,050 So, let us see how it works out with a example. So, use the Routh-Hurwitz criterion 52 00:14:15,050 --> 00:14:21,899 to determine 53 00:14:21,899 --> 00:14:56,880 the interval of absolute stability of the linear multi step method. So, the given 54 00:14:56,880 --> 00:15:04,190 linear multi step method we need to determine the interval of absolute stability. So, we 55 00:15:04,190 --> 00:15:25,620 have r equals to this is a map, then for this method we have. So, this is given by r square 56 00:15:25,620 --> 00:15:51,910 minus 1 and this is given by accordingly, we have r square minus 1 is h bar. 57 00:15:51,910 --> 00:16:09,459 So, this will be the stability polynomial which is, so this is our stability polynomial 58 00:16:09,459 --> 00:16:16,910 and what is the order of the method 2. 
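The criterion just stated can be sketched in code. This is not the lecture's worked example; it assumes the standard Hurwitz-matrix convention H[i][j] = a_{2j − i + 1} (0-based indices) for a polynomial a0 z^n + a1 z^(n-1) + … + an, and the two test polynomials are illustrative:

```python
from fractions import Fraction

def hurwitz_stable(coeffs):
    """Routh-Hurwitz test: all roots of a0*z^n + ... + an lie in the open
    left half plane iff a0 > 0 and every leading principal minor of the
    Hurwitz matrix is positive."""
    a = [Fraction(c) for c in coeffs]
    n = len(a) - 1
    get = lambda k: a[k] if 0 <= k <= n else Fraction(0)
    H = [[get(2 * j - i + 1) for j in range(n)] for i in range(n)]

    def det(M):  # recursive Laplace expansion, fine for small n
        if len(M) == 1:
            return M[0][0]
        return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
                   for j in range(len(M)))

    minors = [det([row[:k] for row in H[:k]]) for k in range(1, n + 1)]
    return a[0] > 0 and all(m > 0 for m in minors)

print(hurwitz_stable([1, 3, 2]))   # z^2 + 3z + 2 = (z+1)(z+2) -> True
print(hurwitz_stable([1, -3, 2]))  # roots 1 and 2, not in LHP -> False
```

For n = 2 the minors reduce to a1 > 0 and a1·a2 > 0, matching the classic condition that all coefficients be positive.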
So, we 59 00:16:16,910 --> 00:16:32,110 need to consider this into variable change to 60 00:16:32,110 --> 00:16:53,399 this transformation. So, accordingly 61 00:16:53,399 --> 00:17:10,159 this should be written in the form, so we can 62 00:17:10,159 --> 00:17:19,740 simplify this. 63 00:17:19,740 --> 00:17:20,740 .. 64 00:17:20,740 --> 00:18:07,890 So, here this get cancelled minus 1 term there, so 1 minus z. So, this will be 1 minus z 65 00:18:07,890 --> 00:18:19,669 square. So, this will be 1 minus z square. So, minus h by 2 z square and this is usual 66 00:18:19,669 --> 00:18:43,279 expansion. So, we multiply, sorry this is minus, this must be minus. So, this will be 67 00:18:43,279 --> 00:19:49,740 these terms then so this is minus h bar z square 68 00:19:49,740 --> 00:20:03,270 plus 4 plus 3 h bar z. This is this 2 h. So, this is 69 00:20:03,270 --> 00:20:25,970 a 0 z square. Now, for k equals to this is the necessary 70 00:20:25,970 --> 00:20:32,509 and sufficient condition accordingly. So, using 71 00:20:32,509 --> 00:21:05,840 part a, we get h bar to be is the region of absolute stability. So, this is Routh-Hurwitz 72 00:21:05,840 --> 00:21:17,919 criteria, right? So far these stability methods, these two general methods, one is Schur 73 00:21:17,919 --> 00:21:24,820 criterion, other is Routh-Hurwitz criterion. So, they give you using some kind of 74 00:21:24,820 --> 00:21:34,110 approach, the Schur criterion via Schur polynomial and then Routh-Hurwitz criterion via 75 00:21:34,110 --> 00:21:40,621 in kind of a transformation. So, but there are another general methods, 76 00:21:40,621 --> 00:21:47,110 in the sense you try to after all it is a difference equation, so you try to find the 77 00:21:47,110 --> 00:21:50,190 characteristic polynomial and find the roots so 78 00:21:50,190 --> 00:21:55,780 that like a general recurrence relation you get the solution.
See once we have a 79 00:21:55,780 --> 00:22:01,269 recurrence relation which is also a finite difference method, right? So, we can get the 80 00:22:01,269 --> 00:22:12,050 solution using the roots and hence analyze what happens to this solution, ok? So, let 81 00:22:12,050 --> 00:22:26,500 us see one such method. 82 00:22:26,500 --> 00:22:27,500 .. 83 00:22:27,500 --> 00:22:46,139 Stability of multi step methods parasitic term. So, we will see what is this parasitic 84 00:22:46,139 --> 00:22:53,470 term? So, let us see with an example, so consider 85 00:22:53,470 --> 00:23:07,460 the multi step method. So, obviously this is 86 00:23:07,460 --> 00:23:24,529 for n greater than equals to 1 so then so with reference to absolute stability the reference 87 00:23:24,529 --> 00:23:52,639 equation is lambda y. So, accordingly these become 2 h bar y n, where h bar is, so the 88 00:23:52,639 --> 00:24:16,210 characteristic polynomial r square minus this, right? Now, the 89 00:24:16,210 --> 00:24:42,049 roots are given by 2 h, so this is given by, so these are the roots. 90 00:24:42,049 --> 00:24:43,049 . 91 00:24:43,049 --> 00:25:01,050 .So, let us analyze, so what were the roots r equals to. Now, consider first root, call 92 00:25:01,050 --> 00:25:17,769 it r 0 so this one plus 93 00:25:17,769 --> 00:25:28,849 so this is h bar plus. So, having power series expansion we get 1 3. So, I 94 00:25:28,849 --> 00:25:43,450 write that first, then h here, then h square by 2 plus the higher order terms. So, if you 95 00:25:43,450 --> 00:25:52,279 try to approximate see what was our equation and 96 00:25:52,279 --> 00:26:00,220 naturally the exact solution must be of this form. 97 00:26:00,220 --> 00:26:07,509 So, if we approximate and try to identify with the exact solution y a of course, this 98 00:26:07,509 --> 00:26:20,759 is indeed e power lambda h bar. 
Now, consider 99 00:26:20,759 --> 00:26:38,869 r 1 h bar minus, so this is h bar minus, so this can be approximated as minus 1 minus 100 00:26:38,869 --> 00:27:00,009 h bar plus h bar by 2 plus third order terms. Now, this can be approximated as minus e power 101 00:27:00,009 --> 00:27:14,249 lambda h bar, I am sorry this is lambda which is e power h and this is e power minus 102 00:27:14,249 --> 00:27:16,889 lambda h, ok? . 103 00:27:16,889 --> 00:27:29,380 So, one you have this type and other is this type. However, once we have the roots the 104 00:27:29,380 --> 00:27:45,169 general solution y n is given by c 0 r 0 power n c 1 r power n, right? So, there we have 105 00:27:45,169 --> 00:27:51,279 r 0 is behaving like e power lambda h and r 106 00:27:51,279 --> 00:28:02,440 1 is behaving like this. Now, let us take real 107 00:28:02,440 --> 00:28:10,289 positive then what happens, lambda real positive. So, we need to compare because the 108 00:28:10,289 --> 00:28:18,359 general solution is this. Now, we need to identify which one is dominating 109 00:28:18,359 --> 00:28:26,419 and really the solution of the difference equation gives stable solutions, 110 00:28:26,419 --> 00:28:29,919 right? So, then what happens if lambda is real 111 00:28:29,919 --> 00:28:47,799 and positive r 0 is greater than this. Then what happens? This grows faster than this. 112 00:28:47,799 --> 00:28:48,799 So, 113 00:28:48,799 --> 00:29:21,649 .what happens r 1 power n increases less rapidly then r 0 power n, implies c 0 r 0 power n 114 00:29:21,649 --> 00:29:33,799 dominates, this dominates. So, that means the other term, so out of your 115 00:29:33,799 --> 00:29:49,450 solution c 0 e power lambda h n e power lambda h n. This is dominating this for this 116 00:29:49,450 --> 00:29:58,419 is a case which is the exact solution and this 117 00:29:58,419 --> 00:30:24,729 is the parasitic term. However, in this case does not create any trouble. 
So, why do we 118 00:30:24,729 --> 00:30:29,669 call this parasitic term and then why we conclude all this? Of course, this is exact 119 00:30:29,669 --> 00:30:42,269 solution for the given solution, but this is an extra. So, this is also called extraneous 120 00:30:42,269 --> 00:30:48,179 part and this is the parasitic term, but however 121 00:30:48,179 --> 00:30:50,243 in this case does not create any problem. The 122 00:30:50,243 --> 00:30:57,310 overall solution grows as per the exact solution. So, that means for these values of 123 00:30:57,310 --> 00:31:01,089 lambda the solution is stable. . 124 00:31:01,089 --> 00:31:12,029 . So, let us consider now the other case, lambda 125 00:31:12,029 --> 00:31:36,239 is negative. Then for of course, h is positive. So, then c 1 r power n will dominate 126 00:31:36,239 --> 00:32:01,289 c 0 r power n, as n goes to infinity for a fixed h. Therefore, so the parasitic term 127 00:32:01,289 --> 00:32:25,599 which is h, so this parasitic term increases. So, 128 00:32:25,599 --> 00:32:36,710 this creates a problem, so that is how so for lambda real and positive it agrees with 129 00:32:36,710 --> 00:32:40,549 the true solution, whereas for lambda negative 130 00:32:40,549 --> 00:32:44,320 this parasitic term creates a problem. So, this 131 00:32:44,320 --> 00:32:48,579 is another kind of a stability analysis, ok? 132 00:32:48,579 --> 00:32:49,579 .. 133 00:32:49,579 --> 00:33:03,110 Let us see this little more from little theoretical point of view, relative and weak stability. 134 00:33:03,110 --> 00:33:15,809 So, relative and weak stability. So, with respect to what it is relative we will see. 135 00:33:15,809 --> 00:33:35,549 So, consider this, so general solution is given 136 00:33:35,549 --> 00:33:50,499 by. So, for a general method this is 0 to n c j r j 137 00:33:50,499 --> 00:34:16,060 lambda h, right? 
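The parasitic-root behaviour described above, for the midpoint (leapfrog) method y_{n+1} = y_{n-1} + 2h f_n applied to y′ = λy, can be checked numerically; a minimal sketch:

```python
import cmath

# Roots of the stability polynomial r^2 - 2*hbar*r - 1 = 0, hbar = lambda*h.
def roots(hbar):
    s = cmath.sqrt(hbar * hbar + 1)
    return hbar + s, hbar - s  # principal root r0, parasitic root r1

r0, r1 = roots(-0.1)  # lambda < 0: the true solution decays
print(abs(r0) < 1, abs(r1) > 1)  # -> True True
```

Because the product of the two roots is −1, whenever the principal root has magnitude below one the parasitic root has magnitude above one, which is exactly the weakly stable behaviour discussed in the lecture.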
The parasitic terms power n goes to 0 y a, this is true because as h 138 00:34:16,060 --> 00:34:25,200 goes to 0 the parasitic terms goes to 0. However, 139 00:34:25,200 --> 00:34:42,929 for finite h for any x n 1 would expect it to 140 00:34:42,929 --> 00:34:56,260 be small compared to c. 141 00:34:56,260 --> 00:35:04,710 So, that means we are bringing out the concept of relative stability. So, relative with 142 00:35:04,710 --> 00:35:17,430 respect to, what? So, relative with respect to the a h, the principle exact r 0. So, this 143 00:35:17,430 --> 00:35:30,920 implies 144 00:35:30,920 --> 00:35:47,490 for sufficiently small h. So, this is relative 145 00:35:47,490 --> 00:35:56,180 is a comparison the parasitic terms with the exact, so extraneous terms with the 146 00:35:56,180 --> 00:36:03,660 exact, right? 147 00:36:03,660 --> 00:36:04,660 .. 148 00:36:04,660 --> 00:36:42,370 So, a method is relatively stable if the characteristic roots 149 00:36:42,370 --> 00:36:55,790 satisfying 150 00:36:55,790 --> 00:37:10,520 for all sufficiently small. So, that means let there be some parasitic 151 00:37:10,520 --> 00:37:20,450 terms, but compared to the exact, these extraneous must be small for sufficiently 152 00:37:20,450 --> 00:37:58,550 small lambda h. The impact satisfies the strong root condition if a h, these roots which are, 153 00:37:58,550 --> 00:38:02,470 which would contribute to the extraneous terms if the magnitude it. 154 00:38:02,470 --> 00:38:12,870 Of course, for a h 0 if the magnitude is less than 1 then we say the strong root condition, 155 00:38:12,870 --> 00:38:54,640 right? So, remark relative stability does not imply strong root condition and if a linear 156 00:38:54,640 --> 00:39:20,360 multi step method is stable, but not relatively stable. 
It is weakly stable, so relative 157 00:39:20,360 --> 00:39:25,820 stability does not imply strong root condition and if a method is stable, but not relatively 158 00:39:25,820 --> 00:39:33,130 stable, then we say it is weakly stable, ok? 159 00:39:33,130 --> 00:39:34,130 .. 160 00:39:34,130 --> 00:39:53,320 So for example, the case y n plus 1 plus 2 h f n, so far in this case we have seen. So, 161 00:39:53,320 --> 00:40:05,820 this does not hold. So, hence this method is weakly 162 00:40:05,820 --> 00:40:22,520 stable, so that is how we can categorize the relative stability and weakly stability. 163 00:40:22,520 --> 00:40:30,660 So, we have some criterions to systematically capture the interval of stability region of 164 00:40:30,660 --> 00:40:34,050 stability, but there is another concept to analyze 165 00:40:34,050 --> 00:40:38,690 the parasitic terms and then say how the method behaves, ok? 166 00:40:38,690 --> 00:40:39,690 . 167 00:40:39,690 --> 00:41:17,170 .So, let us analyze with some more examples. So, for example consider right. So, this 168 00:41:17,170 --> 00:42:29,250 implies on y dash we get. So, we have, so this will be the characteristic polynomial, 169 00:42:29,250 --> 00:43:03,940 then roots. So, if r is this we get these roots, 170 00:43:03,940 --> 00:43:05,660 ok? . 171 00:43:05,660 --> 00:44:13,130 Accordingly z 1, z 1 which is so this would be written as this can be written as. So, 172 00:44:13,130 --> 00:44:20,080 this is for what if h goes to 0, r goes to 0, r square 173 00:44:20,080 --> 00:44:25,690 goes to 0. So, with that we can linearize and 174 00:44:25,690 --> 00:44:41,690 try to get this. So, you can identify, so we have done it in the last example. So, 1 175 00:44:41,690 --> 00:45:09,210 plus lambda h equal e power lambda h. So, as h 176 00:45:09,210 --> 00:45:29,310 goes to 0, z 1 behaves like, it has to behave like this. Therefore, y n c 1 z 1 power n 177 00:45:29,310 --> 00:46:04,350 plus c 2, then so this is c 1. 
So, this agrees with 178 00:46:04,350 --> 00:46:25,100 exact solution and this is parasitic term. Now, if lambda is negative, 179 00:46:25,100 --> 00:46:40,650 this grows and creates trouble. So, this is another example where we could analyze. 180 00:46:40,650 --> 00:46:54,070 Now, for these multi step methods there is a small technique, given your characteristic 181 00:46:54,070 --> 00:46:59,580 first and second characteristic polynomials are rho of zeta and then sigma of zeta. 182 00:46:59,580 --> 00:47:03,200 So, for a particular method if rho is given, 183 00:47:03,200 --> 00:47:05,430 can we determine the corresponding sigma?
So, this can be written as rho of e h minus h, so it 198 00:52:56,160 --> 00:53:25,510 is right. So, T i plus 1 this must be behaving like 199 00:53:25,510 --> 00:53:32,500 this, ok? . 200 00:53:32,500 --> 00:53:51,700 So, this implies we can get some new constant because there are some terms left out after 201 00:53:51,700 --> 00:54:07,670 getting cancelled x i, so that I have put it to this constant. So, this is the relation 202 00:54:07,670 --> 00:54:11,370 using which when rho is given. How to determine 203 00:54:11,370 --> 00:54:24,680 sigma? So, let us work out some problems, how to determine exactly 204 00:54:24,680 --> 00:54:30,370 for a given example using this relation in the coming lecture. 205 00:54:30,370 --> 00:54:32,710 Until then thank you, bye. 206 00:54:32,710 --> 00:54:32,710 .
# Homeomorphism

In the mathematical field of topology, a homeomorphism or topological isomorphism or bicontinuous function is a continuous function between topological spaces that has a continuous inverse function. Homeomorphisms are the isomorphisms in the category of topological spaces—that is, they are the mappings that preserve all the topological properties of a given space. Two spaces with a homeomorphism between them are called homeomorphic, and from a topological viewpoint they are the same.

Roughly speaking, a topological space is a geometric object, and the homeomorphism is a continuous stretching and bending of the object into a new shape. Thus, a square and a circle are homeomorphic to each other, but a sphere and a donut are not. An often-repeated mathematical joke is that topologists can't tell their coffee cup from their donut,[1] since a sufficiently pliable donut could be reshaped to the form of a coffee cup by creating a dimple and progressively enlarging it, while shrinking the hole into a handle. Topology is the study of those properties of objects that do not change when homeomorphisms are applied. As Henri Poincaré famously said, mathematics is not the study of objects, but instead, the relations (isomorphisms for instance) between them.

Definition

A function f : X → Y between two topological spaces (X, TX) and (Y, TY) is called a homeomorphism if it has the following properties:

• f is a bijection (one-to-one and onto),
• f is continuous,
• the inverse function f⁻¹ is continuous (f is an open mapping).

A function with these three properties is sometimes called bicontinuous. If such a function exists, we say X and Y are homeomorphic. A self-homeomorphism is a homeomorphism from a topological space to itself. Being homeomorphic is an equivalence relation on the class of all topological spaces. The resulting equivalence classes are called homeomorphism classes.

Examples

A trefoil knot is homeomorphic to a circle.
Continuous mappings are not always realizable as deformations. Here the knot has been thickened to make the image understandable.

• The unit 2-disc D² and the unit square in R² are homeomorphic.
• The open interval (a, b) is homeomorphic to the real numbers R for any a < b.
• The product space S¹ × S¹ and the two-dimensional torus are homeomorphic.
• Every uniform isomorphism and isometric isomorphism is a homeomorphism.
• The 2-sphere with a single point removed is homeomorphic to the set of all points in R² (a 2-dimensional plane).
• Let A be a commutative ring with unity and let S be a multiplicative subset of A. Then Spec(A_S) is homeomorphic to {p ∈ Spec(A) : p ∩ S = ∅}.
• Rᵐ and Rⁿ are not homeomorphic for m ≠ n.
• The Euclidean real line is not homeomorphic to the unit circle as a subspace of R², as the unit circle is compact as a subspace of Euclidean R² but the real line is not compact.

Notes

The third requirement, that f⁻¹ be continuous, is essential. Consider for instance the function f : [0, 2π) → S¹ defined by f(φ) = (cos(φ), sin(φ)). This function is bijective and continuous, but not a homeomorphism (S¹ is compact but [0, 2π) is not).

Homeomorphisms are the isomorphisms in the category of topological spaces. As such, the composition of two homeomorphisms is again a homeomorphism, and the set of all self-homeomorphisms X → X forms a group, called the homeomorphism group of X, often denoted Homeo(X); this group can be given a topology, such as the compact-open topology, making it a topological group. For some purposes, the homeomorphism group happens to be too big, but by means of the isotopy relation, one can reduce this group to the mapping class group. Similarly, as usual in category theory, given two spaces that are homeomorphic, the space of homeomorphisms between them, Homeo(X, Y), is a torsor for the homeomorphism groups Homeo(X) and Homeo(Y), and given a specific homeomorphism between X and Y, all three sets are identified.
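The failure of inverse continuity in the Notes can be seen numerically; a sketch (the seam point near (1, 0) and the tolerances are arbitrary choices):

```python
import math

# f : [0, 2*pi) -> S^1, f(phi) = (cos(phi), sin(phi)) is a continuous
# bijection, but its inverse tears the circle at (1, 0): points just
# below the positive x-axis map back near 2*pi, not near 0.
def f(phi):
    return (math.cos(phi), math.sin(phi))

def f_inv(p):
    x, y = p
    return math.atan2(y, x) % (2 * math.pi)

eps = 1e-6
above = f(eps)                 # just above the seam
below = f(2 * math.pi - eps)   # just below the seam

# The two circle points are very close together ...
dist = math.hypot(above[0] - below[0], above[1] - below[1])
# ... but their preimages are nearly 2*pi apart.
print(dist < 1e-5, abs(f_inv(above) - f_inv(below)) > 6)  # -> True True
```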
Properties

• Two homeomorphic spaces share the same topological properties. For example, if one of them is compact, then the other is as well; if one of them is connected, then the other is as well; if one of them is Hausdorff, then the other is as well; their homotopy and homology groups will coincide. Note however that this does not extend to properties defined via a metric; there are metric spaces that are homeomorphic even though one of them is complete and the other is not.
• A homeomorphism is simultaneously an open mapping and a closed mapping; that is, it maps open sets to open sets and closed sets to closed sets.
• Every self-homeomorphism of $$S^1$$ can be extended to a self-homeomorphism of the whole disk $$D^2$$ (Alexander's trick).

Informal discussion

The intuitive criterion of stretching, bending, cutting and gluing back together takes a certain amount of practice to apply correctly—it may not be obvious from the description above that deforming a line segment to a point is impermissible, for instance. It is thus important to realize that it is the formal definition given above that counts. This characterization of a homeomorphism often leads to confusion with the concept of homotopy, which is actually defined as a continuous deformation, but from one function to another, rather than one space to another. In the case of a homeomorphism, envisioning a continuous deformation is a mental tool for keeping track of which points on space X correspond to which points on Y—one just follows them as X deforms. In the case of homotopy, the continuous deformation from one map to the other is of the essence, and it is also less restrictive, since none of the maps involved need to be one-to-one or onto. Homotopy does lead to a relation on spaces: homotopy equivalence. There is a name for the kind of deformation involved in visualizing a homeomorphism.
It is (except when cutting and regluing are required) an isotopy between the identity map on X and the homeomorphism from X to Y.

See also

• Local homeomorphism
• Diffeomorphism
• Uniform isomorphism (an isomorphism between uniform spaces)
• Isometric isomorphism (an isomorphism between metric spaces)
• Dehn twist
• Homeomorphism (graph theory) (closely related to graph subdivision)
• Isotopy
• Mapping class group
• Poincaré conjecture

References

^ Hubbard, John H.; West, Beverly H. (1995). Differential Equations: A Dynamical Systems Approach. Part II: Higher-Dimensional Systems. Texts in Applied Mathematics 18. Springer. p. 204. ISBN 978-0387943770.
# Discussion of Question with ID = 101 under Missing-Numbers

## This is the discussion forum for this question.

If you find any mistakes in the solution, or if you have a better solution, then this is the right place to discuss. A healthy discussion helps all of us, so you are requested to be polite and soft, even if you disagree with the views of others. The question and its current solution have also been given on this page.

### Question

Examine this series: 3, 4, 10, 33, 136. If another series is written with the same rule, what number would come in the place E: 7, A, B, C, D, E?

A 1160
B 920
C 1165
D 840

Soln. Ans: c

The number 1165 will come in place of (E). 4 = 3 × 1 + 1, then 4 × 2 + 2 = 10, then 10 × 3 + 3 = 33, then 33 × 4 + 4 = 136. Similarly, A = 7 × 1 + 1 = 8, B = 8 × 2 + 2 = 18, C = 18 × 3 + 3 = 57, then D = 57 × 4 + 4 = 232, E = 232 × 5 + 5 = 1165.
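The rule in the solution is easy to check programmatically; a quick sketch:

```python
# Rule from the solution: each term is (previous * n) + n for n = 1, 2, 3, ...
def extend(first, steps):
    seq = [first]
    for n in range(1, steps + 1):
        seq.append(seq[-1] * n + n)
    return seq

print(extend(3, 4))  # -> [3, 4, 10, 33, 136]
print(extend(7, 5))  # -> [7, 8, 18, 57, 232, 1165]
```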
Question: A helicopter is equipped with three engines that operate independently. The probability of an engine failure is 0.01. What is the probability of a successful flight if only one engine is needed for the successful operation of the aircraft?

Solution: Since the flight is unsuccessful only when all three engines fail, the probability of an unsuccessful flight is: 0.01 × 0.01 × 0.01 = 0.000001. The probability of a successful flight = 1 − 0.000001 = 0.999999.
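A small sketch of the complement rule used in the solution; the variable names are my own:

```python
p_fail = 0.01   # probability that a single engine fails
n_engines = 3

# The flight fails only if all three independent engines fail.
p_all_fail = p_fail ** n_engines
p_success = 1 - p_all_fail

print(p_all_fail)  # ≈ 1e-06
print(p_success)   # ≈ 0.999999
```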
Question

# Find the value of $1\text{ + 222 }\div \text{ 20 + 4444 }\div \text{ 400}$.

By the order of operations, the two divisions are performed first and the results are then added:

\begin{align} & 1\text{ + 222 }\div \text{ 20 + 4444 }\div \text{ 400} \\ & =\text{ 1 + }\dfrac{222}{20}\text{ + }\dfrac{4444}{400} \\ & =\text{ 1 + 11}\text{.1 + 11}\text{.11} \\ & \text{= 23}\text{.21} \\ \end{align}

For comparison, evaluating strictly from left to right, ignoring the order of operations, gives an incorrect result:

\begin{align} & 1\text{ + 222 }\div \text{ 20 + 4444 }\div \text{ 400} \\ & =\text{ 223 }\div \text{ 20 + 4444 }\div \text{ 400} \\ & =\text{ }\dfrac{223}{20}\text{ + 4444 }\div \text{ 400} \\ & \text{= 11}\text{.15 + 4444 }\div \text{ 400} \\ & =\text{ 4455}\text{.15 }\div \text{ 400} \\ & =\text{ 11}\text{.137875} \\ \end{align}
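As an illustrative check, Python's arithmetic follows the standard order of operations, so the correct evaluation and the left-to-right mistake can be compared directly:

```python
# Python follows the standard order of operations, so this mirrors the
# correct evaluation: both divisions first, then the additions.
value = 1 + 222 / 20 + 4444 / 400
print(round(value, 2))  # 23.21

# Evaluating strictly from left to right gives the incorrect result.
wrong = ((1 + 222) / 20 + 4444) / 400
print(round(wrong, 6))  # 11.137875
```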
# Problem: Size an Air Compressor and Receiver

A company has a machine that uses 50 cfm at 100 psi. Size an air compressor in SCFM that will have a duty cycle of 40%. Then size an air receiver in gallons that would supply that SCFM for three (3) minutes with the pressure dropping from 100 psi to 80 psi.

##### Find Out The Solution

I have two equations that I use for sizing an air compressor with a duty cycle:

SCFM = CFM × C.R. / Duty Cycle

CFM = SCFM / C.R. / Duty Cycle

Note: (C.R.) is the compression ratio. These two equations are not transpositions of one another, because dividing by the duty cycle oversizes the result in both directions.

Step 1: Find C.R.

(100 + 14.7) / 14.7 = 7.8 : 1 C.R.

We are looking for SCFM, so let's use the first equation and substitute in our numbers.

Step 2: SCFM = 50 × 7.8 / 0.40 = 975 SCFM per machine.

We multiply the CFM by the C.R. to get SCFM and then we divide by 0.4 to oversize the compressor so that it will only run 40% of the time, allowing for expansion of our system in the future.

Again I have two equations, one for sizing the tank in cubic feet and one for sizing in gallons:

T (min.) = V (cu. ft.) × (P2 − P1) / (14.7 × SCFM)

T (min.) = V (gal.) × (P2 − P1) / (110 × SCFM)

We are looking for the receiver size in gallons, so let's use the second equation, solved for V:

V (gal.) = T × 110 × SCFM / (P2 − P1) = 3 × 110 × 975 / 20 = 16,087.5 gallon receiver per machine

Here 3 is the time in minutes, 110 is a constant, 975 is the SCFM, and 20 psi is the pressure drop (P2 − P1).

### Deadline past. Not available for submissions.

Winner: Jeremy Palm, CFPS, Hennepin Technical College, Eden Prairie, MN

Richard Throop, CFPAI, CFPMT, CFPMM, CFPS, Neff Engineering, Flint, Michigan

By Ernie Parker, AI, AJPP, AJPPCC, S, MT, MM, MIH, MIP, MMH, Fluid Power Instructor, Hennepin Technical College, EParker@Hennepintech.edu

This teaser is printed in the Fluid Power Journal. Those who submit the correct answer before the deadline will have their names printed in the Society Page newsletter and in Fluid Power Journal. The winners will also be entered into a drawing for a special gift.
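As a sketch (not part of the published solution), the two sizing steps can be expressed in a few lines of Python. Note that the worked solution rounds the compression ratio to 7.8 before continuing, and this code mirrors that choice:

```python
ATM_PSI = 14.7  # atmospheric pressure in psi

def compression_ratio(gauge_psi):
    """C.R. = (gauge pressure + atmospheric) / atmospheric."""
    return (gauge_psi + ATM_PSI) / ATM_PSI

cr = round(compression_ratio(100), 1)  # 7.8, rounded as in the worked solution
scfm = 50 * cr / 0.40                  # SCFM = CFM * C.R. / duty cycle
receiver_gal = 3 * 110 * scfm / 20     # V(gal) = T * 110 * SCFM / pressure drop

print(round(scfm, 1))          # 975.0
print(round(receiver_gal, 1))  # 16087.5
```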
## 2 thoughts on “Problem: Size an Air Compressor and Receiver”

1. Mr Sames says: Great work by both students, especially the winner Jeremy Palm, and a bit of hard luck for Richard Throop in not being selected as a winner.

2. loc says: Good information.
Modulus of elasticity, also known as "Young's modulus," is a metric that represents a material's resistance to being deformed elastically. Throughout the site we use the terms "high modulus" and "low modulus" frequently to describe a cured polymer adhesive product. This post is dedicated to spelling out exactly what each of these terms means. Other polymer properties worth checking out:

# High Vs. Low, What Does It Mean?

For those of you just looking for the quick answer, here it is. Modulus of elasticity represents a material's resistance to being deformed, so low values mean low resistance and high values mean high resistance. In other words:

## High Modulus = Stiff

It's as simple as that.

# But How Is The Elastic Modulus Of A Material Specifically Defined?

For those of you looking for a little bit deeper of an explanation, we're going to have to work through some definitions.

• Strain is the amount a material is deformed (stretched) relative to its original dimensions.
• Stress is the amount of force applied to the material and can be thought of as the material's resistance to strain (stretching).
• A Stress-Strain Curve is a graphical representation of the relationship between stress and strain for a particular material. Stress is plotted as a function of strain (strain along the x-axis, stress along the y-axis). The curve shows how much the material will resist various amounts of stretching. If the curve has a high slope, then it takes a lot of force to stretch the material a small amount (stiff). If the curve has a low slope, then it takes very little force to stretch the material very far (flexible).
• The Elastic Deformation Region of a stress-strain curve is the region in which the material exhibits elastic behavior (if allowed, the material will elastically return to its original dimensions).
• The Elastic Modulus of a material is the slope of the stress-strain curve in the elastic deformation region.
When we put all these definitions together, this of course fits with our understanding above: the elastic modulus of a material represents a material's resistance to being deformed elastically.
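As an illustrative sketch, the elastic modulus can be estimated as the slope of a measured stress-strain curve in the elastic region. The data points below are invented for illustration and lie on a straight line through the origin:

```python
# Strain is dimensionless; stress is in MPa. These points are invented
# and lie on a line through the origin with slope 2000 MPa (2 GPa).
strains = [0.000, 0.001, 0.002, 0.003, 0.004]
stresses = [0.0, 2.0, 4.0, 6.0, 8.0]

# Least-squares slope of a line through the origin: E = sum(s*e) / sum(e*e)
numerator = sum(s * e for s, e in zip(stresses, strains))
denominator = sum(e * e for e in strains)
elastic_modulus = numerator / denominator

print(round(elastic_modulus, 6))  # 2000.0
```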
What is 914 tenths? 914 tenths could be used to describe time, distance, money, and many other things. 914 tenths means that if you divide something into ten equal parts, 914 tenths is 914 of those parts that you just divided up. We converted 914 tenths into different things below to explain further: 914 tenths as a Fraction Since 914 tenths is 914 over ten, 914 tenths as a Fraction is 914/10. 914 tenths as a Decimal If you divide 914 by ten you get 914 tenths as a decimal which is 91.40. 914 tenths as a Percent To get 914 tenths as a Percent, you multiply the decimal with 100 to get the answer of 9140 percent. 914 tenths of a dollar First we divide a dollar into ten parts where each part is 10 cents. Then we multiply 10 cents with 914 and get 9140 cents or 91 dollars and 40 cents. Tenths Need to look up another number? Enter another number of tenths below. What is 915 tenths? Go here for the next "tenths" number we researched and explained for you.
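A small sketch of the same conversions for an arbitrary number of tenths; the function name `tenths` is my own:

```python
def tenths(n):
    """Convert n tenths to the representations described above."""
    decimal = n / 10
    return {
        "fraction": f"{n}/10",
        "decimal": decimal,
        "percent": decimal * 100,
        "dollars_cents": divmod(n * 10, 100),  # a tenth of a dollar is 10 cents
    }

result = tenths(914)
print(result["decimal"])        # 91.4
print(result["dollars_cents"])  # (91, 40)
```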
Build a Better Process

# Portfolio Risk Definition, Calculation and Quiz

Few people truly understand the complexities of portfolio risk, and this presents a career opportunity.

1. Define - Define investment portfolio risk.
2. Context - Use portfolio risk in a sentence.
3. Quiz - Test yourself.

by Paul Alan Davis, CFA
Updated: February 18, 2021

The calculation can begin with portfolio holdings or portfolio returns. Learn when to use each one below.

/ factorpad.com / fin / glossary / portfolio-risk.html

## Understanding Investment Portfolio Risk

Beginner

Portfolio risk is the general term for the riskiness, or dispersion of returns, of a portfolio of assets. There are several different ways risk can be measured, calculated and interpreted, depending on whether the source data requires a holdings-based or a returns-based calculation.

A holdings-based calculation is a bottom-up calculation using individual positions and weights. It requires three inputs: the weight of each asset in the portfolio, the variance of each asset and the covariance between each pair of assets. Covariances are typically calculated and stored in a stock-by-stock or factor-based covariance matrix.

Below is the formula for portfolio variance for a two-stock portfolio. Portfolio standard deviation is found by taking the square root of portfolio variance.

• Portfolio Variance = ABC Weight^2 * ABC Variance + XYZ Weight^2 * XYZ Variance + 2 * (ABC Weight * XYZ Weight * Covariance of ABC and XYZ)

The ABC Weight^2 above means the portfolio weight to ABC is squared. The number of terms for a two-asset portfolio is four, which matches the number of cells of a 2-by-2 stock-by-stock covariance matrix. The number of terms for a three-asset portfolio is nine, and so forth.

When calculating returns-based portfolio risk from a stream of portfolio values, like the NAV of a mutual fund, the variance and standard deviation are computed directly from those returns.
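As an illustrative sketch of the two-stock formula above, with invented weights, variances and covariance:

```python
import math

# Illustrative (invented) inputs for a two-stock portfolio of ABC and XYZ.
w_abc, w_xyz = 0.6, 0.4            # portfolio weights
var_abc, var_xyz = 0.04, 0.09      # variances of each stock's returns
cov_abc_xyz = 0.012                # covariance between the two stocks

# Portfolio variance per the two-stock formula.
portfolio_variance = (
    w_abc ** 2 * var_abc
    + w_xyz ** 2 * var_xyz
    + 2 * w_abc * w_xyz * cov_abc_xyz
)
# Portfolio standard deviation is the square root of portfolio variance.
portfolio_std = math.sqrt(portfolio_variance)

print(round(portfolio_variance, 5))  # 0.03456
print(round(portfolio_std, 4))       # 0.1859
```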
The tracking error portfolio risk measure is for relative risk and is the standard deviation of active returns. In addition, some consider VaR, or Value at Risk, a useful measure of portfolio risk, particularly in banking contexts.

Synonym: portfolio variability

For context, portfolio risk can be evaluated using other measures not discussed here. It is the analyst's job to carefully select the appropriate measure for the appropriate context when evaluating and reporting portfolio risk, whether compared against other portfolios or benchmarks (a relative basis) or taken on its own (an absolute basis).

### In a Sentence

Pat:  As your CEO, I encourage every employee to learn the calculations of portfolio risk.

Eve:  Yes, I concur. It's the best way to reduce career risk.

### Video

Many terms have 4-5 minute videos showing a derivation and explanation. If this term had one, it would appear here. Videos can also be accessed from our YouTube Channel.

### Video Script

If this term had a video, the script would be here.

### Quiz

Portfolio Risk is normally depicted on the x-axis in a risk-return scatterplot. | True or False?

True

Standard Deviation is one of which type of portfolio risk calculations? | Absolute or Relative?

Absolute. You don't need a comparison benchmark to compute portfolio standard deviation.

Still unclear on Portfolio Risk? Check out the Quant 101 Series, and specifically A faster way to calculate portfolio risk, and remember it too.

### Related Terms

Our trained humans found other terms in the category portfolio risk you may find helpful.

## What's Next?

New videos are coming your way on our YouTube Channel. Subscribe now.

• For all Glossary terms, click Outline.
• For a review of Portfolio Return, hit Back.
• To help bring up the knowledge level globally, click Tip.
• To dig a bit deeper into risk with Portfolio Specific Risk, click Next.
# RT sides

Find the sides of a right triangle if its legs satisfy a + b = 17 cm and the radius of the inscribed circle is ρ = 2 cm.

Result

c = 13 cm
a = 12 cm
b = 5 cm

#### Solution:

For a right triangle the inradius satisfies ρ = (a + b − c)/2, so 2 = (17 − c)/2 and c = 13 cm. Combining a + b = 17 with the Pythagorean theorem a² + b² = c² = 169 gives ab = ((a + b)² − (a² + b²))/2 = (289 − 169)/2 = 60, so a and b are the roots of x² − 17x + 60 = 0, namely a = 12 cm and b = 5 cm.

Check out the calculation with our calculator of quadratic equations. Try the calculation via our triangle calculator.

#### To solve this example you need the following knowledge from mathematics:

Looking for help with calculating the roots of a quadratic equation? Do you have a linear equation or a system of equations and are looking for its solution? Or do you have a quadratic equation? The Pythagorean theorem is the basis for the right triangle calculator. See also our trigonometric triangle calculator.

## Next similar examples:

1. Garden: The area of a square garden is 6/4 of a triangle garden with sides 56 m, 35 m, and 35 m. How many meters of fencing are needed to fence the square garden?
2. Triangle SAS: Calculate the area and perimeter of the triangle if two sides are 51 cm and 110 cm long and the angle between them is 130°.
3. Is right?: Is the triangle with sides 51, 56 and 77 a right triangle?
4. Perimeter of RT: Find the circumference of a right triangle if the sum of its legs is 22.5 cm and its area is 62.5 cm².
5. Height of the room: Given the floor area of a room as 24 feet by 48 feet and the space diagonal of the room as 56 feet, can you find the height of the room?
6. Pyramid height: Find the volume of a regular triangular pyramid with edge length a = 12 cm and pyramid height h = 20 cm.
7. Medians 2:1: The median to side b (tb) in triangle ABC is 12 cm long. a) What is the distance of the centroid T from vertex B? b) Find the distance between T and side b.
8. Triangular prism: Calculate a triangular prism if it has a right-triangle base with a = 4 cm and hypotenuse c = 50 mm, and the height of the prism is 0.12 dm.
9.
Trapezoid MO: The rectangular trapezoid ABCD with a right angle at point B, |AC| = 12, |CD| = 8, and diagonals perpendicular to each other. Calculate the perimeter and area of the trapezoid.
10. Rectangle: The rectangle is 11 cm long and 45 cm wide. Determine the radius of the circle circumscribing the rectangle.
11. Circle chord: What is the length d of a chord of a circle of diameter 36 m, if its distance from the center of the circle is 16 m?
12. Right Δ: A right triangle has one leg of length 28 cm and a hypotenuse of length 53 cm. Calculate the height of the triangle.
13. Axial section: The axial section of a cone is an equilateral triangle with area 208 dm². Calculate the volume of the cone.
14. Trigonometric functions: In a right triangle: ? Determine the value of s and c: ? ?
15. Tetrahedral pyramid: What is the surface of a regular tetrahedral (four-sided) pyramid if the base edge a = 7 and the height v = 6?
16. Short cut: Imagine that you are going to a friend's. The path has a length of 330 meters. Then you turn left and go another 2000 meters, and you are at your friend's. The question is how much shorter the journey would be if you went directly across the field.
17. Gimli Glider: A Boeing 767 aircraft loses both engines at 42,000 feet. The captain maintains optimum gliding conditions. Every minute the plane loses 1910 feet while maintaining a constant speed of 211 knots. Calculate how long it takes the plane to hit the ground from the moment of engine failure.
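Returning to the RT sides problem at the top of this page, its solution can be sketched in a few lines of Python:

```python
import math

# Inputs from the problem statement.
leg_sum = 17.0  # a + b in cm
rho = 2.0       # inradius in cm

# For a right triangle, rho = (a + b - c) / 2, which gives the hypotenuse.
c = leg_sum - 2 * rho

# a + b and a^2 + b^2 = c^2 determine the product ab, so a and b are the
# roots of x^2 - (a + b) x + ab = 0.
ab = (leg_sum ** 2 - c ** 2) / 2
disc = math.sqrt(leg_sum ** 2 - 4 * ab)
a = (leg_sum + disc) / 2
b = (leg_sum - disc) / 2

print(c, a, b)  # 13.0 12.0 5.0
```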