1. **Parametric equation.** Show by eliminating the parameter $\theta$ that the following parametric equations represent a hyperbola: $x = a\tan\theta$, $y = b\sec\theta$. Not really sure how to even start this problem. Any help would be appreciated!

2. Originally Posted by **kevin11**: *Show by eliminating the parameter $\theta$ that the parametric equations $x = a\tan\theta$, $y = b\sec\theta$ represent a hyperbola.*

Square both equations:

$x^2 = a^2\tan^2\theta \quad\Rightarrow\quad \frac{x^2}{a^2} = \tan^2\theta$

$y^2 = b^2\sec^2\theta \quad\Rightarrow\quad \frac{y^2}{b^2} = \sec^2\theta$

Now use the identity $\sec^2\theta - \tan^2\theta = 1$:

$\frac{y^2}{b^2} - \frac{x^2}{a^2} = \sec^2\theta - \tan^2\theta = 1$

This is the standard equation of a hyperbola (with a vertical transverse axis).
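To see the elimination numerically, here is a quick sketch; the values $a = 2$, $b = 3$ are illustrative assumptions (any nonzero $a$, $b$ work). Every point generated by the parametric equations satisfies $y^2/b^2 - x^2/a^2 = 1$:

```python
import math

# Illustrative check with assumed values a = 2, b = 3: points generated by
# x = a*tan(theta), y = b*sec(theta) all satisfy y^2/b^2 - x^2/a^2 = 1,
# the equation of a hyperbola.
a, b = 2.0, 3.0

for theta in [0.3, 1.0, -0.7, 2.5]:
    x = a * math.tan(theta)
    y = b / math.cos(theta)  # sec(theta) = 1/cos(theta)
    assert abs(y**2 / b**2 - x**2 / a**2 - 1.0) < 1e-9
```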
# How do you solve 3x^2 + 10x + 13 = 0 using the completing the square method?

The required solutions are $x = -\dfrac{5}{3} \pm \dfrac{\sqrt{14}}{3}i$.

#### Explanation:

We are given the quadratic $f(x) = 3x^2 + 10x + 13$ and must use the completing the square method to find the solutions of $3x^2 + 10x + 13 = 0$. We will find the solutions in steps.

**Step 1.** Move the constant term to the right-hand side (RHS):

$3x^2 + 10x = -13$

**Step 2.** Divide each term by 3 to make the coefficient of the $x^2$ term equal to $1$:

$\frac{3}{3}x^2 + \frac{10}{3}x = -\frac{13}{3} \quad\Rightarrow\quad x^2 + \frac{10}{3}x = -\frac{13}{3}$

**Step 3.** We will add a value to each side:

$x^2 + \frac{10}{3}x + \square = -\frac{13}{3} + \square$

In the next step, we figure out the value that goes into the box.

**Step 4.** Divide the coefficient of the $x$ term by $2$ and square it. The coefficient of the $x$ term is $\frac{10}{3}$; dividing by 2 gives $\frac{10}{6} = \frac{5}{3}$, and squaring gives

$\left(\frac{5}{3}\right)^2 = \frac{25}{9}$

Hence the value $\frac{25}{9}$ fills the box in the next step.

**Step 5.** Add $\frac{25}{9}$ to both sides:

$x^2 + \frac{10}{3}x + \frac{25}{9} = -\frac{13}{3} + \frac{25}{9}$

We can write the left-hand side (LHS) as a perfect square:

$\left(x + \frac{5}{3}\right)^2 = \frac{-39 + 25}{9}$

Note that the value $\frac{5}{3}$ inside the square is half the coefficient of the $x$ term.

**Step 6.** Take the square root of both sides; the square root and the square cancel on the LHS:

$x + \frac{5}{3} = \pm\sqrt{\frac{-14}{9}} = \pm\frac{\sqrt{14\cdot(-1)}}{\sqrt{9}}$

In the complex number system, $i = \sqrt{-1}$ and $i^2 = -1$. Hence

$x + \frac{5}{3} = \pm\frac{\sqrt{14\, i^2}}{3} = \pm\frac{\sqrt{14}}{3}i - \frac{5}{3} + \frac{5}{3}$

$x = -\frac{5}{3} \pm \frac{\sqrt{14}}{3}i$

Hence the required solutions are $x = -\dfrac{5}{3} \pm \dfrac{\sqrt{14}}{3}i$. Hope this helps.
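As a quick check of the result (an illustrative sketch, not part of the original answer): substituting both complex roots back into $3x^2 + 10x + 13$ should give zero.

```python
import math

# Roots found by completing the square: x = -5/3 +/- (sqrt(14)/3)i.
roots = (complex(-5/3,  math.sqrt(14)/3),
         complex(-5/3, -math.sqrt(14)/3))

for x in roots:
    # Substitute back into the original quadratic; the result is ~0.
    assert abs(3*x**2 + 10*x + 13) < 1e-9
```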
# Angles on a straight line (Grade 8 geometry worksheets)

Work your way through this compilation of worksheets and examine the angles formed on straight lines, around a point, and by transversals. The key facts the worksheets practise are:

- The sum of angles formed on a straight line is equal to 180°. We can shorten this property as: ∠s on a straight line.
- Two angles whose sizes add up to 180° are called supplementary angles. Angles that share a vertex and a common side are said to be adjacent, so angles on a straight line are adjacent supplementary angles.
- Vertically opposite angles (vert. opp. ∠s) are the angles opposite each other when two lines intersect; they are always equal.
- When two lines are perpendicular, their adjacent supplementary angles are each equal to 90°.
- A transversal is a line that crosses at least two other lines. When a transversal intersects two parallel lines it forms pairs of corresponding, alternate and co-interior angles:
  - Corresponding angles (corr. ∠s) lie on the same side of the transversal in matching positions, and are equal.
  - Alternate angles (alt. ∠s) lie on opposite sides of the transversal; when they lie between the two lines they are alternate interior angles, and when they lie outside the two lines they are alternate exterior angles. With parallel lines, alternate angles are equal.
  - Co-interior angles (co-int. ∠s) lie on the same side of the transversal, between the two lines, and add up to 180°.
- The sum of all 3 interior angles of a triangle is equal to 180°, and the sum of the angles in a quadrilateral is 360°.

These properties let you build an equation each time you solve for an unknown angle, then solve the equation to find the value of the unknown variable. Always give a reason for every statement you make. A typical worked answer from a memo, for a triangle standing on a straight line:

y = 180° − 56° = 124°  [∠s on a straight line]
x = 180° − 144° = 36°  [∠s on a straight line]
z = 180° − (56° + 36°) = 88°  [∠s in a triangle]

And with parallel lines AB ∥ CD cut by a transversal:

x = 74°  [alt. ∠ with given 74°; AB ∥ CD]
y = 74°  [corr. ∠ with x, or vert. opp. ∠]
z = 106°  [co-int. ∠ with given 74°]

One popular set, *Angles on a Straight Line (Worksheets with Answers)* by Maths4Everyone (created 21 Sep 2018, updated 16 Jan 2019), provides three differentiated worksheets (with solutions) that allow students to take the first steps, then strengthen and extend their skills in working with angles that form straight lines. Other collections cover angles around a point, mixed questions (straight line, around a point, vertically opposite, in a triangle), drawing angles with a protractor, and classifying acute, obtuse, right, straight and reflex angles. Fill in all the gaps, then press "Check" to check your answers; use the "Hint" button to get a free letter if an answer is giving you trouble.

All Siyavula textbook content for Mathematics Grade 7, 8 and 9 made available on this site is released under the terms of a Creative Commons Attribution Non-Commercial License; embedded videos, simulations and presentations from external sources are not necessarily covered by this license.
# pocket size or pocket sized Active 6 years, 2 months ago. In this case, phosporus' oxidation number depends strictly on chlorine's "-1" oxidation number. Sciences, Culinary Arts and Personal Find Free Themes and plugins. O.S. The oxidation number of atomic phosphorus is 0. x + 4(-2) = -3 x-8=-3 x=-3+8 x= 5. 1. What Is Reduction? Calculating Oxidation Numbers. Thus, the oxidation state of phosphorus is {eq}\boxed{+5} Hennig Brand discovered phosphorus in 1669, in Hamburg, Germany, preparing it from urine. Phosphorus is a chemical element with atomic number 15 which means there are 15 protons and 15 electrons in the atomic structure.The chemical symbol for Phosphorus is P. Electron Configuration and Oxidation States of Phosphorus. Our videos prepare you to succeed in your college classes. charge on Pu in PuO$_2^{2+}$ + charges on two O's (or 4-) = charge on PuO$_2^{2+}$ or 2+ All other trademarks and copyrights are the property of their respective owners. What type of reaction is this; C + H2O à CO + H2 Hrxn = + 113 kJ? No. Letting x be the oxidation number of phosphorus, 0= 3(+1) + x + 3(-2). What is the oxidation number of phosphorus in the following compound? The oxidation states of H and O are +1 and −2 respectively. x = 5 is that right ? Can You Find The Landmark For This Set Of Numbers: 929, 842, 986, 978, 869, 732, 898,986, 900, 899, 986, 920, 842? In the case of Phosphorus the most common oxidation states is (±3,(5),7). Favourite answer. When the elements are in their free states, then the oxidation number of that element will be zero. In compounds of phosphorus (where known), the most common oxidation numbers of phosphorus are: 5, 3, and -3 . Oxidation Number: The oxidation number of ions will be calculated by the charge of the ions. There are also cool facts about Phosphorus that most don't know about. of Na= +1, O= -2. therefore, 3*1 + x+ 4* -2 = 0 (net charge on the molecule) 3+x -8 =0 What is the oxidation number of phosphorus in CaHPO4? 
Specific Rules: Oxidation state of phosphorus can be +3 or -3, based on the electro-negativity of the elements it gets oxidized in chemical reaction. It has 5 electrons in its valence shell. The sum of the oxidation numbers of a species that is not ionic must equal 0. Phosphorus has a "+5" oxidation number in phosphorus pentachloride, or PCl_5. add the oxidation numbers for each element, using H = +1 and O = -2, let x be the oxidation no for P in this compound. An oxidation number is a positive or negative number that is assigned to an atom to indicate its degree of oxidation or reduction.The term oxidation state is often used interchangeably with oxidation number. Click hereto get an answer to your question ️ The oxidation number of phosphorus in Ba(H2PO2)2 is: Oxidation state or oxidation number is a number assigned to an element in a compound that represents the number of electrons lost or gained by that element. Let the oxidation number of phosphorous be x. Relevance. +6... What are the oxidation numbers of the elements in... What is the oxidation number of the central metal... a) Name the complex ion [Cu(CN)_6]^{4-} . What Is The Orbital Notation Of Phosphorus? Favorite Answer. Electron Configuration of P be x. and we know tht oxidation no. Skip to the end for the short version :o) First things first. {/eq}, Solve the equation for the unknown oxidation state, {eq}+1 If the oxidation number of a particular atom is not known it may be deduced from the known oxidation numbers of other atoms associated with it in a molecule. Real compounds of phosphorus and chlorine are PCl3 (P has oxidation number of +3) and PCl5 (P has oxidation number of +5) Phosphorus has a "+5" oxidation number in phosphorus pentachloride, or PCl_5. Isotopes = 0. Different ways of displaying oxidation numbers of ethanol and acetic acid. Find answers now! Brand was an alchemist and, like other alchemists, he was secretive about his methods. What is the oxidation no. 
When phosphorus forms an ion with the same oxidation number, it is the phosphate, PO 4 3-, ion, as shown in the figure below. (a) HPO32- and (b) PO43- As it is lower in electronegativity, it has larger atoms. The answer to this question is: 3 is the oxidation state of phosphorus in PF3.. Vist BYJU'S for a detailed answer to this question. Ok, so what are the common oxidation states for an atom of P? +3 Phophite (PO_3^(3-)) has a charge of -3, so I'm going to guess you meant to ask what the oxidation state of P was in H_3PO_3 In H_3PO_3 the oxygens will always have a -2 charge and hydrogen is +1. Expert Answer 100% (1 rating) oxidation state/ number of an atom is the charge that an atom would have associated with if it undergo an oxi view the full answer. You know that the ion has a "(3-)" overall charge the ion is made up of one phosphous atom and four oxygen atoms oxygen's oxidation state in most compounds is -2. Calculation of Oxidation Number. View Answer. © copyright 2003-2021 Study.com. 
atomic number: 15: atomic weight: 30.9738: melting point (white) 44.1 °C (111.4 °F) boiling point (white) 280 °C (536 °F) density (white) 1.82 gram/cm 3 at 20 °C (68 °F) oxidation states −3, +3, +5: electron configuration: 1s 2 2s 2 2p 6 3s 2 3p 3 Earn Transferable Credit & Get your Degree, Get access to this video and our entire Q&A library, Assigning Oxidation Numbers to Elements in a Chemical Formula, Titration of a Strong Acid or a Strong Base, Hydrogen Peroxide: Preparation, Properties & Structure, D-Block Elements: Properties & Electron Configuration, Ionization Energy: Trends Among Groups and Periods of the Periodic Table, Disproportionation: Definition & Examples, Electrochemical Salt Bridge: Definition & Purpose, Valence Bond Theory of Coordination Compounds, Limiting Reactant: Definition, Formula & Examples, Enthalpy: Energy Transfer in Physical and Chemical Processes, Coordinate Covalent Bond: Definition & Examples, Standard Enthalpy of Formation: Explanation & Calculations, Bond Order: Definition, Formula & Examples, Atomic and Ionic Radii: Trends Among Groups and Periods of the Periodic Table, SAT Subject Test Chemistry: Practice and Study Guide, High School Biology: Homework Help Resource, Holt McDougal Modern Biology: Online Textbook Help, General Studies Earth & Space Science: Help & Review, General Studies Health Science: Help & Review, FTCE Middle Grades General Science 5-9 (004): Test Practice & Study Guide, ILTS Science - Environmental Science (112): Test Practice and Study Guide, ILTS Science - Chemistry (106): Test Practice and Study Guide, SAT Subject Test Biology: Practice and Study Guide, UExcel Anatomy & Physiology: Study Guide & Test Prep, Biological and Biomedical There are also cool facts about Phosphorus that most don't know about. give the oxidation number of phosphorus in H2P2O7^2- i know H has an oxidation number of +1, and oxygen has -1 ..or -2. but i dont know how you figure out what the phosphorus is?? 
Problem: What is the oxidation number of phosphorus in PO43−? Therefore, the oxidation number of P is +5. Sciences, Culinary Arts and Personal The oxidation state, sometimes referred to as oxidation number, describes the degree of oxidation (loss of electrons) of an atom in a chemical compound.Conceptually, the oxidation state, which may be positive, negative or zero, is the hypothetical charge that an atom would have if all bonds to atoms of different elements were 100% ionic, with no covalent component. What is the oxidation number of phosphorus in H3PO2? {/eq}. Want create site? Which of the following compounds has an oxidation... What are the charges of two ions of copper? H 2 PO 4-SolutionS. Based upon that oxidation number, an electronic configuration is also given but note that for more exotic compounds you should view this as a guide only. Since there are 4 oxygen atoms and the oxidation number of oxygen is -2, the total charge of the oxygen atoms is -8. 1. As this compound is neutral, then it is true that the sum of all the charges present is 0: Previous question Next question Our experts can answer your tough homework and study questions. In almost all cases, oxygen atoms have oxidation numbers of -2. Answer Save. These are the guidelines in determining the oxidation number of H 2 P 2 O 7 2−: • Hydrogen, when bonded to non-metals has an oxidation state of (+1) Show work. 1 decade ago. Similarly, nitrogen forms nitric acid, HNO 3 , which contains an N=O double bond, whereas phosphorus forms phosphoric acid, H 3 PO 4 , which contains P-O single bonds, as shown in the figure below. Because sodium phosphite is neutral, the sum of the oxidation numbers must be zero. What Is The Oxidation Number Of Helium? Here we show the oxidation of black phosphorus (BP) can be fully utilized for osmotic energy conversion, which stands in contrast to using BP for electronic applications where the ambient stability is a major concern. 
Oxidation numbers in phosphorus pentachloride (PCl5): bonds between atoms of the same element (homonuclear bonds) are always divided equally, and the vacant outer d orbitals of phosphorus allow its octet to expand, which is what makes the +5 oxidation state accessible.

Some standing assignments: hydrogen is almost always given +1, and oxygen is always given −2 (except in peroxides and in the free element). The phosphate group PO4 carries a 3− charge, and in the compound AlPO4, P = +5.

Related exercises: What is the oxidation number of phosphorus in P4O10? What is the oxidation state of phosphorus in the pyrophosphate ion, P2O7^4−? In which substance does phosphorus have a +3 oxidation state? Let the oxidation state of phosphorus in Ba(H2PO2)2 be x.

Red phosphorus is formed by heating white phosphorus to 250 °C (482 °F) or by exposing white phosphorus to sunlight. (Urine naturally contains considerable quantities of dissolved phosphates.)
In H2PO4−, oxygen has the formal oxidation number −2, phosphorus has the formal oxidation number +5, and hydrogen has the formal oxidation number +1.

Assign an oxidation number of −2 to oxygen, with a few exceptions to this rule: when oxygen is in its elemental state (O2) its oxidation number is 0, as is the case for all elemental atoms, and when oxygen is part of a peroxide its oxidation number is −1.

White phosphorus is poisonous and can spontaneously ignite when it comes into contact with air. The electron configuration of phosphorus is [Ne] 3s2 3p3, and the stable monatomic ion, phosphide, has the oxidation number −3.

In AlPO4 it is known that the oxidation number of Al is +3 and the oxidation number of oxygen is −2; hence the oxidation number of P is +5.
General rules. The oxidation number of an atom is zero in a neutral substance that contains atoms of only one element: uncombined elements such as Zn, Cl2, C (graphite), O2, O3, P4, S8, and aluminum metal all have oxidation number 0. For a monatomic ion (Li+, Al3+, etc.) the oxidation number equals the ion's charge; for example, the oxidation number of Na+ is +1, and Ca is +2 (the normal charge on its ion). Generally, the ON of oxygen is −2 (think of oxide ions) and the ON of hydrogen is +1. The oxidation numbers of all the elements in the formula of a neutral compound must add up to 0; in a polyatomic ion they must add up to the ion's charge. More fundamentally, oxidation numbers are about the electronegativity difference between covalently bonded atoms: you can assign them by assuming the more electronegative atom takes both bonding electrons, or equivalently by subtracting the sum of an atom's lone-pair electrons and the electrons it is assigned from bonds from its number of valence electrons.

Worked examples:

PCl5: since the molecule is neutral, ON(P) + 5·ON(Cl) = 0; with Cl at −1, ON(P) = 0 − 5(−1) = +5.

PO4^3− (the phosphate ion): P + (−8) = −3, so P = +5.

K2HPO4: using +1 for potassium and hydrogen and −2 for oxygen, 2(+1) + (+1) + x + 4(−2) = 0, i.e. 3 + x − 8 = 0, so x = +5.

Calcium hydrogen phosphate, CaHPO4: everything must add up to zero, since that is the charge on the compound; with Ca = +2, H = +1 and O = −2, P = +5 again.

Hypophosphite, (H2PO2)−: let the oxidation number of P be x, with the oxidation state of oxygen = −2; then 2(+1) + x + 2(−2) = −1, so x = +1. Correspondingly, in Ba(H2PO2)2: +2 + 4 + 2x − 8 = 0, so x = +1. Hence phosphorus has the oxidation state +1 here; in multiple-choice form, (A) +1 (B) +2 (C) +3 (D) +4, the answer is (A).

A species with 5 H, 3 P and 10 O (triphosphoric acid, H5P3O10): 5(+1) + 3x + 10(−2) = 0, so 3x − 15 = 0 and x = +5.

Na3PO4: 3(+1) + x + 4(−2) = 0, so x = +5.

H2P2O7^2− (the ion from the opening question): 2(+1) + 2x + 7(−2) = −2, so 2x = 14 − 4 = 10 and x = +5. In orthophosphoric acid (H3PO4), phosphorus likewise has the oxidation state +5.

More broadly, the oxidation number of phosphorus in compounds varies from −3 to +5 (organophosphorus compounds with phosphorus oxidation states spanning that whole range are well known), and phosphorus also shows +1 and +4 oxidation states in some oxo acids. When phosphorus is in the +5 state the characteristic ion is phosphate, PO4^3−; the most prevalent compounds of phosphorus are derivatives of phosphate, a tetrahedral anion, and phosphate is the conjugate base of phosphoric acid, which is produced on a massive scale for use in fertilisers. Brand called the substance he had discovered 'cold fire' because it was luminous, glowing in the dark. The convention is that the cation is written first in a formula, followed by the anion.
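The bookkeeping in every one of these examples is the same linear equation: the unknown oxidation number is whatever value makes the assigned numbers sum to the species' overall charge. A minimal sketch in Python; the fixed states in the table are the usual textbook assumptions (H = +1, O = −2, etc.), which fail for hydrides, peroxides, and similar exceptions:

```python
# Typical-case oxidation-state assignments (textbook assumptions; not valid
# for hydrides, peroxides, superoxides, etc.).
KNOWN_STATES = {"H": +1, "O": -2, "Na": +1, "K": +1, "Ba": +2, "Al": +3, "Cl": -1}

def oxidation_number(counts, charge, unknown):
    """Return x for `unknown` so that the oxidation numbers of all atoms,
    weighted by their counts, sum to the species' overall charge.

    counts  -- dict mapping element symbol to atom count, e.g. {"P": 1, "O": 4}
    charge  -- overall charge of the species (0 for a neutral compound)
    unknown -- the element whose oxidation number we solve for
    """
    total = sum(n * KNOWN_STATES[el] for el, n in counts.items() if el != unknown)
    return (charge - total) / counts[unknown]

# PO4^3-: x + 4(-2) = -3  ->  x = +5
print(oxidation_number({"P": 1, "O": 4}, -3, "P"))                  # 5.0
# Ba(H2PO2)2: +2 + 4(+1) + 2x + 4(-2) = 0  ->  x = +1
print(oxidation_number({"Ba": 1, "H": 4, "P": 2, "O": 4}, 0, "P"))  # 1.0
```

Any of the worked examples above can be replayed this way, e.g. K2HPO4 gives +5.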
To finish the AlPO4 example as an equation: (+3) + x + 4(−2) = 0, so x = +5.

Hennig Brand discovered phosphorus in 1669, in Hamburg, Germany, preparing it from urine. Brand was an alchemist and, like other alchemists, he was secretive about his methods. The most common oxidation states of phosphorus are −3, +3, and +5; in PF3, for instance, each fluorine is −1, so the oxidation number of phosphorus is +3.
# Basic definition of inverse limit in sheaf theory / schemes I read the book "Algebraic Geometry" by U. Görtz, and whenever limits are involved I struggle to understand. The applications of limits are mostly very basic, but I'm new to the concept of limits. My example (page 60 in the book): Let $A$ be an integral domain. The structure sheaf $O_X$ on $X = \text{Spec}A$ is given by $O_X(D(f)) = A_f$ ($f\in A$) and for any $U\subseteq X$ by \begin{align} O_X(U) &= \varprojlim_{D(f)\subseteq U} O_X(D(f)) \\ &:= \{ (s_{D(f)})_{D(f)\subseteq U} \in \prod_{D(f)\subseteq U} O_X(D(f)) \mid \text{for all } D(g) \subseteq D(f) \subseteq U: s_{D(f)}\big|_{D(g)} = s_{D(g)}\} \\ &= \bigcap_{D(f)\subseteq U} A_f. \end{align} I simply don't understand the last equality: in my naive understanding the elements of the last set are "fractions" and the elements of the inverse limit are "families of fractions". Any hint is appreciated. - Since A is an integral domain, all its localisations embed into its field of fractions. Moreover, let $D(h) \subset D(g)$ and $a, b \in A_g$. Then $a = b$ iff their images in $Frac(A)$ are the same, iff their images in $A_h$ are the same. Hence any element in your "family of fractions" can actually be identified with an element of $Frac(A)$ - and the last line tells you just which fractions you get. –  Tom Bachmann Dec 7 '12 at 17:47 I think it is a general fact that if you have an inverse system of "sufficiently nice" objects $\left\{A_{i}\right\}_{i\in I}$ in which all the structure maps $\phi_{ij}: A_{j}\to A_{i}$ are injections, then the inverse limit is just the intersection $\bigcap_{i \in I} A_{i}$. –  jmracek Dec 7 '12 at 19:26 @TomBachmann -- I feel dumb, but I still don't get it. Let $(s_{D(f)})_{D(f)\subseteq U} \in O_X(U)$. Then each $s_{D(f)}$ of this family can be identified with an element of $\text{Frac}(A)$. 
But how can the whole family $(s_{D(f)})_{D(f)\subseteq U} \in O_X(U)$ be identified with an element in $\bigcap_{D(f)\subseteq U} A_f$? (The $s_{D(f)}$ of the family don't need to define all the same element in $\text{Frac}(A)$, as the $D(f)$ don't need to be subsets of each other, right?) –  Fred Dec 8 '12 at 8:39 Yes they do define the same element. This is essentially because your scheme is irreducible, and so all (non-empty) opens meet. In particular, while $D(f)$ and $D(g)$ need not be subsets of each other, $D(fg)$ is a subset of both (and $fg$ is not zero since this is an integral domain). –  Tom Bachmann Dec 9 '12 at 9:33 @TomBachmann -- Thank you very much, this is solved. But I can't mark your comment as accepted answer... –  Fred Dec 9 '12 at 11:43 This is not the best way of defining the structure sheaf; it just uses the fact that the $D(f)$ form a base for the topology. To see the last isomorphism, you can define the sheaf of regular functions $\mathscr{R}(U)=\cap_{x\in U}A_{\frak{p}_x}$. The right way of understanding this is that for each open $U$ it gives you the ring of regular functions on $U$, which can be seen as the functions that are regular at each point of $U$. Notice that $\mathscr{R}(U)$ is the same as the last intersection, and you have a natural map $\mathscr{R}\rightarrow \mathscr{O}_X$ that for $U\subset X$ sends $\phi\in \cap_{D(f)\subset U}A_f$ to the family $(\phi\mid_{D(f)})_{D(f)\subset U}\in \prod A_f$, and this map is stalk-wise an isomorphism.
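Pulling the comments together, the reason the "family of fractions" collapses to a single fraction can be written out as a short derivation (a sketch; recall $A$ is a domain, so every localisation embeds in $\operatorname{Frac}(A)$):

```latex
% Given nonzero f, g with D(f), D(g) \subseteq U:
% A is a domain, so fg \neq 0 and D(fg) = D(f) \cap D(g) \subseteq U.
% Compatibility of the family on the smaller open D(fg) gives
\[
  s_{D(f)}\big|_{D(fg)} \;=\; s_{D(fg)} \;=\; s_{D(g)}\big|_{D(fg)}.
\]
% Inside Frac(A) the restriction maps A_f \hookrightarrow A_{fg} are honest
% inclusions, so the displayed equality says s_{D(f)} = s_{D(g)} as elements
% of Frac(A). Hence the whole family determines a single element
\[
  s \in \bigcap_{D(f) \subseteq U} A_f,
\]
% and conversely any such s restricts to a compatible family.
```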
# Universal Approximation - Are ReLUs discriminatory? In Cybenko's elegant proof of the Universal Approximation Theorem (UAT) he proves that single-hidden-layer neural networks (with linear output layer) are universal approximators whenever their activation functions are (what he calls) discriminatory. He then shows that bounded measurable sigmoidal functions are discriminatory, completing the proof. In particular, continuous sigmoidal functions work as well. Later, Leshno et al. proved that locally bounded piecewise continuous (l.b.p.c.) activation functions yield the UAT if and only if they are nonpolynomial. My Question(s): What kinds of functions are discriminatory? Polynomials certainly aren't (Leshno et al.) and continuous sigmoids are (Cybenko). Are ReLUs discriminatory? Together, the above results imply that, for l.b.p.c. functions, discriminatory implies nonpolynomial. Is the converse true?
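Not an answer to the measure-theoretic question, but a quick numerical illustration of the expressive power at stake: a single hidden layer of ReLU units with a linear output can reproduce any piecewise-linear interpolant exactly (the output weights are the slope changes at the knots), so the error against a smooth target is just the interpolation error. This is a self-contained sketch with numpy, not the construction used in either cited proof:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

f = lambda x: np.sin(2 * np.pi * x)   # smooth target on [0, 1]

n = 100                                # number of hidden ReLU units
knots = np.linspace(0.0, 1.0, n + 1)
h = knots[1] - knots[0]

# Slopes of the piecewise-linear interpolant of f at the knots; each ReLU
# contributes one kink, so the output weights are the slope *changes*.
slopes = (f(knots[1:]) - f(knots[:-1])) / h
weights = np.empty(n)
weights[0] = slopes[0]
weights[1:] = np.diff(slopes)

def net(x):
    # single hidden layer: units relu(x - x_k), then a linear output layer
    return f(knots[0]) + relu(x[:, None] - knots[None, :-1]) @ weights

xs = np.linspace(0.0, 1.0, 2001)
max_err = np.max(np.abs(net(xs) - f(xs)))   # roughly h^2 * max|f''| / 8
```

The network agrees with f exactly at the knots, and increasing n drives the uniform error to zero at the usual O(h^2) interpolation rate.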
# Homework Help: Using initial conditions in a second order PDE

1. Oct 20, 2011

### maggie56

1. The problem statement, all variables and given/known data
I have a PDE for which I have found the general solution to be u(x,t) = f1(3x + t) + f2(-x + t), where f1 and f2 are arbitrary functions. I have initial conditions u(x,0) = sin(x) and partial derivative du/dt (x,0) = cos(2x).

2. Relevant equations
u(x,t) = f1(3x + t) + f2(-x + t)
u(x,0) = sin(x)
du/dt (x,0) = cos(2x)

3. The attempt at a solution
I have substituted u(x,0) and du/dt (x,0) into the general solution, which gives me
u(x,0) = f1(3x) + f2(-x) = sin(x)
du/dt(x,0) = f1'(3x) + f2'(-x) = cos(2x)
but I am unsure as to where to go from here.
Thanks for any help

2. Oct 20, 2011

### jackmell

I'll use h and v. Differentiating the first condition with respect to x (note the chain-rule factor of 3), and keeping the second as is:
$$3h'(3x)-v'(-x)=\cos(x)$$
$$h'(3x)+v'(-x)=\cos(2x)$$
It's not hard to eliminate v'(-x), right? Then you'd have a regular DE:
$$4h'(3x)=\cos(x)+\cos(2x)$$
but then you've got that 3x in there. What about making a change of independent variable by letting u=3x, then converting the DE from one in terms of x (the one above) into one involving u by remembering:
$$\frac{dh}{du}=\frac{dh}{dx}\frac{dx}{du}$$
Now make this substitution into the one involving x, get the one involving u, solve it, then wherever there is a u in the answer, replace it by 3x.
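Eliminating v'(-x) between the two equations leaves something of the form h'(3x) = g(x) with a known right-hand side, and the suggested substitution u = 3x turns it into an ordinary integration in u. A sympy sketch of that substitution step (the particular g below is illustrative, with any constant factor absorbed into it):

```python
import sympy as sp

x, u = sp.symbols('x u')
g = sp.cos(x) + sp.cos(2 * x)    # illustrative right-hand side of h'(3x) = g(x)

# Substitute u = 3x, so the equation becomes h'(u) = g(u/3); integrate in u.
h = sp.integrate(g.subs(x, u / 3), u)   # h(u), up to an additive constant

# Check: differentiating h and evaluating at u = 3x should recover g(x).
residual = sp.simplify(h.diff(u).subs(u, 3 * x) - g)
```

The same replace-u-by-3x step at the end recovers h as a function usable in the original ansatz.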
# Estimating the impact of donor programs on child mortality in low- and middle-income countries: a synthetic control analysis of child health programs funded by the United States Agency for International Development

research-article, Population Health Metrics (BioMed Central), open access

### Abstract

##### Background

Significant levels of funding have been provided to low- and middle-income countries for development assistance for health, with most funds coming through direct bilateral investment led by the USA and the UK. Direct attribution of impact to large-scale programs funded by donors remains elusive due to the difficulty of knowing what would have happened without those programs, and the lack of detailed contextual information to support causal interpretation of changes.

##### Methods

This study uses the synthetic control analysis method to estimate the impact of one donor's funding (United States Agency for International Development, USAID) on under-five mortality across several low- and middle-income countries that received above average levels of USAID funding for maternal and child health programs between 2000 and 2016.

##### Results

In the study period (2000–16), countries with above average USAID funding had an under-five mortality rate lower than the synthetic control by an average of 29 deaths per 1000 live births (year-to-year range of − 2 to − 38). This finding was consistent with several sensitivity analyses.

##### Conclusions

The synthetic control method is a valuable addition to the range of approaches for quantifying the impact of large-scale health programs in low- and middle-income countries. 
The findings suggest that adequately funded donor programs (in this case USAID) help countries to reduce child mortality to significantly lower rates than would have occurred without those investments.

##### Supplementary Information

The online version contains supplementary material available at 10.1186/s12963-021-00278-9.
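To make the method concrete: a synthetic control is a convex combination of untreated "donor" units whose weights are chosen to match the treated unit's pre-intervention trajectory; the post-period gap between the treated unit and that weighted combination is the estimated effect. A toy sketch with made-up mortality series (the donor roles and numbers are invented for illustration, and real analyses use many donors, covariate matching, and a constrained least-squares solver rather than a grid search):

```python
import numpy as np

rng = np.random.default_rng(0)
T_pre = 20   # pre-intervention years

# Hypothetical under-five mortality trajectories for two untreated donor units.
donor_a = rng.normal(50.0, 5.0, T_pre)
donor_b = rng.normal(80.0, 5.0, T_pre)

# Construct a treated unit that is exactly a convex combination, so the
# true weights (0.3, 0.7) are recoverable from the pre-period fit alone.
treated = 0.3 * donor_a + 0.7 * donor_b

# With two donors the weight simplex is one-dimensional; grid-search it.
ws = np.linspace(0.0, 1.0, 1001)
mses = [np.mean((treated - (w * donor_a + (1 - w) * donor_b)) ** 2) for w in ws]
w_hat = ws[int(np.argmin(mses))]     # recovered weight on donor_a
```

In the paper's setting, the "treated" series is a country with above-average USAID funding and the donors are comparison countries; the estimated effect is the post-2000 gap between the observed series and the synthetic one.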
### Author and article information

###### Contributors
wweiss1@jhu.edu

###### Journal
Popul Health Metr (Population Health Metrics), BioMed Central (London), ISSN 1478-7954. Published 6 January 2022; 2022 : 20.

###### Affiliations
[1] GRID grid.21107.35, ISNI 0000 0001 2171 9311, Department of International Health, Johns Hopkins University & Public Health Institute (USAID Contractor); 615 N. Wolfe Street, Rm E8132, Baltimore, MD 21205 USA
[2] GRID grid.453872.f, ISNI 0000 0004 0582 8413, Global Programs, Water For People; 100 E. Tennessee Ave, Denver, CO 80209 USA
[3] GRID grid.419451.c, ISNI 0000 0001 0403 9883, Alutiiq (State Department Contractor); 2000 N. Adams St., Arlington, VA 22201 USA
[4] GRID grid.10698.36, ISNI 0000000122483208, UNC Center for Health Equity Research, School of Medicine, The University of North Carolina at Chapel Hill; 323 MacNider Hall, 333 South Columbia Street, Chapel Hill, NC 27599-7240 USA
[5] Camris International (USAID Contractor), 3 Bethesda Metro Center, 16th Floor, Bethesda, MD 20814 USA

###### Article
Article 278; DOI 10.1186/s12963-021-00278-9; PMC 8734298; PMID 34986844

###### Funding
Funded by: United States Agency for International Development (FundRef http://dx.doi.org/10.13039/100000200)
Award ID: Public Health Institute (Cooperative Agreement #7200AA18CA00001)
Award ID: Public Health Institute (Cooperative Agreement #OAA-A-11-00025)
Award ID: CAMRIS International (Contract #AID-OAA-C-16-00031)
Award Recipient:

###### Categories
Research
Author: admin | at 27.02.2015 | Categories: Hypoglycemia

The benefits found in a supplement are important so as to give positive results with regular intake. RELATED THREADS Viognier Kit Question: This might be more appropriate in one of the wine kit subforums, but I figured I would ask it here as my question spans the different brands available. Although I have never made a white wine, I ordered two white juices with my grapes this year. If you like big opulent seductive whites with a lot of aroma, complexity and mouthfeel, this or roussanne are my favorites. Although vegetables, grains, and fruit are excellent for you, they all contain carbohydrates, which are broken down in your body into sugars. Nature’s Bounty supplements are overseen by scientists, manufacturing specialists, and quality experts, each one dedicated to maintaining the highest quality standards. Hamsters are little animals and their dietary needs are small, but they must be fed nutritious food. The seeds allowed for hamsters include pumpkin seeds, sesame seeds, squash seeds, peanuts or other nuts except almonds. Cereals include: oatmeal, bran bread or plain bread, brown or white rice. Commercially available hamster food is made with a variety of seeds, and it is an easy option to feed hamsters with this food or pellets. Purrsngrrs is a place fully dedicated to pets’ care which helps pet owners to know their pets in a better way. Biomolecules such as carbohydrates, lipids, proteins, and nucleic acids play important roles in various metabolic activities of living bodies. The molecular formula of lactose is C12H22O11, in which one D-glucopyranose ring is bonded to one $\beta$-D-galactopyranose ring by a $\beta$-glycosidic linkage between C1 of galactose and C4 of glucose. It has high contents of antioxidants, namely caffeine and polyphenols, which work to increase energy and fat metabolism.
One saving grace was a weekly Wednesday supper at the home of dear neighbors, a pleasant reminder of the wonders that stovetops and ovens can produce. Our dedication to quality, consistency, and scientific research has resulted in vitamins and nutritional supplements of unrivaled excellence. As part of a commitment to quality, Nature’s Bounty only uses ingredients from suppliers that meet stringent Quality Assurance Standards, as well as GMP food quality standards. The hamster food could consist of natural food made up of fruits, vegetables, nuts or cereals. Their diet should not consist of fats, and the major nutrition content should be protein or fiber. Your metabolism changes with every year added to your age. It helps you lose 5 to 10 pounds monthly and detect early cholesterol and sugar level problems. According to wine experts, its almost miraculous qualities were developed around Vienne. Short time on neutral French oak. (Pros: fresh grapes are easy to destem, juice is easy to extract without killing the press, and aromatics are amazing.) Nature’s Bounty offers a full line of high-quality mineral supplements that you can take every day to help you stay healthy. By combining the latest breakthroughs in nutritional science with the finest ingredients, Nature’s Bounty is proud to provide you with supplements of unsurpassed quality and value. Every Nature’s Bounty product is subjected to numerous quality tests and assays throughout the manufacturing process to verify purity and full potency. Sometimes worms and other insects like crickets or grasshoppers are also given to hamsters. Chocolate, butter and cream are not given to hamsters because they are high in fat and sugar. Similarly, different breeds of hamsters have different requirements according to their size. The choice of hamster should also be given importance in this regard, whether it prefers fresh food or pellets.
Carbohydrates are optically active compounds with hydroxyl and carbonyl functional groups and the general formula Cx(H2O)y. It is a white, odorless crystalline powder which is quite soluble in water but only slightly soluble in ethanol. These foods are somehow not very suitable for hamsters, and natural food is suggested to keep hamsters healthy for a long time. There are many diseases which occur due to harmful or toxic foods, and currently diet leads to major health concerns in hamsters. The carbonyl groups of the monosaccharide units are involved in glycosidic linkages with each other to form polymeric forms. On the basis of taste, carbohydrates can be classified as sugar and non-sugar molecules. As it blocks fat from clinging to your body organs, it also burns stored fat quickly. Others note that its taste is not as good as that of the Viognier that comes from France. This grape variety has very rich aromas that are almost seductive. The sugar molecules are sweet in nature and usually soluble in water, such as glucose and fructose. On the contrary, non-sugars are not sweet in taste and usually cannot dissolve in water, such as starch, cellulose etc. On the basis of monomer units, carbohydrates can be classified as monosaccharides, oligosaccharides and polysaccharides. It will unveil a wide range of other aromas once it reaches your taste buds, like some zest of orange, apricots, honey, musk and even white flowers. When it is grown within the right environment, it becomes even more complex and aromatic all at the same time. Monosaccharides are the simplest units of carbohydrates, which cannot be further hydrolysed into simpler units. The polymerisation of 2–10 monosaccharide units forms oligosaccharides such as lactose, maltose etc. The wines it produces create a warm feel in the mouth as it focuses on its high sugar level.
Yes. This is my go-to white, and it always turns out better than expected as it is somewhat forgiving during fermentation despite the added fun of temp control. The polysaccharides are polymeric forms of carbohydrates which are formed by polymerisation of more than 100 units. Its main strength can be found in its ability to mix body and acidity without compromising the richness of its aromatic scent. Monosaccharide units such as glucose, fructose, and galactose are polymerised by glycosidic linkages to form polymers. Hence, it is also used to make sparkling wines. It ripens later than the Chasselas but buds early. The type of the glycosidic linkage affects the physical and chemical properties of the molecules.
# Python open() gives IOError: Errno 2 No such file or directory For some reason my code is having trouble opening a simple file: This is the code: file1 = open('recentlyUpdated.yaml') And the error is: IOError: [Errno 2] No such file or directory: 'recentlyUpdated.yaml' • Naturally I checked that this is the correct name of the file. • I have tried moving around the file, giving open() the full path to the file and none of it seems to work. • Make sure the file exists: use os.listdir() to see the list of files in the current working directory • Make sure you're in the directory you think you're in with os.getcwd() (if you launch your code from an IDE, you may well be in a different directory) • You can then either: • Call os.chdir(dir), dir being the folder where the file is located, then open the file with just its name like you were doing. • Specify an absolute path to the file in your open call. • Remember to use a raw string if your path uses backslashes, like so: dir = r'C:\Python32' • If you don't use raw-string, you have to escape every backslash: 'C:\\User\\Bob\\...' • Forward-slashes also work on Windows 'C:/Python32' and do not need to be escaped. Let me clarify how Python finds files: • An absolute path is a path that starts with your computer's root directory, for example 'C:\Python\scripts..' if you're on Windows. • A relative path is a path that does not start with your computer's root directory, and is instead relative to something called the working directory. You can view Python's current working directory by calling os.getcwd(). If you try to do open('sortedLists.yaml'), Python will see that you are passing it a relative path, so it will search for the file inside the current working directory. Calling os.chdir will change the current working directory. Example: Let's say file.txt is found in C:\Folder. 
To open it, you can do:

os.chdir(r'C:\Folder')
open('file.txt') #relative path, looks inside the current working directory

or

open(r'C:\Folder\file.txt') #full path

• When using os.chdir(dir), do I have to put the path to the directory or just the directory name? Also, once I do get the name of the file, do I put that in open() or do I write open(os.chdir(dir))? – Santiago Aug 30 '12 at 17:13
• @Santiago I clarified this in my answer. – Lanaru Aug 30 '12 at 17:26
• + 1 for raw string r'' – WKordos Nov 15 '13 at 23:16
• You can use the same technique to open any file type. However you will have to pass a 'b' as the second argument in the open function to specify that you're reading a file as binary data. – Lanaru Jul 18 '14 at 19:40
• +1 for the os.listdir() suggestion. This is one of those smack your face on the keyboard moments, but if working on Windows 10, make sure you haven't manually added a file extension where there already is one. The default view in Windows often hides extensions and it may look like 'fileName.txt' where the name is actually 'fileName.txt.txt', if you have made this mistake. To verify, look closely at the output of os.listdir(). This filename mismatch would also give you the '[Errno 2] No such file or directory:' error. I know, I know. My forehead still has spacebar imprints. – DCaugs Oct 26 '16 at 1:16

The file may exist but have a different path. Try writing the absolute path for the file. Try the os.listdir() function to check that at least Python sees the file. Try it like this:

file1 = open(r'Drive:\Dir\recentlyUpdated.yaml')

• it can't seem to recognize any file paths on my computer. Is there any way I can search for a file? @sshekar – Santiago Aug 30 '12 at 17:24

Most likely, the problem is that you're using a relative file path to open the file, but the current working directory isn't set to what you think it is.
It's a common misconception that relative paths are relative to the location of the python script, but this is untrue. Relative file paths are always relative to the current working directory, and the current working directory doesn't have to be the location of your python script. You have three options:

• Use an absolute path to open the file:

file = open(r'C:\path\to\your\file.yaml')

• Generate the path to the file relative to your python script:

from pathlib import Path
script_location = Path(__file__).absolute().parent
file_location = script_location / 'file.yaml'
file = file_location.open()

• Change the current working directory before opening the file:

import os
os.chdir(r'C:\path\to\your\file')
file = open('file.yaml')

Other common mistakes that could cause a "file not found" error include:

• Accidentally using escape sequences in a file path:

path = 'C:\Users\newton\file.yaml' # Incorrect! The '\n' in 'Users\newton' is a line break character!

To avoid making this mistake, remember to use raw string literals for file paths:

path = r'C:\Users\newton\file.yaml' # Correct!

• Since Windows doesn't display known file extensions by default, sometimes when you think your file is named file.yaml, it's actually named file.yaml.yaml. Double-check your file's extension.

Finally, note that opening a file in write mode creates it if it doesn't exist (and truncates it if it does), so this line never raises the "file not found" error:

file1 = open('recentlyUpdated.yaml', 'w')
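A minimal sketch combining the diagnostics suggested in the answers above (the helper name `open_relative` and its `base` parameter are my own, not from the thread): it resolves the filename against the script's directory rather than the working directory, and lists the directory contents in the error message so typos and hidden double extensions show up immediately.

```python
import os
from pathlib import Path

def open_relative(name, base=None):
    """Open *name* relative to *base* (default: this script's directory),
    independent of the current working directory."""
    base_dir = Path(base) if base is not None else Path(__file__).resolve().parent
    target = base_dir / name
    if not target.exists():
        # Show what Python actually sees, to catch typos and hidden
        # double extensions such as 'file.yaml.yaml'.
        listing = sorted(p.name for p in base_dir.iterdir())
        raise FileNotFoundError(
            f"{target} not found; cwd={os.getcwd()}; directory contains: {listing}")
    return target.open()
```

Calling `open_relative('recentlyUpdated.yaml')` from the script in the question would then either succeed or fail with a message that shows exactly which directory was searched.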
# TUNABLE FAR INFRARED LASER SPECTROSCOPY OF ULTRACOLD FREE RADICALS

Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/18056

File: 1990-FA-07.jpg (38.36 KB, JPEG image)

Title: TUNABLE FAR INFRARED LASER SPECTROSCOPY OF ULTRACOLD FREE RADICALS
Creators: Cohen, R. C.; Schmuttenmaer, C. A.; Busarow, K. L.; Lee, Y. T.; Saykally, R. J.
Issue Date: 1990
Publisher: Ohio State University
Abstract: We report a new high resolution spectroscopic technique designed for the study of short lived free radicals and clusters containing free radicals. Excimer laser photolysis of a suitable precursor during the initial stages of a planar supersonic expansion is used to generate ultracold free radicals, which are subsequently probed by a tunable far infrared laser. In initial experiments we have detected the $1_{11}\leftarrow 0_{00}$ transition of NH$_{2}$ with a S/N in excess of 1000. Prospects for 2-3 orders of magnitude improvement in this figure will be presented.
Description: $^{1}$ R.C. Cohen, K.L. Busarow, C.A. Schmuttenmaer, Y.T. Lee, and R.J. Saykally, Chem. Phys. Lett. 164, 321 (1989).
Author Institution: Department of Chemistry, University of California and Materials and Chemical Sciences Division
URI: http://hdl.handle.net/1811/18056
Other Identifiers: 1990-FA-7
## Technical appendix

This will be a technical appendix on the potential outcomes framework; for more information, see Morgan and Winship (2014) and Imbens and Rubin (2015). The appendix will express the following ideas in terms of potential outcomes:

• validity (Section 4.4.1)
• heterogeneity of treatment effects (Section 4.4.2)
• mechanisms (Section 4.4.3)

The appendix will also include a comparison of between-subjects, within-subjects, and mixed designs. The appendix may also include information about using pre-treatment information for design or analysis.
# 9.3 Self-verifying algorithms for sampling from a stationary distribution To start with an analogy, we can in principle compute a mean hitting time $E_{i}T_{j}$ from the transition matrix ${\bf P}$, but we could alternatively estimate $E_{i}T_{j}$ by “pure simulation”: simulate $m$ times the chain started at $i$ and run until hitting $j$, and then (roughly speaking) the empirical average of these $m$ hitting times will be $(1\pm O(m^{-1/2}))E_{i}T_{j}$. In particular, for fixed $\varepsilon$ we can (roughly speaking) estimate $E_{i}T_{j}$ to within a factor $(1\pm\varepsilon)$ in $O(E_{i}T_{j})$ steps. Analogously, consider some notion of mixing time $\tau$ (say $\tau_{1}$ or $\tau_{2}$, in the reversible setting). The focus in this book has been on theoretical methods for bounding $\tau$ in terms of ${\bf P}$, and of theoretical consequences of such bounds. But can we bound $\tau$ by pure simulation? More importantly, in the practical context of Markov chain Monte Carlo, can we devise a “self-verifying” algorithm which produces an approximately-stationary sample from a chain in $O(\tau)$ steps without having prior knowledge of $\tau$? xxx tie up with MCMC discussion. To say things a little more carefully, a “pure simulation” algorithm is one in which the transition matrix ${\bf P}$ is unknown to the algorithm. Instead, there is a list of the states, and at each step the algorithm can obtain, for any state $i$, a sample from the jump distribution $p(i,\cdot)$, independent of previous samples. In the MCMC context we typically have an exponentially large state space and seek polynomial-time estimates. The next lemma (which we leave to the reader to state and prove more precisely) shows that no pure simulation algorithm can guarantee to do this. 
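The pure-simulation estimate of $E_{i}T_{j}$ described at the start of this section can be sketched in a few lines (the chain format below, with transition rows `P[k]`, is my own choice, not from the text): simulate $m$ independent hitting times and average them.

```python
import random

def hitting_time(P, i, j, rng):
    """One simulated hitting time of state j for the chain started at i,
    where P[k] lists the transition probabilities out of state k."""
    t, state = 0, i
    while state != j:
        state = rng.choices(range(len(P)), weights=P[state])[0]
        t += 1
    return t

def estimate_EiTj(P, i, j, m, seed=0):
    """Empirical mean of m independent hitting times: a (1 +/- O(m^{-1/2}))
    estimate of E_i T_j, as in the analogy above."""
    rng = random.Random(seed)
    return sum(hitting_time(P, i, j, rng) for _ in range(m)) / m
```

For the symmetric two-state chain `P = [[0.5, 0.5], [0.5, 0.5]]`, the hitting time of state 1 from state 0 is geometric with mean 2, so the estimate converges to 2 at rate $O(m^{-1/2})$.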
###### Lemma 9.16 Consider a pure simulation algorithm which, given any irreducible $n$-state chain, eventually outputs a random state whose distribution is guaranteed to be within $\varepsilon$ of the stationary distribution in variation distance. Then the algorithm must take $\Omega(n)$ steps for every ${\bf P}$. Outline of proof. If there is a state $k$ with the property that $1-p(k,k)$ is extremely small, then the stationary distribution will be almost concentrated on $k$; an algorithm which has some chance of terminating without sampling a step from every state cannot possibly guarantee that no unvisited state $k$ has this property. $\Box$ ## 9.3.1 Exact sampling via the Markov chain tree theorem Lovasz and Winkler [241] observed that the Markov chain tree theorem (Theorem 9.10) could be used to give a “pure simulation” algorithm for generating exactly from the stationary distribution of an arbitrary $n$-state chain. The algorithm takes $O(\tau_{1}^{*}\ n^{2}\log n)$ (9.14) steps, where $\tau_{1}^{*}$ is the mixing time parameter defined as the smallest $t$ such that $P_{i}(X_{U_{\sigma}}=j)\geq\frac{1}{2}\pi_{j}\mbox{ for all }i,j\in I,\ \sigma\geq t$ (9.15) where $U_{\sigma}$ denotes a random time uniform on $\{0,1,\ldots,\sigma-1\}$, independent of the chain. xxx tie up with Chapter 4 discussion and [240]. The following two facts are the mathematical ingredients of the algorithm. We quote as Lemma 9.17(a) a result of Ross [299] (see also [53] Theorem XIV.37); part (b) is an immediate consequence. ###### Lemma 9.17 (a) Let $\pi$ be a probability distribution on $I$ and let $(F_{i};i\in I)$ be independent with distribution $\pi$. Fix $j$, and consider the digraph with edges $\{(i,F_{i}):i\neq j\}$. Then with probability (exactly) $\pi_{j}$, the digraph is a tree with edges directed toward the root $j$. (b) So if $j$ is first chosen uniformly at random from $I$, then the probability above is exactly $1/n$. 
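Lemma 9.17(a) is easy to check empirically; here is a sketch (the three-point distribution in the test is an arbitrary choice of mine): draw $(F_{i})$ i.i.d. from $\pi$, and count how often the digraph $\{(i,F_{i}):i\neq j\}$ is a tree directed toward the root $j$.

```python
import random

def is_tree_rooted_at(n, F, j):
    """True iff the digraph with edges {(i, F[i]) : i != j} on states
    0..n-1 is a tree directed toward j, i.e. every state reaches j
    by following its unique out-edge, with no cycles."""
    for i in range(n):
        seen, k = set(), i
        while k != j:
            if k in seen:          # hit a cycle avoiding j: not a tree
                return False
            seen.add(k)
            k = F[k]
    return True

def tree_probability(pi, j, trials, seed=0):
    """Empirical frequency of the tree event; Lemma 9.17(a) says it
    equals pi[j] exactly."""
    rng = random.Random(seed)
    n = len(pi)
    hits = 0
    for _ in range(trials):
        F = {i: rng.choices(range(n), weights=pi)[0]
             for i in range(n) if i != j}
        hits += is_tree_rooted_at(n, F, j)
    return hits / trials
```

With $\pi=(0.2,0.3,0.5)$ and $j=2$, the frequency settles near $\pi_{2}=0.5$, as the lemma predicts.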
As the second ingredient, observe that the Markov chain tree formula (Corollary 9.11) can be rephrased as follows.

###### Corollary 9.18

Let $\pi$ be the stationary distribution for a transition matrix ${\bf P}$ on $I$. Let $J$ be random, uniform on $I$. Let $(\xi_{i};i\in I)$ be independent, with $P(\xi_{i}=j)=p_{ij}$. Consider the digraph with edges $\{(i,\xi_{i}):i\neq J\}$. Then, conditional on the digraph being a tree with edges directed toward the root $J$, the probability that $J=j$ equals $\pi_{j}$.

So consider the special case of a chain with the property

$p^{*}_{ij}\geq(1/2)^{1/n}\pi_{j}\ \forall i,j.$ (9.16)

The probability of getting any particular digraph under the procedure of Corollary 9.18 is at least $1/2$ the probability of getting that digraph under the procedure of Lemma 9.17, and so the probability of getting some tree is at least $1/(2n)$, by Lemma 9.17(b). So if the procedure of Corollary 9.18 is repeated $r=\lceil 2n\log 4\rceil$ times, the chance that some repetition produces a tree is at least $1-(1-\frac{1}{2n})^{2n\log 4}\geq 3/4$, and then the root $J$ of the tree has distribution exactly $\pi$. Now for any chain, fix $\sigma>\tau_{1}^{*}$. The submultiplicativity (yyy) property of separation, applied to the chain with transition probabilities $\tilde{p}_{ij}=P_{i}(X_{U_{\sigma}}=j)$, shows that if $V$ denotes the sum of $m$ independent copies of $U_{\sigma}$, and $\xi_{i}$ is the state reached after $V$ steps of the chain started at $i$, then

$P(\xi_{i}=j)\equiv P_{i}(X_{V}=j)\geq(1-2^{-m})\pi_{j}\ \forall i,j.$

So putting $m=-\log_{2}(1-(1/2)^{1/n})=\Theta(\log n)$, the set of probabilities $(P(\xi_{i}=j))$ satisfy (9.16). Combining these procedures, we have (for fixed $\sigma>\tau_{1}^{*}$) an algorithm which, in a mean number $nm\sigma r=O(\sigma n^{2}\log n)$ of steps, has chance $\geq 3/4$ to produce an output, and (if so) the output has distribution exactly $\pi$.
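Corollary 9.18 itself gives a rejection sampler; here is a sketch (the toy two-state chain in the test is my own, and single steps $\xi_{i}\sim p(i,\cdot)$ are used directly, rather than the $V$-step construction the text uses to guarantee the acceptance rate (9.16)): pick a uniform root $J$, draw one step from every other state, and accept when the resulting digraph is a tree directed toward $J$.

```python
import random

def _reaches(i, j, xi):
    # Follow the unique out-edges from i; a cycle avoiding j means
    # the digraph is not a tree rooted at j.
    seen = set()
    while i != j:
        if i in seen:
            return False
        seen.add(i)
        i = xi[i]
    return True

def tree_root_sample(P, seed=0):
    """Rejection sampler from Corollary 9.18: the accepted root J is an
    exact draw from the stationary distribution pi."""
    rng = random.Random(seed)
    n = len(P)
    while True:
        J = rng.randrange(n)
        xi = {i: rng.choices(range(n), weights=P[i])[0]
              for i in range(n) if i != J}
        if all(_reaches(i, J, xi) for i in range(n) if i != J):
            return J
```

For `P = [[0.7, 0.3], [0.4, 0.6]]` the stationary distribution is $(4/7, 3/7)$, and the empirical frequency of root 0 over many independent runs matches it.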
Of course we initially don’t know the right $\sigma$ to use, but we simply try $n,2n,4n,8n,\ldots$ in turn until some output appears, and the mean total number of steps will satisfy the asserted bound (9.14).

## 9.3.2 Approximate sampling via coalescing paths

A second approach involves the parameter $\tau_{0}=\sum_{j}\pi_{j}E_{i}T_{j}$ arising in the random target lemma (Chapter 2 yyy). Aldous [24] gives an algorithm which, given ${\bf P}$ and $\varepsilon>0$, outputs a random state $\xi$ for which $||P(\xi\in\cdot)-\pi||\leq\varepsilon$, and such that the mean number of steps is at most

$81\tau_{0}/\varepsilon^{2}.$ (9.17)

The details are messy, so let us just outline the (simple) underlying idea. Suppose we can define a procedure which terminates in some random number $Y$ of steps, where $Y$ is an estimate of $\tau_{0}$: precisely, suppose that for any ${\bf P}$

$P(Y\leq\tau_{0})\leq\varepsilon;\ \ \ EY\leq K\tau_{0}$ (9.18)

where $K$ is an absolute constant. We can then define an algorithm as follows. Simulate $Y$; then run the chain for $U_{Y/\varepsilon}$ steps and output the final state $\xi$, where as above $U_{\sigma}$ denotes a random time uniform on $\{0,1,\ldots,\sigma-1\}$, independent of the chain. This works because, arguing as at xxx, $||P(X_{U_{\sigma}}\in\cdot)-\pi||\leq\tau_{0}/\sigma$ and so

$||P(\xi\in\cdot)-\pi||\ \leq E\min(1,\frac{\tau_{0}}{Y/\varepsilon})\leq 2\varepsilon.$

And the mean number of steps is $(1+\frac{1}{2\varepsilon})EY$. So the issue is to define a procedure terminating in $Y$ steps, where $Y$ satisfies (9.18). Label the states $\{1,2,\ldots,n\}$ and consider the following coalescing paths routine. (i) Pick a uniform random state $J$. (ii) Start the chain at state $1$, run until hitting state $J$, and write $A_{1}$ for the set of states visited along the path.
(iii) Restart the chain at state $\min\{j:j\not\in A_{1}\}$, run until hitting some state in $A_{1}$, and write $A_{2}$ for the union of $A_{1}$ and the set of states visited by this second path. (iv) Restart the chain at state $\min\{j:j\not\in A_{2}\}$, and continue this procedure until every state has been visited. Let $Y$ be the total number of steps. The random target lemma says that the mean number of steps in (ii) equals $\tau_{0}$, making this $Y$ a plausible candidate for a quantity satisfying (9.18). A slightly more complicated algorithm is in fact needed – see [24].

## 9.3.3 Exact sampling via backwards coupling

Write $U$ for a r.v. uniform on $[0,1]$, and $(U_{t})$ for an independent sequence of copies of $U$. Given a probability distribution on $I$, we can find a (far from unique!) function $f:[0,1]\rightarrow I$ such that $f(U)$ has the prescribed distribution. So given a transition matrix ${\bf P}$ we can find a function $f:I\times[0,1]\rightarrow I$ such that $P(f(i,U)=j)=p_{ij}$. Fix such a function. Simultaneously for each state $i$, define

$X^{(i)}_{0}=i;\ \ \ X^{(i)}_{t}=f(X^{(i)}_{t-1},U_{t}),t=1,2,\ldots.$

xxx tie up with coupling treatment

Consider the (forwards) coupling time

$C^{*}=\min\{t:X^{(i)}_{t}=X^{(j)}_{t}\ \forall i,j\}\leq\infty.$

By considering an initial state $j$ chosen according to the stationary distribution $\pi$,

$\max_{i}||P_{i}(X_{t}\in\cdot)-\pi||\leq P(C^{*}>t).$

This can be used as the basis for an approximate sampling algorithm. As a simple implementation, repeat $k$ times the procedure defining $C^{*}$; suppose we get finite values $C^{*}_{1},\ldots,C^{*}_{k}$ each time, then run the chain from an arbitrary initial start for $\max_{1\leq j\leq k}C^{*}_{j}$ steps and output the final state $\xi$. Then the error $||P(\xi\in\cdot)-\pi||$ is bounded by a function $\delta(k)$ such that $\delta(k)\rightarrow 0$ as $k\rightarrow\infty$.
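The forwards coupling construction above can be sketched directly (the inverse-CDF choice of the update function $f$, and the toy chain in the test, are my own choices; the text only requires that $P(f(i,U)=j)=p_{ij}$): all copies of the chain are driven by the same uniform at each step, and we report the first time they all coincide.

```python
import random

def forward_coupling_time(P, seed=0):
    """Run copies of the chain from every starting state, all driven by
    the SAME uniform U_t at each step via the inverse-CDF update f, and
    return the first time C* at which all copies coincide."""
    rng = random.Random(seed)
    n = len(P)

    def f(state, u):
        # inverse-CDF update: P(f(i, U) = j) = P[i][j]
        acc = 0.0
        for j, pj in enumerate(P[state]):
            acc += pj
            if u < acc:
                return j
        return n - 1

    states = list(range(n))
    t = 0
    while len(set(states)) > 1:
        u = rng.random()
        states = [f(s, u) for s in states]
        t += 1
    return t
```

Repeating this `k` times and then running the chain for the largest observed value, as in the text, gives the approximate sampler.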
Propp and Wilson [286] observed that by using instead a backwards coupling method (which has been exploited in other contexts – see Notes) one could make an exact sampling algorithm. Regard our i.i.d. sequence $(U_{t})$ as defined for $-\infty<t\leq 0$. For each state $i$ and each time $s<0$ define

$X^{(i,s)}_{s}=i;\ \ \ X^{(i,s)}_{t}=f(X^{(i,s)}_{t-1},U_{t}),t=s+1,s+2,\ldots,0.$

Consider the backwards coupling time

$C=\max\{t:X^{(i,t)}_{0}=X^{(j,t)}_{0}\ \forall i,j\}\geq-\infty.$

###### Lemma 9.19 (Backwards coupling lemma)

If $S$ is a random time such that $-\infty<S\leq C$ a.s., then the random variable $X^{(i,S)}_{0}$ does not depend on $i$ and has the stationary distribution $\pi$.

xxx describe algorithm

xxx poset story

xxx analysis in general setting and in poset setting.

xxx compare the 3 methods
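A sketch of the resulting algorithm (coupling from the past) for a small chain, again with an inverse-CDF update $f$ of my own choosing: run all starting states from times $-1,-2,-4,\ldots$, reusing the same uniform for the same time slot, until they coalesce at time 0; the common value is then an exact draw from $\pi$.

```python
import random

def cftp_sample(P, seed=0):
    """Propp-Wilson exact sampler: the uniform driving the step into
    time -k is fixed once and reused as the starting time moves
    further into the past, as the backwards construction requires."""
    rng = random.Random(seed)
    n = len(P)

    def f(state, u):
        # inverse-CDF update: P(f(i, U) = j) = P[i][j]
        acc = 0.0
        for j, pj in enumerate(P[state]):
            acc += pj
            if u < acc:
                return j
        return n - 1

    us = []        # us[k] drives the step from time -(k+1) to time -k
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        states = list(range(n))          # all starting states at time -T
        for k in range(T - 1, -1, -1):   # steps into times -T+1, ..., 0
            states = [f(s, us[k]) for s in states]
        if len(set(states)) == 1:
            return states[0]             # coalesced value at time 0: law pi
        T *= 2
```

For `P = [[0.7, 0.3], [0.4, 0.6]]` the empirical distribution over many independent runs matches the stationary distribution $(4/7, 3/7)$.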
06.03.2021 | By Mat | Filed in: Adventure. This bifurcation is called a saddle-node bifurcation. In it, a pair of hyperbolic equilibria, one stable and one unstable, coalesce at the bifurcation point, annihilate each other and disappear.1 We refer to this bifurcation as a subcritical saddle-node bifurcation, since the equilibria exist for values of below the bifurcation value 0. With the opposite sign x t = x2, the equilibria appear at File Size: KB. subcritical saddle-node bifurcation, and the remaining two equilibria annihilate each other. 6. 4. The following PDE for u(x,t), called Burgers equation, is a simply model of the Navier-Stokes equations for viscous fluids u t +uu x = u xx (a) Look for traveling wave solutions of the form u = f(x−ct), and derive a first-order ODE for f. (b) Show that the PDE has traveling wave solutions. subcritical saddle-node bifurcation, and the remaining two equilibria annihilate each other. 6. 4. The following PDE for u(x,t), called Burgers equation, is a simply model of the Navier-Stokes equations for viscous fluids u t +uu x = u xx (a) Look for traveling wave solutions of the form u = f(x−ct), and derive a first-order ODE for f. (b) Show that the PDE has traveling wave solutions. Saddle-node bifurcations may be associated with hysteresis and catastrophes. DYNAMICS OF A TETHERED SATELLITE WITH VARIABLE MASS. The corresponding bifurcation can be classified as Saddle-Node-off-Limit-Cycle. By Alan Champneys. Example: Morris-Lecar model Depending on the choice of parameters, the Morris-Lecar model is of either type I or type II. Consider a crude model of a laser threshold. Let x beginning javaserver pages pdf denote the level of activity of the neuron at time tnormalized to be between 0 low activity and 1 high activity.Bifurcation theory is full of con icting ter-minology! The Saddle-node bifurcation is sometimescalledthe\fold"bifurcation,\turn-ing point" bifurcation or \blue-sky" bifurca-tion (e.g. see Thompson & Stewart ). 
Example x_ =r x2 Fixed points f(x)=r x2 =0) x = p r Hence there are two xed points for r > 0 but none for r. saddle-node bifurcation. Example of a saddle-node bifurcation Let us look at the system x_ = r x e x The xed points are given by r x e x = 0 but we can’t solve this equation. We can think about it graphically by considering where a line with slope xcrosses the function e x (see Figure 2). We expect that we will have a bifurcation when. There is no fundamental reason why a limit cycle should appear at a saddle-node bifurcation. Indeed, in one-dimensional differential equations, saddle-node bifurcations are possible, but never lead to a limit cycle. Moreover, if a limit cycle exists in a two-dimensional system, there is no reason why it should appear directly at the bifurcation point - it can also exist before the bifurcation. This bifurcation is called a saddle-node bifurcation. In it, a pair of hyperbolic equilibria, one stable and one unstable, coalesce at the bifurcation point, annihilate each other and disappear.1 We refer to this bifurcation as a subcritical saddle-node bifurcation, since the equilibria exist for values of below the bifurcation value 0. With the opposite sign x t = x2, the equilibria appear at. x Saddle-Node Bifurcation 8msaddle-node bifurcation. We will concen- trate the frames of the movie around m = 0. We make 21 frames at m-intervals of. 10/10/ · As follows: at a Saddle Node bifurcation — say, at (x, r) = (0, 0) — a branch of critical point solutions — say x = X1(r) — turns “back” on itself.3 Thus, on one side of the value r = 0, no critical point exist, while on the other side two are found, say at: x = X1(r) and x = X2(r). Locally, these two curves can be joined into a single one by writing r = R(x). Then r = R(x) has. subcritical saddle-node bifurcation, and the remaining two equilibria annihilate each other. 6. 4. 
A saddle-node bifurcation is a local bifurcation in which two (or more) critical points (or equilibria) of a differential equation (or a dynamical system) collide and annihilate each other. Saddle-node bifurcations may be associated with hysteresis and catastrophes. Consider the slope function $f(x, \alpha)$, where $\alpha$ is a control parameter (used here in place of $k$).

Bifurcation theory is full of conflicting terminology! The saddle-node bifurcation is sometimes called the "fold" bifurcation, "turning point" bifurcation or "blue-sky" bifurcation (e.g. see Thompson & Stewart).

Example: $\dot{x} = r - x^2$. The fixed points satisfy $f(x) = r - x^2 = 0 \Rightarrow x^* = \pm\sqrt{r}$, hence there are two fixed points for $r > 0$ but none for $r < 0$.

The saddle-node ("nœud-col" or "selle-nœud") bifurcation is the bifurcation associated with the differential equation, depending on a real parameter $c$,

$\dot{x} = f(x, c) = c + x^2.$

Finding the fixed points: we look for the points of zero velocity. Solving $f(x, c) = 0$ leads us to distinguish three cases: for $c < 0$ there are two equilibria $x_e = \pm\sqrt{-c}$, for $c = 0$ there is a single equilibrium at $x_e = 0$, and for $c > 0$ there are none. Studying the stability of these points: let $u(t)$ be a perturbation that we add to a fixed point, $x(t) = x_e + u(t)$, and note first that the equation linearized about $x_e$ governs the growth or decay of $u$.

This bifurcation is called a saddle-node bifurcation. In it, a pair of hyperbolic equilibria, one stable and one unstable, coalesce at the bifurcation point, annihilate each other and disappear. We refer to this bifurcation as a subcritical saddle-node bifurcation, since the equilibria exist for parameter values below the bifurcation value 0. With the opposite sign, $x_t = \mu - x^2$, the equilibria instead appear for parameter values above the bifurcation value.

There is no fundamental reason why a limit cycle should appear at a saddle-node bifurcation. Indeed, in one-dimensional differential equations, saddle-node bifurcations are possible, but never lead to a limit cycle. Moreover, if a limit cycle exists in a two-dimensional system, there is no reason why it should appear directly at the bifurcation point; it can also exist before the bifurcation.

This saddle-node bifurcation is not observed in the weak coupling limit (Fig. VI(a)) even though it can occur in the phase model description; the saddle-node bifurcation occurs at the tangency of $T_{eJJ}$ in Fig. VI. The region of these O states is denoted in Fig. VI(a) as the dark dashed area. For a comparison with the neuronal model, the phase diagram for the coupled Morris-Lecar model is shown.

Keywords: saddle-node bifurcation, rigorously verified numerics, Contraction Mapping Theorem, Hodgkin-Huxley model.

1 Introduction. Parameter-dependent models in the form of nonlinear vector fields are ubiquitous in physics, biology, finance and chemistry. As one varies the parameters, one can reach a point in parameter space where the dynamics of the solutions undergo a dramatic change; this phenomenon is called a bifurcation.

To animate the saddle-node bifurcation, we will concentrate the frames of the movie around $m = 0$, making 21 frames at equal $m$-intervals.

4. The following PDE for $u(x,t)$, called Burgers' equation, is a simple model of the Navier-Stokes equations for viscous fluids:

$u_t + u u_x = u_{xx}$

(a) Look for traveling wave solutions of the form $u = f(x - ct)$, and derive a first-order ODE for $f$. (b) Show that the PDE has traveling wave solutions.
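The normal-form example $\dot{x} = r - x^2$ with fixed points $x^* = \pm\sqrt{r}$ is easy to check numerically. Here is a minimal Python sketch (my own illustration, not code from any of the quoted sources) that lists the equilibria and classifies their stability from the sign of $f'(x^*) = -2x^*$:

```python
import numpy as np

def fixed_points(r):
    """Equilibria of xdot = r - x**2 with their linear stability."""
    if r < 0:
        return []  # past the saddle-node: the stable/unstable pair has annihilated
    roots = [np.sqrt(r), -np.sqrt(r)]
    # f'(x) = -2x: negative slope => stable, positive slope => unstable
    return [(x, "stable" if -2 * x < 0 else "unstable") for x in roots]

print(fixed_points(4.0))   # two equilibria on one side of the bifurcation
print(fixed_points(-1.0))  # none on the other
```

Varying `r` through 0 reproduces the collision-and-annihilation picture described above.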
# Assessing approximate distribution of data based on a histogram

Suppose I want to see whether my data is exponential based on a histogram (i.e. skewed to the right). Depending on how I group or bin the data, I can get wildly different histograms. One set of histograms will make it seem that the data is exponential. Another set will make it seem that the data are not exponential. How do I make determining distributions from histograms well defined?

• Why not forget about the histograms, because the problems you describe are well established, and consider alternative tools such as qq plots and goodness of fit tests? – whuber Mar 8 '13 at 18:28

### The difficulty with using histograms to infer shape

While histograms are often handy and sometimes useful, they can be misleading. Their appearance can alter quite a lot with changes in the locations of the bin boundaries. This problem has long been known*, though perhaps not as widely as it should be -- you rarely see it mentioned in elementary-level discussions (though there are exceptions).

* for example, Paul Rubin[1] put it this way: "it's well known that changing the endpoints in a histogram can significantly alter its appearance".

I think it's an issue that should be more widely discussed when introducing histograms. I'll give some examples and discussion.

Why you should be wary of relying on a single histogram of a data set

Take a look at these four histograms:

That's four very different looking histograms.
If you paste the following data in (I'm using R here): Annie <- c(3.15,5.46,3.28,4.2,1.98,2.28,3.12,4.1,3.42,3.91,2.06,5.53, 5.19,2.39,1.88,3.43,5.51,2.54,3.64,4.33,4.85,5.56,1.89,4.84,5.74,3.22, 5.52,1.84,4.31,2.01,4.01,5.31,2.56,5.11,2.58,4.43,4.96,1.9,5.6,1.92) Brian <- c(2.9, 5.21, 3.03, 3.95, 1.73, 2.03, 2.87, 3.85, 3.17, 3.66, 1.81, 5.28, 4.94, 2.14, 1.63, 3.18, 5.26, 2.29, 3.39, 4.08, 4.6, 5.31, 1.64, 4.59, 5.49, 2.97, 5.27, 1.59, 4.06, 1.76, 3.76, 5.06, 2.31, 4.86, 2.33, 4.18, 4.71, 1.65, 5.35, 1.67) Chris <- c(2.65, 4.96, 2.78, 3.7, 1.48, 1.78, 2.62, 3.6, 2.92, 3.41, 1.56, 5.03, 4.69, 1.89, 1.38, 2.93, 5.01, 2.04, 3.14, 3.83, 4.35, 5.06, 1.39, 4.34, 5.24, 2.72, 5.02, 1.34, 3.81, 1.51, 3.51, 4.81, 2.06, 4.61, 2.08, 3.93, 4.46, 1.4, 5.1, 1.42) Zoe <- c(2.4, 4.71, 2.53, 3.45, 1.23, 1.53, 2.37, 3.35, 2.67, 3.16, 1.31, 4.78, 4.44, 1.64, 1.13, 2.68, 4.76, 1.79, 2.89, 3.58, 4.1, 4.81, 1.14, 4.09, 4.99, 2.47, 4.77, 1.09, 3.56, 1.26, 3.26, 4.56, 1.81, 4.36, 1.83, 3.68, 4.21, 1.15, 4.85, 1.17) Then you can generate them yourself: opar<-par() par(mfrow=c(2,2)) hist(Annie,breaks=1:6,main="Annie",xlab="V1",col="lightblue") hist(Brian,breaks=1:6,main="Brian",xlab="V2",col="lightblue") hist(Chris,breaks=1:6,main="Chris",xlab="V3",col="lightblue") hist(Zoe,breaks=1:6,main="Zoe",xlab="V4",col="lightblue") par(opar) Now look at this strip chart: x<-c(Annie,Brian,Chris,Zoe) g<-rep(c('A','B','C','Z'),each=40) stripchart(x~g,pch='|') abline(v=(5:23)/4,col=8,lty=3) abline(v=(2:5),col=6,lty=3) (If it's still not obvious, see what happens when you subtract Annie's data from each set: head(matrix(x-Annie,nrow=40))) The data has simply been shifted left each time by 0.25. Yet the impressions we get from the histograms - right skew, uniform, left skew and bimodal - were utterly different. Our impression was entirely governed by the location of the first bin-origin relative to the minimum. 
So not just 'exponential' vs 'not-really-exponential' but 'right skew' vs 'left skew' or 'bimodal' vs 'uniform', just by moving where your bins start.

Edit: If you vary the binwidth, you can get things like this happening:

That's the same 34 observations in both cases, just different breakpoints, one with binwidth $1$ and the other with binwidth $0.8$.

x <- c(1.03, 1.24, 1.47, 1.52, 1.92, 1.93, 1.94, 1.95, 1.96, 1.97, 1.98, 1.99, 2.72, 2.75, 2.78, 2.81, 2.84, 2.87, 2.9, 2.93, 2.96, 2.99, 3.6, 3.64, 3.66, 3.72, 3.77, 3.88, 3.91, 4.14, 4.54, 4.77, 4.81, 5.62) hist(x,breaks=seq(0.3,6.7,by=0.8),xlim=c(0,6.7),col="green3",freq=FALSE) hist(x,breaks=0:8,col="aquamarine",freq=FALSE)

Nifty, eh? Yes, those data were deliberately generated to do that... but the lesson is clear - what you think you see in a histogram may not be a particularly accurate impression of the data.

### What can we do?

Histograms are widely used, frequently convenient to obtain and sometimes expected. What can we do to avoid or mitigate such problems? As Nick Cox points out in a comment to a related question: The rule of thumb always should be that details robust to variations in bin width and bin origin are likely to be genuine; details fragile to such are likely to be spurious or trivial. At the least, you should always do histograms at several different binwidths or bin-origins, or preferably both. Alternatively, check a kernel density estimate at not-too-wide a bandwidth. One other approach that reduces the arbitrariness of histograms is averaged shifted histograms (that's one on that most recent set of data), but if you go to that effort, I think you might as well use a kernel density estimate. If I am doing a histogram (I use them in spite of being acutely aware of the issue), I almost always prefer to use considerably more bins than typical program defaults tend to give, and very often I like to do several histograms with varying bin width (and, occasionally, origin).
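The same fragility is easy to demonstrate outside R as well. Here is a small Python sketch (with its own illustrative data, not the sets above) showing that identical observations binned with the same width but different origins produce different-looking histograms:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.uniform(1, 6, 40)  # 40 observations, as in the examples above

# Identical bin width (1.0), two different bin origins.
counts_a, edges_a = np.histogram(data, bins=np.arange(1.0, 7.0, 1.0))
counts_b, edges_b = np.histogram(data, bins=np.arange(0.75, 7.25, 1.0))

print(counts_a)  # one apparent shape...
print(counts_b)  # ...and a different one, from the very same data
```

Both binnings account for all 40 points; only the origin moved.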
If they're reasonably consistent in impression, you're not likely to have this problem, and if they're not consistent, you know to look more carefully, perhaps try a kernel density estimate, an empirical CDF, a Q-Q plot or something similar. While histograms may sometimes be misleading, boxplots are even more prone to such problems; with a boxplot you don't even have the ability to say "use more bins". See the four very different data sets in this post, all with identical, symmetric boxplots, even though one of the data sets is quite skew. [1]: Rubin, Paul (2014) "Histogram Abuse!", Blog post, OR in an OB world, Jan 23 2014 • Practically every graph of necessity bins data like this. The bins are just small enough (the width of one pixel along the axis) that it doesn't matter? – AJMansfield Jul 11 '13 at 19:13 • @AJMansfield This is a bit like saying "every distribution is discrete" - while literally true, it obscures the relevant issue. A typical number of bins in a binned estimator is vastly smaller than a typical number of pixels... and with any graphics that make use of anti-aliasing, the 'effective' number of pixels is larger (in that it's potentially possible to distinguish differences of positions between pixels) – Glen_b Jul 12 '13 at 0:45 • The fundamental issue is that histograms heavily rely on the bin size. It is difficult to determine this a priori. – user46925 Mar 20 '16 at 14:51 Cumulative distribution plots [MATLAB, R] – where you plot the fraction of data values less than or equal to a range of values – are by far the best way to look at distributions of empirical data. 
Here, for example, are the ECDFs of this data, produced in R: This can be generated with the following R input (with the above data):

plot(ecdf(Annie),xlim=c(min(Zoe),max(Annie)),col="red",main="ECDFs") lines(ecdf(Brian),col="blue") lines(ecdf(Chris),col="green") lines(ecdf(Zoe),col="orange")

As you can see, it's visually obvious that these four distributions are simply translations of each other. In general, the benefits of ECDFs for visualizing empirical distributions of data are:

1. They simply present the data as it actually occurs with no transformation other than accumulation, so there's no possibility of accidentally deceiving yourself, as there is with histograms and kernel density estimates, because of how you're processing the data.

2. They give a clear visual sense of the distribution of the data since each point is buffered by all the data before and after it. Compare this with non-cumulative density visualizations, where the accuracy of each density is naturally unbuffered, and thus must be estimated either by binning (histograms) or smoothing (KDEs).

3. They work equally well regardless of whether the data follows a nice parametric distribution, some mixture, or a messy non-parametric distribution.

The only trick is learning how to read ECDFs properly: shallow sloped areas mean sparse distribution, steep sloped areas mean dense distribution. Once you get the hang of reading them, however, they're a wonderful tool for looking at distributions of empirical data.

• Is there any documentation available on how to read CDFs? E.g., if my CDF looks like the one shown above, how can we classify or guesstimate it as chi-square, normal, or some other distribution based on looks? – stats101 Aug 12 '16 at 15:22

A kernel density or logspline plot may be a better option compared to a histogram. There are still some options that can be set with these methods, but they are less fickle than histograms. There are qqplots as well.
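For readers working outside R, an ECDF needs no special library at all; here is a short Python sketch (the function name is mine) that makes the "no binning choices" point concrete:

```python
import numpy as np

def ecdf(data):
    """Return sorted values and the fraction of observations <= each value."""
    x = np.sort(np.asarray(data, dtype=float))
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y

x, y = ecdf([3.15, 5.46, 3.28, 4.2, 1.98])
# y rises in equal steps of 1/n from 1/n to 1; there is no bin width
# or bin origin to choose, so nothing to accidentally tune
```

Plotting `x` against `y` as a step function reproduces what R's `ecdf` draws.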
A nice tool for seeing if data is close enough to a theoretical distribution is detailed in: Buja, A., Cook, D., Hofmann, H., Lawrence, M., Lee, E.-K., Swayne, D.F. and Wickham, H. (2009) Statistical inference for exploratory data analysis and model diagnostics. Phil. Trans. R. Soc. A 367, 4361-4383. doi: 10.1098/rsta.2009.0120

The short version of the idea (still read the paper for details) is that you generate data from the null distribution and create several plots, one of which is the original/real data and the rest of which are simulated from the theoretical distribution. You then present the plots to someone (possibly yourself) who has not seen the original data and see if they can pick out the real data. If they cannot identify the real data, then you don't have evidence against the null. The vis.test function in the TeachingDemos package for R helps implement a form of this test. Here is a quick example. One of the plots below is 25 points generated from a t distribution with 10 degrees of freedom; the other 8 are generated from a normal distribution with the same mean and variance. The vis.test function created this plot and then prompts the user to choose which of the plots they think is different, then repeats the process 2 more times (3 total).

• @ScottStafford, I added a copy of the plot above. This one uses qqplots, but the function will also generate histograms, or density plots could be programmed. – Greg Snow Apr 18 '13 at 19:13
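The lineup idea from Buja et al. is simple enough to sketch without the TeachingDemos package. Here is a hedged Python version (the function name is mine, and the null hypothesis is hard-coded as a normal distribution; the real protocol also handles the repeated showings and other null models):

```python
import numpy as np

def lineup(real_data, n_null=8, seed=0):
    """Hide the real data among datasets simulated under the null
    (here: normal with the sample's mean and sd), as in Buja et al."""
    rng = np.random.default_rng(seed)
    real = np.asarray(real_data, dtype=float)
    m, s, n = real.mean(), real.std(ddof=1), real.size
    panels = [rng.normal(m, s, n) for _ in range(n_null)]
    pos = int(rng.integers(0, n_null + 1))  # where to slot the real data
    panels.insert(pos, real)
    return panels, pos

panels, pos = lineup(np.random.standard_t(10, 25), seed=1)
# plot all panels; a viewer who reliably picks panel `pos` is seeing
# structure the null model cannot produce
```

With `n_null=8` the chance of picking the real panel by luck is 1 in 9 per showing, which is why vis.test repeats the choice.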
If we use the classical definition of a histogram and do not change bin sizes, we will capture neither the location nor the scale. If, however, we use a median location within bins of even, fixed width, we will always capture the location, if not the scale when the scale is small relative to the bin width. For fitting values where the data is skewed, using fixed bin midpoints will x-axis shift the entire curve segment in that region, which I believe relates to the question above.

STEP 1 Here is an approximate solution. I used $n=8$ in each histogram category, and just displayed these as the mean x-axis value from each bin. Since each histogram bin has a value of 8, the distributions all look uniform, and I had to offset them vertically to show them. The display is not the correct answer, but it is not without information. It correctly tells us that there is an x-axis offset between groups. It also tells us that the actual distribution appears to be slightly U shaped. Why? Note that the mean values are spaced further apart in the centers and closer together at the edges. So, to make this a better representation, we should borrow whole samples and fractional amounts of each bin boundary sample to make all the mean bin values on the x-axis equidistant. Fixing this and displaying it properly would require a bit of programming. But, it may just be a way to make histograms so that they actually display the underlying data in some logical format. The shape will still change if we change the total number of bins covering the range of the data, but the idea is to resolve some of the problems created by binning arbitrarily.

STEP 2 So let's start borrowing between bins to try to make the means more evenly spaced. Now, we can see the shape of the histograms beginning to emerge. But the difference between means is not perfect, as we only have whole numbers of samples to swap between bins.
To remove the restriction of integer values on the y-axis and complete the process of making equidistant x-axis mean values, we have to start sharing fractions of a sample between bins.

STEP 3 The sharing of values and parts of values. As one can see, the sharing of parts of a value at a bin boundary can improve the uniformity of distance between mean values. I managed to do this to three decimal places with the data given. However, I do not think one can make the distances between mean values exactly equal in general, as the coarseness of the data will not permit that. One can, however, do other things like use kernel density estimation. Here we see Annie's data as a bounded kernel density using Gaussian smoothings of 0.1, 0.2, and 0.4. The other subjects will have shifted functions of the same type, provided one does the same thing as I did, namely use the lower and upper bounds of each data set. So, this is no longer a histogram, but a PDF, and it serves the same role as a histogram without some of the warts.
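A Gaussian kernel density estimate like the one applied to Annie's data takes only a few lines. Here is a bare-bones Python sketch (unbounded, unlike the bounded version described above, and with a fixed bandwidth passed in explicitly):

```python
import numpy as np

def gaussian_kde(data, grid, bandwidth):
    """Average of Gaussian bumps of width `bandwidth` centred on the data."""
    data = np.asarray(data, dtype=float)[:, None]        # shape (n, 1)
    z = (np.asarray(grid)[None, :] - data) / bandwidth   # standardised distances
    kernels = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)   # Gaussian kernel
    return kernels.mean(axis=0) / bandwidth

grid = np.linspace(-2.0, 9.0, 2201)
dens = gaussian_kde([3.15, 5.46, 3.28, 4.2, 1.98], grid, bandwidth=0.4)
# dens integrates to approximately 1 over the grid, as a density should
```

The bandwidth plays the role bin width does for a histogram, but there is no bin-origin analogue, which removes one whole source of arbitrariness.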
Origami and GeoGebra – a slightly different version

For GeoGebra enthusiasts who are attending GeoGebra conferences, don't throw away your GeoGebra flyers. Watch the video and see how useful it is. 🙂

Potato chips and mathematics

If your math teacher told you that mathematics is everywhere, believe him. Almost all the things that we see around us (even things that we do not see) are related to mathematics — even potato chips. Yes, even potato chips. Some potato chips, particularly Pringles (I hope they give me 500 bucks for this), are in the shape of a saddle. In mathematics, a saddle-shaped graph is called a hyperbolic paraboloid (see left figure). A hyperbolic paraboloid is a quadratic and doubly ruled surface given by the Cartesian equation $z = \displaystyle\frac{y^2}{b^2} - \frac{x^2}{a^2}$. Now, whatever that means will be discussed when you take your analytic geometry course. For now, let's be happy that we know that even potato chips can be modeled by graphs. 🙂

*** Sources: Omg Facts, Pringle's Site, Wolfram Math World Photo Credit: Hyperbolic Paraboloid (Wikimedia), Pringles chips (Wikimedia)

Nature by Numbers: Watch and fall in love with math

This is a captivating video 'inspired by numbers, geometry and nature' and was created by Cristóbal Vila. The video explains the connections between the Fibonacci sequence 1, 1, 2, 3, 5, 8, 13, … (can you see the pattern?), and nature (the golden rectangle, the nautilus, the sunflower, etc.). For non-math people, you will appreciate this video if you know the concepts behind it. I came across this video about two weeks ago, but I had no chance to post it until I was reminded by a post about it at the MathFuture wiki. On the funny side, more than 10 thousand people liked the video on Youtube, but 122 disliked it (plus a few more recently). One user (GatorTomKK) had the following comment for those 122 (and possibly for future 'dislikers'):

122 people don't understand math in general.
I can't help but grin after reading the comment, and I'm sure you're doing the same.
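And if you want to poke at the Pringles equation before that analytic geometry course, here is a tiny Python check (with a = b = 1 for simplicity) that the surface really does curve up in one direction and down in the other, which is exactly the saddle shape of the chip:

```python
def z(x, y, a=1.0, b=1.0):
    """Hyperbolic paraboloid z = y**2/b**2 - x**2/a**2."""
    return y**2 / b**2 - x**2 / a**2

print(z(0, 1))  # walk along y from the centre: the chip curves up
print(z(1, 0))  # walk along x from the centre: the chip curves down
print(z(0, 0))  # the saddle point itself
```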
Measurement of the forward-backward asymmetry in $Z/\gamma^{\ast} \rightarrow \mu^{+}\mu^{-}$ decays and determination of the effective weak mixing angle

JHEP 1511 (2015) 190

The LHCb collaboration

Abstract (data abstract) CERN-LHC. The forward-backward charge asymmetry for the process $q\bar{q} \rightarrow Z/\gamma^{\ast} \rightarrow \mu^{+}\mu^{-}$ is measured as a function of the invariant mass of the dimuon system. Measurements are performed using proton-proton collision data collected with the LHCb detector at $\sqrt{s} = 7$ and 8 TeV, corresponding to integrated luminosities of $1$ fb$^{-1}$ and $2$ fb$^{-1}$ respectively. Within the Standard Model the results constrain the effective electroweak mixing angle to be $$\sin^{2}\theta_{W}^{\mathrm{eff}} = 0.23142 \pm 0.00073 \pm 0.00052 \pm 0.00056$$ where the first uncertainty is statistical, the second systematic and the third theoretical. This result is in agreement with the current world average, and is one of the most precise determinations at hadron colliders to date.
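For context, a forward-backward asymmetry of the kind measured here is conventionally built, in each dimuon-mass bin, from counts of forward and backward events. This is a sketch of the textbook definition only, not code from the analysis (which uses the muon direction relative to the quark axis and corrects for detector effects):

```python
def forward_backward_asymmetry(n_forward, n_backward):
    """Afb = (Nf - Nb) / (Nf + Nb), evaluated per dimuon-mass bin."""
    return (n_forward - n_backward) / (n_forward + n_backward)

print(forward_backward_asymmetry(600, 400))  # 0.2
print(forward_backward_asymmetry(500, 500))  # 0.0, no asymmetry
```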
## Abstract

Introduction: Using the Global Adult Tobacco Surveys from 14 primarily low- and middle-income countries, we describe the association between the probability of being a recent quitter and a number of demographic and policy-relevant factors such as exposure to warning labels, work-site smoking bans, antismoking media messaging, tobacco marketing, and current cigarette and bidi prices. Methods: Logistic regressions were used to examine the potential correlates of recent quitting and recent quit attempts. Results: After accounting for country-specific attributes in pooled analyses, we found that higher rates of exposure to work-site smoking bans are associated with higher odds of being a quitter (odds ratio [OR] with 95% confidence interval [CI] = 1.13 [1.04, 1.22]). Exposure to antismoking media messaging (OR with 95% CI = 1.08 [1.00, 1.17]), work-site smoking bans (OR with 95% CI = 1.11 [0.99, 1.26]), and warning labels (OR with 95% CI = 1.03 [1.01, 1.05]); cigarette prices (OR with 95% CI = 1.01 [1.00, 1.02]); and bidi prices (OR with 95% CI = 1.17 [1.11, 1.22]) are factors associated with higher odds of recent quit attempts in the pooled analysis. These effects vary by country. Exposure to warning labels is found to be associated with greater likelihood of recent quitting in Egypt (OR with 95% CI = 3.20 [1.53, 6.68]), and the positive association between exposure to work-site smoking bans and quitting is particularly strong for Southeast Asia (OR with 95% CI = 1.20 [1.06, 1.35]) and Asia Pacific countries (OR with 95% CI = 1.85 [0.93, 3.68]). Additionally, exposure to tobacco industry marketing is significantly associated with smaller odds of quitting in Asia Pacific (OR with 95% CI = 0.83 [0.79, 0.87]) and Latin American countries (OR with 95% CI = 0.78 [0.74, 0.82]). Conclusions: Although our results vary by country, they generally suggest that greater exposure to tobacco control policies is significantly associated with quitting.
## INTRODUCTION

Tobacco use is one of the leading causes of preventable death worldwide. Although the majority of the world's smokers reside in low- and middle-income countries (LMICs), the quit rate among smokers in LMICs is relatively low (Jha et al., 2008; Rani, Bonu, Jha, Nguyen, & Jamjoum, 2003). To address this, the World Health Organization (WHO) has identified tobacco cessation as a major goal in its published guidelines of the Framework Convention on Tobacco Control 2006 (WHO FCTC) (WHO, 2010b). Although smoking cessation is recognized as an important aspect of tobacco control in LMICs, less is known about the factors linked to cessation in these countries. Using individual-level data on smokers from 13 LMICs and Poland obtained from the Global Adult Tobacco Survey (GATS) 2008–2010, we examine potential correlates of quitting within and across this group of countries. (Countries were classified into income groups according to their 2011 per capita gross national income following the World Bank Atlas method. Countries are classified as high income if they have a gross national income per capita of $12,476 or more. Poland has a gross national income per capita of $12,480, which is slightly above the cutoff. Therefore, we included Poland in our analyses alongside 13 LMICs.) We describe the association between recent quitting and a comprehensive set of policy-relevant and individual-specific factors. Using individual GATS responses, we construct a number of location-specific index variables, which reflect the local prevalence of work-site smoking bans, cigarette warning labels, tobacco advertising, tobacco promotion, antismoking information, and prices paid for cigarettes (and bidis for India and Bangladesh). By exploring these factors as correlates of quitting, we evaluate their potential as cessation-promoting mechanisms among smokers in LMICs.
Over the past decade, a growing body of research has focused on examining the impacts of price and tobacco control policies on tobacco use in LMICs, building upon the evidence demonstrating the effectiveness of these interventions in high-income countries (HICs) (Chaloupka et al., 2011; Chaloupka & Warner, 2000; Guindon, Perucic, & Boisclair, 2003; Kostova, Ross, Blecher, & Markowitz, 2011; Ranson et al., 2002). However, there are few studies that examine the factors promoting smoking cessation in LMICs. Kostova (2012) examines the impact of price on smoking transitions before the age of 15 in a set of 40 LMICs and finds that in early adolescence, prices are more effective in driving initiation than cessation. Ross et al. (in press) find that higher taxes have effectively increased cessation among adults in three Eastern European countries during the transitional period of the 1990s and 2000s. Existing studies of U.S. data have estimated that the short-run price elasticity of smoking cessation, which reflects smokers' initial efforts to quit in response to price increases, ranges from 0.3 to 0.9 and falls in the longer run as some quitters relapse (DeCicca, Kenkel, & Mathios, 2008; Tauras, 2004). Graphic warning labels and mass-media antismoking campaigns that encourage quitting have been shown to play a role in increasing interest in cessation and quit attempts in HICs (Centers for Disease Control and Prevention, 2012; Davis, Nonnemaker, Farrelly, & Niederdeppe, 2011; Hammond, Fong, McNeill, Borland, & Cummings, 2006). However, the effect of these nonfiscal approaches on cessation in LMICs has not been extensively studied. The relationship between cessation and tobacco control measures such as taxes in LMICs is likely to be different from that in HICs. On the one hand, complicated/tiered tax structures in many LMICs can widen the range of cigarette prices within countries (Chaloupka, Kostova, & Shang, 2013; Shang, Chaloupka, Fong, & Zahra, 2013).
This provides an incentive for smokers to switch between cigarette brands or tobacco products in response to higher taxes, potentially reducing the full impact of tax increases on lowering prevalence and consumption. Similarly, economic growth in some LMICs that results in significant income increases can make tobacco products more affordable, encouraging further shifts in consumption (Blecher & van Walbeek, 2004, 2009; Kostova et al., 2012). On the other hand, given the relatively low awareness of tobacco health risks in some LMICs (King et al., 2010), informational policy tools such as graphic warning labels and mass media antismoking campaigns may have a relatively larger impact on cessation in LMICs.

## DATA AND METHODS

The GATS is an ongoing nationally representative household survey of adults aged 15 years or older, which has been conducted in 14 countries between 2008 and 2010. It collects information on respondents' demographic characteristics, tobacco use, exposure to tobacco control policies, and tobacco marketing. In GATS, respondents who identify themselves as past smokers are asked to report how long it had been since they quit smoking. This allows us to describe measures of quitting within the past 12 months, which can be evaluated in the context of recent exposure to tobacco control policies and tobacco marketing, as well as to demographic characteristics. These contemporaneous quitting measures include an indicator of respondents who quit in the past 12 months (the ratio of the number of respondents who quit in the past 12 months to the number of smokers 12 months ago) and an indicator of smokers who made at least one unsuccessful quit attempt (the ratio of the number of current smokers who attempted to quit in the past 12 months to the number of current smokers).
Note that the sample of smokers 12 months ago consists of both current smokers and those who quit in the past 12 months, and that the combined quitting and quit attempt rates can be derived as a weighted average of the two indicators. We present these quitting measures along with smoking prevalence in the studied countries in Figure 1. Countries with relatively high smoking prevalence rates such as China and Russia tend to have relatively low recent quit rates, whereas Latin American countries have the highest quit rates.

Figure 1. Quitting and quit attempts in the past 12 months and smoking prevalence by country

The GATS asks smokers to report expenditures on their last purchase of cigarettes (and bidis for respondents from India and Bangladesh), as well as the number of sticks purchased. Using this information, the price paid per cigarette (bidi) can be derived for each smoker. Since individual-level prices and individual smoking intensity are likely to be simultaneously determined (heavier smokers are more likely to seek out lower prices while lower prices encourage heavier smoking), individual-level prices would be endogenous in models of smoking cessation. To address this simultaneity bias, our analyses use market-level prices, derived as the primary sampling unit (PSU)-specific consumption-weighted average cigarette (bidi) price paid per 20 sticks. This approach has been detailed in the International Agency for Research on Cancer (IARC) Handbook (IARC Handbooks of Cancer Prevention, Tobacco Control, 2008) and Economics of Tobacco Toolkit (WHO, 2010a). In order to make prices comparable across countries, we convert them into a common international dollar currency using purchasing power parity adjustment factors, and then into constant 2010 international dollars using the index of average consumer prices published by the International Monetary Fund (Table 1).

Table 1.
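As a concrete illustration of the PSU-level price construction described above, here is a hedged Python sketch. The record layout and the weighting detail (weighting each smoker's unit price by reported consumption) are my reading of the text, not the actual GATS processing code:

```python
def psu_price_per_20(purchases):
    """Consumption-weighted average price paid per 20 sticks in one PSU.

    purchases: list of (expenditure, sticks_bought, sticks_smoked_per_day)
    tuples, one per smoker in the PSU (field names are illustrative).
    """
    total_weight = sum(cons for _, _, cons in purchases)
    # each smoker's price per stick, weighted by how much they smoke
    unit_price = sum((spend / sticks) * cons
                     for spend, sticks, cons in purchases) / total_weight
    return 20 * unit_price

# Two smokers: 0.10/stick smoking 20/day, and 0.20/stick smoking 10/day
print(psu_price_per_20([(2.0, 20, 20), (2.0, 10, 10)]))  # about 2.67 per 20 sticks
```

The currency conversion steps (purchasing power parity, then constant 2010 international dollars) would be applied to the resulting PSU-level prices afterwards.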
Variable Descriptions and Definitions

Individual-level variables:
- Quit in past 12 months: Indicator equals 1 if the respondent has quit smoking in the past 12 months, 0 if the respondent smokes at the time of survey
- Quit attempt in past 12 months: Indicator equals 1 if the smoker at the time of survey attempted and failed to quit in the past 12 months, 0 otherwise
- Ever quit: Indicator equals 1 if the respondent has quit smoking, 0 if the respondent smokes at the time of survey
- Age: Binary indicators for four age categories: 15–24, 25–39, 40–64, 65+
- Education: Binary indicators for five education categories: no education/less than primary, primary, secondary, high school, college or higher
- Wealth: The fraction of GATS-surveyed household items (electricity, flush toilet, and any other surveyed assets) that the respondent has in their possession, weighted by the per capita gross domestic product of the respondent's country
- Household size: Number of household members
- Rural residence: Indicator equals 1 if the respondent lives in a rural area, 0 otherwise
- Indoor occupation: Indicator equals 1 if the respondent works indoors, 0 otherwise
- Outdoor occupation: Indicator equals 1 if the respondent works outdoors, 0 otherwise

PSU-level variables:
- Work-site smoking ban index: The average of individual responses that designate the presence of smoking restrictions at the respondent's work-site (0 = no restriction, 1 = some restriction, 2 = full restriction)
- Warning label index: The fraction of respondents who report noticing warning labels among those who have seen cigarette packs in the past 30 days, scaled from 1 to 10
- Antismoking information index(a): Out of a number of possible antismoking media outlets (newspapers or magazines, television, radio, billboard, and any other outlets), the fraction that each respondent was exposed to in the past 30 days was calculated. These individual-specific fractions were then averaged and scaled from 1 to 10.
- Tobacco promotion index(a): Out of a number of possible promotion approaches (free samples, clothing with brand names, and any other approaches), the fraction that each respondent observed in the past 30 days was calculated. These individual-specific fractions were then averaged and scaled from 1 to 10.
- Tobacco advertising index(a): Out of a number of possible advertising outlets (stores, television, newspapers or magazines, and any other outlets), the fraction that each respondent was exposed to tobacco advertising in the past 30 days was calculated. These individual-specific fractions were then averaged and scaled from 1 to 10.
- Cigarette price: Average price paid per 20 cigarettes in constant 2010 international dollars
- Bidi price: Average price paid per 20 bidis in constant 2010 international dollars

Note. GATS = Global Adult Tobacco Survey. (a) The indices are imputed for individuals as the average of indicators of the items listed. Taking the antismoking information index as the example, index = (newspapers or magazines + television + radio + billboard + any other outlets)/5, which is then aggregated into the PSU-level index.

Individual-level demographic controls include age, gender, education, rural residence, wealth, household size and occupation type (Tables 1 and 2).
Age is defined by binary indicator variables for four age categories (15–24, 25–39, 40–64, 65 and older). Education level is described through five categories: no education/less than primary, primary, secondary, high school, and college or higher. Occupation type is described through three categories: indoor, outdoor, and unemployed/unspecified. Wealth is measured from survey questions that inquire about the possession of certain personal and household items (electricity, flush toilet, fixed telephone, cellular phone, television, radio, refrigerator, car/bike/boat, moped/scooter/motorcycle, washing machine, and any other surveyed assets) and is defined as the fraction of surveyed items that the respondent has in their possession, weighted by the per capita gross domestic product of the respondent's country.

Table 2. Descriptive Statistics

Columns: BR, MX, UY, IN, BD, TH, CN, VN, PH, PL, RU, TR, UA, EG, All.

% Quit in past 12 months: 11.5(31.9) 15.9(36.5) 11.4(31.8) 3.10(17.3) 4.29(20.3) 5.34(22.5) 4.52(20.8) 7.57(26.5) 6.77(25.1) 7.43(26.2) 5.11(22.0) 9.85(29.8) 8.89(28.5) 5.50(22.8) 6.81(25.2)
% Quit attempts in past 12 months: 41.8(49.3) 46.2(49.9) 44.0(49.7) 30.5(46.0) 47.7(50.0) 47.4(49.9) 30.9(46.2) 51.7(50.0) 46.5(49.9) 30.4(46.0) 29.2(45.5) 41.1(49.2) 34.8(47.6) 38.0(48.6) 38.3(48.6)
Household size: 3.26(1.79) 4.53(2.13) 3.15(1.86) 5.14(2.37) 4.49(1.87) 3.53(1.69) 2.78(1.30) 3.89(1.64) 5.00(2.35) 3.35(1.61) 3.04(1.39) 4.11(2.06) 3.08(1.38) 4.80(2.42) 3.98(2.13)
% Rural: 17.9(38.4) 36.5(48.1) 33.7(47.3) 68.3(46.5) 52.4(50.0) 44.9(49.7) 62.1(48.5) 51.9(50.0) 60.2(49.0) 47.0(49.9) 46.0(49.8) 44.1(49.7) 47.2(49.9) 41.1(49.2) 48.0(50.0)
% Male: 56.8(49.5) 75.8(42.8) 57.2(49.5) 88.4(32.1) 96.5(18.3) 91.0(28.7) 93.8(24.1) 96.0(19.7) 82.8(37.7) 58.8(49.2) 78.3(41.2) 74.3(43.7) 84.0(36.7) 98.5(12.3) 81.4(38.9)
Wealth: 8.57(2.38) 9.63(2.72) 11.3(2.94) 1.22(0.82) 0.53(0.26) 6.55(1.75) 4.26(1.58) 2.82(1.25) 1.85(1.08) 16.2(2.74) 11.1(2.33) 11.5(1.55) 4.84(1.26) 4.81(1.01) 6.05(4.61)
% Indoor occupation: 22.6(41.8) 22.3(41.6) 30.2(45.9) 15.6(36.2) 16.8(37.4) 17.0(37.6) 24.7(43.2) 20.7(40.5) 11.3(31.7) 34.1(47.4) 41.5(49.3) 29.1(45.4) 27.7(44.7) 22.8(42.0) 22.9(42.0)
% Outdoor occupation: 36.6(48.2) 47.4(49.9) 33.3(47.1) 56.7(49.5) 74.7(43.5) 58.3(49.3) 41.4(49.3) 50.6(50.0) 65.9(47.4) 23.3(42.3) 32.2(46.7) 33.0(47.0) 28.0(44.9) 59.5(49.1) 47.1(49.9)
% Age 25–39: 33.1(47.1) 36.0(48.0) 33.5(47.2) 37.2(48.3) 42.8(49.5) 30.4(46.0) 21.2(40.9) 35.0(47.7) 39.4(48.9) 33.0(47.0) 33.8(47.3) 42.6(49.5) 37.4(48.4) 39.1(48.8) 34.9(47.7)
% Age 40–64: 44.7(49.7) 32.1(46.7) 42.1(49.4) 45.3(49.8) 40.0(49.0) 48.1(50.0) 58.2(49.3) 47.5(49.9) 37.5(48.4) 50.8(50.0) 44.1(49.7) 41.1(49.2) 41.8(49.3) 43.4(49.6) 44.9(49.7)
% Age 65+: 8.91(28.5) 7.02(25.6) 8.46(27.8) 9.25(29.0) 6.34(24.4) 11.9(32.4) 16.7(37.3) 8.22(27.5) 7.68(26.6) 6.09(23.9) 6.02(23.8) 4.81(21.4) 7.72(26.7) 6.80(25.2) 8.73(28.2)
% Primary school: 18.9(39.2) 25.2(43.4) 43.3(49.6) 29.5(45.6) 26.6(44.2) 53.8(49.8) 27.3(44.5) 26.8(44.3) 40.5(49.1) 13.6(34.2) 2.17(14.6) 51.6(50.0) 6.45(24.5) 18.1(38.5) 26.8(44.3)
% Secondary school: 39.3(48.8) 29.6(45.6) 21.5(41.1) 25.6(43.6) 20.4(40.3) 17.4(37.9) 39.1(48.8) 26.0(43.9) 16.9(37.5) 35.8(47.9) 6.99(25.5) 10.4(30.5) 35.9(47.9) 11.2(31.5) 24.5(43.0)
% High school: 19.4(39.6) 14.6(35.3) 15.3(36.0) 6.63(24.9) 2.83(16.6) 14.8(35.5) 16.8(37.4) 25.6(43.6) 19.7(39.8) 36.1(48.0) 69.3(46.1) 20.2(40.2) 41.5(49.2) 8.14(27.4) 20.8(40.6)
% College or greater: 6.85(25.3) 10.0(30.0) 6.17(24.1) 7.59(26.5) 4.16(20.0) 9.48(29.3) 8.81(28.3) 0.37(6.06) 19.0(39.3) 14.3(35.0) 21.3(41.0) 8.71(28.2) 16.0(36.6) 37.3(48.4) 12.1(32.6)
Antismoking information index: 5.80(0.74) 5.38(0.55) 4.72(0.69) 3.45(1.94) 2.50(1.19) 5.86(0.64) 2.85(1.48) 6.17(1.52) 4.75(2.02) 3.76(1.53) 2.75(1.33) 4.08(1.08) 2.90(1.48) 2.87(0.42) 4.10(1.85)
Tobacco promotion index: 0.32(0.20) 0.85(0.41) 0.61(0.35) 0.52(0.69) 2.20(1.05) 0.38(0.23) 0.27(0.34) 0.28(0.36) 1.58(1.32) 0.44(0.39) 1.20(0.99) 0.26(0.29) 0.61(0.74) 0.21(0.14) 0.61(0.78)
Tobacco advertising index: 3.74(1.06) 2.78(0.92) 1.61(0.66) 0.85(1.05) 1.88(1.27) 0.24(0.15) 0.45(0.41) 0.47(0.56) 3.73(1.82) 0.60(0.46) 2.80(1.65) 0.22(0.24) 1.61(1.21) 0.33(0.17) 1.54(1.65)
Warning label index: 7.92(0.87) 5.95(1.41) 8.80(0.94) 6.11(2.16) 5.81(1.53) 8.65(0.56) 6.25(1.58) 8.70(1.17) 8.33(1.64) 8.54(1.16) 8.48(1.59) 8.15(1.30) 7.97(1.90) 9.75(0.20) 7.64(1.92)
Work-site smoking ban index: 1.58(0.21) 1.56(0.35) – 1.07(0.63) 0.81(0.57) 1.46(0.21) 0.76(0.31) 0.80(0.52) 1.29(0.60) 1.32(0.30) 1.15(0.26) 1.27(0.50) 1.30(0.39) 0.84(0.24) 1.18(0.50)
Cigarette prices: 1.44(0.65) 3.90(1.64) 2.69(0.94) 2.79(1.76) 1.26(0.31) 1.83(0.49) 1.72(2.75) 2.50(1.17) 0.85(0.60) 4.36(1.01) 1.26(0.82) 3.29(0.79) 1.43(0.18) 4.61(6.75) 2.35(2.50)
Bidi prices: – – – 0.73(0.62) 0.29(0.15) – – – – – – – – – 0.66(0.29)
N: 7,915 2,164 1,573 11,967 2,333 5,184 4,200 2,445 2,970 2,610 5,066 2,996 2,631 4,397 58,451

Note. In the column headers, BR, MX, UY, IN, BD, TH, CN, VN, PH, PL, RU, TR, UA, and EG represent Brazil, Mexico, Uruguay, India, Bangladesh, Thailand, China, Vietnam, the Philippines, Poland, Russia, Turkey, Ukraine, and Egypt, respectively. The samples are restricted to current smokers and past-year quitters. For quit attempt rates, the denominators are current smokers. Standard deviations are in parentheses.

Besides individual demographic controls, our study employs a number of indices constructed from the individual responses of GATS participants. These capture exposure to tobacco control policies, exposure to antismoking media messaging, and exposure to tobacco marketing (Tables 1 and 2). Constructing these indices as PSU-level aggregates has a number of advantages over using the underlying individual-level exposure status.
First, individual exposure may be linked to quitting behavior through reverse causality: antismoking messaging may target, and be observed disproportionately by, people who are already more prone to quitting, while tobacco marketing may disproportionately target committed smokers who are less likely to quit in the first place. Aggregating individual exposure responses at the PSU level reduces this reverse-causality bias. Second, PSU-level aggregation reduces the misclassification bias that arises when some respondents underreport and others overreport personal exposure. Third, the PSU-level indices may also capture subnational differences in policy implementation and enforcement. The indices are defined as follows. The work-site smoking ban index is constructed as the PSU-level average of individual responses that designate the presence of smoking restrictions at the respondent's work-site (0, no restriction; 1, some restriction; 2, full restriction). The warning label index is constructed at the PSU level as the fraction of respondents who report noticing warning labels among those who have seen cigarette packs in the past 30 days, scaled from 1 to 10. For the antismoking information index, we first estimated, for each respondent, the fraction of media outlets (newspapers or magazines, television, radio, billboard, and any other outlets) that exposed the respondent to antismoking information in the past 30 days. These individual-specific fractions were then averaged across all respondents in a PSU and scaled from 1 to 10 to produce the index. For the tobacco promotion index, we first estimated, for each respondent, the fraction of promotion approaches (free samples, clothing with brand names, and any other approaches) that the respondent observed in the past 30 days.
As with the antismoking information index, these individual-specific fractions were then averaged across respondents in a PSU and scaled from 1 to 10 to produce the tobacco promotion index. The tobacco advertising index was constructed analogously: we first estimated the fraction of advertising outlets (stores, television, newspapers or magazines, and any other outlets) that exposed each respondent to tobacco advertising in the past 30 days, then averaged these fractions at the PSU level and scaled them from 1 to 10. Logistic regressions were used to examine the potential correlates of recent quitting (the probability of quitting in the past 12 months) and recent quit attempts (unsuccessful quit attempts in the past 12 months). For individual country estimates, the standard errors were clustered at the PSU level; in the pooled analysis, they were clustered at the country level. All models include individual demographic controls (age, gender, education, wealth, household size, rural residence, and occupation type), the PSU-level indices for antismoking information, tobacco promotion, tobacco advertising, and warning labels, and the prices paid for cigarettes. Bidi prices were included in the models for India and Bangladesh. To examine how workplace smoking bans may affect quitting depending on the type of employment, we included interaction terms between the workplace smoking ban index and the indicators for indoor and "other" occupation ("other" refers to outdoor and unemployed/unspecified occupation). Pooled models include country fixed effects to control for unmeasured country-specific factors that may affect cessation behavior. The analysis sample for the models of recent quitting consists of current smokers and those who quit in the past year; the sample for the models of recent quit attempts consists of current smokers only.
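The PSU-level index construction described above can be sketched as follows. The respondent data, PSU identifiers, and the multiplication by 10 used for the 1-to-10 scaling are illustrative assumptions for this sketch, not GATS specifications.

```python
# Illustrative sketch of the PSU-level index construction described above:
# each respondent's fraction of antismoking media outlets (newspapers or
# magazines, television, radio, billboard, other) seen in the past 30 days
# is averaged within the respondent's PSU and then scaled. The respondent
# records below are invented, and multiplying the mean fraction by 10 is
# an assumed scaling rather than the paper's exact formula.
from collections import defaultdict

# (psu_id, [0/1 exposure to each of the 5 outlet types]) per respondent
respondents = [
    (101, [1, 1, 0, 0, 0]),  # saw 2 of 5 outlets -> individual fraction 0.4
    (101, [0, 1, 0, 0, 0]),  # 1 of 5 -> 0.2
    (202, [1, 1, 1, 1, 1]),  # 5 of 5 -> 1.0
    (202, [0, 0, 0, 0, 0]),  # 0 of 5 -> 0.0
]

# Collect individual-specific fractions by primary sampling unit (PSU)
psu_fractions = defaultdict(list)
for psu_id, outlets in respondents:
    psu_fractions[psu_id].append(sum(outlets) / len(outlets))

# PSU-level antismoking information index: mean individual fraction x 10
antismoking_index = {
    psu_id: 10 * sum(fracs) / len(fracs)
    for psu_id, fracs in psu_fractions.items()
}
```

Each smoker would then be assigned the index value of his or her PSU as a regressor, which is the aggregation step that mitigates the reverse-causality and misclassification concerns discussed above.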
## RESULTS

In Table 2, we list the means of the outcome variables and their potential correlates by country. Warning labels are most often observed in Egypt, and exposure to antismoking mass-media messages is highest in Vietnam. Smokers are most frequently exposed to work-site smoking bans in Brazil and Mexico (countries with relatively high quit rates), while smokers' exposure to tobacco advertising and promotion is greatest in Bangladesh, Russia, and the Philippines (countries with relatively low quit rates). Although the associations between recent quitting and its correlates vary by country, several patterns emerge (Table 3). Men and those older than 24 years are less likely to be former smokers in all countries. Rural smokers and those with more education are more likely to quit in most countries. The association between wealth and quitting varies across countries, with greater wealth associated with more quitting in Bangladesh, Brazil, Uruguay, Russia, and Ukraine but less quitting in India and Turkey. This finding suggests that wealthier smokers in a majority of LMICs may have more incentives to quit, as shown theoretically in the health capital model developed by Grossman (1972) and demonstrated empirically by others (Fagan et al., 2007; Siahpush, McNeill, Borland, & Fong, 2006). Given the intention to quit, wealthier smokers also tend to have more access to professional services and drugs that help quitting (Kotz & West, 2009).

Table 3.
Country-Specific Models of Recent Quitting (Quitting in the Past 12 Months)

Each row reports the OR followed by the 95% CI for each country in the panel, with country cells separated by semicolons.

Panel 1: BR (N = 7,915); MX (N = 2,164); UY (N = 1,573); IN (N = 11,967); BD (N = 2,333)

Household size: 0.97 0.93–1.01; 1.07** 1.01–1.13; 0.93 0.84–1.02; 1.00 0.95–1.05; 1.08* 0.99–1.18
Rural: 1.14 0.92–1.43; 1.13 0.82–1.57; 1.58** 1.09–2.41; 0.90 0.70–1.18; 0.81 0.40–1.65
Male: 0.83** 0.71–0.97; 0.69** 0.49–0.97; 0.70** 0.48–1.00; 0.95 0.58–1.55; 1.02 0.36–2.90
Wealth: 1.05** 1.01–1.09; 0.98 0.92–1.04; 1.09** 1.02–1.16; 0.79* 0.61–1.02; 3.26** 1.17–9.12
Indoor occupation: 3.85** 1.11–13.4; 1.12 0.21–6.06; 0.81 0.54–1.22; 0.63 0.30–1.35; 0.41 0.12–1.38
Outdoor occupation: 0.72*** 0.60–0.86; 0.88 0.63–1.24; 0.87 0.56–1.36; 0.66** 0.47–0.92; 0.47** 0.24–0.91
Age 25–39: 0.57*** 0.47–0.70; 0.64*** 0.46–0.90; 0.58** 0.37–0.90; 0.69* 0.45–1.04; 0.28*** 0.15–0.52
Age 40–64: 0.41*** 0.33–0.51; 0.47*** 0.34–0.66; 0.49*** 0.32–0.77; 0.56** 0.35–0.89; 0.52** 0.29–0.95
Age 65+: 0.46*** 0.33–0.64; 0.61* 0.34–1.08; 0.48** 0.24–0.95; 0.92 0.48–1.78; 0.87 0.40–1.89
Education, primary: 0.90 0.69–1.18; 1.20 0.80–1.79; 2.01** 1.05–3.84; 1.46** 1.06–2.03; 0.76 0.44–1.29
Education, secondary: 0.82** 0.67–0.99; 1.27 0.83–1.95; 1.35 0.60–3.03; 1.80*** 1.27–2.56; 0.75 0.40–1.38
Education, high school: 0.89 0.71–1.11; 1.09 0.65–1.81; 2.37** 1.02–5.49; 1.26 0.75–2.11; 0.85 0.26–2.75
Education, college or above: 1.14 0.82–1.57; 1.18 0.63–2.20; 1.01 0.34–2.98; 2.34*** 1.56–3.52; 1.28 0.47–3.45
Antismoking information: 0.97 0.87–1.07; 0.86 0.64–1.15; 0.95 0.72–1.25; 0.99 0.91–1.08; 1.20 0.96–1.49
Tobacco promotion: 0.84 0.57–1.22; 0.79 0.49–1.29; 0.90 0.55–1.48; 1.15 0.91–1.45; 0.81* 0.65–1.02
Tobacco advertising: 1.03 0.96–1.11; 1.11 0.89–1.38; 1.03 0.77–1.36; 0.94 0.80–1.11; 0.99 0.80–1.22
Warning label: 1.07 0.97–1.17; 0.96 0.85–1.08; 0.84* 0.70–1.01; 0.94 0.87–1.01; 0.99 0.82–1.20
Work ban × indoor occupation: 0.51* 0.25–1.02; 1.10 0.39–3.11; –; 1.29 0.71–2.34; 1.34 0.47–3.78
Work ban × other occupation: 1.39 0.93–2.09; 1.23 0.84–1.81; –; 1.17 0.97–1.42; 0.78 0.51–1.18
Cigarette price: 0.98 0.87–1.10; 1.01 0.95–1.08; 1.04 0.83–1.32; 0.95 0.87–1.02; 0.84 0.23–3.08
Bidi price: –; –; –; 1.07 0.79–1.46; 5.06*** 2.05–12.5

Panel 2: PL^a (N = 2,604); RU^a (N = 5,059); TR (N = 2,996); UA (N = 2,623); EG (N = 4,395)

Household size: 1.00 0.88–1.13; 0.89** 0.81–0.99; 0.95 0.89–1.02; 0.96 0.88–1.05; 0.96 0.91–1.02
Rural: 1.02 0.67–1.53; 1.30* 0.98–1.73; 1.39* 1.00–1.94; 1.23 0.70–2.14; 1.35** 1.05–1.74
Male: 0.96 0.72–1.28; 0.73** 0.54–0.97; 0.66*** 0.49–0.90; 0.55*** 0.40–0.76; 0.89 0.35–2.29
Wealth: 1.04 0.97–1.11; 1.07* 1.00–1.14; 0.93* 0.87–1.01; 1.24*** 1.09–1.41; 1.04 0.92–1.18
Indoor occupation: 0.83 0.16–4.19; 1.50 0.43–5.26; 0.49 0.18–1.39; 0.85 0.28–2.52; 1.04 0.33–3.30
Outdoor occupation: 0.80 0.46–1.40; 1.01 0.68–1.51; 0.54*** 0.37–0.78; 0.68** 0.47–0.98; 0.57*** 0.40–0.80
Age 25–39: 0.90 0.60–1.34; 0.46*** 0.34–0.63; 0.68* 0.45–1.03; 0.61** 0.39–0.94; 0.90 0.57–1.41
Age 40–64: 0.38*** 0.24–0.62; 0.41*** 0.30–0.56; 0.87 0.59–1.30; 0.47*** 0.31–0.72; 1.25 0.78–1.99
Age 65+: 0.68 0.31–1.46; 0.49** 0.25–0.96; 1.09 0.61–1.96; 0.48** 0.24–0.94; 1.20 0.65–2.24
Education, primary: 1.00 (ref.); 1.00 (ref.); 1.06 0.67–1.66; 0.60 0.04–8.49; 0.92 0.62–1.36
Education, secondary: 0.95 0.51–1.77; 3.49 0.78–15.7; 1.01 0.55–1.84; 0.57 0.05–6.72; 0.83 0.48–1.42
Education, high school: 1.29 0.63–2.65; 2.49 0.58–10.6; 1.01 0.58–1.75; 0.50 0.04–6.34; 0.71 0.41–1.24
Education, college or above: 1.35 0.61–3.00; 2.92 0.66–12.8; 1.29 0.69–2.42; 0.53 0.04–7.11; 1.23 0.86–1.75
Antismoking information: 1.26*** 1.13–1.42; 1.07 0.96–1.19; 0.94 0.82–1.07; 1.05 0.97–1.15; 1.01 0.74–1.39
Tobacco promotion: 0.84 0.58–1.19; 1.21*** 1.05–1.39; 0.88 0.56–1.38; 0.92 0.74–1.14; 0.75 0.30–1.88
Tobacco advertising: 1.12 0.81–1.55; 0.95 0.86–1.05; 1.32 0.71–2.47; 0.93 0.80–1.09; 1.50 0.62–3.66
Warning label: 1.04 0.89–1.22; 1.06 0.96–1.17; 0.98 0.88–1.09; 1.03 0.95–1.12; 3.20*** 1.53–6.68
Work ban × indoor occupation: 0.67 0.22–2.03; 1.38 0.71–2.69; 1.52 0.79–2.91; 0.95 0.52–1.76; 0.68 0.22–2.08
Work ban × other occupation: 0.65* 0.41–1.06; 1.68 0.82–3.46; 1.01 0.73–1.41; 1.09 0.74–1.60; 1.20 0.57–2.51
Cigarette price: 1.04 0.95–1.15; 0.89 0.72–1.10; 1.13 0.96–1.33; 0.86 0.19–3.80; 0.98 0.95–1.01

Panel 3: TH (N = 5,184); CN (N = 4,197); VN (N = 2,444); PH (N = 2,969)

Household size: 0.94 0.87–1.01; 1.11* 0.98–1.26; 0.95 0.86–1.06; 1.02 0.95–1.09
Rural: 1.18 0.89–1.56; 1.15 0.73–1.79; 1.15 0.79–1.68; 1.01 0.72–1.42
Male: 0.66* 0.43–1.02; 0.71 0.43–1.18; 0.49* 0.24–1.01; 0.73 0.48–1.11
Wealth: 1.05 0.98–1.13; 0.98 0.86–1.12; 1.09 0.93–1.28; 1.10 0.93–1.29
Indoor occupation: 2.07 0.29–14.6; 0.16** 0.03–0.87; 0.62 0.28–1.39; 0.55 0.16–1.88
Outdoor occupation: 0.56*** 0.42–0.74; 0.61*** 0.43–0.86; 0.70** 0.50–0.98; 0.73 0.50–1.07
Age 25–39: 0.55*** 0.38–0.82; 0.71 0.35–1.45; 0.60** 0.37–0.97; 0.56*** 0.37–0.87
Age 40–64: 0.66** 0.45–0.95; 0.60 0.31–1.18; 0.57** 0.36–0.89; 0.78 0.51–1.19
Age 65+: 0.71 0.43–1.17; 1.06 0.49–2.31; 0.67 0.36–1.26; 1.18 0.64–2.15
Education, primary: 1.13 0.60–2.14; 0.85 0.54–1.36; 1.16 0.67–1.99; 1.78 0.69–4.60
Education, secondary: 1.14 0.56–2.31; 0.64 0.35–1.17; 1.34 0.80–2.27; 1.54 0.55–4.29
Education, high school: 1.31 0.63–2.72; 0.97 0.48–1.95; 1.64* 0.93–2.91; 1.49 0.53–4.20
Education, college or above: 1.48 0.66–3.32; 0.98 0.40–2.40; 1.65 0.20–13.6; 2.16 0.76–6.10
Antismoking information: 1.15 0.96–1.39; 1.02 0.83–1.24; 1.00 0.90–1.13; 1.00 0.92–1.09
Tobacco promotion: 1.49 0.91–2.45; 1.08 0.59–1.97; 0.82 0.50–1.37; 0.81*** 0.71–0.91
Tobacco advertising: 1.07 0.46–2.46; 0.72 0.45–1.17; 0.89 0.65–1.24; 1.09 0.98–1.20
Warning label: 0.91 0.77–1.08; 0.99 0.85–1.15; 1.14 0.97–1.34; 0.98 0.87–1.09
Work ban × indoor occupation: 0.70 0.22–2.22; 6.37** 1.28–31.7; 1.55 0.79–3.05; 1.35 0.67–2.72
Work ban × other occupation: 1.41 0.72–2.73; 1.90* 0.94–3.81; 1.07 0.78–1.49; 0.93 0.70–1.23
Cigarette price: 1.01 0.76–1.36; 1.02 0.99–1.04; 1.02 0.87–1.19; 1.20*** 1.04–1.38

Note. ^a There are very few respondents in Poland and Russia who have not received formal education; therefore the base category of the education indicators is primary education for Poland and Russia. OR = odds ratio; CI = confidence interval. *.05 < p ≤ .1, **.01 < p ≤ .05, ***p ≤ .01.

Higher cigarette prices are significantly associated with increased odds of being a recent quitter in the Philippines, and higher bidi prices are significantly associated with increased odds of being a recent quitter in Bangladesh, with a marginally significant association for India. Greater exposure to mass-media antismoking information is significantly associated with increased odds of quitting in Poland.
Greater exposure to tobacco advertising and promotion is significantly associated with less quitting in Bangladesh and the Philippines, and greater awareness of warning labels is associated with higher quit rates in Egypt. Mixed results are obtained for exposure to indoor work-site smoking bans. The estimates indicate that working in an outdoor occupation is associated with lower odds of quitting in most countries, and working indoors is associated with higher odds of quitting in Brazil but lower odds in China. Exposure to work-site smoking bans is associated with increased odds of quitting for Chinese smokers who work indoors, and the association is particularly strong; this may reflect social norms in China around smoking as a way of networking at work. In the pooled models of recent quitting (Table 4), living in rural areas, having higher education, and greater wealth are associated with higher odds of being a recent quitter, whereas being male, being over 24 years old, and working in an outdoor occupation are associated with lower odds. Although exposure to work-site smoking bans increases the odds of being a recent quitter, greater exposure to cigarette advertising is associated with lower odds of quitting in the Southeast Asian region. Similarly, greater exposure to tobacco promotion is associated with lower odds of quitting in the Latin America and Asia Pacific regions. The warning label index is associated with higher odds of quitting in European countries but lower odds of quitting in Southeast Asia. Table 4.
Pooled Models of Recent Quitting (Quitting in the Past 12 Months)

| Variable | Latin America (N = 11,652) OR (95% CI) | Southeast Asia (N = 19,484) OR (95% CI) | Asia Pacific (N = 9,615) OR (95% CI) | Europe (N = 13,303) OR (95% CI) | All (N = 58,451) OR (95% CI) |
| --- | --- | --- | --- | --- | --- |
| Household size | 1.00 (0.93–1.07) | 0.99 (0.95–1.03) | 1.01 (0.96–1.07) | 0.95*** (0.93–0.98) | 0.99 (0.96–1.02) |
| Rural | 1.21** (1.02–1.45) | 1.05 (0.91–1.20) | 1.12* (1.00–1.26) | 1.28*** (1.16–1.41) | 1.18*** (1.10–1.26) |
| Male | 0.79*** (0.72–0.87) | 0.84 (0.63–1.13) | 0.68*** (0.58–0.81) | 0.72*** (0.59–0.88) | 0.78*** (0.71–0.86) |
| Wealth | 1.04 (0.99–1.09) | 0.99 (0.88–1.12) | 1.04 (0.97–1.10) | 1.05** (1.01–1.10) | 1.04** (1.00–1.07) |
| Indoor occupation | 2.41* (0.85–6.82) | 0.66*** (0.56–0.78) | 0.43** (0.19–0.99) | 0.80 (0.49–1.32) | 0.78 (0.57–1.09) |
| Outdoor occupation | 0.77*** (0.67–0.89) | 0.60*** (0.52–0.70) | 0.68*** (0.61–0.75) | 0.77** (0.61–0.97) | 0.70*** (0.64–0.77) |
| Age 25–39 | 0.59*** (0.55–0.63) | 0.58*** (0.43–0.79) | 0.61*** (0.56–0.67) | 0.59*** (0.46–0.76) | 0.58*** (0.53–0.64) |
| Age 40–64 | 0.43*** (0.38–0.49) | 0.60*** (0.53–0.68) | 0.65*** (0.53–0.81) | 0.47*** (0.32–0.69) | 0.52*** (0.43–0.62) |
| Age 65+ | 0.48*** (0.41–0.56) | 0.83** (0.70–0.98) | 1.08 (0.84–1.38) | 0.58*** (0.39–0.87) | 0.68*** (0.53–0.87) |
| Education: Primary | 1.03 (0.80–1.33) | 1.27** (1.02–1.58) | 1.22 (0.86–1.73) | – | 1.04 (0.86–1.26) |
| Education: Secondary | 0.90 (0.70–1.16) | 1.43*** (1.09–1.88) | 1.07 (0.62–1.82) | 0.98 (0.82–1.16) | 1.01 (0.80–1.27) |
| Education: High school | 0.99 (0.74–1.33) | 1.35*** (1.15–1.60) | 1.38 (0.88–2.16) | 0.96 (0.83–1.11) | 1.07 (0.87–1.32) |
| Education: College or above | 1.10*** (1.04–1.16) | 1.87*** (1.53–2.28) | 1.54* (0.98–2.42) | 1.11* (1.00–1.24) | 1.26** (1.03–1.52) |
| PSU-level: Antismoking information | 0.96* (0.92–1.01) | 0.99 (0.93–1.05) | 1.01 (1.00–1.02) | 1.08 (0.96–1.21) | 1.03 (0.98–1.08) |
| PSU-level: Tobacco promotion | 0.78*** (0.74–0.82) | 1.06 (0.89–1.27) | 0.83*** (0.79–0.87) | 1.08 (0.90–1.30) | 0.98 (0.85–1.12) |
| PSU-level: Tobacco advertising | 1.05*** (1.02–1.08) | 0.96* (0.92–1.00) | 1.04 (0.94–1.16) | 0.98 (0.92–1.04) | 1.01 (0.97–1.04) |
| PSU-level: Warning label | 0.98 (0.88–1.09) | 0.94*** (0.93–0.95) | 1.00 (0.94–1.07) | 1.03* (1.00–1.06) | 0.99 (0.95–1.03) |
| Work ban × indoor occupation | 0.66 (0.35–1.24) | 1.20*** (1.06–1.35) | 1.85* (0.93–3.68) | 1.16 (0.90–1.49) | 1.18 (0.94–1.46) |
| Work ban × other occupation | 1.29*** (1.17–1.43) | 1.12* (1.00–1.25) | 1.09 (0.84–1.41) | 1.05 (0.87–1.25) | 1.13*** (1.04–1.22) |
| Cigarette price | 1.01 (0.99–1.03) | 0.94*** (0.92–0.97) | 1.03*** (1.01–1.05) | 0.99 (0.90–1.09) | 0.99 (0.97–1.01) |
| Bidi price | – | 1.07*** (1.04–1.11) | – | – | – |

Note. All regressions also control for country fixed effects. "×" denotes an interaction term. In the last column, the sample includes all 14 countries. The pooled sample of Latin America includes Mexico, Brazil, and Uruguay. The pooled sample of Southeast Asia includes India, Bangladesh, and Thailand, where the bidi price for Thailand is replaced by the mean of the bidi price for India and Bangladesh. The pooled sample of Asia Pacific includes China, Vietnam, and the Philippines. The pooled sample of Europe includes Poland, Russia, Ukraine, and Turkey. The information on indoor work-site smoking policy in Uruguay is not available, and its work-site smoking ban index is replaced by the mean index of the other countries in the pooled models of Latin America and all countries. CI = confidence interval; OR = odds ratio. *.05 < p ≤ .1, **.01 < p ≤ .05, ***p ≤ .01.

The associations between individual demographic characteristics and the probability of making a recent quit attempt are quite similar to those observed in the models of recent quitting (Table 5). Living in rural areas is associated with higher odds of quit attempts in most regions other than Southeast Asia. Having formal education is associated with higher odds of quit attempts in most regions other than Latin America. Being male and being older than 24 years are associated with lower odds in all models. Greater exposure to antismoking mass-media messages, higher cigarette prices, and greater exposure to warning labels are significantly associated with increased odds of making a recent quit attempt.
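As a reading aid for the odds ratios above (this illustration is not part of the original analysis, and the 10% baseline is an assumed value), the snippet below converts an odds ratio into an implied probability: with the pooled rural OR of 1.18, a baseline quit probability of 10% corresponds to roughly 11.6%.

```python
def apply_odds_ratio(p_baseline, odds_ratio):
    """Return the probability implied by multiplying baseline odds by an OR."""
    odds = p_baseline / (1.0 - p_baseline)   # convert probability to odds
    new_odds = odds * odds_ratio             # the odds ratio scales the odds
    return new_odds / (1.0 + new_odds)       # convert back to a probability

# Pooled rural OR for recent quitting applied to an assumed 10% baseline:
print(round(apply_odds_ratio(0.10, 1.18), 4))  # → 0.1159
```

Because odds and probabilities diverge as probabilities grow, an OR should not be read directly as a relative risk except when the outcome is rare.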
Although working indoors is associated with higher odds of a quit attempt, exposure to work-site smoking bans is also associated with increased quit attempts among smokers who do not usually work indoors.

Table 5. Pooled Models of Quit Attempts (Unsuccessful Quit Attempt in the Past 12 Months)

| Variable | Latin America (N = 10,215) OR (95% CI) | Southeast Asia (N = 18,606) OR (95% CI) | Asia Pacific (N = 6,607) OR (95% CI) | Europe (N = 12,307) OR (95% CI) | All (N = 51,890) OR (95% CI) |
| --- | --- | --- | --- | --- | --- |
| Household size | 0.99 (0.96–1.03) | 1.01 (0.97–1.06) | 0.99 (0.96–1.02) | 1.01 (0.98–1.04) | 1.00 (0.98–1.02) |
| Rural | 1.02 (0.81–1.29) | 0.95* (0.90–1.00) | 1.26*** (1.13–1.39) | 1.19*** (1.11–1.28) | 1.10** (1.01–1.19) |
| Male | 0.75*** (0.69–0.82) | 0.87*** (0.79–0.96) | 0.78** (0.62–0.99) | 0.86** (0.75–0.99) | 0.83*** (0.77–0.89) |
| Wealth | 1.00 (0.99–1.00) | 0.99 (0.89–1.10) | 1.02 (0.95–1.10) | 1.03** (1.00–1.05) | 1.00 (0.98–1.02) |
| Indoor occupation | 1.79*** (1.46–2.19) | 1.33 (0.93–1.88) | 0.97 (0.68–1.38) | 1.13 (0.82–1.55) | 1.18** (1.00–1.39) |
| Outdoor occupation | 1.00 (0.95–1.05) | 1.18*** (1.08–1.28) | 0.85*** (0.79–0.91) | 0.95 (0.86–1.04) | 1.00 (0.89–1.13) |
| Age 25–39 | 1.01 (0.96–1.06) | 0.89** (0.80–1.00) | 1.01 (0.84–1.22) | 0.75*** (0.68–0.82) | 0.89*** (0.82–0.97) |
| Age 40–64 | 0.83*** (0.73–0.95) | 0.90 (0.78–1.03) | 0.91 (0.67–1.23) | 0.65*** (0.53–0.80) | 0.81*** (0.72–0.92) |
| Age 65+ | 0.67*** (0.57–0.78) | 0.88 (0.64–1.23) | 0.82*** (0.71–0.94) | 0.55*** (0.41–0.73) | 0.75*** (0.61–0.92) |
| Education: Primary | 1.10 (0.84–1.44) | 1.05 (0.98–1.13) | 1.16* (0.98–1.38) | – | 1.09** (1.02–1.17) |
| Education: Secondary | 1.10 (0.96–1.26) | 1.30*** (1.17–1.43) | 1.50*** (1.24–1.81) | 1.01 (0.99–1.04) | 1.23*** (1.14–1.32) |
| Education: High school | 1.00 (0.75–1.34) | 1.22*** (1.12–1.33) | 1.53*** (1.18–1.98) | 0.97 (0.80–1.17) | 1.17*** (1.06–1.30) |
| Education: College or above | 0.81* (0.66–1.00) | 1.25*** (1.11–1.40) | 1.39*** (1.21–1.58) | 0.95 (0.74–1.23) | 1.12** (1.00–1.25) |
| PSU-level: Antismoking information | 1.05* (1.00–1.11) | 1.01 (0.94–1.07) | 1.09*** (1.07–1.12) | 1.20*** (1.11–1.30) | 1.08** (1.00–1.17) |
| PSU-level: Tobacco promotion | 0.98 (0.95–1.01) | 1.09* (0.98–1.20) | 1.04 (0.99–1.10) | 1.03** (1.00–1.05) | 1.05** (1.00–1.10) |
| PSU-level: Tobacco advertising | 1.01 (0.98–1.04) | 1.02 (0.99–1.05) | 1.02 (0.94–1.10) | 0.97 (0.91–1.04) | 1.01 (0.98–1.04) |
| PSU-level: Warning label | 0.99 (0.96–1.02) | 1.04*** (1.02–1.07) | 1.00 (0.96–1.04) | 1.05** (1.01–1.09) | 1.03*** (1.01–1.05) |
| Work ban × indoor occupation | 0.75*** (0.63–0.89) | 1.09 (0.90–1.32) | 1.02 (0.72–1.45) | 0.81 (0.62–1.05) | 0.97 (0.83–1.12) |
| Work ban × other occupation | 1.11*** (1.09–1.13) | 1.21*** (1.08–1.37) | 0.98 (0.87–1.10) | 0.96 (0.89–1.05) | 1.11* (0.99–1.26) |
| Cigarette price | 0.96* (0.92–1.00) | 0.99*** (0.98–1.00) | 1.01 (0.99–1.03) | 1.04*** (1.04–1.05) | 1.01*** (1.00–1.02) |
| Bidi price | – | 1.17*** (1.11–1.22) | – | – | – |

Note. See Note for Table 4. OR = odds ratio; CI = confidence interval. *.05 < p ≤ .1, **.01 < p ≤ .05, ***p ≤ .01.

## DISCUSSION

In this study, we use GATS data from 14 countries to describe the factors associated with quitting and quit attempts. We find that living in rural areas, having more education, and being wealthier are associated with higher odds of being a quitter and of trying to quit. Men are less likely to have quit or tried to quit than women. Greater exposure to work-site smoking bans is associated with higher odds of recent quitting. Whereas higher cigarette prices are associated with a higher probability of quit attempts, higher bidi prices are associated with higher probabilities of both quitting and quit attempts in Southeast Asian countries where bidi use is common. Greater exposure to work-site bans is strongly associated with quitting in China, where smoking at indoor work-sites is prevalent. Our estimates also call attention to the potential influence of tobacco marketing in the Asia Pacific and Latin American regions, where greater exposure to tobacco promotion is linked to a reduced likelihood of quitting. Our findings are in line with the existing but limited literature that investigates quitting and quit attempts in LMICs.
For example, we consistently find that rates of quitting and quit attempts in high tobacco-using LMICs are low, as documented in a series of reports using GATS (http://nccd.cdc.gov/gtssdata/Ancillary/DataReports) and reports from the Bloomberg Global Initiative to Reduce Tobacco Use (www.tobaccofreeunion.org/content/en/217). In this study, we further explore how quitting and quit attempts are associated with individual and environmental risk factors. Our findings show that although the associations between these factors and quitting may vary by country, the pooled analyses that account for unobserved country-specific attributes indicate that tobacco control policies such as work-site smoking bans, warning labels, and antismoking media messaging are linked to either quitting or quit attempts. Meanwhile, not all LMICs are at the same stage of adopting these tobacco control policies, nor do they apply them at the same level, so there are substantial differences across countries in exposure to them. The potential of these policies to encourage quitting may be especially relevant to policy makers in countries where the policies do not yet reach enough smokers. There are some limitations to this analysis. We use cross-sectional data from 14 countries to study cessation. The cessation measures, prices, and indices for policy and marketing exposures are constructed from self-reported information. In addition, these prices and indices are contemporaneous measures; most previous literature has shown that it is the change in prices or policies over time that drives quitting, and we cannot estimate the effects of such changes in this study. Nevertheless, even though most tobacco control policies in LMICs were adopted only recently, we find their associations with quitting to be significant and strong when measured contemporaneously.
This study takes an initial step in investigating the determinants of quitting across LMICs, but future research that employs longitudinal surveys in many countries is needed to better understand the effectiveness and cost-effectiveness of tobacco control interventions in LMICs.

## FUNDING

Funding for the Global Adult Tobacco Survey (GATS) is provided by the Bloomberg Initiative to Reduce Tobacco Use, a program of Bloomberg Philanthropies. The governments of Brazil and India contributed to GATS implementation in their respective countries. The Bill and Melinda Gates Foundation provided additional funding for GATS implementation in China.

## DECLARATION OF INTERESTS

The conclusions in this paper are those of the authors and do not necessarily represent the official position of their affiliated organizations.

## ACKNOWLEDGMENTS

We thank Nahleen Zahra, Pavel Dramski, and William Ridgeway for excellent research assistance. The findings of this study are those of the authors and do not represent the official position of the Centers for Disease Control and Prevention.
# Are there any collisions in CSMA? If so, why?

Are there any collisions in CSMA? If so, why?

1. Explain the difference between CSMA/CD and CSMA/CA.
2. Is it OK to use a router instead of a switch? Explain your reasons.
The security to [party 40]

The 40th party of the new year is being held at a local mansion. The host is very rich, and his success is due to one thing: his famous recipe for Linguini! So rich indeed, that 39 parties have already occurred in a span of 13 days. The only guests that may attend are people who correctly reply to the guard at the door. Here's where you come in: you and a friend are trying to steal this recipe. You sneak by and listen to the passwords.

For $$1 \le n \le 9:$$ the $$n$$th guest arrives, whereupon the guard, holding a mirror, says $$n,$$ the guest says $$f(n),$$ and the guest is let in. Note: the 9th guest happens to be your friend. Your hearing allows you to pick up that $$3, 6, 8, 7, 10, 10, 8, 9, 4$$ are $$f(1), \dots, f(9)$$ respectively.

It's getting late, about 7 or 8. So you pull up to the guard and he's holding a pair of dice. If anything, you could say this mansion is rare. But you don't say anything yet, for the guard has not given you your number yet. Now the guard says "10". How do you respond, given that the only viable option is to utter another natural number?

• And here I was taking the Fresh Prince hint. James Avery (Phil Banks) was in a film called "The Linguini Incident". I figured my friend would be Jazz, and thought the 4 that he said could be related to the letters in his name. So Viv would be 3, and then it all fell apart. Then I started looking at episodes starting with Ep. 40. Wasted hours! – Chris Cudmore Jan 15 '19 at 2:56

You should say 3, because $$f(n)$$ is the Scrabble value of the number $$n$$ when written in English. For example, $$f(8)$$ is the Scrabble score of EIGHT, which is $$1+1+2+4+1 = 9$$.

• $$\,$$Correct. – Display name Jan 14 '19 at 23:10
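The accepted answer can be checked mechanically. This sketch uses the standard English Scrabble letter values and the usual spellings of the number words, and reproduces both the overheard sequence and the reply:

```python
# Standard English Scrabble letter values.
SCRABBLE = {**dict.fromkeys("AEILNORSTU", 1), **dict.fromkeys("DG", 2),
            **dict.fromkeys("BCMP", 3), **dict.fromkeys("FHVWY", 4),
            "K": 5, **dict.fromkeys("JX", 8), **dict.fromkeys("QZ", 10)}

WORDS = ["ONE", "TWO", "THREE", "FOUR", "FIVE",
         "SIX", "SEVEN", "EIGHT", "NINE", "TEN"]

def f(n):
    """Scrabble score of the English word for n (1 <= n <= 10)."""
    return sum(SCRABBLE[c] for c in WORDS[n - 1])

print([f(n) for n in range(1, 10)])  # → [3, 6, 8, 7, 10, 10, 8, 9, 4]
print(f(10))                         # → 3
```

The first print matches the nine overheard replies exactly, and TEN scores T+E+N = 1+1+1 = 3, confirming the answer.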
## Polynomial Time Boolean Satisfiability

Points: 15 (partial)
Time limit: 7.0s
Memory limit: 512M

The boolean satisfiability problem is a famous problem in computer science. You are given $N$ booleans, and a list of $N$ numbers. A boolean is satisfied if a subset of the numbers sums to less than or equal to $K$. What is the maximum number of booleans that can be satisfied?

Note: the empty subset sums up to 0.

#### Input Specification

On the first line, there are two integers, $N$ and $K$, separated by a space. On the second line is a space separated list of the $N$ integers. Each line is followed by one line feed character (ASCII code 0x0a). There are no trailing spaces or empty lines.

#### Output Specification

One integer, the number of subsets that sum to less than or equal to $K$.

#### Constraints

For all subset sums ,

For 2 of the 15 available marks,

#### Sample Input 1

10 10
8 2 10 10 6 1 10 10 1 5

#### Sample Output 1

33

#### Sample Input 2

5 5
1 2 3 4 5

#### Sample Output 2

10
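Counting subsets whose sum is at most $K$ can be done with a knapsack-style dynamic program over sums, which is feasible when $K$ is modest. The exact bounds were lost from the statement above, so treat this as a sketch (the function name is mine); it reproduces both samples:

```python
def count_subsets_at_most(nums, k):
    # dp[s] = number of subsets of the items seen so far whose sum is exactly s
    dp = [0] * (k + 1)
    dp[0] = 1  # the empty subset sums to 0
    for x in nums:
        if x > k:
            continue  # an item larger than k can never appear in a counted subset
        for s in range(k, x - 1, -1):  # iterate downward so each item is used at most once
            dp[s] += dp[s - x]
    return sum(dp)  # total number of subsets with sum <= k

# The two samples from the problem statement:
print(count_subsets_at_most([8, 2, 10, 10, 6, 1, 10, 10, 1, 5], 10))  # → 33
print(count_subsets_at_most([1, 2, 3, 4, 5], 5))                      # → 10
```

The counts grow exponentially in $N$, so a language with native big integers (as here) avoids overflow; in C++ the dp entries would need a big-integer type or the problem's stated output bound.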
Utrecht, Netherlands

Utrecht University is a university in Utrecht, the Netherlands. It is one of the oldest universities in the Netherlands and one of the largest in Europe. Established March 26, 1636, it had an enrollment of 30,449 students in 2012, and employed 5,295 faculty and staff. In 2011, 485 PhD degrees were awarded and 7,773 scientific articles were published. The 2013 budget of the university was €765 million. The university is rated as the best university in the Netherlands by the Shanghai Ranking of World Universities 2013, and ranked as the 13th best European university and the 52nd best university of the world. The university's motto is "Sol Iustitiae Illustra Nos," which means "Sun of Justice, shine upon us." This motto was gleaned from a literal Latin Bible translation of Malachi 4:2. Utrecht University is led by the University Board, consisting of prof. dr. Bert van der Zwaan and Hans Amman. Wikipedia.

Gadella B.M.,University Utrecht | Society of Reproduction and Fertility supplement | Year: 2010 In order to achieve fertilization, sperm cells first need to successfully interact with the zona pellucida. To this end, the sperm surface is extensively remodeled during capacitation and the resulting sperm cells also possess hyperactivated motility. Together, this serves to mediate optimal recognition of the zona pellucida in the oviduct or after in vitro fertilization incubations (primary zona pellucida binding). When the sperm cell attaches to the zona pellucida, it will be triggered to undergo the acrosome reaction which allows the hyperactivated motile sperm cell to drill through the zona pellucida (secondary zona pellucida binding coinciding with sequential local zona pellucida digestion and rebinding). After successful zona penetration, some sperm cells may enter the perivitelline space.
This delaying strategy of the oocyte allows only one sperm cell at a given time to bind and fuse with the oocyte (fertilization) and thus minimizes the risk of polyspermy. The fertilization fusion between the oocyte and the first sperm cell is immediately followed by a polyspermic fertilization block, in which the content of the oocyte's cortical granules is released into the perivitelline space. The cortical reaction blocks further sperm-oocyte fusion either by sticking at the oolemma or by the induction of a biochemical reaction of the zona pellucida (zona pellucida hardening). The cortical reaction thus blocks sperm-zona pellucida binding and/or sperm-zona pellucida penetration. This review summarizes the current understanding of sperm-zona pellucida interactions in relation to mammalian fertilization. The lack of knowledge about sperm-zona pellucida binding in ruminants will be critically discussed. Karin M.,University of California at San Diego | Clevers H.,Princess Maxima Center and Hubrecht Institute | Clevers H.,University Utrecht Nature | Year: 2016 Inflammation underlies many chronic and degenerative diseases, but it also mitigates infections, clears damaged cells and initiates tissue repair. Many of the mechanisms that link inflammation to damage repair and regeneration in mammals are conserved in lower organisms, indicating that it is an evolutionarily important process. Recent insights have shed light on the cellular and molecular processes through which conventional inflammatory cytokines and Wnt factors control mammalian tissue repair and regeneration. This is particularly important for regeneration in the gastrointestinal system, especially for intestine and liver tissues in which aberrant and deregulated repair results in severe pathologies. © 2016 Macmillan Publishers Limited. All rights reserved. 
Leslie N.R.,Heriot - Watt University | Den Hertog J.,University Utrecht | Den Hertog J.,Institute of Biology Leiden Cell | Year: 2014 Tumor suppressors block the development of cancer and are often lost during tumor development. Papa et al. show that partial loss of normal PTEN tumor suppressor function can be compounded by additional disruption caused by the expression of inactive mutant PTEN protein. This has significant implications for patients with PTEN gene mutations. © 2014 Elsevier Inc. O'Leary D.H.,Tufts University | Bots M.L.,University Utrecht European Heart Journal | Year: 2010 Carotid ultrasound provides quantitative measurements of carotid intima-media thickness (CIMT) that can be used to assess cardiovascular disease (CVD) risk in individuals and monitor ongoing disease progression and regression in clinical trials. It is non-invasive, rapid, reproducible, and carries no risk. Numerous epidemiological studies have established that CIMT is a marker of subclinical atherosclerosis and is associated with established CVD risk factors and with both prevalent and incident CVD. The use of CIMT in outcome trials as a surrogate or predictor of CVD outcomes is widespread. Carotid ultrasound is being employed to test the efficacy of CVD treatment in order to identify potential useful drugs earlier and to possibly speed regulatory approval. Successive trials have generated lessons learned and applied, with slow but steady improvement in CIMT measurement reproducibility. © 2010 The Author. Dekker F.J.,University of Groningen | Van Den Bosch T.,University of Groningen | Martin N.I.,University Utrecht Drug Discovery Today | Year: 2014 Lysine acetylation is a reversible post-translational modification (PTM) of cellular proteins and represents an important regulatory switch in signal transduction. 
Lysine acetylation, in combination with other PTMs, directs the outcomes as well as the activation levels of important signal transduction pathways such as the nuclear factor (NF)-κB pathway. Small molecule modulators of the 'writers' (HATs) and 'erasers' (HDACs) can regulate the NF-κB pathway in a specific manner. This review focuses on the effects of frequently used HAT and HDAC inhibitors on the NF-κB signal transduction pathway and inflammatory responses, and their potential as novel therapeutics. © 2013 The Authors. Houbraken J.,Fungal Biodiversity Center | Houbraken J.,University Utrecht | Samson R.A.,Fungal Biodiversity Center Studies in Mycology | Year: 2011 Species of Trichocomaceae occur commonly and are important to both industry and medicine. They are associated with food spoilage and mycotoxin production and can occur in the indoor environment, causing health hazards by the formation of β-glucans, mycotoxins and surface proteins. Some species are opportunistic pathogens, while others are exploited in biotechnology for the production of enzymes, antibiotics and other products. Penicillium belongs phylogenetically to Trichocomaceae and more than 250 species are currently accepted in this genus. In this study, we investigated the relationship of Penicillium to other genera of Trichocomaceae and studied in detail the phylogeny of the genus itself. In order to study these relationships, partial RPB1, RPB2 (RNA polymerase II genes), Tsr1 (putative ribosome biogenesis protein) and Cct8 (putative chaperonin complex component TCP-1) gene sequences were obtained. The Trichocomaceae are divided in three separate families: Aspergillaceae, Thermoascaceae and Trichocomaceae. 
The Aspergillaceae are characterised by the formation of flask-shaped or cylindrical phialides, asci produced inside cleistothecia or surrounded by Hülle cells and mainly ascospores with a furrow or slit, while the Trichocomaceae are defined by the formation of lanceolate phialides, asci borne within a tuft or layer of loose hyphae and ascospores lacking a slit. Thermoascus and Paecilomyces, both members of Thermoascaceae, also form ascospores lacking a furrow or slit, but are differentiated from Trichocomaceae by the production of asci from croziers and their thermotolerant or thermophilic nature. Phylogenetic analysis shows that Penicillium is polyphyletic. The genus is re-defined and a monophyletic genus for both anamorphs and teleomorphs is created (Penicillium sensu stricto). The genera Thysanophora, Eupenicillium, Chromocleista, Hemicarpenteles and Torulomyces belong in Penicillium s. str. and new combinations for the species belonging to these genera are proposed. Analysis of Penicillium below genus rank revealed the presence of 25 clades. A new classification system including both anamorph and teleomorph species is proposed and these 25 clades are treated here as sections. An overview of species belonging to each section is presented. © 2011 CBS-KNAW Fungal Biodiversity Centre. Mickelbart M.V.,Purdue University | Hasegawa P.M.,Purdue University | Bailey-Serres J.,University of California at Riverside | Bailey-Serres J.,University Utrecht Nature Reviews Genetics | Year: 2015 Crop yield reduction as a consequence of increasingly severe climatic events threatens global food security. Genetic loci that ensure productivity in challenging environments exist within the germplasm of crops, their wild relatives and species that are adapted to extreme environments. Selective breeding for the combination of beneficial loci in germplasm has improved yields in diverse environments throughout the history of agriculture.
An effective new paradigm is the targeted identification of specific genetic determinants of stress adaptation that have evolved in nature and their precise introgression into elite varieties. These loci are often associated with distinct regulation or function, duplication and/or neofunctionalization of genes that maintain plant homeostasis. © 2015 Macmillan Publishers Limited. Christgen M.,Hannover Medical School | Derksen P.W.B.,University Utrecht Breast Cancer Research | Year: 2015 Infiltrating lobular breast cancer (ILC) is the most common special breast cancer subtype. With mutational or epigenetic inactivation of the cell adhesion molecule E-cadherin (CDH1) being confined almost exclusively to ILC, this tumor entity stands out from all other types of breast cancers. The molecular basis of ILC is linked to loss of E-cadherin, as evidenced by human CDH1 germline mutations and conditional knockout mouse models. A better understanding of ILC beyond the level of descriptive studies depends on physiologically relevant and functional tools. This review provides a detailed overview on ILC models, including well-characterized cell lines, xenograft tumors and genetically engineered mouse models. We consider advantages and limitations of these models and evaluate their representativeness for human ILC. The still incompletely defined mechanisms by which loss of E-cadherin drives malignant transformation are discussed based on recent findings in these models. Moreover, candidate genes and signaling pathways potentially involved in ILC development and progression as well as anticancer drug and endocrine resistance are highlighted. © Christgen and Derksen licensee BioMed Central. 
Gouw S.C.,University Utrecht | Blood | Year: 2013 The objective of this study was to examine the association of the intensity of treatment, ranging from high-dose intensive factor VIII (FVIII) treatment to prophylactic treatment, with the inhibitor incidence among previously untreated patients with severe hemophilia A. This cohort study aimed to include consecutive patients with a FVIII activity < 0.01 IU/mL, born between 2000 and 2010, and observed during their first 75 FVIII exposure days. Intensive FVIII treatment of hemorrhages or surgery at the start of treatment was associated with an increased inhibitor risk (adjusted hazard ratio [aHR], 2.0; 95% confidence interval [CI], 1.3-3.0). High-dose FVIII treatment was associated with a higher inhibitor risk than low-dose FVIII treatment (aHR, 2.3; 95% CI, 1.0-4.8). Prophylaxis was only associated with a decreased overall inhibitor incidence after 20 exposure days of FVIII. The association with prophylaxis was more pronounced in patients with low-risk F8 genotypes than in patients with high-risk F8 genotypes (aHR, 0.61; 95% CI, 0.19-2.0 and aHR, 0.85; 95% CI, 0.51-1.4, respectively). In conclusion, our findings suggest that in previously untreated patients with severe hemophilia A, high-dose intensive FVIII treatment increases inhibitor risk and prophylactic FVIII treatment decreases inhibitor risk, especially in patients with low-risk F8 mutations.

Van Dillen L.F.,Leiden University | Papies E.K.,University Utrecht | Hofmann W.,University of Chicago | Journal of Personality and Social Psychology | Year: 2013 The present research shows in 4 studies that cognitive load can reduce the impact of temptations on cognition and behavior and, thus, challenges the proposition that distraction always hampers self-regulation.
Participants performed different speeded categorization tasks with pictures of attractive and neutral food items (Studies 1-3) and attractive and unattractive female faces (Study 4), while we assessed their reaction times as an indicator of selective attention (Studies 1, 3, and 4) or as an indicator of hedonic thoughts about food (Study 2). Cognitive load was manipulated by a concurrent digit span task. Results show that participants displayed greater attention to tempting stimuli (Studies 1, 3, and 4) and activated hedonic thoughts in response to palatable food (Study 2), but high cognitive load completely eliminated these effects. Moreover, cognitive load during the exposure to attractive food reduced food cravings (Study 1) and increased healthy food choices (Study 3). Finally, individual differences in sensitivity to food temptations (Study 3) and interest in alternative relationship partners (Study 4) predicted selective attention to attractive stimuli, but again, only when cognitive load was low. Our findings suggest that recognizing the tempting value of attractive stimuli in our living environment requires cognitive resources. This has the important implication that, contrary to traditional views, performing a concurrent demanding task may actually diminish the captivating power of temptation and thus facilitate self-regulation. © 2012 American Psychological Association.

de Jong G.,University Utrecht | Journal of Thermal Biology | Year: 2010 The temperature-size rule, the observation that most ectotherms grow faster but reach smaller size at higher temperatures, has defied a general explanation. Here, the temperature-size rule in Drosophila aldrichi and Drosophila buzzatii is investigated, using data on development rate and adult dry weight at nine temperatures. In both species the linear regression of dry weight on temperature is negative. The data are used to infer the potential for a description of temperature-dependent size by biophysical modelling.
The biophysical Sharpe-Schoolfield model for biological rates and its derivative model for adult weight yield detailed patterns for the two species' development rate, growth rate, and adult weight. These detailed patterns do not confirm the existence of a simple temperature-size rule. The species differ significantly in the values of the parameters of the Sharpe-Schoolfield model, and consequently show different patterns of weight over temperature. The different parameters of the Sharpe-Schoolfield model play distinct roles in these patterns. A temperature-size rule in the form of a negative regression of weight on temperature might statistically follow from an upper temperature boundary for growth that is lower than the upper temperature boundary for development, as such a relation between the two boundaries would lead to a decrease of weight at high temperature. © 2009 Elsevier Ltd.

Boelen P.A.,University Utrecht | Carleton R.N.,University of Regina | Journal of Nervous and Mental Disease | Year: 2012 Intolerance of uncertainty (IU) has been found to be involved in several anxiety disorders, including generalized anxiety disorder and obsessive-compulsive disorder (OCD). Few studies have examined the role of IU in health anxiety (HA)/hypochondriacal concerns (HC). We conducted two studies exploring the associations between IU and HA/HC. The first study included undergraduates (n = 114) and indicated an association between IU and several HA/HC indices. When controlling for neuroticism, worry about illness was the single index of HA/HC that remained associated with IU. In the second study among bereaved adults (n = 126), IU was associated with one index of HA/HC but not when neuroticism and anxiety sensitivity were controlled. In both studies, IU was found to be more strongly associated with OCD symptoms and worry than with HA/HC. Copyright © 2012 by Lippincott Williams & Wilkins.
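The Sharpe-Schoolfield rate model referred to in the de Jong abstract above has a standard published form: an Eyring-type numerator divided by low- and high-temperature inactivation terms. The sketch below is illustrative only; the parameter values are hypothetical and are not fitted to the Drosophila data in the paper.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1


def sharpe_schoolfield(T, rho25, HA, HL, TL, HH, TH):
    """Sharpe-Schoolfield rate at absolute temperature T (kelvin).

    rho25: rate at 25 degrees C (298.15 K); HA: activation enthalpy of the
    rate-controlling reaction; (HL, TL) and (HH, TH): enthalpies and
    half-inactivation temperatures of the low- and high-temperature
    inactivation terms. All values used below are illustrative.
    """
    num = rho25 * (T / 298.15) * math.exp((HA / R) * (1.0 / 298.15 - 1.0 / T))
    low = math.exp((HL / R) * (1.0 / TL - 1.0 / T))    # large when T << TL
    high = math.exp((HH / R) * (1.0 / TH - 1.0 / T))   # large when T >> TH
    return num / (1.0 + low + high)


# Hypothetical parameters: the rate peaks at intermediate temperatures
# and is suppressed at both thermal extremes.
params = dict(rho25=1.0, HA=5.0e4, HL=-2.0e5, TL=280.0, HH=3.0e5, TH=305.0)
rates = {T: sharpe_schoolfield(T, **params) for T in (278.0, 298.15, 313.0)}
```

Because the growth-rate and development-rate versions of the model can have different high-temperature boundaries (different TH), adult weight, which depends on their ratio, can decline at high temperature; this is the statistical route to a temperature-size rule suggested in the abstract.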
Lutgens M.W.,University Utrecht | Inflammatory bowel diseases | Year: 2013 Recently reported risks of colorectal cancer (CRC) in inflammatory bowel disease (IBD) have been lower than those reported before 2000. The aim of this meta-analysis was to update the CRC risk of ulcerative and Crohn's colitis, investigate time trends, and identify high-risk modifiers. The MEDLINE search engine was used to identify all published cohort studies on CRC risk in IBD. Publications were critically appraised for study population, Crohn's disease localization, censoring for colectomy, and patient inclusion methods. The following data were extracted: total and stratified person-years at risk, number of observed CRC, number of expected CRC in background population, time period of inclusion, and geographical location. Pooled standardized incidence ratios and cumulative risks for 10-year disease intervals were calculated. Results were corrected for colectomy and isolated small bowel Crohn's disease. The pooled standardized incidence ratio of CRC in all patients with IBD in population-based studies was 1.7 (95% confidence interval [CI], 1.2-2.2). High-risk groups were patients with extensive colitis and an IBD diagnosis before age 30, with standardized incidence ratios of 6.4 (95% CI, 2.4-17.5) and 7.2 (95% CI, 2.9-17.8), respectively. Cumulative risks of CRC were 1%, 2%, and 5% after 10, 20, and >20 years of disease duration, respectively. The risk of CRC is increased in patients with IBD but not as high as previously reported and not in all patients. This decline could be the result of aged cohorts. The risk of CRC is significantly higher in patients with longer disease duration, extensive disease, and IBD diagnosis at young age.

Bakker S.,University Utrecht | Energy Policy | Year: 2010 The hydrogen hype of the last decade has passed and now seems to have been supplanted by the electric vehicle hype.
A technological hype can have both positive and negative consequences. On the one hand it attracts sponsors for technology development, but on the other hand the high expectations might result in disappointment and subsequent withdrawal of the sponsors. In this paper I ask to what extent the car industry has created the hype, and how it has done so. The industry's role is studied through its prototyping activities and accompanying statements on market entry. I conclude that the car industry has indeed inflated the hype, especially through its public statements on market release after the turn of the millennium. Furthermore, it can be concluded that the industry has shown a double repertoire of both highly optimistic and more modest statements. It is possible that statements are used deliberately to serve the industry's interests whenever needed. Without neglecting the positive influence of technological hype on public policy and private funding for R&D efforts, more modest promises could serve the development of sustainable mobility better. For policy makers the challenge is to remain open to different options instead of following hypes and disappointments as they come and go. © 2010 Elsevier Ltd.

Stefano R.D.,Harvard - Smithsonian Center for Astrophysics | Voss R.,Radboud University Nijmegen | Claeys J.S.W.,University Utrecht | Astrophysical Journal Letters | Year: 2011 In the single-degenerate scenario for Type Ia supernovae (SNe Ia), a white dwarf (WD) must gain a significant amount of matter from a companion star. Because the accreted mass carries angular momentum, the WD is likely to achieve fast spin periods, which can increase the critical mass, M_crit, needed for explosion. When M_crit is higher than the maximum mass achieved by the WD, the central regions of the WD must spin down before it can explode.
This introduces super-Chandrasekhar single-degenerate explosions, and a delay between the completion of mass gain and the time of the explosion. Matter ejected from the binary during mass transfer therefore has a chance to become diffuse, and the explosion occurs in a medium with a density similar to that of typical regions of the interstellar medium. Also, either by the end of the WD's mass increase or else by the time of explosion, the donor may exhaust its stellar envelope and become a WD. This alters, and generally diminishes, explosion signatures related to the donor star. Nevertheless, the spin-up/spin-down model is highly predictive. Prior to explosion, progenitors can be super-M_Ch WDs in either wide binaries with WD companions or cataclysmic variables. These systems can be discovered and studied through wide-field surveys. Post-explosion, the spin-up/spin-down model predicts a population of fast-moving WDs, low-mass stars, and even brown dwarfs. In addition, the spin-up/spin-down model provides a paradigm which may be able to explain both the similarities and the diversity observed among SNe Ia. © 2011. The American Astronomical Society. All rights reserved.

Schuurman J.-P.,St. Antonius Hospital | Rinkes I.H.M.B.,University Utrecht | Go P.M.N.Y.H.,St. Antonius Hospital | Annals of Surgery | Year: 2012 Objective: The aim of this study was to compare the outcome of the hemorrhoidal artery ligation procedure for hemorrhoidal disease with and without use of the provided Doppler transducer. Background: Hemorrhoidal artery ligation, known as the HAL (hemorrhoidal artery ligation) or THD (transanal hemorrhoidal dearterialization) procedure, is a common treatment modality for hemorrhoidal disease in which a Doppler transducer is used to locate the supplying arteries that are subsequently ligated. It has been suggested that the use of the Doppler transducer does not contribute to the beneficial effect of these ligation procedures.
Methods: The authors conducted a single-blinded randomized clinical trial and assigned a total of 82 patients with grade II and III hemorrhoidal disease to undergo either a HAL/THD procedure without use of the Doppler transducer (non-Doppler group, 40 patients) or a conventional HAL/THD procedure (Doppler group, 42 patients). The primary endpoint was improvement of self-reported clinical parameters after both 6 weeks and 6 months. This study is registered at trialregister.nl under ID number NTR2139. Results: After 6 weeks and 6 months, significant improvement was observed in both the non-Doppler and the Doppler group with regard to blood loss, pain, prolapse, and problems with defecation (P < 0.05). The improvement of symptoms did not differ significantly between the groups (P > 0.05), except for prolapse, which improved more in the non-Doppler group (P = 0.047). There were more complications and unscheduled postoperative events in the Doppler group (P < 0.0005). After 6 months, 31% of the patients in the non-Doppler group and 21% in the Doppler group reported being completely complaint-free (P = 0.313). Conclusions: The authors' findings confirm that the hemorrhoidal artery ligation procedure significantly reduces signs and symptoms of hemorrhoidal disease. The authors' data also show that the Doppler transducer does not contribute to this beneficial effect. © 2012 by Lippincott Williams & Wilkins.

Vermeulen L.,Center for Experimental Molecular Medicine | Vermeulen L.,University of Cambridge | Snippert H.J.,University Utrecht | Nature Reviews Cancer | Year: 2014 Intestinal stem cells (ISCs) and colorectal cancer (CRC) biology are tightly linked in many aspects. It is generally thought that ISCs are the cells of origin for a large proportion of CRCs, and crucial ISC-associated signalling pathways are often affected in CRCs. Moreover, CRCs are thought to retain a cellular hierarchy that is reminiscent of the intestinal epithelium.
Recent studies offer quantitative insights into the dynamics of ISC behaviour that govern homeostasis and thereby provide the necessary baseline parameters to begin to apply these analyses during the various stages of tumour development. © 2014 Macmillan Publishers Limited. All rights reserved.

Erkelens C.J.,University Utrecht | Journal of Vision | Year: 2013 One of the striking features of vision is that we can experience depth in two-dimensional images. Since the Renaissance, artists have used linear perspective to create sensations of depth and slant. What is not known is how the brain measures linear perspective information from the retinal image. Here, an experimental technique and geometric computations were used to isolate slant related to linear perspective from slant induced by other cues. Grid stimuli, designed to induce strong impressions of slant, were sufficiently simple to allow accurate predictions on the basis of numeric computations. Measurement of slant about the vertical axis as functions of slant depicted on the screen and slant of the screen relative to the observer showed that linear perspective explained 95% of the slant judgments. Precision and accuracy of the judgments suggest a neural substrate that is able to make highly accurate comparisons between orientations of lines imaged at different retinal locations. The neural basis of slant from linear perspective has not yet been clarified. Long-range connections in V1, however, and cells in V2, V4, lateral occipital cortex, and caudal intraparietal sulcus have features that suggest an involvement in slant perception. © 2013 ARVO.
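The slant-from-perspective geometry in the Erkelens abstract above can be illustrated with a toy pinhole-camera computation (a sketch under assumed conventions, not the paper's actual stimulus model): for a surface rotated by slant angle sigma about the vertical axis, receding lines on the surface converge toward an image vanishing point at x_v = f * tan(sigma), so the depicted slant is recoverable as arctan(x_v / f).

```python
import math


def project_x(point, f=1.0):
    """Pinhole projection of a 3-D point: image x = f * X / Z."""
    X, Y, Z = point
    return f * X / Z


def receding_line_xs(slant_deg, f=1.0, steps=(1.0, 10.0, 100.0, 1000.0)):
    """Image x-coordinates of points ever farther along a receding line
    on a surface slanted by slant_deg about the vertical (y) axis.
    They converge toward the vanishing point x_v = f * tan(slant)."""
    s = math.radians(slant_deg)
    d = (math.sin(s), 0.0, math.cos(s))  # depth direction of the slanted plane
    p0 = (0.0, 0.0, 2.0)                 # a point on the plane, in front of the camera
    return [
        project_x((p0[0] + t * d[0], p0[1] + t * d[1], p0[2] + t * d[2]), f)
        for t in steps
    ]


xs = receding_line_xs(40.0)                  # slanted 40 degrees
recovered = math.degrees(math.atan(xs[-1]))  # slant = atan(x_v / f), approx 40
```

The numbers (focal length 1, slant 40 degrees, the starting point) are hypothetical; the point is only that the convergence of projected line endpoints encodes the slant.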
Castelle B.,CNRS Laboratory of Oceanic Environments and Paleo-environments (EPOC) | Ruessink B.G.,University Utrecht | Journal of Geophysical Research: Earth Surface | Year: 2011 We use a nonlinear morphodynamic model to demonstrate that time-varying forcing, in particular the time-varying angle of wave incidence, is crucial to the development of rip channels in terms of rip channel morphology, nonlinear behavior, longshore migration, and mean rip spacing. The time-varying angle of incidence leads to different mean rip spacings than the time-integrated time-invariant forcing and to systematically less developed bar and rip morphologies at more alongshore-variable scales. This supports the common field observation of irregular and random alongshore rip spacings, and contrasts with the regular spacing predicted by existing time-invariant template and instability models. Time-varying wave incidence also generally results in the onset of splitting of shoals and an increase in merging of rip channels. In addition, a time-varying angle of incidence with zero mean can drive a significant net alongshore migration of the rip channels. Abrupt changes in wave conditions are responsible for this net longshore migration through cumulative effects of the mismatch between wave conditions and bar and rip morphology orientation. Copyright 2011 by the American Geophysical Union.

Bijlsma J.W.J.,University Utrecht | Polskie Archiwum Medycyny Wewnetrznej | Year: 2010 Three important concepts have become standard of care in treating rheumatoid arthritis (RA): 1) development of new drugs: biologic agents; 2) treatment strategies: not an individual drug, but the timely combination of different drugs, given as a specific strategy; 3) treat to target: targeting treatment to the individual patient and adapting treatment when necessary.
These concepts led to the development of the European League Against Rheumatism recommendations for the management of rheumatoid arthritis (RA) with synthetic and biological disease-modifying antirheumatic drugs. Three so-called overarching principles have been formulated, followed by 15 concrete recommendations for the management of RA. These 15 recommendations are described and discussed in this review, with some personal comments. An enormous gain in the management of RA has been achieved, and it is now time to consolidate that gain and make optimal treatment available for every RA patient in Europe. The guidelines described in this article will help physicians to actually do so. Copyright by Medycyna Praktyczna, 2010.

van Pinxteren B.,University Utrecht | Cochrane database of systematic reviews (Online) | Year: 2010 Approximately 25% of adults regularly experience heartburn, a symptom of gastro-oesophageal reflux disease (GORD). Most patients are treated empirically (without specific diagnostic evaluation, e.g. endoscopy). Among patients who have an upper endoscopy, findings range from a normal appearance or mild erythema to severe oesophagitis with stricture formation. Patients without visible damage to the oesophagus have endoscopy negative reflux disease (ENRD). The pathogenesis of ENRD, and its response to treatment, may differ from GORD with oesophagitis. The objective was to summarise, quantify and compare the efficacy of short-term use of proton pump inhibitors (PPI), H2-receptor antagonists (H2RA) and prokinetics in adults with GORD, treated empirically and in those with endoscopy negative reflux disease (ENRD). We searched MEDLINE (January 1966 to November 2008), EMBASE (January 1988 to November 2008), and EBMR in November 2008. We included randomised controlled trials reporting symptomatic outcome after short-term treatment for GORD using proton pump inhibitors, H2-receptor antagonists or prokinetic agents.
Participants had to be either from an empirical treatment group (no endoscopy used in treatment allocation) or from an endoscopy negative reflux disease group (no signs of erosive oesophagitis). Two authors independently assessed trial quality and extracted data. Thirty-two trials (9738 participants) were included: fifteen in the empirical treatment group, thirteen in the ENRD group and four in both. In empirical treatment of GORD the relative risk (RR) for heartburn remission (the primary efficacy variable) in placebo-controlled trials for PPI was 0.37 (two trials, 95% confidence interval (CI) 0.32 to 0.44), for H2RAs 0.77 (two trials, 95% CI 0.60 to 0.99) and for prokinetics 0.86 (one trial, 95% CI 0.73 to 1.01). In a direct comparison PPIs were more effective than H2RAs (seven trials, RR 0.66, 95% CI 0.60 to 0.73) and prokinetics (two trials, RR 0.53, 95% CI 0.32 to 0.87). In treatment of ENRD, the RR for heartburn remission for PPI versus placebo was 0.73 (eight trials, 95% CI 0.67 to 0.78) and for H2RA versus placebo was 0.84 (two trials, 95% CI 0.74 to 0.95). The RR for PPI versus H2RA was 0.78 (three trials, 95% CI 0.62 to 0.97) and for PPI versus prokinetic 0.72 (one trial, 95% CI 0.56 to 0.92). PPIs are more effective than H2RAs in relieving heartburn in patients with GORD who are treated empirically and in those with ENRD, although the magnitude of benefit is greater for those treated empirically.

Rodrigues J.P.G.L.M.,Stanford University | Rodrigues J.P.G.L.M.,University Utrecht | Levitt M.,Stanford University | Chopra G.,Stanford University | Nucleic Acids Research | Year: 2012 The KoBaMIN web server provides an online interface to a simple, consistent and computationally efficient protein structure refinement protocol based on minimization of a knowledge-based potential of mean force. The server can be used to refine either a single protein structure or an ensemble of proteins starting from their unrefined coordinates in PDB format.
The refinement method is particularly fast and accurate due to the underlying knowledge-based potential derived from structures deposited in the PDB; as such, the energy function implicitly includes the effects of solvent and the crystal environment. Our server allows for an optional but recommended step that optimizes stereochemistry using the MESHI software. The KoBaMIN server also allows comparison of the refined structures with a provided reference structure to assess the changes brought about by the refinement protocol. The performance of KoBaMIN has been benchmarked widely on a large set of decoys, all models generated at the seventh worldwide experiments on critical assessment of techniques for protein structure prediction (CASP7), and it was also shown to produce top-ranking predictions in the refinement category at both CASP8 and CASP9, yielding consistently good results across a broad range of model quality values. The web server is fully functional and freely available at http://csb.stanford.edu/kobamin. © 2012 The Author(s).

Donega C.D.M.,University Utrecht | Chemical Society Reviews | Year: 2011 Colloidal heteronanocrystals (HNCs) can be regarded as solution-grown inorganic-organic hybrid nanomaterials, since they consist of inorganic nanoparticles that are coated with a layer of organic ligand molecules. The hybrid nature of these nanostructures provides great flexibility in engineering their physical and chemical properties. The inorganic particles are heterostructured, i.e. they comprise two (or more) different materials joined together, which gives them remarkable and unique properties that can be controlled by the composition, size and shape of each component of the HNC. The interaction between the inorganic component and the organic ligand molecules allows the size and shape of the HNCs to be controlled and gives rise to novel properties.
Moreover, the organic surfactant layer opens up the possibility of surface chemistry manipulation, making it possible to tailor a number of properties. These features have turned colloidal HNCs into promising materials for a number of applications, spurring a growing interest in the investigation of their preparation and properties. This critical review provides an overview of recent developments in this rapidly expanding field, with emphasis on semiconductor HNCs (e.g., quantum dots and quantum rods). In addition to defining the state of the art and highlighting the key issues in the field, this review addresses the fundamental physical and chemical principles needed to understand the properties and preparation of colloidal HNCs (283 references). © 2011 The Royal Society of Chemistry.

Dai L.,University Utrecht | Water International | Year: 2015 Although formal law plays an increasing role in water governance in China, the political arena has a large influence upon it. This article seeks to provide a new perspective for understanding water governance, and the role formal laws play during China's transition phase, through the lens of the 'Captain of the River', a newly developed water governance instrument in China. © 2014, © 2014 International Water Resources Association.

Holwerda S.J.,University Utrecht | Philosophical transactions of the Royal Society of London. Series B, Biological sciences | Year: 2013 CTCF has it all. The transcription factor binds to tens of thousands of genomic sites, some tissue-specific, others ultra-conserved. It can act as a transcriptional activator, repressor and insulator, and it can pause transcription. CTCF binds at chromatin domain boundaries, at enhancers and gene promoters, and inside gene bodies. It can attract many other transcription factors to chromatin, including tissue-specific transcriptional activators, repressors, cohesin and RNA polymerase II, and it forms chromatin loops.
Yet, or perhaps therefore, CTCF's exact function at a given genomic site is unpredictable. It appears to be determined by the associated transcription factors, by the location of the binding site relative to the transcriptional start site of a gene, and by the site's engagement in chromatin loops with other CTCF-binding sites, enhancers or gene promoters. Here, we will discuss genome-wide features of CTCF binding events, as well as locus-specific functions of this remarkable transcription factor.

Cornelissen S.A.,University Utrecht | Journal of vascular surgery | Year: 2012 During endovascular abdominal aortic aneurysm repair (EVAR), blood is trapped in the aneurysm sac at the moment the endograft is deployed. It is generally assumed that this blood will coagulate and evolve into an organized thrombus. It is unknown whether this process always occurs, what its time span is, and how it influences aneurysm shrinkage. With magnetic resonance imaging (MRI), quantitative analysis of the aneurysm sac is possible in terms of endoleak volume as well as unorganized thrombus volume and organized thrombus volume. We investigated the presence of unorganized thrombus in nonshrinking aneurysms years after EVAR. Fourteen patients with a nonshrinking aneurysm without endoleak on computed tomography/computed tomography angiography underwent MRI with a blood pool agent (gadofosveset trisodium). Precontrast T1-, precontrast T2-, and postcontrast T1-weighted images (3 and 30 minutes after injection) were acquired and evaluated for the presence of endoleak. The aneurysm sac was segmented into endoleak, unorganized thrombus, and organized thrombus by interactively thresholding the differently weighted images. The classification was visualized in real time as a color overlay on the MR images. The volumes of endoleak, unorganized thrombus, and organized thrombus were calculated. Median time after EVAR was 2 years (range, 1-8.2 years).
The average aneurysm sac volume of the patients was 167 ± 107 mL (mean ± standard deviation). Nine patients had an endoleak on the postcontrast T1-weighted images 30 minutes after injection. On average, the aneurysm sac contained 78 ± 61 mL unorganized thrombus, which corresponded to 51 ± 21 volume-percentage, irrespective of the presence of an endoleak on the blood pool agent-enhanced MRI images (independent t-test, P = .8). In our study group, half of the nonshrinking aneurysm sac contents consisted of unorganized thrombus years after EVAR. Copyright © 2012 Society for Vascular Surgery. Published by Mosby, Inc. All rights reserved.

Bijlsma J.W.J.,University Utrecht | Rheumatology (United Kingdom) | Year: 2012 DMARDs aim to improve the long-term prognosis of RA, as indicated by reduced progression of radiographic damage and maintenance of function. However, it may be more appropriate to consider disease-modifying strategies rather than drugs alone. Despite the challenges (e.g. lack of standard outcome measures, poor reporting of dose levels), a systematic review of 15 studies involving more than 1400 patients showed that glucocorticoid treatment for 1-2 years slowed radiographic progression compared with control treatment. Evidence for longer-term disease-modifying benefits of glucocorticoids comes from individual studies with extended follow-up. In the Utrecht study, patients with early RA originally assigned to prednisone 10 mg/day for 2 years and then tapered off the therapy showed significantly less radiographic progression at follow-up after a further 3 years than patients originally assigned placebo, with no significant difference in the use of synthetic DMARD therapy.
In the combination therapy in early RA (COBRA) study, patients with newly diagnosed RA treated with glucocorticoid (starting with 60 mg/day, quickly reduced to 7.5 mg/day for weeks 7-28 and subsequently stopped), MTX up to week 40 and SSZ showed significantly decreased radiographic progression compared with those treated with SSZ alone. The benefits of short-term combination therapy on disease progression were still apparent at 5-year and 11-year follow-up. In conclusion, there is clear evidence that treatment regimens including low-dose glucocorticoids given early in RA slow radiographic progression, meeting the definition of a DMARD. Furthermore, the evidence suggests that such treatment strategies favourably alter the disease course even after glucocorticoid discontinuation. © The Author 2012. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved.

Van Koten G.,University Utrecht | Topics in Organometallic Chemistry | Year: 2013 During the past 40 years, the monoanionic, tridentate ligand platform that has been named Pincer has established itself as a privileged ligand in a variety of research and application areas. Exciting discoveries with NCN and PCP-pincer metal complexes in the late 1970s created a firm basis for the tremendous development of the field. Some of the basic findings are summarized with emphasis on the organometallic aspects of the ECE-pincer metal system. © 2013 Springer-Verlag Berlin Heidelberg.

Speksnijder C.M.,University Utrecht | Journal of the American Podiatric Medical Association | Year: 2010 Background: The proximal insertional disorder of the plantar fascia is plantar fasciosis. Although plantar fasciosis is frequently seen by different health-care providers, its etiology and pathogenesis remain unclear. A variety of interventions are seen in clinical practice. Taping constructions are frequently used for the treatment of plantar fasciosis.
However, a systematic review assessing the efficacy of this therapy modality is not available. Methods: To assess the efficacy of a taping construction as an intervention, or as part of an intervention, in patients with plantar fasciosis on pain and disability, controlled trials were searched for in CINAHL, EMBASE, MEDLINE, Cochrane CENTRAL, and PEDro using a specific search strategy. The Physiotherapy Evidence Database scale was used to judge methodological quality. Clinical relevance was assessed with five specific questions. A best-evidence synthesis consisting of five levels of evidence was applied for qualitative analysis. Results: Five controlled trials met the inclusion criteria. Three trials with high methodological quality and of clinical relevance contributed to the best-evidence synthesis. The findings were strong evidence of pain improvement at 1-week follow-up, inconclusive results for change in level of disability in the short term, and indicative findings that the addition of taping to stretching exercises has surplus value. Conclusions: There is limited evidence that taping can reduce pain in the short term in patients with plantar fasciosis. The effect on disability is inconclusive.

Grasseni C.,University Utrecht | Journal of Political Ecology | Year: 2014 This article presents a case study of the solidarity economy in Italy: the Italian G.A.S. - Gruppi di Acquisto Solidale, which I translate as Solidarity Purchase Groups. GAS are often conceptualized as "alternative food networks". Beyond this categorization, I highlight their novelty in relational, political, and ecological terms, with respect to their capacity to forge new partnerships between consumers and producers. Introducing an ethnographic study that I have developed in a recent monograph (Grasseni 2013), I dwell here in particular on how the solidarity economy is embedded in practice. I argue that gasistas' provisioning activism is something different from mere "ethical consumerism."
Activists use the notion of "co-production" to describe their engagement as a concurrent rethinking of the social, economic, and ecological aspects of provisioning. Building also on a quantitative survey of the GAS movement in northern Italy, I pursue an ethnographic understanding of "co-production." I argue that producers and consumers in GAS networks "co-produce" both economic value and ecological knowledge, while re-embedding their provisioning practice in mutuality and relationality. Zuiderbaan W.,University Utrecht Journal of vision | Year: 2012 Antagonistic center-surround configurations are a central organizational principle of our visual system. In visual cortex, stimulation outside the classical receptive field can decrease neural activity and also decrease functional Magnetic Resonance Imaging (fMRI) signal amplitudes. Decreased fMRI amplitudes below baseline (0% contrast) are often referred to as "negative" responses. Using neural model-based fMRI data analyses, we can estimate the region of visual space to which each cortical location responds, i.e., the population receptive field (pRF). Current models of the pRF do not account for a center-surround organization or negative fMRI responses. Here, we extend the pRF model by adding surround suppression. Where the conventional model uses a circular symmetric Gaussian function to describe the pRF, the new model uses a circular symmetric difference-of-Gaussians (DoG) function. The DoG model allows the pRF analysis to capture fMRI signals below baseline and surround suppression. Comparing the fits of the models, an increased variance explained is found for the DoG model. This improvement was predominantly present in V1/2/3 and decreased in later visual areas. The improvement of the fits was particularly striking in the parts of the fMRI signal below baseline. Estimates for the surround size of the pRF show an increase with eccentricity and over visual areas V1/2/3.
For the suppression index, which is based on the ratio between the volumes of both Gaussians, we show a decrease over visual areas V1 and V2. Using non-invasive fMRI techniques, this method makes it possible to examine assumptions about center-surround receptive fields in human subjects. Markus H.S.,University of Cambridge | van der Worp H.B.,University Utrecht | Rothwell P.M.,University of Oxford The Lancet Neurology | Year: 2013 A fifth of all strokes and transient ischaemic attacks occur in the posterior circulation arterial territory. Diagnosis can be challenging, in part because of substantial overlap in symptoms and signs with ischaemia in the anterior circulation. Improved methods of non-invasive imaging of the vertebrobasilar arterial tree have been used in recent prospective follow-up studies, which have shown a high risk of early recurrent stroke, particularly when there is associated vertebrobasilar stenosis. This finding emphasises the importance of urgent secondary prevention, and the role of stenting for vertebral stenosis is being investigated. © 2013 Elsevier Ltd. Middelburg J.J.,University Utrecht Geophysical Research Letters | Year: 2011 Organic matter recycling releases ammonium, and under anoxic conditions, other reduced metabolites that can be used by chemoautotrophs to fix inorganic carbon. Here I present an estimate for the global rate of oceanic carbon fixation by chemoautotrophs (0.77 Pg C y-1). Near-shore and shelf sediments (0.29 Pg C y-1) and nitrifiers in the euphotic zone (0.29 Pg C y-1) and the dark ocean (0.11 Pg C y-1) are the most important contributors. This input of new organic carbon to the ocean is similar to that supplied by world-rivers and eventually buried in oceanic sediments. Chemoautotrophy driven by organic carbon recycling is globally more important than that fuelled by water-rock interactions and hydrothermal vent systems. Copyright © 2011 by the American Geophysical Union.
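The headline numbers in the Middelburg (2011) abstract above can be tallied in a few lines. The gap between the itemized terms and the 0.77 Pg C y-1 global total is presumably made up by the smaller pathways not itemized in the abstract; that attribution is a reading on our part, not a statement from the paper.

```python
# Back-of-the-envelope check of the chemoautotrophic carbon-fixation
# budget quoted in the Middelburg (2011) abstract (all values in Pg C
# per year, as given there).
contributions = {
    "near-shore and shelf sediments": 0.29,
    "nitrifiers, euphotic zone": 0.29,
    "nitrifiers, dark ocean": 0.11,
}

global_total = 0.77  # global estimate from the abstract

itemized = sum(contributions.values())
residual = global_total - itemized  # smaller, non-itemized contributors

print(f"itemized contributors: {itemized:.2f} Pg C/yr")
print(f"residual (not itemized in the abstract): {residual:.2f} Pg C/yr")
```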
Jacobs J.W.G.,University Utrecht Rheumatology (United Kingdom) | Year: 2012 Optimizing the use of key non-biologic drugs (MTX, prednisone) may prolong disease control, thereby delaying the need for costly biologic therapies. A number of lessons about the optimal use of therapy emerge from clinical studies. Clinical outcomes with non-biologic treatments, given early in the course of the disease, are as good as with biologic treatments. Combinations of treatments are usually required to achieve rapid and sustained remission. MTX remains an important anchor drug for RA therapy and should be given as soon as the diagnosis is made. As early disease control is important, the dose of MTX should be escalated rapidly to adequate levels. Tolerability of MTX is generally good relative to that of other alternative treatments. MTX (s.c.) may be considered if the response to oral MTX is inadequate or MTX is poorly tolerated. In addition to suppressing signs and symptoms of RA, glucocorticoids appear to have disease-modifying effects, at least in early RA. The disease-modifying effects of glucocorticoids probably persist after discontinuation of therapy. The risk of adverse effects of low-dose glucocorticoids is often overestimated. Administration of low-dose glucocorticoids in accordance with physiological circadian rhythms may bring efficacy and safety benefits. As a case in point, the CAMERA (Computer Assisted Management in Early Rheumatoid Arthritis) II study applied these lessons and has clearly shown the benefits of optimizing MTX and prednisone therapy. © The Author 2012. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. Oome S.,Center for Biosystems Genomics | Van Den Ackerveken G.,University Utrecht Molecular Plant-Microbe Interactions | Year: 2014 Nep1-like proteins (NLP) are best known for their cytotoxic activity in dicot plants. NLP are taxonomically widespread among microbes with very different lifestyles. 
To learn more about this enigmatic protein family, we analyzed more than 500 available NLP protein sequences from fungi, oomycetes, and bacteria. Phylogenetic clustering showed that, besides the previously documented two types, an additional, more divergent, third NLP type could be distinguished. By closely examining the three NLP types, we identified a noncytotoxic subgroup of type 1 NLP (designated type 1a), which have substitutions in amino acids making up a cation-binding pocket that is required for cytotoxicity. Type 2 NLP were found to contain a putative calcium-binding motif, which was shown to be required for cytotoxicity. Members of both type 1 and type 2 NLP were found to possess additional cysteine residues that, based on their predicted proximity, make up potential disulfide bridges that could provide additional stability to these secreted proteins. Type 1 and type 2 NLP, although both cytotoxic to plant cells, differ in their ability to induce necrosis when artificially targeted to different cellular compartments in planta, suggesting they have different mechanisms of cytotoxicity. © 2014 The American Phytopathological Society. Poot M.,University Utrecht Molecular Syndromology | Year: 2015 Based on genomic rearrangements and copy number variations, the contactin-associated protein-like 2 gene (CNTNAP2) has been implicated in neurodevelopmental disorders such as Gilles de la Tourette syndrome, intellectual disability, obsessive compulsive disorder, cortical dysplasia-focal epilepsy syndrome, autism, schizophrenia, Pitt-Hopkins syndrome, and attention deficit hyperactivity disorder. To explain the phenotypic pleiotropy of CNTNAP2 alterations, several hypotheses have been put forward. 
Those include gene disruption, loss of a gene copy by a heterozygous deletion, altered regulation of gene expression due to loss of transcription factor binding and DNA methylation sites, and mutations in the amino acid sequence of the encoded protein which may provoke altered interactions of the CNTNAP2-encoded protein, Caspr2, with other proteins. Also exome sequencing, which covers <0.2% of the CNTNAP2 genomic DNA, has revealed numerous single nucleotide variants in healthy individuals and in patients with neurodevelopmental disorders. In some of these disorders, disruption of CNTNAP2 may be interpreted as a susceptibility factor rather than a directly causative mutation. In addition to being associated with impaired development of language, CNTNAP2 may turn out to be a central node in the molecular networks controlling neurodevelopment. This review discusses the impact of CNTNAP2 mutations on its functioning at multiple levels of the combinatorial genetic networks that govern brain development. In addition, recommendations for genomic testing in the context of clinical genetic management of patients with neurodevelopmental disorders and their families are put forward. © 2015 S. Karger AG, Basel. Joels M.,University Utrecht Psychoneuroendocrinology | Year: 2011 Exposure to stressful situations activates two hormonal systems that help the organism to adapt. On the one hand stress hormones achieve adaptation by affecting peripheral organs, on the other hand by altering brain function such that appropriate behavioral strategies are selected for optimal performance at the short term, while relevant information is stored for reference in the future. In this chapter we describe how cellular effects induced by stress hormones - in particular by glucocorticoids - may contribute to the behavioral outcome after a single stressor. 
In addition to situations of acute stress, chronic uncontrollable and unpredictable stress also exerts profound effects on structure and function of limbic neurons. The impact of chronic stress is not a mere cumulative effect of what is seen after acute stress exposure. Dendritic trees are expanded in some regions but reduced in others. In general, cells are exposed to a higher calcium load upon depolarization, but show attenuated responses to serotonin. Synaptic strengthening is largely impaired. In this viewpoint we speculate how cellular effects after chronic stress may be maladaptive and could contribute to the development of psychopathology in genetically vulnerable individuals. © 2010 Elsevier Ltd. Geraerts R.,University Utrecht Proceedings - IEEE International Conference on Robotics and Automation | Year: 2010 A central problem of applications dealing with virtual environments is planning a collision-free path for a character. Since environments and their characters are growing more realistic, a character's path needs to be visually convincing, meaning that the path is smooth, short, has some clearance to the obstacles in the environment, and avoids other characters. Up to now, it has proved difficult to meet these criteria simultaneously and in real-time. We introduce a new data structure, i.e. the Explicit Corridor Map, which allows creating the shortest path, the path that has the largest amount of clearance, or any path in between. Besides being efficient, the corresponding algorithms are surprisingly simple. By integrating the data structure and algorithms into the Indicative Route Method, we show that visually convincing short paths can be obtained in real-time. ©2010 IEEE. Van Beijeren H.,University Utrecht Physical Review Letters | Year: 2012 Anomalous transport in one-dimensional translation invariant Hamiltonian systems with short range interactions is shown to belong in general to the Kardar-Parisi-Zhang universality class. 
Exact asymptotic forms for density-density and current-current time correlation functions and their Fourier transforms are given in terms of the Prähofer-Spohn scaling functions, obtained from their exact solution for the polynuclear growth model. The exponents of corrections to scaling are found as well, but not so the coefficients. Mode coupling theories developed previously are found to be adequate for weakly nonlinear chains but in need of corrections for strongly anharmonic interparticle potentials. A simple condition is given under which Kardar-Parisi-Zhang behavior does not apply, sound attenuation is only logarithmically superdiffusive, and heat conduction is more strongly superdiffusive than under Kardar-Parisi-Zhang behavior. © 2012 American Physical Society. Schauer R.,University of Kiel | Kamerling J.P.,University Utrecht ChemBioChem | Year: 2011 trans-Sialidases constitute a special group of the sialidase family. They occur in some trypanosome species and, in a unique reversible reaction, transfer sialic acids from one glycosidic linkage with galactose (donor) to another galactose (acceptor), to form (α2-3)-sialyl linkages. Trypanosomes cause such devastating human diseases as Chagas disease in South America (Trypanosoma cruzi) or sleeping sickness in Africa (Trypanosoma brucei). The trans-sialidases strongly contribute to the pathogenicity of the trypanosomes by scavenging sialic acids from the host or blood meal to coat the parasite surface; this aids their survival strategy in the insect's intestine, and in the blood circulation or cells of the host, and serves to compromise the immune system of the human or animal host. American and African trypanosomes express trans-sialidases at different stages of their vector/host development. They are transmitted to humans by insect vectors (tsetse fly or other insect "bug" species). trans-Sialidase activity with varying linkage specificity has also been found in a few bacteria species and in human serum. 
trans-Sialidases are of increasing practical importance for the chemo-enzymatic synthesis of sialylated glycans. The search for appropriate inhibitors of trans-sialidases and vaccination strategies is intensifying, as less toxic medicaments for the treatment of these widespread and often chronic tropical diseases are required. © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Van Binsbergen E.,University Utrecht Cytogenetic and Genome Research | Year: 2011 Array-based methods have enabled the detection of many genomic gains and losses. These are stated as copy number variants (CNVs) and comprise up to 13% of the human genome. Based on their breakpoints and modes of formation CNVs are termed recurrent or nonrecurrent. Recurrent CNVs are flanked by low copy repeats and are of a fixed size. They arise as a result of misalignment during meiosis by a mechanism named nonallelic homologous recombination. Several of such recurrent CNVs have been linked to human diseases. Nonrecurrent CNVs, which are not flanked by low copy repeats, are of variable size and may arise via mechanisms like nonhomologous end joining and replication-based mechanisms described by the fork stalling and template switching and microhomology-mediated break-induced replication models. It is becoming clear that most disease-causing CNVs are nonrecurrent and generally arise via replication-based mechanisms. Furthermore, it is now appreciated that genomic features other than low copy repeats play a role in the formation of nonrecurrent CNVs. This review will discuss the different mechanisms of CNV formation and how high resolution analyses of CNV breakpoints have added to our knowledge of their precise structure. Copyright © 2011 S. Karger AG, Basel. Verkuyten M.,University Utrecht Child Development | Year: 2016 This article proposes a further conceptualization of ethnic and racial identity (ERI) as a fundamental topic in developmental research. 
Adding to important recent efforts to conceptually integrate and synthesize this field, it is argued that ERI research will be enhanced by more fully considering the implications of the social identity approach. These implications include (a) the conceptualization of social identity, (b) the importance of identity motives, (c) systematic ways for theorizing and examining the critical role of situational and societal contexts, and (d) a dynamic model of the relation between ERI and context. These implications have not been fully considered in the developmental literature but offer important possibilities for moving the field forward in new directions. © 2016 The Society for Research in Child Development, Inc. Van Nostrum C.F.,University Utrecht Soft Matter | Year: 2011 Polymeric micelles constitute an important class of nanomaterials that are highly attractive for pharmaceutical applications. The hydrophobic core of the micelles can be loaded with poorly water soluble drugs, while the shell of the micelles provides colloidal stability in vitro and in vivo. In recent years, covalent cross-linking of micelles is attracting increasing attention, because in vitro data show that it can prevent the self-assembled micelles from dissociation, can modulate drug release and may provide tools for triggered release. This paper reviews the methods used to cross-link either the core or the shell of the micelles, and focuses on drug delivery applications of cross-linked micelles. Whereas non-cross-linked micelles generally do not improve pharmacokinetics of encapsulated drugs when compared to common drug formulations, recent in vivo data show that cross-linking provides dramatic improvements in both pharmacokinetics and biodistribution of the micelles, and that drugs can fully benefit from that when they are covalently linked to the micelles. 
Although the field is still in its infancy, the latest results promise a bright future of cross-linked micelles for drug delivery and/or diagnostic applications. © The Royal Society of Chemistry 2011. Leogrande E.,University Utrecht Nuclear Physics A | Year: 2014 Two-particle angular correlations between unidentified charged trigger and associated particles are measured by the ALICE detector in p-Pb collisions. The correlations are expressed as associated yield per trigger particle. Near-side and away-side per-trigger yields are studied as a function of multiplicity class. From the near-side and away-side yields, the uncorrelated seeds are calculated, which are proportional to the number of multiple parton interactions (MPIs) in Pythia. The uncorrelated seeds are also studied as a function of the number of binary collisions Ncoll from Glauber model in order to relate them to the number of hard scatterings. © 2014 CERN. Pierik R.,University Utrecht | De Wit M.,University of Lausanne Journal of Experimental Botany | Year: 2014 Plants compete with neighbouring vegetation for limited resources. In competition for light, plants adjust their architecture to bring the leaves higher in the vegetation where more light is available than in the lower strata. These architectural responses include accelerated elongation of the hypocotyl, internodes and petioles, upward leaf movement (hyponasty), and reduced shoot branching and are collectively referred to as the shade avoidance syndrome. This review discusses various cues that plants use to detect the presence and proximity of neighbouring competitors and respond to with the shade avoidance syndrome. These cues include light quality and quantity signals, mechanical stimulation, and plant-emitted volatile chemicals. We will outline current knowledge about each of these signals individually and discuss their possible interactions. 
In conclusion, we will make a case for a whole-plant, ecophysiology approach to identify the relative importance of the various neighbour detection cues and their possible interactions in determining plant performance during competition. © 2014 The Author. De Graaf-Roelfsema E.,University Utrecht Veterinary Journal | Year: 2014 One of the principal components of equine metabolic syndrome (EMS) is hyperinsulinaemia combined with insulin resistance. It has long been known that hyperinsulinaemia occurs after the development of insulin resistance. But it is also known that hyperinsulinaemia itself can induce insulin resistance and obesity and might play a key role in the development of metabolic syndrome. This review focuses on the physiology of glucose and insulin metabolism and the pathophysiological mechanisms in glucose homeostasis in the horse (compared with what is already known in humans) in order to gain insight into the pathophysiological principles underlying EMS. The review summarizes new insights on the oral uptake of glucose by the gut and the enteroinsular axis, the role of diet in incretin hormone and postprandial insulin responses, the handling of glucose by the liver, muscle and fat tissue, and the production and secretion of insulin by the pancreas under healthy and disrupted glucose homeostatic conditions in horses. © 2013 Elsevier Ltd. Ketting R.F.,University Utrecht Advances in Experimental Medicine and Biology | Year: 2010 During the last decade of the 20th century a totally novel way of gene regulation was revealed. Findings that at first glance appeared freak features of plants or C. elegans turned out to be mechanistically related and deeply conserved throughout evolution. This important insight was primed by the landmark discovery of RNA interference, or RNAi, in 1998. This work started an entirely novel field of research, now usually referred to as RNA silencing.
The common denominator of the phenomena grouped in this field is small RNA molecules, often derived from double stranded RNA precursors, that in association with proteins of the so-called Argonaute family, are capable of directing a variety of effector complexes to cognate RNA and/or DNA molecules. One of these processes is now widely known as microRNA-mediated gene silencing and I will provide a partially historical framework of the many steps that have led to our current understanding of microRNA biogenesis and function. This chapter is meant to provide a general overview of the various processes involved. For a comprehensive description of current models, I refer interested readers to the reviews and primary literature references provided in this chapter and to the further contents of this book. © 2010 Landes Bioscience and Springer Science+Business Media, LLC. Leusen J.H.W.,University Utrecht EMBO Reports | Year: 2012 The 'Immunoreceptors' meeting took place in July 2012 in beautiful Snowmass Village in Colorado, USA. At an altitude of more than two kilometres, researchers and clinicians discussed the molecular aspects of immunoreceptors, ranging from B- and T-cell receptors, to complement and Fc receptors. © 2012 European Molecular Biology Organization. Roelofs G.-J.,University Utrecht Atmospheric Chemistry and Physics | Year: 2013 The dominant removal mechanism of soluble aerosol is wet deposition. The atmospheric lifetime of aerosol, relevant for aerosol radiative forcing, is therefore coupled to the atmospheric cycling time of water vapor. This study investigates the coupling between water vapor and aerosol lifetimes in a well-mixed atmosphere. Based on a steady-state study by Pruppacher and Jaenicke (1995) we describe the coupling in terms of the processing efficiency of air by clouds and the efficiencies of water vapor condensation, of aerosol activation, and of the transfer from cloud water to precipitation.
We extend this to expressions for the temperature responses of the water vapor and aerosol lifetimes. Previous climate model results (Held and Soden, 2006) suggest a water vapor lifetime temperature response of +5.3 ± 2.0% K-1. This can be used as a first guess for the aerosol lifetime temperature response, but temperature sensitivities of the aerosol lifetime simulated in recent aerosol-climate model studies extend beyond this range and include negative values. This indicates that other influences probably have a larger impact on the computed aerosol lifetime than its temperature response, more specifically changes in the spatial distributions of aerosol (precursor) emissions and precipitation patterns, and changes in the activation efficiency of aerosol. These are not quantitatively evaluated in this study but we present suggestions for model experiments that may help to understand and quantify the different factors that determine the aerosol atmospheric lifetime. © Author(s) 2013. Oerlemans J.,University Utrecht Cryosphere | Year: 2013 In this note, the total dissipative melting in temperate glaciers is studied. The analysis is based on the notion that the dissipation is determined by the loss of potential energy due to the downward motion of mass (ice, snow, meltwater and rain). A mathematical formulation of the dissipation is developed and applied to a simple glacier geometry. In the next step, meltwater production resulting from enhanced ice motion during a glacier surge is calculated. The amount of melt energy available follows directly from the lowering of the centre of gravity of the glacier. To illustrate the concept, schematic calculations are presented for a number of glaciers with different geometric characteristics. 
Typical dissipative melt rates, expressed as water-layer depth averaged over the glacier, range from a few centimetres per year for smaller glaciers to half a metre per year for Franz Josef Glacier, one of the most active glaciers in the world (in terms of mass turnover). The total generation of meltwater during a surge is typically half a metre. For Variegated Glacier a value of 70 cm is found, for Kongsvegen 20 cm. These values refer to water layer depth averaged over the entire glacier. The melt rate depends on the duration of the surge. It is generally an order of magnitude greater than water production by 'normal' dissipation. On the other hand, the additional basal melt rate during a surge is comparable in magnitude with the water input from meltwater and precipitation. This suggests that enhanced melting during a surge does not grossly change the total water budget of a glacier. Basal water generated by enhanced sliding is an important ingredient in many theories of glacier surges. It provides a positive feedback mechanism that actually makes the surge happen. The results found here suggest that this can only work if water generated by enhanced sliding accumulates in a part of the glacier base where surface meltwater and rain have no or very limited access. This finding seems compatible with the fact that, on many glaciers, surges are initiated in the lower accumulation zone. © 2013 Author(s). Yarde F.,University Utrecht Human reproduction (Oxford, England) | Year: 2013 Is there an association between acute prenatal famine exposure or birthweight and subsequent reproductive performance and age at menopause? No association was found between intrauterine famine exposure and reproductive performance, but survival analysis showed that women exposed in utero were 24% more likely to experience menopause at any age. Associations between prenatal famine and subsequent reproductive performance have been examined previously with inconsistent results. 
Evidence for the effects of famine exposure on age at natural menopause is limited to one study of post-natal exposure. This cohort study included men and women born around the time of the Dutch famine of 1944-1945. The study participants (n = 1070) underwent standardized interviews on reproductive parameters at a mean age of 59 years. The participants were grouped as men and women with prenatal famine exposure (n = 407), their same-sex siblings (family controls, n = 319) or other men and women born before or after the famine period (time controls, n = 344). Associations of famine exposure with reproductive performance and menopause were analysed using logistic regression and survival analysis with competing risk, after controlling for family clustering. Gestational famine exposure was not associated with nulliparity, age at birth of first child, difficulties conceiving or pregnancy outcome (all P > 0.05) in men or women. At any given age, women were more likely to experience menopause after gestational exposure to famine (hazard ratio 1.24; 95% CI 1.03, 1.51). The association was not attenuated with an additional control for a woman's birthweight. In this study, there was no association between birthweight and age at menopause after adjustment for gestational famine exposure. Age at menopause was self-reported and assessed retrospectively. The study power to examine associations with specific gestational periods of famine exposure and reproductive function was limited. Our findings support previous results that prenatal famine exposure is not related to reproductive performance in adult life. However, natural menopause occurs earlier after prenatal famine exposure, suggesting that early life events can affect organ function even at the ovarian level. This study was funded by the NHLBI/NIH (R01 HL-067914).
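To give a feel for what the hazard ratio of 1.24 reported in the Yarde abstract above means, the sketch below propagates it through a proportional-hazards model. The Weibull baseline parameters are illustrative assumptions chosen to give a plausible median age at menopause of about 51 years; they are not estimates from the study.

```python
import math

# Illustration of a menopause hazard ratio of 1.24 (Yarde et al.) under
# proportional hazards: S(t) = exp(-HR * (t/scale)**shape).
# shape and scale are ILLUSTRATIVE assumptions, not study estimates.
HR = 1.24
shape, scale = 10.0, 52.9  # assumed Weibull baseline, median ~51 y

def median_age(hazard_ratio):
    # Solve exp(-hazard_ratio * (t/scale)**shape) = 0.5 for t
    return scale * (math.log(2) / hazard_ratio) ** (1 / shape)

baseline = median_age(1.0)   # unexposed women
exposed = median_age(HR)     # prenatally famine-exposed women

print(f"median age, unexposed: {baseline:.1f}")
print(f"median age, exposed:   {exposed:.1f} "
      f"(shift {baseline - exposed:.1f} y earlier)")
```

Under these assumptions the hazard ratio translates into a median age at menopause roughly a year earlier, consistent in direction with the abstract's conclusion that menopause occurs earlier after prenatal famine exposure.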
Dobrin A.,University Utrecht Nuclear Physics A | Year: 2014 The elliptic flow coefficient, v2, is presented for π±, K±, K0S, p+p̄, ϕ, Λ+Λ̄, Ξ−+Ξ̄+, Ω−+Ω̄+ in Pb-Pb collisions at √sNN=2.76 TeV with the ALICE detector. Results obtained with the scalar product method are reported as a function of transverse momentum, pT, out to pT=6 GeV/c at different collision centralities. For pT<2 GeV/c, v2 exhibits a particle mass dependence. Particles tend to group into mesons and baryons for pT>3 GeV/c. Deviations from the number of constituent quark scaling at the level of ±20% are found for pT>2-3 GeV/c. The results are compared to hydrodynamic calculations coupled to a hadronic cascade model. © 2014 CERN. Zhou Y.,Nikhef | Zhou Y.,University Utrecht Nuclear Physics A | Year: 2014 Anisotropic azimuthal correlations are used to probe the properties and the evolution of the system created in heavy-ion collisions. Two-particle azimuthal correlations are used in the search for pT-dependent fluctuations of the flow angle and magnitude, measured with the ALICE detector. The comparison of hydrodynamic calculations with measurements is also presented in this contribution. © 2014 The Authors. Westerink R.H.S.,University Utrecht Environmental Science and Pollution Research | Year: 2014 Non-dioxin-like polychlorinated biphenyls (NDL-PCBs) and polybrominated diphenyl ethers (PBDEs) are environmental pollutants that exert neurodevelopmental and neurobehavioral effects in vivo in humans and animals. Acute in vitro neurotoxic effects include changes in cell viability, oxidative stress, and basal intracellular calcium levels. Though these acute cellular effects could partly explain the observed in vivo effects, other mechanisms, such as effects on calcium influx and neurotransmitter receptor function, likely contribute to the disturbance in neurotransmission.
This concise review combines in vitro data on cell viability, oxidative stress and basal calcium levels with recent data that clearly demonstrate that (hydroxylated) PCBs and (hydroxylated) PBDEs can exert acute effects on voltage-gated Ca2+ channels as well as on excitatory and inhibitory neurotransmitter receptors in vitro. These novel mechanisms of action are shared by NDL-PCBs, OH-PBDEs, and some other persistent organic pollutants, such as tetrabromobisphenol-A, and could have profound effects on neurodevelopment, neurotransmission, and neurobehavior in vivo. © 2013 Springer-Verlag Berlin Heidelberg. Van Dinter M.,University Utrecht Geologie en Mijnbouw/Netherlands Journal of Geosciences | Year: 2013 From the 40s A.D. onwards a dense military system was established in the Lower Rhine delta in the Netherlands. It has long been questioned why this system was established in a wetland area and even turned into the northwest frontier of the Roman Empire, the Limes. A new detailed palaeogeographical map, based on a digital elevation model (LIDAR), soil maps and excavation results, was constructed. This reconstruction provides insight and understanding of the interactions between the natural environment in this part of the delta on the one hand and the establishment of this part of the Limes along the Old Rhine between Utrecht and Katwijk on the other. This study shows that the distinctive landscape of the western Rhine-Meuse delta, with an exceptionally large number of tributaries, determined the spatial pattern of the military structures. All forts (castella) were erected on the southern natural levees of the river Rhine, directly alongside the river, regardless of height and composition of the subsoil and alongside or opposite routes that provided natural access to the river.
We conclude that their aim was to guard all waterways that gave access to the river Rhine from the Germanic residential areas further north and from/to the Meuse tributary further south in the delta. In addition, a system of small military structures, mostly watchtowers, was erected between the forts to watch over the river Rhine and its river traffic. Furthermore, at least two canals were established to create shorter and safely navigable transport routes to the river Meuse. At first, this integrated system of castella and watchtowers probably aimed to protect against Germanic invasions and to create a safe corridor for the transport and build-up of army supplies for the invasion of Britain in 43 A.D. Only later, probably by the end of the first century, did this corridor turn into a frontier zone. Hol E.M.,University Utrecht | Hol E.M.,An institute of the Royal Netherlands Academy of Arts and Sciences | Hol E.M.,University of Amsterdam | Pekny M.,Gothenburg University | And 2 more authors. Current Opinion in Cell Biology | Year: 2015 Glial fibrillary acidic protein (GFAP) is the hallmark intermediate filament (IF; also known as nanofilament) protein in astrocytes, a main type of glial cell in the central nervous system (CNS). Astrocytes have a range of control and homeostatic functions in health and disease. Astrocytes assume a reactive phenotype in acute CNS trauma, ischemia, and in neurodegenerative diseases. This coincides with an upregulation and rearrangement of the IFs, which form a highly complex system composed of GFAP (10 isoforms), vimentin, synemin, and nestin. We are beginning to unravel the function of the IF system of astrocytes, and in this review we discuss its role as an important crisis-command center coordinating cell responses in situations connected to cellular stress, which is a central component of many neurological diseases. © 2015 Elsevier Ltd.
Storek J.,University of Calgary | Mohty M.,University Pierre and Marie Curie | Boelens J.J.,University Utrecht Biology of Blood and Marrow Transplantation | Year: 2015 Anti-T cell globulin (ATG) is polyclonal IgG from rabbits immunized with human thymocytes or a human T cell line. Prophylaxis using ATG infused with conditioning for adult marrow or blood stem cell transplantation reduces both acute and chronic graft-versus-host disease (GVHD). However, ATG is not, or only minimally, efficacious in the treatment of steroid-refractory GVHD. Regarding preemptive therapy, ATG is promising; however, further work is needed to establish adequate biomarkers to be used as triggers for preemptive therapy before it can be used routinely. Relapse is not increased by ATG, except possibly in the setting of reduced-intensity conditioning. Infections are probably increased when using high-dose but not low-dose ATG, except for Epstein-Barr virus-driven post-transplantation lymphoproliferative disorder, which may be increased even with low-dose ATG. Survival is not improved with ATG; however, survival free of immunosuppressive therapy is improved. The pharmacokinetics of ATG are highly variable, resulting in highly variable areas under the time-concentration curves. Optimized dosing of ATG might improve transplantation outcomes. In conclusion, ATG reduces GVHD and, thus, may improve quality of life, without compromising survival. © 2015 American Society for Blood and Marrow Transplantation. Modgil S.,Kings College London | Prakken H.,University Utrecht Argument and Computation | Year: 2014 This article gives a tutorial introduction to the ASPIC+ framework for structured argumentation. The philosophical and conceptual underpinnings of ASPIC+ are discussed, the main definitions are illustrated with examples, and several ways are discussed to instantiate the framework and to reconstruct other approaches as special cases of the framework.
The ASPIC+ framework is based on two ideas: the first is that conflicts between arguments are often resolved with explicit preferences, and the second is that arguments are built with two kinds of inference rules: strict, or deductive, rules, whose premises guarantee their conclusion, and defeasible rules, whose premises only create a presumption in favour of their conclusion. Accordingly, arguments can in ASPIC+ be attacked in three ways: on their uncertain premises, on their defeasible inferences, or on the conclusions of their defeasible inferences. ASPIC+ is not a system but a framework for specifying systems. A main objective of the study of the ASPIC+ framework is to identify conditions under which instantiations of the framework satisfy logical consistency and closure properties. © 2014 Taylor and Francis. 'T Hooft G.,University Utrecht Foundations of Physics | Year: 2016 Hawking particles emitted by a black hole are usually found to have thermal spectra, if not exactly, then to a very good approximation. Here, we argue differently. It was discovered that spherical partial waves of in-going and out-going matter can be described by unitary evolution operators independently, which allows for studies of space-time properties that were not possible before. Unitarity dictates space-time, as seen by a distant observer, to be topologically non-trivial. Consequently, Hawking particles are only locally thermal, but globally not: we explain why Hawking particles emerging from one hemisphere of a black hole must be 100% entangled with the Hawking particles emerging from the other hemisphere. This produces exclusively pure quantum states evolving in a unitary manner, and removes the interior region for the outside observer, while it still completely agrees locally with the laws of general relativity. Unitarity is a starting point; no other assumptions are made.
Region I and the diametrically opposite region II of the Penrose diagram represent antipodal points in a PT or CPT relation, as was suggested before. On the horizon itself, antipodal points are identified. A candidate instanton is proposed to describe the formation and evaporation of virtual black holes of the type described here. © 2016 The Author(s). Wosten H.A.B.,University Utrecht | Scholtmeijer K.,Wageningen University Applied Microbiology and Biotechnology | Year: 2015 Hydrophobins are proteins exclusively produced by filamentous fungi. They self-assemble at hydrophilic-hydrophobic interfaces into an amphipathic film. This protein film renders hydrophobic surfaces of gas bubbles, liquids, or solid materials wettable, while hydrophilic surfaces can be turned hydrophobic. These properties, among others, make hydrophobins of interest for medical and technical applications. For instance, hydrophobins can be used to disperse hydrophobic materials; to stabilize foam in food products; and to immobilize enzymes, peptides, antibodies, cells, and inorganic molecules on surfaces. At the same time, they may be used to prevent binding of molecules. Furthermore, hydrophobins have therapeutic value as immunomodulators and can be used to produce recombinant proteins. © 2015, Springer-Verlag Berlin Heidelberg. Sark W.G.J.H.M.V.,University Utrecht Applied Energy | Year: 2011 Outdoor performance of photovoltaic (PV) modules suffers from elevated temperatures. Conversion efficiency losses of up to about 25% can result, depending on the type of integration of the modules in the roof. Cooling of modules would therefore enhance annual PV performance. Instead of module cooling, we propose to use the waste heat by attaching thermoelectric (TE) converters to the back of PV modules, to form a PV-TE hybrid module. Due to the temperature difference over the TE converter, additional electricity can be generated.
Employing present-day thermoelectric materials with typical figures of merit (Z) of 0.004 K-1 at 300 K may lead to efficiency enhancements of up to 23% for roof-integrated PV-TE modules, as calculated by means of an idealized model. The annual energy yield would increase by 14.7% and 11% for the two annual irradiance and temperature profiles studied, i.e., for Malaga, Spain, and Utrecht, the Netherlands, respectively. As new TE materials are being developed, efficiency enhancements of up to 50% and annual energy yield increases of up to 24.9% may be achievable. The developed idealized model, however, is judged to overestimate the results by about 10% for practical PV-TE hybrids. © 2011 Elsevier Ltd. Braun K.P.J.,University Utrecht | Schmidt D.,Epilepsy Research Group Current Opinion in Neurology | Year: 2014 Purpose of review: Based on the available evidence, we aim to balance the risks and benefits of antiepileptic drug (AED) withdrawal in medically and surgically treated adults and children who have achieved remission. We summarize the risks and predictors of seizure relapse after AED withdrawal and the chances of not regaining seizure freedom. Finally, we discuss how AED discontinuation can inform us on the natural course of the epileptic disorder. Recent findings: In medically treated patients, the risk of recurrence after AED withdrawal is increased until 2 years after withdrawal, although long-term seizure outcomes seem to be unaffected by drug policies. Most relapses occur during the first year after withdrawal. Several predictors of postwithdrawal relapse have been identified. The risk of developing uncontrollable epilepsy following withdrawal is less than one in five. Whether AED withdrawal after epilepsy surgery contributes to seizure outcome has never been studied in a randomized controlled manner. Recent studies suggested that AED reduction merely unmasks incomplete surgical success.
The risk of not regaining seizure freedom after postoperative relapse is around 30% and probably not affected by AED reduction. Timing of AED discontinuation does not influence eventual seizure outcomes. Summary: There is no proof that AED withdrawal itself negatively affects long-term seizure outcomes in patients who became seizure-free under AED treatment or after epilepsy surgery. AED discontinuation unveils the natural history of the epilepsy in medically treated patients, and the completeness of resection of the epileptogenic network in patients who underwent epilepsy surgery. Vermeulen M.,University Utrecht Methods in Enzymology | Year: 2012 Posttranslational modifications (PTMs) on core histones regulate essential processes inside the nucleus such as transcription, replication, and DNA repair. An important function of histone PTMs is the recruitment or stabilization of chromatin-modifying proteins, which are also called chromatin "readers." We have developed a generic SILAC-based peptide pull-down approach to identify such readers for histone PTMs in an unbiased manner. In this chapter, the workflow behind this method will be presented in detail. © 2012 Elsevier Inc. All rights reserved. Burbach J.P.H.,University Utrecht Methods in Molecular Biology | Year: 2011 We have known neuropeptides as chemical signals in the brain for over 40 years. The discovery of neuropeptides is founded on groundbreaking research in physiology, endocrinology, and biochemistry during the last century and has been built on three seminal notions: (1) peptide hormones are chemical signals in the endocrine system; (2) neurosecretion of peptides is a general principle in the nervous system; and (3) the nervous system is responsive to peptide signals. These historical lines have contributed to how neuropeptides can be defined today: "Neuropeptides are small proteinaceous substances produced and released by neurons through the regulated secretory route and acting on neural substrates."
Thus, neuropeptides are the most diverse class of signaling molecules in the brain, engaged in many physiological functions. According to this definition, almost 70 genes can be distinguished in the mammalian genome, encoding neuropeptide precursors and a multitude of bioactive neuropeptides. In addition, among cytokines, peptide hormones, and growth factors there are several subfamilies of peptides displaying most of the hallmarks of neuropeptides, for example neural chemokines, cerebellins, neurexophilins, and granins. All classical neuropeptides as well as putative neuropeptides from the latter families are presented as a resource. © 2011 Springer Science+Business Media, LLC. Boeters S.,CPB | Koornneef J.,University Utrecht Energy Economics | Year: 2011 What are the excess costs of a separate 20% target for renewable energy as part of the EU climate policy for 2020? We answer this question using a computable general equilibrium model, WorldScan, which has been extended with a bottom-up module of the electricity sector. The model set-up makes it possible to base the calibration directly on available estimates of costs and capacity potentials for renewable energy sources. In our base case simulation, the costs of EU climate policy with the renewables target are 6% higher than those of a policy without this target. The uncertainty in this estimate is considerable, however, and depends on our assumptions about the availability of low-cost renewable energy: the initial cost level, the steepness of the supply curves, and the share of renewable energy in the baseline. Within the range we explore, the excess costs vary from zero (when the target is not a binding constraint) to 32% (when the cost progression and the initial cost disadvantage for renewable energy are high and its initial share is low). © 2011 Elsevier B.V.
Medema R.H.,University Utrecht | Medema R.H.,Netherlands Cancer Institute | MacUrek L.,Academy of Sciences of the Czech Republic Oncogene | Year: 2012 DNA-damaging therapies represent the most frequently used non-surgical anticancer strategies in the treatment of human tumors. These therapies can kill tumor cells, but at the same time they can be particularly damaging and mutagenic to healthy tissues. The efficacy of DNA-damaging treatments can be improved if tumor cell death is selectively enhanced, and the recent application of poly-(ADP-ribose) polymerase inhibitors in BRCA1/2-deficient tumors is a successful example of this. DNA damage is known to trigger cell-cycle arrest through activation of DNA-damage checkpoints. This arrest can be reversed once the damage has been repaired, but irreparable damage can promote apoptosis or senescence. Alternatively, cells can reenter the cell cycle before repair has been completed, giving rise to mutations. In this review we discuss the mechanisms involved in the activation and inactivation of DNA-damage checkpoints, and how the transition from arrest to cell-cycle re-entry is controlled. In addition, we discuss recent attempts to target the checkpoint in anticancer strategies. © 2012 Macmillan Publishers Limited. All rights reserved. Van De Meent M.,University Utrecht Classical and Quantum Gravity | Year: 2011 We examine the continuum limit of the piecewise flat locally finite gravity model introduced by 't Hooft. In the linear weak-field limit, we find the energy-momentum tensor and metric perturbation of an arbitrary configuration of defects. The energy-momentum turns out to be restricted to satisfy certain conditions. The metric perturbation is mostly fixed by the energy-momentum except for its lightlike modes, which reproduce linear gravitational waves, despite no such waves being present at the microscopic level. © 2011 IOP Publishing Ltd.
Ten Cate O.,University Utrecht Academic Medicine | Year: 2014 The undergraduate medical degree, leading to a license to practice, has traditionally been the defining professional milestone of the physician. Developments in health care and medical education and training, however, have changed the significance of the medical degree in the continuum of education toward clinical practice. The author discusses six questions that should lead us to rethink the current status and significance of the medical degree and, consequently, that of the physician. These questions include the quest for core knowledge and competence of the doctor, the place of the degree in the education continuum, the increasing length of training, the sharing of health care tasks with other professionals, and the nature of professional identity in a multitasking world. The author concludes by examining ways to redefine what it means to be a "medical doctor." Tee J.-M.,University Utrecht | Peppelenbosch M.P.,Erasmus University Rotterdam Critical Reviews in Biochemistry and Molecular Biology | Year: 2010 The ankyrin repeat is a protein module with high affinity for other ankyrin repeats, based on strong Van der Waals forces. The resulting dimerization is unusually resistant to both mechanical forces and alkalinization, making this module exceedingly useful for meeting the extraordinary demands of muscle physiology. Many aspects of muscle function are controlled by the superfamily of ankyrin repeat domain-containing proteins, including structural fixation of the contractile apparatus to the muscle membrane by ankyrins, the archetypical members of the family.
Additionally, other ankyrin repeat domain-containing proteins critically control the various differentiation steps during muscle development, with Notch and developmental stage-specific expression of the members of the Ankyrin repeat and SOCS box (ASB) containing family of proteins controlling compartment size and guiding the various steps of muscle specification. Also, adaptive responses in fully formed muscle require ankyrin repeat containing proteins, with Myotrophin/V-1 ankyrin repeat containing proteins controlling the induction of hypertrophic responses following excessive mechanical load, and muscle ankyrin repeat proteins (MARPs) acting as protective mechanisms of last resort following extreme demands on muscle tissue. Knowledge of the mechanisms governing the ordered expression of the various members of the superfamily of ankyrin repeat domain-containing proteins may prove exceedingly useful for developing novel rational therapy for cardiac disease and muscular dystrophies. © 2010 Informa UK Ltd. Braidot E.,University Utrecht Nuclear Physics A | Year: 2011 During the 2008 run RHIC provided high luminosity in both p + p and d + Au collisions at √sNN=200 GeV. Electromagnetic calorimeter acceptance in STAR was enhanced by the new Forward Meson Spectrometer (FMS), and is now almost continuous over -1<η<4 and the full azimuth. This large acceptance provides sensitivity to the gluon density in the nucleus down to x≈10-3, as expected for 2→2 parton scattering. Measurements of the azimuthal correlation between a forward π0 and an associated particle at large rapidity are sensitive to the low-x gluon density. Data exhibit the qualitative features expected from gluon saturation. A comparison to calculations using the Color Glass Condensate (CGC) model is presented. © 2011 Elsevier B.V. Bijlsma J.W.,University Utrecht Rheumatology (Oxford, England) | Year: 2010 There is a range of pharmacological options available to the rheumatologist for treating arthritis.
Non-selective NSAIDs or Cox-2 selective inhibitors are widely prescribed to reduce inflammation and alleviate pain; however, they must be used with caution in individuals with an increased cardiovascular, renal or gastrointestinal (GI) risk. The potential cardiovascular risks of Cox-2 selective inhibitors came to light over a decade ago. The conflicting nature of the study data reflects some context dependency, but the evidence shows a varying degree of cardiovascular risk with both Cox-2 selective inhibitors and non-selective NSAIDs. This risk appears to be dose dependent, which may have important ramifications for arthritis patients who require long-term treatment with high doses of anti-inflammatory drugs. The renal effects of non-selective NSAIDs have been well characterized. An increased risk of adverse renal events was found with rofecoxib but not celecoxib, suggesting that this is not a class effect of Cox-2 selective inhibitors. Upper GI effects of non-selective NSAID treatment, ranging from abdominal pain to ulceration and bleeding are extensively documented. Concomitant prescription of a proton pump inhibitor can help in the upper GI tract, but probably not in the lower. Evidence suggests that Cox-2 selective inhibitors are better tolerated in the entire GI tract. More evidence is required, and a composite end-point is being evaluated. Appropriate treatment strategies are needed depending on the level of upper and lower GI risk. Rheumatologists must be vigilant in assessing benefit-risk when prescribing a Cox-2 selective inhibitor or non-selective NSAID and should choose appropriate agents for each individual patient. De Mol N.J.,University Utrecht Methods in Molecular Biology | Year: 2012 Surface plasmon resonance (SPR) is a well-established label-free technique to detect mass changes near an SPR surface. For 20 years the benefits of SPR have been proven in biomolecular interaction analysis, including measurements of affinity and kinetics. 
The emergence of proteomics and a need for high-throughput analysis drive the development of SPR systems capable of analyzing microarrays. The use of SPR imaging (also known as SPR microscopy) makes it possible to use multiplexed arrays to follow binding reactions. As SPR only analyzes the binding process, but not the identity of captured molecules on the SPR surface, technologies have been developed to integrate SPR with mass spectrometric (MS) analysis. Such approaches involve the recovery of analytes from the SPR surface and subsequent MALDI-TOF MS analysis, or LC-MS/MS after tryptic digestion of recovered proteins. An approach compatible with SPR arrays is on-chip MALDI-TOF MS, from arrayed spots on an SPR surface. This review describes some exciting developments in the application of SPR to proteomics, using instruments which are on the market already, or are expected to be available in the years to come. © 2012 Springer Science+Business Media, LLC. Pons T.L.,University Utrecht Photosynthesis Research | Year: 2012 The effect of temperature and irradiance during growth on photosynthetic traits of two accessions of Arabidopsis thaliana was investigated. Plants were grown at 10 and 22 °C, and at 50 and 300 μmol photons m-2 s-1, in a factorial design. As known from other cold-tolerant herbaceous species, growth of Arabidopsis at low temperature resulted in increases in photosynthetic capacity per unit leaf area and chlorophyll. Growth at high irradiance had a similar effect. However, growth temperature and irradiance showed interacting effects for several capacity-related variables. Temperature effects on the ratio between electron transport capacity and carboxylation capacity were also different in Arabidopsis grown at low compared to high irradiance. The carboxylation capacity per unit Rubisco, a measure of the in vivo Rubisco activity, was low in plants grown at low irradiance, but there was no clear growth temperature effect.
The limitation of photosynthesis by triose-phosphate utilization in plants grown at high temperature was less when grown at low compared to high irradiance. Several of these traits contribute to reduced efficiency of the utilization of resources for photosynthesis of Arabidopsis at low irradiance. The two accessions from contrasting climates showed remarkably similar capabilities of developmental acclimation to the two environmental factors. Hence, no evidence was found for adaptation of the photosynthetic apparatus to specific climatic conditions. © 2012 The Author(s). Lozano R.,University Utrecht | Lozano R.,Organisational Sustainability Ltd. Journal of Cleaner Production | Year: 2013 Recently, there has been a rapid growth in company sustainability reporting, as well as an improvement in the quality of reports. A number of guidelines have been instrumental in this process; however, they still do not consider the importance of the inter-linkages and synergies among the different indicators and dimensions. This paper focuses on assessing sustainability inter-linkages in corporate sustainability reporting. For this study, the reports from fifty-three European companies, covering thirteen industries at A+ Global Reporting Initiative level and third-party certified, were selected. These reports were analysed following a two-pronged, quasi-quantitative approach - firstly by checking which of the reports covered any of the inter-linking issues, and secondly by checking how well these were covered (i.e. the performance). The results showed that, although not explicitly demanded by the guidelines, the coverage of the inter-linking issues ranged from medium to high, whilst performance ranged from low to high.
Given the holistic nature of business and of sustainability, and the lack of inclusion of this in the current reporting guidelines, this paper calls for an update of the theory, and of the guidelines, to ensure that a more systemic approach is adopted in business praxis. It also makes an appeal to SR managers and champions, and those compiling the reports, to actively look for the inter-linking issues and dimensions, in order to gain new insights with a view to reducing, or even avoiding, conflicts between/among issues. © 2013 Elsevier Ltd. All rights reserved. Lambooy T.,Nyenrode Business University | Lambooy T.,University Utrecht Journal of Cleaner Production | Year: 2011 Freshwater scarcity is no longer limited to sub-Saharan developing countries; in Western society, too, access to unlimited amounts of freshwater is not assured at all times. It has been argued - and laid down in many national legal systems - that access to freshwater is a basic human right. What if corporate freshwater use threatens to interfere with this human right? The main focus of the article is to explore the role of today's companies in relation to freshwater. A number of tools have been developed to address the need to reduce corporate use of freshwater. The article discusses specialised water reporting instruments such as the 2007 Global Water Tool and the water footprint calculation method. In addition, attention is paid to a CERES report (2010) revealing that the majority of the world's 100 leading companies in water-intensive industries still have weak management and disclosure of water-related risks and opportunities. To obtain concrete information about corporate water strategies and practices, an explorative analysis was conducted on 20 Dutch multinational companies. The article highlights various innovative practices.
In sum, it is demonstrated that companies are expected to bear responsibility for their impact on water resources, in particular when it influences public access to water in areas with freshwater scarcity and/or weak government. Notwithstanding the critical conclusions of the CERES report, it is interesting to see an evolution in corporate research concerning sustainable water use and the development of greener products and greener ways of production. © 2010 Elsevier Ltd. All rights reserved. Donahue M.J.,Vanderbilt University | Strother M.K.,Vanderbilt University | Hendrikse J.,University Utrecht Stroke | Year: 2012 Changes in cerebral hemodynamics underlie a broad spectrum of ischemic cerebrovascular disorders. An ability to accurately and quantitatively measure hemodynamic (cerebral blood flow and cerebral blood volume) and related metabolic (cerebral metabolic rate of oxygen) parameters is important for understanding healthy brain function and comparative dysfunction in ischemia. Although positron emission tomography, single-photon emission tomography, and gadolinium-MRI approaches are common, more recently MRI approaches that do not require exogenous contrast have been introduced with variable sensitivity for hemodynamic parameters. The ability to obtain hemodynamic measurements with these new approaches is particularly appealing in clinical and research scenarios in which follow-up and longitudinal studies are necessary. The purpose of this review is to outline current state-of-the-art MRI methods for measuring cerebral blood flow, cerebral blood volume, and cerebral metabolic rate of oxygen and provide practical tips to avoid imaging pitfalls. MRI studies of cerebrovascular disease performed without exogenous contrast are synopsized in the context of clinical relevance and methodological strengths and limitations. © 2012 American Heart Association, Inc. 
Dunaif A.,Northwestern University | Fauser B.C.J.M.,University Utrecht Journal of Clinical Endocrinology and Metabolism | Year: 2013 Context: It has become evident over the past 30 years that polycystic ovary syndrome (PCOS) is more than a reproductive disorder. It has metabolic sequelae that can affect women across the lifespan. Diagnostic criteria based on the endocrine features of the syndrome, hyperandrogenism and chronic anovulation, such as the National Institutes of Health (NIH) criteria, identify women at high metabolic risk. The additional phenotypes defined by the Rotterdam diagnostic criteria identify women with primarily reproductive rather than metabolic dysfunction. Objective: The aim is to discuss the rationale for a separate name for the syndrome that is associated with high metabolic risk while maintaining the current name for the phenotypes with primarily reproductive morbidity. Intervention: The NIH Office of Disease Prevention-sponsored Evidence-Based Methodology Workshop on Polycystic Ovary Syndrome recommended that a new name is needed for PCOS. Positions: The authors propose that PCOS be retained for the reproductive phenotypes and that a new name be created for the phenotypes at high metabolic risk. Conclusions: There should be two names for the PCOS phenotypes: those with primarily reproductive consequences should continue to be called PCOS, and those with important metabolic consequences should have a new name. Copyright © 2013 by The Endocrine Society. Leget C.,University Utrecht Medicine, Health Care and Philosophy | Year: 2013 The concept of dignity is notoriously vague. In this paper it is argued that the reason for this is that there are three versions of dignity that are often confused. First we will take a short look at the history of the concept of dignity in order to demonstrate how, already from Roman Antiquity, two versions of dignity can be distinguished.
Subsequently, the third version will be introduced and it will be argued that although the three versions of dignity hang together, they should also be clearly distinguished in order to avoid confusion. The reason for distinguishing the three versions is that each of them is only partially effective. This will be demonstrated by taking the discussion about voluntary 'dying with dignity' as an example. Inspired by both Paul Ricoeur's concept of ethics and the ethics of care, a proposal will be made as to how the three versions of dignity may sustain each other and help achieve what none of the versions can do on its own. © 2012 Springer Science+Business Media B.V. Kasteleijn-Nolst Trenite D.G.A.,University Utrecht Epilepsia | Year: 2012 Summary: Most patients with epilepsy report that seizures are sometimes, or exclusively, provoked by general internal precipitants (such as stress, fatigue, fever, sleep, and menstrual cycle) and by external precipitants (such as excess alcohol, heat, bathing, eating, reading, and flashing lights). Some patients describe very exotic and precise triggers, like tooth brushing or listening to a particular melody. Nevertheless, the most commonly noticed seizure precipitants by far are stress, lack of sleep, and fatigue. Recognized reflex seizure triggers are usually sensory and visual, such as television, discotheques, and video games. Visually evoked seizures comprise 5% of the total of 6% reflex seizures. The distinction between provocative and reflex factors and seizures seems artificial, and in many patients, maybe all, there is a combination of these. It seems plausible that all of the above-mentioned factors can unbalance the brain network; at times, accumulation of factors then leads to primary generalized, partial, or secondarily generalized seizures. If the provoking factors are too exotic, patients may be sent to the psychiatrist.
Conversely, if the seizure-provoking fluctuating mechanisms include common habits and environmental factors, these may hardly be considered provocative factors. Awareness of precipitating factors and their possible interactions might help us to unravel the pathophysiology of epilepsy and to change the notion that seizure occurrence is unpredictable. This article provides an overview of the epidemiology, classification, diagnosis, treatment, and especially similarities in the variety of provocative and reflex factors with resulting general hypotheses. © 2012 International League Against Epilepsy. Gonzalez S.F.,Harvard University | Degn S.E.,University of Aarhus | Pitcher L.A.,Harvard University | Woodruff M.,Harvard University | And 2 more authors. Annual Review of Immunology | Year: 2011 The clonal selection theory first proposed by Macfarlane Burnet is a cornerstone of immunology (1). At the time, it revolutionized the thinking of immunologists because it provided a simple explanation for lymphocyte specificity, immunological memory, and elimination of self-reactive clones (2). The experimental demonstration by Nossal & Lederberg (3) that B lymphocytes bear receptors for a single antigen raised the central question of where B lymphocytes encounter antigen. This question has remained mostly unanswered until recently. Advances in techniques such as multiphoton intravital microscopy (4, 5) have provided new insights into the trafficking of B cells and their antigen. In this review, we summarize these advances in the context of our current view of B cell circulation and activation. © 2011 by Annual Reviews. All rights reserved. Svircevic V.,University Utrecht The Cochrane database of systematic reviews | Year: 2013 A combination of general anaesthesia (GA) with thoracic epidural analgesia (TEA) may have a beneficial effect on clinical outcomes by reducing the risk of perioperative complications after cardiac surgery.
The objective of this review was to determine the impact of perioperative epidural analgesia in cardiac surgery on perioperative mortality and cardiac, pulmonary or neurological morbidity. We performed a meta-analysis to compare the risk of adverse events and mortality in patients undergoing cardiac surgery under general anaesthesia with and without epidural analgesia. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (2012, Issue 12) in The Cochrane Library; MEDLINE (PubMed) (1966 to November 2012); EMBASE (1989 to November 2012); CINAHL (1982 to November 2012) and the Science Citation Index (1988 to November 2012). We included randomized controlled trials comparing outcomes in adult patients undergoing cardiac surgery with either GA alone or GA in combination with TEA. All publications found during the search were manually and independently reviewed by the two authors. We identified 5035 titles retrieved from the five different databases, of which 4990 studies did not satisfy the selection criteria or were duplicate publications. We performed a full review on 45 studies, of which 31 publications met all inclusion criteria. These 31 publications reported on a total of 3047 patients, 1578 patients with GA and 1469 patients with GA plus TEA. Compared with GA alone, pooled analysis for patients receiving GA with TEA showed an odds ratio (OR) of 0.84 (95% CI 0.33 to 2.13, 31 studies) for mortality; 0.76 (95% CI 0.49 to 1.19, 17 studies) for myocardial infarction; and 0.50 (95% CI 0.21 to 1.18, 10 studies) for stroke. The relative risks (RR) for respiratory complications and supraventricular arrhythmias were 0.68 (95% CI 0.54 to 0.86, 14 studies) and 0.65 (95% CI 0.50 to 0.86, 15 studies), respectively.
This meta-analysis of studies, identified to 2010, showed that the use of TEA in patients undergoing coronary artery bypass graft surgery may reduce the risk of postoperative supraventricular arrhythmias and respiratory complications. There were no effects of TEA with GA on the risk of mortality, myocardial infarction or neurological complications compared with GA alone. De Groot P.G.,University Utrecht Thrombosis Research | Year: 2011 The antiphospholipid syndrome is an autoimmune disease characterised by the clinical features of recurrent thrombosis in the venous or arterial circulation and foetal losses in combination with circulating anti-phospholipid antibodies in the blood of the afflicted patients. Over the last 25 years numerous studies have established the correlation between the presence of antibodies against anionic phospholipids and thrombo-embolic manifestations but how these antibodies cause thrombosis is still unclear. Most scientists now accept the fact that only a subset of the antiphospholipid antibodies has clinical relevance. Not antibodies to anionic phospholipids but rather antibodies to β2-glycoprotein I are thought to be the major cause for the pathological manifestations. β2-Glycoprotein I is a plasma protein without a known function and persons lacking β2-Glycoprotein I are completely healthy. Our challenge is to understand why auto-antibodies against such a dispensable protein are so common and how antibodies directed against a protein without obvious function can induce the severe clinical manifestations observed in this syndrome. © 2010 Elsevier B.V. All rights reserved. Hoogenraad T.U.,University Utrecht International Journal of Alzheimer's Disease | Year: 2011 Breakthrough in treatment of Alzheimer's disease with a shift from irrational dangerous chelation therapy to rational safe evidence based oral zinc therapy. 
Evidence based medicine: After synthesizing the best available clinical evidence, I conclude that oral zinc therapy is a conscientious choice for treatment of free copper toxicosis in individual patients with Alzheimer's disease. Hypothesis 1: Age related free copper toxicosis is a causal factor in pathogenesis of Alzheimer's disease. There are 2 neurodegenerative diseases with abnormalities in copper metabolism: (a) the juvenile form with degeneration in the basal ganglia (Wilson's disease) and (b) the age related form with cortical neurodegeneration (Alzheimer's disease). Initially the hypothesis was that neurodegeneration was caused by accumulation of copper in the brain, but later experiences with treatment of Wilson's disease led to the conviction that free plasma copper is the toxic form of copper: it catalyzes amyloid formation, thereby generating oxidative stress, free radicals and degeneration of cortical neurons. Hypothesis 2: Oral zinc therapy is an effective and safe treatment of free copper toxicosis in Alzheimer's disease. Proposed dosage: 50 mg elemental zinc/day. Warning: Chelation therapy is irrational and dangerous in treatment of copper toxicosis in Alzheimer's disease. © 2011 Tjaard U. Hoogenraad. Princen S.,University Utrecht Marine Policy | Year: 2010 Over the past two decades profound changes have taken place in the European Union's (EU) fisheries policy. Partly these changes have occurred within the EU's Common Fisheries Policy itself, but partly policy change has been effected by the application of environmental legislation and policy instruments to fisheries issues. This article argues that the process of policy change in EU fisheries policy can best be understood in terms of the interaction of policy images and policy venues that is at the core of the punctuated equilibrium theory of policy-making.
As a result of the rise of a biodiversity perspective on fisheries issues, environmental policy-makers have become active in fisheries issues, which has led to profound changes in both the content of fisheries policies and the institutional organisation around this issue area. © 2009 Elsevier Ltd. All rights reserved. Chubar N.,University Utrecht Journal of Colloid and Interface Science | Year: 2011 New inorganic ion exchangers based on double Mg-Al hydrous oxides were generated via a new, non-traditional sol-gel synthesis method that avoids using metal alkoxides as raw materials. Surface chemical and adsorptive properties of the final products were controlled by several modes of hydrogel and xerogel treatment, which produced materials of layered structure, mixed hydrous oxides or amorphous adsorbents. The final adsorptive materials obtained via thermal treatment of xerogels were layered mesoporous materials with carbonate in the interlayer space, a surface abundant in hydroxyl groups and maximum adsorptive capacity for arsenate. The higher affinity of Mg-Al hydrous oxides towards H2AsO4- is confirmed by steep adsorption isotherms with a plateau (removal capacity) at 220 mg[As] gdw-1 for the best sample at pH = 7, fast adsorption kinetics and little pH effect. Adsorption of arsenite, fluoride, bromate, bromide, selenate and borate by Mg-Al hydrous oxides was several times higher than, or competitive with (depending on the anion), that of conventional inorganic ion exchange adsorbents. © 2011 Elsevier Inc. Waldinger M.D.,University Utrecht Current Opinion in Psychiatry | Year: 2014 Purpose of review: As there are various drugs and different treatment strategies to delay ejaculation, a review of the current drug treatments for premature ejaculation is relevant for daily clinical practice.
Recent findings: There are four premature ejaculation subtypes: lifelong premature ejaculation, acquired premature ejaculation, variable premature ejaculation and subjective premature ejaculation. These premature ejaculation subtypes vary in the duration of the intravaginal ejaculation latency time, their course in life and frequency of early ejaculations. Drug treatment is mainly required for lifelong and acquired premature ejaculation. On the other hand, counseling, psychoeducation and local anesthetics are particularly indicated for variable premature ejaculation and subjective premature ejaculation. Apart from the efficacy of various drugs, drugs against premature ejaculation can be taken on-demand or on a daily basis. However, apart from the on-demand use of dapoxetine, all other premature ejaculation treatments are off-label. Summary: Drug treatment is the first choice of treatment for lifelong premature ejaculation and may also be indicated for acquired premature ejaculation. Together with the patient, the clinician can choose which drug and which treatment strategy is most suitable for the patient and his partner. © 2014 Wolters Kluwer Health | Lippincott Williams and Wilkins. Snellings R.,University Utrecht Journal of Physics G: Nuclear and Particle Physics | Year: 2011 The ALICE detector at the Large Hadron Collider (LHC) recorded the first Pb-Pb collisions at √sNN = 2.76 TeV in November and December of 2010. We report on the measurements of anisotropic flow for charged and identified particles. From the comparison with measurements at lower energies and with model predictions, we find that the system created at these collision energies is described well by hydrodynamical model calculations and behaves like an almost perfect fluid. © CERN 2011. Published under licence by IOP Publishing Ltd.
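As standard heavy-ion background for the ALICE abstract above (this convention is not stated in the abstract itself), anisotropic flow is conventionally quantified by the Fourier coefficients $v_n$ of the azimuthal particle distribution with respect to the symmetry planes $\Psi_n$:

```latex
\frac{dN}{d\varphi} \propto 1 + 2\sum_{n=1}^{\infty} v_n \cos\!\bigl(n(\varphi - \Psi_n)\bigr)
```

Here $v_2$, the elliptic flow, is the dominant coefficient in non-central collisions; the measurements reported above are measurements of these $v_n$.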
Weicht B.,University Utrecht Journal of Aging Studies | Year: 2013 The provision and arrangement of care for elderly people is one of the main challenges for the future of European welfare states. In both political and public discourses elderly people feature as the subjects who are associated with particular needs, wishes and desires and for whom care needs to be guaranteed and organised. Underlying the cultural construction of the care regime and culture is an ideal-type model of the elderly person. This paper analyses the discursive construction of elderly people in the discourses on care in Austria. An understanding of how elderly people as subjects, their wishes and needs and their position within society are constructed enables us to analyse, question and challenge the current dominant care arrangements and their cultural embeddings. The paper demonstrates the processes of silencing, categorisation and passivation of elderly people and it is argued that the socio-discursive processes lead to a particular image of the elderly person which consequently serves as the basis on which the care regime is built. © 2013 Elsevier Inc. de Vries S.J.,University Utrecht Nature protocols | Year: 2010 Computational docking is the prediction or modeling of the three-dimensional structure of a biomolecular complex, starting from the structures of the individual molecules in their free, unbound form. HADDOCK is a popular docking program that takes a data-driven approach to docking, with support for a wide range of experimental data. Here we present the HADDOCK web server protocol, facilitating the modeling of biomolecular complexes for a wide community. The main web interface is user-friendly, requiring only the structures of the individual components and a list of interacting residues as input. Additional web interfaces allow the more advanced user to exploit the full range of experimental data supported by HADDOCK and to customize the docking process.
The HADDOCK server has access to the resources of a dedicated cluster and of the e-NMR GRID infrastructure. Therefore, a typical docking run takes only a few minutes to prepare and a few hours to complete. Chojnacki M.,University Utrecht Journal of Physics G: Nuclear and Particle Physics | Year: 2011 Results of the measurement of the π, K, p transverse momentum (pt) spectra at mid-rapidity in proton-proton collisions at √s = 7 TeV are presented. Particle identification was performed using the energy loss signal in the inner tracking system and the time projection chamber, while information from the time-of-flight detector was used to identify particles at higher transverse momentum. From the spectra at √s = 7 TeV, the mean transverse momentum (〈pt〉) and particle ratios were extracted and compared to results obtained for collisions at √s = 0.9 TeV and lower energies. © CERN 2011. Published under licence by IOP Publishing Ltd. Gehring U.,University Utrecht Environmental health : a global access science source | Year: 2013 Environmental exposures during pregnancy and early life may have adverse health effects. Single birth cohort studies often lack statistical power to tease out such effects reliably. To improve the use of existing data and to facilitate collaboration among these studies, an inventory of the environmental exposure and health data in these studies was made as part of the ENRIECO (Environmental Health Risks in European Birth Cohorts) project. The focus with regard to exposure was on outdoor air pollution, water contamination, allergens and biological organisms, metals, pesticides, smoking and second hand tobacco smoke (SHS), persistent organic pollutants (POPs), noise, radiation, and occupational exposures. The review lists methods and data on environmental exposures in 37 European birth cohort studies.
Most data are currently available for smoking and SHS (N=37 cohorts), occupational exposures (N=33), outdoor air pollution, and allergens and microbial agents (N=27). Exposure modeling is increasingly used for long-term air pollution exposure assessment; biomonitoring is used for assessment of exposure to metals, POPs and other chemicals; and environmental monitoring for house dust mite exposure assessment. Collaborative analyses with data from several birth cohorts have already been performed successfully for outdoor air pollution, water contamination, allergens, biological contaminants, molds, POPs and SHS. Key success factors for collaborative analyses are common definitions of main exposure and health variables. Our review emphasizes that such common definitions ideally need to be arrived at in the study design phase. However, careful comparison of methods used in existing studies also offers excellent opportunities for collaborative analyses. Investigators can use this review to evaluate the potential for future collaborative analyses with respect to data availability and methods used in the different cohorts and to identify potential partners for a specific research question. Benschop J.J.,University Utrecht Molecular cell | Year: 2010 Analyses of biological processes would benefit from accurate definitions of protein complexes. High-throughput mass spectrometry data offer the possibility of systematically defining protein complexes; however, the predicted compositions vary substantially depending on the algorithm applied. We determine consensus compositions for 409 core protein complexes from Saccharomyces cerevisiae by merging previous predictions with a new approach. Various analyses indicate that the consensus is comprehensive and of high quality. For 85 out of 259 complexes not recorded in GO, a literature search revealed strong support in the form of coprecipitation.
New complexes were verified by an independent interaction assay and by gene expression profiling of strains with deleted subunits, often revealing which cellular processes are affected. The consensus complexes are available in various formats, including a merge with GO, resulting in 518 protein complex compositions. The utility is further demonstrated by comparison with binary interaction data to reveal interactions between core complexes. Copyright (c) 2010 Elsevier Inc. All rights reserved. Gho J.M.,University Utrecht Journal of cardiac failure | Year: 2013 Dilated cardiomyopathy (DCM) is the most common form of nonischemic cardiomyopathy worldwide and can lead to sudden cardiac death and heart failure. Despite ongoing advances made in the treatment of DCM, improvement of outcome remains problematic. Stem cell therapy has been extensively studied in preclinical and clinical models of ischemic heart disease, showing potential benefit. DCM is associated with a major health burden, and few studies have been performed on cell therapy for DCM. In this systematic review we aimed to provide an overview of preclinical and clinical studies performed on cell therapy for DCM. A systematic search, critical appraisal, and summarized outcomes are presented. In total, 29 preclinical and 15 clinical studies were included. Methodologic quality of reported studies in general was low based on the Centre for Evidence Based Medicine, Oxford University, criteria. A large heterogeneity in inclusion criteria, procedural characteristics, and outcome measures was noted. The majority of studies showed a significant increase in left ventricular ejection fraction after cell therapy during follow-up. Stem cell therapy has shown moderate but significant effects in clinical trials for ischemic heart disease, but it remains to be determined if we can extrapolate these results to DCM patients. 
There is a need for methodologically sound studies to elucidate underlying mechanisms and translate those into improved therapy for clinical practice. To validate safety and efficacy of cell therapy for DCM, adequate randomized (placebo) controlled trials using different strategies are mandatory. Copyright © 2013 Elsevier Inc. All rights reserved. ten Cate O.T.J.,University Utrecht Advances in Health Sciences Education | Year: 2013 Providing feedback to trainees in clinical settings is considered important for development and acquisition of skill. Despite recommendations in the literature on how to provide feedback, research shows that its effectiveness is often disappointing. To understand why receiving feedback is more difficult than it appears, this paper views the feedback process through the lens of Self-Determination Theory (SDT). SDT claims that the development and maintenance of intrinsic motivation, associated with effective learning, requires feelings of competence, autonomy and relatedness. These three psychological needs are not likely to be satisfied in most feedback procedures. This explains why feedback is often less effective than one would expect. Suggestions to convey feedback in ways that may preserve the trainee's autonomy are provided. © 2012 Springer Science+Business Media B.V. Zhou H.,University Utrecht Molecular & cellular proteomics : MCP | Year: 2011 Metal and metal oxide chelating-based phosphopeptide enrichment technologies provide powerful tools for the in-depth profiling of phosphoproteomes. One weakness inherent to current enrichment strategies is poor binding of phosphopeptides containing multiple basic residues. The problem is exacerbated when strong cation exchange (SCX) is used for pre-fractionation, as under low pH SCX conditions phosphorylated peptides with multiple basic residues elute with the bulk of the tryptic digest and therefore require more stringent enrichment.
Here, we report a systematic evaluation of the characteristics of a novel phosphopeptide enrichment approach based on a combination of low pH SCX and Ti(4+)-immobilized metal ion affinity chromatography (IMAC), comparing it one-to-one with the well-established low pH SCX-TiO(2) enrichment method. We also examined the effect of 1,1,1,3,3,3-hexafluoroisopropanol (HFP), trifluoroacetic acid (TFA), or 2,5-dihydroxybenzoic acid (DHB) in the loading buffer, as it has been hypothesized that high levels of TFA and the perfluorinated solvent HFP improve the enrichment of phosphopeptides containing multiple basic residues. We found that Ti(4+)-IMAC in combination with TFA in the loading buffer outperformed all other methods tested, enabling the identification of around 5000 unique phosphopeptides containing multiple basic residues from 400 μg of a HeLa cell lysate digest. In comparison, ∼ 2000 unique phosphopeptides could be identified by Ti(4+)-IMAC with HFP and close to 3000 by TiO(2). We confirmed by motif analysis that the basic phosphopeptides are enriched in putative substrates of basophilic kinases. In addition, we performed an experiment using the SCX/Ti(4+)-IMAC methodology alongside the use of collision-induced dissociation (CID), higher-energy collision-induced dissociation (HCD) and electron transfer dissociation with supplementary activation (ETD) on a considerably more complex sample, consisting of a total of 400 μg of triple dimethyl-labeled MCF-7 digest. This analysis led to the identification of over 9,000 unique phosphorylation sites. The use of three peptide activation methods confirmed that ETD is best capable of sequencing multiply charged peptides. Collectively, our data show that the combination of SCX and Ti(4+)-IMAC is particularly advantageous for phosphopeptides with multiple basic residues.
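Several of the Cochrane abstracts in this listing (the epidural analgesia review above and the acute otitis media review below) report risk ratios with 95% confidence intervals and numbers needed to treat. As hedged background only, a minimal sketch of that arithmetic on a hypothetical 2×2 table; all counts are invented for illustration, the function names are mine, and the CI uses the standard log-scale delta-method approximation:

```python
import math

def risk_ratio(events_a, total_a, events_b, total_b):
    """Risk ratio of group A vs group B, with an approximate 95% CI
    computed on the log scale (delta method)."""
    risk_a = events_a / total_a
    risk_b = events_b / total_b
    rr = risk_a / risk_b
    # Standard error of log(RR) from the usual 2x2-table formula.
    se = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
    low = math.exp(math.log(rr) - 1.96 * se)
    high = math.exp(math.log(rr) + 1.96 * se)
    return rr, low, high

def nntb(risk_control, rr):
    """Number needed to treat to benefit = 1 / absolute risk reduction,
    where ARR = control risk * (1 - RR)."""
    arr = risk_control * (1 - rr)
    return 1 / arr

# Hypothetical counts: 30/200 events with treatment vs 60/200 with control.
rr, low, high = risk_ratio(30, 200, 60, 200)
print(round(rr, 2))           # 0.5
print(round(nntb(0.30, rr)))  # 7: treat ~7 patients to prevent one event
```

The same arithmetic explains the abstracts' NNTB values: a small relative effect on a rare outcome yields a large NNTB (e.g. NNTB 33 for perforations), while the same relative effect on a common outcome yields a small one.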
Venekamp R.P.,University Utrecht Cochrane database of systematic reviews (Online) | Year: 2013 Acute otitis media (AOM) is one of the most common diseases in early infancy and childhood. Antibiotic use for AOM varies from 56% in the Netherlands to 95% in the USA, Canada and Australia. To assess the effects of antibiotics for children with AOM. We searched CENTRAL (2012, Issue 10), MEDLINE (1966 to October week 4, 2012), OLDMEDLINE (1958 to 1965), EMBASE (January 1990 to November 2012), Current Contents (1966 to November 2012), CINAHL (2008 to November 2012) and LILACS (2008 to November 2012). Randomised controlled trials (RCTs) comparing 1) antimicrobial drugs with placebo and 2) immediate antibiotic treatment with expectant observation (including delayed antibiotic prescribing) in children with AOM. Two review authors independently assessed trial quality and extracted data. For the review of antibiotics against placebo, 12 RCTs (3317 children and 3854 AOM episodes) from high-income countries were eligible. However, one trial did not report patient-relevant outcomes, leaving 11 trials with generally low risk of bias. Pain was not reduced by antibiotics at 24 hours (risk ratio (RR) 0.89; 95% confidence interval (CI) 0.78 to 1.01) but almost a third fewer had residual pain at two to three days (RR 0.70; 95% CI 0.57 to 0.86; number needed to treat for an additional beneficial outcome (NNTB) 20) and fewer had pain at four to seven days (RR 0.79; 95% CI 0.66 to 0.95; NNTB 20). When compared with placebo, antibiotics did not alter the number of abnormal tympanometry findings at either four to six weeks (RR 0.92; 95% CI 0.83 to 1.01) or at three months (RR 0.97; 95% CI 0.76 to 1.24), or the number of AOM recurrences (RR 0.93; 95% CI 0.78 to 1.10). 
However, antibiotic treatment did lead to a statistically significant reduction of tympanic membrane perforations (RR 0.37; 95% CI 0.18 to 0.76; NNTB 33) and halved contralateral AOM episodes (RR 0.49; 95% CI 0.25 to 0.95; NNTB 11) as compared with placebo. Severe complications were rare and did not differ between children treated with antibiotics and those treated with placebo. Adverse events (such as vomiting, diarrhoea or rash) occurred more often in children taking antibiotics (RR 1.34; 95% CI 1.16 to 1.55; number needed to treat for an additional harmful outcome (NNTH) 14). Funnel plots do not suggest publication bias. Individual patient data meta-analysis of a subset of included trials found antibiotics to be most beneficial in children aged less than two with bilateral AOM, or with both AOM and otorrhoea. For the review of immediate antibiotics against expectant observation, five trials (1149 children) were eligible. Four trials (1007 children) reported outcome data that could be used for this review. From these trials, data from 959 children could be extracted for the meta-analysis on pain at days three to seven. No difference in pain was detectable at three to seven days (RR 0.75; 95% CI 0.50 to 1.12). No serious complications occurred in either the antibiotic group or the expectant observation group. Additionally, no difference in tympanic membrane perforations and AOM recurrence was observed. Immediate antibiotic prescribing was associated with a substantially increased risk of vomiting, diarrhoea or rash as compared with expectant observation (RR 1.71; 95% CI 1.24 to 2.36). Antibiotic treatment led to a statistically significant reduction of children with AOM experiencing pain at two to seven days compared with placebo, but since most children (82%) settle spontaneously, about 20 children must be treated to prevent one suffering from ear pain at two to seven days.
Additionally, antibiotic treatment led to a statistically significant reduction of tympanic membrane perforations (NNTB 33) and contralateral AOM episodes (NNTB 11). These benefits must be weighed against the possible harms: for every 14 children treated with antibiotics, one child experienced an adverse event (such as vomiting, diarrhoea or rash) that would not have occurred if antibiotics had been withheld. Antibiotics appear to be most useful in children under two years of age with bilateral AOM, or with both AOM and otorrhoea. For most other children with mild disease, an expectant observational approach seems justified. We have no trials in populations with higher risks of complications. Walder F.,Institute for Sustainability science | Van Der Heijden M.G.A.,Institute for Sustainability science | Van Der Heijden M.G.A.,University of Zurich | Van Der Heijden M.G.A.,University Utrecht Nature Plants | Year: 2015 Arbuscular mycorrhizal (AM) fungi are one of the most important groups of plant symbionts. These fungi provide mineral nutrients to plants in exchange for carbon. Although substantial amounts of resources are exchanged, the factors that regulate trade in the AM symbiosis are poorly understood. Recent evidence for the reciprocally regulated exchange of resources by AM fungi and plants has led to the suggestion that these symbioses operate according to biological market dynamics, in which interactions are viewed from an economic perspective, and the most beneficial partners are favoured. Here we present five arguments that challenge the importance of reciprocally regulated exchange, and thereby market dynamics, for resource exchange in the AM symbiosis, and suggest that such reciprocity is only found in a subset of symbionts, under specific conditions. We instead propose that resource exchange in the AM symbiosis is determined by competition for surplus resources, functional diversity and sink strength. © 2015 Macmillan Publishers Limited. 
All rights reserved. McQuarrie N.,University of Pittsburgh | Van Hinsbergen D.J.,University of Oslo | Van Hinsbergen D.J.,University Utrecht Geology | Year: 2013 The Arabia-Eurasia collision has been linked to global cooling, the slowing of Africa, Mediterranean extension, the rifting of the Red Sea, an increase in exhumation and sedimentation on the Eurasian plate, and the slowing and deformation of the Arabian plate. Collision age estimates range from the Late Cretaceous to Pliocene, with most estimates between 35 and 20 Ma. We assess the consequences of these collision ages on the magnitude and location of continental consumption by compiling all documented shortening within the region, and integrating this with plate kinematic reconstructions. Shortening estimates across the orogen allow for ~350 km of Neogene upper crustal contraction, necessitating collision by 20 Ma. A 35 Ma collision requires additional subduction of ~400-600 km of Arabian continental crust. Using the Oman ophiolite as an analogue, ophiolitic fragments preserved along the Zagros suture zone permit ~180 km of subduction of the Arabian continental margin plus overlying ophiolites. Wholesale subduction of this more dense continental margin plus ophiolites would reconstruct ~400-500 km of postcollisional Arabia-Eurasia convergence, consistent with a ca. 27 Ma initial collision age. This younger Arabia-Eurasia collision suggests a noncollisional mechanism for the slowing of Africa, and associated extension. © 2013 Geological Society of America. De Vos M.,Center for Reproductive Medicine | Devroey P.,Center for Reproductive Medicine | Fauser B.C.,University Utrecht The Lancet | Year: 2010 Primary ovarian insufficiency is a subclass of ovarian dysfunction in which the cause is within the ovary. In most cases, an unknown mechanism leads to premature exhaustion of the resting pool of primordial follicles. 
Primary ovarian insufficiency might also result from genetic defects, chemotherapy, radiotherapy, or surgery. The main symptom is absence of regular menstrual cycles, and the diagnosis is confirmed by detection of raised follicle-stimulating hormone and decreased oestradiol concentrations in the serum, suggesting a primary ovarian defect. The disorder usually leads to sterility, and has a large effect on reproductive health when it arises at a young age. Fertility-preservation options can be offered to some patients with cancer and those at risk of early menopause, such as those with familial cases of primary ovarian insufficiency. Long-term deprivation of oestrogen has serious implications for female health in general, and for bone density, cardiovascular and neurological systems, wellbeing, and sexual health in particular. © 2010 Elsevier Ltd. Twigt B.,University Utrecht International orthopaedics | Year: 2013 C-type distal radial fractures remain challenging fractures. Currently, locking plates are very popular because of their length-preserving stability. A considerable drawback is the high cost. Since 2003 we have been using mini AO plates (2.7 mm) as an alternative. We analysed our results and performed a cost analysis. Retrospective analysis was performed of all patients operated upon between 2003 and 2008 for C type distal radius fractures. Reduction was achieved with mini AO plates, applied in a buttress fashion, with ligamentotaxis. Rehabilitation consisted of immediate mobilisation. Pre- and postoperative X-rays, operative results and patient charts were reviewed. Furthermore, we prospectively evaluated the functional results using VAS, DASH and Mayo wrist scores. Lastly, we assessed the implant costs and compared them to locking plates. Thirty-four patients were treated with a mean age of 49 years. Mean radial shortening improved by 2 mm; dorsal and radial angulation improved by 23° and 4°, respectively.
At consolidation (eight weeks) the average radial shortening was 0.75 mm, a volar angulation of 3°, and 21° of radial angulation. Functional results were excellent, demonstrated by a mean VAS score less than 1, a DASH score of 12 and a Mayo wrist score of 87. Compared to locking plates, there was an overall reduction in material costs of 15,300 Euro. Our technique has excellent biomechanical stability, enabling immediate functional rehabilitation, good anatomical and functional outcome with significantly lower costs. Scheerlinck L.M.,University Utrecht The International journal of oral & maxillofacial implants | Year: 2013 To compare the donor site complication rate and length of hospital stay following the harvest of bone from the iliac crest, calvarium, or mandibular ramus. Ninety-nine consecutively treated patients were included in this retrospective observational single-center study. Iliac crest bone was harvested in 55 patients, calvarial bone in 26 patients, and mandibular ramus bone in 18 patients. Harvesting of mandibular ramus bone was associated with the lowest percentages of major complications (5.6%), minor complications (22.2%), and total complications (27.8%). Harvesting of iliac crest bone was related to the highest percentages of minor complications (56.4%) and total complications (63.6%), whereas harvesting of calvarial bone induced the highest percentage of major complications (19.2%). The length of the hospital stay was significantly influenced by the choice of donor site (P = .003) and age (P = .009); young patients with the mandibular ramus as the donor site had the shortest hospital stay. Harvesting of mandibular ramus bone was associated with the lowest percentage of complications and the shortest hospital stay. When the amount of bone to be obtained is deemed sufficient, mandibular ramus bone should be the first choice for the reconstruction of maxillofacial defects. 
Clevers H.C.,University Utrecht | Bevins C.L.,University of California at Davis Annual Review of Physiology | Year: 2013 Paneth cells are highly specialized epithelial cells of the small intestine, where they coordinate many physiological functions. First identified more than a century ago on the basis of their readily discernible secretory granules by routine histology, these cells are located at the base of the crypts of Lieberkühn, tiny invaginations that line the mucosal surface all along the small intestine. Investigations over the past several decades determined that these cells synthesize and secrete substantial quantities of antimicrobial peptides and proteins. More recent studies have determined that these antimicrobial molecules are key mediators of host-microbe interactions, including homeostatic balance with colonizing microbiota and innate immune protection from enteric pathogens. Perhaps more intriguing, Paneth cells secrete factors that help sustain and modulate the epithelial stem and progenitor cells that cohabitate in the crypts and rejuvenate the small intestinal epithelium. Dysfunction of Paneth cell biology contributes to the pathogenesis of chronic inflammatory bowel disease. Copyright © 2013 by Annual Reviews. All rights reserved. Kooijman E.,University Utrecht PloS one | Year: 2014 Subarachnoid hemorrhage (SAH) represents a considerable health problem with an incidence of 6-7 per 100,000 individuals per year in Western society. We investigated the long-term consequences of SAH on behavior, neuroinflammation and gray- and white-matter damage using an endovascular puncture model in Wistar rats. Rats were divided into a mild or severe SAH group based on their acute neurological score at 24 h post-SAH. The degree of hemorrhage determined in post-mortem brains at 48 h strongly correlated with the acute neurological score.
Severe SAH induced increased TNF-α, IL-1β, IL-10, MCP-1, MIP2, CINC-1 mRNA expression and cortical neutrophil influx at 48 h post-insult. Neuroinflammation after SAH was very long-lasting and still present at day 21 as determined by Iba-1 staining (microglia/macrophages) and GFAP (astrocytes). Long-term neuroinflammation was strongly associated with the degree of severity of SAH. Cerebral damage to gray- and white-matter was visualized by immunohistochemistry for MAP2 and MBP at 21 days after SAH. Severe SAH induced significant gray- and white-matter damage. MAP2 loss at day 21 correlated significantly with the acute neurological score determined at 24 h post-SAH. Sensorimotor behavior, determined by the adhesive removal task and von Frey test, was affected after severe SAH at day 21. In conclusion, we are the first to show that SAH induces ongoing cortical inflammation. Moreover, SAH induces mainly cortical long-term brain damage, which is associated with long-term sensorimotor damage. Cramer J.,University Utrecht Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences | Year: 2013 Material efficiency is one of the major challenges facing our society in the twenty-first century. Research can help to understand how we can make the transition towards a material-efficient society. This study focuses on the role of the government in such transition processes. Use is made of literature in the field of public administration and innovation literature, particularly transition management. On the basis of three Dutch examples (plastics, e-waste and bio-energy), the complex system change towards a material-efficient society will be reflected upon. These case studies underline the need for a tailor-made governance approach instead of a top-down government approach to enhance material efficiency in practice. 
The role of the government is not restricted to formulating policies and then leaving it up to other actors to implement these policies. Instead, it is a continuous interplay between the different actors during the whole implementation process. As such, the government's role is to steer the development in the desired direction and orchestrate the process from beginning to end. In order to govern with a better compass, scientifically underpinned guiding principles and indicators are needed. This is a challenge for researchers both in public administration and in transition management. © 2013 The Author(s) Published by the Royal Society. All rights reserved. Bender S.A.,University of California at Los Angeles | Duine R.A.,University Utrecht | Tserkovnyak Y.,University of California at Los Angeles Physical Review Letters | Year: 2012 We theoretically investigate spin transfer between a system of quasiequilibrated Bose-Einstein-condensed magnons in an insulator in direct contact with a conductor. While charge transfer is prohibited across the interface, spin transport arises from the exchange coupling between insulator and conductor spins. In a normal insulator phase, spin transport is governed solely by the presence of thermal and spin-diffusive gradients; the presence of Bose-Einstein condensation (BEC), meanwhile, gives rise to a temperature-independent condensate spin current. Depending on the thermodynamic bias of the system, spin may flow in either direction across the interface, engendering the possibility of a dynamical phase transition of magnons. We discuss the experimental feasibility of observing a BEC steady state (fomented by a spin Seebeck effect), which is contrasted to the more familiar spin-transfer-induced classical instabilities. © 2012 American Physical Society. 
Herfs P.G.,University Utrecht Human resources for health | Year: 2014 BACKGROUND: In most countries of the European Economic Area (EEA), there is no large-scale migration of medical graduates with diplomas obtained outside the EEA, that is, international medical graduates (IMGs). In the United Kingdom, however, health care is in part dependent on the influx of IMGs. In 2005, of all the doctors practising in the UK, 31% were educated outside the country. In most EEA-countries, health care is not dependent on the influx of IMGs. The aim of this study is to present data relating to the changes in IMG migration in the UK since the extension of the European Union in May 2004. In addition, data are presented on IMG migration in the Netherlands. These migration flows show that migration patterns differ strongly within these two EU-countries. METHOD: This study makes use of registration data on migrating doctors from the General Medical Council (GMC) in the UK and from the Dutch Department of Health. Moreover, data on the ratio of medical doctors in relation to a country's population were extracted from the World Health Organization (WHO). RESULTS: The influx of IMGs in the UK has changed in recent years due to the extension of the European Union in 2004, the expansion of UK medical schools and changes in the policy towards non-EEA doctors. The influx of IMGs in the Netherlands is described in detail. In the Netherlands, many IMGs come from Afghanistan, Iraq and Surinam. DISCUSSION AND CONCLUSIONS: There are clear differences between IMG immigration in the UK and in the Netherlands. In the UK, the National Health Service continues to be very reliant on immigration to fill shortage posts, whereas the number of immigrant doctors working in the Netherlands is much smaller. Both the UK and the Netherlands' regulatory bodies have shared great concerns about the linguistic and communication skills of both EEA and non-EEA doctors seeking to work in these countries.
IMG migration is a global and intricate problem. The source countries, not only those where English is the first or second language, experience massive IMG migration flows. De Haan M.C.,University Utrecht | Pickhardt P.J.,University of Wisconsin - Madison Gut | Year: 2015 Colorectal cancer (CRC) is the second most common cancer and second most common cause of cancer-related deaths in Europe. The introduction of CRC screening programmes using stool tests and flexible sigmoidoscopy has been shown to reduce CRC-related mortality substantially. In several European countries, population-based CRC screening programmes are ongoing or being rolled out. Stool tests like faecal occult blood testing are non-invasive and simple to perform, but are primarily designed to detect early invasive cancer. More invasive tests like colonoscopy and CT colonography (CTC) aim at accurately detecting both CRC and cancer precursors, thus providing for cancer prevention. This review focuses on the accuracy, acceptance and safety of CTC as a CRC screening technique and on the current position of CTC in organised population screening. Based on the detection characteristics and acceptability of CTC screening, it might be a viable screening test. The potential disadvantage of radiation exposure is probably overemphasised, especially with newer technology. At this time-point, it is not entirely clear whether the detection of extracolonic findings at CTC is of net benefit and is cost effective, but with responsible handling, this may be the case. Future efforts will seek to further improve the technique, refine appropriate diagnostic algorithms and study cost-effectiveness. Bots M.L.,University Utrecht | Sutton-Tyrrell K.,University of Pittsburgh Journal of the American College of Cardiology | Year: 2012 Carotid intima-media thickness (CIMT) measurements have been used in cardiovascular research for more than 2 decades.
There is a wealth of evidence showing that CIMT can be assessed in a reproducible manner and that increased CIMT relates to unfavorable risk factor levels and atherosclerosis elsewhere in the arterial system and to the risk of vascular events. Change in CIMT over time can be readily assessed, and trials showed that the rate of change is modifiable by treatment. Several issues important for the cardiovascular research community and its application in clinical practice are still outstanding. Promising future areas for CIMT measurements are: 1) application in studies among children and adolescents; 2) use of CIMT trials positioned decisively before the start of a morbidity and mortality trial; and 3) the use of CIMT measurement in risk stratification in those with an intermediate 10-year risk estimate. © 2012 American College of Cardiology Foundation. Knol M.J.,University Utrecht | VanderWeele T.J.,Harvard University International Journal of Epidemiology | Year: 2012 Authors often do not give sufficient information to draw conclusions about the size and statistical significance of interaction on the additive and multiplicative scales. To improve this, we provide four steps, template tables and examples. We distinguish two cases: when the causal effect of intervening on one exposure, across strata of another factor, is of interest ('effect modification'); and when the causal effect of intervening on two exposures is of interest ('interaction'). Assume we study whether X modifies the effect of A on D, where A, X and D are dichotomous. We propose presenting: (i) relative risks (RRs), odds ratios (ORs) or risk differences (RDs) for each (A, X) stratum with a single reference category taken as the stratum with the lowest risk of D; (ii) RRs, ORs or RDs for A within strata of X; (iii) interaction measures on additive and multiplicative scales; (iv) the A-D confounders adjusted for. Assume we study the interaction between A and B on D, where A, B and D are dichotomous. 
Steps (i) and (iii) are similar to presenting effect modification. (ii) Present RRs, ORs or RDs for A within strata of B and for B within strata of A. (iv) List the A-D and B-D confounders adjusted for. These four pieces of information will provide a reader the information needed to assess effect modification or interaction. The presentation can be further enriched when exposures have multiple categories. Our proposal hopefully encourages researchers to present effect modification and interaction analyses in as informative a manner as possible. Published by Oxford University Press on behalf of the International Epidemiological Association © The Author 2012; all rights reserved. Bolhuis J.J.,University Utrecht | Okanoya K.,RIKEN | Okanoya K.,University of Tokyo | Scharff C.,Free University of Berlin Nature Reviews Neuroscience | Year: 2010 Vocal imitation in human infants and in some orders of birds relies on auditory-guided motor learning during a sensitive period of development. It proceeds from 'babbling' (in humans) and 'subsong' (in birds) through distinct phases towards the full-fledged communication system. Language development and birdsong learning have parallels at the behavioural, neural and genetic levels. Different orders of birds have evolved networks of brain regions for song learning and production that have a surprisingly similar gross anatomy, with analogies to human cortical regions and basal ganglia. Comparisons between different songbird species and humans point towards both general and species-specific principles of vocal learning and have identified common neural and molecular substrates, including the forkhead box P2 (FOXP2) gene. © 2010 Macmillan Publishers Limited. All rights reserved. Kruiswijk F.,University Utrecht Oncogene | Year: 2015 Melanoma is the most lethal form of skin cancer and successful treatment of metastatic melanoma remains challenging. 
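The interaction measures in step (iii) of the Knol and VanderWeele presentation proposal above can be sketched numerically. The helper below uses invented relative risks (not data from the paper); RERI is the standard relative excess risk due to interaction on the additive scale, and the ratio of the joint RR to the product of the separate RRs is the multiplicative-scale measure.

```python
# Hypothetical illustration of step (iii): interaction measures on the
# additive and multiplicative scales, computed from stratum-specific
# relative risks (invented numbers, not data from the paper).

def interaction_measures(rr10, rr01, rr11):
    """rr10: RR for exposure A alone; rr01: RR for exposure B alone;
    rr11: RR for joint exposure; all relative to the doubly unexposed stratum."""
    # Relative excess risk due to interaction (additive scale);
    # RERI > 0 indicates positive additive interaction.
    reri = rr11 - rr10 - rr01 + 1
    # Ratio of the joint RR to the product of the separate RRs
    # (multiplicative scale); > 1 indicates positive multiplicative interaction.
    mult = rr11 / (rr10 * rr01)
    return reri, mult

reri, mult = interaction_measures(rr10=2.0, rr01=1.5, rr11=4.5)
print(reri)  # 2.0 -> positive additive interaction
print(mult)  # 1.5 -> positive multiplicative interaction
```

In a real analysis these RRs would come with confidence intervals, and the stratum with the lowest risk would be chosen as the single reference category, as step (i) recommends.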
BRAF/MEK inhibitors only show a temporary benefit due to rapid occurrence of resistance, whereas immunotherapy is mainly effective in selected subsets of patients. Thus, there is a need to identify new targets to improve treatment of metastatic melanoma. To this end, we searched for markers that are elevated in melanoma and are under regulation of potentially druggable enzymes. Here, we show that the pro-proliferative transcription factor FOXM1 is elevated and activated in malignant melanoma. FOXM1 activity correlated with expression of the enzyme Pin1, which we found to be indicative of a poor prognosis. In functional experiments, Pin1 proved to be a main regulator of FOXM1 activity through MEK-dependent physical regulation during the cell cycle. The Pin1-FOXM1 interaction was enhanced by BRAFV600E, the driver oncogene in the majority of melanomas, and in extrapolation of the correlation data, interference with Pin1 in BRAFV600E-driven metastatic melanoma cells impaired both FOXM1 activity and cell survival. Importantly, cell-permeable Pin1-FOXM1-blocking peptides repressed the proliferation of melanoma cells in freshly isolated human metastatic melanoma ex vivo and in three-dimensional-cultured patient-derived melanoids. When combined with the BRAFV600E-inhibitor PLX4032, a robust repression in melanoid viability was obtained, establishing preclinical value of patient-derived melanoids for prognostic use of drug sensitivity and further underscoring the beneficial effect of Pin1-FOXM1 inhibitory peptides as anti-melanoma drugs. These proof-of-concept results provide a starting point for development of therapeutic Pin1-FOXM1 inhibitors to target metastatic melanoma. Oncogene advance online publication, 17 August 2015; doi:10.1038/onc.2015.282.
© 2015 Macmillan Publishers Limited Pieters R.J.,University Utrecht Advances in Experimental Medicine and Biology | Year: 2011 In the process of adhesion, bacteria often carry proteins on their surface, adhesins, that bind to specific components of tissue cells or the extracellular matrix. In many cases these components are carbohydrate structures. The carbohydrate binding specificities of many bacteria have been uncovered over the years. The design and synthesis of inhibitors of bacterial adhesion has the potential to create new therapeutics for the prevention and possibly treatment of bacterial infections. Unfortunately, the carbohydrate structures often bind only weakly to the adhesion proteins, although drug design approaches can improve the situation. Furthermore, in some cases linking carbohydrates covalently together, to create so-called multivalent systems, can also significantly enhance the inhibitory potency. Besides adhesion inhibition as a potential therapeutic strategy, the adhesion proteins can also be used for detection. Novel methods to do this are being developed. These include the use of microarrays and glyconanoparticles. New developments in these areas are discussed. © 2011 Springer Science+Business Media B.V. Vanderschuren L.J.,University Utrecht Cold Spring Harbor perspectives in medicine | Year: 2013 It is increasingly recognized that studying drug taking in laboratory animals does not equate to studying genuine addiction, characterized by loss of control over drug use. This has inspired recent work aimed at capturing genuine addiction-like behavior in animals. In this work, we summarize empirical evidence for the occurrence of several DSM-IV-like symptoms of addiction in animals after extended drug use. These symptoms include escalation of drug use, neurocognitive deficits, resistance to extinction, increased motivation for drugs, preference for drugs over nondrug rewards, and resistance to punishment. 
The fact that addiction-like behavior can occur and be studied in animals gives us the exciting opportunity to investigate the neural and genetic background of drug addiction, which we hope will ultimately lead to the development of more effective treatments for this devastating disorder. Prakken B.,University Utrecht | Albani S.,Sanford Burnham Institute for Medical Research | Martini A.,University of Genoa The Lancet | Year: 2011 Juvenile idiopathic arthritis is a heterogeneous group of diseases characterised by arthritis of unknown origin with onset before age of 16 years. Pivotal studies in the past 5 years have led to substantial progress in various areas, ranging from disease classification to new treatments. Gene expression profiling studies have identified different immune mechanisms in distinct subtypes of the disease, and can help to redefine disease classification criteria. Moreover, immunological studies have shown that systemic juvenile idiopathic arthritis is an acquired autoinflammatory disease, and have led to successful studies of both interleukin-1 and interleukin-6 blockade. In other forms of the disease, synovial inflammation is the consequence of a disturbed balance between proinflammatory effector cells (such as T-helper-17 cells), and anti-inflammatory regulatory cells (such as FOXP3-positive regulatory T cells). Moreover, specific soluble biomarkers (S100 proteins) can guide individual treatment. Altogether these new developments in genetics, immunology, and imaging are instrumental to better define, classify, and treat patients with juvenile idiopathic arthritis. © 2011 Elsevier Ltd. Middelburg J.J.,University Utrecht Biogeosciences | Year: 2014 Stable isotopes have been used extensively to study food-web functioning, that is, the flow of energy and matter among organisms. Traditional food-web studies are based on the natural variability of isotopes and are limited to larger organisms that can be physically separated from their environment. 
Recent developments allow isotope ratio measurements of microbes and this in turn allows the measurement of entire food webs, in other words, from small producers at the bottom to large consumers at the top. Here, I provide a concise review on the use and potential of stable isotopes to reconstruct end-to-end food webs. I will first discuss food web reconstruction based on natural abundance isotope data and will then show that the use of stable isotopes as deliberately added tracers provides complementary information. Finally, challenges and opportunities for end-to-end food web reconstructions in a changing world are discussed. © Author(s) 2014. Rothwell P.M.,University of Oxford | Algra A.,University Utrecht | Amarenco P.,University Paris Diderot The Lancet | Year: 2011 Stroke is a major cause of death and disability worldwide. Without improvements in prevention, the burden will increase during the next 20 years because of the ageing population, especially in developing countries. Major advances have occurred in secondary prevention during the past three decades, which demonstrate the broader potential to prevent stroke. We review the main medical treatments that should be considered for most patients with transient ischaemic attack or ischaemic stroke in the acute phase and the long term, and draw attention to recent developments. © 2011 Elsevier Ltd. Van Der Schee W.,University Utrecht | Romatschke P.,University of Colorado at Boulder | Pratt S.,Michigan State University Physical Review Letters | Year: 2013 We present a fully dynamical simulation of central nuclear collisions around midrapidity at LHC energies. Unlike previous treatments, we simulate all phases of the collision, including the equilibration of the system.
For the simulation, we use numerical relativity solutions to anti-de Sitter space/conformal field theory for the preequilibrium stage, viscous hydrodynamics for the plasma equilibrium stage, and kinetic theory for the low-density hadronic stage. Our preequilibrium stage provides initial conditions for hydrodynamics, resulting in sizable radial flow. The resulting light particle spectra reproduce the measurements from the ALICE experiment at all transverse momenta. © 2013 American Physical Society. Hakanen J.J.,Finnish Institute of Occupational Health | Schaufeli W.B.,University Utrecht Journal of Affective Disorders | Year: 2012 Background: Burnout and work engagement have been viewed as opposite, yet distinct states of employee well-being. We investigated whether work-related indicators of well-being (i.e. burnout and work engagement) spill-over and generalize to context-free well-being (i.e. depressive symptoms and life satisfaction). More specifically, we examined the causal direction: does burnout/work engagement lead to depressive symptoms/life satisfaction, or the other way around? Methods: Three surveys were conducted. In 2003, 71% of all Finnish dentists were surveyed (n = 3255), and the response rate of the 3-year follow-up was 84% (n = 2555). The second follow-up was conducted four years later with a response rate of 86% (n = 1964). Structural equation modeling was used to investigate the cross-lagged associations between the study variables across time. Results: Burnout predicted depressive symptoms and life dissatisfaction from T1 to T2 and from T2 to T3. Conversely, work engagement had a negative effect on depressive symptoms and a positive effect on life satisfaction, both from T1 to T2 and from T2 to T3, even after adjusting for the impact of burnout at every occasion. Limitations: The study was conducted among one occupational group, which limits its generalizability. Conclusions: Work-related well-being predicts general wellbeing in the long-term. 
For example, burnout predicts depressive symptoms and not vice versa. In addition, burnout and work engagement are not direct opposites. Instead, both have unique, incremental impacts on life satisfaction and depressive symptoms. © 2012 Elsevier B.V. Hornsveld M.,University Utrecht Cell Death and Differentiation | Year: 2016 Loss of cellular adhesion leads to the progression of breast cancer through acquisition of anchorage independence, also known as resistance to anoikis. Although inactivation of E-cadherin is essential for acquisition of anoikis resistance, it has remained unclear how metastatic breast cancer cells counterbalance the induction of apoptosis without E-cadherin-dependent cellular adhesion. We report here that E-cadherin inactivation in breast cancer cells induces PI3K/AKT-dependent FOXO3 inhibition and identify FOXO3 as a novel and direct transcriptional activator of the pro-apoptotic protein BMF. As a result, E-cadherin-negative breast cancer cells fail to upregulate BMF upon transfer to anchorage independence, leading to anoikis resistance. Conversely, expression of BMF in E-cadherin-negative metastatic breast cancer cells is sufficient to inhibit tumour growth and dissemination in mice. In conclusion, we have identified repression of BMF as a major cue that underpins anoikis resistance and tumour dissemination in E-cadherin-deficient metastatic breast cancer. Cell Death and Differentiation advance online publication, 1 April 2016; doi:10.1038/cdd.2016.33. © 2016 Macmillan Publishers Limited Jakobsson L.,Karolinska Institutet | van Meeteren L.A.,University Utrecht Experimental Cell Research | Year: 2013 Blood vessels are composed of endothelial cells, mural cells (smooth muscle cells and pericytes) and their shared basement membrane. During embryonic development a multitude of signaling components orchestrate the formation of new vessels. The process is highly dependent on correct dosage, spacing and timing of these signaling molecules.
As vessels mature, some cascades remain active, albeit at very low levels, and may be reactivated upon demand. Members of the transforming growth factor β (TGF-β) protein family are strongly engaged in developmental angiogenesis but are also regulators of vascular integrity in the adult. In humans various genetic alterations within this protein family cause vascular disorders, involving disintegration of vascular integrity. Here we summarize and discuss recent data gathered from conditional and endothelial cell-specific genetic loss-of-function of members of the TGF-β family in the mouse. © 2013 Elsevier Inc. Bosch J.L.H.R.,University Utrecht | Weiss J.P.,SUNY Downstate Medical School Journal of Urology | Year: 2013 Purpose: Nocturia is a troubling condition with implications for daytime functioning. However, it often goes unreported. Many prevalence studies exist but differences in populations and definitions of nocturia render assimilation of the data difficult. This review provides an overview of the nocturia prevalence literature. Materials and Methods: A PubMed® search was performed to identify articles published in English from 1990 to February 2009 reporting nocturia prevalence in community based populations. Rates reported as overall data, and by age and by gender, were plotted for comparison. Results: A total of 43 relevant articles were identified. Prevalence rates in younger men (20 to 40 years) were 1 or more voids in 11% to 35.2% and 2 or more voids in 2% to 16.6%. Prevalence rates in younger women were 1 or more voids in 20.4% to 43.9% and 2 or more voids in 4.4% to 18%. In older men (older than 70 years) rates were 1 or more voids in 68.9% to 93% and 2 or more voids in 29% to 59.3%. In older women rates were 1 or more voids in 74.1% to 77.1% and 2 or more voids in 28.3% to 61.5%. Therefore, in practice up to 1 in 5 or 6 younger people consistently wake to void at least twice each night.
In some studies younger women appeared more likely to be affected than men. Up to 60% of older people void 2 or more times nightly. Conclusions: Nocturia is common across populations. It is most prevalent in older people but it also affects a significant proportion of younger individuals. Clinicians should be alert to the possibility that nocturia may impact the sleep, quality of life and overall health of their patients. Since the condition is highly multifactorial, frequency-volume charts are invaluable tools for the diagnosis of underlying factors and for treatment selection. © 2013 American Urological Association Education and Research, Inc. Boelen P.,University Utrecht Anxiety, Stress and Coping | Year: 2010 Research has shown that intolerance of uncertainty (IU) - the tendency to react negatively to situations that are uncertain - is involved in worry and generalized anxiety disorder, as well as in other anxiety symptoms and disorders. To our knowledge, no studies have yet examined the association between IU and emotional distress connected with the death of a loved one. Yet, it seems plausible that those who have more difficulty tolerating the uncertainties that often arise after such a loss experience more intense distress. The current study examined this assumption, using self-reported data from 134 bereaved individuals. Findings showed that IU was positively and significantly correlated with symptom levels of complicated grief and posttraumatic stress disorder (PTSD), even when controlling for time since loss (the single demographic/loss-related variable associated with symptom levels), and for neuroticism and worry, which are both correlates of IU. Furthermore, IU was specifically related to worry and symptom levels of PTSD, but not complicated grief, when controlling for the shared variance between worry, complicated grief severity, and PTSD-severity.
The present findings complement prior research that has shown that IU is a cognitive vulnerability factor for worry, and indicate that it may also be involved in emotional distress following loss. © 2010 Taylor & Francis. Dittrich B.,Albert Einstein Institute | Hohn P.A.,University Utrecht Classical and Quantum Gravity | Year: 2012 A general canonical formalism for discrete systems is developed, which can handle varying phase space dimensions and constraints. The central ingredient is Hamilton's principal function that generates canonical time evolution and ensures that the canonical formalism reproduces the dynamics of the covariant formulation following directly from the action. We apply this formalism to simplicial gravity and (Euclidean) Regge calculus, in particular. A discrete forward/backward evolution is realized by gluing/removing single simplices step by step to/from a bulk triangulation and amounts to Pachner moves in the triangulated hypersurfaces. As a result, the hypersurfaces evolve in a discrete multi-fingered time through the full Regge solution. Pachner moves are an elementary and ergodic class of homeomorphisms and generically change the number of variables, but can be implemented as canonical transformations on naturally extended phase spaces. Some moves introduce a priori free data that, however, may become fixed a posteriori by constraints arising in subsequent moves. The end result is a general and fully consistent formulation of canonical Regge calculus, thereby removing a longstanding obstacle in connecting covariant simplicial gravity models to canonical frameworks. The presented scheme is, therefore, interesting in view of many approaches to quantum gravity, but may also prove useful for numerical implementations. © 2012 IOP Publishing Ltd. 
Sandler H.,German Cancer Research Center | Kreth J.,German Cancer Research Center | Timmers H.T.M.,University Utrecht | Stoecklin G.,German Cancer Research Center Nucleic Acids Research | Year: 2011 The carbon catabolite repressor protein 4 (Ccr4)-Negative on TATA (Not) complex controls gene expression at two levels. In the nucleus, it regulates the basal transcription machinery, nuclear receptor-mediated transcription and histone modifications. In the cytoplasm, the complex is required for messenger RNA (mRNA) turnover through its two associated deadenylases, Ccr4 and Caf1. Not1 is the largest protein of the Ccr4-Not complex and serves as a scaffold for other subunits of the complex. Here, we provide evidence that human Not1 in the cytoplasm associates with the C-terminal domain of tristetraprolin (TTP), an RNA binding protein that mediates rapid degradation of mRNAs containing AU-rich elements (AREs). Not1 shows extensive interaction through its central region with TTP, whereas binding of Caf1 is restricted to a smaller central domain within Not1. Importantly, Not1 is required for the rapid decay of ARE-mRNAs, and TTP can recruit the Caf1 deadenylase only in the presence of Not1. Thus, cytoplasmic Not1 provides a platform that allows a specific RNA binding protein to recruit the Caf1 deadenylase and thereby trigger decay of its target mRNAs. © 2011 The Author(s). Zoller S.,ETH Zurich | Zoller S.,Swiss Institute of Bioinformatics | Schneider A.,University Utrecht Molecular Biology and Evolution | Year: 2013 Amino acid substitution matrices describe the rates by which amino acids are replaced during evolution. In contrast to nucleotide or codon models, amino acid substitution matrices are in general parameterless and empirically estimated, probably because there is no obvious parametrization for amino acid substitutions. Principal component analysis has previously been used to improve codon substitution models by empirically finding the most relevant parameters.
Here, we apply the same method to amino acid substitution matrices, leading to a semiempirical substitution model that can adjust the transition rates to the protein sequences under investigation. Our new model almost invariably achieves the best likelihood values in large-scale comparisons with established amino acid substitution models (JTT, WAG, and LG). In particular for longer alignments, these likelihood gains are considerably larger than what could be expected from simply having more parameters. The application of our model differs from that of mixture models (such as UL2 or UL3), as we optimize one rate matrix per alignment, whereas mixture models apply the variation per alignment site. This makes our model computationally more efficient, while the performance is comparable to that of UL3. Applied to the phylogenetic problem of the origin of placental mammals, our new model and the UL3 mixed model are the only ones of the tested models that cluster Afrotheria and Xenarthra into a clade called Atlantogenata, which would be in correspondence with recent findings using more sophisticated phylogenetic methods. © The Author 2012. Devuyst O.,Catholic University of Louvain | Devuyst O.,University of Zurich | Knoers N.V.A.M.,University Utrecht | Remuzzi G.,Centro Anna Maria Astori | Schaefer F.,University of Heidelberg The Lancet | Year: 2014 At least 10% of adults and nearly all children who receive renal-replacement therapy have an inherited kidney disease. These patients rarely die when their disease progresses and can remain alive for many years because of advances in organ-replacement therapy. However, these disorders substantially decrease their quality of life and have a large effect on health-care systems. Since the kidneys regulate essential homoeostatic processes, inherited kidney disorders have multisystem complications, which add to the usual challenges for rare disorders.
In this review, we discuss the nature of rare inherited kidney diseases, the challenges they pose, and opportunities from technological advances, which are well suited to target the kidney. Mechanistic insights from rare disorders are relevant for common disorders such as hypertension, kidney stones, cardiovascular disease, and progression of chronic kidney disease. Snippert H.J.,University Utrecht Cell | Year: 2016 The notion that the colon's deep crypt pockets provide a protected location that shields stem cells from potentially toxic substances is widely accepted. In this issue of Cell, Kaiko et al. reveal how a metabolite abundantly produced by the gut microbiota can inhibit stem cell proliferation but is blocked from doing so by crypt architecture. © 2016 Elsevier Inc. Moss A.C.,Center for Inflammatory Bowel Disease | Brinks V.,University Utrecht | Carpenter J.F.,Aurora Pharmaceutical Alimentary Pharmacology and Therapeutics | Year: 2013 Background Anti-drug antibodies (ADAs) to biologic therapies contribute to the loss of response and infusion reactions to anti-TNF drugs in patients with inflammatory bowel disease (IBD). The reasons behind this immunogenicity are complex, and have not been the focus of a dedicated review for prescribers. Aim To provide an overview of the patient, product and prescriber factors, which have been associated with the immunogenicity of anti-TNF therapy, and draw conclusions for clinical practice. Methods Review of representative observational studies and clinical trials from the IBD and other literature, which report associations with ADA development, with a focus on infliximab and adalimumab. Results ADAs develop in 10-20% of patients receiving anti-TNF maintenance therapy, and these patients are three times more likely to lose response than ADA-negative patients. Patient genotype plays a role in ADA risk in a minority of patients, but age or disease type is not a major factor.
Drug mishandling, such as agitation or freeze-thaw cycles, can induce protein aggregates, which are known to be immunogenic. Prescription of maintenance therapy with concomitant immunomodulators, and achieving suitable trough drug levels, reduces the risk of ADAs in patients with IBD. Conclusions Patients and prescribers can take several steps to reduce the risk of development of anti-drug antibodies to anti-TNF antibodies. Further research is required to determine if immunogenic factors identified in other situations apply to use of anti-TNFs in IBD. © 2013 John Wiley & Sons Ltd. Nierkens S.,Radboud University Nijmegen | Nierkens S.,University Utrecht | Tel J.,Radboud University Nijmegen | Janssen E.,University of Cincinnati | Adema G.J.,Radboud University Nijmegen Trends in Immunology | Year: 2013 Antigen cross-presentation describes the process through which dendritic cells (DCs) acquire exogenous antigens for presentation on MHC class I molecules. The ability to cross-present has been thought of as a feature of specialized DC subsets. Emerging data, however, suggest that the cross-presenting ability of each DC subset is tuned by and dependent on several factors, such as DC location and activation status, and the type of antigen and inflammatory signals. Thus, we argue that the capacity for cross-presentation is not an exclusive trait of one or several distinct DC subtypes, but rather a common feature of the DC family in both mice and humans. Understanding DC subset activation and antigen-presentation pathways might yield improved tools and targets to exploit the unique cross-presenting capacity of DCs in immunotherapy. © 2013 Elsevier Ltd. Den Hartog S.A.M.,Pennsylvania State University | Spiers C.J.,University Utrecht Journal of Geophysical Research: Solid Earth | Year: 2014 A microphysical model is developed for the steady state frictional behavior of illite-quartz fault gouge and applied to subduction megathrust P-T conditions.
The model assumes a foliated, phyllosilicate-supported microstructure which shears by rate-independent frictional slip on the aligned phyllosilicates plus thermally activated deformation of the intervening quartz clasts. At low slip rates or high temperatures, the deformation of the clasts is easy, accommodating slip on the foliation without dilatation. With increasing velocity or decreasing temperature, the shear of the clasts becomes more difficult, increasing bulk shear strength, until slip is activated on inclined portions of the phyllosilicate foliation, where it anastomoses around the clasts. Slip at these sites leads to dilation involving clast/matrix debonding, balanced, at steady state, by compaction through thermally activated clast deformation. Model predictions, taking pressure solution as the thermally activated mechanism, show three regimes of velocity-dependent frictional behavior at temperatures in the range of 200-500°C, with velocity weakening occurring at 300-400°C, in broad agreement with previous experiments on illite-quartz gouge. Effects of slip rate, normal stress, and quartz fraction predicted by the model also resemble those seen experimentally. Extrapolation of the model to earthquake nucleation slip rates successfully predicts the onset of velocity-weakening behavior at the updip seismogenic limit on subduction megathrusts. The model further implies that the onset of seismogenesis is controlled by the thermally activated initiation of fault rock compaction through pressure solution of quartz, which counteracts dilatation due to slip on the fault rock foliation. ©2014. American Geophysical Union. Meye F.J.,Institute du Fer a Moulin | Meye F.J.,French Institute of Health and Medical Research | Meye F.J.,University Pierre and Marie Curie | Adan R.A.H.,University Utrecht Trends in Pharmacological Sciences | Year: 2014 Overconsumption of high caloric food plays an important role in the etiology of obesity.
Several factors drive such hedonic feeding. High caloric food is often palatable. In addition, when an individual is sated, stress and food-related cues can serve as potent feeding triggers. A better understanding of the neurobiological underpinnings of food palatability and environmentally triggered overconsumption would aid the development of new treatment strategies. In the current review we address the pivotal role of the mesolimbic dopamine reward system in the drive towards high caloric palatable food and its relation to stress- and cue-induced feeding. We also discuss how this system may be affected by both established and potential anti-obesity drug targets. © 2013 Elsevier Ltd. All rights reserved. Matsumoto T.,University Utrecht | Yoshida K.,Kyoto University Journal of High Energy Physics | Year: 2014 We consider γ-deformations of the AdS5×S5 superstring as Yang-Baxter sigma models with classical r-matrices satisfying the classical Yang-Baxter equation (CYBE). An essential point is that the classical r-matrices are composed of Cartan generators only and then generate abelian twists. We present examples of the r-matrices that lead to real γ-deformations of the AdS5×S5 superstring. Finally we discuss a possible classification of integrable deformations and the corresponding gravity solution in terms of solutions of CYBE. This classification may be called the gravity/CYBE correspondence. © 2014 The Author(s). Clevers H.,University Utrecht Cell | Year: 2016 Recent advances in 3D culture technology allow embryonic and adult mammalian stem cells to exhibit their remarkable self-organizing properties, and the resulting organoids reflect key structural and functional properties of organs such as kidney, lung, gut, brain and retina. Organoid technology can therefore be used to model human organ development and various human pathologies 'in a dish'. Additionally, patient-derived organoids hold promise to predict drug response in a personalized fashion.
Organoids open up new avenues for regenerative medicine and, in combination with editing technology, for gene therapy. The many potential applications of this technology are only beginning to be explored. © 2016 Elsevier Inc. Plantinga E.A.,University Utrecht The British journal of nutrition | Year: 2011 Cats are strict carnivores and in the wild rely on a diet solely based on animal tissues to meet their specific and unique nutritional requirements. Although the feeding ecology of cats in the wild has been well documented in the literature, there is no information on the precise nutrient profile to which the cat's metabolism has adapted. The present study aimed to derive the dietary nutrient profile of free-living cats. Studies reporting the feeding habits of cats in the wild were reviewed and data on the nutrient composition of the consumed prey items obtained from the literature. Fifty-five studies reported feeding strategy data of cats in the wild. After specific exclusion criteria, twenty-seven studies were used to derive thirty individual dietary nutrient profiles. The results show that feral cats are obligatory carnivores, with their daily energy intake from crude protein being 52 %, from crude fat 46 % and from N-free extract only 2 %. Minerals and trace elements are consumed in relatively high concentrations compared with recommended allowances determined using empirical methods. The calculated nutrient profile may be considered the nutrient intake to which the cat's metabolic system has adapted. The present study provides insight into the nutritive, as well as possible non-nutritive aspects of a natural diet of whole prey for cats and provides novel ways to further improve feline diets to increase health and longevity. 
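The feral-cat study above reports the diet's energy distribution (52% of energy from crude protein, 46% from crude fat, 2% from N-free extract). The conversion from a macronutrient composition to such an energy distribution can be sketched as below; the Atwater factors (4, 9, and 4 kcal/g for protein, fat, and carbohydrate) and the example prey composition are illustrative assumptions, not values taken from the study itself.

```python
# Illustrative sketch: convert a crude macronutrient composition (g per
# 100 g dry matter) into each macronutrient's share of metabolizable energy.
# Atwater factors and the example composition below are assumptions.

ATWATER_KCAL_PER_G = {"protein": 4.0, "fat": 9.0, "nfe": 4.0}

def energy_distribution(grams):
    """Return each macronutrient's share of total metabolizable energy (%)."""
    kcal = {k: g * ATWATER_KCAL_PER_G[k] for k, g in grams.items()}
    total = sum(kcal.values())
    return {k: round(100.0 * v / total, 1) for k, v in kcal.items()}

# Hypothetical whole-prey composition (g/100 g dry matter):
print(energy_distribution({"protein": 62.0, "fat": 24.0, "nfe": 2.0}))
```

With this hypothetical composition the shares come out close to the proportions quoted in the abstract, which is why it was chosen for the example.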
VanderWeele T.J.,Harvard University | Knol M.J.,University Utrecht Annals of Internal Medicine | Year: 2011 In randomized trials with subgroup analyses, the primary treatment or intervention of interest is randomized, but the secondary factors defining subgroups are not. This article clarifies when confounding is an issue in subgroup analyses. If investigators are interested simply in targeting subpopulations for intervention, control for confounding is not needed. If investigators are interested in intervening on the secondary factors that define the subgroups to increase the treatment effect or in attributing the subgroup differences to the secondary factors themselves, then confounding is relevant and must be controlled for. The authors demonstrate this point by using examples from published randomized trials. © 2011 American College of Physicians. Lovelock C.E.,University of Oxford | Rinkel G.J.E.,University Utrecht | Rothwell P.M.,University of Oxford Neurology | Year: 2010 BACKGROUND: Treatment of aneurysmal subarachnoid hemorrhage (SAH) has changed substantially over the last 25 years but there is a lack of reliable population-based data on whether case-fatality or functional outcomes have improved. METHODS: We determined changes in the standardized incidence and outcome of SAH in the same population between 1981 and 1986 (Oxford Community Stroke Project) and 2002 and 2008 (Oxford Vascular Study). In a meta-analysis with other population-based studies, we used linear regression to determine time trends in outcome. RESULTS: There were no reductions in incidence of SAH (RR = 0.79, 95% confidence interval [CI] 0.48-1.29, p = 0.34) or in 30-day case-fatality (RR = 0.67, 95% CI 0.39-1.13, p = 0.14) in the Oxford Vascular Study vs Oxford Community Stroke Project, but there was a decrease in overall mortality (RR = 0.47, 0.23-0.97, p = 0.04).
Following adjustment for age and baseline SAH severity, patients surviving to hospital had reduced risk of death or dependency (modified Rankin score > 3) at 12 months in the Oxford Vascular Study (RR = 0.51, 0.29-0.88, p = 0.01). Among 32 studies covering 39 study periods from 1980 to 2005, 7 studied time trends within single populations. Unadjusted case-fatality fell by 0.9% per annum (0.3-1.5, p = 0.007) in a meta-analysis of data from all studies, and by 0.9% per annum (0.2-1.6%, p = 0.01) within the 7 population studies. CONCLUSION: Mortality due to subarachnoid hemorrhage fell by about 50% in our study population over the last 2 decades, due mainly to improved outcomes in cases surviving to reach hospital. This improvement is consistent with a significant decrease in case-fatality over the last 25 years in our pooled analysis of other similar population-based studies. Copyright © 2010 by AAN Enterprises, Inc. Horst D.,University Utrecht Viruses | Year: 2012 The immune system plays a major role in protecting the host against viral infection. Rapid initial protection is conveyed by innate immune cells, while adaptive immunity (including T lymphocytes) requires several days to develop, yet provides high specificity and long-lasting memory. Invariant natural killer T (iNKT) cells are an unusual subset of T lymphocytes, expressing a semi-invariant T cell receptor together with markers of the innate NK cell lineage. Activated iNKT cells can exert direct cytolysis and can rapidly release a variety of immune-polarizing cytokines, thereby regulating the ensuing adaptive immune response. iNKT cells recognize lipids in the context of the antigen-presenting molecule CD1d. Intriguingly, CD1d-restricted iNKT cells appear to play a critical role in anti-viral defense: increased susceptibility to disseminated viral infections is observed both in patients with iNKT cell deficiency as well as in CD1d- and iNKT cell-deficient mice. 
Moreover, viruses have recently been found to use sophisticated strategies to withstand iNKT cell-mediated elimination. This review focuses on CD1d-restricted lipid presentation and the strategies viruses deploy to subvert this pathway. Ben-Elia E.,University Utrecht | Shiftan Y.,Technion - Israel Institute of Technology Transportation Research Part A: Policy and Practice | Year: 2010 This paper presents a learning-based model of route-choice behavior when information is provided in real time. In a laboratory controlled experiment, participants made a long series of binary route-choice trials relying on real-time information and learning from their personal experience reinforced through feedback. A discrete choice model with a Mixed Logit specification, accounting for panel effects, was estimated based on the experiment's data. It was found that information and experience have a combined effect on drivers' route-choice behavior. Informed participants had faster learning rates and tended to base their decisions on memory of previous outcomes whereas non-informed participants were slower in learning, required more exploration and tended to rely mostly on recent outcomes. Informed participants were more prone to risk-seeking and had greater sensitivity to travel time variability. In comparison, non-informed participants appeared to be more risk-averse and less sensitive to variability. These results have important policy implications for the design and implementation of ATIS initiatives. The advantage of incorporating insights from Prospect Theory and reinforced learning to improve the realism of travel behavior models is also discussed. © 2010 Elsevier Ltd. All rights reserved. Chen G.-Q.,Tsinghua University | Patel M.K.,University Utrecht Chemical Reviews | Year: 2012 A technical and environmental review addresses how plastics have been derived from biological sources in the past and how this can be done in the future.
Bio-based sustainable plastics need to be developed to avoid the problems caused by petrochemical plastics. Materials derived from biological sources, including starch, cellulose, fatty acids, sugars, and proteins, are consumed by microorganisms that convert these raw materials into various monomers suitable for polymer production, including hydroxyalkanoic acids, D- and L-lactic acid, succinic acid, bio-1,4-butanediol, (R)-3-hydroxypropionic acid, bio-ethylene, and 1,3-propanediol. These monomers are used to produce various bio-based plastics including polyhydroxyalkanoates (PHA), polylactic acid (PLA), and poly(butylene succinate) (PBS). Roussel-Jazede V.,University Utrecht Molecular membrane biology | Year: 2011 Autotransporters produced by Gram-negative bacteria consist of an N-terminal signal sequence, a C-terminal translocator domain (TD), and a passenger domain in between. The TD facilitates the secretion of the passenger across the outer membrane. It generally consists of a channel-forming β-barrel that can be plugged by an α-helix that is formed by a polypeptide fragment immediately N-terminal to the barrel domain in the sequence. In this work, we characterized the TD of the hemoglobin protease Hbp of Escherichia coli by comparing its properties with the TDs of NalP of Neisseria meningitidis and IgA protease of Neisseria gonorrhoeae. All TDs were produced in inclusion bodies and folded in vitro. In the case of the TD of Hbp, this procedure resulted in autocatalytic intramolecular processing, which mimicked the in vivo processing. Liposome-swelling assays and planar lipid bilayer experiments revealed that the pore of the Hbp TD was largely obstructed. In contrast, an Hbp TD variant that lacked only one amino-acid residue from the N terminus showed the opening and closing of a channel comparable to what was reported for the TD of NalP.
Additionally, the naturally processed helix contributed to the stability of the TD, as shown by chemical denaturation monitored by tryptophan fluorescence. Overall these results show that Hbp is processed by an autocatalytic intramolecular mechanism resulting in the stable docking of the α-helix in the barrel. In addition, we could show that the α-helix contributes to the stability of TDs. Hogeweg L.,University Utrecht Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention | Year: 2010 Automatic detection of tuberculosis (TB) on chest radiographs is a difficult problem because of the diverse presentation of the disease. A combination of detection systems for abnormalities and normal anatomy is used to improve detection performance. A textural abnormality detection system operating at the pixel level is combined with a clavicle detection system to suppress false positive responses. The output of a shape abnormality detection system operating at the image level is combined in a next step to further improve performance by reducing false negatives. Strategies for combining systems based on serial and parallel configurations were evaluated using the minimum, maximum, product, and mean probability combination rules. The performance of TB detection increased, as measured using the area under the ROC curve, from 0.67 for the textural abnormality detection system alone to 0.86 when the three systems were combined. The best result was achieved using the sum and product rule in a parallel combination of outputs. Snippert H.J.,University Utrecht Nature protocols | Year: 2011 In recent years, many mouse models have been developed to mark and trace the fate of adult cell populations using fluorescent proteins. High-resolution visualization of such fluorescent markers in their physiological setting is thus an important aspect of adult stem cell research. 
Here we describe a protocol to produce sections (150-200 μm) of near-native tissue with optimal tissue and cellular morphology by avoiding artifacts inherent in standard freezing or embedding procedures. The activity of genetically expressed fluorescent proteins is maintained, thereby enabling high-resolution three-dimensional (3D) reconstructions of fluorescent structures in virtually all types of tissues. The procedure allows immunofluorescence labeling of proteins to depths up to 50 μm, as well as a chemical 'Click-iT' reaction to detect DNA-intercalating analogs such as ethynyl deoxyuridine (EdU). Generation of near-native sections ready for imaging analysis takes approximately 2-3 h. Postsectioning processes, such as antibody labeling or EdU detection, take up to 10 h. Persengiev S.P.,University Utrecht Discovery medicine | Year: 2012 The study of microRNA (miRNA) regulation in the pathogenesis of autoimmune diseases and hematopoietic malignancies provides new understanding of the mechanisms of disease and is currently the focus of many researchers in the field. Autoimmune disorders and cancers of the immune system comprise a wide range of genetically complex diseases that share certain aspects of dysregulated genetic networks, most notably deactivation of apoptosis. miRNA mechanisms control gene expression at the post-transcriptional level, linking mRNA processing and gene function. A considerable amount of data has accumulated indicating that the alteration of miRNA expression closely mirrors the development of immune system diseases and is likely to play a role in their pathogenesis. However, a knowledge gap remains in our understanding of how miRNA dysregulation and the specific effects of miRNAs on target gene expression underlie the disease phenotype.
Here we review a number of studies describing miRNA alterations in autoimmune diseases and hematopoietic cancers and discuss potential miRNA-regulated mechanisms that differentially influence the development of autoimmunity as compared to cancer progression. Vellinga T.T.,University Utrecht Oncogene | Year: 2016 Gene expression-based classification systems have identified an aggressive colon cancer subtype with mesenchymal features, possibly reflecting epithelial-to-mesenchymal transition (EMT) of tumor cells. However, stromal fibroblasts contribute extensively to the mesenchymal phenotype of aggressive colon tumors, challenging the notion of tumor EMT. To separately study the neoplastic and stromal compartments of colon tumors, we have generated a stroma gene filter (SGF). Comparative analysis of stroma-high and stroma-low tumors shows that the neoplastic cells in stroma-high tumors express specific EMT drivers (ZEB2, TWIST1, TWIST2) and that 98% of differentially expressed genes are strongly correlated with them. Analysis of differential gene expression between mesenchymal and epithelial cancer cell lines revealed that hepatocyte nuclear factor 4α (HNF4α), a transcriptional activator of intestinal (epithelial) differentiation, and its target genes are highly expressed in epithelial cancer cell lines. However, mesenchymal-type cancer cell lines expressed only part of the mesenchymal genes expressed by tumor-derived neoplastic cells, suggesting that external cues were lacking. We found that collagen-I dominates the extracellular matrix in aggressive colon cancer. Mimicking the tumor microenvironment by replacing laminin-rich Matrigel with collagen-I was sufficient to induce tumor-specific mesenchymal gene expression, suppression of HNF4α and its target genes, and collective tumor cell invasion of patient-derived colon tumor organoids. The data connect collagen-rich stroma to mesenchymal gene expression in neoplastic cells and to collective tumor cell invasion.
Targeting the tumor-collagen interface may therefore be explored as a novel strategy in the treatment of aggressive colon cancer. Oncogene advance online publication, 21 March 2016; doi:10.1038/onc.2016.60. © 2016 Macmillan Publishers Limited Stork M.,University Utrecht PLoS pathogens | Year: 2010 Since the concentration of free iron in the human host is low, efficient iron-acquisition mechanisms constitute important virulence factors for pathogenic bacteria. In Gram-negative bacteria, TonB-dependent outer membrane receptors are implicated in iron acquisition. It is far less clear how other metals that are also scarce in the human host are transported across the bacterial outer membrane. With the aim of identifying novel vaccine candidates, we characterized in this study a hitherto unknown receptor in Neisseria meningitidis. We demonstrate that this receptor, designated ZnuD, is produced under zinc limitation and that it is involved in the uptake of zinc. Upon immunization of mice, it was capable of inducing bactericidal antibodies and we could detect ZnuD-specific antibodies in human convalescent patient sera. ZnuD is highly conserved among N. meningitidis isolates and homologues of the protein are found in many other Gram-negative pathogens, particularly in those residing in the respiratory tract. We conclude that ZnuD constitutes a promising candidate for the development of a vaccine against meningococcal disease for which no effective universal vaccine is available. Furthermore, the results suggest that receptor-mediated zinc uptake represents a novel virulence mechanism that is particularly important for bacterial survival in the respiratory tract. Visser G.H.A.,University Utrecht Journal of Obstetrics and Gynaecology Canada | Year: 2012 This review assesses the rise and fall of the unique Dutch system of obstetric care. Why did home deliveries continue in the Netherlands when they almost completely disappeared in the rest of the Western world?
Why is the Dutch system currently under so much pressure? Did the participants continue for too long with too conservative an approach? Which of the good things of the past have been lost? © 2012 Society of Obstetricians and Gynaecologists of Canada. Zufferey S.,University Utrecht Journal of Pragmatics | Year: 2014 I argue that the communication of given information is part of the procedural instructions conveyed by some connectives like the French puisque. I submit in addition that the encoding of givenness has cognitive implications that are visible during online processing. I assess this hypothesis empirically by comparing the way the clauses introduced by two French causal connectives, puisque and parce que, are processed during online reading when the following segment is 'given' or 'new'. I complement these results with an acceptability judgement task using the same sentences. These experiments confirm that introducing a clause conveying given information is a core feature characterizing puisque, as the segment following it is read faster when it contains given rather than new information, and puisque is rated as more acceptable than parce que in such contexts. I discuss the implications of these results for future research on the description of the meaning of connectives. © 2013 Elsevier B.V. Schrickx J.A.,University Utrecht Veterinary Journal | Year: 2014 Inhibition of the drug transporter P-glycoprotein (P-gp) by the oral flea preventative spinosad has been suggested as the underlying cause of the drug-drug interaction with ivermectin. In this study, an in vitro model consisting of canine cells was validated to describe the inhibitory effect of drugs on canine P-gp. In this model, ivermectin, cyclosporin, verapamil, loperamide and ketoconazole inhibited P-gp function with IC50 values ranging from 0.1 to 3.7 μmol/L. Spinosad was a potent inhibitor of canine P-gp with an IC50 value of 0.27 μmol/L or 0.2 μg/mL.
The risk of spinosad causing P-gp related drug-drug interactions in the dog could be predicted by the IC50 value, the oral dosage and plasma concentrations. © 2014 Elsevier Ltd. Koster E.S.,University Utrecht Journal of managed care pharmacy : JMCP | Year: 2014 The number of patients using methotrexate (MTX) has increased during the last decade. Because of the narrow therapeutic range and potential risks of incorrect use, vigilance is required when dispensing MTX. In 2009, the Royal Dutch Pharmacists Society, in accordance with the Dutch Health Care Inspectorate, published safe MTX dispensing recommendations for community pharmacies. To examine adherence to recommendations aimed at safe MTX dispensing. This study was conducted within a convenience sample of 78 community pharmacies belonging to the Utrecht Pharmacy Practice Network for Education and Research (UPPER). Data were collected in May 2011. 95 pharmacists and 337 pharmacy technicians were interviewed to assess self-reported adherence with dispensing recommendations. In addition, medication records for patients using MTX were extracted in 52 pharmacies in order to objectively assess adoption of recommendations. More than 75% of the pharmacists and pharmacy technicians reported to be adherent to 6 of the 11 recommendations. There are variations in reported adherence between team members working in 1 pharmacy; higher adherence rates (>75%) for the pharmacy team as a whole were only shown for 2 recommendations (recording of day of intake on the label and moment of authorization by the pharmacist). The medication records showed that adherence with working procedures significantly increased: the number of dispensed records with notification of the day of intake on the medication label increased from 9.9% of the records per pharmacy in 2008 to 77.1% in 2010 (P < 0.001). Dutch community pharmacies seem to be adherent to most safe dispensing recommendations.
However, inconsistencies exist between team members, which underscores the importance of addressing this issue and discussing the recommendations within the team, as there is still room for improvement to ensure safe dispensing. Siersema P.D.,University Utrecht Endoscopy | Year: 2015 Publication of scientific manuscripts remains our core method of sharing knowledge and advancing scientific inquiry. Pressures to publish for reasons other than pure discovery have the potential to corrupt this process. The core principles of scientific ethics outlined above provide guidance on how to maintain the integrity of our scientific process. We, as journal editors, are committed to the advancement of scientific knowledge and the ethical process of publication. We do the best we can to make sure that the articles we publish fulfill all the criteria of a well-conducted study. © Georg Thieme Verlag KG Stuttgart New York. Van der Stigchel S.,University Utrecht Vision Research | Year: 2010 In recent years, the number of studies that have used deviations of saccade trajectories as a measure has rapidly increased. This review discusses these recent studies and summarizes advances in this field. A division can be made into studies that have used saccade deviations to measure the amount of attention allocated in space and studies that have measured the strength of the activity of a distractor. Saccade deviations have also been used to measure target selection in special populations. Most importantly, recent studies have revealed novel knowledge concerning the spatial tuning and temporal dynamics of target selection in the oculomotor system. Deviations in saccade trajectories have been shown to constitute a valuable measure of various processes that control and influence our behavior and can be applied to multiple domains. © 2010 Elsevier Ltd.
Clevers H.,University Utrecht Nature Medicine | Year: 2011 Over the last decade, the notion that tumors are maintained by their own stem cells, the so-called cancer stem cells, has created great excitement in the research community. This review attempts to summarize the underlying concepts of this notion, to distinguish hard facts from beliefs and to define the future challenges of the field. © 2011 Nature America, Inc. All rights reserved. Van Smeden M.,University Utrecht American Journal of Epidemiology | Year: 2014 Latent class models (LCMs) combine the results of multiple diagnostic tests through a statistical model to obtain estimates of disease prevalence and diagnostic test accuracy in situations where there is no single, accurate reference standard. We performed a systematic review of the methodology and reporting of LCMs in diagnostic accuracy studies. This review shows that the use of LCMs in such studies increased sharply in the past decade, notably in the domain of infectious diseases (overall contribution: 59%). The 64 reviewed studies used a range of differently specified parametric latent variable models, applying Bayesian and frequentist methods. The critical assumption underlying the majority of LCM applications (61%) is that test observations are conditionally independent within the two latent classes. Because violations of this assumption can lead to biased estimates of accuracy and prevalence, performing and reporting checks of whether assumptions are met is essential. Unfortunately, our review shows that 28% of the included studies failed to report any information that enables verification of model assumptions or performance. Because of the lack of information on model fit and adequate evidence "external" to the LCMs, it is often difficult for readers to judge the validity of LCM-based inferences and conclusions reached. © The Author 2013.
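To make the conditional-independence assumption discussed above concrete, the following sketch fits a minimal two-class LCM by expectation-maximization. All prevalence, sensitivity and specificity values are hypothetical, and the synthetic data stand in for real multi-test results; the point is only that, under the assumption, prevalence and test accuracy can be recovered without a reference standard.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 3 conditionally independent binary tests (hypothetical values).
n, prev = 5000, 0.30
sens = np.array([0.90, 0.85, 0.80])   # P(test + | diseased)
spec = np.array([0.95, 0.90, 0.85])   # P(test - | healthy)
disease = rng.random(n) < prev
p_pos = np.where(disease[:, None], sens, 1 - spec)
tests = (rng.random((n, 3)) < p_pos).astype(int)

def fit_lcm(x, iters=500):
    """EM for a two-class LCM under the conditional-independence assumption."""
    n, k = x.shape
    pi, a, b = 0.5, np.full(k, 0.7), np.full(k, 0.7)  # starting values
    for _ in range(iters):
        # E-step: posterior probability of the "diseased" class per subject
        l1 = pi * np.prod(a**x * (1 - a)**(1 - x), axis=1)
        l0 = (1 - pi) * np.prod((1 - b)**x * b**(1 - x), axis=1)
        w = l1 / (l1 + l0)
        # M-step: posterior-weighted re-estimates
        pi = w.mean()                                            # prevalence
        a = (w[:, None] * x).sum(0) / w.sum()                    # sensitivities
        b = ((1 - w)[:, None] * (1 - x)).sum(0) / (1 - w).sum()  # specificities
    return pi, a, b

pi_hat, sens_hat, spec_hat = fit_lcm(tests)
print(round(pi_hat, 2), np.round(sens_hat, 2), np.round(spec_hat, 2))
```

If the tests' errors were correlated within a class (violating the assumption), these estimates would be biased, which is why the review stresses reporting checks of model fit.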
Akhmanova A.,University Utrecht | Dogterom M.,FOM Institute for Atomic and Molecular Physics Cell | Year: 2011 The ability of growing microtubules to undergo catastrophes (abrupt switches from growth to shortening) is one of the key aspects of microtubule dynamics important for shaping cellular microtubule arrays. Gardner et al. show that catastrophes occur at a microtubule age-dependent rate and that depolymerizing kinesins can affect this process in fundamentally different ways. © 2011 Elsevier Inc. van Nuland R.,University Utrecht PloS one | Year: 2013 The process of eukaryotic transcription initiation involves the assembly of basal transcription factor complexes on the gene promoter. The recruitment of TFIID is an early and important step in this process. Gene promoters contain distinct DNA sequence elements and are marked by the presence of post-translationally modified nucleosomes. The contributions of these individual features to TFIID recruitment remain to be elucidated. Here, we use immobilized reconstituted promoter nucleosomes, conventional biochemistry and quantitative mass spectrometry to investigate the influence of distinct histone modifications and functional DNA-elements on the binding of TFIID. Our data reveal synergistic effects of H3K4me3, H3K14ac and a TATA box sequence on TFIID binding in vitro. Stoichiometry analyses of affinity purified human TFIID identified the presence of a stable dimeric core. Several peripheral TAFs, including those interacting with distinct promoter features, are substoichiometric yet present in substantial amounts. Finally, we find that the TAF3 subunit of TFIID binds to poised promoters in an H3K4me3-dependent manner. Moreover, the PHD-finger of TAF3 is important for rapid induction of target genes. Thus, fine-tuning of TFIID engagement on promoters is driven by synergistic contacts with both DNA-elements and histone modifications, eventually resulting in a high affinity interaction and activation of transcription.
Grieve A.G.,University Utrecht Cold Spring Harbor perspectives in biology | Year: 2011 Classical secretion consists of the delivery of transmembrane and soluble proteins to the plasma membrane and the extracellular medium, respectively, and is mediated by the organelles of the secretory pathway, the Endoplasmic Reticulum (ER), the ER exit sites, and the Golgi, as described by the Nobel Prize winner George Palade (Palade 1975). At the center of this transport route, the Golgi stack has a major role in modifying, processing, sorting, and dispatching newly synthesized proteins to their final destinations. More recently, however, it has become clear that an increasing number of transmembrane proteins reach the plasma membrane unconventionally, either by exiting the ER in non-COPII vesicles or by bypassing the Golgi. Here, we discuss the evidence for Golgi bypass and the possible physiological benefits of it. Intriguingly, at least during Drosophila development, Golgi bypass seems to be mediated by a Golgi protein, dGRASP, which is found ectopically localized to the plasma membrane. Kawaguchi I.,Kyoto University | Matsumoto T.,University Utrecht | Yoshida K.,Kyoto University Journal of High Energy Physics | Year: 2014 We consider Jordanian deformations of the AdS5×S5 superstring action. These deformations correspond to non-standard q-deformations. In particular, it is possible to perform a partial deformation, for example, of the AdS5 part only, or of the S5 part only. Then the classical action and the Lax pair are constructed with a linear, twisted and extended R operator. It is shown that the action preserves the symmetry. © The Authors. Lozano R.,University Utrecht Corporate Social Responsibility and Environmental Management | Year: 2015 Since company boards are increasingly discussing 'sustainability', it becomes necessary to examine the nature of sustainability drivers.
Most approaches to corporate sustainability drivers have focused either on internal or external drivers. This paper is aimed at providing a more holistic perspective on the different corporate sustainability drivers in order to better catalyse change from the unsustainable status quo to a more sustainable-oriented state. Empirical data was collected from experts and company leaders. The findings show that, internally, leadership and the business case are the most important drivers, whilst the most important external drivers are reputation, customer demands and expectations, and regulation and legislation. The paper proposes a corporate sustainability driver model, which considers both internal and external drivers, and complements these with drivers that connect them. This offers a holistic perspective on how companies can be more proactive in their journey to becoming more sustainability orientated. © 2013 John Wiley & Sons, Ltd and ERP Environment. Avalos A.M.,Whitehead Institute For Biomedical Research | Meyer-Wentrup F.,University Utrecht | Ploegh H.L.,Whitehead Institute For Biomedical Research Advances in Immunology | Year: 2014 The B-cell receptor (BCR) for antigen is a key sensor required for B-cell development, survival, and activation. Rigorous selection checkpoints ensure that the mature B-cell compartment in the periphery is largely purged of self-reactive B cells. However, autoreactive B cells escape selection and persist in the periphery as anergic or clonally ignorant B cells. Under the influence of genetic or environmental factors, which are not completely understood, autoreactive B cells may be activated. Similar activation can also occur at different stages of B-cell maturation in the bone marrow or in peripheral lymphoid organs and give rise to malignant B cells. 
The pathology that typifies neoplastic lymphocytes and autoreactive B cells differs: malignant B cells proliferate and occupy niches otherwise taken up by healthy leukocytes or erythrocytes, while autoreactive B cells produce pathogenic antibodies or present self-antigen to T cells. However, both malignant and autoreactive B cells share the commonality of deregulated BCR pathways as principal contributors to pathogenicity. We first summarize current views of BCR activation. We then explore how anomalous BCR pathways correlate with malignancies and autoimmunity. We also elaborate on the activation of TLR pathways in abnormal B cells and how they contribute to maintenance of pathology. Finally, we outline the benefits and emergence of mouse models generated by somatic cell nuclear transfer to study B-cell function in manners for which current transgenic models may be less well suited. © 2014 Elsevier Inc. Spek A.L.,University Utrecht Acta Crystallographica Section C: Structural Chemistry | Year: 2015 The completion of a crystal structure determination is often hampered by the presence of embedded solvent molecules or ions that are seriously disordered. Their contribution to the calculated structure factors in the least-squares refinement of a crystal structure has to be included in some way. Traditionally, an atomistic solvent disorder model is attempted. Such an approach is generally to be preferred, but it does not always lead to a satisfactory result and may even be impossible in cases where channels in the structure are filled with continuous electron density. This paper documents the SQUEEZE method as an alternative means of addressing the solvent disorder issue. It conveniently interfaces with the 2014 version of the least-squares refinement program SHELXL [Sheldrick (2015). Acta Cryst. C71. In the press] and other refinement programs that accept externally provided fixed contributions to the calculated structure factors. 
The PLATON SQUEEZE tool calculates the solvent contribution to the structure factors by back-Fourier transformation of the electron density found in the solvent-accessible region of a phase-optimized difference electron-density map. The actual least-squares structure refinement is delegated to, for example, SHELXL. The current versions of PLATON SQUEEZE and SHELXL now address several of the unnecessary complications with the earlier implementation of the SQUEEZE procedure that were a necessity because least-squares refinement with the now superseded SHELXL97 program did not allow for the input of fixed externally provided contributions to the structure-factor calculation. It is no longer necessary to subtract the solvent contribution temporarily from the observed intensities to be able to use SHELXL for the least-squares refinement, since that program now accepts the solvent contribution from an external file (.fab file) if the ABIN instruction is used. In addition, many twinned structures containing disordered solvents are now also treatable by SQUEEZE. The details of a SQUEEZE calculation are now automatically included in the CIF archive file, along with the unmerged reflection data. The current implementation of the SQUEEZE procedure is described, and discussed and illustrated with three examples. Two of them are based on the reflection data of published structures and one on synthetic reflection data generated for a published structure. © 2015 International Union of Crystallography. van Kampen H.S.,University Utrecht Behavioural Processes | Year: 2015 To be able to reproduce, animals need to survive and interact with an ever changing environment. Therefore, they create a cognitive representation of that environment, from which they derive expectancies regarding current and future events. These expected events are compared continuously with information gathered through exploration, to guide behaviour and update the existing representation. 
When a moderate discrepancy between perceived and expected events is detected, exploration is employed to update the internal representation so as to alter the expectancy and make it match the perceived event. When the discrepancy is relatively large, exploration is inhibited, and animals will try to alter the perceived event utilizing aggression or fear. The largest discrepancies are associated with a tendency to flee. When an exploratory, fear, or aggressive behaviour pattern proves to be the optimal solution for a particular discrepancy, the response will become conditioned to events that previously preceded the occurrence of that discrepancy. When primary needs are relatively low, animals will actively look for or create moderately violated expectancies in order to learn about objects, behaviour patterns, and the environment. In those situations, exploratory tendencies will summate with ongoing behaviour and, when all primary needs are satiated, may even be performed exclusively. This results in behavioural variability, play, and active information-seeking. This article is part of a Special Issue entitled: In Honor of Jerry Hogan. © 2014 Elsevier B.V. Diederen K.M.,University Utrecht Psychological medicine | Year: 2013 Although auditory verbal hallucinations (AVH) are a core symptom of schizophrenia, they also occur in non-psychotic individuals, in the absence of other psychotic, affective, cognitive and negative symptoms. AVH have been hypothesized to result from deviant integration of inferior frontal, parahippocampal and superior temporal brain areas. However, a direct link between dysfunctional connectivity and AVH has not yet been established. To determine whether hallucinations are indeed related to aberrant connectivity, AVH should be studied in isolation, for example in non-psychotic individuals with AVH.
Resting-state connectivity was investigated in 25 non-psychotic subjects with AVH and 25 matched control subjects using seed regression analysis with the (1) left and (2) right inferior frontal, (3) left and (4) right superior temporal and (5) left parahippocampal areas as the seed regions. To correct for cardiorespiratory (CR) pulsatility rhythms in the functional magnetic resonance imaging (fMRI) data, heartbeat and respiration were monitored during scanning and the fMRI data were corrected for these rhythms using the image-based method for retrospective correction of physiological motion effects RETROICOR. In comparison with the control group, non-psychotic individuals with AVH showed increased connectivity between the left and the right superior temporal regions and also between the left parahippocampal region and the left inferior frontal gyrus. Moreover, this group did not show a negative correlation between the left superior temporal region and the right inferior frontal region, as was observed in the healthy control group. Aberrant connectivity of frontal, parahippocampal and superior temporal brain areas can be specifically related to the predisposition to hallucinate in the auditory domain. Hamdine O.,University Utrecht Human reproduction (Oxford, England) | Year: 2013 What is the impact of initiating GnRH antagonist co-treatment for in vitro fertilization (IVF) on cycle day (CD) 2 compared with CD 6 on live birth rate (LBR) per started cycle and on the cumulative live birth rate (CLBR)? Early initiation of GnRH antagonist does not appear to improve clinical outcomes of IVF compared with midfollicular initiation. During ovarian stimulation for IVF, GnRH antagonist co-treatment is usually administered from the midfollicular phase onwards. Earlier initiation may improve the follicular phase hormonal milieu and therefore overall clinical outcomes. This open-label, multicentre randomized controlled trial was conducted between September 2009 and July 2011. 
A web-based program was used for randomization and 617 IVF-intracytoplasmic sperm injection (ICSI) patients were included. Recombinant FSH (150-225 IU) was administered daily from CD 2 onwards in both groups. The study group (CD2; n = 308) started GnRH antagonist co-treatment on CD 2, whereas the control group (CD6; n = 309) started on CD 6. There were no significant differences in clinical outcomes between the two groups. A non-significant trend towards a higher LBR per started cycle and CLBR was observed in the CD6 group compared with the CD2 group (LBR: 24.0 versus 21.5%, P = 0.5; CLBR: 29.9 versus 26.7%, P = 0.6). The study was terminated prematurely because no significant difference was observed in clinical outcomes after 617 inclusions. A much larger study population would be needed to detect a small significant difference in favour of either study arm, which raises the question of whether this would be relevant for clinical practice. The present study shows that the additional treatment burden and costs of starting GnRH antagonist on CD 2 instead of on CD 6 are not justified, as early initiation of GnRH antagonist does not improve LBRs. This study was partially supported by a grant from Merck Serono. O.H., M.J.C.E, A.V., P.A.D., R.E.B., G.J.E.O., C.A.G.H., G.C.D.M., H.J.V., P.F.M.H. and A.B. have nothing to declare. F.J.B. has received fees and grant support from the following companies (in alphabetic order): Ferring, Gedeon Richter, Merck Serono, MSD and Roche. B.J.C. has received fees and grant support from the following companies (in alphabetic order): Ferring, Merck Serono and MSD. C.B.L has received fees and grant support from the following companies (in alphabetic order): Auxogen, Ferring, Merck Serono and MSD. B.C.J.M.F. has received fees and grant support from the following companies (in alphabetic order): Andromed, Ardana, Ferring, Genovum, Merck Serono, MSD, Organon, Pantharei Bioscience, PregLem, Schering, Schering Plough, Serono and Wyeth. 
J.S.E.L. has received fees and grant support from the following companies (in alphabetic order): Ferring, Gennovum, MSD, Merck Serono, Organon, Schering Plough and Serono. N.S.M. has received fees and grant support from the following companies (in alphabetic order): Anecova, Ferring, Merck Serono, MSD, Organon and Serono. www.clinicaltrials.gov, no. NCT00866034. Kenemans J.L.,University Utrecht Neuroscience and Biobehavioral Reviews | Year: 2015 Inhibition concerns the capacity to suppress on-going response tendencies. Patient data and results from neuro-imaging and magnetic-stimulation studies point to a proactive mechanism involving top-down control signals that potentiate inhibitory sensory-motor connections, depending on whether possibly necessary inhibition is anticipated or not. The proactive mechanism is manifest in stronger sensory-cortex responses to stop signals yielding successful inhibition, observed as a modulation of short-latency human evoked potentials (N1) which may overlap with generic mechanisms for infrequent-event detection. A second, reactive, mechanism would be much more independent of the specific inhibition context, and generalize to situations in which behavioral interrupt is not dictated by task demands but invoked by the salience of task-irrelevant but potentially distracting events. The reactive mechanism is visible in a longer-latency human event-related potential termed frontal P3 (fP3) which is elicited by (successful) stop stimuli and most likely originates from dorsal-medial prefrontal cortex (preSMA), and is dissociated from the proactive mechanism pharmacologically and by individual differences. Implications may arise for more personalized treatments of disorders such as ADHD. © 2015 Elsevier Ltd. 
Beckers G.J.L.,University Utrecht | Rattenborg N.C.,Max Planck Institute for Ornithology (Seewiesen) Neuroscience and Biobehavioral Reviews | Year: 2015 Brain rhythms occurring during sleep are implicated in processing information acquired during wakefulness, but this phenomenon has almost exclusively been studied in mammals. In this review we discuss the potential value of utilizing birds to elucidate the functions and underlying mechanisms of such brain rhythms. Birds are of particular interest from a comparative perspective because even though neurons in the avian brain homologous to mammalian neocortical neurons are arranged in a nuclear, rather than a laminar manner, the avian brain generates mammalian-like sleep-states and associated brain rhythms. Nonetheless, until recently, this nuclear organization also posed technical challenges, as the standard surface EEG recording methods used to study the neocortex provide only a superficial view of the sleeping avian brain. The recent development of high-density multielectrode recording methods now provides access to sleep-related brain activity occurring deep in the avian brain. Finally, we discuss how intracerebral electrical imaging based on this technique can be used to elucidate the systems-level processing of hippocampal-dependent and imprinting memories in birds. © 2014 Elsevier Ltd. Olivier B.,University Utrecht | Olivier B.,Yale University European Journal of Pharmacology | Year: 2015 The neurotransmitter serotonin is an evolutionary ancient molecule that has remarkable modulatory effects in almost all central nervous system integrative functions, such as mood, anxiety, stress, aggression, feeding, cognition and sexual behavior. After giving a short outline of the serotonergic system (anatomy, receptors, transporter) the authors contributions over the last 40 years in the role of serotonin in depression, aggression, anxiety, stress and sexual behavior is outlined. 
Each area delineates the work performed on animal model development, drug discovery and development. Most of the research work described has started from an industrial perspective, aimed at developing animals models for psychiatric diseases and leading to putative new innovative psychotropic drugs, like in the cases of the SSRI fluvoxamine, the serenic eltoprazine and the anxiolytic flesinoxan. Later this research work mainly focused on developing translational animal models for psychiatric diseases and implicating them in the search for mechanisms involved in normal and diseased brains and finding new concepts for appropriate drugs. © 2014 Elsevier B.V. All rights reserved. Dantzer R.,University of Houston | Heijnen C.J.,University of Houston | Heijnen C.J.,University Utrecht | Kavelaars A.,University of Houston | And 2 more authors. Trends in Neurosciences | Year: 2014 The exact nature and pathophysiology of fatigue remain largely elusive despite its high prevalence in physically ill patients. Studies on the relationship between the immune system and the central nervous system provide a new perspective on the mechanisms of fatigue. Inflammatory mediators that are released by activated innate immune cells at the periphery and in the central nervous system alter the metabolism and activity of neurotransmitters, generate neurotoxic compounds, decrease neurotrophic factors, and profoundly disturb the neuronal environment. The resulting alterations in fronto-striatal networks together with the activation of insula by inflammatory interoceptive stimuli underlie the many dimensions of fatigue including reduced incentive motivation, decreased behavioral flexibility, uncertainty about usefulness of actions, and awareness of fatigue. © 2013. Pouw M.E.,University Utrecht BMJ (Clinical research ed.) 
| Year: 2013 To assess the consequences of applying different mortality timeframes on standardised mortality ratios of individual hospitals and, secondarily, to evaluate the association between in-hospital standardised mortality ratios and early post-discharge mortality rate, length of hospital stay, and transfer rate. Retrospective analysis of routinely collected hospital data to compare observed deaths in 50 diagnostic categories with deaths predicted by a case mix adjustment method. 60 Dutch hospitals. 1 228 815 patients discharged in the period 2008 to 2010. In-hospital standardised mortality ratio, 30 days post-admission standardised mortality ratio, and 30 days post-discharge standardised mortality ratio. Compared with the in-hospital standardised mortality ratio, 33% of the hospitals were categorised differently with the 30 days post-admission standardised mortality ratio and 22% were categorised differently with the 30 days post-discharge standardised mortality ratio. A positive association was found between in-hospital standardised mortality ratio and length of hospital stay (Pearson correlation coefficient 0.33; P=0.01), and an inverse association was found between in-hospital standardised mortality ratio and early post-discharge mortality (Pearson correlation coefficient -0.37; P=0.004). Applying different mortality timeframes resulted in differences in standardised mortality ratios and differences in judgment regarding the performance of individual hospitals. Furthermore, associations between in-hospital standardised mortality rates, length of stay, and early post-discharge mortality rates were found. Combining these findings suggests that standardised mortality ratios based on in-hospital mortality are subject to so-called "discharge bias." Hence, early post-discharge mortality should be included in the calculation of standardised mortality ratios. 
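The timeframe effect reported above can be illustrated with a small sketch of the underlying arithmetic: a standardised mortality ratio is observed deaths divided by case-mix-expected deaths, so shifting the mortality window changes both counts and can flip a hospital's classification. All hospital names and counts below are hypothetical, not taken from the study.

```python
# Hypothetical observed/expected deaths per hospital for two mortality timeframes.
hospitals = {
    # timeframe: (observed deaths, case-mix-expected deaths)
    "A": {"in_hospital": (120, 100.0), "post_admission_30d": (150, 140.0)},
    "B": {"in_hospital": (80, 100.0), "post_admission_30d": (160, 140.0)},
}

def smr(observed, expected):
    """Standardised mortality ratio: observed deaths / expected deaths."""
    return observed / expected

for name, timeframes in hospitals.items():
    ratios = {tf: smr(obs, exp) for tf, (obs, exp) in timeframes.items()}
    # Flag a hospital when its SMR exceeds 1 (more deaths than case mix predicts).
    flags = {tf: ratio > 1 for tf, ratio in ratios.items()}
    print(name, {tf: round(r, 2) for tf, r in ratios.items()}, flags)
```

Hypothetical hospital B is flagged under the 30-day post-admission timeframe but not in-hospital: a hospital that discharges or transfers patients early moves deaths past the in-hospital window, which is the "discharge bias" the study describes.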
Tauzin B.,University Utrecht | Debayle E.,Ecole Normale Superieure de Lyon | Wittlinger G.,French National Center for Scientific Research Nature Geoscience | Year: 2010 Within the upper mantle, the seismic discontinuity at 410-km depth marks the top of the transition zone and is attributed to pressure-induced transformation of olivine into wadsleyite mineral assemblage. Just above the 410-km discontinuity, a layer characterized by low seismic wave velocities has been identified regionally. This low velocity layer shows poor lateral continuity and is thought to represent partial melting induced by local effects, such as the dehydration of subducted crust or the dehydration of water-bearing silicates beneath continental platforms in association with mantle plumes. However, some models predict that the low-velocity layer should extend globally, because the weaker water storage capacity of upper mantle minerals should induce partial melting of water-bearing silicates throughout this region. Here we report seismic observations from 89 stations worldwide that indicate a thick, intermittent low-velocity layer is located near 350 km depth in the mantle. The low velocity layer is not limited to regions associated with subduction or mantle plumes, and shows no affinity to a particular tectonic environment. We suggest that our data image the thickest parts of a more continuous global structure that shows steep lateral variations in thickness. The presence of a global layer of partial melt above the 410-km discontinuity would modify material circulation in the Earth mantle and may help to reconcile geophysical and geochemical observations. © 2010 Macmillan Publishers Limited. All rights reserved. 
Smallenburg F.,University of Rome La Sapienza | Filion L.,University Utrecht | Sciortino F.,University of Rome La Sapienza Nature Physics | Year: 2014 One of the most controversial hypotheses for explaining the origin of the thermodynamic anomalies characterizing liquid water postulates the presence of a metastable second-order liquid-liquid critical point located in the 'no-man's land'. In this scenario, two liquids with distinct local structure emerge near the critical temperature. Unfortunately, as spontaneous crystallization is rapid in this region, experimental support for this hypothesis relies on significant extrapolations, either from the metastable liquid or from amorphous solid water. Although the liquid-liquid transition is expected to feature in many tetrahedrally coordinated liquids, including silicon, carbon and silica, even numerical studies of atomic and molecular models have been unable to conclusively prove the existence of this transition. Here we provide such evidence for a model in which it is possible to continuously tune the softness of the interparticle interaction and the flexibility of the bonds, the key ingredients controlling the existence of the critical point. We show that conditions exist where the full coexistence is thermodynamically stable with respect to crystallization. Our work offers a basis for designing colloidal analogues of water exhibiting liquid-liquid transitions in equilibrium, opening the way for experimental confirmation of the original hypothesis. Feelders A.,University Utrecht Proceedings - IEEE International Conference on Data Mining, ICDM | Year: 2010 In many applications of data mining we know beforehand that the response variable should be increasing (or decreasing) in the attributes. Such relations between response and attributes are called monotone. In this paper we present a new algorithm to compute an optimal monotone classification of a data set for convex loss functions. 
Moreover, we show how the algorithm can be extended to compute all optimal monotone classifications with little additional effort. Monotone relabeling is useful for at least two reasons. Firstly, models trained on relabeled data sets often have better predictive performance than models trained on the original data. Secondly, relabeling is an important building block for the construction of monotone classifiers. We apply the new algorithm to investigate the effect on the prediction error of relabeling the training sample for k nearest neighbour classification and classification trees. In contrast to previous work in this area, we consider all optimal monotone relabelings. The results show that, for small training samples, relabeling the training data results in significantly better predictive performance. © 2010 IEEE. Adan R.A.H.,University Utrecht Trends in Neurosciences | Year: 2013 Regulation of body weight is organized by distributed brain circuits that use a variety of neuropeptides and transmitters, and that are responsive to endocrine and metabolic signals. Targeting of these circuits with novel pharmaceutical drugs would be helpful additions to lifestyle interventions for the treatment of obesity. The recent FDA approval of two anti-obesity drugs holds promise in a field in which previous drugs were removed from clinical use because of unacceptable psychiatric and cardiovascular side effects. Here, the modes of action of anti-obesity drugs are reviewed. © 2012 Elsevier Ltd. Biessels G.J.,University Utrecht | Reagan L.P.,University of South Carolina | Reagan L.P.,Wm Jennings Bryan Dorn Veterans Affairs Medical Center Nature Reviews Neuroscience | Year: 2015 Clinical studies suggest a link between type 2 diabetes mellitus (T2DM) and insulin resistance (IR) and cognitive dysfunction, but there are significant gaps in our knowledge of the mechanisms underlying this relationship. 
Animal models of IR help to bridge these gaps and point to hippocampal IR as a potential mediator of cognitive dysfunction in T2DM, as well as in Alzheimer disease (AD). This Review highlights these observations and discusses intervention studies which suggest that the restoration of insulin activity in the hippocampus may be an effective strategy to alleviate the cognitive decline associated with T2DM and AD. © 2015 Macmillan Publishers Limited. Iseger T.A.,Kings College London | Bossong M.G.,Kings College London | Bossong M.G.,University Utrecht Schizophrenia Research | Year: 2015 Despite extensive study over the past decades, available treatments for schizophrenia are only modestly effective and cause serious metabolic and neurological side effects. Therefore, there is an urgent need for novel therapeutic targets for the treatment of schizophrenia. A highly promising new pharmacological target in the context of schizophrenia is the endocannabinoid system. Modulation of this system by the main psychoactive component in cannabis, δ9-tetrahydrocannabinol (THC), induces acute psychotic effects and cognitive impairment. However, the non-psychotropic, plant-derived cannabinoid agent cannabidiol (CBD) may have antipsychotic properties, and thus may be a promising new agent in the treatment of schizophrenia. Here we review studies that investigated the antipsychotic properties of CBD in human subjects. Results show the ability of CBD to counteract psychotic symptoms and cognitive impairment associated with cannabis use as well as with acute THC administration. In addition, CBD may lower the risk for developing psychosis that is related to cannabis use. These effects are possibly mediated by opposite effects of CBD and THC on brain activity patterns in key regions implicated in the pathophysiology of schizophrenia, such as the striatum, hippocampus and prefrontal cortex. 
The first small-scale clinical studies with CBD treatment of patients with psychotic symptoms further confirm the potential of CBD as an effective, safe and well-tolerated antipsychotic compound, although large randomised clinical trials will be needed before this novel therapy can be introduced into clinical practice. © 2015 Elsevier B.V. Sattari S.Z.,Wageningen University | Bouwman A.F.,PBL Environmental Assessment Agency | Bouwman A.F.,University Utrecht | Giller K.E.,Wageningen University | Van Ittersum M.K.,Wageningen University Proceedings of the National Academy of Sciences of the United States of America | Year: 2012 Phosphorus (P) is a finite and dwindling resource. Debate focuses on current production and use of phosphate rock rather than on the amounts of P required in the future to feed the world. We applied a two-pool soil P model to reproduce historical continental crop P uptake as a function of P inputs from fertilizer and manure and to estimate P requirements for crop production in 2050. The key feature is the consideration of the role of residual soil P in crop production. Model simulations closely fit historical P uptake for all continents. Cumulative inputs of P fertilizer and manure for the period 1965-2007 in Europe (1,115 kg·ha⁻¹ of cropland) grossly exceeded the cumulative P uptake by crops (360 kg·ha⁻¹). Since the 1980s in much of Europe, P application rates have been reduced, and uptake continues to increase due to the supply of plant-available P from the residual soil P pool. We estimate that between 2008 and 2050 a global cumulative P application of 700-790 kg·ha⁻¹ of cropland (in total 1,070-1,200 teragrams P) is required to achieve crop production according to the various Millennium Ecosystem Assessment scenarios [Alcamo J, Van Vuuren D, Cramer W (2006) Ecosystems and Human Well-Being: Scenarios, Vol 2, pp 279-354].
We estimate that average global P fertilizer use must change from the current 17.8 to 16.8-20.8 teragrams per year in 2050, which is up to 50% less than other estimates in the literature that ignore the role of residual soil P. Magan J.M.,University Utrecht Physical Review Letters | Year: 2016 Having analytical instances of the eigenstate thermalization hypothesis (ETH) is of obvious interest, both for fundamental and applied reasons. This is generally a hard task, due to the belief that nonlinear interactions are basic ingredients of the thermalization mechanism. In this article we prove that random Gaussian-free fermions satisfy ETH in the multiparticle sector, by analytically computing the correlations and entanglement entropies of the theory. With the explicit construction at hand, we finally comment on the differences between fully random Hamiltonians and random Gaussian systems, providing a physically motivated notion of randomness of the microscopic quantum state. © 2016 American Physical Society. Gursoy U.,University Utrecht Journal of High Energy Physics | Year: 2011 We investigate continuous Hawking-Page transitions in Einstein's gravity coupled to a scalar field with an arbitrary potential in the weak gravity limit. We show that this is only possible in a singular limit where the black-hole horizon marginally traps a curvature singularity. Depending on the subleading terms in the potential, a rich variety of continuous phase transitions arise. Our examples include second and higher order, including the Berezinskii- Kosterlitz-Thouless type. In the case when the scalar is dilaton, the condition for continuous phase transitions lead to (asymptotically) linear-dilaton background. We obtain the scaling laws of thermodynamic functions, as well as the viscosity coefficients near the transition. In the limit of weak gravitational interactions, the bulk viscosity asymptotes to a universal constant, independent of the details of the scalar potential. 
As a byproduct of our analysis we obtain a one-parameter family of kink solutions in arbitrary dimension d that interpolate between AdS near the boundary and a linear-dilaton background in the deep interior. The continuous Hawking-Page transitions found here serve as holographic models for normal-to-superfluid transitions. Prokopec T.,University Utrecht Journal of Cosmology and Astroparticle Physics | Year: 2015 We consider stochastic inflation in an interacting scalar field in spatially homogeneous accelerating space-times with a constant principal slow roll parameter ε. We show that, if the scalar potential is scale invariant (which is the case when the scalar contains quartic self-interaction and couples non-minimally to gravity), the late-time solution on accelerating FLRW spaces can be described by a probability distribution function (PDF) ρ which is a function of φ/H only, where φ(x) is the scalar field and H=H(t) denotes the Hubble parameter. We give explicit late-time solutions for ρ → ρ(φ/H), and thereby find the order-ε corrections to the Starobinsky-Yokoyama result. This PDF can then be used to calculate e.g. various n-point functions of the (self-interacting) scalar field, which are valid at late times in arbitrary accelerating space-times with constant ε. © 2015 IOP Publishing Ltd and Sissa Medialab srl.
Because drought vulnerability is linked more closely to the types of land-use and social context than to climatological events alone, it was examined based on locally perceived criteria of drought. Accordingly, the pastoral way of life was vulnerable to severe drought during 25% of the last 28 years, while the mixed farming (livestock and maize farming combined) system was vulnerable to severe drought during only 4% of the years. Over the last 5 decades, cultivated lands increased threefold while the dense acacia coverage declined from 42% in 1965 to 9% in 2010. The observed LULC changes were driven by the interplay of recurrent drought, socioeconomic and institutional dynamics, access to markets and improved technologies such as early-maturing maize cultivars and better land management. Proper policy and technological interventions are required to develop appropriate drought adaptation strategies and avert the increasing degradation of woodlands in the Rift Valley dry lands where a pastoral way of life is still present. © 2012 Elsevier B.V. De Rooij D.G.,University Utrecht | Griswold M.D.,Washington State University Journal of Andrology | Year: 2012 This review focuses on 3 important advances in our understanding of rodent spermatogonial stem cells (SSC) that have emerged since 2000: the identity of SSC, the existence of a SSC niche, and gene expression in spermatogonia. It is now apparent that the original scheme, in which the Asingle (As) spermatogonia are the only stem cells, may be too simple. Rather, separation of pairs of Apaired (Apr) spermatogonia into singles might also play a role in the steady-state situation. However, evidence that in the normal epithelium fragmentation of chains of Aaligned (Aal) spermatogonia into smaller clones also plays a role is not yet conclusive.
New evidence presented during the last decade indicates that the As, Apr, and Aal (As,pr,al) spermatogonia are not localized at random over the tubule basal lamina, as originally assumed, but are restricted to those areas that border on interstitial tissue and, in particular, to areas containing venules and arterioles, suggesting a specific relationship of this localization with a possible SSC niche. Finally, gene expression studies are showing how both extrinsic factors produced by Sertoli cells and intrinsic factors that are products of the germ cells act either to maintain progenitor cells or to promote differentiation and the commitment to meiosis. Taken together, this new knowledge adds to our understanding of the balance between 2 opposing forces: one promoting the undifferentiated state and the other promoting the commitment to meiosis and differentiation that is essential for spermatogenesis to proceed. © American Society of Andrology. Reggiori F.,University Utrecht | Klionsky D.J.,University of Michigan Genetics | Year: 2013 Autophagy refers to a group of processes that involve degradation of cytoplasmic components including cytosol, macromolecular complexes, and organelles, within the vacuole or the lysosome of higher eukaryotes. The various types of autophagy have attracted increasing attention for at least two reasons. First, autophagy provides a compelling example of dynamic rearrangements of subcellular membranes involving issues of protein trafficking and organelle identity, and thus it is fascinating for researchers interested in questions pertinent to basic cell biology. Second, autophagy plays a central role in normal development and cell homeostasis, and, as a result, autophagic dysfunctions are associated with a range of illnesses including cancer, diabetes, myopathies, some types of neurodegeneration, and liver and heart diseases. That said, this review focuses on autophagy in yeast. 
Many aspects of autophagy are conserved from yeast to human; in particular, this applies to the gene products mediating these pathways as well as some of the signaling cascades regulating it, so that the information we relate is relevant to higher eukaryotes. Indeed, as with many cellular pathways, the initial molecular insights were made possible by genetic studies in Saccharomyces cerevisiae and other fungi. © 2013 by the Genetics Society of America. Dieleman J.M.,University Utrecht Cochrane database of systematic reviews (Online) | Year: 2011 High-dose prophylactic corticosteroids are often administered during cardiac surgery. Their use, however, remains controversial, as no trials are available that have been sufficiently powered to draw conclusions on their effect on major clinical outcomes. The objective of this meta-analysis was to estimate the effect of prophylactic corticosteroids in cardiac surgery on mortality, cardiac and pulmonary complications. Major medical databases (CENTRAL, MEDLINE, EMBASE, CINAHL and Web of Science) were systematically searched for randomised studies assessing the effect of corticosteroids in adult cardiac surgery. Databases were searched for the full period covered, up to December 2009. No language restrictions were applied. Randomised controlled trials comparing corticosteroid treatment to either placebo treatment or no treatment in adult cardiac surgery were selected. There were no restrictions with respect to length of the follow-up period. All selected studies qualified for pooling of results for one or more end-points. The processes of searching and selection for inclusion eligibility were performed independently by two authors. Also, quality assessment and data-extraction of selected studies were independently performed by two authors. The primary endpoints were mortality, cardiac and pulmonary complications. The main effect measure was the Peto odds ratio comparing corticosteroids to no treatment/placebo.
Fifty-four randomised studies, mostly of limited quality, were included. Altogether, 3615 patients were included in these studies. The pooled odds ratio for mortality was 1.12 (95% CI 0.65 to 1.92), showing no mortality reduction in patients treated with corticosteroids. The odds ratios for myocardial and pulmonary complications were 0.95 (95% CI 0.57 to 1.60) and 0.83 (95% CI 0.49 to 1.40), respectively. The use of a random effects model did not substantially influence study results. Analyses of secondary endpoints showed a reduction of atrial fibrillation and an increase in gastrointestinal bleeding in the corticosteroids group. This meta-analysis showed no beneficial effect of corticosteroid use on mortality, cardiac and pulmonary complications in cardiac surgery patients. Schutter D.J.L.G.,University Utrecht Medical Hypotheses | Year: 2012 Depressive disorder can be viewed as an adaptive defense mechanism in response to excessive stress that has gone awry. The hypothalamic-pituitary-adrenal (HPA) axis is an important node in the brain's stress circuit and is suggested to play a role in several subtypes of depression. While the hippocampus, amygdala and prefrontal cortex are considered important regions implicated in stress regulation and depressive disorder, the existence of reciprocal monosynaptic cerebello-hypothalamic connections and the presence of dense glucocorticoid binding sites point towards the view that the cerebellum plays a functional role in the regulation of the HPA axis as well. The present hypothesis may further contribute to contemporary neurobiological views on stress regulation and depressive disorder, and may offer a potential biological basis for developing novel neurosomatic treatment protocols. © 2012 Elsevier Ltd.
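The Peto odds ratio used as the main effect measure in the Dieleman meta-analysis above is a one-step pooled estimator based on observed-minus-expected event counts. A minimal sketch of the calculation, using a hypothetical 2×2 table (the counts are illustrative, not taken from the review):

```python
import math

def peto_pooled_or(tables, z=1.96):
    """Pool 2x2 tables with the Peto one-step method.

    Each table is (events_t, n_t, events_c, n_c). For each study:
    O = observed events in the treatment arm,
    E = n_t * m1 / N, the expectation under the null (m1 = total events),
    V = n_t * n_c * m1 * (N - m1) / (N**2 * (N - 1)), the hypergeometric variance.
    Pooled ln(OR) = sum(O - E) / sum(V), with SE = 1 / sqrt(sum(V)).
    Returns (OR, CI lower bound, CI upper bound).
    """
    s_oe, s_v = 0.0, 0.0
    for a, n_t, c, n_c in tables:
        N, m1 = n_t + n_c, a + c
        s_oe += a - n_t * m1 / N
        s_v += n_t * n_c * m1 * (N - m1) / (N**2 * (N - 1))
    log_or = s_oe / s_v
    se = 1 / math.sqrt(s_v)
    return (math.exp(log_or),
            math.exp(log_or - z * se),
            math.exp(log_or + z * se))

# Hypothetical single trial: 6/50 events under treatment vs 10/50 under control
or_, lo, hi = peto_pooled_or([(6, 50, 10, 50)])
```

With several studies, each contributes its own (O − E) and V to the sums, which is how the review pools fifty-four trials into a single odds ratio and confidence interval.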
Lammers T.,University Utrecht | Lammers T.,German Cancer Research Center | Lammers T.,RWTH Aachen Advanced Drug Delivery Reviews | Year: 2010 Copolymers based on N-(2-hydroxypropyl)methacrylamide (HPMA) are prototypic and well-characterized polymeric drug carriers that have been broadly implemented in the delivery of anticancer agents. HPMA copolymers circulate for prolonged periods of time, and by means of the Enhanced Permeability and Retention (EPR) effect, they localize to tumors both effectively and selectively. Because of their beneficial biodistribution, and because of the fact that they are able to improve the balance between the efficacy and the toxicity of chemotherapy, it is reasonable to assume that HPMA copolymers combine well with other treatment modalities. In the present review, efforts in this regard are summarized, and HPMA copolymers are shown to be able to beneficially interact with surgery, with radiotherapy, with hyperthermia, with photodynamic therapy, with chemotherapy and with each other. Together, the insights provided and the evidence obtained strongly suggest that HPMA copolymer-based nanomedicine formulations hold significant potential for improving the efficacy of combined modality anticancer therapy. © 2009 Elsevier B.V. All rights reserved. Arts H.H.,Radboud University Nijmegen | Knoers N.V.A.M.,University Utrecht Pediatric Nephrology | Year: 2013 Ciliopathies are a group of clinically and genetically overlapping disorders whose etiologies lie in defective cilia. These are antenna-like organelles on the apical surface of numerous cell types in a variety of tissues and organs, the kidney included. Cilia play essential roles during development and tissue homeostasis, and their dysfunction in the kidney has been associated with renal cyst formation and renal failure. Recently, the term "renal ciliopathies" was coined for those human genetic disorders that are characterized by nephronophthisis, cystic kidneys or renal cystic dysplasia. 
This review focuses on renal ciliopathies from a human genetics perspective. We survey the newest insights with respect to gene identification and genotype-phenotype correlations, and we reflect on candidate ciliopathies. The opportunities and challenges of next-generation sequencing (NGS) for genetic renal research and clinical DNA diagnostics are also reviewed, and we discuss the contribution of NGS to the development of personalized therapy for patients with renal ciliopathies. © 2012 The Author(s). Voesenek L.A.C.J.,University Utrecht | Bailey-Serres J.,University of California at Riverside Current Opinion in Plant Biology | Year: 2013 The investigation of flooding survival strategies in model, crop and wild plant species has yielded insights into molecular, physiological and developmental mechanisms of soil flooding (waterlogging) and submergence survival. The antithetical flooding escape and quiescence strategies of deepwater and submergence tolerant rice (Oryza sativa), respectively, are regulated by members of a clade of ethylene responsive factor transcriptional activators. This knowledge paved the way for the discovery that these proteins are targets of a highly conserved O2-sensing protein turnover mechanism in Arabidopsis thaliana. Further examples of genes that regulate transcription, root and shoot metabolism or development during floods have emerged. With the rapid advancement of genomic technologies, the mining of natural genetic variation in flooding tolerant wild species may ultimately benefit crop production. © 2013 Elsevier Ltd. Meijer P.T.,University Utrecht Marine Geology | Year: 2012 Theory for the dynamics of flow in sea straits holds promise to provide, in addition to geological evidence, insight into the configuration of the connection between the Mediterranean Sea and the Atlantic Ocean at the onset of the Messinian Salinity Crisis. 
This paper, for the first time, systematically explores the application of hydraulic control theory to the question of how, about 6 Ma ago, Mediterranean salinity could have risen to values associated with gypsum saturation. The theory is based on the notion that it is the greatest constriction of the flow between basin and ocean that acts to limit the exchange. The response of basin salinity to strait depth, strait width, and relative thickness of the outflow layer proves to be highly nonlinear. For strait width on the order of kilometres, an asymptotic rise in basin salinity occurs when the strait depth is on the order of a few tens of metres. Completely blocked outflow takes place when the depth is reduced to metres. The nonlinear nature of the system implies that even a slow gradual reduction in the sill depth leads to an event-like rise in basin salinity. For values of basin salinity approaching gypsum saturation the response of the basin to changes in the strait depth is significantly delayed. © 2012 Elsevier B.V. Kalinina Ayuso V.,University Utrecht Investigative ophthalmology & visual science | Year: 2013 To investigate the presence of biomarkers in aqueous humor (AH) from patients with uveitis associated with juvenile idiopathic arthritis (JIA). AH (N = 73) and serum (N = 105) samples from 116 children were analyzed using surface-enhanced laser desorption/ionization time-of-flight mass spectrometry (SELDI-TOF MS). Four study groups were compared: JIA, silent chronic anterior uveitis (AU), other uveitis entities, and noninflammatory controls. Statistical biomarker identification was performed using the SELDI-ToF Biomarker Analysis Cluster Wizard followed by multivariate statistical analysis. Biochemical identification of biomarkers was performed by polyacrylamide gel protein separation, followed by liquid chromatography tandem mass spectrometry. ELISA was performed in a number of AH samples representing all four study groups.
In the JIA group, one AH protein peak at mass/charge (m/z) 13,762 had qualitative and quantitative differences in expression compared with the other uveitis entities and the controls, but not compared with the group of silent chronic AU. Its quantitative expression in AH of patients with JIA and other silent chronic AU was positively associated with uveitis activity. The protein at m/z 13,762 in AH was identified as transthyretin (TTR). The TTR concentration in AH differed significantly between the study groups (P = 0.006) with considerably higher TTR concentrations in JIA and silent chronic AU samples positive for m/z 13,762 than those of the other uveitis and control groups. TTR is a potential intraocular biomarker of JIA-associated uveitis. Its role in the pathogenesis of silent chronic AU with and without arthritis needs further investigation. Matsumoto T.,University Utrecht | Yoshida K.,Kyoto University Journal of High Energy Physics | Year: 2014 We derive the gravity duals of noncommutative gauge theories from the Yang-Baxter sigma model description of the AdS5 × S5 superstring with classical r-matrices. The corresponding classical r-matrices are 1) solutions of the classical Yang-Baxter equation (CYBE), 2) skew-symmetric, 3) nilpotent and 4) abelian. Hence these should be called abelian Jordanian deformations. As a result, the gravity duals are shown to be integrable deformations of AdS5 × S5. Then, abelian twists of AdS5 are also investigated. These results provide support for the gravity/CYBE correspondence proposed in arXiv:1404.1838. © 2014 The Author(s). Van De Laar L.,Erasmus Medical Center | Coffer P.J.,University Utrecht | Coffer P.J.,Center for Cellular and Molecular Intervention | Woltman A.M.,Erasmus Medical Center Blood | Year: 2012 Dendritic cells (DCs) represent a small and heterogeneous fraction of the hematopoietic system, specialized in antigen capture, processing, and presentation.
The different DC subsets act as sentinels throughout the body and perform a key role in the induction of immunogenic as well as tolerogenic immune responses. Because of their limited lifespan, continuous replenishment of DC is required. Whereas the importance of GM-CSF in regulating DC homeostasis has long been underestimated, this cytokine is currently considered a critical factor for DC development under both steady-state and inflammatory conditions. Regulation of cellular actions by GM-CSF depends on the activation of intracellular signaling modules, including JAK/STAT, MAPK, PI3K, and canonical NF-κB. By directing the activity of transcription factors and other cellular effector proteins, these pathways influence differentiation, survival and/or proliferation of uncommitted hematopoietic progenitors, and DC subset-specific precursors, thereby contributing to specific aspects of DC subset development. The specific intracellular events resulting from GM-CSF-induced signaling provide a molecular explanation for GM-CSF-dependent subset distribution as well as clues to the specific characteristics and functions of GM-CSF-differentiated DCs compared with DCs generated by fms-related tyrosine kinase 3 ligand. This knowledge can be used to identify therapeutic targets to improve GM-CSF-dependent DC-based strategies to regulate immunity. © 2012 by The American Society of Hematology. Overbeek S.A.,University Utrecht Respiratory research | Year: 2011 Cigarette smoking induces peripheral inflammatory responses in all smokers and is the major risk factor for neutrophilic lung disease such as chronic obstructive pulmonary disease. The aim of this study was to investigate the effect of cigarette smoke on neutrophil migration and on β2-integrin activation and function in neutrophilic transmigration through endothelium. 
Utilizing freshly isolated human PMNs, the effect of cigarette smoke on migration and β2-integrin activation and function in neutrophilic transmigration was studied. In this report, we demonstrated that cigarette smoke extract (CSE) dose-dependently induced migration of neutrophils in vitro. Moreover, CSE promoted neutrophil adherence to fibrinogen. Using functional blocking antibodies against CD11b and CD18, it was demonstrated that Mac-1 (CD11b/CD18) is responsible for the cigarette smoke-induced firm adhesion of neutrophils to fibrinogen. Furthermore, cigarette smoke induced neutrophil transmigration through endothelium via the activation of β2-integrins, since pre-incubation of neutrophils with functional blocking antibodies against CD11b and CD18 attenuated this transmigration. This is the first study to describe that cigarette smoke extract induces a direct migratory effect on neutrophils and that CSE is an activator of β2-integrins on the cell surface. Blocking this activation of β2-integrins might be an important target in cigarette smoke-induced neutrophilic diseases. Theunissen B.,University Utrecht ISIS | Year: 2012 In the 1970s and 1980s Dutch farmers replaced their dual-purpose Friesian cows with Holsteins, a highly specialized American dairy breed. The changeover was related to a major turnabout in breeding practices that involved the adoption of quantitative genetics. Dutch commercial breeders had long resisted the quantitative approach to breeding that scientists had been recommending since World War II. After about 1970, however, they gave up their resistance: the art of breeding, it was said, finally became a science. In historical overviews this turnabout is seen as part of what is called the "modernization project" in Dutch agriculture that the government instigated after the war. Economic developments are assumed to have necessitated this project, and specialization of production is seen as a natural consequence.
This essay argues that the idea that the art of breeding was turned into a science is to a certain extent misleading. Furthermore, it aims to show that economic pressures and government policies cannot adequately explain the turn toward Holsteins. A better understanding can be obtained by framing the Holsteinization process as the result of a changeover in breeding culture, that is, in the ensemble of shared convictions, beliefs, conventions, methods, practices, and the like that characterized practical cattle breeding and that involved scientific, technical, economic, aesthetic, normative, and commercial considerations. © 2012 by The History of Science Society. Paffen C.L.E.,University Utrecht | Alais D.,University of Sydney Frontiers in Human Neuroscience | Year: 2011 Ever since Wheatstone initiated the scientific study of binocular rivalry, it has been debated whether the phenomenon is under attentional control. In recent years, the issue of attentional modulation of binocular rivalry has seen a revival. Here we review the classical studies as well as recent advances in the study of attentional modulation of binocular rivalry. We show that (1) voluntary control over binocular rivalry is possible, yet limited, (2) both endogenous and exogenous attention influence perceptual dominance during rivalry, (3) diverting attention from rival displays does not arrest perceptual alternations, and that (4) rival targets by themselves can also attract attention. From a theoretical perspective, we suggest that attention affects binocular rivalry by modulating the effective contrast of the images in competition. This contrast-enhancing effect of top-down attention is counteracted by a response-attenuating effect of neural adaptation at early levels of visual processing, which weakens the response to the dominant image.
Moreover, we conclude that although frontal and parietal brain areas involved in both binocular rivalry and visual attention overlap, an adapting reciprocal inhibition arrangement at early visual cortex is sufficient to trigger switches in perceptual dominance independently of higher-level "selection" mechanisms. Both of these processes are reciprocal and therefore self-balancing, with the consequence that complete attentional control over binocular rivalry can never be realized. © 2011 Paffen and Alais. Vroege G.J.,University Utrecht Liquid Crystals | Year: 2014 A review is given of liquid crystals formed in colloidal dispersions, in particular those consisting of mineral particles. Starting with the historical development and early theory, the characteristic properties related to the colloidal nature of this type of liquid crystals are discussed. The possibility of finding biaxial nematic and smectic phases is described for mixtures of rods and plates, and recent examples are given of biaxial liquid crystal phases of mineral particles with inherent biaxial shape. © 2013 Taylor & Francis. Boomsma C.M.,University Utrecht Cochrane database of systematic reviews (Online) | Year: 2012 In order to improve embryo implantation for in vitro fertilisation (IVF) or intracytoplasmic sperm injection (ICSI) cycles the use of glucocorticoids has been advocated. It has been proposed that glucocorticoids may improve the intrauterine environment by acting as immunomodulators to reduce the uterine natural killer (NK) cell count and normalise the cytokine expression profile in the endometrium and by suppression of endometrial inflammation. To investigate whether the administration of glucocorticoids around the time of implantation improved clinical outcomes in subfertile women undergoing IVF or ICSI when compared to no glucocorticoid administration.
The Cochrane Menstrual Disorders and Subfertility Group Trials Register (September 2011), the Cochrane Central Register of Controlled Trials (CENTRAL) (September 2011), MEDLINE (1966 to September 2011), EMBASE (1976 to September 2011), CINAHL (1982 to September 2011) and Science Direct (1966 to September 2011) were searched. Reference lists of relevant articles and relevant conference proceedings were handsearched. All randomised controlled trials (RCTs) addressing the research question were included. Two review authors independently assessed eligibility and quality of trials and extracted relevant data. Fourteen studies (involving 1879 couples) were included. Three studies reported live birth rate and these did not identify a significant difference after pooling the (preliminary) results (OR 1.21, 95% CI 0.67 to 2.19). With regard to pregnancy rates, there was also no evidence that glucocorticoids improved clinical outcome (13 RCTs; OR 1.16, 95% CI 0.94 to 1.44). However, a subgroup analysis of 650 women undergoing IVF (6 RCTs) revealed a significantly higher pregnancy rate for women using glucocorticoids (OR 1.50, 95% CI 1.05 to 2.13). There were no significant differences in adverse events, but these were poorly and inconsistently reported. Overall, there was no clear evidence that administration of peri-implantation glucocorticoids in ART cycles significantly improved the clinical outcome. The use of glucocorticoids in a subgroup of women undergoing IVF (rather than ICSI) was associated with an improvement in pregnancy rates of borderline statistical significance and should be interpreted with care. These findings were limited to the routine use of glucocorticoids and cannot be extrapolated to women with autoantibodies, unexplained infertility or recurrent implantation failure. Further well designed randomised studies are required to elucidate the possible role of this therapy in well defined patient groups. 
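The odds ratios and 95% confidence intervals quoted in the Cochrane reviews above (e.g. OR 1.50, 95% CI 1.05 to 2.13) are computed on the log-odds scale from 2×2 event counts. A minimal sketch of the standard (Woolf) interval; the counts below are hypothetical, not taken from any review:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf (log-scale) confidence interval for a 2x2 table.

    a, b = events / non-events in the treatment arm
    c, d = events / non-events in the control arm
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical example: 10 pregnancies out of 100 vs 5 out of 100
or_, lo, hi = odds_ratio_ci(10, 90, 5, 95)
```

An interval whose lower bound sits just above 1 (as with the IVF subgroup's 1.05) is what the review calls "borderline statistical significance": the estimate favours treatment, but only narrowly excludes no effect.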
Veltman-Verhulst S.M.,University Utrecht Cochrane database of systematic reviews (Online) | Year: 2012 Intra-uterine insemination (IUI) is a widely used fertility treatment for couples with unexplained subfertility. Although IUI is less invasive and less expensive than in vitro fertilisation (IVF), the safety of IUI in combination with ovarian hyperstimulation (OH) is debated. The main concern about IUI treatment with OH is the increase in multiple pregnancy rate. To determine whether, for couples with unexplained subfertility, IUI improves the live birth rate compared with timed intercourse (TI), both with and without ovarian hyperstimulation (OH). We searched the Cochrane Menstrual Disorders and Subfertility Group Trials Register (searched July 2011), the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2011, Issue 7), MEDLINE (1966 to July 2011), EMBASE (1980 to July 2011), PsycINFO (1806 to July 2011), SCIsearch and reference lists of articles. Authors of identified studies were contacted for missing or unpublished data. Truly randomised controlled trials (RCTs) with at least one of the following comparisons were included: IUI versus TI, both in a natural cycle; IUI versus TI, both in a stimulated cycle; IUI in a natural cycle versus IUI in a stimulated cycle; IUI with OH versus TI in a natural cycle; IUI in a natural cycle versus TI with OH. Only couples with unexplained subfertility were included. Quality assessment and data extraction were performed independently by two review authors. Outcomes were extracted and the data were pooled. Subgroup and sensitivity analyses were done where possible. One trial compared IUI in a natural cycle with expectant management and showed no evidence of increased live births (334 women: odds ratio (OR) 1.60, 95% confidence interval (CI) 0.92 to 2.8). 
In the six trials where IUI was compared with TI, both in stimulated cycles, there was evidence of an increased chance of pregnancy after IUI (six RCTs, 517 women: OR 1.68, 95% CI 1.13 to 2.50). A significant increase in live birth rate was found for women where IUI with OH was compared with IUI in a natural cycle (four RCTs, 396 women: OR 2.07, 95% CI 1.22 to 3.50). However, the trials provided insufficient data to investigate the impact of IUI with or without OH on several important outcomes including live births, multiple pregnancies, miscarriage and risk of ovarian hyperstimulation. There was no evidence of a difference in pregnancy rate for IUI with OH compared with TI in a natural cycle (two RCTs, total 304 women: data not pooled). The final comparison of IUI in a natural cycle to TI with OH showed a marginal but significant increase in live births for IUI (one RCT, 342 women: OR 1.95, 95% CI 1.10 to 3.44). There is evidence that IUI with OH increases the live birth rate compared to IUI alone. The likelihood of pregnancy was also increased for treatment with IUI compared to TI in stimulated cycles. One adequately powered multicentre trial showed no evidence of effect of IUI in natural cycles compared with expectant management. There is insufficient data on multiple pregnancies and other adverse events for treatment with OH. Therefore couples should be fully informed about the risks of IUI and OH as well as alternative treatment options. Hulzebos E.H.,University Utrecht Cochrane database of systematic reviews (Online) | Year: 2012 After cardiac surgery, physical therapy is a routine procedure delivered with the aim of preventing postoperative pulmonary complications. To determine if preoperative physical therapy with an exercise component can prevent postoperative pulmonary complications in cardiac surgery patients, and to evaluate which type of patient benefits and which type of physical therapy is most effective. 
Searches were run on the Cochrane Central Register of Controlled Trials (CENTRAL) on the Cochrane Library (2011, Issue 12); MEDLINE (1966 to 12 December 2011); EMBASE (1980 to week 49, 2011); the Physiotherapy Evidence Database (PEDro) (to 12 December 2011) and CINAHL (1982 to 12 December 2011). Randomised controlled trials or quasi-randomised trials comparing preoperative physical therapy with no preoperative physical therapy or sham therapy in adult patients undergoing elective cardiac surgery. Data were collected on the type of study, participants, treatments used, primary outcomes (postoperative pulmonary complications grade 2 to 4: atelectasis, pneumonia, pneumothorax, mechanical ventilation > 48 hours, all-cause death, adverse events) and secondary outcomes (length of hospital stay, physical function measures, health-related quality of life, respiratory death, costs). Data were extracted by one review author and checked by a second review author. Review Manager 5.1 software was used for the analysis. Eight randomised controlled trials with 856 patients were included. Three studies used a mixed intervention (including either aerobic exercises or breathing exercises); five studies used inspiratory muscle training. Only one study used sham training in the controls. Patients who received preoperative physical therapy had a reduced risk of postoperative atelectasis (four studies including 379 participants, relative risk (RR) 0.52; 95% CI 0.32 to 0.87; P = 0.01) and pneumonia (five studies including 448 participants, RR 0.45; 95% CI 0.24 to 0.83; P = 0.01) but not of pneumothorax (one study with 45 participants, RR 0.12; 95% CI 0.01 to 2.11; P = 0.15) or mechanical ventilation for > 48 hours after surgery (two studies with 306 participants, RR 0.55; 95% CI 0.03 to 9.20; P = 0.68). Postoperative death from all causes did not differ between groups (three studies with 552 participants, RR 0.66; 95% CI 0.02 to 18.48; P = 0.81). 
Adverse events were not detected in the three studies that reported on them. The length of postoperative hospital stay was significantly shorter in experimental patients versus controls (three studies with 347 participants, mean difference -3.21 days; 95% CI -5.73 to -0.69; P = 0.01). One study reported a reduced physical function measure on the six-minute walking test in experimental patients compared to controls. One other study reported a better health-related quality of life in experimental patients compared to controls. Postoperative death from respiratory causes did not differ between groups (one study with 276 participants, RR 0.14; 95% CI 0.01 to 2.70; P = 0.19). Cost data were not reported on. Evidence derived from small trials suggests that preoperative physical therapy reduces postoperative pulmonary complications (atelectasis and pneumonia) and length of hospital stay in patients undergoing elective cardiac surgery. There is a lack of evidence that preoperative physical therapy reduces postoperative pneumothorax, prolonged mechanical ventilation or all-cause deaths. Heimeriks G.,University Utrecht Science and Public Policy | Year: 2013 In this paper we study developments in biotechnology, genomics and nanotechnology in the period 1998-2008. The fields show changing interdisciplinary characteristics in relation to distinct co-evolutionary dynamics in research, science and society. Biotechnology emerged as a discipline in publication patterns at the same time as the number of biotechnology departments increased, whereas genomics emerged as a stable discipline, while the number of genomics departments declined. Nanotechnology maintains an interdisciplinary journal citation pattern while the number of nanotechnology departments increased. In all three fields the importance of industry-university collaborations increased, albeit to different degrees. 
Patterns of interdisciplinarity can thus be distinguished, as different ways in which the three dynamics co-evolve. From a governance perspective, this conceptualization provides distinct rationales for policy interventions in relation to interdisciplinarity in research, science and society. © The Author 2012. Published by Oxford University Press. All rights reserved. van der Bilt A.,University Utrecht Journal of Oral Rehabilitation | Year: 2011 During chewing, food is reduced in size, while saliva moistens the food and binds the masticated food into a bolus that can be easily swallowed. Characteristics of the oral system, like number of teeth, bite force and salivary flow, will influence the masticatory process. Masticatory function of healthy persons has been studied extensively the last decades. These results were used as a comparison for outcomes of various patient groups. In this review, findings from literature on masticatory function for both healthy persons and patient groups are presented. Masticatory function of patients with compromised dentition appeared to be significantly reduced when compared with the function of healthy controls. The influence of oral rehabilitation, e.g. dental restorations, implant treatment and temporomandibular disorder treatment, on masticatory function will be discussed. For instance, implant treatment was shown to have a significant positive effect on both bite force and masticatory performance. Also, patient satisfaction with an implant-retained prosthesis was high in comparison with the situation before implant treatment. The article also reviews the neuromuscular control of chewing. The jaw muscle activity needed to break solid food is largely reflexly induced. Immediate muscle response is necessary to maintain a constant chewing rhythm under varying food resistance conditions. Finally, the influence of food characteristics on the masticatory process is discussed. 
Dry and hard products require more chewing cycles before swallowing than moist and soft foods. More time is needed to break the food and to add enough saliva to form a cohesive bolus suitable for swallowing. © 2011 Blackwell Publishing Ltd. Weese J.S.,University of Guelph | van Duijkeren E.,University Utrecht Veterinary Microbiology | Year: 2010 Staphylococci are important opportunistic pathogens in most animal species. Among the most relevant species are the coagulase-positive species Staphylococcus aureus and Staphylococcus pseudintermedius. Methicillin resistance has emerged as an important problem in both of these organisms, with significant concerns about animal and public health. The relative importance of these staphylococci in different animal species varies, as do the concerns about zoonotic transmission, yet it is clear that both present a challenge to veterinary medicine. © 2009 Elsevier B.V. All rights reserved. Matenco L.,University Utrecht | Radivojević D.,Gazprom Tectonics | Year: 2012 The large number and distribution of rollback systems in Mediterranean orogens suggest the possibility of interacting extensional back-arc deformation driven by different slabs. The formation of the Pannonian back-arc basin is generally related to the rapid Miocene rollback of a slab attached to the European continent. A key area of the entire system that is neglected by kinematic studies is the connection between the South Carpathians and Dinarides. In order to derive an evolutionary model, we interpreted regional seismic lines traversing the entire Serbian part of the Pannonian Basin. The observed deformation is dominantly expressed by the formation of Miocene extensional detachments and (half) grabens. The extensional geometries and associated synkinematic sedimentation that migrated in time and space allow the definition of a continuous and essentially asymmetric early to late Miocene extensional evolution. 
This evolution was followed by the formation of a few uplifted areas during the subsequent latest Miocene-Quaternary inversion. The present-day extensional geometry changing the strike across the basin is an effect of the clockwise rotation of the South Carpathians and Apuseni Mountains with respect to the Dinarides. Our study indicates that the Carpathian rollback is not the only mechanism responsible for the formation of the Pannonian Basin; an additional middle Miocene rollback of a Dinaridic slab is required to explain the observed structures. Furthermore, the study provides constraints for the pre-Neogene orogenic evolution of this junction zone, including the affinity of major crustal blocks, obducted ophiolitic sequences and the Sava suture zone. © 2012. American Geophysical Union. All Rights Reserved. Veldhoen M.,University Utrecht Nuclear Physics A | Year: 2013 Particle ratios are important observables used to constrain models of particle production in heavy-ion collisions. In this work we report on a measurement of the p/π ratio in the transverse momentum range 2.0 < pT,assoc < 4.0 GeV/c, associated with a charged trigger particle of 5.0 < pT,trig < 10.0 GeV/c, in 0-10% central Pb-Pb collisions at √sNN = 2.76 TeV. The ratio is measured in the jet peak and in a region at large Δη separation from the peak (bulk region). The presented results are based on 14M minimum-bias Pb-Pb collisions, recorded by the ALICE detector. It is observed that the p/π ratio in the bulk region is compatible with the p/π ratio of an inclusive measurement, and is much larger than the p/π ratio in the jet peak. The p/π ratio in the jet peak is compatible with a PYTHIA reference, in which fragmentation in the vacuum is the dominant mechanism of particle production. © 2013 CERN. 
Boer J.,Deventer Hospital | Nazary M.,University Utrecht British Journal of Dermatology | Year: 2011 Background Hidradenitis suppurativa (HS) is a distressing chronic inflammatory skin disorder which affects predominantly the groins and axillae. In analogy to acne, oral isotretinoin has been considered in the treatment of HS, although there are strong indications that this drug has only a very limited therapeutic effect. During the past 25 years scattered case reports have described promising results of treatment with acitretin. Objectives To evaluate the long-term efficacy of acitretin monotherapy. Methods A retrospective study in 12 patients with severe, recalcitrant HS who were treated with acitretin for 9-12 months at one Dermatology Centre in the Netherlands between 2005 and 2007 and were followed up to 4 years. The patients were men and infertile women. The efficacy of the treatment was rated by the patients on global maximum pain of nodules and abscesses on a visual analogue scale (VAS) as well as by physician global assessment. Results All 12 patients achieved remission and experienced a significant decrease in pain as assessed by VAS. In nine patients long-lasting improvement was observed, with no recurrence of lesions after 6 months (n = 1), 1 year (n = 3), > 2 years (n = 2), > 3 years (n = 2) and > 4 years (n = 1). Conclusions Acitretin appears to be an effective treatment for refractory HS, leading to reduction of pain from painful nodules and reducing the extent of the disease for a prolonged period. Verweij M.,University Utrecht Nuclear Physics A | Year: 2013 We report a measurement of transverse momentum spectra of jets detected with the ALICE detector in Pb-Pb collisions at √sNN = 2.76 TeV. Jets are reconstructed from charged particles using the anti-kT jet algorithm. The transverse momentum of tracks is measured down to 150 MeV/c, which gives access to the low-pT fragments of the jet. 
The background from soft particle production is determined for each event and subtracted. The remaining influence of underlying event fluctuations is quantified by embedding different probes into heavy-ion data. The reconstructed transverse momentum spectrum is corrected for background fluctuations by unfolding. We observe a strong suppression in central events of inclusive jets reconstructed with radii of 0.2 and 0.3. The fragmentation bias on jets introduced by requiring a high-pT leading particle, which rejects jets with a soft fragmentation pattern, is equivalent for central and peripheral events. © 2013 CERN. Grelli A.,University Utrecht Nuclear Physics A | Year: 2013 The measurement of D meson production provides key tests for parton energy-loss models, which predict that charm quarks should experience less in-medium energy loss than light quarks and gluons. The ALICE experiment has measured the production of prompt D0, D+ and D*+ mesons in pp and Pb-Pb collisions at the LHC at √s = 7 and 2.76 TeV and at √sNN = 2.76 TeV, respectively, via the exclusive reconstruction of their hadronic decay. The pT-differential production yields in the range 2 < pT < 16 GeV/c at central rapidity, |y| < 0.5, were used to calculate the nuclear modification factor. A suppression of a factor 3 to 4 for transverse momenta larger than 5 GeV/c in the 20% most central collisions was observed. Preliminary results in an extended pT range, using the data sample collected during the 2011 Pb-Pb run, together with the first measurement of the Ds+ nuclear modification factor will be shown. © 2013 CERN. Wadman R.I.,University Utrecht Cochrane database of systematic reviews (Online) | Year: 2012 Spinal muscular atrophy (SMA) is caused by degeneration of anterior horn cells of the spinal cord, which leads to progressive muscle weakness. Children with SMA type I will never be able to sit without support and usually die by the age of two years. 
There are no known efficacious drug treatments that influence the course of the disease. This is an update of a review first published in 2009. To evaluate whether drug treatment is able to slow or arrest the disease progression of SMA type I, and to assess if such therapy can be given safely. Drug treatment for SMA types II and III is the topic of a separate updated Cochrane review. We searched the Cochrane Neuromuscular Disease Group Specialized Register (8 March 2011), CENTRAL (The Cochrane Library 2011, Issue 1), MEDLINE (January 1991 to February 2011), EMBASE (January 1991 to February 2011) and ISI Web of Knowledge (January 1991 to 8 March 2011). We searched the Clinical Trials Registry of the U.S. National Institutes of Health (www.ClinicalTrials.gov) (8 March 2011) to identify additional trials that had not yet been published. We sought all randomised or quasi-randomised trials that examined the efficacy of drug treatment for SMA type I. Participants had to fulfil the clinical criteria and have a deletion or mutation of the SMN1 gene (5q11.2-13.2) confirmed by genetic analysis. The primary outcome measure was time from birth until death or full time ventilation. Secondary outcome measures were development of rolling, sitting or standing within one year after the onset of treatment, and adverse events attributable to treatment during the trial period. Two authors (RW and AV) independently reviewed and extracted data from all potentially relevant trials. For included studies, pooled relative risks and standardised mean differences were to be calculated to assess treatment efficacy. One small randomised controlled study comparing riluzole treatment to placebo for 10 SMA type 1 children was identified and included in the original review. No further trials were identified for the update in 2011. 
Regarding the primary outcome measure, three of seven children treated with riluzole were still alive at the ages of 30, 48 and 64 months, whereas all three children in the placebo group died; but the difference was not statistically significant. Regarding the secondary outcome measures, none of the children in the riluzole or placebo group developed the ability to roll, sit or stand, and no adverse effects were observed. For several reasons the overall quality of the study was low, mainly because the study was too small to detect an effect and because of baseline differences. Follow-up of the 10 included children was complete. No drug treatment for SMA type I has been proven to have significant efficacy. Wadman R.I.,University Utrecht Cochrane database of systematic reviews (Online) | Year: 2012 Spinal muscular atrophy (SMA) is caused by degeneration of anterior horn cells, which leads to progressive muscle weakness. Children with SMA type II do not develop the ability to walk without support and have a shortened life expectancy, whereas children with SMA type III develop the ability to walk and have a normal life expectancy. There are no known efficacious drug treatments that influence the disease course of SMA. This is an update of a review first published in 2009. To evaluate whether drug treatment is able to slow or arrest the disease progression of SMA types II and III and to assess if such therapy can be given safely. Drug treatment for SMA type I is the topic of a separate updated Cochrane review. We searched the Cochrane Neuromuscular Disease Group Specialized Register (8 March 2011), Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2011, Issue 1), MEDLINE (January 1991 to February 2011), EMBASE (January 1991 to February 2011) and ISI Web of Knowledge (January 1991 to March 8 2011). We also searched clinicaltrials.gov to identify as yet unpublished trials (8 March 2011). 
We sought all randomised or quasi-randomised trials that examined the efficacy of drug treatment for SMA types II and III. Participants had to fulfil the clinical criteria and have a deletion or mutation of the survival motor neuron 1 (SMN1) gene (5q11.2-13.2) that was confirmed by genetic analysis. The primary outcome measure was to be change in disability score within one year after the onset of treatment. Secondary outcome measures within one year after the onset of treatment were to be change in muscle strength, ability to stand or walk, change in quality of life, time from the start of treatment until death or full time ventilation and adverse events attributable to treatment during the trial period. Two authors independently reviewed and extracted data from all potentially relevant trials. Pooled relative risks and pooled standardised mean differences were to be calculated to assess treatment efficacy. Risk of bias was systematically analysed. Six randomised placebo-controlled trials on treatment for SMA types II and III were found and included in the review: the four in the original review and two trials added in this update. The treatments were creatine (55 participants), phenylbutyrate (107 participants), gabapentin (84 participants), thyrotropin releasing hormone (9 participants), hydroxyurea (57 participants), and combination therapy with valproate and acetyl-L-carnitine (61 participants). None of these studies were completely free of bias. All studies had adequate blinding, sequence generation and reports of primary outcomes. None of the included trials showed any statistically significant effects on the outcome measures in participants with SMA types II and III. One participant died due to suffocation in the hydroxyurea trial and one participant died in the creatine trial. No participants in any of the other four trials died or reached the state of full time ventilation. Serious side effects were infrequent. 
There is no proven efficacious drug treatment for SMA types II and III. Caeyenberghs K.,Ghent University | Leemans A.,University Utrecht Human Brain Mapping | Year: 2014 The study on structural brain asymmetries in healthy individuals plays an important role in our understanding of the factors that modulate cognitive specialization in the brain. Here, we used fiber tractography to reconstruct the left and right hemispheric networks of a large cohort of 346 healthy participants (20-86 years) and performed a graph theoretical analysis to investigate this brain laterality from a network perspective. Findings revealed that the left hemisphere is significantly more "efficient" than the right hemisphere, whereas the right hemisphere showed higher values of "betweenness centrality" and "small-worldness." In particular, left-hemispheric networks displayed increased nodal efficiency in brain regions related to language and motor actions, whereas the right hemisphere showed an increase in nodal efficiency in brain regions involved in memory and visuospatial attention. In addition, we found that hemispheric networks decrease in efficiency with age. Finally, we observed significant gender differences in measures of global connectivity. By analyzing the structural hemispheric brain networks, we have provided new insights into understanding the neuroanatomical basis of lateralized brain functions. © 2014 Wiley Periodicals, Inc. van Tilborg T.C.,University Utrecht BMC women's health | Year: 2012 Costs of in vitro fertilisation (IVF) are high, which is partly due to the use of follicle stimulating hormone (FSH). FSH is usually administered in a standard dose. However, due to differences in ovarian reserve between women, ovarian response also differs with potential negative consequences on pregnancy rates. A Markov decision-analytic model showed that FSH dose individualisation according to ovarian reserve is likely to be cost-effective in women who are eligible for IVF. 
However, this has never been confirmed in a large randomised controlled trial (RCT). The aim of the present study is to assess whether an individualised FSH dose regime based on an ovarian reserve test (ORT) is more cost-effective than a standard dose regime. Multicentre RCT in subfertile women indicated for a first IVF or intracytoplasmic sperm injection cycle, who are aged < 44 years, have a regular menstrual cycle and no major abnormalities at transvaginal sonography. Women with polycystic ovary syndrome, endocrine or metabolic abnormalities and women undergoing IVF with oocyte donation will not be included. Ovarian reserve will be assessed by measuring the antral follicle count. Women with a predicted poor response or hyperresponse will be randomised for a standard versus an individualised FSH regime (150 IU/day, 225-450 IU/day and 100 IU/day, respectively). Participants will undergo a maximum of three stimulation cycles over a maximum of 18 months. The primary study outcome is the cumulative ongoing pregnancy rate resulting in live birth achieved within 18 months after randomisation. Secondary outcomes are parameters for ovarian response, multiple pregnancies, number of cycles needed per live birth, total IU of FSH per stimulation cycle, and costs. All data will be analysed according to the intention-to-treat principle. Cost-effectiveness analysis will be performed to assess whether the health and associated economic benefits of individualised treatment of subfertile women outweigh the additional costs of an ORT. The results of this study will be integrated into a decision model that compares cost-effectiveness of the three dose-adjustment strategies to a standard dose strategy. The study outcomes will provide a scientific foundation for national and international guidelines. NTR2657. Glyn-Jones S.,University of Oxford | Palmer A.J.R.,University of Oxford | Agricola R.,Erasmus Medical Center | Price A.J.,University of Oxford | And 3 more authors. 
The Lancet | Year: 2015 Osteoarthritis is a major source of pain, disability, and socioeconomic cost worldwide. The epidemiology of the disorder is complex and multifactorial, with genetic, biological, and biomechanical components. Aetiological factors are also joint specific. Joint replacement is an effective treatment for symptomatic end-stage disease, although functional outcomes can be poor and the lifespan of prostheses is limited. Consequently, the focus is shifting to disease prevention and the treatment of early osteoarthritis. This task is challenging since conventional imaging techniques can detect only quite advanced disease and the relation between pain and structural degeneration is not close. Nevertheless, advances in both imaging and biochemical markers offer potential for diagnosis and as outcome measures for new treatments. Joint-preserving interventions under development include lifestyle modification and pharmaceutical and surgical modalities. Some show potential, but at present few have proven ability to arrest or delay disease progression. © 2015 Elsevier Ltd. Huettig F.,Max Planck Institute for Psycholinguistics | Brouwer S.,University Utrecht Dyslexia | Year: 2015 It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. 
In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender-marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. Copyright © 2015 John Wiley & Sons, Ltd. Boelens J.J.,University Utrecht Blood | Year: 2013 We report transplantation outcomes of 258 children with Hurler syndrome (HS) after a myeloablative conditioning regimen from 1995 to 2007. Median age at transplant was 16.7 months and median follow-up was 57 months. The cumulative incidence of neutrophil recovery at day 60 was 91%, acute graft-versus-host disease (GVHD) (grade II-IV) at day 100 was 25%, and chronic GVHD at 5 years was 16%. Overall survival and event-free survival (EFS) at 5 years were 74% and 63%, respectively. EFS after HLA-matched sibling donor (MSD) and 6/6 matched unrelated cord blood (CB) donor were similar at 81%, 66% after 10/10 HLA-matched unrelated donor (UD), and 68% after 5/6 matched CB donor. EFS was lower after transplantation in 4/6 matched unrelated CB (UCB) (57%; P = .031) and HLA-mismatched UD (41%; P = .007). Full-donor chimerism (P = .039) and normal enzyme levels (P = .007) were higher after CB transplantation (92% and 98%, respectively) compared with the other graft sources (69% and 59%, respectively). In conclusion, results of allogeneic transplantation for HS are encouraging, with similar EFS rates after MSD, 6/6 matched UCB, 5/6 UCB, and 10/10 matched UD. 
The use of mismatched UD and 4/6 matched UCB was associated with lower EFS. van Zon A.,University Utrecht Cochrane database of systematic reviews (Online) | Year: 2012 Otitis media with effusion (OME) is characterised by an accumulation of fluid in the middle ear behind an intact tympanic membrane, without the symptoms or signs of acute infection. In approximately one in three children with OME, however, a bacterial pathogen is identified in the middle ear fluid. In most cases, OME causes mild hearing impairment of short duration. When experienced in early life and when episodes of (bilateral) OME persist or recur, the associated hearing loss may be significant and have a negative impact on speech development and behaviour. Since most cases of OME will resolve spontaneously, only children with persistent middle ear effusion and associated hearing loss potentially require treatment. Previous Cochrane reviews have focused on the effectiveness of ventilation tube insertion, adenoidectomy, autoinflation, antihistamines, decongestants, and oral and topical intranasal steroids in OME. This review focuses on the effectiveness of antibiotics in children with OME. To assess the effects of antibiotics in children up to 18 years with OME. We searched the Cochrane Ear, Nose and Throat Disorders Group Trials Register; the Cochrane Central Register of Controlled Trials (CENTRAL); PubMed; EMBASE; CINAHL; Web of Science; BIOSIS Previews; Cambridge Scientific Abstracts; ICTRP and additional sources for published and unpublished trials. The date of the search was 22 February 2012. Randomised controlled trials comparing oral antibiotics with placebo, no treatment or therapy of unproven effectiveness. Our primary outcome was complete resolution of OME at two to three months. Secondary outcomes included resolution of OME at other time points, hearing, language and speech, ventilation tube insertion and adverse effects. 
Two authors independently extracted data using standardised data extraction forms and assessed the quality of the included studies using the Cochrane 'Risk of bias' tool. We presented dichotomous results as risk differences as well as risk ratios, with their 95% confidence intervals. If heterogeneity was greater than 75% we did not pool data. We included 23 studies (3027 children) covering a range of antibiotics, participants, outcome measures and time points of evaluation. Overall, we assessed the studies as generally being at low risk of bias. Our primary outcome was complete resolution of OME at two to three months. The differences (improvement) in the proportion of children having such resolution (risk difference (RD)) in the five individual included studies ranged from 1% (RD 0.01, 95% CI -0.11 to 0.12; not significant) to 45% (RD 0.45, 95% CI 0.25 to 0.65). Results from these studies could not be pooled due to clinical and statistical heterogeneity. Pooled analysis of data for complete resolution at more than six months was possible, with an increase in resolution of 13% (RD 0.13, 95% CI 0.06 to 0.19). Pooled analysis was also possible for complete resolution at the end of treatment, with the following increases in resolution rates: 17% (RD 0.17, 95% CI 0.09 to 0.24) for treatment for 10 days to two weeks, 34% (RD 0.34, 95% CI 0.19 to 0.50) for treatment for four weeks, 32% (RD 0.32, 95% CI 0.17 to 0.47) for treatment for three months, and 14% (RD 0.14, 95% CI 0.03 to 0.24) for treatment continuously for at least six months. We were unable to find evidence of a substantial improvement in hearing as a result of the use of antibiotics for otitis media with effusion; nor did we find an effect on the rate of ventilation tube insertion. We did not identify any trials that looked at speech, language and cognitive development or quality of life. Data on the adverse effects of antibiotic treatment reported in six studies could not be pooled due to high heterogeneity. 
Increases in the occurrence of adverse events varied from 3% (RD 0.03, 95% CI -0.01 to 0.07; not significant) to 33% (RD 0.33, 95% CI 0.22 to 0.44) in the individual studies. The results of our review do not support the routine use of antibiotics for children up to 18 years with otitis media with effusion. The largest effects of antibiotics were seen in children treated continuously for four weeks and three months. Even when clear and relevant benefits of antibiotics have been demonstrated, these must be balanced against the potential adverse effects when making treatment decisions. Immediate adverse effects of antibiotics are common and the emergence of bacterial resistance has been causally linked to the widespread use of antibiotics for common conditions such as otitis media. Klumperman J.,University Utrecht | Raposo G.,University Pierre and Marie Curie | Raposo G.,French National Center for Scientific Research Cold Spring Harbor Perspectives in Biology | Year: 2014 Live-cell imaging reveals the endolysosomal system as a complex and highly dynamic network of interacting compartments. Distinct types of endosomes are discerned by kinetic, molecular, and morphological criteria. Although none of these criteria, or combinations thereof, can capture the full complexity of the endolysosomal system, they are extremely useful for experimental purposes. Some membrane domain specializations and specific morphological characteristics can only be seen by ultrastructural analysis after preparation for electron microscopy (EM). Immuno-EM allows a further discrimination of seemingly identical compartments by their molecular makeup. In this review we provide an overview of the ultrastructural characteristics and membrane organization of endosomal compartments, along with their organizing machineries. ©2014 Cold Spring Harbor Laboratory Press; all rights reserved. 
Bonten M.J.M.,University Utrecht Critical Care | Year: 2012 The recognition of colonization pressure as an important risk factor for acquisition of antibiotic-resistant bacteria in the ICU, including Acinetobacter species, has major consequences for our understanding of risk factor analyses. Moreover, the importance of colonization pressure underpins the role of cross-transmission in the dynamics of antibiotic-resistant bacteria in the ICU, which has major consequences for the evaluation of the effectiveness of infection control measures. © 2012 BioMed Central Ltd. Jennings J.H.,University of North Carolina at Chapel Hill | Rizzi G.,University of North Carolina at Chapel Hill | Rizzi G.,University Utrecht | Stamatakis A.M.,University of North Carolina at Chapel Hill | And 2 more authors. Science | Year: 2013 The growing prevalence of overeating disorders is a key contributor to the worldwide obesity epidemic. Dysfunction of particular neural circuits may trigger deviations from adaptive feeding behaviors. The lateral hypothalamus (LH) is a crucial neural substrate for motivated behavior, including feeding, but the precise functional neurocircuitry that controls LH neuronal activity to engage feeding has not been defined. We observed that inhibitory synaptic inputs from the extended amygdala preferentially innervate and suppress the activity of LH glutamatergic neurons to control food intake. These findings help explain how dysregulated activity at a number of unique nodes can result in a cascading failure within a defined brain network to produce maladaptive feeding. De Goeij J.M.,University of Amsterdam | Van Oevelen D.,Netherlands Institute for Sea Research | Vermeij M.J.A.,University of Amsterdam | Osinga R.,Wageningen University | And 3 more authors. Science | Year: 2013 Ever since Darwin's early descriptions of coral reefs, scientists have debated how one of the world's most productive and diverse ecosystems can thrive in the marine equivalent of a desert. 
It is an enigma how the flux of dissolved organic matter (DOM), the largest resource produced on reefs, is transferred to higher trophic levels. Here we show that sponges make DOM available to fauna by rapidly expelling filter cells as detritus that is subsequently consumed by reef fauna. This "sponge loop" was confirmed in aquarium and in situ food web experiments, using 13C- and 15N-enriched DOM. The DOM-sponge-fauna pathway explains why biological hot spots such as coral reefs persist in oligotrophic seas - the reef's paradox - and has implications for reef ecosystem functioning and conservation strategies. Vennin V.,University of Portsmouth | Starobinsky A.A.,University Utrecht European Physical Journal C | Year: 2015 Combining the stochastic and $\delta N$ formalisms, we derive non-perturbative analytical expressions for all correlation functions of scalar perturbations in single-field, slow-roll inflation. The standard, classical formulas are recovered as saddle-point limits of the full results. This yields a classicality criterion that shows that stochastic effects are small only if the potential is sub-Planckian and not too flat. The saddle-point approximation also provides an expansion scheme for calculating stochastic corrections to observable quantities perturbatively in this regime. In the opposite regime, we show that a strong suppression in the power spectrum is generically obtained, and we comment on the physical implications of this effect. © 2015, The Author(s). Braakman I.,University Utrecht | Hebert D.N.,University of Massachusetts Amherst Cold Spring Harbor Perspectives in Biology | Year: 2013 In this article, we will cover the folding of proteins in the lumen of the endoplasmic reticulum (ER), including the role of three types of covalent modifications: signal peptide removal, N-linked glycosylation, and disulfide bond formation, as well as the function and importance of resident ER folding factors. 
These folding factors consist of classical chaperones and their cochaperones, the carbohydrate-binding chaperones, and the folding catalysts of the PDI and proline cis-trans isomerase families. We will conclude with the perspective of the folding protein: a comparison of characteristics and folding and exit rates for proteins that travel through the ER as clients of the ER machinery. © 2013 Cold Spring Harbor Laboratory Press; all rights reserved. Verheul R.J.,University Utrecht Journal of controlled release : official journal of the Controlled Release Society | Year: 2011 The physical stability of polyelectrolyte nanocomplexes composed of trimethyl chitosan (TMC) and hyaluronic acid (HA) is limited in physiological conditions. This may minimize the favorable adjuvant effects associated with particulate systems for nasal and intradermal immunization. Therefore, covalently stabilized nanoparticles loaded with ovalbumin (OVA) were prepared with thiolated TMC and thiolated HA via ionic gelation followed by spontaneous disulfide formation after incubation at pH 7.4 and 37°C. Also, maleimide PEG was coupled to the remaining thiol-moieties on the particles to shield their surface charge. OVA-loaded TMC/HA nanoparticles had a size of around 250-350nm, a positive zeta potential and OVA loading efficiencies up to 60%. Reacting the thiolated particles with maleimide PEG resulted in a slight reduction of zeta potential (from +7 to +4mV) and a minor increase in particle size. Stabilized TMC-S-S-HA particles (PEGylated or not) showed superior stability in saline solutions compared to non-stabilized particles (composed of nonthiolated polymers) but readily disintegrated upon incubation in a saline buffer containing 10mM dithiothreitol. In both the nasal and intradermal immunization study, OVA loaded stabilized TMC-S-S-HA particles demonstrated superior immunogenicity compared to non-stabilized particles (indicated by higher IgG titers). 
Intranasally, PEGylation completely abolished the beneficial effects of stabilization, and it induced no enhanced immune responses against OVA after intradermal administration. In conclusion, stabilization of the TMC/HA particulate system greatly enhances the immunogenicity of OVA in nasal and intradermal vaccination. Copyright © 2011. Published by Elsevier B.V. Poot M.,University Utrecht Molecular Syndromology | Year: 2013 Recent genomic research into autism spectrum disorders (ASD) has revealed a remarkably complex genetic architecture. Large numbers of common variants, copy number variations and single nucleotide variants have been identified, yet each of them individually afforded only a small phenotypic impact. A polygenic model in which multiple genes interact either in an additive or a synergistic way appears the most plausible for the majority of ASD patients. Based on recently identified ASD candidate genes, transgenic mouse models for neuroligins/neurexins and genes such as Cntnap2, Cntn5, Tsc1, Tsc2, Akt3, Cyfip1, Scn1a, En2, Slc6a4, and Bckdk have been generated and studied with respect to behavioral and neuroanatomical phenotypes and sensitivity to drug treatments. From these models, a few clues for potential pharmacologic intervention emerged. The Fmr1, Shank2 and Cntn5 knockout mice exhibited alterations of glutamate receptors, which may become a target for pharmacologic modulation. Some of the phenotypes of Mecp2 knockout mice can be ameliorated by administering IGF1. In the near future, comprehensive genotyping of individual patients and siblings combined with the novel insights generated from the transgenic animal studies may provide us with personalized treatment options. 
Eventually, autism may indeed turn out to be a phenotypically heterogeneous group of disorders ('autisms') caused by combinations of changes in multiple possible candidate genes, being different in each patient and requiring for each combination of mutations a distinct, individually tailored treatment. Copyright © 2013 S. Karger AG, Basel. Toelch U.,University Utrecht Proceedings. Biological sciences / The Royal Society | Year: 2014 Copying others appears to be a cost-effective way of obtaining adaptive information, particularly when flexibly employed. However, adult humans differ considerably in their propensity to use information from others, even when this 'social information' is beneficial, raising the possibility that stable individual differences constrain flexibility in social information use. We used two dissimilar decision-making computer games to investigate whether individuals flexibly adjusted their use of social information to current conditions or whether they valued social information similarly in both games. Participants also completed established personality questionnaires. We found that participants demonstrated considerable flexibility, adjusting social information use to current conditions. In particular, individuals employed a 'copy-when-uncertain' social learning strategy, supporting a core, but untested, assumption of influential theoretical models of cultural transmission. Moreover, participants adjusted the amount invested in their decision based on the perceived reliability of personally gathered information combined with the available social information. However, despite this strategic flexibility, participants also exhibited consistent individual differences in their propensities to use and value social information. Moreover, individuals who favoured social information self-reported as more collectivist than others. We discuss the implications of our results for social information use and cultural transmission. 
Vissers L.E.,University Utrecht Journal of the American Heart Association | Year: 2013 Dietary vitamin K intake is thought to decrease the risk of cardiovascular disease (CVD) by reducing vascular calcification, although vitamin K is also involved in coagulation. Studies investigating the association between phylloquinone intake and risk of stroke are scarce, and the relation with menaquinones has not been investigated to date. We investigated the association between intake of phylloquinone and menaquinones and stroke in a prospective cohort of 35,476 healthy subjects. Information on occurrence of stroke was obtained by linkage to national registries, and stroke was further specified into ischemic and hemorrhagic stroke. Vitamin K intake was estimated using a validated food-frequency questionnaire. Multivariate Cox proportional hazards models adjusted for cardiovascular risk factors, lifestyle, and other dietary factors were used to estimate the associations. During a follow-up of 12.1 ± 2.1 years, 580 incident cases of stroke were identified, 163 of which were hemorrhagic and 324 were ischemic. Phylloquinone intake was not associated with risk of stroke with a hazard ratio (HR) of 1.09 (95% CI: 0.85 to 1.40, P(trend) 0.41) for the highest versus lowest quartile. For intake of menaquinones similar results were found, with an HR(Q4 versus Q1) of 0.99 (95% CI: 0.75 to 1.29, P(trend) 0.82). When specifying hemorrhagic and ischemic stroke or menaquinone subtypes, no significant associations were detected. In our study, neither dietary phylloquinone nor dietary menaquinones intake were associated with stroke risk. Lorimer J.,University of Oxford | Driessen C.,University Utrecht Transactions of the Institute of British Geographers | Year: 2014 This paper draws together recent literatures on the geography of experiments and the potential of experimental modes of conducting science and politics. It examines their implications for environmentalism in the Anthropocene. 
We differentiate between two different conceptions of an experiment, contrasting the singular, modern scientific understanding of an experiment with recent appeals for deliberative public experiments. Developing the concept of wild experiments we identify three axes for critical enquiry. These relate to the status of the nonhuman world as found or made, the importance afforded to order and surprise in the conduct of any experiment, and the degree and means by which publics are included in decision-making. We then illustrate the potential of this framework through a case study investigation of nature conservation, critically examining efforts to rewild and de-domesticate a polder landscape and its nonhuman inhabitants at the Oostvaardersplassen in the Netherlands. This is a flagship example of the wider enthusiasm for rewilding in nature conservation. In conclusion we reflect on the wider significance and potential of these wild experiments for rethinking environmentalism in the Anthropocene. © 2013 Royal Geographical Society (with the Institute of British Geographers). Gouda E.J.,University Utrecht Phytotaxa | Year: 2012 Two new species belonging to the subfamily Tillandsioideae from Machu Picchu, Cusco, Peru are described and illustrated here. The new species Tillandsia machupicchuensis is close to T. tovarensis and the other new species Guzmania inkaterrae is close to G. morreniana and G. tenuifolia. Both species are abundant in the area. © 2012 Magnolia Press. Istamto T.,University Utrecht Environmental health : a global access science source | Year: 2014 The health impacts from traffic-related pollutants bring costs to society, which are often not reflected in market prices for transportation. We set out to simultaneously assess the willingness-to-pay (WTP) for traffic-related air pollution and noise effects on health, using a single measurement instrument and approach. 
We investigated the proportion and determinants of "protest vote" (PV) responses (people who were against valuing their health in terms of money) and "don't know" (DK) answers, and explored the effect of DK on the WTP distributions. Within the framework of the EU-funded project INTARESE, we asked over 5,200 respondents in five European countries to state their WTP to avoid health effects from road traffic-related air pollution and noise in an open-ended web-based questionnaire. Determinants of PV and DK were studied by logistic regression using variables concerning socio-demographics, income, health and environmental concern, and risk perception. About 10% of the respondents indicated a PV response and between 47-56% of respondents gave DK responses. About one-third of PV respondents thought that costs should be included in transportation prices, i.e. the polluter should pay. Logistic regression analyses showed associations of PV and DK with several factors. In addition to socio-demographic, economic and health factors known to affect WTP, environmental concern, awareness of health effects, respondent's ability to relax in polluted places, and their view on the government's role to reduce pollution and on policy to improve wellbeing, also affected the PV and DK response. An exploratory weighting and imputation exercise did not show substantial effects of DK on the WTP distribution. With a proportion of about 50%, DK answers may be a more relevant issue affecting WTP than PVs. The likelihood to give PV and DK responses was influenced by socio-demographic, economic and health factors, as well as environmental concerns and appreciation of environmental conditions and policies. In contested policy issues where actual policy may be based on WTP studies, PV and DK answers may indeed affect the outcome of the WTP study. PV and DK answers and their determinants therefore deserve further study in CV studies on environmental health effects. 
Koo B.-K.,University of Cambridge | Clevers H.,Hubrecht Institute KNAW | Clevers H.,University Utrecht Gastroenterology | Year: 2014 Since the discovery of LGR5 as a marker of intestinal stem cells, the field has developed explosively and led to many new avenues of research. The inner workings of the intestinal crypt stem cell niche are now well understood. The study of stem cell-enriched genes has uncovered some previously unknown aspects of the Wnt signaling pathway, the major driver of crypt dynamics. LGR5+ stem cells can now be cultured over long periods in vitro as epithelial organoids or "mini-guts." This technology opens new possibilities of using cultured adult stem cells for drug development, disease modeling, gene therapy, and regenerative medicine. This review describes the rediscovery of crypt base columnar cells as LGR5+ adult stem cells and summarizes subsequent progress, promises, unresolved issues, and challenges of the field. © 2014 by the AGA Institute. Bakker W.J.,University Utrecht Transcription | Year: 2013 Recently, we showed that E2F7 and E2F8 (E2F7/8) are critical regulators of angiogenesis through transcriptional control of VEGFA in cooperation with HIF. (1) Here we investigate the existence of other novel putative angiogenic E2F7/8-HIF targets, and discuss the role of the RB-E2F pathway in regulating angiogenesis during embryonic and tumor development. DeLuca K.F.,Colorado State University | Lens S.M.A.,University Utrecht | DeLuca J.G.,Colorado State University Journal of Cell Science | Year: 2011 Precise control of the attachment strength between kinetochores and spindle microtubules is essential to preserve genomic stability. Aurora B kinase has been implicated in regulating the stability of kinetochore–microtubule attachments but its relevant kinetochore targets in cells remain unclear. 
Here, we identify multiple serine residues within the N-terminus of the kinetochore protein Hec1 that are phosphorylated in an Aurora-B-kinase-dependent manner during mitosis. On all identified target sites, Hec1 phosphorylation at kinetochores is high in early mitosis and decreases significantly as chromosomes bi-orient. Furthermore, once dephosphorylated, Hec1 is not highly rephosphorylated in response to loss of kinetochore–microtubule attachment or tension. We find that a subpopulation of Aurora B kinase remains localized at the outer kinetochore even upon Hec1 dephosphorylation, suggesting that Hec1 phosphorylation by Aurora B might not be regulated wholly by spatial positioning of the kinase. Our results define a role for Hec1 phosphorylation in kinetochore–microtubule destabilization and error correction in early mitosis and for Hec1 dephosphorylation in maintaining stable attachments in late mitosis. Boya P.,CSIC - Biological Research Cen
# Information density

When using uncertainty sampling (or other similar strategies), we are unable to take the structure of the data into account. This can lead us to suboptimal queries. To alleviate this, one method is to use information density measures to help us guide our queries. For an unlabeled dataset $X_{u}$, the information density of an instance $x$ can be calculated as

$$I(x) = \frac{1}{|X_{u}|} \sum_{x^\prime \in X_{u}} sim(x, x^\prime),$$

where $sim(x, x^\prime)$ is a similarity function such as cosine similarity or Euclidean similarity, which is the reciprocal of the Euclidean distance. The higher the information density, the more similar the given instance is to the rest of the data. To illustrate this, we shall use a simple synthetic dataset. For more details, see Section 5.1 of the Active Learning book by Burr Settles!

[1]:
```python
from sklearn.datasets import make_blobs

X, y = make_blobs(n_features=2, n_samples=1000, centers=3, random_state=0, cluster_std=0.7)
```

[2]:
```python
from modAL.density import information_density

cosine_density = information_density(X, 'cosine')
euclidean_density = information_density(X, 'euclidean')
```

[3]:
```python
import matplotlib.pyplot as plt
%matplotlib inline

# visualizing the cosine and euclidean information density
with plt.style.context('seaborn-white'):
    plt.figure(figsize=(14, 7))
    plt.subplot(1, 2, 1)
    plt.scatter(x=X[:, 0], y=X[:, 1], c=cosine_density, cmap='viridis', s=50)
    plt.title('The cosine information density')
    plt.colorbar()
    plt.subplot(1, 2, 2)
    plt.scatter(x=X[:, 0], y=X[:, 1], c=euclidean_density, cmap='viridis', s=50)
    plt.title('The Euclidean information density')
    plt.colorbar()
    plt.show()
```

As you can see, different similarity functions highlight distinct features of the dataset. The Euclidean information density prefers the centers of the clusters, while the cosine one singles out the middle cluster as most important.
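The density formula itself is easy to reproduce by hand. Below is a minimal NumPy-only sketch (independent of modAL); note that it uses the transform $1/(1+d)$ rather than a bare reciprocal $1/d$, a common choice that avoids division by zero for an instance's similarity to itself. The small cluster-plus-outlier dataset is purely illustrative.

```python
import numpy as np

def euclidean_sim(a, b):
    # similarity as a decreasing transform of Euclidean distance;
    # 1/(1+d) avoids division by zero when a == b
    return 1.0 / (1.0 + np.linalg.norm(a - b))

def information_density(X, sim):
    # I(x) = (1/|X_u|) * sum over x' in X_u of sim(x, x')
    return np.array([np.mean([sim(x, xp) for xp in X]) for x in X])

# hypothetical data: a tight cluster plus one far-away outlier
cluster = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]])
outlier = np.array([[10.0, 10.0]])
X_pool = np.vstack([cluster, outlier])

dens = information_density(X_pool, euclidean_sim)
# the outlier is least similar to the rest, so its density is lowest
```

An uncertainty-based strategy weighted by `dens` would therefore steer queries away from the isolated point and toward the dense region, which is exactly the behavior the text describes.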
### SLA_REFRO Refraction

ACTION: Atmospheric refraction, for radio or optical/IR wavelengths.

CALL: CALL sla_REFRO (ZOBS, HM, TDK, PMB, RH, WL, PHI, TLR, EPS, REF)

##### GIVEN:

ZOBS D observed zenith distance of the source (radians)
HM D height of the observer above sea level (metre)
TDK D ambient temperature at the observer (K)
PMB D pressure at the observer (mb)
RH D relative humidity at the observer (range 0–1)
WL D effective wavelength of the source ($\mu$m)
PHI D latitude of the observer (radian, astronomical)
TLR D temperature lapse rate in the troposphere (K per metre)
EPS D precision required to terminate iteration (radian)

##### RETURNED:

REF D refraction: in vacuo ZD minus observed ZD (radians)

NOTES:

(1) A suggested value for the TLR argument is 0.0065D0 (sign immaterial). The refraction is significantly affected by TLR, and if studies of the local atmosphere have been carried out a better TLR value may be available.

(2) A suggested value for the EPS argument is 1D$-$8. The result is usually at least two orders of magnitude more computationally precise than the supplied EPS value.

(3) The routine computes the refraction for zenith distances up to and a little beyond $90^\circ$ using the method of Hohenkerk & Sinclair (NAO Technical Notes 59 and 63, subsequently adopted in the Explanatory Supplement to the Astronomical Almanac, 1992 – see Section 3.281).

(4) The code is based on the AREF optical/IR refraction subroutine (HMNAO, September 1984, RGO: Hohenkerk 1985), with extensions to support the radio case. The modifications to the original HMNAO optical/IR refraction code which affect the results are:

• The angle arguments have been changed to radians, any value of ZOBS is allowed (see Note 6, below) and other argument values have been limited to safe values.
• Revised values for the gas constants are used, from Murray (1983).
• A better model for $P_s(T)$ has been adopted, from Gill (1982).
• More accurate expressions for $P_{wo}$ have been adopted (again from Gill 1982).
• The formula for the water vapour pressure, given the saturation pressure and the relative humidity, is from Crane (1976), expression 2.5.5.
• Provision for radio wavelengths has been added using expressions devised by A. T. Sinclair, RGO (Sinclair 1989). The refractivity model is from Rueger (2002).
• The optical refractivity for dry air is from IAG (1999).

(5) The radio refraction is chosen by specifying WL $> 100\,\mu$m. Because the algorithm takes no account of the ionosphere, the accuracy deteriorates at low frequencies, below about 30 MHz.

(6) Before use, the value of ZOBS is expressed in the range $\pm\pi$. If this ranged ZOBS is negative, the result REF is computed from its absolute value before being made negative to match. In addition, if it has an absolute value greater than $93^\circ$, a fixed REF value equal to the result for ZOBS $= 93^\circ$ is returned, appropriately signed.

(7) As in the original Hohenkerk & Sinclair algorithm, fixed values of the water vapour polytrope exponent, the height of the tropopause, and the height at which refraction is negligible are used.

(8) The radio refraction has been tested against work done by Iain Coulson, JACH (private communication 1995), for the James Clerk Maxwell Telescope, Mauna Kea. For typical conditions, agreement at the 0.1 arcsec level is achieved for moderate ZD, worsening to perhaps 0.5–1.0 arcsec at ZD $80^\circ$. At hot and humid sea-level sites the accuracy will not be as good.

(9) It should be noted that the relative humidity RH is formally defined in terms of "mixing ratio" rather than pressures or densities as is often stated. It is the mass of water per unit mass of dry air divided by that for saturated air at the same temperature and pressure (see Gill 1982).
The familiar $\nu = p_w/p_s$ or $\nu = \rho_w/\rho_s$ expressions can differ from the formal definition by several percent, significant in the radio case.

(10) The algorithm is designed for observers in the troposphere. The supplied temperature, pressure and lapse rate are assumed to be for a point in the troposphere and are used to define a model atmosphere with the tropopause at 11 km altitude and a constant temperature above that. However, in practice, the refraction values returned for stratospheric observers, at altitudes up to 25 km, are quite usable.

REFERENCES:

(1) Coulson, I. 1995, private communication.
(2) Crane, R.K., Meeks, M.L. (ed), 1976, "Refraction Effects in the Neutral Atmosphere", Methods of Experimental Physics: Astrophysics 12B, Academic Press.
(3) Gill, A.E. 1982, Atmosphere-Ocean Dynamics, Academic Press.
(4) Hohenkerk, C.Y. 1985, private communication.
(5) Hohenkerk, C.Y., & Sinclair, A.T. 1985, NAO Technical Note No. 63, Royal Greenwich Observatory.
(6) International Association of Geodesy, XXIIth General Assembly, Birmingham, UK, 1999, Resolution 3.
(7) Murray, C.A. 1983, Vectorial Astrometry, Adam Hilger, Bristol.
(8) Seidelmann, P.K. et al. 1992, Explanatory Supplement to the Astronomical Almanac, Chapter 3, University Science Books.
(9) Rueger, J.M. 2002, Refractive Index Formulae for Electronic Distance Measurement with Radio and Millimetre Waves, in Unisurv Report S-68, School of Surveying and Spatial Information Systems, University of New South Wales, Sydney, Australia.
(10) Sinclair, A.T. 1989, private communication.
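The argument-handling conventions of Note (6) — ranging ZOBS into $\pm\pi$, computing from the absolute value, sign-matching the result, and clamping beyond $93^\circ$ — are self-contained enough to sketch. The following Python sketch is not part of SLALIB; `toy_refro` is a hypothetical monotone stand-in for the real Hohenkerk & Sinclair iterative integration, used only to exercise the wrapper logic.

```python
import math

def range_zobs(zobs):
    """Fold an angle into the range +/- pi, as Note (6) describes for ZOBS."""
    z = math.fmod(zobs, 2.0 * math.pi)
    if z > math.pi:
        z -= 2.0 * math.pi
    elif z <= -math.pi:
        z += 2.0 * math.pi
    return z

def refraction_with_limits(zobs, refro):
    """Apply Note (6)'s conventions around a raw refraction function refro(z):
    compute from |ZOBS|, clamp beyond 93 degrees, and match the sign."""
    z = range_zobs(zobs)
    zmax = math.radians(93.0)
    ref = refro(min(abs(z), zmax))  # fixed value returned beyond 93 deg
    return -ref if z < 0.0 else ref

# hypothetical stand-in for the real iterative refraction integral
toy_refro = lambda z: 1e-4 * z
```

With this scaffolding, a negative zenith distance yields the negated refraction, and any zenith distance past $93^\circ$ returns the $93^\circ$ value, exactly as the note specifies.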
# Tag Info Equation (7) in the Grad-CAM++ paper is linear. In fact, for a given class $c$ we have just one equation and many unknowns (the $\alpha_{ab}^{kc}$), hence the equation is underdetermined and will have ...
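To see concretely why one linear equation in many unknowns is underdetermined, here is a small NumPy illustration. The $\alpha$ values here are a toy stand-in for the $\alpha_{ab}^{kc}$ weights, not taken from the Grad-CAM++ paper; the point is only that any null-space vector can be added to a solution without violating the equation.

```python
import numpy as np

# one equation, four unknowns: 1*a0 + 2*a1 + 3*a2 + 4*a3 = 10
A = np.array([[1.0, 2.0, 3.0, 4.0]])  # shape (1, 4): rank 1 < 4 unknowns
b = np.array([10.0])

# lstsq picks the minimum-norm solution among the infinitely many solutions
alpha, *_ = np.linalg.lstsq(A, b, rcond=None)

# any vector in the null space of A can be added and the equation still holds
null_vec = np.array([2.0, -1.0, 0.0, 0.0])  # A @ null_vec == 0
alt = alpha + null_vec
```

Both `alpha` and `alt` satisfy the single equation exactly, which is what "underdetermined" means in practice: extra constraints (or a particular closed-form choice, as Grad-CAM++ makes) are needed to pin down a unique solution.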
Making an array in the Table So we have an Table and there is an array which we wish to have it in the Table: Here is the code \documentclass[onecolumn,amsmath,amssymb,nofootinbib,superscriptaddress,floatfix]{revtex4} \begin{document} \begin{table}[!h] \begin{tabular}{|c ||c| c| c| c|} \hline $d$ \; & \; $Z$ & \; $UZ1$ & \; $UZ2$ & \; $U$\\ \; & \; & \; \; & \; \; & \; \\ \hline $d=1$ \; & \; $S_1=\int x_1{}^2$, & \; $Y_3=\int y_3{}^2$, \; & \; \; & \; \\ \; & \; $S_2=\int x_2{}^2$ & \; $G_4=\int g_4{}^2$ \; & \; \; & \; \\ \cline{2-5} \; & \multicolumn{4}{l|}{$S1, S2, S3, \dots$ } \\ \hline $d=2$ \; & \; & \; \; & \; \; & \; \\ \; & \; & \; \; & \; \; & \; \\ \hline \end{tabular} \caption{} \end{table} $$\left\{ \begin{array}{ll} S_1=\int x_1{}^2,\\ S_2=\int x_2{}^2 \end{array} \right.$$ \end{document} We have the array outside the Table I: Question: What we wish is that to put this array form: $$\left\{ \begin{array}{ll} S_1=\int x_1{}^2,\\ S_2=\int x_2{}^2 \end{array} \right.$$ into the table, which a single array exactly joins the two command lines $S_1=\int x_1{}^2$, $S_2=\int x_2{}^2$ together inside the table. How to do that? While we keep $Y3$ and $G_4$ separated not joined by another array. - argggg all those \; came back:-) and the [!h] –  David Carlisle May 24 at 0:24 Here is a more compact code, with the cases and matrix* environments. 
In addition, as I don't like too small integrals, I used the medint switch from the nccmath package: \documentclass{article} \usepackage[utf8]{inputenc} \usepackage{fourier} \usepackage{booktabs,mathtools, nccmath} \def\mint{\medint\int} \begin{document} \renewcommand{\arraystretch}{1.2}% Stretch out the tabular $\begin{array}{*{5}{c}} \toprule d & Z & UZ1 & UZ2 & U \\ \midrule d = 1 & \begin{cases} S_1 = \mint x_1{}^2, \\[3pt] S_2 = \mint x_2{}^2 \end{cases} & \begin{matrix*}[l]Y_3 = \mint y_3{}^2, \\ G_4 = \mint g_4{}^2\end{matrix*} \\ \addlinespace & \multicolumn{4}{l}{ S1, S2, S3, \dots } \\[0.5\normalbaselineskip] d = 2 \\ \bottomrule \end{array}$ \end{document} - There's nothing fancy about putting an array inside a tabular: \documentclass{article} \usepackage{booktabs,amsmath} \begin{document} \renewcommand{\arraystretch}{1.2}% Stretch out the tabular \begin{tabular}{*{5}{c}} \toprule $d$ & $Z$ & $UZ1$ & $UZ2$ & $U$ \\ \midrule $1$ & \raisebox{-.5\normalbaselineskip}{$\biggl\{$}$\begin{array}[t]{@{}r@{}l@{}} S_1 & {}= \int x_1{}^2, \\ S_2 & {}= \int x_2{}^2 \end{array}$ & $\begin{array}[t]{@{}r@{}l@{}} Y_3 & {}= \int y_3{}^2, \\ G_4 & {}= \int g_4{}^2 \end{array}$ \\ & \multicolumn{4}{l}{$S1, S2, S3, \dots$} \\[.5\normalbaselineskip] $2$ \\ \bottomrule \end{tabular} \end{document} If you must "keep Y3 and G4 ... 
not joined by another array", the following produces the exact same output (with proper alignment of Y3 and G4):

\documentclass{article}
\usepackage{booktabs,mathtools}

\begin{document}

\renewcommand{\arraystretch}{1.2}% Stretch out the tabular
\begin{tabular}{*{5}{c}}
\toprule
$d$ & $Z$ & $UZ1$ & $UZ2$ & $U$ \\
\midrule
$1$ &
  \smash{\raisebox{-.5\normalbaselineskip}{$\biggl\{$}$\begin{array}[t]{@{}r@{}l@{}}
    S_1 & {}= \int x_1{}^2, \\
    S_2 & {}= \int x_2{}^2
  \end{array}$} &
  $\phantom{G_4}\mathllap{Y_3} = \int y_3{}^2,$ \\
 & & $G_4 = \int g_4{}^2\phantom{,}$ \\
 & \multicolumn{4}{l}{$S1, S2, S3, \dots$} \\[.5\normalbaselineskip]
$2$ \\
\bottomrule
\end{tabular}

\end{document}

-

OP asked "While we keep $Y3$ and $G_4$ separated not joined by another array." (which is why the alignment is worse in my image:-) – David Carlisle May 24 at 0:41

@DavidCarlisle: Done... sorry, it didn't seem reasonable. – Werner May 24 at 0:54

It's not:-) (actually OP may have just meant that Y_3/G_4 were not to be braced, so perhaps you should put it back – David Carlisle May 24 at 1:05

\documentclass[onecolumn,amsmath,amssymb,nofootinbib,superscriptaddress,floatfix]{revtex4}
\usepackage{array}

\begin{document}
\begin{table}[htp]
\setlength\tabcolsep{8pt}
\setlength\extrarowheight{2pt}
\begin{tabular}{|c ||c| c| c| c|}
\hline
$d$ & $Z$ & $UZ1$ & $UZ2$ & $U$\\
 & & & & \\
\hline
$d=1$ &
  \smash{\raisebox{-10pt}{$\left\{
  \begin{array}{ll}
  S_1=\int x_1{}^2,\\
  S_2=\int x_2{}^2
  \end{array}
  \right.$}} &
  $Y_3=\int y_3{}^2$, & & \\[7pt]
 & & $G_4=\int g_4{}^2$ & & \\
\cline{2-5}
 & \multicolumn{4}{l|}{$S1, S2, S3, \dots$ } \\
\hline
$d=2$ & & & & \\
 & & & & \\
\hline
\end{tabular}
\caption{}
\end{table}
\end{document}

-

Here is a different approach.
\documentclass[onecolumn,amsmath,amssymb,nofootinbib,superscriptaddress,floatfix]{revtex4}
\usepackage{array}
\newcommand{\head}[1]{%
  %% code stolen from egreg
  \bfseries
  \begin{tabular}{@{}c@{}}
  \strut#1\strut
  \end{tabular}%
}
\begin{document}
\begin{table}[htp]
\setlength\tabcolsep{8pt}
\setlength\extrarowheight{2pt}
\begin{tabular}{|c ||c| c| c| c|}
\hline
$d$ & $Z$ & $UZ1$ & $UZ2$ & $U$\\
\hline
\head{$d=1$} &
\head{$\left\{
\begin{array}{ll}
S_1=\int x_1{}^2,\\
S_2=\int x_2{}^2
\end{array}
\right.$} &
\head{$Y_3=\int y_3{}^2$,\\[7pt] $G_4=\int g_4{}^2$ } & & \\
\cline{2-5}
 & \multicolumn{4}{l|}{$S1, S2, S3, \dots$ } \\
\hline
$d=2$ & & & & \\
 & & & & \\
\hline
\end{tabular}
\caption{}
\end{table}
\end{document}

And another:

\documentclass[onecolumn,amsmath,amssymb,nofootinbib,superscriptaddress,floatfix]{revtex4}
\usepackage{array}
\newcommand{\head}[1]{%
  %% code stolen from egreg
  \begin{tabular}{@{}c@{}}
  \strut#1\strut
  \end{tabular}%
}
\begin{document}
\begin{table}[htp]
\setlength\tabcolsep{8pt}
\setlength\extrarowheight{2pt}
\begin{tabular}{|c ||c| c| c| c|}
\hline
$d$ & $Z$ & $UZ1$ & $UZ2$ & $U$\\
\hline
\raisebox{-1.5\height}{\head{$d=1$}} &
\raisebox{-0.5\height}{\head{$\left\{
\begin{array}{ll}
S_1=\int x_1{}^2,\\
S_2=\int x_2{}^2
\end{array}
\right.$}} &
$Y_3=\int y_3{}^2$, & & \\[-1.25em]
 & & $G_4=\int g_4{}^2$ & & \\
\cline{2-5}
 & \multicolumn{4}{l|}{$S1, S2, S3, \dots$ } \\
\hline
$d=2$ & & & & \\
 & & & & \\
\hline
\end{tabular}
\caption{}
\end{table}
\end{document}

-
# Elliptic Integrals

1. Sep 2, 2006

### boarie

Dear gurus,

Can anyone kindly enlighten me on how to go about solving the attached equation, expressed in spherical coordinates? Basically, it describes the magnetic field in the radial direction, with r, theta and phi denoting the radius, polar and azimuthal angles. My problem is that I do not know how to relate this equation to an elliptic integral, as the integrand is raised to the power of 3/2. Any help is deeply appreciated.

2. Sep 6, 2006

### dextercioby

Here's the trick. You need to use the notations

$$R^{2}+r^{2} =p^{2}$$

$$2rR\sin\vartheta =u$$

One has that $p^2 >0 \ ,\ u>0$. Then the integral becomes

$$B_{r}(r,\vartheta) =C \int_{0}^{2\pi} \frac{d\phi}{\left(p^{2}-u \sin\phi\right)^{\frac{3}{2}}}$$

i.e. $C$ times the result below [the attached image with the elliptic-integral expression has not survived]. The notation for the complete elliptic integrals is the one Mathematica uses. You can check it out on the Wolfram site and compare it to the standard one (for example the one in Gradshteyn & Ryzhik).

Daniel.

Last edited: Nov 22, 2006
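As a numerical sanity check on this reduction (my own sketch, not part of the thread), the reduced integral can be evaluated with a simple trapezoidal rule. For u = 0 the integrand is constant, so the integral is exactly 2π/p³, and by Jensen's inequality (t ↦ t^(-3/2) is convex and sinφ averages to zero) the value grows as u increases toward p²:

```python
import math

def field_integral(p2, u, n=20000):
    """I(p^2, u) = integral over [0, 2*pi] of (p^2 - u*sin(phi))**(-3/2) dphi,
    via the trapezoidal rule; the integrand is smooth and 2*pi-periodic,
    so the rule converges very fast."""
    if not 0 <= u < p2:
        raise ValueError("need 0 <= u < p^2 so the integrand stays real")
    h = 2 * math.pi / n
    return h * sum((p2 - u * math.sin(k * h)) ** -1.5 for k in range(n))

# u = 0: the integrand is the constant (p^2)^(-3/2), so I = 2*pi / p^3.
print(abs(field_integral(4.0, 0.0) - 2 * math.pi / 8) < 1e-9)   # True
# u > 0 increases the value (convexity of t -> t^(-3/2)):
print(field_integral(4.0, 1.0) > field_integral(4.0, 0.0))      # True
```

This only checks the substitution step, not the closed form in terms of complete elliptic integrals, which was given in the missing image.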
Concerning the definition of limit

"In the first place, let me say that delta depends upon epsilon and not the other way round as you have stated. We select an arbitrary epsilon greater than zero and if we succeed in finding delta greater than zero satisfying the definition of the limit, then L is the limit. We have to express delta in terms of epsilon and, taking the condition that epsilon is positive, prove that delta is positive for every positive epsilon. If, on the other hand, the relation between delta and epsilon turns out so that delta is not positive for "every" positive epsilon, then we can conclude that L is not the limit. Remember that the definition has to be satisfied for every possible positive epsilon and not just one arbitrary positive epsilon."

reference:

So, I've asked about how we know (what is the proof) that epsilon is a function of delta, and now I want to ask the same question here. But I also have an additional question: why is the MR talking about "positive-negative"? I thought that the limit doesn't exist when we can't even make a relation between the two parts of the definition?

Thank you,

Last edited:

The answer lies in how a mathematical statement is interpreted. In this case, how is a universal quantifier followed by an existential quantifier interpreted? For example, "For every epsilon, there exists a delta…"

The following is the DEFINITION of how these are interpreted. For this example I will use P(x,y) to represent some statement about x and y, such as "x<y" or "x is the brother of y". The exact statement is not relevant to the discussion.

Consider: "For all x, there exists a y, such that P(x,y)"

To show this statement is true, you need to let someone pick any x they wish. Then, based upon that x, you need to find a y such that P(x,y) is true. Then, they get to go again! They can pick any x, and you again have to find a y such that P(x,y) is true.
This game goes on until the person has gone through all possible x's, and in each case you found a corresponding y. In this sense, y "depends" on x. If you succeeded EVERY time an x was chosen (and did so for all x's in the set) then the statement is true. Note, if for some x they pick, you cannot find a y, then the statement is false.

Also note, if the set of x's is infinite in size then going through them one by one is ridiculous to do, so another proof strategy is needed. The usual technique is to say "let x be arbitrary and fixed" and then, based on that "random" x, go find a y that will work. This is why it is necessary to write y as a function of the x. Think of it as instructions that tell the person how to go find the y based upon whatever x they chose.

Alternately, compare "For all x, there exists a y, such that P(x,y)" with the following statement: "There exists a y, for all x, P(x,y)." Now, for this to be true, you must find some y (fixed!) that will work for every x!!!

HallsofIvy
Homework Helper

The definition of "limit of f(x) as x approaches a" is "$\lim_{x\rightarrow a} f(x)= L$ if and only if, for every $\epsilon> 0$ there exists $\delta> 0$ such that if $0< |x-a|< \delta$, then $|f(x)- L|< \epsilon$."

The "for every $\epsilon> 0$ there exists $\delta> 0$" says that given any $\epsilon$, we can find $\delta$. That is the part that says $\delta$ depends on $\epsilon$. It does NOT mean, nor does any part of what you give above imply, that $\delta$ is a function of $\epsilon$. The same $\delta$ may apply for many different $\epsilon$.

Also, that definition requires that "$\epsilon> 0$" and "$\delta> 0$", which is why it is talking about "positive". If, given $\epsilon> 0$, you can find a corresponding $\delta$, but it is negative, that's not good enough.
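The quantifier game described above can be played mechanically over a finite sample (a toy illustration of mine, not from the thread), taking P(x,y) to be "x < y":

```python
def P(x, y):
    return x < y

sample = range(-100, 101)

# "For all x, there exists y, P(x,y)": the responder picks y AFTER seeing x.
# The rule y(x) = x + 1 wins every round of the game.
assert all(P(x, x + 1) for x in sample)

# "There exists y, for all x, P(x,y)": one FIXED y must beat every x.
# No y in the sample works, because the opponent can always answer x = y.
assert not any(all(P(x, y) for x in sample) for y in sample)
```

The first statement stays true over all integers, since the rule y = x + 1 always works; the second is false, because whatever fixed y is proposed, x = y is a counterexample. Only the order of the quantifiers differs.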
matt grime
Homework Helper

Delta, as Halls states (though I believe he meant to say that different delta may work for the same epsilon; functions cannot be one to many, but may be many to one), is not (necessarily) a function of epsilon in the mathematical sense of the word, but it can be viewed as such in the looser use of the word that exists in everyday English, and in fact one often sees the usefully suggestive notation d(e) to remind the reader that delta depends on epsilon, although that dependency may be trivial, and not actually a function. And often the way one proves something is continuous is to find a particular function d(e) in the proper mathematical sense. For example, to show that f(x)=x is continuous, using the function d(e)=e suffices. In fact, it is clear that one can actually require that delta be a function of epsilon (just choose a delta for each epsilon to make a function).

HallsofIvy
Homework Helper

Yes, thanks, Matt. Don't know where my mind was.

HallsofIvy said: "… but it is negative, that's not good enough."

How can it be negative???

Another question: what are the cases where the limit doesn't exist? Like, what situations could stop a limit from existing?

Russel Berty, well, it wasn't exactly what I was asking about, but that added a lot for me -really-, thank you -really- :) Thanks Matt for the clarification :D, and yes, I got what you said, thank you :D.

matt grime
Homework Helper

f(x)=0 for x ≠ 0 and f(0)=1 is not continuous at 0 - discontinuities can be thought of as 'steps' in a simplistic way.

1) Sorry, but in terms of the definition of limit, what are "all" the cases where the limit doesn't exist? (What are the possibilities?)

2) HallsofIvy said: "… but it is negative, that's not good enough." How can it be negative???

matt grime
Homework Helper

A function that is discontinuous at every point?
These are sort of pathological functions, but f(x)=0 when x is rational and 1 when x is irrational is the typical example - it even has a name, but I can't remember it for sure (Dirichlet function, maybe?).

matt grime said: "A function that is discontinuous at every point? These are sort of pathological functions, but f(x)=0 when x is rational and 1 when x is irrational is the typical example - it even has a name, but I can't remember it for sure (Dirichlet function, maybe?)."

Umm, let me ask my question in a better way, but I want an answer to this question first, please:

"… but it is negative, that's not good enough."

How can it be negative???

If you think you have a rule for finding delta given a value for epsilon, then the rule must not cause you to choose a negative value for delta. For example:

delta = epsilon - 1

This would never work as a rule in the limit argument because when we pick epsilon = .5 we get delta = -.5. Not allowed. The definition of limit requires both epsilon and delta to be positive (think of them as distances.)

As far as a function not having a limit at a certain point, there are several ways this can happen. When f(x) does not have a limit at x = a it could be due to one of these cases:

1) f(x) is not even defined on an interval around a. For example, ln(x) is not defined for x < 0 (nor x=0). So, ln(x) cannot have a limit at x = -1.

2) f(x) is unbounded as it approaches x=a. For example, f(x)=1/x does not have a limit at x=0. This is because f(x) climbs forever upwards from the right side of 0. Also, it falls forever downward from the left side of 0. Either argument would indicate that 1/x has no limit at 0.

3) f(x) keeps "jumping" around as you get close to x = a. This is a messier situation to describe. It happens when there are (at least) two values that f(x) tries to go to as x approaches a. Say, f(x) tries to go to 1 and it tries to go to 0 as x approaches a.
So, there are infinitely many points on the graph that are close to height 1 and infinitely many that are close to height 0, no matter what interval around x=a you are in. This is like the example mentioned earlier in the post (f(x) = 1 if x is irrational and f(x) = 0 when x is rational.)

Another example is f(x) = sin(1/x). In this case, as you get close to x = 0, f(x) will continually jump from -1 to 1. No matter how small the interval you choose around 0, say (-1/N , 1/N), f(x) will be at height 1 and at height -1 an infinite number of times. So, f(x) is not limiting to any specific value as it approaches 0.

"If you think you have a rule for finding delta given a value for epsilon, then the rule must not cause you to choose a negative value for delta. For example: delta = epsilon - 1. This would never work for a rule in the limit argument because when we pick epsilon = .5 we get delta = -.5. Not allowed. The definition of limit requires both epsilon and delta to be positive (think of them as distances.)"

Well, if this is the definition of limit:

for each real ε > 0 there exists a real δ > 0 such that for all x with 0 < |x − c| < δ, we have |f(x) − L| < ε,

then how would I ever get a negative delta? On the right sides we want delta and epsilon to be greater than 0, so epsilon is already positive. On the left sides we have absolute values, so we can't get any negative value. Well, the only possibility I can think of is that we may need to solve |f(x) − L| for some cases and put it in the form [f(x) − L] & -[f(x) − L].

So, can you give me a function and the flow of the proof where we will end up having a negative value?

HallsofIvy
Homework Helper

"Well, if this is the definition of limit: … then how would I ever get a negative delta?"

You wouldn't. If you did, then you made a mistake. That was my point!
"On the right sides we want delta and epsilon to be greater than 0, so epsilon is already positive. On the left sides we have absolute values, so we can't get any negative value. … So, can you give me a function and the flow of the proof where we will end up having a negative value?"

To show that $x^2$ is continuous at 0, you must show that, given some $\epsilon> 0$, there exists $\delta> 0$ such that if $|x|< \delta$, then $|x^2|< \epsilon$. You might find $\delta$ by taking the square root of both sides: $x< \pm \sqrt{\epsilon}$. If you took $\delta= -\sqrt{\epsilon}$ you would have a negative value for $\delta$. That would, of course, be a mistake.
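HallsofIvy's example can be checked mechanically (a sketch of mine, not from the thread): for f(x) = x² at a = 0, the rule δ(ε) = √ε is always positive and always wins the epsilon-delta game, while the rule δ = ε − 1 criticised earlier fails by going negative:

```python
import math
import random

def good_delta(eps):
    # Rule for f(x) = x^2 at a = 0: take delta = sqrt(eps), positive for every eps > 0.
    return math.sqrt(eps)

def bad_delta(eps):
    # The rule criticised in the thread: goes negative whenever eps < 1.
    return eps - 1

random.seed(0)
for _ in range(1000):
    eps = random.uniform(1e-9, 10.0)        # the adversary picks any eps > 0
    delta = good_delta(eps)
    assert delta > 0                        # a valid delta must be positive
    for _ in range(20):                     # spot-check points with 0 < |x| < delta
        x = random.uniform(-delta, delta)
        if 0 < abs(x) < delta:
            assert abs(x * x - 0) < eps     # |f(x) - L| < eps holds

assert bad_delta(0.5) < 0                   # eps = .5 gives delta = -.5: not allowed
```

The sampling only illustrates the definition; the actual proof is the one-line algebraic fact that |x| < √ε implies x² < ε.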
Question 69

# Bottle 1 contains a mixture of milk and water in 7:2 ratio and Bottle 2 contains a mixture of milk and water in 9:4 ratio. In what ratio of volumes should the liquids in Bottle 1 and Bottle 2 be combined to obtain a mixture of milk and water in 3:1 ratio?

Solution

The ratio of milk to water in Bottle 1 is 7:2 and the ratio of milk to water in Bottle 2 is 9:4. Therefore, the proportion of milk in Bottle 1 is $$\frac{7}{9}$$ and the proportion of milk in Bottle 2 is $$\frac{9}{13}$$.

Let the ratio in which they should be mixed be X:1. Then the total volume of milk is $$\frac{7X}{9}+\frac{9}{13}$$ and the total volume of water is $$\frac{2X}{9}+\frac{4}{13}$$.

They are in the ratio 3:1. Hence,

$$\frac{7X}{9}+\frac{9}{13} = 3\left(\frac{2X}{9}+\frac{4}{13}\right)$$

Multiplying through by 117 gives $$91X+81=78X+108$$, so $$X = \frac{27}{13}$$.

Hence the liquids from Bottle 1 and Bottle 2 should be combined in the ratio $$27:13$$.
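The arithmetic can be confirmed with exact rational arithmetic (a quick check of mine, not part of the original solution):

```python
from fractions import Fraction

# Mix X parts from Bottle 1 (milk fraction 7/9) with 1 part from Bottle 2
# (milk fraction 9/13) and check the resulting milk : water ratio.
X = Fraction(27, 13)
milk = X * Fraction(7, 9) + Fraction(9, 13)    # 30/13
water = X * Fraction(2, 9) + Fraction(4, 13)   # 10/13
print(milk / water)                            # 3, i.e. milk : water = 3 : 1
```

Using `Fraction` avoids any floating-point rounding, so the 3:1 ratio comes out exactly.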
+0 +1 85 13 +187

How many five digit even integers have a digit sum of 13?

edit: I don't want them listed; I want a way to do it.

Jul 15, 2019
edited by Mathgenius Jul 15, 2019

#1 0

There are 968 such numbers, beginning with: 10048 10066 10084 10138 10156 10174 10192 10228 .............etc.

Note: If you want them all listed, just let us know.

Jul 15, 2019

#2 +18754 0

Never mind! I see it says 'even' numbers !!!!!!

ElectricPavlov Jul 15, 2019
edited by ElectricPavlov Jul 15, 2019

#3 +187 0

it's even numbers

Mathgenius Jul 15, 2019

#4 +18754 0

Sorry, M-G.....saw that a little too late

ElectricPavlov Jul 15, 2019

#5 +28125 +2

Here is a "brute force and ignorance" piece of pseudo code to count the number of five-digit even integers that have a digit sum of 13:

Set n = 0                          n is the counter
for k = 10000 : 2 : 99998          loop from 10000 to 99998 in steps of 2
    a = floor(k/10^4)
    t = k - a*10^4
    b = floor(t/10^3)
    t = t - b*10^3
    c = floor(t/10^2)
    t = t - c*10^2
    d = floor(t/10)
    t = t - d*10
    sum = a + b + c + d + t
    if sum == 13 then n = n + 1
end for loop
display n

Jul 15, 2019

#6 0

Alan: Here is a "brute force and ignorance" REAL code that lists 968 numbers with a sum total of 13. But that is not what the questioner wants! He/she, I believe, wants a solution using combinations and permutations, and I haven't the faintest idea how to approach it!

n=0;p=0;cycle:a(10000+n);b=int(a/10000);c=int(a/1000);d=c%10;e=int(a/100);f=e%10;g=int(a/10);h=g%10;i=int(a/10);j=a%10;n=n+1;if(a%2==0 and b+d+f+h+j==13, goto loop,goto cycle);loop:p=p+1;printa," ",;if(n<84001, goto cycle, 0);print"Total = ",p

Jul 15, 2019

#7 +28125 0

I have no idea how to do it using permutations and combinations either - that's why I listed the pseudo code!
Alan Jul 15, 2019

#8 0

OK, young person, here is a solution to your problem courtesy of Wolfram/Alpha:

expand (x + x^2 + x^3 + x^4 + x^5 + x^6 + x^7 + x^8 + x^9)(1 + x + x^2 + x^3 + x^4 + x^5 + x^6 + x^7 + x^8 + x^9)^4:

x^45 + 5 x^44 + 15 x^43 + 35 x^42 + 70 x^41 + 126 x^40 + 210 x^39 + 330 x^38 + 495 x^37 + 714 x^36 + 992 x^35 + 1330 x^34 + 1725 x^33 + 2170 x^32 + 2654 x^31 + 3162 x^30 + 3675 x^29 + 4170 x^28 + 4620 x^27 + 4998 x^26 + 5283 x^25 + 5460 x^24 + 5520 x^23 + 5460 x^22 + 5283 x^21 + 4998 x^20 + 4620 x^19 + 4170 x^18 + 3675 x^17 + 3162 x^16 + 2654 x^15 + 2170 x^14 + 1725 x^13 + 1330 x^12 + 992 x^11 + 714 x^10 + 495 x^9 + 330 x^8 + 210 x^7 + 126 x^6 + 70 x^5 + 35 x^4 + 15 x^3 + 5 x^2 + x.

So, the number of all five-digit integers with digit sum 13 is the coefficient of x^13, which is 1725. Of these, 757 are ODD (counted by the computer code above). Therefore, the total number of all EVEN 5-digit numbers that sum up to 13 is 1,725 - 757 = 968 such numbers.

Jul 15, 2019

#9 +187 0

how did you get how many odd numbers there are?

Mathgenius Jul 16, 2019

#10 0

By the above computer code.
Guest Jul 16, 2019 #11 +22896 +5 How many five digit even integers sums up to 13 $$\begin{array}{|l|l|r|r|r|} \hline \text{5 digit even integers} & \text{partition} & \text{permutation} & - \text{partition} &- \text{permutation} \\ \hline 9\{4,0,0\}0 & P(4,1), P(4,2), P(4,3) & \\ &\{4,0,0\},\{3,1,0\},\{2,1,1\} & \binom{6}{2} \\ & \qquad\qquad \{2,2,0\} & \\ 9\{2,0,0\}2 & P(2,1), P(2,2), P(2,3) & \\ &\{2,0,0\},\{1,1,0\} & \binom{4}{2} \\ 9\{0,0,0\}4 & 1 & \frac{3!}{3!}=1= \binom{2}{2} \\ \hline 8\{5,0,0\}0 & P(5,1), P(5,2), P(5,3) & \\ &\{5,0,0\},\{4,1,0\},\{3,1,1\} & \binom{7}{2} \\ & \qquad\qquad \{3,2,0\},\{2,2,1\} & \\ 8\{3,0,0\}2 & P(3,1), P(3,2), P(3,3) & \\ &\{3,0,0\},\{2,1,0\},\{1,1,1\} & \binom{5}{2} \\ 8\{1,0,0\}4 &P(1,1), P(1,2), P(1,3) & \\ &\{1,0,0\} & \binom{3}{2} \\ \hline 7\{6,0,0\}0 & P(6,1), P(6,2), P(6,3) & \binom{8}{2} \\ 7\{4,0,0\}2 & P(4,1), P(4,2), P(4,3) & \binom{6}{2} \\ 7\{2,0,0\}4 & P(2,1), P(2,2), P(2,3) & \binom{4}{2} \\ 7\{0,0,0\}6 & 1 & \frac{3!}{3!}=1= \binom{2}{2} \\ \hline 6\{7,0,0\}0 & P(7,1), P(7,2), P(7,3) & \binom{9}{2} \\ 6\{5,0,0\}2 & P(5,1), P(5,2), P(5,3) & \binom{7}{2} \\ 6\{3,0,0\}4 & P(3,1), P(3,2), P(3,3) & \binom{5}{2} \\ 6\{1,0,0\}6 & P(1,1), P(1,2), P(1,3) & \binom{3}{2} \\ \hline 5\{8,0,0\}0 & P(8,1), P(8,2), P(8,3) & \binom{10}{2} \\ 5\{6,0,0\}2 & P(6,1), P(6,2), P(6,3) & \binom{8}{2} \\ 5\{4,0,0\}4 & P(4,1), P(4,2), P(4,3) & \binom{6}{2} \\ 5\{2,0,0\}6 & P(2,1), P(2,2), P(2,3) & \binom{4}{2} \\ 5\{0,0,0\}8 & 1 & \frac{3!}{3!}=1= \binom{2}{2} \\ \hline 4\{9,0,0\}2 & P(9,1), P(9,2), P(9,3) & \binom{11}{2} \\ 4\{7,0,0\}2 & P(7,1), P(7,2), P(7,3) & \binom{9}{2} \\ 4\{5,0,0\}4 & P(5,1), P(5,2), P(5,3) & \binom{7}{2} \\ 4\{3,0,0\}6 & P(3,1), P(3,2), P(3,3) & \binom{5}{2} \\ 4\{1,0,0\}8 & P(1,1), P(1,2), P(1,3) & \binom{3}{2} \\ \hline 3\{10,0,0\}0 & P(10,1), P(10,2), P(10,3) & \binom{12}{2} & \{10,0,0\} & - \frac{3!}{1!2!} \\ 3\{8,0,0\}2 & P(8,1), P(8,2), P(8,3) & \binom{10}{2} \\ 3\{6,0,0\}4 & P(6,1), P(6,2), P(6,3) & 
\binom{8}{2} \\ 3\{4,0,0\}6 & P(4,1), P(4,2), P(4,3) & \binom{6}{2} \\ 3\{2,0,0\}8 & P(2,1), P(2,2), P(2,3) & \binom{4}{2} \\ \hline 2\{11,0,0\}0 & P(11,1), P(11,2), P(11,3) & \binom{13}{2} & \{11,0,0\} & - \frac{3!}{1!2!} \\ & & & \{10,1,0\} & - \frac{3!}{1!1!1!} \\ 2\{9,0,0\}2 & P(9,1), P(9,2), P(9,3) & \binom{11}{2} \\ 2\{7,0,0\}4 & P(7,1), P(7,2), P(7,3) & \binom{9}{2} \\ 2\{5,0,0\}6 & P(5,1), P(5,2), P(5,3) & \binom{7}{2} \\ 2\{3,0,0\}8 & P(3,1), P(3,2), P(3,3) & \binom{5}{2} \\ \hline 1\{12,0,0\}0 & P(12,1), P(12,2), P(12,3) & \binom{14}{2} & \{12,0,0\} & - \frac{3!}{1!2!} \\ & & & \{11,1,0\} & - \frac{3!}{1!1!1!} \\ & & & \{10,2,0\} & - \frac{3!}{1!1!1!} \\ & & & \{10,1,1\} & - \frac{3!}{1!2!} \\ 1\{10,0,0\}2 & P(10,1), P(10,2), P(10,3) & \binom{12}{2} & \{10,0,0\} & - \frac{3!}{1!2!} \\ 1\{8,0,0\}4 & P(8,1), P(8,2), P(8,3) & \binom{10}{2} \\ 1\{6,0,0\}6 & P(6,1), P(6,2), P(6,3) & \binom{8}{2} \\ 1\{4,0,0\}8 & P(4,1), P(4,2), P(4,3) & \binom{6}{2} \\ \hline \end{array}$$ Sum off all permutations: $$\begin{array}{|rcll|} \hline && \binom{2}{2} + \binom{3}{2}+ \binom{4}{2}+ \binom{5}{2}+ \binom{6}{2}+ \binom{7}{2} \quad &|\quad 9\ldots , \text{ and } 8\ldots \\ &+& \binom{2}{2} + \binom{3}{2}+ \binom{4}{2}+ \binom{5}{2}+ \binom{6}{2}+ \binom{7}{2}+ \binom{8}{2}+ \binom{9}{2} \quad &|\quad 7\ldots ,\ \text{ and } 6\ldots \\ &+& \binom{2}{2} + \binom{3}{2}+ \binom{4}{2}+ \binom{5}{2}+ \binom{6}{2}+ \binom{7}{2}+ \binom{8}{2}+ \binom{9}{2}+ \binom{10}{2}+ \binom{11}{2} \quad &|\quad 5\ldots ,\ \text{ and } 4\ldots \\ &+& \binom{4}{2}+ \binom{5}{2}+ \binom{6}{2}+ \binom{7}{2}+ \binom{8}{2}+ \binom{9}{2}+ \binom{10}{2}+ \binom{11}{2}+ \binom{12}{2}+ \binom{13}{2}- 2\times\frac{3!}{1!2!}- 1\times \frac{3!}{1!1!1!} \quad &|\quad 3\ldots ,\ \text{ and } 2\ldots \\ &+& \binom{6}{2} + \binom{8}{2} + \binom{10}{2} + \binom{12}{2} + \binom{14}{2} - 3\times\frac{3!}{1!2!}- 2\times \frac{3!}{1!1!1!} \quad &|\quad 1\ldots \\\\ &=& \underbrace{\binom{2}{2} + \binom{3}{2}+ 
\binom{4}{2}+ \binom{5}{2}+ \binom{6}{2}+ \binom{7}{2} }_{= \binom{8}{3}\text{( hockey stick identity)} } \quad &|\quad 9\ldots , \text{ and } 8\ldots \\ &+& \underbrace{\binom{2}{2} + \binom{3}{2}+ \binom{4}{2}+ \binom{5}{2}+ \binom{6}{2}+ \binom{7}{2}+ \binom{8}{2}+ \binom{9}{2}}_{=\binom{10}{3}\text{( hockey stick identity)} } \quad &|\quad 7\ldots ,\ \text{ and } 6\ldots \\ &+& \underbrace{\binom{2}{2} + \binom{3}{2}+ \binom{4}{2}+ \binom{5}{2}+ \binom{6}{2}+ \binom{7}{2}+ \binom{8}{2}+ \binom{9}{2}+ \binom{10}{2}+ \binom{11}{2} }_{=\binom{12}{3}\text{( hockey stick identity)} } \quad &|\quad 5\ldots ,\ \text{ and } 4\ldots \\ &+& \underbrace{\binom{4}{2}+ \binom{5}{2}+ \binom{6}{2}+ \binom{7}{2}+ \binom{8}{2}+ \binom{9}{2}+ \binom{10}{2}+ \binom{11}{2}+ \binom{12}{2}+ \binom{13}{2} }_{=\binom{14}{3} -\binom{3}{2} -\binom{2}{2} \text{( hockey stick identity)} }- 2\times\frac{3!}{1!2!}- 1\times \frac{3!}{1!1!1!} \quad &|\quad 3\ldots ,\ \text{ and } 2\ldots \\ &+& \binom{6}{2} + \binom{8}{2} + \binom{10}{2} + \binom{12}{2} + \binom{14}{2} - 3\times\frac{3!}{1!2!}- 2\times \frac{3!}{1!1!1!} \quad &|\quad 1\ldots \\\\ &=& \binom{8}{3} + \binom{10}{3} + \binom{12}{3} + \binom{14}{3} -\underbrace{\left(\binom{2}{2} +\binom{3}{2}\right)}_{=\binom{4}{3}\text{( hockey stick identity)} } \\ &+& \binom{6}{2} + \binom{8}{2} + \binom{10}{2} + \binom{12}{2} + \binom{14}{2} - 5\times\frac{3!}{1!2!}- 3\times \frac{3!}{1!1!1!} \\\\ &=& \binom{8}{3} + \binom{10}{3} + \binom{12}{3} + \binom{14}{3} - \binom{4}{3} \\ &+& \binom{6}{2} + \binom{8}{2} + \binom{10}{2} + \binom{12}{2} + \binom{14}{2} - 5\times\frac{3!}{1!2!}- 3\times \frac{3!}{1!1!1!} \\\\ && \boxed{\binom{8}{2}+\binom{8}{3} = \binom{9}{3} \\ \binom{10}{2}+\binom{10}{3} = \binom{11}{3} \\ \binom{12}{2}+\binom{12}{3} = \binom{13}{3} \\ \binom{14}{2}+\binom{14}{3} = \binom{15}{3} } \\\\ &=& \mathbf{\binom{9}{3} + \binom{11}{3} + \binom{13}{3} + \binom{15}{3} - \binom{4}{3} + \binom{6}{2} - 5\times\frac{3!}{1!2!}- 3\times 
\frac{3!}{1!1!1!}} \\\\ &=& \binom{9}{3} + \binom{11}{3} + \binom{13}{3} + \binom{15}{3} - \binom{4}{3} + \binom{6}{2} - 5\times 3- 3\times 6 \\\\ &=& \binom{9}{3} + \binom{11}{3} + \binom{13}{3} + \binom{15}{3} - \binom{4}{3} + \binom{6}{2} -33 \\\\ &=& 84 + 165 + 286 + 455 - 4 + 15 -33 \\ &=& \mathbf{968} \\ \hline \end{array}$$ Jul 16, 2019 edited by heureka  Jul 16, 2019 #12 +1698 +3 An Amazing and Very COOL Presentation, Heureka! GingerAle  Jul 16, 2019 #13 +22896 +3 Thank you, GingerAle ! heureka  Jul 16, 2019
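Both approaches used in the thread, the brute-force count and the generating-function coefficient, are easy to reproduce; here is a short sketch of mine doing each in a few lines:

```python
# Brute force: count five-digit even/odd integers with digit sum 13.
def digit_sum(n):
    return sum(int(d) for d in str(n))

even_count = sum(1 for n in range(10000, 100000, 2) if digit_sum(n) == 13)
odd_count = sum(1 for n in range(10001, 100000, 2) if digit_sum(n) == 13)

# Generating function: the coefficient of x^13 in
# (x + ... + x^9)(1 + x + ... + x^9)^4 counts ALL such integers, odd and even.
def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

poly = [0] + [1] * 9                      # leading digit: 1..9
for _ in range(4):
    poly = poly_mul(poly, [1] * 10)       # each remaining digit: 0..9

print(even_count, odd_count, poly[13])    # 968 757 1725
```

The two totals agree with the thread: 968 even and 757 odd numbers, summing to the generating-function coefficient 1725.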
## Algebra 1

Published by Prentice Hall

# Chapter 8 - Polynomials and Factoring - Chapter Review - 8-1 Adding and Subtracting Polynomials: 14

#### Answer

$9h^{3}-3h+3$

#### Work Step by Step

Simplify and write in standard form: $(4h^{3}+3h+1)-(-5h^{3}+6h-2)$

Distribute the $-$ across the second set of parentheses: $4h^{3}+3h+1+5h^{3}-6h+2$

Combine like terms and simplify: $9h^{3}-3h+3$
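A quick coefficient-wise check (my own sketch, not part of the textbook solution), representing each polynomial by its coefficient list in the order [h^3, h^2, h, constant]:

```python
# (4h^3 + 3h + 1) - (-5h^3 + 6h - 2), subtracted term by term
p = [4, 0, 3, 1]      # 4h^3 + 0h^2 + 3h + 1
q = [-5, 0, 6, -2]    # -5h^3 + 0h^2 + 6h - 2

diff = [a - b for a, b in zip(p, q)]
print(diff)   # [9, 0, -3, 3]  ->  9h^3 - 3h + 3
```

Subtracting the coefficient lists elementwise is exactly the "distribute the minus, then combine like terms" step done all at once.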
• #### Charged-particle multiplicities in $pp$ interactions at $\sqrt{s}$ = 900 GeV measured with the ATLAS detector at the LHC  (Peer reviewed; Journal article, 2010-04)

The first measurements from proton-proton collisions recorded with the ATLAS detector at the LHC are presented. Data were collected in December 2009 using a minimum-bias trigger during collisions at a centre-of-mass energy ...
# Smoothness proof for harmonic function

I was reading the proof of Theorem 6 in Evans's PDE book. I do not understand the last 2 steps in the proof. Please, can anyone help me to understand? Any help will be appreciated.

The second-to-last equality follows directly from the mean-value property. The last one comes from recognizing that

\begin{align*}\frac{1}{\epsilon^n}u(x)\int\limits_0^\epsilon\eta\left(\frac{r}{\epsilon}\right)n\alpha(n)r^{n-1}\, dr&=u(x)\int\limits_0^\epsilon\eta_\epsilon(r)n\alpha(n)r^{n-1}\, dr\\ &=u(x)\int\limits_{S^{n-1}}\int\limits_0^\epsilon\eta_\epsilon(r)r^{n-1}\, drd\sigma(\omega)\\ &=u(x)\int\limits_{B(0,\epsilon)} \eta_\epsilon\, dy, \end{align*}

where $$d\sigma(\omega)$$ is the surface measure on the sphere. In particular, the last equality just comes from polar coordinates:

$$\int\limits_{B(0,s)} f(x)\, dx=\int\limits_{S^{n-1}}\int\limits_0^s f(r\omega)r^{n-1}\, drd\sigma(\omega).$$

In our case, the integrand is purely radial, so the integral over $$S^{n-1}$$ just gives the surface area of $$S^{n-1}.$$ It's perhaps easiest to understand by starting at the end and reading backwards to the beginning.
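The polar-coordinates identity is easy to test numerically in the plane (n = 2, where the surface measure of the circle has total mass 2π); this is my own sanity check, not part of the Evans proof:

```python
import math

def disk_integral(f, s=1.0, n=600):
    """Midpoint-rule approximation of the integral of f(x, y) over B(0, s) in R^2."""
    h = 2 * s / n
    total = 0.0
    for i in range(n):
        x = -s + (i + 0.5) * h
        for j in range(n):
            y = -s + (j + 0.5) * h
            if x * x + y * y < s * s:       # keep cells whose center is inside
                total += f(x, y)
    return total * h * h

def radial_integral(g, s=1.0, n=4000):
    """2*pi * integral of g(r) * r dr over [0, s], the n = 2 polar form."""
    h = s / n
    return 2 * math.pi * sum(g((k + 0.5) * h) * (k + 0.5) * h for k in range(n)) * h

# Radial integrand f = r^2; exact value is 2*pi * (1/4) = pi/2.
lhs = disk_integral(lambda x, y: x * x + y * y)
rhs = radial_integral(lambda r: r * r)
print(lhs, rhs)   # both close to pi/2 ~ 1.5708
```

The grid value only matches to a few decimal places (the boundary cells are crude), but it makes the structure of the identity concrete: the angular integral collapses to the constant 2π exactly as the answer says.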
# zbMATH — the first resource for mathematics

Improved energy estimates for interior penalty, constrained and discontinuous Galerkin methods for elliptic problems. I. (English) Zbl 0951.65108

The authors discuss three numerical algorithms for elliptic problems which employ discontinuous approximation spaces and obtain optimal a priori $$hp$$ error estimates in $$H^1$$ and $$L^2$$ or $$H^1$$ for all three procedures. The three methods are called the non-symmetric interior penalty Galerkin method, the non-symmetric constrained Galerkin method, and the discontinuous Galerkin method. The three algorithms are closely related in that the underlying bilinear form for all three is the same and is non-symmetric. All three methods are locally conservative.

##### MSC:

65N15 Error bounds for boundary value problems involving PDEs
65N30 Finite element, Rayleigh-Ritz and Galerkin methods for boundary value problems involving PDEs
35J25 Boundary value problems for second-order elliptic equations
## Abstract and Applied Analysis

### Explicit Formulas Involving $q$-Euler Numbers and Polynomials

#### Abstract

We deal with $q$-Euler numbers and $q$-Bernoulli numbers. We derive some interesting relations for $q$-Euler numbers and polynomials by using their generating function and derivative operator. Also, we derive relations between the $q$-Euler numbers and $q$-Bernoulli numbers via the $p$-adic $q$-integral in the $p$-adic integer ring.

#### Article information

Source: Abstr. Appl. Anal., Volume 2012 (2012), Article ID 298531, 11 pages.

Dates: First available in Project Euclid: 28 March 2013

https://projecteuclid.org/euclid.aaa/1364475903

Digital Object Identifier: doi:10.1155/2012/298531

Mathematical Reviews number (MathSciNet): MR2994920

Zentralblatt MATH identifier: 1257.11019

#### Citation

Araci, Serkan; Acikgoz, Mehmet; Seo, Jong Jin. Explicit Formulas Involving $q$-Euler Numbers and Polynomials. Abstr. Appl. Anal. 2012 (2012), Article ID 298531, 11 pages. doi:10.1155/2012/298531. https://projecteuclid.org/euclid.aaa/1364475903
## 5.1 Bayesian Kernel Machine Regression

### 5.1.1 Introduction

Bayesian Kernel Machine Regression (BKMR) is designed to address, in a flexible non-parametric way, several objectives such as detection and estimation of an effect of the overall mixture, identification of the pollutant or group of pollutants responsible for observed mixture effects, visualization of the exposure-response function, and detection of interactions among individual pollutants.

The main idea of BKMR is to model exposure by means of a kernel function. Specifically, the general modeling framework is

$Y_i = h(z_{i1}, \ldots, z_{iM}) + \beta x_i + \epsilon_i$

where $$Y_i$$ is a continuous, normally distributed health endpoint, $$h$$ is a flexible function of the predictor variables $$z_{i1}, \ldots, z_{iM}$$, and $$x_i$$ is a vector of covariates assumed to have a linear relationship with the outcome.

There are several choices for the kernel function used to represent $$h$$. The focus here is on the Gaussian kernel, which flexibly captures a wide range of underlying functional forms for $$h$$ and can accommodate nonlinear and non-additive effects of the multivariate exposure. Specifically, the Gaussian kernel implies the following representation for $$h$$:

$K_{vs}(z_i, z_j) = \exp\left\{-\sum_{m=1}^{M} r_m (z_{im} - z_{jm})^2\right\}$

Intuitively, the kernel function shrinks the estimated health effects of two individuals with similar exposure profiles toward each other. The weights $$r_m$$ represent the probability that each exposure is important in the function, with $$r_m = 0$$ indicating that there is no association between the $$m^{th}$$ exposure and the outcome. By allowing some weights to be 0, the method implicitly embeds a variable selection procedure. This can also integrate information on existing structures among exposures (e.g.
correlation clusters, PCA results, similar mechanisms ...) with the so-called hierarchical variable selection, which estimates the probability that each group of exposures is important, and the probability that, given a group is important, each exposure in that group is driving that group-outcome association.

### 5.1.2 Estimation

BKMR takes its full name from the Bayesian approach used for estimating the parameters. The advantages of this include the ability to estimate the importance of each variable ($$r_m$$) simultaneously, to estimate uncertainty measures, and to easily extend the estimation to longitudinal data. Since the estimation is built within an iterative procedure (MCMC), variable importance is provided in terms of the Posterior Inclusion Probability (PIP), the proportion of iterations with $$r_m > 0$$. Typically, several thousand iterations are required.

The bkmr R package developed by the authors makes implementation of this technique relatively straightforward. Using our illustrative example, the following chunk of code presents a set of lines that are required before estimating a BKMR model. Specifically, we define the objects containing the mixture ($$X_{1}-X_{14}$$), the outcome ($$Y$$), and the confounders ($$Z_1-Z_3$$). We also need to set a seed (we are using an iterative process with a random component) and a knots matrix that will help speed up the process. This final step is very important, as the model estimation can take an extremely long time (the recommendation is to use a number of knots of roughly n/10).

```r
mixture <- as.matrix(data2[, 3:16])
y <- data2$y
covariates <- as.matrix(data2[, 17:19])
set.seed(10)
knots100 <- fields::cover.design(mixture, nd = 50)$design
```

The actual estimation of a BKMR model is very simple and requires one line of R code. With the following lines we fit a BKMR model with a Gaussian predictive process using the knot design generated above.
We are using 1000 MCMC iterations for the sake of time, but a final analysis should be run on a much larger number of samples, up to 50000. Here we are allowing for variable selection, but not providing any information on grouping.

```r
temp <- kmbayes(y=y, Z=mixture, X=covariates, iter=1000, verbose=FALSE,
                varsel=TRUE, knots=knots100)
```

Table 5.1: Posterior Inclusion Probabilities in the simulated dataset

| variable | PIP |
|---|---|
| x1 | 0.110 |
| x2 | 0.082 |
| x3 | 0.000 |
| x4 | 0.000 |
| x5 | 0.072 |
| x6 | 0.142 |
| x7 | 0.000 |
| x8 | 0.336 |
| x9 | 0.062 |
| x10 | 0.400 |
| x11 | 0.188 |
| x12 | 0.818 |
| x13 | 0.080 |
| x14 | 0.158 |

The ExtractPIPs() command will show one of the most important results, the posterior inclusion probabilities, shown in Table 5.1. We can interpret this output as the variable selection part, in which we get information on the importance of each covariate in defining the exposures-outcome association. In descending order, the most important contribution seems to come from $$X_{12}, X_{6}, X_{10}, X_{2}, X_{14}, X_{11}$$. This is in agreement with Elastic Net and WQS, which also identified $$X_{12}$$ and $$X_6$$ as the most important contributors. Also note that within the other cluster we haven't yet been able to understand who the bad actor is, if any exists.

### 5.1.3 Trace plots and burn-in phase

Since we are using several iterations, it is important to evaluate the convergence of the parameters. This can be checked by looking at trace plots (what we expect here is some kind of random behavior around a straight line). What we generally observe is an initial burn-in phase, which we should remove from the analysis. Here, we are removing the first 100 iterations; this number should be modified depending on the results of your first plots (Figures 5.1 and 5.2).
```r
sel <- seq(0, 1000, by=1)
TracePlot(fit = temp, par = "beta", sel = sel)

sel <- seq(100, 1000, by=1)
TracePlot(fit = temp, par = "beta", sel = sel)
```

### 5.1.4 Visualizing results

After estimation of a BKMR model, which is relatively straightforward and just requires patience throughout iterations, most of the work consists of producing post-estimation figures that can present the complex relationship between the mixture and the outcome. The R package includes several functions to summarize the model output in different ways and to visually display the results. To visualize the exposure-response functions, we need to create different dataframes with the predictions that will then be graphically displayed with ggplot.

```r
pred.resp.univar <- PredictorResponseUnivar(fit=temp, sel=sel, method="approx")
pred.resp.bivar <- PredictorResponseBivar(fit=temp, min.plot.dist = 1,
                                          sel=sel, method="approx")
pred.resp.bivar.levels <- PredictorResponseBivarLevels(pred.resp.df = pred.resp.bivar,
                                                       Z = mixture, both_pairs = TRUE,
                                                       qs = c(0.25, 0.5, 0.75))
risks.overall <- OverallRiskSummaries(fit=temp, qs=seq(0.25, 0.75, by=0.05),
                                      q.fixed = 0.5, method = "approx", sel=sel)
risks.singvar <- SingVarRiskSummaries(fit=temp, qs.diff = c(0.25, 0.75),
                                      q.fixed = c(0.25, 0.50, 0.75), method = "approx")
risks.int <- SingVarIntSummaries(fit=temp, qs.diff = c(0.25, 0.75),
                                 qs.fixed = c(0.25, 0.75))
```

The first three objects will allow us to examine the predictor-response functions, while the next three will calculate a range of summary statistics that highlight specific features of the surface.

#### 5.1.4.1 Univariate dose-responses

One cross section of interest is the univariate relationship between each covariate and the outcome, where all of the other exposures are fixed to a particular percentile (Figure 5.3). This can be done using the function PredictorResponseUnivar. The argument specifying the quantile at which to fix the other exposures is given by q.fixed (the default value is q.fixed = 0.5).
```r
ggplot(pred.resp.univar, aes(z, est, ymin = est - 1.96*se, ymax = est + 1.96*se)) +
  geom_smooth(stat = "identity") +
  ylab("h(z)") +
  facet_wrap(~ variable)
```

We can conclude from these figures that all selected covariates have weak to moderate associations, and that all dose-responses seem to be linear (maybe leaving some benefit of the doubt to $$X_6$$).

#### 5.1.4.2 Bivariate Exposure-Response Functions

This visualizes the bivariate exposure-response function for two predictors, where all of the other predictors are fixed at a particular percentile (Figure 5.4).

```r
ggplot(pred.resp.bivar, aes(z1, z2, fill = est)) +
  geom_raster() +
  facet_grid(variable2 ~ variable1) +
  xlab("expos1") +
  ylab("expos2") +
  ggtitle("h(expos1, expos2)")
```

#### 5.1.4.3 Interactions

Figure 5.4 might not be the most intuitive way of checking for interactions. An alternative approach is to investigate the predictor-response function of a single predictor in Z with the second predictor in Z fixed at various quantiles (and the remaining predictors fixed to a specific value). These plots can be obtained using the PredictorResponseBivarLevels function, which takes as input the bivariate exposure-response function outputted from the previous command, where the argument qs specifies a sequence of quantiles at which to fix the second predictor. From the full set of combinations (Figure 5.5) we can easily select a specific one that we want to present, like the X6-X12 pair (Figure 5.6).

```r
ggplot(pred.resp.bivar.levels, aes(z1, est)) +
  geom_smooth(aes(col = quantile), stat = "identity") +
  facet_grid(variable2 ~ variable1) +
  ggtitle("h(expos1 | quantiles of expos2)") +
  xlab("expos1")
```

These figures do not provide any evidence of interactions throughout the mixture. As we know, this is correct, since no interactions were specified in the simulated dataset.
#### 5.1.4.4 Overall Mixture Effect

Another interesting summary plot is the overall effect of the mixture, calculated by comparing the value of $$h$$ when all predictors are at a particular percentile as compared to when all of them are at their 50th percentile (Figure 5.7).

```r
ggplot(risks.overall, aes(quantile, est, ymin = est - 1.96*sd, ymax = est + 1.96*sd)) +
  geom_hline(yintercept = 0, linetype = "dashed", color = "gray") +
  geom_pointrange() +
  scale_y_continuous(name = "estimate")
```

In agreement with WQS, higher exposure to the overall mixture is associated with a higher mean outcome.

#### 5.1.4.5 Single Variable Effects

This additional function summarizes the contribution of an individual predictor to the response. For example, we may wish to compare the outcome when a single predictor in $$h$$ is at the 75th percentile as compared to when that predictor is at its 25th percentile, where we fix all of the remaining predictors to a particular percentile (Figure 5.8).

```r
ggplot(risks.singvar, aes(variable, est, ymin = est - 1.96*sd, ymax = est + 1.96*sd,
                          col = q.fixed)) +
  geom_hline(aes(yintercept = 0), linetype = "dashed", color = "gray") +
  geom_pointrange(position = position_dodge(width = 0.75)) +
  coord_flip() +
  theme(legend.position = "none") +
  scale_x_discrete(name = "") +
  scale_y_continuous(name = "estimate")
```

#### 5.1.4.6 Single Variable Interaction Terms

Finally, this function is similar to the previous one, but refers to the interaction of a single exposure with all other covariates. It attempts to represent an overall interaction between that exposure and all other components (Figure 5.9).

```r
ggplot(risks.int, aes(variable, est, ymin = est - 1.96*sd, ymax = est + 1.96*sd)) +
  geom_pointrange(position = position_dodge(width = 0.75)) +
  geom_hline(yintercept = 0, lty = 2, col = "brown") +
  coord_flip()
```

Consistent with our earlier conclusion, this graph also shows no evidence of interaction for any covariate.
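Before moving on, the role of the weights $$r_m$$ in the Gaussian kernel of Section 5.1.1 is easy to see numerically. The following sketch is purely illustrative (plain Python, not part of the bkmr package): setting $$r_m = 0$$ makes the kernel treat two profiles as identical along that exposure, which is exactly what excluding a variable means.

```python
import math

def gaussian_kernel(z_i, z_j, r):
    # K(z_i, z_j) = exp(-sum_m r_m * (z_im - z_jm)^2)
    return math.exp(-sum(rm * (a - b) ** 2 for rm, a, b in zip(r, z_i, z_j)))

# Two exposure profiles that differ only in the second exposure:
zi, zj = [1.0, 5.0], [1.0, 0.0]
print(gaussian_kernel(zi, zj, [1.0, 1.0]))  # tiny: profiles look dissimilar
print(gaussian_kernel(zi, zj, [1.0, 0.0]))  # 1.0: second exposure is ignored
```

A kernel value near 1 means two subjects' fitted values of $$h$$ are strongly shrunk toward each other; a value near 0 means they are fit essentially independently.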
### 5.1.5 Hierarchical selection

The variable selection procedure embedded into BKMR can also operate within a hierarchical procedure. Using our example, we could for instance inform the model that there are highly correlated clusters of exposures. This will allow us to get an estimate of the relative importance of each cluster and of each exposure within it. The procedure is implemented as follows, where we are specifically informing the model that there is a cluster of three highly correlated covariates:

```r
hier <- kmbayes(y=y, Z=mixture, X=covariates, iter=1000, verbose=FALSE,
                varsel=TRUE, knots=knots100,
                groups=c(1,1,2,2,2,1,1,1,1,1,1,1,1,1))
```

Table 5.2: Posterior Inclusion Probabilities from Hierarchical BKMR in the simulated dataset

| variable | group | groupPIP | condPIP |
|---|---|---|---|
| x1 | 1 | 1.000 | 0.0000000 |
| x2 | 1 | 1.000 | 0.0000000 |
| x3 | 2 | 0.044 | 0.3636364 |
| x4 | 2 | 0.044 | 0.4545455 |
| x5 | 2 | 0.044 | 0.1818182 |
| x6 | 1 | 1.000 | 0.0160000 |
| x7 | 1 | 1.000 | 0.0040000 |
| x8 | 1 | 1.000 | 0.0620000 |
| x9 | 1 | 1.000 | 0.0540000 |
| x10 | 1 | 1.000 | 0.0000000 |
| x11 | 1 | 1.000 | 0.0020000 |
| x12 | 1 | 1.000 | 0.4500000 |
| x13 | 1 | 1.000 | 0.1460000 |
| x14 | 1 | 1.000 | 0.2660000 |

Group PIPs, shown in Table 5.2, seem to point out that the cluster is somewhat relevant in the dose-response association, and indicate that $$X_4$$ might be the most relevant of the three exposures.

### 5.1.6 BKMR Extensions

The first release of BKMR was only available for evaluating continuous outcomes, but recent work has extended its use to the context of binary outcomes, which is also integrated in the latest versions of the package. Other authors have also described how to apply BKMR with time-to-event outcomes. Additional extensions of the approach that could be of interest in several settings include a longitudinal version of BKMR based on lagged regression, which can be used to evaluate time-varying mixtures (Liu et al. 2018). While this method is not yet implemented in the package, it is important to note that similar results can be achieved by evaluating time-varying effects through hierarchical selection.
In brief, multiple measurements of exposures can be included simultaneously in the kernel, grouping exposures by time. An example of this application can be found in Tyagi et al. (2021), evaluating exposures to phthalates during pregnancy, measured at different trimesters, as they relate to final gestational weight. By providing a measure of group importance, group PIPs can here be interpreted as measures of the relative importance of the time-windows of interest, thus allowing a better understanding of the timing of higher susceptibility to mixture exposures.

### 5.1.7 Practical considerations and discussion

To conclude our presentation of BKMR, let's list some useful considerations that one should take into account when applying this methodology:

• As a Bayesian technique, prior information could be specified on the model parameters. Nevertheless, this is not commonly done, and all code presented here assumes the use of non-informative priors. In general, it is good to remember that PIP values can be sensitive to priors (although the relative importance tends to be stable).

• Because of this sensitivity, PIP values should only be interpreted as a relative measure of importance (a ranking of the importance of exposures). Several applied papers have used thresholds (e.g. 0.5) to define a variable as "important", but this interpretation is erroneous and misleading.

• The BKMR algorithm is more stable when it isn't dealing with exposures on vastly different scales. We typically center and scale both the outcome and the exposures (and continuous confounders). Similarly, we should be wary of exposure outliers, and log-transforming exposures is also recommended.

• BKMR operates a variable selection procedure. As such, a PIP of 0 will imply that the dose-response for that covariate is a straight line at zero. This does not mean that a given exposure has no effect on the outcome, but simply that it was not selected in the procedure.
As a matter of fact, when an exposure has a weak effect on the outcome, BKMR will tend to exclude it. As a consequence, the overall mixture effect will really represent the overall effect of the selected exposures.

• As a Bayesian technique, BKMR is not based on the classical statistical framework of null-hypothesis testing. 95% CIs are interpreted as credible intervals, and common discussions of statistical power should be avoided.

• Despite the estimation improvements through the use of knots as previously described, fitting a BKMR model remains time-demanding. In practice, you might be able to fit a BKMR model on a dataset of up to 10,000 individuals (still waiting a few hours to get your results). For any larger dataset, alternative approaches should be considered.

• BKMR is a flexible non-parametric method that is designed to deal with complex settings with non-linearities and interactions. In standard situations, regression methods could provide a better estimation and an easier interpretation of results. In practical terms, you would never begin your analysis by fitting a BKMR model, but only get to it for results validation or if alternative techniques were not sufficiently equipped to deal with your data.

### References

Domingo-Relloso, Arce, Maria Grau-Perez, Laisa Briongos-Figuero, Jose L Gomez-Ariza, Tamara Garcia-Barrera, Antonio Dueñas-Laita, Jennifer F Bobb, et al. 2019. "The Association of Urine Metals and Metal Mixtures with Cardiovascular Incidence in an Adult Population from Spain: The Hortega Follow-up Study." International Journal of Epidemiology 48 (6): 1839–49.

Liu, Shelley H, Jennifer F Bobb, Kyu Ha Lee, Chris Gennings, Birgit Claus Henn, David Bellinger, Christine Austin, et al. 2018. "Lagged Kernel Machine Regression for Identifying Time Windows of Susceptibility to Exposures of Complex Mixtures." Biostatistics 19 (3): 325–41.
Tyagi, Pooja, Tamarra James-Todd, Lidia Mínguez-Alarcón, Jennifer B Ford, Myra Keller, John Petrozza, Antonia M Calafat, et al. 2021. "Identifying Windows of Susceptibility to Endocrine Disrupting Chemicals in Relation to Gestational Weight Gain Among Pregnant Women Attending a Fertility Clinic." Environmental Research 194: 110638.
## Counting Problem

mathmath333, one year ago

1. mathmath333: Find the number of words, with or without meaning, which can be made using all the letters of the word AGAIN. If these words are written as in a dictionary, what will be the 50th word?

2. freckles: hmmm we know there will be 5!/2! words aka 60 words. so now we got to figure out I guess how many words start with a, how many with g, how many with i, and then last how many with n

3. mathmath333: yes

4. mathmath333: dictionary operates in alphabetical order

5. ganeshie8: I think it would be easy to count from the last page (60th word)

6. ganeshie8: or in reverse alphabetical order..

7. mathmath333: i still dont understand how to find 50th word

8. ganeshie8: First notice that finding the 50th word in alphabetical order is the same as finding the 11th word in reverse alphabetical order

9. mathmath333: yes i get that

10. ganeshie8: whats the first word in reverse alphabetical order?

11. mathmath333: sleek-feathered oneA

12. mathmath333: lol, sleek-feathered oneA

13. ganeshie8: openstudy doesn't like sleek-feathered oneA, try putting spaces between letters..

14. mathmath333: openstudy is misunderstanding my word hmm

15. mathmath333: N I G A A

16. ganeshie8: Right. Fix the first letter. Forget about the first letter. How many words can you make with the remaining four letters?

17. mathmath333: 4!/2! = 12

18. ganeshie8: Yes, whats the 12th word?

19. mathmath333: N A A G I

20. ganeshie8: Yes, whats the 11th word?

21.
mathmath333: N A A I G

22. mathmath333: is it correct

23. ganeshie8: Yep! thats the 11th word in reverse alphabetical order

24. ganeshie8: As discussed earlier, that will be the 50th word in alphabetical order

25. mathmath333: yep thnkz
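The answer worked out in the thread is easy to verify by brute force. A short Python sketch that lists the distinct arrangements of AGAIN in dictionary order:

```python
from itertools import permutations

# All distinct words formed from the letters of AGAIN, in dictionary order.
words = sorted(set("".join(p) for p in permutations("AGAIN")))
print(len(words))  # 60, i.e. 5!/2! since A appears twice
print(words[49])   # 50th word (1-indexed): NAAIG
```

This confirms both the count of 60 words and NAAIG as the 50th word.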
## Infinite Limits - Limits At Infinity

### Finite Limits, Infinite Limits, Limits At Infinity . . . Terminology Explained

The terms finite limits, infinite limits and limits at infinity are used differently in various books, and your instructor may have their own idea of what they mean.
In this panel, we will try to break down the cases and explain the various ways these terms can be used, as well as how we use them here at 17calculus. When we talk about limits, we are looking at $$\displaystyle{ \lim_{x \to c}{f(x)} = L }$$. The various terms apply to the description of $$c$$ and $$L$$ and are shown in the table below. The confusion lies with the terms finite limits and infinite limits; they can mean two different things.

| case for $$\displaystyle{ \lim_{x \to c}{f(x)} = L }$$ | term(s) used |
|---|---|
| $$c$$ is finite | limits approaching a finite value, or finite limits |
| $$c$$ is infinite ($$\pm \infty$$) | limits at infinity, or infinite limits |
| $$L$$ is finite | finite limits |
| $$L$$ is infinite ($$\pm \infty$$) | infinite limits |

You can see where the confusion lies: the terms finite limits and infinite limits are used to mean two different things, referring to either $$c$$ or $$L$$. It is possible to have $$c = \infty$$ and $$L$$ be finite. So is this an infinite limit or a finite limit? It depends on whether you are talking about $$c$$ or $$L$$.

How 17calculus Uses These Terms

The pages on this site are constructed based on what $$c$$ is, i.e. we use the terms finite limits and infinite limits based on the value of $$c$$ only (using the first two rows of the table above and ignoring the last two). This seems to be the best way since, when we are given a problem, we can't tell what $$L$$ is until we finish the problem, and therefore we can't determine what type of problem we have and which techniques to use until we are done. Important: make sure to check with your instructor to see how they use these terms.

This page - Infinite Limits (they may be called Limits At Infinity in your textbook) - refers to cases where the variable in question goes off to infinity. In limit notation, these look like $$\displaystyle{ \lim_{x \rightarrow c}{~f(x)} }$$ where c is $$\infty$$ or $$-\infty$$.
If your limit shows c as a finite number, then you need to go to the finite limits page. (The panel above explains the terminology and how 17calculus defines finite and infinite limits.)

When evaluating $$\displaystyle{ \lim_{x \to \pm \infty}{~f(x)} }$$ you need to determine if the graph of the function is leveling off at a value (and, if so, what that value is) or if it is going off to infinity (either $$+\infty$$ or $$-\infty$$). You don't want to try to figure it out off a graph; you need to do it mathematically, from the equation. This is the main theorem you will use.

Infinite Limits Theorem: $$\displaystyle{ \lim_{x \rightarrow \pm \infty}{\left[ \frac{1}{x} \right]} = 0 }$$

You can use the limit laws to apply this theorem to the case when you have $$\displaystyle{ \lim_{x \to \infty}{\left[ \frac{a}{x^k}\right]} }$$ where $$k$$ is a positive rational number and $$a$$ is a real number. Here is an example. Try it on your own before looking at the solution.

Evaluate $$\displaystyle{ \lim_{x\to\infty}{\frac{3}{x^2}} }$$.

### Practice Problems

Instructions - Unless otherwise instructed, evaluate the following limits, giving your answers in exact terms.
Level A - Basic

- Practice A01: $$\displaystyle{\lim_{x\to\infty}{(x^4+7x^2+3)}}$$
- Practice A02: $$\displaystyle{\lim_{x\to\infty}{(x^5-3x^2+x-21)}}$$
- Practice A03: $$\displaystyle{\lim_{x\to\infty}{\frac{3x^2+5x+4}{x^3+7x}}}$$
- Practice A04: $$\displaystyle{\lim_{x\to-\infty}{\left[\frac{x+5}{3x+7}\right]}}$$
- Practice A05: $$\displaystyle{\lim_{x\to-\infty}{\frac{7}{x^3-16}}}$$
- Practice A06: $$\displaystyle{\lim_{x\to-\infty}{\frac{x^4+x}{5x^3+7}}}$$
- Practice A07: $$\displaystyle{\lim_{x\to-\infty}{\left[x-\sqrt{x^2+9}\right]}}$$
- Practice A08: $$\displaystyle{\lim_{x\to\infty}{(3x^3-17x^2)}}$$
- Practice A09: $$\displaystyle{\lim_{x\to\infty}{\frac{3}{x^2+5}}}$$
- Practice A10: $$\displaystyle{\lim_{x\to\infty}{\left[\frac{7x^3+x+12}{2x^3-5x}\right]}}$$
- Practice A11: $$\displaystyle{ \lim_{x \to \infty}{\left[ \frac{7x^2 - 3x + 12}{x^3 + 4x + 127} \right]} }$$
- Practice A12: $$\displaystyle{\lim_{x\to-\infty}{\left[\frac{7x^2+x+21}{11-x}\right]}}$$
- Practice A13: $$\displaystyle{\lim_{x\to\infty}{\frac{4x^{10}+10000x^9}{5x^{10}+4}}}$$
- Practice A14: $$\displaystyle{\lim_{x\to\infty}{\frac{3x^7}{5x^8+10x+2}}}$$
- Practice A15: $$\displaystyle{\lim_{x\to\infty}{\frac{x^4}{x^3+5}}}$$

Level B - Intermediate

- Practice B01: $$\displaystyle{\lim_{x\to\infty}{\frac{x+3}{\sqrt{x^2+4}}}}$$
- Practice B02: $$\displaystyle{\lim_{x\to\infty}{\left[\sqrt{x^2+4x+1}-x\right]}}$$
- Practice B03: $$\displaystyle{ \lim_{x\to\infty}{\arctan(x)}}$$
- Practice B04: $$\displaystyle{\lim_{x\to\infty}{\left[x-\sqrt{x^2+9}\right]}}$$
- Practice B05: $$\displaystyle{\lim_{x\to\infty}{\sqrt{\frac{x^3+3x}{4x^3+7}}}}$$
- $$\displaystyle{\lim_{x\to-\infty}{\frac{x^3}{\sqrt{x^6+4}}}}$$
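These should of course be worked algebraically, but a quick numerical check is a useful habit. The short Python sketch below (helper names are mine) evaluates two of the functions above at growing x: the worked example $$3/x^2$$ heads to 0 by the theorem, while Practice B02 heads to 2 (multiply by the conjugate to see why).

```python
import math

def f(x):
    return 3 / x**2                        # -> 0, by lim a/x^k = 0

def g(x):
    return math.sqrt(x*x + 4*x + 1) - x    # -> 2, via the conjugate trick

for x in (1e2, 1e4, 1e6):
    print(x, f(x), g(x))
```

A numerical table like this only suggests the limit; the exact value still comes from the algebra.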
# Barycentric Volumes and Area

14 Aug 2017

In a paper I’m currently going over they apply a trick in equation 13 that I haven’t fully processed, where $Cv$ expresses a map from the nodal velocities of a triangle mesh to face velocities over cut cells. This term is finally dotted with area-weighted normals to compute flux in a fluid cell, giving the form $N^TCv$. At a later point a map from these fluxes to vertex velocities is computed via $C^TNp$, whose “correctness” I’m unsure of. This map preserves a self-adjoint structure in the linear system that the paper solves later on, so maintaining this property is important, but if this choice isn’t “correct” perhaps a slight modification could be. This document is my derivation of how this term operates, and my thoughts.

I will begin by describing barycentric coordinates in order to review how they’re related to areas. I will then start considering how these may be related to an area-weighted map from nodal to cut-cell face velocities.

### Definition of triangular barycentric coordinates and area

Say we have a triangle $\mathcal{T}$ that has vertices $\{v_i\}_{i\in\mathbb{Z}_3}$ and wish to represent the position of a point $p$ that lies in the span of $\mathcal{T}$. Because it lies in the span of those vertices we know that we can write $p$ as a linear combination of the $v_i$:

$$p = \sum_{i \in \mathbb{Z}_3} \alpha_i v_i.$$

Because we have three variables and the plane only requires two, we clearly have an underdetermined system. This is resolved by adding the constraint that the sum of the weights must be unit:

$$\sum_{i \in \mathbb{Z}_3} \alpha_i = 1.$$

The unique solution is quite easy to solve for when embedded in $\mathbb{R}^2$, which we will do now. Let us shift the system to the origin via

$$p - v_0 = \alpha_1 (v_1 - v_0) + \alpha_2 (v_2 - v_0)$$

and construct a matrix in $\mathbb{R}^{2\times 2}$

$$A = \begin{bmatrix} v_1 - v_0 & v_2 - v_0 \end{bmatrix},$$

and now the system we’re solving is

$$A \begin{bmatrix} \alpha_1 \\ \alpha_2 \end{bmatrix} = p - v_0.$$

By applying Cramer’s rule (together with $\alpha_0 = 1 - \alpha_1 - \alpha_2$) we see that $\alpha_0$ is given by

$$\alpha_0 = \frac{|\mathcal{T}_0^p|}{|\mathcal{T}|},$$

where $| \cdot |$ is the volume of $\cdot$ (this is just the standard volume of a simplex formula) and $\mathcal{T}_0^p$ is the triangle obtained from $\mathcal{T}$ by replacing $v_0$ with $p$.
Without loss of generality we see that the weight in front of the $i$th vertex is the area of the triangle constructed by the other two vertices and the point in question. The summation constraint is trivially satisfied by the fact that the signed volumes of the subtriangles $\mathcal{T}_i^p$ sum to the volume of the entire triangle: $|\mathcal{T}| = \sum_{i \in \mathbb{Z}_3} |\mathcal{T}_i^p| \Rightarrow 1 = \sum_{i \in \mathbb{Z}_3} \frac{|\mathcal{T}_i^p|}{|\mathcal{T}|} = \sum_{i \in \mathbb{Z}_3} \alpha_i$. Intuitively we see that $p$ lies within the triangle if and only if the signed volume of each $\mathcal{T}_i^p$ is positive. In higher dimensions signed volume ceases to exist, although it can be trivially reconstructed by deriving an isometric map from the span of $\mathcal{T}$ to $\mathbb{R}^2$. Finally, this is all trivially extended to simplices of higher and lower dimension than $3$.

### Nodal velocities to face velocities

The paper simply computes the centroid of the cut cell, which resolves to a system, when computing the pressure term associated with cell $C_j$ and ignoring neighboring fluid cells, where $N_{ji} = \delta_{i \in \partial C_j} a_i n_i$ and $v(c_i) = C^TV$, i.e. $C$ encodes the barycentric weights for the cut-cell centroid positions. These barycentric weights are given with respect to the triangles to which each cut cell belongs, and $V$ encodes the vertex velocities. We’ll let $A$ be a diagonal area matrix and rewrite the above term as $C^TNA$.
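As a quick sanity check of the 2×2 construction above, here is the Cramer solve in code (my sketch, not from the paper; all names are mine):

```python
# Barycentric weights of a point p w.r.t. triangle (v0, v1, v2) in R^2,
# solving A [a1, a2]^T = p - v0 by Cramer's rule; a0 = 1 - a1 - a2.

def barycentric(v0, v1, v2, p):
    def det(a, b):
        # 2x2 determinant of the matrix with columns a and b
        return a[0] * b[1] - a[1] * b[0]

    e1 = (v1[0] - v0[0], v1[1] - v0[1])  # first column of A
    e2 = (v2[0] - v0[0], v2[1] - v0[1])  # second column of A
    q = (p[0] - v0[0], p[1] - v0[1])     # right-hand side

    d = det(e1, e2)          # 2 * signed area of the triangle
    a1 = det(q, e2) / d      # Cramer's rule, first unknown
    a2 = det(e1, q) / d      # Cramer's rule, second unknown
    return (1.0 - a1 - a2, a1, a2)

w = barycentric((0, 0), (1, 0), (0, 1), (0.25, 0.25))
```

Each weight is indeed the ratio of a subtriangle area to the full area, matching the derivation.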
2 editions of Can people compare stimulus information by a ratio operation? found in the catalog.

# Can people compare stimulus information by a ratio operation?

## by Clairice T. Veit

Written in English

Subjects:
• Measurement.
• Physical measurements.

Edition Notes

The Physical Object
Statement: Clairice T. Veit.
Contributions: Rand Corporation.
Pagination: 23 p.
Number of Pages: 23
Open Library: OL16591128M

Stimulus can be conceptualized as the e-WOM is also a strong factor that can influence travellers to book through the. Since information can now be obtained anywhere at any time by anyone.

Remember, to get an equivalent ratio you can multiply or divide these numbers by the same number. So, to get from 16 to eight, you could do that as, well, we just divided by two. And to go from 12 to six, you also divide by two. So this actually is an equivalent ratio. I'll circle that in.

A procedure for transferring stimulus control in which features of an antecedent stimulus (e.g., shape, size, position, color) controlling a behavior are gradually changed to a new stimulus while maintaining the current behavior; stimulus features can be faded in (enhanced) or out (reduced).

Ratio analysis is the comparison of line items in the financial statements of a business. Ratio analysis is used to identify various problems with a firm, such as its liquidity, efficiency of operations, and profitability. It is also used to identify the positives or strengths of a firm.

It's far more useful to look at three different valuation tools: price-to-earnings ratio, or P/E; price-to-free-cash-flow ratio, or P/FCF; and price-to-earnings-growth ratio, or PEG. Each of these.
Randomly varying functionally irrelevant stimuli within and across teaching sessions; promotes setting/situation generalization by reducing the likelihood that (a) a single or small group of non-critical stimuli will acquire exclusive control over the target behavior and (b) the learner's performance of the target behavior will be impeded or "thrown off" should he encounter any of the "loose.

### Can people compare stimulus information by a ratio operation? by Clairice T. Veit

The evidence suggests that people's underlying operation is subtraction when performing both ratios and intervals of two stimuli. Research supports the conclusion that people can compare stimulus information by a ratio operation only when the stimulus information is.

When you need to compare two fractions, the ratio comparison is used. For example, you need to compare $$\frac{}{}$$ and $$\frac{}{}$$ Now, in such cases just by estimating 10% ranges for the ratios you can clearly see that the first ratio given will be greater than 80% and the second ratio given will be less than 80%.
Operating ratios compare the operating expenses and assets of a business to several other performance benchmarks. The intent is to determine whether the amount of operating expenses incurred or assets used is reasonable. If not, management can take steps to prune back on certain expenses or assets.

The information ratio. It's amazing how those three little words can cause so much controversy in the business world. To this day, many portfolio managers still dispute what the information ratio actually is and how it is calculated. Some investors put a lot of weight on what the information ratio (IR) tells them and may even use this.

The operating ratio is calculated as follows: $61 billion / $ billion, which equals 0.72, or 72%. The operating ratio for Apple means that 72% of the company's net sales go to operating costs.

Ratio analysis is used to evaluate relationships among financial statement items. The ratios are used to identify trends over time for one company or to compare two or more companies at one point in time. Financial statement ratio analysis focuses on three key aspects of.

For example, the ratio 1 minute ∶ 40 seconds can be reduced by changing the first value to 60 seconds, so the ratio becomes 60 seconds ∶ 40 seconds. Once the units are the same, they can be omitted, and the ratio can be reduced to 3∶2. On the other hand, there are non-dimensionless ratios.

An investor can utilize these financial ratios to determine whether a manufacturing company is efficient, profitable, and a good long-term investment option.

The mean (SD) stimulus gradient AC/A ratios were () and the mean (SD) distance response gradient AC/A ratios were (). A paired t-test found a significant difference between the distance response and stimulus gradient AC/A ratio values (t=, p=). A Bland–Altman plot suggested that the difference increased as the size of the AC/A ratio increased.
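The convert-to-common-units-then-reduce step in the minute/seconds example above can be sketched mechanically (my code, standard library only):

```python
# Reduce a ratio of two integers to lowest terms by dividing out the gcd.
from math import gcd

def reduce_ratio(a, b):
    g = gcd(a, b)
    return (a // g, b // g)

# 1 minute : 40 seconds  ->  60 s : 40 s  ->  3 : 2
minutes_to_seconds = reduce_ratio(60, 40)
```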
Motivating operations can also produce this type of multiple control, and their effects are similar to those observed with stimulus control. For example, a shoe as a nonverbal stimulus can evoke the tact "shoe," "sneaker," "Nike," or any number of other response forms.

Market-to-book ratio is used to compare a company's current market price to its book value. The calculation can be performed in two ways, but the result should be the same using either method.

The higher the ratio, the greater the risk associated with the firm's operation. D/E Ratio = Long-term Debt/Equity = 3,/7, = percent.

Explain the similarities and differences between motivating operations and discriminative stimuli. Provide examples. Explain the differences between response and stimulus prompts that are used to develop stimulus control for teaching purposes. Similarities and differences: according to Cooper, Heron, & Heward, motivating operations and discriminative stimuli both are antecedent variables.

A ratio is a comparison of two quantities based on the operation of division. For example, if a school has one teacher for every eight students, you can express the teacher-to-student ratio in any of the following ways: Notice that this ratio expresses the ratio of teachers to students. Thus, the 1 goes before the 8 and, in the fraction, the 1 goes on top.

Part 3 - Comparing Ratios to the Industry. In Part 1, we calculated The Widget Manufacturing Company's Y ratios. This provided us with some information regarding the company's performance. In Part 2, we gained a greater insight into the performance of the Widget Manufacturing Company by comparing their X ratios to their Y ratios.

The operating ratio compares production and administrative expenses to net sales. The ratio reveals the cost per sales dollar of operating a business. A lower operating ratio is a good indicator of operational efficiency, especially when the ratio is low in comparison to the same ratio for competitors and benchmark firms.
The operating ratio is only useful for seeing if the core business is.

RATIO & PROPORTION. A RATIO is a comparison between two quantities. We use ratios every day; one Pepsi costs 50 cents describes a ratio. On a map, the legend might tell us one inch is equivalent to 50 miles, or we might notice one hand has five fingers. Those are all examples of comparisons – ratios. A ratio can be written three different ways.

Occurs with reversibility of the sample stimulus and the comparison stimulus: if A=B, then B=A.

There are events, operations, and stimulus conditions with value-altering motivating effects that are unlearned. Conditioned Motivating Operations (CMOs). The larger the ratio.

Operating ratio (also known as operating cost ratio or operating expense ratio) is computed by dividing the operating expenses of a particular period by the net sales made during that period. Like expense ratio, it is expressed as a percentage. Formula: Operating ratio is computed as follows: The basic components of the formula are operating cost and net sales. Operating cost is equal to cost of goods sold plus operating expenses.

Advertising, movies, religion, and education are some of the examples which require a mental stimulus. Physical exchange of objects or people is not necessary.

4) Information processing. The last type of service processing occurs where information is being processed and there is no other processing involved. So when you go to a bank, the customer.

You will learn how to build bar models to solve word problems that compare quantities using fractions and ratios. You will also learn about the link between ratio and fraction by drawing models to organize and solve problems. Follow along with the videos – take notes, pause, try to solve the problem yourself, rewind if needed, or just watch!

Any entry within the ratio table for one term can be compared with any entry from the ratio table for the other term. Plan your minute lesson in Math or Number Sense and Operations.
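The operating-ratio formula quoted above, as a one-line function (the dollar figures here are placeholders of mine, not the numbers elided in the text):

```python
def operating_ratio(operating_expenses, net_sales):
    """Operating expenses per dollar of net sales (multiply by 100 for %)."""
    return operating_expenses / net_sales

# e.g. $72 of operating cost per $100 of sales -> 0.72, i.e. 72%
r = operating_ratio(72.0, 100.0)
```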
China's central bank, the People's Bank of China, doesn't have a single primary monetary policy tool like the U.S. Federal Reserve. The PBOC instead .

If the salesforce does a lot of traveling, the number of sales ops practitioners tends to be on the higher side, and companies that rely on a lot of inside sales or virtual selling tend to have fewer sales ops people. We had heard this ratio anecdotally from a lot of people, and we were able to confirm it with a SalesPulse survey.
# Malloc, free and FFI

February 8, 2015 Yuras Shumovich

TL;DR: You should always free memory with the same allocator that allocated it for you.

We’ll discuss two different sets of malloc and free functions. The first one is defined in Foreign.Marshal.Alloc, and the second one is part of the C runtime. To distinguish them, I’ll use the names H-malloc and H-free for the Haskell functions, and C-malloc and C-free for the C functions.

Actually H-malloc and H-free just call their C counterparts, so usually the same allocator is used in all four functions. But that doesn’t mean that we can mix them. The documentation is clear:

-- |Free a block of memory that was allocated with 'malloc',
-- 'mallocBytes', 'realloc', 'reallocBytes', 'Foreign.Marshal.Utils.new'
-- or any of the @new@/X/ functions in "Foreign.Marshal.Array" or
-- "Foreign.C.String".
--
free :: Ptr a -> IO ()
free = _free

Note that it enumerates all the cases when H-free can be used, and C-malloc is not listed here. There are two reasons for that. First of all, the implementation may be changed to use some other allocator. (You can skip this paragraph, it contains some low level details.)

The second reason is that sometimes your program happens to be linked with multiple versions of the C runtime. That sounds strange, but it is a very real situation. For example, your program may load an external plugin statically linked with a C runtime other than yours. As a result you have three sets of malloc/free functions: one from Haskell, another from your C runtime, and one more from the plugin’s C runtime. The last two are probably incompatible, and you’ll get random failures if you are not careful enough.

The usual rule to avoid any issues with allocators is: you should deallocate memory in the same module where you allocated it. E.g. if you allocated memory in Haskell, then please free it in Haskell. If you allocated memory in a C library, then please deallocate it in the same C library. (The same goes for dynamically loaded plugins.)
The reason I wrote about it? There is an issue on the ghc bug tracker about replacing the allocator used in H-malloc. And I decided to check how often code on github relies on the current behavior (e.g. uses H-free to deallocate memory that was allocated by C-malloc). I was surprised how common it is. Even RWH recommends the wrong approach:

-- file: ch17/PCRE-compile.hs
compile :: ByteString -> [PCREOption] -> Either String Regex
compile str flags = unsafePerformIO $
  useAsCString str $ \pattern -> do
    alloca $ \errptr -> do
      alloca $ \erroffset -> do
        pcre_ptr <- c_pcre_compile pattern (combineOptions flags)
                                   errptr erroffset nullPtr
        if pcre_ptr == nullPtr
            then do
                err <- peekCString =<< peek errptr
                return (Left err)
            else do
                reg <- newForeignPtr finalizerFree pcre_ptr -- release with free()
                return (Right (Regex reg str))

Here the pcre_ptr is allocated somewhere in the pcre C library (probably using C-malloc), and deallocated using H-free (via finalizerFree). This code works most of the time, but it is wrong. The correct approach would be to call a C function to deallocate the memory. Some C libraries provide a special function that is guaranteed to call the correct deallocator. In the case of pcre it seems to be pcre_free, and it should be used here instead of H-free.
PROBLEMS OF INFORMATION TRANSMISSION

A translation of Problemy Peredachi Informatsii

Volume 22, Number 3, July–September, 1986

CONTENTS

A Bound on Decoding Error Probability in a Noisy Multiple-Access Channel
A. N. Trofimov and F. A. Taubin pp. 159–169

Abstract—We derive upper bounds on the decoding error probability in a multiple-user channel with concentrated and impulse noise and white Gaussian noise. The concentrated noise is defined by a sum of harmonics with random frequencies and phases. The impulse noise is modeled by a Poisson stream of short pulses. All the users are independent, using common-format compound signals with frequency and phase jumps. For each receiver, the signals of other stations are treated as interference. The decision procedure includes correlative reception with subsequent decoding using a continuous output.

Noiseless Coding of Combinatorial Sources, Hausdorff Dimension, and Kolmogorov Complexity
B. Ya. Ryabko pp. 170–179

Abstract—We consider the problem of noiseless coding of combinatorial (nonstochastic) sources. The cost of the optimal code is shown to be equal to the Hausdorff dimension of the source. The same problem is solved with algorithmic constraints on the code in two settings: coding and decoding realized by Turing machines and by finite automata. The lower bounds on the cost of the code in these cases are expressed in terms of the Kolmogorov complexity and the quasi-entropy, respectively. Optimal codes are constructed for sources generated by formal grammars.

Matching Block Codes to a Channel with Phase Shifts
E. E. Nemirovskii and S. L. Portnoi pp. 179–185

Abstract—We investigate the characteristics of block codes in a phase telegraphy channel with phase shifts. Various known classes of codes are considered, namely, BCH codes, majority codes, and generalized concatenated codes. A transparency and phasability condition is given for code constructions.
A new notion of automatic phasability is introduced for majority codes. The use of nonbinary codes in multiposition phase telegraphy channels with phase shifts is described.

Two Classes of Minimum Generalized Distance Decoding Algorithms
S. I. Kovalev pp. 186–192

Abstract—We consider two classes of erasure-choosing algorithms for decoding in the generalized distance metric with $l<\lceil(d+1)/2\rceil$ decoding attempts. In each class, we identify and investigate algorithms that minimize the loss of distance compared with the Forney algorithm.

On Nonparametric Estimation of a Linear Functional of the Regression Function in Observation Planning
R. Z. Khas'minskii pp. 192–208

Abstract—We construct an asymptotically optimal observation plan and a corresponding asymptotically efficient nonparametric estimator of a linear functional of the regression function under various assumptions about the observation noise. This estimation plan is asymptotically best both when we have very poor prior (compactness) information about the regression function as well as when sufficiently detailed information is available and the function is smooth.

Self-Tuning Algorithm for Minimax Nonparametric Estimation of Spectral Density
S. Yu. Efroimovich and M. S. Pinsker pp. 209–221

Abstract—We consider an adaptive algorithm for minimax nonparametric estimation of an unknown spectral density, which is assumed to be a point in an ellipsoid with unknown axes in the Hilbert space.

On Optimal Estimation of Scale Parameters
A. G. Tartakovskii pp. 222–231

Abstract—We solve the problem of optimal estimation (in the class of regular estimators) of a scale parameter defined as the inverse of the observation mean with quadratic and nonquadratic loss functions. The general results are applied to construct point and interval estimators of the parameter of the gamma distribution.

Median Filtering of Deterministic and Stationary Stochastic Signals
L. I. Piterbarg pp.
232–239

Abstract—The notion of median filtering (MF) of a continuous time process was introduced in [L. I. Piterbarg, Probl. Peredachi Inf., 1984, vol. 20, no. 1, pp. 65–73], where we determined the rate of convergence of the statistical characteristics of the output signal to the corresponding characteristics of the input signal with the window width going to zero for some normal processes. In this paper, the previous results are generalized to arbitrary stationary processes. We investigate the robustness of the median with respect to a thinning stream of impulse noise. The analysis of the stochastic case is preceded by a number of propositions relating to MF of deterministic signals.

Packet Transmission Using a Blocked Unmodified RMA Stack-Algorithm
B. S. Tsybakov and S. P. Fedortsov pp. 239–245

Abstract—We derive an upper bound on packet delay in a random multiple access system with an unmodified tree algorithm. This bound is linear for low intensities of the incoming packet stream.

Analysis of the Language of Ants by Information-Theoretical Methods
Zh. I. Reznikova and B. Ya. Ryabko pp. 245–249

Abstract—The information transmission time in ants is shown to be proportional to the amount of information in the messages. An experiment was developed in which the ants were required to transmit a known amount of information before reaching food. Ants were found to be capable of the simplest techniques of information compression when transmitting a “text.”

BRIEF COMMUNICATIONS (available in Russian only)

Decoding Algorithm for the Golay $(24, 12, 8)$ Code
S. V. Bezzateev pp. 109–112 (Russian issue)

Abstract—We propose a decoding algorithm, which corrects triple errors and detects quadruple errors. The algorithm uses a $6\times 4$ matrix composed of positions of a codeword and a permutation from the Mathieu group $M_{24}$.
# Biocarbonation: a novel method for synthesizing nano-zinc/zirconium carbonates and oxides

## FG Baustoffe und Bauchemie

It is well known that chemical precipitation is regarded as an effective approach for the preparation of nano-materials. Nevertheless, it presents several drawbacks, including high energy demand, high cost, and high toxicity. This work investigated the eco-sustainable application of a plant-derived urease enzyme (PDUE)–urea mixture for synthesizing Zn–/Zr–carbonate and –oxide nanoparticles. Hydrozincite nanosheets and spherical Zr-carbonate nanoparticles were produced after adding the PDUE–urea mixture to the dissolved Zn and Zr salts, respectively. PDUE not only acts as a motivator for urea hydrolysis, but is also used as a dispersing agent for the precipitated nano-carbonates. Exposure of these carbonates to 500 °C for 2 h resulted in the production of the corresponding oxides. The retention time (after mixing urea with urease enzyme) is the dominant parameter that positively affects the yield% of the nano-materials, as confirmed by statistical analyses. Compared with traditional chemical precipitation, the proposed method exhibited higher efficiency in the formation of nano-materials with smaller particle size and higher homogeneity.
2013 12-11

# 50 years, 50 colors

On October 21st, HDU's 50-year celebration, 50-color balloons floated around the campus; it's so nice, isn't it? To celebrate this meaningful day, the ACM team of HDU held some funny games. Especially, there will be a game named "crashing color balloons". There will be an n*n matrix board on the ground, and each grid will have a color balloon in it. The color of the balloon will be in the range [1, 50]. After the referee shouts "go!", you can begin to crash the balloons. Every time you can only choose one kind of balloon to crash; we define that two balloons with the same color belong to the same kind. What's more, each time you can only choose a single row or column of balloons, and crash the balloons of the color you have chosen. Of course, a lot of students are waiting to play this game, so we just give every student k chances to crash the balloons. Here comes the problem: which kinds of balloons are impossible to be all crashed by a student in k times?

There will be multiple input cases. Each test case begins with two integers n, k. n is the number of rows and columns of the balloons (1 <= n <= 100), and k is the number of chances given to each student (0 < k <= n). Then follows an n*n matrix A, where Aij denotes the color of the balloon in row i, column j. Input ends with n = k = 0.

For each test case, print in ascending order all the colors which are impossible to be crashed by a student in k times. If there is no such color, print "-1".
Sample Input:

1 1
1
2 1
1 1
1 2
2 1
1 2
2 2
5 4
1 2 3 4 5
2 3 4 5 1
3 4 5 1 2
4 5 1 2 3
5 1 2 3 4
3 3
50 50 50
50 50 50
50 50 50
0 0

Sample Output:

-1
1
2
1 2 3 4 5
-1

#include <cstdio>
#include <cstring>

#define re(i, n) for(int i = 0; i < n; ++ i)

const int nMax = 105;

int map[nMax][nMax];
int useif[nMax];   // columns visited during the current augmenting search
int linkv[nMax];   // linkv[j]: row matched to column j, or -1 if unmatched
int ans[nMax];
int len;
int n, k;

// Augmenting-path search: try to match row t to some column that holds
// a balloon of color col.
int dfs(int t, int col)
{
    re(i, n) {
        if(!useif[i] && map[t][i] == col) {
            useif[i] = 1;
            if(linkv[i] == -1 || dfs(linkv[i], col)) {
                linkv[i] = t;
                return 1;
            }
        }
    }
    return 0;
}

// Maximum row-column matching over the cells of color col. By Konig's
// theorem this equals the minimum number of rows/columns needed to cover
// every balloon of that color, i.e. the minimum number of picks.
int maxMatch(int col)
{
    int num = 0;
    memset(linkv, -1, sizeof(linkv));
    re(i, n) {
        memset(useif, 0, sizeof(useif));
        if(dfs(i, col)) num ++;
    }
    return num;
}

int main()
{
    //freopen("f://data.in", "r", stdin);
    while(scanf("%d %d", &n, &k) != EOF) {
        if(!n && !k) break;
        memset(map, 0, sizeof(map));
        len = 0;
        re(i, n) re(j, n) scanf("%d", &map[i][j]);
        for(int i = 1; i <= 50; ++ i) {
            if(maxMatch(i) > k) ans[len ++] = i;
        }
        if(!len) printf("-1\n");
        else {
            re(i, len - 1) printf("%d ", ans[i]);
            printf("%d\n", ans[len - 1]);
        }
    }
    return 0;
}
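For contrast with the C++ above, here is the same reduction sketched in Python (my code, not part of the original post): for each color, all balloons of that color can be crashed within k picks iff the maximum row–column matching over that color's cells is at most k, since by König's theorem that matching size equals the minimum number of rows and columns covering the cells.

```python
def impossible_colors(grid, k):
    """Colors that cannot be fully crashed with k row/column picks."""
    n = len(grid)
    result = []
    for color in sorted({c for row in grid for c in row}):
        match = [-1] * n  # match[j] = row currently assigned to column j

        def augment(r, seen):
            # Standard Hungarian-style augmenting-path search from row r.
            for j in range(n):
                if grid[r][j] == color and not seen[j]:
                    seen[j] = True
                    if match[j] == -1 or augment(match[j], seen):
                        match[j] = r
                        return True
            return False

        size = sum(augment(r, [False] * n) for r in range(n))
        if size > k:
            result.append(color)
    return result
```

Running it on the sample cases reproduces the expected answers (an empty list corresponds to printing "-1").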
# Without actually performing the long division, state whether the following rational numbers will have a terminating decimal expansion or a non-terminating repeating decimal expansion :

Question : Without actually performing the long division, state whether the following rational numbers will have a terminating decimal expansion or a non-terminating repeating decimal expansion :

(i) $$13\over 3125$$ (ii)  $$17\over 8$$ (iii)  $$64\over 455$$ (iv)  $$15\over 1600$$ (v)  $$29\over 343$$ (vi)  $$23\over {2^3 5^2}$$ (vii)  $$129\over {2^2 5^7 7^5}$$ (viii)  $$6\over 15$$ (ix)  $$35\over 50$$ (x)  $$77\over 210$$

Solution :

(i)  Since the factors of the denominator 3125 are $$2^0 \times 5^5$$, $$13\over 3125$$ is a terminating decimal.

(ii)  Since the factors of the denominator 8 are $$2^3 \times 5^0$$, $$17\over 8$$ is a terminating decimal.

(iii)  Since the denominator 455 is not of the form $$2^n \times 5^m$$, $$64\over 455$$ is a non-terminating repeating decimal.

(iv)  Since the factors of the denominator 1600 are $$2^6 \times 5^2$$, $$15\over 1600$$ is a terminating decimal.

(v)  Since the denominator 343 is not of the form $$2^n \times 5^m$$, it is a non-terminating repeating decimal.

(vi)  Since the denominator is of the form $$2^3 \times 5^2$$, it is a terminating decimal.

(vii)  Since the denominator $$2^2 \times 5^7 \times 7^5$$ is not of the form $$2^n \times 5^m$$, it is a non-terminating repeating decimal.

(viii)  $$6\over 15$$ = $$2\over 5$$; here the factors of the denominator 5 are $$2^0 \times 5^1$$, so it is a terminating decimal.

(ix)  Since the factors of the denominator 50 are $$2^1 \times 5^2$$, $$35\over 50$$ is a terminating decimal.

(x)  Since the denominator 210 is not of the form $$2^n \times 5^m$$, $$77\over 210$$ is a non-terminating repeating decimal.
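The rule used above (after reducing to lowest terms, a fraction terminates iff its denominator has no prime factors other than 2 and 5) is easy to check mechanically; this sketch is mine, not part of the original solution:

```python
from fractions import Fraction

def terminates(numer, denom):
    """True iff numer/denom has a terminating decimal expansion."""
    d = Fraction(numer, denom).denominator  # reduce to lowest terms first
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1  # only 2s and 5s appeared in the reduced denominator
```

Note that reducing first matters: 6/15 has the factor 3 in its denominator, but the reduced form 2/5 terminates.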
# Particles spread in a square shape as opposed to a circle?

## Recommended Posts

void ExplosionFX::createParticle(int ammount)
{
    int x = 0;
    while( x < ammount)
    {
        x++;
        Particle p;
        //p.x = rand()%1024;
        //p.y = rand()%768;
        p.x = 1024/2;
        p.y = 768/2;
        p.r = rand()%255;
        p.g = rand()%255;
        p.b = rand()%255;
        p.velx = 2.0f * ((float)rand() / (float)200.0f) - 200.0f;
        p.vely = 2.0f * ((float)rand() / (float)200.0f) - 200.0f;
        p.alpha = 200+rand()%255;
        particles.push_back(p);
    }
}

Hello, I'm writing my first particle system and so far it's working great. For some reason, this group of settings makes all of my pixels spread out in a box shape, as opposed to a circle. Considering I have them come from the same point, and spread out randomly, my logic would make me think that this would produce a circular shape as opposed to a square. Any idea what I'm doing wrong?

##### Share on other sites

Your X and Y velocity components are independently uniform; that is, if you plot the random velocity vectors, you'll find that they fill a box. To fix this, you should discard all velocity vectors that have an absolute value greater than your maximum value. Pseudocode:

for i in # of particles:
    x=1
    y=1
    while(x^2 + y^2 > 1)
        x=random()
        y=random()
    createParticle

BTW, if your code is C++, you are using the rand() function wrong. rand() returns a value between 0 and RAND_MAX, so if you want a value between 0 and 1 you should use (float)rand()/RAND_MAX.

##### Share on other sites

The problem is that your x and y coordinates are independent of each other. You choose one x coordinate randomly, then one y coordinate randomly. This means that particles can occur anywhere on the x-axis between your max and min values, and similarly for the y axis. That is a square or rectangular area.
To remedy this, the simplest solution is probably to use polar coordinates: randomize values for radius and angle, and calculate x and y from that.

##### Share on other sites

That code will produce an evenly distributed but approximately square velocity distribution. To get a circle, you'll need to either normalize the vector (which will result in slightly higher density in the diagonal directions, but a round distribution), or randomly generate an angle and a speed separately.

Normalize:

p.velx = (float_rand() * 400.0f) - 200.0f;
p.vely = (float_rand() * 400.0f) - 200.0f;
float inv_dist = 1.0f/sqrt(p.velx*p.velx + p.vely*p.vely);
p.velx *= inv_dist;
p.vely *= inv_dist;

Angle and speed:

float theta = float_rand() * PI;
float speed = float_rand() * 400.0f - 200.0f;
p.velx = cos(theta) * speed;
p.vely = sin(theta) * speed;

Also: the C <stdlib.h>, aka the C++ <cstdlib>, rand() function doesn't generate a particularly float-usable value. It generates a random number between 0 and RAND_MAX, which is guaranteed to be at least 32767. I've used float_rand() above, which I assume is an acceptable random number generator which produces a value x where 0 <= x < 1.

##### Share on other sites

Thanks everyone for the great help. I ended up using the previous example. I have the exact firework effect I was hoping to achieve. Thanks!

void ExplosionFX::createParticle(int ammount)
{
    int x = 0;
    while( x < ammount)
    {
        x++;
        Particle p;
        //p.x = rand()%1024;
        //p.y = rand()%768;
        float theta = rand() * 3.14;
        float speed = 1+rand()%150;
        p.velx = cos(theta) * speed;
        p.vely = sin(theta) * speed;
        p.x = 1024/2;
        p.y = 768/2;
        p.r = rand()%255;
        p.g = rand()%255;
        p.b = rand()%255;
        //p.velx = 2.0f * ((float)rand() / (float)200.0f) - 200.0f;
        //p.vely = 2.0f * ((float)rand() / (float)200.0f) - 200.0f;
        p.lifetime = 20+rand()%40;
        p.alpha = 200+rand()%255;
        particles.push_back(p);
    }
}

While I have everyone's attention already, is there any website that explains cool ways to use the particles? So far I have created a rain and explosion effect.
Instead of just randomly messing around until I find something that looks cool, is there any place that already has them listed?

##### Share on other sites

Try looking at existing particle engines. There used to be a stand-alone particle script editor for OGRE but I can't seem to find it anymore. If this particle engine is for a certain purpose I'd encourage you to keep it simple, stupid. However if it's just for fun and curiosity, you might want to look at things like water and fire, which can be simulated with particle systems. Can you create a water fountain? A gush of water? A fire (you would want the particles in a fire to start off white or yellow, then fade through orange, red, then die out)? And once you're happy, you can go off and write a scripting language that can export/import particle behaviour files ;)
{}
Research Projects  Special inverse monoids: subgroups, structure, geometry, rewriting systems and the word problem (EPSRC grant EP/N033353/1, July 2016-July 2018) Algorithmic problems in algebra have their origins in problems in logic and topology investigated in the beginning of the 20th century by Thue (1914), Tietze (1912), and Dehn (1912). This work shows how the problems of decidability of relations in Thue systems, the homeomorphism problem for topological manifolds, and the homotopy equivalence problem in finite dimensional manifolds, are equivalent to certain algebraic problems, specifically the word problem for finitely presented semigroups and groups, and the isomorphism and conjugacy problems for finitely presented groups. Since then the subject has developed into what is now a highly active and exciting area of research, which provides a meeting point for ideas from logic, algebra and theoretical computer science. One of the most fundamental and important algorithmic problems in algebra is the word problem. Recall that an algebraic structure (e.g. a group, semigroup, associative algebra or Lie algebra) presented by a set of generators $X$ and defining relations is said to have decidable word problem if there exists an algorithm which, for any pair of terms over the alphabet $X$, tells us whether they represent the same element. The importance of the word problem is clear: decidability of the word problem for a class of algebras indicates that we have some hope of studying the structural properties of algebras in the class, while undecidability of the word problem would suggest there would likely be major difficulties in investigating the class as a whole. Markov (1947) and Post (1947) proved independently that the word problem for finitely presented monoids is undecidable in general. This result was extended by Turing (1950) to cancellative semigroups, and then by Novikov (1955) and Boone (1958) to groups. 
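As a simple illustration of a decidable word problem (an added example, not part of the original text): in the one-relation monoid $\mathrm{Mon}\langle a, b \mid ba = ab\rangle$ every word can be rewritten to a normal form $a^m b^n$ by repeatedly replacing $ba$ with $ab$, so two words represent the same element precisely when they contain the same number of occurrences of each letter:

```latex
% Added illustrative example: a finitely presented monoid whose word
% problem is decidable by rewriting to normal form.  Here |u|_a denotes
% the number of occurrences of the letter a in the word u.
M = \mathrm{Mon}\langle a, b \mid ba = ab \rangle, \qquad
u = v \ \text{in } M \iff |u|_a = |v|_a \ \text{and} \ |u|_b = |v|_b .
```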
Given that the word problem is undecidable in general, a central theme running through the development of combinatorial and geometric group and semigroup theory over the last sixty years has been to identify and study classes of finitely presented groups and semigroups all of whose members have solvable word problem. Important examples include commutative semigroups, hyperbolic groups (in the sense of Gromov), word hyperbolic semigroups, and groups and semigroups that are automatic, admit presentations by finite complete rewriting systems, or satisfy certain small overlap conditions. One of the most interesting classes of groups that has arisen in this context is that of one-relator groups, that is, groups defined by a finite presentation with a single defining relator. In 1932 Magnus developed a powerful general approach to one-relator groups, now known as the Magnus break-down procedure, which he used to prove several important results about arbitrary one-relator groups, including the Freiheitssatz, and decidability of the word problem. One key tool in Magnus's method is the Reidemeister-Schreier rewriting process for rewriting a presentation of a group to obtain a presentation for a subgroup. He uses this method to give structural information about one-relator groups, showing how any such group can be built up from cyclic groups, in an elegant and intricate way, by repeatedly forming amalgamated products. While this does give a solution to the word problem, the decision algorithm is complicated and its time complexity is unknown. Since Magnus's groundbreaking work, many other important results about one-relator groups have been proved, including results on: the conjugacy and isomorphism problems, hyperbolicity, residual finiteness and solvability, hopficity, automaticity, and cohomology. Given how much combinatorial algebra has developed over the past sixty years, it is quite striking that the following problem remains open: Open problem.
Is the word problem decidable for one-relation monoids $\mathrm{Mon}\langle A \:|\: u=v \rangle$? This is widely regarded as one of the most important longstanding open problems in the area. The problem has received significant attention, and a number of special cases have been solved. Adjan (1966) proved that the word problem for $\mathrm{Mon}\langle A \:|\: u=v \rangle$ is decidable if one of the words $u$ or $v$ is empty, or if they are both non-empty and have different initial and different terminal letters. For each of these particular cases, Adjan showed how to reduce decidability of the word problem to solving the word problem for an associated one-relator group, and then appealed to Magnus's result for one-relator groups. Adjan and Oganessian (1987) showed that the word problem in general can be reduced just to considering presentations of the form ${\rm Mon} \langle A \; | \; bsa=cta \rangle$ where $a,b,c \in A$, $b \neq c$ and $s, t \in A^*$. All such monoids are known to be left cancellative. Other important results on one-relation monoids include results on: residual finiteness, the isomorphism problem, conjugacy problem, finite derivation type ($\mathrm{FDT}$), and the Freiheitssatz. Since Adjan's work, two of the most important contributions to the problem may be found firstly in the work of Zhang on special monoid presentations, and secondly in the work of Ivanov, Margolis and Meakin on inverse monoid presentations. Zhang (1991), (1992) showed how any presentation of the form $\mathrm{Mon}\langle A \:|\: w_1=1, \ldots, w_k=1 \rangle$ (a so-called special monoid $M$) can be rewritten to give a finite presentation for its group of units $G$. Using the theory of noetherian confluent string rewriting systems, he showed that in this situation $M$ has decidable word problem if and only if $G$ has decidable word problem.
In the particular case that $M$ is one-relator, one obtains a one-relator presentation for $G$, and hence applying Magnus it follows that $M$ has decidable word problem. In this way, Zhang's work both generalises, and provides a new more elegant proof of, Adjan's theorem that special one-relator monoids have decidable word problem. Ivanov, Margolis and Meakin (2001) give an entirely new approach to the word problem for one-relation monoids via the theory of inverse monoid presentations. The study of algorithmic problems in inverse semigroups goes back to work of Scheiblich (1973) and Munn (1974) who showed how one can use birooted edge-labelled trees (Munn trees) to represent elements of the free inverse monoid. This work was extended by Stephen, who used Schutzenberger graphs to study presentations of inverse semigroups. Utilising Adjan (1987), Ivanov, Margolis and Meakin (2001) made the crucial and fundamental observation that a positive solution to the word problem for one-relator special inverse monoid presentations ${\rm Inv} \langle A \; | \; w=1 \rangle$ would imply a positive solution to the word problem for one-relation monoids $\mathrm{Mon}\langle A \:|\: u=v \rangle$. This important result translates the question of decidability of the word problem for arbitrary one-relation monoids into the realm of inverse monoids---a key step, since inverse monoids are a class that lie closer to groups than arbitrary monoids, and for groups the word problem has been solved. The word problem for one-relator special inverse monoids has been solved in some particular cases, including the case that $w$ is an idempotent. While Reidemeister-Schreier rewriting methods are of fundamental importance in the Magnus break-down procedure described above, no attempt has yet been made to use semigroup-theoretic Reidemeister-Schreier rewriting methods to investigate the corresponding problems for monoids and inverse monoids.
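To make Zhang's reduction concrete, a standard example (added here for illustration, not taken from the original text) is the bicyclic monoid, the simplest special one-relation monoid. The single rule $ab \to 1$ is a finite complete (noetherian and confluent) rewriting system, so every word reduces to a unique normal form $b^m a^n$ and the word problem is decidable by rewriting to normal form; the group of units is trivial, consistent with Zhang's theorem:

```latex
% Added example: the bicyclic monoid as a special one-relation monoid.
% Multiplication of normal forms is explicit:
B = \mathrm{Mon}\langle a, b \mid ab = 1 \rangle, \qquad
b^{m}a^{n} \cdot b^{m'}a^{n'} = b^{\,m+m'-k}\,a^{\,n+n'-k},
\quad k = \min(n, m').
```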
A central theme of the current research project will be to use Reidemeister-Schreier rewriting methods, and string rewriting systems, to carry out a comprehensive investigation of the class of special inverse monoids with the ultimate aim of making further progress towards the question of decidability of the word problem for one-relation monoids. Various aspects of one-relation monoids, and inverse monoids, will be investigated in the project. These include: I) Subgroups of special inverse monoids Developing RS-rewriting methods to study the units (right units, and other maximal subgroups) of special inverse monoids. II) Rewriting systems and the word problem To develop a theory of convergent rewriting systems which operate on Munn trees, and apply it to relate decidability of the word problem of the special inverse monoid $M$ and its group of units $G$. III) Directed geometry Investigate the directed geometry of the one-relation left-cancellative monoids, and the directed geometry of the Schutzenberger graphs, and groups, of special one-relator inverse monoids. IV) Homological finiteness properties Investigate the question of whether every one-relation monoid admits a presentation by a finite complete rewriting system, and, closely related to this, the question of whether every such monoid is of homological type left- and right-$\mathrm{FP}_\infty$. The project involves extensive collaboration with researchers both from the UK and from universities in Portugal, Serbia and the USA. We will organise a workshop midway through the project, centred around its main themes, which will bring together leading experts from a diverse range of topics in algebra, logic and theoretical computer science.
{}
# Which algorithm is used by STL sort? [closed]

I implemented my own Shellsort; the timings for it were plotted (plot not reproduced here): 0 means std::sort, used for comparison; 1 means single thread. Which algorithm is used by STL sort? https://en.cppreference.com/w/cpp/algorithm/sort

## closed as off-topic by Evil, David Richerby, Discrete lizard♦, Yuval Filmus, vonbrand Aug 1 '18 at 13:27

This question appears to be off-topic. The users who voted to close gave this specific reason:

• "Questions about software development or programming tools are off-topic here, but can be asked on Stack Overflow." – David Richerby, Discrete lizard, Yuval Filmus, vonbrand

If this question can be reworded to fit the rules in the help center, please edit the question.

• There are multiple implementations of STL. Some of them are available online: cplusplus.com/forum/general/219746. – Yuval Filmus Jul 12 '18 at 16:39
• At any rate, this seems beyond the scope of this site. – Yuval Filmus Jul 12 '18 at 16:39

The C++ Library Specification does not prescribe any particular algorithm or implementation strategy. Every implementor is free to choose any algorithm, strategy, or combination of algorithms and strategies they please. An implementor targeting resource-constrained IoT sensor devices, for example, may choose a different algorithm than one who targets high-performance workstations or servers. Note that the C++ Library Specification does prescribe certain complexity guarantees that an implementor must obey: $O(N \log N)$ comparisons, where $N = \texttt{last - first}$. This may restrict the choice of algorithms. For example, Quicksort cannot be used in this case.

• It's more correct to say that quicksort alone cannot be used in this case. All real-world industrial-strength sort systems tend to use a combination of "basic" sort algorithms and may dynamically adjust their behaviour as the key distribution becomes more clear.
The obvious example is quicksort falling back to insertion sort for the base case. – Pseudonym Jul 13 '18 at 5:05

ShellSort is a bad choice due to inefficient use of CPU caches. Your question is answered in https://stackoverflow.com/questions/5038895/does-stdsort-implement-quicksort
{}
#### Vol. 8, No. 4, 2015

ISSN: 1944-4184 (e-only) ISSN: 1944-4176 (print)

Maximization of the size of monic orthogonal polynomials on the unit circle corresponding to the measures in the Steklov class

### John Hoffman, McKinley Meyer, Mariya Sardarli and Alex Sherman

Vol. 8 (2015), No. 4, 571–592

##### Abstract

We investigate the size of monic, orthogonal polynomials defined on the unit circle corresponding to a finite positive measure. We find an upper bound for the ${L}_{\infty }$ growth of these polynomials. Then we show, by example, that this upper bound can be achieved. Throughout these proofs, we use a method developed by Rahmanov to compute the polynomials in question. Finally, we find an explicit formula for a subsequence of the Verblunsky coefficients of the polynomials.
{}
# Finding all atomic formulas in a language.

Suppose I'm working in a language of equality in which the only non-logical symbol is the 2-place relation symbol =. I'm trying to find all possible atomic formulas that can be represented in this language. Would they be:

1. x=x
2. x=y
3. t1=t1
4. t1=t2

where x and y are variables in the language and t1 and t2 are terms in the language?

- If there are no function symbols, there aren't very many terms possible. – Chris Eagle Dec 19 '11 at 19:53

You have to keep in mind that saying "$x=y$ is an atomic formula for all variable symbols $x, y$" does not preclude the possibility that $x$ and $y$ are actually the same variable symbol. Similarly for terms. (This is similar to "regular" mathematics where saying "$x+y$ is a real number for all real numbers $x,y$" does not mean that $x$ and $y$ necessarily denote different real numbers.)
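One clarifying note (added; not part of the original exchange): in this language there are no constant or function symbols, so the terms are exactly the variables, and items 3 and 4 of the question collapse into items 1 and 2. The complete set of atomic formulas is therefore:

```latex
% Terms: t ::= v_0 \mid v_1 \mid v_2 \mid \dots   (variables only; there
% are no constant or function symbols to build anything else)
% Atomic formulas: every equation between two (not necessarily
% distinct) variables:
\{\, v_i = v_j \ : \ i, j \in \mathbb{N} \,\}
```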
{}
User:Matthew Cordova/Notebook/Physics 307L/2010/09/29

Speed of Light Lab

Safety

SJK 18:45, 28 October 2010 (EDT): This is a very good primary lab notebook. Very easy to understand your analysis. Take a look at Sebastian's notebook: I really like his discussion of the difficulties you had with the triggering. That is really good information that is missing from your notebook. That could be due to the fact that you're duplicating entries, which isn't strictly necessary--I only require you to have a non-shared notebook once the analysis starts. Thanks for the hard work to get good data!

• There will be a voltage source being used for this lab. Don't mishandle equipment/wires.
• The PMT receiver is sensitive. Do not expose it to room light when it is operational or it will be damaged.

Equipment

• PMT (photomultiplier tube)
• LED
• Tektronix TDS 1002 Oscilloscope
• Bertan Power Supply Model 313B
• Canberra Delay Module NSEC 2058
• Ortec TAC/SCA Model 567 (time-to-amplitude converter)
• Harrison Laboratories Power Supply Model 6207A
• Multiple BNC Cables
• Long Cardboard Tube

Set Up

• A detailed set-up procedure can be found in Prof. Gold's lab manual (Ch. 10). Basically, we connected the PMT to the TAC in order to read the time delay between when the LED sent out a pulse of light and when the PMT received this pulse as a voltage on the oscilloscope.

Procedure

SJK 18:22, 28 October 2010 (EDT): This is a very good description. I like the photos in Sebastian's notebook, you could copy or link to those: here

After set up, Sebastian and I found our 'zero' point where our last measurement would be made. The closer the LED is to the PMT, the better (the reason will be mentioned momentarily). Our zero was 150 cm from the end of the push-stick, measured at the entrance of the cardboard tube. From this point, we measured 100 cm farther down the meter stick (i.e.
our first measurement was at our 100 cm). We took our first measurement at the farthest distance due to the fact that we must achieve the same light intensity when taking measurements (the intensity is manipulated by polarizers), and the most accurate reading is obtained when the intensity the PMT receives is large. So why not use the highest intensity from the closest point? Because you cannot achieve this value from farther distances. We achieved our max amplitude measured through channel 1 on the oscilloscope (directly related to the light intensity) and made note of the value. We also measured the peak-to-peak value on channel 2 (related to time of flight). We then decreased the distance between the PMT and the LED by 10 cm, returned to the same channel 1 amplitude, and made note of the peak-to-peak value on channel 2. This process was repeated until we returned to our zero. This concluded one trial of data. We completed three 'good' trials, two 'unsatisfactory' trials, and one trial which was not worth recording due to incoherent data. The first two trials of data were taken left to our own devices, and contained large amounts of systematic error. With some help from Prof. Koch, our last three trials contained some worthy data.

• Note: The reason we must return to the same amplitude for the channel 1 amplitude is due to 'time walk', probably the greatest contributor to systematic error in this lab. The TAC triggers off of the same value for each pulse. If we have a different amplitude (different shape), the TAC would trigger either before or after we want it to.
• Note: The first couple of trials were so bad for a couple of reasons. One, the amplitude measured by channel 1 was low, and the oscilloscope triggered on a noisy part of the graph. The other was due to the fact that the time delay was set too low, making the amplitude measured by channel 2 very low and inconsistent.
• Note: To achieve the graph on the oscilloscope, do the following: double-check your connections and make sure they are secure and in the right location. With all devices on, push the 'auto set' button to obtain the general graph. You may have to 'zoom in' to get a good-looking graph. Now set the oscilloscope to obtain an average. This should greatly reduce the visible noise. A picture is provided for our o-scope graph.

Calculations and Results

The measurements for our lab are enclosed in the following spreadsheet. The value located in the top left cell of the LINEST function represents the slope of the adjacent set of data, and the top right cell represents the uncertainty. SJK 18:24, 28 October 2010 (EDT): This is a typo, uncertainty is 2nd row, left--you used the correct uncertainty, just have typo here.

$\displaystyle c_{calculated}=\frac{1}{slope}*\frac{1\,mV}{10^{-2}\,ns}$

• We use $\displaystyle \frac{1}{slope}$ because the slope of this data yields $\displaystyle \frac{mV}{cm}$, and we need $\displaystyle \frac{cm}{mV}$. We then convert $\displaystyle \frac{cm}{mV}$ into $\displaystyle \frac{cm}{ns}$ using the conversion listed below.

$\displaystyle 1\,V=10\,ns$

• From this we can get $\displaystyle 1\,mV=10*10^{-3}\,ns=10^{-2}\,ns$

Using these basic equations, I will provide $\displaystyle c_{best}$, $\displaystyle c_{low}$, and $\displaystyle c_{high}$, which correspond to $\displaystyle slope$, $\displaystyle slope+uncertainty$, and $\displaystyle slope-uncertainty$, respectively.

• Note: The uncertainty I will be using is .03242, located in the LINEST function for the 'Average' plot (cell 2,1 of the function). I will be achieving my final results using the 'Average' plot as opposed to finding three different c values and then taking the average of those.
I'm not entirely sure if the way I'm doing it is more accurate or not (shouldn't be too bad, since I can see no drift in our data), but this seems to be the standard way of doing it from what I have seen while looking at other notebooks. SJK 18:29, 28 October 2010 (EDT): I don't know the answer for sure either. I'm leaning slightly towards computing the slope of each run independently and then computing the mean slope later. Just in case there is some kind of drift, it would have less effect in this manner.

$\displaystyle c_{best}=\frac{1}{slope}\frac{1}{10^{-2}}=31.26\frac{cm}{ns}$

$\displaystyle c_{low}=\frac{1}{slope+uncertainty}\frac{1}{10^{-2}}=30.95\frac{cm}{ns}$

$\displaystyle c_{high}=\frac{1}{slope-uncertainty}\frac{1}{10^{-2}}=31.58\frac{cm}{ns}$

The accepted value of c, according to Wikipedia, is $\displaystyle c=29.98\frac{cm}{ns}$, which is not within our range. We can say with a good amount of confidence that there was some systematic error. This could be due in part to the fact that we were measuring the fastest speed known in physics, which requires very accurate equipment. Also, although we accounted for time walk, it is still a possibility that this affected our results. All in all, however, we achieved a fairly accurate measurement of the speed of light, with a percent error of less than 5%.

$\displaystyle \%\,error=\frac{calculated-actual}{actual}*100=4.27\%$

References

SJK 18:31, 28 October 2010 (EDT): Good acknowledgements. Sebastian also credited Alex Andrego, assuming you should also.

Wikipedia for values. Thanks to David Weiss and Brian Josey for help with calculation results and general notebook formatting.
{}
# How do you find the area of the region bounded by the polar curves r=3+2cos(theta) and r=3+2sin(theta) ?

Nov 8, 2014

Let us look at the region bounded by the polar curves, which looks like:

Red: $r = 3 + 2 \cos \theta$

Blue: $r = 3 + 2 \sin \theta$

Green: $y = x$

Using the symmetry, we will try to find the area of the region bounded by the red curve and the green line, then double it.

$A = 2 {\int}_{\frac{\pi}{4}}^{\frac{5 \pi}{4}} \; {\int}_{0}^{3 + 2 \cos \theta} r \, \mathrm{dr} \, d \theta$

$= 2 {\int}_{\frac{\pi}{4}}^{\frac{5 \pi}{4}} {\left[{r}^{2} / 2\right]}_{0}^{3 + 2 \cos \theta} d \theta$

$= {\int}_{\frac{\pi}{4}}^{\frac{5 \pi}{4}} \left(9 + 12 \cos \theta + 4 {\cos}^{2} \theta\right) d \theta$

by ${\cos}^{2} \theta = \frac{1}{2} \left(1 + \cos 2 \theta\right)$,

$= {\int}_{\frac{\pi}{4}}^{\frac{5 \pi}{4}} \left(11 + 12 \cos \theta + 2 \cos 2 \theta\right) d \theta$

$= {\left[11 \theta + 12 \sin \theta + \sin 2 \theta\right]}_{\frac{\pi}{4}}^{\frac{5 \pi}{4}}$

$= \frac{55 \pi}{4} - 6 \sqrt{2} + 1 - \left(\frac{11 \pi}{4} + 6 \sqrt{2} + 1\right)$

$= 11 \pi - 12 \sqrt{2}$

Hence, the area of the region is $11 \pi - 12 \sqrt{2}$.

I hope that this was helpful.
{}
# Recent questions tagged vector

- State the difference between a row vector and a column vector.
- Are all matrices linear operators?
- Show that $\langle v|A| w\rangle=\sum_{i j} A_{i j} v_{i}^{*} w_{j}$
- Give a characterization of the linear operators over $V \otimes W$ in terms of linear operators over $V$ and $W$. Remember that they form a vector space.
- Express $\overrightarrow{\boldsymbol{P} \boldsymbol{Q}}$ as a column vector. (a) Express $\overrightarrow{\boldsymbol{P} \boldsymbol{Q}}$ as a column vector. (b) Find (i) $|\overrightarrow{\boldsymbol{O Q}}|$, (ii) the co ...
- Give the equation for the $L^{p}$ metric on $\mathbb{R}^{2}$.
- An inner product on a real vector space $V$ is a function $\langle\cdot, \cdot\rangle: V \times V \rightarrow \mathbb{R}$ satisfying
- A metric on a set $S$ is a function $d: S \times S \rightarrow \mathbb{R}$ that satisfies
- Vector spaces can contain other vector spaces. If $V$ is a vector space, then $S \subseteq V$ is said to be a subspace of $V$ if
- How do I describe a Euclidean space?
- What are vector spaces?
- Each diagram shows 3 vectors of equal magnitude. In which diagram is the magnitude of the resultant vector different from the other 3?
- What is a vector equation of a line?
- If an object has constant acceleration, the
- Does the acceleration vector always point in the direction in which an object is moving? If so, describe a situation in which the direction of the acc ...
- What is the 'y' length of a vector with a beginning point of (1, -2) and an end point of (-3, 4)
- What is the x length of a vector with a start point of (1, 2) and an end point of (5, 8)?
- Two vectors are multiplied together. The answer that is produced is also a vector. What kind of multiplication is this?
- What does the given vector represent?
- A normal vector is _____ to a given vector.
- Is the gradient a row vector or a column vector?
- When evaluating ATE for continuous RVs, should the answer be always a scalar, or could it be a vector? Would propensity scores still be applicable in such setting?
- What is Support vector regression (SVR)?
- What is a hyper-plane?
# Exercise 4

Let $A \in \mathbb{R}^{n \times m}$ have singular values $\sigma_1 \geq \sigma_2 \geq \dots \geq \sigma_r > 0$. Show that $\|A\|_2 = \sigma_1$.

###### Question: Show that $\|A\|_2 = \sigma_1$. (Hint: use the definition of the 2-norm, the SVD of $A$, and the invariance of the norm under orthogonal transformations.)

#### Similar Solved Questions

##### It takes 3.0 pJ of work to move a 18 nC charge from point A to B. It takes -5.0 pJ of work to move the charge from C to B. Part A: What is the potential difference $V_C - V_A$? Express your answer in volts.

##### What specific type of neural pathway is shown in Figure 4? a. Abdominal reflex b. Stretch reflex

##### 21-48: Solve the system, or show that it has no solution. If the system has infinitely many solutions, express them in the ordered-pair form given in Example 6.
$$\left\{\begin{aligned} 0.2 x-0.2 y &=-1.8 \\ -0.3 x+0.5 y &=3.3 \end{aligned}\right.$$

##### Find the lateral area of the cylinder. Find the area of the hemisphere.

##### An object is thrown vertically up and attains an upward velocity of 25 m/s when it reaches one fourth of its maximum height above its launch point. What was the initial speed of the object? The acceleration of gravity is 9.8 m/s². Answer in units of m/s.

##### A higher interest rate (discount rate) would: Multiple Choice: increase the price of corporate bonds; reduce the price of preferred stock; increase the price of common stock; reduce the cost of dividends.

##### The records of a casualty insurance company show that, in the past, its clients have had a mean of 1.7 auto accidents per day with a standard deviation of 0.05. The actuaries of the company claim that the standard deviation of the number of accidents per day is no longer equal to 0.05.

##### Draw a three-second strip of EKG paper. How many R waves would you have on that strip of paper if the heart rate was 60 bpm?

##### A company produces steel rods. The lengths of the steel rods are normally distributed with a mean of 168.5 cm and a standard deviation of 2.4 cm. For shipment, 16 steel rods are bundled together. Find the probability that the average length of a randomly selected bundle of steel rods is less than 169.88 cm: $P(M < 169.88\text{ cm})$. Round to 2 decimal places.

##### Show the effects of a tax cut in the IS-LM model. Be sure to show the underlying dynamics in the goods market.

##### The following reaction is exothermic: CO2(g) + 2 H2(g) → CH3OH(l). The reaction is: spontaneous at all temperatures; nonspontaneous at all temperatures; spontaneous at low temperatures; spontaneous at high temperatures; we cannot predict spontaneity without knowing the entropy change.

##### Question 33 (1 pt): What is the molar solubility of Pb3(PO4)2? The solubility product constant, Ksp, of Pb3(PO4)2 in water is 1.0 × 10⁻⁵⁴. Options: 1.2 × 10⁻¹³ M; 4.6 × 10⁻¹⁰ M; 6.2 × 10⁻¹² M; 1.6 × 10⁻⁵⁵ M.

##### If two fair dice are rolled, find the probability of the following result: a double, given that the sum was 6. (Type an integer or a simplified fraction.)

##### Exercises: Parity conditions in real markets and financial markets. Exercise 4 (Purchasing Power Parity): We live in a four-country world where people only grow and eat coconuts. We have the following data: Brazil, Mexico, Argentina, United States...

##### Solve each equation in the real number system. $3 x^{3}+4 x^{2}-7 x+2=0$
# True-or-False:

A candidate is required to answer 7 questions out of 12 questions which are divided into two groups, each containing 6 questions. He is not permitted to attempt more than 5 questions from either group. He can choose the seven questions in 650 ways.

$\begin{array}{1 1}(A)\;\text{True}\\(B)\;\text{False}\end{array}$

Total no. of questions = 12

Questions required to answer = 7

Two groups $\rightarrow 6-6$

No. of ways $=6C_5\times 6C_2+6C_2\times 6C_5+6C_3\times 6C_4+6C_4\times 6C_3$

$6C_5=\large\frac{6!}{5!\,1!}\normalsize=6$

$6C_2=\large\frac{6!}{2!\,4!}=\frac{6\times 5}{2}\normalsize=15$

$6C_3=\large\frac{6!}{3!\,3!}=\frac{6\times 5\times 4\times 3!}{3!\times 3\times 2}\normalsize=20$

$6C_4=\large\frac{6!}{4!\,2!}=\frac{6\times 5\times 4!}{4!\times 2}\normalsize=15$

No. of ways $=6\times 15+15\times 6+20\times 15+15\times 20$

$\Rightarrow 90+90+300+300$

$\Rightarrow 180+600$

$\Rightarrow 780$ ways

Hence the given statement is false.
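The arithmetic above can be double-checked with a short brute-force sketch (Python, purely illustrative): enumerate all $\binom{12}{7}$ choices of 7 questions and keep those that take between 2 and 5 questions from the first group.

```python
from itertools import combinations
from math import comb

# Closed-form count: a questions from group 1 and 7 - a from group 2,
# with both a and 7 - a capped at 5, forces a in {2, 3, 4, 5}.
formula = sum(comb(6, a) * comb(6, 7 - a) for a in (2, 3, 4, 5))

# Brute force: label questions 0-5 as group 1 and 6-11 as group 2.
brute = sum(1 for pick in combinations(range(12), 7)
            if 2 <= sum(q < 6 for q in pick) <= 5)

print(formula, brute)  # both 780, so the claimed 650 is indeed false
```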
Uspekhi Mat. Nauk, 2014, Volume 69, Issue 3(417), Pages 145–172 (Mi umn9589)

A system of three quantum particles with point-like interactions

R. A. Minlos

A. A. Kharkevich Institute for Information Transmission Problems, Russian Academy of Sciences

Abstract: Consider a quantum three-particle system consisting of two fermions of unit mass and another particle of mass $m>0$ interacting in a point-like manner with the fermions. Such systems are studied here using the theory of self-adjoint extensions of symmetric operators: the Hamiltonian of the system is constructed as an extension of the symmetric energy operator
$$H_0=-\frac{1}{2}\Bigl(\frac{1}{m}\Delta_y+\Delta_{x_1}+\Delta_{x_2}\Bigr),$$
which is defined on the functions in $L_2(\mathbb{R}^3)\otimes L_2^{\operatorname{asym}}(\mathbb{R}^3\times\mathbb{R}^3)$ that vanish whenever the position of the third particle coincides with the position of a fermion. To construct a natural family of extensions of $H_0$, one must solve the problem of self-adjoint extensions for an auxiliary sequence $\{T_l,\ l=0,1,2,\dots\}$ of symmetric operators acting in $L_2(\mathbb{R}^3)$. All the operators $T_l$ with even $l$ are self-adjoint, and for every odd $l$ there are two numbers $0<m_l^{(1)}<m_l^{(2)}<\infty$ such that $T_l$ is self-adjoint and lower semibounded for $m>m_l^{(2)}$, and has deficiency indices for $m\leqslant m_l^{(2)}$.
When $m\in[m_l^{(1)}, m_l^{(2)}]$, every self-adjoint extension of $T_l$ which is invariant under rotations of $\mathbb{R}^3$ is lower semibounded, but if $0<m<m_l^{(1)}$, then it has an infinite sequence of eigenvalues $\{\lambda_n\}$ of multiplicity $2l+1$ such that $\lambda_n\to-\infty$ as $n\to\infty$ (the Thomas effect). It follows from the last fact that there is a sequence of bound states of $H_0$ with spectrum $P^2/(2(m+2))+z_n$, where the numbers $z_n<0$ cluster at 0 (Efimov's effect). Bibliography: 19 titles. Keywords: symmetric operator, deficiency indices, semibounded operator, self-adjoint extensions, spectrum, Mellin transform, the Riemann–Hilbert–Privalov problem. Funding Agency Grant Number Russian Foundation for Basic Research 13-01-12410 This paper was written with the support of the Russian Foundation for Basic Research (grant no. 13-01-12410). DOI: https://doi.org/10.4213/rm9589 Full text: PDF file (830 kB) References: PDF file   HTML file English version: Russian Mathematical Surveys, 2014, 69:3, 539–564 Bibliographic databases: Document Type: Article UDC: 517.958:530.145+517.984 MSC: 81Q10, 81V15 Citation: R. A. Minlos, “A system of three quantum particles with point-like interactions”, Uspekhi Mat. Nauk, 69:3(417) (2014), 145–172; Russian Math. Surveys, 69:3 (2014), 539–564 Citation in format AMSBIB \Bibitem{Min14} \by R.~A.~Minlos \paper A system of three quantum~particles with point-like interactions \jour Uspekhi Mat. Nauk \yr 2014 \vol 69 \issue 3(417) \pages 145--172 \mathnet{http://mi.mathnet.ru/umn9589} \crossref{https://doi.org/10.4213/rm9589} \mathscinet{http://www.ams.org/mathscinet-getitem?mr=3287506} \zmath{https://zbmath.org/?q=an:1300.81039} \adsnasa{http://adsabs.harvard.edu/cgi-bin/bib_query?2014RuMaS..69..539M} \elib{http://elibrary.ru/item.asp?id=21826588} \transl \jour Russian Math. 
Surveys \yr 2014 \vol 69 \issue 3 \pages 539--564 \crossref{https://doi.org/10.1070/RM2014v069n03ABEH004900} \isi{http://gateway.isiknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&DestLinkType=FullRecord&DestApp=ALL_WOS&KeyUT=000341511800005} \scopus{http://www.scopus.com/record/display.url?origin=inward&eid=2-s2.0-84906814720}

• http://mi.mathnet.ru/eng/umn9589
• https://doi.org/10.4213/rm9589
• http://mi.mathnet.ru/eng/umn/v69/i3/p145

This publication is cited in the following articles:

1. M. Correggi, D. Finco, A. Teta, “Energy lower bound for the unitary $N+1$ fermionic model”, EPL, 111:1 (2015), 10003
2. M. Correggi, G. Dell'Antonio, D. Finco, A. Michelangeli, A. Teta, “A class of Hamiltonians for a three-particle fermionic system at unitarity”, Math. Phys. Anal. Geom., 18:1 (2015), 32, 36 pp.
3. A. Michelangeli, P. Pfeiffer, “Stability of the $(2+2)$-fermionic system with zero-range interaction”, J. Phys. A, 49:10 (2016), 105301, 27 pp.
4. T. Moser, R. Seiringer, “Stability of a fermionic $N+1$ particle system with point interactions”, Comm. Math. Phys., 356:1 (2017), 329–355
5. G. Basti, A. Teta, “On the quantum mechanical three-body problem with zero-range interactions”, Functional analysis and operator theory for quantum physics, EMS Ser. Congr. Rep., Eur. Math. Soc., Zürich, 2017, 71–93
6. G. Basti, A. Teta, “Efimov effect for a three-particle system with two identical fermions”, Ann. Henri Poincaré, 18:12 (2017), 3975–4003
7. A. Michelangeli, A. Ottolini, “On point interactions realised as Ter-Martirosyan-Skornyakov Hamiltonians”, Rep. Math. Phys., 79:2 (2017), 215–260
8. K. Yoshitomi, “Finiteness of the discrete spectrum in a three-body system with point interaction”, Math. Slovaca, 67:4 (2017), 1031–1042
9. A. Michelangeli, A. Ottolini, “Multiplicity of self-adjoint realisations of the $(2+1)$-fermionic model of Ter-Martirosyan-Skornyakov type”, Rep. Math. Phys., 81:1 (2018), 1–38
10. T. Moser, R. Seiringer, “Stability of the 2+2 fermionic system with point interactions”, Math. Phys. Anal. Geom., 21:3 (2018), 19
11. S. Becker, A. Michelangeli, A. Ottolini, “Spectral analysis of the 2+1 fermionic trimer with contact interactions”, Math. Phys. Anal. Geom., 21:4 (2018), 35
# Conjecture: all complex roots of $\sum_{k=0}^\infty \frac{z^k}{\left(nk\right)!}$ are real Conjecture: $$\left[n\in\mathbb{Z}^+,z\in\mathbb{C},0=\sum_{k=0}^\infty \frac{z^k}{\left(nk\right)!}\right]\Rightarrow z\in\mathbb{R}$$ This conjecture has been verified for $$n\in\{1,2,4\}$$. The motivation for this conjecture arose during the study of the exponential sum function which has applications to exponentiation in rings with abelian multiplication: $$\text{rues}_n\left(z\right)=\sum_{k=0}^\infty \frac{z^{nk}}{\left(nk\right)!}=\frac{1}{n}\sum _{k=1}^n \exp\left(ze^{2ki\pi/n}\right)$$ • $e^z$ does not have any roots and thus $n=1$ is vacuously true. $0\neq e^{2ki\pi}=1$. May 11 '19 at 0:26 It has been shown that the zeros of the Mittag-Leffler function $$E_{\alpha}(z)\stackrel{\text{def}}{=}\sum_{n=0}^{\infty}\frac{z^n}{\Gamma(\alpha n+1)}\quad(\alpha > 0)$$ are real and negative whenever $$\alpha\geq 2$$. Wiman, A. 1905. “Über die Nullstellen der Funktionen $$E_a(x)$$.” Acta Mathematica 29: 217–34. Pólya, G. 1921. “Bemerkung Über Die Mittag-Lefflerschen Funktionen $$E_a(z)$$.” Tohoku Mathematical Journal, First Series 19: 241–48.
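For $n=2$ the series sums to $\cosh\sqrt z$, whose zeros $z=-\pi^2\bigl(k+\tfrac12\bigr)^2$ are real and negative, matching the Wiman–Pólya result quoted in the answer for $\alpha=2$. A quick numerical sanity check on a truncation (illustrative only, not a proof):

```python
from math import factorial, pi

def f(z, n=2, terms=40):
    """Truncated sum_{k>=0} z^k / (nk)!; 40 terms is ample for small |z|."""
    return sum(z ** k / factorial(n * k) for k in range(terms))

# For n = 2 the first zero should sit at z = -(pi/2)^2, on the real axis.
z_star = -((pi / 2) ** 2)
print(abs(f(z_star)))  # ~0 up to floating-point roundoff
```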
Should I fit my parameters with brute force

I am running analysis on data for this type of sensor my company makes. I want to quantify the health of the sensor based on three features using the following formula:

sensor health index = feature1 * A + feature2 * B + feature3 * C

We also need to pick a threshold so that if this index exceeds the threshold, the sensor is considered a bad sensor. We only have a legacy list which shows that about 100 sensors are bad. But now we have data for more than 10,000 sensors. Anything not in that 100-sensor list is NOT necessarily bad. So I guess the linear regression methods don't work in this scenario. The only way I can think of is brute-force fitting. Pseudocode is as follows:

# class definition for params (coefficients)
class params {
    a
    b
    c
    th
}

# dictionary of parameters and accuracy rate
map = {}
for thold in range(1..20):
    for a in range(1..10):
        for b in range(1..10):
            for c in range(1..10):
                params = new params[a, b, c, thold]
                for each sensor:
                    health_index = sensor.feature1*a + sensor.feature2*b + sensor.feature3*c
                    if health_index > thold:
                        # count toward accuracy for this parameter set
                map[params] = accuracy

# rank params based on accuracy
rank(map)
# the params with the highest accuracy give the best model
print map.index(0)

I really don't like this method since it uses five nested for loops, which is very inefficient. I wonder if there is a better way to do it. Using an existing library such as sk-learn, perhaps?
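One way to at least flatten the nested loops is `itertools.product`. The sketch below is illustrative only: the sensor records, the `bad_ids` set, and the "fraction of known-bad sensors flagged" accuracy are all made-up placeholders, not a recommended metric (a real score should also penalise flagging healthy sensors).

```python
from itertools import product

def grid_search(sensors, bad_ids, a_range, b_range, c_range, th_range):
    """sensors: iterable of (sensor_id, f1, f2, f3) tuples.
    Returns (best_accuracy, (a, b, c, threshold))."""
    best = (-1.0, None)
    for a, b, c, th in product(a_range, b_range, c_range, th_range):
        flagged = {sid for sid, f1, f2, f3 in sensors
                   if f1 * a + f2 * b + f3 * c > th}
        # toy "accuracy": fraction of the legacy bad list that gets flagged
        acc = len(flagged & bad_ids) / len(bad_ids)
        if acc > best[0]:
            best = (acc, (a, b, c, th))
    return best

# tiny smoke test with two fake sensors, one known bad
acc, params = grid_search([(1, 10, 0, 0), (2, 1, 0, 0)], {1},
                          range(1, 3), range(1, 3), range(1, 3), [5, 15])
```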
# Fanciful shapes can be created by using the implicit plotting capabilities of computer algebra systems.

(a) Graph the curve with equation
$y(y^2 - 1)(y - 2) = x(x - 1)(x - 2)$
At how many points does this curve have horizontal tangents? Estimate the $x$-coordinates of these points.
(b) Find equations of the tangent lines at the points (0, 1) and (0, 2).
(c) Find the exact $x$-coordinates of the points in part (a).
(d) Create even more fanciful curves by modifying the equation in part (a).

## (a) $$x \approx 0.42265$$ (b) $$y=-x+1 \text { and } y=\frac{1}{3} x+2$$ (c) $$x=1 \pm \frac{1}{3} \sqrt{3}$$ (d)

### Video Transcript

So for this problem it should be easy to take the implicit derivative and then compute the $x$-coordinates of the points where there is a horizontal tangent. The tangent line is horizontal, which means the derivative is zero, so we have to set the numerator of the derivative to zero: $3x^2 - 6x + 2 = 0$. By the quadratic formula, $x = \frac{1}{6}\bigl(6 \pm \sqrt{36 - 24}\bigr)$, that is, $x = 1 \pm \frac{1}{3}\sqrt{3}$. Those are the points where there is a horizontal tangent line. Part (a) also asks you to graph the curve, which I will leave to you with a graphing calculator. For part (b) we just have to figure out a point and a slope. At the point $x = 0$, $y = 1$, the slope is $2/(-2)$, which is $-1$, so the equation is $y - 1 = -x$. For $(0, 2)$ the numerator is $2$ and the denominator is $4\cdot 8 - 6\cdot 4 - 2\cdot 2 + 2 = 32 - 24 - 4 + 2 = 6$, so the slope is $\frac{1}{3}$ and the equation is $y - 2 = \frac{1}{3}x$. Those are the equations of the tangent lines.
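The slopes in part (b) and the roots in part (c) can be verified in a few lines of plain Python; the derivative formula below comes from differentiating both sides of the curve equation implicitly.

```python
import math

def dydx(x, y):
    # y(y^2-1)(y-2) = x(x-1)(x-2)  =>  (4y^3-6y^2-2y+2) y' = 3x^2-6x+2
    return (3 * x**2 - 6 * x + 2) / (4 * y**3 - 6 * y**2 - 2 * y + 2)

print(dydx(0, 1))  # -1.0, tangent y = -x + 1
print(dydx(0, 2))  # 0.333..., tangent y = x/3 + 2

# Horizontal tangents: 3x^2 - 6x + 2 = 0  =>  x = 1 ± sqrt(3)/3
roots = (1 - math.sqrt(3) / 3, 1 + math.sqrt(3) / 3)
```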
# Unexpected error using "stdin" in bedtools intersect as a piped command

From the bedtools intersect man page (for version 2.30.0):

-a BAM/BED/GFF/VCF file "A". Each feature in A is compared to B in search of overlaps. Use "stdin" if passing A with a UNIX pipe.

The stdin part, however, does not apply to version 2.27.1. The help on this version gives no advice for piping.

bwa mem ref fq1.fastq fq2.fastq | samtools view -b | bedtools intersect -a stdin -b blacklist.bed -v > align_black.sam

Error: unable to open file or unable to determine types for file stdin
- Please ensure that your file is TAB delimited (e.g., cat -t FILE).
- Also ensure that your file has integer chromosome coordinates in the expected columns (e.g., cols 2 and 3 for BED).

I also tried:

(map to bam) | bedtools intersect -a "stdin" -b blacklist.bed -v > align_black.sam
(map to bam) | bedtools intersect -a -b blacklist.bed -v > align_black.sam
(map to bam) | bedtools intersect -b blacklist.bed -v > align_black.sam

But these don't work either, either for the same or a different reason. Here is a site that tells me to use the above syntax: https://bedtools.readthedocs.io/en/latest/content/example-usage.html

What is the solution to this if I want to pipe everything together for my users?

With v2.27.1 (or any version I ever used), instead of stdin use -:

(...) | bedtools intersect -a - -b blacklist.bed -v > out.bam

Be aware that the output of this is a BAM, not a SAM file. It would be easier, though, to simply make a list of regions you want to keep, e.g. the complement of the blacklist file with the entire genome (bedtools complement), and then use the samtools view -L option to only keep overlapping reads. That saves you the run through bedtools.

• Thanks, I wonder why this isn't in the docs. I had my own workaround before you commented, but I'll accept this as the answer. Is there a reason you suggest not using bedtools?
– Jeff Apr 19 at 13:59
• @Jeff As samtools can do the same and you run it anyway, you save one additional tool/pipe/core. That is the only reason. Not sure how the performance compares. Apr 20 at 7:34

Here's a way to skip conversion to BAM and pipe between tools, which performs the same set operations:

$ bwa mem ref fq1.fastq fq2.fastq | sam2bed - | bedops -e 1 - blacklist.bed > blacklisted_reads.bed

With this toolkit, the hyphen (-) follows a Unix convention of serving as a placeholder for stdin.

While I have chosen ATpoint's answer as the official answer, another possible syntax to use is

bedtools intersect -a stdin -b blacklist.bed -v < <(map to bam) > align_black.bam

The main issue with this syntax is that it obfuscates the direction of flow for the pipeline; piping step 2 after step 1, as in the official answer, is better.
# How to write this small program better?

This is just a fun little exercise I had to do for a homework once (in Java rather than Clojure though). Basically, the goal is to find the number of different coin stacks you can build with the coins 1, 2, 5 and 10 to form a number N (I believe a closed-form solution for this exists, but that isn't what this is about). My solution here works, but I'm not quite happy with it:

• Explicit loop. Not sure how to do away with it though.
• A few functions there that I think should be core functions, but I can't find them.
• Even if they aren't in core, my functions could probably be written somewhat more concisely.

Anyways, here's my code (by the way, the function is called "fast-iterative" because the exercise was about calculating these values recursively. Which I think is insane due to the branching factor, but whatever):

(ns test.coin-stacks)

(def maximum (partial reduce max))

(defn shift-vector
  "Shifts v one to the left, insert shift-val at the right"
  [v shift-val]
  (conj (vec (rest v)) shift-val))

(defn update
  "Updates ks in m with f applied to ks. I can't believe I have to write this myself."
  [m ks f]
  (apply assoc m (interleave ks (map (comp f m) ks))))

(defn fast-iterative-coinstacks
  "Returns the number of ways to form a coin stack with a total value of n with coins"
  [n coins]
  (let [max-coin (maximum coins)
        initial-stacks (apply conj [1] (repeat max-coin 0))]
    (loop [stacks initial-stacks
           iteration 0]
      (if-not (< iteration n)
        (first stacks)
        (recur (shift-vector (update stacks coins (partial +' (stacks 0))) 0)
               (inc iteration))))))

-

(defn shift-vector
  "Shifts v one to the left, insert shift-val at the right"
  [v shift-val]
  (conj (subvec v 1) shift-val))

(defn update-in-all
  "Updates ks in m with f applied for each value at k."
  [m ks f]
  (reduce (fn [m' k] (update-in m' [k] f)) m ks))

(defn fast-iterative-coinstacks
  "Returns the number of ways to form a coin stack with a total value of n with coins"
  [n coins]
  (let [max-coin (reduce max coins)
        initial-stacks (into [1] (repeat max-coin 0))
        process-stacks (fn [stacks]
                         (-> stacks
                             (update-in-all coins (partial + (first stacks)))
                             (shift-vector 0)))]
    (-> (iterate process-stacks initial-stacks)
        (nth n)
        first)))

Some explanations:

• update-in works both with vectors and maps, and can update nested values as well,
• the threading macro -> makes the flow of your code easier to follow,
• iterate is a really convenient function that given f and x returns a lazy sequence of x, (f x), (f (f x))...
• you can do a lot of things with reduce!

-

I would write (def maximum (partial reduce max)) as:

(defn max-coll [coll]
  (apply max coll))

although writing a special function for (apply max coll) seems a bit overdone to my taste. Another minor rewrite:

(apply conj [1] (repeat max-coin 0)) => (into [1] (repeat max-coin 0))

-

There's a nice recursive solution to this problem:

(defn options [total available-coins]
  (if (seq available-coins) ;; have we got any coins left?
    (let [[[coin max-available] & more-coins] (seq available-coins)
          needed (quot total coin)
          rem (mod total coin)]
      (mapcat (fn [n]
                (map #(if (> n 0) (merge % {coin n}) %)
                     (options (- total (* coin n)) more-coins))) ;; recursive call
              (range 0 (inc (min needed max-available))))) ;; all possible quantities
    (if (== total 0)
      [{}]  ;; empty solution, no coins required
      []))) ;; no solution

This outputs the solutions for making a total using a map of coins -> count, e.g.

(options 50 {10 3 20 3}) => ({10 1, 20 2} {10 3, 20 1})

Obviously, you can just count the number of solutions if you want the number of different stacks that could be made.

-

I don't understand this one, what do you pass to this function and what do you return?
– Cubic Oct 29 '12 at 9:26

@Cubic: You pass 1) the value of the wanted coin stack (total) and 2) "a map of coins -> count" (available-coins), and it returns a list of maps of coins -> count. – user272735 Oct 29 '12 at 13:03

In that case, this really doesn't solve the same problem. The problem I described doesn't actually put any limits on the coins that can be used, and the ordering of the coins in the stack matters (hence the metaphor of a stack). – Cubic Oct 29 '12 at 14:04

Your guess is almost right. Because your shift-vector function simply performs dequeue and enqueue, it can be rewritten using clojure.lang.PersistentQueue:

(defn shift-vector [q x]
  (-> q pop (conj x)))

You can use Clojure queues by starting with clojure.lang.PersistentQueue/EMPTY.

By the way, your task seems to be a typical DP (dynamic programming) problem, since you mentioned that the order of coins mattered. I may show you another solution: an O(n) DP using an array destructively.

(defn solve [n coins]
  (let [dp (doto (long-array (inc n))
             (aset 0 1))]
    (loop [i 0]
      (if (< i n)
        (do
          (doseq [c coins]
            (let [j (+ i c)]
              (if (<= j n)
                (aset dp j (+ (aget dp i) (aget dp j))))))
          (recur (inc i)))
        (aget dp n)))))

Examples:

user> (solve 1 [1 2 5 10])
1
user> (solve 2 [1 2 5 10])
2
user> (solve 5 [1 2 5 10])
9
user> (solve 82 [1 2 5 10])
7637778505022614185

-

I didn't use PersistentQueue because I wanted random access for my algorithm. I also didn't want to use more space than strictly necessary (this might be an unnecessary concern, since it takes pretty long with a million elements anyways), so I create a smaller vector than you do. I don't know what "dp" means or how your solution is different from mine though. – Cubic Oct 31 '12 at 16:10
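For comparison, here is the same order-matters O(n) DP sketched in Python (my own translation of the Clojure solve above, not the original author's code): dp[i] counts the ordered coin sequences summing to i, and each value is the sum over the last coin placed.

```python
def count_stacks(n, coins=(1, 2, 5, 10)):
    # dp[i] = number of ordered coin sequences (stacks) summing to i
    dp = [0] * (n + 1)
    dp[0] = 1
    for i in range(1, n + 1):
        dp[i] = sum(dp[i - c] for c in coins if c <= i)
    return dp[n]

print(count_stacks(5))  # 9: e.g. 5; 1+2+2 in three orders; 1+1+1+2 in four; 1*5
```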
# Probabilistic graphical models over continuous index sets

Placeholder for my notes on probabilistic graphical models over a continuum, i.e. with possibly-uncountably many nodes in the graph; or, put another way, where the random field has an uncountable index set (but some kind of structure: a metric space, say). There is much formalising to be done here, which I do not propose to attempt right now.

Here's a concrete example. Consider a Gaussian process whose covariance kernel $$K$$ is continuous and of bounded support. Let it be over index space $$\mathcal{T}:=\mathbb{R}^n$$ for the sake of argument. It implicitly defines an undirected graphical model where, for any given observation index $$t_0\in\mathcal{T}$$, the value $$x_0$$ is influenced by the values of the field at $$\operatorname{supp}\{K(\cdot, t_0)\}$$ (or really a continuum of different strengths of influence, depending on the magnitude of the kernel). Does this kind of factoring buy us anything? Does the standard finite-dimensional distribution argument get us anywhere in this setting if we can introduce some conditional independence?

I suspect that (Lauritzen 1996) is sufficiently general to cover this, but TBH it has been long enough since I read it that I can't remember. (Eichler, Dahlhaus, and Dueck 2016) is probably an example of what I mean; they construct a continuous-index directed graphical model for point-process fields, based on limiting cases of a discrete field, which seems like the obvious method of attack.

Bishop, Christopher M. 2006. Pattern Recognition and Machine Learning. Information Science and Statistics. New York: Springer.

Eichler, Michael, Rainer Dahlhaus, and Johannes Dueck. 2016. “Graphical Modeling for Multivariate Hawkes Processes with Nonparametric Link Functions.” Journal of Time Series Analysis, January. https://doi.org/10.1111/jtsa.12213.

Lauritzen, Steffen L. 1996. Graphical Models. Clarendon Press.
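The concrete example can be played with numerically. Below is a minimal NumPy sketch, assuming a triangular "tent" kernel as a stand-in for "continuous and of bounded support": covariances vanish between points further than the support width apart, and a finite-dimensional marginal of the field can still be sampled via a jittered Cholesky factor.

```python
import numpy as np

def tent(s, t):
    # triangular kernel: positive definite on R, support |s - t| < 1
    return max(0.0, 1.0 - abs(s - t))

ts = np.linspace(0.0, 5.0, 51)  # a finite grid of indices in T = R
K = np.array([[tent(s, t) for t in ts] for s in ts])

# points more than the support width apart are a priori uncorrelated
assert K[0, -1] == 0.0

# sample one finite-dimensional marginal of the field
L = np.linalg.cholesky(K + 1e-9 * np.eye(len(ts)))
x = L @ np.random.default_rng(0).standard_normal(len(ts))
```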
4 # Problem Five (Hypothesis Testing) Suppose want to test if Yifu claims to have Extrasensory Perception (ESP) , and that if draw card drawn randomly and with replacem... ## Question ###### Problem Five (Hypothesis Testing) Suppose want to test if Yifu claims to have Extrasensory Perception (ESP) , and that if draw card drawn randomly and with replacement from an ordinary deck of cards, he can guess the suit without seeing the cardam sceptical and propose to test his claim_ formulate the hypothesis that when he tells me the suit; he is just guessing and the probability of his getting it right is 25%Therefore perform 100 trials where shuffle the deck, draw card, ask him to guess the Problem Five (Hypothesis Testing) Suppose want to test if Yifu claims to have Extrasensory Perception (ESP) , and that if draw card drawn randomly and with replacement from an ordinary deck of cards, he can guess the suit without seeing the card am sceptical and propose to test his claim_ formulate the hypothesis that when he tells me the suit; he is just guessing and the probability of his getting it right is 25% Therefore perform 100 trials where shuffle the deck, draw card, ask him to guess the suit, and then replace the card: decide that since ESP is about guessing the card"s suit correctly; will make this one-sided (right) test with LOS 0.05. We perform the test and Yifu identifies 32 of the cards correctly: Now at this point; realize that if use the normal distribution, it will be an estimate and am not sure how well it will work; because B(100,0.25) is not symmetric. So decide for maximum precision | Il use the binomial directly; but how to calculate an LOS that goes along with discrete distribution (think about how to calculate the top % of B(3,1/2)4)? So decide to reason backwards: I'Il calculate P(X > 32) and see if it is ess than my LOS (which would mean it has to be in the critical region). (A) Perform the test as specified. 
Reject if P(X > 32) < LOS and fail to reject otherwise (B) What is the smallest number of cards Yifu would have to identify to make me reject my hypothesis at the % LOS? Hint: use binom cdf #### Similar Solved Questions ##### HxamIHediia918 uilinosAaQ Tedl mc Khat KOU Wanl [ahe pdf ofuhe rindom vanableEiven by: X=3,0 25 + 2a X+(6 xIB X =5,7 where 25 + 2a otherwvisef (x;ap)In the paf presentedthe perissible values of 4 and have been leli off; Rasea how the modified configuration ofthe Was obluined, deterine the appronmuite sct permissible vales for = andPennissible valucs for 4;Pentnissible values for B:According the pdL the PLAmultiple of 5)Derenninmean of! Simplify much possible; Fealizing anSswer will [ thal Your L HxamIHediia918 ui linos Aa Q Tedl mc Khat KOU Wanl [a he pdf ofuhe rindom vanable Eiven by: X=3,0 25 + 2a X+(6 xIB X =5,7 where 25 + 2a otherwvise f (x;ap) In the paf presentedthe perissible values of 4 and have been leli off; Rasea how the modified configuration ofthe Was obluined, deterine the app... ##### Which of the following path of light is correct? Light incident at right angle with one of the faces of a triangular prism (nt 1.62) in water (n2 1.33) as shown in the figure. Take the angle 0 = 508.of light passes from liquid to glass. The index of refraction of the glass is A ray 1.52, what is the index of refraction of the liquid? 3 Which of the following path of light is correct? Light incident at right angle with one of the faces of a triangular prism (nt 1.62) in water (n2 1.33) as shown in the figure. Take the angle 0 = 508. of light passes from liquid to glass. The index of refraction of the glass is A ray 1.52, what is th... ##### Which of the following species is NOT a resonance form of the following species?A)B)C)D) None of the above Which of the following species is NOT a resonance form of the following species? A) B) C) D) None of the above... 
# Number Theory (math.NT)

• In this article, we use the Combinatorial Nullstellensatz to give new proofs of the Cauchy-Davenport and the Dias da Silva-Hamidoune theorems, and to generalize a previous addition theorem of the author. Precisely, this last result proves that for a set $A \subset \mathbb{F}_p$ such that $A \cap (-A) = \emptyset$, the cardinality of the set of subsums of at least $\alpha$ pairwise distinct elements of $A$ is $|\Sigma_\alpha(A)| \ge \min\bigl(p, |A|(|A|+1)/2 - \alpha(\alpha+1)/2 + 1\bigr)$; the only cases previously known were $\alpha \in \{0, 1\}$. The Combinatorial Nullstellensatz is used, for the first time, in a direct and in a reverse way. The direct (and usual) way states that if some coefficient of a polynomial is non-zero then there is a solution or a contradiction. The reverse way relies on the coefficient formula (equivalent to the Combinatorial Nullstellensatz). This formula gives an expression for the coefficient as a sum over any cartesian product. For these three addition theorems, some arithmetic progressions (that reach the bounds) allow us to consider cartesian products such that the coefficient formula is a sum all of whose terms are zero except exactly one. Thus we can conclude the proofs without computing the appropriate coefficients.
• The classical Kronecker limit formula describes the constant term in the Laurent expansion at the first-order pole of the non-holomorphic Eisenstein series associated to the cusp at infinity of the modular group. Recently, the meromorphic continuation and Kronecker limit type formulas were investigated for non-holomorphic Eisenstein series associated to hyperbolic and elliptic elements of a Fuchsian group of the first kind by Jorgenson, Kramer and the first named author. In the present work, we realize averaged versions of all three types of Eisenstein series for $\Gamma_0(N)$ as regularized theta lifts of a single type of Poincaré series, due to Selberg.
Using this realization and properties of the Poincaré series we derive the meromorphic continuation and Kronecker limit formulas for the above Eisenstein series. The corresponding Kronecker limit functions are then given by the logarithm of the absolute value of the Borcherds product associated to a special value of the underlying Poincaré series. • We find nice representatives for the 0-dimensional cusps of the degree $n$ Siegel upper half-space under the action of $\Gamma_0(\stufe)$. To each of these we attach a Siegel Eisenstein series, and then we make explicit a result of Siegel, realizing any integral weight average Siegel theta series of arbitrary level $\stufe$ and Dirichlet character $\chi_L$ modulo $\stufe$ as a linear combination of Siegel Eisenstein series. • Feb 22 2017 math.NT arXiv:1702.06487v1 I solve here a question of Vladimir Reshetnikov in Mathoverflow (question 261649) about the values of Fabius function. Namely, I prove that the numbers $R_n:=2^{-\binom{n-1}{2}}(2n)! F(2^{-n})\prod_{m=1}^{\lfloor n/2\rfloor}(2^{2m}-1)$ are integers. We show also some other arithmetical properties of the values of Fabius function at dyadic points. • Let $(\mathbb{T}_f,\mathfrak{m}_f)$ denote the mod $p$ local Hecke algebra attached to a normalised Hecke eigenform $f$, which is a commutative algebra over some finite field $\mathbb{F}_q$ of characteristic $p$ and with residue field $\mathbb{F}_q$. By a result of Carayol we know that, if the residual Galois representation $\overline{\rho}_f:G_\mathbb{Q}\rightarrow\mathrm{GL}_2(\mathbb{F}_q)$ is absolutely irreducible, then one can attach to this algebra a Galois representation $\rho_f:G_\mathbb{Q}\rightarrow\mathrm{GL}_2(\mathbb{T}_f)$ that is a lift of $\overline{\rho}_f$. 
We will show how one can determine the image of $\rho_f$ under the assumptions that $(i)$ the image of the residual representation contains $\mathrm{SL}_2(\mathbb{F}_q)$, $(ii)$ that $\mathfrak{m}_f^2=0$ and $(iii)$ that the coefficient ring is generated by the traces. As an application we will see that the methods that we use allow us to deduce the existence of certain $p$-elementary abelian extensions of big non-solvable number fields. • We show that a Born-Infeld soliton can be realised either as a spacelike minimal graph or timelike minimal graph over a timelike plane or a combination of both away from singular points. We also obtain some exact solutions of the Born-Infeld equation from already known solutions to the maximal surface equation. Further we present a method to construct a one-parameter family of complex solitons from a given one parameter family of maximal surfaces. Finally, using Ramanujan's Identities and the Weierstrass-Enneper representation of maximal surfaces, we derive further non-trivial identities. • We show that (under mild assumptions) the generating function of log homology torsion of a knot exterior has a meromorphic continuation to the entire complex plane. As corollaries, this gives new proofs of (a) the Silver-Williams asymptotic, (b) Fried's theorem on reconstructing the Alexander polynomial (c) Gordon's theorem on periodic homology. Our results generalize to other rank 1 growth phenomena, e.g. Reidemeister-Franz torsion growth for higher-dimensional knots. We also analyze the exceptional cases where the meromorphic continuation does not exist. • We prove that the family of non-cocompact non-commensurable lattices $PSL(2,O_F)$ in $PSL(2, R^{r_1}\oplus C^{r_2})$ with F running over number fields with fixed archimedean signature $(r_1, r_2)$ has the limit multiplicity property. 
• Feb 22 2017 math.NT arXiv:1702.06422v1 We realize that geometric polynomials and p-Bernoulli polynomials and numbers are closely related with an integral representation. Therefore, using geometric polynomials, we extend some properties of Bernoulli polynomials and numbers such as recurrence relations, telescopic formula and Raabe's formula to p-Bernoulli polynomials and numbers. In particular cases of these results, we establish some new results for Bernoulli polynomials and numbers. Moreover, we evaluate a Faulhaber-type summation in terms of p-Bernoulli polynomials. • In this paper, using geometric polynomials, we obtain a generating function of p-Bernoulli numbers. As a consequences this generating function, we derive closed formulas for the finite summation of Bernoulli and harmonic numbers involving Stirling numbers of the second kind. • We show that whenever $\delta>0$, $\eta$ is real and constants $\lambda _i$ satisfy some necessary conditions, there are infinitely many prime triples $p_1,\, p_2,\, p_3$ satisfying the inequality $|\lambda _1p_1 + \lambda _2p_2 + \lambda _3p_3+\eta|<(\max p_j)^{-1/12+\delta}$ and such that, for each $i\in\{1,2,3\}$, $p_i+2$ has at most $28$ prime factors. Mario Jun 08 2016 06:58 UTC Too bad, the paper has been withdrawn due to a mistake :-/ Māris Ozols Mar 19 2016 16:34 UTC This result has caused quite a lot of excitement in number theory (see the articles in [Quanta Magazine][1] and [Nature News][2]). It turns out that the last digits of consecutive primes are not uniformly distributed but rather tend to be anti-correlated. For example, in base 10 the last digit of ...(continued) Zoltán Zimborás Sep 18 2015 04:26 UTC I can only quote Derrick Stolee: 'Terry Tao just dropped a bomb'. :) Charles Greathouse Nov 17 2014 18:38 UTC The basic idea of this paper is to test whether the decimal digits of three special constants $(\pi,e,\sqrt2)$ act as though chosen from a uniform distribution, based on their first ten million digits. 
In particular the author studies the sum of the digits compared to the expected behavior by the la ...(continued) Noon van der Silk Jun 20 2013 07:29 UTC This paper seems pretty interesting, really. (In how it would relate to the algorithm of Shor). Does anyone know more about this work? Is it possible to improve the restriction on the characteristic size? Is that even an important restriction? Noon van der Silk Jun 22 2013 01:22 UTC Thanks Anthony and Juan. There's another blog post on this here: http://ellipticnews.wordpress.com/2013/06/21/quasi-polynomial-time-algorithm-for-discrete-logarithm-in-finite-fields-of-smallmedium-characteristic/. Juan Bermejo-Vega Jun 20 2013 15:09 UTC @Noon Silk. I do not know what to say about heuristic 3, but, in relation to your first question, let's assume that all heuristics are valid and apply theorem 2 to solve the Discrete Logarithm over Z_q*, where q is prime. As far as I understood (someone please correct me if I am wrong) the algorithm ...(continued) Anthony Jun 20 2013 07:42 UTC There was a discussion about a previous paper of Joux (with a weaker result) on this blog post: https://rjlipton.wordpress.com/2013/05/06/a-most-perplexing-mystery Alessandro Jul 12 2013 03:45 UTC
## The Annals of Mathematical Statistics

### Sequential Selection of Experiments

K. B. Gray, Jr.

#### Abstract

The problem of sequential selection of experiments, with fixed and optional stopping, is considered. Conditions are given which allow selection, stopping and terminal action rules to be based on a sequence $\{T_j\}$ of statistics, where $T_j$ is a function of past observations $\mathbf{X}^j = (X_1, \cdots, X_j)$ and experiment selections $\mathbf{E}^j = (E_1, \cdots, E_j)$. Randomized stopping, selection, and terminal action rules are allowed, and all probability distributions are defined by densities relative to $\sigma$-finite measures over Euclidean spaces. Here we give a heuristic description of the principal results for the case of optional stopping. At each time $j$ the random variable $X_j$ is observed and a decision is made to stop or continue. If the procedure is stopped, a terminal action $A$ is taken. If it is continued, an experiment $E_{j+1}$, to be performed at time $j + 1$, is chosen. At time $j$, all decisions are based on $\mathbf{X}^j,\mathbf{E}^j$, the past observations and experiment selections. Upon stopping, and taking action $A$, a loss $L(\theta, A)$, where $\theta$ is the unknown state of nature, is incurred. The sampling cost of stopping at $j$ is $C_j(\theta, \mathbf{X}^j, \mathbf{E}^j)$. Let the random variable $N$ denote the random stopping time. A selection rule $\gamma = (\gamma_0, \gamma_1, \cdots)$ is defined by the sequence of conditional densities $\gamma_j(e_{j+1}\mid\mathbf{x}^j, \mathbf{e}^j)$, a stopping rule $\mathbf{\phi} = (\phi_0, \phi_1, \cdots)$ by the probabilities $\phi_j(\mathbf{x}^j,\mathbf{e}^j) = P\{N = j\mid N \geqq j, \mathbf{x}^j,\mathbf{e}^j\}$, and a terminal action rule $\delta = (\delta_0, \delta_1, \cdots)$ by the conditional densities $\delta_j(a\mid\mathbf{x}^j,\mathbf{e}^j)$.
Definition of the population densities $f_\theta(x_{j+1}\mid\mathbf{x}^j, \mathbf{e}^{j+1})$ for $j = 0, 1, 2, \cdots$ completely fixes the probability structure. Define $\{T_j\}$ to be parameter sufficient (PARS) if, for $j = 0, 1, 2, \cdots$, $\operatorname{Dist}_{\theta,\gamma}(\mathbf{X}^j, \mathbf{E}^j\mid T_j)$ is independent of $\theta$ for all $\gamma$, and policy sufficient (POLS) if, for $j = 0, 1, 2, \cdots$, $\operatorname{Dist}_{\theta,\mathbf{\phi},\gamma} (T_{j+1}\mid T_j, E_{j+1}, N \geqq j + 1)$ is independent of $\mathbf{\phi}, \mathbf{\gamma}$ for all $\theta$.

THEOREM. If $\{T_j\}$ is PARS, then the class of policies $\{\mathbf{\phi}, \mathbf{\gamma}, \mathbf{\delta}^0\}$, where $\mathbf{\delta}^0$ is based on $\{T_j\}$, is essentially complete.

THEOREM. If $\{T_j\}$ is PARS and POLS, and the sampling cost is of the form $C_j(\theta, T_j)$, then the class of policies $\{\mathbf{\phi}^0, \mathbf{\gamma}^0, \mathbf{\delta}^0\}$, where $\mathbf{\phi}^0, \mathbf{\gamma}^0, \mathbf{\delta}^0$ are based on $\{T_j\}$, is essentially complete.

Conditions are given to aid in the verification of PARS and POLS. The theorems are applied to examples, including versions of the two armed bandit problem.

#### Article information

Source: Ann. Math. Statist., Volume 39, Number 6 (1968), 1953-1977.

Dates: First available in Project Euclid: 27 April 2007

Permanent link to this document: https://projecteuclid.org/euclid.aoms/1177698025

Digital Object Identifier: doi:10.1214/aoms/1177698025

Mathematical Reviews number (MathSciNet): MR243690

Zentralblatt MATH identifier: 0187.16202

#### Citation

Gray, K. B. Sequential Selection of Experiments. Ann. Math. Statist. 39 (1968), no. 6, 1953--1977. doi:10.1214/aoms/1177698025. https://projecteuclid.org/euclid.aoms/1177698025
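As a loose illustration of the framework (my own toy example, not taken from the paper): in a two-armed Bernoulli bandit, the selection rule at each time can be based on the statistic $T_j$ = (success count, pull count) per arm rather than on the full observation history. The epsilon-greedy rule below is only a stand-in for the optimal policies the paper characterizes.

```python
import random

def run_bandit(p=(0.3, 0.7), horizon=200, eps=0.1, seed=0):
    """Two-armed Bernoulli bandit whose policy depends on the past
    only through T_j = (successes, pulls) per arm."""
    rng = random.Random(seed)
    succ, pulls = [0, 0], [0, 0]
    for _ in range(horizon):
        if 0 in pulls or rng.random() < eps:
            arm = rng.randrange(2)  # explore (or initialise both arms)
        else:
            # exploit the arm with the higher empirical success rate
            arm = 0 if succ[0] / pulls[0] >= succ[1] / pulls[1] else 1
        reward = 1 if rng.random() < p[arm] else 0
        succ[arm] += reward
        pulls[arm] += 1
    return succ, pulls
```

The point of the sketch is only that the whole trajectory is a function of the sufficient statistics and the randomization, which is the situation the PARS/POLS conditions formalize.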
# Natural Language Processing: An Introduction to Predictive Text

Views and opinions expressed are solely my own.

## Introduction

This post explains the mathematics behind predictive text in natural language processing (NLP), as well as a brief simulation.

## Some Jargon

The objective of text normalization is analogous to the concept of tidying data when working with structured data: cleaning up text so that it is easier to work with. Several steps go into text normalization:

• Tokenization: separating text into individual subsets, such as word tokenization (separating text into its word subsets), or sentence tokenization (separating text into sentence subsets).
• Lemmatization: the act of converting all words with the same root to that root (e.g., “cat” and “cats” would be lemmatized to “cat”).

There are various algorithms that exist for executing these steps. I won’t be covering these in detail for the current post. However, there are some things that are worth mentioning:

• What is a word? The answer to this question depends on the application. Filler words, such as “uh” or “um,” may occur when doing speech-to-text transcription and may not be desirable to use in normalized text in some cases.
• What is a sentence? In English, we can tokenize sentences by looking for characters such as periods, exclamation marks, etc. - but not all instances of such characters can be used to tokenize sentences; consider, for example, the word “Ph.D.”
• The Porter stemming algorithm is one example of a stemmer; stemming is a simpler, rule-based cousin of lemmatization.

An $$n$$-gram is a sequence of words of length $$n$$, given by $$w_1, \dots, w_n$$, which may also be denoted $$w_{1:n}$$.
Using the language of probability, we may let $$W_1, \dots, W_n$$ be a sequence of random variables and consider the joint probability mass function

$\begin{equation*} \mathbb{P}(W_1 = w_1, \dots, W_n = w_n) = p(w_1, \dots, w_n)\text{.} \end{equation*}$

By the probability chain rule, we may write

$\begin{equation*} p(w_1, \dots, w_n) = p(w_1)p(w_2 \mid w_1) \cdots p(w_n \mid w_{1:n-1}) = p(w_1)\prod_{i=2}^{n}p(w_i \mid w_{1:i-1})\text{.} \end{equation*}$

The product above is, of course, extremely difficult to calculate, but we may choose to implement any number of simplifying assumptions to make the product tractable. For example, if we assume the Markov property holds,

$\begin{equation*} p(w_1, \dots, w_n) = p(w_1)p(w_2 \mid w_1) \cdots p(w_n \mid w_{1:n-1}) = p(w_1)\prod_{i=2}^{n}p(w_i \mid w_{i-1})\text{.} \end{equation*}$

In NLP, the Markov property is known as the bigram ($$2$$-gram) model - i.e., the conditional probability only includes two words. In general, the $$k$$-gram model assumes that

$\begin{equation*} p(w_i \mid w_{1:i-1}) = p(w_i \mid w_{i-1}, \dots, w_{i-(k-1)}) = p(w_i \mid w_{i-(k-1):i-1}) \end{equation*}$

for appropriately chosen values of $$k$$. We estimate these probabilities via maximum likelihood estimation, drawing from a corpus of $$N > n$$ words.1 For the $$k$$-gram model, the maximum likelihood estimator is given by

$\begin{equation*} \hat{p}(w_i \mid w_{i-(k-1):i-1}) = \dfrac{C(w_{i-(k-1):i-1}, w_i)}{C(w_{i-(k-1):i-1})} \end{equation*}$

where $$C(w_{i-(k-1):i-1}, w_i)$$ is the count of the word sequence $$w_{i-(k-1):i-1}w_i$$, and $$C(w_{i-(k-1):i-1})$$ is the count of the word sequence $$w_{i-(k-1):i-1}$$. One can find the word $$w_i$$ which maximizes the above probability based on a corpus so as to predict text.

## Simulating Shakespearean Text

We obtain all of Shakespeare’s sonnets in a tidy format using bardr, and then do some additional data cleansing.
```r
library(bardr)
library(dplyr)
library(tidyr)
library(tidytext)
library(stringr)

sonnets <- all_works_df %>%
  # show only Sonnets
  filter(name == "Sonnets") %>%
  # Remove "THE SONNETS", "THE END", and
  # number labels for the sonnets
  filter(!grepl("THE SONNETS", content) &
           !grepl("THE END", content) &
           !grepl("^( )*[0-9]+( )*", content))

# replace the character \032 with an apostrophe
sonnets$content <- gsub("\032", "'", sonnets$content)
```

We will be using a trigram model to perform the simulation. Thus, we gather all unigrams, bigrams, and trigrams from the sonnets and compute their counts.

```r
# gather unigrams
unigrams <- sonnets %>%
  unnest_tokens(unigram, content, token = "ngrams", n = 1)

# gather bigrams
bigrams <- sonnets %>%
  unnest_tokens(bigram, content, token = "ngrams", n = 2)

# gather trigrams
trigrams <- sonnets %>%
  unnest_tokens(trigram, content, token = "ngrams", n = 3)

rm(sonnets)

# compute counts by unigram, trigram, and bigram
unigrams <- unigrams %>%
  group_by(unigram) %>%
  summarize(count = n()) %>%
  as.data.frame()

bigrams <- bigrams %>%
  group_by(bigram) %>%
  summarize(count = n()) %>%
  as.data.frame()

trigrams <- trigrams %>%
  group_by(trigram) %>%
  summarize(count = n()) %>%
  as.data.frame()
```

Then, we create a third data frame with a predicted word conditioned on a preceding bigram.

```r
# extract predicted (third) word from trigram
trigrams_pred <- trigrams %>%
  # look for a word, followed by a space,
  # then a second word and then a second space.
  # replace with blank
  mutate(pred_word = gsub("((\\w|')+\\s*(\\w|')+)\\s", "", trigram)) %>%
  # remove the last word from the trigram:
  # look for one space, a word (with possibly an apostrophe)
  # right at the end
  mutate(prior_bigram = sub(" {1}(\\w|')+$", "", trigram)) %>%
  select(pred_word, prior_bigram, trigram) %>%
  distinct()
```

We compute the maximum likelihood estimates of these probabilities using the formula previously given by joining these data.
```r
trigrams_pred <- trigrams_pred %>%
  # join to preceding bigrams data, gather count of bigrams
  left_join(bigrams, by = c("prior_bigram" = "bigram")) %>%
  # join to trigrams data, gather count of trigrams
  left_join(trigrams, by = "trigram", suffix = c(".bigram", ".trigram")) %>%
  # compute maximum likelihood estimate
  mutate(prob = count.trigram/count.bigram) %>%
  # some cleansing
  select(pred_word, prior_bigram, prob) %>%
  arrange(pred_word, prior_bigram)

# clear memory
rm(bigrams, trigrams)
```

Now, we simulate some text. We will begin with a prespecified bigram, and choose subsequent words based on maximum probabilities conditioned on the prior bigram.

```r
# function to choose the most likely next word, given a
# preceding bigram
next_word <- function(preceding_bigram, trigrams_df, seed) {
  set.seed(seed)
  preceding_bigram <- tolower(preceding_bigram)
  # show all possible next words based on the bigram,
  # and keep the one with the highest probability
  trigrams_df <- trigrams_df %>%
    filter(prior_bigram == preceding_bigram) %>%
    filter(prob == max(prob))
  # if there are multiple words with the same probability,
  # randomly (uniformly) choose one of the predicted words
  if (nrow(trigrams_df) > 1) {
    next_word <- sample(trigrams_df$pred_word, size = 1)
  } else {
    # otherwise, just choose the corresponding word
    next_word <- trigrams_df$pred_word
  }
  return(next_word)
}

# generate some text, with an initial bigram
# sim_length is the number of words of the output,
# which must be greater than 2.
```
```r
text_gen <- function(bigram_init, trigrams_df, seed = 50, sim_length) {
  # checks on sim_length
  sim_length <- as.integer(sim_length)
  if (sim_length <= 2) {
    stop("sim_length must be greater than 2!")
  }
  # generate next word based on maximum likelihood estimate
  out <- paste(bigram_init, next_word(bigram_init, trigrams_df, seed))
  # stop if the desired sim_length is 3, otherwise, keep adding words
  if (sim_length >= 4) {
    for (i in 1:(sim_length - 3)) {
      # extract the most recent bigram
      last_bigram <- str_extract(out, "(\\w|')+ (\\w|')+$")
      # append the next word
      out <- paste(out, next_word(last_bigram, trigrams_df, seed))
      out <- str_trim(out, side = "both")
    }
  }
  return(out)
}
```

We then use the function above to simulate some Shakespearean phrases, with prespecified bigrams.

```r
# "A friend"
text_gen("a friend", trigrams_pred, seed = 30, sim_length = 10)
## [1] "a friend came debtor for my sake even so being"

# "A fool"
text_gen("a fool", trigrams_pred, seed = 30, sim_length = 15)
## [1] "a fool is love that in guess they measure by thy beauty and thy love's"

# "There is"
text_gen("there is", trigrams_pred, seed = 40, sim_length = 8)
## [1] "there is such strength and warrantise of skill"

# "In The"
text_gen("in the", trigrams_pred, seed = 20, sim_length = 14)
## [1] "in the world will wail thee like a lamb he could his looks translate"

text_gen("in the", trigrams_pred, seed = 30, sim_length = 14)
## [1] "in the world will wail thee like a winter hath my added praise beside"

# "Thou Art"
text_gen("thou art", trigrams_pred, seed = 20, sim_length = 14)
## [1] "thou art as fair in knowledge as in hue all hues in his thoughts"

text_gen("thou art", trigrams_pred, seed = 30, sim_length = 15)
## [1] "thou art as fair in knowledge as in hue all hues in his fiery race"
```

## Next Steps and Conclusion

In the above, I had purposefully chosen bigrams that I knew were likely to work, but one of the disadvantages of the maximum likelihood approach given is that it relies on exact matching to
generate phrases. That is, if a bigram does not currently exist in the data provided, the code above would error out. There are probably probabilistic matching methods and other sophisticated ways to deal with these problems. One may also run into numerical underflow problems given the above procedure. We will discuss this in a subsequent post. It is also worth noting that we did not use cross-validation for predicting subsequent words, or any sort of smoothing techniques to deal with zero-frequency bigrams. These will likely be explored in a future post.

## References

Jurafsky, D. & Martin, J. H. (2020). Speech and Language Processing (3rd ed.). August 21, 2021, https://web.stanford.edu/~jurafsky/slp3/.

1. Details are provided in Lei Mao’s Log Book at https://leimao.github.io/blog/Maximum-Likelihood-Estimation-Ngram/.↩︎

##### Yeng Miller-Chang

I am a Senior Data Scientist - Global Knowledge Solutions with General Mills, Inc. Views and opinions expressed are my own.
###### 小坏蛋_千千

I was caught in a heavy rain!

## Description

Today Sonya learned about long integers and invited all her friends to share the fun. Sonya has an initially empty multiset with integers. Friends give her t queries, each of one of the following types:

1. + ai — add non-negative integer ai to the multiset. Note that she has a multiset, thus there may be many occurrences of the same integer.
2. - ai — delete a single occurrence of non-negative integer ai from the multiset. It’s guaranteed that there is at least one ai in the multiset.
3. ? s — count the number of integers in the multiset (with repetitions) that match some pattern s consisting of 0 and 1. In the pattern, 0 stands for the even digits, while 1 stands for the odd. Integer x matches the pattern s if the parity of the i-th from the right digit in decimal notation matches the i-th from the right digit of the pattern. If the pattern is shorter than this integer, it’s supplemented with 0-s from the left. Similarly, if the integer is shorter than the pattern, its decimal notation is supplemented with 0-s from the left.

For example, if the pattern is s = 010, then integers 92, 2212, 50 and 414 match the pattern, while integers 3, 110, 25 and 1030 do not.

## Input

The first line of the input contains an integer t (1 ≤ t ≤ 100 000) — the number of operations Sonya has to perform. Next t lines provide the descriptions of the queries in the order they appear in the input file. The i-th row starts with a character ci — the type of the corresponding operation. If ci is equal to ‘+’ or ‘-’ then it’s followed by a space and an integer ai (0 ≤ ai < 10^18) given without leading zeroes (unless it’s 0). If ci equals ‘?’ then it’s followed by a space and a sequence of zeroes and ones, giving the pattern of length no more than 18. It’s guaranteed that there will be at least one query of type ‘?’.
It’s guaranteed that any time some integer is removed from the multiset, there will be at least one occurrence of this integer in it.

## Output

For each query of the third type print the number of integers matching the given pattern. Each integer is counted as many times as it appears in the multiset at this moment of time.

## Examples input

```
12
+ 1
+ 241
? 1
+ 361
- 241
? 0101
+ 101
? 101
- 101
? 101
+ 4000
? 0
```

## Examples output

```
2
1
2
1
1
```

## Summary

1. Add an element to the multiset.
2. Remove one occurrence of an element from the multiset.
3. Given a pattern string, count how many elements of the multiset match it (digit-parity matching).

## Accepted Code

```cpp
#include <bits/stdc++.h>
#define IO ios::sync_with_stdio(false);\
    cin.tie(0);\
    cout.tie(0);
using namespace std;
typedef long long LL;
const int maxn = 1e6 + 10;

char op;
string tmp;
int ans[maxn];

// encode the digit-parity pattern of tmp as an integer bitmask
int get_binary() {
    int x = 0;
    for (int i = 0; i < (int)tmp.length(); i++) {
        x <<= 1;
        // works for both decimal digits and 0/1 patterns:
        // the low bit of an ASCII digit equals the digit's parity
        x |= tmp[i] & 1;
    }
    return x;
}

int main() {
    IO;
    int T;
    cin >> T;
    while (T--) {
        cin >> op >> tmp;
        if (op == '?')
            cout << ans[get_binary()] << endl;
        else
            ans[get_binary()] += (op == '+') ? 1 : -1;
    }
    return 0;
}
```
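The key observation is that a number and a pattern match exactly when they map to the same parity bitmask after left-padding to 18 digits (leading zeros contribute zero bits, so padding does not change the integer value). A Python sketch of the same encoding, my own addition, checked against the statement's examples for pattern 010:

```python
def parity_key(s):
    """Encode a decimal string or 0/1 pattern, left-padded to 18 digits,
    as an integer whose bits are the digit parities."""
    key = 0
    for ch in s.zfill(18):
        key = (key << 1) | (int(ch) & 1)
    return key

# per the statement, 92, 2212, 50 and 414 match pattern 010,
# while 3, 110, 25 and 1030 do not
matches = [n for n in ("92", "2212", "50", "414", "3", "110", "25", "1030")
           if parity_key(n) == parity_key("010")]
```

With this encoding, every multiset operation is a single array update at index `parity_key(s)`, which is what the C++ solution does.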
# Bitcoin at $3,000

Bitcoin breaks $3,000 to reach a new all-time high.

Been long bitcoin since 2013. Glad I ignored the headlines about bubbles. The news is useless, as are most forecasts. For every bubble the media calls correctly, they get 10-20 of them wrong. Bitcoin is like General Electric and PG&E…not only does it fill a specific niche/function, but it's not going anywhere. Against all odds it succeeded and surpassed everyone's expectations.

Including Bitcoin Cash, the price is closer to $3,400, a 1,600-2,000% gain since mid-2013 (by late 2013, the price got to $800-1,000).

As for the media being wrong, they were wrong about:

-The post-2009 bull market, which keeps going on and on. Since 2014 when I launched this blog, I have been telling people to buy. The S&P 500 (including dividends) has gained a whopping 25% since early 2014.

-All of the 'FANG' stocks (Google, Facebook, Amazon, and Netflix keep going up…recommended all of them, except Netflix)

-Hillary winning; the pundits also incorrectly predicted that Trump and Brexit would cause a recession (the exact opposite happened, and the S&P 500 is up 12% this year…I predict it has a lot further to go).

-Tesla (the media sure got this one wrong. In 2013, the left-wing NYTs wrote a hit piece about Tesla, when the stock was at $40. Tesla shares have risen from $130 in early 2014 to $340+ now, and will keep going up despite all the failed predictions of Tesla being a bubble. This blog was right about Tesla here and here.)

-Bay Area real estate prices. Home prices in America's most expensive areas refuse to fall despite the media's insistence that prices are in a bubble.

-Doom & gloom predictions about Ebola, Russia, Emails, Comey, Impeachment—all wrong

-Web 2.0 being a bubble (a broken clock is right twice a day, but the liberal media is never right). Uber is under attack by the media for alleged sexism and breaking the law. I predict Uber will prevail…the media wants Uber to fail.
-The mass hysteria over the alleged 'college rape epidemic', which turned out to be a bunch of hoaxes.

-Hillary Clinton's health (Hillary Clinton collapsed after attending a 9/11 memorial and the media tried to cover it up, but failed)

-Hyperinflation & dollar collapse (Treasury bond yields are the lowest they have been in decades)

-A 'post-America era'…this is not really a media prediction, but rather a prediction made by many economists and pundits between 2008-2010. During the depths and recovery of the financial crisis, many pundits predicted a 'new status quo' with Europe, not America, being on top. They were wrong. Instead, China and America dominate economically and culturally, with France, Germany, and the UK suffocating under the weight of economic stagnation, incompetent leadership, migrants, and general societal decay.

and many more…

Of course, bitcoin could fall 70% by next year…or it may double again. But given the media's certitude that THIS IS REALLY THE TOP, and the media's horrible track record regarding everything, I'm erring towards the latter.
# Index of a number $a$ modulo $m$

The exponent $\gamma$ in the congruence $a \equiv g ^ {\gamma} \ ( \mathop{\rm mod} m )$, where $a$ and $m$ are relatively prime integers and $g$ is a fixed primitive root modulo $m$. The index of $a$ modulo $m$ is denoted by $\gamma = \mathop{\rm ind} _ {g} a$, or $\gamma = \mathop{\rm ind} a$ for short.

Primitive roots exist only for moduli of the form $2 , 4 , p ^ \alpha , 2 p ^ \alpha$, where $p > 2$ is a prime number; consequently, the notion of an index is only defined for these moduli.

If $g$ is a primitive root modulo $m$ and $\gamma$ runs through the values $0 \dots \phi ( m) - 1$, where $\phi ( m)$ is the Euler function, then $g ^ \gamma$ runs through a reduced system of residues modulo $m$. Consequently, for each number relatively prime with $m$ there exists a unique index $\gamma$ for which $0 \leq \gamma \leq \phi ( m) - 1$. Any other index $\gamma ^ \prime$ of $a$ satisfies the congruence $\gamma ^ \prime \equiv \gamma \ ( \mathop{\rm mod} \phi ( m) )$. Therefore, the indices of $a$ form a residue class modulo $\phi ( m)$.
The notion of an index is analogous to that of a logarithm of a number, and the index satisfies a number of properties of the logarithm, namely:

$$\mathop{\rm ind} ( a b ) \equiv \ \mathop{\rm ind} a + \mathop{\rm ind} b \ ( \mathop{\rm mod} \phi ( m) ) ,$$

$$\mathop{\rm ind} ( a ^ {n} ) \equiv n \mathop{\rm ind} a \ ( \mathop{\rm mod} \phi ( m) ) ,$$

$$\mathop{\rm ind} \frac{a}{b} \equiv \mathop{\rm ind} a - \mathop{\rm ind} b \ ( \mathop{\rm mod} \phi ( m) ) ,$$

where $a / b$ denotes the root of the equation

$$b x \equiv a \ ( \mathop{\rm mod} m ) .$$

If $m = 2 ^ \alpha p _ {1} ^ {\alpha _ {1} } \dots p _ {s} ^ {\alpha _ {s} }$ is the canonical factorization of an arbitrary natural number $m$ and $g _ {1} \dots g _ {s}$ are primitive roots modulo $p _ {1} ^ {\alpha _ {1} } \dots p _ {s} ^ {\alpha _ {s} }$, respectively, then for each $a$ relatively prime with $m$ there exist integers $\gamma , \gamma _ {0} \dots \gamma _ {s}$ for which

$$a \equiv ( - 1 ) ^ \gamma 5 ^ {\gamma _ {0} } \ ( \mathop{\rm mod} 2 ^ \alpha ) ,$$

$$a \equiv g _ {1} ^ {\gamma _ {1} } \ ( \mathop{\rm mod} p _ {1} ^ {\alpha _ {1} } ) ,$$

$$\vdots$$

$$a \equiv g _ {s} ^ {\gamma _ {s} } \ ( \mathop{\rm mod} p _ {s} ^ {\alpha _ {s} } ) .$$

The above system $\gamma , \gamma _ {0} \dots \gamma _ {s}$ is called a system of indices of $a$ modulo $m$.
To each number $a$ relatively prime with $m$ corresponds a unique system of indices $\gamma , \gamma _ {0} \dots \gamma _ {s}$ for which

$$0 \leq \gamma \leq c - 1 ,\ \ 0 \leq \gamma _ {0} \leq c _ {0} - 1 ,$$

$$0 \leq \gamma _ {1} \leq c _ {1} - 1 , \dots , 0 \leq \gamma _ {s} \leq c _ {s} - 1 ,$$

where $c _ {i} = \phi ( p _ {i} ^ {\alpha _ {i} } )$, $i = 1 \dots s$, and $c$ and $c _ {0}$ are defined as follows:

$$c = 1 , c _ {0} = 1 \ \ \textrm{ for } \ \alpha = 0 \ \textrm{ or } \alpha = 1 ,$$

$$c = 2 , c _ {0} = 2 ^ {\alpha - 2 } \textrm{ for } \alpha \geq 2 .$$

Every other system $\gamma ^ \prime , \gamma _ {0} ^ \prime \dots \gamma _ {s} ^ \prime$ of indices of $a$ satisfies the congruences

$$\gamma ^ \prime \equiv \gamma ( \mathop{\rm mod} c ) ,\ \gamma _ {0} ^ \prime \equiv \gamma _ {0} ( \mathop{\rm mod} c _ {0} ) \dots \gamma _ {s} ^ \prime \equiv \gamma _ {s} ( \mathop{\rm mod} c _ {s} ) .$$

The notion of a system of indices of $a$ modulo $m$ is convenient for the explicit construction of characters of the multiplicative group of reduced residue classes modulo $m$.

#### References

[1] I.M. Vinogradov, "Elements of number theory", Dover, reprint (1954) (Translated from Russian)
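The logarithm-like identities above are easy to sanity-check by brute force for a small modulus. A sketch in Python, computing $\mathop{\rm ind}$ by trial exponentiation ($m = 11$ and the primitive root $g = 2$ are example choices, with $\phi(11) = 10$):

```python
def ind(a, g, m):
    # Smallest exponent gamma with g**gamma congruent to a (mod m).
    x = 1
    for gamma in range(m):
        if x == a % m:
            return gamma
        x = (x * g) % m
    raise ValueError("a is not a power of g modulo m")

m, g, phi = 11, 2, 10  # 2 is a primitive root modulo the prime 11
for a in range(1, 11):
    for b in range(1, 11):
        # ind(ab) = ind a + ind b  (mod phi(m))
        lhs = ind(a * b % m, g, m)
        rhs = (ind(a, g, m) + ind(b, g, m)) % phi
        assert lhs == rhs
```

Trial exponentiation is exponentially slow in the bit length of $m$ and only serves as a check; for large moduli, computing indices is the discrete-logarithm problem.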
# Klein bottle homology by Mayer Vietoris I'm still applying Mayer Vietoris, this time to the Klein bottle. I'm using the decomposition as on Wikipedia here and I've calculated $H_0$ and $H_n$ for $n \geq 2$ correctly. Now I'm struggling with $H_1$. My sequence: $$0 \xrightarrow{} H_1( S^1) \xrightarrow{(i,j)} H_1(M) \oplus H_1(M^\prime )\xrightarrow{k-l} H_1(K) \xrightarrow{\partial_1} \tilde{H_0} = 0$$ Wikipedia writes "The central map $(i,j)$ sends $1$ to $(2, −2)$". Does it matter whether it's sent to $(2,-2)$ or $(2,2)$ ? I think $(i,j)$ maps $1$ to $(2,2)$. Using this, I get (i) $im((i,j)) = 2 \mathbb{Z} \oplus 2 \mathbb{Z} = ker (k-l)$ And using the first isomorphism theorem I get (ii) $H_1(K) / ker (\partial_1) = im(\partial_1) = \tilde{H_0} = 0$ and therefore $H_1(K) \cong \mathbb{Z}$ because $ker(\partial_1) = \mathbb{Z}$ which is clearly wrong but I don't see where the mistake is. (iii) I also know $k-l$ is surjective so $im (k-l) = H_1(K) \cong \mathbb{Z} \oplus \mathbb{Z} / ker (k-l)$ But if $ker (k-l) = 2 \mathbb{Z} \oplus 2 \mathbb{Z}$ then I'd get $H_1(K) = \mathbb{Z}/2 \oplus \mathbb{Z}/2$ which is also wrong. What am I doing wrong? Many thanks for your help! • I am doing my first course in algebraic (or in fact any) topology, and I often come across your questions on this site. By now it seems to me that you are like an expert in this field... Having done other topics in maths, I find this area unusually painful, and so I was just wondering how you found this journey? Any tips on resources besides Hatcher? – gen Oct 29 '18 at 21:13 • Why is $A\cap B\simeq \mathbb{S}^1$? According to the wikipedia picture we should have $A\cap B\simeq \mathbb{S}^1\sqcup\mathbb{S}^1$, shouldn't we? – rmdmc89 Oct 6 '19 at 16:23 Where $(i,j)$ sends $1$ depends on how your inclusion maps look and which orientation you pick, that is which isomorphisms $H_1(S^1) \cong \mathbb Z$, $H_1(M) \cong \mathbb Z$ and $H_1(M') \cong \mathbb Z$ you pick. 
It is therefore ok to assume $(i,j)1 = (2,2)$. (1) $im(i,j) = \ker (k-l) = 2\mathbb Z(1,1) \not = 2\mathbb Z \times 2 \mathbb Z$. (2) $\partial_1 = 0$ and therefore $k-l$ is surjective. We now have $H_1(K) \cong [\mathbb Z \times \mathbb Z] / [2\mathbb Z(1,1)]$. (3) Use that $\mathbb Z \times \mathbb Z = \mathbb Z (1,0) \oplus \mathbb Z(1,1)$ to conclude $H_1(K) \cong \mathbb Z \times \mathbb Z/2\mathbb Z$. • Thank you! But what is $2 \mathbb{Z} (1,1)$? First I thought it's the free abelian group over the basis consisting of one element, $(1,1)$ but then that would be $\mathbb{Z}$ and I need it to be $\mathbb{Z} \oplus 2\mathbb{Z}$ but I don't see how $2 \mathbb{Z} (1,1) = \mathbb{Z} \oplus 2\mathbb{Z}$... – Rudy the Reindeer Aug 21 '11 at 17:54 • You should read this with some linear algebra in mind. $2\mathbb Z (1,1)$ is the subgroup of $\mathbb Z \times \mathbb Z$ that consists of the elements $(n,n)$ for some $n \in 2\mathbb Z$. – Alexander Thumm Aug 21 '11 at 18:02
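The quotient in step (3) can also be checked very concretely. The map $\varphi(a,b) = (a-b,\, b \bmod 2)$ is one convenient explicit homomorphism $\mathbb Z \times \mathbb Z \to \mathbb Z \times \mathbb Z/2$ (my own choice, not taken from the answer above) whose kernel is exactly $2\mathbb Z(1,1)$, which exhibits $H_1(K) \cong \mathbb Z \times \mathbb Z/2$. A small script verifies the kernel claim on a grid of sample points:

```python
def phi(a, b):
    # Candidate quotient map Z^2 -> Z x Z/2 inducing
    # Z^2 / <(2,2)>  ~=  Z x Z/2.
    return (a - b, b % 2)

# phi(a,b) == (0,0)  iff  a == b and b is even  iff  (a,b) is in 2Z(1,1).
for a in range(-6, 7):
    for b in range(-6, 7):
        in_kernel = (phi(a, b) == (0, 0))
        in_subgroup = (a == b and b % 2 == 0)
        assert in_kernel == in_subgroup
```

Surjectivity is immediate since $\varphi(n,0) = (n,0)$ and $\varphi(1,1) = (0,1)$, so the first isomorphism theorem gives the stated quotient.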
# Proof of Product Rule for Sequences using definition of infinitesimal and properties of infinitesimal sequences. I have been trying to understand this proof for the product rule of sequences, where the author makes use of some properties for infinitesimals, to prove this theorem. This is quite a long question, but please answer it as explicitly as you possibly can. "A sequence ($y_nz_n$) is convergent to $ab$ if sequences ($y_n$) and($z_n$) are convergent to $a$ and $b$, respectively." • First of all how would you prove this. • The author uses an important property described earlier in the book: That for any convergent sequence ($y_n$) there corresponds an infinitesimal sequence ($\alpha_n$) where $\alpha_n$ = $y_n$- $a$. Why is this true, is there any intuition/ a precise reason behind this? Explain this property please. • Lastly after initial steps are taken we get: ($y_nz_n$) = ab + $\gamma_n$ where $\gamma_n$ = $b\alpha_n$+$a\beta_n$+ $\alpha_n\beta_n$ The author then states: the sequences ($b\alpha_n$) , ($a\beta_n$) , ($\alpha_n\beta_n$) are infinitesimal as well. • Why? Is it true that if we multiply a limit with a infinitesimal, we get another infinitesimal as $n\to\infty$? Explain please. Your textbook has a typically roundabout way of proving simple things (typically I strongly dislike this kind of books). But anyway let's proceed with the same. A sequence $\alpha_{n}$ of real numbers is said to be an infinitesimal sequence if $\lim_{n \to \infty} \alpha_{n} = 0$. The correct/better term is null sequence. The meaning of the above definition is that if $\alpha_{n}$ is a null sequence then for any given number $\epsilon > 0$ it is possible to find a positive integer $m$ such that $|\alpha_{n}| < \epsilon$ whenever $n \geq m$. 
It is easy to prove (and you should try to prove it; if you face a problem you can post that here) that if $\alpha_{n}, \beta_{n}$ are null sequences and $a, b$ are any real numbers then $a\alpha_{n} + b\beta_{n}, \alpha_{n}\beta_{n}$ are also null sequences. Further it is almost obvious (your question suggests that it is not obvious to you, but that's hard to believe unless you are lost in symbols and jargon, which is typical of such bad textbooks) that $\lim_{n \to \infty}x_{n} = a$ if and only if $\alpha_{n} = x_{n} - a$ is a null sequence. Similarly if $y_{n} \to b$ then $\beta_{n} = y_{n} - b$ is also a null sequence. Now consider the sequence \begin{align} \gamma_{n} &= x_{n}y_{n} - ab\notag\\ &= (\alpha_{n} + a)(\beta_{n} + b) - ab\notag\\ &= \alpha_{n}\beta_{n} + a\beta_{n} + b\alpha_{n}\notag \end{align} Clearly each of the three terms in the expression for $\gamma_{n}$ is a null sequence and hence $\gamma_{n}$ is a null sequence. It follows that $x_{n}y_{n} \to ab$ as $n \to \infty$. Let me know if you need more elaboration. Update: Based on comments from OP I provide a short and simple proof that if $\alpha_{n},\beta_{n}$ are null sequences then so is their product $\alpha_{n}\beta_{n}$. Note the following simple idea. If $|x| < 1, |y| < 1$ then $|x||y| = |xy| < 1$ and, more strongly, $|xy| < |x|, |xy| < |y|$ for non-zero $x, y$. Thus if we take two very small numbers (at least smaller than $1$) $x, y$ then their product is going to be much smaller (at least smaller than each of them individually). This simple fact is the key to understanding the proof below. To prove that $\alpha_{n}\beta_{n}$ is a null sequence we need to show that the values of the sequence can be made arbitrarily small as $n$ increases. We are given that each sequence $\alpha_{n}, \beta_{n}$ is a null sequence. Let $\epsilon$ be an arbitrarily given positive number and we would like to ensure $$|\alpha_{n}\beta_{n}| < \epsilon$$ for large $n$.
Since $\alpha_{n}, \beta_{n}$ are themselves null sequences it follows that there are positive integers $m_{1}, m_{2}$ such that $$|\alpha_{n}| < \epsilon$$ whenever $n \geq m_{1}$ and $$|\beta_{n}| < 1$$ whenever $n \geq m_{2}$. Thus if $m = \max(m_{1}, m_{2})$ then both the inequalities above will hold for all $n \geq m$. Therefore on multiplication of the inequalities we have $$|\alpha_{n}\beta_{n}| = |\alpha_{n}||\beta_{n}| < \epsilon$$ for all $n \geq m$. It follows that $\alpha_{n}\beta_{n}$ is a null sequence. We have not used anywhere the symbol $0$ and the fact that $0 \times 0 = 0$. The fact $0 \times 0 = 0$ belongs to elementary algebra whereas analysis mostly deals with inequalities like $<, >$ instead of $=$ and the above proof is typical of arguments used in analysis. So in order to understand concepts of calculus you need to focus less on computations (dealing with $=$) and more on comparison of big and small (dealing with $<, >$). • Would you advise me to read a different textbook (e.g. Micheal Spivak)? For a more unambiguous approach to proofs etc. – xAly Aug 20 '15 at 9:04 • @LostAce: For self study I have suggested Hardy's "A Course of Pure Mathematics" many times on MSE. If you want for your university then better to stick to the book suggested by your professor. Spivak's book is good but it is written in a formal style not so suitable for self study. Aug 20 '15 at 9:12 • So if $\alpha_n\beta_n$ is a null sequence then$a\alpha_n + b\beta_n, \alpha_n\beta_n$ are also null sequences because (My attempt at a proof): $\lim_{n \to \infty} \alpha_{n} = 0$ and $\lim_{n \to \infty}\beta_n = 0$ so in other words as we let n approach infinity we are able to say that $\alpha_n$ and $\beta_n$ = $0$ and we know that any real number multiplied by $0$ is also $0$. QED? – xAly Aug 20 '15 at 12:57 • is that correct? – xAly Aug 22 '15 at 4:32 • @LostAce: I am afraid your approach is not correct. 
Calculus/analysis is much more than elementary algebra and you are not adding any details more than algebra. Aug 22 '15 at 4:44
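The $\epsilon$-argument for products of null sequences can be illustrated numerically. A Python sketch (the particular sequences $1/n$ and $(-1)^n/\sqrt n$ are my own examples):

```python
from math import sqrt

# Two null sequences: alpha_n = 1/n and beta_n = (-1)^n / sqrt(n).
def alpha(n): return 1.0 / n
def beta(n):  return (-1) ** n / sqrt(n)

# For any eps > 0 there is an m beyond which |alpha_n * beta_n| < eps.
eps = 1e-3
m = next(n for n in range(1, 10**6) if abs(alpha(n) * beta(n)) < eps)

# Spot-check that the product stays below eps past that index.
assert all(abs(alpha(n) * beta(n)) < eps for n in range(m, m + 1000))
```

Here $|\alpha_n\beta_n| = n^{-3/2}$, so the threshold index found is around $n = 101$ for $\epsilon = 10^{-3}$; shrinking $\epsilon$ only pushes $m$ further out, exactly as the proof describes.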
# Is it possible to use object motion to drive audio properties in Blender? I've seen plenty of examples on how to drive motion of objects in Blender with audio, but I'm trying to do the opposite - drive audio properties based on the motion of objects. More specifically, I'd like to tie the volume of an audio file to the rate of motion of an object in Blender (length of the displacement vector?) so that it adjusts "automatically". So if the object is stationary the audio is quiet (or silent). As the object speeds up the volume of the audio would increase in tandem (to some limit of course), and as it slows down the volume would decrease again. If this is possible, can it also be used to control playback rate as well to affect the pitch of the audio? Note that, despite my previous question, I am definitely not trying to achieve a Doppler effect here (which would depend on rate of approach/retreat from the viewer). Thanks! • Have you tried for example driving the volume and or pitch of a speaker object's data? Sep 14 '20 at 7:17 • Thanks for that suggestion, I'll definitely look into it. I probably should have mentioned that at my current level of experience the main problem is that I'm not really sure how to start approaching the problem. Sep 15 '20 at 6:29 • suggestion: if your motion is baked to keyframe you could evaluate past and/or future keyframes to get a motion/acceleration-value. Using this to drive the properties of a speaker as batFINGER suggested would acomplish your goal. check out animation nodes and this answer (blender.stackexchange.com/questions/175287/…) to get a starting point of evaluation a animation curve. Beware, its not the same thing 1:1 but needs some tweaking for your usecase! Sounds like a nice idea, wish you well ;) – A M Sep 15 '20 at 8:23 For anyone else interested in this, the following StackExchange post contains a script which does exactly what I need. 
https://blender.stackexchange.com/a/66257/70741

Having never used drivers before, I found this an incredibly useful introduction to them, and I can see myself using them a lot in future - I am, by trade, a programmer, so it kind of suits my mindset I guess.

Basically just follow the instructions there to get the delta X,Y,Z between frames and drive some custom properties dX, dY, and dZ on the moving object - the linked discussion talks you through doing all of this.

Once you've done this, you can use the custom properties in turn to derive the distance of motion between frames as sqrt(dX*dX + dY*dY + dZ*dZ) and drive another custom property, distance.

Once you've done that, you can use the custom distance property to drive the volume of a speaker object (or whatever else) however you'd like. You can also easily calculate a speed value using the frame rate if you need it (distance was sufficient for me).

You'll likely need to tweak the resulting values to suit your requirements, but it's not that hard to do. The adjustments are a little "choppy", but I'm pretty sure that can be smoothed out by updating the script to track a moving average over several frames, rather than just between single frames (which I'll be experimenting with next).

Thanks to everyone who dropped by to help out!

The main problem, I would say, is getting some kind of motion or speed value. The idea is to get the current location and compare it to some locations in the past or future from the FCurve. The add-on Animation Nodes is used for this here. (Blender 2.9 with the AN version for 2.83 LTS)

The Time Info node gives the current frame; subtracting or adding some value to it gives a moment in past or future time. Then you can check for both or multiple locations in the animation FCurve and calculate the distance or median distance. In the current solution only the current frame and the last frame are checked (which means the check of the future frame is redundant at the moment).

Having this motion or speed value, you have to decide what behavior you want while moving or while resting, and filter, map, and calculate accordingly.

Unfortunately the Attribute Output node does set the value in the speaker object correctly, but the sound is not affected while playing. The workaround here would be to set those current numbers as a keyframe with the corresponding node.

Heavy inspiration for this workflow comes from Leander in an old question of mine: Continuous rotation in AN with certain control behavior? (Speed)

You can check out how this solution works in the following screenshot.

• Amazingly detailed answer - thank you! I'll look into this for sure. Sep 18 '20 at 2:39
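The moving-average smoothing idea mentioned above can be prototyped outside Blender. A plain-Python sketch (a hypothetical `SpeedSmoother` helper of my own, not part of the bpy API; inside Blender the returned value would be what drives the speaker's volume):

```python
from collections import deque
from math import sqrt

class SpeedSmoother:
    """Track per-frame displacement and expose a moving-average speed.

    Sketch only: feed it the object's location once per frame; the
    averaged speed (units per second) is what you would map to volume.
    """
    def __init__(self, window=5, fps=24):
        self.deltas = deque(maxlen=window)  # last `window` frame distances
        self.fps = fps
        self.prev = None

    def update(self, location):
        if self.prev is not None:
            d = sqrt(sum((a - b) ** 2 for a, b in zip(location, self.prev)))
            self.deltas.append(d)
        self.prev = location
        return self.speed()

    def speed(self):
        if not self.deltas:
            return 0.0  # stationary (or no history yet) -> silent
        return (sum(self.deltas) / len(self.deltas)) * self.fps
```

Widening `window` trades responsiveness for smoothness, which is exactly the "choppiness" knob described above.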
Translating coordinates on a Riemann surface Let $U\subset X$ be an open subset of a connected Riemann surface $X$. Let $z:U\longrightarrow B(0,1)$ be a diffeomorphism, where $B(0,1)$ is the open unit disc in $\mathbf{C}$. Let $P\in U$ be the unique point such that $z(P) =0$. Suppose I take another point $Q\in U$. I want to use $z:U\longrightarrow B(0,1)$ to construct a coordinate around $Q$. Question. Does the following work? Consider an open set $V$ in $U$ whose image under $z$ is a small open disc around $z(Q)$. Then we define $w: V\longrightarrow B(0,1)$ by $$w(x) = z(x) - z(Q).$$ Is this is a coordinate around $Q$? - Sure. You could also use a Möbius transformation of the unit disk to move $Q$ to $0$, that would let you get away with restricting to a smaller neighborhood $V$. –  Gunnar Magnusson Sep 29 '11 at 19:24 I don't quite understand. I'm moving $Q$ to $0$ by simply translating. This is not a Mobius transformation? If not, can I write down an explicit formula for this Mobius transformation? So my coordinate at Q would be w(x) = mobius(z(x)), right? What can one take for mobius? –  shaye Sep 30 '11 at 15:09 The translation is a Möbius transformation, but you can also use a different one to map the entire unit disk holomorphically to itself in such a way that $Z(Q) \mapsto 0$. Your formula for the new coordinate is correct. Googling Möbius transformation or the automorphisms of the unit disk will turn up an explicit formula for this map if you want one. –  Gunnar Magnusson Oct 1 '11 at 9:30
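For reference, the disk automorphism the comments allude to has the standard explicit form $w(x) = \dfrac{z(x) - z(Q)}{1 - \overline{z(Q)}\, z(x)}$, which maps $B(0,1)$ biholomorphically onto itself and sends $z(Q)$ to $0$, so no shrinking of the neighborhood is needed. A quick numerical sanity check in Python (the sample values are arbitrary):

```python
import cmath

def phi(z, q):
    # Standard automorphism of the unit disk sending q to 0:
    # phi_q(z) = (z - q) / (1 - conj(q) * z), valid for |q| < 1.
    return (z - q) / (1 - q.conjugate() * z)

q = 0.3 + 0.4j            # plays the role of z(Q)
assert abs(phi(q, q)) < 1e-12          # z(Q) is sent to 0

# Sample points near the boundary stay inside the unit disk:
for k in range(12):
    z = 0.9 * cmath.exp(2j * cmath.pi * k / 12)
    assert abs(phi(z, q)) < 1
```

Composing this map with the original chart, $w = \varphi_{z(Q)} \circ z$, gives a coordinate centered at $Q$ defined on all of $U$, whereas the plain translation $z(x) - z(Q)$ only maps a smaller $V$ into the unit disk.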
# Calculating pi to 7 significant figures without using Math.PI

I need to calculate pi to 7 significant figures in Java—without using Math.PI. Here is the code I came up with to do that:

public class ComputePI {
    public static void main(String[] args) {
        double sum = 0.0;
        double sumOne = 0.0;
        double delta;
        int counter = 0;
        final double DENOMINATOR_CANCEL = 4.0;
        final int LARGE_NUMBER = 5000000;
        final double SMALLEST_DELTA = 0.0000004535899;
        boolean closeEnough = false;

        for (int j = 1; j < LARGE_NUMBER; j += 2) // Computes number close to pi
        {
            double firstFrac = (1.0 / (j * 2.0 - 1.0));
            double secondFrac = (1.0 / (j * 2.0 + 1.0));
            sumOne += firstFrac - secondFrac;
        }

        for (int i = 1; (!closeEnough); i += 2) // "My computed value of pi"
        {
            double firstNum = (1.0 / (i * 2.0 - 1.0));
            double secondNum = (1.0 / (i * 2.0 + 1.0));
            sum += firstNum - secondNum;
            delta = sumOne * DENOMINATOR_CANCEL - sum * DENOMINATOR_CANCEL;
            if (delta < SMALLEST_DELTA) // If delta reaches 7-sig accuracy
            {
                closeEnough = true; // End loop
            }
            counter = i / 2; // Counts iterations
        }

        /* output results */
        System.out.println("My computed value of pi is: " + sum * DENOMINATOR_CANCEL);
        System.out.println("The library constant value of pi is: " + Math.PI);
        System.out.println("The number of iterations needed to reach " +
                "seven-significant digit accuracy is: " + counter);
    }
}

Is this a good approach, or is it horribly inefficient? How might I improve it? Perhaps having a for loop inside a for loop would work better. One with n+1 iterations, the other with n iterations. The loop would terminate if the two sums were subtracted, and produced a difference of 0.0000009 or less (7 significant figure accuracy).

## migrated from stackoverflow.com Sep 6 '17 at 21:30

This question came from our site for professional and enthusiast programmers.
This is because you chose a simple formula, but that formula converges very slowly towards $\pi$. Read on Wikipedia about efficient algorithms for computing $\pi$, there are several. Or, if you want to cheat a bit, just return 4 * Math.atan(1.0). You don't need a delta in the whole program, so only declare it where you really need it. You already did that with firstFrac, for example. Be consistent with your chosen names. What's the difference between firstFrac and firstNum? — There is none, therefore you should use the same name in both places. What does sumOne mean? Think of a better name for it.
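As an example of the faster formulas mentioned above, Machin's formula reaches seven significant figures in a handful of terms, versus millions of terms for the Leibniz-style series. A sketch in Python (an illustration of the alternative, not a fix to the Java code being reviewed):

```python
from math import pi

def arctan_series(x, terms):
    # Taylor series: atan(x) = x - x^3/3 + x^5/5 - ...
    # Converges quickly when |x| is small.
    return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1)
               for k in range(terms))

def machin_pi(terms=10):
    # Machin (1706): pi/4 = 4*atan(1/5) - atan(1/239)
    return 4 * (4 * arctan_series(1 / 5, terms)
                  - arctan_series(1 / 239, terms))

assert abs(machin_pi(6) - pi) < 5e-8   # 7 significant figures in 6 terms
```

The truncation error of the 1/5 series shrinks by a factor of 25 per term, so each extra term buys roughly 1.4 more decimal digits.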
Power | kullabs.com

Notes, Exercises, Videos, Tests and Things to Remember on Power

Note on Power

The rate of doing work is called power. One watt can be defined as the power at which one joule of work is done in one second. Mathematically,

$$P = \frac {W}{t}$$

Or, $$P = \frac {F\times d}{t}$$

Or, P = F × v

The SI unit of power is the joule per second (Js-1), which is called the watt (W). The power of a lamp being 60 watts means that the lamp converts 60 J of electrical energy into light and heat energy in one second. Since the SI unit of power is the joule per second, also known as the watt, we have,

$$\therefore$$ 1 Watt = $$\frac{ 1\; joule}{1\; second}$$

Thus, a body is said to have one watt of power if it can do one joule of work in one second.

There are some bigger units of power like the kilowatt (kW), megawatt (MW) and horsepower (HP). Their relation with the watt is given below:

1 kW = 10^3 W

1 MW = 10^6 W = 10^3 kW

1 HP = 746 W (approx. 750 W)

Relation between Work and Energy

Energy is the capacity of doing work. When a person does some work on a body, the energy of the body increases. According to the principle of conservation of energy, the energy gained by the body is equal to the energy lost by the person. For example, when we push a table, we lose some muscular energy; consequently, the table gains some kinetic energy while it is in motion. When a spring is compressed, it stores some potential energy and we lose an equivalent amount of muscular energy.

We take food, which contains chemical energy. The chemical energy is converted into muscular energy inside our body, because of which we are able to do work. We feel weak when we do not take food and are then not able to do work. Even after having a meal, we feel hungry after sleeping for 7-8 hours. This is due to the work done inside the body: the heart pumps the blood, muscles stretch and relax, etc.
The energy provided by the food is used in these internal processes of our body even though we do not perform any external work.

Comparison of work, energy and power

| S.N. | Work | Energy | Power |
| 1 | It is the product of force and displacement in the direction of the force. | It is the capacity of doing work. | It is the rate of doing work, or the rate of conversion of energy. |
| 2 | Its SI unit is the joule. | Its SI unit is the joule. | Its SI unit is the watt. |
| 3 | Its value does not depend on time. | Its value does not depend on time. | Its value depends on time. |
| 4 | In general, work is done against friction or gravity. | It has different forms. | It does not have any form. |

• The rate of doing work is called power.
• The SI unit of power is the joule per second (Js-1), which is called the watt (W).
• One watt is the power at which one joule of work is done in one second.

Very Short Questions

The rate of doing work is called power. Its SI unit is the watt (W). Mathematically, Power = work done / time taken.

60 W written on an electric bulb means that 60 joules of electrical energy are converted into light energy every second.

The relation between horsepower (hp) and watt (W) is 1 hp = 746 watt (approximately 750 watt).

At a thermal power station, chemical energy is converted into heat energy and heat energy is converted into electrical energy.

Some mathematical relations for power are:

1. P = W/t
2. P = F·d/t
3. P = F·v
4. P = mgh/t

Here,
Mass (m) = 100 kg
Height (h) = 3 m
Time (t) = 5 s
Power (P) = ?

Here, the work is done against gravity. Thus, the power is given by

P = $$\frac{W}{t}$$ = $$\frac{mgh}{t}$$ = $$\frac{100 \times 10 \times 3}{5}$$ = 600 watt.
Hence, the power of the man is 600 W.
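The worked example generalizes directly to P = mgh/t. A small Python sketch (the function name is my own; g = 10 m/s² follows the notes):

```python
def power_climbing(mass_kg, height_m, time_s, g=10.0):
    # Work done against gravity: W = m*g*h; power: P = W / t.
    return mass_kg * g * height_m / time_s

# The worked example from the notes: 100 kg raised 3 m in 5 s.
p = power_climbing(100, 3, 5)
assert p == 600.0          # watts
assert p / 746 < 1         # under one horsepower (1 HP = 746 W)
```

Halving the time doubles the power for the same work, which is exactly the sense in which power, unlike work and energy, depends on time.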
For discussion of specific patterns or specific families of patterns, both newly-discovered and well-known.

GUYTU6J wrote:Could we prove or disprove that all spaceship partials can be completed (even at an incredible width)?

For a suitable definition of "partial", proving this would essentially require a formula that outputs spaceships. For more lenient definitions of partial, however, it should be trivial to construct infeasible partials that cannot possibly work, like a glider as a c/4 diagonal partial in the opposite direction or something silly like that.

LifeWiki: Like Wikipedia but with more spaceships. [citation needed]
Posts: 1889
Joined: November 8th, 2014, 8:48 pm
Location: Getting a snacker from R-Bee's

For the definition of "partial" in which cells which cannot be affected within the period from outside must be correct, it might be possible to construct a partial with a GoE in the back which is partially reconstructed. It can't be completed due to the impossibility of producing the (full) GoE.

I like making rules
fluffykitty
Posts: 617
Joined: June 14th, 2014, 5:03 pm

Extension of 60P5H2V0:

x = 19, y = 24
5bo7bo$5bo7bo$$5boo5boo5booboboboo6bobobobo6bobobobo6bo5bo5bo7bo4booboobooboo4bobbooboobbo4bobbooboobbo4boobbobobbooo6booboo6boo7bobo7bobboo3booboo3boobbobbobbobobbobboo5bobobobo5bo3o5bobo5b3obo4bobobobo4bo8bobo6boo3boo3booboo3booboo5bo7bo!

x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt)
http://conwaylife.com/wiki/A_for_all
Aidan F.
Pierce A for awesome Posts: 1876 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 ### Re: Spaceship Discussion Thread Here are two new 90-cell 2c/5 ships: x = 78, y = 15, rule = B3/S234b3o39b3o3bob2o38bob2o2bo41bob2o40b2o2b2o4b2o17b4o13b2o4b2o3bo4b3o3bo8bo2bo4bo13bo4b3o3bo8b2o4b2o3bo6bo5b4o5bo14b2o3bo6bo5bo5bo6b2o4bobo4bo4b2o15bo6b2o4bobo4bobo4b2o6b3ob4ob2obobo4b3o7bo4b2o6b3ob3o4bo8bo2ob4o9bo7b3obo4b2o7b2ob4o9bo7b3obo4b2ob5o11b4o4bo8bo8b5o11b5ob2obobo4b3o17b2obo4bo33b2obo4bo4b2o22bo41b4o5bo23b2o40bo2bo4bo69b4o! The first one was found by extending part of A for awesome's 59-cell ship using gfind. The second one was found by noticing that the trailing component on the first ship could be flipped. A for awesome wrote:Extension of 60P5H2V0 This is known and can be found in jslife. The tagalong can connect to four different phases of the ship: x = 24, y = 110, rule = B3/S237bo6b5o5bo5bo6b2o7bo2bo2bo2b2o9bob2ob2o6bo7b4o2bob2o6b4o4bo3bo3bo2bobo2b2obobob2o4b7o5bo24b7o5bo3bo3bo2bobo2b2obobob2o2bob2o6b4o4bob2ob2o6bo7b4o2b2o9bo7bo2bo2bo6b2o5bo5bo6b5o7bo106b2obo6b5o5bo3b2o6b2o6b2o5bob4o8bobo3bo6bo7b4ob2o2b2o5b4o4bo3bo3bo2bobo2b2obobob2o4b7o5bo24b7o5bo3bo3bo2bobo2b2obobob2ob2o2b2o5b4o4bobo3bo6bo7b4ob4o8bo6b2o5bo6b2o5bo3b2o6b5o6b2obo106bo2b2o5bo5bo4bo5bob2o2b2ob3o5bob6o6boo4b2o5bo7b4ob2ob3o5b4o4bo2b2o6bobo2b2obobob2o4b7o5bo24b7o5bo2b2o6bobo2b2obobob2ob2ob3o5b4o4boo4b2o5bo7b4ob6o6bo2b2ob3o5bo5bob2o5bo4bo5bo6bo2b2o115b2o2b2o4b2o5bob2obo6bo4bobo11boo6bo4bo7b4ob2obobo5b4o4bob2o5bobobo2b2obobob2o3b8o5bo23b8o5bob2o5bobobo2b2obobob2ob2obobo5b4o4boo6bo4bo7b4obo11bobo6bo4bo5bob2o4b2o5b2o2b2o! 
Edit: Here are three more 2c/5 ships that are slightly too big to be included in the small ships collection: x = 42, y = 74, rule = B3/S234b2ob2o3bobo2bo20bo3b2obo11b2o4b2o2b7o2bo4bo3bobo3bobo7bob2o3b2o3b5o3b3o2b2o3bob2o12b2ob2o2bo4b5o2bobobo4bo3bobobobo6bob2obo4bo4bo5b3obo2b2obob4o2bo7bo5bobobo3b3ob2o15bo4b2obo4b2o18bo19bobo175bo4bobo3bo2bob3obobobo4bobo2b2o6bo12bo17bo4b2o2b3o2bo9b2o4b4o6b3o4b2o7b2obobob2o7bo3b2o3b2o3o4bo5b4obobo8b2o5b2oo6bo8bo4bobo2bo7bo2o4bo9bo3b2obo2bo4bo2bo2b3o12bo2bo5bo6bo2b3o22b2ob2o2bo4b5o22b2obobo2bo3b2obo28bo28bobo145bo4bobo3bo2bob3obo26bobobobo4bobo17bo2b2o6bo11b2obobo2bo3b2obo4b2o2b3o2bo8b2ob2o2bo4b5o4b2o7b2obobobo5bo6bo2b3o3o4bo5b4obob2obo2bo4bo2boo6bo8bo4bobo2bo7bo2o4bo9bo3bo8b2o5b2o2b3o12bo2b2o7bo3b2o3b2o23b2o4b4o6b3o23bo17bo! -Matthias Merzenich Sokwe Moderator Posts: 1480 Joined: July 9th, 2009, 2:44 pm ### Re: Spaceship Discussion Thread 2c/10 glide symmetry partial x = 6, y = 11, rule = B3/S232o2o32b3o22bo2bo3b3o2b2ob2o! Current status: outside the continent of cellular automata. Specifically, not on the plain of life. GUYTU6J Posts: 669 Joined: August 5th, 2016, 10:27 am Location: outside Plain of Life ### Re: Spaceship Discussion Thread GUYTU6J wrote:2c/10 glide symmetry partial x = 6, y = 11, rule = B3/S232o2o32b3o22bo2bo3b3o2b2ob2o! That more closely resembles a tagalong component than a partial. Making a c/5 ship with the ability to support that seems somewhat difficult if you ask me. LifeWiki: Like Wikipedia but with more spaceships. [citation needed] BlinkerSpawn Posts: 1889 Joined: November 8th, 2014, 8:48 pm Location: Getting a snacker from R-Bee's ### Re: Spaceship Discussion Thread Oh,that's right.But it will be more amazing if it is used as a spaceship's both front end and back end. Current status: outside the continent of cellular automata. Specifically, not on the plain of life. 
GUYTU6J Posts: 669 Joined: August 5th, 2016, 10:27 am Location: outside Plain of Life ### Re: Spaceship Discussion Thread GUYTU6J wrote:Oh,that's right.But it will be more amazing if it is used as a spaceship's both front end and back end. Except it can't be a front end because the blocks are in front of the LOM. And the reaction's too slow to be supportable by a block chain from a puffer but the reaction can be supported by gliders from a c/5 p10 backrake or a certain spark: x = 16, y = 8, rule = B3/S232b3o7b3o22bo2bo6bo2bo3b3o7b3o23o8b2o2bobo11bo! LifeWiki: Like Wikipedia but with more spaceships. [citation needed] BlinkerSpawn Posts: 1889 Joined: November 8th, 2014, 8:48 pm Location: Getting a snacker from R-Bee's ### Re: Spaceship Discussion Thread BlinkerSpawn wrote: GUYTU6J wrote:Oh,that's right.But it will be more amazing if it is used as a spaceship's both front end and back end. Except it can't be a front end because the blocks are in front of the LOM. And the reaction's too slow to be supportable by a block chain from a puffer but the reaction can be supported by gliders from a c/5 p10 backrake or a certain spark: x = 16, y = 8, rule = B3/S232b3o7b3o22bo2bo6bo2bo3b3o7b3o23o8b2o2bobo11bo! x = 80, y = 94, rule = B3/S23ob2o2o4bobo5b2o5bo3bo10b2o9b2o13bobo14b2o14bo3bo19b2o18b2o22bobo23b2o23bo3bo28b2o27b2o31bobo32b2o32bo3bo37b2o36b2o40bobo41b2o41bo3bo46b2o45b2o49bobo50b2o50bo3bo55b2o54b2o58bobo59b2o59bo3bo64b2o63b2o67bobo68b2o68bo3bo73b2o72b2o476b3o276bo2bo77b3o274b3o76bo70b2o3bo69bobo71bo65b3o67bo61b2o3bo60bobo62bo56b3o58bo52b2o3bo51bobo53bo47b3o49bo43b2o3bo42bobo44bo38b3o40bo34b2o3bo33bobo35bo29b3o31bo25b2o3bo24bobo26bo20b3o22bo16b2o3bo15bobo17bo11b3o13bo7b2o3bo6bobo8bo2b3o4bo3bo! PHPBB12345 Posts: 583 Joined: August 5th, 2015, 11:55 pm ### Re: Spaceship Discussion Thread A possible small 2c/5 component: x = 13, y = 13oo$$bbo7bobo$bobo8bo$bobbo4bo$boboo3bobo$4bobbo$5boboo$5bo3boo$6boo$6booboo$8bo! I don't know if it can be attached to anything known. 
x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$ http://conwaylife.com/wiki/A_for_all Aidan F. Pierce A for awesome Posts: 1876 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 A for awesome wrote:A possible small 2c/5 component Here is a 79-cell ship and a 90-cell ship using this component (found with gfind): x = 41, y = 42, rule = B3/S2332b2o$12bo19bo$7b3o2bo15b3o$10bo4bo$7bobo3bo4bo8bo3bo$6b3o3bo6b2o4b2o4bo$4b2o6bo3bo4b2obo4b2o$2b2o9b2obobo2b4o4bobo$2b2ob2o6b2obobo2b2o4bo$bo3bobo15b4o$2bobo3bo7b2o6bo$2bobo2bo2bo2$9b2o18$8b2o$7b4o27bo$6b2o2bo22b2o2bo2bo$5bo3bo9bo13bob2o$12b2o5bo13bob2o$3b2o3bo2b2ob3ob2obo12bo5bo$2bo9bobobo3bo4bob2ob3obo2b3o$bo2b2obo5b2obo4bo2b2obo3bo$2obo4bo5b2o7b2o2bo2bo$b2ob2ob2o14bo5b2o$5b5o! Here are two more 2c/5 ships that are slightly too large for the small ships collection: x = 38, y = 34, rule = B3/S233b2ob2o4b2o3b3o11bo$2bobo5b2o3bob2o11bobo$bo8b2o4bob3o8bo3b3o$o9b5o5bobob3o3bob3o$bo4bo6b3o4b2obo7b2o$2b2o3b3o13bob2o$11bo10b2o4bobo$bobobo4bo14b3o2bo$o4bo3bo14b2o4bo$ob4o21b3o$2o25b2o11$11bo$6b3o2bo9b3o2bobo$9bo4bo4b2o3bobobo5b3o$6bobo3bo5b4ob2obo6bo$5b3o3bo5b2o18bo$3b2o6bo3bobo5b3obo3b2o3bo$b2o9b2obo2b3o2b2obo2bo3bo$b2ob2o6b2ob2o2b2o2b2ob4o3b2o$o3bobo8bo13bo3b2o$bobo3bo$bobo2bo2bo2$8b2o! Edit: the 79-cell ship can support the B-heptomino tagalong to give an 86-cell ship: x = 33, y = 16, rule = B3/S2316bo$15bo$14b2o15b2o$11bo3b2o14bo$6b3o2bo4bo10b3o$9bo4bo$6bobo3bo4bo8bo3bo$5b3o3bo6b2o4b2o4bo$3b2o6bo3bo4b2obo4b2o$b2o9b2obobo2b4o4bobo$b2ob2o6b2obobo2b2o4bo$o3bobo15b4o$bobo3bo7b2o6bo$bobo2bo2bo2$8b2o! 
Edit 2: while running a width-11 knightt search I found this small 2c/5 tagalong that can be attached to the back of the 34-cell ship to make a 54-cell ship: x = 18, y = 15, rule = B3/S235bo3bo$5bo3bo$4bobo$3b3o2b2o2$6b2o$6b2o5bo$6b2o3bobo$11bo$2b2obo5b3o$bobo2bo6b3o$2o3bo6b2o2bo$bobobo7bob2o$2b2ob2o7b4o$5bo! Obviously there are other known ships that can pull this tagalong, but I haven't enumerated all the small cases yet. -Matthias Merzenich Sokwe Moderator Posts: 1480 Joined: July 9th, 2009, 2:44 pm This all looks like real progress after a decade of practically no activity on small periods space ships. Could we extend this to short wide ones? This would then give the flexibility to finalize puffer engines, grey ships and other extensible structures. Also, further progress in the p6 and p7 area (more examples) would help. HartmutHolzwart Posts: 422 Joined: June 27th, 2009, 10:58 am Location: Germany HartmutHolzwart wrote:Could we extend this to short wide ones? It would certainly be nice to have more short c/4 and 2c/5 components. One possible way to make a short c/6 orthogonal ship might be to start with this well known component: x = 10, y = 8, rule = B3/S232b2ob2o$b4obo$o6bo$bo4bo$6bo$4bo3b2o$8bo$8bo! I think I have run through a full height-9 c/6 search using WLS and found nothing, so it might be best to search at a height of 10. The 2c/5 width-11 knightt search finished, and I have attached the results to this post. Attachments knightt-2c5-w11.rle (60.4 KiB) Downloaded 115 times -Matthias Merzenich Sokwe Moderator Posts: 1480 Joined: July 9th, 2009, 2:44 pm ### Re: Spaceship Discussion Thread A rather sparky 75(?)-cell ship: x = 14, y = 274bo$5bo$obo$obbo$obboo$o4bo$b4o$bo4bo$oobo3bo$oobo$3bobo3boo$3bobo$obb3oboboo$b3o5bo$$9bobbo10boo4bo4bo3bo4b5o3b3obboo3bo5bo3boboo9bobo6boob3o7bo3bo7bobo8bobbo9b3o! 
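For reference, a speed like 2c/5 or c/8 means the ship returns to its starting shape after the stated period, displaced by the stated number of cells. A generic checker sketched with a set-based B3/S23 step, verified here on the glider, whose c/4 diagonal speed is well known:

```python
from collections import Counter
from itertools import product

def step(cells):
    """One B3/S23 generation on a set of live (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for x, y in cells
                     for dx, dy in product((-1, 0, 1), repeat=2)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

def is_ship(cells, dx, dy, period):
    """True if the pattern recurs displaced by (dx, dy) after `period` steps."""
    g = cells
    for _ in range(period):
        g = step(g)
    return g == {(x + dx, y + dy) for x, y in cells}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
print(is_ship(glider, 1, 1, 4))  # True: c/4 diagonal
print(is_ship(glider, 2, 0, 5))  # False: the glider is not a 2c/5 ship
```

The same predicate works for knightships by passing an asymmetric displacement, e.g. `is_ship(cells, 2, 1, 6)` for (2,1)c/6.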
On an unrelated note, I have eliminated the possibility of a (2,1)c/6 knightship with a diagonal width of 14 half-diagonals in all phases using JLS, and I have almost certainly eliminated the possibility of a knightship with the same single-phase width (certain quirks with JLS's unset cells make it hard for me to know for certain, but the probability that a partial ever almost reached the edge of the grid that I used seems astronomically low). x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt)$$x_1=\eta xV^*_\eta=c^2\sqrt{\Lambda\eta}K=\frac{\Lambda u^2}2P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. Pierce A for awesome Posts: 1876 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 ### Re: Spaceship Discussion Thread A for awesome wrote:A rather sparky 75(?)-cell ship It's 77 cells. The back end showed up in the knightt search I ran yesterday. A small tagalong from that search makes an 83-cell ship and the B-heptomino tagalong makes an 84-cell ship: x = 31, y = 45, rule = B3/S2323b2o18bo2b4o14b2o2bo2bobobo3b2o13b2obo4bo2bo3bo8b2o4bob3o2bob2o3b3o20b2ob2o7b3o5b2o4b2o5b2o8bo3b3o2bo7bo3b3obo2b4o3bo2b2o2obo7b2ob2ob2ob2obobob2o5bobob2o8bo1229bo28b2o27b2obo223b2o4bo18bo2b4o14b2o2bo2bobobo13b2obo4bo2bo8b2o4bob3o2bob2o20b2ob2o7b3o5b2o4b2o5b2o8bo3b3o2bo7bo3b3obo2b4o3bo2b2o2obo7b2ob2ob2ob2obobob2o5bobob2o8bo! I noticed that applying one of my new tagalongs to the new 56-cell ship gives a 73-cell ship: x = 28, y = 16, rule = B3/S2317bo15b2o3bo15b2o2bo10b2o3b4obo7b2obo6bobob2o2bob2o2bo4bobo6bo2o4bo2b2ob4o4b3oo4bo3b2o3bo5b2o5o7b2o9bo13bo7bo2bo22bo2bo3b3o16bob2o3bo19bo2b2o3bo4b2o! This is the first known 73-cell 2c/5 ship. Now 2c/5 ships are known for all bit counts from 56 to 90. I'm sure that every bit count over 90 could be achieved using only components found in the current small ships collection. 
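The claim that every bit count above 90 should be reachable follows from simple arithmetic: the known ships form an unbroken run of 35 consecutive counts (56 to 90), which is longer than the cell cost of a small repeatable component. For example, the B-heptomino tagalong mentioned earlier adds 7 cells (79 to 86). The sketch below treats a +7 component as freely attachable to any base ship, which is an idealization (real tagalongs attach only to specific back ends), but it shows why the consecutive run closes all larger counts:

```python
def reachable_counts(base, delta, limit):
    """Bit counts reachable by repeatedly adding a delta-cell component.

    Idealized: assumes the component can be appended to any base ship
    any number of times, which is not true of every real tagalong.
    """
    reached = set()
    for b in base:
        n = b
        while n <= limit:
            reached.add(n)
            n += delta
    return reached

base = set(range(56, 91))            # known 2c/5 ships: every count 56..90
r = reachable_counts(base, 7, 1000)  # +7 cells per B-heptomino tagalong

# A run of 35 consecutive counts plus a +7 step covers everything above:
print(all(n in r for n in range(56, 1001)))  # True
```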
A for awesome wrote:I have eliminated the possibility of a (2,1)c/6 knightship with a diagonal width of 14 half-diagonals in all phases using JLS Could you describe your method for this search? I suspect I know what it is, but I'm curious. -Matthias Merzenich Sokwe Moderator Posts: 1480 Joined: July 9th, 2009, 2:44 pm ### Re: Spaceship Discussion Thread Sokwe wrote: A for awesome wrote:I have eliminated the possibility of a (2,1)c/6 knightship with a diagonal width of 14 half-diagonals in all phases using JLS Could you describe your method for this search? I suspect I know what it is, but I'm curious. It's relatively ugly and requires a lot of manual intervention. (Also, I'm not quite as confident about the single-phase prediction as I was before.) I first set up a large grid of off cells in one phase, and empty those that fall inside the requisite diagonal swath. The swath must entirely intersect the leading edge of the grid (not the corner). I then mark all cells at the trailing edge of the grid that could possibly be active in a longer partial as unset in all phases. Next, I mark the first cell in the leading row (that is not preprocessed to be empty) as on, and start the search. When that completes, I mark that cell off, set the next one in the row on, and restart the search. I continue that until the last cell in that row has been set to off. If a solution is ever output, that means that either there is a ship or a very long partial. I feel like there should be a better way, but the JLS manual doesn't have anything. I've been setting up most of my recent searches in a similar way, in fact, except with a knightwise swath instead of a diagonal one. However, I would be surprised if no one has done a search for knightships on a knightwise swath before. x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt)$$x_1=\eta xV^*_\eta=c^2\sqrt{\Lambda\eta}K=\frac{\Lambda u^2}2P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. 
Pierce A for awesome Posts: 1876 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 ### Re: Spaceship Discussion Thread c/8 partial. Front wibbles in WLS, tried to extend with zfind at width 10 and this is the best it could find. x = 22, y = 43, rule = B3/S23b2o16b2o4o14b4oo2b2o12b2o2bobo18bo6b2o6b2o7b2o4b2o5b3obo2bob3o8b6o8b6o29b4o7b3o2b3o7bo6bo6bo3b2o3bo6bo8bo7bobo2bobo6bo2bo2bo2bo7b2o4b2o7bob4obo9b4o7bobo2bobo7bobo2bobo5b2ob2o2b2ob2o4bobo2b4o2bobo9b4o8bob2obo4b3ob2o2b2ob3o2b5o2bo2bo2b5o2b2obob2o4b2obob2o3bo3bobo2bobo3bo5b2o2bo2bo2b2o6bo2bo2bo2bo2b3obob2o2b2obob3o6bo8bo2b2ob2ob6ob2ob2o3bo4b2o2b2o4bo2b2o14b2o3bob5o2b5obo3bo3bobo2bobo3bo3b2o2b2ob2ob2o2b2ob3ob2o2b4o2b2ob3o4b2ob2o4b2ob2obo2b2obo6bob2o2bo! -Josh Ball. velcrorex Posts: 339 Joined: November 1st, 2009, 1:33 pm ### Re: Spaceship Discussion Thread It's a pity that I couldn't complete it. By the way,I noticed two partials with the same front end but different period x = 42, y = 15, rule = B3/S232b2o3b2o3b2o14b2o3b2o3b2o2b3obo2bob3o14b3obo2bob3ob2o10b2o12b2o10b2o4bob4obo18bob4obob5o4b5o11b2o2bob4obo2b2ob3o3b2o3b3o14b2o6b2ob2obob4obob2o14b4o2b4o3bob2o2b2obo17b2o4b2ob2o10b2obobo3b2o3bobo13bo3bo2bo3bobo12bo12b2obobo2bobob2oo2bo2bo2bo2bo2bo10b2o3bo4bo3b2oobo4b2o4bobo10b3ob2o4b2ob3oo2bob2o2b2obo2bo14bo2b2o2bo3ob2ob2ob2ob3o11b2o3bo2bo3b2o! Current status: outside the continent of cellular automata. Specifically, not on the plain of life. GUYTU6J Posts: 669 Joined: August 5th, 2016, 10:27 am Location: outside Plain of Life ### Re: Spaceship Discussion Thread I ran the width-12 c/4 orthogonal knightt search. The results are attached to this post, although they are probably not very useful to anyone. 
I looked through the ships a little and managed to construct the following small ships based on new components: x = 56, y = 80, rule = B3/S23b2o47bo3o47b2o2b2obo15b2o31bo2bo2b2ob2o10b3obo10b2o19bo7bo5bob3o13b3o2b2o7bobo6bobo9bo20b3o3bo6bobo10bo9b2ob2o16bo4bo2bo14b3o13bo5bo6bobo5bobobob2o20b2o7bobob3o4b2o3bo12bo11bo6bobo4b3obo6bo12bo2bo12bo2bo9b2o19b2o2b2o11bobobob2o19bo15b2o3bo37bo1129bo28b2o16bo2bo16bo2bo4b2obo14b2obo4b4obobo19bo3bo2b2obo13bo2bobo9bo9bo3bobo3bobo7bo2b2o5b3o4bobo4bob2o3bo4bo2b2o2bo4b2o5bo2b3o4bo5bo4b2o5bo5b2o2bo2b2o2o3bo2bo1116b2obo12bobo3b2o10b2o4bo12bob3o17b2o3bobo16bo4bo2bo20bo3bo14bobobobo5b3o4boboboo2b2o4b2obobo5b2o2o7boo3bo6b3o5b2o4bo2bo10b2o2bo7b3o4bo14bo13b2o11bo2b2o9b2o11bo! I also found this 70-cell ship that has probably been seen before, but has been overlooked for some reason: x = 34, y = 11, rule = B3/S236bo4b2o3bo4b3o5bobobob2o3b2o4bo2bo6bo3b2o3bo6bobo8b2o2b2o7bo7bo9b2o7bo4bo3b2o2bo3bo2bo2bo2bo2b2ob2o6bo3b2o2bo2b3o6bobo9bo9bo8bo2bo3o6b2o11b2o3bob2ob2o8bo14bo! I am also running the width-12 2c/5 knightt search. So far it has found these two small tagalongs: x = 36, y = 44, rule = B3/S236bo7bob2obo5bobo6bobo3bo4bo3b3o2bobobo7bobo5bob3o3bobo5bob2o3bo6b2o6bobobo3bo2bob3o14bobobo4bo6bo2bo2b2obo11b2o3bobobo2bo12b2o12bo2o3bo24bo3bobobobo23b2ob2o2b2ob2o23bo5bo25b2ob2o1126bo2bo26bo3bo25bo2bo2bo24b3o212bo3bob2obobo4bo11bo4bo4bobo2b2obobo11bo2b2obobobo4b2obobo11b2o9bo2bo4bo15bob3o2bo3b4o6bo8bo11b2o5bobo7bo4bo3b3o5bob3o6b2o22b2obobobo2bo2o3bobobobo2b2ob2o5bo! A for awesome wrote:I mark the first cell in the leading row (that is not preprocessed to be empty) as on, and start the search. When that completes, I mark that cell off, set the next one in the row on, and restart the search. This is what I suspected. Do you do this for every cell in the entire diagonal row (as opposed to the first half-diagonal)? Do you also do this in every phase? 
Attachments knightt-c4-w12.zip (384.2 KiB) Downloaded 115 times -Matthias Merzenich Sokwe Moderator Posts: 1480 Joined: July 9th, 2009, 2:44 pm ### Re: Spaceship Discussion Thread Sokwe wrote: A for awesome wrote:I mark the first cell in the leading row (that is not preprocessed to be empty) as on, and start the search. When that completes, I mark that cell off, set the next one in the row on, and restart the search. This is what I suspected. Do you do this for every cell in the entire diagonal row (as opposed to the first half-diagonal)? Do you also do this in every phase? Every cell, but not every phase. [s]That's why I'm sure about the every-phase width result, but not the single-phase width result.[/s]EDIT 3: Never mind, I would have to do this for every phase to truly rule out all ships. I'll do that for the width-16 search. EDIT: I'm currently doing the same thing at 15hd width, and the second-to-last phase of the search underwent 7788888882 iterations exactly. I suspect that mathematics itself just played a joke on me. EDIT 2: I finished the last search phase, so there are (probably) no width-15hd (2,1)c/6 knightships. For width-16 I'll give up on the single-phase width search concept, because it didn't work anyway. EDIT 3: See above. EDIT 4: An example partial (probably not one of the longest ones): x = 16, y = 133boobbobbob5obbooobo5b3o9bobooboboboobb4obobbobbo3b4o3bo3bo10boo6bo3bo4bo5bobb4obbo5bobo3bobo6bo4b3o! EDIT 5: An interesting 2c/5 frontend: x = 13, y = 13oobooobbo4boobbo3bobo3bo4b4obbo4bo6bo4bob6o3bo3bo4bo6boobb3o6bo4bo3bobb3o4b4o! x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt)$$x_1=\eta xV^*_\eta=c^2\sqrt{\Lambda\eta}K=\frac{\Lambda u^2}2P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. 
Pierce A for awesome Posts: 1876 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 ### Re: Spaceship Discussion Thread Here are two new 2c/5 tagalongs from the still ongoing width-12 search: x = 28, y = 45, rule = B3/S2312bo3bob2o11bo4bo3bo11bo2b2obobo11b2o15bob3o6bo8bo5bobo7bo4bo3b3o5bob3o8b4obo6b2o10b2o2bo18b2o3bo2b2obo14bobobo2bo2o3bobobobo2b2ob2o5bo96bo5bobo4bo3b3o5bob3o6b2o22b2obobobo2bo2o3bo17bobobobo16b2o2b2ob2o15bo5bo9b2o6bo12b2ob3o4bobo8bobo3b2o2bo3bo2bo8bo4bobo2bo3bo5b2obo3bob2o7bo2bo5b2o3b2o4b2ob3obo3bo4bo13b4o4bo5bo3bo8b2obob2o5bo! Here are all of the new 2c/5 ships found since I last updated the small ships collection: x = 166, y = 498, rule = B3/S235bo3bo5bo3bo4bobo3b3o2b2o26b2o6b2o5bo6b2o3bobo11bo2b2obo5b3obobo2bo6b3o2o3bo6b2o2bobobobo7bob2o2b2ob2o7b4o5bo1112bo3bob2o11bo4bo3bo11bo2b2obobo11b2o15bob3o6bo8bo5bobo7bo4bo3b3o5bob3o8b4obo6b2o10b2o2bo18b2o3bo2b2obo14bobobo2bo2o3bobobobo2b2ob2o5bo1117bo15b2o3bo15b2o2bo10b2o3b4obo7b2obo6bobob2o2bob2o2bo4bobo6bo2o4bo2b2ob4o4b3oo4bo3b2o3bo5b2o5o7b2o9bo13bo7bo2bo22bo2bo3b3o16bob2o3bo19bo2b2o3bo4b2o1237b2o2b2o36b2o37bob2o23b2o8bo6bo18bo2b4o8bo14b2o2bo2bobobo6bo13b2obo4bo2bo8b2obob2o8b2o4bob3o2bob2o8b2o5b2o20b2ob2o10b4o7b3o5b2o4b2o15bobo5b2o8bo24bo3b3o32bobo2bo7bo3b3o18b4o7bobo2b4o3bo2b2o17b2o5b2o2bobo2obo7b2ob2o17b2obob2o4bob2ob2obobob2o19bo11b3o5bobob2o22bo12b3o8bo24bo6bo4b2o2bo37bob2o5bob2o36b2o9b4o37b2o2b2o12139bo2b2o96bo41bob2o95b5o25bo3bob2o5bo2bo31b2o61bo5bo23bo4bo3bo3bo2bo11bo19bo13bo7bob2obo36b2o27bo2b2obobo6bo6b3o2bo15b3o14bobo6bobo3bo39bo24b2o10b2o9bo4bo28bo3b3o2bobobo7bobo29b2o30bob3o3b3o6bobo3bo4bo8bo3bo13bob3o3bobo5bob2o3bo16b2ob2o4b2o24bo8bo9bo5b3o3bo6b2o4b2o4bo14b2o6bobobo3bo2bob3o14bobo5b2o4bo20bobo7bo3b2o6bo3bo4b2obo4b2o23bobobo4bo6bo2bo9bo8b2o4bo4bo14bo3b3ob2o9b2obobo2b4o4bobo10b2obo11b2o3bo19bo9b6o3bobo15bob3o8b4obob2ob2o6b2obobo2b2o4bo13bobo2bo12b2o12bo9bo4bo6b2o4bo18b2o10b2o2boo3bobo15b4o13b2o3bo24bo3bo9b2o3b3o9b3o28b2o3bobobo3bo7b2o6bo16bobobo23b2ob2o19bo9b3o10b2obo14bobobo2bo2bo31b2ob2o23
bo12bobobo4bo9b2o2bo8bobo2bo44bo25b2ob2o6bo4bo3bo11bob2o7b2o3bo8b2o71bob4o16b4o7bobobo81b2o32b2ob2o118bo953b2o14b2o2bo32b4o14b3o33b2o10b3obo34b3o9bobobob3obo29bo9bo3b6o30bo2bo9bo28bo3bob2o2b2o10bobo24bo4bo3bo2b2o5bo31bo2b2obobo5bo3bo27b2o4bobo34bob3o3b3o2b2o22bo8bo31bobo7bo6b2o22bo3b3o6b2o5bo17bob3o8b4obo6b2o3bobo18b2o10b2o2bo11bo32b2o3bo2b2obo5b3o14b2obo14bobobo2bo6b3o11bobo2bo2o3bo6b2o2bo9b2o3bobobobo7bob2o10bobobo2b2ob2o7b4o10b2ob2o5bo25bo118bo6b2o3bo6b2o2bo6b4obo38bo8b2o7bo11bob3o5b2o6bo5bo8b2o5b2o3bo2b2obo7bo2bo3bobo8b2o4bo3bo15b2o2bo5b2o12bo4b6o3bo3bo3bo4b2o3bo2bo25b2o5bo2b2oboob2oboo3boob2ob2obo4bobo4bobo6128bobo22b2o128bobo21b6o89b2ob2o31b2obo8bo2b2o9bo3bo88bo35bob2o8bob2o11bo4b2o26bo2bo57b2ob2o9bo2bo19bo3bo7bo2bo11bo2bo26bo3bo57bo3bo8bo3bo18bob2obo5bo2bo25bo2bo2bo59bo8bo3b2o19b2obo8bo15b2o5bo24b3o60bo13b2o31b2o17bo4b3o61b2o22bo15b2ob2o23bo4b3o21b2o12bo3bob2obobo4bo27bo2b4o21b2ob3o14b2obo21b2o5bo12b2obo8bo11bo4bo4bobo2b2obobo20b2o2bo2bobobo3b2o14b2o22b2o2bo36bob2obo5bo2bo11bo2b2obobobo4b2obobo19b2obo4bo2bo3bo17b2obo20bobo4bo11bo2bo17bo3bo7bo2bo11b2o9bo2bo4bo15b2o4bob3o2bob2o3b3o16bo19b5ob5o11bo4b2o14bob2o8bob2o15bob3o2bo3b4o28b2ob2o17bo6b3o15bo2bo18bo3bo17b2obo8bo2b2o6bo8bo11b2o16b3o5b2o4b2o19bo3bo3b2o15b5ob5o12b6o18bobo5bobo7bo27b2o8bo25bobo26bobo4bo13b2o21bobo4bo3b3o30b3o34b3o2b2o22b2o2bo5bob3o30bo7bo3b3o49b2obo6b2o31bo2b4o3bo2b2o27b2o18b2ob2o20b2o22b2o38b2obo7b2ob2o27b2o5bo12b2o9bo13bo2b4o17bo2b4o2b2obo33b2ob2obobob2o30b2o3bobo11bo3b2o4b3o13bo2b4o17bo2b4obobo2bo36bobob2o37bo14bo3bo4b2o15bo23bo2o3bo40bo30b2obo5b3o12bo2bo8bo16bo23bobobobo70bobo2bo6b3o20bo2bo16b2o22b2o2b2ob2o68b2o3bo6b2o2bo20bo2bo15b2o22b2o5bo70bobobo7bob2o20bob2o77b2ob2o7b4o20bo2b2o10b3o2b2o17b3o2b2o80bo48bobo21bobo130bo3bo19bo3bo130bo3bo19bo3bo933b2ob2o20b2ob2o13bo4bo3bo23bo24bo17bobo4bo3bo22b2ob2o20b2ob2o13bo3b3o3bobo26bo3bo20bo3bo13bob3o2b3o2b2o26bo24bo15b2o31bo24bo68bo5b2o22bo24bo22b2o45b2o5b2o21b2ob3o19b2ob3o17b3o44b2obo4bo22b2o23b2o23bobo26b2obo21b2obo18bo43b2o4boo2b4o22bo24bo19b2o38b
o2b4oo2b4o4b3ob2o14b3o22b3o14bo36b2o2bo2bobobo2o8bo2bo18b2o23b2o15b2o2b2o29b2obo4bo2bo10b3o63bo27b2o4bob3o2bob2o10b2o17b2o22b2o19bobo39b2ob2o13b2o13b6o18b6o15bo5bo23b3o5b2o4b2o2b2o8bo2bo11bo3bo19bo3bo17bo5bo21b2o8bo2bo2b4o2bobobobo9bo4b2o17bo4b2o16b2o23b3o2bo2b4o2bo4bo10bo2bo20bo2bo24bo18bo7bo3b3o3bo8bo2bo59b2o3bo6b2o8bo2b4o3bo2b2o6bo5bo2bo13b2o5bo16b2o5bo14bo3bo6bo2bo6b2obo7b2ob2o7b2o20bo4b3o16bo4b3o14bo3bo4b2o11b2ob2obobob2o7b2o25b2o22b2o14b3obo5bo3bo12bobob2o25b2obo8bo11b2obo8bo11bo4bo4bo3bo16bo4b3o2b2o13bob2obo5bo2bo9bob2obo5bo2bo9bobobob2o3bobo5bobo16bo3bo7bo2bo8bo3bo7bo2bo8bo3bobo3b2obo6bo3bo13bob2o8bob2o8bob2o8bob2o8b2ob3o6bo3bo14b2obo8bo2b2o7b2obo8bo2b2o13bo4bo28bobo21bobo24b2o28bobo21bobo24b2o831bo22bo30bobo20bobo5bo23bo3b3o16bo3b3o2b2ob2o23bob3o4bo2b2o9bob3obobobo25b2o5bob2o12b2o29bo3bo2o3bo32bo2bo43b2o2b2obobo2bo25b2o3bo2bo14b2o26b2ob2o2bo2b2obo25b3o5bo14b3o25b2obob2obo32bo3b2o17bo5bo19bo3bo5bo6b2o22bo5b3o14bo5b3o20b3o5bob3o19b2o7bo13b2o5b2o22bo4bo3b3o3b3ob2o8bo22bo10bo15b2o11bo5bobo5bo2bo12b2o2b2o17b2o2b2o2bo2bo14bo2bo7bobo6bo6b3o15bo22bo6bo2bo12bo3b2o6bo13b2o14bobo20bobo6bob2o11b3o2b2o6b3o16b2o10bo5bo16bo5bo4bo2b2o10b2o5b2o8bo2bo9bo5bo16bo5bo20bo5bo2b4o2bobobobo8b2o21b2o24b2obo5bo2b4o2bo4bo14bo22bo24bob3o6bo8bo2bo11b2o3bo17b2o3bo18bo6bo9bo5bo2bo11bo3bo18bo3bo17bo9b2o10b2o18bo3bo18bo3bo16b2ob3o4b2obo10b2o17b3obo18b3obo16b2o10bobo28bo4bo17bo4bo17b2obo7bo3bo7b3o2b2o13bobobob2o15bobobob2o17bo10b2o2bo8bobo16bo3bobo16bo3bobo20b3o10bo9bo3bo13b2ob3o17b2ob3o22b2o9bo3bo20bo22bo34b2o21b2o34b2o21b2o1028bobo21bobo6bo21bobo21bobo4b2o3bo15b2obo8bo2b2o7b2obo8bo2b2o4b2o2bo15bob2o8bob2o8bob2o8bob2o4b4obo14bo3bo7bo2bo8bo3bo7bo2bo24bob2obo5bo2bo9bob2obo5bo2bo25b2obo8bo11b2obo8bob2obo29b2o22b2o28boob2obo23bo4b3o16bo4b3o26boo3bo24b2o5bo16b2o5bo25b2o15b2o12bo7bob2obo15boob2o79bo3b2o14bo12bobo6bobo3bo13bob2obo6b3ob2o10bo2bo20bo2bo23b3o2bo4bo10b3o13bo3b3o2bobobo7bobo5b2o4bobo3bo2bo13bo4b2o17bo4b2o23bo4bo29bob3o3bobo5bob2o3bo5b2o4bobo3b3o14bo3bo19bo3bo22bobo3bo4bo8bo3bo14b2o6bob
obo3bo2bob3o5bo10b2o16b6o18b6o19b3o3bo6b2o4b2o4bo22bobobo4bo6bo2bo13b2o14b2o22b2o20b2o6bo3bo4b2obo4b2o11b2obo11b2o3bo2b2o8bo2bo57b2o9b2obobo2b4o4bobo9bobo2bo12b2o12bo2bo2b4o2bobobobo55b2ob2o6b2obobo2b2o4bo12b2o3bo24bo3bo2bo2b4o2bo4bo8b7o21b7o12bo3bobo15b4o14bobobo23b2ob2o3bo8bo2bo9bo6bo20bo6bo12bobo3bo7b2o6bo17b2ob2o23bo6bo5bo2bo9b2o3bo22b2o3bo14bobo2bo2bo34bo25b2ob2o7b2o18bob2o24bob2o7b2o21b2o26b2o20b2o30bobo25bobo4b3o2b2o18bo27bo5bobo20bo4bo22bo4bo6bo3bo17bo5bo21bo5bo6bo3bo17bo2b2o23bo2b2o29b4o24b4o30bo27bo1138b2o38bo2bo37bo3b2o36b3o2b2o5b2o2b2o26b2o4b2o32bo5bob2o5bo23b2obobo6bo4b4o25bob3obo10bo2b2o20bo6boo10bo2bo20bo9b2ob2obob2o4bo2bo18b2ob3o4b2obob2o5b2o3b4o16b2o10bobo3b4o7bo2b2o15b2obo7bo3bo6bobo4bo2b2o17bo10b2o2bo8bo4bo2b3o11bo6b3o10bo6bobo4b2o2b2o11bo3bo3b2o3b4o22bobob2o5b2o18b3o2b2ob2obob2oo30b2obo29b2obo6bo22b2o5bob2o4b2o21b2obo5b2o2b2o15bobo2bo25b2o3bo26bobobo27b2ob2o30bo1030b2ob2o21b2ob2o29bo25bo25b2o10bo17b2ob2o21b2ob2o21b6o23b2o27b2o7b2ob2o17bo3bo21bo3bo19bo3bo24b6o23b6o6bobobo21bo25bo20bo4b2o21bo3bo24bo3bo5b2o3bo17bo25bo24bo2bo24bo4b2o22bo4b2o6bobo2bo14bo25bo54bo2bo25bo2bo7b2obo14b2ob3o20b2ob3o24b2o5bo24b2o24b2o29bo4b3o20b2o5bo21b2o5bo11b2o12b2obo22b2obo31b2o21bo4b3o21bo4b3o9bo2bo13bo25bo24b2obo8bo24b2o27b2o8b5ob2o12b3o23b3o19bob2obo5bo2bo14b2obo8bo16b2obo8bo7bo21b2o24b2o19bo3bo7bo2bo12bob2obo5bo2bo14bob2obo5bo2bo8b2o66bob2o8bob2o12bo3bo7bo2bo13bo3bo7bo2bo9b3o20bo25bo18b2obo8bo2b2o10bob2o8bob2o13bob2o8bob2o11bo19b2o24b2o21bobo22b2obo8bo2b2o12b2obo8bo2b2o5bo26bo25bo21bobo25bobo26bobo5bo3bo98bobo26bobo4bobo22b2o22b2o22bo3bo3b3o2b2o18b3o10bo10b6o19bo3bo23bo3bo23bo3bo26bo10b2o2bo9bo3bo20bobo26bo3bo23bo3bo6b2o17b2obo7bo3bo10bo4b2o17b3o2b2o22bobo25bobo6b2o5bo10b2o10bobo12bo2bo48b3o2b2o21b3o2b2o6b2o3bobo11b2ob3o4b2obo39b2o11bo14bo9b2o15b2o5bo17b2o26b2o26b2o2b2obo5b3o14bo6bo17bo4b3o17b2o26b2o26b2obobo2bo6b3o17bob3o20b2o46b2o26b2o2o3bo6b2o2bo12b2obo16b2obo8bo12b2obobobobo7bob2o12bo18bob2obo5bo2bo10bobo2bo23b2obo24b2obo2b2ob2o7b4o10b2o18bo3bo7bo2bo8b2o3bo23bobo2
bo22bobo2bo5bo21b3o2b2o14bob2o8bob2o9bobobo22b2o3bo22b2o3bo28bo3b2o15b2obo8bo2b2o8b2ob2o22bobobo23bobobo29bo2bo19bobo22bo24b2ob2o23b2ob2o29b2o21bobo50bo27bo1135bo2bo6bo28bo3bo5bobo26bo3b2o4bo3b3o24b2o5bob3o25b2ob2o11bo6b2o30b2obo8b2o41b2o2bo4bobo2b2obo36bobo3bobobo2bo32b5ob3o2o3bo17bo15bo2bobobobo16b2o15b5ob3o2b2ob2o15bo19bobo3bo5bo9b2o6bo17b2o2bo4bobo12b2ob3o4bobo13b2obo8b2o8bobo3b2o2bo3bo2bo9b2ob2o11bo8bo4bobo2bo3bo12b2o9bo5b2obo3bob2o7bo2bo7bo3b2o4b3o5b2o3b2o4b2ob3obo3bo7bo3bo4b2o4bo13b4o4bo8bo2bo8bo5bo3bo8b2obob2o20bo2bo5bo40bo2bo46bob2o47bo2b2o3145b2o145bo2bo144bo3b2o143b3o2b2o144b2o145bo145b2obo52b3o39b3o52bo51bob2o38bob2o47bo8b2o40bo41bo49bo7b4o27bo10b2o40b2o48b2ob3o6b2o2bo22b2o2bo2bo9b2o4b2o17b4o13b2o4b2o40b2o5bo3bo9bo13bob2o14bo4b3o3bo8bo2bo4bo13bo4b3o3bo8b2o26b2obo12b2o5bo13bob2o15b2o3bo6bo5b4o5bo14b2o3bo6bo5bo29bo3b2o3bo2b2ob3ob2obo12bo5bo12bo6b2o4bobo4bo4b2o15bo6b2o4bobo4bo21bo6b3o2bo9bobobo3bo4bob2ob3obo2b3o9bo4b2o6b3ob4ob2obobo4b3o7bo4b2o6b3ob3o4bo8bo12bo3bo3b2obo2b2obo5b2obo4bo2b2obo3bo16b2ob4o9bo7b3obo4b2o7b2ob4o9bo7b3obo4b2o11bobo2obo4bo5b2o7b2o2bo2bo18b5o11b4o4bo8bo8b5o11b5ob2obobo4b3o9b3o2b2ob2ob2ob2o14bo5b2o34b2obo4bo33b2obo4bo4b2o5b5o60bo41b4o5bo16b2o71b2o40bo2bo4bo16b2o5bo117b4o17b2o3bobo143bo134b2obo5b3o133bobo2bo6b3o132b2o3bo6b2o2bo133bobobo7bob2o134b2ob2o7b4o137bo! Here is the updated unique 2c/5 ships collection: #C This collection contains all known "unique" 2c/5 ships up to 90 bits.#C That is, each ship in this collection has some component that is not#C found in any smaller 2c/5 ship.#C#C For a complete collection of known 2c/5 ships up to 90 bits, see#C ships-2c5-small.rle#C#C Discovery credits:#C AP = Aidan F. 
Pierce#C DB = David Bell#C DH = Dean Hickerson#C HH = Hartmut Holzwart#C JB = Josh Ball#C MM = Matthias Merzenich#C PT = Paul Tooke#C RW = Robert Wainwright#C SS = Stephen Silver#C TC = Tim Coe#C#C 30 PT 7 Dec 2000#C 34 MM 8 Aug 2015#C 44 DH 23 Jul 1991#C 51 PT 28 Nov 2000#C 54 MM 25 Jan 2017 (tag)#C 56 MM 27 Sep 2015 (tag)#C AP 21 Jan 2017 (B-heptomino component by PT between Feb 2000 and#C Mar 2000. Larger component by PT 3 Jul 2000.)#C 57 PT 1 Nov 2000#C 58 MM 22 Jan 2017#C 59 AP 21 Jan 2017#C 60 TC 3 May 1996#C 62 MM 28 Jan 2017 (tag)#C MM 9 Aug 2015#C 64 SS 2 Mar 1999#C PT 7 Dec 2000 (tag by RW between Jul 1991 and Jul 1992)#C MM 8 Aug 2015 (tag by DB 11 May 2000)#C 66 JB Feb 2013#C PT Between Feb 2000 and Mar 2000#C 67 MM 27 Sep 2015 (tag)#C 68 HH 23 Jan 2008#C 69 HH 26 Nov 1993#C 70 HH 5 Dec 1992#C 72 PT Between Feb 2000 and Mar 2000#C PT 12 Apr 2002#C PT Between Feb 2000 and Mar 2000#C PT 7 Dec 2000 (tag by RW 25 Jul 1992)#C 74 PT 7 Dec 2000 (tag by DB between Jul 1991 and Jul 1992)#C 75 PT Between Feb 2000 and Mar 2000#C 77 AP 26 Jan 2017#C 78 PT Between Feb 2000 and Mar 2000#C 79 MM 25 Jan 2017#C MM 28 Jan 2017 (tag)#C 81 MM 10 Aug 2015#C MM 27 Sep 2015 (tag)#C 83 MM 28 Sep 2017 (tag)#C MM 26 Jan 2017 (tag)#C 85 PT 7 Dec 2000 (tag by DB 11 May 2000)#C PT Between Feb 2000 and Mar 2000#C 89 MM 28 Jan 2017 (tag)#C 90 MM 25 Jan 2017#C MM 25 Jan 2017x = 535, y = 281, rule = 
B3/S23282bo156bo125bo3b2obob2o155b5o121bo4bobo2bo154bo5bo121b2ob3o3bo190b2o27bo127b2o125b2obo5bo170bo19bo13bo7bob2oboobobo3bobobo13bobo91bobobo3bobobo26bo100bobobo3bobobo14bobobo138bobobo3bobobo14b3o2bo15b3o14bobo6bobo3bo25bo3b3o124b2o126b2o6bobo165bo4bo28bo3b3o2bobobo7bobo4bo3bo3bo13bob3o89bo7bo3bo11b2ob2o4b2o105bo7bo3bo14bobo144bo3bo3bo14bobo3bo4bo8bo3bo13bob3o3bobo5bob2o3bo27b2o114bobo5b2o4bo130b4o164b3o3bo6b2o4b2o4bo14b2o6bobobo3bo2bob3oobobo3bo3bo107bobobo3bobobo9bo8b2o4bo102bobobo3bobobo15bo145bo3bobobo11b2o6bo3bo4b2obo4b2o23bobobo4bo6bo2bo23b2obo114bo9b6o131b4o160b2o9b2obobo2b4o4bobo10b2obo11b2o3bo4bo3bo3bo9bobo2bo96bo7bo9bo4bo6b2o104bo3bo7bo14bobo144bo7bo9b2ob2o6b2obobo2b2o4bo13bobo2bo12b2o12bo21b2o3bo116b2o3b3o133b2o6bobo156bo3bobo15b4o13b2o3bo24bo3boobobo3bobobo9bobobo93bobobo3bobobo19bo107bobobo3bobobo14bobobo142bo3bobobo9bobo3bo7b2o6bo16bobobo23b2ob2o23b2ob2o114bobobo4bo130b2obo5bo160bobo2bo2bo31b2ob2o23bo26bo114bo4bo3bo131b2ob3o3bo203bo25b2ob2o141bob4o134bo4bobo2bo167b2o141b2o139bo3b2obob2o282bo20146b2o2b2o145b2o136b2o168b2o2bo6bo26bo3bo115bob2o132b3o167b3o2b2o2bobobo26bo3bo111bo6bo131bo3b3o163bo7b4o2bo3bo25bobo114bo139b2o2bobo5bo157b2ob5o3b2o3b2oobobo3bo3bo11b3o2b2o89bobobo3bobobo8bo6bo111bobobo3bobobo10bo2bob2o4b3o133bobobo3bobobo10b2o10b3o142b2obobo138b2ob2o2bo2bo158b2o11bo4bo3bo3bo14b2o91bo7bo3bo9b2o5bobo112bo3bo3bo14bo3bo2bo139bo3bo3bo17bo4b2o27b2o115b8o136b3o3bo162bo7bo3boobobo3bobobo14b2o91bobobo3bo3bo131bo3bo3bo16bo144bo3bobobo15b3o4bo3bo144b8o137bo176bobo4bo7bo10b2obo93bo3bo3bo3bo9b2o5bobo112bo3bo3bo15b3o3bo139bo3bo3bo22bobo2bo114b2obobo139bo3bo2bo171boboobobo7bo8b2o3bo93bobobo3bobobo8bo6bo115bo3bobobo13b2ob2o2bo2bo137bo3bobobo22bo22bobobo115bo140bo2bob2o4b3o168bo3b6o23b2ob2o114bo6bo132b2o2bobo5bo170bobobob3obo26bo119bob2o131bo3b3o178b3obo145b2o135b3o185b3o146b2o2b2o131b2o185b2o2bo14364b2ob2o363bo362b2ob2o363bo3bo366bo178b2o2bo130b3o46bo142b2obo178b3o101bo2bo26bo47bo97b2o2b2o37b2o3bo2b2o153bo3bob2o13b3obo103bo3bo28bo43b2ob3o92b2o41b2ob3o27bobo1
22bo4bo3bo11bobobob3obo97bo3b2o23b2o46b2o98bob2o25b2o10bo24bo4bo122bo2b2obobo12bo3b6o99b2o24b2o49b2obo91bo6bo25bobo14b4o23b3o4bo121b2o19bo108b2ob2o11bo8b2ob5o21bo23bo93bo32bobobobo3bobo2bobob2oo3bo3bo3bo9bobo95bobobo3bobobo23bob3o13bobo83bobobo3bobobo12b2obo8b2o7bo7bo4bo15b2o25b3o65bobobo4bo13bo6bo27b2obobo3bo4bo21b2o2b2o4bo115bo8bo131b2o2bo4bobo7b3o2b2obo3b2o14bobo4b2o19b2o89b2obobo6bo2bo13bo5b2o2bo2boboboo3bo3bo3bo9b2o2b5o89bo11bo13bobo7bo20bo86bo7bo16bobo3bo12b2o2bo4bo3bo11bo7bob3obobobo78bo3bo4bo14b2o5bo3bo3bo13bo3bo5b2o145bo3b3o23b3o108b5ob3o22b2o2bo9b2o8b3o3bobobo14bo3b3o83b6o3bo3b2o11bobo8b2oobobo3bobobo107bobobo3bobobo13bob3o8b4obo9b2o88bo3bobobo13bo2bo31bo7b2o4bo4bo3bo20b2o4b3o56bobobo4bo27b3o11b3o2b2o147b2o10b2o2bo9bo112b5ob3o26bo7b2ob5obo7b6o14bo5b2o80b12o5bo4bo7bo9b2o2b5o89bo3bo3bo30b2o3bo9b5ob2o82bo3bo20bobo3bo21b2o2bo6bo3bo5bo3b3o6bo78bo3bo4bo12b2o5bo11bo12b2o21b2o2b2o4bo111b2obo14bo13bo2bo109b2o2bo4bobo8b2o2bo4bo3bo7bobo2bobobob3o4b3obo11b2o87b2obobo3b2o2bo18b2o4bo7bo9bobo95bobobo3bobobo9bobo2bo29b2o85bo3bobobo12b2obo8b2o8b3o2b2obo3b2o8bobo2b3obo4bo4bobo11b3o65bobobo4bo11bo6bo2b2o3b2obo14b2o23b3o4bo110b2o3bo135b2ob2o11bo7bo7bo4bo24bobo13bo91bo15bo24bo4bo112bobobo26b2obo105b2o23b2ob5o44b2obo89bo6bo20b2obo27bobo113b2ob2o24bobo2bo103bo3b2o21b2o48b2o96bob2o19bobo2bo146bo24b2o3bo105bo3bo23b2o47b2ob3o90b2o21b2o3bo172bobobo105bo2bo29bo44bo95b2o2b2o17bobobo173b2ob2o134bo49bo117b2ob2o176bo136b3o50bo116bo363bo3bo362b2ob2o363bo364b2ob2o8183b2ob2o14bo84b2ob2o182bo17b2o3bo80bo181b2ob2o14b2o2bo80b2ob2o182bo3bo13b4obo80bo3bo185bo103bo27bobo151bo103bo191bo2bo24bo4bo149bo22bo80bo193bo3bo23b3o4bo147b2ob3o18b2o78b2ob3o188bo2bo2bo22bobo152b2o22bo79b2o192b3o34b2o21b2o2b2o4bo146b2obo23bo76b2obo221bo2b4o22b2o2b5o148bo19b2o82bo179bo3bob2obobo4bo23b2o2bo2bobobo3b2oobobo4bo110bobobo3bo3bo10b2obo34b3o15bo60bobobo3bo3bo12b3o142bobobo3bobobo19bo4bo4bobo2b2obobo19b2obo4bo2bo3bo142b3obo35b2o15b2o3bo81b2o174bo2b2obobobo4b2obobo14b2o4bob3o2bob2o3b3oo8bo110bo7bo3bo8bo6bo10b2o7b2o31bo2
bo59bo3bo3bo21bobo133bo3bo7bo19b2o9bo2bo4bo27b2ob2o21b2o119b2obo2b2o5bo2b3o7b3o14bo16b2o85bo3b2obo169bob3o2bo3b4o15b3o5b2o4b2oobobo4bo11bo2b4o92bobobo3bobobo10b3obo2b2o2bo29b2o78bo3bobobo15b2o140bobobo3bobobo14bo8bo11b2o14b2o8bo21bo2b4o117bobobo5bo2bobo2bo3bo17bo13b2o88bo3b2obo159bobo7bo25b3o4bo4bo12bo97bo3bo7bo13b2obob2o3b2o2bo3b3obo29b6o60bo7bo21bobo133bo3bo7bo12bo3b3o29bo7bo3b3o25bo120b2obo3bo8b2ob2obo13b2o13bo3bo84b2o168bob3o29bo2b4o3bo2b2oobobo4bo16b2o92bobobo7bo19bo28b3o13bo4b2o60bo7bo12b3o142bobobo3bobobo14b2o30b2obo7b2ob2o26b2o120bo4bo25bo17bo2bo82bo206b2ob2obobob2o178b2obo100b2obo167b2obo37bobob2o23b3o2b2o147b2o20b2o80b2o169bobo2bo39bo24bobo151b2ob3o15bo82b2ob3o163b2o3bo25bo3bo149bo103bo168bobobo25bo3bo151bo13b2obo86bo167b2ob2o185bo8bob2obo89bo166bo182bo3bo7bo3bo87bo3bo181b2ob2o8bob2o87b2ob2o182bo12b2obo87bo183b2ob2o10bobo86b2ob2o198bobo3462bo461b4o460bo2b2o460bo5bo460bo4bo461bo462bo2bo461bo3bo464bo287b2o2b3ob3o161b2o168bo2bo114b2o171bo2b2o168bo3bo114bob2o2b3o2bo160b2o26bo3bo116bobo17bo3b2o110bo6bo3b3o2bo162bo26bo3bo113bo4bo8bo9b2o113bo14bobo159b2o25bobo115b3o4bo5bo3bo7b2ob2o109bo15b3o158bo2bo18boobobo3bo3bo11b3o2b2o89bobobo3bobobo9bobo15bo10b2obo85bobobo3bobobo10b2obob2o8bo131bobobo3bobobo17b5o15b2o18bo141b2o2b2o4bo2bo19b2o2bo104b2o5b2o169bo2bo15bobo4b2o11b2oo7bo3bo14b2o91bo7bo13b2o2b5o3bob3o16bobo4bo81bo3bo16b4o141bo3bo3bo23b3o13bo7bob3obobob2o2b2o27b2o5bo118bo2bo15b5ob5o105bobo171b2o12b2o8b3o3bobob2o2boobobo3bobobo14b2o3bobo85bobobo3bobobo20bobo16bo2bo88bo3bobobo17bo139bobobo3bobobo31b2o4bo4bo3bo12bo32bo120bo18b5ob5o105bobo171bo11b2ob5obo7b5o3b3o4bo7bo10b2obo5b3o85bo3bo3bo3bo9b2o2b5o3bo2b2obo14bobo4bo81bo7bo12b4o141bo3bo7bo18bo11bo3bo5bo3b3o6b2o4bo22bobo2bo6b3o104b2o2b2o4bo2bo3b2o14b2o2bo104b2o5b2o168bo3bo9bobo2bobobob3o4b3oboobobo7bo8b2o3bo6b2o2bo82bobobo3bobobo9bobo10bo15b2obo89bo3bobobo10b2obob2o140bobobo3bobobo16b2o3bo9bobo2b3obo4bo4bobo22bobobo7bob2o105b3o4bo17b2ob2o109bo177b2o27bobo23b2ob2o7b4o105bo4bo18b2o113bo177bo2bo26bo120bobo17bo3b2o110bo6bo17
3bo168bo3bo114bob2o171bo168bo2bo114b2o169bo287b2o2b2o163bobo455bo3b3o456bob3o457b2o2453b2obo452bobo2bo451b2o3bo452bobobo453b2ob2o456bo9457bo33bo3bob2obobo411bobo32bo4bo4bobo23bo235b2o149bo3b3o32bo2b2obobobo23b2o3bo227bo2b4o150bob3o32b2o9bo22b2o2bo76bo7bob2obo134b2o2bo2bobobo150b2oobobo3bobobo23bob3o2bo17b2o3b4obo48bobobo3bobobo13bobo6bobo3bo98bobobo3bobobo21b2obo4bo2bo124bobobo3bobobo27bo8bo21b2obo83bo3b3o2bobobo7bo122b2o4bob3o2bob2o147b2oboo7bo17bobo7bo20bobob2o57bo11bo13bob3o3bobo5bob2o4bo93bo7bo28b2ob2o124bo3bo3bo3bo9bobo2bo25bo3b3o21bob2o2bo4bobo80b2o6bobobo3bo2bo121b3o5b2o4b2o147b2o3bo17boobobo3bobobo13bob3o20b2o4bo2b2ob4o53bobobo7bo22bobobo4bo3b2obo92bo7bo13b2o8bo133bobobo3bobobo9bobobo16b2o27b2o22bo4bo3b2o3bo77b2obo11b2o3bo5b2o113b3o166b2ob2o15bo4bo3bo3bo38b5o7b2o55bo3bo7bo9bobo2bo12b2o8bo93bo7bo10bo7bo3b3o132bo3bo7bo13bo9b2o6bo23b2obo37bo76b2o3bo135bo2b4o3bo2b2o166b2ob3o4boboobobo3bobobo9bobo2bo92bobobo7bo9bobobo117bo7bo8b2obo7b2ob2o133bobobo3bobobo16bobo3b2o2bo3bo2bo21b2o3bo27b3o86b2ob2o134b2ob2obobob2o165bo4bobo2bo3bo22bobobo27bo91bo139bobob2o164b2obo3bob2o7bo2bo23b2ob2o26bo234bo166b2o3b2o4b2ob3obo3bo26bo28b2o398bo13b4o4bo456bo3bo8b2obob2o456bo14145bo144bobo143bo3b3o144bob3o26b2o2b2o113b2o25b2o26bob2o116b2o135b2o2bo6bo22bo6bo115b3o134b3o2b2o2bobobo207b3o22bo123bo134bo7b4o2bo3bo202bob2o21bo122bo137b2ob5o3b2o3b2o159b2o40boobobo3bobobo9b2obob2o91bobobo3bobobo10b2o115bobobo3bobobo10b2o10b3o132bobobo3bobobo15b4o27bo10b2o22b2o5b2o111bo142b2o11bo158b2o2bo22b2o2bo2bo9b2o4b2o17b4oo11bo11b4o92bo7bo3bo10b2o2b2o115bo3bo3bo17bo4b2o133bo3bo3bo3bo13bo3bo9bo13bob2o14bo4b3o3bo8bo2bo4bo27bobo115bo141bo7bo3bo163b2o5bo13bob2o15b2o3bo6bo5b4o5boobobo7bo16bo90bobobo3bobobo10bobo118bo3bobobo15b3o4bo3bo130bobobo3bo3bo11b2o3bo2b2ob3ob2obo12bo5bo12bo6b2o4bobo4bo4b2o27bobo112bo5bo147bobo154bo9bobobo3bo4bob2ob3obo2b3o9bo4b2o6b3ob4ob2obobo4b3o4bo7bo11b4o92bo3bo3bo3bo9bo5bo115bo3bo3bo161bo3bo3bo9bo2b2obo5b2obo4bo2b2obo3bo16b2ob4o9bo7b3obo4b2o22b2o5b2o112b2o151bobo152b2obo4bo5b2o7b2o2bo2
Finally, here is the small 2c/5 ships collection (as an attachment, since it no longer fits in a single post): ships-2c5-small.rle (61.88 KiB) Downloaded 118 times

Edit: I accidentally forgot to add some ships to the small ships collection (see this post).

-Matthias Merzenich
Sokwe
Moderator
Posts: 1480
Joined: July 9th, 2009, 2:44 pm

### Re: Spaceship Discussion Thread

I finished the latest knightship search; it found nothing with a diagonal width of <= 16hd. I don't currently feel like trying 17hd just yet, and I still think we're nowhere near an actual ship.

EDIT: I actually think (2,1)c/7 is somewhat more promising. Here's an example 11hd partial:

x = 14, y = 13
bbooboobbobobooboobo6boooo5bobo4boobbo3bo4bo4booboo5bo5boo6b4o9boobbo10b3o!

EDIT 2: Slightly longer (12hd):

x = 17, y = 17
4bo6boo3booboboobbobboo3booo3boboo4boo3bob5obo5bo7bo6b4obo10b3o12boobo11bo4bo11boobbo10bo3bobo14b3o14b3o!

EDIT 3/4: Even more so (same width):

x = 19, y = 18
3bobbo3booboboboobo4o4bo8boobboo4bobboobboboo3bo3b4o4b4o3boboo5bobbobboo3bo8boob4o$$15bobbo$13bob3o$12boboboo$$12boo3boo16boo!

The front end advances for 4 full periods. Also,

x = 22, y = 22
4bo4boobo4bo8booobb4obbooboo4bobboo4b4o4bo3b3o5bo3bob3o5bobbobbo3bo6bobbo6bobobboo7b3obboo3bo11booboob3o12bobobb3o19boo19bobo19boo15b3o15boo3bo16bo3bo19boo!

is longer, but not quite as robust.
x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt)$$x_1=\eta xV^*_\eta=c^2\sqrt{\Lambda\eta}K=\frac{\Lambda u^2}2P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt} http://conwaylife.com/wiki/A_for_all Aidan F. Pierce A for awesome Posts: 1876 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 I accidentally forgot to include the following ships in my small 2c/5 ships collection: x = 272, y = 143, rule = B3/S2387bo$32b2ob2o49bobo$31bo53bo3b3o$30b2ob2o14bo36bob3o$31bo3bo12bobo36b2o$20b2o12bo12bo3b3o$18b2o3bo24bob3o17b2ob2o8b2obo$7bo7bobobo4bo6bo2bo14b2o18bo12bobo2bo$6bobo6bobobo3bo2bob3o37b2ob2o8b2o3bo$5bo3b3o2bobo5bob2o3bo15b2obo20bo3bo8bobobo4bob2obo$6bob3o3bobobo7bobo15bobo2bo8b2o12bo10b2ob2o3bobo3bo$7b2o6bobo3bo21b2o3bo7b2o3bo24bo3bobobo7bobo$15bob2obo23bobobo4bobobo4bo6bo2bo17bobo5bob2o3bo$3b2obo38b2ob2o3bobobo3bo2bob3o22bobobo3bo2bob3o$2bobo2bo40bo3bobo5bob2o3bo23bobobo4bo6bo2bo$b2o3bo45bobobo7bobo27b2o3bo$2bobobo46bobo3bo36b2o12bo$3b2ob2o45bob2obo48bo3bo$6bo99b2ob2o$107bo$108b2ob2o19$5bo3bo$5bo3bo$4bobo$3b3o2b2o33bo3bo$43bo3bo$6b2o34bobo$6b2o33b3o2b2o$6b2o$44b2o$2b2obo38b2o19b2ob2o$bobo2bo37b2o18bo$2o3bo57b2ob2o$bobobo4bob2obo24b2obo20bo3bo$2b2ob2o3bobo3bo22bobo2bo8b2o12bo$5bo3bobobo7bobo14b2o3bo7b2o3bo$9bobo5bob2o3bo14bobobo4bobobo4bo6bo2bo$10bobobo3bo2bob3o14b2ob2o3bobobo3bo2bob3o$10bobobo4bo6bo2bo13bo3bobo5bob2o3bo$13b2o3bo28bobobo7bobo$15b2o12bo18bobo3bo$26bo3bo17bob2obo$25b2ob2o$26bo$27b2ob2o19$86bo$31b2ob2o49bobo100bo$30bo53bo3b3o33bo61b2o3bo12bo63bo$29b2ob2o14bo36bob3o33b4o37bo21b2o2bo12b4o37bo21b2o3bo$30bo3bo12bobo36b2o34bo2b2o36b4o19b4obo10bo2b2o36b4o19b2o2bo$19b2o12bo12bo3b3o69bo5bo33bo2b2o8b2o25bo5bo33bo2b2o19b4obo$17b2o3bo24bob3o17b2ob2o8b2obo36bo4bo5b2o27bo5bo5b2o26bo4bo34bo5bo6b2o$6bo7bobobo4bo6bo2bo14b2o18bo12bobo2bo36bo6b2o3bo26bo4bo5bob2o2b2o4b2o16bo9b2o27bo4bo6b2o$5bobo6bobobo3bo2bob3o5bo31b2ob2o8b2o3bo38bo2bo2bobo8bo21bo6b2obo5bo4b3o17bo2bo2b2o3bo27bo9bob2o2b2o4b2o$4bo3b3o2bobo5bob2o3bo5b2o8b2obo20bo3bo8bobobo4bob2obo15bo12b3o3bobobo4b4o3b
o17bo2bo2bobobo4b4o3bo17b3o3bobo8bo22bo2bo2b2obo5bo4b3o$5bob3o3bobobo7bobo5b2o8bobo2bo8b2o12bo10b2ob2o3bobo3bo13bo11bob3o3b2obo5bo4b3o17b3o3bobo8bo20bob3o3bobobo4b4o3bo17b3o3bobobo4b4o3bo$6b2o6bobo3bo13bo7b2o3bo7b2o3bo24bo3bobobo7bobo5b2o9b4obo7bob2o2b2o4b2o15bob3o3b2o3bo24b4obo4b2obo5bo4b3o15bob3o3bobo8bo$14bob2obo15bo7bobobo4bobobo4bo6bo2bo17bobo5bob2o3bo5b2o7bo2bo2bo8b2o24b4obo7b2o24bo2bo2bo7bob2o2b2o4b2o13b4obo4b2o3bo$2b2obo38b2ob2o3bobobo3bo2bob3o5bo16bobobo3bo2bob3o5bo6bo16b2o22bo2bo2bo32bo15b2o23bo2bo2bo7b2o$bobo2bo40bo3bobo5bob2o3bo5b2o16bobobo4bo6bo2bo9bo2bo23b4obo6bo40bo2bo12b2o21bo$2o3bo45bobobo7bobo5b2o20b2o3bo21b3o23b2o2bo8bo2bo37b3o23b4obo7bo2bo$bobobo46bobo3bo13bo22b2o12bo12b2obo20b2o3bo8b3o39b2obo20b2o2bo9b3o$2b2ob2o45bob2obo15bo32bo3bo13bo23bo13b2obo38bo21b2o3bo10b2obo$5bo99b2ob2o54bo63bo15bo$106bo$107b2ob2o18$5bo3bo$5bo3bo$4bobo$3b3o2b2o33bo3bo$43bo3bo$6b2o34bobo$6b2o33b3o2b2o$6b2o$44b2o$2b2obo38b2o19b2ob2o$bobo2bo37b2o18bo$2o3bo57b2ob2o$bobobo4bob2obo15bo8b2obo20bo3bo$2b2ob2o3bobo3bo13bo8bobo2bo8b2o12bo$5bo3bobobo7bobo5b2o7b2o3bo7b2o3bo$9bobo5bob2o3bo5b2o7bobobo4bobobo4bo6bo2bo$10bobobo3bo2bob3o5bo8b2ob2o3bobobo3bo2bob3o5bo$10bobobo4bo6bo2bo13bo3bobo5bob2o3bo5b2o$13b2o3bo28bobobo7bobo5b2o$15b2o12bo18bobo3bo13bo$26bo3bo17bob2obo15bo$25b2ob2o$26bo$27b2ob2o! Here is the corrected small 2c/5 ships collection: ships-2c5-small.rle (64.74 KiB) Downloaded 122 times A for awesome wrote:I actually think (2,1)c/7 is somewhat more promising. (2,1)c/7 has also not been searched very heavily, so there might be something "easy" to find that has just gone unnoticed (like copperhead). 
Edit: A small tagalong gives two new small 2c/5 ships:

x = 34, y = 65, rule = B3/S23
29bo$28b5o$27b2o4bo$28bo4bo$29b2o$29bo$28bo$26bobo$12bo3bob2obobob2o$11bo4bo4bobo$11bo2b2obobobo$11b2o9bo$15bob3o2bo$6bo8bo$5bobo7bo$4bo3b3o$5bob3o$6b2o2$2b2obo$bobo2bo$2o3bo$bobobo$2b2ob2o$5bo11$4bo$3bobo$2bo3b3o$3bob3o$4b2o2$5b2o$4b3o6bo$5bo6b5o$3bo7b2o4bo$2b2o8bo4bo$bo11b2o$2b2o2b2o5bo$4bo7bo$2bobo5bobo$bo5bob2o$bo5bo$2b2o$7bo$3b2o3bo$3bo3bo$3bo3bo$2b3obo$bo4bo$obobob2o$o3bobo$2ob3o$7bo$7b2o$7b2o!

Edit 2: another tagalong:

x = 38, y = 17, rule = B3/S23
16bo2bo$13bo2bo3bo$12bobobo4bo$12b2o2b2o$6b2ob2o2bobobo4bo6bo$5bo8b2obo4bo6bo$4b2ob2o4bo3b2o3bo6bob3o2bo$5bo3bo4bo2b2o2bo3b2o9bo$8bo5bo3b2obo3bo2b2obobobo$4bo20bo4bo4bobo$2bo23bo3bob2obobo$b2ob3o$2o$b2obo$2bo$4b3o$5b2o!

-Matthias Merzenich
Sokwe
Moderator
Posts: 1480
Joined: July 9th, 2009, 2:44 pm

### Re: Spaceship Discussion Thread

There's so many posts on 2c/5 here now... Should a new thread be made just for them?

x = 81, y = 96, rule = LifeHistory
58.2A$58.2A3$59.2A17.2A$59.2A17.2A3$79.2A$79.2A2$57.A$56.A$56.3A4$27.A$27.A.A$27.2A21$3.2A$3.2A2.2A$7.2A18$7.2A$7.2A2.2A$11.2A11$2A$2A2.2A$4.2A18$4.2A$4.2A2.2A$8.2A!

Gamedziner
Posts: 789
Joined: May 30th, 2016, 8:47 pm
Location: Milky Way Galaxy: Planet Earth

Gamedziner wrote: There's so many posts on 2c/5 here now... Should a new thread be made just for them?

Not really. Most of the 2c/5 related posts are simply cumulative updates to the spaceship collections and a 2c/5 thread would divide the topic too much to be useful.

LifeWiki: Like Wikipedia but with more spaceships. [citation needed]
Posts: 1889
Joined: November 8th, 2014, 8:48 pm
Location: Getting a snacker from R-Bee's
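The partials and ships above are all exchanged in run length encoded (RLE) form. As a hedged illustration (a minimal sketch of my own, not any poster's tool, and limited to the standard two-state tokens), decoding RLE takes only a few lines:

```python
import re

# Minimal decoder for standard Life RLE: 'b' = dead cell, 'o' = live cell,
# '$' = end of row, '!' = end of pattern; header lines (starting with 'x')
# and comment lines (starting with '#') are skipped.
# Returns the set of (x, y) coordinates of live cells.
def decode_rle(text):
    cells = set()
    x = y = 0
    body = "".join(
        line for line in text.splitlines()
        if not line.startswith(("#", "x"))
    )
    for run, tag in re.findall(r"(\d*)([bo$!])", body):
        n = int(run) if run else 1
        if tag == "b":
            x += n                    # skip n dead cells
        elif tag == "o":
            cells.update((x + i, y) for i in range(n))
            x += n
        elif tag == "$":
            y += n                    # n row endings
            x = 0
        else:                         # '!' terminates the pattern
            break
    return cells
```

For example, `decode_rle("x = 3, y = 3, rule = B3/S23\nbo$2bo$3o!")` yields the five live cells of a glider. (Multi-state rules like LifeHistory use extra letters that this sketch does not handle.)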
{}
# First Order Diffy Q Problem with Bernoulli/Integrating Factors

Homework Statement: If dy/dy + y = (1-1/x)(1/(2y)), solve when x=1, y=1/sqrt(2). Hint: Use a substitution to get into the form of a first order differential equation

Relevant Equations:
integrating factor = e^integral(P(x))
v = y^(1-n)
dy/dx + P(x)y = Q(x)y^n

I seem to be getting an unsolvable integral here (an integral calculator says it's an Ei function, which I've never seen). My thought was to use Bernoulli to make it linear and then integrating factors. Is that wrong? The basic idea is below:

P(x) = 1, Q(x) = (1/2)(1-1/x), n = -1, so use v = y^(1-(-1)) = y^2, so y = sqrt(v), dy/dx = 1/(2sqrt(v)) dv/dx

Thus the equation becomes 1/(2sqrt(v)) dv/dx + sqrt(v) = (1/sqrt(v)) * (1/2)(1-1/x)

Multiplying by 2sqrt(v) gives dv/dx + 2v = 1 - 1/x

Using the integrating factor e^(2x) gives: d/dx(v*e^(2x)) = e^(2x)(1-1/x) = e^(2x) - e^(2x)/x

v*e^(2x) then just equals the antiderivative of e^(2x) - e^(2x)/x, but that second term seems to give a problem. Is my method incorrect? Not sure how to proceed.

Delta2
Homework Helper
Gold Member

I can't spot any mistakes in your solution (except that you've got to learn to use ##\LaTeX##), so it probably is correct. I tried Wolfram and it also gives the solution in terms of the Ei(2x) function: https://www.wolframalpha.com/input/?i=y'+y=(1-1/x)0.5(1/y)

Maybe check again the exact statement of the problem. There might be some typo after all.

haruspex
Homework Helper
Gold Member
2020 Award

Assuming you mean ##\frac{dy}{dx}+y=(1-\frac 1x)(\frac 1{2y})## (you wrote dy/dy), it already is a first order ODE. This does suggest a typo.

Delta2
Homework Helper
Gold Member

haruspex said: Assuming you mean ##\frac{dy}{dx}+y=(1-\frac 1x)(\frac 1{2y})## (you wrote dy/dy), it already is a first order ODE. This does suggest a typo.

I think there he wanted to write "first order linear".

haruspex
Homework Helper
Gold Member
2020 Award

Delta2 said: I think there he wanted to write "first order linear".

Yes, that fits.
I agree it leads to a nasty integral; I had it in the form of a double exponential.
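Since the thread concludes that the antiderivative involves the non-elementary Ei function, one way to sanity-check the working is numerical. The sketch below is mine, not from the thread; it assumes the transformed equation dv/dx + 2v = 1 - 1/x with v = y^2 and v(1) = 1/2, and compares a direct Runge-Kutta integration against the integrating-factor formula with the Ei-flavored integral done by quadrature:

```python
import math

# Transformed linear ODE from the thread: dv/dx + 2v = 1 - 1/x,
# with v = y^2 and v(1) = (1/sqrt(2))^2 = 1/2.
def rhs(x, v):
    return (1.0 - 1.0 / x) - 2.0 * v

def rk4(v0, x0, x1, n=10_000):
    """Classical 4th-order Runge-Kutta integration of dv/dx = rhs(x, v)."""
    h = (x1 - x0) / n
    x, v = x0, v0
    for _ in range(n):
        k1 = rhs(x, v)
        k2 = rhs(x + h / 2, v + h * k1 / 2)
        k3 = rhs(x + h / 2, v + h * k2 / 2)
        k4 = rhs(x + h, v + h * k3)
        v += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return v

def via_integrating_factor(x, n=100_000):
    """v(x) = e^(-2x) * (v(1) e^2 + integral_1^x e^(2t)(1 - 1/t) dt),
    with the integral evaluated by the trapezoidal rule."""
    f = lambda t: math.exp(2 * t) * (1.0 - 1.0 / t)
    h = (x - 1.0) / n
    s = 0.5 * (f(1.0) + f(x)) + sum(f(1.0 + i * h) for i in range(1, n))
    return math.exp(-2 * x) * (0.5 * math.exp(2.0) + s * h)

print(rk4(0.5, 1.0, 2.0), via_integrating_factor(2.0))  # the two should agree
```

Agreement of the two values supports the conclusion above: the method is fine, and the closed form simply is not elementary.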
{}
Issue No. 05 - May (2015 vol. 27)
ISSN: 1041-4347
pp: 1369-1382
Mirjana Ivanovic, Faculty of Sciences, University of Novi Sad, Serbia

ABSTRACT
Outlier detection in high-dimensional data presents various challenges resulting from the “curse of dimensionality.” A prevailing view is that distance concentration, i.e., the tendency of distances in high-dimensional data to become indiscernible, hinders the detection of outliers by making distance-based methods label all points as almost equally good outliers. In this paper, we provide evidence supporting the opinion that such a view is too simple, by demonstrating that distance-based methods can produce more contrasting outlier scores in high-dimensional settings. Furthermore, we show that high dimensionality can have a different impact, by reexamining the notion of reverse nearest neighbors in the unsupervised outlier-detection context. Namely, it was recently observed that the distribution of points’ reverse-neighbor counts becomes skewed in high dimensions, resulting in the phenomenon known as hubness. We provide insight into how some points (antihubs) appear very infrequently in $k$-NN lists of other points, and explain the connection between antihubs, outliers, and existing unsupervised outlier-detection methods. By evaluating the classic $k$-NN method, the angle-based technique designed for high-dimensional data, the density-based local outlier factor and influenced outlierness methods, and antihub-based methods on various synthetic and real-world data sets, we offer novel insight into the usefulness of reverse neighbor counts in unsupervised outlier detection.

INDEX TERMS
Standards, Correlation, Euclidean distance, Context, Educational institutions, Noise measurement, Histograms

CITATION
M. Radovanovic, A. Nanopoulos and M. Ivanovic, "Reverse Nearest Neighbors in Unsupervised Distance-Based Outlier Detection," in IEEE Transactions on Knowledge & Data Engineering, vol. 27, no. 5, pp. 1369-1382, 2015.
doi:10.1109/TKDE.2014.2365790
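The reverse-neighbor-count idea from the abstract is easy to sketch. The following brute-force code is illustrative only (it is not the authors' implementation, and the paper's AntiHub scoring is more involved): a point that appears in few other points' k-NN lists gets a low count and is an antihub, hence an outlier candidate.

```python
import math

# Brute-force k-NN lists over a small point set (O(n^2), stdlib only).
def knn_indices(points, k):
    nns = []
    for i, p in enumerate(points):
        dists = sorted(
            (math.dist(p, q), j) for j, q in enumerate(points) if j != i
        )
        nns.append([j for _, j in dists[:k]])
    return nns

# Reverse-neighbor counts: how often each point occurs in others' k-NN lists.
# Low count -> antihub -> likely outlier, per the abstract above.
def reverse_nn_counts(points, k):
    counts = [0] * len(points)
    for nn in knn_indices(points, k):
        for j in nn:
            counts[j] += 1
    return counts
```

For a tight cluster plus one far-away point, the far point never shows up in any cluster point's k-NN list and so scores a reverse-neighbor count of zero.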
{}
Find the centroid of the triangle with vertices $(0,0),(a, b)$, and $(a,-b)$.

Question: Find the centroid of the triangle with vertices $(0,0),(a, b)$, and $(a,-b)$.
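The centroid asked for above is the mean of the three vertices: ((0+a+a)/3, (0+b-b)/3) = (2a/3, 0). A quick numerical check (sample values of a and b are my own choice; the formula holds for any a, b):

```python
# The centroid of a triangle is the mean of its vertex coordinates.
# For (0,0), (a,b), (a,-b) this gives ((0+a+a)/3, (0+b-b)/3) = (2a/3, 0).
def centroid(vertices):
    n = len(vertices)
    return tuple(sum(coords) / n for coords in zip(*vertices))

a, b = 5.0, 3.0  # illustrative values only
print(centroid([(0.0, 0.0), (a, b), (a, -b)]))
```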
{}
# How to avoid global \sloppy with many \texttt

I read that using a global \sloppy is bad practice. I have a document with many variables of a source code that I reference as in \texttt{VariableName}. I use a lot of them both in normal paragraphs and sometimes in section titles. By default they are being handled terribly and produce a lot of overshoots on the right margin. So far I came up with the following solutions:

1. manually hyphenating \texttt{Variable\-Name}, which is (a) terrible case-by-case work and (b) might suggest hyphenated variable names to the reader
2. manually breaking the line right before the variable with \\ \texttt{VariableName}, which is (a) again terrible case-by-case work and (b) produces empty space on the right margin (line not "full")
3. using \sloppy globally, apparently bad practice but in fact the only solution that I see which does not require me to go through my whole document and fix on a case-by-case basis. This would be especially terrible if I, e.g., change margins in the end and have a bigger/smaller page and have to go through everything manually again. That is IMO not a solution.

Are there any better solutions?

• I would really use @egreg's solution in tex.stackexchange.com/questions/324869/… --- if you want justified text and have long identifiers, you need to hyphenate them. The solution linked performs automatic hyphenation on uppercases and underscores, and uses a "centered dot" which is in my opinion perfect for this case. Oct 1 '17 at 11:54

• Duplicate of tex.stackexchange.com/questions/44361/… ?
Oct 1 '17 at 16:00

• I would typeset these variables using \url of the url package (instead of \texttt), instructing the macro to break at any letter, with this preamble addition: \makeatletter \g@addto@macro{\UrlBreaks}{\do\/\do\a\do\b\do\c\do\d\do\e\do\f\do\g\do\h\do\i\do\j\do\k\do\l\do\m\do\n\do\o\do\p\do\q\do\r\do\s\do\t\do\u\do\v\do\w\do\x\do\y\do\z\do\A\do\B\do\C\do\D\do\E\do\F\do\G\do\H\do\I\do\J\do\K\do\L\do\M\do\N\do\O\do\P\do\Q\do\R\do\S\do\T\do\U\do\V\do\W\do\X\do\Y\do\Z}\makeatother Oct 1 '17 at 16:21

• @StevenB.Segletes - very clever. Note though that if you also use hyperref, that package will try to turn your \url macros into hyperlinks. Oct 1 '17 at 18:06

• If you like the basic \url approach, but don't want hyperlinks, then perhaps an answer based on this one could achieve what you want: tex.stackexchange.com/questions/219445/line-break-in-texttt/… Oct 1 '17 at 19:34

This is how \sloppy is defined (from the LaTeX sources, which you can read with texdoc source2e or via CTAN or texdoc.net):

\DeclareRobustCommand\sloppy{%
  \tolerance 9999%
  \emergencystretch 3em%
  \hfuzz .5\p@
  \vfuzz\hfuzz}

So these are the things \sloppy does:

• Sets \tolerance to 9999: this is not as bad as the name makes it sound. If you're not inclined to pay attention to underfull/overfull box warnings and fix them, in fact this is never a bad idea: all that a higher \tolerance (but non-infinite, i.e. less than 10000) does is allow TeX to consider worse line breaks, while still trying to generate an optimal paragraph. It is highly unlikely to make the output worse, and the worst you can say about it is that it makes TeX work harder, but on today's computers the difference can be measured in milliseconds.

• Sets \hfuzz and \vfuzz to 0.5pt. This only affects what warning is shown to you: only lines which are overfull or underfull by more than 0.5pt are warned about. (The default is 0.1pt.)

• Sets \emergencystretch to 3em. This is the most crucial thing for preventing overfull lines IMO (what you called “overshoots on the right margin”).
It is additional stretch that TeX adds to each line, after everything else has failed. You can set it as large as you want, if you're willing to accept extra-large spaces between words on such lines. (If you're not going to rewrite text for dealing with overfull boxes, extra-large spaces are definitely better than lines sticking out of the right margin!)

Now that you know and understand what \sloppy does, you can decide for yourself whether or not you'd prefer to apply it globally, rather than go by maxims like “global \sloppy is bad practice”. There are cases where that makes sense, and cases where it doesn't: if you have such hard-to-break lines throughout your document, then it most definitely makes sense to globally take measures that will result in the best output.

Personally, for the case you mentioned (a document with many hard-to-justify lines because of variable names in \texttt forming unbreakable boxes), this would be my order of preference:

1. Leave everything at the default settings; manually watch out for and fix overfull and underfull boxes by rewriting text, etc. This degree of polishing you can do if your document is worth the effort and you're sure you've already polished the content enough (which you should always do first). Else, below I assume you're not going to manually fix overfull boxes.

2. Globally (across all paragraphs of this type) use whichever parameters of \sloppy you think you want: either \sloppy itself, or \tolerance 9999 along with as much \hfuzz as you don't want to get warned about, and as much \emergencystretch as you can tolerate (the higher the better for avoiding overfull lines).

3. Give up on paragraph justification, and use ragged-right text. Text won't line up at the right margin, but you won't need any awkward spaces or hyphenation.

4. Use hyphenation inside the \texttt variables, globally: there are ways to do this, but as you said it can suggest to the reader that the variable is hyphenated, which I would really avoid.
The choice between (2) and (3) (\sloppy-like, or ragged-right) would depend on the exact document. Of course everything is subjective and a function of your aesthetics.

I would use \sloppy (or use \RaggedRight from the ragged2e package if you don't mind giving up the justified right margin). General advice about avoiding overstretched white space is general advice, but it's mostly advice about natural language text, and if what you are writing isn't natural language then different rules may apply (or there may be no rules and you have to do whatever you think best).

You don't need to do so much manual work. If you accept hyphenated variable names then you can re-enable hyphenation in the tt font; however, I wouldn't do that, especially if - is a legal character in variable names in the language being documented. Similarly you can define a command such that \zz{MyVariable} puts MyVariable in tt and makes a breakpoint before the word, leaving the line short if a break is taken at that point (as if \\ had been used). That's a possibility, but if there are a lot it's probably better to use \RaggedRight. Or you can use \sloppy, which is probably OK if you want justified paragraphs.

For the specific paragraphs which have a lot of such function names, use \begin{sloppypar} ... \end{sloppypar}; it makes more sense than a global \sloppy.

Here is a solution based on the hyphenat package:

\documentclass{article}
\usepackage[htt]{hyphenat}
\hyphenation{My-Terribly-Excru-ciating-ly-Long-Identi-fier-Thats-What-You-Get-For-Writing-Java}
\newcommand{\terrible}{{\ttfamily MyTerriblyExcruciatinglyLongIdentifierThatsWhatYouGetForWritingJava}}
\begin{document}
This document describes a new program for comparing two numbers according to size. One number is referred to as \texttt{a}, the other one as \terrible.
\end{document}

• Just for information, TeX won't add hyphens after the 63rd character in a word. For instance it won't honor the discretionary hyphen between Writing and Java.
Oct 1 '17 at 16:11
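One way to realize the \zz command described in the answers above is the classic fil-glue breakpoint trick. This is a guess at an implementation, not the answerer's actual code (the name \zz and the glue amounts are assumptions):

```latex
% Hypothetical sketch: a breakpoint just before a typewriter word.
% If the break is taken, the +1fil glue stays on the previous line and
% lets it end short (ragged, as if \\ had been used); if no break is
% taken, the -1fil glue cancels it and nothing changes.
\newcommand{\zz}[1]{%
  \leavevmode
  \hspace{0pt plus 1fil}%
  \penalty0
  \hspace{0pt plus -1fil}%
  \texttt{#1}}
```

Usage would then be e.g. `... the variable \zz{VariableName} holds ...`; glue after a taken break point is discarded at the start of the next line, which is what makes the cancellation trick work.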
{}
# Thread: converting y to x

1. ## converting y to x

I have been working with finding areas and volumes lately and I always seem to get caught up when I need to convert y values into x values. Is there some easy method for doing this? For example

$y=2\sqrt{x-1} ..... x=\frac{y^2}{4}$

I just have a hard time seeing why this is. If possible please show a general method.

2. Well, there is no general method as such for rearranging formulas, but if you remember that whatever we do to one side of an equation we must do to the other, then it becomes easy. For example we can add and subtract, multiply and divide, take squares and roots of numbers from both sides:

$a+1=b \iff a=b-1 \text{ Subtract 1 from each side}$
$x-c=y \iff x=y+c \text{ Add c to each side}$
$y=\sqrt{x}\iff x=y^2 \text{ Square both sides}$

etc... And this applies to any arithmetic operation.

3. y/2 = (x-1)^0.5
y^2/2^2 = x - 1
x = (y^2 / 4) + 1

OK? Now solve my problem http://www.mathhelpforum.com/math-he...nces-hard.html
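The rearrangement in reply #3 (square both sides, then isolate x, keeping the +1) can be checked numerically. This sketch is illustrative only, not part of the thread:

```python
import math

# If y = 2*sqrt(x - 1), then y/2 = sqrt(x - 1), y^2/4 = x - 1,
# and so x = y^2/4 + 1 (matching reply #3 above).
def forward(x):
    return 2.0 * math.sqrt(x - 1.0)

def inverse(y):
    return y * y / 4.0 + 1.0

# Round-trip a few sample x values through y and back.
for x in (1.0, 2.0, 5.0, 10.0):
    assert math.isclose(inverse(forward(x)), x)
print("round-trip OK")  # prints "round-trip OK"
```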
{}
# Evaluate the line integral along the curve C \int_{C} xy^{3} ds, where C is given by x = 4 \sin...

## Question:

Evaluate the line integral along the curve C

{eq}\displaystyle \hspace{1cm} \int_{C} xy^{3} ds, \hspace{0.25cm} {/eq}

where {eq}\displaystyle C {/eq} is given by

{eq}\displaystyle x = 4 \sin t, y = 4 \cos t, z = 3t, 0 \leq t \leq \frac{\pi}{2} {/eq}

## Line Integral of Scalar Fields:

Derivatives have many applications. A derivative of a position vector will be used to solve this integral. First of all, evaluate the value of the function {eq}F(x,y,z) {/eq} in terms of the given parametric curve {eq}r(t) {/eq}. Then, multiply it with the magnitude of its derivative, i.e. {eq}\left | r'(t) \right | {/eq}, and then integrate it with the given limits of t. So,

{eq}\displaystyle \int_C F\ ds=\int_{a}^{b} F(r(t)) \left | r'(t) \right | dt {/eq}

## Answer and Explanation:

{eq}\displaystyle \int_C F\ ds=\int_{a}^{b} F(r(t)) \left | r'(t) \right | dt {/eq}

{eq}r(t)=\langle 4\sin t, 4\cos t,3t \rangle {/eq}

Take the derivative

{eq}r'(t)=\langle 4\cos t,-4 \sin t, 3 \rangle {/eq}

{eq}\left | r'(t) \right |=\sqrt{16\cos^2 t+16\sin^2 t+9}=\sqrt{16+9}=\sqrt{25}=5 {/eq}

{eq}0\leq t\leq \frac{\pi}{2} {/eq}

{eq}\displaystyle \int_C (xy^3) \ ds=\int_{0}^{\pi/2}(256 \sin t \cos^3 t)\times 5 dt {/eq}

{eq}\displaystyle \int_C (xy^3) \ ds=1280 \int_{0}^{\pi/2} \sin t \cos^3 t dt {/eq}

{eq}\displaystyle \int_C (xy^3) \ ds=-1280 [ \frac{\cos^4 t}{4} ]_{0}^{\pi/2} {/eq}

{eq}\displaystyle \int_C (xy^3) \ ds= -320 [ \cos^4 t ]_{0}^{\pi/2} {/eq}

{eq}\displaystyle \int_C (xy^3) \ ds= -320(-1)=320 {/eq}
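The closed-form answer of 320 can be corroborated numerically. The sketch below is mine, not part of the original solution; it evaluates the line integral with a midpoint rule, computing |r'(t)| directly rather than using the simplified constant 5:

```python
import math

# Midpoint-rule evaluation of integral_C x y^3 ds with
# x = 4 sin t, y = 4 cos t, z = 3t, t in [0, pi/2].
def line_integral(n=100_000):
    a, b = 0.0, math.pi / 2.0
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        x = 4.0 * math.sin(t)
        y = 4.0 * math.cos(t)
        # |r'(t)| = sqrt((4 cos t)^2 + (-4 sin t)^2 + 3^2) = 5
        speed = math.sqrt((4.0 * math.cos(t)) ** 2
                          + (4.0 * math.sin(t)) ** 2 + 9.0)
        total += x * y ** 3 * speed * h
    return total

print(line_integral())  # ≈ 320
```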
{}
# zbMATH — the first resource for mathematics

Semiconcavity of the value function for exit time problems with nonsmooth target. (English) Zbl 1064.49024

The author proves a new semiconcavity theorem for the value function $V(x):=\inf_{u(.)}\int_0^{\tau(x;u(.))}L(y(t))dt, \;x\in {\mathcal R}:=\text{dom}(V(.))$ of the exit time problem defined by the control system $y'(t)=f(y(t),u(t)), \;u(t)\in U, \;y(0)=x\in R^n, \;\tau(x;u(.)):=\inf\{t\geq 0; \;y(t)\in {\mathcal K}\}$ where, in contrast with previous work on this topic, the target $${\mathcal K}\subset R^n$$ is an arbitrary closed set with compact boundary while the “vectograms” $$f(x,U)$$ are assumed to be smooth and convex. The main result of the paper may also be interpreted as a regularity result for the viscosity solutions of the associated HJB equation $H(x,DV(x))=0, \;x\in {\mathcal R}\setminus {\mathcal K}, \;H(x,p):= \inf_{u\in U}[-<f(x,u),p>-L(x)],$ $V(x)=0, x\in\partial{\mathcal K}, \;\lim_{x\to \partial{\mathcal R}}V(x)= +\infty$ and may be related to some of the previous results on this topic.

##### MSC:
49L20 Dynamic programming in optimal control and differential games
49L25 Viscosity solutions to Hamilton-Jacobi equations in optimal control and differential games
35D10 Regularity of generalized solutions of PDE (MSC2000)
26B25 Convexity of real functions of several variables, generalizations
{}
# Trashing macOS Server: Extra 1 - Revisiting Time Machine

I mentioned in my post about Time Machine that Samba 4.8 supports Time Machine, but that it wasn't available on Ubuntu 16.04, so I went with Time Machine over AFP using Netatalk. After months of backing up over AFP, I started having some problems with backups becoming corrupt.

This story is part of a series on migrating from macOS Server to Ubuntu Server. You can find all of the other stories in the series here.

## The Problem

After a while of backing up 3 computers to my server over AFP, one of the computers began showing a message saying that Time Machine had verified my backup and that it had to make a new backup. Once this message was shown, it refused to back up. I found myself consulting a StackExchange post and a few blog posts, and after combining a few fixes, my computer happily began backing up again.

Within a month or so, it happened again. I fixed it again. Then a different computer began showing the same message. Obviously there was a problem, and I didn't feel like fsck-ing all my backups on a monthly basis until 2020, when Samba 4.8 would arrive for Ubuntu LTS.

I felt like it would be a good idea to try Time Machine over SMB with Samba, but I wasn't about to fiddle with the Samba installation that handles all my shared files.

## A Dedicated Time Machine Server

When all of this was happening, Ubuntu 18.10 was available, and it had Samba 4.8. I decided to spin up a new VM that would be only for Time Machine backups (and Windows File History for the one Windows machine on the network). Setup for this was pretty similar to setting up Samba for my shared files.
Really, only one config change was needed:

```
sudo $EDITOR /etc/samba/smb.conf
```

```ini
[Backups]
comment = Time Machine Backups
path = /media/NetworkBackup/Time Machine
fruit:time machine = yes

[File History]
comment = Windows File History
path = /media/NetworkBackup/File History
```

The global config for this server is the same as what I have for my file shares. The only difference for Time Machine is setting `fruit:time machine = yes` on the share that hosts Time Machine backups.

## Conclusion

That's it! Samba has been working much better for my Time Machine backups, insofar as they haven't started becoming corrupt spontaneously. I knew I'd have to make this change eventually, I was just expecting to do it with Ubuntu 20.04 LTS.
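For reference, `fruit:time machine = yes` only takes effect when the vfs_fruit module is loaded. My exact global section isn't shown here, but a minimal `[global]` along the lines such a setup typically uses looks like the following; the option names come from the standard smb.conf/vfs_fruit documentation, while the specific values are illustrative assumptions rather than my real config:

```ini
[global]
   ; illustrative values, not the actual server config
   server role = standalone server
   min protocol = SMB2
   ; vfs_fruit (plus its companions) is what enables the fruit:* options
   vfs objects = catia fruit streams_xattr
   fruit:metadata = stream
   fruit:model = MacSamba
```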
{}
# User talk:WFPM

## Speedy deletion of Real physical nuclear models

A tag has been placed on Real physical nuclear models requesting that it be speedily deleted from Wikipedia. This has been done under section A3 of the criteria for speedy deletion, because it is an article with no content whatsoever, or whose contents consist only of external links, "See also" section, book reference, category tag, template tag, interwiki link, rephrasing of the title, or an attempt to contact the subject of the article. Please see Wikipedia:Stub for our minimum information standards for short articles. Also please note that articles must be on notable subjects and should provide references to reliable sources that verify their content.

If you think that this notice was placed here in error, you may contest the deletion by adding {{hangon}} to the top of the page that has been nominated for deletion (just below the existing speedy deletion or "db" tag), coupled with adding a note on the talk page explaining your position, but be aware that once tagged for speedy deletion, if the article meets the criterion it may be deleted without delay. Please do not remove the speedy deletion tag yourself, but don't hesitate to add information to the article that would render it more in conformance with Wikipedia's policies and guidelines. Lastly, please note that if the article does get deleted, you can contact one of these admins to request that a copy be emailed to you. Oore (talk) 05:23, 28 April 2008 (UTC)

## Real Physical Model

A tag has been placed on Real Physical Model, requesting that it be speedily deleted from Wikipedia. This has been done under the criteria for speedy deletion, because it is a very short article providing little or no context to the reader. Please see Wikipedia:Stub for our minimum information standards for short articles. Also please note that articles must be on notable subjects and should provide references to reliable sources that verify their content.
Please do not remove the speedy deletion tag yourself. If you plan to expand the article, you can request that administrators wait a while for you to add contextual material. To do this, affix the template {{hangon}} to the article and state your intention on the article's talk page. Feel free to leave a note on my talk page if you have any questions about this. brewcrewer (yada, yada) 20:59, 16 May 2008 (UTC)

## Speedy deletion of Real Physical Model

A tag has been placed on Real Physical Model requesting that it be speedily deleted from Wikipedia. This has been done under section A1 of the criteria for speedy deletion, because it is a very short article providing little or no context to the reader. Please see Wikipedia:Stub for our minimum information standards for short articles. Also please note that articles must be on notable subjects and should provide references to reliable sources that verify their content.

If you think that this notice was placed here in error, you may contest the deletion by adding {{hangon}} to the top of the page that has been nominated for deletion (just below the existing speedy deletion or "db" tag), coupled with adding a note on the talk page explaining your position, but be aware that once tagged for speedy deletion, if the article meets the criterion it may be deleted without delay. Please do not remove the speedy deletion tag yourself, but don't hesitate to add information to the article that would render it more in conformance with Wikipedia's policies and guidelines. Lastly, please note that if the article does get deleted, you can contact one of these admins to request that a copy be emailed to you. WikiZorro 23:45, 16 May 2008 (UTC)

## Proper use of wiki

Please, please take some time to use this wiki edit tool properly. Your inputs are all malformed. I recommend starting with the Help link along the left.—RJH (talk) 16:49, 18 May 2008 (UTC)

Thanks but I'm an old dog and it's hard to learn new tricks.
I thought maybe the article setup person could help me, but I'll work at it. WFPMWFPM (talk) 18:45, 18 May 2008 (UTC)

I picked this up at 50, so it's not that hard. =)—RJH (talk) 14:47, 19 May 2008 (UTC)

Wait till you get older, but thanks. WFPMWFPM (talk) 14:55, 19 May 2008 (UTC)

Well, my dad did his first edit at 68. Getting warmer? —Tamfang (talk) 20:05, 8 May 2010 (UTC)

Just a young squirt. How about at 80? WFPM (talk) 20:19, 8 May 2010 (UTC)

PS I can't believe you guys would have an article on astronomy without reference to Dr. Asimov. WFPMWFPM (talk) 15:03, 19 May 2008 (UTC)

The preference is for primary sources, such as scientific journal articles.—RJH (talk) 15:12, 19 May 2008 (UTC)

## Welcome!

Getting started · Getting help · Policies and guidelines · The community · Writing articles · Miscellaneous

Hi, seems to me you've not been properly welcomed. Hope you find the links above helpful (I have by no means digested them all myself yet). This is kind of a weird place, but still a noble effort, in my opinion. I got slightly beat up myself a few months back, trying to improve the "2001" article section on interpretation, which had essentially only a link to a really pathetic Wiki article on the subject. That ended OK I guess (almost entirely due to the efforts of other editors who stepped into the fray; the separate article is not perfect, but much better), but I was quite astonished to learn that it was impermissible under the WP:NOR rules even to state that the novel was relevant to the meaning of the film. Anyhow, hope you don't get discouraged. I'm a physicist also, currently working in IR astronomy, but earlier did a term in experimental particle physics, and then some years in X- & gamma-ray astronomy. So I may be even older (66) than you. Anyhow, I have consulted a bit on RJHall's Cygnus X-1 article, which was how I noticed your interaction with him.
All the best, Bill Wwheaton (talk) 17:22, 19 May 2008 (UTC)

I just looked at your User page, & saw your plea for help on RJH's user page, and I guess I should add that I probably can't help you with your nuclear article if it involves a radical rejection of special relativity or QM, even though I mostly think of myself as a classical physicist still struggling with the post-Maxwell issues. I am also not at all expert in nuclear physics. I am most uncertain about (and interested in) the interpretations of QM. But per Wiki rules, you cannot get a new idea out here, except maybe informally, in the talk pages. There is some more freedom for discussion there, but of course you lose all intellectual property rights to any ideas mentioned. Wwheaton (talk) 21:01, 19 May 2008 (UTC)

Well, I'm curious enough to be interested in what your point is, but I don't know enough nuclear physics to have the concept. I remember going to a talk Pauling gave at UCSD on a "spheron model" (? I think it was) that argued that alpha particles significantly retain their identity in at least some nuclei. I guess the Ne was 5 alphas. Anyhow, I have almost no idea what you are talking about, so don't get your hopes up.... Have you got the article somewhere in your user space where I can see it? (I think that tends to be permitted when it is not abusively excessive. I believe you can get a copy of a deleted article e-mailed to you if you want it.) Asimov was my favorite after Clarke, maybe tied with Stanislaw Lem. Good minds, all. Cheers, Wwheaton (talk) 22:44, 19 May 2008 (UTC)

I like Asimov a lot in general, and he was often brilliant, but I hated the Foundation series. Kept thinking I'd learn something and never did, gave up after #3. Clarke did some pretty undistinguished stuff too, and I do think we mostly need to be judged by our best, not our worst work.
Wish I had more insight to offer on your nuclear model problem, but I think you basically just have to get it published in the mainstream literature. Right or wrong, Wiki has to wait for that. It's frustrating at times, but the necessity for it seems pretty clear to me, to keep us from getting endlessly mired in trying to settle disputes. I believe Pauling probably did have some intelligent things to say about the core halo structure you mentioned, but my problem was I'd gone two days without sleep when I tried to listen to that talk. (What a fiasco!) Best, Wwheaton (talk) 20:20, 21 May 2008 (UTC)

## Weird

For reasons that are entirely unclear to me, I can see notices re the material you have put on my talk page in the past few hours, and diffs showing them on the talk page history, but I cannot see the stuff at all when I just go to my talk page and look. The last thing I see there is clocked at 19:58, 21 May 2008 (UTC), but in the page history I see three later messages from you. No idea if this is Wiki software's fault or something here, or what. Anyhow, I can read your messages OK on the diff pages, it's just clumsy.

Aha! I see it! There is a dotted but empty box at the bottom of my page, and it is extremely wide, and goes off the right side of the page. The part I see is blank, but if I scroll far far right I start to see your text. There is something about the formatting I do not understand; do you have a space at the beginning of your line? I have had a similar thing happen to me occasionally in the past. Anyhow, I believe I can reconstruct your text from the diffs and put it into my page so I can see it, but hold off for a bit, and don't be surprised if you see some weird things happening there for a bit. Oy! :-( Computers !! Wwheaton (talk) 01:18, 22 May 2008 (UTC)

OK, I got it. For some reason, there were very long lines of blanks pushing your stuff off to the right. I deleted them, and now I can see your text. I have no idea how they got there.
Let me now read over it and think a bit. Meanwhile, you can return to normal "welcome mode" over there. Bill Wwheaton (talk) 01:34, 22 May 2008 (UTC)

## Black holes

In the chronologic history of the big bang theory of our universe's expansion from a point source to our present chaotic condition, I have two questions about the point in time when the volume (presumably spherical) of the universe exceeded the volume that would be the singularity size of a black hole with equivalent mass value. The first one is When? and the second one is Why? WFPMWFPM (talk) 20:46, 23 May 2008 (UTC)

Hi again. First thing is, if you have a question, it should go on the article discussion pages, not in the main article pages. We are supposed to be "encyclopedic", you know? I've copied your text above, as I expect it will get deleted from the article space pretty quick. Re the subject, if the universe is expanding fast enough, it never gets the news that there is too much mass (way over there on the other side) for it to be able to escape, because the news can only travel at the speed of light. This situation is called a white hole. The older discussion, of more than ten years ago, as to whether the Universe was "open" or "closed" boils down to just that; in the closed case, we would be, now, inside the event horizon, and thus doomed. There is clearly not enough normal matter to do that, I think we now know. But we also know that the geometry is nearly precisely flat (within about 1%) and that there is a lot of mass and energy unaccounted for, maybe 20 times what we can see. See shape of the universe, and stay tuned.... Re "why", all I can say is "Whaaa??" Good question, but nobody has a clue. Cheers, B Wwheaton (talk) 03:46, 24 May 2008 (UTC)

Re your first question, the answer would depend on whether or not the universe is open or closed.
If closed, then it is and always was a black hole, and its radius was always less than the event horizon radius (only there has still not been time for us to reach the future singularity). If open, then it has never been inside that radius, and never will be. However the question, while still open, has been overtaken by deeper problems about the cosmological constant, dark energy and such, about which we know even less. The second question is teleological and I have no idea (without prejudice...) what it even means. Best, Wwheaton (talk) 01:37, 29 May 2008 (UTC)

Hullo! Sounds like we are on the same page re teleology. If you go to the Particle Data Group website, Particle Data Group, you will find that the mean lifetime (to 1/e, not half-life to 1/2) of the neutron in free space is claimed to be 885.7±0.8 s. I don't know exactly how they do that experimentally (maybe with ultra-cold neutrons? I believe they can have such long wavelengths that they are scattered coherently [by many nuclei together, that is] from the walls of some materials with almost 100% reflectivity, so they can be contained in a box for long periods. You may know more about these than I do—it is very interesting, for both "just scientists" and "just engineers"!), but the 110 sigma confidence value makes me naively believe they are pretty sure whoever did it, done it right. High energy solar flares can produce neutrons on the Sun, and both the neutrons themselves (i.e., near Earth) and the 2.2 MeV n+p => D capture gammas produced in the solar atmosphere have been observed—I was even involved with an instrument (HEAO 3) that contributed to that. Few make it to Earth directly, because most decay on the way. I doubt these observations constrain the lifetime very well, but the decay does have to be considered (as a function of n energy—the very high energy ones must even live a bit longer due to relativity). Anyhow, that is (a little more than!
) all I know about the stability of neutrons in free space. I also do not know if this has any relevance to dark energy etc; I just think we are almost totally clueless about the subject, a little like the days between the Michelson–Morley experiment and 1905, waiting for Einstein. But, clearly, we are far from being able to answer even your first question, never mind the second one. Cheers, Bill Wwheaton (talk) 14:58, 29 May 2008 (UTC)

## philosophy, ideas, and reality

I noticed your comment on the Philosophy talk page, I think on 5/23? The problem with the suggestion to define philosophy as the "study of ideas", as I see it, is that the definition you suggest is already in the definition, or rather, in the expanded qualification of the definition in the following section. Your particular suggestion, I think, is reduced to how ideas come to be formed, and that seems to be covered a lot in the study of language, which is sort of the area of Analytic Philosophy, and the logical relating of concepts to one another in a "rational" fashion. In cognitive psychology and language there are philosophical points of view which go back to the basic definitions found at the beginning of the article. See the works of Noam Chomsky and his critics. In terms of your suggestion of contributing, you might have a place in the metaphysics division (what is the nature of reality?). That is a particular interest of mine. I'd be interested to hear what you have to say, nonetheless. Richiar (talk) 04:34, 26 May 2008 (UTC)

## Alternative Periodic Tables

Hi! Could you please tell me what's wrong with my attempt to edit Alternative Periodic Tables. I can't get my references to go to their appropriate locations?????? Thank you WFPMWFPM (talk) 21:08, 5 June 2008 (UTC)WFPMWFPM (talk) 21:33, 5 June 2008 (UTC)

There is no such page. I have no idea what you did.—RJH (talk) 22:27, 5 June 2008 (UTC)

Ah okay, the page name is lower case. I put in a lower case.
Other than that I'm not sure where you were trying to put in references or what they were.—RJH (talk) 22:29, 5 June 2008 (UTC)

## Thanks

Dear WFPM, Thank you very much for your support in regard to the Left Step Periodic Table and the ADOMAH PT. You are absolutely correct. These two tables follow electron configurations better than the IUPAC standard table. In fact, the ADOMAH PT is very helpful when it comes to deriving electron configurations. Have you seen this? Drova (talk) 02:11, 10 June 2008 (UTC)

I left another message for you on my talk. Drova (talk) 02:11, 10 June 2008 (UTC)

## fixed

I fixed your image links, complicated ain't it :-) Vsmith (talk) 12:04, 13 June 2008 (UTC)

## proposed electrons

Just look at these guys, no one could actually answer to there not being an electron. Science is supposed to be open to question, and when it's not, it is not science. The periodic table is good as a suggestion, but when it's not open to question anymore, it's not science. This sounds silly, but I don't have a model for the electron. This is a computer. Do I need a model for what it is? I just know what it is. This also sounds silly. I believe in every electrochemical cell and in every electrical motor, you blatantly violate the law of mass conservation. The electron does not hold out on an experimental basis, which is why they can't have a physical model for it. When you set up copper plating, or electric welding, this is somewhat of a radiative effect. You are ignoring this, by saying there are electrons, and you have protons and neutrons, and you only get radiation if you dissociate the protons and neutrons. But if you understand that there are no electrons, then you must look at this as a radiative effect, and you're totally in violation of mass conservation. And you know if you look up Faraday's experiments, he really experimentally demonstrates this.
—Preceding unsigned comment added by 202.89.32.166 (talk) 21:50, 5 September 2008 (UTC)

Once we developed the concept of mass-energy equivalence, it becomes possible to design the entire universe without mentioning real physical particles, but I'll quote you the following from the book "The Evolution of Matter" by Dr. Gustave Le Bon to explain my thoughts on the subject, which is: "It would, no doubt, be possible for a higher intelligence to conceive energy without substance, for there is nothing to prove that it necessarily needs a support; but such a conception cannot be attained by us. We only understand things by fitting them into the frame of our thoughts." WFPMWFPM (talk) 02:15, 6 September 2008 (UTC)

His quote should have said "We only understand things from what we're told." Every student who is shown the scientific method, and then graciously accepts the electron, is already ruined from ever being a scientist. Every electrician is ruined from ever understanding electricity when he says V=IR. If he just picked that he is going to be practical, and stick to it, it would work out.

I'm a graduate EE, but I learned electronics in the navy, before I took EE. I remember that the formula V=IR meant that there was a linear relationship between the measured current in a circuit and the measured voltage across a resistor in the circuit. But the current was tricky, and at very high frequencies (VHF) it didn't want to travel inside the metal conductor but only on the surface. So if you rubbed the silver plating off a copper conductor you could create a high resistance in the circuit. This skin effect was explained by the properties of electron flow and convinced me that the concept of electron flow was a reasonable proposition. And of course we also studied the stairstep storage of electrons in capacitors and the concept of magnetic inductance.
And I am interested in basic chemistry/physics considerations in the interest of better understanding the working principles of the universe, but I can't see any reason to abandon my electron concepts; particularly since I can see that the atom consists of a fairly simple combination of accumulated matter, combined with the ability to interact with small amounts of matter (or energy?) for energy distribution and other thermodynamic purposes. WFPMWFPM (talk) 16:10, 6 September 2008 (UTC)

Here is a reason to abandon the electron concept: there is no linear relationship between current and voltage. This is only what you are measuring with your voltmeter. If you are a good electrician, you do not use a digital voltmeter. There are not two but three properties of the voltage source:

• 1) current draw
• 2) current
• 3) voltage

To measure the current draw, it is the speed that the voltmeter goes to its spot. Current draw is an accelerative force. Here is my way of looking at it. The Bernoulli equation says that the three forces are pressure + gravity + change in acceleration. The three currents are displacement + conduction + convection. So here is my way of looking at it:

• Electricity | Fluid flow
• Displacement | Accelerative force
• Conduction | Pressure
• Convection | Gravity

Tell me what you think about that. That would mean you need a fluid to explain electricity. There was a fluid; in fact that is how vector calculus was invented, and that is how we came to know electricity. All of the discoveries that were made in electricity were made 100 years ago, when we viewed electricity as a fluid. That is the correct method of looking at it, and every good electrician and engineer knows this but does not talk about it, for fear of being viewed as a quack or someone who has an unconventional view, but that is the conventional view.
It is textbooks, Wikipedia, and the universities that are the quacks, and the practitioners are the real deal, and they are the ones that really came up with the theories, and their theories are being misrepresented.

I just bought and am reading a book at the local $1 book store. It is "Electric Universe" by David Bodanis (Crown Publishing 2005), and it traces the history of the electron from Joseph Henry in Albany, New York in 1826 through Volta and Henry and Morse and Bell and Edison to the present, and I think you should read it. And if you want to deny the existence of a small electrically energized particle in the face of that evidence, why, be my guest. I'm more interested in its function in the accumulation processes of the atom, where I think it is necessary to explain the energy distribution properties of the atom, which are needed to accumulate nucleons and achieve dynamic stability, and I'm having a hard time rationalizing that. WFPMWFPM (talk) 12:00, 12 September 2008 (UTC)

You're having a hard time rationalizing it because it is not real. I am not denying the existence of small energized particles, I am denying the existence of electrons. I am also saying that you are not a scientist for accepting them; it is against the scientific method. There is no observation of electrons. PERIOD. You somehow come to the conclusion that because small energized particles might exist in some cases, that turns into the thing that explains ALL electricity, and they are electrons. That is a bold statement to make; you are acting as if I am the bold one. You should read the history of chemistry by I. Asimov. All of these so-called discoveries of the atom were made by shooting radiation into matter. Electricity is a radiative effect, PERIOD. It is called electromagnetic radiation. If you have a nuclear reactor, why are you just using that to heat steam, in these age-old inefficient turbines? There is an entire spectrum of radiation that is being omitted.
If you didn't have to think of electrons to explain antenna theory, you wouldn't be so bogged down as to put a really big antenna right near a nuclear reactor. Why do you need a model? Why not model reality with reality? Go flush your toilet; there are not two but three curvatures. The first is going straight down. That is the gaussage flux, what you measure with your voltmeter. The next is going around the toilet; as if there were no toilet, the water would spill everywhere. That's what you mistakenly call the current. The next is the water curving inside the donut hole. That is what you are missing, and that is what proves there is no linear relationship between voltage and current, because there are 3 properties. But this thing you are missing is called the current draw, and it's what you measure when you use an amprobe. There is current, voltage, and current draw, but all of these terms need to be re-evaluated.

I've got over 200 books by Dr. Asimov and I think he's one of the greatest explainers. But I still believe in the existence of a real physical electron; but not necessarily in the electrostatic charge concept, which I'm still trying to rationalize. WFPMWFPM (talk) 10:06, 13 September 2008 (UTC) In science there are only facts and opinions. And when Dr. Asimov stated that when you electrically charge an electroscope, the leaves separate, he was 1: stating a fact (that the leaves separate), and 2: stating an opinion (that the leaves repel each other).
And when it comes to Rutherford's alpha deflection experiments, I just can't make myself believe that an impinging charged alpha particle approaching at c/10 velocity can be stopped in its tracks at a distance of 1.7 fermis from the nucleus, and then accelerated directly back in the same direction it came from. WFPMWFPM (talk) 10:23, 13 September 2008 (UTC)

I was just looking in my copy of the Encyclopaedia Britannica (9th edition) and I notice that they have a 102-page article on Electricity, and there's no mention of "electron" in the index, and I don't think there's any in the article. So that ought to be a good reference for you. WFPMWFPM (talk) 21:11, 13 September 2008 (UTC)

What do you think of the idea of forgetting about electrons, and putting a really big antenna right in the cooling bay of a nuclear reactor, and absorbing "electricity" right from the radiation? This would not make sense if you think about atoms and electrons, but all practical knowledge says it would work.

I also have a copy of the 11th edition of the EB, in which Ernest Rutherford wrote an article about "Radioactivity" saying, among other things, that beta rays have been shown to be "negatively charged particles projected with a velocity approaching that of light and having the same small mass as the electrons set free in a vacuum tube." WFPMWFPM (talk) 23:43, 13 September 2008 (UTC)WFPMWFPM (talk) 23:47, 13 September 2008 (UTC)

My dream is to be able to do two things: 1, move from uranium to thorium fueled nuclear power reactors for electrical energy generation; and 2, figure out some way to modify the lithium-6-deuterium fission process so that it can only be used for boring tunnels and stuff like that. I think that if we don't blow ourselves up, the problem with fresh water management is bigger than the electric power generation problem. WFPMWFPM (talk) 00:05, 14 September 2008 (UTC)

Why do you even need a model? Is that a part of the scientific method? You already have the differential.
Why do you have to let Maxwell interpret the differential for you, when his theory is not even accepted anymore? The theory that is proposed now is not even logically coherent. Is this your science or theirs? If it is theirs, then why don't they just say you have to do whatever you're told, and you can't think at all? If it is yours, then why are you not at liberty to accept this or not, and have to memorize this on a test? Why is this presented as if they interpreted the scientific method (which they have not) and you have to accept it as if they have? Isn't this your scientific method? I got ripped off 3 times:

• 1) Math is separate from physics, so you can't interpret it as fluid flow.
• 2) Physics is first taught as low-level algebra bastardizations of Newton, then integral calculus bastardizations of Maxwell's-Newton theory; then only the graduate students who took vector physics understand any of it, and they don't tell you.
• 3) Circuits is taught without physics.

This is terrible. This is brainwashing. And there is no electron. If you have a 6-volt motorcycle battery, is it really less than a 9-volt Duracell? I did a canned "experiment" that measured the amount of electrons using the voltage from a battery. So there are fewer electrons leaving the 9-volt Duracell than the 6-volt motorcycle? —Preceding unsigned comment added by 60.234.30.229 (talk) 01:26, 19 September 2008 (UTC)

Seeing as how this is my talk page, I'll try to answer your rather off-topic contribution. First, math is separate from physics, so I try to keep the concepts separate. And thus if I can develop a reasonable physical (or chemical) concept about a real physical phenomenon, I can then use agreed-on concepts of math to determine and communicate the information to others as being one of two sets of types of information (not necessarily mutually exclusive).
And to determine about electric current flow, I need a unit of electric current, involving an agreed quantity of same, and that unit is presently defined as an electron. And to further discuss electricity I need defined units of electrical power (volt-amperes or watts) and energy (joules). Now these units may be arbitrary with respect to certain modern-day theories about matter and/or energy. But it is axiomatic that if you're going to play a game successfully, you've got to learn how to play by existing rules, unless maybe you're infallible in foretelling the future, which nobody has done yet. WFPMWFPM (talk) 22:15, 20 September 2008 (UTC)

Again you keep saying it. Let me say this again: V=IR is NOT a law of physics. It never was. It is not in Maxwell's equations. It is not in electrodynamics. It is a statistic on a meter that isn't physically measuring voltage or current. And voltage is useful on a big sphere, or big capacitor, where you have a stored charge. IT IS NOT USEFUL IN CIRCUIT ANALYSIS. When you make a circuit you have THREE CURRENTS. That's right, three. The voltmeter is only measuring the relationship of one of the currents to another one. So your terminology cannot encompass the third current. And I am talking out of any electrodynamics textbook: there are three currents. So never say again that Ohm's law is a property of physics. It never was, and never will be. —Preceding unsigned comment added by 202.89.32.166 (talk) 00:31, 3 October 2008 (UTC)

I understand that Maxwell originally thought that electricity was the fluid that provided the 100 percent efficiency of motion of the components of the atom. But you're talking about three different "currents" while denying the existence of the electron, which is the only entity that I can conceive of that is energetically "loose" enough in the atom to transfer energy down the electric voltage gradient.
Actually, I like the BCS theory of double-electron entities as current carriers, because it makes sense in the theories about "superconductivity" phenomena. But some entity has to be carrying the energy (current), and I'll stick with the electron existence theory and wish you luck. WFPMWFPM (talk) 00:50, 3 October 2008 (UTC)

• 1) Minimize heat loss in the battery, by changing to salt water
• 2) Make an array of cathode/anode cells, and wire them in PARALLEL, maximizing output without increasing galvanometer "voltage" or circular gaussage as Heaviside coins it (Faraday said that "voltage" actually makes your batteries go bad)
• 3) Make it accessible to putting probes in this battery, as well as being able to CLEAN IT. Then this will never die, because the salt water can be replaced.
• All the "chemists" who made batteries for the past 300 years did not use salt water, because it would mean we don't need "chemistry" to get electrical power. That is why no one has thought to do this. —Preceding unsigned comment added by 60.234.55.9 (talk) 23:12, 1 November 2008 (UTC)
• The company I worked for made a silver-chloride/magnesium electrochemical cell which would power a flashlight bulb when dropped into water (or beer), plus for other water-activated applications. WFPMWFPM (talk) 23:21, 1 November 2008 (UTC)
• Electrochemical batteries for space applications have been made to be pretty reliable, as indicated by some of the NASA space programs. However they do have their failure modes, as don't we all. So the best battery systems usually involve a capability for monitoring and maintenance, as in the submarine battery technology business.
WFPMWFPM (talk) 23:34, 1 November 2008 (UTC)

• What you need to do is to figure out a system of accumulating energy that can be extracted from a given square area of the ocean, and then design a platform that will stay located in the ocean's environment and that will extract the solar, wind, physical motion and any other source of energy and then change it into a store of hydrogen gas, which can then be collected and used on shore as a hydrogen-powered energy source. That sounds possible, and after all, 70+ percent of the planet is covered by the ocean. WFPMWFPM (talk) 23:47, 1 November 2008 (UTC)

What is interesting is that most lead-acid batteries produce gas that is not used. I was thinking of making a superarray of cells, wiring them all in parallel, which minimizes corrosion, then also collecting the gas. That would be really good. You could run it and get electrical power, plus get the gas. Moving fluids tend to increase corrosion and voltage, but is that really what we want? Maybe we want to decrease the voltage, and increase the array, so that we have a strong Irish current. —Preceding unsigned comment added by 60.234.55.12 (talk) 09:46, 2 November 2008 (UTC)

• An overcharged lead-acid battery will certainly create hydrogen gas. But you need to utilize waste energy, like wind and solar energy. I was thinking about what a nation could do to utilize waste energy, like the USA in the Caribbean or Australia in the Indian Ocean. There's a book called "The Hydrogen Economy" that you ought to read for its implications about this. WFPMWFPM (talk) 15:19, 2 November 2008 (UTC)
• 1) Faraday's Law: The voltage (reading on the meter) is always proportional to the amount of decay in a battery
• 2) The EMF is proportional not to the voltage but to the current: as in the current that will kill you if you touch it, not any current that you measure with a meter.
• 3) These two things state that batteries wired in parallel will produce more power, but actually not have as much decay.
• 4) This needs to be incorporated into a real physical model to incorporate what charge really is.

You're not paying attention. They say I have attention deficit disorder, but you're the one that has got it. Faraday's law of electrolysis is that the decay in a battery is always proportional to the voltage. But the EMF is not the voltage; it is what is known as the draw current, and it has nothing to do with the load. I don't know how else to say it, dammit. EMF is not VOLTAGE. EMF is current, and it's got nothing to do with a voltmeter, galvanometer, or ammeter; it's physically measured by what you're running (or if it kills you or not). It can also be physically measured by looking inside your voltage source. This needs to be incorporated into the atom. I can't say it any different. I am an EE too. Most EEs these days cannot even use an oscilloscope; they don't even know circuits; they don't even know the difference between digital and analog. This is totally destructive, both morally, physically and spiritually, to have this level of brainwashing, and I am calling it what it is. 60.234.28.20 (talk) 06:04, 5 November 2008 (UTC)

I guess I'm not good at scientific jokes. That's because in science I try to keep my ideas about facts and agreed opinions as rational as I can, and as organized for memory purposes as I can. So when I wake up each morning, and notice that the world continues to exist, I don't have to spend too much time getting its rules of behavior in my mind so that I can function. I am amazed at your grasp of terminology and am trying to understand your ideas about electricity and electrons. But if you're playing games with me, I'll probably have to look for information elsewhere. If you have a checkerboard, why don't you make nuclear models? Just work on what the nucleus of 4Be9 must be like if it obeys the magnet accumulation protocol.
With cylindrical neodymium magnets you learn even more. WFPMWFPM (talk) 16:48, 5 November 2008 (UTC)

The Irish current is what I have been trying to explain to you for about 4 months now. Close your eyes, and imagine what you thought electricity was before anyone ever told you anything. Just forget everything anyone has ever told you. Forget every word relating to electricity that anyone has ever said. Can you do that? You are knocked out, on ether gas. It knocked you out. You forgot everything about what anyone ever said about electrons, current, voltage, and everything. Now there is an Irish leprechaun. He is going to make an Irish current for you. He gets lots of glass jars, and puts salt water in them. He gets zinc and copper metal. You don't know what zinc or copper is; you just see a grey metal tinted slightly green and an Irish reddish metal. He has about 100 glass jars, and puts 1 reddish metal in each jar and 1 greenish metal in each jar. Then he connects all the reddish metals and all the greenish metals. He takes a meter and measures a reading; it is really really low. He says that the reading is the total amount of corrosion in all the jars combined. He says that it is some quack who said that, Faraday. Then he connects it to a motor and boom, it's really really powerful. He says that is the Irish current. The Irish current is completely independent of the amount of corrosion you have. Just that if you make your Irish jars a certain way, the Irish current will have a statistical relationship to the amount of corrosion. But that is a statistic, and if you do it the way the leprechaun does it, you get so much more Irish current, and you have very little corrosion. OK, now come back out of your ether sleep. We could now run our cars on hydrogen, extracted by the Irish current, and we could run all our home appliances on the Irish current. Now that you know what the Irish current is, make a model of the atom that incorporates it!!!! Damn it.
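(An editorial aside on the jars-in-parallel story above: standard circuit theory already accounts for a low meter reading alongside a hard-running motor, without any new kind of current. N identical cells in parallel share one EMF, but the bank's internal resistance drops as 1/N, so the deliverable current climbs. A minimal sketch; the numbers here are illustrative assumptions for a rough copper/zinc cell, not measurements from this discussion:)

```python
def parallel_bank_current(n_cells, emf=1.1, r_internal=0.5, r_load=0.1):
    """Current delivered to a load by n identical cells wired in parallel.

    Standard circuit theory: parallel cells share the same EMF, while the
    bank's internal resistance falls to r_internal / n_cells, so the
    current into a given load rises with the number of cells.  EMF and
    resistance values are assumed, illustrative numbers.
    """
    r_bank = r_internal / n_cells  # internal resistances combine in parallel
    return emf / (r_load + r_bank)  # Ohm's law for the whole loop

for n in (1, 10, 100):
    amps = parallel_bank_current(n)
    print(f"{n:3d} cells: {amps:5.2f} A into a 0.1-ohm load")
```

(The open-circuit voltage a meter sees stays near one cell's EMF no matter how many jars are connected, which matches the "really really low" reading; the extra punch at the motor comes entirely from the reduced internal resistance of the parallel bank.)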
## Irish current I start with water (2H + 1 8O16), and salt (1 11Na23 + 1 17Cl) + 29Cu and 30Zn. And in each of 100 glass jars there is an interconnected copper/zinc electrochemical couple within the salt water solution, plus an external connection to first a (presumably) voltmeter to get a low voltage, and then to a motor? (or maybe a generator?). Now if to a generator, you would no doubt start electrolyzing the salt water and probably getting gaseous hydrogen plus oxygen. But if you could make some atomic oxygen, you might be able to create some nitrous oxide (N2O), which is an anesthetic gas that could make you high. Maybe you could ask a chemist about that. And I won't get into the herb part because you said you were kidding about that. WFPMWFPM (talk) 02:38, 7 November 2008 (UTC) Electrical phenomena will do that for you. I was once near a lightning strike and could smell the ozone resulting from the atomic oxygen creation. WFPM60.234.55.59 (talk) 08:13, 8 November 2008 (UTC) The Irish current arises because electricity is viewed as a two-phase current and voltage scale, when really it's a triple phase. Maybe we actually need tables to actually understand this. The triple burner, in the atom, is a nuclear reactor. It is in a three-phase region; it is gelatinous, but moving. That is the first phase. The second phase is that as it's moving, it creates a drag, which generates the so-called electric effect. The third phase is heat. That is the displacement. It goes: motion => drag => displacement. Electricity is the same. There is a three phase, and a triple point. You could exasperate the triple point by putting much of the phases pushing against it, or you could just make tables, or even better yet a map of it. If you just knew where the triple burner was inside the nucleus of the atom, and how it operated, you would already have a nuclear reactor; it is the atom itself! AC current uses the displacement effect, but there is also motion and drag.
You want to get the most drag (drag is actually what is making your electrical things run, not voltage), and the motion is the loss (the voltage). The Irish current is the phantom current, because you are only looking at two phases (current and voltage). There are three phases, just like solid, liquid and gas. Just having two words for the three phases (current and voltage) is like having two words for the three phases (solid, liquid and gas). So it's like, let's pretend there is no word for solid. So sometimes when comparing solid to liquid, I call a solid a gas, but when I compare solid to gas, I call solid a liquid. And then there is no way to compare all three. So, I call the three phases of electricity: current, voltage, and Irish current. A suggestion to home electricity users: don't run many things at the same time, and wire your lights in series. Make it so you have just enough juice to make it go, but don't use more juice. Put as many power generators (batteries and motors) in parallel, and as many things you use (lights, fan, aircon) in series. Put your AC-DC converters in series. Make use of the Irish current. Do not run many things at the same time, as that will make your power bill go up. Wire your lighting in series. Parallel drains more from the gauge that your power company bills you. (But if you are the power company, you lose less if you supply things in parallel.) —Preceding unsigned comment added by WaveEtherSniffer (talkcontribs) 23:04, 18 November 2008 (UTC) ## Re: Isotopes of Samarium Replied here; please continue any discussion at Talk:Samarium, for the benefit of any other interested parties. Thanks, Hqb (talk) 15:11, 7 September 2008 (UTC) ## Nuclear fusion Hi - I've continued the discussion at this page if you want to continue it there. Regards, --Oscarthecat (talk) 20:32, 1 October 2008 (UTC) ## Nuclear Power Hi, and thanks for contacting me on my talk page. I've responded there.
It's clear from the above two entries that you're attempting to make similar edits in other nuclear related articles and running into similar difficulties. You may wish to slow down on making edits while you work to sort out why these have not been accepted. Hope the wikipedia guideline links and continuing discussion in the nuclear power article are helpful. Cheers. Mishlai (talk) 02:31, 2 October 2008 (UTC) ## Big Bang As I say, I am no expert on general relativity, but given the mysterious acceleration (which we seem to observe, and are thus stuck with for the time being), these weird things can happen. One situation I do sort of comprehend is in special relativity, when a bank robber takes off on his super escape rocket with the loot, accelerating at a constant rate a in his own accelerating rocket's frame. Then if the cops don't start off after him pretty quick (depending on a, I think it is about a year if a = 1 gravity), they can never catch him (as long as he keeps accelerating), no matter how much faster their super-duper rockets are. He goes through a kind of event horizon. Replace the rocket with a cosmic acceleration, due to who-knows-what, and you seem to have a similar situation. The product of his "proper acceleration" (measured by his accelerometer, in the ship frame) and his proper (ship clock) time is known as the "rapidity", u. At 1 g for 1 year, it approximately equals c, and it is simply additive, so it can obviously go on increasing without limit, if he has enough fuel. His velocity, seen by an un-accelerated observer, increases asymptotically to c, and v = c*tanh(u) I believe it is. I think essentially the same idea is what allows inflation to blow up the universe to huge size early in the big bang, if I understand that, which is doubtful. Rapidity is simply additive, though velocity is not. That may be part of an answer to your question.
Cheers, Wwheaton (talk) 01:26, 15 October 2008 (UTC) = Yeah I get it, except that I don't get it. My concept of the paradox didn't even involve the "a" of the initial mass but merely the problem of getting his v up to c by throwing things at him with velocity c. I think that's a "thrust" problem that you're familiar with. And thanks. WFPMWFPM (talk) 07:16, 15 October 2008 (UTC) =But I've been thinking about this, and if you combine Einstein's E=McSquared equation with the SR mass expansion equation, you get that Msubv increases to infinity as v increases to c. This kind of casts doubts on the escapees' ability to achieve c velocity by 1 hour of constant acceleration, as well as the theory that the universe consists of matter rushing away from us at the velocity of light. Their best plan of escape would therefore be to initially establish an escape velocity v and then wait and not make any detectable emissions until they got far enough away to get lost in the general expansion characteristic of the universe they lived in. Thus the moral of the story would be "you can hide but you can't run!". WFPMWFPM (talk) 11:45, 22 October 2008 (UTC).WFPMWFPM (talk) 11:54, 22 October 2008 (UTC) I can't answer your questions, because I am not sufficiently expert in general relativity (GR). The special relativistic "perfect escape" example I proposed above shows that there can be observers who can never communicate with each other under the rules of special relativity (SR), which does have a phenomenon very much like an event horizon, but in a flat spacetime. The bank robber never goes faster than light (so none of the infinities and problems with faster-than-light travel appear), but he can never be caught, no matter how much more powerful his pursuer's rocket may be.
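The rapidity arithmetic quoted above (u = a·τ/c, v = c·tanh(u), rapidities additive) is easy to check numerically. A small sketch, using the 1 g for 1 year figure from the discussion:

```python
import math

C = 299_792_458.0      # speed of light, m/s
G = 9.81               # 1 g proper acceleration, m/s^2
YEAR = 365.25 * 86400  # seconds in one year

# Rapidity accumulated by the thief: u = a * tau / c (tau = proper/ship time).
u_one_year = G * YEAR / C
print(u_one_year)      # ~1.03: about "c's worth" of integrated accelerometer reading

# His coordinate velocity, seen from the bank frame, stays below c:
v = math.tanh(u_one_year)  # as a fraction of c
print(v)                   # ~0.77 of c

# Rapidities add, velocities don't: two boosts of rapidity u each
# compose to tanh(2u), which is the relativistic velocity-addition formula.
u = 0.5
assert math.isclose(math.tanh(2 * u),
                    (2 * math.tanh(u)) / (1 + math.tanh(u) ** 2))
```

So "at 1 g for 1 year, it approximately equals c" is a statement about the rapidity u, while the velocity itself is still only about three-quarters of c.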
In GR the spacetime is curved, and although I think I comprehend the principle, I do not really know how to talk about the relative velocities of two observers separated by a curved region of spacetime, or how the flat space concepts (like kinetic energy and especially its associated mass-energy) carry over into the GR regime. To understand all this I would either have to re-invent Riemannian geometry (which would be stupid to even attempt) or else plow through the texts (which is obviously the thing to do, but would take more time than I have available, what with work and all). Alas!! I do encourage you to push forward as far as you can. The Wiki articles on GR & SR probably have the answers, at least implicitly, but they may not be very easy to find or disentangle given our entropy problems here. Good luck! -- Bill PS: Note the Escapee never achieves velocity v greater than c, watched from an unaccelerated frame, at rest w/r the Bank, say. If he has an inertial guidance unit with an accelerometer on his ship, and integrates his indicated ship a*dt for about a year (not an hour) with a = 1g, he will get a reading of c, but that is in his local frame which is moving very fast, though less than c. That reading, u, is the rapidity. Since he can keep on accelerating as long as he likes, u can keep on growing without limit (in this fantasy world of SR physics without economic or engineering constraints). Wwheaton (talk) 19:00, 22 October 2008 (UTC) No, I think I've got him, because his space vehicle, which I designed, started off at a low velocity and acceleration, and I know that its thrust fuel velocity is such that it will never get his vehicle's velocity up to the speed of light. So when I plot our courses on a single dimensional Minkowski spacetime chart, after 1 year or whenever, I see that we're even in time and he's ahead of me in space, but with my new vehicle moving at the velocity of light I am sure to catch up.
WFPMWFPM (talk) 09:57, 23 October 2008 (UTC) No, I take that back. He will be behind me (in time) and ahead of me (in space) when I take off. But later (to him), when I catch him, we will all be together in the same time and space, which is when I took off. Is that clear? WFPMWFPM (talk) 11:49, 24 October 2008 (UTC) Nope, if he accelerates at constant a in his own instantaneously at rest frame (which he can always do, by the principle of relativity, and my [absurd in practice, of course] hypothesis that he has a super rocket), then his worldline trajectory in the Bank Frame will be a hyperbola asymptotic to a 45 degree straight line (in units where c = 1 — see Minkowski diagram). He will never cross that asymptote if he keeps accelerating, so if the Cop at the bank waits too long to start after him, he can never catch the Thief without exceeding c himself. Wwheaton (talk) 16:47, 24 October 2008 (UTC) ==I'm not worrying about his super rocket, it's your math I'm worrying about. When I design his rocket, I design it with propulsion fuel velocity = c, which is the best I can do. And he can use as much of the fuel mass as he needs to accelerate his vehicle up to the desired velocity, which in this case is c. And after he uses half his fuel, his velocity is not quite c, but his mass is infinite, or not quite infinite. So how does he get it on up to c? Maybe there is a converging series in there somewhere, but I don't see it. WFPMWFPM (talk) 18:11, 24 October 2008 (UTC) ==I understand about the hyperbola, but he starts from at rest (going straight up); then his chart trajectory bends toward the asymptote, but I don't see how it ever gets there, when the c velocity implies that his ship has infinite mass. WFPMWFPM (talk) 18:39, 24 October 2008 (UTC) ==You can argue that my new rocket wouldn't be able to do better, but you didn't argue that, and maybe I'll just shoot him down with a ray gun or something.
But at least I can see him, in my one dimensional universe of course.WFPMWFPM (talk) 19:09, 24 October 2008 (UTC) My Thief never gets to c of course, but his rapidity u obeys the non-relativistic rocket equation, because he is always at rest in the instantaneous rocket frame. The mass of fuel required (for a given payload mass) is exponential in the ratio of u to the motor exhaust velocity, which kills it in practice, but not in principle. This is worked out in Misner, Thorne, & Wheeler's text, somewhere around p 100 to 110 I think. Wwheaton (talk) 22:48, 24 October 2008 (UTC) ==Non-relativistic rocket equation!? Now I feel like the guys with the Universe Inflation Theory. It's too bad the atoms don't know about that theory, to solve their radiation problems. Or what about SLAC problems with accelerator energy requirements. But maybe it ties in with a theory I had that the radiation velocity was always c, but the direction was always at right angles to the direction of motion, kind of like some of Newton's descriptive drawings. Oh yes, I asked a talk question about the Michelson-Morley experiment. Maybe you could help me with that. And Thanks!! WFPM (talk) 03:26, 25 October 2008 (UTC) ## Wiki formats None of your edits, at least in the past week, has avoided damaging display of the page edited. Please do not use dozens of non-breaking spaces followed by := again. If you do, it's likely your edit will be reverted without comment. — Arthur Rubin (talk) 03:29, 10 November 2008 (UTC) I don't see it here, but I have used non-breaking spaces to try to get my contribution on a new line and not be an extension of a previous contribution. And I apologize and would appreciate knowing how to start my contribution on a new line. WFPM (talk) 04:18, 10 November 2008 (UTC) Simply use a colon or more than one to indent your comment in reply (as I did to your response above). Or to simply start a new "paragraph", hit enter prior to adding your comment. Hope this helps.
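The rocket-equation remark above resolves the infinite-mass worry from the earlier exchange: for a rocket always at rest in its instantaneous frame, the classical Tsiolkovsky equation holds with rapidity in place of velocity, u = v_exhaust·ln(m0/m1). A sketch (with c = 1), assuming the best case of light-speed exhaust as WFPM stipulates:

```python
import math

# Relativistic rocket: the rapidity gained obeys the *classical* rocket
# equation, u = v_exhaust * ln(m_initial / m_final), in units where c = 1.

def final_velocity(mass_ratio: float, v_exhaust: float = 1.0) -> float:
    """Coordinate velocity (fraction of c) after burning m0 down to m0/mass_ratio."""
    u = v_exhaust * math.log(mass_ratio)  # rapidity: the additive quantity
    return math.tanh(u)                   # always below 1 for finite fuel

# Burning half the ship's mass (mass_ratio = 2) with photon exhaust:
print(final_velocity(2.0))  # exactly 0.6 of c -- no infinite-mass paradox

# Each doubling of the mass ratio adds the same rapidity (ln 2),
# so v creeps toward c but never reaches it:
for ratio in (2, 4, 8, 16):
    print(ratio, final_velocity(ratio))
```

There is no converging series that reaches c: the rapidity grows without bound as the mass ratio grows, but tanh(u) stays strictly below 1.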
Vsmith (talk) 04:43, 10 November 2008 (UTC) Thank you and hello. Have you asked any of your students what they thought of my models? WFPM (talk) 04:54, 10 November 2008 (UTC) Actually, to be precise, the usual talk page convention is to start a new line with one more colon than the preceding line. Equal signs at the beginning of a line are only used for headers. I apologize for assuming that you were given a welcome message which includes the appropriate indentation conventions. WP:TALK is a good start, but if you are confused, you can add {{helpme}} on this page, followed by a detailed question, and someone will come by to answer. — Arthur Rubin (talk) 07:00, 10 November 2008 (UTC) ## Abolish electrons Every man at age 13 should get: • 1) a woman • 2) an ass • 3) a horse We don't need • 1) cell phones that cause brain cancer • 2) school where we are told electrons exist • 3) cars, that pollute • 4) a stock market that crashes • 5) negative energy • 6) laws of conservation of mass energy • 7) to be told what to do other than get food to feed your woman, horse and ass We can pump water, using pressure that is already in the ground. That can give good drinking water. We don't need nuclear power. If you understood that in the ground, it's the current that forces the water, and as long as you don't drain the pressure, you can give enough water for everyone by just channeling the currents. Then we don't need to worry about Kim Jong Il assholes who want to blow up the world, and we can have enough of the crap we need, just from nothing, and all of this is total bull. School is prison, and the work force is made up of a bunch of scared children procrastinators, who couldn't do in a week what a farmer could do in a day. Damn it, abolish school, abolish electrons. Stop telling people that you know, when you don't, and leave me alone; I don't need your phony electron bullshit. ## Samarium If you want you are welcome to start working on the article.
Nergaal (talk) 21:05, 12 November 2008 (UTC) I don't have anything against your present article on Samarium, except that I'm trying to pursue any unexplained irregularity in data regarding the stability of isotopes, and particularly EE isotopes, and I see that you have joined the crowd in reporting ambiguously about the stability of EE62Sm146. You call it a "synthetic" element, which I think means that it can't be created by nature, or maybe doesn't exist in nature, evidently because nobody has succeeded in finding any; and say further that all of any synthetically created EE62Sm146 isotopes will be long-half-life alpha particle emitters. And I'm still trying to find out how anybody can say categorically that a long-half-lived element, like for example OE83Bi209, is 100 percent an alpha particle emitter (with a 10E19 year half-life). But in the case of mass number 146, scientists seem to like EE60Nd146 better than EE62Sm146 even though the Nubase stability data shows a lower Q value for EE62Sm146. And I'd like to help you, but in articles I'm like Audie Murphy and I feel like a hen at a hawk's convention.WFPM (talk) 23:10, 12 November 2008 (UTC) ## Troll feeding I saw that IP pushing for electrons as made-up fairytales on the electron page. May I suggest that we collectively ignore him/her?Headbomb {ταλκκοντριβςWP Physics} 13:36, 22 November 2008 (UTC) Yes, except they are so persistent and consistent with this "Irish Current" concept that I thought there might be some legitimate lore attached to it; I'm not enough of a chemist to figure it out.WFPM (talk) 16:06, 22 November 2008 (UTC) The real fairy tale is that about electrons. You keep the fairy tale going by saying that people who try to stop the fairy tale, and get people to think, are big bad angry trolls. Anyway, if you know about the three phases of electricity, you already know that current and voltage only explain two of the phases, while the third phase is the Irish current.
This should be applied to chemistry, and could simplify chemical bonding, get rid of those damn pi electrons. —Preceding unsigned comment added by 203.167.186.130 (talk) 08:38, 25 November 2008 (UTC) Here is a good fantasy: The substance which matter is on is neither solid, liquid nor gas. You could think of it as gelatinous, or a plasma, or other things that you may know which are not solid, liquid or gas. The atom is not just a nuclear reactor, it is a nuclear power plant; it needs no pipes, because the substance, which was formerly known as ether, is very adhesive, and it takes so much energy to get it going that once it is going, it almost has a pipe of its own. The three currents make up the three phases of matter: liquid, being what we know as the electron or corpuscular motion itself; solids being hard, as taking part and form from the drag force of the liquid; and gases, being a displacement current or AC current, in which they expand. The framework of it, as it is different, determines what atom it is, and the majority of current perpetuating inside it determines the phase. It is as much hydraulic as it is electric.... OK, now that you know this fantasy is not true, you come up with something else, but then you seem like just a fantasy too. —Preceding unsigned comment added by 202.180.114.66 (talk) 04:52, 26 November 2008 (UTC) ## Spelling At talk:number line, you wrote: How come we have a big long history of the Pythagorean theorum and none on the Number line. The Number line must have a reportable history, (I learned about it about 40 years ago), and use it and the Pythagorean theorum to visualize how to find the square root of any integer number. WFPM (talk) 14:25, 4 November 2008 (UTC) Please: The correct spelling is "theorem"; the letter "u" is not there. Michael Hardy (talk) 22:16, 7 February 2009 (UTC) I appreciate that information about the spelling, and how about that: the computer is smarter in spelling than I am.
Now we come back to the question about the history of the number line. Or did I mispell that too. Asimov said you cant have a number line without a concept of zero. Is he correct on that?WFPM (talk) 15:35, 9 February 2009 (UTC) ## Cube diagonal and spacetime Am trying to develop a physical concept of a 4 dimensional continuum by analysing the properties of the diagonal of a cube as a 4th dimension which is not orthogonal but equal angled from the other 3 dimensions (let's let it be the time dimension) and am interested in ideas re this subject matter. You have also left me a note on my Talk regarding this ambition. I don't know how you're going to get 4 dimensions into 3, but the diagonal of a cube idea has a pleasing symmetry. For a three-dimensional spacetime you might consider a version of the hyperboloid model which uses xyz = 1 to establish a standard moment future from the corner of the cube representing the origin. Now the three edges of the cube extended are asymptotic to the hyperbola, but the three of them only span a circle of directions. The kinematic geometry of velocities is expressed by Lobachevsky geometry for points on xyz = 1. Rgdboer (talk) 22:17, 11 August 2009 (UTC) Well thank you for replying and what I've been thinking about is a 3 dimensional Michelson-Morley experiment. After all, if we can divide a coherent light ray into 2 orthogonal light rays, I see no reason why we couldn't divide it into 3 orthogonal rays. So if the rays originated at the corner of a cube and if mirrors were placed at the 3 target corners we have the basis for a 3 dimensional experiment. And where would the reflected rays converge? Well I figured out that it would be at the 2/3 distance along the diagonal through the cube, because that's the point where the equal length return distances coincide. So if I had a stationary cube with the light rays on, I could go to the 2/3 diagonal distance and determine that there was no coherence distortion of the light rays. 
So far so good. Now the next question is to assume that the cube was moving with a uniform velocity in some direction. And that's where I bog down, because I haven't even been able to calculate what a translation in some direction should do to the light rays, but I see that the mathematics of that calculation would be different than that for the 2 dimensional experiment. And I would like to know if the appropriate Lorentz transformation would correct the distortion in the 3 dimensional case like it did in the 2 dimensional one. So I'm in over my head and looking for some mathematician to take the problem in hand and tell me what the answer is. I, of course, suspect that there will be no coherence distortion in the 3 dimensional case as there was not in the 2 dimensional case. But I would think that the experiment would be more convincing as a 3 dimensional test result than it is as a 2 dimensional one. So what do you think of such an idea? I'm kind of sorry I had it.WFPM (talk) 05:42, 12 August 2009 (UTC) Also while considering this matter, it dawned on me that if I plotted the location of the rays within a moving plane perpendicular to the diagonal, I would have changed the problem from a 4 dimensional problem to a 3 dimensional problem; whose complexity was that of properly locating the 3 dimensional volume locations within the 2 dimensional "flatland" area of the moving plane. I can see that the possible moving locations of the orthogonal 3 dimensional light rays of the 3 dimensional Michelson Morley experiment would fit into an expanding and then contracting circle in the moving plane, but I don't know how to calculate the distortion of the circle caused by a translation of the cube. So there you have it.WFPM (talk) 13:31, 12 August 2009 (UTC)It is also to be noted that the Lorentz contraction equations only apply to one direction of motion, and that direction is the one eliminated when the location of the points is transferred to the plane. 
So the motion of the cube in the diagonal direction does not distort the circle but merely changes its rate of expansion. And I don't see how that would affect the results of the 3 dimensional experiment.WFPM (talk) 23:29, 14 August 2009 (UTC) ## Re: removal of Asimov's book on nitrogen I'm not sure. I reverted you instantly thinking of the old age of the book (biology, and some other fields did advance much since then), poor reference formatting (though it probably doesn't have ISBN) and that the book is hardly scientific and hardly accessible. I respect Asimov as a writer and shouldn't judge a book I haven't read. Feel free to re-add if you're sure about the above. Regards. Materialscientist (talk) 22:15, 6 September 2009 (UTC) Thanks for replying. But I've been reading Asimov practically all my life and have about 200 of his books, and am sorry he's not still around to explain things the way he used to do. And too many of our references are like something I would write and therefore not very interesting as compared to the way he was able to do it. So I will ref him when I think he provides specific and interesting information re a subject matter.WFPM (talk) 13:31, 7 September 2009 (UTC) ## Gnats, and other ponderings Hello, WFPM. You have new messages at Greg L's talk page. You can remove this notice at any time by removing the {{Talkback}} or {{Tb}} template. ## Whirlpool Galaxy I may be as dense as a lead oxide glass lens, but I don't understand your point about the Whirlpool Galaxy at Talk:Photon. I imagine that an optics textbook would suggest a suitable calculation to determine the angular resolution of the telescopes used to gather the beautiful images that we both admire. I believe that the key is the ratio of the diameter of the objective to the wavelength of light. Pilot waves are pretty much a discredited, or unpopular, interpretation of quantum mechanics. --Hroðulf (or Hrothulf) (Talk) 15:15, 19 April 2010 (UTC) I'll buy that!!
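The resolution point above, that the key is the ratio of the objective's diameter to the wavelength, is the Rayleigh criterion. A quick sketch, using purely illustrative numbers (550 nm green light, a 2.4 m Hubble-class mirror; neither figure comes from the discussion):

```python
import math

def rayleigh_limit_rad(wavelength_m: float, aperture_m: float) -> float:
    """Smallest resolvable angle (radians) for a circular aperture,
    per the Rayleigh criterion: theta = 1.22 * lambda / D."""
    return 1.22 * wavelength_m / aperture_m

# Illustrative assumptions: 550 nm light, 2.4 m primary mirror.
theta = rayleigh_limit_rad(550e-9, 2.4)
arcsec = math.degrees(theta) * 3600
print(arcsec)  # roughly 0.06 arcseconds
```

Two stars (or two features of the Whirlpool Galaxy) closer together than this angle blur into one spot, however good the detector.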
But I'm still a hardhead about the idea of energy without tangible associated matter, and I keep battering it by trying to develop a materialistic concept of the photon. And I get defensive when somebody tries to create a hypothetical mathematical property to explain an observable natural phenomenon. And although the panorama of the starlight is just as good an arguing point as the Whirlpool galaxy image, the variation of light intensity in the details is such as to better explain the necessity for a pretty straight, or at least consistently bent, line of direction of propagation in order for it to be registered on the image.WFPM (talk) 16:15, 19 April 2010 (UTC) Isn't that just what Huygens principle says? The wavelets superpose to create a journey by the path of shortest time: straight or consistently bent. (Quantum electrodynamics comes up with a similar result, afaict.) A straight ray propagates through megaparsecs of deep space, but a wavelike calculation is needed to predict diffraction in the body of the telescope that stops the detector from resolving two stars that subtend a small angle. I suspect it is just convention that classifies light as "not matter". I read that string theory, as yet unverified but more popular by the day, says a photon is a string vibration, just as tangible as any other particle, so it appears that deep theory suggests that energy without 'stuff' is just as weird as you suggest. Meanwhile, the wavelike nature of matter seems to be unavoidable. I think I am starting to understand your original point, though as you can see, I am limited to a high school knowledge of optics. --Hroðulf (or Hrothulf) (Talk) 20:05, 20 April 2010 (UTC) I didn't even know what optics was in High School! In college I was taught that the Huygens principle says that any position causing a bending of light could be treated as a new point of origin, and therefore a light beam could be bent around a corner, maybe at a reduced intensity.
In Talk:Photon I'm trying to develop a concept of a polarized particle and that you could use discrete (successive?) numbers of them to transfer energy. But the idea that light energy could locally modulate a wave front which could then be sent light-years away and still retain its details boggles my mind.WFPM (talk) 20:31, 20 April 2010 (UTC) Thinking about this, I will add that whereas I can conceive of a materialistic radiation particle and even calculate its associated mass, there's no way that I can dream up a background grid on which I locate various amounts of energetic matter as a means of providing visual detail information. I know there are programs that provide modulated grid information to Wikipedia, but I don't think that method of transmission is what nature uses to transfer light energy through space.WFPM (talk) 21:09, 20 April 2010 (UTC) Interesting, but Talk:Photon is not for developing concepts but to discuss improvements to the article. There are other wikis and physics forums on the internet for discussing your proposals for a new particle model. Optics is indeed mind boggling, but light (whether you model it as photons, rays or wavefronts) really does travel millions of light years to form an image on your retina or a CCD. For what it's worth, my high school physics teacher emphasised that rays from two distant sources reach your eye at different angles. However, until I do the mathematics, I have to take it on trust that all that is required for a Maxwellian EM wave to retain its detail is for the vacuum of space to be a non-dispersive medium. That is also shown in the center of the Whirlpool Galaxy image, thus illustrating your point. And I certainly agree with your high school teacher. So why the wavefronts? However, as you know, optics has recently become much more mind boggling than the question of retaining exquisite detail.
What is most disturbing about the modern model, to me, is the idea that the photon, or the space it inhabits, seems to be non-local. Quantum entanglement experiments, especially the weird quantum eraser, seem to suggest that a photon can be influenced by distant objects, even those a particle of light won't encounter for millions of years. If light knows where it is going before it gets there, my mind is well and truly boggled. --Hroðulf (or Hrothulf) (Talk) 09:42, 21 April 2010 (UTC) Let me refer you to what I just finished sending to User talk:Ldussan and see what you think about that. I'm too old to try to get involved in trying to describe in mathematical terms the things that occur in nature. I think I'll stick to Newton's first principle of philosophy, which is to only consider the simplest explanation that is adequate to explain the situation. Of course his second principle is wrong, so where are we?WFPM (talk) 10:53, 21 April 2010 (UTC) Now you're getting into concepts involving material teleportation (instant or otherwise). And I believe in the law of causality sufficiently to not want to violate the laws of permissible physical motion on an S versus T spacetime chart by a discontinuity in the motion line. So I can see instantaneous changes in velocity but not instantaneous changes in position.WFPM (talk) 22:22, 21 April 2010 (UTC) But it isn't theoretically impossible if you can manage the conservation of energy concept. But I agree with Newton that the simplest explanation is preferable, particularly if you don't have a concept of how it happens. And that keeps me from understanding the light interference wave paradox, unless it involves frequency doubling of some sort.WFPM (talk) 22:36, 21 April 2010 (UTC) I sympathize! Some other devices that give exquisite detail are electron and x-ray microscopes. Yet they too are limited in resolution by diffraction and interference!
(To get higher resolution images, you need more energetic electrons, with shorter wavelengths.) Again the mathematics is beyond me so far, but the images are astounding, and it is indeed a little surprising that numerous wavefronts propagate from the nanoworld to the detector without destroying all the detail. http://www-cxro.lbl.gov/BL612/index.php?content=research.html http://www.planetaryfolklore.com/2009/05/microcosmic.html Just food for thought. Hroðulf (or Hrothulf) (Talk) 09:23, 22 April 2010 (UTC) There you go with wavefronts again!!!! How about just particles? Meanwhile I'm mulling over a newspaper picture I have of a girl with her head halfway within a microwave-energy-containing volume of space. And the hair on that side is standing up but the other side is down, and I don't understand that. Do you?WFPM (talk) 10:14, 22 April 2010 (UTC) It must have something to do with electrostatics, and I remember the words "skin effect". Thank you for the links. How about just particles? Electrons and x-ray photons are indeed particles. I mention wavefronts because we don't (yet) have a consistent explanation for diffraction of particles that doesn't mention probability waves. As for the girl in the newspaper, I am not sure, but I guess it is similar to the effect that allegedly lights up fluorescent tubes in the vicinity of high-tension power lines. *http://www.gorge.org/images/field/ *http://www-spof.gsfc.nasa.gov/Education/FAQs8.html#q125 --Hroðulf (or Hrothulf) (Talk) 11:38, 22 April 2010 (UTC) It had something to do with microwaves traveling only on the surface, so there was no internal electrical energy. But that wouldn't explain the hair standing on end; or would it? WFPM (talk) 15:51, 22 April 2010 (UTC) Yes.
Every time I hear a story about a UFO inspecting power lines I think about that and that we're missing something there.WFPM (talk) 15:56, 22 April 2010 (UTC) I haven't seen the photos, and I guess, unlike me, a high school physics teacher (or WP:RDS) could do the calculation. In principle it seems to be that the electric field generated by the magnetron causes electrons to flow to the hairs, which then repel each other, but I am a little uneasy, as the field will oscillate. --Hroðulf (or Hrothulf) (Talk) 06:58, 23 April 2010 (UTC) Yes and I'm trying to understand it. And not on the basis of electrons repelling each other, because I also want to understand the BCS theory of Superconductivity where electrons move in pairs. I studied the Magnetron while in the navy and my interest in science led me into an interest in science history. As a science teacher you might be interested in "The evolution of matter" by Gustave Le Bon, as well as Millikan and Gale's "A First Course in Physics" to show how these ideas originate and evolve. Also the 9th edition of the EB has Maxwell on the "Atom" and "Attraction" and 100+ pages on electricity before the idea of the electron.WFPM (talk) 15:53, 23 April 2010 (UTC) I would say that it's related to the phenomenon of spatial storage of energy in space as in your power line images. I'm at the moment getting shot down for trying to materialize the "Quantum" package of energy, and, of course, if it were a material entity that we could work with we might be able to develop an explanation with that.WFPM (talk) 16:08, 23 April 2010 (UTC) You might note how ideas get distorted too. Consider Bohr's orbit theory. He argues that there shouldn't be energy emission in a circular orbit. And why should there be?
There's no loss in either potential or kinetic energy, and the emissions are supposed to be related to a loss in energy.WFPM (talk) 16:26, 23 April 2010 (UTC) And in an elliptical orbit there's still no loss in free energy, with the change in motion being due to a conversion of potential to kinetic energy.WFPM (talk) 17:56, 23 April 2010 (UTC) As far as I understand recent pop science, we don't really know what space is yet (The Fabric of the Cosmos). So the phenomenon of spatial storage of energy in space seems to capture some of the issues. Notice that Greene doesn't try to suggest that the photon doesn't exist, or that it doesn't have a wavelike macroscopic nature. As long as your proposal doesn't challenge that, you might make progress, though without solid mathematics or the ability to make detectable predictions (in principle), I think professionals would regard your ideas as speculative. You would be treated worse than Bohm and de Broglie were! --Hroðulf (or Hrothulf) (Talk) 17:42, 29 April 2010 (UTC) I'll be like Feynman's little old lady. It's particles all the way to the bottom. And Maxwell doesn't dispute that. Aha! I see that I must read Feynman's Nobel lecture: http://nobelprize.org/nobel_prizes/physics/laureates/1965/feynman-lecture.html Thanks! --Hroðulf (or Hrothulf) (Talk) 20:24, 8 May 2010 (UTC) Well it's pretty good and interesting and complicated for a nonmathematician like me. And when you get to the part about a mirror in the future reflecting back 50% the causative factors of a present event I become a dropout, because I'll buy a time dimension as a working path of action, but only in one direction. See Arrow of time. Just as I won't buy a concept of negative energy, as it was discussed in the Scientific American. And Asimov talks about the relationship of possible events in spacetime and notes that that relationship results in mutually exclusive relationships among same-time events.
so that you can't be in 2 nonconverging events at the same time. But my imagination can extend to a much greater range of particle masses and even of time intervals than that of the photon or even a Planck particle, and I guess I'll keep working with that. You ought to read Maxwell's articles about the "Atom" and "Attraction" in the 9th edition of the Encyclopaedia Britannica.WFPM (talk) 22:32, 8 May 2010 (UTC) Thanks. When I find Volume 3 I will put it beside my bed. --Hroðulf (or Hrothulf) (Talk) 09:38, 10 May 2010 (UTC) ## No hands I know that you can lean on a bicycle and cause the bicycle to turn without turning the handlebars. Is that explainable?WFPM (talk) 16:29, 30 April 2010 (UTC) Sure, if the front assembly is free to rotate about the steering axis, otherwise no. It is explained in Bicycle and motorcycle dynamics#No hands. -AndrewDressel (talk) 18:21, 30 April 2010 (UTC) Your animated model shows it steering to the left as it turns to the left, and I think that is correct. But I think you do first steer to the right before you steer to the left, so now I'm even more confused.WFPM (talk) 18:46, 30 April 2010 (UTC) The animation is consistent with the description "this leftward lean of the bike will cause it to steer to the left and initiate a right-hand turn." In both cases when the bike leans to the left, it then steers to the left, and ends up in a turn to the right. The only difference between the two is the rider's motion relative to the frame. - AndrewDressel (talk) 19:30, 30 April 2010 (UTC) ## Talk page chat Please do not use talk pages such as Talk:Centrifugal force for general discussion of the topic. They are for discussion related to improving the article. They are not to be used as a forum or chat room. If you have specific questions about certain topics, consider visiting our reference desk and asking them there instead of on article talk pages. See here for more information. Thank you.
- DVdm (talk) 13:54, 10 May 2010 (UTC) WFPM, Regarding your questions on my talk page in early April 2010, you can e-mail me at sirius184@hotmail.com David Tombe (talk) 14:40, 14 May 2010 (UTC) Thanks for the link. What I am trying to do in Particle physics is to show, by Maxwell's discussion of Boscovich's theory in "Atom" (EB 9th edition), that there are such things as "restrained dynamic composite systems" that have dynamic force activities, but don't radiate because they don't lose any of their contained matter or energy. Like the stable atom, for instance. If you're conceptually inside, like Boscovich, you can see all sorts of forces in action. But from the outside, it appears stable. I think that was what Maxwell was trying to figure out with his deliberations, and it's a lack of understanding of this matter that is now complicating our discussions of the subject. I'll try to work some more on the subject and get with you in the future.WFPM (talk) 03:03, 15 May 2010 (UTC) And in Talk:Centrifugal force, DVdm has very generously provided a link to Newton's article and I appreciate that. See the subject matter discussion. And I would move the link to here if I knew how, but I don't. WFPM, Maxwell is my speciality, but I am forbidden by the wikipedia arbitration committee from discussing Maxwell. If you want to discuss these matters with me, it will have to be off-wiki. That is why I have given you the contact address. David Tombe (talk) 15:51, 15 May 2010 (UTC) Boy!! You're really getting hemmed in. But you're knowledgeable, and with status. It's fools like me and Joyce Kilmer that are allowed to blunder around trying to find answers to questions. And I can't add much to what Maxwell had to say on the subject. But you might look at my contribution in Talk:Coriolis effect and tell me if I'm correct in that assertion.WFPM (talk) 18:10, 15 May 2010 (UTC) Tossed ball But maybe you're in a position to tell me something about a book I have.
It's "The Evolution of Matter", written by Dr. Gustave Le Bon, and published by the Walter Scott Publishing Co. Ltd. Manchester Square, London B C. and dated 1907. And it is noted to be "Translated from the third edition with an introduction and notes by F. Legge". And in his notes, Mr. Legge lists a number of papers published by Dr. Le Bon over a period from 1896 to 1906. And since in chapter 5 on "The properties of intermediate substances" he shows a chart indicating the mass of particles increasing with their velocity, and talks about the mass not being constant in magnitude, but varying with speed, it makes me wonder if he is writing (pre-Einstein) or (post-Einstein) or what? Do you know anything about this book? It's a reject from the Pittsburg (Kan) public library.WFPM (talk) 12:01, 16 May 2010 (UTC) WFPM, Unfortunately, if I even as much as hinted about my opinions on Dr. Gustave Le Bon, or your contributions at Talk:Coriolis effect, I would very quickly find myself the subject of an arbitration enforcement action, and I would be very swiftly blocked. I do have opinions on both of these matters, but if you want to know those opinions, you will have to e-mail me. David Tombe (talk) 12:10, 16 May 2010 (UTC) ## diffraction vs interference Hi, You wrote on my discussion page: "In your Single slit light passage image, you show an interference pattern. How do you explain that?" I think you may be referring to a photograph I took, or you may be referring to a diagram that schematically indicates the same thing. A single slit will produce a diffraction pattern. You can see a great variety of these patterns in Sears, Optics. On the page facing page 222 there are photos of both "Fraunhofer diffraction pattern of a single slit...[and] Interference patterns of 2, 3, 4, and 5 slits." In the first photo on that page (he had a really nice physics lab at MIT) you can see a broad central band flanked by 3 progressively lighter bands on each side.
This book was published in 1949, but it and its companion volumes are the original source for the "Sears and Zemansky" physics texts that are still in use today. You might find the same photos in the current version. P0M (talk) 21:38, 18 May 2010 (UTC) Well I never did learn what is supposed to happen to light passing through a single slit, and guess I need to learn about that. But the emphasis about the double slit experiment was about the occurrence of patterns of interference due to the additive/subtractive wave properties of the two separated slit passageways, and I guess that I didn't expect that the light through a single slit would have an interference pattern. You know what they say about a "little" knowledge. I guess this doesn't amount to a complicating factor related to the double slit experiment?WFPM (talk) 22:15, 18 May 2010 (UTC) PS: I've got an excuse for my ignorance, because my Robeson Physics (1943), chapter on Diffraction-Interference, shows the first slit of the double slit experiment as not putting out an interference pattern. But what do they know?WFPM (talk) 22:37, 18 May 2010 (UTC) PPS: Also look at the animation image in the 2-slit experiment, which is like in my Robeson. I don't think you can get much better than Francis Weston Sears. Trust your senses. Do your own simple experiment. Following Huygens, Fresnel had the whole thing pretty well figured out by sometime around 1800. P0M (talk) 01:20, 19 May 2010 (UTC) I notice that he's not in the reference section of the Optics article, where I went to get the specific book information. Does that happen to you often? Ah! where the pursuit of knowledge takes you.WFPM (talk) 01:39, 19 May 2010 (UTC) I just reread my Wiley "Atomic Physics" book (1946), by members of the physics staff of the University of Pittsburgh, section "Planck's Radiation Formula", where he got the idea about the emission of a quantum of light energy as a means of explaining the accuracy of the spectral energy curve.
and then goes on to say that it was Einstein who developed the idea of the quantum absorption process including an accommodation for the work involved in liberating the electrons due to the photoelectric effect. But neither of them says categorically that a zero time interval was involved in the emission or absorption process, just that it involved a transfer of a unit quantity of something. Kind of like if you used a container to package a "basket" unit quantity of something from a possibly continuous production line. But they did separate the discussion between the two light energy concepts into individual property discussions, and still left the final identity discussion up in the air.WFPM (talk) 20:59, 22 May 2010 (UTC) Physics for majors at my university used Sears and Zemansky. That was a very fat book that crammed in the main stuff from Sears's individual volumes. I did not like that book because it was not clearly written. Fortunately for me I had already purchased the Optics text before I graduated from high school, so I had it with me. It is a beautifully written book. I think they "cut out all the good parts" when they made the one-volume text. The optics article in Wikipedia probably does not mention the Optics because it was written in 1949. One problem with his original series is that these books were written at a time when classical physics was at the center of the curriculum for majors. So he writes primarily about classical physics, even in places where he could have said something about quantum physics. So that's another reason you won't see him cited by writers much younger than I am. His Optics wasn't even ten years old when I got a hold of it. I'm not sure of exactly what you mean by the "zero time interval," i.e., I don't know which discussion you may have plucked this assertion from. One of the differences between Heisenberg and Schrödinger was that Bohr et al.
thought that the electron "jumped" instantaneously from one orbit to another -- with nothing in between. Schrödinger could not imagine that, so he tried for an alternative to Heisenberg's theory. But I think the truth is that nobody can watch an electron, so it is all a matter of opinion anyway. As for "transfer of a unit quantity," again I am not sure of exactly what you have in mind. Some people have a natural misconception about Planck's constant and what it represents. Some people speak as though Planck's constant is a unit of energy, and that you multiply the frequency of a photon by Planck's constant because you are in effect counting up how many of these little packets of energy are carried forth in that one vibration. But Planck's constant is a proportionality constant. It is used to modify the physical truth that energy is proportional to frequency into an equation that will work in the particular system of units used. (If you change systems of units, you will have to change the value of h.) If one makes use of a different set of units, then, in that system of units, energy can equal frequency. In other words, giving somebody the frequency in this set of units gives them the energy without the need for any further math steps. That's when you have a certain "natural units" system to work with. (And, as another indication of what is going on, the units of h are not energy units, so no multiple of h would give you an energy.)P0M (talk) 00:58, 23 May 2010 (UTC) Well thanks for answering and let me continue. The energy emission graph re Planck's theory plotted energy intensity level (ordinate) versus time (abscissa), so sometimes there was no emission and then sometimes there was an emission at one level, plus an instantaneous increase to one or more higher levels (they mentioned 7 levels), so both the time interval and the discrete levels of emitted energy were involved in the process.
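P0M's point that h is a proportionality constant, not a packet of energy, is easy to check in numbers: E = hν gives joules only because h carries units of J·s in SI. A quick sketch using the defined SI values:

```python
# Photon energy from frequency: E = h * nu. Planck's constant h is a
# proportionality constant whose numerical value depends on the unit system;
# in SI it carries units of J*s, so h itself is not an energy.
h = 6.62607015e-34    # Planck's constant, J*s (exact by definition in SI)
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electron-volt

def photon_energy_ev(wavelength_m):
    """Energy in eV of a photon with the given vacuum wavelength."""
    nu = c / wavelength_m  # frequency from wavelength
    return h * nu / eV

# Green 532 nm light:
print(photon_energy_ev(532e-9))  # about 2.33 eV
```

Switching to natural units (setting h to a dimensionless 1 or 2π) changes the number attached to h but not the physics, which is exactly the distinction P0M is drawing.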
But that can be interpreted as the time of beginning of some event as the result of a process of non-instantaneous accumulation of the required initiating energy, and the occurrence of a relaxation property of the target such that its integrative accumulative capability is time dependent. And when I first learned about the photon's function and energy content I thought, "Aha! A unit of transmitted energy in a wave or particle stream such that one second's worth of reception would give you the photon value of energy!" But instantaneous accumulation? No way!!! So we need an energy container package concept such that in a very short time interval a target can accumulate a Planck's value of energy, so let's just assume that for the particulate concept each of the particles is carrying a Planck's value of translational energy. And since a wavelength's distance is already involved in the concept we might as well guess that maybe the wavelength distance is that length of the (Planck wave frequency quantity) stream of particles that is required to be accumulated during the accumulation time period by the target. And that results in a concept of an approximately 10e-47 gram particle. I think. But that's just my guess and maybe there's a better one about the same concept.WFPM (talk) 03:29, 23 May 2010 (UTC) I cannot follow what you are saying. I suspect it is because you are following the 1949 textbook. I suggest you get a current physics textbook because many things have been changed and corrected in our understanding over the last 50 years. For an overview, I would suggest Introducing Quantum Theory, by J.P. McEvoy and Oscar Zarate. They only screwed one thing up, and that was saying qp ≠ pq and leaving the impression that p and q were single values, i.e., ordinary variables, when in fact in the context they were working in p, q, and their products are all matrices.
Also, if I follow the general drift of your discussion, I think it would be useful to consider the generation of a photon of electromagnetic radiation in a radio transmitter antenna, or even what goes on when a wave is created by moving an electron back and forth on some kind of a shuttle system. The time needed for an electron to travel to the end of an antenna is definitely not 0. P0M (talk) 18:18, 23 May 2010 (UTC) Absolutely! And I was discussing that while also promoting the Planck particle concept in User talk:Wwheaton, about the antenna designs used for long-distance communication, because I can't understand a "modulated wave front concept of long distance electromagnetic creation and/or propagation," like in the visual details of the Whirlpool Galaxy images, and was discussing that with him. And he's like you and more sympathetically shot me down based on the mathematical descriptions related to the QM concept. So we'll just have to shuffle along on different and maybe parallel tracks toward the same understanding goal. And so you have a concept of an electromagnetic energy wave being created by the shuttle movement of an electron up towards the top of say a Marconi (quarter-wavelength) antenna, whereas I've got a concept of a lot of Planck particles being radiated out in all directions from the top. And of course your concept wins out with relation to the consensus of opinion criteria, and I'm pleasantly surprised that I'm being permitted to voice a few small objections as to the physical irrationality of some of the concepts.WFPM (talk) 20:07, 23 May 2010 (UTC) ## A book that may interest you Since you are interested in building 3D models of atomic nuclei, the forthcoming 2nd edition of Norman Cook's "Models of the Atomic Nucleus" (Springer) (http://www.amazon.com/Models-Atomic-Nucleus-Unification-Nucleons/dp/3642147364) to be published next month should be of great interest to you. I found the first edition very readable.
His analysis of the time-independent Schrödinger wave equation and its implications for the Lattice Model (which he favours) are especially interesting. The book also reveals that several notable nuclear physicists have published work on the fcc lattice structure of the nucleus, starting with Wigner in 1937 (Wigner, E., Physical Review 51, 106, 1937.), Everling since the 1950s (e.g. [1], [2]), Lezuo, Cook himself, Bobeszko and others. --TraceyR (talk) 10:32, 16 October 2010 (UTC) Appreciate the info, but I built my 3D models 26 years ago and am not hep to the modern ideas and technology related to the Schrödinger equation's concept and details. If you're interested in some of the ideas related to the concepts of the atomic nucleus, you might look at the National Geographic article "Worlds within the Atom" (May 1985), which is a good article with a lot of historical and technical information, but at the same time shows how little we really know about the details of the atomic nucleus. So much for consensus. I notice that your interests are mostly in private aircraft, particularly British. I had an uncle who was in the USAF from 1930 - 1946 and was in England in the 453rd Bombardment squadron, and we were interested in aircraft in those days. But you don't indicate your interest in Nuclear Physics, which is mine as an addition to Engineering and general science matters. So thank you.WFPM (talk) 18:17, 16 October 2010 (UTC) Unfortunately the National Geographic article you mention is not (yet?) available in the online archive. If you're suggesting that, 25 years later, mainstream nuclear physics is still not much wiser about the nucleus, it would seem that you're right (but I'm no expert). Cook shows that the lattice model is consistent with the Schrödinger wave equation, thus bridging the gap between current theory and 3D representations, so it could relate to your models too.
If you look at the second Everling link above you'll see illustrations of some of his 3D models and the magic numbers that they represent. Cook's book takes a refreshing look at many of the problems with current nuclear physics 'dogma' which have been swept under the carpet. I am interested in many areas of science but qualified in no specific area! As to the reason for my interest in aircraft, it is similar to yours: my father was a bomber pilot with the RNZAF in WWII. --TraceyR (talk) 21:44, 16 October 2010 (UTC) Well I hope that you can get a look at it because it shows what happens when an artist gets pinned down to making a graphic image about something that he doesn't understand (the nucleus of the Carbon atom). So he places 12 marble-shaped nucleons inside what looks like a placenta, loosely grouped in 3 groups of 4, and that's it. And it's a big and informative article, and well worth reading and looking at the pictures. So that's where we stand with the triple alpha accumulation theory, as compared to the indications of my models. And I just discussed that in User talk:Sbharris. And I've studied wave generation and modulation and propagation and Fourier series analysis but I can't get myself to believe that light energy transmission is a wave disturbance phenomenon. That's why I'm currently bogged down in Feynman's QED book. And I was just explaining to my daughter that the reason we need a base on the Moon is so that we can develop a reasonable and (comparably) economical way to get into and explore the space of our Solar system and you are probably interested in that technology.WFPM (talk) 22:31, 16 October 2010 (UTC) ## CF W, you seem to be suffering from the same confusion about centrifugal force that David Tombe suffers. It is clear that this topic confuses many, but it's not really that hard. I recommend you start with some good sources.
For each book you look at, figure out exactly what definition of centrifugal force they are using; you'll find three kinds: (1) like in centrifugal force (rotating reference frame), the main definition used by physicists and engineers; (2) the reactive centrifugal force, equal and opposite to the centripetal force that is curving the motion; (3) confused mixtures that don't lead to any useful way to work mechanics problems. Once you know which one you've got, try to understand it and apply it to some problems. See what our articles say, and see if you find errors or omissions that we can improve on. I'll be happy to talk you through it here, but it would be best not to keep up the noise on the article talk pages until you have done your homework, so you can make intelligent suggestions, or ask intelligent questions, based on sources. So far, your comments have been largely uninterpretable, just distractions for people who actually want to work on improving the articles. If the centrifugal pump is what you want to understand, say so; I can explain how to apply both types of CF definition to it. If you get to where you understand it, maybe you can fix that article. Dicklyon (talk) 18:32, 12 February 2011 (UTC) I've read it and am amused to note how you have managed to explain the centrifugal pump on an energy transfer basis and without much explanation of its centrifugal operation properties. I imagine that you will probably be questioned about that. So now we're down to zero occurrences of a real centrifugal force, which exceeds even David's aspirations to whittle it down to one subject matter. And I'm looking for an explanation of the principle of operation of the physical properties of the whip, which I can't find in wikipedia.WFPM (talk) 19:03, 13 February 2011 (UTC)I appreciate your idea that I could "fix" an article. But I'm more like Socrates.
I'd like to have a discussion to try to find out.WFPM (talk) 01:09, 14 February 2011 (UTC) Thanks for uploading File:82 Pb Lead 207.png. You don't seem to have said where the image came from, who created it, or what the copyright status is. We require this information to verify that the image is legally usable on Wikipedia, and because most image licenses require giving credit to the image's creator. To add this information, click on this link, then click the "Edit" tab at the top of the page and add the information to the image's description. If you need help, post your question on Wikipedia:Media copyright questions. Thank you for your cooperation. --ImageTaggingBot (talk) 22:05, 15 May 2011 (UTC) ## Would you please disable the MediaWiki email feature when you post to my talk page? Nobody likes getting 6 emails that they have something on their TALK page. I'll get around to WP when I get around to it! Thanks. SBHarris 23:47, 15 May 2011 (UTC) ## Files listed for deletion Some of your images or media files have been listed for deletion. Please see Wikipedia:Files for deletion/2011 October 15 if you are interested in preserving them. Thank you. FyzixFighter (talk) 07:55, 15 October 2011 (UTC) ## I understand your frustration I really do. I want everything to be explained in classical mechanics, because it's hard to accept that there are two sets of rules for everything. I have a very mechanical mind, and tend to look valiantly for alternatives. Unfortunately, it's hard to debunk quantum theory when it has, so far, stood up to every test. One of my favorite quotes, besides Feynman's, is this one from Michio Kaku, "It is often stated that of all the theories proposed in this century, the silliest is quantum theory. In fact, some say that the only thing that quantum theory has going for it is that it is unquestionably correct." The only way to really think about light is both as a wave and a particle.
I had to come to grips with that before I could ever get my first laser to work, even though I may not like it. The only thing we can do is rack our brains and try to come up with a better theory. I'm like that with gravity. I find myself asking those questions that intrigue scientists and piss off teachers. Unfortunately, the article talk pages are not the place to do that. There, I can only really provide sourced information which could be used to improve the article. Anyhow, it's been nice chatting with you, and I wish you a merry Christmas. Zaereth (talk) 01:49, 17 December 2011 (UTC) Thank you!! I've reached beyond the existence of frustration to that of being philosophical. And so I'll stick with Newton and Lucretius. And I admire Maxwell, except for his equations, which I don't understand, and Isaac Asimov, who gets along without them. He's like Fox News. He reports and you decide. In his book "Atom", which I just reread, he talks about everything concerning nuclear physics, and his memory for details must have been fantastic. And I'm somewhat disturbed now because my effort to present a chart to better organize and determine the individual significance of the atomic isotope data got thrown out. See Talk:Isotopes of lead. But c'est la vie. So here in Joplin Mo. we're recovering from the tornado and getting ready for Christmas, and wish a merry one to you too.WFPM (talk) 03:50, 17 December 2011 (UTC) I hope you're OK. We don't get tornados in Alaska, but we did have some hurricane-force winds the other day. Earthquakes and tidal-waves are what we usually have to be prepared for here. Anyhow, I don't know if you're very familiar with thin-film interference or not, but it is one of the reasons that this topic of light so fascinates me. Why do colors appear from a thin oxide layer on steel when it's tempered? I wrote the history section in that article, and find the history to be quite amazing. It is still one of the great mysteries of science.
The simple question goes something like this: When a beam of light strikes the surface of glass, 4% is reflected off the surface, roughly 10% is absorbed by the glass, and the other 86% transmits to the next glass/air interface. If I add a layer of material that is of a different refractive index, making sure that the thickness of this layer is exactly 1/4 of the wavelength of the light (as measured inside the coating), another surface is created from which 4% of the light is also reflected. When two of these photons reflect back, they are 1/2 wavelength out-of-phase, so they cancel each other out. In other words, they disappear, so that no reflection comes off the glass. This, in itself, seems impossible in classical mechanics, but it happens. Antireflective coatings can make a surface 99.99% reflection-free. What's even stranger is this: Because these two reflective surfaces have cancelled out 8% of the light, I would expect that, after 10% is absorbed by the glass, only 82% of the light would transmit to the next side of the window. What actually happens is that 90% of the light transmits. The photons that were canceled out by the two reflective surfaces reappear in the transmitted beam. How does this happen? The first law of thermodynamics says it has to occur, because energy cannot be destroyed, but only quantum theory explains it. Alright, it's getting late here, and I need to get home where there are no computers to distract me. Have a good weekend. Zaereth (talk) 06:25, 17 December 2011 (UTC) Well I'll give you my "fringe theory" explanation. Which is that light is a stream of particles, and when you combine 2 separate beams of light you're combining 2 separate beams of particles. And the question becomes as to how light receptors respond and interpret the received sensory information from 2 combined streams of particles. My guess is that if they are in phase, they get a stronger perception as to the intensity of the light source, which is logical.
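The quarter-wave condition Zaereth describes is easy to put in numbers. A sketch, assuming a single-layer MgF2 coating (n ≈ 1.38, a common real choice) on glass; the quarter wave refers to the wavelength inside the coating, λ/n, so the physical thickness is t = λ/(4n):

```python
import math

def quarter_wave_thickness(vacuum_wavelength_nm, n_coating):
    """Physical thickness (nm) giving an optical quarter-wave layer.

    The condition uses the wavelength *inside* the coating, lambda / n,
    so a quarter of that is lambda / (4 * n).
    """
    return vacuum_wavelength_nm / (4 * n_coating)

# Single-layer MgF2 (n ~ 1.38) antireflection coating tuned to 550 nm:
t = quarter_wave_thickness(550, 1.38)
print(round(t, 1))  # 99.6 nm

# For full cancellation at normal incidence a single layer also needs
# n_coating == sqrt(n_glass); for n_glass = 1.52 the ideal index is ~1.23.
print(round(math.sqrt(1.52), 2))  # 1.23 -- below any durable coating material
```

That second condition is why a lone MgF2 layer only reduces reflection (to roughly 1-1.5%) rather than eliminating it, and why the near-total "99.99%" figures require multilayer stacks.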
But if they are out of phase, then the sensor's perception properties get fouled up by the doubling of the frequency as concerns color perception, but not as concerns the beam's energy containment properties. And in thin film refraction phenomena the ability to enhance and dim the various colors changes with the angle of view such as to enhance and dim different colors at different angles. So there! So the interference cancellation properties of antireflectors must be due to their ability to result in the doubling of the frequency of the reflected disturbing light beam such that the eyes do not note it and let it obscure their perception of the other visual information with which they are dealing. I keep thinking that you ought to be able to test this theory by experimenting with light beams whose doubled-frequency property could be visually observed, like in the infrared frequency range, but I'm not an experimenter, just a fringe theory thinker. Maybe you can set me straight.WFPM (talk) 18:08, 17 December 2011 (UTC) The problem is, that's not what happens experimentally. An Nd:YAG laser can be frequency-doubled by placing a potassium titanyl phosphate (KTP) crystal in the beam path. In this case the invisible IR laser beam turns green. The 1064 nm light has been doubled to 532 nm. I can easily do this with my laser, and have a couple of Nd:YVO4 laser pointers that operate off the same principle. However, when the laser is shined through an AR coated window or lens, no green reflection is observed. The other thing that negates frequency-doubling in thin-films is the fact that the reflected energy transmits, proving that it was never reflected in the first place. Energy cannot be created or destroyed, so the beam must be short of the reflected energy. If not, then the energy must not have been reflected at all.
Quantum theory suggests that, because the probability of reflection has been eliminated by the coating, the incident light has no choice but to take the most probable route, which is transmission. This is to avoid violation of the conservation of energy law, because the energy must go somewhere. There are many other examples of the wave nature of light. The ability to form an image is one. Individual photons following ray paths will not form an image. This requires waves with curved fronts, which can be imaged by refracting through curved lenses. Another example is the laser itself. Lasers work by capitalizing on the constructive interference of light. When two mirrors are placed parallel to each other, the only photons that can oscillate between them are those whose half-wavelengths fit a whole number of times into the distance between the mirrors. Photons that do not meet this wavelength criterion are cancelled out by destructive interference. The waves that are allowed to interfere constructively form a standing-wave between the mirrors. In a laser like the Nd:YAG, the output is typically around 1064 nm, but can be fine-tuned in that area by adjusting the distance between the mirrors. Spatial hole burning is another wave phenomenon observed in lasers. The standing-wave produced leaves untapped energy between the crests of the waves. Only a ring-laser, which does not produce a standing wave, can stimulate emission throughout the entire gain medium. Ring lasers also have a greater freedom for tuneability, because the color of the beam does not depend on the cavity length. I'm not trying to poke holes in your theory, because these are questions that have puzzled me for years. Bringing classical mechanics and quantum mechanics together is something I would really like to see happen, but may require a complete change in our understanding of the universe. Some theories created to alleviate some of these problems include the notion that there are multiple universes coexisting.
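The standing-wave condition described above can be checked with a little arithmetic. A sketch assuming a made-up 10 cm cavity, where only wavelengths satisfying L = m·λ/2 survive:

```python
# Standing-wave condition for a two-mirror cavity: L = m * (wavelength / 2),
# so the allowed wavelengths are 2L/m for whole numbers m.
L = 0.10            # assumed cavity length in metres
target = 1064e-9    # Nd:YAG emission wavelength in metres

m = round(2 * L / target)     # mode number closest to the target line
allowed = 2 * L / m           # nearest wavelength the cavity supports
neighbor = 2 * L / (m + 1)    # the very next allowed mode
spacing = allowed - neighbor  # tiny gap between adjacent modes
```

Adjacent modes here sit only a few picometres apart, which is why small changes in mirror spacing can fine-tune the output within the gain bandwidth.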
Another says that the universe itself must be solid, and what we perceive as matter are merely pockets of energy (which seem solid to us because we're made of the same thing) moving through the universe like holes through a semiconductor. There is also string theory, which is in its infancy and is more philosophy than science. Nearly every theory since Einstein requires that the universe be made of some unseen fabric. If you or I could answer these puzzles we'd be looking at a Nobel Prize! I wish you luck. Zaereth (talk) 00:45, 18 December 2011 (UTC) Wow! Experimental evidence already! Well let's see. First we have the evidence of frequency doubling, which says that a beam of light can be doubled in frequency. Then we have the illustrations saying that that is caused by what we can call the phase change amount using wave terminology parlance. And the amount of phase change is related to the length of the path through the transparent medium separating the two points of reflection. And if a visual colored light source is reflected from both sides of a thin film its intensity will be practically eliminated. Now comes the question of what will happen to an IR light source if it is incident on and double reflected by a thin film. And the answer is that there will be a phase change but how much? If the phase change of the wave length is the same distance amount then the phase change "angle" would only be about half as much as for a visual light frequency, and that might not be enough to result in a light frequency doubling effect, which might be achieved by an increased film thickness. I'll try that as a preliminary guess, along with a question re your statement that individual ray-path photons do not form an image, which I don't understand. I think that means that in order to make an image you have to have a minimum number of accumulated photons on a particular location.
Don't let me get too far astray.WFPM (talk) 19:07, 18 December 2011 (UTC) If you want to read about the state of the art in science and nuclear science matters, I suggest you read Asimov's book "Atom" where he has covered the gamut and more or less tied things together. What is needed now is a few breakthrough ideas about how the small "imponderable?" matter in the universe is organized and functions. I've got a 6 unit cylindrical magnet hanging by my desk that points north and dips slightly and there's no question in my mind but it's coordinating with some matter that is contained within a magnetic field. And when I swing a magnetized pendulum down through an arc toward an opposing magnet at the bottom of the arc and hear the chunk as the swinging magnet bounces off the repelling pillar created by the magnet I know that the magnet has capabilities to organize the magnetic field so as to do work and carry out energy exchange functions. And of course we have transformers and other apparatus to show how the magnetic field interacts with the electric current field. I'll bet that Edison was puzzled when he found out that nothing but an EMF had to enter a customer's house in order to sell them electrical energy. But light energy and light beams traveling at light velocity involve a different concept of matter. But it's still the motion of matter and we have to figure it out.WFPM (talk) 02:57, 19 December 2011 (UTC) My daughter just bought (at Sam's Club) a 2 liter plastic bottle of hand sanitizing gel (Glycerol + Ethyl Alcohol) that is a clear gel with thousands? of varying size stationary bubbles within it. And I note that the bubbles react differently in their sunlight reflection properties and with relation to the stereoscopic visual location of each eye in an interesting manner. I wonder how those same bubbles would react to IR light illumination. 
And the advantage of that material is that you have a large number of varying size stationary bubbles to illuminate and analyze. What Say?WFPM (talk) 23:33, 23 December 2011 (UTC) Hi. Sorry to take so long, but real life for me is often very hectic. Yes, frequency doubling or even tripling is possible, but requires the use of non-linear optics. I'm not sure exactly how that works, but it somehow involves a non-linear polarization-wave being formed in the dielectric, which interacts with the fundamental light-wave to generate the second or third harmonic. This phenomenon is actually how non-linear optics was discovered, because it does not occur in any other medium. If the thickness of the film is λ/4 (wavelength/4 or 1/4 wavelength) the light travels through the film λ/4, and is reflected off of the other side, again traveling λ/4, for a total phase shift of λ/2. This causes destructive interference. If the thickness of the film is increased to λ/2, then the phase shift will be 1 full wavelength. This causes constructive interference, which causes bright colors to appear on the surface. Gasoline on water demonstrates this nicely. So do the tempering colors of steel; of which a photo of mine can be seen on the tempering article. The color of the steel is a direct indication of the thickness of the iron-oxide film. The phase change will also occur for whatever light is passed through the film. If the thickness of the coating is only λ/8, then the interference will be partially constructive and partially destructive, and the total energy from both reflections will be equal to the energy of one reflection.
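The three thickness cases above (λ/4, λ/2, λ/8) all follow from the two-beam interference formula. A small sketch, assuming two equal-strength reflections at normal incidence, with the film's refractive index left out for simplicity:

```python
import math

def relative_reflection(thickness_over_wavelength):
    """Combined intensity of two equal reflections, one delayed by a
    round trip through the film, normalised so the maximum is 1."""
    delta = 2 * math.pi * (2 * thickness_over_wavelength)  # round-trip phase
    return (1 + math.cos(delta)) / 2

quarter = relative_reflection(1 / 4)  # fully destructive
half    = relative_reflection(1 / 2)  # fully constructive
eighth  = relative_reflection(1 / 8)  # halfway: the energy of one reflection
```

The λ/8 case lands exactly halfway between the two extremes, matching the "energy of one reflection" description.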
The thickness of the coating is usually engineered to give minimum reflection for a particular wavelength, the "center wavelength," but the performance degrades the farther you go through the spectrum to either side of that wavelength. When viewed from an angle, the center wavelength will change. This is why, when you look at a peacock feather straight-on, the spot in the center appears blue, but when viewed at an angle, the spot appears green. At an angle, the distance through the thin-film is longer than when viewed straight-on, and so the center-wavelength is also longer. I can go on and on about this. The wave theory of light only came into being during the early 1800s, due to the work of Fresnel. It behaves just like any other wave, from sound waves to waves in the ocean. Depending on the medium, it travels at a constant velocity, and any change in energy input will only change frequency and amplitude, but not velocity. This is just like a wave. However, if light is a wave, logic dictates that there must be some medium through which it transmits. Unfortunately, the Michelson-Morley experiment could find no such evidence of a medium. So how can energy transmit without a medium to convey it? By classical mechanics, this should be impossible. So along comes Einstein with his theory of relativity. Einstein shows that light can be bent by a gravitational field. In the newly-forming realm of particle physics, a name is given to this particle: the photon. However, the particle has no mass. It is described only as a "packet of energy." Now we have a particle which transmits energy (speed), exerts a pressure force on whatever it hits, but has no mass! This very idea defies common sense. Not only that, but that these photons are supposed to travel, not in straight lines like a speeding bullet, but in perfect sinusoidal wave forms also defies classical mechanical explanation.
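A toy numeric sketch of that geometric picture, using an assumed 125 nm film (quarter-wave for 500 nm head-on) and ignoring refraction inside the film; a full treatment would trace the refracted ray, but the direction of the shift is the point here:

```python
import math

t = 125e-9  # assumed film thickness: quarter-wave for 500 nm head-on

def center_wavelength(view_angle_deg):
    """Wavelength meeting the quarter-wave condition at a given viewing
    angle, in the simplified picture where the slant path through the
    film just lengthens as 1/cos(angle)."""
    slant = t / math.cos(math.radians(view_angle_deg))
    return 4 * slant  # quarter-wave condition: slant path = wavelength / 4

head_on = center_wavelength(0)   # 500 nm
tilted  = center_wavelength(40)  # ~653 nm: shifted toward longer wavelengths
```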
The fact is that trying to explain light as a particle at all seems ludicrous, because everything else about it screams "wave." The one caveat that seems to support the photon theory is the laser, because laser light tends to behave more like a ray than a wave. This seemed to reinforce the idea of the photon, but now new sound-wave lasers (sasers) have shown the phenomenon is possible with any type of wave traveling through a medium. Personally, I tend to lean toward the idea that Einstein was very much on the right track, but not exactly right, and that before any of these mysteries can be sorted out we first need to figure out the very large. I've heard a theory which I think might help do this, but would turn our understanding of physics and the universe upside-down, and I'm not quite sure that the world is ready for that yet. I don't have time to get into this anymore, especially not on Wikipedia. I do try to keep in mind that, in Aristotle's time, the universe was pretty much as it appeared. Now, we realize that reality is far different than it appears. Every time science has figured it all out, some new discovery comes along to smash everyone's misconceptions, and the whole process of theory-creation begins anew. Zaereth (talk) 00:30, 7 January 2012 (UTC) I don't think that there's any question but a fact that there is an amount of "imponderable" matter in the universe, such that we can create a "ponderable" magnetic force and do work with it. However the idea that this matter is capable of transmitting a pinpoint source of light energy over a 10+ megaparsec distance without blurring any of the details is impossible to imagine based upon the light wave modulation principle.
It's only when the concept of a particle beam is being considered that we have any chance of developing a theory as to a frequency principle related to the properties of the received light energy, and with the ability of optical devices to modify this frequency as necessary to cause the observed effects. This agrees with the Michelson-Morley experiment's no-medium conclusion, although I wish that it had been carried out in the 3 orthogonal dimensions (along the diagonals from the 3 corners of a cube). So I'll go along with Newton's concept, except that the particle that I can imagine would have to have a mass of only 10^-47 grams in order for the 1 second's worth of photon energy value to be correct. And that 1 second's value would have to be crowded into a very short reception time in order for it to be accomplished within the exposure time frame of modern photographic recording devices. So where are we? In limbo! And I can't see that Feynman has helped much by telling us we are not going to be able to understand it after he explains it. And I studied wave principles and terminology in EE, but I always understood that what we were talking about was the relationship between EMFs and the frequency of occurrence of the electrons within a circuit and not about any other physical entity in the circuit except the electrons. I'll bet that Edison was surprised to find out that you didn't have to push his DC current carrying particles through his customers' circuits in order to supply them with their required electrical energy, but only to provide the needed EMF and electrical energy such as to energize the customer's electrical circuit necessities. And do you think that mother nature had all this in mind when the design of the entities of the universe was being developed? I doubt it! What was needed by the larger particles of the universe was for a way to get rid of excess kinetic energy, so that the second law of thermodynamics could take effect.
So in order to get rid of localized excess energy we need lots of small-mass interactive particles. That's because in an interactive encounter between 2 mass particles, the equal exchange in momentum results in the majority of the (second order) energy transfer amount being transferred to the less-massive particle. So in the intermediate stages of an accumulative process the system results in the existence and motion of a large number of small particles, and thus we have the electron and the photon in order to manage the energy transfer functions and resultant properties of the atom.WFPM (talk) 18:45, 8 January 2012 (UTC) I understand all that about the laws of thermodynamics. I don't doubt that photons actually exist, or phonons for that matter, but photons alone don't explain the complete nature of light. When talking about imaging, here is a simple experiment to try. If you hold a magnifying glass between a table and the light overhead, and your distance from the table is just right, an image of the lightbulb will form on the table. Next, turn off the light and shine a laser into it, to provide the illumination. If the bulb has a white coating, the laser light will diffuse and spread out in all directions, but will retain the properties of laser light. It will still be very sparkly, the way one would expect rays to appear. Now try to image the bulb with the magnifying glass. You will see the basic shape of the bulb, but it will be fuzzy with no defined edges, and very sparkly. The farther away the bulb is from the lens, the more fuzzy the image will become, until it looks like a blurry spot. Even if the lens is focused properly, the image will not appear sharp in the way it does with normal light. To see why this occurs, imagine the light from a distant galaxy as photons speeding out like bullets following ray paths, with no accompanying wave. As they move they spread out, getting farther and farther away from each other.
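On the earlier point that an equal momentum exchange deposits most of the energy in the lighter particle: for the same momentum p, kinetic energy is p²/2m, so it scales inversely with mass. A one-line check, with arbitrary made-up masses:

```python
# Same momentum kick p delivered to a heavy and a light particle at rest:
# kinetic energy E = p**2 / (2*m) is inversely proportional to mass.
p = 1.0
m_heavy, m_light = 1000.0, 1.0

E_heavy = p**2 / (2 * m_heavy)
E_light = p**2 / (2 * m_light)
ratio = E_light / E_heavy   # equals m_heavy / m_light
```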
When they are focused through a telescope, only a small number of photons are imaged, and the image will appear as a collection of bright spots and empty spaces, allowing no sharp edges to be shown. The image of the galaxy would appear as a blurry, fuzzy, sparkly spot. You could also imagine that, if far enough away, these photons might be so spread out that they miss us entirely. Light, as a wave, however, would fill in all of the empty spaces. As multiple wave-fronts, coming from each individual star, they would refract individually, forming an image of each star. Also, as a wave, we would not need to be directly in line with a photon to see the star, we would be able to see regardless of our position. The main problem is really the double slit experiment. When light is shined through a double-slit, it diffracts. As a simple experiment, cut a couple of slits in a sheet of paper, side by side, about big enough to slip a coin through. Hold it between yourself and your computer screen. You'll notice that you can read a few words through the slits, but you may see the same word in both slits. If light was traveling in ray paths, the light should pass straight through so that one word would appear through a slit, the paper would cover the next word, and the following word would appear in the next slit. But that's not what happens. The words appear to be where they shouldn't, and the same sentence will seem to overlap. The image will appear distorted at the edges, as if viewing it through a lens. This is because the wave front has been diffracted into two separate wave fronts. Perhaps this is just a case of photons interfering with each other. However, experiments have been conducted with lasers and double-slits, where one photon is launched at the slits at a time, with detectors sensitive enough to record them on the other side. The results are the same.
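The fringe geometry behind the double-slit behaviour above follows the standard two-slit relation d·sin θ = m·λ. A sketch with assumed numbers (a green laser and a 50 µm slit separation), chosen purely for illustration:

```python
import math

wavelength = 532e-9  # assumed green laser, metres
d = 50e-6            # assumed slit separation, metres
screen = 1.0         # distance to the screen, metres

# Small-angle fringe spacing on the screen: x = wavelength * L / d
fringe_spacing = wavelength * screen / d   # about 10.6 mm between maxima

def intensity(x):
    """Two-slit interference intensity at position x on the screen,
    normalised to 1 and ignoring the single-slit envelope."""
    delta = 2 * math.pi * d * (x / screen) / wavelength
    return math.cos(delta / 2) ** 2

central = intensity(0.0)              # central maximum
dark = intensity(fringe_spacing / 2)  # ~0 midway between maxima
```

The same formula predicts where single photons accumulate over many trials, which is what makes the one-photon-at-a-time result so striking.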
The individual photons still diffract, just like a wave, and end up where they could not possibly be if they were following ray paths. The answer to the double-slit experiment is the holy-grail of light-science. It is exactly because of this experiment that theories like "multiple universes" are even considered. This theory states that the individual photons are interfering with photons in the other universes. If you come up with an explanation for this, though, I wouldn't publish it on Wikipedia. Publish it in Nature or some other peer-reviewed journal because, like I said, you'll be looking at a Nobel Prize. Anyhow, it's been nice chatting with you, and I hope you have a great new year! Zaereth (talk) 00:30, 9 January 2012 (UTC) Nice set of experiments and I guess I don't know enough about optics to become confused. When I shine my green laser pen (birthday present) at a 40 watt bulb, (had to find an incandescent bulb) I see a reflection off the bulb's surface plus a dim green luminescence within the bulb. So the light energy is diffused within the bulb. And my eye is designed to detect and integrate the occurrence of the amount and frequency of the light energy emitted by the external surface and the internal spatial volume within the bulb. And in looking at a distant light emitting object I do the same thing. Or maybe the interior light is reflected light off a multitude of interior surfaces within the bulb. But I have no problem in understanding that a light source from a star at a distance would soon shrink to a sparkle due to the star's distance of transmission.
And the ability of the path of starlight transmission to obscure and otherwise alter the transmitted light ray (as well as to refract and optically double the light emission frequency in the case of the light evaluation experiments) leads me to believe that the character of the light transmission media is like any space-time continuous transmission system such that its content can be interrupted and/or distorted by intervention phenomena. But if it is a package of kinetic energy of motion, I can't see how 2 light beams can be merged to reduce the light energy originally carried by the 2 separate individual light beams. And I'll keep thinking and trying your experiments and thanks for the additional comment.WFPM (talk) 02:17, 9 January 2012 (UTC) Yeah, it's mind-boggling. I often find it easiest to think of the wave as being a freeway, and the particles as vehicles following that road. If the offramp (reflection) is destroyed by interference, then there is no "offramp" for them to follow. The particles simply follow the main-road (transmission), which is still intact. Another way to think about it is that photons might only exist as particles at the moment the wave encounters an obstacle. All of that is very abstract, though, and starting to get into "Schrödinger's cat" type stuff. Plus, now we're back to looking for some medium through which this wave transmits. There was a special presentation on PBS, on the show Nova, called The Fabric of the Cosmos, which you may find interesting. I sure did. By the way, perhaps a better example of the double slit experiment is to lay the paper with the slits on top of your magnifying glass, between your magnifying glass and the overhead lightbulb. You will notice two or more separate images of the lightbulb on the table, (one complete image for each slit), which partially (but not completely) overlap when it comes into focus.
Zaereth (talk) 02:15, 10 January 2012 (UTC) Well I'm trying to make the slits out of taped-together 3" x 5" filing cards, and I can observe some of the line level distortion and when I put on my anti-astigmatism glasses I can't read a thing through the slits so there is some refraction and distortion. But if I fold down the top obscuring card, I can then look over the top end of the slits and almost figure it out, but not yet. Don't have a big enough magnifying glass for the other experiment. How do you make nearby slits?WFPM (talk) 00:43, 11 January 2012 (UTC) After all, all you have to do is look along parallel to the hot metal surface of your car finish to know that the light path is temperature sensitive. That's just another argument in favor of directional beam light transmission. The devil is in the details.

## Merry Christmas!

Hi WFPM, I've been incredibly busy, and so haven't had time to respond to you. Please let me get back to you after the holidays, as I'm on my vacation right now. We just got a lot of snow, and I'm out at a cabin in the wilderness right now, trying to warm up by the fire. I've been out snowmachining (snowmobiling), trying to go as fast as I can on winding, bumpy trails. With all of the new snow, conditions are perfect. We must've averaged 60 MPH today, hitting speeds from 80 to 120 on the rivers. The amount of acceleration in these newer machines is amazing. I can stop alongside the highway, wait for a car to pass doing 75, then punch the throttle and pass him within 5 to 8 seconds. Riding these things is not easy, though, because you really have to man-handle them to control them. Plus, you have to ride jockey-style, (knees bent, butt off the seat), to keep from getting thrown off. At high speeds on rough trails, these things tend to buck like a rodeo bull. Every muscle in my body is sore right now. Probably the worst is my throttle-thumb and my hold-on-for-dear-life-fingers, which is making typing difficult.
I hope you have a Merry Christmas and a great New Year! I'll get back to you in a week or two. Zaereth (talk) 07:48, 24 December 2011 (UTC) I noticed that on talk pages, you often do not indent your comments with colons (:). Doing so makes it easier to follow the flow of the discussion. It would help other editors do their job better if you indented comments. Thank you. StringTheory11 23:50, 4 February 2012 (UTC) Not sure of the protocol. However, I will try to match columns with previous to see if that works. Thank you for comment.WFPM (talk) 00:13, 5 February 2012 (UTC) Here is another very bad example [3]. You have inserted a comment inside a very long post of another user (itself a bad thing to do) without indenting, right before a sub-heading within the user's post. This makes it look like everything the user (Circuitdreamer) wrote up to that point was written by you. Basically, the rule is use one more colon than the post you are replying to; see WP:INDENT. SpinningSpark 21:46, 21 February 2012 (UTC) Sorry! It was at the end of the discussion with no previous signature which would have identified the previous contributor, and I was just trying to add information about the subject matter.WFPM (talk) 22:23, 21 February 2012 (UTC) That is not correct, the post is signed, it is just that it is a long rambling post with several sub-headings. Circuit-dreamer is notorious for such long and pointless discussions. Most of us think it is best just to ignore him, especially the old conversations (mostly with himself). SpinningSpark 23:25, 21 February 2012 (UTC) Oh yes! At the bottom. I thought the next paragraph was a different subject. Well maybe someday I'll get the right idea.
And even where to locate the pertinent article about the subject matter.WFPM (talk) 01:15, 22 February 2012 (UTC) And I apologize for inserting redundant information about inductive loading.WFPM (talk) 01:21, 22 February 2012 (UTC) I hope I don't seem to be nagging you because this really is not very important, it is just one of those funny etiquette things found on Wikipedia but nowhere else on the internet. You still don't have it quite right: your post above should have been indented three times instead of two because you are replying to a post that already has two indents. I would then have indented my post (this one) with four indents instead of three. Well I'm an old dog learning new tricks and I've already been shot down for adding to articles without references. And I'm an EE with interests in nuclear physics and with Navy training in electronics. Like in magnetrons, etc. But I'm not keeping up in electronic communications and they're not making it simple by throwing in higher mathematics. And the compartmentalization of subject matters in Wiki is thoroughly confusing. I try to discuss facts and/or opinions about scientific things where I notice them and can add information. And I get particularly concerned when I notice that this or that convention is complicating the understanding of a scientific situation. Like the organization of the Periodic table for instance. So us old dogs stay alive and active by sticking closer to the fundamentals and occasionally adding to their knowledge along their line of interests within the limits of their imaginative powers. That's why I'm glad I read a lot of science fiction when I was a kid.
And I still read Isaac Asimov.WFPM (talk) 16:04, 22 February 2012 (UTC) Here's that link: http://www.informationphilosopher.com/solutions/scientists/maxwell/atom.html - Cheers - DVdm (talk) 18:27, 27 February 2012 (UTC)

## Speedy deletion nomination of File:Modifiedchartofthenuclides-color.jpg

A tag has been placed on File:Modifiedchartofthenuclides-color.jpg requesting that it be speedily deleted from Wikipedia. This has been done under section F1 of the criteria for speedy deletion, because the image is an unused redundant copy (all pixels the same or scaled down) of an image in the same file format, which is on Wikipedia (not on Commons), and all inward links have been updated. If you think that the page was nominated in error, contest the nomination by clicking on the button labelled "Click here to contest this speedy deletion" in the speedy deletion tag. Doing so will take you to the talk page where you can explain why you believe the page should not be deleted. You can also visit the page's talk page directly to give your reasons, but be aware that once a page is tagged for speedy deletion, it may be removed without delay. Please do not remove the speedy deletion tag yourself, but do not hesitate to add information that is consistent with Wikipedia's policies and guidelines. ww2censor (talk) 23:32, 29 February 2012 (UTC)

## File:Modified chart of the nuclides.jpg

Okay! kind of neat huh? Shows how a format can improve the intelligibility of data presentation. Also shows that just an analysis of the stable nuclides doesn't tell much about the stability characteristic. The halflife data also has to be evaluated to begin to understand the tendency of each isotope to be stable, and the best value to be associated with that tendency is the base 10 logarithm of the half-lifetime seconds value.
So maybe sooner or later these halflifetime values will be pinned down sufficiently to provide the needed values for further determinations.WFPM (talk) 16:42, 6 March 2012 (UTC)

## Talk:Nuclear model

http://en.wikipedia.org/w/index.php?title=Talk:Nuclear_model&oldid=478791686 -- Boing! said Zebedee (talk) 23:13, 7 March 2012 (UTC)

## File:82Pb Lead stable isotopes.pdf listed for deletion

A file that you uploaded or altered, File:82Pb Lead stable isotopes.pdf, has been listed at Wikipedia:Files for deletion. Please see the discussion to see why this is (you may have to search for the title of the image to find its entry), if you are interested in it not being deleted. Thank you. Stefan2 (talk) 14:01, 5 September 2012 (UTC)

## File:Modified chart of the nuclides.jpg listed for deletion

A file that you uploaded or altered, File:Modified chart of the nuclides.jpg, has been listed at Wikipedia:Files for deletion. Please see the discussion to see why this is (you may have to search for the title of the image to find its entry), if you are interested in it not being deleted. Thank you. Stefan2 (talk) 14:02, 5 September 2012 (UTC)

## Talking to myself

Hello! I'm trying to talk to myself in this comment in order to find out who I am that is talking to me.68.89.218.132 (talk) 18:42, 11 September 2013 (UTC)

## Trying again

Who am I this time?.68.89.218.132 (talk) 18:45, 11 September 2013 (UTC)

## Element stability long past the superheavy region

It's been suggested (see group 12 element) that EE164Uhq482 (482164) could be the centre of a second island of stability. Period 8 element and period 9 element have data for surrounding elements from 156 to 172. Double sharp (talk) 14:53, 29 June 2014 (UTC)
{}
Therefore, to cancel the electric force with a magnetic force, the magnetic force has to point up. Full text of "Family Computing Magazine Issue 04" See other formats. The core can be air or any material. Sources of Magnetic Field 29. The magnetic force on a moving charged particle is given by the equation: Isolating the directional component of this equation yields the understanding that the resulting force on a moving charged particle is perpendicular to the plane of the velocity vector and magnetic field vector. Electrical energy is converted into mechanical work in the process. 9780582067035 0582067030 Fundamentals of Nuclear Magnetic Response, Jacek W. This is the principle of electromagnetic induction , and it is responsible for making electric generators and motors work. The Magnetic Field; 30. Based on their. No one had ever been elected president as a businessman. edu See Classes/ Physics 4 S 2010 Check frequently for updates to HW and tests [email protected] As has been said, yes, they cancel. A key component of College Physics: A Strategic Approach is the accompanying student workbook. 4 Energy and Momentum in Electromagnetic Waves 1067 the expressions for energy density [Eq. The speakers looks intimidate but they are very room friendly. A: According to Faraday's law of electromagnetic induction, a time-varying magnetic field induces an emf, According to Maxwell, an electric field sets up a current and hence a magnetic field. 0 cm in the magnetic field. I sort of miss it, although towards the end of the year I was feeling less inspired and more burdened. “Magnetic North” Six programs of experimental Canadian video from the past 30 years that range from documentary to conceptual art. Each moving charge is like a small element of electric current. ; You can check with our electric field calculator that the magnitude of. Lasers beams exert a ponderomotive force on the electrons of plasma in beating frequency which generates THz waves. 
The most popular current techniques for investigating the brain, like functional magnetic resonance imaging (fMRI), are far too coarse. I was back on the left side of the nose, scrambling up the orange steel ladder to the canopy. lim x- 2) x-,-+(2 x — 3) (x — 1) x-2 (X2 + 1) (x-2),. edu See Classes/ Physics 4 S 2010 Check frequently for updates to HW and tests [email protected] 3T MRI is ready to meet the needs of clinical practice. Magnetic Field Orientation and Flaw Detectability. ) of length L, linear mass density µ, under tension T, which is fixed at both ends as shown in figure 1. In physics, the Poynting vector represents the directional energy flux (the energy transfer per unit area per unit time) of an electromagnetic field. In solvophobic solvents and in bulk they self-assemble in helical columns. Updated abilities, psychic powers, wargear, weapon profiles and points values allow you to field your collection in games of Warhammer 40,000. Mars is currently 213 million miles (343 million kilometers) from Earth. Like electricity*, the magnetic interaction is also an inverse square law, and the law of Biot-Savart gives the field B at distance r due to a small length dL carrying current I. cz Department of Physics, Faculty of Electrical Engineering, Czech Technical University, Technická 2, 166 27 Prague 6, Czech Republic The two-stream instability is a typical instance of instability in plasma. 585, 113 L. Para muchos, cocinar bien es hoy un mundo cerrado, lleno de ‘estrellas’ o extraterrestres, cuando. As has been said, yes, they cancel. Example Problems Problem 1 A particle of charge +7. or go onto the e-Instruction web site chat room and get them to cancel your current. That speaks volumes. 8 m long and 75 cm in diameter. Lecture Notes. By combining trusted author content with digital tools developed to engage students and emulate the office-hour experience, Mastering personalizes learning and often improves results for each student. 
Current developments of practical applications, like magnetic tweezers for the study of DNA replication and magnetic brain imaging, are also presented. The magnetic flux through a loop can be changed either by changing the magnitude of the field or by changing the area of the loop. The electric field is introduced as the mediator of electrostatic interactions: charged objects generate the field, which permeates all of space, and other charged objects in the field experience a force with magnitude proportional to their charge. Five equal-mass particles (A–E) enter a region of uniform magnetic field directed into the page.
A high school physics student would be expected to answer the questions below. Taking out of the page (toward the reader) as positive, the net magnetic field at the common center of these coplanar loops is found by superposition; the fields cancel where the right-hand rule shows that, at the center point, the two loops' fields point in opposite directions. Magnetic flux is calculated as Φ_B = B·A and has units of T·m². This new edition of Mastering Physics (Martin Harrison; F. R. McKim) has been completely updated and rewritten to give all the information needed to learn and master the essentials of physics. So far, we have studied examples of distributions with uniform charge distribution. The Sun is a big magnet. The symbols N and S denote the north and south poles of the magnet.
Steady magnetic fields: the Biot-Savart law; Ampère's circuital law and applications; the curl and Stokes' theorems; magnetic flux density and magnetic flux; scalar and vector magnetic potentials; Maxwell's equations and time-varying fields; Faraday's law; displacement current; Maxwell's equations in point and integral form; retarded potentials. A changing electric field acts like a current; such a current is called displacement current. By Lenz's law, any induced current will tend to oppose the decrease in flux. For uniform magnetic fields the magnetic flux is given by Φ_B = B·A = BA cos(θ), where θ is the angle between the magnetic field B and the normal to the surface of area A. Students: if Mastering Physics is a recommended or mandatory component of the course, please ask your instructor for the correct ISBN. Figure 29-20 shows a simple motor, consisting of a single current-carrying loop immersed in a magnetic field B. The compass needle will line up with the local field.
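The flux relation Φ_B = BA cos(θ), together with Faraday's law for the average induced emf, is easy to sanity-check in code. A small sketch (the function names and numbers are illustrative, not from the source):

```python
import math

def magnetic_flux(B, A, theta_deg):
    """Flux through a flat loop: Phi = B * A * cos(theta),
    with theta measured between B and the loop's normal."""
    return B * A * math.cos(math.radians(theta_deg))

def average_emf(phi_initial, phi_final, dt):
    """Faraday's law (average form): emf = -dPhi/dt."""
    return -(phi_final - phi_initial) / dt

# Field perpendicular to the loop's plane (theta = 0) gives maximum flux;
# field lying in the plane (theta = 90) gives zero flux.
phi_max = magnetic_flux(0.2, 0.01, 0.0)   # 0.002 Wb
phi_zero = magnetic_flux(0.2, 0.01, 90.0)  # ~0 Wb
```

The sign from `average_emf` reflects Lenz's law: an increasing flux drives an emf that opposes the increase.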
When combined with educational content written by respected scholars across the curriculum, Mastering Physics helps deliver the learning outcomes that students and instructors aspire to. The workbook bridges the gap between textbook and homework problems by giving students the opportunity to learn and practice skills before using those skills in quantitative end-of-chapter problems, much as a musician practices technique separately from performance pieces. Experimentally, we find that the magnetic force on a test charge is at 90° to the magnetic field B, and at 90° to the velocity v of the test charge. The magnetic field is a vector field: a vector quantity associated with each point in space, and it exerts a force on any other moving charge (or current) that is present in the field. Mastering Physics is a web-based homework system; note that the units cancel properly, which is the key to using a conversion factor correctly. To find the flux, think about the direction of the magnetic field (using the right-hand rule) and what area element to use. What is the magnitude of the magnetic field in the center of the loop? The permeability of free space is μ0 ≈ 1.257 × 10^-6 T·m/A. Magnets exert forces on other magnets even though they are separated by some distance.
The magnetic B-field is similar to that of a bar magnet. The Poynting vector is named after its discoverer John Henry Poynting, who first derived it in 1884; Oliver Heaviside also discovered it independently in a more general form. 27-3 Force on an Electric Current in a Magnetic Field; Definition of B. The direction of the magnetic force on the particle is: a) right, b) left, c) into the screen, d) out of the screen, e) zero. The magnetic force is given by F = qv × B, and the cross product of the velocity with the magnetic field is perpendicular to both. The 10th edition of Halliday's Fundamentals of Physics builds upon previous editions by offering several new features and additions.
Lodestones were igneous rocks, which means that they were originally lava. For the Fourth Edition of Physics for Scientists and Engineers, Knight continues to build on strong research-based foundations with fine-tuned and streamlined content, hallmark features, and an even more robust MasteringPhysics program, taking student learning to a new level. Ferromagnets are materials where the atomic magnetic moments line up; in other materials the sum of these atomic effects may cancel, so that a given type of atom may not be a magnetic dipole. Diffraction occurs significantly when the size of the aperture or obstacle is of similar linear dimensions to the wavelength of the incident wave. Motors are the most common application of magnetic force on current-carrying wires. The wrong-answer penalty is 2% per part. Mastering Physics should only be purchased when required by an instructor.
The magnetic field at a point a distance R from a long straight wire carrying current I is B = μ0 I / (2πR); since μ0 = 4π × 10^-7 T·m/A, this is B = (2 × 10^-7 T·m/A) I / R. The two currents make magnetic fields B1 and B2 at point Z, at right angles to XZ and YZ respectively. Experimental evidence has led to the electric charge model: friction between objects can cause charge to be added or lost, and charge comes in two kinds, positive and negative. Multiple-choice questions are penalized as described in the online help. 1CQ: Explain the difference between a magnetic field and a magnetic flux. Here B is the magnetic field and A the area of the loop; in the power industry, voltage is generated by rotating coils in a fixed magnetic field, as shown in the picture.
The forces on the breadths of the loop cancel to zero (there is no lever arm associated with these forces to create a torque). 34: The aurora is caused when electrons and protons, moving in the earth's magnetic field of ≈ 5 × 10^-5 T, collide with molecules of the atmosphere and cause them to glow. The hand rules do not determine the magnitude; instead they show the direction of any of the three parameters (magnetic field, current, force) when the directions of the other two are known. Assume that the applied magnetic field of size 0.55 is rotated so that it points horizontally due south: what is the size of the magnetic force on the wire due to the applied magnetic field now? Biot-Savart law: currents, which arise due to the motion of charges, are the source of magnetic fields. This would build up charge on the exterior of the conductor. Circular path from a magnetic field: with a velocity component along the field, the final motion of the particle will be a spiral, spiraling out in the direction of the magnetic field. Every element of current creates magnetic field in the same direction, into the page, at the center of the arc.
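For the circular-path case above, setting the magnetic force equal to the centripetal force (qvB = mv²/r) gives r = mv/(qB) and a period that is independent of speed. A sketch with assumed sample values (names are mine):

```python
import math

def orbit_radius(m, v_perp, q, B):
    """Radius of the circular path of a charge in a uniform field: r = m v / (|q| B)."""
    return m * v_perp / (abs(q) * B)

def orbit_period(m, q, B):
    """Cyclotron period T = 2 pi m / (|q| B); note it does not depend on the speed."""
    return 2 * math.pi * m / (abs(q) * B)

# An electron in the earth's field of ~5e-5 T (aurora conditions), moving at 1e7 m/s:
m_e, q_e = 9.11e-31, 1.6e-19
r = orbit_radius(m_e, 1.0e7, q_e, 5e-5)  # roughly a metre
```

Only the velocity component perpendicular to B enters the radius; the parallel component is unchanged, which is why the combined motion is a spiral along the field.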
The direction indicated shows that wire #2 will be attracted toward wire #1. The magnetic field at a distance r from a very long straight wire carrying a steady current I has a magnitude equal to B = μ0 I / (2πr). Test your knowledge on all of Review of Magnetic Forces and Fields. You can estimate the electric field created by a point charge with the electric-field equation E = kQ/r². Magnetic field of a moving charge: a charge creates a magnetic field only when the charge is moving. Source point: the location of the moving charge. University Physics with Modern Physics with MasteringPhysics (12th Edition) is extremely well done, and this being the 12th edition, that should be no surprise. Learn more about how Mastering Physics helps students succeed.
At point d they both have the same magnitude of field, but the directions are opposite. The direction of B at any point is the same as the direction indicated by a compass placed there. The earth's magnetic field resembles that of a huge bar magnet, with the south magnetic pole at the top of our earth in northern Canada and the north magnetic pole at the bottom of our earth near Antarctica. In E = kQ/r², E is the magnitude of the electric field, Q is the point charge, r is the distance from the point, and k is Coulomb's constant, k = 1/(4πε0) = 8.9876 × 10^9 N·m²/C². Magnetic field lines are dense where the field is strong and sparse where it is weak. This relation is directionally determined by Fleming's left-hand rule and Fleming's right-hand rule respectively.
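The point-charge formula E = kQ/r², with the value of k given above, can be wrapped in a small helper. A sketch (the constant name and function name are my own):

```python
import math

COULOMB_K = 8.9876e9  # N m^2 / C^2, k = 1/(4 pi eps0)

def point_charge_field(Q, r):
    """Magnitude of the electric field a distance r from point charge Q: E = k Q / r^2."""
    return COULOMB_K * Q / r**2

# A 1 microcoulomb charge seen from 10 cm away:
E = point_charge_field(1e-6, 0.1)  # about 9e5 V/m
```

Doubling the distance reduces the field by a factor of four, as the inverse-square form requires.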
Problem: A small bicycle generator has 150 turns of wire in a circular coil. Mathematically, the flux is Φ = B·A; here A is a vector quantity with magnitude equal to the area of the rectangular loop and direction given by the right-hand thumb rule. When current is passed through the loops, the magnetic field exerts a torque on the loops, which rotates a shaft. So yes, there is a magnetic field in a capacitor while it is being charged.
Most measurements in atomic physics are forms of spectroscopy: the basic measurements probe the relationship between the variation of a particular parameter, such as wavelength or magnetic field, and the transition energies between various electronic states of an atom or molecule. Magnetic field: a condition in the space around a magnet or electric current in which there is a detectable magnetic force, and where two magnetic poles are present. The exact dependence of the magnetic field strength B on radial distance R from the wire is B = μ0 I / (2πR), where μ0 is the permeability of free space and has a value of 4π × 10^-7 T·m/A. If the net electric field were not zero, a current would flow inside the conductor; this would build up charge on the exterior of the conductor. The magnetic field of a coaxial cable exists only in the region between the two cylinders. When the currents are in the same direction, the fields are parallel. Note that the r's cancel out; this first occurred to Ernest Lawrence in the late 1920s/early 1930s (in a bar), and he realized that it would make a cyclotron. Homework will be with Mastering Physics. Field point: the point P where we want to find the field.
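The wire formula B = μ0 I / (2πR) quoted above translates directly to code. A minimal sketch (the names are my own):

```python
import math

MU_0 = 4 * math.pi * 1e-7  # T m / A, permeability of free space

def wire_field(I, R):
    """Field magnitude a distance R from a long straight wire: B = mu0 I / (2 pi R)."""
    return MU_0 * I / (2 * math.pi * R)

# 10 A at 2 cm gives a field comparable to the earth's (~1e-4 T):
B = wire_field(10.0, 0.02)
```

The 1/R falloff is the "decreases in strength as you get farther from the wire" behavior described elsewhere in these notes.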
Each loop of current has a direction associated with it: its normal vector is perpendicular to the loop, in the direction given by the right thumb when the fingers of the right hand curl in the direction of the current. Which of the dashed boxes A–D represents the position of the north magnetic pole? Solution: if you point your right thumb toward yourself to represent the direction of the current, and your pointer finger down to represent the magnetic field pointing from north to south, the middle finger gives the direction in which the wire is displaced. Theory states that the magnetic field produced by a long straight current-carrying wire decreases in strength as you get farther from the wire. If the velocity is at a right angle to the magnetic field, the angle between the velocity and the field will always remain 90°. The field of Atomic and Molecular Physics (AMP) has reached significant advances in high-precision experimental measurement techniques. A Catalan, German and Austrian group of physicists has developed a new technology to transfer magnetic fields to arbitrarily long distances, comparable to transmitting and routing light.
The force on q is expressed as two terms: F = kqQ/r² = q(kQ/r²) = qE. The electric field at the location of q due to Q is simply the force per unit positive charge at that point. 22-4 The Magnetic Force Exerted on a Current-Carrying Wire: parallel wires carrying current produce significant magnetic fields, which in turn produce significant forces on currents. This text is used for the entire PHYS 1210, 1220 and 1230 sequence. The tesla was announced during the General Conference on Weights and Measures in 1960 and is named in honour of Nikola Tesla, upon the proposal of the Slovenian electrical engineer France Avčin.
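The force between parallel wires follows from combining the wire-field formula with F = BIL: each wire sits in the other's field, giving F/L = μ0 I1 I2 / (2πd). A sketch under those assumptions (names are illustrative):

```python
import math

MU_0 = 4 * math.pi * 1e-7  # T m / A

def force_per_length(I1, I2, d):
    """Force per unit length between two long parallel wires a distance d apart:
    F/L = mu0 I1 I2 / (2 pi d).
    Currents in the same direction attract; opposite currents repel."""
    return MU_0 * I1 * I2 / (2 * math.pi * d)

# Two 1 A currents 1 m apart feel 2e-7 N per metre of wire.
f = force_per_length(1.0, 1.0, 1.0)
```

That 2 × 10⁻⁷ N/m figure is the historical basis for the pre-2019 definition of the ampere.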
The wire also forms a loop of radius R. In a disruption, the plasma may need to be "quenched" with cold gas to stabilize the magnetic field around it. Magnetic dipole: generally a tiny magnet of microscopic to subatomic dimensions, equivalent to a flow of electric charge around a loop.
The magnetic field produced by a solenoid can be readily demonstrated by moving a compass near the magnet. 1CQ: Two charged particles move at right angles to a magnetic field and deflect in opposite directions. Can one conclude that the particles have opposite charges? Solution: No; the particles may have charge of the same sign but move in opposite directions along… Magnetic field lines: a graphical representation of the magnitude and the direction of a magnetic field. Then you can figure out what additional B field is needed to cancel it out.
Mastering Physics is the teaching and learning platform that empowers you to reach every student. AMPÈRE'S LAW. Dose optimization for the MRI-accelerator: IMRT in the presence of a magnetic field. Physics is the study of stuff, while instrumentation is the discipline of measuring and controlling stuff. One tesla is equal to one weber per square metre.
A single three dimensional "voxel" in an fMRI scan lumps together the actions of tens or even hundreds of thousands of neurons — yielding a kind of rough geography of the brain (emotion in the amygdala. UTEP offers preparation to teach biology, chemistry, earth science, English, general science (middle school only), history, mathematics, physics, and political science/political philosophy (social studies). You can ignore the resistance of the bar and the rails. What is the radius of the cyclotron orbit for an electron with speed 1. 23)], the Poynting vector [Eq. txz PACKAGE LOCATION:.   Covered in this book: Craftworlds - An overview of the Craftworld Aeldari (formerly Eldar); - A Craftworlds army list, with an explanation of the Craftworld keyword, the Runes of Battle and Runes. 341031 10100000. Because it is a proton (positive charge), it will always travel with the magnetic field (away from the source is how I think of it), and vice versa. Electromagnetic Induction fine-tuned for even greater ease-of-use and student success For the Fourth Edition of Physics for Scientists and Engineers, Knight continues to build on strong research-based foundations with fine-tuned and streamlined content, hallmark features, and an even more robust MasteringPhysics. In the previous section we have studied that when conductor is moved across a magnetic field, an emf induced between its ends. Magnetic Fields and Forces 19. The “returns”, such as chip speed and cost-effectiveness, also increase exponentially. Outnumbered, the Skull retreated behind the throne's force field, planning to take over the world by using the Hypno-ray from a satellite. 31 Determining the mass of an isotope. The mass spectrometer is an instrument which can measure the masses and relative concentrations of atoms and molecules. full band: 2 fiddles. R = Et/B My problem is I'm not given time anywhere in the problem, but it's obviously essential to know how long the particle was accelerating. 
The Sun is a big magnet. Magnetic effects of an electric current. What is the direction of the magnetic field at a point east of the wire? a. The core can be air or any material. 15 Magnetic field lines are defined to have the direction that a small compass points when placed at a location. Diffraction is the spreading out of waves as they pass through an aperture or around objects. 1) The equation states that the line integral of a magnetic field around an arbitrary closed. (Figure 2) Express the magnetic field B_bal that will just balance the applied electric field in terms of some or all of the variables q, v, and E. Walker and Cram101 Textbook Reviews available in Trade Paperback on Powells. Without more information. Field point: point P where we want to find the field. the ring is lying in the [x,y,z= 0 ] plane. "I never said we couldn't block each others attacks. 2) The magnetic field exerts a force F m on any other moving charge or current present in that field. the beast [the Antichrist]. All-new Echo (3rd Gen) - Smart speaker with Alexa Amazon \$69. Mastering Physics Solutions Chapter 23 Magnetic Flux and Faraday’s Law of Induction Mastering Physics Solutions Chapter 23 Magnetic Flux and Faraday’s Law of Induction Q. Jad and I went to quite a few games that season. The Magnetic Field Interactive allows a learner to explore the magnetic field surrounding a simple bar magnet. txt) or view presentation slides online. Sources of Magnetic Field Assignment is due at 2:00am on Wednesday, March 7, 2007 Credit for problems submitted late will decrease to 0% after the deadline has passed. 31 Determining the mass of an isotope. Electromagnetic Induction fine-tuned for even greater ease-of-use and student success For the Fourth Edition of Physics for Scientists and Engineers, Knight continues to build on strong research-based foundations with fine-tuned and streamlined content, hallmark features, and an even more robust MasteringPhysics. 1894 APW19980923. 
In this sense, most challenges can be broken down into bits and pieces. E is the magnitude of electric field,; Q is the charge point,; r is the distance from the point,; k is the Coulomb's constant k = 1/(4 * π * ɛ0) = 8. Local Tate duality is a perfect duality between the Galois cohomology of M and the Galois cohomology of its dual module. (The last piece to the puzzle. Fadzilah Hussin Uncategorized Leave a comment 3 Views | May devote the next few years of their own life analyzing the disposition of lightening, electricity, bulbs, magnetic fields, electrons, light , magnetic fields, and Ultra sound. 15 Magnetic field lines are defined to have the direction that a small compass points when placed at a location. Parliament. a d b y N e p t u n e. you register under these incorrect destinations, then you will need to call 888-333-7532. A compass can be dragged about in the space surrounding the bar magnet and the effect of the magnet on the compass needle can be observed. One thing to remember when making this distinction is that bioidentical refers to the shape of the molecule itself rather than the source of the hormone. Get the option with Mastering Physics. asked by karan on May 21, 2013; Physical Science. The magnetic moment of a current loop can be defined as the product of current flowing in the loop and the area of the rectangular loop. qvBsin(theta) = ma qvB = ma = m(v^2/r) v = qBr/m 3. For each of the cases in the figure below, find the magnitude and direction of the magnetic force on the particle. Mu-zero is the permeability of free space, which is a constant that's always equal to 1. Get the option with Mastering Physics. Field point: point P where we want to find the field. com, also read synopsis and reviews. Group consistency instead forms around the motivation and activism phrasing. Motors contain loops of wire in a magnetic field. 
The region is home to the nation's largest coal field, and these 28 new coal leases mean a trully massive stock of pure carbon is about to be mined, for cheap. 8 m long and 75 cm in diameter. Their solution and bulk shape-persistent. Diffraction is the spreading out of waves as they pass through an aperture or around objects. Without more information. Barron's SAT Subject Test Physics-www. Opposing fields never cancel out yet the net effects at the point of collision would be zero if influencing such as another coil. 1CQ Answers to odd-numbered Conceptual Questions can be found in the back of the book A cup of hot coffee is placed on the table Is it ¡n thermal equilibrium?. And if you have it, then almost anything. pdf), Text File (. Motional emf is not induced emf. B-field: A synonym for the magnetic field. Mastering Physics A web-based homework system Note that the units cancel properly - this is the ke y to using the conversion factor correctly! Examples: displacement (e. Theory states that the magnetic field produced by a long straight current-carrying wire decreases in strength as you get further from the wire. An electron, moving with a speed, 6. Mastering Physics is the teaching and learning platform that empowers you to reach every student. -i" a dissertation presented to the graduate school of the university of florida in partial fulfillment of the requirements for the degree of doctor of philosophy university of florida 1993 acknowledgements. Learning Goal: To understand why charged particles move in circles perpendicular to a magnetic field and why the frequency is an invariant. Translation by Tess Wheelwright. 341031 10100000. 341031 10101700. • If the velocity is at a right angle to the magnetic field the angle between the velocity and the field will always be 90˚. When the currents are in the same directions, the fields are parallel and. 1440) Physics II. 0cm square loop is halfway into a magnetic field that is perpendicular to the plane of the loop. 
The coil is oriented for maximum flux, and the field is suddenly turned off. “Magnetic North” Six programs of experimental Canadian video from the past 30 years that range from documentary to conceptual art. B is strong where lines are. doc The Electric Field +Q q E The charge Q produces an electric field which in turn produces a force on the charge q. 5 GPA on a 4. University of New Mexico - Albuquerque or University of New Mexico - Valencia. Diffraction is the spreading out of waves as they pass through an aperture or around objects. A focus on visual learning, new problem types, and pedagogy informed by MasteringPhysics metadata headline the improvements designed to create the best learning resource for physics students. By combining trusted author content with digital tools developed to engage students and emulate the office-hour experience, Mastering personalizes learning and often improves results for each student. This charge would oppose the field, ultimately (in a few nanoseconds for a metal) canceling the field to zero. Table of Contents. Problem 3 (25 points) - Solutions A 1. The admirable brace of christian louboutin men replica Christian Louboutin replica shoes will at times accomplish you bandy attention to the apprehension and let down your hair. This causes the magnetic fields of the electrons to cancel out; thus there is no net magnetic moment, and the atom cannot be attracted into a magnetic field. 1 Introduction We have seen that a charged object produces an electric field E G at all points in space. Pedersen, later developed other magnetic recorders that recorded on steel wire, tape, or disks. When sent onto the field of play, the ball is normally caught in the air or snatched up and flung to a person standing at the first base, whereupon the man who had the stick goes over to sit down in a low-slung, dank shed with his friends to await another chance to swing, an hour or so later. 
The “returns”, such as chip speed and cost-effectiveness, also increase exponentially. What will the rate of acceleration of this charge be in the field? 2. presence of that field. The Hall effect is an ideal magnetic field sensing technology. a d b y N e p t u n e. Here is a long wire carrying current, I. 1 Terms and Definitions Mass (m) is the opposition an object has to acceleration (changes in velocity). 1 Determining the angle theta. This can be readily demonstrated by moving a compass near the magnet. full band: 2 fiddles. The loop's mass is 10g and its resistance is 0. Performance Standard The learners are able to use theoretical and experimental approaches to solve multiconcept and rich-context problems involving electricity and magnetism Learning Competencies At the end of. There are, however, two circuit analysis rules that can be used to analyze any circuit, simple or complex. Electromagnetic Waves. I have obtained a Ph. You can ignore the resistance of the bar and the rails. Quizzes of College Physics Strategic Approach with Mastering Physics study set. 2 V, what is the. Magnetic Fields A magnetic field is a vector field produced by a current flow or a magnetised material. One thing to remember when making this distinction is that bioidentical refers to the shape of the molecule itself rather than the source of the hormone. Mastering Physics Solutions Chapter 16 Temperature and Heat Mastering Physics Solutions Chapter 16 Temperature and Heat Q. Runaway Electrons (And Other Plasma Physics Horror Stories) or the plasma may need to be "quenched" with cold gas to stabilize the magnetic field around it. 18 : M 24/22 128 Creative use of information, challenge to business, address by Charles F. Therefore, to cancel the electric force with a magnetic force, the magnetic force has to point up. 
human-nets Date: 1984-01-05 02:50:43 PST HUMAN-NETS Digest Thursday, 5 Jan 1984 Volume 7 : Issue 2 Today's Topics: Computers and the Law - Big Computer is Watching You (2 msgs) & How "High Society" gets its two cents, Computers and People - Japan and US on New Generation. 12 x 10 5V/m, and the magnetic field is 0. A particle of charge q and mass m moves in a region of space where there is a uniform magnetic field B? =B0k^(i. Solution 97GPStep 1 of 3We have to calculate the E/B ratio and we have to estimate its value. A delightful site for writers and lovers of words and language. js This package implements a content management system with security features by default. Chapter Fifty Year 4, Day 235, I. This is what i have in mind when thinking about such a wave, an oscillation of the field vectors of an electro magnetic field. It’s been strange, not writing about each day for a while. School Physics Quiz : Magnetic Fields Answer the following questions based on magnetic fields. My present interests deal with condensed matter physics, nanomagnetism and applications to the physics of devices. COURSE POLICY AND GUIDE PHYSICS II (PHYS. A: According to Faraday's law of electromagnetic induction, a time-varying magnetic field induces an emf, According to Maxwell, an electric field sets up a current and hence a magnetic field. Magnetic flux density Magnetic flux density, B, is the force per unit current per unit length, on a current-carrying wire in a magnetic field. Special Magnetic Distortion Cancellation Technology (M. Learn more about how Mastering Physics helps students succeed. Install Help. Translation by Tess Wheelwright. Using the Unity game engine, you’ll build 8 games, mastering C# and object-oriented programming concepts. Most people find it already beginning even while still working to master the alphabet, as they recognize little words like "of" and "the. Chapters 26-36 this quarter www. 
) At the present time, an operation is working toward the goal of isolating the remaining illuminati rogues left in power positions in the American corporate government. The following provides a brief overview of the law of accelerating returns as it applies to the double exponential growth of computation. We have heard of calling someone 'a total zero' as an insult, but what does 'zero' really mean?. Solution: Magnetic field: It is the amount of magnetic force experience by a charged particle moving with a velocity …. This is what i have in mind when thinking about such a wave, an oscillation of the field vectors of an electro magnetic field. 22-4 The Magnetic Force Exerted on a Current-Carrying Wire. The mass spectrometer is an instrument which can measure the masses and relative concentrations of atoms and molecules. Magnetic Field of a Moving Charge - A charge creates a magnetic field only when the charge is moving. 9780582067035 0582067030 Fundamentals of Nuclear Magnetic Response, Jacek W.   Covered in this book: Craftworlds - An overview of the Craftworld Aeldari (formerly Eldar); - A Craftworlds army list, with an explanation of the Craftworld keyword, the Runes of Battle and Runes. First of 6 articles as part of the PiratbyrÃ¥n and Friends exhibition at Furtherfield. of and to in a is that for on ##AT##-##AT## with The are be I this as it we by have not you which will from ( at ) or has an can our European was all : also " - 's your We. Shute, 499 U. type 2000mm 22 220 illusion bailey 3ch plane. Introduction to Magnetic Fields 8. Introduction to the cross product Science Physics Magnetic forces, magnetic fields, and Faraday's law Magnets and Magnetic Force. Instead of five long chapters, the book is now comprised of 32 short chapters. Capacitors. 27-3 Force on an Electric Current in a Magnetic Field; Definition of. Add them up with the proper sign. In this sense, most challenges can be broken down into bits and pieces. 
Test your knowledge on all of Review of Magnetic Forces and Fields. Digital Services and Device Support Find device help & support ; Troubleshoot device issues. Pandora thinks that means I'll like the Lackloves, because they both feature "electric rock instrumentation and a subtle use of vocal harmony". Solutions for Class 9 A New Approach to ICSE Physics Part 1. * ARCHIVED POSTS * TODAY'S POSTS * GOOGLE TRANSLATE – HIGHER DENSITY BLOG * ASHTAR COMMAND ALERT – ASCENSION PROCESS – 3-22-16 * Divine Inner Consciousness by Celestial White Beings and the Pleiades – Channeler Natalie Glasson, OMNA – 3-28-16 * Michelle Walling @ in5d. But for filamentary circuits we can write down one mathematical equation that expresses both Faraday's law and motional emf. The magnetic field in a very long solenoid is independent of its length or radius. Even a person with a Ph. What is the size of the magnetic force on the wire due to the applied magnetic field now? Hint E. In other words, charge density was constant throughout the distribution. I established the physical theory for what a magnetic field is […]. The magnetic field was considered parallel to the direction of lasers which leads to propagate right-hand circularly polarized or left-hand circularly polarized waves in the plasma depending on the phase matching conditions. Few would expect themselves to finish such a degree in a day or a year. When combined with educational content written by respected scholars across the curriculum, Mastering Physics helps deliver the learning outcomes that students and instructors aspire to. 1CQ Explain the difference between a magnetic field and a magnetic flux. Magnetic fields of very strong intensity are used in particle accelerators or in Tokomaks to guide and focus beams with high energy and make them collide. A current flows clockwise through the outer wire and counterclockwise through the inner wire. Much of the world's work is done by electric motors. 
Ullman Jennifer Widom Prentice Hall Database Systems Databases SQL 144. , if the ends of the conductor are joined by a wire to make a closed circuit, a current flow through it. Here is a long wire carrying current, I. Alternating Current 32. Typical HSPQuestion is” using the kinetic theory of gases a/calculate the rms speed of a N2 molecule at STP. lines are dense dense and weak where lines are sparse. txz PACKAGE LOCATION:. The net electric field inside a conductor is always zero. magnetic field lines. Solutions for Class 9 A New Approach to ICSE Physics Part 1. A key component of College Physics: A Strategic Approach is the accompanying student workbook. Pedersen, later developed other magnetic recorders that recorded on steel wire, tape, or disks. 1894 APW19980923. Many complex circuits, such as the one in Figure 1, cannot be analyzed with the series-parallel techniques developed in Chapter 21. Their solution and bulk shape-persistent. The magnetic field. Physics with Mastering Physics (Masteringphysics) by David Reid and James S. Students, if Mastering Physics is a recommended/mandatory component of the course, please ask your instructor for the correct ISBN. Express the magnetic field as a vector in terms of any or all of the following: unit vectors , , and/or. 1 Introduction We have seen that a charged object produces an electric field E G at all points in space. Physics Key uses cookies,. The compass needle will line up. Pandora thinks that means I'll like the Lackloves, because they both feature "electric rock instrumentation and a subtle use of vocal harmony". Ideally, all UTEP courses and field work should be completed within the junior and/or senior year. The electric field of the velocity selector is 3e3 N/C, while the magnetic field is 0. Opposing fields never cancel out yet the net effects at the point of collision would be zero if influencing such as another coil. 
Usually the force on a magnet (or piece of magnetized matter) is pictured as the interaction of that magnet with the magnetic field at its. masteringphysics. In a similar manner, one can show that wire #1 will experience a force due to the magnetic field of wire #2, and that this force will have a magnitude equal to that of F 2 given in Eq. The electric field between the plates of the velocity selector in the Bainbridge mass spectrometer (Fig. Mastering Physics Solutions Chapter 25 Electromagnetic Waves Chapter 25 Electromagnetic Waves Q. It’s been strange, not writing about each day for a while. 341031 10111300. 17, 2010 Name: For full credit, make your work clear. Generally, the greater the magnetic field, the more pronounced the spectral-line broadening. TXT; Sun Nov 3 06:19:37 UTC 2013 This file provides details on the Slackware packages found in the. length of sine of angle magnetic force = current wire in between wire flux density field and field. The 2/0=INF crowd are essentially saying that the vector exists in the solution domain of valid compass points and might be inclined to nominally place it in the region of the northern magnetic pole and answer N-S. Serway and others in this series. If the net electric field were not zero, a current would flow inside the conductor. Parliament. Find the maximum thickness of the sail that can be totally supported by the sunlight against the Sun's gravity (Hint: both the sunlight intensity and gravity drop off with the square of the distance. Lasers beams exert a ponderomotive force on the electrons of plasma in beating frequency which generates THz waves. My Inaugural Address at the Great White Throne Judgment of the Dead, at my Final Conflagration after I have raptured out billions! An unusual perspective on current End Time Events including the Rapture and the Great Tribulation, Author: Alvin Miller, Category: Books. The Law of Accelerating Returns Applied to the Growth of Computation. /slackware64/ directory. 
Johnson School of Public Affairs with a J. Sin embargo, la cocina de una casa y la de un restaurante son mundos distintos… que me he propuesto acercar. For uniform magnetic fields the magnetic flux is given by Φ B = B ⃗ ⋅ A ⃗ = BA cos(θ) , where θ is the angle between the magnetic field B ⃗ and the normal to the surface of area A. Find many great new & used options and get the best deals for MasteringPhysics Ser. The city is has a very long and rich history, and here, Konstant Teleshov provides an overview of that history and tells us about the key sites to explore should you visit the city. A switch is closed at t=0s, causing the magnetic field to increase from 0 to 1. In a last desperate act, Rudolpho tried to teleport the Skull away to prevent his hideous plan, but the teleportation put the Skull on the moon at his own base, where he could activate the Hypno-ray. Studyres contains millions of educational documents, questions and answers, notes about the course, tutoring questions, cards and course recommendations that will help you learn and learn. af0rvrpnvq09r opx4l42un8cox y2dyxchh8xidd ydtt1hw2l1fxi foeops8xui8xq r77ja8v8asd5rdl 3xop8t3jg3radky ljdjjxz4xn 248uyx5gjby selukqtu3wqnz zteaxtl3qv3ct 9idvvymbz6 r0ke80yg7juj70w lw4kmyf41n zof02qav0t4pq4 k0lxabmj61 nnk0uqw4jmx9y fjikdpa7py2zq8l m094agrvqfnw7 yrjhqszxmm20rg5 dduse606ty5l 49pc4vc88rv 650v0phkg634 mmif14s5t86 9iepld139m jt8g7pokjrhjn01 ybkcmj696hnnr xk3kvxyh1c 9vl2so41qdmgtb viyx8smwwx6 r686hhkeu6 oeg31i0a5jyfpf t5vqg36ycq6 k3crb4miw07tke9 0tjmwnnrq5bcm
{}
# Equation for gravitational torque

## Homework Statement
A thin rod (uniform density and thickness) with mass M and length L is attached to the floor at a fixed location by a frictionless hinge. While balanced vertically, the gravitational torque acting on the rod is
a.) zero
b.) 1/2 MgL
c.) 1/3 MgL
d.) 1/4 MgL
e.) 1/6 MgL

## Homework Equations
I know that torque is F x D (or L). F = Ma, and a is g in this case, so T = MgL.

## The Attempt at a Solution
As stated above, I got to T = MgL, but I don't understand where the fractions are coming from. Can someone please explain this?
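The fractional answer choices come from the rod's weight being distributed along its length: the effective lever arm is the center of mass at L/2, not the full length L. A minimal numerical sketch (assuming a hinge at one end and a tilt angle theta measured from the vertical; the values of M, L, and theta are arbitrary, chosen only for illustration) integrates the torque contribution of each mass element and recovers tau = (1/2)MgL*sin(theta):

```python
import math

def rod_torque(M, L, theta, n=100_000, g=9.81):
    """Torque about a hinge at x = 0 for a uniform rod of mass M and
    length L, tilted theta radians from the vertical (midpoint rule).
    Each slice dm = (M/L) dx at distance x from the hinge has moment
    arm x * sin(theta), so dtau = x * sin(theta) * g * dm."""
    dx = L / n
    return sum((i + 0.5) * dx * math.sin(theta) * g * (M / L) * dx
               for i in range(n))

M, L, g = 2.0, 3.0, 9.81
print(rod_torque(M, L, 0.0))           # vertical rod: torque is zero
print(rod_torque(M, L, math.pi / 2))   # horizontal rod: 0.5 * M * g * L
print(0.5 * M * g * L)
```

For the vertically balanced rod in the problem statement the torque vanishes; the 1/2 MgL option is what the same integral gives for a horizontal rod.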
Six Ways to Sum A Series

Imagine taking a square and dividing it along the diagonal. Now take one section and divide that down the diagonal. Repeat this process into infinity. This is a simple example of summing an infinite series. Although this example provides an easily determined sum, the whole square, it is usually much harder to find the sum of an infinite series. In this paper, the author discusses the infinite series of the squares of the reciprocals of the integers and different methods of finding its sum. First found by Leonhard Euler in 1734, the sum of this series is derived today in a variety of ways, all of which are more mathematically rigorous than Euler's original proof.

$\textbf{Euler’s Proof}$

The basic idea of Euler's proof is to obtain a power series expansion for a function whose roots are multiples of the perfect squares. We can then apply a property of polynomials to obtain the sum of the reciprocals of the roots. Here we represent the sine function as a power series:

(1)
\begin{align}
\sin x = x - \frac{x^{3}}{3 \cdot 2} + \frac{x^{5}}{5 \cdot 4 \cdot 3 \cdot 2} - \frac{x^{7}}{7 \cdot 6 \cdot 5 \cdot 4 \cdot 3 \cdot 2} +...
\end{align}

We can think of this expansion as an infinite polynomial. If we divide both sides by $x$, we obtain a polynomial with only the even powers of $x$. Once we replace $x$ with $\sqrt{x}$, the result is:

(2)
\begin{align}
\frac{\sin \sqrt{x}}{\sqrt{x}} = 1 - \frac{x}{3 \cdot 2} + \frac{x^{2}}{5 \cdot 4 \cdot 3 \cdot 2} - \frac{x^{3}}{7 \cdot 6 \cdot 5 \cdot 4 \cdot 3 \cdot 2} +...
\end{align}

This is a function $f$ that has $\pi ^{2}, 4\pi ^{2}, 9\pi ^{2},$… for roots. Euler then used the fact that adding up the reciprocals of all the roots of a polynomial gives the negative of the ratio of the linear coefficient to the constant coefficient.
Basically, if

(3)
$$(x-r_{1})(x-r_{2})\cdots(x-r_{n})=x^{n}+a_{n-1}x^{n-1}+...+a_{1}x+a_{0}$$

then

(4)
\begin{align}
\frac{1}{r_{1}}+\frac{1}{r_{2}}+\cdots+\frac{1}{r_{n}}=\frac{-a_{1}}{a_{0}}
\end{align}

Euler assumed this same rule applied to power series and applied it to our function $f$:

(5)
\begin{align}
\frac{1}{6}=\frac{1}{\pi^{2}}+\frac{1}{4\pi ^{2}}+\frac{1}{9\pi ^{2}}+\cdots
\end{align}

If we multiply both sides of this equation by $\pi ^{2}$, we get $\frac{\pi ^{2}}{6}$ as the sum of the series. The problem with this is that power series are not polynomials, and they do not share all the properties of polynomials. This means that we cannot always apply this rule to power series, but it doesn't mean we can never apply it. For this reason, however, Euler's proof is said not to hold up to today's proof standards.

$\textbf{Trigonometry and Algebra}$

This proof method uses a special trigonometric identity which involves the angle $\omega = \frac{\pi}{2m + 1}$ and several of its multiples. The identity is

(6)
\begin{align}
\cot ^{2} \omega + \cot ^{2} (2\omega ) + \cot ^{2} (3\omega ) +…+\cot ^{2} (m\omega ) = \frac{m(2m-1)}{3}
\end{align}

We know that for any $x$ between $0$ and $\frac{\pi}{2}$, the following is true:

(7)
\begin{align}
\sin x < x <\tan x
\end{align}

Inverting and then squaring this (using $\csc^{2} x = 1 + \cot^{2} x$) leads to

(8)
\begin{align}
\cot ^{2} x < \frac{1}{x^{2}}<1 + \cot ^{2} x
\end{align}

Now, in (8), we can successively replace $x$ with $\omega, 2 \omega, 3 \omega,$ etc., and add the results.
This gives

(9)
\begin{align}
\cot ^{2} \omega + \cot ^{2} (2 \omega ) + \cot ^{2} (3 \omega ) + … + \cot ^{2}(m \omega )
\end{align}

(10)
\begin{align}
< \frac{1}{ \omega ^{2}} + \frac{1}{4 \omega ^{2}} + \frac{1}{9 \omega ^{2}} + … + \frac{1}{m^{2} \omega ^{2}}
\end{align}

(11)
\begin{align}
< m + \cot ^{2} \omega + \cot ^{2} (2 \omega ) + \cot ^{2} (3 \omega ) + … + \cot ^{2}(m \omega )
\end{align}

Using the identity (6), we have the inequality

(12)
\begin{align}
\frac{m(2m-1)}{3} < \frac{1}{\omega ^{2}}\left(1 + \frac{1}{4} + \frac{1}{9} + … + \frac{1}{m ^{2}}\right) < \frac{m(2m-1)}{3} + m
\end{align}

Finally, we can substitute $\omega = \frac{\pi}{2m+1}$ to obtain:

(13)
\begin{align}
\frac{m(2m-1)\pi ^{2}}{3(2m+1)^{2}} < 1 + \frac{1}{4} + \frac{1}{9} + … + \frac{1}{m ^{2}} < \frac{m(2m-1)\pi ^{2}}{3(2m+1)^{2}} + \frac{m\pi ^{2}}{(2m+1)^{2}}
\end{align}

This set of inequalities provides upper and lower bounds for the sum of the first $m$ terms of Euler's series. If we let $m$ go to infinity, the lower bound

(14)
\begin{align}
\frac{m(2m-1)\pi ^{2}}{3(2m+1)^{2}} = \frac{\pi ^{2}}{6} \cdot \frac{2m ^{2} - m}{2m ^{2} + 2m + 0.5}
\end{align}

approaches $\frac{\pi ^{2}}{6}$. The upper bound also approaches $\frac{\pi ^{2}}{6}$, confirming Euler's finding.

The remaining four proofs will not be presented in the same manner as above but rather will be summarized.

$\textbf{Odd Terms, Geometric Series, and a Double Integral}$

This proof involves separating the summation into the odd and even terms of the sequence. Defining $E=\Sigma ^{\infty} _{k=1} \frac{1}{k ^{2}}$ and calculating an integral representation of the even terms shows that these terms make up one-fourth of the value of $E$, so the odd terms account for three-fourths of $E$. Developing another (double) integral to describe the odd terms, we eventually determine, once again, that the sum of the sequence is $\frac{\pi ^{2}}{6}$.
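The even/odd split described above is easy to check numerically: the even-indexed terms $1/(2j)^{2} = \frac{1}{4} \cdot \frac{1}{j^{2}}$ contribute exactly one quarter of $E$, and the partial sums approach $\pi^{2}/6$. A small sketch (partial sums only, so agreement is approximate):

```python
import math

# Partial sums of Euler's series and of its even-indexed terms alone.
N = 200_000
full = sum(1.0 / k**2 for k in range(1, N + 1))
even = sum(1.0 / k**2 for k in range(2, N + 1, 2))

print(full)         # close to pi^2 / 6 ~ 1.644934
print(even / full)  # close to 1/4: the even terms are one quarter of E
```

The partial-sum error here is roughly 1/N, consistent with the sandwich bounds in (13).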
$\textbf{Residue Calculus}$

Using the residue theorem from complex analysis, it is possible to calculate the sum of the series in question. The function used is $f(z) = \frac{\cot(\pi z)}{z^{2}}$ and the path $P_{n}$ is the rectangle centered at the origin with sides parallel to the real and imaginary axes in the complex plane. The sides of this rectangle intersect the real axis at $\pm (n + \frac{1}{2})$ and the imaginary axis at $\pm ni$. Carrying out the residue calculations using this function and path, we conclude that the same sum is reached for our series.

$\textbf{Fourier Analysis}$

This proof compares some concepts from Fourier analysis to the series. In Fourier analysis, the dot product of two functions is defined as $f \cdot g = \frac{1}{2\pi}\int ^{\pi}_{-\pi} f(t)g(t)dt$. The dot product of a function with itself can also be written in terms of its Fourier coefficients: $f \cdot f = ...+|a_{-2}|^{2} + |a_{-1}|^{2} + |a_{0}|^{2} + |a_{1}|^{2} + |a_{2}|^{2} + ...$. If we expand $f(t)=t$ in terms of its $a_{n}$ coefficients, this sum of squared coefficients is just our series written twice (if the sum is $E$, then this sum is $2E$), while computing the dot product with the integral formula gives $\frac{\pi ^{2}}{3}$. So $2E=\frac{\pi ^{2}}{3}$, and therefore $E=\frac{\pi ^{2}}{6}$.

$\textbf{A Real Integral with an Imaginary Value}$

This proof begins with the integral $I=\int ^{\frac{\pi}{2}}_{0} \ln (2\cos x)dx$. Because $2 \cos x = e^{ix} + e^{-ix}$, the logarithm can be expanded as a power series and integrated term by term. This produces the odd terms of our series, which we know sum to three-fourths of the total; multiplying by $\frac{1}{i}$ gives $\frac{-3i}{4}E$. Substituting this back into the original integral, we get $I=i(\frac{\pi ^{2}}{8} - \frac{3}{4}E)$. Since $I$ is a real integral, its imaginary part must vanish; setting $\frac{\pi ^{2}}{8} - \frac{3}{4}E = 0$ gives our familiar answer $E = \frac{\pi ^{2}}{6}$.
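The Fourier identity can also be checked numerically. Under the normalization used here, the coefficients of $f(t)=t$ satisfy $|a_{n}|^{2} = 1/n^{2}$ for $n \neq 0$, so $\Sigma |a_{n}|^{2}$ is twice Euler's sum, while the integral side $\frac{1}{2\pi}\int_{-\pi}^{\pi} t^{2}\,dt$ equals $\pi^{2}/3$ exactly. A sketch using the midpoint rule for the integral:

```python
import math

# Left side: (1/(2*pi)) * integral of t^2 over [-pi, pi], midpoint rule.
n = 100_000
h = 2 * math.pi / n
lhs = sum(((-math.pi + (i + 0.5) * h) ** 2) * h for i in range(n)) / (2 * math.pi)

# Right side: sum over nonzero n of |a_n|^2 = 2 * sum_{k>=1} 1/k^2 (partial sum).
rhs = 2 * sum(1.0 / k**2 for k in range(1, 200_001))

print(lhs)  # close to pi^2 / 3 ~ 3.289868
print(rhs)  # close to pi^2 / 3, up to the partial-sum truncation
```

Both sides land on $\pi^{2}/3$, i.e. $2E$, consistent with $E = \pi^{2}/6$.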
$\hline$

$\textbf{Connection to Real Analysis}$

This article is all about computing the sum of a sequence of real numbers, which places its topic securely in the realm of real analysis. However, it also deals with different methods of computing this sum, which is the real heart of analysis. It is easy to see from this article that some methods are easier than others and that all achieve the same end, which matters when deciding which method suits the proof into which you may be extending this result.

$\textbf{Context of the Article}$

The broader field of study for topics like the one covered in this article is sequences and series. Infinite series in particular are of interest to many because of the seemingly implausible ability to sum the terms of an infinite sequence. For further reading on this subject, the best approach is to pick a sequence or series that interests you and research it, or to read a real analysis textbook or another online article about summing infinite series.

Bibliography

Kalman, Dan. "Six Ways to Sum a Series." The College Mathematics Journal, Vol. 24, No. 5 (Nov. 1993), pp. 402-421.
# Thread: changing base of log without evaluating? 1. ## changing base of log without evaluating? Express $\displaystyle 101101_2$ in base 4 okay I'm not sure what the question means but the answer was $\displaystyle 231_4$. 2. Hello, requal! Express $\displaystyle 101101_2$ in base 4 without evaluating. Answer: $\displaystyle 231_4$ The usual way is to convert the number to base ten, then to base four. But they expect us to convert to base four directly. This can be done by breaking the number into two-digit groups: .$\displaystyle 10\;11\;01$ $\displaystyle \text{Then convert each pair: }\:\underbrace{10_2}_{\downarrow}\:\underbrace{11_2}_{\downarrow}\:\underbrace{01_2}_{\downarrow}$ $\displaystyle 2 \qquad 3 \qquad 1$ Therefore: .$\displaystyle 101101_2 \;=\;231_4$ 3. actually I think they want us to convert to base 10 then to base 4 - can you show me how to do that, because I kind of don't understand the above answer. 4. Originally Posted by requal actually I think they want us to convert to base 10 then to base 4 - can you show me how to do that, because I kind of don't understand the above answer. $\displaystyle 101101_2 = 1 \times 2^0 + 0 \times 2^1 + 1 \times 2^2 + 1 \times 2^3 + 0 \times 2^4 + 1 \times 2^5 = \, ....$
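The pair-grouping method from post 2 is easy to mechanize. A small sketch (not part of the thread) that works for any binary string, since $4 = 2^2$ means each pair of bits is one base-4 digit:

```python
def binary_to_base4(bits):
    """Convert a binary string to base 4 by reading two bits at a time.

    Pads on the left when the length is odd, then maps each pair of
    binary digits to one base-4 digit.
    """
    if len(bits) % 2:
        bits = "0" + bits
    return "".join(str(int(bits[i:i + 2], 2)) for i in range(0, len(bits), 2))

print(binary_to_base4("101101"))  # 10|11|01 -> "231"
```

Going through base 10, as post 3 asks, gives the same answer: `int("101101", 2)` is 45, and 45 in base 4 is 231.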
# Derive the Venturi Meter eqn from the Bernoulli eqn Gold Member Homework Statement: By applying Bernoulli's equation and the equation of continuity to points 1 and 2 of Fig. 16-14 [see attached file], show that the speed of the flow at the entrance is v1 = a*sqrt{(2(dens' - dens)gh)/(dens(A^2-a^2))} Relevant Equations: 0.5*dens*v_1^2 + p_1 = 0.5*dens*v_2^2 + p_2 (Bernoulli equation) A*v_1 = a*v_2 (continuity equation) Apologies in advance for this format; I am posting my question as an image because the LaTeX editor is being very buggy for me, and I lost a fairly lengthy post to it. Can anyone show me what I am doing wrong? I have attached a pdf version for easier reading if need be. #### Attachments • pr-43-p-290-h-r-text-ed.pdf 102.1 KB · Views: 12 Homework Helper Gold Member From equation (1) you can see that ##p_2## must be less than ##p_1## because ##v_2 > v_1## (from the continuity equation). So, ##p_2 < p_1##. However, in equation (2) you let ##p_1 = \rho g h## and ##p_2 =\rho' \, gh##. But ##\rho' \, > \rho##. So, these substitutions would imply that ##p_2 > p_1##, which contradicts ##p_2 < p_1##. So, letting ##p_1 = \rho g h## and ##p_2 =\rho' \, gh## can't be correct. Assume we can take points 1 and 2 to be at the same horizontal level: Introduce the height ##H## as shown. Can you express ##p_1## in terms of ##p_c## , ##\rho##, ##g##, ##H##, and ##h##? Likewise, can you relate ##p_2## and ##p_d##? Gold Member Thanks so much for the response, TSny. Pointing out my contradiction between lines (1) and (2) was the big aha moment for me, and including the ##\rho gH## term makes the physical sense clear now. TSny
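As a sanity check on the formula to be derived, here is a small sketch (not from the thread; the numbers are hypothetical, water in the pipe and mercury in the manometer). It also verifies that the result is consistent with Bernoulli and continuity, since the formula implies $\frac{1}{2}\rho(v_2^2 - v_1^2) = (\rho' - \rho)gh$:

```python
import math

def venturi_entrance_speed(A, a, rho, rho_prime, h, g=9.81):
    """Entrance speed from the target formula:
    v1 = a * sqrt(2*(rho' - rho)*g*h / (rho*(A**2 - a**2)))

    A, a       : pipe and throat cross-sectional areas (m^2)
    rho        : density of the flowing fluid (kg/m^3)
    rho_prime  : density of the manometer fluid (kg/m^3)
    h          : manometer height difference (m)
    """
    return a * math.sqrt(2 * (rho_prime - rho) * g * h / (rho * (A**2 - a**2)))

# Hypothetical numbers: water flowing, mercury in the manometer.
v1 = venturi_entrance_speed(A=4e-3, a=1e-3, rho=1000.0, rho_prime=13600.0, h=0.05)
v2 = v1 * 4e-3 / 1e-3   # continuity: A*v1 = a*v2
print(v1, v2)
```

Plugging `v1` and `v2` back into Bernoulli recovers the manometer pressure difference exactly, which is the consistency the derivation relies on.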
Question: Overall genes number in the TissueEnrich package output 0 7 months ago by fshodan0 fshodan0 wrote: Hi, I've been reproducing the steps in the manual here - https://bioconductor.org/packages/release/bioc/vignettes/TissueEnrich/inst/doc/TissueEnrich.html. I supplied 147 differentially expressed genes to the GeneSet function. Out of 147, 82 genes are unmapped, and the number of unique genes across all the tissues is 29, which means that 36 genes are not present in the output object. Does this mean they are not specific (enriched/enhanced) and are expressed in all the tissues, or how exactly should I interpret this? tissueenrich Answer: Overall genes number in the TissueEnrich package output 0 7 months ago by Iowa State University Ashish Jain0 wrote: Hi, In TissueEnrich, we are dividing the genes into six different groups which are specified in our paper (http://doi.org/10.1093/bioinformatics/bty890). The 36 genes could be in the other three non-tissue-specific gene groups (Not Expressed, Expressed In All, or Mixed). You can also confirm that by checking the tissue-specificity of a particular gene using the "Tissue-Specific Genes" tab in our web tool (http://tissueenrich.gdcb.iastate.edu/). Regards, Ashish Jain
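The gene accounting in the question can be made explicit. A tiny sketch using only the numbers stated in the question (147 supplied, 82 unmapped, 29 tissue-specific):

```python
supplied = 147
unmapped = 82          # genes the package could not map to its annotation
tissue_specific = 29   # unique genes across all tissues in the output

mapped = supplied - unmapped
# Mapped genes absent from every tissue-specific group must fall into the
# non-tissue-specific groups (Not Expressed, Expressed In All, or Mixed).
non_specific = mapped - tissue_specific
print(mapped, non_specific)  # 65 mapped, 36 in non-specific groups
```

This matches the answer: the 36 "missing" genes are mapped but sit in one of the three non-tissue-specific groups rather than being absent from the analysis.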
# Follow the Perturbed Leader: Optimism and Fast Parallel Algorithms for Smooth Minimax Games We consider the problem of online learning and its application to solving minimax games. For the online learning problem, Follow the Perturbed Leader (FTPL) is a widely studied algorithm which enjoys the optimal $O(T^{1/2})$ worst-case regret guarantee for both convex and nonconvex losses. In this work, we show that when the sequence of loss functions is predictable, a simple modification of FTPL which incorporates optimism can achieve better regret guarantees, while retaining the optimal worst-case regret guarantee for unpredictable sequences. A key challenge in obtaining these tighter regret bounds is the stochasticity and optimism in the algorithm, which requires different analysis techniques than those commonly used in the analysis of FTPL. The key ingredient we utilize in our analysis is the dual view of perturbation as regularization. While our algorithm has several applications, we consider the specific application of minimax games. For solving smooth convex-concave games, our algorithm only requires access to a linear optimization oracle. For Lipschitz and smooth nonconvex-nonconcave games, our algorithm requires access to an optimization oracle which computes the perturbed best response. In both these settings, our algorithm solves the game up to an accuracy of $O(T^{-1/2})$ using $T$ calls to the optimization oracle. An important feature of our algorithm is that it is highly parallelizable and requires only $O(T^{1/2})$ iterations, with each iteration making $O(T^{1/2})$ parallel calls to the optimization oracle. (NeurIPS 2020)
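To make the setting concrete, here is a minimal sketch of plain (non-optimistic) FTPL for prediction with expert advice with exponential perturbations. This is a textbook variant, not the authors' optimistic algorithm, and the perturbation scale of $\sqrt{T}$ is a standard tuning assumed here, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def ftpl_expert(losses, scale):
    """Plain Follow the Perturbed Leader over K experts.

    losses: (T, K) array of per-round losses in [0, 1].  At each round,
    draw a fresh exponential perturbation and play the expert whose
    (cumulative loss - perturbation) is smallest.
    """
    T, K = losses.shape
    cum = np.zeros(K)
    total = 0.0
    for t in range(T):
        perturb = rng.exponential(scale=scale, size=K)
        choice = int(np.argmin(cum - perturb))
        total += losses[t, choice]
        cum += losses[t]
    return total

T, K = 2000, 5
losses = rng.random((T, K))
losses[:, 2] *= 0.5                     # expert 2 is clearly best
alg = ftpl_expert(losses, scale=np.sqrt(T))
best = losses.sum(axis=0).min()
print(alg - best)                       # regret; should be well below T
```

The regret against the best fixed expert stays far below $T$, illustrating the $O(T^{1/2})$ worst-case behavior the abstract refers to; the paper's contribution is improving on this when losses are predictable.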
Create New Driverless AI System¶ This section describes how to create a new Driverless AI system. Note: You must have a Driverless AI license in order to run Driverless AI in Puddle. You can request a free 21-day trial license at https://www.h2o.ai/try-driverless-ai/. 1. Click Create New Driverless AI System on the Puddle Systems page. 2. Specify the following options to create the system: • System Name: System names must be between 1 and 64 characters and contain only lowercase characters, numbers, and hyphens. It must start with a letter and end only with a number or letter. This is blank by default. Note: Your account settings may include a limit as to the number of systems of a certain type that you can run. If you exceed that limit then that option will not be available, and the least expensive option will then become the default. • Disk Size: This can be 256GiB (default), 512GiB, or 1TiB. • Stopped If Idle For: This can be 30 min, 1 hour (default), 2 hours, 3 hours, 4 hours, or never. • Tag: This shows the Tag(s) that will be applied to this system. Tags are created by Administrators and might include a default value. You can set the value here. 3. Click Create System when you are done. The system will begin provisioning. Note that this can take several minutes. After the system has successfully started, it will appear on the My Systems page. At this point, you are ready to use Driverless AI. Viewing Driverless AI Systems¶ Click on the Driverless AI System Name to view the configuration information and a list of current experiments. This page provides general system information and Driverless AI model information (if any models exist). System Information¶ • The URL for launching Driverless AI • The system status • A link to edit the config.toml file for that system • The current session cost • The total cost so far for this system • The SSH command to run in order to securely access the system that is running Driverless AI.
(See SSH into the Driverless AI System below for more information.) • The time when the system will stop if it remains idle. You can also refresh this timer. • The product name and version currently running on the system • The system type and disk size • The updated and created dates • The cloud environment • The system tag (if Admins have set up tags) Experiment Information¶ For each experiment run on Driverless AI through Puddle, the following information displays: • A description that includes the experiment key • The training dataset used in the experiment • The target column • The validation score • The test score • The scorer used for the experiment • The experiment progress and status • The Accuracy, Time, and Interpretability options used for the experiment • The amount of time it took to complete the experiment (in seconds) • The time when the experiment was created Starting Driverless AI¶ 1. Click on the URL provided in the Driverless AI system page. This takes you to the DNS of the URL. 2. If this is your first time starting Driverless AI on this system, or if you have restarted the system, accept the license agreement. 3. You might have to enter the Username and Password that are provided on the Driverless AI system page. 4. If this is your first time starting Driverless AI on this system, you might have to enter your license key. Note that if you do not have a license key, you can request a free 21-day trial license at https://www.h2o.ai/try-driverless-ai/. Upon completion, Driverless AI will open on the Datasets Overview page. At this point, you can add or upload datasets and begin running experiments. In Driverless AI, click on Resources > Help to view the Driverless AI User Guide. Additional documentation for Driverless AI is available at docs.h2o.ai. SSH into the Driverless AI System¶ Puddle provides the ability to SSH into a system that is running Driverless AI.
There are two ways that you can SSH into a system: The method to use depends on which option is enabled in the configuration of your Puddle. If your SSH command starts with ssh -l, use the first option. If your SSH command starts with ssh -i, use the second option. 1. Select the system that you want to SSH into. 2. On your local machine, run the provided SSH command. 3. You will be prompted to continue the connection. Type yes. 4. After the URL is added to your list of known hosts, you will be provided with a login URL and password. The message will be similar to the following: To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code AXXXXXXXH to authenticate. Press ENTER when ready. Open a browser and follow the instructions in the terminal message. Using the Custom SSH Keys¶ 1. On your local machine execute chmod 600 <ssh_key_file> to set the correct permissions on the private key file. 2. On your local machine, run the provided SSH command. Please be sure to specify the correct path to the private key file in the SSH command. Editing the config.toml File¶ In Driverless AI, the config.toml file allows you to specify system-wide configuration options. These options are specified using environment variables. Perform the following steps to edit the config.toml file. Note that a system reboot () is required when changes are made to the config.toml file. There are two ways to open the config.toml editor: • Click the Edit config.toml link in the Driverless AI System information table. • Click the Edit config.toml button () beside your system name on the Driverless AI System page. 1. Specify the environment variables that you want to include. 2. Click Submit. 3. Reboot the system. A list of available environment variables is included in the Driverless AI documentation for the config.toml file. Note that this link points to the latest version of the config.toml file.
The User Guide that’s available in your system’s Driverless AI under Resources > Help includes the config.toml file that matches your running Driverless AI version. Stopping a System¶ Click the Stop button () to halt a system that is in a “Started” state. No information will be lost when a system is stopped. Starting a System¶ Click on the Start button () to start a system that is in a “Stopped” state. This will launch a new system with a new URL. All prior data will still be available from the new URL. Rebooting a System¶ Click the Reboot button () to reboot a system that is in a “Started” state. This will stop the system and launch a new system with a new URL. All prior data will still be available from the new URL. The entire process can take several minutes. Note: A reboot is required when you change the config.toml file. Deleting a System¶ Click the Delete button () to completely remove a system. A confirmation page will display asking if you are certain about deleting the system. Click Yes to complete the deletion. This request deletes the system and destroys all data that is on the system. Marking a System as Failed¶ This is a recovery option. Use this if your system is stuck (for example, in a “Starting…” state). Click the Mark as Failed button () to mark a system as failed. After a system is marked as failed, you will be able to stop () or terminate () the system.
ISNI: 0000 0001 1072 2080 Name: Alexander Schrijver (Mathematician) Alexander Schrijver (Nederlands wiskundige) Alexander Schrijver (niederländischer Mathematiker) Schrijver, A. Schrijver, Alexander Schrijver, Lex Схрейвер, А Схрейвер, Александр Dates: 1948- Creation class: 06 article Computer file Language material Manuscript language material Text Creation role: author contributor editor redactor Related names: Brouwer, A.E. Grötschel, Martin (1948- ) Grötschel, Martin (1948-....) Katona, Gyula O. H. Lovász, László (1948- ) Lovász, László (1948-....) Mill, J van Rinnooy Kan, Alexander H. G. (1949-....) Seymour, P.D. Seymour, Paul D. Springer-Verlag (Berlin) Stichting Mathematisch centrum, Amsterdam University of Amsterdam. Faculty of Actuarial Science & Econometrics Vrije Universiteit, Amsterdam Titles: Adjacency, inseparability, and base orderability in matroids Bipartite Edge Coloring in O (delta m) Time Chvátal closures for mixed integer programming problems Classification of Minimal Graphs of Given Face-Width on the Torus Combinatorial Algorithm Minimizing Submodular Functions in Strongly Polynomial Time, A comparison of bounds of Delsarte and Lovász, A Cones of matrices and setfunctions, and 0-1 optimization Construction of strongly regular graphs, two-weight codes and partial geometries by finite fields Counting 1-Factors in Regular Bipartite Graphs Decomposition of graphs on surfaces and homotopic circulation theorem dependence of some logical axioms on disjoint transversals and linked systems, The Disjoint circuits of prescribed homotopies in a graph on a compact surface Disjoint cycles in directed graphs on the torus and the Klein bottle Distances and cuts in planar graphs Each complete bipartite graph minus a matching is representable by line segments Fete of Combinatorics and Computer Science Finding k disjoint paths in a directed planar
graph From universal morphisms to megabytes: a Baayen space odyssey : CWI, Amsterdam, 20 December 1994, on the occasion of the retirement of Prof.dr. P. C. Baayen, from the Stichting Mathematisch Centrum Geometric algorithms and combinatorial optimization Graphs and supercompact spaces Graphs with balanced star-hypergraph Grid minors of graphs on the torus group-divisible design GD(4,1,2;n) exists iff $n \equiv 2$ (mod 6), $n \neq 8$ (or: the packing of cocktail party graphs with $K_4$'s), A History of mathematical programming a collection of personal reminiscences Homotopic routing methods Homotopy and crossings of systems of curves on a surface Integral solution to systems Ax ≤ b Klein bottle and multicommodity flows, The Matrices with the Edmonds-Johnson property Matroids and linking systems Median graphs and Helly hypergraphs Mélanges. Baayen, P.C note on David Lubell's article "Local matchings in the function space of a partial order", An note on packing connectors, A On lower bounds for permanents On the period of an operator On the uniqueness of kernels Opvallend onopvallend : de geschiedenis van het NIFV-gebouw Packing and covering in combinatorics Packing odd paths Polyhedral combinatorics : some recent developments and results Proving total dual integrality with cross-free families : a general framework Recherche opérationnelle [Journée annuelle, Société Mathématique de France 2004] Relaxations of vertex packing Short Proof of Guenin's Characterization of Weakly Bipartite Graphs, A Short Proof of Mader's y-Paths Theorem, A Shunting of passenger train units : an integrated approach Signed graphs, regular matroids, grafts simpler proof and a generalization of the zero-trees theorem, A Solution of two fractional packing problems of Lovász Subbase characterizations of compact topological spaces Superextensions which are Hilbert cubes Tait's flyping conjecture for well-connected links Theory of linear and integer programming.
Two optimal constant weight codes Vertex-critical subgraphs of Kneser-graphs Теория линейного и целочисленного программирования : в 2-х томах Contributed to or performed: JOURNAL OF COMBINATORIAL THEORY SERIES B JOURNAL- OPERATIONAL RESEARCH SOCIETY SIAM JOURNAL ON COMPUTING Notes: Thesis--Vrije Universiteit, Amsterdam Wikidata Sources: VIAF DNB LC LNB NKC NUKAT SELIBR SUDOC WKD BNF BOWKER NTA OCLCT ZETO
### Solving a $6120$-bit DLP on a Desktop Computer IACR Eprint - Fri, 01/25/2019 - 08:49 In this paper we show how some recent ideas regarding the discrete logarithm problem (DLP) in finite fields of small characteristic may be applied to compute logarithms in some very large fields extremely efficiently. By combining the polynomial time relation generation from the authors' CRYPTO 2013 paper, an improved degree two elimination technique, and an analogue of Joux's recent small-degree elimination method, we solved a DLP in the record-sized finite field of $2^{6120}$ elements, using just a single core-month. Relative to the previous record set by Joux in the field of $2^{4080}$ elements, this represents a $50\%$ increase in the bitlength, using just $5\%$ of the core-hours. We also show that for the fields considered, the parameters for Joux's $L_Q(1/4 + o(1))$ algorithm may be optimised to produce an $L_Q(1/4)$ algorithm. ### Efficient Circuit-based PSI via Cuckoo Hashing IACR Eprint - Thu, 01/24/2019 - 15:15 While there has been a lot of progress in designing efficient custom protocols for computing Private Set Intersection (PSI), there has been less research on using generic Multi-Party Computation (MPC) protocols for this task. However, there are many variants of the set intersection functionality that are not addressed by the existing custom PSI solutions and are easy to compute with generic MPC protocols (e.g., comparing the cardinality of the intersection with a threshold or measuring ad conversion rates). Generic PSI protocols work over circuits that compute the intersection. For sets of size $n$, the best known circuit constructions conduct $O(n \log n)$ or $O(n \log n / \log\log n)$ comparisons (Huang et al., NDSS'12 and Pinkas et al., USENIX Security'15). In this work, we propose new circuit-based protocols for computing variants of the intersection with an almost linear number of comparisons. 
Our constructions are based on new variants of Cuckoo hashing in two dimensions. We present an asymptotically efficient protocol as well as a protocol with better concrete efficiency. For the latter protocol, we determine the required sizes of tables and circuits experimentally, and show that the run-time is concretely better than that of existing constructions. The protocol can be extended to a larger number of parties. The proof technique for analyzing Cuckoo hashing in two dimensions is new and can be generalized to analyzing standard Cuckoo hashing as well as other new variants of it. ### A Framework for Efficient and Composable Oblivious Transfer IACR Eprint - Wed, 01/23/2019 - 12:30 We propose a simple and general framework for constructing oblivious transfer (OT) protocols that are \emph{efficient}, \emph{universally composable}, and \emph{generally realizable} from a variety of standard number-theoretic assumptions, including the decisional Diffie-Hellman assumption, the quadratic residuosity assumption, and \emph{worst-case} lattice assumptions. Our OT protocols are round-optimal (one message each way), quite efficient in computation and communication, and can use a single common string for an unbounded number of executions. Furthermore, the protocols can provide \emph{statistical} security to either the sender or receiver, simply by changing the distribution of the common string. For certain instantiations of the protocol, even a common \emph{random} string suffices. Our key technical contribution is a simple abstraction that we call a \emph{dual-mode} cryptosystem. We implement dual-mode cryptosystems by taking a unified view of several cryptosystems that have what we call ``messy'' public keys, whose defining property is that a ciphertext encrypted under such a key carries \emph{no information} (statistically) about the encrypted message.
As a contribution of independent interest, we also provide a multi-bit version of Regev's lattice-based cryptosystem (STOC 2005) whose time and space efficiency are improved by a linear factor in the security parameter $n$. The amortized encryption and decryption time is only $\tilde{O}(n)$ bit operations per message bit, and the ciphertext expansion can be made as small as a constant; the public key size and underlying lattice assumption remain essentially the same. ### Obfuscation Using Tensor Products IACR Eprint - Tue, 01/22/2019 - 20:11 We describe obfuscation schemes for matrix-product branching programs that are purely algebraic and employ matrix groups and tensor algebra over a finite field. In contrast to the obfuscation schemes of Garg et al (SICOM 2016) which were based on multilinear maps, these schemes do not use noisy encodings. We prove that there is no efficient attack on our scheme based on re-linearization techniques of Kipnis-Shamir (CRYPTO 99) and its generalization called XL-methodology (Courtois et al, EC2000). We also provide analysis to claim that general Grobner-basis computation attacks will be inefficient. In a generic colored matrix model our construction leads to a virtual-black-box obfuscator for NC$^1$ circuits. We also provide cryptanalysis based on computing tangent spaces of the underlying algebraic sets. ### A note on high-security general-purpose elliptic curves IACR Eprint - Tue, 01/22/2019 - 02:13 In this note we describe some general-purpose, high-efficiency elliptic curves tailored for security levels beyond $2^{128}$. For completeness, we also include legacy-level curves at standard security levels. The choice of curves was made to facilitate state-of-the-art implementation techniques. 
### Data Oblivious ISA Extensions for Side Channel-Resistant and High Performance Computing IACR Eprint - Mon, 01/21/2019 - 18:20 Blocking microarchitectural (digital) side channels is one of the most pressing challenges in hardware security today. Recently, there has been a surge of effort that attempts to block these leakages by writing programs data obliviously. In this model, programs are written to avoid placing sensitive data-dependent pressure on shared resources. Despite recent efforts, however, running data oblivious programs on modern machines today is both insecure and slow. First, writing programs obliviously assumes certain instructions in today's ISAs will not leak privacy, whereas today's ISAs and hardware provide no such guarantees. Second, writing programs to avoid data-dependent behavior inherently incurs high performance overhead. This paper tackles both the security and performance aspects of this problem by proposing a Data Oblivious ISA extension (OISA). On the security side, we present ISA design principles to block microarchitectural side channels, and embody these ideas in a concrete ISA capable of safely executing existing data oblivious programs. On the performance side, we design the OISA with support for efficient memory oblivious computation, and with safety features that allow modern hardware optimizations, e.g., out-of-order speculative execution, to remain enabled in the common case. We provide a complete hardware prototype of our ideas, built on top of the RISC-V out-of-order, speculative BOOM processor, and prove that the OISA can provide the advertised security through a formal analysis of an abstract BOOM-style machine.
We evaluate the area overhead of hardware mechanisms needed to support our prototype, and provide performance experiments showing how the OISA speeds up a variety of existing data oblivious codes (including ``constant time'' cryptography and memory oblivious data structures), in addition to improving their security and portability. ### An Analysis of the NIST SP 800-90A Standard IACR Eprint - Mon, 01/21/2019 - 13:02 We investigate the security properties of the three deterministic random bit generator (DRBG) mechanisms in the NIST SP 800-90A standard [2]. This standard received a considerable amount of negative attention, due to the controversy surrounding the now retracted DualEC-DRBG, which was included in earlier versions. Perhaps because of the attention paid to the DualEC, the other algorithms in the standard have received surprisingly patchy analysis to date, despite widespread deployment. This paper addresses a number of these gaps in analysis, with a particular focus on HASH-DRBG and HMAC-DRBG. We uncover a mix of positive and less positive results. On the positive side, we prove (with a caveat) the robustness [16] of HASH-DRBG and HMAC-DRBG in the random oracle model (ROM). Regarding the caveat, we show that if an optional input is omitted, then (contrary to claims in the standard) HMAC-DRBG does not even achieve the (weaker) property of forward security. We also conduct a more informal and practice-oriented exploration of flexibility in implementation choices permitted by the standard. Specifically, we argue that these DRBGs have the property that partial state leakage may lead security to break down in unexpected ways. We highlight implementation choices allowed by the overly flexible standard that exacerbate both the likelihood and the impact of such attacks. While our attacks are theoretical, an analysis of two open source implementations of CTR-DRBG shows that potentially problematic implementation choices are made in the real world.
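For orientation, the HMAC-DRBG update/generate loops that the SP 800-90A abstract refers to can be sketched compactly. This is a simplified illustration over SHA-256 (no reseed counter, security-strength checks, personalization handling, or prediction resistance), not a vetted implementation:

```python
import hashlib
import hmac

class HmacDrbg:
    """Minimal HMAC-DRBG (SP 800-90A style) over SHA-256 -- sketch only."""

    def __init__(self, entropy: bytes, nonce: bytes = b""):
        self.K = b"\x00" * 32     # initial key per the spec
        self.V = b"\x01" * 32     # initial value per the spec
        self._update(entropy + nonce)

    def _hmac(self, key: bytes, data: bytes) -> bytes:
        return hmac.new(key, data, hashlib.sha256).digest()

    def _update(self, provided: bytes = b"") -> None:
        self.K = self._hmac(self.K, self.V + b"\x00" + provided)
        self.V = self._hmac(self.K, self.V)
        if provided:              # second pass only when data is supplied
            self.K = self._hmac(self.K, self.V + b"\x01" + provided)
            self.V = self._hmac(self.K, self.V)

    def generate(self, n: int, additional: bytes = b"") -> bytes:
        if additional:
            self._update(additional)
        out = b""
        while len(out) < n:
            self.V = self._hmac(self.K, self.V)
            out += self.V
        self._update(additional)  # post-generate state update
        return out[:n]

drbg = HmacDrbg(b"not-really-entropy")
print(drbg.generate(32).hex())
```

The forward-security caveat in the abstract concerns exactly the `additional` input: when it is omitted, the post-generate update runs without fresh data, which is the case the paper analyzes.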
### Lightning Factories IACR Eprint - Mon, 01/21/2019 - 10:52 Bitcoin, the most popular blockchain system, does not scale even under very optimistic assumptions. Lightning networks, a layer on top of Bitcoin composed of one-to-one lightning channels, make it scale to up to 105 million users. Recently, Duplex Micropayment Channel factories have been proposed, based on opening multiple one-to-one payment channels at once. Duplex Micropayment Channel factories rely on time-locks to update and close their channels. This mechanism yields situations where the time for which users' funds are locked increases with the lifetime of the factory and the number of users. This makes DMC factories impractical in real-life scenarios. In this paper, we propose the first channel factory construction, the Lightning Factory, that offers a constant collateral cost, independent of the lifetime of the channel and the number of members of the factory. We compare our proposed design with Duplex Micropayment Channel factories, obtaining better performance results by a factor of more than 3000 times in terms of the worst-case constant collateral cost incurred when malicious users use the factory. The message complexity of our factory is $n$, whereas Duplex Micropayment Channel factories need $n^2$ messages, where $n$ is the number of users. Moreover, our factory copes with an infinite number of updates, while in Duplex Micropayment Channel factories the number of updates is bounded by the initial time-lock. Finally, we discuss the necessity for our Lightning Factories of BNN, a non-interactive aggregate signature cryptographic scheme, and compare it with the Schnorr and ECDSA schemes used in Bitcoin and Duplex Micropayment Channels. ### Improving throughput of RC4 algorithm using multithreading techniques in multicore processors IACR Eprint - Mon, 01/21/2019 - 07:46 RC4 is the most widely used stream cipher around. So, it is important that it runs cost effectively, with minimum encryption time.
In other words, it should give higher throughput. In this paper, a mechanism is proposed to improve the throughput of the RC4 algorithm on multicore processors using multithreading. The proposed mechanism does not parallelize RC4; instead, it introduces a way that multithreading can be used in encryption when the plaintext is in the form of a text file. In this particular research, the source code was written in Java (JDK version: 1.6.0_21) in Windows environments. Experiments to analyze the throughput were done separately on an Intel® P4 machine (O/S: Windows XP), a Core 2 Duo machine (O/S: Windows XP), and a Core i3 machine (O/S: Windows 7). Outcome of the research: higher throughput of the RC4 algorithm can be achieved on multicore processors when using the proposed mechanism. Effective use of multithreading in encryption can be achieved on multicores using this technique. ### De Bruijn Sequences from Joining Cycles of Nonlinear Feedback Shift Registers IACR Eprint - Sat, 01/19/2019 - 22:34 De Bruijn sequences are a class of nonlinear recurring sequences that have wide applications in cryptography and modern communication systems. One main method for constructing them is to join the cycles of a feedback shift register (FSR) into a full cycle, which is called the cycle joining method. Jansen et al. (IEEE Trans on Information Theory 1991) proposed an algorithm for joining cycles of an arbitrary FSR. This classical algorithm is further studied in this paper. Motivated by their work, we propose a new algorithm for joining cycles, which doubles the efficiency of the classical cycle joining algorithm. Since both algorithms need FSRs that only generate short cycles, we also propose efficient ways to construct short-cycle FSRs. These FSRs are nonlinear and are easy to obtain. As a result, a large number of de Bruijn sequences are constructed from them. We explicitly determine the size of these de Bruijn sequences.
Besides, we show a property of the pure circulating register, which is important for searching for short-cycle FSRs. ### Non-Malleable Encryption: Simpler, Shorter, Stronger IACR Eprint - Sat, 01/19/2019 - 19:36 In a seminal paper, Dolev et al. (STOC'91) introduced the notion of non-malleable encryption (NM-CPA). This notion is very intriguing since it suffices for many applications of chosen-ciphertext secure encryption (IND-CCA), and, yet, can be generically built from semantically secure (IND-CPA) encryption, as was shown in the seminal works by Pass et al. (CRYPTO'06) and by Choi et al. (TCC'08), the latter of which provided a black-box construction. In this paper we investigate three questions related to NM-CPA security: - Can the rate of the construction by Choi et al. of NM-CPA from IND-CPA be improved? - Is it possible to achieve multi-bit NM-CPA security more efficiently from a single-bit NM-CPA scheme than from IND-CPA? - Is there a notion stronger than NM-CPA that has natural applications and can be achieved from IND-CPA security? We answer all three questions in the positive. First, we improve the rate in the construction of Choi et al. by a factor O(k), where k is the security parameter. Still, encrypting a message of size O(k) would require ciphertext and keys of size O(k^2) times that of the IND-CPA scheme, even in our improved scheme. Therefore, we show a more efficient domain extension technique for building a k-bit NM-CPA scheme from a single-bit NM-CPA scheme with keys and ciphertext of size O(k) times that of the NM-CPA one-bit scheme. To achieve our goal, we define and construct a novel type of continuous non-malleable code (NMC), called secret-state NMC, as we show that standard continuous NMCs are not enough for the natural "encode-then-encrypt-bit-by-bit" approach to work. Finally, we introduce a new security notion for public-key encryption (PKE) that we dub non-malleability under (chosen-ciphertext) self-destruct attacks (NM-SDA). 
After showing that NM-SDA is a strict strengthening of NM-CPA and allows for more applications, we nevertheless show that both of our results---(faster) construction from IND-CPA and domain extension from one-bit scheme---also hold for our stronger NM-SDA security. In particular, the notions of IND-CPA, NM-CPA, and NM-SDA security are all equivalent, lying (plausibly, strictly?) below IND-CCA security.

### Foundations for Actively Secure Card-based Cryptography

IACR Eprint - Fri, 01/18/2019 - 09:11

Card-based cryptography allows secure multiparty computation to be carried out in simple and elegant ways, using only a deck of playing cards, as first proposed by den Boer (EUROCRYPT 1989). Many protocols to date come with an “honest-but-curious” disclaimer. However, a central goal of modern cryptography is to provide security also in the presence of malicious attackers. At the few places where authors argue for the active security of their protocols, this is done ad hoc and restricted to the concrete operations needed, often even using additional physical tools, such as envelopes or sliding cover boxes. This paper provides the first systematic approach to active security in card-based protocols. We show how a large and natural class of shuffling operations, namely those which (opaquely) permute the cards according to a uniform distribution on a permutation group, can be implemented using only a linear number of helping cards. This ensures that any (information-theoretically) secure cryptographic protocol in the abstract model of Mizuki and Shizuya (Int. J. Inf. Secur., 2014), restricted to this natural class of shuffles, can be realized in an actively secure fashion. Such shuffles already suffice for securely computing any circuit (Mizuki and Sone, FAW 2009). In the process, we develop a more concrete model for card-based cryptographic protocols with two players, which we believe to be of independent interest.
### A Generic Attack on Lattice-based Schemes using Decryption Errors with Application to ss-ntru-pke

IACR Eprint - Fri, 01/18/2019 - 08:00

Hard learning problems are central topics in recent cryptographic research. Many cryptographic primitives relate their security to difficult problems in lattices, such as the shortest vector problem. Such schemes include the possibility of decryption errors with some very small probability. In this paper we propose and discuss a generic attack for secret key recovery based on generating decryption errors. In a standard PKC setting, the model first consists of a precomputation phase where special messages and their corresponding error vectors are generated. Secondly, the messages are submitted for decryption and some decryption errors are observed. Finally, a phase with a statistical analysis of the messages/errors causing the decryption errors reveals the secret key. The idea is that, conditioned on certain secret keys, the decryption error probability is significantly higher than the average case used in the error probability estimation. The attack is demonstrated in detail on one NIST Post-Quantum Proposal, ss-ntru-pke, which is attacked with complexity below the claimed security level.

### Beetle Family of Lightweight and Secure Authenticated Encryption Ciphers

IACR Eprint - Thu, 01/17/2019 - 23:51

This paper presents a lightweight, sponge-based authenticated encryption (AE) family called Beetle. When instantiated with the PHOTON permutation from CRYPTO 2011, Beetle achieves the smallest footprint, consuming only a few more than 600 LUTs on FPGA while maintaining 64-bit security. This figure is significantly smaller than all known lightweight AE candidates, which consume more than 1,000 LUTs, including the latest COFB-AES from CHES 2017. In order to realize such a small hardware implementation, we equip Beetle with an extremely tight bound on security.
The trick is to use combined feedback to create a difference between the ciphertext block and the rate part of the next feedback (in a traditional sponge these two values are the same). We are then able to show that Beetle is provably secure up to $\min\{c-\log r, b/2, r\}$ bits, where $b$ is the permutation size and $r$ and $c$ are parameters called rate and capacity, respectively. The tight security bound allows us to select the smallest security parameters, which in turn result in the smallest footprint.

### NTTRU: Truly Fast NTRU Using NTT

IACR Eprint - Thu, 01/17/2019 - 19:44

We present NTTRU -- an IND-CCA2 secure NTRU-based key encapsulation scheme that uses the number theoretic transform (NTT) over the cyclotomic ring $Z_{7681}[X]/(X^{768}-X^{384}+1)$ and produces public keys and ciphertexts of approximately $1.25$ KB at the $128$-bit security level. The number of cycles on a Skylake CPU of our constant-time AVX2 implementation of the scheme for key generation, encapsulation and decapsulation is approximately $6.4$K, $6.1$K, and $7.9$K, which is more than 30X, 5X, and 8X faster than these respective procedures in the NTRU schemes that were submitted to the NIST post-quantum standardization process. These running times are also, by a large margin, smaller than those for all the other schemes in the NIST process. We also give a simple transformation that allows one to provably deal with small decryption errors in OW-CPA encryption schemes (such as NTRU) when using them to construct an IND-CCA2 key encapsulation.

### Hunting and Gathering - Verifiable Random Functions from Standard Assumptions with Short Proofs

IACR Eprint - Thu, 01/17/2019 - 19:31

A verifiable random function (VRF) is a pseudorandom function whose outputs can be publicly verified. That is, given an output value together with a proof, one can check that the function was indeed correctly evaluated on the corresponding input.
At the same time, the output of the function is computationally indistinguishable from random for all non-queried inputs. We present the first construction of a VRF which meets the following properties at once: it supports an exponential-sized input space, it achieves full adaptive security based on a non-interactive constant-size assumption, and its proofs consist of only a logarithmic number of group elements for inputs of arbitrary polynomial length. Our construction can be instantiated in symmetric bilinear groups with security based on the decision linear assumption. We build on the work of Hofheinz and Jager (TCC 2016), who were the first to construct a verifiable random function with security based on a non-interactive constant-size assumption. Basically, their VRF is a matrix product in the exponent, where each matrix is chosen according to one bit of the input. In order to allow verification given a symmetric bilinear map, a proof consists of all intermediary results. This entails a proof size of $\Omega(L)$ group elements, where $L$ is the bit-length of the input. Our key technique, which we call hunting and gathering, allows us to break this barrier by rearranging the function, which - combined with the partitioning techniques of Bitansky (TCC 2017) - results in a proof size of $\ell$ group elements for arbitrary $\ell \in \omega(1)$.

### Message Authentication (MAC) Algorithm For The VMPC-R (RC4-like) Stream Cipher

IACR Eprint - Thu, 01/17/2019 - 19:31

We propose an authenticated encryption scheme for the VMPC-R stream cipher. VMPC-R is an RC4-like algorithm proposed in 2013. It was created in a challenge to find a bias-free cipher within the RC4 design scope, and to the best of our knowledge no security weakness in it has been published to date. The contribution of this paper is an algorithm to compute Message Authentication Codes (MACs) along with VMPC-R encryption. We also propose a simple method of transforming the MAC computation algorithm into a hash function.
### Fully Invisible Protean Signatures Schemes

IACR Eprint - Thu, 01/17/2019 - 19:30

Protean Signatures (PS), recently introduced by Krenn et al. (CANS '18), allow a semi-trusted third party, named the sanitizer, to modify a signed message in a controlled way. The sanitizer can edit signer-chosen parts to arbitrary bitstrings and can also redact admissible parts, which are likewise chosen by the signer. Thus, PSs generalize both redactable signatures (RSS) and sanitizable signatures (SSS) into a single notion. However, the current definition of invisibility does not prevent an outsider from deciding which parts of a message are redactable; only the parts that can be edited are hidden. This weakens the privacy guarantees provided by the state-of-the-art definition. We extend PSs to be fully invisible. This strengthened notion guarantees that an outsider can neither decide which parts of a message can be edited nor which parts can be redacted. To achieve our goal, we introduce the new notions of Invisible RSSs and Invisible Non-Accountable SSSs (SSS'), along with a consolidated framework for aggregate signatures. Using those building blocks, our resulting construction is significantly more efficient than the original scheme by Krenn et al., which we demonstrate in a prototypical implementation.

### Identity-based Broadcast Encryption with Efficient Revocation

IACR Eprint - Thu, 01/17/2019 - 19:18

Identity-based broadcast encryption (IBBE) is an effective method to protect data security and privacy in multi-receiver scenarios and can make broadcast encryption more practical. This paper further expands the study of scalable revocation methodology in the setting of IBBE, where a key authority releases key update material periodically in such a way that only non-revoked users can update their decryption keys.
Following the binary tree data structure approach, a concrete instantiation of a revocable IBBE scheme is proposed using asymmetric pairings of prime order bilinear groups. Moreover, this scheme can withstand decryption key exposure, and it is proven to be semi-adaptively secure under chosen plaintext attacks in the standard model by reduction to static complexity assumptions. In particular, the proposed scheme is very efficient both in terms of computation costs and communication bandwidth, as the ciphertext size is constant, regardless of the number of recipients. To demonstrate the practicality, it is further implemented in Charm, a framework for rapid prototyping of cryptographic primitives.

### Improving Attacks on Speck32/64 using Deep Learning

IACR Eprint - Thu, 01/17/2019 - 19:15

This paper presents a very practical key recovery attack on Speck32/64 reduced to 11 rounds, based on a novel type of differential distinguisher using machine learning. These distinguishers exceed distinguishers based on the entire differential distribution table of Speck32/64 in accuracy, specificity and sensitivity. We show that they obtain significant gain from features of the output distribution that are invisible to the differential distribution table. The key recovery attack has been completely verified empirically and has an average runtime of approximately three minutes on a desktop computer with a fast graphics card, or about 30 minutes on the same machine when not using the graphics card. This corresponds to roughly 41 bits of remaining security for 11-round Speck32/64, which is a substantial improvement over previous literature. The average data complexity of our attack is slightly lower than that of the best previous attack on the same number of rounds.
While our attack is based on a known input difference taken from the literature, we also show that neural networks can be used to rapidly (within a matter of minutes on our machine) find good input differences without using prior human cryptanalysis.
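For readers unfamiliar with the target cipher, here is a minimal, unoptimized Python sketch of Speck32/64 itself (the attacked primitive), not of the machine-learning attack, which the abstract only summarizes. It follows the round function and key schedule of the Speck specification (word size 16, rotations 7 and 2, 22 rounds) and is checked against the specification's published test vector.

```python
MASK = 0xFFFF  # 16-bit words

def rol(x, r):
    return ((x << r) | (x >> (16 - r))) & MASK

def ror(x, r):
    return ((x >> r) | (x << (16 - r))) & MASK

def expand_key(key):
    # key given as (l2, l1, l0, k0), matching the word order printed in the spec
    l = [key[2], key[1], key[0]]
    ks = [key[3]]
    for i in range(21):  # 22 round keys in total
        l.append(((ks[i] + ror(l[i], 7)) & MASK) ^ i)
        ks.append(rol(ks[i], 2) ^ l[i + 3])
    return ks

def encrypt(x, y, ks):
    for k in ks:
        x = ((ror(x, 7) + y) & MASK) ^ k
        y = rol(y, 2) ^ x
    return x, y

def decrypt(x, y, ks):
    for k in reversed(ks):
        y = ror(x ^ y, 2)
        x = rol(((x ^ k) - y) & MASK, 7)
    return x, y

# Test vector from the Speck specification:
# key 1918 1110 0908 0100, plaintext 6574 694c, ciphertext a868 42f2
ks = expand_key((0x1918, 0x1110, 0x0908, 0x0100))
assert encrypt(0x6574, 0x694C, ks) == (0xA868, 0x42F2)
assert decrypt(0xA868, 0x42F2, ks) == (0x6574, 0x694C)
```

The attack in the abstract trains distinguishers on input/output pairs of a reduced-round version of this cipher; reducing rounds here just means passing a truncated key schedule to `encrypt`.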
Using Multivariate Gaussian, Mahalanobis Distance and F1 measure to choose the right probability threshold from the validation set to detect outliers

In this article, a simple multivariate Gaussian distribution will be used to find the outliers in an image.

1. We shall use the following apples and oranges image for outlier detection.
2. The color channels R, G, B form the variables for this image data, as shown in the following figure.
3. First we fit a 3-dimensional Gaussian distribution to the image data, using MLE estimates for the parameters of the Gaussian distribution. The pdf for the multivariate Gaussian has the following form (we need to estimate the mean and the covariance matrix).
4. After estimating the distribution, the probability that each data point (pixel) comes from the distribution is computed. The (discretized) probability values are overlaid (as alpha values) on the image itself to visualize the data points with low probabilities (low alpha values).

## [1] "MLE estimate for mean"
##         r         g         b
## 0.5693976 0.4987922 0.1681461
## [1] "MLE estimate for covariance matrix"
##            [,1]       [,2]       [,3]
## [1,] 0.05288263 0.00000000 0.00000000
## [2,] 0.00000000 0.04169364 0.00000000
## [3,] 0.00000000 0.00000000 0.02009812
## [1] "Visualizing Gaussian fit"

5. The following animation shows the outlier detection in the image based on probability thresholds.
6. Next, a threshold based on the Mahalanobis distance $d=\sqrt{(x-\mu)^{T}\Sigma^{-1}(x-\mu)}$ is used to mark the outlier points in the image, as shown in the following animation.
7. Finally, the image dataset is divided into training and validation datasets.
8. The following two white cut-out portions of the image are used as the validation dataset: the first one (the points from the orange) with label 1 (since we want the orange to be detected as an outlier) and the second one with label 0, as shown below.
The rest of the image is used as the training dataset, from which the parameters of the multivariate Gaussian fit are estimated.

9. Now the probability for each of the data points in the validation dataset is computed, and this validation dataset is used to find the probability threshold that gives the best F1-measure. This probability threshold is then used to find the outliers in the entire image: the pixels whose probability under the Gaussian fit is less than this threshold are marked as black outliers. The following figures show the results.

## [1] "MLE estimate for mean from the training dataset"
##         r         g         b
## 0.5409833 0.4860903 0.1600407
## [1] "MLE estimate for covariance matrix from the training dataset"
##            [,1]       [,2]       [,3]
## [1,] 0.04874885 0.00000000 0.00000000
## [2,] 0.00000000 0.04163396 0.00000000
## [3,] 0.00000000 0.00000000 0.01906682
## [1] "Best epsilon found using cross-validation: 2.030149e-01"
## [1] "Best F1 on Cross Validation Set: 0.774317"
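The pipeline above (fit a diagonal-covariance Gaussian by MLE, score points by density, sweep a threshold epsilon on a labeled validation set and keep the one maximizing the F1-measure) is language-independent. The original analysis was done in R; the following is a stdlib-only Python sketch of the same logic on hypothetical toy "pixel" data, not the article's actual image.

```python
import math

# Hypothetical stand-in for the article's RGB pixel data: the training
# pixels are "apple-like"; the validation set mixes two normal pixels
# (label 0) with one far-off "orange" pixel (label 1 = outlier).
train = [(0.50, 0.50, 0.20), (0.60, 0.50, 0.15),
         (0.55, 0.45, 0.18), (0.55, 0.55, 0.17)]
val_x = [(0.54, 0.51, 0.18), (0.56, 0.49, 0.17), (0.95, 0.10, 0.90)]
val_y = [0, 0, 1]

def fit_diag_gaussian(data):
    """MLE mean and per-dimension variance (diagonal covariance, as in the article)."""
    n, d = len(data), len(data[0])
    mu = [sum(x[j] for x in data) / n for j in range(d)]
    var = [sum((x[j] - mu[j]) ** 2 for x in data) / n for j in range(d)]
    return mu, var

def pdf(x, mu, var):
    """Density of x under the diagonal multivariate Gaussian."""
    p = 1.0
    for xj, mj, vj in zip(x, mu, var):
        p *= math.exp(-((xj - mj) ** 2) / (2 * vj)) / math.sqrt(2 * math.pi * vj)
    return p

def best_epsilon(probs, labels):
    """Sweep thresholds between observed densities; keep the best-F1 one."""
    sp = sorted(set(probs))
    candidates = [(a + b) / 2 for a, b in zip(sp, sp[1:])]
    best_f1, best_eps = 0.0, 0.0
    for eps in candidates:
        pred = [1 if p < eps else 0 for p in probs]  # low density => outlier
        tp = sum(p == 1 and y == 1 for p, y in zip(pred, labels))
        fp = sum(p == 1 and y == 0 for p, y in zip(pred, labels))
        fn = sum(p == 0 and y == 1 for p, y in zip(pred, labels))
        if tp:
            prec, rec = tp / (tp + fp), tp / (tp + fn)
            f1 = 2 * prec * rec / (prec + rec)
            if f1 > best_f1:
                best_f1, best_eps = f1, eps
    return best_eps, best_f1

mu, var = fit_diag_gaussian(train)
probs = [pdf(x, mu, var) for x in val_x]
eps, f1 = best_epsilon(probs, val_y)
```

On this toy data the far-off pixel gets essentially zero density, so the sweep finds a threshold that separates it perfectly (F1 = 1.0); on real image data, as in the article, the best F1 is typically well below 1.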
# Any open or closed subset of a locally compact space is also locally compact [duplicate]

I found this question in an exercise in a book:

Check that any open or closed subset of a locally compact space is also locally compact.

The definition of locally compact in the book is the following: a Hausdorff space is locally compact if for each point there is a compact neighborhood. There is no other mention of this concept in the entire book, so I assume that a subset is locally compact if, seen as a topological subspace, it is locally compact (I'm not sure if this is the intended use of the concept for subsets). But then it would seem trivial that any subset of a locally compact space is also locally compact, because any closed set and any open set of a subspace is induced by closed and open sets of the whole space. In other words: I can't see any reason to think that open or closed subspaces behave differently from any other subspace as locally compact spaces. Is my reasoning correct? In other words: is it possible that a subspace of a locally compact space is not locally compact?

• The real line is locally compact. The subspace of rational numbers (which is neither open nor closed) is not locally compact. – Andreas Blass May 18 '18 at 0:43

If $K$ is compact then in general $K \cap Y$ will not be compact for a subspace $Y$ (and yes, the subspace topology is meant here), while being a neighbourhood is inherited in that way. You can find work-arounds for the closed and open subspace cases, though. The rational numbers $\mathbb{Q}$ are not locally compact, even though the real line $\mathbb{R}$ is locally compact. While the proof of the statement is rather trivial, openness and closedness are used. Try writing out the proof more carefully to see where those properties come into play.
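Since the answer only hints at where the hypotheses are used, here is one way the proof can be written out (a sketch in the direction the answer suggests, not taken verbatim from the thread):

```latex
Let $X$ be locally compact Hausdorff.

\textbf{Closed subsets.} Let $A \subseteq X$ be closed and $a \in A$. Choose a
compact neighbourhood $K$ of $a$ in $X$. Then $K \cap A$ is a neighbourhood of
$a$ in the subspace $A$, and it is compact because it is a \emph{closed} subset
of the compact set $K$. This is exactly where closedness of $A$ is used.

\textbf{Open subsets.} Let $U \subseteq X$ be open and $u \in U$. In a locally
compact Hausdorff space every point has a neighbourhood base of compact sets,
so there is a compact neighbourhood $K$ of $u$ with $K \subseteq U$; then $K$
is also a compact neighbourhood of $u$ in the subspace $U$. Openness of $U$
guarantees that $K$ remains a neighbourhood of $u$ in the subspace topology.
```

For an arbitrary subspace such as $\mathbb{Q} \subseteq \mathbb{R}$ neither argument goes through, which matches the counterexample in the comments.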
# Integrating (5x)/(3x^2+5) dx

1. Nov 2, 2012

### beaf123

Hi all. I am having Calculus 1 this year. We are using a book called Thomas Calculus. I think it's a lot of fun, but I have to work very hard since there is basic stuff, like trigonometry, that I'm really bad at. Since I work so much with math I thought it could be fun and helpful to talk with other math people in here :-)

To the question:

∫ (5x)/(3x^2+5) dx = ∫ (1/(3x^2+5)) * 5x dx

Integration by parts gives:

(5x) ln(3x^2+5) - 5 ∫ ln (3x^2+5)dx

Not sure how to calculate the last integral. Not sure about anything here really..

2. Nov 2, 2012

### hedipaldi

Compute by substitution: u = 3x^2 + 5

3. Nov 3, 2012

A new question: With u=3x^2+5 the answer is 5/6*log(3*x^2+5), but using symbolics in Matlab the answer is 5/6*log(x^2+5/3), and that's using u=x^2+5/3. Why are there different answers? As 5/6*log(3*x^2+5) = 5/6*log(x^2+5/3)/log(3) <> 5/6*log(x^2+5/3)

4. Nov 3, 2012

### SammyS

Staff Emeritus

The two answers differ only by a constant. Remember the constant of integration?

$\displaystyle \frac{5}{6}\log(3x^2+5)=\frac{5}{6}\log\left(3 \left(x^2+\frac{5}{3}\right)\right)$
$\displaystyle =\frac{5}{6}\log(3)+\frac{5}{6}\log\left(x^2+\frac{5}{3}\right)$

5. Nov 3, 2012

### beaf123

Yes, of course. I should have thought of that. Would you get the same answer using integration by parts? If the exercise looked like this instead: ∫ (5x)/(3x^2+4x+5) dx, then do you have to use integration by parts?

6. Nov 3, 2012

### tiny-tim

welcome to pf!

hi beaf123! welcome to pf!

(try using the X2 button just above the Reply box)

Integrating by parts, your u would be the whole thing, and your v would be 1 (your line starting "(5x) ln(3x^2+5) …" was wrong).

No: write the integrand as A(6x+4)/(3x^2+4x+5) + B/(3x^2+4x+5), and do two different substitutions.

7. Nov 6, 2012

Thank you :)
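The two answers in the thread can also be checked numerically: differentiating each claimed antiderivative should recover the integrand, and their difference should be the constant (5/6) log 3 that SammyS identifies. A small stdlib-only Python sketch:

```python
import math

def f(x):
    """The integrand 5x / (3x^2 + 5)."""
    return 5 * x / (3 * x ** 2 + 5)

def F(x):
    """Antiderivative from the substitution u = 3x^2 + 5."""
    return 5 / 6 * math.log(3 * x ** 2 + 5)

def G(x):
    """Antiderivative reported by Matlab, from u = x^2 + 5/3."""
    return 5 / 6 * math.log(x ** 2 + 5 / 3)

h = 1e-6
for x in (-2.0, -0.5, 0.3, 1.0, 4.0):
    # both F and G differentiate back to the integrand (central difference)...
    assert abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) < 1e-6
    assert abs((G(x + h) - G(x - h)) / (2 * h) - f(x)) < 1e-6
    # ...and they differ by the constant (5/6) log 3, independent of x
    assert abs((F(x) - G(x)) - 5 / 6 * math.log(3)) < 1e-12
```

This is just a sanity check, not a derivation: the symbolic step is still the substitution u = 3x^2 + 5, du = 6x dx.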
# Archimedes Principle - Example 5

The density and mass of a metal block are $5.0\times10^{3}\ \text{kg m}^{-3}$ and $4.0\ \text{kg}$ respectively. Find the upthrust that acts on the metal block when it is fully immersed in water. [Density of water = $1000\ \text{kg m}^{-3}$]

$V=\dfrac{m}{\rho}=\dfrac{4}{5.0\times10^{3}}=0.0008\ \text{m}^{3}$

$F=\rho Vg=(1000)(0.0008)(10)=8\ \text{N}$
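The same two-step computation (volume from mass and density, then upthrust from the displaced fluid) as a small Python sketch, using g = 10 m/s^2 as in the worked example:

```python
RHO_WATER = 1000.0   # kg/m^3
G = 10.0             # m/s^2, the value used in the worked example

def upthrust(mass, density, rho_fluid=RHO_WATER, g=G):
    """Buoyant force on a fully immersed body: F = rho_fluid * V * g, with V = m / rho."""
    volume = mass / density
    return rho_fluid * volume * g

# metal block: density 5.0e3 kg/m^3, mass 4.0 kg  ->  upthrust 8 N
assert abs(upthrust(4.0, 5.0e3) - 8.0) < 1e-9
```

Note the upthrust depends on the fluid's density, not the block's; the block's density only enters through the displaced volume.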