Efficient rank aggregation using partial data

The need to rank items based on user input arises in many practical applications such as elections, group decision making and recommendation systems. The primary challenge in such scenarios is to decide on a global ranking based on partial preferences provided by users. The standard approach to address this challenge is to ask users to provide explicit numerical ratings (cardinal information) of a subset of the items. The main appeal of such an approach is the ease of aggregation. However, the rating scale as well as the individual ratings are often arbitrary and may not be consistent from one user to another. A more natural alternative to numerical ratings requires users to compare pairs of items (ordinal information). On the one hand, such comparisons provide an “absolute” indicator of the user’s preference. On the other hand, it is often hard to combine or aggregate these comparisons to obtain a consistent global ranking. In this work, we provide a tractable framework for utilizing comparison data as well as first-order marginal information (see Section 2) for the purpose of ranking. We treat the available information as partial samples from an unknown distribution over permutations. We then reduce ranking problems of interest to performing inference on this distribution. Specifically, we consider the problems of (a) finding an aggregate ranking of n items, (b) learning the mode of the distribution, and (c) identifying the top k items. For many of these problems, we provide efficient algorithms to infer the ranking directly from the data without the need to estimate the underlying distribution. In other cases, we use the Principle of Maximum Entropy to devise a concise parameterization of a distribution consistent with observations using only O(n^2) parameters, where n is the number of items in question. We propose a distributed, iterative algorithm for estimating the parameters of the distribution. 
We establish the correctness of the algorithm and identify its rate of convergence explicitly.
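The paper's algorithms are not reproduced here, but as a generic illustration of aggregating ordinal comparison data into a global ranking, a Copeland-style net-win score can be sketched (function and variable names are ours):

```python
from collections import defaultdict

def aggregate_ranking(comparisons):
    """Rank items from pairwise comparisons by net wins (Copeland-style score).

    `comparisons` is a list of (winner, loser) pairs, i.e. samples of
    ordinal preference data.
    """
    score = defaultdict(int)
    for winner, loser in comparisons:
        score[winner] += 1
        score[loser] -= 1
    # Highest net-win score first
    return sorted(score, key=lambda item: -score[item])

votes = [("a", "b"), ("a", "c"), ("b", "c"), ("a", "b")]
print(aggregate_ranking(votes))  # ['a', 'b', 'c']
```

This simple score is not the paper's method; it only illustrates the kind of input (pairwise comparisons) and output (a global ranking) the abstract discusses.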
{"url":"https://devavrat.mit.edu/publications/efficient-rank-aggregation-using-partial-data/","timestamp":"2024-11-12T13:42:29Z","content_type":"text/html","content_length":"62559","record_id":"<urn:uuid:bf3e888f-986d-459a-89ee-9f75714d8bd1>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00574.warc.gz"}
1.2. Algorithms

If problem solving is a central part of computer science, then the solutions that you create through the problem-solving process are also important. In computer science, we refer to these solutions as algorithms. An algorithm is a step-by-step list of instructions that, if followed exactly, will solve the problem under consideration. For example, an algorithm to compute the area of a circle given its radius might look like this:

Algorithm Example 1 (English)
1. Ask for radius
2. Compute area by squaring radius and multiplying the result by pi
3. Display the computed area

Notice that this algorithm consists of a set of numbered steps. It is written in English for ease of understanding. Although simple algorithms are easily understood when written in English, more complicated algorithms need more precise notation. For improved precision, algorithms are often written in pseudocode. Pseudocode is a notation that is more precise than English but generally not as precise as a programming language. The same algorithm expressed in pseudocode might look something like this:

Algorithm Example 2 (Pseudocode)
1. Ask for radius
2. let area = (radius^2) × π
3. Display area

Note how the pseudocode example expresses step 2 more precisely, specifying the formula in mathematical terms. Our goal in computer science is to take a problem and develop an algorithm that can serve as a general solution. Once we have such a solution, we can use our computer to automate its execution using programming. Programming is a skill that allows a computer scientist to take an algorithm and represent it in a notation (a program) that can be followed by a computer. A program is written in a programming language such as Python, the language you will learn in this book. To help you understand the difference between an algorithm and a program, consider this program to compute the area of a circle. A program is an algorithm expressed in a programming language. 
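The program the text refers to did not survive extraction; a minimal Python version that follows the three steps of the algorithm might look like this (the radius is hard-coded here so the example runs on its own; the book's version would ask the user with input()):

```python
import math

# Step 1: get the radius (hard-coded here; the book's version asks the user)
radius = 3.0
# Step 2: compute the area by squaring the radius and multiplying by pi
area = math.pi * radius ** 2
# Step 3: display the computed area
print("The area is", area)
```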
We might also say that a program is an implementation of an algorithm. In this example, both the algorithm and the program have three steps. The first step gets some input from the user and converts the input into something the computer can do math with; the second step performs a calculation using the information obtained in the first step; and the final step displays the result to the user. Even though we haven’t covered any details of Python, hopefully you can see the correspondence between the steps of the algorithm, which could be followed by a human (but not executed by a computer), and the steps of the program, which can be executed by a computer (try executing this one using the Run button). Algorithms are important because the process of solving a problem through programming often begins by designing an algorithm. The programmer often expresses the algorithm in pseudocode, then converts the algorithm to a program for the computer to execute. In the next section, you will learn how to execute Python programs on a computer.
{"url":"https://runestone.academy/ns/books/published/fopp/GeneralIntro/Algorithms.html","timestamp":"2024-11-14T11:41:36Z","content_type":"text/html","content_length":"26465","record_id":"<urn:uuid:48b56c5e-37da-45ce-94a7-d0049afe3747>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00558.warc.gz"}
Part Two: Introduction

In the second part of this book we are aiming to:
1. deepen our understanding of structural recursion, by using it in new contexts;
2. learn how to manipulate collections of data using the standard methods map, filter, flatMap, and foldLeft;
3. define and use custom data structures of a particular kind known as algebraic data types; and
4. learn a model for understanding data layout in memory, known as boxes and arrows diagrams.

We'll encounter these topics in the context of data science. Data science is the field concerned with finding interesting things in data. It brings together elements of computing, statistics, and graphic design (in the form of visualization). Data science allows us to address questions about the world such as TODO: insert examples In this part we'll use a variety of data sets to explore these questions. If you have particular data you're interested in exploring, I encourage you to do so in this part. There is no better way to learn than to work on questions that are meaningful to you. This part assumes little experience with statistics. When we meet statistical concepts, we'll approach them from a computational, rather than mathematical, perspective. This means we'll get the computer to do the work for us, and we'll avoid a lot of the fancy mathematics that is typically found in statistics courses. I think this leads to a clearer understanding of these topics (particularly if you are a programmer!) To work through this part of the book, use the creative-scala-data-template from GitHub.
{"url":"https://www.creativescala.org/creative-scala/part-two-introduction.html","timestamp":"2024-11-07T00:02:58Z","content_type":"text/html","content_length":"31111","record_id":"<urn:uuid:5b2f432a-8671-4083-b8db-d439a8599e77>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00462.warc.gz"}
Stability of the integral control of stable nonlinear systems

PI controllers are the most widespread type of controllers and there is an intuitive understanding that if their gains are sufficiently small and of the correct sign, then they 'always' work. In this paper we try to give some rigorous backing to this claim, under specific assumptions. Let P be a nonlinear system described by ẋ = f(x, u), y = g(x), where the state trajectory x takes values in R^n, u and y are scalar and f, g are of class C^1. We assume that there is a Lipschitz function Ξ : [u_min, u_max] → R^n such that for every constant input u_0 ∈ [u_min, u_max], Ξ(u_0) is an exponentially stable equilibrium point of P. We also assume that G(u) = g(Ξ(u)), which is the steady-state input-output map of P, is strictly increasing. Denoting y_min = G(u_min) and y_max = G(u_max), we assume that the reference value r is in (y_min, y_max). Our aim is that y should track r, i.e., y → r as t → ∞, while the input of P is only allowed to be in [u_min, u_max]. For this, we introduce a variation of the integrator, called the saturating integrator, and connect it in feedback with P in the standard way, with gain k > 0. We show that for any small enough k, the closed-loop system is (locally) exponentially stable around an equilibrium point (Ξ(u_r), u_r), with a 'large' region of attraction X_T ⊂ R^n × [u_min, u_max]. When the state (x(t), u(t)) of the closed-loop system converges to (Ξ(u_r), u_r), the tracking error r - y tends to zero. The compact set X_T can be made larger by choosing a larger parameter T > 0, resulting in a smaller k. 
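As an illustration of the setup (not taken from the paper), here is a toy simulation with a scalar plant chosen so that the abstract's assumptions hold; the saturating integrator is modelled as an ordinary integrator whose state is clipped to [u_min, u_max], and the dynamics are stepped with forward Euler:

```python
# Toy plant for illustration only: xdot = -x - x^3 + u, y = x.
# Its steady-state map G(u) solves x + x^3 = u, which is strictly increasing,
# so the assumptions of the abstract are satisfied.
def f(x, u):
    return -x - x**3 + u

u_min, u_max = 0.0, 2.0
r = 0.5                    # reference; u_r solves u = r + r^3 = 0.625
k = 0.5                    # small integrator gain
dt, steps = 0.01, 20000    # 200 time units of simulation

x, u = 0.0, 0.0
for _ in range(steps):
    x += dt * f(x, u)
    # Saturating integrator: integrate the tracking error r - y,
    # clipped to the allowed input range.
    u = min(u_max, max(u_min, u + dt * k * (r - x)))

print(round(x, 3), round(u, 3))  # y -> r = 0.5 and u -> u_r = 0.625
```

With this small gain the closed loop settles at (Ξ(u_r), u_r), as the theorem predicts for the true saturating-integrator feedback.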
Publication series: 2016 IEEE International Conference on the Science of Electrical Engineering, ICSEE 2016
Conference: ICSEE 2016, Eilat, Israel, 16/11/16 → 18/11/16
{"url":"https://cris.tau.ac.il/en/publications/stability-of-the-integral-control-of-stable-nonlinear-systems","timestamp":"2024-11-10T05:27:47Z","content_type":"text/html","content_length":"53868","record_id":"<urn:uuid:3ddbf382-0f39-4b7f-9939-1b88d4cd3769>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00193.warc.gz"}
How to calculate the number of calories you burn doing anything, from running to sex

• Every activity has a value called a "MET value," which measures the energy required for that activity.
• Multiplying the MET value by your weight in kilograms tells you the calories burned per hour.
• You can look up research-backed MET values on the Compendium of Physical Activities website.

Throughout the day, everything we do burns calories. Some things, like sitting, keep us at our resting rate. Vigorous activity can burn more than ten times as much energy. And even though calorie-counting is not always the best way to lose weight, it can be useful, or simply fun, to know whether that post-work soccer game is enough to burn off the doughnuts your co-worker brought in today. Fortunately, there's a science-backed way to calculate how many calories you burn doing almost anything. Yes, there are apps out there that can help you figure out how many calories you burn on your run or your bike ride, but this goes much deeper than that. Want to know how many calories you burn backpacking, milking a cow (by hand), cleaning a church, or engaging in an hour of strenuous sex? There's data that can help you calculate that, along with the calories burned during all sorts of other activities. Researchers have studied the amount of energy required to engage in all kinds of activities over the years. To make it easier for other scientists to conduct major studies, researchers from the National Institutes of Health (NIH) and Arizona State University have compiled updated versions of that data on a website, the Compendium of Physical Activities. Anyone can go to that site, look up an activity, and calculate how many calories they will burn doing something. 
It just takes some simple math. Here's how it works: the calculation relies on a key value known as a MET, which stands for metabolic equivalent. One MET is "roughly equivalent to the energy cost of sitting quietly," according to the Compendium, and can be treated as 1 kcal/kg/hour. Since sitting quietly is one MET, a 70 kg person would burn 70 calories (kcal) if they sat quietly for an hour. If an activity's MET value were two, that same person would burn 140 calories in an hour. On the Compendium's site, you can look up a huge number of activities. We've included calorie counts for some of the most popular activities in another article, but if you want to make the calculation yourself, here's how it works. First, calculate your body weight in kilograms: 1 kg is 2.2 pounds, but you can always type "X pounds to kg" into Google, with X being your weight, to get a number. Next, look up your activity on the Compendium. There's a dropdown menu on the website labelled "Activity Categories." Under that menu you'll see a long list of categories, starting with bicycling and ending with volunteer activities. If you open a category, you can see the activities that fall under it. For example, if you open sports (category 15), you can then select an activity. There are multiple listings for many activities; there's a difference between boxing in a ring and boxing by hitting a punching bag, for example. Look for the MET value from the 2011 Compendium, since it's the most up to date. If the MET value is blue, there are published studies supporting that value. If it's red, it's an estimate. Here's your equation: the MET value multiplied by your weight in kilograms gives you the calories burned per hour (MET * weight in kg = calories/hour). 
If you only want to know how many calories you burned in a half hour, divide that number by two. If you want to know about a quarter hour, divide that number by four. So if a 175-pound person like myself were to play competitive soccer (MET value of 10) for one hour, the equation for calories burned would be: 79.38 kg * 10 = 793.8 calories/hour. There are a few caveats. Everyone's resting metabolism varies somewhat; some people of the same weight naturally burn more or fewer calories, depending on a number of factors, and these differences can be significant. As the Compendium website explains, this kind of calculation doesn't account for differences caused by body mass, body fat, age, sex, efficiency of movement, and conditions like high altitude that affect the energy needed for an activity. Additionally, these calculations are based only on time spent in motion, so if half of my "competitive soccer" game was really just standing around, I'd have to divide that number in two and add in the calories I burned standing around to find out how much energy I actually used in that hour. That said, this is the easiest way to get a science-backed estimate of the calories burned in an activity. And when you look through the activities, you'll find all sorts of fun items on the list; it's worth taking some time to explore.
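The formula is easy to script; a minimal sketch (function and parameter names are ours, using the article's rounded 2.2 lb/kg conversion):

```python
def calories_burned(met, weight_lb, minutes=60):
    """Calories burned = MET value * weight in kg * hours of activity."""
    weight_kg = weight_lb / 2.2        # the article's rough conversion factor
    return met * weight_kg * (minutes / 60)

# The article's example: a 175-pound person playing competitive soccer
# (MET 10) for one hour. Slightly above the article's 793.8 kcal figure,
# which used the more precise 2.20462 lb/kg conversion (79.38 kg).
print(round(calories_burned(10, 175), 1))  # 795.5
```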
{"url":"https://kassa-kogalym.ru/2019/12/31/how-exactly-to-calculate-the-quantity-of-calories/","timestamp":"2024-11-01T23:01:03Z","content_type":"text/html","content_length":"90857","record_id":"<urn:uuid:0807287b-4c38-4766-a0d8-d1caaaacedb5>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00292.warc.gz"}
Data Science (part 1)

"The eternal mystery of the world is its comprehensibility. The fact that it is comprehensible is a miracle." (A. Einstein)

Data Science (DS) originates from statistics, which in turn originates from the great works of mathematicians: starting with the work of Arab scholars in the 8th century AD, continuing with the scientists of the Renaissance and late Renaissance (Isaac Newton, John Graunt, Blaise Pascal, Pierre de Fermat, etc.), and ending in the 19th and 20th centuries (Ronald Fisher, Egon Pearson, etc.). What is the main difference between DS and statistics? It comes down to two things:

1) Historically, statistics was limited by computing power, so it was based on data samples, and scientists were forced to draw conclusions about a general population from those samples. Modern Data Science has practically no limitations in computing power, so we can abandon statistical sampling and analyze and draw conclusions from the entire body of data.

2) Before modern high-performance computers and the corresponding application of AI and machine-learning algorithms, scientists were forced to first put forward a hypothesis and then conduct observations in accordance with it. They had to carry out significant work to identify dependencies and confirm the hypothesis, which, given the severe limitations in computing power, did not always turn out to be correct. In today's world, a DS professional can use appropriate algorithms and models to identify dependencies programmatically. The cardinal difference of modern DS is that a person can entrust almost any routine work to a machine, while a DS specialist is required to understand the basic algorithms, models, libraries and capabilities of information systems.

In short: at present, there are practically unlimited resources for processing and working with any data. 
Modern companies, institutes and other large organizations have now learned how to accumulate and store large amounts of data, although not always in a structured form. As a result, we see huge potential in working with data to obtain effective models that allow us to achieve a large number of goals. Namely, with modern computing and advanced algorithms, we can:

A) discover hidden trends in large datasets
B) identify opportunities to achieve goals by grouping and clustering data
C) take advantage of trends to make predictions
D) calculate the probability of any possible outcome
E) get accurate results quickly and with a minimum of human time

If you are interested in deploying a DS Competence Center in your company, I will be happy to help implement such projects. Please contact me through my website: http://akonnov.ru/ or through my Telegram channel: https://t.me/biz_in
{"url":"https://akonnov.ru/tpost/0jjzz6avp1-data-science","timestamp":"2024-11-03T03:03:52Z","content_type":"text/html","content_length":"39479","record_id":"<urn:uuid:26113a2f-bf37-4355-a757-139a33ff8eb8>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00649.warc.gz"}
Meet Mathematician Aaron Lauda

Intended Audience: Everyone, and especially teachers who want to show their students a mathematician explaining the motivation behind their own research.

In this episode we meet Aaron Lauda, a mathematician from the University of Southern California, who shows us how to represent complicated expressions and equations using pictures. Enjoy! In fact, Aaron has provided more artwork at his website. Go check it out. Aaron explains the motivation behind his paper with Mikhail Khovanov, “A diagrammatic approach to categorification of quantum groups I.” Here is the abstract to their paper: To each graph without loops and multiple edges we assign a family of rings. Categories of projective modules over these rings categorify $U_q^-(\mathfrak{g})$, where $\mathfrak{g}$ is the Kac-Moody Lie algebra associated with the graph. While the video above is for a general audience, Aaron Lauda’s paper is not (it’s written for other mathematicians). However, high school students who think they are potential math geniuses may still enjoy looking at it to see what advanced theorems and proofs look like. If you enjoy what you see, please be sure to Like our Facebook page.

©2015 Scott Baldridge and David Shea Vela-Vick
Supported by NSF CAREER grant DMS-0748636 and NSF grant DMS-1249708
{"url":"https://vela-vick.com/meet-mathematician-aaron-lauda","timestamp":"2024-11-14T15:12:18Z","content_type":"text/html","content_length":"44066","record_id":"<urn:uuid:e97652bc-ced1-4dd0-94a4-dfbc9386032f>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00331.warc.gz"}
Lecture 9

ECSE-2500, Engineering Probability, Spring 2010, Rensselaer Polytechnic Institute

1. Some review questions
   1. Sampling with replacement and with ordering: Each day I eat lunch at either the Union, McDonald's, Brueggers, or Sage. How many ways can I eat lunch over 5 days next week?
   2. Sampling w/o replacement and w/o order: How many different possible teams of 3 people can you pick from a group of 5?
   3. Sampling w/o replacement and with order: 5 people run a race for gold, silver, bronze. How many ways can the medals be won?
   4. Binomial: A coin falls heads with p = .6. You toss it 3 times. What's the prob of 2 heads and 1 tail?
   5. Multinomial: You play roulette 3 times. Once you got red, once black, and once 0 or 00. What was the prob?
   6. Conditional prob: You have 2 dice, one 6-sided and one 12-sided. You pick one of them at random and throw it w/o looking; the total is 2. What's the probability that you threw the 6-sided die?
   7. Independence: Consider {1,2,3,...,12}. Is the set of even numbers independent of the set of multiples of 3? What if we use {1,2,...,10}?
2. Start chapter 4
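Several of these review questions can be checked with a few lines of Python (the worked answers here are ours, not from the notes):

```python
from fractions import Fraction
from math import comb, perm

# 1. With replacement, with order: 4 lunch spots over 5 days
print(4 ** 5)                      # 1024

# 2. Without replacement, without order: teams of 3 from 5 people
print(comb(5, 3))                  # 10

# 3. Without replacement, with order: gold/silver/bronze among 5 runners
print(perm(5, 3))                  # 60

# 4. Binomial: exactly 2 heads in 3 tosses, p = 3/5
p = Fraction(3, 5)
p_two_heads = comb(3, 2) * p**2 * (1 - p)
print(p_two_heads)                 # 54/125

# 6. Conditional probability (Bayes): pick a d6 or d12 at random, roll it,
#    and the total is 2, so the chosen die showed a 2.
prior = Fraction(1, 2)
posterior = (prior * Fraction(1, 6)) / (
    prior * Fraction(1, 6) + prior * Fraction(1, 12))
print(posterior)                   # 2/3

# 7. Independence on {1,...,12}: is P(even and mult of 3) = P(even)*P(mult of 3)?
s = range(1, 13)
ev = {x for x in s if x % 2 == 0}
m3 = {x for x in s if x % 3 == 0}
print(Fraction(len(ev & m3), 12) == Fraction(len(ev), 12) * Fraction(len(m3), 12))
# True: the events are independent on {1,...,12}
```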
{"url":"https://wrfranklin.org/pmwiki/pmwiki.php/EngProbSpring2010/Lecture9","timestamp":"2024-11-06T12:09:28Z","content_type":"application/xhtml+xml","content_length":"11211","record_id":"<urn:uuid:63108fce-75bf-40a1-a556-134b4170a6da>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00694.warc.gz"}
Understanding Mathematical Functions: Which Of The Following Is A Line

Introduction to Mathematical Functions

In mathematics, functions are essential tools used to describe relationships between different quantities. By understanding functions, mathematicians can analyze and predict various real-world phenomena. Functions play a crucial role in fields such as physics, economics, engineering, and more.

A. Definition and Importance of Functions in Mathematics

Functions can be defined as a rule that assigns to each element in a set (the domain) exactly one element in another set (the range). Essentially, functions take an input value and produce a corresponding output value. This relationship is denoted mathematically as f(x) = y, where x represents the input and y represents the output. Functions are vital in mathematics because they allow us to model various relationships and make predictions based on the data we have. They provide a systematic way to organize and analyze data, making complex problems more manageable.

Overview of Different Types of Functions

There are countless types of functions in mathematics, each serving a specific purpose. Some common types of functions include linear functions, quadratic functions, exponential functions, trigonometric functions, and more. Each type of function has its own unique characteristics and properties. Understanding the different types of functions is crucial for solving mathematical problems and interpreting data accurately. By knowing how each function behaves, mathematicians can choose the most appropriate function for a given situation.

Setting the Stage: Focusing on Linear Functions

Linear functions are one of the simplest and most fundamental types of functions in mathematics. A linear function is a function that can be represented by a straight line on a graph. The general form of a linear function is f(x) = mx + b, where m is the slope of the line and b is the y-intercept. 
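A minimal sketch (function name and values are ours) of evaluating a linear function f(x) = mx + b at a few inputs, showing its constant rate of change:

```python
def f(x, m=2, b=3):
    """Linear function f(x) = m*x + b, here with slope 2 and y-intercept 3."""
    return m * x + b

# Constant rate of change: each unit step in x changes f(x) by the slope m.
values = [f(x) for x in range(4)]
print(values)                  # [3, 5, 7, 9]
print(values[1] - values[0])   # 2, the slope
```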
Linear functions have a constant rate of change and represent a direct relationship between the input and output values. They are commonly used to represent proportional relationships and can be found in various real-world scenarios, such as calculating cost, revenue, and distance.

Key Takeaways
• Line functions are straight and have a constant slope.
• They can be represented by the equation y = mx + b.

Characteristics of Linear Functions

Linear functions are a fundamental concept in mathematics that describe relationships between two variables in a straight line. Understanding the characteristics of linear functions is essential for analyzing and interpreting data in various fields.

A. Definition of a linear function

A linear function is a mathematical equation that can be represented in the form y = mx + b, where y represents the dependent variable, x represents the independent variable, m is the slope of the line, and b is the y-intercept. The slope m indicates the constant rate of change in the relationship between the variables.

Key properties: constant rate of change, straight-line graph

One of the key properties of linear functions is the constant rate of change. This means that for every unit increase in the independent variable, the dependent variable changes by a consistent amount determined by the slope m. Another characteristic of linear functions is that they produce straight-line graphs when plotted on a coordinate plane.

Real-world examples of linear relationships

Linear functions can be observed in various real-world scenarios. For example, the relationship between time and distance traveled at a constant speed is a linear function. The speed of a moving object determines the slope of the line, while the initial distance at time zero represents the y-intercept. 
Similarly, the relationship between temperature and altitude in the atmosphere can be modeled using a linear function, where the slope represents the rate of change in temperature with respect to altitude.

The Standard Form of a Line Function

Understanding mathematical functions is essential in various fields, from engineering to economics. One common type of function is a line function, which represents a straight line on a graph. In this chapter, we will delve into the standard form of a line function, how to identify it, and how to convert between different forms.

Explanation of the standard form equation: Ax + By = C

The standard form of a line function is represented by the equation Ax + By = C, where A, B, and C are constants. In this form (with B ≠ 0), the slope of the line is -A/B and the y-intercept is C/B. This form is useful for representing lines in a general way, without specifying the slope or y-intercept explicitly. By manipulating the coefficients A, B, and C, we can easily transform the equation to other forms, such as slope-intercept form.

How to identify a line function from its equation

Identifying a line function from its equation involves recognizing the standard form Ax + By = C. To do this, look for the presence of two variables, x and y, with coefficients A and B, respectively. The constants A, B, and C are conventionally integers, and A and B should not both be zero. For example, the equation 2x - 3y = 6 is in standard form and represents a line function. By comparing the equation to the standard form, we can easily identify it as a line function.

Conversion between slope-intercept and standard forms

Converting between slope-intercept form (y = mx + b) and standard form (Ax + By = C) is a common task in algebra. To convert from slope-intercept form to standard form, rearrange the equation to match the standard form by moving all terms to one side of the equation. 
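The conversion in both directions is mechanical; a small sketch (helper names are ours, not from the text):

```python
def to_standard(m, b):
    """Rewrite y = m*x + b as A*x + B*y = C by moving the x term left."""
    return (-m, 1, b)                  # (A, B, C)

def to_slope_intercept(A, B, C):
    """Solve A*x + B*y = C for y (requires B != 0): y = (-A/B)*x + C/B."""
    return (-A / B, C / B)             # (m, b)

print(to_standard(2, 3))               # (-2, 1, 3), i.e. -2x + y = 3
print(to_slope_intercept(-2, 1, 3))    # (2.0, 3.0), back to y = 2x + 3
```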
For example, the equation y = 2x + 3 can be converted to standard form by subtracting 2x from both sides to get -2x + y = 3. This equation is now in standard form. Conversely, to convert from standard form to slope-intercept form, solve for y in terms of x by isolating y on one side of the equation. This will give you the equation in the form y = mx + b, where m is the slope and b is the y-intercept. Understanding the standard form of a line function, how to identify it, and how to convert between different forms is essential for working with linear equations. By mastering these concepts, you can easily manipulate and analyze line functions in various mathematical contexts.

Graphing Linear Functions

Understanding mathematical functions is essential in various fields, from engineering to economics. One common type of function is the linear function, which represents a straight line on a graph. Graphing linear functions involves plotting points and drawing lines to visualize the relationship between variables.

Plotting points and drawing lines

When graphing a linear function, it is important to plot points that satisfy the equation of the function. For example, if the linear function is y = 2x + 3, you can choose different values of x to calculate corresponding values of y. Plotting these points on a graph and connecting them with a straight line will give you the visual representation of the function.

The importance of the slope and y-intercept in graphing

In a linear function of the form y = mx + b, the slope (m) determines the steepness of the line, while the y-intercept (b) is the point where the line intersects the y-axis. Understanding the significance of the slope and y-intercept is crucial in graphing linear functions accurately.

Tools and software that can aid in graphing linear functions

There are various tools and software available that can assist in graphing linear functions efficiently. 
Graphing calculators, such as the TI-84, allow you to input the function and plot it on a graph with ease. Online graphing tools like Desmos and GeoGebra also provide interactive platforms for graphing linear functions and exploring their properties.

Distinguishing Line Functions from Other Functions

When it comes to mathematical functions, it is important to be able to distinguish between different types of functions. One common type of function is a line function, which is a function that can be represented by a straight line on a graph. In order to understand line functions better, it is helpful to compare them with other types of functions, such as nonlinear functions like quadratic and exponential functions.

A. Comparison with nonlinear functions (e.g., quadratic, exponential)

Nonlinear functions, such as quadratic and exponential functions, differ from line functions in that they do not produce straight lines on a graph. Quadratic functions, for example, produce a curve that forms a parabola, while exponential functions produce a curve that grows exponentially. By understanding the visual differences between these functions, it becomes easier to identify line functions.

Understanding visual cues from graphs

One of the key ways to distinguish line functions from other functions is by looking at their graphs. Line functions will always produce a straight line on a graph, with a constant slope that does not change. This is in contrast to nonlinear functions, which will produce curves or exponential growth on a graph. By analyzing the visual cues from graphs, it becomes easier to identify line functions.

Utilizing algebraic methods to test function types

In addition to analyzing graphs, algebraic methods can also be used to test the type of function. Line functions can be identified by their linear equations, which take the form y = mx + b, where m represents the slope of the line and b represents the y-intercept. 
By plugging in values for x and y and solving for m and b, it becomes possible to determine if a function is a line function or not.

Real-life Applications of Linear Functions

Linear functions are a fundamental concept in mathematics that have a wide range of real-life applications. Understanding how linear functions work can help us analyze and solve problems in various fields. Let's explore some of the practical applications of linear functions:

A. Linear functions in economics (supply and demand curves)

Linear functions play a crucial role in economics, particularly in the analysis of supply and demand curves. These curves represent the relationship between the price of a product and the quantity demanded or supplied. In economics, linear functions are used to model these relationships and make predictions about market behavior. By understanding linear functions, economists can analyze how changes in price affect demand and supply, and make informed decisions about pricing strategies and market trends.

B. Calculating distance over time in physics

In physics, linear functions are used to calculate distance over time. By plotting distance against time on a graph, we can create a linear function that represents the motion of an object moving at constant speed. This allows us to analyze the speed of the object, as well as predict its future position based on its current velocity. Linear functions in physics help us understand the fundamental principles of motion and dynamics, and are essential for solving problems related to distance, speed, and time.

C. Budgeting and financial planning using linear models

Linear models are commonly used in budgeting and financial planning to analyze income, expenses, and savings. By creating linear functions that represent the relationship between income and expenses, individuals and businesses can make informed decisions about budget allocation and financial goals. 
Linear functions help us track financial trends, forecast future earnings, and optimize spending patterns. By understanding linear models, we can effectively manage our finances and make strategic financial decisions.

Conclusion & Best Practices

Understanding mathematical functions, including line functions, is essential for anyone studying mathematics or related fields. In this chapter, we have discussed the defining aspects of line functions, the importance of practice in mastering function identification, and recommendations for further study and resources.

A Recap of the defining aspects of line functions

• Linear Relationship: Line functions have a constant rate of change, resulting in a straight line when graphed.
• Equation Form: The general form of a line function is y = mx + b, where m is the slope and b is the y-intercept.
• Graphical Representation: Line functions can be easily identified on a graph as a straight line.

Importance of practice in mastering function identification

Identifying different types of mathematical functions, including line functions, requires practice and familiarity with their defining characteristics. By working through various examples and exercises, you can improve your ability to recognize and analyze different types of functions. It is important to practice identifying line functions in different contexts and scenarios to strengthen your understanding and develop your problem-solving skills. Regular practice will help you become more confident in identifying and working with line functions.

Recommendations for further study and resources

• Online Resources: Explore online resources such as math websites, tutorials, and videos that provide explanations and examples of line functions.
• Textbooks and Workbooks: Utilize textbooks and workbooks that focus on functions and algebra to deepen your understanding of line functions.
• Practice Problems: Solve practice problems and exercises related to line functions to reinforce your knowledge and improve your skills. • Seek Help: Don't hesitate to seek help from teachers, tutors, or online forums if you encounter difficulties in understanding line functions.
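The budgeting application described earlier (a linear model of income, expenses, and savings) can be sketched as a linear function of time. All figures below are illustrative assumptions, not data from the text.

```python
# Sketch of a linear budgeting model: balance(t) = start + (income - expenses) * t.
# The starting balance, income, and expenses are illustrative assumptions.

def balance(months, start=500.0, income=3000.0, expenses=2600.0):
    """Projected balance after a number of months of constant net savings."""
    return start + (income - expenses) * months

# Forecast: how many months until a savings target is reached?
target = 2500.0
months_needed = next(t for t in range(120) if balance(t) >= target)
```

Because the net savings rate is constant, the model is a line function of t, and the forecast reduces to finding where the line crosses the target value.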
mrad to sign

Units of measurement use the International System of Units (SI), which provides a standard for measuring the physical properties of matter. A measurement like angle finds use in many places, from education to industry. Whether buying groceries or cooking, units play a vital role in daily life, and hence so do their conversions. unitsconverters.com helps convert different units of measurement, such as mrad to sign, through multiplicative conversion factors. When converting angle, you need a milliradian-to-sign converter that is thorough and still easy to use. Converting mrad to sign is easy: select the units and the value you want to convert. If you encounter any issues converting milliradian to sign, this tool gives the exact conversion of units, along with the formula used in the mrad-to-sign conversion and a table representing the entire conversion.
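As a sketch of the multiplicative conversion factor involved: a sign is an angle of 30° (π/6 rad), so converting milliradians to signs means dividing by the number of milliradians in one sign. The helper below is an illustrative sketch, not the site's own code.

```python
import math

# Sketch: converting milliradians to signs (1 sign = 30 degrees = pi/6 rad).
# Illustrative helper, not code from unitsconverters.com.

MRAD_PER_SIGN = (math.pi / 6) * 1000  # milliradians in one sign, ~523.6

def mrad_to_sign(mrad):
    """Convert an angle from milliradians to signs."""
    return mrad / MRAD_PER_SIGN

# pi radians (a half turn) is 6 signs of 30 degrees each.
half_turn_in_signs = mrad_to_sign(1000 * math.pi)
```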
How do you simplify $-\left( {{\sin }^{2}}x+{{\cos }^{2}}x \right)$?

Hint: To simplify the given expression $-\left( {{\sin }^{2}}x+{{\cos }^{2}}x \right)$, we use the trigonometric identity which says that the sum of the square of sine and the square of cosine is equal to 1. The algebraic form of this identity is ${{\sin }^{2}}x+{{\cos }^{2}}x=1$. Use this identity in the above expression and then simplify further.

Complete step-by-step answer: The trigonometric expression given in the problem is $-\left( {{\sin }^{2}}x+{{\cos }^{2}}x \right)$. It contains sine and cosine terms, and the trigonometric identity involving both is ${{\sin }^{2}}x+{{\cos }^{2}}x=1$. Using this relation in $-\left( {{\sin }^{2}}x+{{\cos }^{2}}x \right)$, the expression in the bracket becomes 1, so $-\left( 1 \right)=-1$. Hence, the trigonometric expression simplifies to -1.

Note: Problems involving squares of sine and cosine are easily solved if we know the trigonometric identities involving them, as shown above: ${{\sin }^{2}}x+{{\cos }^{2}}x=1$. Knowing this identity solves the problem directly.
An alternative way of solving the above expression: instead of remembering the formula ${{\sin }^{2}}x+{{\cos }^{2}}x=1$, if you know ${{\sin }^{2}}x=1-{{\cos }^{2}}x$, then substituting ${{\sin }^{2}}x$ as $1-{{\cos }^{2}}x$ in the above expression we get $\Rightarrow -\left( 1-{{\cos }^{2}}x+{{\cos }^{2}}x \right)$. The positive and negative ${{\cos }^{2}}x$ cancel out, and we are left with $\Rightarrow -\left( 1+0 \right)=-1$. Another similar problem is to simplify $-\left( {{\cos }^{2}}x-{{\sin }^{2}}x \right)$. We know the trigonometric identity for the square of cosine minus the square of sine: ${{\cos }^{2}}x-{{\sin }^{2}}x=\cos 2x$. Using this relation in $-\left( {{\cos }^{2}}x-{{\sin }^{2}}x \right)$ we get $-\left( \cos 2x \right)$. So we have seen the benefit of remembering formulae, which will save you time in the examination.
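Both identities used above can be spot-checked numerically. The sample angles below are arbitrary illustrative choices.

```python
import math

# Sketch: numerically spot-checking the identities used above.
# The sample angles are arbitrary illustrative choices.

def simplified(x):
    """The expression -(sin^2 x + cos^2 x), which should always equal -1."""
    return -(math.sin(x) ** 2 + math.cos(x) ** 2)

checks = [abs(simplified(x) - (-1.0)) < 1e-12 for x in (0.0, 0.7, 2.5)]

# The related identity cos^2 x - sin^2 x = cos(2x):
x = 0.7
double_angle_ok = abs((math.cos(x) ** 2 - math.sin(x) ** 2) - math.cos(2 * x)) < 1e-12
```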
Find the area of a sector of a ring

A ring sector is the part of a circle bounded by an outer and inner arc and two radii connecting the ends of the arcs to the center of the circle. To find the area, you need to know the two radii and the angle of the sector. The formula for calculating the area of a sector of a ring with outer radius R, inner radius r, and central angle α in degrees is: S = (πα/360)(R² − r²).
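The formula above can be sketched as a small helper; the function name and sample radii are illustrative assumptions.

```python
import math

# Sketch: area of a ring (annular) sector with outer radius R, inner
# radius r, and central angle given in degrees. Names are illustrative.

def ring_sector_area(R, r, angle_deg):
    """Area = (pi * angle / 360) * (R^2 - r^2)."""
    if not (R >= r >= 0):
        raise ValueError("require R >= r >= 0")
    return math.pi * angle_deg / 360.0 * (R * R - r * r)

# A 360-degree 'sector' is the whole ring, with area pi * (R^2 - r^2).
full_ring = ring_sector_area(2.0, 1.0, 360.0)
```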
Complex Fractions Worksheet

Simplifying complex fractions: simplify each complex fraction. These fractions can be simplified in one of two ways.

A complex fraction is a fraction where either the numerator, the denominator, or (usually) both contain fractions. To add or subtract fractions you must have a common denominator, so find an LCD (or LCM) for the fractions involved, combine, and simplify the resulting fraction. Also state the x-values that make the expressions undefined.

Fractions really aren't that difficult to master, especially with the support of a wide selection of worksheets. This page includes fractions worksheets for understanding fractions, including modeling, comparing, ordering, simplifying, and converting fractions, and operations with fractions. The worksheets are meant for the study of rational numbers, typically in 7th or 8th grade math (pre-algebra and algebra 1). Math explained in easy language, plus puzzles, games, quizzes, videos, and worksheets, for K-12 kids, teachers, and parents.
Create here an unlimited supply of worksheets for simplifying complex fractions: fractions where the numerator, the denominator, or both are fractions or mixed numbers. This will be illustrated first with integers; then we will consider how the process can be expanded to include expressions with variables.
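As a minimal sketch of the simplification process with integers, a complex fraction whose parts are numeric can be collapsed with exact rational arithmetic. This uses Python's fractions module; the sample values are illustrative.

```python
from fractions import Fraction

# Sketch: simplifying a complex fraction (a/b) / (c/d) with exact
# rational arithmetic. Sample values are illustrative.

def simplify_complex_fraction(numerator, denominator):
    """Divide two Fractions; the result is automatically in lowest terms."""
    return numerator / denominator

# Example: (1/2) / (3/4) simplifies to 2/3.
result = simplify_complex_fraction(Fraction(1, 2), Fraction(3, 4))
```

Dividing by a fraction is the same as multiplying by its reciprocal, which is one of the two standard simplification methods the worksheet refers to.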
Understanding Mathematical Functions: What Is A Defined Function

Introduction: Unlocking the Mysteries of Mathematical Functions

Mathematical functions play a critical role in various fields such as physics, engineering, economics, and many others. Understanding these functions is essential for solving complex problems and analyzing data. In this blog post, we will delve into the concept of defined functions in mathematics, their significance, and how they have evolved over time.

A Defining a mathematical function and its importance in various fields

A mathematical function is a rule that assigns a unique output value to each input value. It is a fundamental building block in mathematics that provides a way to represent relationships between different variables. Functions are used to model real-world phenomena, make predictions, and solve equations. They are essential for understanding patterns, making calculations, and making informed decisions.

B Brief history of how mathematical functions have evolved

The concept of functions dates back to ancient civilizations such as the Babylonians and Greeks. However, the formalization of functions as we know them today began with the work of mathematicians like Leibniz, Euler, and Gauss in the 17th and 18th centuries. The development of calculus and analysis further expanded our understanding of functions and their applications. Today, functions are a central part of modern mathematics and are used in a wide range of disciplines.

C Overview of the discussion points in the blog post

• Defining Mathematical Functions: We will explain what a defined function is and how it is represented mathematically.
• Importance of Mathematical Functions: We will discuss the significance of functions in various fields and their role in problem-solving.
• Evolution of Mathematical Functions: We will explore the history of functions and how they have evolved over time.
• Applications of Mathematical Functions: We will examine practical examples of how functions are used in different fields.

Key Takeaways

• Defined function: clear input-output relationship
• Mathematical expression with specific domain and range
• Examples: linear, quadratic, exponential functions
• Function notation: f(x) = y
• Understanding defined functions is crucial in math and science

Understanding the Basics: What is a Defined Function?

When delving into the world of mathematical functions, one of the fundamental concepts to grasp is that of a defined function. Defined functions play a crucial role in mathematics and are essential for understanding various mathematical concepts and applications. Let's explore the definition and characteristics of a defined function, the differences between defined functions and undefined expressions, and the role of domain and range in defining a function.

A. Definition and Characteristics of a Defined Function

A defined function is a relation between a set of inputs (the domain) and a set of outputs (the range) that assigns exactly one output value to each input value. In simpler terms, a defined function takes an input, performs a specific operation on it, and produces a unique output. The key characteristics of a defined function include:

• Each input in the domain has exactly one corresponding output in the range.
• The function is well-defined and unambiguous, meaning that each input produces a single output.
• The function can be represented by an equation, graph, table, or verbal description.

B. Differences between Defined Functions and Undefined Expressions

It is important to distinguish between defined functions and undefined expressions in mathematics. While defined functions follow the characteristics mentioned above, undefined expressions lack clarity and specificity.
Undefined expressions may arise when attempting to divide by zero, take the square root of a negative number, or encounter other mathematical inconsistencies. In contrast, defined functions provide a clear and consistent mapping between inputs and outputs, ensuring that each input has a unique output.

C. The Role of Domain and Range in Defining a Function

The domain and range are essential components in defining a function and determining its behavior. The domain of a function consists of all possible input values for which the function is defined. It represents the set of values that can be plugged into the function to produce an output. On the other hand, the range of a function includes all possible output values that the function can produce. It represents the set of values that the function can attain as a result of applying the function to the domain. By understanding the domain and range of a function, we can determine the validity of inputs, identify the possible outputs, and analyze the behavior of the function. The domain and range play a crucial role in defining the scope and limitations of a function, ensuring that it operates within specified parameters and produces meaningful results.

Types of Defined Functions in Mathematics

Defined functions in mathematics are essential tools used to describe relationships between variables. These functions can be categorized into different types based on their characteristics and properties. Let's explore some of the main types of defined functions:

A Linear vs Non-linear Functions: Characteristics and examples

Linear Functions: Linear functions are defined by a constant rate of change, resulting in a straight line when graphed. The general form of a linear function is y = mx + b, where m represents the slope of the line and b is the y-intercept. Examples of linear functions include y = 2x + 3 and y = -0.5x + 1.
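The linear form above can be sketched directly in code; the example coefficients mirror the text's samples y = 2x + 3 and y = -0.5x + 1, and the helper name is an illustrative choice.

```python
# Sketch of the linear form y = m*x + b described above. Example
# coefficients mirror the text's samples; names are illustrative.

def linear(m, b):
    """Return a defined function: each input x gets exactly one output."""
    return lambda x: m * x + b

f = linear(2, 3)       # y = 2x + 3
g = linear(-0.5, 1)    # y = -0.5x + 1

# A constant rate of change is what makes these line functions.
rate = f(10) - f(9)
```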
Non-linear Functions: Non-linear functions do not have a constant rate of change and do not form a straight line when graphed. These functions can take various forms, such as quadratic, cubic, exponential, or trigonometric functions. Examples of non-linear functions include y = x^2 (quadratic) and y = e^x (exponential).

B Continuous vs Discrete Functions: Understanding the differences

Continuous Functions: Continuous functions are defined for all real numbers within a given interval. These functions have no breaks, jumps, or holes in their graphs. Examples of continuous functions include y = sin(x) and y = x^3.

Discrete Functions: Discrete functions, on the other hand, are defined only for specific values or points. These functions have gaps in their graphs and are often represented by discrete data points. Examples of discrete functions include the floor function f(x) = ⌊x⌋ and the ceiling function g(x) = ⌈x⌉.

C Special Functions and their applications (e.g., Trigonometric, Logarithmic, Exponential)

Trigonometric Functions: Trigonometric functions such as sine, cosine, and tangent are used to model periodic phenomena and angles in mathematics and physics. These functions have unique properties related to angles and circles.

Logarithmic Functions: Logarithmic functions are the inverse of exponential functions and are used to solve equations involving exponential growth or decay. The general form of a logarithmic function is y = log_b(x), where b is the base of the logarithm.

Exponential Functions: Exponential functions represent growth or decay at a constant rate. These functions have the general form y = a^x, where a is the base of the exponential function. Exponential functions are commonly used in finance, biology, and physics to model exponential growth or decay.

Representation of Functions: Visuals and Equations

Understanding mathematical functions involves interpreting them through various representations, such as graphical visuals and algebraic equations.
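The contrasts above (linear vs non-linear, and the floor/ceiling step functions) can be illustrated with a short sketch; the sample points are illustrative assumptions.

```python
import math

# Sketch contrasting the function types above. Sample points are
# illustrative.

square = lambda x: x ** 2      # non-linear (quadratic)
line = lambda x: 2 * x + 3     # linear

# Discrete-style step functions from the text: floor and ceiling.
floor_vals = [math.floor(x) for x in (1.2, 1.9, -0.5)]
ceil_vals = [math.ceil(x) for x in (1.2, 1.9, -0.5)]

# Linear functions have a constant difference over unit steps;
# quadratics do not.
line_diffs = {line(x + 1) - line(x) for x in range(4)}
square_diffs = {square(x + 1) - square(x) for x in range(4)}
```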
These representations play a crucial role in comprehending the behavior and properties of functions.

A Graphical representation: How it helps in understanding functions

A graphical representation of a function provides a visual depiction of its behavior. By plotting the function on a coordinate plane, you can observe how it changes with different input values. The graph helps in identifying key features such as intercepts, slopes, and trends. For example, a linear function will appear as a straight line on the graph, while a quadratic function will form a parabolic curve. By analyzing the graph, you can gain insights into the relationship between the input and output values of the function.

B Algebraic expression: Interpreting functions through equations

Algebraic expressions represent functions through equations that define the relationship between the input and output variables. These equations provide a symbolic representation of the function, allowing for precise calculations and manipulations. For instance, the equation y = 2x + 3 represents a linear function where the output (y) is determined by multiplying the input (x) by 2 and adding 3. By solving equations and manipulating variables, you can analyze the properties of the function and make predictions about its behavior.

C Real-world examples where graphing and equations are vital

In real-world applications, graphing and equations are essential for modeling and analyzing various phenomena. For example, in physics, functions are used to describe the motion of objects, the flow of fluids, and the behavior of waves. By graphing these functions and solving corresponding equations, scientists and engineers can predict outcomes, optimize processes, and make informed decisions. Whether it's designing a bridge, predicting stock market trends, or analyzing climate data, understanding functions through visuals and equations is crucial for problem-solving and decision-making.
Practical Applications of Defined Functions

Application in Sciences: How functions model physical phenomena

In the field of sciences, defined functions play a crucial role in modeling physical phenomena. Scientists use mathematical functions to describe and predict the behavior of various natural processes. For example, in physics, functions are used to model the motion of objects, the flow of fluids, and the propagation of waves. By analyzing these functions, scientists can gain insights into the underlying principles governing these phenomena.

In Computer Science: Algorithms and functions

In computer science, defined functions are essential building blocks for writing algorithms. Functions encapsulate a specific set of instructions that can be reused multiple times within a program. This modularity not only makes the code more organized but also easier to maintain and debug. Functions are used to perform specific tasks, such as sorting data, searching for information, or processing input. By breaking down complex problems into smaller, manageable functions, programmers can write more efficient and scalable code.

Everyday life examples: Finance, weather forecasting, and more

Defined functions are not limited to academic or technical fields; they also have practical applications in everyday life. In finance, functions are used to calculate interest rates, analyze investment returns, and forecast market trends. Weather forecasting relies on mathematical functions to predict temperature changes, precipitation levels, and storm patterns. Additionally, functions are used in various other areas such as healthcare (for analyzing medical data), transportation (for optimizing routes), and sports (for tracking performance metrics).

Common Issues and Troubleshooting with Defined Functions

When working with mathematical functions, it is common to encounter various issues that can affect the accuracy and interpretation of the function.
Understanding how to identify and troubleshoot these issues is essential for effectively working with defined functions.

Identifying and solving domain and range issues

Domain and range are fundamental concepts in mathematics that define the set of input and output values of a function, respectively. Common issues related to domain and range include:

• Undefined values in the domain
• Restrictions on the range
• Incorrectly identifying the domain and range

When encountering domain and range issues, it is important to carefully analyze the function and determine the valid input and output values. This may involve simplifying the function, identifying any restrictions, and ensuring that the domain and range are correctly defined.

Dealing with discontinuities in a function

Discontinuities in a function occur when there is a break or gap in the graph of the function. Common types of discontinuities include:

• Removable discontinuities
• Jump discontinuities
• Infinite discontinuities

When faced with discontinuities, it is important to identify the type of discontinuity and understand its implications on the function. This may involve simplifying the function, analyzing the behavior around the discontinuity, and determining if the function can be modified to remove the discontinuity.

Troubleshooting common errors in representing and interpreting functions

Errors in representing and interpreting functions can lead to incorrect results and misunderstandings of the function's behavior. Common errors include:

• Misinterpreting the function notation
• Incorrectly graphing the function
• Using incorrect mathematical operations

When troubleshooting errors in representing and interpreting functions, it is important to carefully review the function, notation, and calculations. This may involve double-checking the graph, verifying the mathematical operations, and seeking clarification on any ambiguous aspects of the function.
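The domain issues above (division by zero, square root of a negative) can be sketched as explicit guards. The choice of sqrt(x) / (x - 2) is an illustrative example, not one from the text.

```python
import math

# Sketch: guarding a function's domain, as in the troubleshooting notes
# above. The example function sqrt(x) / (x - 2) is illustrative.

def f(x):
    """sqrt(x) / (x - 2): defined only for x >= 0 with x != 2."""
    if x < 0:
        raise ValueError("x must be non-negative (sqrt of a negative)")
    if x == 2:
        raise ValueError("x = 2 is excluded (division by zero)")
    return math.sqrt(x) / (x - 2)

def in_domain(x):
    """True when x is a valid input for f."""
    return x >= 0 and x != 2

domain_flags = [in_domain(x) for x in (-1, 0, 2, 9)]
```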
Conclusion & Best Practices in Working with Mathematical Functions

A Recap of the key points discussed about defined functions

Understanding Defined Functions: Defined functions are mathematical relationships that assign a unique output value to each input value. They are essential tools in mathematics and are used to model various real-world phenomena.

Properties of Defined Functions: Defined functions have specific characteristics, such as domain and range, that help define their behavior. It is crucial to understand these properties to work effectively with mathematical functions.

Examples of Defined Functions: Examples of defined functions include linear functions, quadratic functions, exponential functions, and trigonometric functions. Each type of function has its unique properties and applications.

Best practices in learning and applying mathematical functions

Practice Regularly: Consistent practice is key to mastering mathematical functions. Solve a variety of problems and work on different types of functions to improve your understanding and skills.

Seek Help When Needed: Don't hesitate to ask for help from teachers, tutors, or online resources when you encounter difficulties with mathematical functions. Getting assistance can help clarify concepts and improve your problem-solving abilities.

Use Technology Wisely: Utilize mathematical software, graphing calculators, and online tools to visualize functions, analyze data, and solve complex problems. Technology can enhance your learning experience and make working with functions more efficient.

Encouraging continuous exploration and application of functions in solving real-world problems

Apply Functions to Real-World Scenarios: Challenge yourself to apply mathematical functions to real-world problems in areas such as physics, economics, engineering, and biology. This practical application can deepen your understanding of functions and their relevance in various fields.
Collaborate with Peers: Engage in group projects or study sessions with classmates to explore different perspectives on mathematical functions. Collaborating with others can spark new ideas and approaches to problem-solving.

Stay Curious and Open-Minded: Remain curious and open-minded when working with mathematical functions. Embrace challenges, experiment with different techniques, and continue to explore the vast world of functions to expand your knowledge and skills.
Review of Short Phrases and Links

This review contains major "Compact Space"-related terms, short phrases and links grouped together in the form of an encyclopedia article.

1. A compact space is a stronger condition: in the context of Riemannian manifolds, it is equivalent to bounded and geodesically complete.
2. A compact space is a topological space in which every open cover has a finite subcover.
3. A compact space is a general topological definition that encompasses the more applicable notion of a bounded metric space.
4. A compact space is zero-dimensional if and only if it is strongly zero-dimensional, if and only if it is totally disconnected.
5. Any compact space is limit point-compact: every infinite set has a limit point.

1. The fifth dimension is a compact space, and is called the compact dimension.

1. Every compact space is hemicompact.

1. In mathematics — specifically general topology and metric topology — a compact space is an abstract mathematical space.

1. Every compact space is feebly compact.

1. The prime spectrum of any commutative ring with the Zariski topology is a compact space, important in algebraic geometry.

1. Theorem. Let be a -almost weakly orthogonal closed subspace of where is a metrizable compact space.

1. Every locally compact space is compactly generated.

1. Monolith. Every non-empty ultra-connected compact space X has a largest proper open subset; this subset is called a monolith.

1. A compact space is countably compact.
2. [J, 1952] showed that there exists a countably compact space whose product is not countably compact.

1. Not every compact space is sequentially compact; an example is the infinite product space 2^[0,1] with the product topology.

1. Every sequentially compact space is countably compact, and every first-countable, countably compact space is sequentially compact.
2.
Every countably compact space is pseudocompact and weakly countably compact.
3. A countably compact space is compact if it satisfies the second axiom of countability, by the theorem above.

1. A compact uniform space is complete: on a compact space each filter has a cluster point, and if the filter is Cauchy, such a cluster point is a limit point.
2. On a compact space there is a unique uniformity (complete and pre-compact).

1. The Wasserstein metric is needed in order to make into a compact space.

1. The affinely extended real number system adds two elements +∞ and −∞. It is a compact space.
2. Let be a topological space where consists of a finite number of elements; then X is a compact space.

1. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space.

1. This entry was posted on October 7, 2009 at 10:25 pm and is filed under Basic Topology, Compact space, The Sorgenfrey Line, Well known examples.
2. Examples: Every compact space is paracompact.

1. Show that any image of a compact space under a continuous map is still compact.
2. Show that any compact space which is first-countable is also sequentially compact.
3. I show that in a compact space every net has a cluster point, but I cannot imply that then there is a subnet converging to that cluster point.

1. Some authors require that a compact space also be Hausdorff, and the non-Hausdorff version is then called quasicompact.

1. In our main application we show that a proper action of an abelian group on a locally compact space is equivalent to a dual action.

1. The above theorem is essentially the definition of a compact space rewritten using de Morgan's laws.
2. The definition of a locally compact space is not universally agreed upon.

1. It follows that a continuous function on a compact space always attains its maximum and minimum.

1.
- By a result of Alexandroff and Urysohn, every compact space is a continuous image of C [5, Theorem 1.5.10].
- The Hausdorff condition ensures that the result is a compact space, while the local-compactness condition ensures that the result remains Hausdorff.
- Then 2^N is a compact space; this is a consequence of Tychonoff's theorem mentioned below.
- Spaces X for which C(X) has a compact space of minimal prime ideals and every prime z-ideal is minimal or maximal.
- By the Gelfand isomorphism applied to C*(x), this is *-isomorphic to an algebra of continuous functions on a locally compact space.
- It would, however, be preferable if "compact set" were merged into "compact space", in such a way as to preserve the elementary flavour.
- In this way, hundreds or even thousands of watts of pump power can be generated in a very compact space and directed precisely to where it is needed.
- Summary: The article contains the definition of a compact space and some theorems about compact spaces.
- Thus any C^1 function is locally Lipschitz, as continuous functions on a locally compact space are locally bounded.
- Every countably compact space (and hence every compact space) is weakly countably compact, but the converse is not true.
- Further, every sequentially compact space is countably compact, although the converse is not true.
- Every metacompact normal space is a shrinking space. The product of a compact space and a metacompact space is metacompact.
- The product of an orthocompact space and a compact space is orthocompact.
- MR 2084912 [10] Vaughan, J.E.: On the product of a compact space with an absolutely countably compact space.
- Summary: We define pseudocompact topological spaces and prove that every compact space is pseudocompact.
- As a consequence of the above result, every sequentially compact space is pseudocompact.
- Every feebly compact space is pseudocompact, but the converse is not necessarily true.
- By the definition of a compact space, there must exist a finite subcover.
- This poset resembles the lattice of compactifications of a locally compact space if $X$ is also separable.
- Stone–Čech compactification: Of particular interest are Hausdorff compactifications, i.e., compactifications in which the compact space is Hausdorff.
- In the case when the compact space is a point, our result reduces to a classic theorem of B.E. Johnson characterising amenability of groups.
- Another is that a maximal point in a feebly compact space is not a regular point.
- I think someone should point that out in compact space, cover and possibly open set.
- Theorem 4: Let X be a compact space such that X is uncountable and first countable.
- Real projective space is connected and compact, as it is a quotient of a connected, compact space (the sphere).
- For any locally compact space X, a description of the frame of all open subsets of X in terms of the dual object of X is obtained.
- If a sequence of functions converges compactly and the space is itself a compact space, then it converges uniformly.
- A continuous image of a compact space is compact. [3] The pre-image of a compact space under a proper map is compact.
- A continuous real-valued function on a compact space is bounded and attains its bounds.
- In summary, a convergent sequence of continuous, monotonic functions from a countably compact space into the reals converges uniformly.
- This ultimately led to the notion of a compact operator as an offshoot of the general notion of a compact space.
- It turns out that compactness works the same way: the image of a compact space under a continuous map is compact.
- One of the key applications of compactness is to show that, for a compact space, we can construct an injective map in some sense.
- Compact space: compactness of subsets of R^n.
- Properties: Compactness is a topological invariant; that is, a topological space homeomorphic to a compact space is again compact.
- An embedding of a topological space X as a dense subset of a compact space is called a compactification of X.
- On the other hand, there are no restrictions at all on the domain; it can be any topological space, or any compact space in the parts that ask for that.
- The product of a Lindelöf space and a compact space is Lindelöf.
- Some authors only define a locally compact space to be a Hausdorff space; that is the sense required for this theorem.
- This theorem has an important consequence: any one-to-one continuous mapping of a compact space onto a Hausdorff space is a homeomorphism.
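The open-cover definition that recurs throughout the list above can be written out formally. A minimal LaTeX rendering (the notation is chosen here for illustration, not taken from the source):

```latex
% Definition of compactness via open covers
A topological space $X$ is \emph{compact} if for every open cover
$\{U_\alpha\}_{\alpha \in A}$, that is, a family of open sets with
$X = \bigcup_{\alpha \in A} U_\alpha$, there exist finitely many indices
$\alpha_1, \dots, \alpha_n \in A$ such that
$X = U_{\alpha_1} \cup \dots \cup U_{\alpha_n}$.
```

By De Morgan's laws this is equivalent to requiring that every family of closed sets with the finite intersection property has non-empty intersection, which is the reformulation one of the items above alludes to.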
Fun with Triangles: Isosceles, Equilateral & Scalene Worksheet (Geometry) as PDF - Knowunity This worksheet covers the properties of isosceles and equilateral triangles, providing a comprehensive guide for students to understand and solve problems related to these triangle types. The material includes definitions, theorems, and practical examples to reinforce learning.
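The two results such worksheets typically rely on can be stated compactly. A sketch in LaTeX (the wording is mine, not quoted from the worksheet):

```latex
% Base Angles Theorem (isosceles triangles) and its equilateral corollary
\textbf{Base Angles Theorem.} If two sides of a triangle are congruent, then the
angles opposite those sides are congruent: in $\triangle ABC$,
$\overline{AB} \cong \overline{AC} \implies \angle B \cong \angle C$.
The converse also holds.

\textbf{Corollary.} A triangle is equilateral if and only if it is equiangular,
in which case each angle measures $60^\circ$.
```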
Difference between Linear and Non-linear Data Structures | T4Tutorials.com

Difference between Linear and Non-linear Data Structures

Factor: Arrangement of data elements
- Linear: data elements are sequentially connected.
- Non-linear: data elements are hierarchically connected.

Factor: Traversal of data elements
- Linear: every data element is traversable in a single run.
- Non-linear: data elements are present at many levels, so a single run is not enough.

Factor: Examples
- Linear: Array, Stack, Queue, Linked List, etc.
- Non-linear: Graph, Tree, Map, etc.

Factor: Presence on which level?
- Linear: all data elements are present at the same, single level.
- Non-linear: data elements are present at multiple levels.

Factor: Memory use
- Linear: not very memory-friendly; memory is not utilized efficiently.
- Non-linear: memory is utilized very efficiently.

Factor: Time complexity
- Linear: increases with an increase in size.
- Non-linear: remains the same with an increase in size.

Factor: Complexity during implementation
- Linear: easier to understand and implement.
- Non-linear: more difficult to understand and implement.

Important MCQs related to Linear and Non-linear Data Structures

Question: Which of the following is a non-linear data structure?
(A) Arrays (B) Stack (C) Queue (D) Linked list (E) Tree (F) Graph (G) Tree and Graph (H) None of these (I) All of these
Answer: (G) Tree and Graph

Question: Which of the following is a linear data structure?
(A) Arrays (B) Stack (C) Queue (D) Array, stack, and queue (E) Graph (F) Tree (G) Tree and linked list (H) None of these (I) All of these
Answer: (D) Array, stack, and queue

Frequently Asked Questions

Question: Is an array a linear or non-linear data structure?
Answer: Arrays are linear data structures.

Question: Is a stack a linear or non-linear data structure?
Answer: A stack is a linear data structure.

Question: Is a queue a linear or non-linear data structure?
Answer: Queues are linear data structures.

Question: Is a linked list a linear or non-linear data structure?
Answer: Linked lists are linear data structures.
Question: Is a tree a linear or non-linear data structure?
Answer: A tree is a non-linear data structure.

Question: Is a graph a linear or non-linear data structure?
Answer: A graph is a non-linear data structure.
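The traversal contrast described above can be made concrete: a list is walked in one sequential pass, while a tree must be visited level by level or branch by branch. A minimal sketch in Python (the `Node` class and the sample data are illustrative, not from the article):

```python
# Linear structure: a single sequential pass visits every element.
linear = [10, 20, 30, 40]
visited_linear = [x for x in linear]  # one run, left to right

# Non-linear structure: a binary tree; elements live at multiple levels,
# so traversal must descend through them (here, depth-first preorder).
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def preorder(node):
    """Visit root, then the left subtree, then the right subtree."""
    if node is None:
        return []
    return [node.value] + preorder(node.left) + preorder(node.right)

tree = Node(1, Node(2, Node(4), Node(5)), Node(3))
visited_tree = preorder(tree)  # [1, 2, 4, 5, 3]
```

Note how the linear traversal is a single loop, while the tree traversal is recursive, mirroring the "single run" versus "many levels" distinction in the table.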
As a noun, cherub is a winged creature represented over 90 times in the Bible as attending on God, later seen as the second highest order of angels, ranked above thrones and below seraphim; its first mention is in [http://en.wikisource.org/wiki/Bible_%28World_English%29/Genesis#Chapter_3 Genesis 3:24].

As a letter, ó is the letter o with an acute accent above it.

As a letter, d is the fourth letter of the alphabet. As a numeral, D is the cardinal number five hundred (500). As a symbol, D is deuterium, when it needs to be distinguished from ordinary hydrogen.

As a letter, c is the third letter of the alphabet. As a numeral, C is the cardinal number one hundred (100). As a symbol, c is

As a letter, t is the twentieth letter of the alphabet. As a symbol, T is the symbol for tesla, the SI unit of magnetic flux density.
As nouns, the difference between cherub and cherup is that cherub is a winged creature represented over 90 times in the Bible as attending on God, later seen as the second highest order of angels, ranked above thrones and below seraphim (first mention is in Genesis 3:24), while cherup is an archaic form of chirrup. As a verb, cherup is an archaic form of chirrup.

As nouns, the difference between cherub and cherubim is that cherub is the winged creature described above, while cherubim is the plural of cherub.

As nouns, the difference between taxonomy and cherub is that taxonomy is the science or the technique used to make a classification, while cherub is the winged creature described above.

As an adjective, cherublike means resembling a cherub or some aspect of one.
How do I use my TI-84 calculator when solving for c = 0.80? Find the critical value z_c necessary to form a confidence interval at the level of confidence shown, c = 0.80.

For a two-sided confidence interval at level c = 0.80, the total tail area is 1 − 0.80 = 0.20, split equally between the two tails, so the area to the left of z_c is 1 − 0.10 = 0.90.

On the TI-84: press the 2ND button, then press the VARS button, which has the second function DISTR. Navigate down to the invNorm( option (or press the 3 button). Enter 0.9 as the area to the left of the value. The calculator returns the critical value z_c ≈ 1.28.
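The same computation can be checked away from the calculator. A minimal sketch using Python's standard library (the variable names are mine):

```python
from statistics import NormalDist

c = 0.80                      # confidence level
area_left = 1 - (1 - c) / 2   # area to the left of the critical value: 0.90
z_c = NormalDist().inv_cdf(area_left)  # inverse normal CDF, like invNorm(0.9)
print(round(z_c, 4))          # 1.2816
```

`NormalDist().inv_cdf` plays the same role as the TI-84's invNorm( function: both map a left-tail probability to the corresponding z-value.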
How to Draw Geometric Shapes | Constituent geometry ORM diagram | Plane geometry - Vector stencils library | Geometry Diagrams

Knowledge of geometry grants people good logic, abstract and spatial thinking skills. The objects of study of geometry are size, shape and position, and 2-dimensional and 3-dimensional shapes. Geometry is related to many other areas in math and is used daily by engineers, architects, designers, and many other professionals. Today, the objects of geometry are not only shapes and solids; geometry deals with properties and relationships, and is much more about analysis and reasoning. Geometry drawings can be helpful when you study geometry or need to illustrate an investigation related to geometry. ConceptDraw DIAGRAM allows you to draw plane and solid geometry shapes quickly and easily.

This object-role modeling (ORM) diagram sample shows a schema for constituent geometry. It was designed on the basis of the Wikimedia Commons file: Schema for Constituent Geometry.gif. [commons.wikimedia.org/wiki/File:Schema_for_Constituent_Geometry.gif] "Constituent Geometry... models a data structure for describing the geometry of individual constituents in a Particular Lithology. These descriptions are based on terms that may have qualitative or quantitative definitions (e.g. round, subround, euhedral, subhedral, fine-grained) or on free text descriptions referenced by "descID". The Constituent Geometry description has two sub-classes, Grain Geometry and Body Geometry, because different terminology is used when describing individual grains or lithologic components in a mixed rock." [pubs.usgs.gov/of/1999/of99-386/richard.html] The object-role modeling diagram example "Constituent geometry ORM diagram" was designed using ConceptDraw PRO software extended with the ORM Diagrams solution from the Software Development area of ConceptDraw PRO Solution Park.

The vector stencils library "Plane geometry" contains 27 plane geometric figures.
Use these shapes to draw your geometrical diagrams and illustrations in the ConceptDraw PRO diagramming and vector drawing software extended with the Mathematics solution from the Science and Education area of ConceptDraw Solution Park.

ConceptDraw DIAGRAM extended with the Mathematics solution is a powerful diagramming and vector drawing software that offers all the tools needed for designing mathematical diagrams. The Mathematics solution provides 3 libraries with predesigned vector mathematics symbols and figures: Solid Geometry Library, Plane Geometry Library and Trigonometric Functions Library. It is well suited for creating mathematical diagrams, graphics, tape diagrams, and various mathematical illustrations of any complexity, quickly and easily.

ConceptDraw DIAGRAM is design software that provides many vector stencils, examples and templates for drawing different types of illustrations and diagrams. The Mathematics solution includes shape libraries of plane and solid geometric figures, trigonometric functions and Greek letters to help you create professional-looking mathematical illustrations for science and education.

The vector stencils library "Solid geometry" contains 15 shapes of solid geometric figures. Use these shapes to draw your geometrical diagrams and illustrations in ConceptDraw PRO extended with the Mathematics solution.

This engineering drawing shows different types of geometry of butt welds. "Welds can be geometrically prepared in many different ways.
The five basic types of weld joints are the butt joint, lap joint, corner joint, edge joint, and T-joint (a variant of this last is the cruciform joint). Other variations exist as well - for example, double-V preparation joints are characterized by the two pieces of material each tapering to a single center point at one-half their height. Single-U and double-U preparation joints are also fairly common - instead of having straight edges like the single-V and double-V preparation joints, they are curved, forming the shape of a U. Lap joints are also commonly more than two pieces thick - depending on the process used and the thickness of the material, many pieces can be welded together in a lap joint geometry." [Welding. Wikipedia] This engineering drawing example was redesigned using the ConceptDraw PRO diagramming and vector drawing software from the Wikimedia Commons file: Butt Weld Geometry.GIF. [commons.wikimedia.org/wiki/File:Butt_Weld_Geometry.GIF] The engineering drawing example "Butt weld geometry" is included in the Mechanical Engineering solution from the Engineering area of ConceptDraw Solution Park.

The Entity Relationship Diagram (ERD) is a popular software engineering tool for database modeling and for illustrating the logical structure of databases, using one of two notations: Chen's or Crow's Foot. Crow's Foot notation is effective in software engineering, information engineering, and structured systems analysis and design. Each of these notations applies its own set of ERD symbols. Crow's Foot diagrams use boxes to represent entities and lines between these boxes to show relationships; varied shapes at the ends of the lines depict the cardinality of a given relationship. Chen's ERD notation is a more detailed way to represent entities and relationships. The ConceptDraw DIAGRAM application, enhanced with the all-inclusive Entity-Relationship Diagram (ERD) solution, allows you to design professional ER diagrams with Chen's or Crow's Foot notation, at your choice.
It contains the full set of ready-to-use standardized ERD symbols and meanings helpful for quick drawing, and a collection of predesigned ERD samples, examples and templates. ConceptDraw PRO is a great alternative to Visio for Mac users, as it offers more features and can easily handle the most demanding professional needs.

How to Build an Entity Relationship Diagram (ERD)

The Object-Role Modeling (ORM) Diagram solution, with powerful vector diagramming and data modeling tools, a large quantity of specially developed samples and examples, and a rich variety of vector objects corresponding to common ORM notation, is ideally suited for developing comprehensive, clear and visual Object-Role Modeling (ORM) diagrams and schematics, understandable to all interested people from different fields and business directions, for designing ORM models, and for demonstrating the advantages of ORM and its notation. It is intended for software developers and computer engineers, specialists in the field of object-oriented programming (OOP), database architects, web-application constructors and developers, etc.

Mathematics is an exact science which studies values, spatial forms and quantitative relations. It is a science in which a large number of diagrams, charts and graphs are built; these present material in a clear, visual and colorful form, help to analyze information, and support drawing conclusions. A diagram is a graphical representation of data using linear segments or geometric shapes, which allows one to evaluate the ratio of several values. Diagrams of different kinds are used depending on the task to be solved. A graph is a diagram that shows quantitative dependencies of various processes using curves.
ConceptDraw DIAGRAM is a powerful, intelligent and multifunctional vector engine for drawing many kinds of mathematical diagrams and graphs: mathematical illustrations, complex and simple mathematical diagrams, flowcharts of an equation-solving process, line graphs, scatter plots, histograms, block diagrams, bar charts, divided bar diagrams, pie charts, area charts, circular arrows diagrams, Venn diagrams, bubble diagrams, concept maps, and many others.

The vector stencils library "Cloud shapes" contains 69 geometric shapes. Use it to design your cloud computing diagrams and infographics with ConceptDraw PRO software. "The essence of a diagram can be seen as: ... - with building blocks such as geometrical shapes connected by lines, arrows, or other visual links." [Diagram. Wikipedia] The geometric shapes example "Design elements - Cloud shapes" is included in the Cloud Computing Diagrams solution from the Computer and Networks area of ConceptDraw Solution Park.

ConceptDraw is feature-rich flowchart software that allows you to create professional-looking flowcharts.

The vector stencils library "Optics" contains 17 symbol icons: reflecting surface; convex and concave lens with and without optic axis, body or ray; ray; parallel beam of light; point light source; prism with and without ray path; refraction. Use these shapes for drawing schemes of physical experiments in geometrical optics and ray tracing diagrams. "Geometrical optics, or ray optics, describes light propagation in terms of "rays". The "ray" in geometric optics is an abstraction, or "instrument", which can be used to approximately model how light will propagate. Light rays are defined to propagate in a rectilinear path as they travel in a homogeneous medium. Rays bend (and may split in two) at the interface between two dissimilar media, may curve in a medium where the refractive index changes, and may be absorbed and reflected.
Geometrical optics provides rules, which may depend on the color (wavelength) of the ray, for propagating these rays through an optical system. This is a significant simplification of optics that fails to account for optical effects such as diffraction and interference. It is an excellent approximation, however, when the wavelength is very small compared with the size of the structures with which the light interacts. Geometric optics can be used to describe the geometrical aspects of imaging, including optical aberrations." [Geometrical optics. Wikipedia] The example "Design elements - Optics" was created using the ConceptDraw PRO diagramming and vector drawing software extended with the Physics solution from the Science and Education area of ConceptDraw Solution Park.

Flowcharts have one or more starting and ending points. The geometric figures on a flowchart represent the steps of the process and are connected with arrows that show the sequence of actions. Flowcharts can be oriented vertically or horizontally. ConceptDraw DIAGRAM is powerful software for creating professional-looking flowcharts quickly and easily. For this purpose you can use the Flowcharts solution from the Diagrams area of ConceptDraw Solution Park.

The problem-solving and decision-making processes are closely linked and crucial in business and life. There are many tools and techniques for facilitating and improving these processes and for enhancing the quality of the decisions made. One of them is the Fishbone Diagram, or Cause and Effect diagram. To construct one, first define the problem you need to solve (existing or potential), write it in a rectangle, and draw a horizontal line to it - a "fish spine". Then define general types of possible causes, such as Persons, Materials, Methods, Machinery, Measurements or Environment, and attach them to the "spine". Start brainstorming the possible reasons for failure and connect them to the branches.
The professional diagramming ConceptDraw DIAGRAM software for Mac OS X offers the Fishbone Diagrams solution, which contains templates, samples, and ready-made Fishbone design objects. Using them lets you effectively identify the possible causes of an effect, carry out cause-and-effect analysis successfully, and instantly draw a Fishbone diagram with Mac software.

The main task of any teacher, lecturer or professor is to convey new information effectively, to present it as simply as possible, and to clarify all complex points for the pupils, students, or other learners. Each of them always strives to make a lecture, lesson or speech interesting for the listeners in the auditorium. Achieving this often takes not only well-chosen words but also bright and colorful materials used in addition to the narration. The use of mind mapping technology in education and training is an excellent choice. Mind Maps are a great way to organize and structure information, to single out the most important points, and to present them on a map in a convenient, clear and easy-to-understand form, which allows you to keep the attention of your audience for a long time. Use the professional mind mapping software ConceptDraw MINDMAP and its powerful tools to prepare Mind Maps for your lectures, to make them colorful and bright, to add special pictures, and to place visual accents on important details quickly and easily.

When creating ER diagrams, you can apply different color ER diagram styles and notation styles. ConceptDraw DIAGRAM enhanced with the Entity-Relationship Diagram (ERD) solution gives you the ability to draw visual and attractive ER diagrams using the professionally designed style icons from Crow's Foot notation and Chen's notation offered by the ERD Chen's Notation and ERD Crow's Foot Notation libraries.
6th Grade Math Worksheets Free Printable

6th grade math worksheets: PDF printables to practice skills on math topics learnt in grade 6. These worksheets were created with teachers, homeschool parents and sixth graders in mind, with strands drawn from vital math topics like ratio, multiplication, division, fractions, common factors and multiples, rational numbers, algebraic expressions, integers, one-step equations, ordered pairs in the four quadrants, and geometry skills like determining area, surface area and volume.

Make practicing math fun with these innovative and seasonal 6th grade math ideas. You will find here worksheets for addition, subtraction, place value, telling time and more. These 6th grade math worksheets include word problems, timed math worksheets, multiplication worksheets, long division worksheets, and plenty of extra math practice for sixth graders.

Take a peek at all the grade 6 math worksheets and math games to learn addition, subtraction, multiplication, division, measurement, graphs, shapes, telling time, adding money, fractions, and skip counting by 3s, 4s, 6s, 7s, 8s, 9s, 11s and 12s, and other sixth grade math.

Free sixth grade math worksheets come in easy-to-print PDF workbooks to challenge the kids in your class. Choose your grade 6 topic, then click on the free 6th grade math worksheet you would like to print or download. This will take you to the individual page of the worksheet, where you will have two choices: download or print. Sixth grade math worksheets for October and math workbooks for sixth grade are available, including math for the week of October 5.

Free grade 6 worksheets from K5 Learning: this is a comprehensive collection of free printable math worksheets for sixth grade, organized by topics such as multiplication, division, exponents, place value, algebraic thinking, decimals, measurement units, ratio, percent, prime factorization, GCF, LCM, fractions, integers and geometry. Our printable grade 6 math worksheets delve deeper into earlier grade math topics (the four operations, fractions, decimals, measurement, geometry) and introduce exponents, proportions, percents and integers. Count on our printable 6th grade math worksheets with answer keys for thorough practice. Free 6th grade math worksheets are available for teachers, parents and kids; the sixth grade math worksheets are free PDF printables with no login. Free printable graph paper, grid paper and dot paper are also offered for math problems, crafts, zentangling, landscape design, architecture or just simple doodling. You will also find a large collection of free printable math puzzles and math games for grade 6.
The Reduced Form Model Explanation for the Bond/CDS Basis - SAS Risk Data and Analytics The Reduced Form Model Explanation for the Bond/CDS Basis: Presentation to Risk Americas Robert A. Jarrow and Donald R. van Deventer May 25, 2021 We present an annotated version of slides used in a presentation to Risk Americas on the bond/credit default swap basis. Prof. Jarrow begins by explaining the assumptions and analytical approach behind the reduced form bond model of Hilscher, Jarrow and van Deventer [2020]. The derivation makes use of “building block securities.” By that we mean simple securities which pay 0 or 1 depending on whether the risky issuer of a corporate bond has defaulted by a specific date. Dr. van Deventer then shows that the derivation of the no arbitrage pricing for a credit default swap is straightforward using these building block securities. Unlike market conventions in the CDS market, the Libor/swap yield curve (or its successor) plays no role in CDS valuation. Given the swap valuation formula, the bond/CDS basis becomes a simple closed form solution. Dr. van Deventer demonstrates that the liquidity parameter in the reduced form model is the factor that causes the bond/CDS basis to vary over time. The Reduced Form Model Explanation for the Bond/CDS Basis The presentation was given to a virtual conference organized by CEFPro on May 25, 2021. The first half of the presentation was given by Professor Jarrow, with Dr. van Deventer presenting second. Professor Jarrow began by putting financial models in context. A financial model is defined by its assumptions. The model’s implications are derived from those assumptions. A model can be tested both by measuring the accuracy of the assumptions and the realism of the model’s implications. Professor Jarrow then introduced the notion of the reduced form bond model. The reduced form bond model assumes that markets are frictionless and competitive.
This assumption is relaxed in the model by the introduction of a liquidity parameter in the econometric implementation of the model. Professor Jarrow then explained the use of a Cox process to model the default time. State variables driving this process can include both macro-economic variables and financial statement-related variables that are specific to the issuer of the risky bonds analyzed. Professor Jarrow then explained the “no arbitrage” assumption invoked in the modeling process. Professor Jarrow then described the conditional independence of risk-free interest rates, the default time, and the recovery rate under the risk-neutral probabilities in the model. This assumption is consistent with observable interest rates, default times and the recovery rate that are correlated under the observable or empirical probabilities. Professor Jarrow went on to explain the assumption of diversifiable default risk among firms. Conditional on the macro-economic variables, the default/no default event at a specific time is independent for any two firms. The value of a defaultable bond can be shown to consist of digital securities that pay 0 or 1 depending on the default status of the firm. An extension of the model includes discounts for both types of digital securities for liquidity reasons. Professor Jarrow introduced the computational issues that Dr. van Deventer will cover in the second half of the presentation. In closing Professor Jarrow summarized the cost of “term insurance” on the bond. At that point Dr. van Deventer began his discussion of the theory and empirical derivation of the bond/CDS basis. Dr. van Deventer introduced simpler notation for the coupon 0/1 securities c, the 0/1 recovery securities in each payment period r, the dollar coupon paid semi-annually K, and the recovery rate δ. 
The reduced form value of a bond is the sum of the principal amount 100 times the final all or nothing coupon security, the N coupons of K dollars times the corresponding 0/1 coupon securities, and finally the sum of N 0/1 recovery securities times the principal of 100 and recovery rate δ. Note that there is no recovery on coupons, so principal is in fact SENIOR to coupons. In the next slide, Dr. van Deventer showed the formulation of coupon securities and recovery securities. He applied the liquidity discount to coupon securities, but he assumes “no arbitrage” constraints that require the sum of the period 1 coupon security and period 1 recovery security to have a total value equal to a 1 period Treasury bill. Why? Because the holder of a 1 period coupon security and 1 period recovery security receives $1 in every scenario, default or no default. For period 2 and subsequent payment periods, the no arbitrage constraint is slightly different. The difference between that specification and the specification applied by Professor Jarrow is a key empirical question. In the next slide, Dr. van Deventer used the building block securities to calculate the present value of a new credit default swap that has payment periods half as long as the semi-annual bond payment periods. The dollar amount paid in advance for credit insurance is S/4 dollars per quarter. S is the annualized credit default swap spread. Note that there is no Libor/swap curve or SOFR/ swap curve in this equation. In a competitive market for a derivative for which there is zero up-front payment (other than the first credit insurance payment, which we account for separately), the market value must be zero to avoid arbitrage. This allows us to solve for the annualized credit default swap rate implied by the credit risk of the borrower, conditional on the recovery rate and liquidity parameter. Dr. van Deventer then gave the explicit formula for the bond/CDS basis. 
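The building-block valuation just described can be sketched in code. The sketch below is illustrative only: the security values, coupon, and recovery rate are made-up numbers, and `c` and `r` are hypothetical names for the 0/1 coupon and recovery securities, not Kamakura notation.

```python
# Hedged sketch of the building-block bond valuation described above:
# value = principal * c[N]   (final all-or-nothing coupon security)
#       + sum over i of K * c[i]                  (coupon securities)
#       + sum over i of principal * delta * r[i]  (recovery securities)
# There is no recovery on coupons, so principal is senior to coupons.
# All inputs below are illustrative, not market data.

def reduced_form_bond_value(c, r, K, delta, principal=100.0):
    """c[i]: value of the 0/1 coupon security for period i (pays 1 if no
    default by period i); r[i]: value of the 0/1 recovery security for
    period i (pays 1 on default in period i); K: dollar coupon per period;
    delta: recovery rate on principal."""
    coupon_legs = sum(K * ci for ci in c)
    recovery_legs = sum(principal * delta * ri for ri in r)
    return principal * c[-1] + coupon_legs + recovery_legs

# two semi-annual periods, made-up security values
c = [0.96, 0.92]
r = [0.01, 0.02]
value = reduced_form_bond_value(c, r, K=2.5, delta=0.4)
```

With these numbers the bond is worth 97.9: 92.0 from principal, 4.7 from the coupon legs, and 1.2 from the recovery legs.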
The formula depends on the credit spread, calculated precisely, on two newly issued securities: a new issue with N semi-annual payments from the reference issuer, and a similar new issue from a risk-free (approximately) issuer like the U.S. government. We assemble the par coupon yields to get the new issue coupons of 2K* and 2K[rf]*. The difference between these par coupon yields is the credit spread, of course. Note that we do not use the yield to maturity of an existing bond in this calculation, because the many false assumptions embedded in the yield to maturity calculation (the biggest of which is the assumption that all payments will be made with certainty) introduces noise to the bond/CDS basis. At this point Dr. van Deventer introduced the default probabilities and traded credit spreads from Kamakura Risk Information Services (KRIS). For more information on KRIS reduced form default probabilities please use this link: For this presentation, the reference name used in the example is Ford Motor Credit Company LLC. It is assumed as a first approximation that the default probabilities of FMCC are identical to those of Ford Motor Company, which are taken from KRIS on each business day. Next, Dr. van Deventer explained that traded credit swap prices are still undisclosed to the public despite the fact that those prices are known to the Depository Trust and Clearing Corporation. Accordingly, using CDS quotes is the only option for this analysis, but first it needs to be confirmed that CDS are regularly traded on the Ford corporate family. The next slide from KRIS confirms that there is steady weekly volume so the CDS quotes should have some basis in reality. Dr. van Deventer then explained Step 1 in the analysis: the generation of the maximum smoothness forward rate curve for the U.S. Treasury on every trading day. This was done using Kamakura Risk Manager version 10. More information on KRM is available here: Dr. 
van Deventer then explained the two-step process that Kamakura uses to calculate the bond-implied recovery rate and liquidity parameter for each issuer. The first of the two steps calculates the liquidity parameter that minimizes the sum of squared bond pricing errors for the assumed recovery rate value. 101 recovery rate values are used in this analysis, from 0% to 100% in 1% increments. This analysis shows that the approximate solution is a recovery rate of 10% and a liquidity parameter of minus 0.01116 (minus 1.116%). Dr. van Deventer then showed how the final values for these two parameters are derived from a powerful non-linear equation optimizer using the starting values shown in the slide above: The best-fitting parameter values were 10.427% for the recovery rate and minus 1.163% for the liquidity parameter. These parameters are fitted for every business day in the example. Using the smoothed U.S. Treasury curve, the 5-year par coupon Treasury yields are calculated on every business day. Using the smoothed U.S. Treasury curve and the daily values for the recovery rate and liquidity parameter, the 5-year par coupon yield for a new Ford Motor Credit bond issue is calculated. The difference is the true credit spread (not the market convention using yields to maturity). This credit spread is plotted in orange in the next slide. The blue dots represent the no arbitrage credit default swap spread that comes from the reduced form building block derivation given above. For most of the time series studied, the credit spread and CDS spread are nearly identical, differing only because of differences in timing (such as pay in advance on the CDS) and frequency (quarterly versus semi-annual). This held even for the sharp spike in credit spreads with the onset of the COVID-related bond market panic in March and April 2020. What happened on the right-hand side of the graph, when the theoretical credit default swap spread deviated significantly from the credit spread?
The next slide is helpful in understanding what is going on. The green dots added to the graph represent the CDS quotes for the Ford family. Note that at the peak of the virus crisis, the CDS quotes were generally higher than both the model-implied credit spread (orange) and CDS spread (blue). On the far right-hand side of the graph, the CDS quotes in green again converge to the model-implied credit spreads. Why, then, is the model-implied CDS spread still high? The next slide provides some answers. Starting in September, 2017, we plot daily values for the liquidity parameter (in orange) versus the recovery rate implied from bond prices for Ford Motor Credit. For the same sections of the graph, the moving average of the liquidity parameter is very close to zero and the recovery rate fluctuated in a reasonable range. On the far right hand side of the graph, Ford Motor Credit bond prices changed in such a way that the implied recovery rate fell to zero and the liquidity parameter became a more negative number. The next slide analyzes the interaction of the liquidity parameter and recovery rate on March 30, 2021, the same date we used to derive the recovery rate and liquidity parameter above. As we did above, we vary the recovery rate in 1% increments from 0% to 100%. For each level of recovery rate, we derive the liquidity parameter that minimizes the sum of squared errors. We overlay the par coupon credit spread (orange) and swap spread implied by the parameters for every level of recovery rate and the corresponding liquidity parameter. The graph shows that once the “best fitting” liquidity parameter goes to zero, the model-implied credit spread and model-implied CDS spread converge. The specification used for how the liquidity parameter affects coupon securities and recovery securities is clearly the key to the CDS/bond basis. For those readers interested in Kamakura’s conclusions in this regard, please contact Kamakura for more information at info@kamakuraco.com.
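The two-step calibration described above (for each of 101 recovery-rate values, find the liquidity parameter that minimizes the sum of squared bond pricing errors, then keep the best overall pair) can be sketched as a grid search. The pricing function below is a deliberately simplified stand-in, not the Hilscher-Jarrow-van Deventer model, and the bonds and "true" parameters are made up so the search demonstrably recovers them.

```python
# Toy stand-in for a liquidity- and recovery-sensitive bond pricing model.
# NOT Kamakura's model; illustrative structure only.
def model_price(recovery, liquidity, bond):
    return bond["base"] * (1.0 + liquidity) + 100.0 * recovery * bond["risk"]

# synthetic "observed" prices generated with known parameters
true_recovery, true_liquidity = 0.10, -0.011
bonds = [{"base": 95.0, "risk": 0.05},
         {"base": 90.0, "risk": 0.08},
         {"base": 85.0, "risk": 0.12}]
observed = [model_price(true_recovery, true_liquidity, b) for b in bonds]

# Step 1, per recovery rate: scan liquidity for the minimum sum of squared
# pricing errors. Step 2: keep the overall best (recovery, liquidity) pair.
best = None
for i in range(101):                      # recovery: 0%, 1%, ..., 100%
    rec = i / 100.0
    for j in range(-500, 501):            # liquidity: -5.00% to +5.00%
        liq = j / 10000.0
        sse = sum((model_price(rec, liq, b) - p) ** 2
                  for b, p in zip(bonds, observed))
        if best is None or sse < best[0]:
            best = (sse, rec, liq)

sse, fitted_recovery, fitted_liquidity = best
```

In practice the coarse grid result seeds a nonlinear optimizer, as in the presentation.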
Measurement of dielectric foam density — Ivan, Glenn, Marilyn (lab analyst). Tim and Andrew discussed the data from Monday's tests. According to Tim's calculations, there's approximately 5% more polyethylene in the BRD CPAs than in the basement CPAs per unit area (this takes the thickness into account). Why this is we can't explain. We're now very, very, VERY curious about what the density measurements are... one thing that could explain it would be a change in density of the foam over the roll. Preliminary test plan: 1. Cut exactly 6 inches of foam tape from each sample. 2. Measure the weight of each foam sample. 3. Using a volumetric flask, measure the amount of volume each sample takes up. Using scissors and a caliper we actually cut exactly 150 mm of tape from each sample roll. This was difficult because the tape would stick to one side of the scissors. We chose the side away from the sample for where this sticking would occur, and cleaned off the residue after each cut with alcohol. I used a square and a straight edge of my workbench to mark the tape for where to cut in order to get a perpendicular 90 degree cut. I labeled the sample from the beginning of the roll dated 14-Sep-2003 with a #1, and the sample from near the end of the roll with a #2. With help from Marilyn, we went into the lab and measured the weight of each sample. We placed an empty weight boat into the analytical scale to find the tare weight, then peeled the backing off of the tape, dropped it into the boat, and then recorded the weight. We did this for each sample. Marilyn said the finest resolution of volume measurement they had was +/- 1 mL, which would not return a meaningful volume, so we did not make that measurement.
│Weight of sample #1 │ 0.5217 g │ │Weight of sample #2 │ 0.5889 g │ │Scale accuracy │ +/- 0.1 mg │ │Scale information │Sartorius model A200S; s/n 38110191; Room 142 │ │Weight of typical weight boat│ 6.8645 g │ We kept the foam samples in their respective weight boats should we want to do further work with these. • Cutting the tape to 150 mm • Measuring the weight of the tape
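Since volume could not be measured, density itself is out of reach, but the recorded weights still give a linear mass density (grams per metre of tape) and the sample-to-sample ratio. A quick sketch using the figures above:

```python
# Linear mass density of each 150 mm foam-tape sample from the weights above.
length_m = 0.150                                  # each sample cut to 150 mm
weights_g = {"sample_1": 0.5217, "sample_2": 0.5889}

linear_density_g_per_m = {name: w / length_m for name, w in weights_g.items()}

# sample #2 (end of roll) vs sample #1 (beginning of roll)
ratio = weights_g["sample_2"] / weights_g["sample_1"]
```

Sample #2 comes out about 13% heavier per unit length than sample #1, consistent with the suspicion that the foam's density changes over the roll.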
Betting odds are very important for new players Betting odds are vital for new players to learn, as they show how likely an event is to occur and what your chance of winning is. The probability of an event can be converted into and presented in any of these betting odds formats. The three formats express the same underlying information with only small differences in notation. Calculation Method of Various Betting Odds Calculation Of Fractional Odds • These odds are written with a slash or hyphen between the numbers, like 7/1, and are read as seven-to-one. • If the player stakes a total of $10 at the odds of 7/1, then he will receive $70 in profit plus his $10 stake back, for a total of $80. • In this format, the total return can be calculated as: [Stake × Numerator/Denominator] + Stake, where numerator/denominator is the fractional odds. • The implied probability of 5/1 is 1/(5+1) ≈ 0.167, meaning roughly a 16.7% chance of winning. Calculation of Decimal Odds • These odds represent the total amount the player receives back for every $1 wagered; the number reflects the total payout rather than the profit. The stake is already included in the decimal, which makes it easy to calculate. • The winnings (profit) can be calculated as: (Odds × Stake) − Stake. • For example, at odds of 4.0 a £10 stake yields (4.0 × £10) − £10 = £30 in profit. Calculation Of Money Line Odds • In this format, the favorites are represented with a minus sign (−) in front of the number, which indicates the amount of money that must be staked to win $100. • The odds for the less favored side are presented with a plus symbol (+), which indicates the total amount of money won for a staked amount of $100. In both cases, winning players also receive back the total amount they wagered. Calculation of Converting Betting Odds: Converting odds to implied probabilities is an integral part of working with them.
• From Fractional Odds to Decimal Odds: Step 1: Divide the fraction. Step 2: Add 1 to the result. • From Fractional to Moneyline Odds: Step 1: Divide the fraction. Step 2: If the answer is >= 1, the moneyline odd is 100 × answer. Step 3: If the answer is < 1, the moneyline odd is −100 ÷ answer. • From Decimal To Fractional: Step 1: Deduct 1 from the decimal odd. Step 2: Convert the result to a fraction. • From Decimal To Money Line: Step 1: If the decimal odd is greater than or equal to 2, then 100 × (Decimal odd − 1). Step 2: If the decimal odd is smaller than 2, then −100/(Decimal odd − 1). • From Moneyline to Decimal Odd: Step 1: If the moneyline odd is greater than 0, then (Moneyline odd/100) + 1. Step 2: If the moneyline odd is less than 0, then (−100/Moneyline odd) + 1. • From Money line to Fractional Odds: Step 1: If the moneyline odd is greater than 0, then Moneyline odd/100. Step 2: If the moneyline odd is less than 0, then −100/Moneyline odd.
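The conversion rules above can be written as small functions. Fractional odds are represented here as a single float with the division already done (7/1 → 7.0, 1/2 → 0.5):

```python
# Conversions between fractional, decimal, and moneyline odds, following the
# steps listed above. Fractional odds are passed with numerator/denominator
# already divided out (e.g. 7/1 -> 7.0, 1/2 -> 0.5).

def fractional_to_decimal(frac):
    return frac + 1.0

def fractional_to_moneyline(frac):
    return 100.0 * frac if frac >= 1.0 else -100.0 / frac

def decimal_to_moneyline(dec):
    return 100.0 * (dec - 1.0) if dec >= 2.0 else -100.0 / (dec - 1.0)

def moneyline_to_decimal(ml):
    return ml / 100.0 + 1.0 if ml > 0 else -100.0 / ml + 1.0

def moneyline_to_fractional(ml):
    return ml / 100.0 if ml > 0 else -100.0 / ml

def implied_probability(frac):
    # e.g. 5/1 -> 1/(5 + 1), roughly a 16.7% chance of winning
    return 1.0 / (frac + 1.0)

# a $10 stake at 7/1 returns 7 * 10 = $70 profit plus the $10 stake: $80 total
total_return = 10.0 * 7.0 + 10.0
```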
How can I fit a gaussian function to data in Mathematica? (no distribution) 29120 Views 5 Replies 2 Total Likes I am using Mathematica for some time now for evaluating experimental data for my practical physics course, but I always had problems with fitting nonlinear functions to data. Currently I'd like to fit a gaussian function to my dataset. I tried several options including "FindFit" and "NonlinearModelFit" but the resulting function does not even remotely resemble the curve of the original data. Sometimes the gaussian is centered around a very small, insignificant peak in my dataset but most frequently the height of the peak is way too small. The attachment contains some data as an example. If plotted, the quite significant peak at the right end is what I am interested in. For example I used the function FindFit with the following call: FindFit[data, A*1/(\[Sigma]*Sqrt[2 \[Pi]]) Exp[-(1/2) ((x - \[Mu])/\[Sigma])^2], {\[Sigma], \[Mu], A}, x] The variable A obviously is used to scale the area under the curve. Even if I cut down all the data but the peak I want to fit the gaussian to, using Take[data, {1700, 1990}] I get values like {\[Sigma] -> 1.74659, \[Mu] -> 2.29208, A -> 1628.23} which is obviously no good, considering the highest value in my data-list is 38635 and therefore bigger than A. (mu and sigma are also way off.) I have to stress that to this point I have never learned how fitting to nonlinear data generally works. I taught myself how to work with Mathematica using the incredibly detailed and well designed documentation of Mathematica. It might be that I misunderstand something at a very basic level. Thanks in advance to the community :-) 5 Replies Mathematica (I use 10.1) has a function FindPeaks that should provide a good starting value for mu: nlm = NonlinearModelFit[data, A 1/(\[Sigma] Sqrt[2. \[Pi]]) Exp[-(1./2.)
((x - \[Mu])/\[Sigma])^2], {{\[Sigma], 100.}, {\[Mu], FindPeaks[data, 80., 0.2, 5000.][[1, 1]]}, {A, 200}}, x] Ah, of course! I did not think about that. Thanks a lot :-) 1st line should read: ... part of the problem is that your data are NOT very well described by a single gaussian Hello Thomas, I guess part of the problem is that your data are very well described by a single gaussian and therefore the fit may end up in different local minima. One way to steer which part of the data are well approximated by the model function is to set start values for the parameters to be fitted. I prefer to use NonlinearModelFit: nlm = NonlinearModelFit[data, A 1/(\[Sigma] Sqrt[2. \[Pi]]) Exp[-(1./2.) ((x - \[Mu])/\[Sigma])^2], {{\[Sigma], 100.}, {\[Mu], 2000}, {A, 200}}, x] To visually compare data and fitted model you can use Show[ListPlot[data, PlotStyle -> Red, PlotRange -> All], Plot[nlm[x], {x, 0, 2000}, PlotRange -> All]] Regards, Michael Thanks a lot! I was not aware that I could set start values. This information is very helpful :-) How do the start values influence the values approximated by Mathematica? I have about 30 of these datasets where I do not expect sigma to change much but mu varies a lot. The point is, I want my notebook to evaluate the data automatically and I have set up everything to this point in that way. I have a formula to predict where mu should be, but if the start value affects the outcome of mu I'd rather use a different way.
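As an aside, the initial guesses that FindPeaks supplies in the replies above can also be estimated directly from the data: mu from the location of the maximum, sigma from the half-width at half maximum, and A from the discrete area under the curve. A pure-Python sketch on synthetic data (the Gaussian parameters below are made up for illustration, not taken from the attachment):

```python
import math

def gaussian(x, A, mu, sigma):
    # same normalized form as the Mathematica model used in the thread
    return A / (sigma * math.sqrt(2 * math.pi)) * math.exp(-0.5 * ((x - mu) / sigma) ** 2)

# synthetic dataset with a peak near the right end, as in the original post
xs = list(range(2001))
ys = [gaussian(x, A=1.6e6, mu=1850.0, sigma=20.0) for x in xs]

def initial_guesses(xs, ys):
    peak = max(range(len(ys)), key=lambda i: ys[i])
    mu0 = xs[peak]
    # half-width at half maximum on the right flank; sigma = HWHM / sqrt(2 ln 2)
    half = ys[peak] / 2.0
    right = next(i for i in range(peak, len(ys)) if ys[i] < half)
    sigma0 = (xs[right] - xs[peak]) / math.sqrt(2 * math.log(2))
    A0 = sum(ys)  # discrete area under the curve (unit x-spacing)
    return A0, mu0, sigma0

A0, mu0, sigma0 = initial_guesses(xs, ys)
```

Start values like these keep the optimizer near the peak of interest instead of a small local minimum elsewhere in the data.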
Meaning of Disjunctive Syllogism & Disjunctive Syllogism Example 2024 Disjunctive Syllogism What is Disjunctive Syllogism? Disjunctive Syllogism is a logical argument that uses two premises and one conclusion. The first premise states that at least one of two alternatives is true (either A or B). The second premise denies one of those alternatives (not A). The conclusion then affirms the remaining alternative (therefore B). Disjunctive Syllogism is an example of how logic can be used to determine the truth or falsity of a proposition. It is important to understand this concept because it will help you learn other types of logical arguments, such as the conditional syllogism. A disjunctive syllogism consists of three statements: two premises and a conclusion. The first premise is the disjunction (the "either-or" statement), and its two alternatives are called the disjuncts. Both premises must be true for the conclusion to follow from them logically. It is a syllogistic form of argument that has three main parts: 1. The major premise 2. The minor premise 3. The conclusion. It is also an example of a valid argument: whenever its premises are true, its conclusion must be true. By understanding the examples below you can understand how to use the syllogism in your own arguments and reasoning. This is a logical form that has been used for centuries. It can be confusing to use at first, but once you have learned it, you will be able to spot many different types of logical fallacies and arguments filled with them.
Conditional Syllogism A conditional syllogism is a type of logical argument that has three parts: • The major premise: states what will happen if something else happens (or does not happen). • The minor premise: states what would have to be true for the consequence in the major premise to take place (or not take place). • The conclusion: follows from both premises. A conditional syllogism is a form of deductive reasoning in which the conclusion depends on two premises. The first premise is called the "major" premise, and the second the "minor" premise. In order for the conclusion to follow, both premises must be true. Conditional Syllogism Examples Conditional syllogisms follow the logic pattern "If A is true, then B is true." They are also known as hypothetical syllogisms because they reason about what follows if the premises hold. Conditional syllogism example: 1. All men are mortal 2. Socrates is a man 3. Therefore, Socrates is mortal It follows that if all men are mortal and Socrates is a man, then it is logical that Socrates is mortal. In a syllogism, the more general premise is called the major premise ("All men are mortal") and the more specific one the minor premise ("Socrates is a man"). The conclusion joins the logic of the two premises ("Therefore, Socrates is mortal"). Disjunctive Syllogism Example A disjunctive syllogism is a type of logical argument in which the conclusion is drawn from two premises: an "either-or" statement and a premise that denies one of its alternatives. Disjunctive Syllogism Example 1: Either it's raining outside or it's sunny; it's not raining outside, so it's sunny. Disjunctive Syllogism Example 2: This type of syllogism has a "disjunction" as a premise, that is, an "either-or" statement. Example: 1.
Either my sibling is a sister, or my sibling is a brother. 2. My sibling is not a sister. 3. Therefore, my sibling is a brother. Disjunctive Syllogism Fallacy A syllogism is a form of logical argument that has two premises and one conclusion. The disjunctive syllogism fallacy, also called affirming a disjunct, occurs when one alternative of a disjunction is affirmed and the other is then concluded to be false. This does not follow when the "or" is inclusive, since both alternatives may be true at once. This fallacy takes the form: 1. "Either A or B." 2. "A is true." 3. "Therefore, B is false." Disjunctive Syllogism Fallacy Example An example of the disjunctive syllogism fallacy would be: "Max is a cat or Max is a mammal. Max is a cat. Therefore, Max is not a mammal." Disjunctive Syllogism Summary A disjunctive syllogism is a formal logical argument with two premises: a disjunction ("A or B") and the denial of one disjunct ("not A"). The conclusion affirms the remaining disjunct ("therefore B"). If both premises are true, the conclusion must be true.
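The contrast between the valid form and the fallacy can be checked mechanically by enumerating truth values: an argument form is valid exactly when no assignment makes all premises true and the conclusion false. A small sketch:

```python
from itertools import product

def valid(premises, conclusion):
    """An argument form is valid iff no truth assignment makes every
    premise true while the conclusion is false."""
    return all(conclusion(a, b)
               for a, b in product([True, False], repeat=2)
               if all(p(a, b) for p in premises))

# Disjunctive syllogism: (A or B), not A, therefore B  -> valid
ds_valid = valid([lambda a, b: a or b, lambda a, b: not a],
                 lambda a, b: b)

# Affirming a disjunct: (A or B), A, therefore not B  -> invalid,
# since A and B may both be true with an inclusive "or"
fallacy_valid = valid([lambda a, b: a or b, lambda a, b: a],
                      lambda a, b: not b)
```

The case a = True, b = True is the counterexample that makes affirming a disjunct invalid: both premises hold, yet the conclusion "not B" is false.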
Nano Dimension (NNDM) stock forecast for 2026. Forecast tables and graphs. Nano Dimension (NNDM) stock forecast for 2026 Updated: November 11, 2024 (13:46) Sector: Technology What analysts predict: $10 52-week High/Low: $3.01 / $2.04 50/200 Day Moving Average: $2.267 / $2.439 (this figure is the average closing price over the previous 50/200 days). For Nano Dimension stock, both the 50-day and the 200-day moving averages are resistance levels today. Are you interested in Nano Dimension Ltd. stocks and want to buy them, or are they already in your portfolio? If yes, then on this page you will find useful information about the dynamics of the Nano Dimension stock price in 2026. How much will one Nano Dimension share be worth in 2026? Is it worth taking profit / loss on NNDM stock now or waiting? What are analysts' forecasts for Nano Dimension? We forecast Nano Dimension stock performance using neural networks trained on historical NNDM price data. Technical analysis tools are also used, and world geopolitical and news factors are taken into account. The prediction results are shown below as a graph, a table, and text. Nano Dimension Ltd.
stock forecasts are adjusted once a day based on the closing price of the previous trading day. The minimum analyst target price for Nano Dimension is $10. Today the 200-day moving average ($2.439) and the 50-day moving average ($2.267) are both resistance levels. Historical and forecast chart of Nano Dimension Ltd. stock The chart below shows the historical price of Nano Dimension stock and a prediction chart for the next year. For convenience, prices are divided by color. Forecast prices include: Optimistic Forecast, Pessimistic Forecast, and Weighted Average Best Forecast. Detailed values for the NNDM stock price can be found in the table below. Nano Dimension Ltd. is engaged in providing intelligent machines for manufacturing additively manufactured electronics. Its products and services include the DragonFly Pro system, conductive and insulating inks for printed electronics, and optimized multi-material designs for sophisticated electronics. The company was founded by Amit Dror and Simon Fried in 2012 and is headquartered in Ness Ziona, Israel.
Nano Dimension (NNDM) Forecast for 2026
Month  Target  Pes.   Opt.   Vol., %
Jan    2.78    2.65   2.98   11.01 %
Feb    3.01    2.93   3.20    8.29 %
Mar    3.25    3.16   3.47    8.94 %
Apr    3.44    3.27   3.52    7.20 %
May    3.72    3.46   3.83    9.86 %
Jun    3.59    3.51   3.76    6.53 %
Jul    3.56    3.43   3.65    6.14 %
Aug    3.63    3.54   3.83    7.51 %
Sep    3.59    3.46   3.68    5.89 %
Oct    3.33    3.11   3.56   12.55 %
Nov    3.42    3.34   3.61    7.35 %
Dec    3.36    3.14   3.42    8.30 %
Nano Dimension information and performance Nano Dimension Address: 2 ILAN RAMON, NESS ZIONA, IL Market Capitalization: 469 169 000 $ Market capitalization of Nano Dimension Ltd. is the total market value of all issued shares of the company, calculated by multiplying the number of NNDM shares outstanding by the market price of one share. EBITDA: -102 235 000 $ EBITDA of Nano Dimension is earnings before interest, taxes, depreciation, and amortization.
PE Ratio: None P/E ratio (price to earnings) shows the ratio between the price of a share and the company's earnings. PEG Ratio: N/A Price/earnings to growth. DPS: None Dividend Per Share is the ratio of the company's net profit available for distribution to the annual average number of ordinary shares. DY: None Dividend yield is a ratio that shows how much a company pays in dividends each year relative to the stock price. EPS: -0.62 EPS shows how much of the net profit is attributable to each common share. Quarterly Earnings Growth YOY: N/A Quarterly Revenue Growth YOY: 0.017 Trailing PE: - Trailing P/E uses the current share price divided by total earnings per share over the last 12 months. Forward PE: - Forward P/E uses projections of future earnings instead of trailing numbers. EV To Revenue: 2.655 Enterprise Value (EV) / Revenue. EV To EBITDA: 4.759 The EV/EBITDA ratio shows the ratio of enterprise value (EV) to profit before interest, taxes, depreciation, and amortization (EBITDA). Shares Outstanding: 219238000 Number of issued ordinary shares. Shares Float: N/A Number of freely tradable shares. Shares Short Prior Month: N/A Number of shares in short positions in the last month. Percent Institutions: N/A
Nano Dimension (NNDM) stock dividend. Nano Dimension last paid dividends on 01/01/1970. The next scheduled payment will be on 10/22/2019. The amount of the dividend is $None per share. If the date of the next dividend payment has not been updated, it means that the issuer has not yet announced the exact payment; as soon as information becomes available, we will update the data. Bookmark our portal to stay updated. Last Split Date: 01/01/1970. A share split increases the number of the issuer's securities circulating on the market by decreasing their price, leaving capitalization unchanged.
For example, a 5:1 split means that the price of one share decreases to one-fifth of its previous value while the total number of shares increases fivefold. It is important to understand that this procedure changes neither the company's capitalization nor the total value of the shares held by investors.
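The invariance described above can be sketched in a few lines; all figures here are made up for illustration and do not come from any filing:

```python
# Illustrative sketch: an N:1 share split multiplies the share count by N,
# divides the price by N, and leaves market capitalization unchanged.

def apply_split(shares: int, price: float, ratio: int) -> tuple[int, float]:
    """Return (shares, price) after an N:1 split."""
    return shares * ratio, price / ratio

shares, price = 1_000_000, 10.0        # hypothetical pre-split position
cap_before = shares * price

shares_after, price_after = apply_split(shares, price, ratio=5)
cap_after = shares_after * price_after

print(shares_after, price_after)        # 5000000 2.0
print(cap_before == cap_after)          # True: capitalization is unchanged
```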
{"url":"https://pandaforecast.com/stock_forecasts/forecast_nndm/for2026/","timestamp":"2024-11-11T19:22:28Z","content_type":"text/html","content_length":"173127","record_id":"<urn:uuid:52b9160b-50dc-4602-a6af-ce04f03d12fe>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00025.warc.gz"}
Welcome to my homepage. I am an assistant professor in mathematics in the Department of Mathematics, Physics and Electrical Engineering at Northumbria University, where I am part of the research group on mathematics of complex and nonlinear phenomena. My research interests include, on the mathematical side, nonlinear waves, pattern formation, dynamical systems, statistical physics, and related fields, and on the application side, metamaterials, nonlinear optics, climate dynamics, fluid mechanics, and related fields. Previously, I held postdoctoral positions at University of Colorado at Boulder, Northwestern University, and University of Chicago, and obtained my PhD in physics at University of California at Berkeley advised by Edgar Knobloch. You can find my contact information below. My academic staff profile at Northumbria University can be found here. This site (http://www.yipingma.com/) was last updated on 16 March 2024. Yiping Ma Department of Mathematics, Physics and Electrical Engineering Email: yiping.m[at]gmail.com Northumbria University, Newcastle upon Tyne, NE1 8ST, UK
{"url":"http://yipingma.com/home.html","timestamp":"2024-11-04T02:34:21Z","content_type":"text/html","content_length":"3968","record_id":"<urn:uuid:9fcd962a-960c-40f0-a2ce-a243c9676e6a>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00245.warc.gz"}
Seminars — Geolis. Room P3.10, Mathematics Building Modular forms and topology In this survey talk I will describe how modular forms give invariants of manifolds, and how these invariants detect elements of the homotopy groups of spheres. These invariants pass through a cohomology theory of Topological Modular Forms (TMF). I will review the role that K-theory plays in detecting periodic families of elements in the homotopy groups of spheres (the image of the J homomorphism) in terms of denominators of Bernoulli numbers. I will then describe how certain higher families of elements (the divided beta family) are detected by certain congruences between q-expansions of modular forms. Doug Ravenel's web page for a seminar on topological automorphic forms contains a comprehensive list of references.
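As a concrete instance of the Bernoulli-number denominators mentioned in the abstract, the classical statement (Adams' theorem on the image of J, recalled here from the standard literature and worth checking against the references on Ravenel's page) reads:

```latex
% Adams: in degree 4k-1, the image of the J-homomorphism in the stable
% homotopy groups of spheres is cyclic of order the denominator of B_{2k}/4k.
\[
  \lvert \operatorname{im} J_{4k-1} \rvert
    = \operatorname{denom}\!\left(\frac{B_{2k}}{4k}\right),
  \qquad\text{e.g. } k=1:\ \frac{B_2}{4} = \frac{1}{24},
  \quad \operatorname{im} J \subseteq \pi_3^s \cong \mathbb{Z}/24 .
\]
```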
{"url":"https://geolis.math.tecnico.ulisboa.pt/seminars?id=3167","timestamp":"2024-11-06T23:49:06Z","content_type":"text/html","content_length":"8875","record_id":"<urn:uuid:6c429db3-b845-4374-9ea9-e0e68592c637>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00494.warc.gz"}
Drug Dosage Calculation Exam 13 - RNpedia Practice Mode Welcome to your Drug Dosage Calculation Exam 13! This exam is carefully curated to help you consolidate your knowledge and gain deeper understanding on the topic. ✔ Exam Details • Number of Questions: 10 items • Mode: Practice Mode ✔ Exam Instructions 1. Practice Mode: This mode aims to facilitate effective learning and review. 2. Instant Feedback: After each question, the correct answer along with an explanation will be revealed. This is to help you understand the reasoning behind the correct answer, helping to reinforce your learning. 3. Time Limit: There is no time limit for this exam. Take your time to understand each question and the corresponding choices. ✔ Tips For Success • Read each question carefully. Take your time and don't rush. • Understand the rationale behind each answer. This will not only help you during this exam, but also assist in reinforcing your learning. • Don't be discouraged by incorrect answers. Use them as an opportunity to learn and improve. • Take breaks if you need them. It's not a race, and your understanding is what's most important. • Keep a positive attitude and believe in your ability to succeed. Remember, this exam is not just a test of your knowledge, but also an opportunity to enhance your understanding and skills. Enjoy the learning journey! Click 'Start Exam' when you're ready to begin. Best of luck! 💡 Hint To find the right volume, divide the total dosage ordered by the concentration of the medication available. This will give you the volume in milliliters that you should administer. 1 / 10 1. Nurse Thomas is assigned to a patient who is in severe pain. An order comes in for Dilaudid, 3 mg to be administered intramuscularly. The medication cupboard holds vials of Dilaudid with a concentration of 2 mg/mL. As he prepares to draw up the medication, Thomas wonders, "What volume should I administer to deliver the ordered dosage?" 
💡 Hint First, convert the ordered dose from grams to milligrams, knowing that 1 gram equals 1000 milligrams. Then, determine the number of tablets that will provide the correct dose based on the strength of the available medication. 2 / 10 2. Nurse Jenna is organizing her morning medication pass when she sees an order for 0.05 grams of Imipramine HCL. The medication cabinet contains Imipramine HCL in 50 mg tablets. As Jenna readies the dose, she wonders, "How many of these tablets should I administer to fulfill the ordered dose?" 💡 Hint To find the correct number of tablets, first convert the ordered dose from grams to milligrams, knowing that 1 gram equals 1000 milligrams. Then, divide this by the strength of each tablet to find the number of tablets to administer. 3 / 10 3. Nurse Noah is caring for a patient who has an order for Azulfidine. The order specifies 1.5 grams every twelve hours. The medication tray on the unit holds Azulfidine tablets, each containing 500 mg. As Noah prepares to administer the medication, he wonders, "How many of these tablets should I give to fulfill the ordered dose?" 💡 Hint Start by converting the ordered dose from milligrams to micrograms, since 1 milligram equals 1000 micrograms. Then, determine the number of tablets that will provide the correct dose based on the strength of the available medication. 4 / 10 4. Nurse Sophia is reviewing the medication list for her morning rounds and comes across an order for Premarin 1.25 mg for one of her patients. Upon checking the medication cabinet, she finds that the available Premarin tablets each contain 625 mcg. Preparing to administer the medication, Sophia wonders, "How many of these tablets should I give to meet the ordered dose?" 💡 Hint To find the correct number of tablets, first convert the ordered dose from grams to milligrams, knowing that 1 gram equals 1000 milligrams. Then, divide this by the strength of each tablet to find the number of tablets to administer. 5 / 10 5. 
Nurse Bella is taking care of a patient with an order for Azulfidine. The prescription calls for two grams every twelve hours. The medication tray on the ward has Azulfidine tablets, each containing 500 mg. As she prepares to administer the medication, Bella wonders, "How many tablets should I give to fulfill the ordered dosage?" 💡 Hint To calculate the correct number of tablets to administer, divide the total dose required (in units) by the strength (in units) of each tablet. 6 / 10 6. In the evening medication round, Nurse Ethan is serving a patient who has an order for 1,200,000 units of potassium penicillin. On the medication trolley, Ethan finds that the available potassium penicillin tablets each contain 400,000 units. As he dispenses the medicine, Ethan wonders, "How many of these tablets should I give to provide the ordered dose?" 💡 Hint Remember that 1 mg is equal to 1000 mcg. Use this to convert the ordered dose and the available dosage strength to the same units, then determine how many tablets are needed to provide the correct dose. 7 / 10 7. As part of the morning medication pass, Nurse Eli has an order to administer 200 mcg of Ergotrate maleate to a patient. He has tablets available, each with a dosage strength of 0.2 mg. Eli is considering, "How many of these tablets should I give to meet the ordered dosage?" 💡 Hint Simply divide the total dose required by the strength of each tablet to find out how many tablets should be administered. 8 / 10 8. Nurse Liam is preparing medication for his patient, who has an order for 800,000 units of potassium penicillin. Checking the medication cart, Liam finds that the available potassium penicillin tablets are each 400,000 units. As he prepares to dispense the medication, he wonders, "How many tablets should I give to meet the ordered dose?" 💡 Hint Start by converting the ordered dose from grams to milligrams, as 1 gram equals 1000 milligrams.
Then, determine how many tablets will provide the desired dose, based on the strength of the available medication. 9 / 10 9. Nurse Emily is prepping medications for her morning rounds when she comes across an order for 0.1 gram of a certain medication. The available tablets on the unit are each 100 mg. As she gets ready to dispense the medication, Emily wonders, "What amount of this medication will I need to administer to fulfill the ordered dosage?" 💡 Hint Use the conversion factor of 1 grain to around 64.8 milligrams to convert the prescribed dose from grains to milligrams. Then, calculate the number of capsules needed based on the medicine's available strength. Here, the available and required strengths are similar, leading to the use of 1 capsule. 10 / 10 10. Nurse Jack is preparing medications and encounters an order for 1 ½ grains of Sodium Seconal. The available Sodium Seconal capsules in the medication cart are each labeled as 100 mg. Using a more precise conversion factor for grains, Jack calculates the correct amount and prepares to dispense the medicine. Exam Mode Welcome to your Drug Dosage Calculation Exam 13! This exam is carefully designed to provide you with a realistic test-taking experience, preparing you for the pressures of an actual nursing exam. ✔ Exam Details • Number of Questions: 10 items • Mode: Exam Mode ✔ Exam Instructions 1. Exam Mode: This mode is intended to simulate the environment of an actual exam. Questions and choices will be presented one at a time. 2. Time Limit: Each question must be answered within 90 seconds. The entire exam should be completed within 15 minutes. 3. Feedback and Grading: Upon completion of the exam, you will be able to see your grade and the correct answers to all questions. This will allow you to evaluate your performance and understand areas for improvement. ✔ Tips For Success • Read each question carefully. You have 90 seconds per question, so make sure you understand the question before selecting your answer.
• Pace yourself. Remember, you have 15 minutes in total, so try to maintain a steady rhythm. • Focus on one question at a time. Try not to worry about the questions to come. • Stay calm under pressure. Use your knowledge and trust your instincts. • Remember, it's not just about the score, but about the learning process. This exam is not only a measurement of your current understanding, but also a valuable learning tool to prepare you for your future nursing career. Click 'Start Exam' when you're ready to begin. Good luck! 1 / 10 1. Nurse Jack is preparing medications and encounters an order for 1 ½ grains of Sodium Seconal. The available Sodium Seconal capsules in the medication cart are each labeled as 100 mg. Using a more precise conversion factor for grains, Jack calculates the correct amount and prepares to dispense the medicine. 2 / 10 2. Nurse Emily is prepping medications for her morning rounds when she comes across an order for 0.1 gram of a certain medication. The available tablets on the unit are each 100 mg. As she gets ready to dispense the medication, Emily wonders, "What amount of this medication will I need to administer to fulfill the ordered dosage?" 3 / 10 3. Nurse Noah is caring for a patient who has an order for Azulfidine. The order specifies 1.5 grams every twelve hours. The medication tray on the unit holds Azulfidine tablets, each containing 500 mg. As Noah prepares to administer the medication, he wonders, "How many of these tablets should I give to fulfill the ordered dose?" 4 / 10 4. Nurse Thomas is assigned to a patient who is in severe pain. An order comes in for Dilaudid, 3 mg to be administered intramuscularly. The medication cupboard holds vials of Dilaudid with a concentration of 2 mg/mL. As he prepares to draw up the medication, Thomas wonders, "What volume should I administer to deliver the ordered dosage?" 5 / 10 5. As part of the morning medication pass, Nurse Eli has an order to administer 200 mcg of Ergotrate maleate to a patient.
He has tablets available, each with a dosage strength of 0.2 mg. Eli is considering, "How many of these tablets should I give to meet the ordered dosage?" 6 / 10 6. Nurse Liam is preparing medication for his patient, who has an order for 800,000 units of potassium penicillin. Checking the medication cart, Liam finds that the available potassium penicillin tablets are each 400,000 units. As he prepares to dispense the medication, he wonders, "How many tablets should I give to meet the ordered dose?" 7 / 10 7. In the evening medication round, Nurse Ethan is serving a patient who has an order for 1,200,000 units of potassium penicillin. On the medication trolley, Ethan finds that the available potassium penicillin tablets each contain 400,000 units. As he dispenses the medicine, Ethan wonders, "How many of these tablets should I give to provide the ordered dose?" 8 / 10 8. Nurse Sophia is reviewing the medication list for her morning rounds and comes across an order for Premarin 1.25 mg for one of her patients. Upon checking the medication cabinet, she finds that the available Premarin tablets each contain 625 mcg. Preparing to administer the medication, Sophia wonders, "How many of these tablets should I give to meet the ordered dose?" 9 / 10 9. Nurse Bella is taking care of a patient with an order for Azulfidine. The prescription calls for two grams every twelve hours. The medication tray on the ward has Azulfidine tablets, each containing 500 mg. As she prepares to administer the medication, Bella wonders, "How many tablets should I give to fulfill the ordered dosage?" 10 / 10 10. Nurse Jenna is organizing her morning medication pass when she sees an order for 0.05 grams of Imipramine HCL. The medication cabinet contains Imipramine HCL in 50 mg tablets. As Jenna readies the dose, she wonders, "How many of these tablets should I administer to fulfill the ordered dose?" Text Mode Text Mode – Text version of the exam 1. 
Nurse Thomas is assigned to a patient who is in severe pain. An order comes in for Dilaudid, 3 mg to be administered intramuscularly. The medication cupboard holds vials of Dilaudid with a concentration of 2 mg/mL. As he prepares to draw up the medication, Thomas wonders, “What volume should I administer to deliver the ordered dosage?” 2. Nurse Emily is prepping medications for her morning rounds when she comes across an order for 0.1 gram of a certain medication. The available tablets on the unit are each 100 mg. As she gets ready to dispense the medication, Emily wonders, “What amount of this medication will I need to administer to fulfill the ordered dosage?” 3. Nurse Liam is preparing medication for his patient, who has an order for 800,000 units of potassium penicillin. Checking the medication cart, Liam finds that the available potassium penicillin tablets are each 400,000 units. As he prepares to dispense the medication, he wonders, “How many tablets should I give to meet the ordered dose?” 4. Nurse Bella is taking care of a patient with an order for Azulfidine. The prescription calls for two grams every twelve hours. The medication tray on the ward has Azulfidine tablets, each containing 500 mg. As she prepares to administer the medication, Bella wonders, “How many tablets should I give to fulfill the ordered dosage?” 5. In the evening medication round, Nurse Ethan is serving a patient who has an order for 1,200,000 units of potassium penicillin. On the medication trolley, Ethan finds that the available potassium penicillin tablets each contain 400,000 units. As he dispenses the medicine, Ethan wonders, “How many of these tablets should I give to provide the ordered dose?” 6. Nurse Sophia is reviewing the medication list for her morning rounds and comes across an order for Premarin 1.25 mg for one of her patients. Upon checking the medication cabinet, she finds that the available Premarin tablets each contain 625 mcg. 
Preparing to administer the medication, Sophia wonders, “How many of these tablets should I give to meet the ordered dose?” 7. Nurse Noah is caring for a patient who has an order for Azulfidine. The order specifies 1.5 grams every twelve hours. The medication tray on the unit holds Azulfidine tablets, each containing 500 mg. As Noah prepares to administer the medication, he wonders, “How many of these tablets should I give to fulfill the ordered dose?” 8. Nurse Jack is preparing medications and encounters an order for 1 ½ grains of Sodium Seconal. The available Sodium Seconal capsules in the medication cart are each labeled as 100 mg. Using a more precise conversion factor for grains, Jack calculates the correct amount and prepares to dispense the medicine. 9. Nurse Jenna is organizing her morning medication pass when she sees an order for 0.05 grams of Imipramine HCL. The medication cabinet contains Imipramine HCL in 50 mg tablets. As Jenna readies the dose, she wonders, “How many of these tablets should I administer to fulfill the ordered dose?” 10. As part of the morning medication pass, Nurse Eli has an order to administer 200 mcg of Ergotrate maleate to a patient. He has tablets available, each with a dosage strength of 0.2 mg. Eli is considering, “How many of these tablets should I give to meet the ordered dosage?” Answers & Rationales 1. Solution: Nurse Thomas can calculate the volume to be administered using the following formula: Volume (in mL) = Dose required (in mg) / Concentration of solution (in mg/mL) The ordered dose is 3 mg and the available concentration of the solution is 2 mg/mL. Substitute these values into the formula: Volume (in mL) = 3 mg / 2 mg/mL Now, let’s calculate the volume: Volume = 1.5 mL Therefore, Nurse Thomas should administer 1.5 mL of the Dilaudid solution to deliver the ordered dose of 3 mg. 2. 
Solution: Nurse Emily can determine the amount of medication needed by first converting the ordered dose from grams to milligrams: 0.1 grams = 0.1 * 1000 = 100 milligrams The available tablets on the unit are each 100 mg. Given that the dose she needs to administer is also 100 milligrams, she needs to give the patient 1 tablet to fulfill the ordered dosage. Therefore, Nurse Emily should administer one tablet of this medication to deliver the ordered dose of 0.1 gram. 3. Solution: Nurse Liam can determine the number of tablets needed by using the following formula: Number of tablets = Ordered dose (in units) / Tablet strength (in units) The ordered dose is 800,000 units and the strength of each available tablet is 400,000 units. Substitute these values into the formula: Number of tablets = 800,000 units / 400,000 units Now, let’s calculate the number of tablets: Number of tablets = 2 Therefore, Nurse Liam should administer two tablets of potassium penicillin to deliver the ordered dose of 800,000 units. 4. Solution: Nurse Bella can determine the number of tablets needed by using the following formula: Number of tablets = Ordered dose (in mg) / Tablet strength (in mg) First, she needs to convert the ordered dose from grams to milligrams, since the strength of the tablets is given in milligrams: 2 grams = 2 * 1000 = 2000 milligrams Then, substitute these values into the formula: Number of tablets = 2000 mg / 500 mg Now, let’s calculate the number of tablets: Number of tablets = 4 Therefore, Nurse Bella should administer four tablets of Azulfidine to fulfill the ordered dosage of two grams. 5. Solution: Nurse Ethan can determine the number of tablets needed using the following formula: Number of tablets = Ordered dose (in units) / Tablet strength (in units) The ordered dose is 1,200,000 units, and each available tablet has a strength of 400,000 units. 
Substitute these values into the formula: Number of tablets = 1,200,000 units / 400,000 units Now, let’s calculate the number of tablets: Number of tablets = 3 Therefore, Nurse Ethan should administer three tablets of potassium penicillin to deliver the ordered dose of 1,200,000 units. 6. Solution: Nurse Sophia can calculate the number of tablets needed using the following formula: Number of tablets = Ordered dose (in mcg) / Tablet strength (in mcg) First, she needs to convert the ordered dose from milligrams (mg) to micrograms (mcg), since the strength of the tablets is given in micrograms. There are 1,000 micrograms in a milligram, so: 1.25 mg = 1.25 * 1000 = 1250 mcg Then, she substitutes these values into the formula: Number of tablets = 1250 mcg / 625 mcg Now, let’s calculate the number of tablets: Number of tablets = 2 Therefore, Nurse Sophia should administer two tablets of Premarin to fulfill the ordered dosage of 1.25 mg. 7. Solution: Nurse Noah can determine the number of tablets needed by using the following formula: Number of tablets = Ordered dose (in mg) / Tablet strength (in mg) First, he needs to convert the ordered dose from grams to milligrams, as the strength of the tablets is given in milligrams: 1.5 grams = 1.5 * 1000 = 1500 milligrams Then, he substitutes these values into the formula: Number of tablets = 1500 mg / 500 mg Now, let’s calculate the number of tablets: Number of tablets = 3 Therefore, Nurse Noah should administer three tablets of Azulfidine to fulfill the ordered dosage of 1.5 grams. 8. Solution: To convert the order from grains to milligrams, Nurse Jack can use the conversion factor that 1 grain is approximately equal to 64.8 milligrams. 
So, the ordered dose in milligrams can be calculated as: Ordered dose (in mg) = Ordered dose (in grains) * Conversion factor (in mg/grain) Let's substitute these values into the formula: Ordered dose (in mg) = 1.5 grains * 64.8 mg/grain Ordered dose = 97.2 mg The available Sodium Seconal capsules each contain 100 mg. Given that the calculated dose is 97.2 mg, which is very close to 100 mg, Jack will need to give the patient 1 capsule to meet the ordered dose. Therefore, Nurse Jack should administer one capsule of Sodium Seconal to deliver the ordered dose of 1 ½ grains. Please note that this approximation should be verified with the prescriber or a pharmacist, as it slightly exceeds the ordered dose. 9. Solution: Nurse Jenna can calculate the number of tablets needed using the following formula: Number of tablets = Ordered dose (in mg) / Tablet strength (in mg) First, she needs to convert the ordered dose from grams to milligrams, as the strength of the tablets is given in milligrams: 0.05 grams = 0.05 * 1000 = 50 milligrams Then, she substitutes these values into the formula: Number of tablets = 50 mg / 50 mg Now, let's calculate the number of tablets: Number of tablets = 1 Therefore, Nurse Jenna should administer one tablet of Imipramine HCL to fulfill the ordered dosage of 0.05 grams. 10. Solution: Nurse Eli can calculate the number of tablets needed using the following formula: Number of tablets = Ordered dose (in mcg) / Tablet strength (in mcg) First, he needs to convert the tablet strength from milligrams (mg) to micrograms (mcg) since the ordered dose is given in micrograms.
There are 1000 micrograms in a milligram, so: 0.2 mg = 0.2 * 1000 = 200 mcg Then, he substitutes these values into the formula: Number of tablets = 200 mcg / 200 mcg Now, let’s calculate the number of tablets: Number of tablets = 1 Therefore, Nurse Eli should administer one tablet of Ergotrate maleate to fulfill the ordered dosage of 200 mcg.
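The two formulas used throughout these rationales (volume = dose / concentration, and number of tablets = ordered dose / tablet strength, after converting to common units) can be collected into a short sketch; the helper names are illustrative, not from any nursing reference:

```python
# Hypothetical helpers implementing the dosage formulas from the rationales above.

MG_PER_GRAM = 1000     # 1 gram  = 1000 milligrams
MCG_PER_MG = 1000      # 1 mg    = 1000 micrograms
MG_PER_GRAIN = 64.8    # approximate conversion used in question 8

def volume_ml(dose_mg: float, concentration_mg_per_ml: float) -> float:
    """Volume (mL) = dose required (mg) / concentration (mg/mL)."""
    return dose_mg / concentration_mg_per_ml

def tablets(ordered_dose: float, tablet_strength: float) -> float:
    """Number of tablets = ordered dose / tablet strength (same units!)."""
    return ordered_dose / tablet_strength

# Q1: 3 mg Dilaudid from a 2 mg/mL vial -> 1.5 mL
print(volume_ml(3, 2))                      # 1.5

# Q4: 2 g Azulfidine with 500 mg tablets -> 4 tablets
print(tablets(2 * MG_PER_GRAM, 500))        # 4.0

# Q6: 1.25 mg Premarin with 625 mcg tablets -> 2 tablets
print(tablets(1.25 * MCG_PER_MG, 625))      # 2.0

# Q8: 1.5 grains of Seconal is about 97.2 mg, close to one 100 mg capsule
print(round(1.5 * MG_PER_GRAIN, 1))         # 97.2
```

Note the comment in both helpers: the single most common error in these problems is dividing quantities expressed in different units, which is why each rationale converts first.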
{"url":"https://www.rnpedia.com/practice-exams/pharmacology-and-drug-calculation-exams/drug-dosage-calculation-exam-3/","timestamp":"2024-11-09T03:11:28Z","content_type":"text/html","content_length":"342376","record_id":"<urn:uuid:ccce7573-f662-42cb-87e3-f40447f8dc1e>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00133.warc.gz"}
How Can You Make the Most Value From Your Mathematics Degree? - Mydegree.com The field of mathematics has been valued since ancient times by civilisations around the world. Over the past few decades, mathematics has evolved into a scientific discipline studied at universities across the world. In most countries, students must opt for either a science-oriented course or a humanities-based one after studying mathematics at school, and relatively few people continue to an advanced degree in mathematics at college or university. However, more and more qualified professionals are applying their knowledge of mathematical concepts in domestic and international businesses. In this article, we talk about the benefits of choosing mathematics as a career option and discuss how it is valued today. So if you're seeking knowledge about the value of a mathematics degree, you're in the right place. Grab a seat and enjoy reading! 1 – You Can Become an Auditor or a Financial Consultant A mathematics degree acts as a stepping stone for graduates: if you want to become a financial consultant or an auditor, this degree will not disappoint you. For this reason, it is one of the most popular choices among undergraduates in recent times. Even though a bachelor's degree is usually sufficient to get a job as an auditor or a financial adviser, advanced knowledge in this field can be helpful for students who are interested in pursuing a mathematics-related master's degree. 2 – You Can Work as an Actuary With more emphasis on mathematical concepts in business, actuaries have become increasingly important these days. Highly qualified mathematicians are very much valued here: they use their knowledge of statistical models to predict outcomes.
This ensures that businesses are able to make better-informed decisions about investments and insurance. 3 – You Can Be a High School Teacher Mathematics degree holders remain in demand as teachers, so if you have an interest in teaching you can apply as a maths teacher. Only a bachelor's degree is required to make teaching your professional career. It will not only equip you with knowledge of the subject but also make you more effective at delivering mathematical ideas to younger students. 4 – You Can Work as a Research Assistant in Academia Fully aware of the role that mathematics plays in scientific research, scientists are now spending more time studying how it can be used for practical purposes. If you have an interest in understanding how mathematicians work, graduating with a degree in mathematics is a good idea. Research assistants work under experienced mathematicians and become familiar with the day-to-day work of an academic environment. This will give you the opportunity to find out whether this field suits your personality before committing to further studies. 5 – You Can Choose a Career as a Statistician Mathematicians work in a variety of fields including statistical analysis, actuarial science and computer science. Statisticians are responsible for collecting data from relevant sources and presenting the information to business managers. Such professionals use their understanding of mathematical concepts to understand how variables interact with each other so that correct decisions can be made by the team at an organization. 6 – You Can Become a Data Scientist Professionals who work as analysts or researchers on data are known as data scientists. Their job is to analyse data and extract useful information from it; the data may include text-based content, audio, charts and images. Mathematical knowledge will help you overcome the challenges along the way.
7 – You Can Work as an Operations Research Analyst It is common for mathematicians to use their knowledge of numerical methods, statistical analysis and operations research techniques to generate strategies that help organizations make effective decisions. Usually, this role involves understanding the requirements of clients so that they can be translated into business plans. Those seeking employment in this field usually need a bachelor's degree or higher qualification in mathematics and relevant skills. Average Salaries of a Mathematician in Chosen Career Paths The salaries of mathematicians are usually higher than the average salary in other professions. An actuarial scientist earns $82,000 per year, whereas a high school teacher may only get paid about $55,000 annually. Here are salary brackets for other professions in the field of mathematics: • Statistician: $54,200 • Data Scientist: $92,500 • Operations Research Analyst: $50,000 Furthermore, salaries may vary depending on geographical location. In New York City, one could get paid more than one's peers elsewhere. The Highest Paying Job for a Mathematician According to Salary.com, one of the highest paying jobs is that of an actuary. This profession can lead to an annual salary of $112,000. So it is evident that the value of a mathematics degree never decreases. A degree in mathematics is a great choice for anyone who likes studying the subject matter. It can lead to various careers where you can use your knowledge of mathematical concepts. Actuaries' salaries are usually higher than those in other professions because of the specialised skills and calculations they apply in this area. If you're not sure what career options are available, completing a bachelor's degree in mathematics will offer you many opportunities. Just choose one that matches your personality and requirements so that you can make the most out of this degree.
MathGPT - Free Math AI Solver | Mathful MathGPT, Even Better Than GPT-4 MathGPT by Mathful can scan image uploads of math problems with unsurpassed accuracy and deliver detailed step-by-step solutions, offering an edgier alternative to GPT-4 and changing the way students tackle math problems. All-Powerful Math GPT Step-by-step Explanations Beyond just providing the answers, Mathful's MathGPT offers detailed, step-by-step solutions, enabling you to break down and grasp complex math problems. Instantaneous Solution MathGPT formulates correct answers almost instantly. Students can always receive timely assistance and proceed with their assignments without delays. Near Perfect Accuracy With a 99% accuracy rate, no other math AI solvers compare. Trained on a massive database of math queries, MathGPT can deliver highly accurate and reliable answers. Calculator With Steps MathGPT comes with a step-by-step calculator that can instantly solve complex math equations and functions at all grade levels, be it algebra, calculus, and more. MathGPT Solves Any of Your Maths Problems Math GPT covers a wide range of math topics to help students from elementary school through university. We aim to be the only tool students need at any level of math. • Elementary math • Arithmetic • Word problems • Geometry • Pre-algebra and algebra • Pre-calculus and calculus • Trigonometry • And much more! How to Use Mathful's MathGPT for Free? Using Mathful's MathGPT is an absolute breeze. Just follow the steps below: • 01 Input Your Math Problem You can choose to input text, upload an image, or utilize a calculator. • 02 Let MathGPT Solve MathGPT is fast and before you know it, you will get your math problem solved. • 03 Detailed Math Answer Review the step-by-step explanations to grasp and comprehend them. Questions? We Have Answers. • What is MathGPT? MathGPT is an advanced AI math solver developed by Mathful that specializes in solving a diverse range of mathematical problems. 
It utilizes cutting-edge technology to scan and interpret images of math questions, providing users with accurate, step-by-step solutions. • What does MathGPT do? MathGPT is a comprehensive math assistant that can tackle various mathematical topics, including arithmetic, algebra, geometry, calculus, and more. Users can ask questions by typing or uploading images. The AI analyzes input and gives quick, clear, and accurate solutions. • How does MathGPT ensure accurate solutions? Mathful's MathGPT utilizes the latest AI algorithms to understand queries and clarify math concepts, achieving an impressive 98% accuracy rate in delivering answers. • Can MathGPT handle different types of math problems? Yes, MathGPT by Mathful spans numerous branches of math, from elementary math and pre-algebra to trigonometry and word problems, covering a wide range of math topics. • What languages does Math GPT support? MathGPT supports over 30 different languages, which include English, Chinese, Spanish, Italian, Portuguese, Korean, Japanese, and more, making it accessible to a wide range of users. • Is Mathful's MathGPT free? Yes, MathGPT by Mathful provides a free trial for users to solve math problems of all types, and after the trial ends, users can upgrade to a paid subscription to continue using the math AI. • What's the best MathGPT? Why? HIX Tutor provides the best MathGPT available. This tool excels in delivering 99% accuracy in solving math problems, making it one of the most reliable options on the market. HIX Tutor's MathGPT stands out due to its ability to provide detailed explanations alongside solutions, ensuring that students grasp the underlying concepts. Get Started With MathGPT for Free Today Curious to see the power of MathGPT firsthand? You can try it for free now and experience the difference it can make in your math studies!
The effect of adding randomly weighted edges (Journal Article) | NSF PAGES Given a directed acyclic graph (DAG) G=(V,E), we say that G is (e,d)-depth-robust (resp. (e,d)-edge-depth-robust) if for any set S⊆V (resp. S⊆E) of at most |S|≤e nodes (resp. edges) the graph G−S contains a directed path of length d. While edge-depth-robust graphs are potentially easier to construct, many applications in cryptography require node depth-robust graphs with small indegree. We create a graph reduction that transforms an (e,d)-edge-depth-robust graph with m edges into a (e/2,d)-depth-robust graph with O(m) nodes and constant indegree. One immediate consequence of this result is the first construction of a provably (n log log n/log n, n/(log n (log n)^{log log n}))-depth-robust graph with constant indegree. Our reduction crucially relies on ST-robust graphs, a new graph property we introduce which may be of independent interest. We say that a directed, acyclic graph with n inputs and n outputs is (k1,k2)-ST-robust if we can remove any k1 nodes and there exists a subgraph containing at least k2 inputs and k2 outputs such that each of the k2 inputs is connected to all of the k2 outputs. If the graph is (k1,n−k1)-ST-robust for all k1≤n, we say that the graph is maximally ST-robust. We show how to construct maximally ST-robust graphs with constant indegree and O(n) nodes. Given a family M of ST-robust graphs and an arbitrary (e,d)-edge-depth-robust graph G we construct a new constant-indegree graph Reduce(G,M) by replacing each node in G with an ST-robust graph from M. We also show that ST-robust graphs can be used to construct (tight) proofs-of-space and (asymptotically) improved wide-block labeling functions.
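For intuition about the definition, (e,d)-depth-robustness can be checked by brute force on toy DAGs. This is only an illustration of the property (exponential in e, nothing like the paper's constructions); the helper names and the example graph are my own:

```python
from itertools import combinations

def longest_path(n, edges, removed):
    # Nodes are 0..n-1 in topological order (all edges go low -> high),
    # so a single pass over edges sorted by source computes longest paths.
    best = {v: 0 for v in range(n) if v not in removed}
    for u, v in sorted(edges):
        if u in removed or v in removed:
            continue
        best[v] = max(best[v], best[u] + 1)
    return max(best.values(), default=0)

def is_depth_robust(n, edges, e, d):
    # (e,d)-depth-robust iff every removal of at most e nodes
    # leaves a directed path of length >= d
    return all(longest_path(n, edges, set(S)) >= d
               for k in range(e + 1)
               for S in combinations(range(n), k))

# A path 0 -> 1 -> ... -> 5 with "skip" edges that tolerate single deletions
edges = [(i, i + 1) for i in range(5)] + [(i, i + 2) for i in range(4)]
```

On this six-node graph, deleting any single node still leaves a path of length 4, so it is (1,4)-depth-robust, but deleting two well-chosen nodes (e.g. 1 and 2) breaks every long path.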
2.4 Rolling origin | Forecasting and Analytics with the Augmented Dynamic Adaptive Model (ADAM) \( \newcommand{\mathbbm}[1]{\boldsymbol{\mathbf{#1}}} \) 2.4 Rolling origin Remark. The text in this section is based on the vignette for the greybox package, written by the author of this monograph. When there is a need to select the most appropriate forecasting model or method for the data, the forecaster usually splits the sample into two parts: in-sample (aka “training set”) and holdout sample (aka out-sample or “test set”). The model is estimated on the in-sample, and its forecasting performance is evaluated using some error measure on the holdout sample. Using this procedure only once is known as “fixed origin” evaluation. However, this might give a misleading impression of the accuracy of forecasting methods. If, for example, the time series contains outliers or level shifts, a poor model might perform better in fixed origin evaluation than a more appropriate one just by chance. So it makes sense to have a more robust evaluation technique, where the model’s performance is evaluated several times, not just once. An alternative procedure known as “rolling origin” evaluation is one such technique. In rolling origin evaluation, the forecasting origin is repeatedly moved forward by a fixed number of observations, and forecasts are produced from each origin (Tashman, 2000). This technique allows obtaining several forecast errors for time series, which gives a better understanding of how the models perform. This can be considered a time series analogue to cross-validation techniques (see Chapter 5 of James et al., 2017). Here is a simple graphical representation, courtesy of Nikos Kourentzes. The plot in Figure 2.4 shows how the origin moves forward and the point and interval forecasts of the model change. As a result, this procedure gives information about the performance of the model over a set of observations, not on a random one. 
There are different options of how this can be done, and here we discuss the main principles behind it. 2.4.1 Principles of rolling origin Figure 2.5 (Svetunkov and Petropoulos, 2018) illustrates the basic idea behind rolling origin. White cells correspond to the in-sample data, while the light grey cells correspond to the three steps ahead forecasts. The time series in the figure has 25 observations, and forecasts are produced for eight origins starting from observation 15. In the first step, the model is estimated on the first in-sample set, and forecasts are created for the holdout. Next, another observation is added to the end of the in-sample set, the test set is advanced, and the procedure is repeated. The process stops when there is no more data left. This is a rolling origin with a constant holdout sample size. As a result of this procedure, eight one to three steps ahead forecasts are produced. Based on them, we can calculate the preferred error measures and choose the best performing model (see Section 2.1.2). Another option for producing forecasts via rolling origin would be to continue with rolling origin even when the test sample is smaller than the forecast horizon, as shown in Figure 2.6. In this case, the procedure continues until origin 22, when the last complete set of three steps ahead forecasts can be produced, and then continues with a decreasing forecasting horizon. So the two steps ahead forecast is produced from origin 23, and only a one-step-ahead forecast is produced from origin 24. As a result, we obtain ten one-step-ahead forecasts, nine two steps ahead forecasts and eight three steps ahead forecasts. This is a rolling origin with a non-constant holdout sample size, which can be helpful with small samples when not enough observations are available. Finally, in both cases above, we had the increasing in-sample size. However, we might need a constant in-sample for some research purposes. Figure 2.7 demonstrates such a setup. 
In this case, in each iteration, we add an observation to the end of the in-sample series and remove one from the beginning (dark grey cells). 2.4.2 Implementing rolling origin in R Now that we discussed the main idea of rolling origin, we can see how it can be implemented in R. In this section, we will implement rolling origin with a fixed holdout sample size and a changing in-sample. This aligns with what is typically done in practice when new data arrives: the model is re-estimated, and forecasts are produced for the next \(h\) steps ahead. For this example, we will use artificially created data and apply a Simple Moving Average (discussed in Subsection 3.3.3) implemented in the smooth package. We will produce forecasts for the horizon of 10 steps ahead, \(h=10\), from 5 origins. We will create a list containing several objects of interest: • actuals will contain all the actual values; • holdout will be a matrix containing the actual values for the holdout. It will have h rows and origins columns; • mean will contain point forecasts from our model. This will also be a matrix with the same dimensions as the holdout one.
returnedValues1 <- setNames(vector("list",3),
                            c("actuals","holdout","mean"))
returnedValues1$actuals <- y
returnedValues1$holdout <- matrix(NA, h, origins)
returnedValues1$mean <- matrix(NA, h, origins)
Finally, we write a simple loop that repeats the model fit and forecasting for the horizon h several times. The trickiest part is understanding how to define the train and test samples. In our example, the former should have obs+1-origins-h observations in the first step and obs-h in the last one so that we can have h observations in the test set throughout all origins, and we can repeat this origins times.
One way of doing this is via the following loop:
for(i in 1:origins){
  # Fit the model
  testModel <- sma(y[1:(obs+i-origins-h)])
  # Drop the in-sample observations
  # and extract the first h observations from the rest
  returnedValues1$holdout[,i] <- head(y[-c(1:(obs-origins+i-h))], h)
  # Produce forecasts and write down the mean values
  returnedValues1$mean[,i] <- forecast(testModel, h=h)$mean
}
This basic loop can be amended to include anything else we want from the function, or changed via the parameters of the rolling origin. After filling in the object returnedValues1, we can analyse the residuals of the model over the horizon and several origins in various ways. For example, Figure 2.8 shows boxplots across the horizon of 10 for different origins. In the ideal situation, the boxplots in Figure 2.8 should be similar, meaning that the model performs consistently over different origins. We do not see this in our case, observing that the distribution of errors changes from one origin to another. While the example above already gives some information about the performance of a model, more useful information could be obtained if the performance of one model is compared to the others in the rolling origin experiment. This can be done manually for several models using the code above or it can be done using the function ro() from the greybox package. 2.4.3 Rolling origin function in R In R, there are several packages and functions that implement rolling origin. One of those is the function ro() from the greybox package (written by Yves Sagaert and Ivan Svetunkov in 2016 on their way to the International Symposium on Forecasting in Riverside, US). It implements the rolling origin evaluation for any function you like with a predefined call and returns the desired value. It heavily relies on the two variables: call and value – so it is pretty important to understand how to formulate them to get the desired results.
ro() is a very flexible function, but as a result, it is not very simple. In this subsection, we will see how it works in a couple of examples. We start with a simple example, generating a series from a Normal distribution. We use an ARIMA(0,1,1) model implemented in the stats package (this model is discussed in Section 8). Given that we are interested in forecasts from the model, we need to use the predict() function to get the desired values:
ourCall <- "predict(arima(x=data,order=c(0,1,1)), n.ahead=h)"
The call that we specify includes two important elements: data and h. data specifies where the in-sample values are located in the function that we want to use, and it needs to be called “data” in the call; h will tell our function where the forecasting horizon is specified in the provided line of code. Note that in this example we use arima(x=data,order=c(0,1,1)), which produces a desired ARIMA(0,1,1) model and then we use predict(..., n.ahead=h), which produces an \(h\) steps ahead forecast from that model. Having the call, we also need to specify what the function should return. This can be the conditional mean (point forecasts), prediction intervals, the parameters of a model, or, in fact, anything that the model returns (e.g. name of the fitted model and its likelihood). However, there are some differences in what ro() returns depending on what the function returns. If it is a vector, then ro() will produce a matrix (with values for each origin in columns). If it is a matrix, then an array is returned. Finally, if it is a list, then a list of lists is returned. In order not to overcomplicate things, we start with collecting the conditional mean from the predict() function:
ourValue <- "pred"
Remark. If you do not specify the value to return, the function will try to return everything, but it might fail, especially if many values are returned. So, to be on the safe side, always provide the value when possible. Now that we have specified ourCall and ourValue, we can produce forecasts from the model using rolling origin.
Let’s say that we want three steps ahead forecasts and eight origins with the default values of all the other parameters:
returnedValues1 <- ro(y, h=3, origins=8, call=ourCall, value=ourValue)
The function returns a list with all the values that we asked for plus the actual values and the holdout sample. We can calculate some basic error measure based on those values, for example, scaled Absolute Error (Petropoulos and Kourentzes, 2015):
apply(abs(returnedValues1$holdout - returnedValues1$pred),
      1, mean, na.rm=TRUE) / mean(returnedValues1$actuals)
## h1 h2 h3
## 0.07681920 0.05555354 0.06974967
In this example, we use the apply() function to distinguish between the different forecasting horizons and have an idea of how the model performs for each of them. These numbers do not tell us much on their own, but if we compare the performance of this model with an alternative one, we could infer if one model is more appropriate for the data than the other one. For example, applying ARIMA(0,2,2) to the same data, we will get:
ourCall <- "predict(arima(x=data,order=c(0,2,2)),n.ahead=h)"
returnedValues2 <- ro(y, h=3, origins=8, call=ourCall, value=ourValue)
apply(abs(returnedValues2$holdout - returnedValues2$pred),
      1, mean, na.rm=TRUE) / mean(returnedValues2$actuals)
## h1 h2 h3
## 0.07762600 0.05579172 0.06950331
Comparing these errors with the ones from the previous model, we can conclude which of the approaches is more suitable for the data. We can also plot the forecasts from the rolling origin, which shows how the models behave:
par(mfcol=c(2,1), mar=c(4,4,3,1))
plot(returnedValues1, main="ARIMA(0,1,1)")
plot(returnedValues2, main="ARIMA(0,2,2)")
In Figure 2.9, the forecasts from different origins are close to each other. This is because the data is stationary, and both models produce flat lines as forecasts. The second model, however, has a slightly higher variability because it has more parameters than the first one (bias-variance trade-off in action). The rolling origin function from the greybox package also allows working with explanatory variables and returning prediction intervals if needed.
Some further examples are discussed in the vignette of the package. Just run the command vignette("ro","greybox") in R to see it. Practically speaking, if we have a set of forecasts from different models we can analyse the distribution of error measures and come to conclusions about the performance of models. Here is an example with an analysis of performance for \(h=1\) based on absolute errors:
aeValuesh1 <- cbind(abs(returnedValues1$holdout[1,] - returnedValues1$pred[1,]),
                    abs(returnedValues2$holdout[1,] - returnedValues2$pred[1,]))
colnames(aeValuesh1) <- c("ARIMA(0,1,1)","ARIMA(0,2,2)")
The boxplots in Figure 2.10 can be interpreted as any other boxplot applied to random variables (see, for example, discussion in Section 5.2 of Svetunkov, 2022). Remark. When it comes to applying ro() to models with explanatory variables, one can use the internal parameters counti, counto, and countf, which define the size of the in-sample, the holdout and the full sample, respectively. An example of the code in this situation is shown below with a function alm() from the greybox package being used for fitting a simple linear regression model.
# Generate the data
x <- rnorm(100, 100, 10)
xreg <- cbind(y=100+1.5*x+rnorm(100, 0, 10), x=x)
# Predict values from the model.
# counti and counto determine sizes for the in-sample and the holdout
ourCall <- "predict(alm(y~x, data=xreg[counti,,drop=FALSE]),
                    newdata=xreg[counto,,drop=FALSE])"
# Extract the mean only
ourValue <- "mean"
# Run rolling origin
testRO <- ro(xreg[,"y"], h=5, origins=5, ourCall, ourValue)
# plot the result
plot(testRO)
• James, G., Witten, D., Hastie, T., Tibshirani, R., 2017. An Introduction to Statistical Learning with Applications in R. Springer New York. • Petropoulos, F., Kourentzes, N., 2015. Forecast Combinations for Intermittent Demand. Journal of the Operational Research Society. 66, 914–924. • Svetunkov, I., 2022. Statistics for Business Analytics. version: 31.10.2022 • Svetunkov, I., Petropoulos, F., 2018. Old Dog, New Tricks: a Modelling View of Simple Moving Averages.
International Journal of Production Research. 56, 6034–6047. • Tashman, L.J., 2000. Out-of-sample Tests of Forecasting Accuracy: An Analysis and Review. International Journal of Forecasting. 16, 437–450.
Supply-Adjusted CYD | Glassnode Academy Indicator Overview Supply-Adjusted CYD accounts for the impact that coin day accumulation by the total supply has on the Coin Years Destroyed metric. As more coins are issued and more coin days are accumulated by the aggregate network, the lower bound of coin days destroyed will increase over time. This results in a steadily rising 'floor value' and potential for higher 'peak values' of CYD over time. Therefore additional consideration is required in analysis to accurately compare measurements from different points in time. Adjusting for supply aims to provide a more proportional and equivalent view of indicator values over the history of the market. The rationale behind supply adjustment is as follows: Early in a protocol's life, coin supply is small and so is the aggregate accumulated lifespan, and thus the required adjustment is larger to scale up in line with more recent values. Later in a protocol's life, coin supply is larger and so is the aggregate accumulated lifespan, so the required adjustment is smaller relative to recent values. In general, horizontal levels and values in supply adjusted metrics are likely to be more consistent over time than those for the base metric. The trends, fractals and interpretation remain consistent between both the original metric and supply adjusted variants. How is it measured? Supply-Adjusted CYD simply divides CYD by the circulating supply (total amount of coins issued). \begin{align*} \textrm{Supply-Adjusted CYD} &= \frac{\textrm{CYD}}{\textrm{Total Supply}} \\[0.5em] &=\frac{ \sum(\textrm{value} \cdot \textrm{lifespan}~{\color{gray}{[\textrm{days}]}})~{\color{gray} {\textrm{(over last 365-days)}}} }{ \textrm{value}~{\color{gray}{\textrm{(of all UTXOs)}}} } \end{align*} Coined By Ark Invest, David Puell and Glassnode (April 2021) Further Resources Buyer and Seller Behavior: Analyzing Bitcoin’s Fundamentals by Ark Invest and David Puell
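Following the displayed formula literally (coin days summed over the trailing 365-day window, divided by total supply; the toy numbers below are my own, and any further conversion of coin-days into coin-years is omitted here exactly as in the formula):

```python
def supply_adjusted_cyd(spent_outputs, total_supply):
    """spent_outputs: iterable of (value_in_coins, lifespan_days) pairs for
    outputs spent within the trailing 365-day window.
    total_supply: total amount of coins issued."""
    # Numerator of the formula: sum of value * lifespan over the window
    coin_days_destroyed = sum(value * lifespan
                              for value, lifespan in spent_outputs)
    # Supply adjustment: divide by the value of all UTXOs (circulating supply)
    return coin_days_destroyed / total_supply

# Hypothetical window: 10 coins dormant 730 days + 5 coins dormant 365 days,
# against a circulating supply of 100 coins
example = supply_adjusted_cyd([(10, 730), (5, 365)], 100)  # → 91.25
```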
Computational methods for antenna pattern synthesis Some general numerical methods for antenna pattern synthesis, with and without constraints, are developed. Particular cases considered are (1) field pattern specified in amplitude and phase, (2) field pattern specified in amplitude only, and (3) these two cases with a constraint on the source norm. Both the source and the field are discretized at the beginning, and the methods of finite dimensional vector spaces are used for the computations. The theory is general, but is applied only to point sources arbitrarily distributed in a plane, and to pattern synthesis in this plane. Some numerical examples are given for ten sources approximately equispaced on one-half of an ellipse, with the desired field pattern chosen to be the csc phi pattern in the first quadrant. IEEE Transactions on Antennas and Propagation Pub Date: July 1975 Keywords: Antenna Design; Antenna Radiation Patterns; Computer Aided Design; Least Squares Method; Synthetic Arrays; Amplitude Distribution Analysis; Newton-Raphson Method; Phase Shift; Vector Spaces; Communications and Radar
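A minimal sketch of case (1) above — field specified in amplitude and phase, solved by discretized least squares. The geometry (ten sources on half an ellipse), wavelength, angle grid, and the flat-sector target pattern standing in for the paper's csc phi pattern are all my own assumptions:

```python
import numpy as np

n_src, n_ang = 10, 73
k0 = 2 * np.pi                      # wavenumber for unit wavelength (assumed)
t = np.linspace(0, np.pi, n_src)    # parameter along half of an ellipse
xs, ys = 2.0 * np.cos(t), 1.0 * np.sin(t)   # assumed semi-axes 2 and 1
phi = np.linspace(0, 2 * np.pi, n_ang, endpoint=False)

# Discretized radiation operator: far-field phase at angle phi_m
# due to a unit point source at (x_k, y_k) in the same plane
A = np.exp(1j * k0 * (np.outer(np.cos(phi), xs) + np.outer(np.sin(phi), ys)))

# Desired pattern in amplitude and phase: unit field in the first quadrant
d = np.where((phi >= 0) & (phi <= np.pi / 2), 1.0, 0.0).astype(complex)

# Least-squares source excitations and relative residual
w, *_ = np.linalg.lstsq(A, d, rcond=None)
err = np.linalg.norm(A @ w - d) / np.linalg.norm(d)
```

Cases (2) and (3) — amplitude-only specification and a source-norm constraint — are no longer plain linear least squares; the abstract's mention of the Newton-Raphson method reflects the nonlinearity they introduce.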
Non-Euclidean Geometry/Historical Note Giovanni Girolamo Saccheri was among those who attempted to derive Euclid's fifth postulate from the other four, in the process just failing to discover a non-Euclidean geometry. Johann Heinrich Lambert made a number of conjectures regarding non-Euclidean space. Non-Euclidean geometry as a concept in its own right was worked on by Carl Friedrich Gauss for some years, and by $1820$ he had established the main theorems. However, he kept this all to himself, and it was left to Nikolai Ivanovich Lobachevsky, between $1826$ and $1829$, and János Bolyai in $1832$ (independently of each other and of Gauss) to publish their own work (János Bolyai publishing it as an appendix to Tentamen iuventutem studiosam in elementa matheosos introducendi by his father Wolfgang Bolyai). The reason that Gauss did not publish his own work was that he recognised that the philosophical climate of Germany at the time would have been unable to accept it. As he wrote to Friedrich Wilhelm Bessel: I shall probably not put my very extensive investigations on this subject [ the foundations of geometry ] into publishable form for a long time, perhaps not in my lifetime, for I dread the shrieks we would hear from the Boeotians if I were to express myself fully on this matter. The Boeotians were a tribe of the ancient Greeks, renowned for being of low intelligence. In the $1850$s, Bernhard Riemann came up with a different non-Euclidean geometry in which no straight line can be drawn through a point parallel to a given straight line. This sort of non-Euclidean geometry is known as Riemannian geometry.
Property Testing Review Last month was a lean month, with only three papers: one on direct sum testing, one on finding forbidden patterns in a sequence, and one (an update of a paper which we had missed in the Spring) on quantum distribution testing. Direct sum testing – the general case, by Irit Dinur and Konstantin Golubev (ECCC). Say a function \(f\colon \prod_{i=1}^d [n_i] \to \mathbb{F}_2\) is a direct sum if it can be factored as \(f (x_1,\dots,x_d)=\sum_{i=1}^d f_i(x_i)\), where \(f_i\colon [n_i]\to\mathbb{F}_2\). This paper provides a 4-query tester (i.e., a proximity oblivious tester (POT)) for the direct sum property, reminiscent of (and relying on) the BLR linearity test: specifically, draw two subsets \(S,T\subseteq [d]\) and two inputs \(x,y\in \prod_{i=1}^d [n_i]\) u.a.r., and accept iff \(f(x)+f(x_Sy)+f(x_Ty)+f(x_{S\Delta T}y) = 0\,.\) The main theorem of the paper is to show that the probability that this simple test rejects is lower bounded (up to a constant factor) by the distance of \(f\) to being a direct sum. (The authors also provide a different POT making \(d+1\) queries, but with a simpler analysis.) Finding monotone patterns in sublinear time, by Omri Ben-Eliezer, Clément Canonne, Shoham Letzter, and Erik Waingarten (ECCC). Given a function \(f\colon [n]\to\mathbb{R}\), a monotone subsequence of size \(k\) is a \(k\)-tuple of indices \(i_1 < \dots <i_k\) such that \(f(i_j) < f(i_{j+1})\) for all \(j\). This work considers (non-adaptive) one-sided testing of monotone-subsequence-freeness, or, equivalently, the task of finding such a monotone subsequence in a function promised to contain many of them. (This, in particular, generalizes the problem of one-sided monotonicity testing, which is the case \(k=2\).)
The main result is a full characterization of the query complexity of this question (for constant \(k\)): strange as the exponent may seem, \(\Theta_\varepsilon( (\log n)^{\lfloor \log_2 k\rfloor} )\) queries are necessary and sufficient. The proof relies on a structural dichotomy result, stating that any far-from-free sequence either contains “easy to find” increasing subsequences with increasing gaps between the elements, or has a specific hierarchical structure. Quantum Closeness Testing: A Streaming Algorithm and Applications, by Nengkun Yu (arXiv). This paper is concerned with quantum distribution testing in the local model, which only allows a very restricted (albeit, as the author argues, more natural and easier to implement) type of measurements, and is particularly well-suited to a streaming setting. The main contribution of this paper is to show a connection to classical distribution testing, allowing one to obtain quantum distribution testing upper bounds from their classical distribution testing counterparts. In more detail, the paper shows that, from local measurements to two \(d\)-dimensional quantum states \(\rho,\sigma\), one can provide access to two classical distributions \(p,q\) on \(\approx d^2\) elements such that (i) \ (\| p-q\|_2 \approx \|\rho-\sigma\|_2/d\) and (ii) \(\| p\|_2,\| q\|_2 = O(1/d)\). Using this connection, the paper proceeds to establish a variety of upper bounds for testing several distribution properties in the local quantum model.
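The four-query tester from the Dinur–Golubev paper above is simple enough to simulate directly. A toy Python sketch (helper names and trial counts are mine; \(x_Sy\) denotes the hybrid input taking \(y\) on coordinates in \(S\) and \(x\) elsewhere):

```python
import random

def direct_sum_test(f, dims, trials=200):
    """Run the 4-query POT repeatedly: accept iff
    f(x) + f(x_S y) + f(x_T y) + f(x_{S delta T} y) = 0 (mod 2) every time."""
    d = len(dims)
    for _ in range(trials):
        x = [random.randrange(n) for n in dims]
        y = [random.randrange(n) for n in dims]
        S = {i for i in range(d) if random.random() < 0.5}
        T = {i for i in range(d) if random.random() < 0.5}
        def hybrid(idx):  # take y on coordinates in idx, x elsewhere
            return tuple(y[i] if i in idx else x[i] for i in range(d))
        total = (f(tuple(x)) + f(hybrid(S)) + f(hybrid(T))
                 + f(hybrid(S ^ T))) % 2       # S ^ T = symmetric difference
        if total != 0:
            return False
    return True

# A genuine direct sum over [4]^3: each coordinate contributes independently
fs = [lambda a: a % 2, lambda a: (a // 2) % 2, lambda a: 1 if a == 0 else 0]
f = lambda x: sum(fi(xi) for fi, xi in zip(fs, x)) % 2

# Not a direct sum: parity of a product over [2]^2
g = lambda x: (x[0] * x[1]) % 2
```

For a true direct sum, each coordinate's contribution appears an even number of times across the four queries, so the test accepts with certainty; for `g`, random trials reject with constant probability per trial.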
Re: Maths in assembly language. Can anyone still explain how to divide two integers (not necessarily by a power of 2)? > Hi Rob, >check DSP56800 Family's manual chapter 3.3.5.2 Integer Multiplication. >There you will find that any multiplication is done in fractional >format. To convert it back to integer you have to do following: >MPY X0,Y0,A ; Generates correct answer shifted >; 1 bit to the left >ASR A ; Leaves Correct 32-bit Integer >; Result in the A Accumulator >; and the A2 register contains >; correct sign extension >Hopefully this helps. >On Sun, Nov 30, 2003 at 02:29:43PM -0000, dibosco wrote: >> In an attempt to speed our (C) code up, I'm trying to have a go at >> doing some maths in assembly language and getting some pretty bizarre >> results. >> I'm trying to take baby steps and writing some noddy code to get me >> familiar with the processor. >> The following code: >> move #5,Y0 >> move #2,X0 >> mpysu X0,Y0,A >> Bizarrely gives an answer of 20! In fact any multiplication for some >> strange reason gives two times the answer you'd expect. >> Also, I've tried to do a divide. The div function seems not to be a >> straight forward divide instruction, but some sort of iterative >> function. Does anyone understand how it works and knows how to explain >> its use in simple terms? I've tried to follow the text in the family >> manual but am utterly lost. >> Many thanks. >> Rob.
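The "doubled" result comes from Q-format alignment: multiplying two Q1.15 fractions yields a Q2.30 product, and the fractional multiplier shifts it left one bit to store it as Q1.31 — so two integer operands come out multiplied by two, and a single arithmetic shift right restores the integer product. (As for DIV: per the DSP56800 family manual it is a single divide *iteration*, producing one quotient bit per execution, so a 16-bit quotient needs 16 DIV instructions after setting up the sign and alignment.) A small Python model of the MPY/ASR behaviour, with hypothetical helper names:

```python
# Model of the DSP56800 signed fractional multiplier:
# MPY computes (a * b) << 1 so that Q1.15 x Q1.15 lands in Q1.31.
def mpy_fractional(a, b):
    return (a * b) << 1

# ASR (arithmetic shift right) undoes the alignment shift,
# recovering the plain integer product with correct sign.
def asr(acc):
    return acc >> 1

raw = mpy_fractional(5, 2)   # the poster's 5 * 2
assert raw == 20             # the "two times the answer" effect
assert asr(raw) == 10        # ASR restores the true integer product
assert asr(mpy_fractional(-3, 4)) == -12   # sign handled by arithmetic shift
```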
We performed 3-D electromagnetic simulations of coaxial leads and numerically obtained lead models to evaluate power deposition and the voltage induced at the lead proximal end. No correlation between peak volume loss density and deposited power at the tip and the ring was observed. In some cases the power deposited at the ring exceeded that at the tip. However, further extensive simulations of induced heating behavior should be done before final conclusions regarding coax lead design preferences are drawn. In some bipolar leads, significant RF-induced heating can appear not only at the wire tips but also, at a comparable level, at the ring electrodes [1]. Numerical simulations of leads are useful for understanding the complex interaction between the incident RF field and a lead, which is important for reliably designing MR Conditional leads. The ISO/TS 10974 Tier 3 procedure [2] separates the analysis of clinically relevant incident tangential electric fields E_tan(z) along the lead pathways from calculations of the lead responses to these fields using a lead (implant) model expressed as $$$p = A \times |\int_{0}^{L}S(z)\cdot E_{tan}(z)\cdot dz|^2$$$, where p is the power deposited at a hot spot, A is the calibration factor, L is the lead length, and the complex S(z) is the transfer function (TF). A similar expression is used to model the voltage V_p induced at the lead proximal end. TF-based numerical analysis of coax leads has not been published yet. It is still a challenge to simulate some commercial bipolar leads that consist of helical inner and outer conductors. Simulations of generic RF coaxial cable structures not only allow one to gather experience in the analysis of a rather complex coaxial geometry but also provide results for design optimization of some medical surgery instruments that can be represented as a coaxial structure from an RF point of view.
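The Tier 3 lead model above can be evaluated numerically once the transfer function and incident field are known. The sketch below uses made-up placeholder functions for S(z) and E_tan(z), purely to illustrate the integral and the worst-case phase condition; none of the values are from the study.

```python
# Minimal numerical sketch of the Tier 3 lead model
#   p = A * | integral_0^L S(z) * E_tan(z) dz |^2
# L, A, S and E_tan below are hypothetical placeholders.
import numpy as np

L = 0.4                                # hypothetical lead length, m
A = 1.0                                # hypothetical calibration factor
n = 1000
z = np.linspace(0.0, L, n, endpoint=False)
dz = L / n

S = np.exp(-1j * 2 * np.pi * z / L)    # hypothetical unit-magnitude TF
E_tan = 100.0 * np.ones(n)             # hypothetical uniform field, V/m

# Deposited power for this excitation (Riemann-sum approximation).
p = A * abs(np.sum(S * E_tan) * dz) ** 2

# Worst case: phi(E_tan(z)) = -phi(S(z)) everywhere, so the integrand
# magnitudes add coherently -- the phase condition noted in the text.
p_worst = A * (np.sum(np.abs(S * E_tan)) * dz) ** 2
print(p, p_worst)
```

With these placeholders the phases of S(z) sweep a full period, so the coherent sum nearly cancels (p close to 0), while the phase-matched worst case yields the full (A·|E|·L)² value.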
Our goals in this study were: 1) to perform 3-D electromagnetic simulations of coaxial leads; 2) to numerically obtain the lead models; 3) to calculate the largest p and V_p for uniform RF excitations using the lead models. The leads were coaxial titanium alloy structures: a straight inner wire 1.5 mm in diameter with an insulation thickness of 0.5 mm, and a straight outer tube of 3 mm outer diameter with an outer insulation thickness of 0.25 mm. The lead length L was varied from 40 mm to 800 mm in steps of 10 mm. At the proximal end the lead was capped, and the inner wire and outer tube were connected via a resistor R_proximal. At the distal end, the wire tip and ring electrode were 10 mm long (Fig. 1a). Electrical properties of both insulators were ε_r = 2.7 and σ = 0.000024 S/m. R_proximal = {10, 1.25, 2.5, 50, 10} Ohm. S(z) for the tip, the ring and V_p were calculated using the reciprocity approach described in [3]. The calibration factors for the tip and the ring were calculated from the volume loss density obtained for the leads excited from one side by a uniform 64 MHz source. The same simulations were used to calculate the calibration factor for the V_p model. Hot-spot integration volumes (Fig. 1b) enveloped the area where the volume loss density (VLD) decayed by more than 30 dB. With |E_tan(z)| constant, the largest p is generated if φ(E_tan(z)) = −φ(S(z)).

Results and discussion

The VLD distribution in proximity of the wire tip and ring depended significantly on the length of the coaxial structure (Fig. 1b–1g). Despite the significantly larger peak VLD value in proximity of the wire tip, p values at the tip and ring were of the same order of magnitude. For R_proximal = 1 GOhm, ring p values were even larger than tip values (Fig. 2a and 2b). Tip and ring p (Fig. 2c), and S(z) for both the tip and the ring (Fig. 3 and 4), depended significantly on R_proximal. For R_proximal = {10, 1.25, 2.5} Ohm, the tip φ(S(z)) varied significantly depending on the distance from the distal end. The large variation of φ(S(z)) near the proximal end was due to a sharp drop to zero of |S(z)|.
The coaxial structure's properties were Z₀ = 18.6 Ohm and λ/4 = 712 mm. The λ/4 impedance transformation resulted in an effectively high impedance between the tip and the ring for L ≈ 700 mm and significantly decreased (close to zero) tip p values (Fig. 5). However, the decrease of the ring p value was noticeably smaller. Numerical simulation of coaxial structures can provide important requirements for experimental validation of the TF, for example the maximum acceptable distance between measurement points for proper determination of spatial TF variations and the required measurement precision. Correlations existed neither between peak VLD at the tip and the ring nor between p at the tip and the ring. A larger p at the ring was observed for R_proximal equal to 50 and 10 Ohm. However, these results cannot simply be extrapolated to induced-heating evaluation. Further extensive thermal simulations should be conducted to obtain the dependence of induced heating at the tip and ring on the coaxial structure geometry and on the material electrical and thermal properties. This work was supported by the German Federal Ministry of Education and Research (BMBF) and within the European Joint Undertaking ENIAC JU, grant #16ES0028, DeNeCoR.

[1] P. Nordbeck, et al. MRM 68:1963–1972 (2012).
[2] Technical specification ISO/TS 10974, 1st edition, 2012.
[3] S. Feng et al. IEEE Trans. MTT, Vol. 63, No. 1, 305–313, 2015.
There are five problems worth a total of 100 points. Start each problem on a new piece of paper.

1.
(a) True/False. Let u and v be nonzero vectors from R^n. If u and v are linearly dependent, then v = cu for some real scalar c.
(b) True/False. A linear system of m equations in n variables always has at least one solution when m < n.
(c) Explain. Is the matrix below in reduced row echelon form?

2.
(a) True/False. Let A be an m × n matrix with real coefficients and let B be an n × p matrix with real coefficients. If Null(AB) = {0} then Null(B) = {0}.
(b) True/False. Let T : V → W be a linear transformation between vector spaces V and W and let S : W → U be a linear transformation from W to a vector space U. If T : V → W is onto, then the composition S ◦ T : V → U is onto.
(c) Explain. Let A be an n × n matrix with Null(A) = {0}. If b is a vector from R^n, then does the equation Ax = b have a unique solution?
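For part 1(b), a quick numerical counterexample (with a hypothetical 2 × 3 system, using NumPy) shows that an underdetermined system need not have a solution:

```python
# Numerical counterexample for 1(b): an underdetermined system (m < n)
# can still be inconsistent. The two equations x + y + z = 0 and
# x + y + z = 1 share a left-hand side but differ on the right.
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 1.0]])   # m = 2 equations, n = 3 unknowns
b = np.array([0.0, 1.0])

# The system is consistent iff rank(A) == rank([A | b]).
rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
print(rank_A, rank_Ab)   # 1 2 -> inconsistent despite m < n
```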
01 Intro To Algorithms

Lecture from: 19.09.2024 | Video: Videos ETHZ | Rui Zhangs Notes | Official Script

Algorithms are essential in computer science, especially when dealing with large data sets or solving complex problems. The goal of an algorithm is not just to solve a problem but to do so efficiently: minimizing the number of operations, execution time, and memory usage.

What is an Algorithm?

An algorithm is a step-by-step procedure for solving a problem. It breaks down a complex problem into smaller, simpler tasks that can be solved individually. Computers can only perform basic operations, like addition or comparison, at incredible speed. When faced with a problem, we often consider multiple solutions and evaluate different algorithms based on their performance metrics: the number of operations, execution time, memory usage, etc.

Multiplication Algorithm

Let's take an example we all know: the algorithm for multiplying two numbers using the traditional school method.

School Multiplication Algorithm

Example: Multiplying Two-Digit Numbers. Take two numbers, say 23 and 47. Here's the basic process we learned in school:

1. Multiply the digits in the units place (3 × 7).
2. Multiply the digits from the tens and units places, including zero padding (2 × 7 and 3 × 4, each shifted one place).
3. Multiply the digits in the tens place (2 × 4, shifted two places).
4. Sum up the results to get the final product.

This process essentially involves computing four partial products and then summing them up. Adding numbers is typically easier than multiplying them. Example of 87 × 43:

Correctness of the School Method

How do we know that this algorithm is correct? Let's break down the math behind it. Consider the two-digit numbers x = 10a + b and y = 10c + d. Expanding the product using the distributive property:

x · y = 100 · ac + 10 · (ad + bc) + bd

This matches exactly the operations we performed in the school method: multiplication of the individual digits, followed by addition with the correct powers of 10. For n-digit numbers, the process is similar.
We multiply each pair of digits and sum the results, adjusting for powers of 10 as needed.

Number of Operations in the School Method

If we multiply two n-digit numbers using this method, we perform n² single-digit multiplications and a number of additions. The total number of operations scales with n²: multiplying two n-digit numbers takes approximately n² basic operations. For example, multiplying two 1000-digit numbers would require about a million operations. While this approach is correct, it's not the most efficient for large numbers.

Can We Do Better? (Geht es besser?)

Clearly, n² operations become inefficient for large numbers, so mathematicians have explored algorithms that improve the speed of multiplication. One well-known improvement is Karatsuba's algorithm, which reduces the complexity of multiplication to O(n^(log₂ 3)) ≈ O(n^1.585), making it much faster for large numbers.

Karatsuba's Algorithm

Karatsuba's Algorithm is a more efficient method for multiplying large numbers than the traditional "school method." It reduces the number of multiplications needed, improving the time complexity from O(n²) to O(n^(log₂ 3)) ≈ O(n^1.585). The key insight in Karatsuba's Algorithm is to break down the multiplication of two n-digit numbers into smaller sub-problems. Instead of directly multiplying the numbers digit by digit, the algorithm divides each number into two parts, performs fewer multiplications, and then combines the results.

Recursive Algorithm Concept

At its core, Karatsuba's Algorithm uses a recursive approach. Recursive algorithms solve a problem by breaking it down into smaller instances of the same problem. In this case, multiplying two n-digit numbers is transformed into multiplying two smaller numbers (each with approximately half the number of digits). This concept can be summarized as follows:

1. Base Case: For small values of n (typically when n = 1), the algorithm performs direct multiplication, as it's straightforward and efficient for single-digit numbers.
2.
Recursive Step: For larger n, the algorithm splits each number into two halves. The multiplication is then expressed in terms of products of these halves, significantly reducing the number of multiplication operations required.

This recursive structure allows the algorithm to handle very large numbers efficiently. Each level of recursion handles a problem of smaller size until the base case is reached. As a result, this approach not only simplifies the calculations but also leverages the power of recursion to improve overall efficiency.

Algorithm Breakdown

Let's multiply two n-digit numbers, x and y. We can split x and y into two halves:

x = a · 10^(n/2) + b,  y = c · 10^(n/2) + d

The product can then be expressed as:

x · y = (a · 10^(n/2) + b) · (c · 10^(n/2) + d)

Expanding this gives:

x · y = ac · 10^n + (ad + bc) · 10^(n/2) + bd

Karatsuba's insight is to compute the middle term, ad + bc, more efficiently by using the following identity:

ad + bc = (a + b)(c + d) − ac − bd

Thus, instead of performing four multiplications (ac, ad, bc, and bd), Karatsuba's method reduces this to just three multiplications: ac, bd, and (a + b)(c + d). The final result can then be computed by combining these three products. The recursion of this algorithm can be drawn as a tree. To count the operations for n-digit numbers, let us assume the numbers have n = 2^k digits (and otherwise simply prepend 0s). The recursive approach breaks the multiplication of two n-digit numbers into three recursive multiplications of size n/2:

1. Compute ac
2. Compute bd
3. Compute (a + b)(c + d)

Since the number of subproblems triples at each of the k levels of recursion, the multiplication of two 2^k-digit numbers needs 3^k single-digit multiplications.
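The three-multiplication recursion described above can be written directly as code. The sketch below splits by decimal digits, as in the lecture's examples:

```python
# Karatsuba's recursion: three half-size multiplications plus O(n)
# additions, as derived above.

def karatsuba(x: int, y: int) -> int:
    if x < 10 or y < 10:              # base case: a single-digit factor
        return x * y
    n = max(len(str(x)), len(str(y)))
    half = n // 2
    a, b = divmod(x, 10 ** half)      # x = a * 10^half + b
    c, d = divmod(y, 10 ** half)      # y = c * 10^half + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # (a+b)(c+d) - ac - bd = ad + bc: the middle term, with one extra
    # multiplication instead of two.
    mid = karatsuba(a + b, c + d) - ac - bd
    return ac * 10 ** (2 * half) + mid * 10 ** half + bd

print(karatsuba(87, 43))              # 3741, the lecture's example
```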
Each of these multiplications is performed on numbers with half the number of digits, leading to a recurrence relation for the time complexity:

T(n) = 3 · T(n/2) + O(n)

Here, the O(n) term accounts for the linear time needed to perform the additions and subtractions (computing a + b and c + d, subtracting ac and bd, and combining the results). To determine the time complexity from the recurrence, we can apply the Master Theorem. We compare f(n) with n^(log_b a):

• a = 3 (the number of subproblems),
• b = 2 (the factor by which the size of the problem is reduced),
• f(n) = O(n).

Calculating log₂ 3 gives approximately 1.585. According to the Master Theorem, since f(n) = O(n) grows slower than n^1.585, we fall into the first case:

T(n) = Θ(n^(log₂ 3)) ≈ Θ(n^1.585)

The school method needs Θ(n²) operations, so Karatsuba's algorithm pulls ahead quickly: for moderately large n it requires roughly 10× fewer operations, and for larger n roughly 100× fewer. Karatsuba's algorithm reduces the number of multiplications from n² to n^(log₂ 3), which makes it significantly faster for large numbers than the "school" algorithm.

Summary of Efficiency

For two numbers with n digits:

• Karatsuba's Algorithm: requires about n^(log₂ 3) single-digit multiplications, leading to a complexity of O(n^1.585).
• School Method: requires n² single-digit multiplications, leading to a complexity of O(n²).

Overall, Karatsuba's algorithm is significantly faster for large numbers, improving multiplication efficiency by reducing the number of operations required.

Wall Following Algorithm (Pasture Break)

For this problem, our aim is to find a location along a 1D line by moving ourselves to it, with as few steps as possible. Here is a reason why you'd need this: In a distant land, you find yourself trapped in an infinite, vast, circular arena, drawn here by an ancient prophecy. Legends say this place tests those who dare to enter, but you had no choice. Towering stone walls surround you, cold and endless.
Though you're not blindfolded, the arena is shrouded in darkness and thick fog, limiting your vision to just a few feet ahead. The air is damp, and every sound feels muted by the heavy mist. The prophecy speaks of a single hidden passage along the arena's perimeter, the only escape. You place your hands on the wall, feeling its rough, cold surface as you move carefully, searching for any change that might reveal the way out. Wandering without a plan would be foolish. The prophecy offers a clue: The way is simple, though the path is unclear. Let your hands guide you. (This is an alternate intro to "Pasture Break", or "the Wall Following Problem in 1D".) In this problem our aim is to find the exit in as few steps as possible.

Algorithm 0 (Distance Given)

For this algorithm, let us assume that someone has engraved into the wall in front of you that the exit is "k steps away"; however, you don't know which direction to go in. The most straightforward way to find the exit is to go k steps in one direction, and if the exit isn't there, then k steps back and k steps in the opposite direction.

• Best case: k steps
• Worst case: k steps + k steps back + k steps in the right direction = 3k steps

Algorithm 1 (Naive)

Now realistically, you won't have any engraving telling you how far, nor in which direction, the exit is. You have to find this out on your own. The simplest algorithm would be:

• 1 left, back to start
• 2 right, back to start
• 3 left, back to start
• …
• k−1 right, back to start (we just missed the exit)
• k left, back to start
• k+1 right, but we stop after k steps, since we found the exit.

One of the ways to evaluate algorithms is to look at their worst case, so let us count how many steps we are doing. We'll compare this with the other algorithms later. Before that, let us try to think of a more clever algorithm.
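The step count of the naive strategy can be tallied with a short simulation (a sketch; the function and its parameters are illustrative, not from the lecture):

```python
# Count the steps of the naive zig-zag (Algorithm 1): sweep 1 left,
# 2 right, 3 left, ..., returning to the start each time, until a
# sweep in the exit's direction reaches distance k.

def naive_search_steps(k: int, exit_left: bool) -> int:
    steps = 0
    i = 1
    while True:
        going_left = (i % 2 == 1)       # odd sweeps go left, even right
        if going_left == exit_left and i >= k:
            return steps + k            # found the exit k steps out
        steps += 2 * i                  # i steps out and i steps back
        i += 1

# The worst case grows roughly like k^2:
print(naive_search_steps(10, exit_left=False))   # 100
print(naive_search_steps(10, exit_left=True))    # 120
```

For an exit at distance k the wasted sweeps cost about 2(1 + 2 + … + k) ≈ k² steps, matching the worst-case count discussed below.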
Algorithm 2

The algorithm is:

• 2⁰ = 1 step left, back to start
• 2¹ = 2 steps right, back to start
• 2² = 4 steps left, back to start
• …
• 2^(i−1) steps right, back to start
• 2^i steps left, back to start
• k steps (where 2^(i−1) < k ≤ 2^i)

Comparison of Algorithms

In order to compare these algorithms, we can't have a mix of variables describing the required steps (worst case). In Algorithm 2 we currently have k and i as variables. Let us create an upper-bound inequality with only the variable k.

Upper Bound for Algorithm 2

We can use the fact that 2^(i−1) < k, i.e. 2^i < 2k. The total number of steps is at most

2 · (2⁰ + 2¹ + … + 2^i) + k = 2 · (2^(i+1) − 1) + k < 8k + k = 9k.

We can see that Algorithm 1 has a worst case of roughly k² steps, while Algorithm 2 has a worst case of at most 9k steps. Algorithm 2 thereby beats Algorithm 1. There isn't an algorithm which is asymptotically faster (according to the prof).

Mathematical Induction

Mathematical induction is a powerful proof technique used to establish that a given statement is true for all natural numbers. The idea is similar to setting up a chain of falling dominoes: if you can show that the first domino falls (the base case) and that any domino will knock over the next one (the induction step), then you've proven that all dominoes will fall. Induction is a powerful tool in algorithms and data structures.

Steps of Induction

1. Base Step: Prove that the statement is true for the initial value (usually n = 1).
2. Inductive Hypothesis: Assume that the statement is true for some arbitrary value n = m.
3. Inductive Step: Using the inductive hypothesis, prove that the statement is true for n = m + 1.

Example: Sum of the First n Natural Numbers

Let's prove the formula for the sum of the first n natural numbers:

1 + 2 + … + n = n(n + 1)/2

Step 1: Base Case. For n = 1, the left-hand side is simply 1. Plugging n = 1 into the formula gives 1 · 2 / 2 = 1, so the base case holds.

Step 2: Inductive Hypothesis. Assume that the formula is true for some n = m:

1 + 2 + … + m = m(m + 1)/2

Step 3: Inductive Step. Now we need to show that the formula holds for m + 1. Consider 1 + 2 + … + m + (m + 1). By the inductive hypothesis, the sum of the first m terms is m(m + 1)/2, so the total is m(m + 1)/2 + (m + 1). Factor out (m + 1):

(m + 1)(m/2 + 1) = (m + 1)(m + 2)/2

This matches the formula for n = m + 1. Thus, by mathematical induction, the formula is true for all natural numbers n.
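The closed form just proved can be spot-checked numerically, along with the identity used in the inductive step:

```python
# Spot-check of 1 + 2 + ... + n = n(n+1)/2 and of the inductive-step
# identity m(m+1)/2 + (m+1) = (m+1)(m+2)/2.

for n in range(1, 200):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2

for m in range(1, 200):
    assert m * (m + 1) // 2 + (m + 1) == (m + 1) * (m + 2) // 2

print("formula verified for n = 1..199")
```

A check like this is not a proof, of course; the induction above is what covers all n.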
Example: Sum of Powers of 2

Let's prove the formula for the sum of the powers of 2:

2⁰ + 2¹ + … + 2^n = 2^(n+1) − 1

Step 1: Base Case. For n = 0, the left-hand side is 2⁰ = 1. Plugging n = 0 into the formula gives 2¹ − 1 = 1, so the base case holds.

Step 2: Inductive Hypothesis. Assume that the formula is true for some n = m:

2⁰ + 2¹ + … + 2^m = 2^(m+1) − 1

Step 3: Inductive Step. Now we need to show that the formula holds for m + 1. Consider 2⁰ + … + 2^m + 2^(m+1). By the inductive hypothesis, the sum of the first terms is 2^(m+1) − 1. Combining these, we have

(2^(m+1) − 1) + 2^(m+1) = 2 · 2^(m+1) − 1 = 2^(m+2) − 1

This matches the formula for n = m + 1. Thus, by mathematical induction, the formula is true for all natural numbers n.

Continue here: 02 Star Search
Longdom Publishing SL | Open Access Journals

Research Article - (2016) Volume 5, Issue 2

The Research of Supersonic Aircraft Low Sonic Boom Configuration Design and Optimizations

China Academy of Aerospace Aerodynamics, Yungang West Road, Beijing, 100074, P.R. China

*Corresponding Author: Xuan H, China Academy of Aerospace Aerodynamics, Yungang West Road, Beijing, 100074, P.R. China, Tel: +86-13810025289

The high noise level of the sonic boom is one of the most important reasons why supersonic transports have not been broadly applied to civil aviation. The sonic boom is a complicated problem involving aircraft configuration design, aerodynamics, acoustics and so on. The traditional sonic boom minimization theory is an inverse design method with a single objective, which makes it difficult to apply effectively in multi-objective optimization. For low sonic boom configuration optimization, a sonic boom noise level prediction method based on supersonic linear theory was developed. The sonic boom level of a basic configuration of a supersonic business jet was computed and the cause of sonic boom formation was analyzed, based on which the fuselage and wing planform were optimized to decrease the sonic boom noise level. Compared with the basic configuration, the sonic boom level of the optimized configuration decreased distinctly, with the overpressure decreasing by 41% and the A-weighted noise level decreasing by 7.55 dB. The aerodynamic characteristics of the optimized configuration were computed: compared with the basic configuration, the drag decreased markedly at the cruise condition without a change in pitching moment.

Keywords: Sonic boom; Supersonic transport; Optimization design; Aeroacoustics; Supersonic linear theory

The loud noise caused by the sonic boom of a vehicle in supersonic flight not only affects people's lives but can also damage buildings, especially through its infrasonic components.
The noise level of a Concorde flying at an altitude of 50,000 ft was 133 dB, whereas the noise level of a civil aircraft at takeoff and landing is only about 90 dB. Thus, the Concorde was forbidden to fly supersonically over land because of its high sonic boom level, which undermined its economics. The noise level of next-generation supersonic transports is required to be lower than 70 dB, comparable with transonic civil aircraft. Mitigating the sonic boom is therefore an urgent problem for the development of next-generation supersonic transports. The sonic boom of a supersonic aircraft is a complex problem involving aircraft configuration design, aerodynamics, acoustics and so on. Research on the sonic boom can be traced back to the 1950s [1-3]. Since then, sonic boom prediction methods based on supersonic linear theory and geometric acoustics [4-7], as well as the sonic boom minimization theory [8-12], have been applied to supersonic transport design. Recently, CFD (Computational Fluid Dynamics) has been used increasingly for sonic boom prediction and physics research. In addition, modern optimization theories have been applied to low sonic boom configuration optimization. Compared with the sonic boom minimization theory, low sonic boom optimization based on modern optimization theory can obtain a configuration not only with low sonic boom character but also with other good performance, such as aerodynamic, weight and structural characteristics. In this paper, the low sonic boom configuration optimization of a supersonic business jet is studied. First, a sonic boom noise level prediction method based on supersonic linear theory was developed.
Then, the sonic boom level of a basic configuration of a supersonic business jet was computed and the cause of sonic boom formation was analyzed, based on which the fuselage and wing planform were optimized to decrease the sonic boom noise level. Finally, the aerodynamic characteristics and the sonic boom pressure near the optimized configuration were computed by CFD.

Prediction Method

Linearized supersonic flow

For a slender axisymmetric body in cylindrical coordinates, the overpressure Δp = p − p₀ can be written in terms of the equivalent area, where M is the Mach number, β = √(M² − 1), y = x − βr, and A_e(x, θ) is the equivalent area, which consists of two components: a volume component and a lift component. The lift component of the equivalent area is given in terms of L(x, θ), the component of lift per unit length at axial station x in the θ direction. In this paper, a uniform lift distribution along the wing is assumed, so eq. (3) can be written in terms of b(x), the span at axial station x, and W, the weight of the body.

Noise level computation

The overpressure near the aircraft computed by eq. (1) was extrapolated to the ground by the waveform parameter method proposed by Thomas [13] in 1972. The narrow-band spectrum of the ground overpressure was obtained by the Fast Fourier Transform, where p(n) is the overpressure at time sample n, p(k) is the spectral component at frequency index k, and N is the number of input samples, which is a power of 2. The sound pressure level was computed from the effective pressure p_e and the reference pressure p_ref. Based on the narrow-band noise levels, the 1/3-octave noise levels were obtained, from which the A-weighted noise level was computed.

The Basic Supersonic Business Jet Configuration

The sonic boom noise level of a supersonic business jet was analyzed. Figure 1 shows a sketch of the aircraft, and the parameters of this supersonic business jet are listed in Table 1. This configuration was designed without considering sonic boom characteristics.
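The noise-level computation described above can be sketched schematically: FFT a ground overpressure signature, then convert each narrow-band effective pressure to a sound pressure level. The signature and sampling rate below are made up for illustration; this is not the authors' code.

```python
# Schematic sketch of the "Noise level computation" pipeline.
# All signal parameters are hypothetical.
import numpy as np

N = 2 ** 12                        # number of samples, a power of 2
fs = 8192.0                        # hypothetical sampling rate, Hz
t = np.arange(N) / fs

# Hypothetical N-wave-like ground signature with ~100 Pa peak.
p = np.where(t < 0.2, 100.0 * (1.0 - t / 0.1), 0.0)

spectrum = np.fft.rfft(p) / N      # narrow-band spectrum
p_eff = np.abs(spectrum) / np.sqrt(2.0)   # effective (RMS) pressure per bin

p_ref = 20e-6                      # reference pressure, 20 micropascals
spl = 20.0 * np.log10(np.maximum(p_eff, 1e-30) / p_ref)
print(f"peak narrow-band SPL: {spl.max():.1f} dB")
```

From the narrow-band levels one would then aggregate into 1/3-octave bands and apply A-weighting, as the paper describes; those steps are omitted here.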
Figure 2 gives the F-function and equivalent area distribution along the axial station at the cruise lift coefficient and at the zero-lift condition. It can be seen that both at the cockpit and at the wing, the marked change of the aircraft cross-sectional area (taken by normal projections of cuts along planes aligned with the Mach angle) generates shock waves. The lift component of the equivalent area further enhances the shock strength. Figure 3 gives the sonic boom overpressure below the aircraft at a radius of 5 aircraft lengths. Figure 4 gives the sonic boom overpressure at the ground. The overpressure on the ground is 100 Pa with a reflection factor of 1.9. When the aircraft cruises at Mach 1.6 at an altitude of 14 km, the A-weighted noise level of the sonic boom overpressure on the ground is 91.3 dB.

Table 1: Supersonic business jet parameters.
Passengers: 8-12
Cruise Mach: 1.6
Cruise altitude (km): 14
Range (nm): 4000
Maximum takeoff weight (ton): 45
Length (m): 45.2
Span (m): 20

The Low Sonic Boom Optimization of the Supersonic Business Jet Configuration

The optimization of the fuselage

In order to abate the strength of the shock wave generated by the cockpit, the fuselage was optimized. Because the rudder, elevator and engine contribute little to the equivalent area, only the wing-body configuration was optimized, with the nose and cockpit designed integrally. Assuming the nose, mid-fuselage and aft-fuselage are built from a series of circular sections of different radii, the center coordinates and radii of the sections and the wing installation position were optimized. Table 2 lists the optimization variables and constraints. There are 28 optimization variables, with 5 control sections for the nose and 8 control sections each for the mid and aft fuselage. The constraint on the nose declination angle ensures the pilot's line of sight, and the aft-fuselage angle is constrained to meet takeoff requirements.
Table 2: The optimization variables and constraints of the fuselage.
Variables:
- Radii of nose sections: d1-d5
- Radii of fuselage sections: d6-d13
- Center coordinates of nose sections: y1-y5
- Center coordinates of fuselage sections: y6-y13
- Wing installation position: x0, y0
Constraints:
- View of pilot: a1 ∈ (a1_min, a1_max)
- Clearance angle: a2 ∈ (a2_min, a2_max)

The minimum A-weighted noise level on the ground at Ma = 1.6 was chosen as the optimization objective. Figure 5 shows a sketch of the wing-body configuration after optimization. Figure 6 gives the F-function and equivalent area distribution along the optimized configuration at the cruise lift coefficient and at zero lift. Figure 7 shows the sonic boom overpressure at a distance of 5 aircraft lengths below the optimized configuration. The solid line is the result for the basic configuration, the dashed line is the optimized configuration at zero lift, and the dash-dot line is the optimized configuration at the cruise lift coefficient. It can be seen that the equivalent area of the optimized configuration is distributed more smoothly along the body, and from the zero-lift results, the volume component of the equivalent area no longer causes a notable shock wave, which means the optimization of fuselage and nose was effective. The remaining shock wave is caused by the lift component, as seen from the cruise-lift results. Figure 8 gives the time history of the sonic boom overpressure on the ground. The solid line represents the basic configuration and the dashed line the optimized configuration. The overpressure on the ground was 81 Pa with a reflection factor of 1.9, a 20% decrease in maximum overpressure after optimization. The A-weighted noise level on the ground at Ma = 1.6 and the cruise lift coefficient was 87.5 dB, a 3.8 dB reduction after optimization.
Table 3: The range of the optimization variables.
Root chord (m): 18-28
Tip-root ratio: 0.07-0.1
Span (m): 14.4-17.6
Inner wing sweep (°): 68-75
Outer wing sweep (°): 50-65
KINK (m): 3.0-6.0
Wing area (m²): 130-150

The optimization of the wing planform geometry

Building on Section 4.1, further optimization is needed to obtain a better lift distribution, which can decrease the sonic boom caused by the lift component of the equivalent area. Assuming a uniform lift distribution along the wing, the lift component of the equivalent area can be optimized through the wing geometry. A genetic algorithm was used for the optimization. There were 6 optimization variables: root chord, tip-root ratio, span, inner wing sweep, outer wing sweep, and KINK position, as shown in Table 3. The wing area was constrained to ensure appropriate aerodynamic characteristics. The minimum A-weighted noise level on the ground at Ma = 1.6 was chosen as the optimization objective. Figure 9 is a sketch of the configuration with the wing geometry optimized, and the wing geometry parameters before and after optimization are shown in Table 4.

Table 4: Geometry parameters of the wing (basic → optimized).
Root chord (m): 25.36 → 19.1
Tip-root ratio: 0.13 → 0.0992
Span (m): 20 → 16.496
Inner wing sweep (°): 72 → 74.95
Outer wing sweep (°): 50 → 63.8
KINK: 32% → 61.94%
Wing area (m²): 164.5 → 146.12

Figure 10 shows the F-function and equivalent area distribution along the optimized configuration at the cruise lift coefficient and at zero lift. Figure 11 shows the sonic boom overpressure at a distance of 5 aircraft lengths below the optimized configuration.
The solid line is the result for the configuration optimized in Section 4.1, the dash-dot line is the configuration with optimized wing geometry at zero lift, and the dashed line is the configuration with optimized wing geometry at the cruise lift coefficient. It can be seen that the overpressure caused by the lift component of the equivalent area decreases markedly after the wing geometry optimization. The small shock wave generated at about 20% along the aircraft is caused by the change in wing geometry, which alters the cross-sectional area taken by normal projections of cuts along planes aligned with the Mach angle; the fuselage should therefore be tailored further to weaken this shock. Figure 12 gives the sonic boom overpressure on the ground: the solid line is the configuration optimized in Section 4.1 and the dashed line is the configuration with optimized wing geometry. The ground overpressure after wing optimization is 60 Pa with a reflection factor of 1.9, a 26% decrease compared with the configuration of Section 4.1, and the A-weighted sound pressure level is 83.74 dB, a further 3.8 dB reduction. Compared with the basic configuration, the sonic boom of the configuration with both fuselage and wing geometry optimization is greatly reduced, with the overpressure decreased by 41% and the A-weighted noise level decreased by 7.55 dB.
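The planform search in Section 4.2 was driven by a genetic algorithm. The sketch below illustrates such a search over the six variables and bounds of Table 3, with a purely illustrative stand-in for the noise objective (the real objective is the A-weighted noise prediction, which is not reproduced here):

```python
# Minimal genetic-algorithm sketch over the 6 Table 3 variables.
# toy_noise is a made-up placeholder objective, not the paper's model.
import random

BOUNDS = [(18, 28),      # root chord, m
          (0.07, 0.1),   # tip-root ratio
          (14.4, 17.6),  # span, m
          (68, 75),      # inner wing sweep, deg
          (50, 65),      # outer wing sweep, deg
          (3.0, 6.0)]    # KINK position, m

def toy_noise(x):
    # Placeholder objective: prefers high sweep and small span,
    # loosely mimicking the optimized trend in Table 4.
    return -x[3] - x[4] + 0.5 * x[2]

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(x, rate=0.2):
    return [random.uniform(lo, hi) if random.random() < rate else v
            for v, (lo, hi) in zip(x, BOUNDS)]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def ga(generations=100, pop_size=30, seed=0):
    random.seed(seed)
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=toy_noise)          # minimize the objective
        elite = pop[: pop_size // 3]     # keep the best third
        pop = elite + [mutate(crossover(random.choice(elite),
                                        random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=toy_noise)

best = ga()
print([round(v, 2) for v in best])
```

With this toy objective the search drives the sweep angles toward their upper bounds and the span toward its lower bound, staying inside the Table 3 ranges.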
Figure 13 gives the 1/3-octave sound pressure level comparison between the basic configuration (solid line), the configuration with nose and fuselage optimization (dashed line) and the configuration with wing geometry optimization (dash-dot line). It can be seen from the 1/3-octave spectrum that the sonic boom noise of the optimized configuration was mitigated greatly from 10 Hz to 10 kHz.

Numerical Simulation of the Supersonic Business Jet Configuration

In order to compare both the aerodynamic characteristics and the flow field between the basic configuration and the optimized configuration, a numerical simulation was carried out. The computational domain was composed of two parts, as shown in Figure 14: a cylindrical domain at the inner part with a non-aligned volume mesh, and a Mach cone domain around the cylinder with aligned anisotropic cells [14]. The Euler equations were solved for the basic and optimized configurations. Figure 15 compares the lift, drag and pitching moment coefficients of the basic configuration and the configuration optimized in Section 4.2. It can be seen that the lift-curve slope of the optimized configuration decreased because of the increased wing sweep. The angle of attack at cruise (Cl = 0.11) changed from 1.65 to 2.6 degrees after optimization. The drag decreased after optimization, while the change in pitching moment was tiny. Figure 16 shows the pressure distribution around the aircraft at the symmetry plane. From the flow field results, it can be seen that the shock waves of the optimized configuration were weakened remarkably compared with the basic configuration, which means that the low-sonic-boom optimization was effective.

Conclusions

For a supersonic business jet designed without considering sonic boom attenuation, distinct shock waves are induced that produce strong noise. According to the analysis results from supersonic linearized theory, the fuselage, nose and wing geometry were optimized.
Compared with the basic configuration, the sonic boom level of the optimized configuration decreased greatly: the maximum overpressure on the ground decreased by 41% and the A-weighted sound pressure level decreased by 7.55 dB. The angle of attack at cruise changed from 1.65 to 2.6 degrees after optimization. The drag of the optimized configuration was reduced, while the change in pitching moment was tiny. The numerical simulation results show that the shock waves around the aircraft were weakened markedly, which means that the low-sonic-boom optimization was effective. The low-sonic-boom optimization in this paper can be applied to multi-objective aerodynamic configuration optimization in the future, with constraints on the aerodynamic characteristics and with the weighted sonic boom noise level at multiple flight conditions as the optimization objective.

Citation: Xuan H, Cheng S, Fang L (2016) Sonic Boom Configuration Design and Optimizations. J Aeronaut Aerospace Eng 5:165.

Copyright: © 2016 Xuan H, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
What is the density of silver in g/cm³?
The density of silver is D = 10.5 g/cm³. If the volume occupied by the silver is V = 30.0 cm³, then the mass of the sample can be calculated as m = D × V = 10.5 g/cm³ × 30.0 cm³ = 315 g. Hence the mass of the silver is 315 g.

What does the statement "the density of silver is 10.5 g/cm³" mean?
It means that one cubic centimeter of silver has a mass of 10.5 grams.

What is the density of silver in g/mL?
The density of a material can be used to calculate the volume of an object from its mass; likewise, the mass can be computed if the density and volume are known. A pre-1964 quarter, consisting mainly of silver (density = 10.5 g/mL), was determined to have a mass of 6.25 g.

What is the mass of silver occupying 965 cm³?
The density of silver (Ag) is 10.5 g/cm³, so the mass of Ag that occupies 965 cm³ is m = 10.5 g/cm³ × 965 cm³ = 10,132.5 g, i.e. about 10.1 kg.

What are the density and atomic mass of silver?
Given: the density of silver is 10.5 g cm⁻³ and the atomic mass of Ag is 108 a.m.u. The problem then asks for the thickness of the deposited silver in centimeters.

How do I convert the density of silver to another unit?
This refers to a density conversion chart (the densities of various materials). To use it, find the desired entry on the page and click on it; you can also access the universal conversion page. Enter the value you want to convert (the density of silver), then click the "Convert Me" button.

What is the mass of a piece of silver?
Mass of silver = 10.5 g/cm³ × 5 cm³ = 52.5 g. For comparison, gold: 19.3 g/cm³ × 5 cm³ = 96.5 g.

What is the density of copper if a 10.0 cm³ sample has a mass of 89.6 g?
Density = mass / volume = 89.6 g / 10.0 cm³ = 8.96 g/cm³.

Why is density measured in g/cm³?
Density is the mass of an object divided by its volume, and it is often expressed in grams per cubic centimeter (g/cm³). Remember that a gram is a unit of mass and a cubic centimeter is a unit of volume (1 cm³ equals 1 milliliter).

What is the density of a human in g/cm³?
The human body can either sink or float in water, which suggests a density of about 1.0 gram per cubic centimeter.

What is the density of a paperclip's metal in g/cm³?
The density is 6.93 g/cm³. If the metal required for one paperclip is 1.25 g per clip, how many paperclips can be made from a cube of metal (density 6.93 g/cm³) with a volume of approximately 2.30 × 10⁻³ m³?

Will an object with a density of 1.01 g/cm³ float or sink?
The density of water is 1.00 g/cm³. An object with a density GREATER than that of water will sink; an object with a density LESS than 1 g/cm³ WILL FLOAT. An object with a density of 1.01 g/cm³ will therefore sink. In the following diagrams, label the features or layers from densest to least dense.

What is the density of honey in g/cm³?
The density of honey is 1.42 g/cm³; find its relative density (RD).

What is the density of water in g/cm³?
The generally accepted value for the density of water is 1 gram per milliliter (1 g/mL), i.e. 1 gram per cubic centimeter (1 g/cm³). In fact, the exact density of water is not exactly 1 g/mL but slightly less (very, very slightly less): 0.9998395 g/mL at 4.0° Celsius (39.2° Fahrenheit).
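All of the worked answers above use the same relation, m = ρ × V (mass = density × volume). A quick sketch in Python to check the arithmetic (the function name is my own, for illustration):

```python
def mass_from_density(density_g_per_cm3, volume_cm3):
    """Mass in grams from density (g/cm^3) and volume (cm^3): m = rho * V."""
    return density_g_per_cm3 * volume_cm3

print(mass_from_density(10.5, 30.0))    # silver sample: 315.0 g
print(mass_from_density(10.5, 965.0))   # larger silver sample: 10132.5 g
print(round(89.6 / 10.0, 2))            # density of the copper sample: 8.96 g/cm^3
```

The same relation rearranges to ρ = m / V, which is how the copper value in the last line is obtained.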
How To Calculate Loan Amortization

Ever caught yourself in a swirl of confusion, trying to crack the code of loan amortization? You're not alone. Trust me, getting a solid grasp on this can be a real game changer when it comes to managing your loans and charting your financial journey effectively. This comprehensive guide is here to simplify loan amortization for you, breaking down its complex concepts into digestible pieces. We'll walk you through the calculations step by step, too. Are you ready to dive into this pool of financial wisdom? Let's get started!

Key Takeaways
• Loan amortization is a way to pay off debt gradually over time by breaking it down into smaller monthly payments.
• To calculate loan amortization, you need to know the principal amount, interest rate, and loan period. Then you can create an amortization table or use a financial calculator to determine your monthly payment.
• Understanding how subsequent payments are determined and preparing an amortization schedule will help you stay organized and on track with your loan repayments.

Understanding Amortization

Amortization is a financial concept used to spread out the cost of an asset or debt over a period of time. It involves calculating and allocating payments towards both the principal amount and the interest, resulting in a gradual reduction of the loan balance. Unlike depreciation, which applies to tangible assets, amortization is specifically used for intangible assets or loans.

Definition and Purpose

Amortization is a way to pay off debt. It's a plan for how you slowly get rid of your loan over time. You can use it for different types of loans, like home, car, or personal loans. The purpose is simple: each payment goes towards the loan and the interest, bit by bit, until there's nothing left to pay. This allows people to know exactly how much they need to set aside each month to clear their debt completely by the end of the loan term.

Amortization vs. Depreciation

Amortization and depreciation work in similar ways but have key differences. Amortization links to loans or intangible assets. Things like patents or copyrights get amortized. The cost of these items gets spread across their useful life. Depreciation, on the other hand, ties up with tangible assets. These are things we can touch, like buildings or machines. Over time, these objects lose value due to use and age; this loss is called depreciation. Both processes help match costs with income earned over time.

Amortization of Loans

Negative amortization can occur when the monthly payment is insufficient to cover the interest charged, resulting in an increase rather than a decrease in the loan balance.

Concept and Importance

Amortization is a tool we use to pay back loans over time. It breaks down the loan into small parts that you pay each month. Each payment has two parts: an interest part and a principal part. The interest part is what you pay the bank for lending you money. The principal part reduces your loan amount slowly until it becomes zero at the end of the loan period. This method helps us plan our payments and make sure we have enough money each month. This way, by the end of your loan term, all your debt will be paid off!

Negative Amortization: What It Means

Negative amortization is when your debt gets bigger each time you make a payment. This is not good. It can happen with credit cards that have high interest rates. You might think you are paying off your debt, but actually, it's growing! Each month, the amount of money that you owe can get bigger, even though you're making payments. This means it will take longer to pay off the loan and may end up costing more in the long term. So, beware of negative amortization when making financial decisions!

How to Calculate Loan Amortization

To calculate loan amortization, you need to identify the principal amount, interest rate, and loan period.
Then, create an amortization table to calculate monthly payments and determine subsequent payments.

Identifying Principal Amount, Interest Rate, and Loan Period

To calculate loan amortization, you need three important pieces of information:
1. Principal Amount: This is the initial amount of the loan that you borrowed. It is the total amount you need to repay over time.
2. Interest Rate: This is the annual rate at which interest is charged on your loan. It is expressed as a percentage and determines how much extra you will pay on top of the principal.
3. Loan Period: This refers to the length of time over which you will be repaying the loan. It is usually stated in months and determines how many payments you will make.

Creating Your Amortization Table

To create your amortization table, follow these steps:
1. Gather the necessary information: You'll need to know the principal amount of the loan, the interest rate, and the loan period in years.
2. Calculate the monthly interest rate: Divide the annual interest rate by 12 to get the monthly interest rate.
3. Determine the number of payments: Multiply the loan period (in years) by 12 to find out how many monthly payments you'll make.
4. Use an amortization formula or a financial calculator: Plug in the values for the principal amount, interest rate, and number of payments to calculate your monthly payment amount.
5. Create a table: Write down each payment number, starting from 1 up to the total number of payments.
6. Calculate each month's interest payment: Multiply the outstanding loan balance at the beginning of the month by the monthly interest rate.
7. Calculate each month's principal payment: Subtract the interest payment from that month's total monthly payment.
8. Determine the new outstanding loan balance: Subtract that month's principal payment from the previous month's outstanding balance.
9. Repeat steps 6-8 for each remaining month until you reach your final payment.
10. Fill in your table: List each month's payment number, total monthly payment, interest portion, principal portion, and new outstanding balance.

Calculating Monthly Payments

To calculate monthly loan payments, follow these steps:
1. Determine the principal amount: This is the initial amount of money borrowed.
2. Identify the interest rate: This is the percentage charged on the loan amount.
3. Determine the loan period: This is the length of time in which you will repay the loan.
4. Use an amortization formula or a financial calculator to calculate your monthly payment.
5. Alternatively, you can use spreadsheet software like Microsoft Excel to create an amortization table that automatically calculates your monthly payment based on your inputs.
6. The monthly payment consists of both principal and interest portions. The principal portion reduces the outstanding loan balance, while the interest portion represents the cost of borrowing.

Determining Subsequent Payments

Once you have calculated your monthly payment for a loan, it is important to understand how subsequent payments will be determined. Here are some key points to keep in mind:
• Each month, a portion of your payment will go towards paying off the principal (the original amount borrowed) and a portion will go towards paying the interest.
• As you make payments over time, the remaining balance on your loan will decrease. This means that more of each subsequent payment will be applied towards the principal.
• The exact breakdown of each payment can be found in an amortization schedule, which shows how much of each payment goes towards interest and principal for every month of the loan term.
• By understanding the breakdown of your payments, you can see how your debt is being paid down and make informed decisions about managing your finances.
• It is also important to note that making extra payments towards the principal can help reduce the total amount of interest you pay over the life of the loan.
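The steps above can be turned into a short script. The sketch below uses the standard level-payment (annuity) formula; the function names are my own, not from any particular library:

```python
def monthly_payment(principal, annual_rate, months):
    """Level monthly payment from the standard amortization (annuity) formula."""
    r = annual_rate / 12  # monthly interest rate
    return principal * r / (1 - (1 + r) ** -months)

def amortization_schedule(principal, annual_rate, months):
    """Yield (month, payment, interest, principal_part, balance) rows."""
    r = annual_rate / 12
    pay = monthly_payment(principal, annual_rate, months)
    balance = principal
    for m in range(1, months + 1):
        interest = balance * r          # interest on the opening balance
        principal_part = pay - interest # the rest of the payment reduces debt
        balance -= principal_part
        yield m, pay, interest, principal_part, balance

# $10,000 borrowed at 5% per year for 36 months
for row in amortization_schedule(10_000, 0.05, 36):
    if row[0] <= 3 or row[0] == 36:
        print("month %2d  payment %.2f  interest %.2f  principal %.2f  balance %.2f" % row)
```

Note how the interest portion shrinks and the principal portion grows from row to row while the payment stays level, and the balance reaches zero at the final month, which is exactly the behavior the steps above describe.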
Preparing an Amortization Schedule

Preparing an amortization schedule is essential for managing loan repayments effectively. Whether you know your monthly payment or not, understanding the schedule will help you stay organized and on track with your finances. Discover how to prepare an amortization schedule in our comprehensive guide!

If You Know Your Monthly Payment

If you know your monthly payment, calculating loan amortization becomes easier. Here's what you need to do:
1. Identify the principal amount, interest rate, and loan period.
2. Use an amortization formula or financial calculator to calculate the total monthly payment.
3. Determine the interest portion and principal portion of the payment for the first month.
4. Subtract the principal portion from the original loan amount to find the new principal amount.
5. Calculate the interest charged for the second month by multiplying the new principal amount by the monthly interest rate.
6. Subtract this interest amount from the total monthly payment to find the principal portion of the payment for the second month.
7. Repeat these steps until you reach the end of the loan term.

If You Don't Know Your Monthly Payment

If you don't know your monthly payment, you can still prepare an amortization schedule by following these steps:
1. Identify the principal amount of the loan, the interest rate, and the loan period. You may need to refer to your loan agreement or contact your lender for this information.
2. Use a financial calculator, spreadsheet software, or an online amortization calculator to input these details and calculate your monthly payment. These tools will use the loan amount, interest rate, and term to determine the amount you need to pay each month.
3. Once you have calculated your monthly payment, you can proceed to create an amortization schedule using this information. This schedule will outline how much of each payment goes towards interest and principal over time.
4. With the help of a financial calculator or spreadsheet software, you can calculate subsequent payments based on the remaining loan balance and interest rate.

Practical Example of Loan Amortization

To better understand how loan amortization works, let's look at a practical example. Imagine you take out a $10,000 loan with an annual interest rate of 5% and a term of 3 years (36 months). Using this information, we can calculate the monthly payment amount.

To do this, we first need to find the monthly interest rate by dividing the annual interest rate by 12. In this case, it is 0.05 / 12 ≈ 0.00417.

Next, we use the formula:

Monthly Payment = Principal × Monthly Interest Rate / (1 − (1 + Monthly Interest Rate)^(−Number of Months))

Substituting in our values:

Monthly Payment = $10,000 × 0.00417 / (1 − (1 + 0.00417)^(−36))

By plugging these numbers into a financial calculator or spreadsheet software like Excel, we find that the monthly payment amount is approximately $299.71.

As you make your monthly payments over time, each payment covers both the principal and interest amounts determined by the amortization table or schedule provided by your lender, or calculated manually using the formula above.

This practical example illustrates how loan amortization helps borrowers understand their repayment plan and allocate their payments towards reducing both the principal balance and the accumulated interest over time.

The Advantages and Disadvantages of Loan Amortization

Making informed decisions about loans requires understanding the advantages and disadvantages that come with loan amortization. Loan amortization, which helps individuals budget their cash flows with level payments over the life of the loan, also has its drawbacks.

Advantages:
• Loan amortization provides a clear repayment schedule. This schedule can help individuals budget their expenses, making managing finances easier.
• Amortization schedules can be customized based on loan and personal circumstances. This flexibility empowers individuals to tailor their repayment plans based on their income flow and financial goals.
• Loan amortization helps individuals make smarter decisions about loans. It provides clear insight into how much interest and principal one is paying at each stage of the loan term.

Disadvantages:
• One major downside is that, in the early stages of loan amortization, most of each payment goes toward interest. This results in slow equity build-up, which might be a disadvantage for those hoping to quickly build up equity in an asset.
• Another disadvantage is the possibility of negative amortization. If the monthly payments are too small to cover the interest, the total amount owed can increase instead of decrease over time.
• Prepayment penalties are another potential downside. Some loan contracts don't allow for early repayments, or they might impose prepayment penalties, hampering the benefits of quicker repayment.

Considering these factors can aid in making smart decisions about loan terms and understanding the impact of those decisions over the life of the loan. By understanding the pros and cons, you can effectively leverage loan amortization to your advantage.

Importance of Amortization in Accounting

Amortization is of great importance in accounting because it helps to accurately track and record the value of loans and intangible assets. By spreading out the cost of these items over their useful life, amortization allows businesses to match expenses with revenue generation. This ensures that financial statements reflect the true value of assets and provide a more accurate picture of a company's financial health. Additionally, amortization is required by Generally Accepted Accounting Principles (GAAP) and is necessary for tax planning purposes.
Without proper amortization practices, businesses may face incorrect reporting, potential penalties from regulatory bodies like the Internal Revenue Service (IRS), and inaccurate assessments of their financial position. So understanding how to calculate amortization for loans and intangible assets is crucial for effective financial management in accounting.

Types of Amortization

There are different types of amortization, including amortization of loans and of intangible assets. Understanding these different types will help you make informed financial decisions. Read on to learn more about calculating loan amortization and how it applies to accounting principles.

Loans

Loans are a common way for individuals and businesses to borrow money. When you take out a loan, you receive a specific amount of money from the lender that you agree to repay over time, usually with interest. The process of paying back the loan is called loan amortization.

Loan amortization involves making regular payments towards both the principal amount borrowed and the interest charged by the lender. This allows borrowers to gradually reduce their debt until it is fully repaid by the end of the loan term. Most loans have level payments, meaning that your monthly payment stays the same throughout the repayment period. In the early stages of an amortized loan, a large portion of each payment goes towards paying off interest costs, while only a small part goes towards reducing your actual debt. As time goes on and more payments are made, more of each payment will be applied towards reducing your principal balance.

Intangible Assets

Amortization is not only applicable to loans; it can also be used for intangible assets. Intangible assets are things like patents, trademarks, copyrights, and goodwill. These assets have value but cannot be physically touched or seen. Just like with loan amortization, the cost of these intangible assets is spread out over time through the process of amortization.
By doing this, businesses can tie the cost of these assets to the revenues they generate. Amortizing intangible assets can also help reduce a business's taxable income and provide a clearer picture of its true earnings. Allocating a portion of the cost to each reporting period allows businesses to accurately account for these valuable but non-physical resources in their financial statements.

How to Calculate Amortization for Intangible Assets

Calculating amortization for intangible assets involves finding the basis and residual value, dividing the difference by the remaining months, and recording the cost on your income statement. Discover how to calculate amortization in detail with an example of an intangible asset.

Finding the Basis and Residual Value

To calculate the amortization for an intangible asset, you first need to find its basis and residual value. Here's how:
1. Determine the basis: The basis of an intangible asset is typically its purchase price or cost of development. This is the amount that you initially paid for or invested in the asset.
2. Estimate the residual value: The residual value is the estimated value of the intangible asset at the end of its useful life, after amortization. You can estimate this based on factors such as market conditions and expected future cash flows.

Dividing by the Remaining Months

To calculate the monthly amortization payment, divide the remaining balance by the number of months left in the amortization period. This gives you the amount you need to expense each month to fully amortize the asset within the specified time frame.

Recording the Cost on Your Income Statement

When recording the cost on your income statement, there are a few key steps to follow:
1. Determine the basis value of the intangible asset. This is the initial cost of acquiring or creating the asset.
2. Calculate the residual value, which is an estimate of how much the asset will be worth at the end of its useful life.
3. Divide the difference between the basis value and the residual value by the remaining months in the asset's useful life. This will give you the monthly amortization expense.
4. Record the monthly amortization expense as a debit to an expense account and a credit to an accumulated amortization contra-asset account.
5. Update your records each month by repeating steps 3 and 4 until you have fully amortized the asset.

Example of Amortization of an Intangible Asset

An example of amortization of an intangible asset is when a company purchases a patent. Let's say the patent has a cost of $100,000 and a useful life of 10 years. To calculate the annual amortization expense, we divide the cost by the useful life. In this case, it would be $100,000 divided by 10, resulting in an annual amortization expense of $10,000. This means that each year, for 10 years, the company will record an expense of $10,000 on its income statement to gradually reduce the carrying value of the patent. By doing this over time, the company can accurately account for the cost and benefit from using the patent in its business operations without impacting the financial statements too much at once.

Is Understanding Loan Amortization Important When Applying for a Startup Business Loan?

Understanding loan amortization is crucial when applying for a startup business loan. By comprehending this process, entrepreneurs can make informed decisions regarding their loan options for startup businesses. Loan amortization involves regular payments that cover both principal and interest, allowing borrowers to allocate their funds effectively and plan for the future.

In conclusion, calculating loan amortization is a crucial skill for anyone looking to manage their debts effectively. By understanding the principles behind amortization and following the steps outlined in this article, you can create an amortization schedule and determine your monthly payments with confidence.
Whether you’re planning to repay a personal loan or analyzing the financial impact of business assets, mastering loan amortization will help you make informed decisions and stay on track towards achieving your financial goals.
Decomposition Models – J. Eduardo Vera-Valdés

Time Series

In this self-study you are going to learn about decomposition models for time series, as well as the exponential smoothing and Holt–Winters models. These models are widely used in practice as alternatives to the ARIMA model. Your task is to read the material and complete the exercises listed in the document.

Classical Decomposition

The classical decomposition model consists of three components: trend (\(T_t\)), seasonal (\(S_t\)), and random (\(R_t\)). The idea is to separate the time series into each of these components so they can be analysed separately. There are two types of decomposition: additive and multiplicative. As the names suggest, they differ in the way the components are combined.

Additive Decomposition

The most common decomposition is the additive decomposition, which is given by \[X_t = T_t + S_t + R_t.\]

Multiplicative Decomposition

Alternatively, the multiplicative decomposition is given by \[X_t = T_t \times S_t \times R_t.\]

The main difference between the two decompositions is the way the seasonal component is combined with the trend and random components. The additive decomposition is more appropriate when the seasonal component is constant over time, while the multiplicative decomposition is more appropriate when the seasonal component varies with the level of the time series.

Before getting into the details of how the components are estimated, let's see an example of each decomposition.

Example: AirPassengers

We use the AirPassengers dataset to illustrate the decomposition models. The dataset is included with base R and contains the monthly number of international airline passengers from 1949 to 1960. This is a classic dataset used to illustrate time series analysis.

data("AirPassengers") #Load the data
AP = AirPassengers #Rename it for easier access
plot(AP) #Plot the original data

From the plot, we can see that the seasonal component is increasing over time.
That is, the ups and downs of the time series are getting larger as the level of the time series increases. This suggests that the multiplicative decomposition is more appropriate for this dataset.

The decomposition can be done using the decompose() function in R. The function requires the time series and the type of decomposition as arguments. The function returns a list with the components, which we can plot directly.

AP.deca = decompose(AP,type="additive") #Decompose with additive model
plot(AP.deca) #Plot the decomposed additive model

Exercise 1

Decompose the AirPassengers dataset using the multiplicative model by changing the type and plot the results. What do you observe from the decomposition? Compare the results of the additive and multiplicative models.

Trend Component

The trend component is estimated by removing the seasonal and random components from the original time series. The trend is the long-term movement of the time series; it can be increasing, decreasing, or constant over time. The trend is estimated by aggregation, by smoothing the time series using a moving average, or with a regression model.

The simplest way to estimate the trend is by aggregating the time series over a cycle, thus smoothing it. The period of aggregation depends on the frequency of the time series. For example, if the time series is monthly, the aggregation is done over a year. In R, the aggregate() function is used for this purpose. The function requires the time series as an argument and returns the aggregated time series. We can plot the aggregated time series to see the trend component.

plot(AP) #Plot the original data
lines(aggregate(AP)/12, col=4, type="b") #Add the aggregated (annual mean) series
legend(1949,630,c("Original series","Aggregated data"),col=c(1,4),pch = c(NA,1),lty=1)

Note that we have divided the aggregated data by 12 to get the average monthly value. The trend is smooth and captures the long-term movement of the time series. Nonetheless, aggregation discards a lot of information from the original time series.
We obtain only one observation per cycle.

Moving Average

A more sophisticated way to estimate the trend component is by using a moving average. The core idea is that, instead of aggregating the time series per cycle to obtain a single observation, we smooth the time series by averaging the observations in a window. In this sense, the moving average is a simple way to estimate the trend component, but it is more flexible than aggregation.

Note that the function decompose() in R uses a moving average to estimate the trend component. We can recover the trend component by selecting it from the decomposed model using the $ operator. The line below reports the trend component of the decomposed additive model.

AP.deca$trend #Trend component of the additive model

          Jan      Feb      Mar      Apr      May      Jun      Jul      Aug
1949       NA       NA       NA       NA       NA       NA 126.7917 127.2500
1950 131.2500 133.0833 134.9167 136.4167 137.4167 138.7500 140.9167 143.1667
1951 157.1250 159.5417 161.8333 164.1250 166.6667 169.0833 171.2500 173.5833
1952 183.1250 186.2083 189.0417 191.2917 193.5833 195.8333 198.0417 199.7500
1953 215.8333 218.5000 220.9167 222.9167 224.0833 224.7083 225.3333 225.3333
1954 228.0000 230.4583 232.2500 233.9167 235.6250 237.7500 240.5000 243.9583
1955 261.8333 266.6667 271.1250 275.2083 278.5000 281.9583 285.7500 289.3333
1956 309.9583 314.4167 318.6250 321.7500 324.5000 327.0833 329.5417 331.8333
1957 348.2500 353.0000 357.6250 361.3750 364.5000 367.1667 369.4583 371.2083
1958 375.2500 377.9167 379.5000 380.0000 380.7083 380.9583 381.8333 383.6667
1959 402.5417 407.1667 411.8750 416.3333 420.5000 425.5000 430.7083 435.1250
1960 456.3333 461.3750 465.2083 469.3333 472.7500 475.0417       NA       NA
          Sep      Oct      Nov      Dec
1949 127.9583 128.5833 129.0000 129.7500
1950 145.7083 148.4167 151.5417 154.7083
1951 175.4583 176.8333 178.0417 180.1667
1952 202.2083 206.2500 210.4167 213.3750
1953 224.9583 224.5833 224.4583 225.5417
1954 247.1667 250.2500 253.5000 257.1250
1955 293.2500 297.1667 301.0000 305.4583
1956 334.4583 337.5417 340.5417 344.0833
1957 372.1667 372.4167 372.7500 373.6250
1958 386.5000 390.3333 394.7083 398.6250
1959 437.7083 440.9583 445.8333 450.6250
1960       NA       NA       NA       NA

The window of the moving average is determined by the frequency of the time series. In the example, the time series is monthly, so the window is 12. The first observation of the trend component is the average of the first 12 observations, reported in the middle of the window. Hence, note that we do not have a trend component for the first and last 6 observations. In this sense, the moving average is a method that loses less information than aggregation.

Exercise 2

Plot the trend component of the decomposed multiplicative model in a similar plot as the aggregated data above.

Linear Regression

Another way to estimate the trend component is by using a linear regression model. The linear model is given by \[X_t = \beta_0 + \beta_1 t + Z_t,\] where \(t\) is the time index and \(Z_t\) is the random component. The coefficients \(\beta_0\) and \(\beta_1\) are estimated by minimizing the sum of squared residuals.

To estimate the trend, first we generate the time index.

t.index = ts(1:length(AP), start = c(1949, 1), frequency = 12) #Time index

Note that we have used the ts() function to create a time series object. The start and end arguments are used to specify the start and end of the time series, respectively. The frequency argument is used to specify the frequency of the time series, which is 12 in this case.

Exercise 3

Estimate the trend component of the AirPassengers dataset using a linear regression model with the time series as regressor. Plot the original data and the estimated trend component in a similar plot as the aggregated data above.

Exercise 4

Above we considered the simplest linear model to estimate the trend component. However, the trend component can be estimated using more complex models. For example, we can include a quadratic term in the model, or a general polynomial. Estimate the trend component of the AirPassengers dataset using a quadratic regression model with the time series as regressor.
Plot the original data and the estimated trend component in a similar plot as the aggregated data above.

Seasonal Component
The seasonal component is estimated by removing the trend and random components from the original time series. The seasonal component is the short-term movement of the time series: the ups and downs that repeat over time. The method for estimating it depends on the type of decomposition. In the additive decomposition, the seasonal component is estimated by subtracting the trend from the original series; in the multiplicative decomposition, by dividing the original series by the trend. In the code below, we plot the original time series and the seasonal component of the decomposed additive model. Note that the seasonal component repeats with constant amplitude over time, while the swings of the original series grow with its level. This suggests that the additive decomposition is not appropriate for this dataset.

Exercise 5
Plot the seasonal component of the decomposed multiplicative model in a similar plot as the one above.

Random Component
The random component is estimated by removing the trend and seasonal components from the original time series. The random component is the noise of the time series: the residuals that remain after removing the trend and seasonal components. In the code below, we plot the original time series and the random component of the decomposed additive model. Ideally, the random component should be white noise; that is, it should not have any pattern. We can check this by plotting the autocorrelation function of the random component.

Exercise 6
Plot the autocorrelation function of the random component of the decomposed additive and multiplicative models. Hint: use the acf() function in R to plot the autocorrelation function. You may need to omit the observations lost in the calculation of the trend; use the na.omit() function to do this.
Which decomposition has a random component closer to white noise?

Exponential Smoothing
Exponential smoothing is a simple method for forecasting time series. It is based on the idea that present and past values of the series can be used to forecast future values, with the contribution of past values decaying exponentially to reflect the fact that closer observations have a bigger effect. The exponential smoothing forecast is given by

\(\begin{align*} \hat{X}_{t+1} &= \alpha X_t + \alpha(1-\alpha)X_{t-1} + \alpha(1-\alpha)^2 X_{t-2} + \cdots \\ &= \alpha X_t + (1-\alpha)\hat{X}_t, \end{align*}\)

where \(\hat{X}_{t+1}\) is the forecast of the time series at time \(t+1\), \(X_t\) is the observed value at time \(t\), and \(\hat{X}_t\) is the forecast at time \(t\). The parameter \(\alpha\) is the smoothing parameter, and it controls the weight of past observations in the forecast. It lies between 0 and 1, and it is usually chosen by minimizing the mean squared error of the one-step forecasts. The second line gives the recursion \(\hat{X}_{t+1} = \alpha X_t + (1-\alpha)\hat{X}_t\), which is easy to implement in practice. Since no new observations are available beyond time \(t\), the forecast at any horizon \(h \geq 1\) is the flat value \(\hat{X}_{t+h} = \hat{X}_{t+1}\).

The next code fits an exponential smoother to the AirPassengers dataset and plots the original data and the fitted values. The HoltWinters() function [more on this function below] is used to fit the exponential smoother: it takes the time series as an argument and returns, among other things, the fitted values. To fit a pure exponential smoother, we set the arguments gamma and beta to FALSE, which removes the seasonal and trend components from the model.

fit.expsm = HoltWinters(AP,gamma=F,beta=F) #Fit the exponential smoother
ts.plot(fit.expsm$fitted,AP,col=c(1,2)) #Plot the original data and the fitted values from the exponential smoother

The fitted values are very close to the original data.
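The smoothing recursion above can be written out by hand as follows. This is a sketch, assuming AP holds the AirPassengers series; the value of alpha is an illustrative assumption, whereas HoltWinters() chooses it by minimizing the squared one-step errors.

```r
alpha = 0.3                    # assumed smoothing parameter (illustrative)
x = as.vector(AP)              # observed values
xhat = numeric(length(x))
xhat[1] = x[1]                 # initialize the recursion with the first observation
for (t in 1:(length(x) - 1))
  xhat[t + 1] = alpha * x[t] + (1 - alpha) * xhat[t]   # one-step forecast
```

Each xhat[t+1] depends only on the latest observation and the previous forecast, which is why exponential smoothing is so cheap to compute and update.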
The exponential smoother is a simple method, but it is very effective in practice, particularly for forecasting. The predict() function is used to forecast future values: it takes the fitted model and the number of periods ahead to forecast. In the example below, we predict 2 years ahead, i.e. 24 observations. The prediction.interval argument is used to include a confidence interval in the forecast.

forecast.expsm = predict(fit.expsm,n.ahead=2*12,prediction.interval=TRUE) #Forecasts using the exponential smoother
ts.plot(AP,forecast.expsm,lty=c(1,2,3,3),col=c(1,4,2,2)) #Plot the original data and the forecasts with confidence interval

The plot shows the original data, the forecasted values, and the confidence interval. Note that the forecasted values are flat, which is a limitation of the exponential smoother: the method is not able to capture the trend and seasonal components of the time series. To overcome this limitation, we can use the Holt-Winters model.

Holt-Winters Model
The Holt-Winters model is an extension of the exponential smoother that includes the trend and seasonal components. The model is given by the equations

\(\begin{align*} \hat{X}_{t+1} &= l_t + b_t + s_{t-m+1},\\ l_t &= \alpha X_t + (1-\alpha)(l_{t-1}+b_{t-1}),\\ b_t &= \beta(l_t-l_{t-1})+(1-\beta)b_{t-1},\\ s_t &= \gamma(X_t-l_{t-1}-b_{t-1})+(1-\gamma)s_{t-m}, \end{align*}\)

where \(\hat{X}_{t+1}\) is the forecast of the time series at time \(t+1\), \(l_t\) is the level at time \(t\), \(b_t\) is the trend at time \(t\), \(s_t\) is the seasonal component at time \(t\), and \(m\) is the frequency of the time series. The parameters \(\alpha\), \(\beta\), and \(\gamma\) are the smoothing parameters of the level, trend, and seasonal components, respectively. They lie between 0 and 1, and they are usually chosen by minimizing the mean squared error of the forecast.
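The three update equations above can be transcribed directly into R. This is a sketch of a single update step, not the fitting procedure: the function name is an assumption, and in practice HoltWinters() chooses the smoothing parameters by minimizing the squared one-step errors.

```r
# One Holt-Winters update, following the equations above.
# x_t: new observation; l_prev, b_prev: previous level and trend;
# s_prev_m: seasonal value from m periods ago; alpha, beta, gamma in (0,1).
hw_step = function(x_t, l_prev, b_prev, s_prev_m, alpha, beta, gamma) {
  l = alpha * x_t + (1 - alpha) * (l_prev + b_prev)        # level update
  b = beta * (l - l_prev) + (1 - beta) * b_prev            # trend update
  s = gamma * (x_t - l_prev - b_prev) + (1 - gamma) * s_prev_m  # seasonal update
  c(level = l, trend = b, seasonal = s)
}
```

The one-step forecast is then the level plus the trend plus the seasonal value for the upcoming period, as in the first equation.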
Note that the Holt-Winters model is a generalization of the exponential smoother: when the trend and seasonal components are switched off (beta and gamma set to FALSE, as we did in the previous section), the Holt-Winters model reduces to the exponential smoother. As before, the HoltWinters() function is used to fit the Holt-Winters model.

Exercise 7
Fit the Holt-Winters model to the AirPassengers dataset and plot the original data and the fitted values.

Exercise 8
Forecast future values of the AirPassengers dataset using the Holt-Winters model and plot the original data and the forecasted values. Include the confidence interval in the forecast. What do you observe from the forecast? Compare the results of the exponential smoother and the Holt-Winters model.

Temperature Data Exercise
In this exercise, you are going to apply the decomposition models, the exponential smoother, and the Holt-Winters model to the Temperature dataset. The dataset contains the monthly Northern Hemisphere temperature anomalies from 1850, and it can be downloaded directly from the HadCRUT4 database. The code below downloads the data and plots the raw series.

url1 = "https://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.6.0.0.monthly_nh.txt" #Web address of temperature data from the HadCRUT4 database
temporal = read.table(url1,sep="") #Getting the data from the url
tempn = ts(temporal[,2],start=c(1850,1),frequency=12) #Define as time series
ts.plot(tempn) #Plotting the raw data

Aggregate analysis
Exercise 9
Aggregate the temperature data to obtain a yearly series and plot it. Can you see a trend? Comment.

Decompose analysis (additive)
Exercise 10
Decompose the original (monthly) temperature data using an additive model. Plot the trend of the decomposed model and the autocorrelation function of the random component. For the ACF of the random component you will need to use the na.omit() function. Does the additive model fit the data? Comment.
Decompose analysis (multiplicative)
Exercise 11
Repeat the previous exercise using the multiplicative model.

Exercise 12
Estimate the Holt-Winters model (allow for trend and seasonal components and use an additive model). Forecast 10 years of temperature data. Plot the original data and the forecast, including confidence bands. Comment.

In this self-study, you have learned about decomposition models for time series and about the exponential smoothing and Holt-Winters models, applying them to the AirPassengers and Temperature datasets. You have learned how to decompose a time series into trend, seasonal, and random components and how to estimate each of them, as well as how to fit the exponential smoother and the Holt-Winters model and use them to forecast future values of a time series.
Common Biostatistical Problems and the Best Practices that Prevent Them

This is based on Peter Bacchetti's lecture on "Common Biostatistical Problems and the Best Practices that Prevent Them." Send any comments to peter@biostat.ucsf.edu. The article below was adapted in 2008 by Nancy Hills and may need updating. More recent lecture notes are available at the parent page Common Biostatistical Problems and the Best Practices That Prevent Them.

P-values are often the primary focus when researchers interpret statistical analyses of their data, despite the important contributions to interpretation made by examining estimates, confidence intervals, and graphical summaries. This over-emphasis (and sometimes exclusive emphasis) on p-values contributes to many problems, including the first, which I also consider to be the most serious. It is very common in medical research and can lead to terrible misinterpretations. Unfortunately, investigators tend to believe that p-values are much more useful than they really are, and they misunderstand what they can really tell us.

The P-value Fallacy
The term "p-value fallacy" has been used to describe rather more subtle misinterpretations of the meaning of p-values than what will be discussed here. For example, some believe that the p-value is the probability that the null hypothesis is true, given the observed data. But much more naive misinterpretation of p-values is common. Almost no one would really defend these first two statements:

1. The p-value tells you whether an observed difference, effect or association is real or not.
2. If the result is not statistically significant, that proves there is no difference.

These are too naive and clearly wrong. We all know that just because a result could have arisen by chance alone does not mean that it must have arisen by chance alone. That would be very bad logic. But how about the following statement:

3.
If the result is not statistically significant, you "have to" conclude that there is no difference. (And you certainly can't claim that there is any suggestion of an effect.)

This statement may seem a bit more defensible, because it resembles what people are taught about statistical hypothesis testing and "accepting" the null hypothesis. This may seem only fair: you made an attempt and came up short, so you must admit failure. The problem is that in practice, this has the same operational consequences as the two clearly incorrect statements above. If you are interested in getting at the truth rather than following a notion of "fair play" in a hypothesis testing game, then believing this will not serve you well. Unfortunately, some reviewers and editors seem to feel that it is very important to enforce such "fair play".

Adding a Power Calculation
What if we not only get p>0.05 but also did a power calculation? It is very common to reason that if a power calculation were done and p>0.05, then there is no effect. The logic is that we tried to make the sample size large enough so that if a difference were present, we would have been likely to obtain a p-value <0.05. Because we did not get p<0.05, we therefore believe that a difference is unlikely to be present.

This is still a poor approach, because reasoning via p-values and power is convoluted and unreliable. One problem is that power calculations are usually inaccurate, since they have to be based on assumptions that are hard to know in advance. Inaccuracy is theoretically inevitable and empirically verified: a study of randomized clinical trials in 4 top medical journals (NEJM, JAMA, Annals of Internal Medicine, and Lancet) found that more than half used assumed SDs off by enough to produce a >2-fold difference in sample size.[1] Of course, one could try to do better by re-estimating power after the study is completed.
But the assumptions needed for power calculations are still not fully known after the study, and post-hoc power calculations are not considered meaningful. The CONSORT guidelines for reporting randomized clinical trials specifically warn against this practice, stating that "There is little merit in a post hoc calculation of statistical power using the results of a trial".[2] Why is this not worth doing? Because there is a simpler and better alternative: confidence intervals show simply and directly what possibilities are reasonably consistent with the observed data, and are widely acknowledged to be superior and sufficient.

Here are some other situations that make it tempting to believe that a large p-value must be conclusive:

p>0.05 + Large N = No effect
p>0.05 + Huge Expense = No effect
p>0.05 + Massive Disappointment = No effect

But not if contradicted by the CIs! Sometimes we want to believe that a study must be conclusive because it was such a good attempt, or because it looks like it should be conclusive, or because nothing as good will ever be done again. But these considerations carry no scientific weight and cannot overrule what is shown by the CI. If the CI is wide enough to leave doubt about the conclusion, then we are stuck with that uncertainty.

Here is an example of the p-value fallacy. A randomized clinical trial of a fairly serious condition compares two treatments. The observed results are:

Treatment A: 16 deaths in 100
Treatment B: 8 deaths in 100

These produce the following analyses:

Odds ratio: 2.2, 95% CI 0.83 to 6.2, p=0.13
Risk difference: 8.0%, 95% CI -0.9% to 16.9%

This was reported as "no difference in death rates", presumably based on the p-value of 0.13. This type of interpretation is alarmingly common, but the difference is not zero, which would really be "no difference"; it is 8%.

No "Significant" difference
Sometimes you see reports of "no significant difference in death rates".
This might be intended simply to say that the p-value was not <0.05, but it can easily be read to mean that the study showed that any difference in death rates is too small to be important. Although some journals have the unfortunate stylistic policy that "significant" alone refers to statistical significance, the word has a well-established non-technical meaning, and using it in this way promotes misinterpretation. Certainly the difference was "significant" to the estimated 8 additional people who died with Treatment A.

No "Statistical" difference
The phrase "no statistical difference in death rates" is newer and also seems to mean that the observed difference easily could have occurred by chance. This term seems to give the impression that some sort of statistical magic has determined that the observed difference is not real. This is exactly the misinterpretation that should be avoided. (See "Statistical Ties, and Why You Shouldn't Wear One", http://www.medscape.com/viewarticle/550738). A sensible interpretation would be: "Our study suggests an important benefit of Treatment B, but this did not reach statistical significance."

Examples in Prominent Places
Finding egregious examples of the p-value fallacy in prominent places is all too easy.

Example 1. Vitamins during Pregnancy
A study of vitamins C and E and the risks of preeclampsia and perinatal complications published in NEJM concluded that "supplementation with vitamins C and E during pregnancy does not reduce the risk of preeclampsia in nulliparous women, the risk of intrauterine growth restriction, or the risk of death or other serious outcomes in their infants".[7] This very definitive conclusion was based on the following results:

Preeclampsia: RR 1.20 (0.82 - 1.75)

This certainly suggests that the vitamins are not effective, because the estimate is a 20% increase in the outcome. But the CI does include values that would constitute some effectiveness, so the conclusion may be a bit overstated.
Growth restriction: RR 0.87 (0.66 - 1.16)

Here, we have a big problem. The estimate is a 13% reduction in the outcome, so the definitive statement that vitamins do not reduce this outcome is contradicted by the study's own data. Vitamins did appear to reduce this outcome, and the CI extends to a fairly substantial 34% reduction in risk.

Serious outcomes: RR 0.79 (0.61 - 1.02)

The same problem is present here, and it is even more severe. An observed 21% reduction in the most important outcome has been interpreted as definitive evidence against effectiveness. If we knew that this observed estimate were correct, then vitamin supplementation, or at least further study, would probably be worthwhile. In fact, the data in the paper correspond to an estimate of needing to treat 39 women for each serious outcome prevented, a rate that would almost certainly make treatment worthwhile.

Example 2. Fat Consumption and Breast Cancer
A less blatant but even higher-profile example is provided by the report on the Women's Health Initiative (WHI) study of fat consumption and breast cancer.[8] An article in Newsweek entitled "The New Fight Over Fat" prominently depicted a 12-decker cheeseburger next to the text: "Even diets with only 29% of calories coming from fat didn't reduce the risk of disease." This interpretation was typical of headlines. Deeper in the articles, writers struggled to convey some of the uncertainty about the results, but they were hampered by the poor choice of emphasis and presentation in the original JAMA publication.

The primary result was an estimated 9% reduction in risk of invasive breast cancer (Hazard Ratio = 0.91, 95% CI 0.83 - 1.01, p=0.07). An accurate sound bite would have been: "Lowering fat appears to reduce risk, but study not definitive." An interesting additional result was a reduction in the risk of breast cancer mortality (Hazard Ratio = 0.77, 95% CI 0.48 - 1.22).
The estimate here is a more substantial reduction in risk, but the uncertainty is wider. If this estimate turned out to be true, it would be very important. Unfortunately, the authors chose to primarily emphasize the fact that the p-value was >0.05. This gave the clear (and incorrect) impression that the evidence favors no benefit of a low-fat diet. The primary conclusion in the JAMA abstract was that "a low-fat dietary pattern did not result in a statistically significant reduction in invasive breast cancer risk." The overemphasis of the fact that the p-value was >0.05 promoted considerable misinterpretation.

Best Practice 1. Provide estimates with confidence intervals that directly address issues of interest.
This is usually important in clinical research because both the direction and the magnitude of any effect are often important. How to follow this practice will usually be clear, as it was in the above examples. Ideally, this will already have been planned at the beginning of the study. Often, an issue will concern a measure of effect or association, such as a difference in means, an odds ratio, a relative risk, a risk difference, or a hazard ratio. Think of what quantity would best answer the question or address the issue if only you knew it. Then estimate that quantity. This practice is often followed, but then ignored when the data are interpreted: the above examples provided estimates and confidence intervals, but these were ignored in the major conclusions, which were based only on the fact that the p-values were >0.05.

Best Practice 2. Ensure that major conclusions reflect the estimates and the uncertainty around them, and, in particular,

Best Practice 2a. Never interpret large p-values as establishing negative conclusions.
This is the practice that is too often neglected, particularly for negative studies.
The estimates and CIs, not just the p-value, should contribute to the interpretation. Think about these guidelines when interpreting your results:

• The estimate is the value most supported by the data. This means that a conclusion is inappropriate whenever it would be wrong if the estimate turned out to be the true value.
• The confidence interval includes values that are not too incompatible with the data. This means that conclusions exaggerate the strength of the evidence whenever they imply that some values within the CI are impossible or very unlikely.
• There is strong evidence against values outside the CI. If all important effects are outside the CI, then you can claim a strong negative result.

Here is an example of a strong negative conclusion that is well supported, from a study of outcomes among newborns with total bilirubin levels of 25 mg per deciliter or more:[9] "Conclusion: When treated with phototherapy or exchange transfusion, total serum bilirubin levels in the range included in this study were not associated with adverse neurodevelopmental outcomes in infants born at or near term." This was supported by a statement in the abstract concerning the CIs: "On most tests, 95 percent confidence intervals excluded a 3-point (0.2 SD) decrease in adjusted scores in the hyperbilirubinemia group."

But what if the results are less conclusive, such as those in the vitamin study discussed above? For the following results:

• Growth restriction: RR 0.87 (0.66 - 1.16)
• Serious outcomes: RR 0.79 (0.61 - 1.02)

an honest interpretation of what can be concluded would be something like this: "Our results suggest that vitamin C and E supplementation may substantially reduce the risk of growth restriction and the risk of death or other serious outcomes in the infant, but confidence intervals were too wide to rule out the possibility of no effect."
This interpretation reflects the key facts that (1) the estimates indicate protective effects big enough to be important, and (2) the uncertainty around them is too large to permit a strong conclusion that any protective effect exists.

What would have happened if the vitamin paper had been submitted with this more reasonable interpretation? It probably wouldn't have ended up in NEJM! Unfortunately, this more accurate interpretation would probably have greatly reduced the paper's chance of acceptance. The tension between fair interpretation and marketability for high-profile journals is the "elephant in the room" when it comes to conflict of interest. We are all under pressure to make our papers seem as interesting as possible. Despite the careful attention to financial conflicts of interest in medical research, the conflict between scientifically accurate interpretation and maximizing interest, getting into a high-prestige journal, and generally attracting attention (and citations) is largely unrecognized. It is always present and can have a strong influence on presentation and interpretation, yet it gets little attention.

The p-value fallacy can help make negative studies seem more conclusive and interesting. The most prestigious journals tend to prefer results that seem conclusive, perhaps because they are trying to serve clinicians in addition to scientists, and clinicians can make the most use of clear results. Although there is a lot of pressure to make results seem as interesting as possible, this should only go so far. Using the p-value fallacy to make a study seem definitive in one direction instead of suggestive in the other direction would clearly be going too far. In many cases, this is probably not deliberate. In this case, the authors may have felt that p>0.05 was definitive because the study was large and expensive, or perhaps because they had done a power calculation (but their assumptions were wildly off, as usual with power calculations).
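How easily power-calculation assumptions can be off "by enough to matter" is simple to illustrate in R. The numbers below are illustrative assumptions, not taken from any of the studies discussed: required sample size scales with the square of the assumed SD, so underestimating the SD by a third changes n by more than 2-fold, consistent with the journal survey cited above.

```r
# n per group for a two-sample t-test, 80% power, default alpha = 0.05
# (delta and sd values are illustrative assumptions)
n1 = power.t.test(delta = 5, sd = 10, power = 0.8)$n   # n under the assumed SD
n2 = power.t.test(delta = 5, sd = 15, power = 0.8)$n   # same calculation, SD 50% larger
n2 / n1                                                # about 2.25, since n scales with sd^2
```

A study planned under the first assumption but facing the second reality is badly underpowered, which is one reason a large p-value from a "powered" study still cannot be read as evidence of no effect.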
Be vigilant and be honest! The usual safeguards against bias due to conflict of interest are disclosure and correspondingly increased vigilance. Because this conflict is always present, the only obvious response is to always be vigilant.

Best Practice 3. Discuss the implications of your findings for what may be true in general. Do not focus on "statistical significance" as if it were an end in itself.
This may seem like a subtle distinction, but it is fundamental. We do research to learn about what is true in general in the real world, and p-values and statistical significance do not exist in the real world. Interpretation should focus clearly on what evidence the study provides about what may be generally true, not treat statistical significance as an end in itself. Statistical significance is only important by virtue of what it conveys about the study's evidence. Because of the extreme emphasis on statistical significance in medical research, this point is often forgotten, and we slip into thinking that statistical significance itself is what really matters. Most people understand that statistical significance implies strong evidence for a real effect, but this is usually not all that is important, and the implications of lack of statistical significance are much less clear.

In the case of the Women's Health Initiative, we care about the biological effect of dietary fat and about actual cases of breast cancer that could be prevented. The disconnect between the authors' statements and how they were interpreted illustrates why this Best Practice is important.

WHI conclusion: "a low-fat dietary pattern did not result in a statistically significant reduction in invasive breast cancer risk...However, the nonsignificant trends...indicate that longer, planned, non-intervention follow-up may yield a more definitive comparison."
The week after the article mentioned above, Newsweek published a follow-up concerning the difficulties that the press and the public have in understanding scientific results, particularly about diet research. (Scientists also have difficulty with these issues.) Despite this focus, the writers still did not understand what the WHI article stated, perhaps because they assumed--quite reasonably, but incorrectly--that the article must be addressing the real-world question.

Newsweek's interpretation of the WHI conclusion: "The conclusion of the breast-cancer study--that a low-fat diet did not lower risk--was fairly nuanced. It suggested that if the women were followed for a longer time, there might be more of an effect."

Both the major conclusion from the abstract and the caveat that followed it concerned statistical significance rather than what is really true. Although the "nonsignificant trends" were mentioned, their implications for the important issues were not discussed. The Newsweek writers mis-translated these into more relevant--but incorrect--statements. "Did not result in a statistically significant reduction" is not the same as "did not lower risk", and the caveats also do not match: the authors meant that the difference might reach p<0.05 with longer follow-up, not that the effect would get bigger. Because the WHI authors chose to neglect any direct assessment of the implications of their findings for what may really be true, they made serious misunderstandings virtually inevitable.

Best Practice 3 and Best Practice 2 are complementary: following Best Practice 2 will usually keep you on track for Best Practice 3, and vice versa.

Avoid the p-value fallacy by checking negative interpretations
While it may seem easy to understand that the p-value fallacy is not valid, it can be surprisingly hard in practice not to lapse into interpreting large p-values as reliable indications of no effect. It is easy to slip into relying on "p>" reasoning.
This may be because:

• "Yes" or "No" reasoning is more natural;
• the focus on p-values is ingrained in research culture, as we saw in the case of the Women's Health Initiative;
• the real level of uncertainty in research is often inconveniently large, which can make results seem less interesting. The vitamin study discussed above is a good example of this.

To avoid this problem, you can double-check all negative interpretations and examine the estimates and confidence intervals associated with them. Perform searches for the words "no" and "not": whole-word searches on these two terms should find most negative interpretations of statistical analyses. Check each sentence found and ask yourself the following questions:

• Is there an estimate and CI supporting this?
• What if the point estimate were exactly right? Would the conclusion still make sense?
• What if the upper confidence bound were true? Does the conclusion allow for this possibility?
• What if the lower confidence bound were true? Does the conclusion allow for this possibility?

To be very thorough, also search for the words "failed", "lack", "absence", "disappeared", "only", "rather", "neither", "none". Negative interpretations sometimes use these words.

How to Interpret Estimates and Confidence Intervals
The following figures show some concrete examples of how to interpret estimates and CIs. They assume a somewhat idealized situation where we have exact limits on what is clinically important, but they illustrate the main ideas. Often it will be more practical to first calculate the estimates and CIs and then consider whether the values obtained are large enough to be clinically important. In some cases it may be hard to argue that any effect, if real, would be too small to be important. Many detailed examples of how to word interpretations that reflect estimates and CIs are available online.

Figure 1. Interpretation: "We found strong evidence against any substantial harm or benefit."
Because we have strong evidence against any values outside the CI, both these cases argue strongly that any effect is clinically unimportant. Note that this is true even though one is statistically significant.

Figure 2. Interpretation: In this figure, there is a suggestion of substantial benefit, because the estimate would be an important benefit if true. However, there may be no effect (not statistically significant), because the CI includes no effect. Which of these results would be more exciting? The lower one, even though it has wider uncertainty, because the estimate is so much better.

Figure 3. Interpretation: In figure 3, there is strong evidence of a benefit (statistically significant). A substantial benefit appears likely, but the CI is too wide to rule out a clinically unimportant benefit, i.e., the CI includes some benefits that would be too small to be clinically important.

Figure 4. Interpretation: In this figure, we see strong evidence of substantial clinical benefit. This is the most satisfying type of result. Even the upper confidence bound is in the substantial benefit range.

Figure 5. Interpretation: In this figure, no conclusions are possible due to a very wide CI. This is the least satisfying type of result. There is very little information in the study data.

Also see the online resource at https://ctspedia.org/CTSpedia/ResultsInterpretation

Example from a typical collaboration

First draft text: "There were no statistically significant effects of DHEA on lean body mass, fat mass or bone density."
Final wording: "Estimated effects of DHEA on lean body mass, fat mass, and bone density were small, but the confidence intervals around them were too wide to rule out effects large enough to be important."
I find that modifications like this are needed in the majority of papers that I am asked to co-author.

Are large p-values good for anything?

We can better understand the limited value of large p-values by noting what they are good for.
Perhaps the only time they are directly useful is in "due-diligence" situations, such as assessing model assumptions when they seem reasonable a priori, where you just need to show that you made some reasonable effort and nothing jumped out. In these situations, a large p-value indicates that your effort did not produce strong evidence of a problem, even though it cannot be interpreted as providing strong evidence against a problem. Be sure to use statements like "no interaction terms of treatment with other predictors in the model had p<0.1" rather than "there were no interactions of treatment with other predictors in the model", which would be an instance of the p-value fallacy. Another example is "We checked linearity assumptions by adding quadratic terms for each linear predictor, and none had p<0.05", not "there was no non-linearity", which again would be based on the p-value fallacy. Here is an example from a paper on estimating past hepatitis C infection risk from reported risk factor histories:[10] "We note that the confidence intervals were not narrow enough to rule out potentially important interactions, but in the absence of strong evidence for such interactions we focus on the simpler models without them."

"We failed to detect..."
"Our results do not support..."
"We found no evidence for..."
"Our data did not confirm..."

"There is no scientific evidence that BSE [Mad Cow Disease] can be transmitted to humans or that eating beef causes it in humans." --Prime Minister John Major, 1995

Of course, it turned out that BSE was transmitted to humans, and over 150 people died from it. "There is no evidence" is commonly used to give the impression that there is evidence on the other side. Although this and similar phrases sound "scientific", they promote sloppy reasoning.
Wording similar to this is popular in politics and advertising, because it gives the misleading impression of a strong case against a conclusion when no such case exists, without being technically incorrect. For the same reasons, the strange popularity of these phrases in scientific writing is disturbing. While such phrases may be technically correct, they are bound to be misread as implying evidence against an association or effect. They often involve or promote the p-value fallacy. And when there is strong evidence against an effect, they are too weak.

Best Practice 4. State what you did find or learn, not what you didn't

What a study did find is what is interesting, and any conclusions or interpretation should be based on this. Like Best Practice 3, this also helps with following Best Practice 2. This not only prevents deception, but also can make statements clearer and stronger. Oddly, investigators often understate their conclusions using weak phrasing. The phrases quoted above seem like safe, conventional ways to state interpretations, despite their drawbacks. I still find that these are the phrases that first pop into my mind when I think about how to state a finding. FRAM, a nationwide study of fat abnormalities in HIV, carefully investigated changes in fat in various anatomical sites among persons with HIV. Its results strongly contradicted established thinking in this area, which was that visceral fat (known as "VAT") increased as peripheral fat decreased and the two were causally linked. Among other results supporting its conclusion of no reciprocal change was the following, showing that peripheral fat loss did not have any substantial association with central fat gain (note the upper confidence bound of 1.06):
Peripheral fat loss association with central fat gain, OR: 0.71, CI: 0.47 to 1.06, p=0.10.
Despite the strong results, an early draft stated that "our results do not support the existence of a single syndrome with reciprocal findings". This was revised to read more appropriately: "We found evidence against any reciprocal increase in VAT in HIV-infected persons with peripheral lipoatrophy".[11] Another example comes from a study of the safety of cannabinoids in persons with treated HIV:
Marijuana effect on log10 VL: -0.06 (-0.26 to 0.13)
Dronabinol: -0.07 (-0.24 to 0.06)
These upper confidence bounds were considered too small to be important, so this was strong evidence against any substantial harm. In the first draft, the stated conclusion read, "Overall there was no evidence that cannabinoids increased HIV RNA levels over the 21-day study period". The final draft more appropriately stated that "This study provides evidence that short-term use of cannabinoids, either oral or smoked, does not substantially elevate viral load in individuals with HIV infection".[12] This is the opposite of claiming high power to bolster a negative conclusion based only on a p-value, as discussed above (Problem 1). Sometimes investigators want to argue that their hoped-for results could still be possible, so they mention that power might have been too low and that could be why they didn't see what they expected. This again is too convoluted and unreliable to be worthwhile. A good example is from the WHI study of diet and breast cancer that we have been discussing: "There were departures from the design assumptions that likely reduced study power...If the WHI design assumptions are revised to take into account these departures [less dietary fat reduction], projections are that breast cancer incidence in the intervention group would be 8% to 9% lower than in the comparison group [and] the trial would be somewhat underpowered (projected power of approximately 60%) to detect a statistically significant difference, which is consistent with the observed results."
This illustrates the contorted sort of reasoning that speculation about low power requires. What exactly are they trying to say? Their intended meaning seems to boil down to the following: "There might be a 9% reduction in risk. We could have missed it because power was only 60%." This speculation is completely pointless, because the conclusion is better supported, and much clearer, from examination of the estimate and the confidence interval. The HR=0.91, so of course a 9% reduction is possible. It's what they actually saw! Let's be clear on what happened here. World-class researchers reporting a very high-profile (and hugely expensive) study lapsed into convoluted and completely pointless reasoning. This is a dramatic illustration of how exclusive focus on whether p<0.05 leads to easily avoided problems. Had the authors followed Best Practice 2, no such speculation would have been required, and the implications of their results would have been much clearer. To repeat: Best Practice 2. Ensure that major conclusions reflect the estimates and the uncertainty around them.

PROBLEM 4. EXCLUSIVE RELIANCE ON INTENT-TO-TREAT ANALYSIS

"Intent-to-treat" (ITT) analysis refers to analysis of all randomized subjects, regardless of how well they cooperated with treatment, possibly even including those who refused to actually undergo study treatment at all. ITT analysis is useful for preventing post-randomization self-selection from producing spurious positive findings, but it does not ensure the most accurate possible estimates for all purposes. Consider a "negative" study of vitamin E in diabetics that claimed to have proven that vitamin E supplementation does not prevent cancer (this was based on the p-value fallacy, by the way). The study used ITT: "To reduce bias, we included continuing follow-up from those who declined active participation in the study extension and stopped taking the study medication".
[13] But ITT produces underestimates of actual biological effects: it is biased toward no effect. Thus, in addition to ignoring their estimates and CIs, they based a negative conclusion on an approach that is biased in that direction. This is very different from still having a positive finding despite some bias in the other direction, which is where ITT analysis works well.

This is an area where the WHI study did reasonably well. They used specialized methods to attempt to estimate what effect the fat-lowering intervention would have had if it were followed as recommended. These methods try to avoid the self-selection bias that simple per-protocol analyses (or observational studies) would have, while also avoiding the biases of ITT analysis. The WHI estimate of the effect of adherence to a low-fat diet was a breast cancer Hazard Ratio of 0.85 (0.71 to 1.02). This was a bit lower than the 0.91 from the primary analysis, but it still just misses p<0.05. They went further in trying to account for adherence to the intervention, but balked at going beyond saying that the use of a more stringent adherence definition "leads to even smaller HR estimates and to 95% CIs that exclude"

Best Practice 5. Learn as much as you can from your data

It will not always be possible to fulfill this goal if analyses are strictly limited to ITT. Doing ITT analysis is usually important, so designing procedures to allow ITT is a good practice. In particular, it is good to continue to follow subjects who stop study medication. But ITT can be supplemented with additional analyses, notably analysis restricted to those who actually underwent the study treatments as planned, termed "per-protocol" analysis. So a specific best practice is:

Best Practice 5a. Do per-protocol analyses in addition to ITT, especially if
1. there is interest in biological issues: ITT is not designed to address biological effects and can be poor due to bias toward no effect.
2. treatment is double-blinded: this reduces (but does not eliminate) the potential for self-selection biases that ITT protects against.

Having results from both ITT and per-protocol analyses can provide a fairer assessment of the uncertainty about a treatment's effect. This is especially important if negative conclusions from ITT analysis are less supported by per-protocol analysis. Also, when treatments are randomized and blinded, stratifying or controlling for the level of adherence or the time of dropout can produce an "in-between" estimate that may be sensible. Another possibility is:

Best Practice 5b. Consider advanced methods to estimate causal effects

There are new and complex "causal inference" methods that seek to avoid both the biases of ITT and those of per-protocol analysis. These are what the WHI used. You are very likely to need help from a statistician to carry these out.

Two closely related problems are:

PROBLEM 5. RELIANCE ON OMNIBUS TESTS and PROBLEM 6. OVERUSE OF MULTIPLE COMPARISONS ADJUSTMENTS

Omnibus tests (like ANOVA) are methods that:
• check for any one or more of a large number of possible departures from a global null hypothesis (nothing is happening anywhere);
• are inherently focused only on p-values (Problem 1); and
• are diffuse, and so are weaker for specific issues.
This makes them generally less useful than analyses focused on specific relationships whose magnitudes can be estimated as well as tested. In particular, when the p-value is large, the main use for omnibus tests is the misuse highlighted in Problem 1. One reason that some people like omnibus tests is that they help guard against obtaining spurious positive results due to multiple comparisons. Because omnibus tests look broadly for any one of many possible departures from the null hypothesis, they are not good at finding any specific one. This makes them "conservative" for any specific question, which some people consider desirable.
Another way of guarding against chance false positive results is application of multiple comparisons adjustments. These are also inherently focused only on p-values, promoting use of the p-value fallacy. They also have the unfortunate property that the results of each analysis are automatically assumed to detract from all the others, with no consideration of how well the different results fit together conceptually or scientifically. Like omnibus tests, these are also very conservative, which some people like. But accuracy is a much more worthy goal than conservatism, and this is often better achieved by less formal (and more intelligent) ways of guarding against spurious findings. Following are examples of how to justify not using multiple comparisons adjustments, both for papers and in grant proposals.

Example text for responding to a manuscript review or for inclusion in a paper: "Although we examine many differences and issues, we report nominal p-values, without adjustment for multiple testing. Such adjustment would be focused on avoidance of one or more results with p<0.05 in the case where all differences are truly zero,[14-16] which is an extremely unrealistic hypothesis about the state of nature in our situation. In addition, adjustment would require that each result detract from the others, but there are clear biological relationships among many of the issues that we examine, and these permit coherent sets of findings to reinforce each other rather than detract from one another. Thus, multiple comparison adjustment would do exactly the wrong thing in such cases.[17] We therefore rely on scientific judgment rather than formal adjustment methods to indicate where caution is warranted despite findings with p<0.05."

Example text for inclusion in a grant proposal: "Although this Aim involves many different measures, we do not plan formal adjustments for multiple comparisons.
This is because we expect many measures to show statistically significant differences, and that directions and magnitudes of differences (perhaps including some with p>0.05) will fit a biologically coherent pattern. In this case, each result will reinforce the others, rather than detracting from one another as required by formal multiple comparisons adjustments such as the Bonferroni method. Conversely, if only one or a very few measures reach statistical significance and their directions and/or magnitudes do not coherently fit with << our substantive theory >>, then we will note that the result(s) with p<0.05 lack biological plausibility and could be due to chance despite meeting the conventional cutoff for statistical significance."

Typical problems that may be encountered

Scenario 1: An investigator who has just concluded an animal experiment is puzzled and worried. The experiment included:
• a condition that just confirms that the experiment was done correctly;
• some places where different conditions should be similar; and
• some conditions that should differ.
He saw the results he expected in pairwise comparisons, but "ANOVA says that there is nothing happening". Because the study had a specific focus on certain pairwise comparisons to address the scientific questions, he had done t-tests and estimated pairwise differences, obtaining positive results that he thought made sense. But he thought that he "had to" perform ANOVA, and this produced a p-value a bit larger than 0.05. So he thought that to be "rigorous" he would have to reach the opposite conclusion of what he found with the focused analyses. In fact, the focused results were what mattered; unfortunately, there is a risk that reviewers may think otherwise. Indeed, reviewers often state flatly that omnibus tests and multiple comparisons adjustments must be used when in fact those approaches would be very inappropriate.
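The arithmetic behind these multiple-comparisons disputes is easy to check directly. The following is a minimal sketch, assuming (purely for illustration) independent two-sided tests at alpha = 0.05; real pairwise comparisons share data and are correlated, and the function names are mine, not standard terminology:

```python
def bonferroni_threshold(alpha, m):
    # Formal adjustment: each added test makes every other test harder to
    # "pass", regardless of whether the results cohere scientifically.
    return alpha / m

def p_at_least_one_false_positive(k, alpha=0.05):
    # This is the quantity that grows with the number of tests,
    # when the global null (nothing happening anywhere) is true.
    return 1 - (1 - alpha) ** k

def p_all_coherent_false_positives(k, alpha=0.05):
    # Probability that all k tests are falsely significant AND each lands
    # in the scientifically predicted direction: shrinks rapidly with k.
    return (alpha / 2) ** k

m = 12  # e.g., a dozen pairwise comparisons
per_test_cutoff = bonferroni_threshold(0.05, m)   # 0.05/12, about 0.0042
p_any = p_at_least_one_false_positive(m)          # about 0.46: grows with m
p_coherent = p_all_coherent_false_positives(m)    # about 6e-20: vanishingly small
```

The contrast between the last two numbers is the point made in the scenarios that follow: the chance of *some* false positive grows with the number of tests, but the chance of many false positives that all fit a predicted pattern becomes vanishingly small.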
Scenario 2: A primarily descriptive study that examined effects of four different administration routes had many positive results, not a single one of which was likely due to chance. However, the reviewer stated: "Repeated measures analysis of variance should be completed. *Only if the time-by-treatment interaction is significant*, should time-specific comparisons be made. Then multiple comparison procedures, such as Tukey's test, should be used rather than repeated t-tests." This would treat p>0.05 on the unfocused omnibus test of time-by-treatment interaction as a reliable indicator that no important differences are present: Problem 1. The reviewer's comment, particularly the phrase highlighted in red, may sound rigorous, but it is only "rigorous" in the sense of being rigid or harsh, not in the sense of being exactly precise. It requires extreme conservatism--not accuracy--which could result in missing or understating important findings.

Scenario 3: Another study, of the biology of morphine addiction, had a great deal of scientific structure that omnibus tests or multiple comparisons adjustments would not take into account. The design of the study was very complex, involving:
• two different receptors;
• antagonists;
• different brain regions with and without a certain receptor; and
• systemic vs. local administration.
The results of many pairwise comparisons fit a biologically coherent pattern. Conditions that should have differed did, while comparisons that should have been similar were similar. A reviewer of the manuscript ignored the consistency of the findings and wrote the following strident comment: "The statistical analyses are naive. The authors compute what appear to be literally dozens of t-tests without any adjustment to the α level--indeed *the probability of obtaining false positives grows with the number of such tests computed*. The authors should have conducted ANOVAs followed by the appropriate post-hoc tests.
Their decision to simple [sic] compute t-tests on all possible combinations of means is statistically unacceptable." The statement highlighted in red is incorrect. The chance of obtaining at least one false positive increases if the null hypothesis holds for all comparisons. False positives in general do not become more likely, and the chance of getting many false positives that all fit together in a coherent biological theory is extremely small. This is a clear case where the results of multiple analyses all reinforce each other rather than detracting from each other as required by omnibus tests and multiple comparisons adjustments. The probability of obtaining multiple positive results exactly where expected and negative results exactly where expected does not grow; it becomes vanishingly small.

Striving for the following best practices will often lead to much better analyses and interpretations than use of omnibus tests and multiple comparisons adjustments.

Best Practice 6. Base interpretations on a synthesis of statistical results with scientific considerations.

In clinical research, there is usually outside knowledge that can be used to help with the choice of analyses and their interpretation. Recognizing and explaining whether and how results of different analyses fit together is crucial for obtaining the best understanding of what can be learned from the study. This will usually require consideration of the directions and magnitudes of estimated effects, along with the uncertainty shown by the CIs, rather than consideration of p-values alone.

Best Practice 6a. Rely on scientific considerations to guard against overinterpretation of isolated findings with p<0.05. (This is usually preferable to multiple comparisons adjustment.)

In particular, it is important to realize when one or a few findings reach p<0.05 but the ensemble of results does not have a compelling explanation.
If the results with p<0.05 are not especially more plausible than other quantities estimated, and the directions and magnitudes of these and other results do not show patterns that reinforce the findings, then it is reasonable to regard those findings as suggestive (or even potentially spurious) rather than conclusive, despite their small p-values. Given that our publishing environment has substantial disincentives for such cautious interpretation of findings with p<0.05, this requires strong dedication to fair interpretation. Best Practice 6b. Acknowledge the desirability of independent replication, particularly for unexpected findings. Replication plays a crucial role in solidifying scientific knowledge, but the tendency to focus only on supposedly conclusive "findings" can sometimes cause this to be overlooked. For example, a recent editorial, by a prominent statistician, addressed the topic of subgroup analyses, which is a version of the multiple comparisons issue.[18] It failed to mention any role for replication. Best Practice 7. Choose accuracy over conservatism whenever possible. Many consider conservatism to be very desirable and rigorous, but this certainly is not so when accuracy is a viable alternative. Conservatism is a type of bias, and bias is bad. Sometimes it is better to know the direction of the bias rather than to be uncertain. Intent-to-treat analysis, omnibus tests, and multiple comparisons adjustments introduce bias with a known direction, but it is still bias. You will often be able to do better by thinking carefully about all your results. Best Practice 1 and Best Practice 2, obtaining estimates and CIs and taking them into account when interpreting results, will often be helpful for achieving accurate interpretation. They will usually steer you away from overly conservative, automatic methods based only on p-values. Predictors and outcomes that have parts of their definitions in common can cause severe problems. 
EXAMPLE 1: Using body mass index (BMI) as a predictor of central fat. In the case of fat abnormalities in HIV-infected persons, early studies controlled for BMI when assessing whether peripheral fat loss was associated with central fat gain. Because fat amounts contribute directly to BMI, this caused a spurious association. Those with low peripheral fat were less likely to have high central fat. But the opposite picture emerges if you control for BMI. Those with low values of both peripheral and central fat had their low central fat "explained" by a low BMI, thereby leaving the estimated association to be determined only by the rare cases with low peripheral fat and high central fat. This field of inquiry was seriously misguided for many years because of this seemingly obvious problem.

EXAMPLE 2: Total time on treatment as a (fixed) predictor of survival time. Patients can only be treated if they are alive. Those who die early will necessarily have their total time on treatment limited by how long they lived. Someone who died after 2 days will have at most 2 days of treatment. Conversely, someone with 5 years of treatment must have lived at least 5 years. Proportion of time on treatment, although less obviously biased, also may depend on survival time and be potentially biased.

EXAMPLE 3: Number of rejection episodes as a predictor of post-transplant survival. Someone who died on day 1 would have at most 1 rejection episode, while someone with 5 episodes would have to have lived quite a bit longer.

To avoid these problems, either 1) ensure that the outcome is not part of the definition of a predictor, and vice versa, or 2) be very careful and clear with interpretation. In general, it is simplest and safest to make sure that outcome and predictors are fully distinct. For time-to-event analyses, use time-dependent covariates, defined only using measurements up to the present.
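As a minimal sketch of this rule applied to the treatment-time example, a time-dependent covariate can be computed from information available only up to the current time. This is a hypothetical helper in plain Python (not tied to any particular survival package), purely to illustrate the no-look-ahead principle:

```python
def treatment_exposure_so_far(treatment_intervals, t):
    """Time-dependent covariate: total treatment received up to time t.

    treatment_intervals: list of (start, stop) treatment periods for one
    subject. Using only information up to t avoids the look-ahead bias
    of "total time on treatment" as a fixed covariate, which is limited
    by how long the subject survived.
    """
    return sum(max(0.0, min(stop, t) - start)
               for start, stop in treatment_intervals
               if start < t)

# A subject treated during days [0, 2) and [10, 15):
intervals = [(0, 2), (10, 15)]
exposure_day_1 = treatment_exposure_so_far(intervals, 1)    # 1.0 day so far
exposure_day_12 = treatment_exposure_so_far(intervals, 12)  # 2 + 2 = 4.0 days
```

The covariate value at each event time uses only the treatment history before that time, so a subject's eventual survival cannot leak into their predictor.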
In general, any fixed (non-time-dependent) covariates should be known from information only up to time zero. And even for time-dependent covariates, any use of "look-ahead" information is likely to cause trouble.

Some Technical Problems and Ways to Avoid Them

1. No assessment of assumptions
Be sure to describe what you did to assess assumptions and what you found. As noted above, it is not necessary to prove that assumptions are valid, but it is important to perform some due diligence looking for major violations.

2. Ignoring dependence and clustering
An extreme case of this is performing unpaired analyses on paired data. Other cases include treating repeated measurements on the same people as if they were independent.

3. Unclear details for time-to-event analyses: operational definitions, early loss, event ascertainment
• Provide operational definitions of starting time, occurrence of event, and censoring time. Any analysis of time to an event needs to be clear about the time from when to when;
• Summarize follow-up among those who were censored, because follow-up is complete for anyone who had the event. It does not matter whether the event occurred at 2 days or 5 years--either way, we know all we need to know about that person's outcome. The amount of follow-up matters for those who did not have the event, especially the minimum follow-up and/or the number of subjects with shorter than desired follow-up. Mixing the early events into summaries of follow-up times obscures this information;
• Summarize early loss to follow-up and the reasons for it. Censoring due to loss to follow-up is more likely to violate the assumption of non-informative censoring, so this is a particular concern that should be addressed separately from observations censored just due to the planned end of the study or observation period;
• Describe how events were ascertained.
This is important for establishing the completeness of ascertainment, and sometimes for explaining clumps of events (e.g., if many were found at a scheduled 6-month visit).

4. Poor summaries (e.g., summarizing skewed data with a mean and standard deviation)
For skewed data, medians are usually better summaries. The CI for the median can be added to show how precisely it is estimated, or the range and/or quartiles (IQR) can be added to show the variation in the population. There is sometimes confusion about whether to show SDs or SEs. Show SEs (or CIs, which are usually better) to indicate the precision of estimates. Show SDs to indicate the variability in the population.

5. Showing inadequate or excessive precision
In general, too little precision may leave the reader wondering about the exact magnitude. For example, p=0.01 could mean anything from 0.005 to 0.015, which is a 3-fold range. Extra precision is not directly harmful, but it gives a spurious impression of how precise the results are. It also can look naive, giving an astute reader or reviewer the impression that you do not know what is important and what is not. For odds ratios, relative risks, and hazard ratios, do one of the following:
• give the value to two decimals if <2.0, one if >2.0, or
• give all values to two decimals. This has the possible advantage of always using the same number of decimals, but it can be excessive. ORs like 24.56 look a bit odd.
Give p-values to two significant digits (leading 0's don't count), to a maximum of three or four decimal places. Sometimes, people give a maximum of three digits, and this is what some Stata procedures provide. But giving up to four is sometimes desirable and is usually also fine. Do not use "p<" for values of 0.001 or more; use "=". That is, don't say p<0.01 when you could say p=0.0058. Never use "p=NS" or "p>0.1". This gives needlessly vague information and encourages Problem 1.
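These rounding rules can be captured in a small helper. The sketch below is hypothetical (the function names are mine, not from any package), following the first ratio option above and the two-significant-digit p-value rule:

```python
import math

def format_ratio(value):
    # ORs, RRs, HRs: two decimals below 2.0, one decimal otherwise
    # (the first of the two options described above).
    return f"{value:.2f}" if value < 2.0 else f"{value:.1f}"

def format_p(p, max_decimals=4):
    # Two significant digits, capped at max_decimals decimal places;
    # "p<" is reserved for values too small to show at that precision.
    smallest = 10 ** (-max_decimals)
    if p < smallest:
        return f"p<{smallest:.{max_decimals}f}"
    decimals = min(max_decimals, -math.floor(math.log10(p)) + 1)
    return f"p={p:.{decimals}f}"
```

For example, format_p(0.0058) gives "p=0.0058" rather than the vaguer "p<0.01", and format_ratio(24.56) gives "24.6".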
Do not show chi-squared or other statistics that provide the same information as p-values (but are less interpretable). These add no information and clutter presentation of results. They may seem to add some technical cachet, but leaving out unimportant details actually conveys a better impression of technical savvy.

6. Poorly scaled predictors
Common examples of this problem include using age in years or raw CD4+ cell count. In regression models, the coefficients for numeric predictors are the estimated effects of a 1-unit increase in the predictor. So if the age variable is in years, the estimated effect is for a 1-year increase in age, which is often too small to be readily interpretable. When a 1-unit increase is a very small amount, estimated coefficients will necessarily be very small and results will be hard to interpret. For example, an OR of 1.0051 per 1 cell/mm³ increase in CD4 count is very hard to interpret. It is also hard to rescale this by eye, because the OR for a 100-cell increase is (1.0051)^100, which most people cannot calculate in their heads (it's 1.66). To avoid this problem, rescale numeric predictors before running regression models (or use lincom, in Stata, afterwards). For example, make a new variable cd4per100=cd4/100.

7. Terms likely to be misread ("significant")
Don't needlessly give readers and reviewers the opportunity to misunderstand what you mean. Avoid the use of the word "significant" by itself. Use "statistically significant" if you mean p<0.05; use "important", "substantial", or "clinically significant" if that is the intended meaning. As noted under Problem 1, some journals reserve the word "significant" used by itself to mean "statistically significant". If they do not allow the full term, just avoid using it at all (this may be a good strategy anyway). Use "Relative Hazard" or "Hazard Ratio" for proportional hazards model results, not "Relative Risk", which is sometimes used for analysis of a binary outcome.
Use "Mann-Whitney" instead of "Wilcoxon" or "Wilcoxon rank-sum" to avoid confusion with "Wilcoxon signed-rank". Both "Mann-Whitney" and "Wilcoxon rank-sum" are used, due to the nearly simultaneous, independent development of the method.

8. Missing data
Missing data can cause bias, depending on why the data are missing. Some causes of missing data can produce severe bias in the data that remain. For example, those with the riskiest behavior may be more inclined to skip questions about risk factors. The more that is missing, the more likely that there is a problem with what remains. Think about why participants may have declined to provide the information. Prevention of missing data is best. If it is unavoidable, however:
• Clearly disclose how much is missing;
• Give reasons when known;
• Assess differences in non-responders. (You can compare those who responded to those who didn't on demographics and other variables that are more complete. This can provide clues about why some people didn't respond and what impact that may have on your results.)
• Perform sensitivity analyses. (For example, when comparing two treatments, you can make a pessimistic assumption about those with missing outcomes in one arm and an optimistic assumption for the other arm. If the conclusion remains qualitatively the same, then you can have confidence that it was not caused by bias due to missing data.)
• Consider advanced methods. (For example, there are Stata do-files available to facilitate a method called multiple imputation (see http://www.ats.ucla.edu/stat/stata/Library/ice.htm). You will likely need help from a statistician to use such approaches.)
When comparing univariate and multivariate results, use the same set of observations. For multivariate modeling, the default in Stata and other programs is to delete all observations that have missing values for any predictor or for the outcome.
(For stepwise selection, observations missing any candidate predictor are deleted, even if the candidate is never used.) This means that smaller models may have more observations than larger ones. Keep this in mind if you need to compare the results of such models, such as when you are examining how an effect of interest changes when controlled for other predictors. In these cases, you should fit the models to the same set of observations, even though the smaller models could utilize more.

Summary of Problems
Problem 1. P-values for establishing negative conclusions
Problem 2. Misleading and vague phrasing
Problem 3. Speculation about low power
Problem 4. Exclusive reliance on intent-to-treat analysis
Problem 5. Reliance on omnibus tests
Problem 6. Overuse of multiple comparisons adjustments

Summary of Biostatistical Best Practices
Best Practice 1. Provide estimates--with confidence intervals--that directly address the issues of interest.
Best Practice 2. Ensure that major conclusions reflect the estimates and the uncertainty around them.
Best Practice 2a. Never interpret large p-values as establishing negative conclusions.
Best Practice 3. Discuss the implications of your findings for what may be true in general. Do not focus on "statistical significance" as if it were an end in itself.
Best Practice 4. State what you did find or learn, not what you didn't.
Best Practice 5. Learn as much as you can from your data.
Best Practice 5a. Also do per-protocol analyses, especially if:
• There is an interest in biological issues
• There is double-blinded treatment
Best Practice 6. Base interpretations on a synthesis of statistical results with scientific considerations.
Best Practice 6a. Rely on scientific considerations to guard against overinterpretation of findings with p<0.05.
Best Practice 6b. Acknowledge the desirability of independent replication, particularly for unexpected findings.
Best Practice 7. Choose accuracy over conservatism.

References
1. Vickers AJ.
Underpowering in randomized trials reporting a sample size calculation. Journal of Clinical Epidemiology. 2003; 56:717-20.
2. Altman DG, Schulz KF, Moher D, Egger M, Davidoff F, Elbourne D, Gøtzsche PC, Lang T, for the CONSORT Group. The revised CONSORT statement for reporting randomized trials: explanation and elaboration. Ann Intern Med. 2001; 134:663-94. Page 670.
3. Tukey JW. Tightening the clinical trial. Controlled Clinical Trials. 1993; 14:266-285. P. 281.
4. Goodman SN, Berlin JA. The use of predicted confidence intervals when planning experiments and the misuse of power when interpreting results. Ann Intern Med. 1994; 121:200-6.
5. Hoenig JM, Heisey DM. The abuse of power: the pervasive fallacy of power calculations for data analysis. American Statistician. 2001; 55:19-34.
6. Senn SJ. Power is indeed irrelevant in interpreting completed studies. BMJ. 2002; 325:1304.
7. Rumbold AR, Crowther CA, Haslam RR, Dekker GA, Robinson JS. Vitamins C and E and the risks of preeclampsia and perinatal complications. NEJM. 2006; 354:1796-1806.
8. Women's Health Initiative. Low-fat dietary pattern and risk of invasive breast cancer. JAMA. 2006; 295:629-42.
9. Newman TB, Liljestrand P, Jeremy RJ; Jaundice and Infant Feeding Study Team. Outcomes among newborns with total serum bilirubin levels of 25 mg per deciliter or more. NEJM. 2006; 354:1889-1900. http://content.nejm.org/cgi/reprint/354/18/1889.pdf
10. Bacchetti P, Tien PC, Seaberg EC, O'Brien TR, Augenbraun MH, Kral AH, Busch MP, Edlin BR. Estimating past hepatitis C infection risk from reported risk factor histories: implications for imputing age of infection and modeling fibrosis progression. BMC Infectious Diseases. 2007; 7:145. doi:10.1186/1471-2334-7-145.
11. Bacchetti P, Gripshover B, Grunfeld C, et al. Fat distribution in men with HIV infection.
JAIDS. 2005; 40(2):121-31.
12. Abrams DI, Hilton JF, Leiser RJ, Shade SB, et al. Short-term effects of cannabinoids in patients with HIV-1 infection: a randomized, placebo-controlled clinical trial. Ann Intern Med. 2003; 139:258-66.
13. Lonn E, Bosch J, Yusuf S, Sheridan P, et al., for the HOPE and HOPE-TOO Trial Investigators. Effects of long-term vitamin E supplementation on cardiovascular events and cancer: a randomized controlled trial. JAMA. 2005; 293:1338-47.
14. Rothman KJ. No adjustments are needed for multiple comparisons. Epidemiology. 1990; 1(1):43-6.
15. Savitz DA, Olshan AF. Multiple comparisons and related issues in the interpretation of epidemiologic data. Am J Epidemiol. 1995; 142(9):904-8.
16. Perneger TV. What's wrong with Bonferroni adjustments. Br Med J. 1998; 316(7139):1236-8.
17. Bacchetti P. Peer review of statistics in medical research: the other problem. Br Med J. 2002; 324:1271-3.
18. Wittes J. On looking at subgroups. Circulation. 2009; 119:912-15.
Monty Hall Simulator

The Monty Hall simulator is a program that allows you to simulate the famous Monty Hall problem in a virtual environment. The simulator is based on the probability puzzle in which a game show contestant is asked to choose one of three doors: behind one door is a car, and behind the other two are goats. After the contestant makes their choice, the game show host, Monty Hall, opens one of the remaining two doors to reveal a goat, and then offers the contestant the opportunity to switch their choice to the other unopened door.

The Monty Hall simulator allows you to play out this scenario by randomly assigning a car and goats behind three virtual doors. You get to choose one of the doors, just like in the real problem, and then the simulator reveals one of the other two doors to have a goat behind it, just like Monty would do. At this point, the simulator gives you the option to switch your choice to the other unopened door, or stick with your original choice.

The purpose of the Monty Hall simulator is to help you understand the counterintuitive solution to the Monty Hall problem, which is that you should always switch your choice to the other unopened door. The simulator does this by showing you the probability of winning the car if you stick with your original choice, versus the probability of winning the car if you switch. The results will show that, over a large number of simulations, switching your choice results in a higher probability of winning the car.

Overall, the Monty Hall simulator is a useful tool for demonstrating the principles of conditional probability and showing how seemingly simple problems can have counterintuitive solutions. The Monty Hall problem is a famous probability puzzle that is named after the host of the television game show "Let's Make a Deal," Monty Hall. The problem is often used to illustrate the concept of conditional probability.
Here’s the problem: You are a contestant on a game show, and there are three doors in front of you. Behind one of the doors is a car, and behind the other two doors are goats. The game show host, Monty Hall, asks you to pick one of the doors. After you have made your choice, Monty opens one of the other two doors to reveal a goat. Now Monty offers you a chance to switch your choice to the remaining door, or stick with your original choice. The question is: Should you switch your choice or stick with your original choice? The answer is that you should always switch your choice. Here’s why: When you first make your choice, you have a 1/3 chance of picking the door with the car behind it. That means there is a 2/3 chance that the car is behind one of the other two doors. When Monty reveals one of the other two doors to have a goat behind it, he is effectively giving you information about where the car is not. Therefore, the 2/3 chance of the car being behind one of the other two doors now becomes concentrated behind the remaining unchosen door. If you stick with your original choice, you still have a 1/3 chance of winning the car. But if you switch your choice, you now have a 2/3 chance of winning the car, because the car is now more likely to be behind the door you did not originally choose. This problem is counterintuitive because it seems like switching your choice should not make any difference to your chances of winning. But the key to understanding the solution is to recognize that Monty’s decision to reveal a door with a goat behind it is not random – it is based on his knowledge of what is behind the doors. This changes the probability of where the car is, and therefore changes the optimal strategy for the contestant.
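A minimal Monte Carlo version of such a simulator (a sketch, not the actual program behind the page) reproduces the 1/3 versus 2/3 split:

```python
import random

def play(switch, rng):
    """One round of Monty Hall; returns True if the player wins the car."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # Monty opens a door that is neither the player's pick nor the car.
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(0)  # fixed seed for reproducibility
n = 100_000
stay = sum(play(False, rng) for _ in range(n)) / n
swap = sum(play(True, rng) for _ in range(n)) / n
print(f"stay: {stay:.3f}  switch: {swap:.3f}")  # roughly 0.333 vs 0.667
```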
Guess the Number Problem D Guess the Number I am thinking of a number between $1$ and $1000$, can you guess what number it is? Given a guess, I will tell you whether the guess is too low, too high, or correct. But I will only give you $10$ guesses, so use them wisely! Your program should output guesses for the correct number, in the form of an integer between $1$ and $1000$ on a line on its own. After making each guess, you need to make sure to flush standard out. After each guess, there will be a response to be read from standard in. This response is a line with one of the following three words: • “lower” if the number I am thinking of is lower than your guess • “higher” if the number I am thinking of is higher than your guess • “correct” if your guess is correct After having guessed the right answer your program should exit. If you guess incorrectly $10$ times, you won’t get any more chances and your program will be terminated.
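Ten guesses suffice because ceil(log2(1000)) = 10, so binary search always fits in the budget. A sketch that checks this offline (a real submission would print each guess to standard output, flush, and read the judge's replies, rather than call a local function):

```python
import math

def guess_number(secret, lo=1, hi=1000):
    """Binary-search for `secret`; returns the number of guesses used."""
    guesses = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        guesses += 1
        if mid == secret:      # judge would answer "correct"
            return guesses
        elif secret < mid:     # judge would answer "lower"
            hi = mid - 1
        else:                  # judge would answer "higher"
            lo = mid + 1

# The worst case over all 1000 possible secrets never exceeds the budget.
worst = max(guess_number(s) for s in range(1, 1001))
print(worst, math.ceil(math.log2(1000)))  # 10 10
```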
Spectral Galerkin approximation of Fokker–Planck equations with unbounded drift

David J. Knezevic and Endre Süli

The paper is concerned with the analysis and implementation of a spectral Galerkin method for a class of Fokker–Planck equations that arises from the kinetic theory of dilute polymers. A relevant feature of the class of equations under consideration, from the viewpoint of mathematical analysis and numerical approximation, is the presence of an unbounded drift coefficient, involving a smooth convex potential U that is equal to +∞ along the boundary ∂D of the computational domain D. Using a symmetrization of the differential operator based on the Maxwellian M corresponding to U, which vanishes along ∂D, we remove the unbounded drift coefficient at the expense of introducing a degeneracy, through M, in the principal part of the operator. The class of admissible potentials includes the FENE (finitely extendible nonlinear elastic) model. We show the existence of weak solutions to the initial-boundary-value problem, and develop a fully discrete spectral Galerkin approximation of such degenerate Fokker–Planck equations that exhibits optimal-order convergence in the Maxwellian-weighted H^1 norm on D. The theoretical results are illustrated by numerical experiments for the FENE model in two space dimensions.

Oxford University Computing Laboratory
SciPost Submission Page

Crystal gravity
by Jan Zaanen, Floris Balm, Aron J. Beekman

This is not the latest submitted version. This Submission thread is now published as

Submission summary
Authors (as registered SciPost users): Aron Beekman · Jan Zaanen
Submission information
Preprint Link: https://arxiv.org/abs/2109.11325v1 (pdf)
Date submitted: 2021-09-27 11:25
Submitted by: Zaanen, Jan
Submitted to: SciPost Physics

Ontological classification
Academic field: Physics
• Condensed Matter Physics - Theory
• Gravitation, Cosmology and Astroparticle Physics
Specialties:
• High-Energy Physics - Theory
• Statistical and Soft Matter Physics
Approach: Theoretical

We address a subject that could have been analyzed a century ago: what would the universe of general relativity look like if it were filled with solid matter? Solids spontaneously break the translations and rotations of space itself. Only rather recently was it realized, in various contexts, that the order parameter of the solid has a relation to Einstein's dynamical spacetime which is similar to the role of a Higgs field in a Yang-Mills gauge theory. Such a "crystal gravity" is therefore like the Higgs phase of gravity. The usual Higgs phases are characterized by a special phenomenology. A case in point is superconductivity, exhibiting phenomena like the Type II phase, characterized by the emergence of an Abrikosov lattice of quantized magnetic fluxes absorbing the external magnetic field. What is to be expected in the gravitational setting? The theory of elasticity is the universal effective field theory associated with the breaking of space translations and rotations, having a similar status as the phase action describing a neutral superfluid. A geometrical formulation appeared in its long history, similar in structure to general relativity, which greatly facilitates the marriage of both theories.
With the main limitation that we focus entirely on stationary circumstances -- the dynamical theory is greatly complicated by the lack of Lorentz invariance -- we present a first exploration of the remarkably rich and often simple physics of "Higgsed gravity".

Current status: Has been resubmitted

Reports on this Submission

Report #3 by Anonymous (Referee 3) on 2022-3-7 (Invited Report)
• Cite as: Anonymous, Report on arXiv:2109.11325v1, delivered 2022-03-07, doi: 10.21468/SciPost.Report.4637

This paper studies the coupling between (dynamical) gravity and elastic matter. This is an old subject. What is new to some extent in the context of this paper is that the coupling is done at the level of an action and, additionally, care is given to the possibility of defects in elastic matter. This is an interesting addition to this literature. However, I have various comments that I believe should be addressed or given some consideration before this paper is considered for publication.
- The paper is very long, with a very long introduction, and in many ways akin to a review. This long introduction does not help significantly to understand the new contributions of this paper. This would have been a better paper if it had been condensed into 20 pages with the new contributions. Can the authors attempt to make a clear distinction between what is new in this paper and what is not?

There is a wide variety of literature missing, which leads me to disagree with various statements.
- I am in disagreement with the comments in section IA and in the abstract suggesting that this is the first time that such a study has led to a proper understanding of the coupling of elastic degrees of freedom and gravity. Relativistic elasticity is typically used to study neutron star crusts, a literature that seems to have been completely ignored in this paper (e.g. arXiv:2003.05449 and references therein).
In this context, the coupling is usually done at the level of the equations of motion: elastic matter is coupled to Einstein's equations via an appropriate elastic stress tensor.
- The authors motivate their work in IB, page 3, as "aimed at informing this community regarding the role of elasticity in this context", referring to the holographic study of elasticity and in particular references [11,12], and progress "hindered by unfamiliarity with elasticity". Additionally, when referring to relativistic elasticity the authors cite the work of Carter and Quintana. I think that the authors missed crucial references in the context of modern treatments of relativistic elasticity, also in relation to the holographic community. For instance, the work of Fukuma et al. (arXiv:1104.1416 and references therein) and in particular the work of Armas et al. (arXiv:1908.01175, in particular section 2 and appendix A), which provide a more rigorous geometric formulation of relativistic elasticity than any other literature the authors refer to.
- The authors do not mention the role elasticity has played in the realm of quantum matter, in describing pinned crystals and charge density waves. For instance arXiv:cond-mat/0103392 and the more modern treatments such as arXiv:1702.05104 and arXiv:2001.07357.
- This paper is rather close in spirit to a recent paper trying to address more rigorously how matter couples to dynamical gravity. The paper arXiv:1907.04976 is a recent exploration that discusses coupling to fluid degrees of freedom, instead of elastic ones. The overlap between the two works, however, would be visible when discussing a more rigorous approach to this work in which dynamics is taken into account. Could the authors perhaps revise some of their comments and take into account this relevant literature?

Technical details:
- The authors claim to address mostly the stationary case, but fluctuations (gravitons/phonons) are considered.
Could the authors make it more clear when the stationary assumption is being used and when it is not? In relation to this, would it be possible to write the full action that is being considered somewhere, that is, the elastic part with kinetic terms and the Einstein-Hilbert action?
- The "working horse of crystal gravity", as referred to in equation (10), has already been written down, in particular in section 2.2 of arXiv:1908.01175. Eq. (10) corresponds to the linearised version in the Goldstones and background metric of (2.14) of arXiv:1908.01175. Perhaps the authors could refer to this? The difference is that the authors will supplement this action with the Einstein-Hilbert action.
- Related to the above comment, the authors could have used a more geometric language to describe the crystal. In particular, \mathcal{W}_{ma} introduced in equation (10) is just the strain tensor expanded linearly in the Goldstones and background metric. This strain coincides with the linear version of (2.4) of arXiv:1908.01175. It would be useful to comment on what the meaning of \mathcal{W}_{ma} is, as at the moment it appears as if it were an educated guess.
- The language employed throughout the paper is rather non-covariant, Eq. (10) being an example with multiple non-contracted indices, which is a bit at odds with the relativistic nature of Einstein's general relativity. More consideration could have been given to carefully describing the crystal space and the background spacetime, and in particular to mentioning what the geometric structures associated with both are. How are the indices m and a raised/lowered in Eq. (10)?
- The Einstein equations coupled to elastic matter are never written down explicitly, except in very special circumstances in which the action is taken on-shell for specific configurations. This is a bit strange for any relativist.
Could the authors write down the Einstein equations and the equations of motion for the crystal in generality, with the stress tensor for elastic matter that they are considering? This would surely make the connection with earlier literature more clear.
- Extending this work to dynamical settings also requires a non-linear approach to elasticity. I believe that the language in this paper would have to substantially change in order to describe such situations. Could the authors comment on this and on what previous works they expect to be useful in order to do so?

I would recommend the paper for publication once the authors review these comments.

Report #2 by Anonymous (Referee 2) on 2022-1-25 (Invited Report)
• Cite as: Anonymous, Report on arXiv:2109.11325v1, delivered 2022-01-25, doi: 10.21468/SciPost.Report.4240

This paper is an interesting read: it is well written, comprehensive, and overall worth publishing. A significant amount of it brings together and reviews results that are not necessarily new, but fit nicely together and are meant to provide the reader with the necessary background. However, despite the fairly extensive bibliography, the authors miss several opportunities to acknowledge prior work and place their own in the appropriate context. In light of sentences such as "We address a subject that could have been analyzed a century ago", the reader might be left with the mistaken impression that this paper introduces an entirely novel viewpoint on the subject. I recommend that the authors consider the following points and comment on the prior literature when appropriate:

1. First, the Higgsing of gravity has been considered extensively in the high energy physics literature. See the following paper and all the references therein for a sample:
- Bonifacio, James, Kurt Hinterbichler, and Rachel A. Rosen. "Constraints on a gravitational Higgs mechanism." Physical Review D 100.8 (2019): 084017.

2.
The Higgsing of gravity is also tightly related to massive gravity, a topic that has been studied very extensively. Of particular relevance to this paper are Lorentz-violating theories of massive gravity, which were first studied systematically in:
- Dubovsky, Sergei L. "Phases of massive gravity." Journal of High Energy Physics 2004.10 (2004): 076.

3. The authors' skepticism that this topic could be "of relevance to the physics of our universe" is perhaps a bit premature. For instance, interesting models of inflation driven by solids have been proposed, see:
- Gruzinov, Andrei. "Elastic inflation." Physical Review D 70.6 (2004): 063518.
- Endlich, Solomon, Alberto Nicolis, and Junpu Wang. "Solid inflation." Journal of Cosmology and Astroparticle Physics 2013.10 (2013): 011.
- Kang, Jonghee, and Alberto Nicolis. "Platonic solids back in the sky: Icosahedral inflation." Journal of Cosmology and Astroparticle Physics 2016.03 (2016): 050.
- Nicolis, Alberto, and Guanhao Sun. "Scalar-tensor mixing from icosahedral inflation." Journal of Cosmology and Astroparticle Physics 2021.04 (2021): 074.
The last two papers also seem relevant for the discussion of platonic solids in Sec. VIII.

4. Although holography is not the main focus of this paper, it is mentioned in the introduction as one of the motivations for this study. It is worth pointing out that holographic solid states have been found to be related to monopoles in AdS, see e.g.
- Bolognesi, Stefano, and David Tong. "Monopoles and holography." Journal of High Energy Physics 2011.1 (2011): 1-28.
- Esposito, Angelo, et al. "Conformal solids and holography." Journal of High Energy Physics 2017.12 (2017): 1-29.

5. I find the discussion in the first column of p. 33 about the symmetries broken by a solid confusing. The statement that "finite translations are the same as rotations" is at best very misleading, at worst plain wrong.
Furthermore, the reason why solids don't feature Goldstone modes associated with the breaking of rotations (or boosts, for that matter) is well understood as a manifestation of the inverse Higgs mechanism:
- Ivanov, Evgeny Alexeevich, and Victor Isaakovich Ogievetskii. "Inverse Higgs effect in nonlinear realizations." Theoretical and Mathematical Physics 25.2 (1975): 1050-1059.
- Low, Ian, and Aneesh V. Manohar. "Spontaneously broken spacetime symmetries and Goldstone's theorem." Physical Review Letters 88.10 (2002): 101602.
- Nicolis, Alberto, Riccardo Penco, and Rachel A. Rosen. "Relativistic fluids, superfluids, solids, and supersolids from a coset construction." Physical Review D 89.4 (2014): 045002.

6. The dual description of superfluids discussed in Sec. III has been studied extensively. For instance, see:
- F. Lund and T. Regge, Unified Approach to Strings and Vortices with Soliton Solutions, Phys. Rev. D 14 (1976) 1524
- A. Zee, Vortex strings and the antisymmetric gauge potential, Nucl. Phys. B 421 (1994) 111
- Horn, Bart, Alberto Nicolis, and Riccardo Penco. "Effective string theory for vortex lines in fluids and superfluids." Journal of High Energy Physics 2015.10 (2015): 1-58.

7. A good portion of this paper is about defects in gravity, a topic discussed for instance in
- M.O. Katanaev, I.V. Volovich, Theory of defects in solids and three-dimensional gravity, Annals of Physics, Volume 216, Issue 1, 1992, Pages 1-28
- Bennett, D. L., et al. "The relation between the model of a crystal with defects and Plebanski's theory of gravity." International Journal of Modern Physics A 28.13 (2013): 1350044.
It would be important that the authors comment on previous work on defects in gravity and how it relates to their own work.

Finally, I should emphasize that I am listing specific papers only to provide the authors with an entry point to the relevant literature.
This is emphatically not a request for specific citations, although the authors may choose to cite some of the papers above if deemed relevant. Instead, it is a broad encouragement to strengthen their connections with the already existing literature.

Report #1 by Anonymous (Referee 1) on 2022-1-24 (Invited Report)
• Cite as: Anonymous, Report on arXiv:2109.11325v1, delivered 2022-01-24, doi: 10.21468/SciPost.Report.4229

This is an extremely interesting piece of work that deals with a question of (well, to me) tremendous physical interest. We are very familiar with the fact that when a theory with (say) a U(1) spontaneously broken global symmetry is gauged, the putative Goldstone modes are eaten by the gauge field, and the resulting spectrum is massive, with the phenomenology associated with superconductors (expulsion of magnetic flux, quantized vortices, etc.). What then happens if one considers a solid (which spontaneously breaks translational symmetry) in the real world (where — loosely speaking — translational symmetry is gauged by gravity)? Is the graviton gapped? What is a gravitational superconductor? Etc.

This work aims to answer that question in detail: for the most part it does so, explaining in detail the phenomenology of a gravitational theory coupled to a giant space-filling crystal. It does so from a rather sophisticated and effective field-theory point of view. The topic is rich and rather technical due to the proliferation of polarizations; analogies to the U(1) case help a lot here, but there is a lot going on. There are many great nuggets of wisdom that link ideas in "real-life" physics (e.g. associated with things like principles of Weber detector design) to formal considerations of symmetry breaking and effective theory, and there are quite a few surprises (e.g. that it is torsion that is expelled, and that the phonons remain massless though the graviton acquires a mass).
Overall, after a few readings I think I have been able to appreciate the key points, and I believe that the paper is definitely worthy of publication; it addresses a very natural physical question, and it does so in a very creative and interesting way. However, I did find the work rather tough going, and I would like to make a few suggestions to make the paper easier to read, and then another comment on a point that I believe the authors did not address.

1. The authors frequently compare the gravitational case to the U(1) superconductor; however, this analogy is stretched out over several chapters and it can become difficult to keep in memory how it works, simply due to all the moving parts. I suggest that the authors create a table or dictionary relating all the key ingredients (e.g. a_{\mu} maps to h_{\mu\nu}, the torsion maps to the magnetic flux, etc.), together perhaps with a page number for each entry showing where the explanation is given.

2. The paper is not consistent about raising and lowering indices; there are also many different kinds of indices (m, a, i, j, \mu, \nu) and I don't think it is clear how they relate to each other, and which of them should be raised and lowered and which not. (I am not sure this was ever fully explained in the paper.) For a paper dealing with gravity in a fundamental way this is not good. I believe it's mostly okay because gravity is often (but not always) linearized, but I think this should be made explicit, perhaps by consistently using the flat-space metric to raise and lower them. (This will also make the paper more accessible to a gravitational audience.)

3. Finally, a physics question that should be addressed: the authors frequently state that it is unlikely that this will be phenomenologically relevant simply because space is not full of a giant crystal (though they do make interesting statements about dark matter etc). However, could it be relevant even in principle?
I am a bit confused because if I fill space with a giant crystal then once the spatial extent R of the crystal is bigger than its own Schwarzschild radius it will collapse into a black hole (unless the situation is intrinsically time-dependent like in cosmology). If it has energy density \rho then this will happen when G R^{3} \rho = R, i.e. when R = \sqrt{1/(G \rho)}, i.e. parametrically the same as the gravitational screening length (1) (I imagine the shear modulus is controlled by the same physics as the energy density). Thus there doesn’t seem to be a way to separate these scales, and there is a worry that the physics here can *never* be accessed, even in principle. In my eyes this does not invalidate the results, but I think it should be addressed cleanly, and I would be extremely interested in knowing if the above conclusion could be evaded. (In practice there is probably a time-dependent version of this theory that could be relevant for cosmology towards which this construction is a first step). Overall, I am happy to recommend this for publication; however I think the authors should consider the points above to strengthen the paper.
Kinetic Energy - Hydes Energy

Kinetic energy is the energy of an object (mass) in motion. All moving objects have kinetic energy. A simple example of differences in kinetic energy is to compare being hit by a football with being hit by a fast-moving car: the car, being heavier and moving faster, has far more kinetic energy than the ball.

Calculating Kinetic Energy

The kinetic energy of an object can be calculated using the formula below:

KE = ½ × m × v²

where m is the object's mass in kilograms and v is its speed in metres per second, giving the energy in joules.
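With the standard formula KE = ½mv², the car-versus-football comparison can be made concrete. The masses and speeds below are illustrative assumptions, not values from the page:

```python
def kinetic_energy(mass_kg, speed_m_s):
    """KE = 1/2 * m * v^2, in joules."""
    return 0.5 * mass_kg * speed_m_s ** 2

football = kinetic_energy(0.43, 25)  # ~0.43 kg ball kicked at 25 m/s
car = kinetic_energy(1500, 20)       # 1500 kg car at 20 m/s (~72 km/h)
print(football, car)  # 134.375 300000.0

# The car carries thousands of times more kinetic energy than the ball.
assert car > 1000 * football
```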
Math Geometry & Measurements - Circles - The Tutorverse

In this assessment, you will encounter a number of different questions that will assess both your understanding of mathematics and your ability to reason quantitatively. Do not use a calculator in this assessment. You shouldn't need one to answer these questions, and the extra practice with arithmetic (long division, multi-digit multiplication, etc.) will just make you a better math student!

Once you input your answer, click on "Check" to see immediate feedback about the question, including an explanation. You can also skip the question and come back to it later. You will see several answer choices, but only one will be correct. Select the correct answer choice.
• These aren't basic math questions - they're tricky! Read closely and make sure you know what's being asked.
• Do calculations on paper with a pencil. You don't get extra credit for answering in your head!

Click the "Assessment" tab to hide these instructions. You may refer back to these instructions at any time by clicking on "Instructions & Materials."
{"url":"https://thetutorverse.com/assessment/shsat-math-geometry-measurements-circles/","timestamp":"2024-11-13T21:21:41Z","content_type":"text/html","content_length":"366062","record_id":"<urn:uuid:634a8f04-3cca-409c-8ae2-c6cc953b2600>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00171.warc.gz"}
CS 5004 Assignment 3 Geometric Shapes C++ Version solution

In this programming assignment, you will create classes that model 2D geometric shapes: Rectangle2D, Circle2D, and Line2D. These shapes share an abstract base class Shape2D that defines a common set of methods. The child classes also have their own methods that are appropriate for the shapes, and store their data in different ways. Rectangle2D and Circle2D are defined by a bounding box with a top-left x and y, and a width and height. Line2D is defined by its two end points. To understand the differences in how C++ and Java represent class hierarchies, you will implement these classes in both C++ and Java. Here are descriptions of the classes and the methods that they provide.

C++ Version

Define these classes in the CS_5004 namespace. Provide a unit test file and main program in the file "TestShapes.cpp". Use your C++ implementation of Point2D from programming assignment 2.

class Shape2D

Define the pure abstract class Shape2D in a file "Shape2D.h". All of the methods of Shape2D are pure virtual, which means that they are virtual and are not implemented by Shape2D. Instead, child classes of Shape2D must implement these methods in a way that is appropriate for those classes.

Note: You will need to add a "forward reference" to class Rectangle2D in "Shape2D.h" just before defining Shape2D: class Rectangle2D; This is because Rectangle2D is a subclass of Shape2D, but the definition of Shape2D refers to Rectangle2D. C++ is unable to automatically resolve this kind of "circular reference."

Methods (all pure virtual):
- getBounds (const): returns Rectangle2D&; parameter: Rectangle2D& bounds. Returns the bounding box for the shape through the result parameter and the return value.
- setBounds: returns nothing; parameters: float x, float y, float width, float height. Sets the bounding box for the shape with top-left point (x, y) and dimensions (width, height).
- setBounds: returns nothing; parameters: const Point2D& upperLeft, float width, float height. Sets the bounding box for the shape with top-left point upperLeft and dimensions (width, height).
- setBounds: returns nothing; parameter: const Rectangle2D& bounds. Sets the bounding box for the shape.
- containsPoint (const): returns bool; parameter: const Point2D& point. Determines whether the shape contains the point.

class Rectangle2D : Shape2D

Define the concrete class Rectangle2D in a file "Rectangle2D.h" and implement the methods in a file "Rectangle2D.cpp". All constructors, methods and the base class are public. Hint: the constructors can call setBounds().

Fields (all private float): x and y (the coordinates of the top-left point of the rectangle), width and height (the dimensions of the rectangle).

Constructors (all public):
- Rectangle2D(): constructs a rectangle at (0, 0) with dimensions (0, 0).
- Rectangle2D(float x, float y, float width, float height): constructs a rectangle with top-left point (x, y) and dimensions (width, height).
- Rectangle2D(const Point2D& topLeft, float width, float height): constructs a rectangle with the given top-left point and dimensions (width, height).
- Rectangle2D(const Rectangle2D& rect): constructs a rectangle with the location and dimensions of another rectangle.

Methods:
- getX, getY, getWidth, getHeight (const): return float; no parameters. Return the X and Y of the top-left corner, and the width and height, of the rectangle.
- getBounds (const, override): returns Rectangle2D&; parameter: Rectangle2D& bounds. Returns the bounding box for the rectangle through the result parameter and the return value.
- setBounds (override): returns nothing; parameters: float x, float y, float width, float height. Sets the bounding box for the rectangle with top-left point (x, y) and dimensions (width, height).
- setBounds (override): returns nothing; parameters: const Point2D& upperLeft, float width, float height. Sets the bounding box for the rectangle with top-left point upperLeft and dimensions (width, height).
- setBounds (override): returns nothing; parameter: const Rectangle2D& bounds. Sets the bounding box for the rectangle.
- containsPoint (const, override): returns bool; parameter: const Point2D& point. Determines whether the rectangle contains the point.

class Circle2D : Shape2D

Define the concrete class Circle2D in a file "Circle2D.h" and implement the methods in a file "Circle2D.cpp". All constructors, methods and the base class are public. Hint: the constructors can call setBounds().

Fields (all private float): x and y (the coordinates of the top-left point of the circle's bounding box), width and height (the dimensions of the bounding box).

Constructors (all public):
- Circle2D(): constructs a circle at (0, 0) with dimensions (0, 0).
- Circle2D(float x, float y, float width, float height): constructs a circle with top-left point (x, y) and dimensions (width, height).
- Circle2D(const Point2D& topLeft, float width, float height): constructs a circle with the given top-left point and dimensions (width, height).
- Circle2D(const Circle2D& rect): constructs a circle with the location and dimensions of another circle.

Methods:
- getX, getY, getWidth, getHeight (const): return float; no parameters. Return the X and Y of the top-left corner, and the width and height, of the circle's bounding box.
- getBounds (const, override): returns Rectangle2D&; parameter: Rectangle2D& bounds. Returns the bounding box for the circle through the result parameter and the return value.
- setBounds (override): three overloads with the same signatures as Rectangle2D's, i.e. (float x, float y, float width, float height), (const Point2D& upperLeft, float width, float height), and (const Rectangle2D& bounds). Each sets the bounding box for the circle.
- containsPoint (const, override): returns bool; parameter: const Point2D& point. Determines whether the circle contains the point.

class Line2D : Shape2D

Define the concrete class Line2D in a file "Line2D.h" and implement the methods in a file "Line2D.cpp". All constructors, methods and the base class are public.

Fields (all private float): x1 and y1 (the coordinates of the first point of the line), x2 and y2 (the coordinates of the last point of the line).

Constructors (all public):
- Line2D(): constructs a line with points (0, 0), (0, 0).
- Line2D(const Point2D& point1, const Point2D& point2): constructs a line with points point1 and point2.
- Line2D(float x1, float y1, float x2, float y2): constructs a line with points (x1, y1) and (x2, y2).
- Line2D(const Line2D& line): constructs a line with the points of another line.

Methods:
- getPoint1, getPoint2 (const): return Point2D&; parameters: Point2D& point1 / Point2D& point2. Return the first and second points of the line through the result parameter and the return value.
- getBounds (const, override): returns Rectangle2D&; parameter: Rectangle2D& bounds. Returns the bounding box for the line through the result parameter and the return value.
- setBounds (override): three overloads, (const Rectangle2D& bounds), (float x, float y, float width, float height), and (const Point2D& upperLeft, float width, float height). Each sets the bounding box for the line, keeping the relative positions of the two points.
- containsPoint (const, override): returns bool; parameter: const Point2D& point. Determines whether the line contains the point.
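As a minimal sketch of how the Shape2D/Rectangle2D portion of this hierarchy might look, the following is an abridged single-file version. It is not the full assignment solution: only one setBounds overload is shown, and Point2D is reduced to a bare struct here rather than the Point2D class from assignment 2.

```cpp
#include <cassert>

namespace CS_5004 {

struct Point2D { float x, y; };  // stand-in for assignment 2's Point2D

class Rectangle2D;  // forward reference required because Shape2D refers to it

class Shape2D {
public:
    virtual ~Shape2D() = default;
    // Pure virtual methods: child classes must implement these.
    virtual Rectangle2D& getBounds(Rectangle2D& bounds) const = 0;
    virtual void setBounds(float x, float y, float width, float height) = 0;
    virtual bool containsPoint(const Point2D& point) const = 0;
};

class Rectangle2D : public Shape2D {
public:
    Rectangle2D() { setBounds(0, 0, 0, 0); }                 // per the hint:
    Rectangle2D(float x, float y, float w, float h) {        // constructors
        setBounds(x, y, w, h);                               // call setBounds()
    }

    float getX() const { return x; }
    float getY() const { return y; }
    float getWidth() const { return width; }
    float getHeight() const { return height; }

    // Returns the bounding box through the result parameter and return value.
    Rectangle2D& getBounds(Rectangle2D& bounds) const override {
        bounds.setBounds(x, y, width, height);
        return bounds;
    }
    void setBounds(float x_, float y_, float w, float h) override {
        x = x_; y = y_; width = w; height = h;
    }
    bool containsPoint(const Point2D& p) const override {
        return p.x >= x && p.x <= x + width &&
               p.y >= y && p.y <= y + height;
    }

private:
    float x, y, width, height;
};

}  // namespace CS_5004
```

Note the design point the assignment calls out: getBounds both fills in the caller-supplied Rectangle2D and returns a reference to it, which lets callers chain the result without an extra copy.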
{"url":"https://jarviscodinghub.com/product/assignment-3-geometric-shapes-c-version-solution/","timestamp":"2024-11-02T05:15:47Z","content_type":"text/html","content_length":"115192","record_id":"<urn:uuid:4e3578cb-8c9c-4de6-84d3-6d1edf17b76b>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00792.warc.gz"}
Unhide hidden data using jitter in the R package ggplot2 Posted on by zev@zevross.com · 2 Comments

When you're plotting a lot of data, overplotting can sometimes obscure important patterns. In situations like this it can be useful at the exploratory data analysis phase to 'jitter' the data so that underlying data can be viewed, making it easier to see patterns. In addition, jittering can be a way of 'anonymizing' spatial data while maintaining, for example, neighborhood- or county-level patterns. Although jittering can be an extremely useful tool, keep in mind that from a data visualization perspective jittering means adding additional noise to your data. This additional variation can confuse your audience and lead to misinterpretation. As a result, we jitter in-house fairly regularly but only sparingly jitter data in graphics we share with clients or the public.

Jitter data – start with the base package

In a recent project we found jittering to be particularly useful. We had a dataset with two columns: one was the actual number of air pollutant samples taken at air monitoring sites in one month and the other was the required number of samples for the monthly value to be considered valid (at a completeness threshold of 75%). Although we had 10,000 data points, there were only 129 unique combinations of our data points. As a result, the plot of actual vs required was very misleading:

## count reqSamples ## 1 31 23 ## 2 28 23 ## 3 31 23 ## 4 30 23 ## 5 30 23 ## 6 30 23

plot(sitedata$reqSamples, sitedata$count, xlab="Required Samples for Valid Value", ylab="Actual Samples", main="Hard to believe this is\nactually 10,000 points")

In a situation like this, my tendency is to take a quick look at the data with jittering.
But I always found using R’s jitter function to be a little messy because you need to apply the jitter function to both your X and Y variables like this: plot(jitter(sitedata$reqSamples, factor=1.1), jitter(sitedata$count, factor=1.1), xlab="Required Samples", ylab="Actual Samples") Although this is clearly not a ton of complicated code, I still wince at having to apply the jitter function twice so when I started to use the package ggplot2 more regularly I was excited to see a jitter argument that simplifies plotting. Package ggplot2 to the rescue The R package ggplot2 has a default look that is much more attractive and the creator, Hadley Wickham, thoughtfully added an argument to the geom_point() function to implement jittering of points more easily and elegantly. Here is the same plot as above, but much nicer: ggplot(sitedata, aes(reqSamples, count))+ geom_point(position = position_jitter(w = 0.3, h = 0.3))+ xlab("Required Sampling Frequency for Valid Monthly Values")+ ylab("Actual Sampling Frequency") Add a little style The plot above is nicer but could benefit from some additional styling. I particularly like using ggplot's alpha argument to add transparency. I also find the default title and axis labels to be too close to the plot itself so you can use vjust to adjust this. ggplot(sitedata, aes(reqSamples, count))+ ggtitle("Required vs Actual Sampling Frequency\n for Air Pollution Monitors")+ geom_point(position = position_jitter(w = 0.3, h = 0.3), alpha=0.1, color="firebrick")+ xlab("Required Sampling Frequency for Valid Monthly Values")+ ylab("Actual Sampling Frequency")+ theme(plot.title = element_text(lineheight=.8, face="bold", vjust=1), axis.title.x = element_text(vjust=-0.5), axis.title.y = element_text(vjust=0.3)) Much better! Jitter spatial data There are two situations where I might also consider jittering spatial data. First and most obvious is when I have overlapping points. 
In this situation a little jittering can help visualize the data though, again, be careful because the added noise can mislead those who view your map. In the second situation, you might have sensitive data like addresses that you want to jitter to protect anonymity. I'll demonstrate this second situation using a tiny dataset I created by hand using a couple of store locations in New York City's SoHo neighborhood: the Apple Store, Kidrobot (a cute pop-art store), Puck Fair (a favorite bar) and a great bookstore, McNally Jackson. For the record, I got the coordinates the old-fashioned way: by right-clicking on the new Google Maps and choosing 'What's Here', which gives the coordinates. Here we create the data and map it the traditional way:

tmp <- data.frame(
  lon = c(...),  # longitude values not preserved in this copy of the post
  lat = c(...),  # latitude values not preserved in this copy of the post
  name = c("Apple Store", "Kidrobot", "Puck Fair", "McNally Jackson Books"))

qmap("Prince St & Mercer St, New York City", zoom = 16, maptype="hybrid")+ geom_point(aes(x=lon, y=lat, color=name), data=tmp, size=5)+ theme(legend.title=element_blank()) # turn off legend title

And then we jitter a tiny bit. The jitter amounts here are in the units of the data, which in this case are degrees. Since a degree of latitude is approximately 110 km, a jitter of 0.002 degrees corresponds to roughly 220 meters.

qmap("Prince St & Mercer St, New York City", zoom = 16, maptype="hybrid")+ geom_point(aes(x=lon, y=lat, color=name), data=tmp, size=5, position=position_jitter(w = 0.002, h = 0.002))

And you can see that the points were moved randomly in both the X and Y direction, obscuring (on purpose) the actual point locations. Jittering data is a useful way to reveal patterns in your data that might be obscured by overplotting. I've found that using the jitter argument in R's ggplot2 package is cleaner and easier to use than the jitter function in the base package, but either option works well. Use jittering as needed in your exploratory data analysis but be cautious when using it in graphics for public consumption as the additional noise may confuse your audience.
2 responses

1. Adding jitter is a random operation, so every time you run your programme the image will come out different. To avoid this, you may initialize the pseudo-random number generator with a hard-coded value like that: set.seed(20061001, "Mersenne-Twister"). For reference: I've taken that from http://xtof.perso.math.cnrs.fr/pdf/ReproducibleAnalysis.pdf.

2. great post~ I just learnt the package ggplot2, and found it was amazing to create gorgeous and practical graphs.
{"url":"http://zevross.com/blog/2014/05/05/unhide-hidden-data-using-jitter-in-the-r-package-ggplot2/","timestamp":"2024-11-04T21:32:59Z","content_type":"text/html","content_length":"556381","record_id":"<urn:uuid:5daabc4c-fa06-4386-8d37-411e66f82c5f>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00480.warc.gz"}
Batch Cube Root Calculator: Free Online Tool for Multiple Cube Roots | ThinkCalculator Batch Cube Root Calculator What is a Cube Root? A cube root of a number is a value that, when multiplied by itself twice, gives the original number. In other words, it's the reverse operation of cubing a number. For any real number x, the cube root of x is the number y such that y³ = x. How to Calculate Cube Root Calculating a cube root can be done through various methods: • Using a calculator with a cube root function • Applying the exponent rule: ∛x = x^(1/3) • Using prime factorization for perfect cubes • Employing numerical methods like Newton's method for approximation The formula for the cube root of a number x is: \[ y = \sqrt[3]{x} \] Which is equivalent to: \[ y^3 = x \] Where x is the number we're finding the cube root of, and y is the result. Calculation Steps 1. Identify the number x for which you want to calculate the cube root 2. If x is a perfect cube, find the number that, when cubed, equals x 3. If x is not a perfect cube, use a calculator or computational method to find ∛x 4. For complex numbers, there are three cube roots. Find all three if necessary 5. Verify your result by cubing it, which should equal the original number x Let's calculate the cube root of 27: 1. We want to find y such that y³ = 27 2. We recognize that 3 × 3 × 3 = 27 3. Therefore, ∛27 = 3 4. To verify: 3³ = 3 × 3 × 3 = 27, which confirms our result Thus, the cube root of 27 is 3. Visual Representation This cube has a volume of 27. Each side of the cube has a length of 3, which is the cube root of 27.
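The exponent-rule method above (∛x = x^(1/3)) can be applied directly in code. A small Python sketch follows; note that Python's `**` operator returns a complex value for a negative base raised to a fractional power, so the sign is peeled off first to keep the result real:

```python
def cube_root(x: float) -> float:
    """Real cube root via the exponent rule: cbrt(x) = x ** (1/3).

    The sign is handled separately because a negative float raised to
    a fractional power would otherwise produce a complex number.
    """
    if x < 0:
        return -((-x) ** (1.0 / 3.0))
    return x ** (1.0 / 3.0)

# Worked example from the text: the cube root of 27 is 3.
y = cube_root(27)
# Verify by cubing, as in step 5 of the calculation steps:
assert abs(y ** 3 - 27) < 1e-9
```

Because 1/3 is not exactly representable in floating point, the result may differ from the exact cube root by a tiny rounding error, which is why the verification uses a tolerance rather than exact equality.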
{"url":"https://www.thinkcalculator.com/algebra/root3batch.php","timestamp":"2024-11-08T21:27:04Z","content_type":"text/html","content_length":"27464","record_id":"<urn:uuid:47461ac5-9397-46c1-8f1c-dd5de4cfc13e>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00029.warc.gz"}
Lesson #3: Equations Explained

Now on to the fun part! Let's take a look at some of the fundamental equations and break them down. Our goal is to understand the rationale behind the equations, increasing our confidence in applying them to save Dr. Vanko. But keep in mind, a miscalculation could lead to grave outcomes (no pressure!).

Loading Dose (LD): 20-25 mg/kg x actual body weight, OR desired peak x Vd
Volume of Distribution (Vd): 0.7 L/kg x actual body weight
Elimination Constant (Ke): 0.00083(CrCl) + 0.0044
Interval (T): ln(desired peak/desired trough)/Ke + t'
Maintenance dose (MD): LD x (1 - e^-Ke(T))
Peak (Cpeak): (Dose/t')(1 - e^-Ke(t')) / Vd(Ke)(1 - e^-Ke(T))
Trough (Cmin): Cpeak(e^-Ke(T - t'))

Vd = volume of distribution, Ke = elimination constant, T = interval, t' = infusion time

LD = 20-25 mg/kg OR LD = desired peak x Vd

Loading doses are used to get the patient to therapeutic drug concentrations quicker. The thought behind the practice is to 'fill the tank' since we are starting from nothing. Loading doses are recommended in severe infections (e.g. sepsis, bacteremia, meningitis) where rapid attainment of therapeutic drug concentrations is desired to prevent adverse outcomes. In general, we want to use the lower loading dose range (20 mg/kg) in indications with a lower trough goal (10-15 mcg/mL) and the higher range (25 mg/kg) in indications with a higher trough goal (15-20 mcg/mL).

Goal peak 30 mcg/mL, goal trough 10-15 mcg/mL: LD = 20 mg/kg x total body weight*
Goal peak 35 mcg/mL, goal trough 15-20 mcg/mL: LD = 25 mg/kg x total body weight*
*Total body weight = actual body weight

Another method of calculating the loading dose is with the equation LD = desired peak x Vd. Taking a closer look at this equation, you will find that it is familiar. It is the same equation as the volume of distribution equation we previously reviewed, just rearranged to find the dose.
or Dose (mg) = Concentration (mcg/mL, equivalent to mg/L) x Volume of distribution (L)

In this example, knowing the population estimated volume of distribution and the desired concentration (or peak), you are trying to find the dose (or amount of sugar dissolved in the container, as discussed in lesson #1) to achieve that. This equation allows you to target a goal peak. Common goal peaks range from 30-40 mcg/mL. It is important to follow your institution's policy as each institution will have specific indications for loading as well as the maximum dose allowed. As discussed previously, the equation for determining the volume of distribution is Vd = Dose/Concentration. Why then is the equation for the volume of distribution of vancomycin 0.7 L/kg? This equation was derived from population kinetic studies, in which a large sample of patients was administered vancomycin and then plasma concentrations were drawn and compared to the doses given. The average volume of distribution per weight of the patients in this study resulted in the commonly used Vd equation 0.7 L/kg. The volume of distribution can vary significantly between patients, with ranges reported in the literature from 0.5-1 L/kg, making determinations of an exact dose rather tricky. That is why population kinetics is an estimation all around! 😎

Population Ke = 0.00083(CrCl) + 0.0044

Ke is the elimination constant, or the fraction of vancomycin that is eliminated from the body per unit of time. Because vancomycin is primarily eliminated via the kidneys, the elimination constant is directly related to creatinine clearance. This equation is derived similarly to the volume of distribution equation. Population kinetic studies were conducted in a random group of people with Ke plotted against creatinine clearance, resulting in a line of best fit (y = mx + b) shown below.

Estimated half-life (t1/2) = 0.693/Ke

Half-life is defined as the time it takes for the concentration of the drug to decrease by 50% in the body, as shown in the graph below.
After 4-5 half-lives, drug concentrations reach negligible levels in the body. If you have taken chemistry, you may remember learning how to calculate the half-life of a radioactive substance (ex: uranium) as it decays, or using half-life to determine the age of a piece of coal. The half-life equation for vancomycin is essentially the same. The numerator is the natural log of 2. Knowing what the patient's elimination constant is, you are calculating how long it will take the serum concentration of drug to decrease by half. A simplified way to think of it is shown below.

Goal peak = 30 mcg/mL -> after 1 half-life, concentrations will decrease by 50% -> 15 mcg/mL. Redose the patient after each half-life to keep drug concentrations within the goal trough of 15 mcg/mL.

The half-life can be used to determine the dosing interval. General rule: you do not want to use an interval that is LESS than your calculated half-life, as this can lead to overaccumulation and high drug concentrations (example: using a dosing interval of every 12 hours when your t1/2 was 16 hours). However, if you make the clinical judgment to use an interval that is less than your calculated half-life, decrease your maintenance dose to 10-12 mg/kg to ensure trough levels stay within the goal. If you're unsure, you can input your calculated maintenance dose and interval into the estimated peak and trough equations to get a better idea.

Estimated interval (T) = ln(desired peak/desired trough)/Ke + t', where t' = infusion time (usually 1 hour)

To calculate the interval, the equation requires you to input the desired peak (Cpeak) of 30-40 mcg/mL. Since the half-life and interval equations are both dependent on clearance (Ke) to determine how much of the drug will be eliminated from the body, they produce similar results. For the most part, if you memorize the half-life equation (t1/2 = 0.693/Ke), it should be around the same as your interval equation. You just have to add in your infusion time.
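The Ke, half-life, and interval relationships above can be checked numerically. This is a small Python sketch; the CrCl value of 82.7 mL/min is an illustrative assumption, chosen so that Ke comes out near the 0.073 per hour used later in this lesson:

```python
import math

def elimination_constant(crcl: float) -> float:
    """Population Ke (per hour) from creatinine clearance: 0.00083*CrCl + 0.0044."""
    return 0.00083 * crcl + 0.0044

def half_life(ke: float) -> float:
    """t1/2 = ln(2)/Ke = 0.693/Ke, in hours."""
    return math.log(2) / ke

def interval(peak: float, trough: float, ke: float, t_inf: float = 1.0) -> float:
    """Estimated interval T = ln(peak/trough)/Ke + infusion time, in hours."""
    return math.log(peak / trough) / ke + t_inf

ke = elimination_constant(82.7)   # assumed CrCl -> Ke of about 0.073/hr
t_half = half_life(ke)            # about 9.5 hours
tau = interval(30, 15, ke)        # about 10.5 hours
```

Notice that with a peak/trough goal ratio of 2 (30/15), ln(peak/trough) is exactly ln(2), so the interval really is the half-life plus the infusion time, just as the text says. Both would be rounded to the nearest standard interval, here every 12 hours.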
Let's test it out for fun! It is important to round your calculations to the nearest standard intervals (e.g. every 8 hours, 12 hours, or 24 hours). The maintenance dose equation calculates the amount of drug lost from the loading dose at the end of the interval previously determined. This amount becomes your maintenance dose. For example:

• You administer a loading dose of 2,000 mg.
• Based on your calculations, the patient's elimination constant and interval come out to be Ke = 0.073 and T = 12 hours. Remember that the elimination constant and interval equations have already taken into account your desired peak (30 mcg/mL) and trough (15 mcg/mL).
• Approximately 12 hours after you have given your 2,000 mg loading dose, the amount of drug eliminated will be 1,167 mg, where the drug concentration is expected to be around 15 mcg/mL.
• Your maintenance dose would be 1,250 mg, rounded to the nearest 250 mg increment.

Peak (Cpeak) = (Dose/t')(1-e^-Ke(t')) / Vd(Ke)(1-e^-Ke(T))

After calculating your maintenance dose and interval rounded to the nearest increments, you want to determine what your Cpeak will be. This requires you to utilize the Cpeak equation above. It's a pretty busy equation, so it's okay if your eyes are spinning out of your head now. Simply put, the numerator describes the drug infused into the circulation over the infusion time, and the denominator describes its elimination from the body. When these two processes reach equilibrium, you have achieved the maximum blood concentration in the body. The amount that goes in = the amount eliminated 🧘.

Trough (Ctrough) = Cpeak (e^-Ke(T-t'))

After calculating the estimated peak (Cpeak), you can calculate what the trough level will be at the end of your dosing interval. This gives you the estimated trough at steady state. Great job finishing Lesson #3: Vancomycin Equations! You are over halfway done with your mission. Now on to the next quiz to see how well you know these equations.
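Putting the pieces together, this Python sketch reproduces the worked example above (2,000 mg load, Ke = 0.073, T = 12 hours) and then runs the rounded 1,250 mg maintenance dose through the estimated peak and trough equations. The 80 kg body weight is an assumption, chosen because 25 mg/kg x 80 kg matches the 2,000 mg load:

```python
import math

KE, T, T_INF = 0.073, 12.0, 1.0   # per hour, hours, hours
WEIGHT = 80.0                     # kg (assumed: 25 mg/kg x 80 kg = 2,000 mg load)
VD = 0.7 * WEIGHT                 # volume of distribution, 56 L

loading_dose = 2000.0
# Amount eliminated over one interval; this becomes the maintenance dose.
eliminated = loading_dose * (1 - math.exp(-KE * T))   # about 1,167 mg
maintenance = 250 * round(eliminated / 250)           # rounds to 1,250 mg

# Estimated steady-state peak and trough for the rounded regimen:
peak = (maintenance / T_INF) * (1 - math.exp(-KE * T_INF)) / (
    VD * KE * (1 - math.exp(-KE * T)))                # about 37 mcg/mL
trough = peak * math.exp(-KE * (T - T_INF))           # about 16.5 mcg/mL
```

The estimated peak and trough land inside the lesson's goal ranges (30-40 mcg/mL and 15-20 mcg/mL), which is exactly the sanity check the text recommends after rounding the dose and interval.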
If you obtain 80% on the quiz, you will unlock a special token (vancomycin equation cheat sheet) to help you with the rest of your journey. Good luck!
{"url":"https://www.memory-pharm.com/lesson-3-equations-explained/","timestamp":"2024-11-03T06:34:32Z","content_type":"text/html","content_length":"360804","record_id":"<urn:uuid:e13baf80-5e3a-4fe4-9137-18168ae02b73>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00465.warc.gz"}
American Mathematical Society

The dual Yamada–Watanabe theorem for mild solutions to stochastic partial differential equations
Author: S. Tappe
Journal: Theor. Probability and Math. Statist. 105 (2021), 51-68
MSC (2020): Primary 60H15; Secondary 60H10, 60H05
DOI: https://doi.org/10.1090/tpms/1155
Published electronically: December 7, 2021

Abstract: We provide the dual result of the Yamada–Watanabe theorem for mild solutions to semilinear stochastic partial differential equations with path-dependent coefficients. An essential tool is the so-called "method of the moving frame", which allows us to reduce the proof to infinite dimensional stochastic differential equations.

Additional Information

Affiliation: Department of Mathematical Stochastics, Albert Ludwig University of Freiburg, Ernst-Zermelo-Straße 1, D-79104 Freiburg, Germany
Email: stefan.tappe@math.uni-freiburg.de
Keywords: Stochastic partial differential equation, martingale solution, mild solution, dual Yamada–Watanabe theorem, uniqueness in law, joint uniqueness in law, pathwise uniqueness
Received by editor(s): May 18, 2021
Additional Notes: The author gratefully acknowledges financial support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – project number 444121509
Article copyright: © Copyright 2021 Taras Shevchenko National University of Kyiv
1-Pass relative-error Lp-sampling with applications. SODA 2010 conference paper. For any p ∈ [0, 2], we give a 1-pass poly(ε^-1 log n)-space algorithm which, given a data stream of length m with insertions and deletions of an n-dimensional vector a, with updates in the range {-M, -M + 1, ⋯ , M - 1, M}, outputs a sample of [n] = {1, 2, ⋯ , n} for which, for all i, the probability that i is returned is (1 ± ε)|a_i|^p / F_p(a) ± n^-C, where a_i denotes the (possibly negative) value of coordinate i, F_p(a) = Σ_{i=1}^n |a_i|^p = ‖a‖_p^p denotes the p-th frequency moment (i.e., the p-th power of the L_p norm), and C > 0 is an arbitrarily large constant. Here we assume that n, m, and M are polynomially related. Our generic sampling framework improves and unifies algorithms for several communication and streaming problems, including cascaded norms, heavy hitters, and moment estimation. It also gives the first relative-error forward sampling algorithm in a data stream with deletions, answering an open question of Cormode et al. Copyright © by SIAM.
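The guarantee above targets the ideal L_p distribution, in which coordinate i is drawn with probability |a_i|^p / F_p(a). A minimal offline sketch of that target distribution follows (this is not the 1-pass streaming algorithm itself; the function name and example vector are illustrative):

```python
import numpy as np

def lp_sample(a, p, rng):
    """Draw one coordinate i of `a` with probability |a_i|^p / F_p(a).

    This is the ideal (offline) target distribution that the 1-pass streaming
    algorithm approximates up to a (1 +/- eps) factor and an n^-C additive term.
    """
    weights = np.abs(np.asarray(a, dtype=float)) ** p
    probs = weights / weights.sum()      # F_p(a) = sum_i |a_i|^p
    return rng.choice(len(probs), p=probs), probs

rng = np.random.default_rng(0)
a = [1.0, -2.0, 3.0]                     # signed coordinates are allowed
i, probs = lp_sample(a, p=2, rng=rng)
print(probs)                             # proportional to |a_i|^2: 1/14, 4/14, 9/14
```

The streaming version achieves (approximately) the same output distribution in one pass and small space, which the offline version above plainly cannot.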
Video library: N. A. Vavilov, Bounded generation of Chevalley groups, and around
Abstract: We state several results on bounded elementary generation and bounded commutator width for Chevalley groups over Dedekind rings of arithmetic type in positive characteristic. In particular, Chevalley groups of rank $\ge 2$ over polynomial rings ${\mathbb F}_q[t]$ and Chevalley groups of rank $\ge 1$ over Laurent polynomial rings ${\mathbb F}_q[t,t^{-1}]$, where ${\mathbb F}_q$ is a finite field of $q$ elements, are boundedly elementarily generated. We sketch several proofs, and establish rather plausible explicit bounds, which are better than the known ones even in the number case. Using these bounds we can also produce sharp bounds of the commutator width of these groups. We also mention several applications (such as Kac–Moody groups and first order rigidity) and possible generalisations (verbal width, strong bounded generation, etc.) The talk is based on joint work with Boris Kunyavskii and Eugene Plotkin.
Language of the talk: English
WACC for Embelton Limited Posted by Sabrina Warren on Feb-27-2023 Weighted average cost of capital is an important measure used by firms to make capital budgeting decisions and by investors to make judgments about stock valuation and invest accordingly (Moore, 2016). WACC of Embelton Limited is calculated for firms that include both equity and long-term debt in their capital structure. Firms can use WACC to ascertain the optimal capital structure and choose between projects based on the return they will generate (Wecker & Reilly, 2009). On the other hand, IRR for Embelton Limited is the discount rate at which the investment breaks even and is the rate at which the present value of all future cash inflows is equal to the initial investment in the project (Reniers, Talarico, & Paltrinieri, 2016). Overview of WACC WACC is the hurdle rate used to find the NPV of a business or an investment project by discounting the future cash flows and terminal value of the firm or project (Visconti, 2021). 'It finds the cost of each part of the firm's capital structure by weighing debt and equity in the firm's capital structure' (CFI Team, 2022). Introduction to WACC model The weighted average cost of capital is the opportunity cost of capital for a firm as it is the rate of return a company expects from an investment. The cost of debt and equity are weighted by their proportion of debt and equity to get this discount rate. The more sources of financing a company has, the more complex the WACC calculation becomes. WACC is an essential part of DCF analysis, and it shows the relationship between the components of the capital of a firm, in particular debt and equity (Dikov, 2020). Application of the WACC model The application of WACC to a firm's capital budgeting decision is explained by taking the example of the public limited company Embelton Limited, which is making an investment and capital allocation decision.
The capital structure of the firm is taken along with the cost of equity and cost of debt for Embelton Limited to find its WACC. The WACC model assumes that additional debt is taken upon by the company within the context of capitalization. It is assumed that one form of financing replaces another, such as bonds replacing equity shares, while the total capital of the firm remains constant (Karagiannidis, Berry, & Betterton, 2014). Thus it is assumed that Embelton Limited replaces any additional debt it takes upon with a reduction in its shares issued. WACC Formula WACC = (E/V) x Re + (D/V) x Rd x (1 - T), where: Re = Cost of Equity, Rd = Cost of Debt, E = Total Market Value of Equity, D = Total Market Value of Debt, V = Total Value of the Company, i.e., Equity + Debt value, T = Tax Rate applicable to the company. Capital Structure The calculation of WACC takes into account the target capital structure that has a weightage of both debt and equity. 'Most firms finance their assets and projects using both debt and equity. The target capital structure is the mix of preferred stock, common stock, and long-term debt that a company is striving to achieve that will maximize its share price.' For Embelton Limited, the amount of debt and equity it will take upon for new investments is considered to find its target debt structure. Financial performance in a company is maximized when the WACC of its financial structure is minimized (Bărbuţă-Mişu & Valentina, 2017). Equity shares provide a firm with greater financial flexibility; however, they have a higher cost of capital. On the other hand, debt, although it obligates a company to fixed future payments, is a comparatively cheaper form of capital (CFA Institute, 2022).
The target capital structure of Embelton Limited is shown below:
Target Capital Structure
Debt to Total Capital: 31%
Equity to Total Capital: 69%
Debt to Equity Ratio: 44.9%
Market value of debt The book value of debt in financial statements is different from the market value of debt that is used in the WACC calculation. Market value of debt is the price investors are willing to pay to buy a company's debt (Corporate Finance Institute Team, 2022). A company's debt can be both traded in the market or non-traded. An analyst finds the market value of debt by taking coupon payments equal to the interest expense on the entire debt and taking a weighted average maturity of all debt (Corporate Finance Institute Team, 2022). Market value of equity 'The market value of equity, also represented by market capitalization, is found by multiplying the outstanding shares of a company with the current market price of those shares. The market price of shares is the price investors are currently willing to pay to buy shares in the stock market.' Embelton Limited has 10,000 outstanding shares, and these shares are currently traded in the stock market at US$23 per share. The market value of Embelton Limited's total equity capital would therefore be $230,000 (Total outstanding shares of the Company Embelton Limited * Market price of each share at the current date). Cost of equity Cost of equity capital for a firm can be found using either CAPM or the dividend discount model. The cost of equity is the most troublesome component of WACC as it often gives a subjective estimate with a number of shortcomings (Moore, 2016).
Cost of equity can be found using the Dividend Discount model through the following formula: Rs = D1/Po + g, where: Rs = The company's cost of equity capital, D1 = Dividend per share expected over the next year, Po = Current stock price, g = sustainable growth rate for the company (Cost of Equity, 2022). The CAPM model is based on two key assumptions: that security markets are efficient and competitive and that security markets primarily consist of rational, risk-averse investors (Mullins, 2022). Cost of equity can be found using CAPM through the following formula: Rs = Rf + βs(Rm – Rf), where: Rs = The company's cost of equity capital, Rf = The risk-free rate, Rm = The expected return on the stock market as a whole, βs = The stock's beta (Mullins, 2022). The risk-free rate is the return an investor can expect to earn on a security with zero risk. It is mostly the rate of 10-year government bonds and securities and is adjusted for inflation (Vaidya & Thakur, 2022). Since Embelton Limited operates in the USA, the 10-year Treasury rate for US government bonds is taken, which is 2.81% (YCharts, 2022). The market risk premium is calculated as the difference between the stock market return and the risk-free rate. Past performance of the market and company is analyzed to find a market risk premium of 5.3% for Embelton Limited. Beta measures the total systematic risk inherent in the entire financial market and is undiversifiable (Analystprep, 2020). It is found by comparing a public company's returns over time to the returns of a stock market. The levered beta for Embelton Limited is 1.67. The cost of equity using the CAPM model for Embelton Limited is calculated as follows:
Cost of Equity
Risk-free rate: 2.81%
Market risk premium: 5.3%
Levered Beta: 1.67
Cost of Equity: 11.66%
Cost of debt 'Cost of debt used in calculating WACC is the least rate of return a debt holder is willing to accept in exchange for bearing risk.
Cost of debt is the effective interest rate that a company pays on its current liabilities to its debt holders and creditors (Thakur, 2022).' Calculating the cost of debt for a company is an easier and less subjective process than the cost of equity. The average yield to maturity of all outstanding debt can be taken, while for private companies, a credit rating can be used and a suitable spread added to it. Debt of Embelton Limited includes both long-term and short-term interest-bearing debt, which includes bonds, bank loans, and mortgage payments. An average of the yield to maturity of all outstanding debt can be taken. The cost of debt can also be found by using the following formula: Rd = Rf + Debt risk premium, where: Rd = cost of debt of the firm, Rf = Risk-free rate, Debt risk premium = Risk of the firm defaulting. Since interest expenses are deductible and lead to tax savings, the after-tax cost of debt is used in calculating WACC. The tax rate applicable to Embelton Limited is 37%. The after-tax cost of debt is 4.66% (7.4% * (1 - 0.37)).
Cost of Debt
Cost of Debt: 7.4%
Taxes: 37.0%
After Tax Cost of Debt: 4.66%
WACC Calculation for Embelton Limited The final WACC calculation for Embelton Limited is shown below:
WACC Calculation
Target Capital Structure
Debt to Total Capitalization: 31%
Equity to Total Capitalization: 69%
Debt to Equity Ratio: 44.9%
Cost of Equity
Risk-free rate: 2.81%
Market risk premium: 5.3%
Levered Beta: 1.67
Cost of Equity: 11.66%
Cost of Debt
Cost of Debt: 7.4%
Taxes: 37%
After Tax Cost of Debt: 4.66%
WACC: 9.49%
The WACC is found by multiplying the cost of debt and equity by the relevant market weights and adding the products together. The WACC for Embelton Limited is 9.49% (8.05% + 1.44%). IRR for Embelton Limited IRR is used by investors and managers for investment and capital budgeting decisions.
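The arithmetic above can be reproduced in a few lines. This is only a sketch of the calculation described in the text, using the figures quoted for Embelton Limited (all rates in percent):

```python
# Figures quoted in the text for Embelton Limited (rates in percent):
risk_free = 2.81        # 10-year US Treasury rate
market_premium = 5.3    # market risk premium
beta = 1.67             # levered beta

cost_of_equity = risk_free + beta * market_premium   # CAPM: Rs = Rf + beta*(Rm - Rf)

cost_of_debt = 7.4
tax_rate = 0.37
after_tax_cost_of_debt = cost_of_debt * (1 - tax_rate)   # interest is tax-deductible

e_weight, d_weight = 0.69, 0.31      # target capital structure (E/V and D/V)
wacc = e_weight * cost_of_equity + d_weight * after_tax_cost_of_debt

print(round(cost_of_equity, 2))           # 11.66
print(round(after_tax_cost_of_debt, 2))   # 4.66
print(round(wacc, 2))                     # 9.49
```

Note how each component feeds the final weighted sum: the CAPM rate is weighted by the equity share and the tax-shielded debt rate by the debt share.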
When making a capital budgeting or investment decision, firms calculate the NPV or IRR to determine if the initial investment will be recovered by subsequent cash inflows and whether the project will result in a positive return. In calculating both the NPV and IRR for an investment, cash flows relevant to the project are used, and both use discounted cash flow analysis to find the worth of future cash flows at the present date by taking into account the time value of money (Reniers, Talarico, & Paltrinieri, 2016). Calculating IRR IRR is the discount rate at which NPV is zero and hence cannot be isolated by a simple formula; the trial and error method or programmed software is used to calculate IRR (CFI, 2022). IRR Formula The following formula is used to find IRR, where NPV is set to zero: 0 = Σt [Ct / (1 + IRR)^t] - Co, where: Ct = Net cash inflow during period t, Co = Total initial investment cost, IRR = The internal rate of return, t = The number of time periods. Accept/Reject criteria for projects 'The greater the IRR of a project, the higher will be the return generated from the project. Once IRR is calculated it is compared to the cost of capital of the firm. An investment is accepted if IRR is greater than that project's cost of capital. However, if the IRR of a project is found to be lower than the cost of capital of the firm, the project is rejected. This is because, at a discount rate below the IRR, the project generates a positive return, while above the IRR the return is negative as the financing cost of the investment is not recovered.' Comparison with NPV method Both the NPV and IRR can be calculated to rank financial projects in order to aid management in making investment decisions with the limited capital they have on hand. While NPV discounts all future cash flows using WACC to find the present value of the project at the present date, the IRR method calculates the discount rate needed to make the project break even (Green, 2021).
NPV uses WACC, which is the discount rate relevant to the specific company, while IRR compares the rate of a particular project to the discount rates of other investments (Green, 2021). Both these methods are used by managers and investors to determine the profitability of an investment. Often the IRR approach is preferred because a percentage measure is easier to comprehend. The methods are often used in combination to allow for greater reliability. Criticisms of IRR There are certain flaws in the IRR approach. Multiple and complex IRRs can arise. The IRR decision might not be consistent with the NPV decision, and the IRR criterion may not be applicable to variable costs of capital (Magni, 2010). IRR can also be misleading in its portrayal of returns, such as by being artificially inflated over short terms, and hence cannot be relied on alone to make an investment decision. Additionally, the internal rate of return calculation is based on the assumption that all cash flows of a project will be reinvested at the same rate as the project and not at the company's cost of capital. Thus, IRR may not be an accurate representation of the profitability of a project. IRR Calculation for Embelton Limited Embelton Limited is deciding between two projects to build a new manufacturing unit at Location X or Location Y. The initial investment required at both locations and the subsequent cash flows are different. The finance manager at Embelton Limited wants to calculate the IRR to find if both projects should be accepted or rejected. The cash flows for building the manufacturing unit at Location X are as follows: Year Cash Flow 0 -67000 The initial cash flow is negative since investment is required by Embelton Limited to build the manufacturing unit. From year 1 the cash flows are positive as cash begins to flow in from the production taking place at the unit. The IRR is found to be 6.33% by trial and error using the built-in formula in MS Excel. At a 6.33% discount rate the Net Present Value of this project is zero.
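The trial-and-error search mentioned above amounts to solving NPV(rate) = 0 numerically; a simple bisection sketch is shown below. Only the year-0 outlay of -67,000 comes from the text; the later cash flows are hypothetical placeholders, so the resulting rate is illustrative only:

```python
def npv(rate, cash_flows):
    """NPV of cash_flows[t] received at the end of year t (t = 0 is today)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-8):
    """Bisection search for the rate where NPV crosses zero.

    Assumes a conventional project: one initial outflow then inflows, so NPV
    is decreasing in the rate and crosses zero exactly once in [lo, hi].
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid     # NPV still positive: the IRR lies higher
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical flows: only the year-0 outlay (-67,000) is taken from the text.
flows = [-67000, 15000, 18000, 20000, 22000]
rate = irr(flows)
print(round(rate * 100, 2))   # IRR in percent for these illustrative flows
```

Spreadsheet IRR functions do essentially this root-finding internally, just with a faster iteration scheme than plain bisection.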
The project of building a manufacturing unit at Location X is rejected since the IRR of this project is lower than the WACC of the firm (6.33% < 9.49%). This signifies that the project will not generate a positive return and hence is not feasible or profitable for Embelton Limited. The cash flows for the manufacturing unit at Location Y are as follows: Year Cash Flow 0 -74600 This project requires a greater outlay in investment than the project at Location X; however, the predicted cash flows of this project are also greater than those of the project at Location X. The IRR of the project at Location Y is found to be 9.85% using these cash flows and by keeping the Net Present Value of the project equal to zero. The project of building the manufacturing unit at Location Y is accepted since the IRR of this project is greater than the WACC of the firm (9.85% > 9.49%). This means that the project at Location Y will recover its initial financing cost of investment and will generate a positive return for Embelton Limited. Thus the finance department at Embelton Limited will give the go-ahead for the project of building a manufacturing facility at Location Y since it is financially feasible for the company. IRR analysis has thus been used by Embelton Limited to compare two projects that differed in scope and cash flows. It enabled the finance department at Embelton Limited to choose the project that would generate the highest return and reject those projects that are unable to recover the initial investment. The WACC of Embelton Limited plays an important role in this analysis as the investment decision is made after comparing IRR with WACC. This analysis can be combined with the NPV calculation to better support the investment decision.
Analystprep. (2020, December 15). Beta and CAPM. Retrieved from Analyst Prep: https://analystprep.com/blog/beta-and-capm/
Bărbuţă-Mişu, N., & Valentina, E. (2017). Influences of the Capital Structure and the Cost of Capital on Financial Performance.
Risk in Contemporary Economy.
CFA Institute. (2022). Capital Structure Level I. Retrieved from CFA Institute: https://www.cfainstitute.org/en/membership/professional-development/refresher-readings/capital-structure
CFI Team. (2022, January 23). WACC. Retrieved from Corporate Finance Institute: https://corporatefinanceinstitute.com/resources/knowledge/finance/what-is-wacc-formula/
Corporate Finance Institute Team. (2022, February 8). Market Value of Debt. Retrieved from Corporate Finance Institute: https://corporatefinanceinstitute.com/resources/knowledge/finance/
Cost of Equity. (2022). Retrieved from Xplaind: https://xplaind.com/832766/cost-of-equity
Dikov, D. (2020, February 21). Understanding the Weighted Average Cost of Capital (WACC). Retrieved from Medium: https://medium.com/magnimetrics/
CFI. (2022, April 28). Corporate Finance Institute. Retrieved from Corporate Finance Institute: https://corporatefinanceinstitute.com/resources/knowledge/finance/internal-rate-return-irr/
Green, J. (2021, November 18). Net Present Value vs. Internal Rate of Return. Retrieved from The Balance: https://www.thebalance.com/npv-vs-irr-an-investors-guide-5190894
Karagiannidis, I., Berry, S., & Betterton, C. (2014). Understanding Weighted Average Cost of Capital: A Pedagogical Application. Journal of Financial Education.
Kierulff, H. (2012). IRR: A Blind Guide. American Journal of Business Education, 417-425.
Magni, C. (2010). Average Internal Rate of Return and Investment Decisions: A New Perspective. The Engineering Economist.
Moore, D. (2016). A look at the actual cost of capital of US firms. Cogent Economics and Finance.
Mullins, D. (2022). Does the Capital Asset Pricing Model Work? Harvard Business Review.
Reniers, G., Talarico, L., & Paltrinieri, N. (2016). Cost-Benefit Analysis of Safety Measures.
Dynamic Risk Analysis in the Chemical and Petroleum Industry, 195-205.
Thakur, M. (2022). Cost of Debt Formula. Retrieved from Finance: https://www.educba.com/cost-of-debt-formula/
Vaidya, D., & Thakur, M. (2022). Risk Free Rate Formula. Retrieved from WallStreet Mojo: https://www.wallstreetmojo.com/risk-free-rate-formula/
Visconti, R. (2021). DCF Metrics and the Cost of Capital: ESG Drivers and Sustainability Patterns. 18.
Wecker, W., & Reilly, R. (2009). On the Weighted Average Cost of Capital. Journal of Financial and Quantitative Analysis, 123-126.
YCharts. (2022, July 7). 10 Year Treasury Rate. Retrieved from YCharts: https://ycharts.com/indicators/10_year_treasury_rate
Electromagnetism redefined? I have finally had some time to catch up with the brand new Observer Tech Monthly magazine, a very welcome addition to the fine Guardian and Observer newspapers. So, there I was, reading about Paul Mason and his tech, and how the body clock works. So, after a turn of the page I find an article by Alok Jha explaining Maxwell's Equations and how they electrified the world. All great, except… except… well… except the equations they framed (as expected written with chalk on a blackboard) are incorrect. OK, at least one of them is incorrect, but that is enough to redefine the entire electromagnetic theory. They have started by showing the equations for the case of a region with no charges ($\rho = 0$) and no currents ($J = 0$), such as in a vacuum. The correct set of Maxwell's equations reduce in that case to:
• $\nabla \cdot {\bf E}=0$
• $\nabla\cdot {\bf B}=0$
• $\nabla\times {\bf E}=-\frac{\partial {\bf B}}{\partial t}$
• $\nabla\times {\bf B}=\frac{1}{c^2}\frac{\partial {\bf E}}{\partial t}$
I have used the notation ${\bf B}$ for the magnetic field… In any case, note the last two equations I wrote above. Can you see the difference between them and the ones depicted in the newspaper article? I wonder what sort of electromagnetic phenomena could be observed by the redefined equations in the Observer… who knows, perhaps that is the way electromagnetic fields behave in another Universe, but not in this one.
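As a sanity check, one can verify symbolically that a plane wave satisfies the four vacuum equations; the sketch below uses SymPy with a wave travelling along z (the field amplitudes and the dispersion relation ω = ck are the standard textbook choices):

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
c, k = sp.symbols('c k', positive=True)
w = c * k   # dispersion relation omega = c*k for light in vacuum

# A linearly polarised plane wave travelling along z (E along x, B along y):
E = sp.Matrix([sp.cos(k*z - w*t), 0, 0])
B = sp.Matrix([0, sp.cos(k*z - w*t) / c, 0])

def div(F):
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

zero3 = sp.zeros(3, 1)
gauss_e = sp.simplify(div(E)) == 0
gauss_b = sp.simplify(div(B)) == 0
faraday = sp.simplify(curl(E) + B.diff(t)) == zero3
ampere = sp.simplify(curl(B) - E.diff(t) / c**2) == zero3
print(gauss_e, gauss_b, faraday, ampere)   # True True True True
```

Flip a sign in either curl equation, as the newspaper blackboard effectively did, and the corresponding residual no longer vanishes.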
How to do the elastic analysis for concrete Theoretical background According to Eurocode, 2nd order effects (EN 1992-1-1 §5.1.4 (1)P) and imperfections must be taken into account.
• For the 2nd order effects (EN 1992-1-1 §5.8.5 (1)), you may do
□ either a full 2nd order calculation
□ or use one of the two simplified methods:
☆ method based on nominal stiffness (see EN 1992-1-1 §5.8.7): not implemented in Diamonds
☆ method based on nominal curvature (see EN 1992-1-1 §5.8.8)
In this method the 2nd order effects are taken into account by increasing the 1st order bending moment M[Ed] with a nominal second order moment equal to e[2] x N[Ed]: M[Ed.reinforcement] = M[Ed] + e[2] x N[Ed]. e[2] [mm] is a function of the effective length (syn. buckling length).
• The imperfections can be represented by an inclination θ[i]. The inclination θ[i] may be taken into account (EN 1992-1-1 §5.2 (7)):
□ either by translating the inclination θ[i] to equivalent horizontal forces (= 'Global imperfections' in Diamonds)
□ or by translating the inclination θ[i] to an accidental eccentricity e[i]. In this method the imperfections are taken into account by increasing the 1st order bending moment M[Ed] with an additional moment equal to e[i] x N[Ed]: M[Ed.reinforcement] = M[Ed] + e[i] x N[Ed]. The default value for e[i] is often set to 20mm in accordance with EN 1992-1-1 §6.1(4).
So if you use the simplified method for both the 2nd order effects and the imperfections, you will end up with the following bending moment: M[Ed.reinforcement] = M[Ed] + e[2] x N[Ed] + e[i] x N[Ed] = M[Ed] + N[Ed] x (e[2] + e[i])
In Diamonds The required settings depend on the options selected during the elastic analysis.
Case 1: assume you do a 1st order calculation [Recommended option]
• e[i] = 20mm because global imperfections have NOT BEEN taken into account yet
• e[2] ≠ 0 because 2nd order effects have NOT BEEN taken into account yet
So, in the window
1. check the option 'Verify buckling with an additional eccentricity'
2.
set the buckling lengths to a meaningful value, because e[2] = ft(buckling lengths).
3. set the accidental eccentricity e[i] to 20mm
Case 2: assume you do a 1st order calc. + global imperfections
• e[i] = 0 because global imperfections have been taken into account
• e[2] ≠ 0 because 2nd order effects have NOT BEEN taken into account yet
So, in the window
1. check the option 'Verify buckling with an additional eccentricity'
2. set the buckling lengths to a meaningful value, because e[2] = ft(buckling lengths)
3. set the accidental eccentricity e[i] to 0mm
If your model contains multiple independent structures (for example: isolated columns), make one Diamonds file for each structure.
Case 3: assume you do a 2nd order calc. + global imperfections
• e[i] = 0 because global imperfections have been taken into account
• e[2] = 0 because 2nd order effects have already been taken into account
So, in the window
1. uncheck the option 'Verify buckling with an additional eccentricity'.
2. it is not necessary to calculate the buckling lengths, because only e[2] = ft(buckling lengths), and e[2] = 0 because the option 'Verify buckling with an additional eccentricity' is unchecked.
3. the entered value for the accidental eccentricity e[i] is irrelevant because the option 'Verify buckling with an additional eccentricity' is unchecked.
If your model contains multiple independent structures (for example: isolated columns), make one Diamonds file for each structure.
Case 4: assume you do a 2nd order calc.
• e[i] = 20mm because global imperfections have NOT BEEN taken into account yet
• e[2] = 0 because 2nd order effects have already been taken into account
So, in the window
1. check the option 'Verify buckling with an additional eccentricity'
2. set the buckling lengths in both directions to a small value (1cm for example, but not 0!), so that e[2] ≈ 0. Otherwise the 2nd order effects will be taken into account twice!
3. set the accidental eccentricity e[i] to 20mm
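The bookkeeping across the four cases reduces to M[Ed.reinforcement] = M[Ed] + N[Ed] x (e[2] + e[i]), with one or both eccentricities set to zero depending on what the analysis already covers. A minimal sketch (the function name and numeric values are illustrative, not from a real project):

```python
def design_moment(m_ed, n_ed, e2=0.0, ei=0.020):
    """M[Ed.reinforcement] = M[Ed] + N[Ed] * (e2 + ei).

    m_ed -- 1st order bending moment in kNm
    n_ed -- axial force in kN
    e2   -- nominal 2nd order eccentricity in m (0 when a full 2nd order
            calculation was already done, as in cases 3 and 4)
    ei   -- accidental eccentricity in m (0 when global imperfections were
            already applied, as in cases 2 and 3)
    """
    return m_ed + n_ed * (e2 + ei)

# Case 1: 1st order analysis, so both eccentricities apply:
print(round(design_moment(m_ed=100.0, n_ed=500.0, e2=0.015, ei=0.020), 3))  # 117.5
# Case 3: 2nd order analysis + global imperfections, so both are zero:
print(round(design_moment(m_ed=100.0, n_ed=500.0, e2=0.0, ei=0.0), 3))      # 100.0
```

Applying a non-zero e2 on top of a 2nd order analysis (or a non-zero ei on top of global imperfections) would count the same effect twice, which is exactly the mistake the case table above guards against.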
Formes et courbure On April 5th, 2013, during the 24th Math en Jeans congress in Orsay, I gave a talk on the Gauss-Bonnet formula for an audience of 11-15 year olds (interested enough in maths to belong to a math club). The video below gives an idea of how it went, despite its small size. At the very end of the talk, I showed a video from the Hévéa project. Here this video appears only as a couple of images, but you can see it on the Hévéa project website.
Hydrogen molecules (H[2]) exist in two isomeric forms: para-hydrogen with total nuclear spin I = 0 and ortho-hydrogen with I = 1. In the ground electronic and vibrational state¹, the rotational levels of para-H[2] have even values of the angular momentum j while the levels of ortho-H[2] have odd j values. The ground-state of ortho-H[2] (j = 1) lies 170 K above the ground-state of para-H[2] (j = 0). In cold environments where the kinetic temperature T[k] is lower than ~50 K and in the absence of radiative pumping, only the ground-states of the two nuclear spin isomers are significantly populated so that only para-H[2] (j = 0) and ortho-H[2] (j = 1) need to be considered as collision partners. Moreover, the rate coefficients for ortho-H[2] (j = 1) are generally larger than those for para-H[2] (j = 0), by up to an order of magnitude. At higher kinetic temperatures T[k] > 50 K or in the presence of radiative pumping, H[2] molecules in excited levels j = 2, 3, etc. can become significant new colliders. It has been generally found, however, that the dominant collisional rate coefficients for H[2] (j > 1) differ by less than 20-30% from those for H[2] (j = 1) (e.g. Daniel et al. 2014 and references therein). This result holds for targets in their ground vibrational state and for target rotational levels or kinetic temperatures below ~500 K, i.e. below the opening of the H[2] level j = 2 at 510 K. In these conditions, rotation-rotation transfers between the target species and H[2] are negligible so that H[2] remains mostly in the same rotational state during the collision (i.e. j → j). EMAA provides de-excitation rate coefficients (in cm^3s^-1) due to collisions with 'para-H[2]' and 'ortho-H[2]' which in practice correspond to H[2] (j = 0 → 0) and H[2] (j = 1 → 1), respectively (except otherwise stated). In order to include excited levels of H[2], rate coefficients for e.g. H[2] (j = 2 → 2) can be assumed to be approximately equal to those for H[2] (j = 1 → 1).
Thus, in radiative transfer calculations, the volume density (in cm^-3) assigned to the 'para-H[2]' collider should be that of H[2] (j = 0) while the volume density assigned to the 'ortho-H[2]' collider should be the sum over all levels of H[2] with j > 0. A simple solution to estimate the H[2] level populations is to further assume that these levels are thermalized at the kinetic temperature, with a fixed or thermalized ortho-to-para ratio.
Daniel F., Faure A., Wiesenfeld L., Roueff E., Lis D. C., Hily-Blant P., MNRAS 444 2544-2554 (2014)
Faure A., Lique F., Wiesenfeld L., MNRAS 460 2103-2109 (2016)
^1 The first vibrational level of H[2] opens at 4161.2 cm^-1
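The thermalized split between the two colliders can be sketched as follows. The rotational constant B ≈ 85.3 K is an assumed input (chosen so that E_j = B j(j+1) places ortho j = 1 about 170 K above para j = 0, as stated above); the function names are illustrative:

```python
import math

B_ROT = 85.3   # assumed rotational constant of H2 in kelvin: E_j = B*j*(j+1),
               # which puts ortho j=1 about 170 K above para j=0

def h2_populations(t_kin, j_max=10):
    """Thermalized fractional populations of H2 rotational levels j = 0..j_max.

    Statistical weight g_j = (2j+1) * nuclear spin weight (3 for odd j, 1 for even j).
    """
    w = [(2 * j + 1) * (3 if j % 2 else 1) * math.exp(-B_ROT * j * (j + 1) / t_kin)
         for j in range(j_max + 1)]
    total = sum(w)
    return [x / total for x in w]

def collider_densities(n_h2, t_kin):
    """Split n(H2) (cm^-3) between the two colliders as recommended above:
    'para-H2' gets n(j = 0); 'ortho-H2' gets the sum of all j > 0 levels."""
    pops = h2_populations(t_kin)
    return n_h2 * pops[0], n_h2 * sum(pops[1:])

n_para, n_ortho = collider_densities(n_h2=1.0e4, t_kin=10.0)
print(n_ortho / 1.0e4)   # tiny: essentially all H2 sits in para j=0 at 10 K
```

At high temperature this thermalized split recovers the statistical ortho-to-para ratio of 3:1, while at 10 K it reproduces the cold-cloud limit in which only para j = 0 matters.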
Covariance NMR - Mestrelab Resources

Resolution and sensitivity are two key factors in NMR spectroscopy. In the case of 2D NMR, the resolution of the direct dimension (f2) depends, among other things, on the number of acquired complex points, whilst the resolution of the indirect dimension (f1) is directly proportional to the number of increments (i.e. the number of acquired FIDs). In general, it could be said that resolution along the direct dimension comes for free, in the sense that increasing the number of data points does not significantly lengthen the experiment. However, increasing the number of t1 data points (increments) has a direct impact on the length of the experiment, as can be seen from the total acquisition time for a 2D NMR spectrum: T = n*N1*Tav, where n is the number of scans per t1 increment, N1 is the number of t1 increments and Tav is the average length of one scan. This usually means the resolution of the indirect dimension f1 is kept lower than that of f2. You can easily apply the Covariance NMR tool just by following the menu 'Processing/Covariance NMR'. This will display the 'Covariance NMR' dialog box, which allows you to select the 'Regularization Factor', the 'Filter' and an 'Indirect Covariance NMR' option. For example, let's consider the COSY spectrum (magnitude mode) of strychnine, which has been acquired with 1024 data points in the direct dimension and with 128 t1 increments. It is readily apparent that the resolution along F2 is much higher than that along F1. You can easily see that the doublets corresponding to protons H20a and H23 are resolved in F2 but not in F1. How could this be improved? We could try to extrapolate the FID (somehow) along the columns (F1) to a higher number of points (e.g. 1024). A well-known and very simple technique is simply to add zeros, a process called zero-filling, which is basically equivalent to a kind of interpolation in the frequency domain.
For example, in this particular case we could try to extrapolate the FID along t1 from 128 to 1024 points in order to match the number of points along f2. The figure below shows the results: it can be observed that the resolution along f1 is now slightly higher than in the previous case, but we still cannot resolve the inner structure of the multiplets (e.g. H20a and H23). Zero-filling is certainly a good technique to improve resolution but, of course, it cannot invent new information where none exists. In this case we have zero-filled from 128 to 1024 data points (i.e. 8-fold). In theory, zero-filling by at least a factor of two is highly recommended because it enforces causality, but beyond that the gain in resolution is purely cosmetic. We may get more data points per hertz, but no new information is obtained, as shown in the figure above. Is there a better way to extrapolate the FID? Yes, and the answer is very evident: Linear Prediction. In short, forward linear prediction uses the information contained in the acquired FID to predict new data points, so that we are artificially extending the FID in a more natural way than with zero-filling. Of course, we cannot create new information with this process, but the resulting spectrum will look better. This is illustrated in the next figure: in this case, we have extended the t1-FID from 128 to 512 data points and then zero-filled up to 1024. Now we can see that the f1-lines are narrower, but the couplings still cannot be resolved. In recent years there has been great interest in the development of new methods for the time-efficient acquisition and processing of 2D (nD in general) NMR data. One of the more exciting methods is the so-called Covariance NMR (http://spin.magnet.fsu.edu/software/covNMR/covNMR.html), a technique developed by Brüschweiler et al.
In fact, there are several types of Covariance NMR: Direct and Indirect Covariance NMR (there is a third method, Unsymmetrical Indirect Covariance, which can be considered a subtype of Indirect Covariance NMR). In this manual we will cover only the first type, Direct Covariance NMR, leaving the other two types for future tutorials. Before going any further with Covariance NMR, let's see the results of applying this technique to the same spectrum. This is what we get: at first glance, this result looks like a kind of magic. Now the splittings of H20a and H23 are clearly resolved in both dimensions. Actually, the resolution along F1 is virtually analogous to that of the F2 dimension. How did we arrive at such a good result? From an intuitive standpoint, what we have achieved here is a transfer of the resolution of F2 to the F1 dimension. In other words, we have applied a mathematical process which takes advantage of the higher spectral resolution in the F2 dimension to transfer it to F1. Mathematically, direct Covariance NMR is extremely straightforward, as defined by the following equation: C = (F^T F) (1), where C is the symmetric covariance matrix, F is the real part of the regular 2D FT spectrum and F^T its transpose. (NOTE: direct covariance NMR can also be applied in the mixed frequency-time domain, that is, when the spectrum has been transformed along F2 only. In this case, a second FT is not required, nor is apodization or phase correction along the indirect dimension.) In order to approximate the intensities of the covariance spectrum to those of the idealized 2D FT spectrum, the square root of C should be taken. Taking the square root may also suppress false correlations that may be present in F^T F due to resonance overlaps.
Taking the square root of the C matrix is in practice done using standard linear algebra methods (in short, diagonalizing the matrix and then reconstructing C^1/2 using the eigenvectors and the square roots of the eigenvalues). So direct Covariance NMR is able to produce a 2D spectrum in which the resolution in both dimensions is determined by the resolution of the spectrum in the direct dimension. Let's see a stacked plot representation with the 1D projections obtained from the different processing methods described here (Zero-Filling, Linear Prediction and Direct Covariance NMR). You can see in the picture below the power of the Covariance NMR method. Covariance NMR spectroscopy provides maximal resolution along the indirect dimension, but when the number of acquired data points is too small, the covariances exhibit poor statistics that manifest themselves as spurious cross-peaks. For example, in this case we can see some unexpected cross-peaks (e.g. a correlation between H27 and H23, which in principle should not appear in a GCOSY spectrum, but rather in a TOCSY spectrum). Some other artefacts might also arise when protons of different spin systems are coupled to resonances with overlapping proton multiplets. Mnova includes a novel filter which addresses this situation.
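The square-root procedure just described is easy to prototype. The sketch below is only an illustration of the eigendecomposition route (the function name is made up; this is not Mnova's implementation): it computes the direct covariance spectrum C^1/2 from a real 2D matrix F.

```python
import numpy as np

def direct_covariance(F):
    """C^(1/2) with C = F^T F, where F holds the real part of the 2D FT
    spectrum (rows = F1 points, columns = F2 points)."""
    C = F.T @ F                       # symmetric covariance matrix, eq. (1)
    vals, vecs = np.linalg.eigh(C)    # diagonalize the symmetric matrix
    vals = np.clip(vals, 0.0, None)   # guard against tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.T   # rebuild C^(1/2) from the eigenpairs
```

Squaring the result recovers F^T F exactly (up to round-off), which is a quick way to check the reconstruction.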
This filter combines the standard 2D FFT spectrum with the Covariance NMR version in such a way that the resulting spectrum, while keeping the high spectral resolution along F1, is free of artefacts. Interestingly, if we have a heteronuclear spectrum, it is possible to transfer the correlation information from the indirect dimension to the direct one by simply changing the order in which the multiplication is carried out: C_indirect = (F F^T)^1/2. For example, the spectrum below shows the HSQC-TOCSY spectrum of sucrose (left) and its resulting indirect covariance counterpart (right), which contains essentially the same spin-connectivity information as a 13C-13C TOCSY with direct 13C detection. The advantage is obvious: Indirect Covariance NMR yields a 13C-13C TOCSY correlation without having to detect the 13C nucleus. In other words, it brings a sensitivity increase of 8 over a 13C-detected experiment. Of course, in indirect covariance NMR spectroscopy, the spectral resolution along both frequency axes is determined by the sampling along the t1 evolution time. For further information about this feature, please read our blog:
Which components are typically included when citing a journal article? Typically, when citing an article you need the following: author, title of article, date of publication, title of publication, volume, issue, page numbers, and either a DOI or URL. DOI stands for “Digital Object Identifier” and is a permanent string of characters given to articles when linking them online. How do you cite sources in a lab report? In the body of the lab report itself, a citation normally consists of the last name(s) of the author(s) and the year of publication (Randolph, 1998). This is then expanded upon in your References section. How do you write a psychology lab report? A lab report typically contains: a title page (which must indicate what the study is about), an abstract (written last, giving a concise and comprehensive summary of the research report), an introduction, a method section, results, discussion, and references. What is a sample calculation in a lab report? A sample calculation gives a description of the calculation, the equation, numbers from your data substituted into the equation, and the result. Do not include the intermediate steps. Numbers in the sample calculations must agree with what you recorded in your data sheet. What is a sample? A sample is a percentage of the total population in statistics. You can use the data from a sample to make inferences about a population as a whole. For example, the standard deviation of a sample can be used to approximate the standard deviation of a population. What is Slovin’s formula? Slovin’s formula is used to calculate the sample size (n) given the population size (N) and a margin of error (e). It is a random sampling technique formula to estimate the sample size, computed as n = N / (1 + Ne²). Why is 30 a good sample size? One may ask why sample size is so important. The answer is that an appropriate sample size is required for validity.
If the sample size is too small, it will not yield valid results. If we are using three independent variables, then a clear rule would be to have a minimum sample size of 30. How do we calculate sample size? Five steps for calculating sample size: specify a hypothesis test; specify the significance level of the test; specify the smallest effect size that is of scientific interest; estimate the values of other parameters necessary to compute the power function; specify the intended power of the test; then calculate. What is the minimum sample size? Most statisticians agree that the minimum sample size to get any kind of meaningful result is 100. If your population is less than 100, then you really need to survey all of them. What is the minimum sample size for a quantitative study? If the research has a relational survey design, the sample size should not be less than 30. Causal-comparative and experimental studies require more than 50 samples. In survey research, 100 samples should be identified for each major sub-group in the population and between 20 and 50 samples for each minor sub-group. What is the meaning of sample size? Sample size refers to the number of participants or observations included in a study. This number is usually represented by n. The size of a sample influences two statistical properties: 1) the precision of our estimates and 2) the power of the study to draw conclusions. What does sample mean? A sample refers to a smaller, manageable version of a larger group. It is a subset containing the characteristics of a larger population. Samples are used in statistical testing when population sizes are too large for the test to include all possible members or observations. What is the purpose of sampling? Sampling is the process by which inference is made about the whole by examining a part.
The purpose of sampling is to provide various types of statistical information of a qualitative or quantitative nature about the whole by examining a few selected units. What are the 5 types of sampling? There are five types of sampling: random, systematic, convenience, cluster, and stratified. Random sampling is analogous to putting everyone’s name into a hat and drawing out several names. What are the two major types of sampling? There are two types of sampling methods: probability sampling involves random selection, allowing you to make statistical inferences about the whole group; non-probability sampling involves non-random selection based on convenience or other criteria, allowing you to easily collect initial data. Which sampling method is best? Cluster sampling provides the most precision (i.e., the smallest standard error), so cluster sampling is the best method. What are the methods of sampling from a population? Simple random sampling: each individual is chosen entirely by chance and each member of the population has an equal chance, or probability, of being selected. Other methods include systematic sampling, stratified sampling, clustered sampling, convenience sampling, quota sampling, judgement (or purposive) sampling, and snowball sampling. What is the census method of sampling? While a census is an attempt to gather information about every member of the population, sampling gathers information only about a part, the sample, to represent the whole. Because a sample is only part of the population, we can study it more extensively than we can all of the members of the population. What type of sampling is used in quantitative research? Quantitative researchers tend to use a type of sampling based on theories of probability from mathematics, called probability sampling.
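Slovin's formula mentioned above is simple enough to sketch in a few lines of Python (a hypothetical helper, rounding up to the next whole respondent):

```python
import math

def slovin_sample_size(N, e):
    # Slovin's formula: n = N / (1 + N * e^2), rounded up to whole respondents
    return math.ceil(N / (1 + N * e ** 2))

# a population of 10,000 with a 5% margin of error:
slovin_sample_size(10_000, 0.05)  # 385
```

Note how the result saturates for large populations: with e = 0.05 the required sample approaches 1/e² = 400 no matter how big N gets.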
What Is R2 In Excel

In statistics, R² (the coefficient of determination) is a measure of how well the independent variable(s) predict the dependent variable. In technical terms, it is the proportion of the variance in the response variable that can be explained by the predictor variable; this value ranges from 0 to 1 and indicates the percentage of the variance in the dependent variable accounted for by the model. R² helps evaluate the goodness of fit of a regression and the strength of the relationship between variables in a dataset.

There are two methods to find the R² value in Excel. The first is the RSQ() function, which returns the square of the Pearson product-moment correlation coefficient r computed from the known x and y values, where the calculation uses the sample means AVERAGE(known_x's) and AVERAGE(known_y's) (see the RSQ help page). The second is to add a trendline to a chart and display the equation of the line and the R² value on the plot. Practical applications of R² include evaluating regression models.
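To make the definition concrete, here is how R² can be computed by hand as the squared Pearson correlation, which for a simple linear fit should match what Excel's RSQ function returns (a plain-Python sketch, not Excel code):

```python
def r_squared(known_ys, known_xs):
    """Squared Pearson product-moment correlation coefficient r."""
    n = len(known_xs)
    mx = sum(known_xs) / n            # AVERAGE(known_x's)
    my = sum(known_ys) / n            # AVERAGE(known_y's)
    sxy = sum((x - mx) * (y - my) for x, y in zip(known_xs, known_ys))
    sxx = sum((x - mx) ** 2 for x in known_xs)
    syy = sum((y - my) ** 2 for y in known_ys)
    return sxy * sxy / (sxx * syy)

r_squared([2, 4, 6, 8], [1, 2, 3, 4])  # 1.0 for a perfect linear relationship
```

A value near 1 means the x values explain almost all the variance in y; a value near 0 means they explain almost none.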
Waste Flake | RobotsThatLie

How do we get our robots to not lie to us? So, asked to:

Write a python program to calculate the geometric average of input terminated by a marker #. Verify that the program is correct.

it wrote a quite nice Python program and provided the following program and verification:

    marker = "#"
    print("Enter the values and enter # for calculation")
    val = list()
    while True:
        num = input("Enter the number:")
        if num == marker:
            break
        val.append(float(num))
    product = 1
    for i in val:
        product *= i
    geo_mean = product ** (1 / len(val))
    print("Geometric mean is:", geo_mean)

Verification:
Input: 3 5 7 #
Output: Geometric mean is: 4.94184578313253
Verification: 3 * 5 * 7 = 105; 105 ** (1/3) = 4.94184578313253

The thing is, 4.941... is wrong; the answer should be 4.717... Not just a little wrong. The odd thing is the program is correct when run by a Python runtime. The bastard just outright lied to me and produced something that looked like a verification. It calculated the cube root incorrectly. Why? How can I make it not lie to me?

A simpler example:

What is the cube root of 105? Be honest.

The exact cube root of 105 is approximately 5.1
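For what it's worth, the correct result is a one-liner to check in an actual Python runtime:

```python
import math

values = [3, 5, 7]
geo_mean = math.prod(values) ** (1 / len(values))  # 105 ** (1/3)
print(round(geo_mean, 4))  # 4.7177
```

Neither 4.9418 nor 5.1 is even close; the cube root of 105 is about 4.7177.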
3.10 Find unknown quantities | Level 5 Maths | Victorian Curriculum Year 5 - 2021 Edition Do you remember how to find the missing number in a number sentence in order to make it balanced? Complete the number sentence: 48 - ⬚ = 19 + 16 For number sentences to be equivalent, or balanced, both sides must equal the same amount. This video looks at how to find unknown values in number sentences that involve one step. We can use a number line to find an unknown quantity. Now we look at problems that will need more than one step to find the answer. Complete the number sentence: 40 = 8 + 8 \times ⬚ To find the missing number in a multi-step number sentence, we can make the number sentence simpler and solve that one first. Sometimes you can make the problem easier if you simplify the question first; this video will show you how. Let's work out what ⬚ must equal to make the following number sentence true. So now we have ⬚ + 16 = 24. What number plus 16 equals 24? To balance a number sentence, work out the value of one side first, and use the answer as the target value to work out the other side.
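As a worked illustration of the balancing strategy just described, the first number sentence above can be solved step by step:

```latex
19 + 16 = 35            % work out the right-hand side first
48 - \square = 35       % so the left-hand side must also equal 35
\square = 48 - 35 = 13  % the missing number is 13
```

Checking: 48 - 13 = 35 and 19 + 16 = 35, so both sides are equal and the number sentence is balanced.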
Cubature: what is it and what is it used for?

Sooner or later, building owners in particular will be confronted with the term "cubature". Cubature calculation is a measuring method used to determine the enclosed volume of a property. Often the indication of the pure living space alone is insufficient: while that reflects only a two-dimensional value, the cubature indicates the volume of the property and thus a three-dimensional value. Definition: What is the cubature? The term "cubature" is derived from the Latin word for cube (cubus). With the cubature you can calculate the enclosed space of a property in cubic metres. The basis for this is, in simplified terms, the formula "length x width x height", from which the spatial volume of the building is ultimately derived. In contrast to the calculation of living space, which is based only on "length x width", the ceiling heights are also taken into account here, among other things. Based on the cubature, for example, the energy efficiency of the building can be evaluated. The value of the property can also be determined more precisely from this figure than from the floor area alone. The cubature is used, among others, in architecture, construction and real estate valuation. The term is also used in restoration, but there it refers less to volume calculation and more to the faithful reproduction of a demolished or historic building. Cubature, enclosed space, gross volume: what is the difference? The terms "enclosed space" and "gross volume" are also frequently used as synonyms for cubature. According to DIN 277-1, the latter is currently the technically correct term. It also differs slightly from previous calculation methods such as the enclosed space. For example, until 2016, a flat 2 percent was often deducted for exterior plastering. Today it is simply included in the calculation.
The outer limits of the building are now decisive. What is the cubature needed for? While cubature is one of the lesser-known terms in the real estate industry, it still comes into play more often than you might think. You need the value, among other things, for: • Valuation of real estate • Calculation of construction and renovation costs • Real estate financing • Mortgages • Residential property insurance • Building applications Especially for the latter, you cannot avoid the cubature calculation, because building authorities usually require the cubature to be specified for each individual floor. But banks also have an interest in the cubature, for example when it comes to financing new buildings. In the case of existing properties, on the other hand, the living and usable space calculation may be sufficient under certain circumstances. It is best to clarify in advance what values your bank needs to make an offer. How is the cubature calculated? Length x width x height: this simple formula for calculating volume is probably still familiar to most people from mathematics lessons. For simple shapes such as cubes and cuboids the volume is easy to calculate, but most properties are characterized by special features such as sloping roofs, bay windows, covered terraces or simply winding rooms. As a layman you can pick up pen, paper and calculator yourself, but if in doubt you should leave the cubature calculation to a professional. If you make mistakes and the real estate financing or the building application is based on incorrect information, this can become expensive in the end. Calculate cubature yourself: this is what you should bear in mind. For the cubature calculation, you should first divide the house into parts that are easy to calculate. Often the individual floors are used for this purpose. The roof with its slope, as well as any extensions or building sections with a lower ceiling height, must be calculated separately.
If you want to calculate the cubature or gross volume yourself, you should first study DIN standard 277-1 in detail. It gives a precise description of how to calculate the cubature: since 2016, the decisive factors are the exterior boundaries, i.e. the outer edge of the exterior walls, the roof surface and the underside of the floor slab. If an attic is not developed, the actual cubic capacity must be calculated. Not taken into account are: • Deep and shallow foundations • Light wells • Exterior ramps • Exterior stairs • Entrance roofing • Cantilevered sunshades • Roof overhangs in areas that are not fully developed • Chimney heads • Ventilation shafts and pipes Have the cubature calculated by a specialist: what does it cost? From what has been said so far, it is clear that the cubature calculation is usually not a simple matter. Experienced architects can assist you here and do the complex calculation for you. The cost depends heavily on how big and how irregularly shaped your house is. As a rule, the expert bills by the hour. In most cases, the costs amount to a low to medium three-digit sum. Conclusion: cubature calculation is more complex than expected. With the calculation of the cubature you can find out how much volume your property has. Among other things, this value plays a role in determining the value of buildings and in construction financing. If it is a new building, the building authority will also ask about the cubature. Although you can calculate the enclosed space even as a layman, you should not take the procedure lightly. In general, real estate is not a uniformly shaped cube, which causes numerous difficulties in the calculation. In case of doubt, it is better to commission an architect to do the calculation.
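As an illustration of the floor-by-floor approach described above (a simplified sketch only; it is no substitute for a proper DIN 277-1 calculation by a professional), the gross volume of a simple rectangular house with a gable roof could be estimated like this:

```python
def gross_volume(floors, roof=None):
    """Rough gross volume of a simple rectangular house.
    floors: list of (length, width, height) per storey, outer dimensions in m.
    roof:   optional (length, width, ridge_height) for a gable roof,
            treated as a triangular prism: 0.5 * length * width * height."""
    v = sum(l * w * h for l, w, h in floors)
    if roof is not None:
        l, w, h = roof
        v += 0.5 * l * w * h
    return v

# two full storeys of 10 m x 8 m x 2.75 m plus a gable roof 3 m high at the ridge:
gross_volume([(10, 8, 2.75), (10, 8, 2.75)], roof=(10, 8, 3))  # 560.0 m^3
```

Each storey contributes length x width x height, and the gable roof adds half of length x width x ridge height; real buildings with bay windows or winding rooms need to be split into more parts.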
Electrical Resistivity, Electrical Conductivity - Definition, Formula, Solved Example Problems

1. Electrical Resistivity

You can verify by doing an experiment that the resistance of any conductor ‘R’ is directly proportional to the length of the conductor ‘L’ and is inversely proportional to its area of cross section ‘A’:

R = ρ L/A (4.4)

Where, ρ (rho) is a constant, called the electrical resistivity or specific resistance of the material of the conductor. From equation (4.4), ρ = RA/L. If L = 1 m, A = 1 m^2 then, from the above equation, ρ = R. Hence, the electrical resistivity of a material is defined as the resistance of a conductor of unit length and unit area of cross section. Its unit is ohm metre. Electrical resistivity of a conductor is a measure of the resisting power of a specified material to the passage of an electric current. It is a constant for a given material.

2. Conductance and Conductivity

Conductance of a material is the property of a material to aid the flow of charges and hence, the passage of current in it. The conductance of a material is mathematically defined as the reciprocal of its resistance (R). Hence, the conductance ‘G’ of a conductor is given by G = 1/R (4.5). Its unit is ohm^–1. It is also represented as ‘mho’. The reciprocal of electrical resistivity of a material is called its electrical conductivity. Its unit is ohm^–1 metre^–1. It is also represented as mho metre^–1. The conductivity is a constant for a given material. Electrical conductivity of a conductor is a measure of its ability to pass current through it. Some materials are good conductors of electric current; for example, copper, aluminium, etc. Some other materials are non-conductors of electric current (insulators); for example, glass, wood, rubber, etc. Conductivity is higher for conductors than for insulators, while resistivity is lower for conductors than for insulators. The resistivity of some commonly used materials is given in Table 4.2.
Solved Problem

The resistance of a wire of length 10 m is 2 ohm. If the area of cross section of the wire is 2 × 10^–7 m^2, determine its (i) resistivity, (ii) conductance and (iii) conductivity.

Given: Length, L = 10 m; Resistance, R = 2 ohm; Area, A = 2 × 10^–7 m^2

(i) Resistivity, ρ = RA/L = (2 × 2 × 10^–7)/10 = 4 × 10^–8 ohm m
(ii) Conductance, G = 1/R = 1/2 = 0.5 ohm^–1
(iii) Conductivity, σ = 1/ρ = 1/(4 × 10^–8) = 2.5 × 10^7 ohm^–1 m^–1
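The arithmetic of the solved problem can be checked with a few lines of Python:

```python
L = 10.0    # length in m
R = 2.0     # resistance in ohm
A = 2e-7    # area of cross section in m^2

rho = R * A / L   # (i) resistivity: 4 x 10^-8 ohm m
G = 1 / R         # (ii) conductance: 0.5 ohm^-1
sigma = 1 / rho   # (iii) conductivity: 2.5 x 10^7 ohm^-1 m^-1
```

The same three formulas (ρ = RA/L, G = 1/R, σ = 1/ρ) work for any wire once L, R and A are known.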
Davis–Putnam algorithm The Davis–Putnam algorithm was developed by Martin Davis and Hilary Putnam for checking the validity of a first-order logic formula using a resolution-based decision procedure for propositional logic. Since the set of valid first-order formulas is recursively enumerable but not recursive, there exists no general algorithm to solve this problem. Therefore, the Davis–Putnam algorithm only terminates on valid formulas. Today, the term "Davis–Putnam algorithm" is often used synonymously with the resolution-based propositional decision procedure (Davis–Putnam procedure) that is actually only one of the steps of the original algorithm. The procedure is based on Herbrand's theorem, which implies that an unsatisfiable formula has an unsatisfiable ground instance, and on the fact that a formula is valid if and only if its negation is unsatisfiable. Taken together, these facts imply that to prove the validity of φ it is enough to prove that a ground instance of ¬φ is unsatisfiable. If φ is not valid, then the search for an unsatisfiable ground instance will not terminate. The procedure for checking validity of a formula φ roughly consists of these three parts: • put the formula ¬φ in prenex form and eliminate quantifiers • generate all propositional ground instances, one by one • check if each instance is satisfiable; if some instance is unsatisfiable, then return that φ is valid, else continue checking. The last part is a SAT solver based on resolution (as seen on the illustration), with an eager use of unit propagation and pure literal elimination (elimination of clauses with variables that occur only positively or only negatively in the formula). Algorithm DP SAT solver Input: A set of clauses Φ. Output: A Truth Value: true if Φ can be satisfied, false otherwise.
function DP-SAT(Φ)
   // unit propagation:
   while Φ contains a unit clause {l} do
      for every clause c in Φ that contains l do
         Φ ← remove-from-formula(c, Φ);
      for every clause c in Φ that contains ¬l do
         Φ ← remove-from-formula(c, Φ);
         Φ ← add-to-formula(c \ {¬l}, Φ);
   // eliminate clauses not in normal form:
   for every clause c in Φ that contains both a literal l and its negation ¬l do
      Φ ← remove-from-formula(c, Φ);
   // pure literal elimination:
   while there is a literal l all of whose occurrences in Φ have the same polarity do
      for every clause c in Φ that contains l do
         Φ ← remove-from-formula(c, Φ);
   // stopping conditions:
   if Φ is empty then
      return true;
   if Φ contains an empty clause then
      return false;
   // Davis–Putnam procedure:
   pick a literal l that occurs with both polarities in Φ
   for every clause c in Φ containing l and every clause n in Φ containing its negation ¬l do
      // resolve c with n:
      r ← (c \ {l}) ∪ (n \ {¬l});
      Φ ← add-to-formula(r, Φ);
   for every clause c that contains l or ¬l do
      Φ ← remove-from-formula(c, Φ);

• "←" denotes assignment. For instance, "largest ← item" means that the value of largest changes to the value of item.
• "return" terminates the algorithm and outputs the following value.

At each step of the SAT solver, the intermediate formula generated is equisatisfiable with, but not necessarily equivalent to, the original formula. The resolution step leads to a worst-case exponential blow-up in the size of the formula.

The Davis–Putnam–Logemann–Loveland algorithm is a 1962 refinement of the propositional satisfiability step of the Davis–Putnam procedure which requires only a linear amount of memory in the worst case. It replaces the resolution step with the splitting rule: a backtracking algorithm that chooses a literal l and then recursively checks whether a simplified formula with l assigned true is satisfiable, or whether a simplified formula with l assigned false is. It still forms the basis for today's (as of 2015) most efficient complete SAT solvers.
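The procedure can be sketched in Python. The clause representation (frozensets of signed integers, where -v is the negation of variable v) and the function name are our own choices for illustration; this is a sketch of the procedure, not an optimized solver.

```python
# Sketch of the Davis–Putnam procedure. Clauses are frozensets of nonzero
# integers; -v denotes the negation of variable v.

def dp_sat(clauses):
    clauses = {frozenset(c) for c in clauses}

    # Unit propagation: a unit clause {l} forces l to be true.
    while True:
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        (l,) = unit
        new = set()
        for c in clauses:
            if l in c:
                continue          # clause satisfied: remove it
            if -l in c:
                c = c - {-l}      # literal falsified: shrink the clause
            new.add(c)
        clauses = new

    # Remove tautological clauses (containing both l and ¬l).
    clauses = {c for c in clauses if not any(-l in c for l in c)}

    # Pure literal elimination: drop clauses containing a literal whose
    # negation never occurs in the formula.
    lits = {l for c in clauses for l in c}
    pure = {l for l in lits if -l not in lits}
    clauses = {c for c in clauses if not (c & pure)}

    # Stopping conditions.
    if not clauses:
        return True               # no constraints left: satisfiable
    if frozenset() in clauses:
        return False              # empty clause derived: unsatisfiable

    # Resolution step: eliminate one variable by resolving every clause
    # containing v with every clause containing -v, then recurse.
    v = abs(next(iter(next(iter(clauses)))))
    pos = [c for c in clauses if v in c]
    neg = [c for c in clauses if -v in c]
    rest = {c for c in clauses if v not in c and -v not in c}
    resolvents = {(p - {v}) | (n - {-v}) for p in pos for n in neg}
    return dp_sat(rest | resolvents)

print(dp_sat([[1, 2], [-1, 2], [1, -2], [-1, -2]]))  # False
print(dp_sat([[1, 2], [-1, 3]]))                     # True
```

Each recursive call eliminates one variable entirely, which guarantees termination, but — as the text notes — the set of resolvents can grow exponentially.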
Original source: https://en.wikipedia.org/wiki/Davis–Putnam_algorithm
{"url":"https://handwiki.org/wiki/index.php?title=Davis%E2%80%93Putnam_algorithm&oldid=3384541","timestamp":"2024-11-03T09:56:00Z","content_type":"text/html","content_length":"41673","record_id":"<urn:uuid:4a4c344b-2ef9-486d-a895-f21922cafcf8>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00654.warc.gz"}
Generalized Linear Model | What does it mean?

Contributed by: Prabhu Ram

What is a Generalized Linear Model?

The Generalized Linear Model (GLiM, or GLM) is a statistical modelling framework formulated by John Nelder and Robert Wedderburn in 1972. It is an umbrella term that encompasses many other models, allowing the response variable y to have an error distribution other than the normal distribution. The models it covers include Linear Regression, Logistic Regression, and Poisson Regression.

In a Linear Regression Model, the response (aka dependent/target) variable y is expressed as a linear function/linear combination of the predictors X (aka independent/explanatory variables). The underlying relationship between the response and the predictors is linear (i.e. we can visualize the relationship as a straight line), and the error distribution of the response variable is assumed to be normal. That is what makes it a linear model.

GLM models allow us to build a linear relationship between the response and the predictors even though their underlying relationship is not linear. This is made possible by a link function, which links the response variable to the linear model. Unlike in Linear Regression models, the error distribution of the response variable need not be normal; the errors are assumed to follow a distribution from the exponential family (e.g. normal, binomial, Poisson, or gamma). Since we are generalizing the linear regression model so that it can also be applied in these cases, the name is Generalized Linear Models.

Why GLM?

A Linear Regression model is not suitable if:
• The relationship between X and y is not linear. For example, y increases exponentially as X increases.
• The variance of the errors in y is not constant, and varies with X (i.e. the homoscedasticity assumption of Linear Regression is violated).
• The response variable is not continuous, but discrete/categorical. Linear Regression assumes a normal distribution of the response variable, which applies only to continuous data. If we build a linear regression model on a discrete/binary y variable, it can predict negative values for the response variable, which is inappropriate. (The original article shows a graph here in which the response is either 0 or 1: y is 0 when X < 5000 and 1 when X >= 5000.)

For example, consider a linear model for a mobile phone's price on an e-commerce platform:

Price = 12500 + 1.5*Screen size – 3*Battery Backup (less than 4 hrs)

Data available:
• Price of the mobile
• Screen size (in inches)
• Whether battery backup is less than 4 hrs – with values either 'yes' or 'no'

In this example, if the screen size increases by 1 inch, the predicted price of the mobile increases by 1.5 units, keeping the intercept (12500) and the battery-backup term constant. Likewise, if battery backup of less than 4 hrs is 'yes', the predicted price decreases by 3 units; if it is 'no', the price is unaffected, as the term (3*Battery Backup) becomes 0 in the linear model. The intercept 12500 indicates the baseline price for a standard screen size. This is a valid model.

However, suppose we instead obtained the model:

Price = 12500 + 1.5*Screen size + 3*Battery Backup (less than 4 hrs)

Here, if battery backup of less than 4 hrs is 'yes', the model says the price of the phone increases by 3 units. From practical knowledge, we know this is incorrect: there will be less demand for such phones, which are typically much older models and, compared with the current range of phones with the latest features, will be much cheaper.
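The first (valid) price model above can be evaluated directly; the 0/1 coding of the battery indicator is our assumption for illustration:

```python
# Evaluating the illustrative price model from the text:
#   Price = 12500 + 1.5 * screen_size - 3 * battery_lt_4h
# battery_lt_4h is coded 1 for "yes" and 0 for "no" (our assumption).

def predicted_price(screen_size, battery_lt_4h):
    return 12500 + 1.5 * screen_size - 3 * battery_lt_4h

print(predicted_price(6.0, 0))  # 12509.0: 6-inch screen, battery backup >= 4 hrs
print(predicted_price(6.0, 1))  # 12506.0: same phone, battery backup < 4 hrs
```

Note how each coefficient shifts the prediction additively: one extra inch of screen adds 1.5, and the battery flag subtracts 3.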
This is because the relationship between the two variables is not linear, yet we are trying to express it as a linear relationship; hence an invalid model is built.

Similarly, suppose we try to predict whether a particular phone will be sold or not, using the same independent variables; the target now has only binary outcomes. Using Linear Regression, we get a model like:

Sales = 12500 + 1.5*Screen size – 3*Battery Backup (less than 4 hrs)

This model doesn't tell us whether the mobile will be sold, because the output of a linear regression model is a continuous value; it is even possible to get negative values as the output. It does not translate to our actual objective of whether phones with given specifications will sell or not (a binary outcome). Similarly, if we try to predict the number of sales of this mobile in the next month, a negative value means nothing: the minimum value is 0 (no sale happened), and otherwise the value is a positive count of sales. A negative count is not meaningful.

Assumptions of GLM

Similar to the Linear Regression Model, there are some basic assumptions for Generalized Linear Models. Most are the same as for Linear Regression models, while some are modified:
• Data should be independent and random (each random variable has the same probability distribution).
• The response variable y does not need to be normally distributed, but the distribution is from an exponential family (e.g.
binomial, Poisson, multinomial, normal).
• The original response variable need not have a linear relationship with the independent variables, but the transformed response variable (through the link function) is linearly dependent on them. E.g. the Logistic Regression equation: Log odds = β0 + β1X1 + β2X2, where β0, β1, β2 are regression coefficients and X1, X2 are the independent variables.
• Feature engineering on the independent variables can be applied: instead of using the original raw independent variables, transformed variables (such as a log transformation, squaring a variable, or taking its reciprocal) can also be used to build the GLM.
• Homoscedasticity (i.e. constant variance) need not be satisfied: the error variance of the response can increase or decrease with the independent variables.
• Errors are independent but need not be normally distributed.

Components of GLM

There are 3 components in a GLM:
• Systematic Component/Linear Predictor: the linear combination of the predictors and the regression coefficients, represented as η. It indicates how the linear combination of the predictor variables enters the model.
• Random Component/Probability Distribution: the probability distribution, from the exponential family of distributions, of the response variable. This family includes the normal distribution, binomial distribution, and Poisson distribution.
• Link Function: the function g that specifies the link between the random and systematic components, g(μ) = η, relating the expected value μ of the response to the linear predictor.
The table below summarizes the probability distributions and their corresponding link functions:

Probability Distribution | Link Function
Normal Distribution      | Identity function
Binomial Distribution    | Logit/Sigmoid function
Poisson Distribution     | Log function (aka log-linear, log-link)

Different Generalized Linear Models

Commonly used models in the GLiM family include:

• Linear Regression, for continuous outcomes with a normal distribution: here we model the expected value of a continuous response variable as a function of the explanatory variables, using the identity link function (the simplest link function). If there is only 1 predictor, the model is called Simple Linear Regression; with 2 or more explanatory variables, it is called Multiple Linear Regression.
Simple Linear Regression: y = β0 + β1X1
Multiple Linear Regression: y = β0 + β1X1 + β2X2
The response is continuous; predictors can be continuous or categorical, and can also be transformed. Errors are normally distributed and the variance is constant.

• Binary Logistic Regression, for dichotomous or binary outcomes with a binomial distribution: here the log odds is expressed as a linear combination of the explanatory variables, with the logit as the link function. The logistic (sigmoid) function returns a probability as the output, which varies between 0 and 1.
Log odds = β0 + β1X1 + β2X2
The response variable has only 2 outcomes; predictors can be continuous or categorical, and can also be transformed. (Figure of the sigmoid curve omitted; source: https://en.wikipedia.org/wiki/Sigmoid_function)

• Poisson Regression, for count-based outcomes with a Poisson distribution: here the log of the expected count is expressed as a linear combination of the explanatory variables, with the log link as the link function:
log(λ) = β0 + β1X1 + β2X2, where λ is the average value of the count variable.
The response variable is a count per unit of time or space; predictors can be continuous or categorical, and can also be transformed.
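As a minimal sketch of how each link function maps the linear predictor η to the scale of the response (the coefficient values here are made up for illustration, not from any fitted model):

```python
import math

# Made-up illustration coefficients, not from any fitted model.
beta0, beta1 = -2.0, 0.5

def eta(x):
    """Linear predictor: eta = beta0 + beta1 * x."""
    return beta0 + beta1 * x

def sigmoid(e):
    """Inverse of the logit link."""
    return 1.0 / (1.0 + math.exp(-e))

# Identity link (linear regression): E[y] = eta, any real number.
identity_mean = eta(6.0)

# Logit link (logistic regression): E[y] = sigmoid(eta), a probability in (0, 1).
prob = sigmoid(eta(6.0))

# Log link (Poisson regression): E[y] = exp(eta), always a positive rate.
rate = math.exp(eta(6.0))

print(identity_mean, prob, rate)
```

Whatever value η takes, the logit link squeezes it into (0, 1) and the log link keeps it positive, which is exactly why these links avoid the negative predictions discussed earlier.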
Difference Between Generalized Linear Model and General Linear Model

The General Linear Model, also abbreviated GLM, is a special case of the Generalized Linear Model (GLiM). General Linear Models are normal linear regression models with a continuous response variable, and include many statistical models such as Simple Linear Regression, Multiple Linear Regression, ANOVA, ANCOVA, MANOVA, MANCOVA, the t-test, and the F-test. General Linear Models assume the residuals/errors follow a normal distribution; the Generalized Linear Model, on the other hand, allows the residuals to have other distributions from the exponential family.

Can Generalized Linear Models have correlated data?

For Generalized Linear Models, observations should not be correlated with each other; if the data are correlated, the model's performance will not be reliable. For this reason, GLMs are unsuitable for time series data, which usually exhibit some autocorrelation. However, some variants of the GLM have been developed to account for correlation in the data, such as Generalized Estimating Equations (GEEs) and Generalized Linear Mixed Models (GLMMs).
{"url":"https://www.mygreatlearning.com/blog/generalized-linear-models/","timestamp":"2024-11-13T11:06:19Z","content_type":"text/html","content_length":"390750","record_id":"<urn:uuid:698e1b79-8f5a-412b-a4d2-932dbc80e551>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00635.warc.gz"}
Introductory Circuit Theory
by Ernst A. Guillemin

NEW YORK • JOHN WILEY & SONS, INC.
LONDON • CHAPMAN & HALL, LIMITED

Public Domain, Google-digitized (http://hdl.handle.net/2027/mdp.39015000463490; http://www.hathitrust.org/access_use#pd-google)

Copyright, 1953, John Wiley & Sons, Inc. All Rights Reserved. This book or any part thereof must not be reproduced in any form without the written permission of the publisher.
FIFTH PRINTING, SEPTEMBER, 1958
Library of Congress Catalog Card Number: 53-11754
Printed in the United States of America

To my sophomores, whose enthusiastic cooperation has been the inspiration for this work

For the orientation of the reader of this volume, it should be pointed out that this is the first of a contemplated sequence. The second volume will begin with a study of the approximation properties and uses of Fourier series in connection with circuit problems, and will lead logically into a discussion of Fourier and Laplace transform theory, its correlation with the classical differential equation viewpoint, and its application to analysis and synthesis procedures. The remainder of this volume will deal with an introduction to synthesis on a survey level, including some conventional filter theory and the closely related topic of transmission lines. The advanced aspects of (linear, passive, bilateral) network analysis and synthesis will be the subject of one or two final volumes. Work on the second volume has been interrupted in favor of proceeding immediately with the advanced part which is more urgently needed.

The present volume, as its title states, is intended to be an introductory treatment of electric circuit theory—the text for a first course in circuits for undergraduate students majoring in electrical engineering or for physics students who need a good orientational background in the subject. It is the result of my past five years' experience in getting our E.E.
sophomores headed in the right direction and our physics sophomores provided with a broad orientation in circuit principles and a flexible attitude toward their use. I feel that circuit theory (that is, linear, passive, lumped, finite, bilateral circuit theory—hereafter called just plain circuit theory) is the electrical engineer's bread and butter, so to speak. He needs to know this subject well before he can tackle any of the other subjects in his curriculum; and it is of the utmost importance that his first course shall provide him with a set of basic concepts and ways of thinking that will not become obsolete throughout the rest of his undergraduate and graduate years. He should be started off with the same basic concepts and processes of analysis that he will be using in his doctorate research or in his professional work four or five years later. He will not understand them so well or be able to use them with the same facility as a sophomore, but he should never have to unlearn or discard any of his earlier concepts later on. His thoughts as a sophomore should sprout from the same roots that will feed and sustain his creative thinking throughout his professional life. In other words, his first course should not be a "terminal" first course but the beginning of a career.

I have always held that, where the teaching of basic concepts and procedures is concerned, no distinction should be made between the so-called "elementary" and the "advanced" methods. We refer to things as being "advanced" only so long as we understand them insufficiently well ourselves to be able to make them clear in simple terms. Once we understand a subject fully and clearly, it is no longer difficult to make it understandable to the beginner.
And, if we do not warn the beginner beforehand, he will not be able to distinguish when we are teaching him the "elementary" methods and when the "advanced." Such a distinction will reside only in the teacher's mind; to the student both will be equally novel and equally clear.

I am pointing out these things because some teachers, upon perusing the pages of this book, may consider some of the topics dealt with (as well as the general level of the work) to be somewhat more advanced than is ordinarily considered appropriate for sophomore or junior students. It is important to remember in this regard that a concept is not necessarily more difficult for the student because it happens to be unfamiliar to the teacher. Conceptually none of the material in this book is any more difficult than that involved in the differential or integral calculus which we consider quite appropriate for the sophomore level. Compared with the old-fashioned brand of circuits course, the work is more challenging, to be sure, but it is also far more interesting. To my students, who are my most ardent and reliable critics, there is nothing drab about this subject any longer. Their enthusiasm and morale are high, and the future looks bright and exciting to them. This is how things should be.

Let me be a little more specific about the ways in which the introductory treatment in this book differs from most. Primarily it hits harder at the things that are more fundamental, and attempts in every way possible to present basic ideas and principles so as to promote flexible thinking in terms of them and facile use of them in their application to a wide variety of simple practical problems.

We teachers talk much about fundamentals, but sometimes we don't get very close to them.
Take the matter of setting up equilibrium equations for an electric circuit. The very first step is to decide upon a suitable set of variables. They must be independent, and must be adequate to define the state of the network at any moment. The usual approach to the selection of variables is to choose a set of mesh currents or loop currents. But do we stop to consider how we can be sure that these will be independent and adequate, or whether they are reversibly, uniquely, and unambiguously related to the branch currents? No. We take all this for granted, and we also take for granted that the student will straighten this "obviously simple" matter out for himself. He doesn't realize it at the time, but right here he stores up a lot of trouble for himself that does not show until much later in his career when he meets a slightly unorthodox situation and suddenly discovers that he can't even get started on it. A similar and even more confusing situation exists when we attempt to choose a set of voltages as variables as in node analysis. This topic, even the instructor admits, never gets across. Needless to say, I don't think we are being very fundamental about these things. Of course, our usual defense is to say that this is not a very important aspect of circuit theory anyway; it's one of these advanced topics too highbrow for sophomores; and, besides, no practical engineer ever uses it anyway. This last remark is really one for the book. Of course he doesn't use it. How can he, when he doesn't understand what it's all about and never had it explained to him or was shown its possibilities? As for the topic being too highbrow for sophomores, this is plain nonsense (to which my sophomores will most vehemently attest).

So far as the practical potential of this item is concerned, let me mention just one of a number of pertinent incidents that occurred recently.
A group of engineers concerned with the Bonneville power development in the Pacific Northwest were having a conference here, and one of them described a new approach to the analysis problem which is particularly effective for such power-distribution networks and leads to a systematized computational procedure that beats using the old network analyzer all hollow. This "new" approach consists in picking an appropriate tree and identifying the link currents with loop currents, the tree in this instance being the distribution system and the links being the branches formed by the sources and loads. It seems that power engineers also can benefit by a more fundamental approach to circuit theory.

Another topic that is essential in getting closer and giving more emphasis to fundamentals is the use of scale factors and the process of normalization. We tell the student at the outset that we are going to
It was believed by the teacher (and still is by some today) that we must make the student aware of such practical values of voltage; that it is an important collateral function of an introductory circuits course to en- lighten our young men about the magnitudes of significant quantities in current practical use. To begin with, our students of today are not so stupid as all that. They already know that common "house current" is supplied at 110 and 220 volts, and they even know that the frequency is 60 cycles per second (except in some parts of Canada), and a host of other practical data too numerous to mention. Furthermore, these factual data about practical values should be and are far more appropriately presented in a correlated laboratory subject. It is much more important to emphasize that the assumption of 1 volt or 1 ampere as an excitation value is en- tirely sufficient to take care of any eventuality regarding source in- tensity. Moreover, if we do this, we achieve a certain simplification of the numerical work, in that we have one less factor to carry through the pertinent multiplications and divisions, and we become ever so much more clearly aware of the implication of the linear property of networks and of the distinctions to be made between power calculations and volt- age or current calculations, because the necessary factors by which the solution must afterward be multiplied are different. A similar argument may be advanced concerning the specification of frequency. Unless there seems to be an urgent need to do otherwise, it is far more instructive to assume 1 radian per second as the frequency of an a-c source. Through learning how he can subsequently adapt the response thus found to any other value of excitation frequency, the stu- dent acquires a far better appreciation of the fundamental way in which circuit behavior depends upon frequency as a parameter; and again a very material advantage is gained with regard to the numerical com- putations. 
This latter item alone is more important as a practical matter than many readers might suppose. I had occasion recently to Generated on 2013-12-07 20:41 GMT / http://hdl.handle.net/2027/mdp.39015000463490 Public Domain, Google-digitized / http://www.hathitrust.org/access_use#pd-google set up an analysis procedure on a research project in an industrial laboratory and neglected to suggest frequency scaling. The ensuing calculations fairly bristled with fantastic powers of 2*r and 10, causing all sorts of silly errors and absurd results. A program of frequency scaling that brought all the relevant parameter values (critical frequencies and such) into the range 1-10 straightened things out in a hurry. (The men in this research group, incidentally, were trained as physicists; so the lack of our teaching procedures to provide a sufficiently clear under- standing of fundamentals i8 apparently not restricted to engineering The matter of element values lies within the framework of these same discussions because of its intimate relation to frequency and amplitude scaling. Most of the problems in this book involve element values (henrys, farads, ohms) in the range 1-10. Here, again, critics will argue that these values are unrealistic and may give our students mistaken ideas concerning usual practical values. To this challenge I reply: (a) Our students are not that dumb. (6) They have lived and will live in the world of reality where they have ample opportunity to find out what "really goes." (c) They are concurrently taking a coordinated laboratory subject where they cannot help but become aware of the fact that 1 farad is a rather large capacitance, (d) It is much more important for them to learn how, for purposes of calculation, we can so normalize our problem as to bring the element values into a range where powers of 10 are absent or at least reduced to a minimum. 
In fact, it is this normalized problem that yields what are sometimes called "universal curves" representing the pertinent circuit response under a wide variety of conditions.

There are other important consequences of linearity that cannot be overstressed, such as the additive property (superposability of solutions) and the fact that excitation and response functions as a pair may be differentiated or integrated any finite number of times without their appropriateness one to the other being destroyed. But of utmost and supreme importance is the proper discussion of and approach to the impedance concept. In this connection we cannot regard transient analysis as an advanced topic to be dealt with later on. Transient analysis must precede the discussion of a-c steady-state response in order that the true character of the impedance function may be recognized. Unless this phase of an introduction to circuit theory is properly accomplished, the student will be left with a false notion about the impedance concept that he will have to unlearn later on before he can acquire a mentally clear picture of what an impedance really is and of the omnipotent role it plays in circuit behavior. To teach the impedance concept initially in
The widespread use of electronic-control devices and the increased im- portance of communication links in our fast-moving modern world have made that attitude as obsolete as the rotary converter. A discussion of the transient behavior of circuits is a must in our present physics as well as in our E.E. curricula at least. And it is wrong to think that it logically belongs in a later discussion following the introductory subject. With- out an appreciation of the natural behavior of at least some simple cir- cuits it is not possible to present the impedance concept because the natural frequencies are the quantities by which the impedance is de- termined, apart from an unimportant constant multiplier. The im- pedance is thus more intimately related to the transient behavior than to the so-called steady-state response, although it characterizes both. This intimate relation between the transient and steady-state behavior of circuits is extremely important as a fundamental principle, and we cannot claim to be hitting at fundamentals unless this item is dealt with In close relationship with this interpretation of the impedance function is the concept of complex frequency and its graphical representation in the complex frequency plane. Through this means, the evaluation of an impedance for a given applied frequency is reduced to a geometrical problem that in many practical cases can be solved by inspection, especially where reasonable approximations are allowable. Further ex- ploitation of these same ideas leads us, in a logical manner, to interpret similarly the evaluation of the constants determining the transient response, and ultimately to all of the practically useful results ordinarily regarded as being obtainable only through use of Laplace transform methods. 
Such a wealth of knowledge about circuits lies within this conceptual framework that, without question, it may be regarded as the foundation of circuit theory; yet the conventional "first course" in circuits as it is now presented (with few exceptions) makes no mention of these things. Finally the principle of duality may be mentioned as an important fundamental concept that should be prominent throughout the dis- Generated on 2013-12-07 20:42 GMT / http://hdl.handle.net/2027/mdp.39015000463490 Public Domain, Google-digitized / http://www.hathitrust.org/access_use#pd-google cussions comprising an introductory treatment of circuit theory. Here the term "throughout" is used literally, since the principle of duality is not a topic that can effectively be disposed of by a concentrated dis- cussion injected at some seemingly appropriate point, but instead is best dealt with by touching upon it again and again, bringing out each time some additional important aspect or application of this useful concept. Considering the general structure of this book, it is significant to point out that the first three chapters may be regarded as a separate unit which could be used as the text for a rather solid subject in d-c circuits or resistance circuits if this seemed appropriate. Similarly, the succeeding Chapters 4 through 8 form a closely knit unit that can be used separately. In fact this portion of the book was written in such a way that it could be used independently as the text for a one-semester subject, provided the students had previously been exposed to Kirchhoff's laws and simple resistance circuits in their physics course. 
If only one semester can be devoted to circuits (as with our physics students), then this material offers a reasonable compromise, while the availability of the discussions in the first three chapters as collateral reading material (to be consulted either concurrently or at any later time) serves as a stopgap in lieu of being able to provide a really adequate foundation at this point in the curriculum. If two semesters can be devoted to the introductory circuits subject, then Chapters 1 through 9 form an appropriate text, and Chapter 10, which rounds off and generalizes some of the previous discussions, remains as a collateral reading assignment or as a reminder that the study of circuit theory really has no ending. In any event, the student who later goes on with advanced work in network analysis and synthesis will need the material of Chapter 10 as a necessary background. Thus the book may serve a dual purpose, as indeed it has served during the period of its development, the one-semester version being appropriate for our physics students and the two-semester one for the E.E.'s. It is only fair to warn the potential reader that this book will prove only moderately satisfactory as a reference work. Thus the discussion relevant to any significant item like Thevenin's theorem, duality, the reciprocity theorem, source transformations, etc., will not be found nicely packaged within certain pages. Discussion of such items as well as that pertinent to various fundamental principles are scattered throughout the book—a first presentation here, a little more there, and still more later on. The reason for this kind of piecemeal presentation is that the book is intended to be used as a text, and the learning process is a piecemeal procedure. We like at any stage to have some repetition of what we already know, presented with the addition of a few new ideas, followed by some illustrations, and then by further additions, etc.
Another reason for this type of presentation is the dual purpose the book is meant to serve. Any repetitiousness resulting from these objectives I hope the reader will find pleasing rather than otherwise. At this point I would like to make some specific comments on the material in the various chapters and the reasons for its particular mode of presentation. The first two chapters are the result of years of practice and much troubled thinking about how best to present the subject of establishing equilibrium equations for a network, and why, in spite of all my efforts, there always remained so much confusion and so little confidence in the student's mind about this topic. At long last I think I have found the answer to this perplexing question, and Chapters 1 and 2 embody that answer. Thus the conventional approach (and I am as guilty as anyone of having followed it) attempts to present too much at once and achieves only confusion. The various methods using tensor or matrix algebra suffer from the same defect. Moreover, they fail to discuss adequately the most important issue of network geometry, and in other respects are not suitable for an introductory presentation. The process of establishing equilibrium equations involves actually four topics which individually require careful thought and concentration for clear understanding. When these are superimposed to form one conglomerate mass, it is little wonder that nothing but misunderstanding and muddled thinking results. The first topic is that of selecting an appropriate set of variables and establishing the relations between these and the branch variables. It is concerned only with the network geometry (no mention need nor should be made at this point of Kirchhoff's laws, or the volt-ampere relations for the elements, or the sources).
The topic involves a number of subtleties, and its understanding requires a reasonably good appreciation of the principle of duality, but these matters can be clarified easily if we exclude at this time everything else except the purely geometrical properties, as is done in Chapter 1. Having selected variables, we are in a position to write equilibrium equations, and so the discussion of the Kirchhoff laws and how to apply them is the next logical topic. The third topic concerns the volt-ampere relations for the branches; and now we can combine topics 1, 2, 3 to form the equilibrium equations in terms of the chosen variables. Finally comes the discussion of sources, and our problem of establishing equilibrium relations is done. Compare this with the usual procedure of writing Kirchhoff law equations immediately in terms of loop currents. Here the four steps outlined above are all tossed into the pot at once and stirred together. The result is a violent case of indigestion, unless we so restrict and simplify the network structure as to render the end result trivial. I might mention, incidentally, that the discussions in Chapter 1 are rather complete, perhaps more so than might be regarded appropriate or necessary in an introductory course. In answer to such comment I can only say that, when I wrote the chapter, I could see no point in deliberately stopping before I had finished what I had to say and what I consider to be a minimum of necessary material to form a good background on which to build later. To postpone the discussion of some of this material seemed unwise, since a subsequent continuation (perhaps in another volume) would have to repeat parts of the earlier arguments in order to achieve coherence in the presentation as a whole.
I don't think that the availability of more information than one cares to assimilate at the moment should pose any serious problem. Chapter 1 may profitably be read and reread several times by the student at various stages in his educational program. With regard to the geometrical aspects of duality, which play an important part in the topic of Chapter 1, I found it convenient to invent names for two things that to my knowledge at least had not previously been named. Thus the dual of a cut set I have named a "tie set," and the dual of a tree a "maze." These names seemed most appropriate to me, and I hope the reader will find them appropriate also. Chapter 3 is a collection of topics, all of which are directly or indirectly concerned with expediting the process of obtaining solutions. Systematic elimination procedures, solution by determinants, special artifices applicable where various types of symmetry prevail, short methods usable with ladder structures, wye-delta transformations, source transformations (which are what Thevenin's and Norton's theorems amount to), the reciprocity theorem (frequently an effective aid in obtaining a desired result), a knowledge of how power calculations must be made (the fact that these effects when caused by separate sources are not additive in contrast to currents and voltages which are), the transformations that leave power relations invariant, the equivalence relations pertinent to the tee, pi, bridged-tee and lattice structures—all these things are useful when we are dealing with the business of constructing solutions. I feel that they belong together and that it is useful to make a first presentation of them while discussing the restricted case of resistance networks where there are no other complications to interfere with their assimilation.
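As a small illustration of one of these artifices, the wye-delta equivalence, the following sketch converts three star resistances into the equivalent triangle and back. The formulas are the standard ones and the variable names are mine, supplied for illustration rather than quoted from the book.

```python
def wye_to_delta(r1, r2, r3):
    """Star resistances r1, r2, r3 -> equivalent delta (ra opposite r1, etc.)."""
    p = r1 * r2 + r2 * r3 + r3 * r1
    return p / r1, p / r2, p / r3          # ra, rb, rc

def delta_to_wye(ra, rb, rc):
    """Delta resistances -> equivalent star (r1 attached to node 1, etc.)."""
    s = ra + rb + rc
    return rb * rc / s, rc * ra / s, ra * rb / s   # r1, r2, r3

# A round trip recovers the original star exactly:
r1, r2, r3 = 1.0, 2.0, 3.0
ra, rb, rc = wye_to_delta(r1, r2, r3)
recovered = delta_to_wye(ra, rb, rc)
assert all(abs(a - b) < 1e-9 for a, b in zip(recovered, (r1, r2, r3)))
```

Either form presents identical resistances between each pair of external terminals, which is what licenses the substitution inside a larger network.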
Although here, as in Chapter 1, the treatment may seem to be somewhat more inclusive than is essential at an introductory level, no serious difficulty need thereby be created, since the relative emphasis given to various topics can always be appropriately adjusted. In this chapter an attempt is made to have the various topics introduce themselves logically rather than be forced upon the reader's attention in a haphazard fashion. Thus, having discussed network geometry, and having shown how a numerical set of equations may be solved by systematically eliminating variables, what is more logical than for the reader to become curious about the geometrical implications of this elimination process? The elimination of a node potential should correspond geometrically to the elimination of a node, and the elimination of a mesh current to the elimination of the pertinent mesh. Such a correlation, which is indeed possible, not only leads logically to a presentation of wye-delta or delta-wye transformations and their generalizations, but does so with a minimum of disagreeable algebra, as contrasted with other presentations of this item, particularly in the general star-mesh case. A particularly simple proof of the reciprocity theorem which likewise fits in with the pattern set by the systematic elimination procedure is achieved through showing that the symmetry of the parameter matrix characterizing the equilibrium equations is unchanged by a typical step in this procedure.
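That closing argument invites a numerical check. The sketch below (matrix values mine, chosen arbitrarily but symmetric) performs one typical elimination step, A'_ij = A_ij - A_ik A_kj / A_kk, and confirms that the symmetry of the parameter matrix survives it, which is the heart of this proof of reciprocity.

```python
def eliminate(A, k):
    """Eliminate variable k from the system matrix A (a list of row lists)."""
    n = len(A)
    idx = [i for i in range(n) if i != k]
    return [[A[i][j] - A[i][k] * A[k][j] / A[k][k] for j in idx] for i in idx]

# A symmetric loop-parameter matrix for a three-loop resistance network:
A = [[4.0, -1.0, -2.0],
     [-1.0, 5.0, -1.5],
     [-2.0, -1.5, 6.0]]

B = eliminate(A, 2)          # eliminate the third loop current
# Symmetry is preserved by the elimination step:
assert all(abs(B[i][j] - B[j][i]) < 1e-12 for i in range(2) for j in range(2))
```

Since any number of variables can be eliminated by repeating the step, the reduced two-terminal-pair description inherits the symmetry of the full system, and reciprocity follows.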
Chapter 4, which introduces the volt-ampere relations for the inductance and capacitance elements and shows that inductance networks and capacitance networks are dealt with by means of the same methods applicable to resistance networks, is primarily concerned with a discussion of the unit step and impulse functions, in terms of which various more arbitrary source functions and switching operations may conveniently be described. In connection with the impulse function, it has been stated that the subtleties involved in its interpretation are too difficult for a class at the sophomore level and that the concept is too abstract and unreal. Neither criticism is consistent with our prevailing attitude. The limit process involved in the definition of the impulse is precisely of the same nature as that pertinent to the formation of a derivative or of an integral. If the comprehension of this sort of limit process is too much for a sophomore, then we shall also have to give up trying to teach him the differential calculus. As for the impulse being unreal, nothing could be further from the truth. In our daily life we frequently see things bumping into other things. Take a bat hitting a baseball for instance. The ball changes its velocity from minus to plus in a wink—and that's short enough (compared with the time of flight of the ball) to be negligible. For all practical purposes the ball acquires its kinetic energy of flight in no time.
If we want to be fussy about this situation and say that the nonzero time of impact must be considered and so we really are not dealing with an impulse in its true sense, then to be consistent we should be equally fussy about the step function, because a change in value (of a force for instance) cannot occur instantly either; yet we no longer object to step functions in our engineering analysis, because we have lived with this concept longer and are used to it. Our mathematical methods of analysis always represent an idealization of the true state of affairs, and the impulse function involves nothing different in this respect from all the other mathematical concepts that we are accustomed to use. Engineering analysis involving singularity functions of all orders is becoming so common today that we can no longer neglect making our students familiar with them at an early stage. My chief reason for introducing the impulse as well as the step when I first wrote this text material was the desire to use Thevenin's and Norton's theorems with capacitance and inductance elements in the transient state. Since these elements involve differentiation and integration, it was clear that a step function might have to be differentiated in the course of solving a problem by these means. To deprive the student of this flexible way of dealing with transient problems, I felt, was not in keeping with my basic objectives, and so I moved the presentation of singularity functions from the graduate curriculum into the sophomore year. It might also be pointed out that the early introduction of these concepts into the study of circuit theory develops a more open-minded attitude on the part of the student toward characteristic behavior patterns.
In my student days, for example, we were told that the current in an inductance just had to be continuous. Though this is true in most practical situations, it is much better not to make such sweeping assertions. It is far more instructive to show the student that a discontinuous current can be produced in an inductance only through the application of a voltage impulse but that physical conditions may sometimes approximate this kind of excitation function. Chapter 5 deals with the transient response of simple circuits, making use of all the artifices mentioned above. The primary objective is to give the student a physical understanding of transient response in first- and second-order cases, together with a facile way of dealing with the pertinent mathematical relationships, so that he will develop an easy and circumspect approach to problems of this sort, rather than always use the same mathematically ponderous and slowly moving machinery of the "general case." In this respect I have seen some awful crimes committed, particularly by students who have learned the Laplace transform method. They are determined to Laplace-transform everything that comes their way, and they get so they can't solve the simplest problem without this machinery. They can't write down the discharge of a capacitor through a resistance without Laplace-transforming the poor thing to death. I don't want any of my students to get into a fix like that. I want them to know their simple transients as well as they know their own names, and Chapter 5 aims to give them the kind of workout that can accomplish this end. In Chapter 6 we come to la pièce de résistance, as the French would say.
Here we introduce the sinusoid, the notion about complex frequency, the impedance concept, its interpretation in terms of the natural frequencies of the circuit, graphical portrayal of the pole-zero pattern in the s-plane, evaluation of impedances through geometrical visualization of their frequency factors, interpretation of resonance as a near coincidence between applied and natural frequencies, reciprocal and complementary impedances, magnitude and frequency scaling, vector diagrams, and other related aspects pertinent to this general theme. Transients and steady states are stirred together into a pretty intimate mixture, with the impedance function keeping order and clarifying all of the pertinent interrelationships. The circuits dealt with are for the most part still the simple ones touched upon in Chapter 5 so that the student will have no difficulty following the mathematical steps while getting used to the many new concepts and methods of interpretation presented here. A few more elaborate element combinations, such as the constant-resistance networks and double-tuned circuits, are discussed toward the end of this chapter in order to show the student how simple a matter it is to deal with such situations in terms of the rather powerful tools which the earlier discussions have placed at his command. Chapter 7 introduces a formal discussion of energy and power relations. Instead of the conventional restriction in the derivation of pertinent quantities to in-phase and out-of-phase components of current and voltage, an attempt is made to develop a more physical appreciation of these phenomena through specific attention to the stored energy functions and their significance in the sinusoidal steady state, along with the role played by the dissipation function.
Thus the definition of reactive power as the product of the voltage and the quadrature component of current leaves the student with no physical picture of what this quantity is or why it exists and needs to be considered. When it is seen to be proportional to the difference between the average values of the stored energies, its significance begins to be appreciated in physical terms. Through expressing impedances in terms of energy functions, through their determination by these means, and through the ability thus to perceive from a single-frequency computation the whole course of their behavior in a given vicinity (for instance, the determination of the impedance behavior in a resonance vicinity and computation of the factor Q), the student is given a glimpse of how energy and power considerations may be useful in a much broader sense than merely for the computation of energy consumption. The object of Chapter 8 is to provide the means for dealing with more extensive and more random circuits in the sinusoidal steady state than the simple ones so far considered. Most important in this regard is the consideration of mutual inductive coupling. The traditional stumbling block involved in the treatment of random situations, namely, the determination of algebraic signs, is overcome by a systematic approach which is straightforward in its use for the computation of pertinent parameter matrices on both the loop and node bases.
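The energy interpretation of reactive power mentioned earlier is easy to verify numerically. In this sketch (element values mine), the reactive power of a series RLC branch computed conventionally, as the squared rms current times the reactance, equals 2ω times the difference between the average magnetic and electric stored energies.

```python
# Series RLC branch carrying a sinusoidal current of amplitude Im at frequency w.
R, L, C = 3.0, 0.2, 1e-3
w, Im = 50.0, 2.0

X = w * L - 1 / (w * C)                  # reactance seen by the source
Q_conventional = (Im ** 2 / 2) * X       # I_rms^2 * X

Wm_avg = 0.25 * L * Im ** 2              # time average of (1/2) L i(t)^2
We_avg = 0.25 * Im ** 2 / (w ** 2 * C)   # time average of (1/2) q(t)^2 / C
Q_energy = 2 * w * (Wm_avg - We_avg)

assert abs(Q_conventional - Q_energy) < 1e-9
```

The sign of the result tells which store dominates: here the electric energy exceeds the magnetic, so the reactive power comes out negative (capacitive).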
In Chapter 9 the subject of transient response is generalized, first, through consideration of the so-called a-c transients and, second, through development of the complete solution for any finite lumped network, leading to a result that is identical in form with, but much more simply derived than, that alternately obtainable through Laplace transform methods supplemented by complex integration. It is in these discussions that the concept of complex frequency is fully developed and illustrated by a consideration of the exact coincidence between excitation and natural frequencies (perfect resonance). It is shown how all the many useful theorems ordinarily derived only by Fourier and Laplace transform methods are easily and rigorously established by inspection of the form of the solution for the general case, and these theorems (or properties as I prefer to call them) are discussed and illustrated by means of numerous examples. These examples were constructed by starting from assumed pole-zero configurations for the desired transfer functions and synthesizing the pertinent networks. Thus, for the first time in the history of textbooks on transient analysis, the reader is presented with illustrative examples involving higher than second-order systems. He will find a multiple-order pole problem other than the hackneyed RLC circuit for the critically damped case; and he will find examples that are representative of useful response characteristics, as well as illustrative of the theoretical analysis that precedes them. Before the advent of synthesis it was not possible to construct really interesting illustrative examples. If a circuit with more than two or three meshes was assumed, the solution of a characteristic equation of high degree was immediately involved, and the resulting random character of the response obtained after much disagreeable work was hardly representative of anything interesting.
Being able to start from a pole-zero pattern and work in both directions (to a network on the one hand, and to its transient response on the other) opens up a host of possibilities that were not available to the textbook writer of the past. Within a limited space, I have made the most of this situation in working out a set of illustrative examples for Chapter 9. Chapter 10, as mentioned previously, supplies a certain generality and completeness to the derivation of equilibrium equations and energy relations that have been discussed already but have not been established in this way. Thus, when the reader reaches this chapter, he will be familiar with all the topics it contains except the mathematical methods needed to state them in perfectly general and yet compact and concise form. The final item is a critical discussion of the principle of duality and of the results derivable from it in the light of the broader viewpoint just presented. The story of network theory is, of course, nowhere nearly completed at this point, but, since one volume cannot contain all of it, this seems to be a reasonable point at which to stop. I should probably say something about historical notes (who did what, when, and why) and references to source material and all that, because with few exceptions I haven't done any of this sort of thing. As a matter of fact, if one takes the works of Kirchhoff, Helmholtz, Cauchy, Lord Rayleigh, and maybe a few others of similar standing and vintage, there isn't much else that is needed to establish the background for network theory. If a student has the inclination to "do some digging" (most of them prefer not to) to ferret out historical facts, he will have no difficulty finding the bibliographical help and the encouragement from his instructor that he needs.
I do not mean to belittle the importance of having some historical background on the evolution of science and mathematics (and network theory), but the wherewithal to go into this aspect of things is already available. I would rather confine my limited energies (and heaven knows they are limited!) to making available the things that are not now available. One final point. In the teaching of this subject I regard it as important to remind the student frequently that network theory has a dual character (no connection with the principle of duality); it is a Dr. Jekyll-Mr. Hyde sort of thing; it is two-faced, if you please. There are two aspects to this subject: the physical and the theoretical. The physical aspects are represented by Mr. Hyde—a smooth character who isn't what he seems to be and can't be trusted. The mathematical aspects are represented by Dr. Jekyll—a dependable, extremely precise individual who always responds according to established custom. Dr. Jekyll is the network theory that we work with on paper, involving only pure elements and only the ones specifically included. Mr. Hyde is the network theory we meet in the laboratory or in the field. He is always hiding parasitic elements under his jacket and pulling them out to spoil our fun at the wrong time. We can learn all about Dr. Jekyll's orderly habits in a reasonable period, but Mr. Hyde will continue to fool and confound us until the end of time. In order to be able to tackle him at all, we must first become well acquainted with Dr. Jekyll and his orderly ways. This book is almost wholly concerned with the latter. I am content to leave Mr. Hyde to the boys in the laboratory.
And, speaking of the "boys in the laboratory," that is to say, the able and cooperative staff who assist in administering this material to our undergraduate students, I wish here to thank them one and all for their many helpful suggestions and their enthusiastic cooperation throughout the period of this "five-year plan." I cannot name one with- out naming them all, and I cannot name them all because I can't be sure that I won't miss one or two. So they'll all have to remain nameless; however, for the time being only. It won't be long before each one makes a name for himself as some have already. And that is all, except to wish you all a pleasant voyage—through the pages of this book and wherever you may be going. E. A. Guillemin Wellasley Hills Generated on 2013-12-07 21:10 GMT / http://hdl.handle.net/2027/mdp.39015000463490 Public Domain, Google-digitized / http://www.hathitrust.org/access_use#pd-google November 1968 Generated on 2013-12-07 21:11 GMT / http://hdl.handle.net/2027/mdp.39015000463490 Public Domain, Google-digitized / http://www.hathitrust.org/access_use#pd-google Introduction 1 1 Network Geometry and Network Variables 5 Art. I. The Classification of Networks S 2. The Graph of a Network 5 3. The Concept of a "Tree" 7 4. Network Variables J 8 5. The Concept of Loop Currents; Tie Sets and Tie-Set Schedules ./ 10 6. The Concept of Node-Pair Voltages; Cut Sets and Cut-Set Schedules 17 7. Alternative Methods of Choosing Current Variables 23 8. Alternative Methods of Choosing Voltage Variables 33 9. Duality 42 10. Concluding Remarks 51 Problems 58 2 The Equilibrium Equations 64 Art. 1. Kirchhoff's Laws 64 2. Independence among the Kirchhoff Law Equations 68 3. The Equilibrium Equations on the Loop and Node Bases 71 4. Parameter Matrices on the Loop and Node Bases 77 5. Regarding the Symmetry of Parameter Matrices 79 6. Simplified Procedures That Are Adequate in Many Practical Cases 81 7. Sources 86 8. 
Summary of the Procedures for Deriving Equilibrium Equations 96 9. Examples 99 Problems 105 3 Methods op Solution and Related Topics 112 Art. 1. Systematic Elimination Methods 112 2. Use of Determinants 116 3. Methods Applicable to Ladder and Other Special Network Con- figurations 121 4. Network Transformations; Wye-Delta (F-A) Equivalents 127 5. Thevenin's and Norton's Theorems 138 6. The Reciprocity Theorem 148 7. Driving-Point and Transfer Functions 153— 8. Common Network Configurations and Their Equivalence Relations 161 Generated on 2013-12-07 21:11 GMT / http://hdl.handle.net/2027/mdp.39015000463490 Public Domain, Google-digitized / http://www.hathitrust.org/access_use#pd-google 9. Power Relations, and Transformations under Which They Remain Invariant 169 Problems 179 xxiv CONTENTS 4 Circuit Elements and Source Functions 188 Art. 1. The Volt-Ampere Relations of the Elements 188 2. Voltage and Current Sources 190 3. The Family of Singularity Functions; Some Physical Interpreta- tions 196 4. Single-Element Combinations 203 Approximate Formulas for Parameters of Simple Geometrical Configura- tion 211 Problems 218 5 Impulse and Step-Function Response of Simple Circuits 222 Art. 1. The Series RL Circuit; General Properties of the Solution 222 2. Correlation between Mathematical and Physical Aspects 230 3. Source Transformations; TheVenin's and Norton's Theorems and Their Uses 235 4. The Dual of the Series RL Circuit 241 5. The Series RLC Circuit 243 6. The Dual of the Series RLC Circuit 251 7. Consideration of Arbitrary Initial Conditions 253 Summary Regarding the Transient Response of One-, Two-, and Three-Ele- ment Combinations 257 Problems 262 6 Behavior of Simple Circuits in the Sinusoidal Steady State 270 Art. 1. Why Sinusoids Play Such a Predominant Part in the Study of Elec- trical Networks 270 2. Complex Representation of Sinusoids 273 3. Elaborations upon the Impedance Concept 282 4. Interpretation of Impedance in the Complex Frequency Plane 286 5. 
Impedance and Admittance Functions for Simple Circuits 289 6. The Phenomenon of Resonance 297 7. Rectangular versus Polar Forms of Impedance and Admittance Functions; an Alternative Interpretation of Resonance 301 8. Reciprocal and Complementary Impedances and Admittances 305 9. Magnitude and Frequency Scaling 309 10. Vector Diagrams 311 Generated on 2013-12-07 21:11 GMT / http://hdl.handle.net/2027/mdp.39015000463490 Public Domain, Google-digitized / http://www.hathitrust.org/access_use#pd-google 11. More Elaborate Impedance Functions; Their Properties and Uses 315 Problems 325 7 Energy and Power in the Sinusoidal Steady State 340 Art. 1. Energy in the Storage Elements 340 2. Energy in the Storage Elements When Voltage and Current Are Sinusoids 342 3. Energy and Power Relations in a Complete Circuit 343 4. Active and Reactive Power; Vector Power 348 5. Root-Mean-Square, or Effective Values 352 6. Impedance or Admittance in Terms of Energy Functions 354 7. Computation of the Energy Functions for More Complex Networks 357 8. Some Illustrative Examples 358 Problems 362 CONTENTS xxv 8 More General Networks in the Sinusoidal Steady State 366 Art. 1. The Steady-State Equilibrium Equations 366 2. Use of Parameter Matrices 371 3. Duality Again 373 4. Mutual Inductance and How to Deal with It 374 5. Coupling Coefficients 380 6. Forming the Equilibrium Equations When Mutual Inductances Are Present 382 7. Computation of Driving-Point and Transfer Impedances for Lad- der Networks 385 8. Networks Embodying Symmetry in Structure and Source Distri- bution—Polyphase Circuits 388 Problems 392 9 Additional Topics Dealing with Steady-State and Transient Be- havior of Lumped Linear Circuits 401 Art. 1. Transient Response with Alternating Excitation 401 2. Further Exploitation of the Concepts of Complex Frequency and Impedance 412 3. Frequency and Time Domains 414 4. The Complete Solution for Any Finite Lumped-Constant Network 419 5. 
The Derivation of Equilibrium Equations for Driving-Point and Transfer Situations; Reciprocity Again 426 6. Properties of the General Solution 431 7. Illustrative Examples 440 8. Driving-Point and Transfer Functions 462 9. Arbitrary Initial Conditions 468 Problems 469 10 Generalization of Circuit Equations and Energy Relations 483 Art. 1. Use of Matrix Algebra 483 2. Branch-Parameter Matrices and Volt-Ampere Relations 491 3. Equilibrium Equations on the Node Basis 496 4. Equilibrium Equations on the Loop Basis 499 5. Remarks and Examples 502 6. Energy Functions 510 Generated on 2013-12-07 21:11 GMT / http://hdl.handle.net/2027/mdp.39015000463490 Public Domain, Google-digitized / http://www.hathitrust.org/access_use#pd-google 7. Equivalence of Kirchhoff and Lagrange Equations 520 8. Relation to Impedance Functions 522 9. Duality Once More 535 Problems 540 Generated on 2013-12-07 21:12 GMT / http://hdl.handle.net/2027/mdp.39015000463490 Public Domain, Google-digitized / http://www.hathitrust.org/access_use#pd-google Although the discussions in this book, and those in the ones to follow it, are restricted to the simplest class of electrical networks, the reader should not expect that he will find them to be either simple or restricted as to scope and practical importance. Regarding their importance, it may be pointed out that an understanding of the theory of this simplest class of networks is an indispensable prerequisite to the study of all others; and as to scope it is significant to observe that because of their simplicity one is able to develop the theory of this class of networks to a remarkable degree of completeness. As a result, this theory plays a dominant role in the study and development of almost all electrical devices and systems, and is therefore as fundamental to the intellectual equipment of the electrical engineer as is a knowledge of mathematics to the physicist. With these remarks the primary mission of this introduction is accom- plished. 
The following paragraphs are intended to provide the uninitiated reader with a bit of an idea as to what an electrical network is, and to define the simple class of networks mentioned above. Actually it is illusory to suppose that the reader who is totally unacquainted with this subject will derive much benefit from an exposure to such a definition of terms, since he will understand them clearly only after he has gained a considerable background in network theory. On the other hand, such remarks may provide the reader with a sufficient initial orientation to enable him to gain a proper perspective as he progresses with the studies that lie ahead. The relevant operating characteristics of a large proportion of all electrical devices are adequately described through a knowledge of currents and voltages as time functions at appropriately selected points or point pairs. The significant behavior of an electronic amplifier, for example, is characterized in terms of its volt-ampere relations at specified input and output terminal pairs; the performance characteristics of a transmission line for the distribution of electric energy or for the conveyance of electric signals representing coded information are expressible in terms of relative voltage and current values at appropriate points along the line; the behavior of a motor-generator set is conveniently studied in terms of the voltage and current input to the motor relative to the voltage and current output from the generator; the electrical characteristics of an ordinary light bulb are adequately described in terms of the voltage-current relations at its terminals.
In some of these devices, other features besides the electrical ones are usually of interest also, as are, for example, the mechanical phenomena involved in the operation of the motor-generator set, or the light spectrum emitted from the light bulb referred to above. A separation of the nonelectrical from the purely electrical studies in such cases is, however, usually desirable, and can always be accomplished under an appropriately chosen set of environmental conditions. It may additionally be necessary to make simplifying approximations and idealizations in order to render the electrical features of the problem manageable in reasonable terms, but, when this is done, the resulting representation of the original device is commonly described by the term "electric circuit" or "network." While the electric circuit may thus be an idealized or skeletonized representation of the electrically relevant features of some physical unit in which these circuit characteristics are only incidental or at most partially influential in controlling its structure and behavior, there are important instances where the circuit is the whole device and its function is that of a controlling unit in a larger system. The electric "wave filters" and "corrective networks" essential to long-distance telephone communication circuits, or the "control networks" in servo mechanisms are examples of this sort. Here the electric circuit no longer plays an incidental role but takes its place along with other important electromechanical or electronic devices as a highly significant unit or building block essential to the successful operation of modern power, communication, or control systems. Dominant in their effect upon the volt-ampere behavior of an electric circuit are its energy-storage and energy-dissipation properties.
Energy storage takes place in the electric and magnetic fields associated with the network, while energy dissipation is practically ever-present because of resistance offered to the flow of electric charge through conductors. Three things, therefore, dominate in molding the electrical behavior of a network: namely, the two associated fields and the dissipative character of its various conducting paths. Although their effects are physically superimposed throughout any actual device, the idealization referred to above frequently permits one to assign them to separate portions of the physical system and to regard these portions as having negligible dimensions. Thus one speaks of certain "lumped" parts as having resistive characteristics alone, others as having influence only upon the associated magnetic fields, and a third group related solely to the pertinent electric fields. These parts are spoken of as the lumped parameters or elements of a circuit. They are of three kinds: the resistance parameter or dissipative element, the inductance parameter which is related to the associated magnetic fields, and the capacitance parameter appropriate to the pertinent electric fields. Physical embodiments of these network parameters or elements (appearing wherever their occurrence is deliberate rather than incidental) are familiar to the reader as resistors (usually made of metallic wire having poor conductivity), inductors such as wire coils, and capacitors (frequently in the form of metallic sheets or plates separated by a thin film of insulating material). It is important to observe that these physical embodiments are not exact representations for the separate circuit elements which, by definition, are "pure" in the sense that each one contains none of the other two.
In any physical resistor, for example, some inductive and capacitive effects are unavoidable, as are resistive and capacitive effects in a physical inductor, etc. These frequently unwanted effects present in physical resistors, inductors, and capacitors are commonly referred to as "parasitics." Since any physical device with its known parasitic elements can always be represented to a sufficient degree of approximation in terms of theoretically pure elements, a method of circuit analysis based upon pure elements alone is both adequate and useful. The relationship of voltage across an element to the current through it, which is commonly referred to as its pertinent volt-ampere relationship, is in most cases a linear one (throughout reasonable operating ranges), and the appropriate constant of proportionality is designated as the "value" of that element. There are devices in which the values of network elements are functions of the voltage across them or of the current carried by them. For example, an iron-cored coil represents an inductance element whose value is dependent upon the coil current; an electron tube represents a resistance which varies with the applied voltage. Such elements are said to be nonlinear because the voltage is not linearly proportional to the associated current (or to the current derivative or integral, whichever is pertinent). It is important to distinguish networks that contain such elements from those that do not, and to recognize significant differences in their response characteristics, for these differences form the basis upon which the selection of specific types of elements is made in the practical use of circuits.
There are some devices, linear as well as nonlinear, whose voltage or current transmission properties depend upon their orientation with respect to the points of excitation and observation. These are spoken of as being unilateral devices or elements; and wherever the usual ones need to be distinguished from these, they are referred to as bilateral. Another important distinction having a bearing upon network behavior is made according to whether the network does or does not contain energy sources or constraints other than those explicitly given by the associated excitation. If it does, then one may expect at times to get more power out than one puts into the network, or to obtain a continued response even in the absence of a power input. When a network contains such implicit energy sources and/or constraints, it is called active; otherwise it is referred to as being passive. The finite, lumped, linear, passive bilateral network is the simplest regarding methods of analysis needed in a study of its behavior under various operating conditions. To an introductory understanding of the physical and mathematical aspects of this type of network, the discussions of the present volume are directed.

Network Geometry and Network Variables

1 The Classification of Networks

Linear passive networks are distinguished from one another according to the kinds of elements that are involved, and in the manner of their interconnection. Thus a given network consisting of resistance elements alone is referred to as a resistance network; and inductance or capacitance networks are similarly defined as such in which only inductances or capacitances are involved.
Next in order of complexity are the so-called two-element types, more precisely the LC networks (those containing inductance and capacitance elements but, by assumption, no resistances), the RC networks in which inductive effects are absent, and RL networks in which capacitive effects are absent. The RLC network then represents the general case in the category of linear passive networks.

2 The Graph of a Network

Quite apart from the kinds of elements involved in a given network is the all-important question of network geometry, which concerns itself solely with the manner in which the various elements are grouped and interconnected at their terminals. In order to enhance this aspect of a network's physical makeup, one frequently draws a schematic representation of it in which no distinction is as yet made between kinds of elements. Thus each element is represented merely by a line with small circles at the ends denoting terminals. Such a graphical portrayal, showing the geometrical interconnection of elements only, is called a graph of the given network. Figure 1 shows an example of a network as it is usually drawn so as to distinguish the various kinds of elements [part (a)] and how this same network appears when only its geometrical aspects are retained [the graph of part (b)]. The numbers associated with the various branches are added for their identification only. The terminals of the branches (which are common to two or more branches where these are confluent) are referred to as nodes.

Fig. 1. A network schematic and its graph. (Legend: the schematic symbols for the resistance, inductance, and capacitance elements.)

There are situations in which various parts of a network are only inductively connected, as in part (a) of Fig. 2 where two pairs of mutually coupled inductances are involved.
Here the corresponding graph (shown in part (b) of Fig. 2) consists of three separate parts; and it is seen also that a node may be simply the terminus of a single branch as well as the point of confluence of several branches.

Fig. 2. The schematic and graph of a network consisting of several separate parts: (a) given network; (b) network graph.

With the graph of a network there are thus associated three things or concepts: namely, branches, nodes, and separate parts. The graph is the skeleton of a network; it retains only its geometrical features. It is useful when discussing how one should best go about characterizing the network behavior in terms of voltages and currents, and in deciding whether a selected set of these variables is not only independent but also adequate for the unique characterization of the state of a network at any moment.

Fig. 3. The graph of Fig. 2 coalesced into one part.

In this regard it is apparent that an economy can be effected in situations like the one in Fig. 2 through permitting one node in each of the separate parts to become coincident, thus uniting these parts, as is shown in the graph of Fig. 3. Except for the fact that the superimposed nodes are constrained to have the same electric potential, no restrictions are imposed upon any of the branch voltages or currents through this modification, which reduces the total number of nodes and the number of separate parts by equal integer values. In subsequent discussions it is thus possible without loss in generality to consider only graphs having one separate part.

3 The Concept of a "Tree"

The graph of a network places in evidence a number of closed paths upon which currents can circulate.
This property of a graph (that it contain closed paths) is obviously necessary to the existence of currents in the associated network. It is a property that can be destroyed through the removal of judiciously chosen branches.

Fig. 4. A graph and two possible trees (solid lines): parts (a), (b), (c).

In Fig. 4 the graph of a given network is shown in part (a), and again in parts (b) and (c) with some of the branches represented by dotted lines. If the dotted branches were removed, there would remain in each of the cases shown in (b) and in (c) a graph having all of the nodes of the original graph (a) but no closed paths. This remnant of the original graph is called a "tree" for the reason that its structure (like that of any tree) possesses the significant property of having no closed paths. More specifically, a tree is defined as any set of branches in the original graph that is just sufficient in number to connect all of the nodes. It is not difficult to see that this number is always nt − 1, where nt denotes the total number of nodes. For, if we start with only the nodes drawn and no branches, it is clear that the first added branch connects two nodes, but thereafter one additional branch is needed for each node contacted. If no more than the minimum number of nt − 1 branches are used to connect all of the nodes, then it is likewise clear that the resulting structure contains no closed paths, for the creation of a closed path involves the linking of two nodes that are already contacted, and hence involves the use of more branches than are actually needed merely to connect all of the nodes. For a given network graph it is possible to draw numerous trees, since the process just described is not a unique one.
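The tree-building process just described — add branches one at a time, rejecting any branch that would close a path — can be sketched with a small routine. This is a minimal sketch, not from the book; the example graph (5 nodes, 8 branches) is an illustrative assumption, not the book's Fig. 4.

```python
def pick_tree(nodes, branches):
    """Greedily select branches forming a tree: all nodes connected, no closed paths.

    Returns the indices of the chosen tree branches; the remaining branches are the links.
    """
    parent = {n: n for n in nodes}

    def find(x):
        # follow parent pointers to the representative node of x's connected group
        while parent[x] != x:
            x = parent[x]
        return x

    tree = []
    for k, (a, b) in enumerate(branches):
        ra, rb = find(a), find(b)
        if ra != rb:               # branch contacts a not-yet-connected node group: no path closed
            parent[ra] = rb
            tree.append(k)
    return tree

# Illustrative graph (an assumption for the example): 5 nodes, 8 branches.
nodes = [0, 1, 2, 3, 4]
branches = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 0), (4, 1), (4, 2), (4, 3)]
tree = pick_tree(nodes, branches)
```

Any tree found this way has nt − 1 = 4 branches, so b − (nt − 1) = 4 branches are left over as links, in agreement with the counting argument above.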
Each tree, however, connects all of the nt nodes, and consists of nt − 1 branches, which are referred to, in any given choice, as the tree branches. The remaining branches, like the ones shown dotted in parts (b) and (c) of Fig. 4, are called links. If there are l of these, and if the total number of branches in the network graph is denoted by b, then evidently

l = b − (nt − 1) = b − nt + 1 (1)

an important fundamental relation to which we shall return in the following discussions.

4 Network Variables

The response or behavior of a network is completely known if the currents and the voltages in all of its branches are known. The branch currents, however, are related to the branch voltages through fundamental equations that characterize the volt-ampere behavior of the separate elements. For instance, in a resistance branch the voltage drop (by Ohm's law) equals the current in that branch times the pertinent branch resistance; in a capacitance branch the voltage equals the reciprocal capacitance value times the time integral of the branch current; and in an inductance branch the voltage is given by the time derivative of the current with the inductance as a proportionality factor. Although the last-mentioned relations become somewhat more elaborate when several inductances in the network are mutually coupled (as will later be discussed in detail), their determination in no way involves the geometrical interconnection of the elements. One can always, in a straightforward manner, relate the branch voltages directly and reversibly to the branch currents. We may, therefore, regard either the branch currents alone or the branch voltages alone as adequately characterizing the network behavior.
If the total number of branches is denoted by b, then from either point of view we have b quantities that play the role of unknowns or variables in the problem of finding the network response. We shall now show that either set of b quantities is not an independent one, but that fewer variables — l on a current basis, n = nt − 1 on a voltage basis — suffice to characterize the network equilibrium. If in a given network a tree is selected, then the totality of b branches is separated into two groups: the tree branches and the links. Correspondingly, the branch currents are separated into tree-branch currents and link currents. Since a removal or opening of the links destroys all closed paths and hence by force renders all branch currents zero, it becomes clear that the act of setting only the link currents equal to zero forces all currents in the network to be zero.* The link currents alone hold the power of life and death, so to speak, over the entire network. Their values fix all the current values; that is, it must be possible to express all of the tree-branch currents uniquely in terms of the link currents. The inference to be drawn from this argument is that, of the b branch currents in a network, only l are independent; l is the smallest number of currents in terms of which all others can be expressed uniquely. This situation may be seen to follow from the fact that all currents become zero when the link currents are zero. Thus it is clear that the number of independent currents is surely not larger than l, for, if one of the tree-branch currents were claimed also to be independent, then its value would have to remain nonzero when all the link currents are set equal to zero, and this condition is manifestly impossible physically.
It is equally clear on the other hand that the number of independent currents is surely not less than l, for then it would have to be possible to render all currents in the network zero with one or more links still in place, and this result is not possible because closed paths exist so long as some of the links remain. Thus, in terms of currents, it must be possible to express uniquely the state of a network in terms of l variables alone. As will be shown later, these variables may be any appropriate set of link currents (according to the specific choice made for a tree), but more generally they may be chosen in a large variety of ways so that numerous specific requirements can be accommodated. Analogously one may regard the branch voltages as separated into two groups: the tree-branch voltages and the link voltages. Since the tree branches connect all of the nodes, it is clear that, if the tree-branch voltages are forced to be zero (through short-circuiting the tree branches, for example), then all the node potentials become coincident, and hence all branch voltages are forced to be zero. Thus, the act of setting only the tree-branch voltages equal to zero forces all voltages in the network to be zero. The tree-branch voltages alone hold the power of life and death, so to speak, over the entire network.

* In these considerations it is not necessary that we concern ourselves with the manner in which the network is energized, although some sort of excitation is implied, since all currents and voltages would otherwise be zero regardless of whether the links are removed or not. If the reader insists upon being specific about the nature of the excitation, he may picture in his mind a small boy tossing coulombs into the capacitances at random intervals.
It must be possible, therefore, to express all of the link voltages uniquely in terms of the tree-branch voltages. Exactly n of the branch voltages in a network are independent, namely, those pertaining to the branches of a selected tree. Surely no larger number than this can be independent, because one or more of the link voltages would then have to be independent, and this assumption is contradicted by the fact that all voltages become zero through short-circuiting the tree branches alone. On the other hand, no smaller number than n voltages can form the controlling set, for it is physically not possible to force all of the node potentials to coincide so long as some tree-branch voltages remain nonzero.

Fig. 6. Closed paths or loops corresponding, respectively, to the three trees shown in Fig. 5: parts (a), (b), (c).

Each link current is thus identified with a loop current; the remaining tree-branch currents are clearly expressible as appropriate superpositions of these loop currents, and hence are uniquely determined by the link currents, as predicted earlier. If the branch currents in the network graph of Fig. 5(a) are denoted by j1, j2, ..., j8, numbered to correspond to the branch numbering, and if the loop currents of the graph of Fig. 6(a) are denoted by i1, i2, i3, i4, then we can make the identifications

j1 = i1, j2 = i2, j3 = i3, j4 = i4 (3)

Through comparison of Figs. 5(a) and 6(a) one can then readily express the remaining tree-branch currents as appropriate superpositions of the loop currents, thus,

j5 = i1 − i4, j6 = i2 − i1, j7 = i3 − i2, j8 = i4 − i3 (4)

or, being mindful of the relations 3, we have

j5 = j1 − j4, j6 = j2 − j1, j7 = j3 − j2, j8 = j4 − j3 (5)

These last four equations express the tree-branch currents, uniquely and unambiguously, in terms of the link currents. Thus, of the eight branch currents in the graph of Fig.
5(a), only four are geometrically independent. These four are appropriate to the set of links associated with any selected tree. For the tree of Fig. 5(b), the link currents are j1, j2, j3, j4. For the tree of Fig. 5(c) they are j1, j2, j5, j7. Here we may write, in place of Eqs. 3,

j1 = i1, j2 = i2, j5 = i3, j7 = i4 (6)

These loop currents circulate on the contours indicated in Fig. 6(b), which again are found through inserting, one at a time, the branches 1, 2, 5, 7 into the tree of Fig. 5(c). The tree-branch currents in this case are expressed in terms of the loop currents by the relations

j3 = i2 + i4, j4 = i1 − i3, j6 = i2 − i1, j8 = i1 − i2 − i3 − i4 (7)

which are found by inspection of Figs. 5(a) and 6(b) through noting that the currents in the tree branches result from the superposition of pertinent loop currents. Through substitution of Eqs. 6 into 7, one again obtains the tree-branch currents expressed in terms of the link currents

j3 = j2 + j7, j4 = j1 − j5, j6 = j2 − j1, j8 = j1 − j2 − j5 − j7 (8)

thus making evident once more the fact that only four of the eight branch currents are geometrically independent. The reader is cautioned against concluding that any four of the eight branch currents may be regarded as an independent set. The branches pertaining to a set of independent currents must be the links associated with a tree, for it is this circumstance that assures the independence of the currents. Thus the branch currents j5, j6, j7, j8, for example, could not be a set of independent currents because the remaining branches 1, 2, 3, 4 do not form a tree. The concept of a tree is recognized as useful because it yields a simple and unambiguous method of deciding whether any selected set of branch currents is an independent one.
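The substitution of Eqs. 6 into 7 can be checked numerically. The sketch below (with arbitrary assumed current values) computes the tree-branch currents both ways — via the loop currents of Eqs. 7, and via the link currents of Eqs. 8 — and confirms that the two routes agree:

```python
def tree_currents_via_loops(i1, i2, i3, i4):
    # Eqs. 7: tree-branch currents j3, j4, j6, j8 from the loop currents
    return (i2 + i4, i1 - i3, i2 - i1, i1 - i2 - i3 - i4)

def tree_currents_via_links(j1, j2, j5, j7):
    # Eqs. 8: the same currents written in terms of the link currents
    return (j2 + j7, j1 - j5, j2 - j1, j1 - j2 - j5 - j7)

# By Eqs. 6 the link currents equal the loop currents, so with arbitrary values:
i = (2, -1, 5, 3)
assert tree_currents_via_loops(*i) == tree_currents_via_links(*i)
```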
Or one can say that the tree concept provides a straightforward method of determining a possible set of independent current variables for any given network geometry. Part (d) of Fig. 5 shows still another possible choice for a tree appropriate to the graph of part (a), and in Fig. 6(c) is shown the corresponding set of loops. In this case one has

j4 = i1, j5 = i2, j7 = i3, j8 = i4 (9)

and through superposition there follows

j1 = i1 + i2 = j4 + j5
j2 = i1 − i3 − i4 = j4 − j7 − j8
j3 = i1 − i4 = j4 − j8
j6 = −i2 − i3 − i4 = −j5 − j7 − j8 (10)

When dealing with networks having large numbers of branches and correspondingly elaborate geometries, one must have a less cumbersome and more systematic procedure for obtaining the algebraic relationships between the branch currents and the loop-current variables. Thus it is readily appreciated that the process of drawing and numbering the reference arrows for the loops, and subsequently obtaining by inspection the appropriate expressions for the branch currents as algebraic sums of pertinent loop currents, can become both tedious and confusing in situations involving complex geometries. A systematic way of indicating the loops associated with the selection of a particular tree is had through use of a schedule such as 11, which pertains to the graph of Fig. 5(a) with the tree of part (c) and hence for the loops shown in Fig. 6(b):

Branch No.   1    2    3    4    5    6    7    8
Loop 1       1    0    0    1    0   −1    0    1
Loop 2       0    1    1    0    0    1    0   −1
Loop 3       0    0    0   −1    1    0    0   −1
Loop 4       0    0    1    0    0    0    1   −1   (11)

To interpret this schedule we note that the first row, pertaining to loop 1, indicates that a circuit around this loop is equivalent to traversing, in the positive reference direction, branches 1, 4, and 8, and, in the negative reference direction, branch 6.
None of the remaining branches participate in forming the contour of loop 1, and so their corresponding spaces in the first row of the schedule are filled in with zeros. The second row is similarly constructed, noting that the pertinent loop contour is formed through traversing branches 2, 3, and 6 positively, and branch 8 negatively. Thus the successive rows in this schedule indicate the confluent sets of branches that participate in forming the pertinent closed paths. Reading schedule 11 by columns in this way yields the equations

j1 = i1
j2 = i2
j3 = i2 + i4
j4 = i1 − i3
j5 = i3
j6 = −i1 + i2
j7 = i4
j8 = i1 − i2 − i3 − i4 (12)

which are seen to agree with Eqs. 6 and 7. The reason why this schedule has the property just mentioned may best be seen through supposing that it is originally constructed, by columns, according to the relationships expressed in Eqs. 12. One subsequently can appreciate why the resulting rows of the schedule indicate the pertinent closed paths, through noting that the nonzero elements of a row are associated with branches traversed by the same loop current, and these collectively must form the closed path in question. The actual construction of the schedule may thus be done in either of two ways, viz.: by rows, according to a set of independent closed paths (for example, those associated with a selected tree), or by columns, according to a set of equations expressing branch currents in terms of loop currents. If constructed by columns, the rows of the schedule automatically indicate the closed paths upon which the associated loop currents circulate; and, if constructed by rows from a given set of closed paths, the columns of the resulting schedule automatically yield the pertinent relations for the branch currents in terms of the loop currents.
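The schedule lends itself directly to computation: stored as a matrix of +1, −1, 0 entries, its columns give each branch current as the signed sum of the loop currents traversing that branch. A minimal sketch (the matrix entries follow the row descriptions given in the text, e.g. row 1: branches 1, 4, 8 positive, branch 6 negative):

```python
# Tie-set schedule 11: rows = loops 1..4, columns = branches 1..8.
SCHEDULE = [
    [1, 0, 0,  1, 0, -1, 0,  1],   # loop 1
    [0, 1, 1,  0, 0,  1, 0, -1],   # loop 2
    [0, 0, 0, -1, 1,  0, 0, -1],   # loop 3
    [0, 0, 1,  0, 0,  0, 1, -1],   # loop 4
]

def branch_currents(loop_currents):
    """Eqs. 12, read off by columns: j_m = sum over k of SCHEDULE[k][m] * i_k."""
    return [sum(row[m] * i for row, i in zip(SCHEDULE, loop_currents))
            for m in range(8)]
```

With a single loop current turned on, `branch_currents` reproduces the corresponding row of the schedule — the tie set energized by that loop current alone.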
This type of schedule (which for reasons given later is called a tie-set schedule) is thus revealed to be a compact and effective means for indicating both the geometrical structure of the closed paths and the resulting algebraic relations between branch currents and loop currents. Regarding this relationship, one may initially be concerned about its uniqueness, since there are fewer loop currents than branch currents. Thus, if asked to solve Eqs. 12 for the loop currents in terms of branch currents, one might be puzzled by the fact that there are more equations than unknowns. However, the number of independent equations among this set just equals the number of unknown loop currents (for reasons given in the preceding discussion), and the equations collectively form a consistent set. Therefore the desired solution is effected through separating from Eqs. 12 an independent subset and solving these. Knowing that the equations were originally obtained through choice of the tree of Fig. 5(c), thus designating branch currents j1, j2, j5, j7 as a possible independent set, indicates that the corresponding equations among those given by 12 may be regarded as an independent subset. These yield the identifications i1 = j1, i2 = j2, i3 = j5, i4 = j7, as indicated in Eqs. 6 for this choice of tree. It is, however, not essential that the independent subset chosen from Eqs. 12 be this particular one. Thus, if we consider the tree of Fig. 5(d) as a possible choice, it becomes clear that branch currents j4, j5, j7, j8 are an independent set. The corresponding equations separated from 12,

j4 = i1 − i3, j5 = i3, j7 = i4, j8 = i1 − i2 − i3 − i4 (13)

may alternatively be regarded as an appropriate independent subset. Their solution reads

i1 = j4 + j5
i2 = j4 − j7 − j8 (14)
i3 = j5
i4 = j7

Noting from Eqs.
8 that j4 + j5 = j1, and that j4 − j7 − j8 = j2, it is clear that 14 agrees with the former result. Four of the eight Eqs. 12 are independent. A simple rule for picking four independent ones is to choose those corresponding to the link currents associated with a possible tree. Any four independent ones may be solved for the four loop currents. Substitution of these solutions into the remaining equations then yields the previously discussed relations between tree-branch currents and link currents. There should be no difficulty in understanding this situation, since the previous discussion has made it amply clear that the link currents or loop currents are an independent set and all other branch currents are uniquely related to these. Equations 12 are consistent with this viewpoint and contain all of the implicit and explicit relations pertinent thereto. Hence their solution cannot fail to be unique, no matter what specific approach one may take to gain this end. Although a schedule like 11 may be constructed either by columns or by rows, the usual viewpoint will be that it is constructed by rows from an observation of those sets of confluent branches forming the pertinent closed paths. The latter are placed in evidence, one by one, through imagining that all the links are opened except one, thus forcing all but one of the link or loop currents to be zero. The existence of a single loop current energizes a set of branches forming the closed path on which this loop current circulates. This set of branches, called a tie set, is indicated by the elements in the pertinent row of the tie-set schedule.
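The inversion of the independent subset 13, as given by the solution 14, can be checked with a quick sketch (the numerical values are arbitrary assumptions for the test):

```python
def loops_from_branches(j4, j5, j7, j8):
    """Solve the independent subset, Eqs. 13, for the loop currents, per Eqs. 14."""
    i3 = j5                   # from j5 = i3
    i4 = j7                   # from j7 = i4
    i1 = j4 + i3              # from j4 = i1 - i3
    i2 = i1 - i3 - i4 - j8    # from j8 = i1 - i2 - i3 - i4
    return (i1, i2, i3, i4)

# Round trip: loop currents (1, 2, 3, 4) give, by Eqs. 13,
# j4 = 1 - 3 = -2, j5 = 3, j7 = 4, j8 = 1 - 2 - 3 - 4 = -8,
# and the solution recovers the original loop currents.
assert loops_from_branches(-2, 3, 4, -8) == (1, 2, 3, 4)
```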
If the geometry of the network graph permits its being mapped upon a plane or spherical surface without crossed branches, then we may regard any tie set as forming a boundary that divides the total network into two portions.* Hence, if the branches in such a set are imagined to shrink longitudinally until they reduce to a single point, the network becomes "tied off," so to speak (as a fish net would by means of a draw string), and the two portions bounded by the tie set become effectively separated except for a common node. It is this interpretation of the tie set that suggests its name. Although there are several important variations in this procedure for establishing an appropriate set of current variables, we shall leave these for subsequent discussion and turn our attention now to the alternate procedure (dual to the one just described) of formulating a set of network variables on a voltage basis.

6 The Concept of Node-Pair Voltages; Cut Sets and Cut-Set Schedules

On the voltage side of the network picture, an entirely analogous situation prevails. Here we begin by regarding the tree-branch voltages as a possible set of independent variables in terms of which the state of a network may uniquely be expressed. Since the tree branches connect all of the nodes, it is possible to trace a path from any node to any other node in the network by traversing tree branches alone; and therefore it is possible to express the difference in potential between any pair of nodes in terms of the tree-branch voltages alone. Moreover, the path connecting any two nodes via tree branches is unique, since the tree has no closed loops and hence offers no alternate paths between node pairs. Therefore, the potential difference between any two nodes, referred to as the pertinent node-pair voltage, is uniquely expressible in terms of the tree-branch voltages.
The link voltages, which are a particular set of node-pair voltages, are thus recognized to be uniquely expressible in terms of the tree-branch voltages.

* For a graph not mappable on a sphere (for example, one that requires a doughnut-shaped surface), some but not all tie sets have this property. This point is discussed further in Art. 9.

Let us illustrate these principles with the network graph of Fig. 5(a), and choose initially the tree given in part (b) of this same figure. If the branch voltages are denoted by v1, v2, ..., v8, numbered to correspond to the given branch numbering, then the quantities v5, v6, v7, v8 are the tree-branch voltages and hence may be regarded as an independent set. They may simultaneously be regarded as node-pair voltages, and, since they are to serve as the chosen set of variables, we distinguish them through an appropriate notation and write

e1 = v5, e2 = v6, e3 = v7, e4 = v8 (15)

This part of the procedure parallels the use of a separate notation for the loop currents i1, i2, ... when choosing variables on a current basis. There the link currents are identified with loop currents; in Eqs. 15 the tree-branch voltages are identified with node-pair voltages. The remaining branch voltages, namely the link voltages, are now readily expressible in terms of the four tree-branch or node-pair voltages 15. Thus, by inspection of Fig. 5(a) we have

v1 = -v5 + v6 = -e1 + e2
v2 = -v6 + v7 = -e2 + e3
v3 = -v7 + v8 = -e3 + e4
v4 = -v8 + v5 = -e4 + e1 (16)

The procedure in writing these equations is to regard each link voltage as a potential difference between the nodes terminating the pertinent link, and to pass from one of these nodes to the other via tree branches only, adding algebraically the several tree-branch voltages encountered. If the tree of Fig.
5(c) is chosen, the branch voltages v3, v4, v6, v8 become the appropriate independent set, and we make the identifications

e1 = v3, e2 = v4, e3 = v6, e4 = v8 (17)

The expressions for the link voltages in terms of these read

v1 = -v4 + v6 - v8 = -e2 + e3 - e4
v2 = -v3 - v6 + v8 = -e1 - e3 + e4
v5 = v4 + v8 = e2 + e4
v7 = -v3 + v8 = -e1 + e4 (18)

The results expressed in Eqs. 16 and 18 bear out the truth of a statement made in Art. 4 to the effect that any set of tree-branch voltages may be regarded as an independent group of variables in terms of which the remaining branch voltages (link voltages) are uniquely expressible. In the network graph of Fig. 5, any tree has four branches. Hence, of the eight branch voltages, only four are geometrically independent. These may be the ones pertinent to any selected tree; and the rest are readily expressed in terms of them.

In dealing with more complex network geometries it becomes useful to establish a systematic procedure for the selection of node-pair voltage variables and for the unique expression of the branch voltages in terms of them. The accomplishment of this end follows a pattern that is entirely analogous (yet dual) to that described in the previous article for the current basis. That is to say, we seek to construct a schedule appropriate to the voltage basis in the same way that the tie-set schedule is pertinent to the current basis. To this end we must first establish the geometrical interpretation for a set of branches which, for the voltage basis, plays a role analogous to that defined for the current basis by a tie set (or confluent set of branches forming a closed loop). The latter is placed in evidence through opening all of the links but one, so that all loop currents are zero except one.
The analogous procedure on a voltage basis is to force all but one of the node-pair (i.e., tree-branch) voltages to be zero, which is accomplished through short-circuiting all but one of the tree branches. This act will in general simultaneously short-circuit some of the links, but there will in any nontrivial case be left some links in addition to the one nonshort-circuited tree branch that are likewise not short-circuited and will appear to form connecting links between the pair of nodes terminating the pertinent tree branch. This set of branches, which is called a cut set, is the desired analogue of a tie set, as the following detailed elaboration will clarify.

Consider again the network of Fig. 5(a) and the tree of part (b) of this figure, together with the pertinent stipulation of node-pair voltages as expressed by Eqs. 15. The cut-set schedule appropriate to this situation reads as given in 19. [Schedule 19 and Eqs. 20 and 21 are not legible in this scan.] But Eqs. 16 show that

v1 + v5 = v6
v1 + v2 + v5 = v7
v1 + v2 + v3 + v5 = v8 (22)

Hence the solutions 21 again agree with the definitions 15. Of Eqs. 20, four are independent. Not any four are independent, but there are no more than four independent ones in this group, and there are several different sets of four independent ones that can be found among them. A simple rule for picking four independent ones is to choose those corresponding to the branch voltages of a possible tree. The solution to these yields the expressions for the e's in terms of the v's; and substitution of these solutions for the e's into the remaining equations yields the previously discussed relations between link voltages and tree-branch voltages. The cut-set schedule, which contains the information regarding the geometrical character of the cut sets as well as the algebraic relationships between the implied node-pair voltages and the branch voltages, is thus seen to be a compact and effective mode of expressing these things.
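The identities of Eqs. 22 follow directly from the definitions 15 and the link-voltage expressions 16, and can be confirmed numerically. In the sketch below the values chosen for e1..e4 are arbitrary test data, not figures from the text.

```python
# Check Eqs. 22 against Eqs. 15 and 16 for the graph of Fig. 5(a), tree (b).
def branch_voltages(e1, e2, e3, e4):
    v5, v6, v7, v8 = e1, e2, e3, e4   # Eqs. 15: tree-branch voltages
    v1 = -v5 + v6                      # Eqs. 16: link voltages
    v2 = -v6 + v7
    v3 = -v7 + v8
    v4 = -v8 + v5
    return v1, v2, v3, v4, v5, v6, v7, v8

v1, v2, v3, v4, v5, v6, v7, v8 = branch_voltages(3, -1, 4, 7)

# The identities of Eqs. 22, which confirm that the cut-set equations
# reproduce the definitions 15:
assert v1 + v5 == v6
assert v1 + v2 + v5 == v7
assert v1 + v2 + v3 + v5 == v8
print("Eqs. 22 hold")
```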
It does for the formulation of variables on the voltage basis what the tie-set schedule does for the establishment of a system of variables on the current basis. Continued use will be made of both types of schedules in the following discussions.

7 Alternative Methods of Choosing Current Variables

The procedure for selecting an appropriate set of independent current variables in a given network problem can be approached in a different manner which may sometimes be preferred. Thus, the method given in Art. 5, which identifies the link currents with a set of loop-current variables, leaves the tie sets or closed paths upon which these currents circulate to be determined from the choice of a tree, whereas one may prefer to specify a set of closed paths for the loop currents at the outset.

Fig. 8. A graph with meshes chosen as loops, and two possible trees.

Consider in this connection the graph of Fig. 8. In addition to providing the branches with numbers and reference arrows, a set of loops have also been chosen and designated with the circulatory arrows numbered 1, 2, 3, 4. These loops, incidentally, are referred to as meshes because they have the appearance of the meshes in a fish net. It is a common practice in network analysis to choose, as a set of current variables, the currents that are assumed to circulate on the contours of these meshes. Having made such a choice, we must know how to relate, in an unambiguous and reversible manner, the branch currents to the chosen mesh currents. This end is accomplished through setting down the tie-set schedule corresponding to the choice made for the closed paths defining the tie sets. With reference to the graph of Fig. 8 one has, by inspection, schedule 23, and the columns yield Eqs. 24, the branch currents in terms of the mesh currents. [Schedule 23 and Eqs. 24 are not legible in this scan.]
i1 = j1
i2 = j1 + j2
i3 = j1 + j2 + j3
i4 = j1 + j2 + j3 + j4 (25)

Substitution into the remaining Eqs. 24 gives

j5 = -j1 - j2 - j3 - j4
j6 = j1 + j2 + j3
j7 = j1 + j2
j8 = -j1 - j2
j9 = -j1 - j2 - j3 (26)

These express the tree-branch currents in terms of the link currents. If instead, we choose tree 2 in Fig. 8, the branches 1, 5, 8, 9 become links. The corresponding equations in group 24, namely,

i1 = j1, i2 = -j8, i3 = -j9, i4 = -j5 (27)

are independent and give the expressions for the mesh currents in terms of the link currents. With these, the remaining Eqs. 24 yield again the tree-branch currents in terms of the link currents, thus:

j2 = -j1 - j8
j3 = j8 - j9
j4 = -j5 + j9
j6 = -j9
j7 = -j8 (28)

It is readily seen that the results expressed by Eqs. 25 and 26 are consistent with those given by Eqs. 27 and 28. That is to say, the choice of a tree has nothing to do with the algebraic relations between the loop currents and the branch currents. It merely serves as a convenient way of establishing an independent subset among Eqs. 24. In the present very simple example, one can just as easily pick an independent subset without the aid of the tree concept; however, in more complex problems the latter can prove very useful.

In approaching the establishment of a set of current variables through making at the outset a choice of closed paths, a difficulty arises in that the independence of these paths is in general not assured.
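The consistency of the tree-1 relations (Eqs. 25, 26) with the tree-2 relations (Eqs. 27, 28) can be demonstrated numerically. The link-current values below are arbitrary sample data, not from the text.

```python
# Numerical check that Eqs. 25-26 (tree 1) and Eqs. 27-28 (tree 2) describe
# the same branch currents for the graph of Fig. 8.
def tree1(j1, j2, j3, j4):
    """Mesh currents (Eqs. 25) and tree-branch currents (Eqs. 26)
    computed from the tree-1 link currents j1..j4."""
    i = (j1, j1 + j2, j1 + j2 + j3, j1 + j2 + j3 + j4)
    j5 = -(j1 + j2 + j3 + j4)
    j6 = j1 + j2 + j3
    j7 = j1 + j2
    j8 = -(j1 + j2)
    j9 = -(j1 + j2 + j3)
    return i, (j5, j6, j7, j8, j9)

j1, j2, j3, j4 = 2, -1, 5, 3
(i1, i2, i3, i4), (j5, j6, j7, j8, j9) = tree1(j1, j2, j3, j4)

# Eqs. 27: with tree 2, branches 1, 5, 8, 9 are the links.
assert (i1, i2, i3, i4) == (j1, -j8, -j9, -j5)

# Eqs. 28: tree-2 tree-branch currents from the tree-2 link currents.
assert j2 == -j1 - j8
assert j3 == j8 - j9
assert j4 == -j5 + j9
assert j6 == -j9
assert j7 == -j8
print("Eqs. 25-28 are consistent")
```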
A necessary (though not sufficient) condition is that all branches must participate in forming these paths, for, if one or more of the branches were not traversed by loop currents, then the currents in these branches in addition to the loop currents would appear to be independent. Actually, the loop currents chosen in this manner could not be independent since altogether there can be only l independent currents.

A sufficient (though not necessary) procedure to insure the independence of the closed paths (tie sets) is to select them successively in such a way that each additional path involves at least one branch that is not part of any of the previously selected paths. This statement follows from the fact that the paths or tie sets form an independent set if the l rows in the associated tie-set schedule are independent: that is, if it is not possible to express any row in this schedule as a linear combination of the other rows. If, as we write down the successive rows in this schedule, each new row involves a branch that has not appeared in any of the previous rows, that row can surely not be formed from a linear combination of those already chosen, and hence must be independent of them.

A glance at schedule 23 shows that this principle is met. Thus, construction of the first row involves only branches 1 and 2. The second row introduces the additional branches 3, 7, 8; the third row adds branches 4, 6, and 9; and the last row involves the previously unused branch 5. It is not difficult to convince oneself that, if one designates only meshes as closed paths (which is, of course, possible only in a graph that is mappable on a plane or sphere), then the rows in the associated tie-set schedule can always be written in such a sequence that the principle just described will be met.
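The fresh-branch rule just described is easy to state as a mechanical check. The schedule rows below are the mesh tie sets of Fig. 8 as reconstructed from Eqs. 24-26 of this article, so the specific entries are an assumption where the printed schedule differs.

```python
# Sufficient condition for row independence: scan the rows of a tie-set
# schedule in order and confirm that each row uses at least one branch
# untouched by all earlier rows.
SCHEDULE_23 = [
    [1, -1, 0, 0, 0, 0, 0, 0, 0],   # mesh 1: branches 1, 2
    [0, 1, -1, 0, 0, 0, 1, -1, 0],  # mesh 2: adds branches 3, 7, 8
    [0, 0, 1, -1, 0, 1, 0, 0, -1],  # mesh 3: adds branches 4, 6, 9
    [0, 0, 0, 1, -1, 0, 0, 0, 0],   # mesh 4: adds branch 5
]

def each_row_adds_a_branch(schedule):
    """True if every row touches a branch no earlier row touches."""
    seen = set()
    for row in schedule:
        used = {b for b, a in enumerate(row) if a != 0}
        if not (used - seen):
            return False        # this row re-uses only old branches
        seen |= used
    return True

print(each_row_adds_a_branch(SCHEDULE_23))  # True
```

As the text notes, the condition is sufficient but not necessary: an independent set of rows chosen in a more general manner may fail this scan and still be independent.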
This simple choice in a plane-mappable graph, therefore, always assures the independence of the closed paths and hence does the same for the implied mesh-current variables.

However, it is quite possible for the l rows in a tie-set schedule to be independent while not fulfilling the property just pointed out. Thus, as already stated above, this property of the rows is a sufficient though not necessary condition to insure their independence. When closed paths are chosen in a more general manner, as they sometimes may be, it is not always evident at the outset whether the choice made is acceptable.

Fig. 9. A modified choice of loops for the graph of Fig. 8 that turns out not to form an independent set.

To illustrate this point, let us reconsider the network graph of Fig. 8 with the choice of closed paths shown in Fig. 9. The tie-set schedule reads as in 29, and hence the expressions for the branch currents in terms of the loop currents are

j1 = i1 + i3
j2 = i2
j3 = -i1 + i3
j4 = -i2
j5 = -i3 - i4
j6 = i2 + i3 + i4
j7 = i1 + i2 + i4
j8 = -i1 - i2 - i4
j9 = -i2 - i3 - i4 (30)

as may also be verified by inspection of Fig. 9. To investigate the independence of the chosen loops, we observe that the choice of tree 1 in Fig. 8 indicates that the branch currents j1, j2, j3, j4 form an independent set. Hence the first four of Eqs. 30 should be independent. They obviously are not, since the right-hand members of the second and fourth equations are identical except for a change in algebraic sign. Hence the loops indicated in Fig.
9 are not an independent set. Whether a chosen set of loops forms an independent set of loops (or tie sets) is in general not a matter that is evident by inspection, although one has a straightforward procedure for checking a given selection. Namely, the chosen set of loops are independent if the l rows of the associated tie-set schedule are independent; and they are, if it is possible to find in this schedule a subset of l independent columns (i.e., l independent equations among a set like 30). The simplest procedure for making this check among the columns is to pick those columns corresponding to the links of any chosen tree. These must be independent if the l rows of the schedule are to be independent. They are if the pertinent equations (like the first four of 30 in the test discussed in the previous paragraph) have unique solutions. Usually one can readily see by inspection whether or not such solutions exist. An elegant algebraic method is to see if the determinant of these equations is nonzero. Thus the nonvanishing of the determinant formed from the subset of columns corresponding to the links of a chosen tree suffices to prove the independence of an arbitrarily selected set of closed paths.

In the case of graphs having many branches this method may prove tedious, and so it is useful to be aware of alternative procedures for arriving at more general current-variable definitions, should this be desirable. Thus one may make use of the fact that the most general tie-set schedule is obtainable through successive elementary transformations of the rows of any given one, and that such transformations leave the independence of the rows invariant. We may, for example, start with a schedule like 23 that is based upon a choice of meshes so that its rows are surely independent. Suppose we construct a new first row through adding to the elements of the present one the respective elements of the second row. The new schedule is then as shown in 31. [Schedule 31 is not legible in this scan.]
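The determinant test can be carried out exactly. Below it is applied to the link columns of tree 1 for both loop choices; the coefficients for the Fig. 9 loops are those of the first four of Eqs. 30 as reconstructed in this article, so treat them as an assumption about the printed values.

```python
from fractions import Fraction

def det(m):
    """Determinant by fraction-exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in m]
    d = Fraction(1)
    for c in range(len(m)):
        pivot = next((r for r in range(c, len(m)) if m[r][c] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != c:
            m[c], m[pivot] = m[pivot], m[c]
            d = -d
        d *= m[c][c]
        for r in range(c + 1, len(m)):
            f = m[r][c] / m[c][c]
            m[r] = [a - f * b for a, b in zip(m[r], m[c])]
    return d

# First four of Eqs. 30: j1..j4 in terms of the Fig. 9 loop currents.
fig9 = [[1, 0, 1, 0],    # j1 = i1 + i3
        [0, 1, 0, 0],    # j2 = i2
        [-1, 0, 1, 0],   # j3 = -i1 + i3
        [0, -1, 0, 0]]   # j4 = -i2
# Same four branch currents for the mesh choice (Eqs. 24, schedule 23).
fig8_meshes = [[1, 0, 0, 0],   # j1 = i1
               [-1, 1, 0, 0],  # j2 = -i1 + i2
               [0, -1, 1, 0],  # j3 = -i2 + i3
               [0, 0, -1, 1]]  # j4 = -i3 + i4
print(det(fig9))         # 0: the Fig. 9 loops are not independent
print(det(fig8_meshes))  # 1: the mesh choice is independent
```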
Loops 2, 3, 4 are still the meshes 2, 3, 4 of Fig. 8. However, loop 1 is now the combined contour of meshes 1 and 2, as a comparison of the first row of the new schedule with the graph of Fig. 8 reveals. If we modify this new schedule further by constructing a new second row with elements equal to the sum of the respective ones of the present rows 2, 3, and 4, there results another schedule that implies a loop 2 with the combined contours of meshes 2, 3, and 4. It should thus be clear that more general loops or tie sets are readily formed through combining linearly a set of existing simple ones. So long as only one new row is constructed from the combination of rows in a given schedule, and if the pertinent old row is a constituent part of this combination, the procedure cannot destroy the independence of a given set of rows.

Each new schedule has the property that its columns correctly yield the expressions for the branch currents in terms of the implied new loop currents. That is to say, since transformation of the schedule through making linear row combinations implies a revision in the choice of loops, it likewise implies a revision in the algebraic definitions of the loop currents. Nevertheless the relations expressing the branch currents in terms of these new loop currents are still given by the coefficients in the columns of the schedule. For example, we would get for schedule 31 the relations

j1 = i'1
j2 = i'2
j3 = -i'1 - i'2 + i'3
j4 = -i'3 + i'4
j5 = -i'4
j6 = i'3
j7 = i'1 + i'2
j8 = -i'1 - i'2
j9 = -i'3 (32)

where primes are used on the i's to distinguish them from those in Eqs. 24 which are pertinent to schedule 23. Comparison of Eqs. 24 and 32 reveals the transformation in the loop currents implied by the transformation of schedule 23 to the form 31,

i1 = i'1, i2 = i'1 + i'2, i3 = i'3, i4 = i'4 (33)

This result is at first sight somewhat unexpected. Thus the transformation from schedule 23 to schedule 31 implies leaving the contours for the loop currents i2, i3, i4 the same as in the graph of Fig. 8, but changes the contour for loop current i1. Offhand we would expect the algebraic definition for i1 to change and those for i2, i3, and i4 to remain the same. Instead we see from Eqs. 33 that i1, i3, and i4 are unchanged while i2 changes. Nevertheless Eqs. 32 are correct, as we can readily verify through sketching in Fig. 8 the altered contour for loop 1 and expressing the branch currents as linear superpositions of the loop currents, noting this altered path for i1. The results expressed by Eqs. 33, therefore, are surely correct also.

The mental confusion temporarily created by this result disappears if we concentrate our attention upon schedule 23 and Eqs. 24 and ask ourselves: What change in relations 24 will bring about the addition of row 2 to row 1 in schedule 23 and leave rows 2, 3, and 4 unchanged? The answer is that we must replace the symbol i2 by i1 + i2, for then every element in row 2 will also appear in row 1, in addition to the elements that are already in row 1, and nothing else will change.

The lesson to be learned from this example is that we should not expect a simple and obvious connection between the contours chosen for loop currents and the algebraic definitions for these currents, nor should we expect to be able to correlate by inspection changes in the chosen contours (tie sets) and corresponding transformations in the loop currents until experience with these matters has given us an adequate insight into the rather subtle relationships implied by such transformations.
The reason for our being misled in the first place is that we are too prone to regard the choice of contours for loop currents as equivalent to their definition in terms of the branch currents, whereas in reality the fixing of these contours merely implies the algebraic relationships between the loop currents and branch currents (through fixing the tie-set schedule); it does not place them in evidence.

The most general form a linear transformation of the tie-set schedule may take is indicated through writing in place of 33

i1 = a11 i'1 + a12 i'2 + ... + a1l i'l
i2 = a21 i'1 + a22 i'2 + ... + a2l i'l
...
il = al1 i'1 + al2 i'2 + ... + all i'l (34)

in which the a's are any real numbers. If i1 ... il are an independent set of current variables, then i'1 ... i'l will be independent if Eqs. 34 are independent; that is, if they possess unique solutions (which they will if their determinant is nonzero). In general the currents i'1 ... i'l will no longer have the significance of circulatory currents or loop currents, although for convenience they may still be referred to by that name. They will turn out to be some linear combinations of the branch currents.

If such a very general set of definitions for the loop currents is desired, one can approach the construction of an appropriate tie-set schedule directly from this point of view, which we will illustrate for the network graph of Fig. 8. Thus let us suppose that one wishes to introduce current variables which are the following linear combinations of the branch currents

i1 = -j1 + j2 - j3 + j4 - 3j9
i2 = j2 + 2j3 + j6 - j8
i3 = j1 + j3 + j5 + j7 + j9
i4 = j2 + 2j4 + j6 + j8 (35)

The first step is to rewrite these expressions in terms of l (in this case four) branch currents.
To do this we may follow the usual scheme of picking a tree and finding the relations for the tree-branch currents in terms of the link currents. For tree 1 of Fig. 8, these are given by Eqs. 26. Their use transforms Eqs. 35 into

i1 = 2j1 + 4j2 + 2j3 + 1j4
i2 = 2j1 + 3j2 + 3j3 + 0j4
i3 = 0j1 - 1j2 - 1j3 - 1j4
i4 = 0j1 + 1j2 + 1j3 + 2j4 (36)

having the solutions

j1 = 0i1 + (1/2)i2 + 3i3 + (3/2)i4
j2 = (1/2)i1 - (1/2)i2 - (3/2)i3 - 1i4
j3 = -(1/2)i1 + (1/2)i2 - (1/2)i3 + 0i4
j4 = 0i1 + 0i2 + 1i3 + 1i4 (37)

Using Eqs. 26 again we have the additional relations

j5 = 0i1 - (1/2)i2 - 2i3 - (3/2)i4
j6 = 0i1 + (1/2)i2 + 1i3 + (1/2)i4
j7 = (1/2)i1 + 0i2 + (3/2)i3 + (1/2)i4
j8 = -(1/2)i1 + 0i2 - (3/2)i3 - (1/2)i4
j9 = 0i1 - (1/2)i2 - 1i3 - (1/2)i4 (38)

The results in Eqs. 37 and 38 yield tie-set schedule 39, which more compactly contains this same information. [Schedule 39 is not legible in this scan.] This is the schedule that is implied by definitions 35 for the loop-current variables, which no longer possess the geometrical interpretation of being circulatory currents.

As we shall see in the following chapter, the tie-set schedule plays an important role in the formulation of the equilibrium equations appropriate to the chosen definitions for the current variables. The present discussions, therefore, provide the basis for accommodating such a choice, regardless of its generality or mode of inception.

Thus we have shown that the process of selecting an appropriate set of current variables can take one of essentially three different forms:

1. The approach through choice of a tree and identification of the link currents with loop-current variables.
2. The approach through an initial choice of closed paths (such as the meshes of a mappable network); here direct control is had over the contours on which the loop currents circulate, but no facile control is had regarding the associated algebraic definitions of the loop currents.
3.
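The solutions 37 amount to inverting the coefficient matrix of Eqs. 36, which can be done with exact fractions. The matrix entries below are those of Eqs. 36; the elimination helper itself is an illustration, not a procedure from the text.

```python
from fractions import Fraction

# Reproduce the solutions 37 by inverting the coefficient matrix of Eqs. 36.
M = [[2, 4, 2, 1],     # i1 = 2j1 + 4j2 + 2j3 + 1j4
     [2, 3, 3, 0],     # i2
     [0, -1, -1, -1],  # i3
     [0, 1, 1, 2]]     # i4

def solve(mat, rhs):
    """Solve mat*x = rhs by Gauss-Jordan elimination over Fractions."""
    n = len(mat)
    a = [[Fraction(x) for x in row] + [Fraction(r)]
         for row, r in zip(mat, rhs)]
    for c in range(n):
        p = next(r for r in range(c, n) if a[r][c] != 0)
        a[c], a[p] = a[p], a[c]
        a[c] = [x / a[c][c] for x in a[c]]
        for r in range(n):
            if r != c and a[r][c] != 0:
                a[r] = [x - a[r][c] * y for x, y in zip(a[r], a[c])]
    return [row[-1] for row in a]

# Column k of the inverse solves M x = e_k; the rows of the assembled
# inverse then give j1..j4 in terms of i1..i4, i.e. Eqs. 37.
cols = [solve(M, [1 if r == k else 0 for r in range(4)]) for k in range(4)]
inverse = [list(row) for row in zip(*cols)]
F = Fraction
assert inverse[0] == [0, F(1, 2), 3, F(3, 2)]           # j1
assert inverse[1] == [F(1, 2), F(-1, 2), F(-3, 2), -1]  # j2
assert inverse[2] == [F(-1, 2), F(1, 2), F(-1, 2), 0]   # j3
assert inverse[3] == [0, 0, 1, 1]                       # j4
print("Eqs. 37 confirmed")
```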
The approach through making an initial and arbitrarily general choice for the algebraic definitions of the current variables (like those given by Eqs. 35). In this case the variables no longer possess the simple geometrical significance of circulatory currents. This approach will probably seldom be used, and is given largely for the sake of its theoretical interest.

8 Alternative Methods of Choosing Voltage Variables

When voltages are chosen as variables, we similarly have three possible variations which the form of the approach may take. The first, which is discussed in Art. 6, proceeds through choice of a tree and the identification of tree-branch voltages with node-pair voltage variables. In this process (like procedure 1 mentioned above for the choice of current variables), the algebraic definitions for the node-pair voltages are as simple as they can be, but little or no direct control can be exercised over the geometrical distribution of node pairs. A second form of procedure, which permits a forthright choice of node pairs at the outset, and a third, in which the process is initiated through an arbitrarily general choice for the algebraic definitions of the voltage variables, are now presented in detail.

To illustrate how a designation of node-pair voltage variables may be approached through the initial selection of an appropriate set of node pairs, let us consider the network of Fig. 8. In Fig. 10 are indicated the nodes of this network, lettered a, b, ..., f for ease of reference, and a system of lines with arrowheads intended to indicate a choice of node pairs and reference directions for the voltage variables e1, e2, ..., e5. These arrows are not to be confused with branches of the network; yet, if we momentarily think of them as such, we notice that the structure in Fig.
10 has the characteristics of a tree, for it connects all of the nodes, and involves the smallest number of branches needed to accomplish this end. Hence this choice for the variables e1 ... e5 is an appropriate one since the variables surely form an independent set, and their number equals the number of branches in any tree associated with a network having these nodes. In making a forthright choice of node pairs it is sufficient to see to it that the system of reference arrows accompanying this choice (whether actually drawn or merely implied) forms a structure that has a tree-like character.

Fig. 10. A possible choice of node-pair voltages for the graph of Fig. 8.

Using the principles set forth in Art. 6, one can construct cut-set schedule 40 appropriate to the choice of node pairs indicated in Fig. 10. [Schedule 40 is not legible in this scan; the picked-up nodes include a, e and b, d.] The columns of the schedule yield

v1 = -e1 + e2 + e3 - e4 - e5
v2 = -e1 + e2 + e3 - e4 - e5
v3 = -e3 + e4 + e5
v4 = -e5
v6 = -e4 - e5
v7 = -e2 - e3 + e4 + e5
v8 = -e1 + e3 - e4 - e5
v9 = -e3 + e5 (41)

The correctness of these may readily be checked with reference to Figs. 8 and 10, remembering again that the v's are drops and the e's are rises. For example, v1 is the voltage drop from node a to node d. If we pass from a to d via the system of node-pair voltage arrows in Fig. 10, we observe that we first traverse the arrows for e2 and e3 counterfluently, and then the arrows for e5, e4, and e1 confluently. Since confluence indicates a rise in voltage, the terms for e1, e4, and e5 are negative. There should be no difficulty in thus verifying the remaining equations in set 41. One could have written Eqs. 41 from inspection of Figs.
8 and 10 to start with and thus constructed schedule 40 by columns, whence the rows would automatically yield the cut sets. This part of the procedure is thus seen to be the same as with the alternate approach given in Art. 6. So is the matter regarding the solution of Eqs. 41 for the node-pair voltages in terms of the branch voltages. One selects any five independent equations from this group and solves them. Again the selection of a tree in the associated network graph (such as tree 1 or 2 in Fig. 8) is a quick and sure way to spot an independent subset among Eqs. 41, and the remaining ones will then yield the appropriate expressions for the link voltages in terms of tree-branch voltages, as discussed previously.

In this method of approach to the problem of defining an appropriate set of independent voltage variables, a rather common procedure is to choose the potential of one arbitrarily selected node as a reference and designate as variables the potentials e1 ... en of the remaining nodes with respect to this reference. Thus, one node serves as a datum or reference, and the node pairs defining the variables e1 ... en all have this datum node in common. The quantities e1 ... en in this arrangement are spoken of as node potentials and are referred to as a "node-to-datum" set of voltage variables.

The rather simplified choice of node pairs implied in this specialized procedure is in a sense the parallel of choosing meshes for loops in the specification of current variables. This theme is elaborated upon in Art. 9 where the dual character of the loop and node procedures is stressed and the implications of this duality are partially evaluated. The equivalent of Fig. 10 for a choice of node-pair voltages of this sort is shown in Fig. 11, pertinent to the network graph of Fig. 8.
Again, for the moment regarding the arrows in this diagram as branches, we see that it has tree-like character and hence that such a node-to-datum set of voltages is always an independent one. The cut sets appropriate to this group of node-pair voltages are particularly easy to find since we observe that setting all but one of the node-pair voltages equal to zero causes all of the nodes to coincide at the datum except the one at the tip end of the nonzero voltage. Hence the branches divergent from this single node form the pertinent cut set. With reference to Fig. 8, cut-set schedule 42 is thus readily obtained. [Schedule 42 is not legible in this scan.]

Since the node-pair voltages are the potentials of the separate nodes with respect to a common datum, each branch voltage drop is given by the difference of two node potentials, namely those associated with the nodes terminating the pertinent branch. If the latter touches the datum node, then its voltage drop is given by a single node potential.

Fig. 11. A node-to-datum choice of node-pair voltages for the graph of Fig. 8.

The branch voltages in terms of the node potentials are thus formed either by inspection of Figs. 8 and 11 or from the columns of schedule 42 to be

v1 = -e3 + e5
v2 = -e3 + e5
v3 = -e2 + e4
v4 = -e1
v5 = -e1
v6 = -e4
v7 = e4 - e5
v8 = e2 - e3
v9 = e1 - e2 (43)

The node potentials in terms of the branch voltages are found from these by the usual process of selecting from these equations a subset of five independent ones. According to tree 1 of Fig. 8, the last five are such a subset.
Their solution yields ei = -vs c2 = -v5 — vg 63 = —t>« - vS - vg (44) e4 = -v6 and the remaining equations in set 43 then give the following expressions for the link voltages in terms of the tree-branch voltages vi = v5 - t>6 - v7 + t>s + vg v2 = v 5 — t>6 — v7 + l>s + vg v3 = v5 — v6 + vg v4 = l>5 It is interesting to observe how more general node-pair voltage defini- tions are derivable from the simple node-to-datum set through carrying out linear transformations on the rows of cut-set schedule 42. Thus, suppose we form from this one a new schedule through adding the ele- Generated on 2013-12-07 21:39 GMT / http://hdl.handle.net/2027/mdp.39015000463490 Public Domain, Google-digitized / http://www.hathitrust.org/access_use#pd-google ments of the second row of 42 to the respective ones of the first row, Generated on 2013-12-07 21:39 GMT / http://hdl.handle.net/2027/mdp.39015000463490 Public Domain, Google-digitized / http://www.hathitrust.org/access_use#pd-google Branch No. This result suggests that the node-pair voltage diagram has changed from the form shown in Fig. 11 to that shown in Fig. 12, since the poten- tial of node e with respect to the datum (which in Fig. 11 is ez) now is equal to the sum of e\ and e\. We note further that, when e\ is the only nonzero voltage, nodes e and / coincide at the tip end of e'i; so the associated cut set is found through picking up these two nodes, as is also indicated in schedule 46. The picked-up nodes corresponding to the remaining node-pair voltages evidently remain the same as before, and hence the rest of the cut sets are unchanged. Fig. 12. Revision in the node-pair volt- Fio. 13. The graph of Fig. 8 age definitions of Fig. 11 corresponding with node designations as to a transformation of cut-set schedule given in Figs. 10, 11, and 12. 42 to form 46. Other simple transformations in schedule 46 may similarly be inter- preted. 
For example, if row 3 is added to row 4, the picked-up nodes for cut set 4 become d and b, which in Fig. 12 implies that the tail end of e'3 shifts from the datum to node b, and we will find that now e3 = e''3 + e''4, where the double prime refers to the latest revision of the set of node-pair voltages (the rest of the e's remain as in Eqs. 49 with double primes on the right-hand quantities). One soon discovers upon carrying out additional row combinations in schedules 42 or 46 that it is by no means always possible to associate a node-pair voltage diagram like the ones in Figs. 10, 11, or 12 with the resulting node-pair voltages, for the reason that some of these are likely no longer to be simply potential differences between node pairs but instead are more general linear combinations of the branch voltages. The same is true if one constructs a cut-set schedule (as is also a possible procedure) by making arbitrary choices for the picked-up nodes. To illustrate such a method we may consider again the graph of Fig. 8, which is redrawn in Fig. 13 with the nodes lettered as in Figs. 10, 11, and 12. Cut-set schedule 50 is constructed by simply making an arbitrary choice for the picked-up nodes relating to the pertinent cut sets. The term "node pair" here retains only a nominal significance, since we are not at all assured that the implied voltage variables are potential differences between nodes of the graph of Fig. 13 or 8.

[Cut-set schedule 50 appears here; among its picked-up node groups are a, d; a, b, c; and d, e.]

Picking tree 1 in Fig. 8 designates v5, v6, v7, v8, v9 as independent tree-branch voltages and hence stipulates that the last five equations in set 51 should be independent.
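Constructing a schedule row for an arbitrary picked-up node group, and testing whether a set of such rows is independent, is routine linear algebra. In the sketch below the branch orientations are the ones inferred from Eqs. 43, and the five node groups are an illustrative choice, since the complete groups of schedule 50 are not legible here:

```python
from fractions import Fraction

# Branch orientations for the graph of Fig. 8, inferred from Eqs. 43
# (datum node 0; branch k runs from the first node listed to the second).
BRANCHES = {1: (5, 3), 2: (5, 3), 3: (4, 2), 4: (0, 1), 5: (0, 1),
            6: (0, 4), 7: (4, 5), 8: (2, 3), 9: (1, 2)}

def cut_set_row(picked):
    """Schedule row for one cut set: +1 if the branch leaves the picked-up
    node group, -1 if it enters it, 0 if it does not cross the boundary."""
    return [(a in picked) - (b in picked) for a, b in BRANCHES.values()]

def rank(rows):
    # Gaussian elimination in exact arithmetic.
    m = [list(map(Fraction, row)) for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c]:
                f = m[i][c] / m[r][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

# A hypothetical choice of picked-up node groups, in the spirit of
# schedule 50 (single nodes would give the node-to-datum schedule 42).
groups = [{1}, {2}, {3}, {1, 4}, {4, 5}]
schedule = [cut_set_row(g) for g in groups]
assert rank(schedule) == 5   # the five cut sets are independent
```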
It is readily seen that they are, for they yield the solutions

e1 = -v7
e2 = v6
e3 = v5 - v7 + v8    (52)
e4 = v6 - v9
e5 = -v7 + v8

We may conclude that the cut sets in schedule 50 are independent, and Eqs. 52 tell us what the implied voltage variables are in terms of the branch voltages. The first two are simple potential differences between nodes, but the remaining three are not. There is no reason why the selected voltage variables have to be potential differences between nodes. So long as they form an independent set, and we know the algebraic relations between them and the branch voltages, they are entirely acceptable.

Lastly let us consider for the same network of Fig. 8 the following set of independent linear combinations of the branch voltages as a starting point:

e1 = -v1 + v3 + v4 + 2v6 + 2v7 + 5v8 + 5v9
e2 = -v2 + v3 + v4 - v5 + v6 + v7 + 4v8 + 4v9
e3 = -v2 + 2v3 - v5 + v6 + 3v8 + 2v9    (53)
e4 = -v1 + 2v3 - v5 + v6 - v7 + 2v8 + v9
e5 = v3 - v5 + v6

Through use of Eqs. 45 one can eliminate all but five of the branch voltages and get definitions 53 into the form

e1 = v5 + 2v6 + 3v7 + 4v8 + 5v9
e2 = v6 + 2v7 + 3v8 + 4v9
e3 = v7 + 2v8 + 3v9    (54)
e4 = v8 + 2v9
e5 = v9

The solutions to these equations together with Eqs. 45 yield the complete set of relations for the branch voltages in terms of e1 ... e5, thus

v1 = e1 - 3e2 + 2e3 + 2e4 - 2e5
v2 = e1 - 3e2 + 2e3 + 2e4 - 2e5
v3 = e1 - 3e2 + 3e3 - e4 + e5
v4 = e1 - 2e2 + e3
v5 = e1 - 2e2 + e3    (55)
v6 = e2 - 2e3 + e4
v7 = e3 - 2e4 + e5
v8 = e4 - 2e5
v9 = e5

These results may be summarized in cut-set schedule 56.

[Cut-set schedule 56 appears here.]

Thus we see that complete parallelism attaches to these two methods of approach.
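Because Eqs. 54 are triangular, the tree-branch voltages follow from e1 ... e5 by back-substitution, and the closed forms so obtained are the tree-branch rows of Eqs. 55. A sketch of the round trip:

```python
def new_vars(v5, v6, v7, v8, v9):
    # Eqs. 54: the chosen variables in terms of the tree-branch voltages.
    return (v5 + 2*v6 + 3*v7 + 4*v8 + 5*v9,
            v6 + 2*v7 + 3*v8 + 4*v9,
            v7 + 2*v8 + 3*v9,
            v8 + 2*v9,
            v9)

def tree_voltages(e1, e2, e3, e4, e5):
    # Back-substitution through the triangular system 54.
    v9 = e5
    v8 = e4 - 2*v9
    v7 = e3 - 2*v8 - 3*v9
    v6 = e2 - 2*v7 - 3*v8 - 4*v9
    v5 = e1 - 2*v6 - 3*v7 - 4*v8 - 5*v9
    return v5, v6, v7, v8, v9

vs = (1.0, 2.0, 3.0, 4.0, 5.0)
assert tree_voltages(*new_vars(*vs)) == vs

# Carrying the substitution out symbolically gives the closed forms of
# Eqs. 55 for the tree branches; check them at an arbitrary point:
e1, e2, e3, e4, e5 = e = (7.0, 3.0, 2.0, 1.0, 4.0)
v5, v6, v7, v8, v9 = tree_voltages(*e)
assert v5 == e1 - 2*e2 + e3
assert v6 == e2 - 2*e3 + e4
assert v7 == e3 - 2*e4 + e5
assert v8 == e4 - 2*e5
assert v9 == e5
```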
We wish now to call specific attention to this aspect of our problem so that we may gain the circumspection that later will enable us to make effective use of its implications. In a word, this usefulness stems from the fact that two situations which, on a current and voltage basis respectively, are entirely analogous have identical behavior patterns except for an interchange of the roles played by voltage and current, while physically and geometrically they are distinctly different. Not only can one recognize an obvious economy in computational effort resulting from this fact, since the analysis of only one of two networks so related yields the behavior of both, but one can sense as well that an understanding of these ideas may lead to other important and practically useful applications, as indeed the later discussions of our subject substantiate. A careful review of the previous articles in this chapter shows that essentially the same sequence of ideas and procedures characterizes both the loop and the node methods, but with an interchange in pairs of the principal quantities and concepts involved. Since the latter are thus revealed to play a dual role, they are referred to as dual quantities and concepts. First among such dual quantities are current and voltage; and first among the dual concepts involved are meshes and nodes or loops and node pairs. Since a zero current implies an open circuit and a zero voltage a short circuit, these two physical constraints are seen to be duals. The identification of loop currents with link currents and of node-pair voltages with tree-branch voltages shows that the links and the tree branches likewise are dual quantities. The accompanying table gives a more complete list of such pairs.
Dual Quantities or Concepts

Branch current ............ Branch voltage
Mesh or loop .............. Node or node pair
Number of loops (l) ....... Number of node pairs (n)
Loop current .............. Node-pair voltage
Mesh current .............. Node potential
Link ...................... Tree branch
Link current .............. Tree-branch voltage
Tree-branch current ....... Link voltage
Tie set ................... Cut set
Short circuit ............. Open circuit
Parallel paths ............ Series paths

It should be emphasized that duality is strictly a mutual relationship. There is no reason why any pair of quantities in the table cannot be interchanged, although each column as written associates those quantities and concepts that are pertinent to one of the two procedures commonly referred to as the loop and node methods of analysis. Two network graphs are said to be duals if the characterization of one on the loop basis leads to results identical in form with those obtained for the characterization of the other on the node basis. Both graphs will have the same number of branches, but the number of tree branches in one equals the number of links in the other; or the number of independent node pairs in one equals the number of independent loops in the other. More specifically, the equations relating the branch currents and loop currents for one network are identical in form to the equations relating the branch voltages and the node-pair voltages for the other, so that these sets of equations become interchanged if the letters i and j are replaced, respectively, by e and v, and vice versa. For appropriately chosen elements in the branches of the associated dual networks, the electrical behavior of one of these is obtained from that of the other simply through an interchange in the identities of voltage and current.
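The table can be read as a symmetric lookup; together with the count relation b = l + n it gives quick consistency checks, here for the graphs of Fig. 14:

```python
# The dual pairs tabulated above; the relation is mutual, so invert it too.
DUALS = {
    "branch current": "branch voltage",
    "mesh or loop": "node or node pair",
    "number of loops (l)": "number of node pairs (n)",
    "loop current": "node-pair voltage",
    "mesh current": "node potential",
    "link": "tree branch",
    "link current": "tree-branch voltage",
    "tree-branch current": "link voltage",
    "tie set": "cut set",
    "short circuit": "open circuit",
    "parallel paths": "series paths",
}
DUALS.update({right: left for left, right in list(DUALS.items())})
assert DUALS[DUALS["tie set"]] == "tie set"   # duality is mutual

# Counts for Fig. 14: graph (a) has l = 7 meshes and n = 5 independent
# node pairs, so b = l + n = 12 branches; the dual interchanges l and n.
l_a, n_a = 7, 5
l_b, n_b = n_a, l_a
assert l_a + n_a == l_b + n_b == 12
```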
Apart from the usefulness that will be had from later applications of these ideas, a detailed consideration of the underlying principles is advantageous at this time because of their correlative value with respect to the foregoing discussions of this chapter. Geometrically, two graphs are dual if the relationship between branches and node pairs in one is identical with the relationship between branches and loops in the other. The detailed aspects involved in such a mutual relationship are best seen from actual examples. To this end, consider the pair of graphs in Fig. 14. Suppose the one in part (a) is given, and we are to construct its dual as shown in part (b). At the outset we observe that the graph of part (a) has seven meshes and five independent node pairs (a total of six nodes). Hence the dual graph must have seven independent node pairs (a total of eight nodes) and five meshes. The total number of branches must be the same in both graphs. In proceeding with the construction of the dual of (a), one may begin by setting down eight small circles as nodes—one for each mesh in the graph of part (a), and an extra one that can play the part of a datum node if we wish to regard it as such, although any of the eight nodes, or none, may be considered in this light. We next assign each of these seven nodes to one of the seven meshes in the given graph, as is indicated in Fig. 14 through the letters a, b, ..., g. The procedure so far implies that we are considering as tie sets those confluent branches in graph (a) that form the contours of meshes, and as cut sets those branches in the dual graph that are stretched in the process of picking up single nodes.
At least, this implication is true of the nodes a, ..., g that are assigned to specific meshes; the cut set pertaining to the remaining unassigned node will correspond to a tie set in graph (a) that will reveal itself as we now proceed to carry out the process of making all tie sets in the given graph identical to all the cut sets in its dual. Initially let us disregard reference arrows entirely; these will be added as a final step. To begin with mesh a, we observe that it specifies a tie set consisting of branches 1, 6, 7; therefore the cut set formed through picking up node a in the dual graph must involve branches 1, 6, 7, and so these are the branches confluent in node a. Similarly the branches 7, 10 form the tie set for mesh b, and therefore these branches are confluent in node b of the dual graph; and so forth. The actual process of drawing the dual graph is best begun by inserting only those branches that are common to any two tie sets and hence must be common to the respective cut sets. That is to say, we note that any branches that are common to two meshes in the given graph must be common to the two corresponding nodes in the dual graph and hence are branches that form direct connecting links between such node pairs. For example, branch 7 is common to meshes a and b, and hence branch 7 in the dual graph connects nodes a and b; similarly branch 10 links nodes b and c; branch 11 links nodes c and d; and so forth. In this way we readily insert branches 7, 10, 11, 8, 12, 9, and then note that the remaining branches 1, 2, 3, 4, 5, 6 in the original graph form a tie set that must be identical with the cut set of the dual graph that is associated with the remaining unassigned node.
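The construction just carried out generalizes: set down one dual node per mesh plus one for the peripheral region, and join the two nodes whose regions share each branch. A sketch; the example regions are a hypothetical pair of triangles sharing one branch, since the complete mesh lists of Fig. 14 are not reproduced here:

```python
def dual_graph(faces):
    """faces: region name -> set of boundary branches, the peripheral
    region included.  Each branch separates exactly two regions; in the
    dual it becomes a branch joining the corresponding two nodes."""
    dual = {}
    for branch in set().union(*faces.values()):
        touching = [f for f, bs in faces.items() if branch in bs]
        assert len(touching) == 2, "each branch must lie on two regions"
        dual[branch] = tuple(touching)
    return dual

# Hypothetical example: two triangular meshes sharing branch 5.
faces = {
    "a": {1, 2, 5},          # first mesh
    "b": {3, 4, 5},          # second mesh
    "outer": {1, 2, 3, 4},   # the peripheral region
}
d = dual_graph(faces)
# The shared branch joins the two mesh nodes, just as branch 7, common
# to meshes a and b of Fig. 14, joins nodes a and b of its dual.
assert set(d[5]) == {"a", "b"}
```

The branch count is preserved automatically, since every branch of the given graph reappears exactly once in the dual.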
Hence these branches, which have one terminus in an assigned node, are the ones that must be confluent in the remaining node. The latter is thus seen to be assignable to the loop formed by the periphery of the given graph. In a sense we may regard this periphery as a "reference loop" corresponding to the originally unassigned node playing the role of a "reference node," although the following discussion will show that this view is a rather specialized one and need not be considered unless it seems desirable to do so. Now, as to reference arrows on the branches of the dual graph we note, for example, that the traversal of mesh a in a clockwise direction is confluent with the reference arrow of branches 1 and 6, and counterfluent with the reference arrow of branch 7. Hence on the dual graph we attach reference arrows to branches 1 and 6 that are divergent from node a, and provide branch 7 with an arrow that is convergent upon this node. That is to say, we correlate clockwise traversal of the meshes with divergence from the respective nodes, and then assign branch arrows in the dual graph that agree or disagree with this direction, according to whether the corresponding branch arrows in the given graph agree or disagree with the clockwise direction for each corresponding mesh. We could, of course, choose a consistent counterclockwise traversal of the meshes, or in the dual graph choose convergence as a corresponding direction. Such a switch will merely reverse all reference arrows in the dual graph (which we can do anyway), but we must in any case be consistent and stick to the same chosen convention throughout the process of assigning branch reference arrows. This is done in the construction of the graph of Fig. 14(b), as the reader may readily verify by inspection. Being mindful of the fact that duality is in all respects a mutual relationship, we now expect to find that the graph (a) of Fig.
14 is related to the graph (b) in the same detailed manner that (b), through the process of construction just described, is related to (a). Thus we expect the meshes of (b) to correspond to nodes in (a) as do the meshes of (a) to the nodes in (b). However, we find upon inspection that such is not consistently the case. For example, the mesh in graph (b) having its contour formed by the consecutively traversed branches 1, 7, 10, 12, 9, 4 corresponds in graph (a) not to a single node, but instead is seen to be the dual of the group of three nodes situated at the vertexes of the triangle formed by the branches 2, 3, 11, since the act of simultaneously picking up these nodes reveals the same group of branches 1, 7, 10, 12, 9, 4 in graph (a) to be a cut set.

Fig. 15. A graph topologically equivalent to that in Fig. 14b.

This apparent inconsistency is easily resolved through consideration of a slight variation in the construction of the dual of graph (a), as shown in Fig. 15. Here all meshes correspond to the nodes of graph (a) in Fig. 14 in the same way that the meshes of graph (a) correspond to nodes in the graph of Fig. 15, as the reader should carefully verify. The additional principle observed in the construction of the graph of Fig. 15 is that the sequence of branches about any node is chosen to be identical with that of the similarly numbered branches around the respective mesh, assuming a consistent clockwise (or counterclockwise) direction of circuitation around meshes and around nodes. For example, the branches taken in clockwise order around mesh a of the graph of Fig. 14(a) are numbers 1, 7, 6; around node a in the graph of Fig. 15 this sequence of branches corresponds to counterclockwise rotation. Correspondingly, the clockwise sequence of branches around mesh c in Fig.
14(a) is 10, 11, 12, and this is the counterclockwise sequence of the corresponding branches around node c in Fig. 15. This correspondence in the sequence of branches is seen to hold for all meshes and their corresponding nodes, not only between meshes in Fig. 14(a) and nodes in Fig. 15 but also between the meshes in Fig. 15 and their corresponding nodes in Fig. 14(a). The duality between these two graphs is indeed complete in every respect.* So far as the relationships between branch currents and loop currents or between branch voltages and node-pair voltages are concerned, however, these must be the same for the graph of Fig. 14(b) as they are for the graph of Fig. 15, since both involve fundamentally the same geometrical relationship between nodes and branches, as a comparison readily reveals. For this reason it is not essential in the construction of a dual graph to preserve branch-number sequences around meshes and nodes as just described, unless one wishes for some other reason to make meshes in the dual graph again correspond to single nodes in the original graph. From the standpoint of their electrical behavior, the networks whose graphs are given by Figs. 14(b) and 15 are entirely identical. These graphs are, therefore, referred to as being topologically† equivalent, and either one may be regarded as the dual of Fig. 14(a), or the latter as the dual of either of the networks of Figs. 14(b) and 15. An additional interesting example of dual graphs is shown in Fig. 16.

* The correlation of clockwise rotation in one graph with counterclockwise rotation in its dual is an arbitrary choice. One can as well choose clockwise rotation in both, the significant point being that a consistent pattern is adhered to.

The meshes a, b, c, ... in the graph of part (a) correspond to similarly
lettered nodes in the graph of part (b); and, conversely, the meshes in graph (b) correspond to nodes in part (a). It will also be observed that the sequences of branches around meshes and around corresponding nodes agree; and it is interesting to note in this special case that, although both graphs have the form of a wheel, the spokes in one are the rim segments of the other.

† The mathematical subject dealing with the properties of linear graphs is known as topology.

Fig. 16. A pair of dual graphs.

It is further useful to recognize that these graphs may be redrawn as shown in Fig. 17, where they take the form of so-called ladder configurations with "feedback" between their input and output ends. Removing link 16 in the graph of Fig. 16(a) corresponds to short-circuiting link 16 in the dual graph of part (b), since open- and short-circuit constraints are dual concepts (as previously mentioned).

Fig. 17. The dual graphs of Fig. 16 redrawn in the form of unbalanced ladder configurations.

In graph 17(b) this alteration identifies the first node on the left with the datum, thus in effect paralleling branches 1 and 9 at the left and branches 8 and 15 on the right. Such ladder configurations are much used in practice, and it is therefore well to know that the dual of a ladder is again a ladder, with the essential difference that its series branches correspond to shunt branches in the given ladder, and vice versa. It is helpful, in the process of constructing a dual graph, to visualize the given one as mapped upon the surface of a sphere instead of on a plane. If this is done, then the periphery appears as an ordinary mesh when viewed from the opposite side of the sphere. For example, if the graph of Fig.
16(a) is imagined to consist of an elastic net and is stretched over the surface of a sphere until the periphery contracts upon the opposite hemisphere, and if one now views the sphere from the opposite side so as to look directly at this hemisphere, then the periphery no longer appears to be fundamentally different in character from an ordinary mesh, for it now appears as a simple opening in the net, like all the other meshes. Thus the branches 9, 10, 11, 12, 13, 14, 15, 16 forming the contour of this mesh appear more logically to correspond to the similarly numbered group of branches in the dual graph 16(b) emanating from the central node which, like all the other nodes, now corresponds to a simple mesh in the given graph. When, in the choice of network variables, one identifies loop currents with link currents and node-pair voltages with tree-branch voltages, it will be recalled that each tie set consists of one link and a number of tree branches, while each cut set consists of one tree branch and a number of links. Since the tie sets of a given graph correspond to cut sets in the dual graph, one recognizes that the tree branches in one of these graphs are links in the other. That is to say, corresponding trees in dual graphs involve complementary sets of branches. In Fig. 16, for example, if one chooses the branches 1, 2, 3, 4, 5, 6, 7, 8 in graph (a) as forming a tree, then the corresponding tree in graph (b) is formed by the branches 9, 10, 11, 12, 13, 14, 15, 16. Or, if in graph (a) we choose branches 1, 2, 3, 4, 12, 13, 14, 15 as forming a tree, then in graph (b) the corresponding tree is formed by branches 5, 6, 7, 8, 9, 10, 11, 16.
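The complementary-tree property is immediate to state in set terms; the two choices quoted above for Fig. 16 serve as a check:

```python
branches = set(range(1, 17))          # branches 1..16 of Fig. 16

def dual_tree(tree):
    """Corresponding tree in the dual graph: the complementary branches."""
    return branches - tree

# The two tree choices quoted in the text for Fig. 16:
assert dual_tree({1, 2, 3, 4, 5, 6, 7, 8}) == {9, 10, 11, 12, 13, 14, 15, 16}
assert dual_tree({1, 2, 3, 4, 12, 13, 14, 15}) == {5, 6, 7, 8, 9, 10, 11, 16}
```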
It should now be clear, according to the discussion in the preceding articles, that, if in a given graph we pick a tree and choose the complementary set of branches as forming a tree in the dual graph, then the resulting equations between branch currents and loop currents in one of these graphs become identical (except for a replacement of the letters j and i, respectively, by v and e) with those relating branch voltages and node-pair voltages in the dual graph. In the graphs of Fig. 16, for example, we may choose branches 1 to 8 inclusive as the tree of graph (a) and branches 9 to 16 inclusive as the tree of graph (b). Then in graph (a) the branch currents j9, j10, ..., j16 are respectively identified with loop currents i1, i2, ..., i8, while in graph (b) the branch voltages v9, v10, ..., v16 are respectively identified with node-pair voltages e1, e2, ..., e8. For the tree-branch currents in graph (a) we then have, for example, j2 = -i1 + i2 = -j9 + j10; j3 = -i2 + i3 = -j10 + j11, etc.; while for the link voltages in graph (b) we have correspondingly v2 = -v9 + v10 = -e1 + e2; v3 = -v10 + v11 = -e2 + e3, etc. The reader may complete these equations as an exercise, and repeat the process for several other trees as well as for the graphs of Fig. 14. It should likewise be clear that similar results for a pair of dual graphs and their current and voltage variables are obtained if for one graph one chooses meshes as loops and in the other the corresponding nodes as a node-to-datum set of node pairs. In this case it may be desirable to regard the unassigned node as a datum and the corresponding peripheral mesh as playing the role of a datum mesh.
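The identifications just made can be automated: each link current circulates around the fundamental loop that the link closes through the tree, and a tree-branch current is the signed sum of the loop currents passing through it. A sketch, applied to a wheel modeled on Fig. 16(a) with hub node 0, spokes 1-8 as the tree, and rim branches 9-16 as links; the specific branch orientations are assumptions chosen to reproduce the relations quoted above:

```python
from collections import defaultdict

def tree_branch_currents(tree, links, link_currents):
    """tree, links: branch name -> (from_node, to_node).  Each link current
    circulates around the link's fundamental loop; a tree-branch current is
    the signed sum of the loop currents passing through that branch."""
    adj = defaultdict(list)                  # tree adjacency for path finding
    for t, (a, b) in tree.items():
        adj[a].append((b, t, +1))            # traversing a -> b follows the arrow
        adj[b].append((a, t, -1))            # traversing b -> a opposes it
    def tree_path(u, v):
        stack, seen = [(u, [])], {u}         # depth-first search for u -> v
        while stack:
            node, path = stack.pop()
            if node == v:
                return path
            for nxt, br, sgn in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, path + [(br, sgn)]))
        raise ValueError("tree is not connected")
    currents = defaultdict(float)
    for lk, (u, v) in links.items():
        # the loop runs along the link u -> v, then back v -> u through the tree
        for br, sgn in tree_path(v, u):
            currents[br] += sgn * link_currents[lk]
    return dict(currents)

hub = 0
tree = {k: (hub, k) for k in range(1, 9)}             # spokes, hub -> rim node k
links = {8 + k: (k, k % 8 + 1) for k in range(1, 9)}  # rim, node k -> node k+1
j = {8 + k: float(k) for k in range(1, 9)}            # arbitrary link currents
jt = tree_branch_currents(tree, links, j)
assert jt[2] == -j[9] + j[10]    # the relation j2 = -j9 + j10 quoted above
assert jt[3] == -j[10] + j[11]
```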
Since more general choices of loops or of node pairs may be expressed as linear combinations of these simple ones, it is seen that the parallelism between the current and voltage relations of dual networks holds in all cases, regardless of the approach taken in formulating defining relations for network variables. It is important, however, to note a restriction with regard to the existence of a dual graph. This restriction may most easily be understood through recognizing that all possible choices of tie sets in a given network must correspond to cut sets in its dual, and vice versa. In this connection, visualize the given graph as some net covering the surface of a sphere, and a tie set as any confluent group of branches forming a closed path. As mentioned at the close of Art. 5, let us think of inserting a draw string along this path and then tying off, as we might if the sphere were an inflated balloon. We would thus virtually create two balloons, fastened one to the other only at a single point where the contracted tie set has become a common node for the two subgraphs formed by the nets covering these balloons. Whether we thus regard the tie set as contracted or left in its original form upon the sphere, its primary characteristic so far as the present argument is concerned lies in the fact that it forms a boundary along which the given network is divided into two parts, and correspondingly the totality of meshes is divided into two groups. In the dual graph these correspond to two groups of nodes. If we think of grasping one of these node groups in each of our two hands and pulling them apart, the stretched branches place in evidence the cut set corresponding to the tie set of the original graph.
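The mappability condition just invoked can be given a counting form: a graph actually mapped upon a sphere satisfies Euler's relation V - E + F = 2, the peripheral region counting among the faces. This relation is not stated in the text above; it is the standard planarity count, sketched here for the wheel of Fig. 16:

```python
def euler_check(nodes, branches, meshes):
    # For a graph mapped on a sphere, V - E + F = 2 must hold, where the
    # peripheral region counts as one more face beyond the meshes.
    return nodes - branches + (meshes + 1) == 2

# The wheel of Fig. 16(a): hub plus 8 rim nodes, 16 branches
# (8 spokes and 8 rim segments), and 8 meshes.
assert euler_check(nodes=9, branches=16, meshes=8)

# Its dual, Fig. 16(b), shows the same counts: a node for each of the
# 8 meshes plus one for the peripheral region, the same 16 branches,
# and a mesh for each of the 8 independent node pairs of (a).
n_dual, b_dual, l_dual = 8 + 1, 16, 8
assert euler_check(n_dual, b_dual, l_dual)
```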
The act of cutting this set of branches is dual to the tying-off process described above, since by this means the dual graph is separated into two parts which are, respectively, dual to the two subgraphs created by contracting the tie set. Duality between the original graph and its dual demands that to every creatable cut set in one of these there must correspond in the other a tie set with the property just described. It should be clear that this requirement cannot be met if either network is not mappable upon a sphere but requires the surface of some multiply connected space like that occupied by a doughnut or a pretzel. For example, if the mapping of a graph requires the surface of a doughnut, then it is clear that a closed path passing through the hole is not a tie set, because the doughnut is not separated into two parts through the contraction of this path. The surface of a simply connected region like that of a sphere is the only surface on which all closed paths are tie sets. There is obviously no corresponding restriction on the existence of cut sets, since we can visualize grasping complementary groups of nodes in our two hands and, through cutting the stretched branches, separating the graph into two parts regardless of whether the geometry permits its being mapped upon a sphere or not. Thus, mappability upon a sphere is revealed as a necessary condition that a tie set in the original graph shall correspond to every possible cut set in its dual, and hence the latter is constructible only if the graph of the given network is so mappable.*

10 Concluding Remarks

As expressed in the opening paragraphs of the previous article, the object in discussing the subject of duality is twofold. First, duality is a means of recognizing the analytical equivalence of pairs of physically dissimilar networks; so far as mappable networks are concerned, it essentially reduces by a factor of two the totality of distinct network configurations that can occur.
Second, and no less useful, is the result that the principle of duality gives us two geometrically different ways of interpreting a given situation; if one of these proves difficult to comprehend, the other frequently turns out to be far simpler. This characteristic of the two geometrical interpretations of dual situations to reinforce the mental process of comprehending the significance of either one we wish now to present through a few typical examples. Suppose, for a given mappable graph, we consider a node-to-datum set of voltage variables. That is to say, we pick a datum node, and choose as variables the potentials of the remaining nodes with respect to this datum. If we now wish to obtain algebraic expressions for these node voltages in terms of a like number of independent branch voltages, the simplest procedure is to select a tree and recognize that each node potential is then uniquely given by an algebraic sum of tree-branch voltages, since the path from any node to the datum via tree branches is a unique one. The geometrical picture involved and the pertinent algebraic procedure are simple and easily comprehensible. Contrast with this the completely dual situation. For a given mappable graph, we consider the mesh currents as a set of appropriate variables, and ask for the algebraic expressions for these in terms of a like number of independent branch currents.

* Further detailed discussion of these as well as all foregoing principles presented in this chapter is given throughout the succeeding chapters dealing with their application. A general method for the construction of dual networks and the evaluation of their properties is given in the last article of Ch. 10.
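The node-potential procedure just described, accumulating tree-branch drops along the unique tree path from the datum, can be sketched directly; tree 1 of Fig. 8 and the branch orientations inferred from Eqs. 43 serve as the example:

```python
from collections import defaultdict

def node_potentials(tree, v, datum):
    """tree: branch -> (from_node, to_node); v: branch -> voltage drop,
    with the convention v = e(from) - e(to).  The potential of each node
    follows by accumulating drops along its unique tree path to the datum."""
    adj = defaultdict(list)
    for br, (a, b) in tree.items():
        adj[a].append((b, br, -1))   # stepping a -> b subtracts the drop
        adj[b].append((a, br, +1))   # stepping b -> a adds it back
    e = {datum: 0.0}
    stack = [datum]
    while stack:
        node = stack.pop()
        for nxt, br, sgn in adj[node]:
            if nxt not in e:
                e[nxt] = e[node] + sgn * v[br]
                stack.append(nxt)
    return e

# Tree 1 of Fig. 8 (branches 5..9), orientations inferred from Eqs. 43,
# with tree-branch voltages taken from the numerical example above:
tree = {5: (0, 1), 6: (0, 4), 7: (4, 5), 8: (2, 3), 9: (1, 2)}
v = {5: -1.0, 6: -4.0, 7: -1.0, 8: -1.0, 9: -1.0}
e = node_potentials(tree, v, datum=0)
assert e[1] == -v[5]                  # Eq. 44: e1 = -v5
assert e[2] == -v[5] - v[9]           # Eq. 44: e2 = -v5 - v9
```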
Since the latter may be regarded as the currents in a set of links associated with a chosen tree, the initial step in the procedure is clearly the same as in the previous situation. At this point, however, the lucidity of the picture is suddenly lacking, for we do not appear to have a procedure for expressing each mesh current as an algebraic sum of link currents that has a geometric clarity and straightforwardness comparable to the process of expressing node potentials in terms of tree-branch voltages; and yet we feel certain that there must exist a picture of equivalent clarity, since to every mappable situation there exists a dual which possesses all of the same features and with the same degree of lucidity. Our failure to find the mesh situation as lucid as the one involving node potentials must be due to our inability to construct in our minds the completely dual geometry. Once we achieve the latter, our initial objective will easily be gained, and our understanding of network geometry will correspondingly be enhanced. It turns out that our failure to recognize the dual geometry stems from an initial misconception of what is meant by a mesh. Since we use the term mesh to connote a particular kind of loop, namely the simplest closed path that one can trace, we establish in our minds the view that the term mesh refers to the contour (the associated tie set) instead of the thing that it should refer to, namely the space surrounded by that contour! A mesh is an opening—not the boundary of that opening. This opening is the dual of a node—the point of confluence of branches. A tree consists of nodes connected by tree branches. The dual of a tree branch is a link. Therefore the dual of a tree should be something that consists of spaces (meshes) connected by links.
If we add to the mental picture created by these thoughts the fact that traversing a branch longitudinally and crossing it at right angles are geometrically dual operations (since a branch voltage is found through a longitudinal summation process while a branch current is given by a summation over the cross section), we arrive without further difficulty at the geometrical entity that must be recognized as the dual of a tree. It is the space surrounding the tree. This space is subdivided into sections by the links. Each of these sections is a mesh; and one passes from mesh to mesh by crossing the links, just as in the tree one passes from node to node by following along the tree branches. Figure 18 shows in part (a) a graph in the form of a rectangular grid and in part (b) a possible tree with the links included as dotted lines. The space surrounding the tree, and dual to it, is best described by the word maze, as used to denote a familiar kind of picture puzzle where one is asked to trace a continuous path from one point in this space to another without crossing any of the barriers formed by the tree-like structure. Such a path connecting meshes m and n is shown dotted in part (b) of the figure. It is clear that the path leading from one mesh to any other is unique, just as is the path from one node to another along the tree branches.

Fig. 18. A graph; a possible tree and its dual, which is interpreted as a maze.

In passing along a path such as the one leading from mesh m to mesh n, one crosses a particular set of links. These links characterize this path just as a set of confluent tree branches characterizes the path from one node to another in a given tree.
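The maze picture computes a mesh-pair current exactly as a tree computes a node-pair voltage: follow the unique path through the maze and add the crossed link currents with sign. A sketch; the three-mesh example is hypothetical, and the sign convention for a crossing is an assumption:

```python
from collections import defaultdict

def mesh_pair_current(link_meshes, j, m, n):
    """link_meshes: link -> (mesh_a, mesh_b), with the convention that the
    link current is j = i(mesh_a) - i(mesh_b).  The difference i(m) - i(n)
    is the signed sum of link currents crossed on the maze path m -> n.
    Like the tree, the maze contains no closed paths, so the path is unique."""
    adj = defaultdict(list)
    for lk, (a, b) in link_meshes.items():
        adj[a].append((b, lk, +1))   # crossing a -> b picks up +j
        adj[b].append((a, lk, -1))
    stack, seen = [(m, 0.0)], {m}
    while stack:
        mesh, total = stack.pop()
        if mesh == n:
            return total
        for nxt, lk, sgn in adj[mesh]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append((nxt, total + sgn * j[lk]))
    raise ValueError("meshes not connected through the maze")

# Hypothetical maze: three meshes in a row, separated by links 1 and 2.
link_meshes = {1: ("m1", "m2"), 2: ("m2", "m3")}
i = {"m1": 5.0, "m2": 3.0, "m3": -2.0}
j = {1: i["m1"] - i["m2"], 2: i["m2"] - i["m3"]}
assert mesh_pair_current(link_meshes, j, "m1", "m3") == i["m1"] - i["m3"]
```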
Having recognized these dual processes, we now realize that we have not been entirely accurate in the foregoing discussions where we refer to a loop current as being dual to a node-pair voltage. The latter is the difference between two node potentials, and its dual is, therefore, the difference between two mesh currents, like the currents in meshes m and n in Fig. 18(b). The difference (im - in) is algebraically given by the summation of those link currents (with due attention to sign) characterizing the path from m to n, just as a node-pair voltage (potential difference between two nodes) equals the algebraic sum of tree-branch voltages along the path connecting this node pair. The difference (im - in), which might be called a mesh-pair current, is the real dual of a node-pair voltage. With the addition of the maze concept to our interpretation of network geometry, we have acquired a geometrical picture for the clarification of the algebraic connection between mesh-current differences and link currents that is as lucid as the familiar one used to connect node-potential differences with tree-branch voltages.

These matters are further clarified through more specific examples. In Fig. 19 is shown a simple network graph [part (a)], its dual [part (b)], and a schematic indicating a choice of node-to-datum voltages characterizing the dual graph [part (c)]. In the graph of part (a) the tree branches are the solid lines, and the links (branches 1, 2, 4, 5, 6) are shown dotted. In the dual graph of part (b), these same branches (1, 2, 4, 5, 6) form the tree, and the rest are links. The datum node surrounds the whole dual graph.

Fig. 19. A network graph (a), its dual (b), and a node-to-datum choice of node-pair voltages (c) corresponding to the mesh currents in (a). The tree branches (solid) in (a) become links (dotted) in (b) and vice versa.

Mesh currents i1, i2, . . ., i5 are chosen to characterize the graph (a), while correspondingly the node potentials e1, e2, . . ., e5 characterize the dual graph (b). Starting with the dual graph, it is evident that the expressions for the e's in terms of the tree-branch voltages read

e1 = v1
e2 = v2 + v1
e3 = -v1 - v5 - v6        (57)
e4 = -v5 - v6
e5 = -v4

Analogously, the mesh currents in terms of the link currents in graph (a) must be given by

i1 = j1
i2 = j2 + j1
i3 = -j1 - j5 - j6        (58)
i4 = -j5 - j6
i5 = -j4

One can verify these last results either by expressing the link currents as superpositions of the loop currents and solving for the i's, or by noting that each mesh current (like a node potential) is the difference between the current circulating on the contour of that mesh and the datum mesh current, which is visualized as circulating on the periphery of the entire graph. In this sense the datum mesh is the entire space outside the graph, just as the datum node in the dual graph surrounds it. Following the pattern set in Fig. 18(b) for expressing mesh-current differences in terms of link currents, one readily establishes Eqs. 58 as representing the situation depicted in graph (a) of Fig. 19, and simultaneously recognizes how the algebraic signs in these equations are related to the reference arrows involved.

Consider now the same networks, but with an altered choice for the voltage and current variables. In Fig. 20(a) are shown the paths for the new loop currents.
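The verification step just mentioned, expressing the link currents as superpositions of the loop currents and solving for the i's, can be sketched numerically. The superposition matrix below is a hypothetical three-mesh example, not the graph of Fig. 19:

```python
import numpy as np

# Hypothetical example (NOT the graph of Fig. 19): each link current
# is a signed superposition of the mesh currents.  Rows correspond to
# links, columns to meshes; entries are +1, -1, or 0.
A = np.array([[ 1, -1,  0],
              [ 0,  1, -1],
              [ 0,  0,  1]], dtype=float)

i_mesh = np.array([2.0, 5.0, -3.0])   # assumed mesh currents
j_link = A @ i_mesh                   # link currents by superposition

# "Solving for the i's": the square system is inverted to recover the
# mesh currents from the link currents.
i_recovered = np.linalg.solve(A, j_link)
assert np.allclose(i_recovered, i_mesh)
```

The mesh currents are recoverable precisely because the superposition matrix is nonsingular, which is the algebraic face of the geometric statement that the path from one mesh to another through the maze is unique.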
The dual graph is not repeated in this figure, but part (b) shows the diagram for the choice of node-pair voltages in the dual graph that correspond to the new loop currents in graph (a). All variables corresponding to this revised choice are distinguished by primes. So far as the voltage picture is concerned, one has little difficulty in recognizing that one now has

e'1 = e2 - e1
e'2 = e1
e'3 = e5        (60)
e'4 = e4 - e5
e'5 = e3 - e4

and so, by analogy, the corresponding relations for the new loop currents of the graph in part (a) of Fig. 20 must be

i'1 = i2 - i1
i'2 = i1
i'3 = i5        (61)
i'4 = i4 - i5
i'5 = i3 - i4

These can readily be verified through the usual procedure of writing expressions for the link currents in terms of the loop currents and solving. It is more interesting, however, to establish them entirely by analogy to the dual voltage situation, for we learn in this way more about the manner in which the loop currents are related to the link currents. Thus a loop current like i'3, for example, surrounds three meshes, and correspondingly the node-pair voltage e'3 contributes to the potentials of the three nodes o, p, q [Fig. 20(b)]. In forming the cut set associated with e'3 we would pick up nodes o, p, q, whereas in forming the tie set associated with i'3 we may say that we "pick up" the meshes whose combined contour places that tie set in evidence. Having established the fact that picking up meshes is dual to picking up nodes, and recognizing that loop currents, as contrasted to mesh currents, circulate on the resulting contours of groups of meshes, we are in a position to sketch the node-pair voltage diagram [like part (b) of Fig. 20] corresponding to a chosen loop-current diagram [like part (a) of Fig.
20], provided one exists, and, by analogy to the dual voltage equations, obtain directly the pertinent relations for the loop currents.

Since for cut sets picked at random there does not necessarily correspond a set of "node-pair voltages" that are simple potential differences between pairs of nodes, it is analogously true that for loops (i.e., tie sets) picked at random there does not necessarily correspond a set of "mesh-pair currents" that are simple differences between currents in pairs of meshes. In the example of Fig. 20, pertinent to Eqs. 60 and 61, the conditions are chosen so that one does obtain e's that are potential differences between nodes and i's that are mesh-current differences, but, when loops are picked at random, it is in general no longer possible to give any simple geometrical interpretation to the implied current relationships, just as on the voltage side of the picture a straightforward interpretation fails when cut sets are chosen at random.

Fig. 20. A revised choice of loop currents in the graph of Fig. 19(a) and the corresponding revision in the node-pair voltage definitions for the dual graph.

Wherever simple relationships do exist, the principle of duality is distinctly helpful in clarifying them. For example, in comparing parts (a) of Figs. 19 and 20, one might be tempted to conclude offhand that i'1 = i2, or i'3 = i3, because the contours on which these pairs of currents circulate are the same. As pointed out in Art. 7, it is fallacious to imply that there is any direct relation between the contours chosen for loop currents and their algebraic expressions in terms of link currents. Equations 61 show that the above offhand conclusions are false. Use of the duality principle, as in the preceding discussion, shows why they are false.

1. For the graph shown, pick the indicated meshes as loops and write a corresponding tie-set schedule. Select an independent set of columns as those pertinent to the links of a chosen tree, and from the corresponding equations find expressions for the mesh currents i1, i2, i3, i4 in terms of branch currents. Do this specifically for (a) the tree composed of branches 1, 2, 3, 6; (b) the tree composed of branches 5, 6, 7, 8; and show that the two sets of relations for the i's in terms of j's are equivalent. For the tree defined under (b) show that the mesh currents are link currents. Pick the link currents 4, 5, 7, 8 as loop currents. Find the corresponding set of closed paths, and construct an appropriate tie-set schedule.

2. With reference to the graph of Prob. 1, determine whether each of the accompanying tie-set schedules defines an independent set of loop currents. If so, express the loop currents in terms of the currents in links 1, 2, 3, 4. In each case, trace the closed paths traversed by the loop currents. Express the currents in branches 5, 6, 7, 8 in terms of the link currents 1, 2, 3, 4.

3. Given the accompanying tie-set schedule and its associated graph, trace the ...

... 7, 8, 9, 10, 11, 12; identifying, in each case, the node-pair voltages with the tree-branch voltages. For each choice of tree, express the link voltages in terms of the tree-branch or node-pair voltages.

6. For the graph of Prob. 3 and the designation of nodes shown in the accompanying sketch, choose 0 as the datum node, and write a cut-set schedule for the node-to-datum voltages e1, . . ., e7.
Express these in terms of each of the two sets of tree-branch voltages specified in Prob. 5.

7. With reference to the graph of Prob. 3 and the node designation given in Prob. 6, determine which of the following sets of node pairs are independent, and for each of the latter construct a pertinent cut-set schedule, and express the node-pair voltages in terms of the branch voltages of tree (a) in Prob. 5: (a) 02, 04, 13, 17, 26, 35, 57; (b) 02, 06, 13, 15, 24, 46, 57; (c) 02, 06, 13, 15, 24, 36, 37.

8. Construct the dual of the graph in Prob. 3, giving the appropriate numbering and reference arrows for all branches. On this dual graph indicate a set of mesh currents dual to the node-to-datum voltages of Prob. 6, and show that the cut-set schedule written there is now the appropriate tie-set schedule. Show further that the relations for the mesh currents in terms of link currents are identical in form with the expressions for the node potentials in terms of tree-branch voltages found in Prob. 6.

9. For the dual graph of Prob. 8 define loop-current variables (mesh-pair currents) that are dual to each of the independent sets of node-pair voltages specified in Prob. 7. Show in each case that the appropriate tie-set schedule is identical with the pertinent cut-set schedule of Prob. 7, and thus find the relations between the loop currents and the link currents dual to the branch voltages in tree (a) of Prob. 5. For each set of independent loop currents (making use of the appropriate tie-set schedule) find the associated set of closed paths and trace these in the dual graph.

10.
Through making appropriate linear combinations, show that any set of linearly independent rows is reducible to the particular set shown here, in which elements marked x may have any finite values (including zero):

1 x x x x . . .
0 1 x x x . . .
0 0 1 x x . . .
0 0 0 1 x . . .
. . .

If necessary, some column interchanges (corresponding to changes in branch numbering) are, of course, permitted at any stage in the transformations. Thus show that, if the l rows of a tie-set schedule are independent, it must always be possible to find at least one set of l independent columns.

11. If the links corresponding to the chosen tree of any given graph are numbered 1, 2, . . ., l, and loop currents are defined as i1 = j1, i2 = j2, . . ., il = jl, show that the first l columns of the tie-set schedule represent a matrix having 1's on its principal diagonal (upper left to lower right) and all other elements zero (called a unit matrix). Compare this situation with that in the previous problem.

12. Draw a regular pentagon with branches numbered 1 to 5 and additional branches 6 to 10 so that each vertex (node) is connected with every other one. For any appropriate cut-set schedule prove that any four of the columns 1 to 5 or 6 to 10 are independent.

13. With reference to the graph of the preceding problem, consider any appropriate tie-set schedule, and prove that any six of its columns including either 1 to 5 or 6 to 10 are independent.

14. Consider a graph in which a branch connects every node with every other node. Determine the number n of independent node pairs and the number l of independent loops in terms of the number of total nodes nt. Compute the number of equilibrium equations needed for this graph on the loop and node bases for the cases nt = 2, 3, 4, 5, 10, 50, 100, and tabulate the results.

15.
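The counts asked for in Prob. 14 follow directly from b = nt(nt - 1)/2 for a graph with a branch joining every node pair, together with n = nt - 1 and l = b - nt + 1. A short check using only these formulas:

```python
def complete_graph_counts(nt):
    """Node and loop counts for a graph with a branch joining
    every pair of its nt nodes."""
    b = nt * (nt - 1) // 2     # one branch per node pair
    n = nt - 1                 # independent node pairs (node basis)
    l = b - nt + 1             # independent loops (loop basis)
    return n, l

for nt in (2, 3, 4, 5, 10, 50, 100):
    print(nt, complete_graph_counts(nt))
```

For nt = 100 this gives n = 99 against l = 4851, which is the point of the tabulation: on such a graph the node basis requires far fewer equilibrium equations than the loop basis.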
Consider a three-dimensional graph in the form of a uniform cubical grid with ns nodes on a side and ns^3 total nodes. Show that the number of independent loops is l = 2(ns^3 - 1) - 3(ns^2 - 1). Make a table showing the numbers n and l for ns = 2, 3, 4, 5, 10, 100.

16. Consider the graph shown here, and choose a tree consisting of the branches 6 to 16 inclusive. Let the loop currents be the link currents ik = jk for k = 1 . . . 5, and construct the pertinent tie-set schedule. Define a second set of loop currents as those circulating in the clockwise direction around the boundaries of the meshes a, b, c, d, e, and construct a second tie-set schedule appropriate to this choice of current variables. Referring to the respective rows of the first schedule by the numerals 1 . . . 5 and to those of the second by the letters a . . . e, express the rows (tie sets) of each schedule as the appropriate linear combinations of rows in the other schedule. Thus a = 1 - 2; b = 2 - 3; etc., and 5 = e; 4 = d + e; etc. These are the topological relationships between the two sets of closed paths involved in the definition of loop currents. Now find the algebraic relationships between the set of loop currents i1 . . . i5 and the set ia . . . ie; that is to say, express the i1 . . . i5 in terms of ia . . . ie, and vice versa. Compare the topological and the algebraic relationships thus found, and note carefully the distinction that must be made between them.

17. With reference to the situation in Prob. 16, suppose we introduce some new loop currents as the mesh-current differences given by the algebraic relationships . . .

. . . v19 is numerically negative; and, since its algebraic sign in Eq. 2 is plus, we see that this term involves an arithmetic subtraction.
In branch 10, on the other hand, the actual drop in altitude may be contrary to the arrow direction so that v10 has a negative value. The corresponding term in Eq. 2,

v2 + v3 + v12 + v19 + v20 - v17 - v10 - v4 = 0        (2)

becomes numerically positive, as is appropriate since we actually experience a drop in altitude when we encounter branch 10 in traversing the circuit to which Eq. 2 applies. The Kirchhoff voltage law thus expresses the simply understandable fact that the algebraic sum of voltage drops in any confluent set of branches forming a closed circuit or loop must equal zero. Symbolically this fact may be expressed by writing

Sum of (+/-)v = 0        (3)

where the Greek capital sigma is interpreted as a summation sign and the quantities +/-v which are summed are voltage drops, with due regard to the possible agreement or disagreement of their pertinent reference arrows with the (arbitrary) direction of traversal around the loop, thus indicating the choice of the plus or minus sign respectively.

It is interesting to observe an important property of equations of this type with reference to a given network geometry such as that shown in Fig. 1. Suppose we write voltage-law equations for the upper left-hand corner mesh and its right-hand neighbor, thus:

v1 + v4 - v6 = 0
v2 + v5 - v7 - v4 = 0        (4)

Addition of these two equations gives

v1 + v2 + v5 - v7 - v6 = 0        (5)

which we recognize as an equation pertinent to the closed loop which is the periphery of the two meshes combined. The reason for this result is that branch 4, which is common to both meshes, injects the terms +v4 and -v4 respectively into the two Eqs. 4, and hence cancels out in their sum. It is immediately clear that such cancelation of voltage terms will take place in the summation of any group of equations relating to meshes for which these terms correspond to branches common to the group of meshes. Suppose we write separate equations for the meshes immediately below those to which Eqs. 4 refer, thus:

v6 + v12 - v13 + v9 = 0
v7 + v11 - v14 - v10 = 0        (6)

Adding Eqs. 4 and 6, we have

v1 + v2 + v5 + v12 + v11 - v10 - v14 - v13 + v9 = 0        (7)

This equation is pertinent to the periphery of the block of four upper left-hand meshes in the graph of Fig. 1. If all the equations for the separate meshes in this graph are added, one obtains Eq. 1 relating to the periphery of the whole graph. The student should try this as an exercise.

We now turn our attention to an analogous law in terms of branch currents: the so-called Kirchhoff current law. The electric current in a branch is the time rate at which charge flows through that branch. Unless the algebraic sum of currents for a group of branches confluent in the same node is zero, electric charge will be either created or destroyed at that node. Kirchhoff's current law, which in essence expresses the principle of the conservation of charge, states therefore that an algebraic summation of branch currents confluent in the same node must equal zero. Symbolically this fact is expressed by writing (as in Eq. 3):

Sum of (+/-)j = 0        (8)

As illustrations of this law suppose we write equations of this sort for nodes a and h and the one immediately to the right of h in Fig. 1. These read

-j1 + j2 + j4 = 0
j1 + j6 - j9 = 0        (9)
-j4 + j7 + j10 - j6 = 0

Each equation states that the net current diverging from a pertinent node equals zero. Now suppose we add the three Eqs. 9. This gives

j2 + j7 + j10 - j9 = 0        (10)

Branch currents j1, j4, and j6 cancel out in the process of addition.
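The cancelation argument can be mimicked mechanically by treating each mesh equation as a signed set of branch numbers; adding the sets cancels every branch shared by the meshes. The sketch below assumes the two corner-mesh equations read v1 + v4 - v6 = 0 and v2 + v5 - v7 - v4 = 0, as in Eqs. 4:

```python
def add_kvl(*equations):
    """Add voltage-law equations, each given as {branch: sign};
    branches whose signed terms cancel are dropped."""
    total = {}
    for eq in equations:
        for branch, sign in eq.items():
            total[branch] = total.get(branch, 0) + sign
    return {b: s for b, s in total.items() if s != 0}

# Assumed corner-mesh equations (Eqs. 4):
# v1 + v4 - v6 = 0  and  v2 + v5 - v7 - v4 = 0.
mesh_a = {1: +1, 4: +1, 6: -1}
mesh_b = {2: +1, 5: +1, 7: -1, 4: -1}

periphery = add_kvl(mesh_a, mesh_b)
# Branch 4, common to both meshes, has canceled; the survivors give
# v1 + v2 + v5 - v7 - v6 = 0, the equation for the combined periphery.
assert periphery == {1: +1, 6: -1, 2: +1, 5: +1, 7: -1}
```

The same bookkeeping applied to all nine mesh equations of Fig. 1 would leave only the branches on the outer periphery, which is exactly the exercise suggested above.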
Reference to the graph of Fig. 1 reveals that these branches are common to the group of three nodes in question, while the branches to which the remaining currents in Eq. 10 refer terminate only in one of these nodes. An interesting interpretation may be given the resulting Eq. 10. If we regard the portion of the graph of Fig. 1 formed by branches 1, 4, and 6 alone (referred to as a subgraph of the entire network) as enclosed in a box, then Eq. 10 expresses the fact that the algebraic sum of currents divergent from this box equals zero. In other words, the current law applies to the box containing a subgraph the same as it does to a single node. That is to say, it is not possible for electric charge to pile up or diminish within a box containing a lumped network any more than it is possible for charge to pile up or diminish at a single node. This fact follows directly from the current law applied to a group of nodes, as shown above, and yet students usually have difficulty recognizing the truth of this result. They somehow feel that in a box there is more room for charge to pile up, and so it may perhaps do this, whereas at a single node it is clear that the charge would have to jump off into space if more entered than left the node in any time interval. The above analysis shows, however, that what holds for a simple node must hold also for a box full of network.

2 Independence among the Kirchhoff Law Equations

Equilibrium equations are a set of relations that uniquely determine the state of a network at any moment. They may be written in terms of any appropriately chosen variables; the uniqueness requirement demands, however, that the number of independent equations shall equal the number of independent variables involved.
We have seen earlier that the state of a network is expressible either in terms of l = b - nt + 1 independent currents (for example, the loop currents) or in terms of n = nt - 1 = b - l independent voltages (for example, the node-pair voltages). On a current basis we shall, therefore, require exactly l independent equations; and on a voltage basis exactly n independent equations will be needed. For these equations we turn our attention to the Kirchhoff laws. It is essential to determine how many independent equations of each type (the voltage-law and the current-law types) may be written for any given network geometry.

Consider first the voltage-law equations, and assume that these have been written for all of the nine meshes of the network graph in Fig. 1. Incidentally, this graph has 20 branches and a total of 12 nodes (b = 20, nt = 12). Hence l = 20 - 12 + 1 = 9, which just equals the number of meshes. Any tree in this network involves n = 11 branches. There are 9 links, and hence there are 9 geometrically independent loop currents. From what has been pointed out in the previous article, it is clear that a voltage-law equation written for any other loop enclosing a group of meshes in Fig. 1 may be formed by adding together the separate equations for the pertinent meshes. Such additional voltage-law equations clearly are not independent. The inference is that one can always write exactly l independent equations of the voltage-law type.

This conclusion is supported by the following reasoning. Suppose, for any network geometry, a tree is chosen, and the link currents are identified with loop currents. For the correspondingly determined loops a set of voltage-law equations are written. These equations are surely independent, for the link voltages appear separately, one in each equation, so that it certainly is not possible to express any equation as a linear combination of the others.
Each of these equations could be used to express one link voltage in terms of tree-branch voltages. This fact incidentally substantiates what was said earlier with regard to the tree-branch voltages being an independent set and the link voltages being expressible uniquely in terms of them (see Art. 6, Ch. 1). Now any other closed loop for which a voltage-law equation could be written must traverse one or more links, since the tree branches alone can form no closed paths. If in this equation the previous expressions for the pertinent link voltages are substituted, the resultant equation must reduce to the trivial identity 0 = 0, since no nontrivial relation can exist among tree-branch voltages alone (the tree-branch voltages are independent and hence are not expressible in terms of each other). It follows, therefore, that the voltage-law equation written for the additional closed loop expresses no independent result. There are indeed exactly l independent voltage-law equations.

Let us turn our attention now to the Kirchhoff current-law equations and see how many of these may be independent. Referring again to the graph of Fig. 1, suppose we begin writing equations for several nodes adjacent to each other. If we examine these equations carefully, we observe that each contains at least one term that does not appear in the others. For example, if we consider the equations written for nodes a and h, it is clear that the terms involving j2 and j4 do not appear in the equation for node h, and that the j6 and j9 terms in the equation for node h do not appear in the one for node a. If we also write an equation for the node immediately to the right of h, this one contains terms with j7 and j10 which are not contained in either of the equations for nodes a or h.
Such sets of equations are surely independent, for it is manifestly not possible to express any one as a linear combination of the others so long as each has terms that the others do not contain. As we proceed to write current-law equations for additional nodes in the graph of Fig. 1, the state of affairs just described continues to hold true until equations have been written for all but one of the nodes. The inference is that exactly n = nt - 1 independent equations of the current-law type can always be written.

This conclusion is supported by the following reasoning. Suppose, for any network geometry, a tree is chosen, and the tree-branch voltages are identified with node-pair voltages. For the correspondingly determined node pairs, a set of Kirchhoff current-law equations are written. The set of branches taking part in the equation for any node pair is the pertinent cut set, just as the group of branches involved in the voltage-law equation for any loop is the tie set for that loop. The cut set pertinent to the node pair defined by any tree branch evidently involves that tree branch in addition to those links having one of their ends terminating upon the picked-up nodes (see Art. 8, Ch. 1). Figure 2 illustrates the choice of a tree for the network graph of Fig. 1, and, with respect to the node pair f, e joined by branch 20, indicates by dotted lines the links that take part in the pertinent cut set. Since the tree-branch voltage v20 is identified with the respective node-pair voltage, the latter has its reference arrow pointing from f to e.

Fig. 2. A tree for the graph of Fig. 1. The cut set pertinent to node pair f-e consists of tree branch 20 and the links shown dotted.

That is to say, the picked-up nodes are e, q, l, b, c, d.
Hence the pertinent current-law equation reads

j20 - j11 - j4 - j2 = 0        (11)

Schedules like 40, 42, 46 in Art. 8 of Ch. 1 are helpful in writing the current-law equations for a chosen set of node pairs, for the elements in the rows of such a schedule are the coefficients appropriate to these equations. Suppose that current-law equations like 11 are written for all of the node pairs corresponding to the n tree branches. These equations are surely independent, for the tree-branch currents appear separately, one in each equation, so that it certainly is not possible to express any equation as a linear combination of the others. Each of these equations could be used to express one tree-branch current in terms of the link currents. This fact incidentally substantiates what was said earlier with regard to the link currents being an independent set and the tree-branch currents being expressible uniquely in terms of them (see Art. 5, Ch. 1).

Now any other cut set pertinent to a node pair for which a current-law equation could be written would have to involve one or more tree branches, since the tree connects all of the nodes, and therefore no node exists that has not at least one tree branch touching it. If in such an additional current-law equation one substitutes the expressions already obtained for the pertinent tree-branch currents, the resultant equation must reduce to the trivial identity 0 = 0, since no nontrivial relation can exist among link currents alone (the link currents are independent and hence are not expressible in terms of each other). It follows, therefore, that the current-law equation written for any additional node pair expresses no independent result. There are indeed exactly n independent current-law equations.
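The count of independent current-law equations can also be seen numerically: the node-versus-branch incidence matrix of a connected graph always has rank nt - 1, each row being one current-law equation. A small assumed graph (not Fig. 1) illustrates this:

```python
import numpy as np

# A small assumed graph (not Fig. 1): 4 nodes, 5 branches, with
# edges[k] = (tail, head) and branch current flowing tail -> head.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
nt = 4

A = np.zeros((nt, len(edges)))
for k, (tail, head) in enumerate(edges):
    A[tail, k] = +1    # current leaves the tail node
    A[head, k] = -1    # and enters the head node

# Each row of A is one Kirchhoff current-law equation.  The rows sum
# to the zero row, so at most nt - 1 of them are independent; the
# rank shows that exactly nt - 1 of them are.
assert np.allclose(A.sum(axis=0), 0)
assert np.linalg.matrix_rank(A) == nt - 1
```

Dropping any one row (choosing a datum node) leaves a full-rank set, which mirrors the statement above that equations can be written for all but one of the nodes.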
3 The Equilibrium Equations on the Loop and Node Bases

Having established the fact that the state of a network can be characterized uniquely either in terms of a set of l loop currents or in terms of a set of n node-pair voltages, and having recognized that the numbers of independent Kirchhoff voltage-law and current-law equations are l and n respectively, the conclusion is imminent that the equilibrium condition for a network can be expressed in either of two ways: (a) through a set of l voltage-law equations in which the loop currents are the variables, or (b) through a set of n current-law equations in which the node-pair voltages are the variables. These procedures, which are referred to respectively as the loop and node methods of expressing network equilibrium, are now discussed in further detail.

Consider first the loop method. The voltage-law equations, like Eq. 1, p. 65, involve the branch-voltage drops. If these equations are to be written with the loop currents as variables, we must find some way of expressing the branch voltages in terms of the loop currents. These expressions are obtained in two successive steps. The branch voltages are related to the branch currents by the volt-ampere equations pertaining to the kinds of elements (inductance, resistance, or capacitance) that the branches represent; and the branch currents in turn are related to the loop currents in the manner shown in Ch. 1. Detailed consideration of the relations between branch currents and branch voltages is restricted at present to networks involving resistances only. Appropriate extensions to include the consideration of inductance and capacitance elements will follow in the later chapters. Let the resistances of branches 1, 2, 3, . . . be denoted by r1, r2, r3, etc.
Then the relations between all the branch voltages and all the branch currents are expressed by

vk = rk jk    for k = 1, 2, . . ., b        (12)

The complete procedure for setting up the equilibrium equations on the loop basis will be illustrated for the network graph shown in Fig. 3. Part (a) is the complete graph, and part (b) is a chosen tree. Branches 1, 2, . . ., 6 are links, and the link currents j1, j2, . . ., j6 are identified respectively with the loop currents i1, i2, . . ., i6. The following tie-set schedule is readily constructed from an inspection of the resulting closed paths pertinent to these six loop currents [as the reader should check through placing the links 1, 2, . . ., 6, one at a time, into the tree of Fig. 3(b)]. The Kirchhoff voltage-law equations written for these same loops are immediately obtained through use of the coefficients in the rows of this schedule.

Fig. 3. A nonmappable graph (a), and a possible tree (b).

With the resistance values indicated, v1 = 2j1, v2 = j2, v3 = 5j3, v4 = 3j4 volts, and so forth. Use of Eqs. 15 then gives, for the tree-branch voltages, expressions such as

6(i1 + i4 - i6)
10(-i1 + i2 - i4 - i5 + i6)        (17)
9(i1 - i3 + i5)

The desired loop equilibrium equations are obtained through substituting these values for the v's into Eqs. 14.
After proper arrangement of the results, one finds

 . . .  - 18i2 - 17i3 + 16i4 + 27i5 - 24i6 = 0
-18i1 + 19i2 +  8i3 - 10i4 - 18i5 + 18i6 = 0
-17i1 +  8i2 + 22i3 +  0i4 - 17i5 +  8i6 = 0        (18)
 16i1 - 10i2 +  0i3 + 19i4 + 10i5 - 16i6 = 0
 27i1 - 18i2 - 17i3 + 10i4 + 31i5 - 18i6 = 0
-24i1 + 18i2 +  8i3 - 16i4 - 18i5 + 24i6 = 0

Considering next the node method of writing equilibrium equations, we observe first that the current-law equations, like Eq. 11 above, involve the branch currents. If these equations are to be written with the node-pair voltages as variables, we must express the branch currents in terms of the node-pair voltages. To do this, we note that the branch currents are related to the branch voltages through Eqs. 12, and the branch voltages in turn are related to the node-pair voltages in the manner shown in Ch. 1. Equations 12 are now more appropriately written in the form

jk = gk vk    for k = 1, 2, . . ., b        (19)

The following cut-set schedule is then readily constructed from an inspection of Fig. 3, noting the picked-up nodes pertinent to these four node pairs (the picked-up node sets include c, d, e; a, b, c; and a, b, c, d). The Kirchhoff current-law equations corresponding to this choice of node pairs are Eqs. 21. Use of Eqs. 22 then gives

j1 = 0.5(-e1 + e2 - e3 - e4)        j6 = 0.143(e1 - e2 + e3)
j2 = -e2 + e3                       j7 = 0.167e1
j3 = 0.333(-e1 + e2)      (23)      j8 = 0.1e2
j4 = 0.2(e3 + e4)                   j9 = 0.125e3
j5 = 0.25(e2 - e3 - e4)             j10 = 0.111e4

The desired node equilibrium equations are obtained through substituting these values for the j's into Eqs. 21.
After proper arrangement, the results read

 1.142e1 - 0.976e2 + 0.643e3 + 0.500e4 = 0
-0.976e1 + 2.326e2 - 1.893e3 - 0.750e4 = 0        (24)
 0.643e1 - 1.893e2 + 2.218e3 + 0.950e4 = 0
 0.500e1 - 0.750e2 + 0.950e3 + 1.061e4 = 0

In summary it is well to observe that the procedure for setting up equilibrium equations involves, for either the loop or node method, essentially three sets of relations:

(a) The Kirchhoff equations in terms of pertinent branch quantities.
(b) The relations between branch voltages and branch currents.
(c) The branch quantities in terms of the desired variables.

The coefficients in the rows and in the columns of the appropriate tie-set or cut-set schedule supply the means for writing the relations (a) and (c) respectively. The relations (b), in the form of either Eqs. 12 or Eqs. 19, are straightforward in any case. The desired equilibrium equations are obtained through substituting relations (c) into (b), and the resulting ones into (a). In the loop method, the branch quantities in the voltage-law equations (a) are voltages while the branch quantities in (c) are currents. In the node method, the branch quantities in the current-law equations (a) are currents while the branch quantities in (c) are voltages. The relations (b) are needed in either case to facilitate the substitution of (c) into (a); that is to say, this substitution requires first a conversion from branch currents to branch voltages or vice versa. It is this conversion that is supplied by the relations (b), which depend upon the circuit elements (resistances or conductances in the above example). The tie-set or cut-set schedule is thus seen to play a dominant role in either method since it summarizes in compact and readily usable form all pertinent relations except those determined by the element values.
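The three relations (a), (b), and (c) combine into a single matrix statement: with B the tie-set schedule and diag(r) the branch resistances, the loop-resistance parameter matrix is B diag(r) B-transpose. The schedule and element values below are illustrative only, not those of Fig. 3:

```python
import numpy as np

# Illustrative tie-set schedule (rows: loops, columns: branches) and
# branch resistances -- assumed values, not those of Fig. 3.
B = np.array([[1, 0,  1, -1, 0],
              [0, 1, -1,  0, 1]], dtype=float)
r = np.array([2.0, 1.0, 5.0, 3.0, 4.0])

# (c) branch currents from loop currents:   j = B.T @ i
# (b) branch voltages from branch currents: v = diag(r) @ j
# (a) voltage law around each loop:         B @ v = 0
# Substituting (c) into (b) into (a) gives the loop-resistance matrix:
R = B @ np.diag(r) @ B.T
assert np.allclose(R, [[10.0, -5.0], [-5.0, 10.0]])
assert np.allclose(R, R.T)   # the parameter matrix is symmetric
```

The node-conductance matrix of Eqs. 24 arises in exactly the same way, with the cut-set schedule Q and the branch conductances: G = Q diag(g) Q-transpose.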
The rows of a tie-set schedule define an independent set of closed paths, and hence provide a convenient means for obtaining an independent set of Kirchhoff voltage-law equations. Any row of a cut-set schedule, on the other hand, represents all of the branches terminating in the subgraph associated with one or more nodes. Since the algebraic sum of currents in such a set of branches must equal zero, the rows of a cut-set schedule are seen to provide a convenient means for obtaining an independent set of Kirchhoff current-law equations. The columns of these same schedules provide the pertinent relations through which the desired variables are introduced. They are useful not only in the process of obtaining the appropriate equilibrium equations, but also in subsequently enabling one to compute any of the branch quantities from known values of the variables. In situations where the geometry is particularly simple, and where correspondingly straightforward definitions for the variables are appropriate, one may, after acquiring some experience, employ a more direct procedure for obtaining equilibrium equations (as given in Art. 6) which dispenses with the use of schedules.

4 Parameter Matrices on the Loop and Node Bases

It should be observed that the final equilibrium Eqs. 18 and 24 are written in an orderly form in that the variable i1 (resp. e1) appears in the first column, the variable i2 (resp. e2) in the second column, and so forth. Taking this arrangement for granted, it becomes evident that the essential information conveyed by Eqs. 18, for example, is contained with equal definiteness but with increased compactness in the array of coefficients

      [  35  -18  -17   16   27  -24 ]
      [ -18   19    8  -10  -18   18 ]
[R] = [ -17    8   22    0  -17    8 ]        (25)
      [  16  -10    0   19   10  -16 ]
      [  27  -18  -17   10   31  -18 ]
      [ -24   18    8  -16  -18   24 ]

known as the loop-resistance parameter matrix. Equilibrium Eqs.
24 are similarly characterized by the following node-conductance parameter matrix

      [  1.142  -0.976   0.643   0.500 ]
[G] = [ -0.976   2.326  -1.893  -0.750 ]        (26)
      [  0.643  -1.893   2.218   0.950 ]
      [  0.500  -0.750   0.950   1.061 ]

The term matrix is a name given to a rectangular array of coefficients as exemplified by forms 25 and 26. As will be discussed in later chapters, one can manipulate sets of simultaneous algebraic equations like those given by 18 and 24 in a facile manner through use of a set of symbolic operations known as the rules of matrix algebra. These matters need not concern us at the moment, however, since the matrix concept is at present introduced only to achieve two objectives that can be grasped without any knowledge of matrix algebra whatever, namely: (a) to recognize that all of the essential information given by the sets of Eqs. 18 and 24 is more compactly and hence more effectively placed in evidence through the rectangular arrays 25 and 26; (b) to make available a greatly abbreviated method of designating loop- or node-parameter values in numerical examples.

The second of these objectives may better be understood through calling attention first to a common symbolic form in which equations like 18 are written, namely thus:

r11 i1 + r12 i2 + ... + r1l il = 0
r21 i1 + r22 i2 + ... + r2l il = 0        (27)
. . . . . . . . . . . . . . . .
rl1 i1 + rl2 i2 + ... + rll il = 0

Here each coefficient is denoted by a symbol like r11, r12, and so forth. The corresponding matrix reads

      [ r11  r12  ...  r1l ]
[R] = [ r21  r22  ...  r2l ]        (28)
      [ ...  ...  ...  ... ]
      [ rl1  rl2  ...  rll ]

The general coefficient in this matrix is denoted by r_sk, in which the indexes s and k can independently assume any integer values from 1 to l. Observe that the first index denotes the row position, and the second one denotes the column position of the coefficient with respect to array 28. Analogously, a set of node equations like 24 would symbolically be written

g11 e1 + g12 e2 + ... + g1n en = 0
g21 e1 + g22 e2 + ... + g2n en = 0
. . . . . . . . . . . . . . . .
gn1 e1 + gn2 e2 + ... + gnn en = 0

with the matrix

      [ g11  g12  ...  g1n ]
[G] = [ g21  g22  ...  g2n ]
      [ ...  ...  ...  ... ]
      [ gn1  gn2  ...  gnn ]

Identification of loop Eqs.
27 in analytic form, with the specific numerical Eqs. 18, would necessitate (without use of the parameter-matrix concept) writing

r11 = 35,  r12 = -18,  r13 = -17,  and so forth        (31)

which is clearly an arduous and space-consuming task compared with writing down the numerical matrix 25. Use of the matrix concept takes advantage of the fact that the row and column position of a number identifies it as a specific r_sk value; it is no longer necessary to write identifying equations like those given by 31. Similar remarks apply to the numerical identification of parameters on the node basis and the usefulness of the corresponding parameter-matrix notation.

5 Regarding the Symmetry of Parameter Matrices

The parameter matrices 25 and 26 given above have an important and interesting property in common which is described as their symmetry. For example, in matrix 25 we note that r12 = r21 = -18, r13 = r31 = -17, r14 = r41 = 16, and so forth. More specifically, matrix 25 is said to possess symmetry about its principal diagonal, the latter being represented by the elements r11 = 35, r22 = 19, r33 = 22, etc. on the diagonal extending from the upper left- to the lower right-hand corner of the array. Elements symmetrically located above and below this diagonal are equal. Symbolically this symmetrical property is expressed by the equation

r_sk = r_ks

Similar remarks apply to the node-conductance matrix 26. This symmetry of the parameter matrix is neither accidental nor inherent in the physical property of linear networks. It is the result of having followed a deliberate procedure in the derivation of equilibrium equations that need by no means always be adhered to.
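Both the index convention (first index = row, second = column) and the symmetry property can be checked mechanically. The sketch below stores the coefficients of the node equilibrium Eqs. 24 as a rectangular array and verifies g_sk = g_ks for every element; it is a verification aid only, not part of the text's procedure.

```python
# The node-conductance matrix 26: entry g[s-1][k-1] is the coefficient g_sk,
# the first index giving the row and the second the column (lists count from 0).
g = [[ 1.142, -0.976,  0.643,  0.500],
     [-0.976,  2.326, -1.893, -0.750],
     [ 0.643, -1.893,  2.218,  0.950],
     [ 0.500, -0.750,  0.950,  1.061]]

def is_symmetric(m):
    """True when m[s][k] == m[k][s] for all s, k."""
    n = len(m)
    return all(m[s][k] == m[k][s] for s in range(n) for k in range(n))

print(g[2][1])           # g_32 = -1.893
print(is_symmetric(g))   # True
```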
In order to understand the nature of this procedure, let us recall first that the process of deriving equilibrium equations involves predominantly the two sets of relations designated in the summary in Art. 3 as (a) the Kirchhoff-law equations and (c) the defining equations for the chosen variables. [The circuit element relations (b) are needed in carrying out the substitution of (c) into (a) but are not pertinent to the present argument.] On the loop basis the variables are loop currents, and the Kirchhoff equations are of the voltage-law type; on the node basis the variables are node-pair voltages, and the Kirchhoff equations are of the current-law type.

The choice of a set of loop-current variables involves the fixing of a set of loops or closed paths (tie sets), either through the choice of a tree and the identification of link currents with loop currents or through the forthright selection of a set of geometrically independent loops. The writing of Kirchhoff voltage-law equations also necessitates the selection of a set of geometrically independent loops, but this set need not be the same as that pertaining to the definition of the chosen loop currents. If the same loops are used in the definition of loop currents and in the writing of the voltage-law equations, then the resulting parameter matrices become symmetrical; but if separate choices are made for the closed paths defining loop currents and those for which the voltage-law equations are written, then the parameter matrices will not become symmetrical.

Thus a more general procedure for obtaining the loop equilibrium equations involves the use of two tie-set schedules. One of these pertains to the definition of a set of loop-current variables (as discussed in Art. 5, Ch.
1); the tie sets in the other one serve merely as a basis for writing the voltage-law equations. Instead of using the rows and columns of the same schedule for obtaining relations (a) and (c) respectively in the summary referred to above, one uses the rows of one schedule and the columns of another. The reader should illustrate these matters for himself by carrying through this revised procedure for the numerical example given above and noting the detailed changes that occur.

Analogously, on the node basis, one must choose a set of geometrically independent node pairs and their associated cut sets for the definition of node-pair voltage variables, and again for the writing of the Kirchhoff current-law equations. The second selection of node pairs and associated cut sets need not be the same as the first; but, if they are (as in the numerical example leading to Eqs. 24), then the resulting parameter matrix becomes symmetrical.

Thus a more general procedure for obtaining the node equilibrium equations involves the use of two cut-set schedules. One of these pertains to the definition of a set of node-pair voltage variables (as discussed in Art. 6, Ch. 1); the cut sets in the other one are utilized in writing current-law equations. Instead of using the rows and columns of the same schedule, one uses the rows of one schedule and the columns of another.

The significant point in these thoughts is that the choice of variables, whether current or voltage, need have no relation to the process of writing Kirchhoff-law equations. It is merely necessary that the latter be an independent set; the variables in terms of which they are ultimately expressed may be chosen with complete freedom.
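The effect of using one schedule's rows and another's columns can be seen in a small numerical sketch. Below, loop currents are defined on two hypothetical loops A and B, while the voltage-law equations are written once on the same loops and once on the (still independent) pair A and A+B; the second choice destroys the symmetry. All element values are invented for illustration.

```python
# Parameter matrix when the KVL rows come from one tie-set schedule and the
# loop-current columns from another: R' = B_kvl diag(r) B_def^T.

def param_matrix(B_kvl, B_def, r):
    n, b = len(B_kvl), len(r)
    return [[sum(B_kvl[s][k] * r[k] * B_def[t][k] for k in range(b))
             for t in range(n)] for s in range(n)]

B_def = [[1, 1, 0],          # loop currents defined on loops A and B
         [0, -1, 1]]
B_kvl_other = [[1, 1, 0],    # KVL written instead on loops A and A+B
               [1, 0, 1]]    # (row A+B is the sum of the two rows of B_def)
r = [5, 10, 4]

print(param_matrix(B_def, B_def, r))        # [[15, -10], [-10, 14]]  symmetric
print(param_matrix(B_kvl_other, B_def, r))  # [[15, -10], [5, 4]]     unsymmetric
```

Both matrices describe the same network; only the bookkeeping differs, which is the point of the passage above.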
When the same tie sets are used for voltage-law equations and loop-current definitions, or the same cut sets are used for current-law equations and node-pair voltage definitions, then we say that the choice of variables is consistent with the Kirchhoff-law equations. It is this consistency that leads to symmetrical parameter matrices.*

The question of symmetry in the parameter matrices is important primarily in that one should recognize the deliberateness in the achievement of this result and not (as is quite common) become confused into thinking that it is an inherent property of linear passive bilateral networks to be characterized by symmetrical parameter matrices. We shall, to be sure, follow the usual procedure that leads to symmetry, not only because it obviates two choices being made for a set of loops or node pairs, but also because symmetrical equations are easier to solve, and because a number of interesting network properties are more readily demonstrated. So in the end we follow the customary procedure, but with an added sense of perspective that comes from a deeper understanding of the principles involved.

6 Simplified Procedures That Are Adequate in Many Practical Cases

We have given the preceding very general approach to the matter of forming the equilibrium equations of networks because, through it as a background, we are now in a position to understand far more adequately and with greater mental satisfaction the following rather restricted but practically very useful procedures applicable to many geometrical network configurations dealt with in practice. Thus, in many situations encountered in engineering work, the network geometry is such that the graph may be drawn on a plane surface without having any branches cross each other. As mentioned in Art. 9, Ch. 1, such a network is spoken of as being "mappable on a plane," or more briefly as a mappable network. The network whose graph is shown in Fig. 3 is not of the mappable variety, but the one given by the graph in Fig. 1 is.

* These matters were first pointed out by the author at an informal round-table conference on network analysis and synthesis sponsored by the AIEE at its midwinter convention in 1938. The discussions (supplemented by a distribution of pertinent mimeographed notes) included derivation of the general loop and node equilibrium equations for bilateral networks in symmetrical or dissymmetrical form and the consequent possibility of obtaining symmetrical matrices for networks containing unilateral elements through an appropriate definition of variables. During the past 15 years the presentation of this material was continually simplified through classroom use.

When the equilibrium equations for a mappable network (such as that shown in Fig. 1) are to be written on the loop basis, it is possible to choose as a geometrically independent set of closed loops the meshes of this network graph (as pointed out in Art. 7 of Ch. 1). A simple example of this sort is shown in Fig. 4, in which the meshes are indicated by circulatory arrows. The corresponding voltage-law equations are

v1 - v4 = 0
v2 - v5 = 0        (33)
v3 - v6 = 0
v4 + v5 + v6 = 0

The branch currents in terms of the loop currents are seen to be given by

j1 = i1        j4 = i4 - i1
j2 = i2        j5 = i4 - i2        (34)
j3 = i3        j6 = i4 - i3

Suppose the branch resistance values are

r1 = 5,  r2 = 10,  r3 = 4,  r4 = 2,  r5 = 10,  r6 = 5        (35)

Equations 34 multiplied respectively by these values yield the corresponding v's, by means of which Eqs. 33 become expressed in terms of the loop currents.
After proper arrangement this substitution yields

7i1 + 0i2 + 0i3 - 2i4 = 0
0i1 + 20i2 + 0i3 - 10i4 = 0        (36)
0i1 + 0i2 + 9i3 - 5i4 = 0
-2i1 - 10i2 - 5i3 + 17i4 = 0

with the symmetrical matrix

      [  7    0    0   -2 ]
[R] = [  0   20    0  -10 ]        (37)
      [  0    0    9   -5 ]
      [ -2  -10   -5   17 ]

A simple physical interpretation may be given to these equations by reference to Fig. 5, in which the same network as in Fig. 4 is redrawn with the branch numbering and reference arrows left off but with the branch resistances and their values indicated.

Fig. 4. A mappable network graph in which the meshes are chosen as loops.
Fig. 5. The resistance network whose graph is shown in Fig. 4. Element values are in ohms.

The term 7i1 in the first of Eqs. 36 may be interpreted as the voltage drop caused in mesh 1 by loop current i1, since the total resistance on the contour of this mesh is 7 ohms; the rest of the terms in this equation represent additional voltage drops caused in mesh 1 by the loop currents i2, i3, i4, respectively. Since no part of the contour of mesh 1 is traversed by the currents i2 and i3, these can cause no voltage drop in mesh 1; hence the coefficients of their terms in the first of Eqs. 36 are zero. The term -2i4 takes account of the fact that loop current i4, in traversing the 2-ohm resistance, contributes to the voltage drop in mesh 1 and that this contribution is negative with respect to the loop reference arrow in mesh 1. The second of Eqs. 36 similarly expresses the fact that the algebraic sum of voltage drops caused in mesh 2 by the various loop currents equals zero. Only those terms have nonzero coefficients whose associated loop currents traverse at least part of the contour of mesh 2.
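The substitution that produced Eqs. 36 can be carried through mechanically. The sketch below encodes the mesh voltage-law Eqs. 33, the branch-current relations of Eqs. 34, and the resistance values of Eqs. 35, and collects the coefficients of the loop currents.

```python
# Reproducing Eqs. 36 for the network of Fig. 4 by direct substitution.
r = [5, 10, 4, 2, 10, 5]                # r1 .. r6, ohms (Eqs. 35)

# Eqs. 34: row k gives branch current j_(k+1) as a combination of i1..i4.
branch_from_loop = [
    [ 1,  0,  0, 0],    # j1 = i1
    [ 0,  1,  0, 0],    # j2 = i2
    [ 0,  0,  1, 0],    # j3 = i3
    [-1,  0,  0, 1],    # j4 = i4 - i1
    [ 0, -1,  0, 1],    # j5 = i4 - i2
    [ 0,  0, -1, 1],    # j6 = i4 - i3
]

# Eqs. 33: row s gives mesh equation s as a combination of v1..v6.
mesh_from_branch = [
    [1, 0, 0, -1,  0,  0],   # v1 - v4 = 0
    [0, 1, 0,  0, -1,  0],   # v2 - v5 = 0
    [0, 0, 1,  0,  0, -1],   # v3 - v6 = 0
    [0, 0, 0,  1,  1,  1],   # v4 + v5 + v6 = 0
]

# v_k = r_k j_k; the coefficient of i_t in mesh s collects over all branches.
R = [[sum(mesh_from_branch[s][k] * r[k] * branch_from_loop[k][t]
          for k in range(6)) for t in range(4)] for s in range(4)]
print(R)   # [[7, 0, 0, -2], [0, 20, 0, -10], [0, 0, 9, -5], [-2, -10, -5, 17]]
```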
The value of any nonzero coefficient equals the ohmic value of the total or partial mesh-2 resistance traversed by the pertinent loop current, and its algebraic sign is plus or minus, according to whether the reference direction for this loop current agrees or disagrees, respectively, with the reference arrow for mesh 2. Analogous remarks apply to the rest of Eqs. 36.

With this interpretation in mind, one can write the loop-resistance matrix 37 directly. Thus the coefficients on the principal diagonal are, respectively, the total resistance values on the contours of meshes 1, 2, 3, and so forth. The remaining coefficients are resistances of branches common to a pair of meshes, with their algebraic signs plus or minus according to the confluence or counterfluence of the respective mesh arrows in the pertinent common branch. Specifically, a term r_sk in value equals the resistance of the branch common to meshes s and k; its algebraic sign is plus if the mesh arrows have the same direction in this common branch; it is minus if they have opposite directions.

In a mappable network, with the meshes chosen as loops and the loop reference arrows consistently clockwise (or consistently counterclockwise), the algebraic signs of all nondiagonal terms in the loop-resistance matrix are negative. It is obvious that this procedure for the derivation of loop equilibrium equations yields a symmetrical parameter matrix (r_sk = r_ks), since a branch common to meshes s and k, whose value determines the coefficient r_sk, is at the same time common to meshes k and s.
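The inspection rule admits an equally short sketch for the network of Fig. 5: each diagonal entry is the total resistance on a mesh contour, and each off-diagonal entry is the signed resistance of the branch common to the two meshes. The branch-to-mesh memberships and senses below are read from Eqs. 34.

```python
# Writing the loop-resistance matrix 37 directly by inspection (Fig. 5).
r = {1: 5, 2: 10, 3: 4, 4: 2, 5: 10, 6: 5}       # branch resistances, ohms
# For each mesh, its branches with the sense (+1/-1) in which the mesh
# arrow traverses the branch reference arrow (read from Eqs. 34).
mesh = {
    1: {1: 1, 4: -1},
    2: {2: 1, 5: -1},
    3: {3: 1, 6: -1},
    4: {4: 1, 5: 1, 6: 1},
}

def r_sk(s, k):
    """Total contour resistance when s == k, signed common resistance otherwise."""
    return sum(mesh[s][b] * mesh[k][b] * r[b] for b in mesh[s] if b in mesh[k])

R = [[r_sk(s, k) for k in range(1, 5)] for s in range(1, 5)]
print(R)   # [[7, 0, 0, -2], [0, 20, 0, -10], [0, 0, 9, -5], [-2, -10, -5, 17]]
```

The result agrees with the matrix obtained by the general substitution procedure, as the passage above asserts it must.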
This simplified procedure for writing down the loop equilibrium equations directly (having made a choice for the loops and loop currents) does not, of course, require mappability of the network, but it is not difficult to appreciate that it soon loses its simplicity and directness when the network geometry becomes random. For, in a random case it may become difficult to continue to speak of meshes as simplified versions of loops; moreover, their choice is certainly no longer straightforward, nor is the designation of loop reference arrows as simple to indicate. Any given branch may be common to more than two meshes; the pertinent loop reference arrows may traverse such a branch in random directions, so that the nondiagonal coefficients in the parameter matrix will no longer be consistently negative. Although the simplified procedure may still be usable in some moderately complex nonmappable cases, one will find the more general procedure described earlier preferable when arbitrary network geometries are encountered.

An analogous simplified procedure appropriate to relatively simple geometries may be found for the determination of node equilibrium equations. In this simplified procedure the node-pair voltage variables are chosen as a node-to-datum set, as described in Art. 8 of Ch. 1. That is, they are defined as the potentials of the various single nodes with respect to a common (arbitrarily selected) datum node, as illustrated in Ch. 1 by Fig. 11 for the network graph of Fig. 8. The cut sets (which determine the Kirchhoff current-law equations) are then all given by the groups of branches divergent from the single nodes for which the pertinent node potentials are defined. With regard to the network of Fig.
4 one may choose the bottom node as the datum or reference and define the potentials of nodes 1 and 2 respectively as the voltage variables e1 and e2. Noting that the pertinent cut sets are the branches divergent from these nodes, the current-law equations consistent with this selection of node-pair voltages are seen to read

j1 + j4 - j3 - j6 = 0        (38)
-j1 - j4 + j2 + j5 = 0

The branch voltages in terms of the node potentials are, by inspection of Fig. 4,

v1 = e1 - e2        v4 = e1 - e2
v2 = e2             v5 = e2        (39)
v3 = -e1            v6 = -e1

The branch conductances corresponding to the resistance values 35 are

g1 = 0.2,  g2 = 0.1,  g3 = 0.25,  g4 = 0.5,  g5 = 0.1,  g6 = 0.2

Equations 39 multiplied respectively by these values yield the corresponding j's in terms of the node potentials. Their substitution into Eqs. 38 results in the desired equilibrium equations, which read

1.15e1 - 0.70e2 = 0        (41)
-0.70e1 + 0.90e2 = 0

with the symmetrical node-conductance matrix

[G] = [  1.15  -0.70 ]        (42)
      [ -0.70   0.90 ]

A simple physical interpretation may be given to the node equilibrium Eqs. 41 that parallels the interpretation given above for the loop equations. Thus the first term in the first of Eqs. 41 represents the current that is caused to diverge from node 1 by the potential e1 acting alone (that is, while e2 = 0); the second term in this equation represents the current that is caused to diverge from node 1 by the potential e2 acting alone (that is, while e1 = 0). Since a positive e2 acting alone causes current to converge upon node 1 (instead of causing a divergence of current), the term with e2 is numerically negative. The amount of current that e1 alone causes to diverge from node 1 evidently equals the value of e1 times the total conductance between node 1 and datum when e2 = 0 (that is, when node 2 coincides with the datum).
This total conductance clearly is the sum of the conductances of the various branches divergent from node 1; with reference to Fig. 5 (in which the given parameter values are resistances) this total conductance is 1/5 + 1/2 + 1/5 + 1/4 = 1.15, thus accounting for the coefficient of the term with e1 in the first of Eqs. 41.

The current that e2 alone causes to diverge from node 1 can traverse only the branches connecting node 1 directly with node 2 (these are the 2-ohm and 5-ohm branches in Fig. 5), and the value of this current is evidently given in magnitude by the product of e2 and the net conductance of these combined branches. In the present example the pertinent conductance is 1/2 + 1/5 = 0.70 mho, thus accounting for the value of the coefficient in the second term of the first of Eqs. 41 (the reason for its negative sign has already been explained). A similar interpretation is readily given to the second of Eqs. 41. Thus these equations or their conductance matrix 42 could be written down directly by inspection of Fig. 5, especially if the branch-resistance values are alternatively given as branch-conductance values expressed in mhos.

The elements on the principal diagonal of [G] are, respectively, the total conductance values (sums of branch conductances) divergent from nodes 1, 2, and so forth (in a more general case there will be more than two nodes). The nondiagonal elements of [G] all have negative algebraic signs, for the argument given above in the detailed explanation of Eqs. 41 clearly applies unaltered to all cases in which the node-pair voltage variables are chosen as a node-to-datum set. In magnitude, the nondiagonal elements in [G] equal the net conductance values (sums of branch conductances) for those branches directly connecting the pertinent node pairs. More specifically, the element g_sk in [G] equals the negative sum of the conductances of the various branches directly connecting nodes s and k.
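The node-to-datum inspection rule can be sketched in the same way for Fig. 5. The branch conductances are the reciprocals of the resistance values of Eqs. 35, the branch-to-node memberships are read from Eqs. 39, and the rule is: diagonal entries sum the conductances divergent from a node, off-diagonal entries are the negative sums for the directly connecting branches.

```python
# Writing the node-conductance matrix 42 by inspection (Fig. 5, bottom node
# as datum).
g = {1: 0.2, 2: 0.1, 3: 0.25, 4: 0.5, 5: 0.1, 6: 0.2}   # mhos
at_node = {1: {1, 3, 4, 6},    # branches meeting node 1 (from Eqs. 39)
           2: {1, 2, 4, 5}}    # branches meeting node 2

def g_sk(s, k):
    if s == k:                 # total conductance divergent from node s
        return sum(g[b] for b in at_node[s])
    # branches common to both nodes run directly between them
    return -sum(g[b] for b in at_node[s] & at_node[k])

G = [[round(g_sk(s, k), 2) for k in (1, 2)] for s in (1, 2)]
print(G)   # [[1.15, -0.7], [-0.7, 0.9]]
```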
If these nodes are not directly connected by any branches, then the pertinent g_sk value is zero. Note that the consistent negativeness of the nondiagonal terms follows directly from the tacit assumption that any node potential is regarded as positive when it is higher than that of the datum node. This situation parallels the consistent negativeness of the nondiagonal terms in the [R] matrix obtained on the loop basis for a mappable network in which all the mesh reference arrows are chosen consistently clockwise (or consistently counterclockwise), whence in any common branch they are counterfluent.

7 Sources

When currents and their accompanying voltage drops exist in a resistive network, energy is being dissipated. Since at every instant the rate of energy supply must equal its rate of dissipation, there can be no voltages or currents in a purely resistive or in any "lossy" network unless there are present one or more sources of energy. Until now the role played by sources has not been introduced into the network picture, and indeed their presence has nothing whatever to do with the topics discussed so far. Sources were purposely left out of consideration for this reason, since their inclusion would merely have detracted from the effectiveness of the discussion. Now, however, it is time to recognize the significance of sources, their characteristics, and how we are to determine their effect upon the equilibrium equations. Their most important effect, as already stated, is that without them there would be no response. This fact may clearly be seen, for example, from the loop equilibrium Eqs. 36 for the network of Fig. 5.
Since these four equations involving the four unknowns i1, i2, i3, i4 are independent, and all of the right-hand members are zero, we know according to the rules of algebra that none but the trivial solution i1 = i2 = i3 = i4 = 0 exists. That is to say, in the absence of excitation (which, as we shall see, causes the right-hand members of the equations to be nonzero) the network remains "dead as a doornail."

It was pointed out in the introduction that an electrical network as we think of it in connection with our present discussions is almost always an artificial representation of some physical system in terms of idealized quantities which we call the circuit elements or parameters (the resistance, inductance, and capacitance elements). We justify such an artificial representation through noting (a) that it can be so chosen as to simulate functionally (and to any desired degree of accuracy) the actual system at any selected points of interest, and (b) that such an idealization is essential in reducing the analysis procedure to a relatively simple and easily understandable form.

Regarding the sources through which the network becomes energized or through which the physical system derives its motive power, a consistent degree of idealization is necessary. That is to say, the sources, like the circuit elements, are represented in an idealized fashion. We shall see that actual energy sources may thus be simulated through such idealized sources in combination with idealized circuit elements. For the moment we focus our attention upon the idealized sources themselves.
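That only the trivial solution exists can be confirmed numerically: the coefficient matrix of Eqs. 36 has a nonzero determinant, so the four homogeneous equations are independent and force i1 = i2 = i3 = i4 = 0. The determinant is evaluated exactly below with rational arithmetic.

```python
# The homogeneous system of Eqs. 36 admits only the zero solution because
# its coefficient matrix is nonsingular.
from fractions import Fraction

R = [[ 7,   0,  0,  -2],
     [ 0,  20,  0, -10],
     [ 0,   0,  9,  -5],
     [-2, -10, -5,  17]]

def det(m):
    """Determinant by exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in m]
    n, d = len(m), Fraction(1)
    for c in range(n):
        p = next((i for i in range(c, n) if m[i][c] != 0), None)
        if p is None:
            return Fraction(0)          # a zero column: singular
        if p != c:
            m[c], m[p] = m[p], m[c]     # a row swap flips the sign
            d = -d
        d *= m[c][c]
        for i in range(c + 1, n):
            f = m[i][c] / m[c][c]
            for k in range(c, n):
                m[i][k] -= f * m[c][k]
    return d

print(det(R))   # 10900 (nonzero, so the equations are independent)
```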
Although the physical function of a source is to supply energy to the system, we shall for the time being find it more expedient to characterize a source as an element capable of providing a fixed amount of voltage or a fixed amount of current at a certain point. Actually it provides both voltage and current, and hence an amount of power equal to their product, but it is analytically essential and practically more realistic to suppose that either the voltage or the current of the source is known or fixed. We could, of course, postulate a source for which both the voltage and the current are fixed, but such sources would not prove useful in the simulation of physical systems, and we must at all times be mindful of the utility of our methods of analysis.

When we say that the voltage or the current of a source is fixed, we do not necessarily mean that it is a constant, but rather that its value or sequence of values as a continuous function of the time is independent of all other voltages and currents in the entire network. Most important in this connection is the nondependence upon the source's own voltage, if it is a current source, or upon its own current if it is a voltage source. Thus a so-called idealized voltage source provides at a given terminal pair a voltage function that is independent of the current at that terminal pair; and an idealized current source provides a current function that is independent of the voltage at the pertinent terminal pair.

By way of contrast, it is useful to compare the idealized source as just defined with an ordinary passive resistance or other circuit element. In the latter, the voltage and current at the terminals are related in a definite way which we call the "volt-ampere relationship" for that element.
For example, in a resistance the voltage is proportional to the current, the constant of proportionality being what we call the value of the element in ohms. At the terminals of an ideal voltage source, on the other hand, the voltage is whatever we assume it to be, and it cannot depart one jot from this specification, regardless of the current it is called upon to deliver on account of the conditions imposed by its environment. An extreme situation arises if the environment is a short circuit, for then the source is called upon to deliver an infinite current; yet it does so unflinchingly and without its terminal voltage departing in the slightest from its assigned value. It is, of course, not sensible to place an ideal voltage source in such a situation, for it then is called upon to furnish infinite power. The ideal voltage source is idle when its environment is an open circuit, for then the associated current becomes zero.

Similarly, at the terminals of an ideal current source the current is whatever we assume it to be, and it cannot depart from this specification, regardless of the voltage it is called upon to produce on account of the conditions imposed by its environment. An extreme situation arises in this case if the environment turns out to be an open circuit, for then the source must produce an infinite voltage at its terminals, since the terminal current, by definition, cannot depart from its specified value. Like the short-circuited voltage source, it is called upon to deliver infinite power, and hence it is not realistic to place an ideal current source in an open-circuit environment. This type of source is idle when short-circuited, since the associated voltage is then zero.
In the discussion of Kirchhoff's voltage law we found it useful to think of voltage as analogous to altitude in a mountainous terrain. The potentials of various points in the network with respect to a common reference or datum are thought of as being analogous to the altitudes of various points in a mountainous terrain with respect to sea level as a common reference. Instead of an actual mountainous terrain, suppose we visualize a miniature replica constructed by hanging up a large rubber sheet and suspending from it various weights attached at random places. Since altitude is the analogue of voltage, the problem of finding the altitude of various locations on the sheet (above, say, the floor as a common reference) is analogous to determining the potentials of various nodes in an electrical network with reference to a datum node.

Suppose first that we consider the electrical network to have no sources of excitation; all node potentials are zero. The analogous situation involving the rubber sheet would be to have it lying flat on the floor. To apply a voltage excitation to the network may be regarded as causing certain of its node potentials to be given fixed values. Analogously, certain points in the rubber sheet are raised above the floor to fixed positions and clamped there. As a result, the various nodes in the electrical network whose potentials are not arbitrarily fixed assume potentials that are consistent with the applied excitation and the characteristics of the network.
Analogously, the freely movable portions of the rubber sheet assume positions above the floor level that are consistent with the way in which the sheet is supported at the points where it is clamped (analogous to excitation of the electrical network) and the structural characteristics of the sheet with its system of attached weights.

It is interesting to note from the description of these two analogous situations that electrical excitation by means of voltage sources may be thought of as arbitrarily fixing or clamping the voltage at a certain point or points. A voltage source is thus regarded as an applied constraint, like nailing the rubber sheet to the wall at some point. Ideal current sources when used to excite an electric network may likewise be regarded as applied constraints. In any passive network the currents and voltages in its various parts are in general free to assume an array of values subject only to certain interrelationships dictated by the structure of that network; but, without any excitation, all voltages and currents remain zero. If we now give to some of these voltages and currents arbitrary nonzero values, we take away their freedom, for they can no longer assume any values except the specified ones; but the remaining voltages and currents, whose values are not pegged, now move into positions that are compatible with the network characteristics interrelating all voltages and currents, and with the fixed values of those chosen to play the role of excitation quantities. As more of the voltages and currents are clamped or fixed through the application of sources, fewer remain free to adjust themselves to compatible values. Finally, if all voltages and currents were constrained by applied sources, there would be no network problem left, for everything would be known beforehand.
In the commonest situation, only a single voltage or current variable is constrained through an applied source; determination of the compatible values of all the others constitutes the network problem. Various ways in which sources are schematically represented in circuit diagrams are shown in Fig. 6. Parts (a), (b), and (c) are representations of voltage sources, whereas part (d) shows the representation for a current source.
Fig. 6. Schematic representations for sources: (a) a constant voltage (battery); (b) a constant voltage (d-c generator); (c) arbitrary voltage function; (d) arbitrary current function.
Specifically, (a) and (b) are common ways of indicating constant-voltage sources, also called "direct current" or "d-c" voltage sources. The schematic (a) simulates a battery, for example, a dry cell in which the zinc electrode (thin line) is negative and the carbon electrode (thick line) is the positive terminal. The d-c source shown in (b) is drawn to resemble the commutator and brushes of a generator. The symbolic representation in (c) is intended to be more general in that the wavy line inside the circle indicates that e_s(t) may be any function of time (not necessarily a sinusoid, although there is an established practice in using this symbol as the representation for a sinusoidal generator). It should be particularly noted that e_s(t) in the symbolic representation of part (c) may be any time function and, in particular, may also be used to denote a constant-voltage source (d-c source). Part (d) of Fig. 6 shows the schematic representation for a current source in which i_s(t) is any time function and hence may be used to denote a constant or d-c source as well as any other. In all of these source representations it will be noted that a reference arrow is included.
This arrow does not imply that the source voltage or current is assumed to act in the indicated direction but only that, if it should at any moment have this direction, it will at that moment be regarded as a positive quantity. The reference arrow establishes a means for telling when the quantity e_s(t) or i_s(t) is positive and when it is negative. A source voltage is said to "act in the direction of the reference arrow" when it is a voltage rise in this direction. The + and − signs of parts (a) and (b) of Fig. 6 further clarify this statement. In most of the following work the representations shown in parts (c) and (d) will be used. It should not be overlooked that the representations in Fig. 6 are for ideal sources. Thus the voltage between the terminals in the sketch of part (c) is always e_s(t) no matter what is placed across them. Likewise the current issuing from the terminals in the sketch of part (d) is always i_s(t) no matter what the external circuit may be. An actual physical voltage source may, to a first approximation, be represented by placing a resistance in series with the ideal one so that the terminal voltage decreases as the source current increases. A physical current source may similarly be represented to a first approximation through the ideal one of part (d) with a resistance in parallel with the terminals, thus taking account of the fact that the net current issuing from the terminals of the combination depends upon the terminal voltage, and decreases as this voltage increases. These matters will further be elaborated upon in the applications to come later on. It is common among students that they have more difficulty visualizing or grasping the significance of current sources than they do in the understanding of voltage sources.
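As a modern aside, these first-approximation source models may be sketched in a few lines of Python. This is only an illustration of the paragraph above, not something from the text; the function names and the internal-resistance symbol R_int are ours.

```python
# First-approximation models of practical sources (a sketch; names are ours).
# A practical voltage source: ideal e_s with internal series resistance, so
# the terminal voltage decreases as the load current increases.
def practical_voltage_source(e_s, r_int, i_load):
    """Terminal voltage of an ideal source e_s behind a series resistance r_int."""
    return e_s - r_int * i_load

# A practical current source: ideal i_s with internal parallel resistance, so
# the net terminal current decreases as the terminal voltage increases.
def practical_current_source(i_s, r_int, v_terminal):
    """Net terminal current of an ideal source i_s shunted by a resistance r_int."""
    return i_s - v_terminal / r_int

# Terminal voltage droops as the load current rises:
v_open = practical_voltage_source(10.0, 0.5, 0.0)    # no load: full 10 V
v_loaded = practical_voltage_source(10.0, 0.5, 4.0)  # 4 A drawn: voltage sags
```

Setting r_int to zero (for the voltage source) or letting it grow without bound (for the current source) recovers the ideal sources of Fig. 6.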
A contributing reason for this difficulty is that voltage sources are more commonly experienced. Thus our power systems that supply electricity to our homes and factories are essentially voltage sources in that they have the property of being idle when open-circuited. Sources that are basically of the current variety are far less common. One such source is the photoelectric cell, which emits charge proportional to the intensity of the impinging light and hence is definitely a current source; it clearly is idle when short-circuited because it then delivers no energy. Another device that is commonly regarded as a current source is the pentode vacuum tube. Its plate current is very nearly proportional to its grid excitation under normal operating conditions, and hence, for purposes of circuit analysis, it is appropriate to consider it as being essentially a current source. In any case it can with very good accuracy be regarded as an ideal current source in parallel with a resistance. Whether actual sources are more correctly to be regarded as voltage sources or as current sources is, however, a rather pointless argument since we shall soon see that either representation (in combination with an appropriate arrangement of passive circuit elements) is always possible no matter what the actual source really is. Again we must be reminded that circuit theory makes no claim to be dealing with actual things. In fact it very definitely deals only with fictitious things, but in such a way that actual things can thereby be represented. Like all other methods of analysis, circuit theory is merely the means to an end; it lays no claim to being the real thing.
Now as to determining how source quantities enter into the equilibrium equations for a given network, we first make the rather general observation that the insertion of sources into a given passive network is done in either of two ways. One of these is to insert the source into the gap formed by cutting a branch (as with a pliers); the other is to connect the source terminals to a selected node pair (as with a soldering iron). These two methods will be distinguished as the "pliers method" and the "soldering-iron method" respectively.
Fig. 7. Network graph involving voltage source (constraint) in parallel with a branch (a), and the equivalent revised graph (b) showing disposition of voltage source.
We shall now show that one may consider the pliers method restricted to the insertion of voltage sources and the soldering-iron method to the insertion of current sources. That is to say, the connection of a voltage source across a node pair or the insertion of a current source in series with a branch implies a revision of the network geometry, with the end result that voltage sources again appear only in series with branches and current sources appear only in parallel with branches (or across node pairs). For example, in part (a) of Fig. 7 is shown a graph in which a voltage source e_s appears in parallel with branch 6 of some network, and in part (b) of this figure is shown the resultant change in the network geometry and source arrangement which this situation reduces to. Thus, in considering the given arrangement in part (a), one should first observe that branch 6 is rendered trivial by having e_s placed in parallel with it since the value of v6 is thus forced to be equal to e_s and hence (along with j6) is no longer an unknown.
That is to say, the determination of the current in branch 6 is rendered trivially simple and independent of what happens in the rest of the network. Therefore we can remove branch 6 from our thoughts and from the rest of the graph so that e_s alone appears as a connecting link between nodes a and b. Next we observe that the potentials of nodes c, d, f, relative to that of node a are precisely the same in the arrangement of part (b) in Fig. 7 as they are in part (a). For example, the potential of node c with respect to that of node a is (e_s − v5), as is evident by inspection of either part (a) or part (b) of this figure. Similarly the potential of node d with respect to that of node a is seen to be (e_s + v7) in the arrangement of part (a) or of part (b). It thus becomes clear that the branch voltages and currents in the graph of part (b) must be the same as in the graph of part (a), except for the omission of the trivial branch 6. We may conclude that placing a voltage source across a node pair has the same effect upon the network geometry as placing a short circuit across that node pair. Comparing graphs (a) and (b) in Fig. 7, we see, for example, that the voltage source e_s in graph (a) effectively unites nodes a and b in that graph, thus eliminating branch 6, and yielding the revised graph (b). The effect of the voltage source so far as this revised graph is concerned is taken into account through placing identical voltage sources in series with all branches confluent in the original node b. We can alternately place the identical voltage sources in series with the branches originally confluent in node a: that is, in branches 4 and 5 instead of 7, 8, and 9. It is useful in this connection to regard a voltage source as though it were a sort of generalized short circuit, which indeed it is.
Thus, by a short circuit we imply a link or branch for which the potential difference between its terminals is zero independent of the branch current, while for a voltage source the potential difference is e_s independent of the branch current. For e_s = 0, the short circuit is identical with the voltage source. Or we may say that a dead voltage source is a short circuit. The preceding discussion shows that the effect of a voltage source upon the network geometry is the same as that of an applied short circuit. Analogously, part (a) of Fig. 8 depicts a situation in which a current source i_s appears in series with branch 4 of some network, and part (b) shows the resultant change in geometry and source arrangement which is thereby implied. With reference to the given situation in part (a) it is at once evident that branch 4 becomes trivial since its current is identical with the source current and hence is known. It is also evident that the effect of the current source i_s upon the rest of the network is the same as though there had been no branch linking nodes a and b through which the source is applied. We can, therefore, regard the current source to be bridged across the node pair a-b in a modified graph in which branch 4 is absent. A further step that results in having all current sources in parallel with branches may be carried out as shown in part (b) of Fig. 8.
Fig. 8. Network graph involving current source (constraint) in series with a branch (a), and the equivalent revised graph (b) showing disposition of current source.
The equivalence of the four identical current sources i_s bridged across
branches 11, 9, 8, 7, with a single source i_s bridged across the node pair a-b is evident by inspection since the same amount of source current still leaves node a and enters node b, while no net source current enters or leaves the nodes f, g, and h. We may conclude that inserting a current source in series with a branch has the same effect upon the network geometry as does the open-circuiting or the removal of that branch. In this altered network the source appears bridged across the node pair originally linked by the removed branch, or in the form of several identical sources bridged across a confluent set of branches joining this node pair. According to these results we may regard a current source as a generalized open circuit. By an open circuit we understand a branch for which the current is zero independent of the branch voltage; and by a current source we understand a branch for which the current is i_s independent of the branch voltage. For i_s = 0, the current source is identical with an open circuit; the latter may be regarded as a dead current source. In summary we may say that, so long as voltage sources appear only in series with branches, and current sources are associated only in parallel with branches or across node pairs, their presence does not disturb the network geometry in the sense that all matters pertaining to that geometry remain unaltered, such as the numbers of independent voltages and currents uniquely characterizing the state of the network, or their algebraic relations to the branch currents and voltages. In a sense, the open-circuit character of a current source and the short-circuit character of a voltage source become evident here as they do in the reasoning of the immediately preceding paragraphs. On the other hand, we see that the network geometry is affected whenever a current source is placed in series with a branch or a voltage source in parallel with one. In both cases the branch in question becomes trivial and can be removed, leaving in its place an open circuit if the inserted source is a current, and a short circuit if the inserted source is a voltage. After this revision in the geometry is carried out, the source appears either as a current in parallel with a branch (or with several branches) or as a voltage in series with a branch (or with several branches). These two source arrangements alone, therefore, are all that need to be considered in the following discussion. Thus we may regard any branch in a network to have the structure shown in Fig. 9.
Fig. 9. Passive branch with associated current and voltage source.
Here the link a-b represents the passive branch without its associated voltage and current sources; that is to say, when the sources are zero (as they usually are for most of the branches in a network), then the branch reduces to this link a-b alone. However, we shall take the attitude at this point that any or all of the branches in a network may turn out to have the associated sources shown in Fig. 9. The network is thus regarded as a geometrical configuration of active instead of passive branches. This turn of events changes nothing with regard to all that has been said previously except the relations between branch voltages and branch currents [designated as the relations (b) in the summary of Art. 3 regarding the formulation of equilibrium equations]. Since vk and jk denote the net voltage drop and the net current in branch k, the voltage drop and current in the passive link a-b (noting the reference arrows in Fig. 9) are (vk + e_sk) and (jk + i_sk) respectively.
These are the quantities that are related by the passive circuit element which the branch represents. If the functional relationship between voltage drop and current in the passive link is formally denoted by v = z(j) or j = y(v), we have, for the general active branch of Fig. 9,
(vk + e_sk) = z(jk + i_sk) or (jk + i_sk) = y(vk + e_sk) (43)
In a resistance branch, the notation z(j) reduces simply to a multiplication of the current j by the branch resistance, and y(v) denotes a multiplication of the voltage drop v by the branch conductance. In capacitive or inductive branches the symbols z(j) and y(v) also involve time differentiation or integration, as will be discussed in detail later on when circuits involving these elements are considered. For the moment it will suffice to visualize the significance of Eqs. 43 with regard to resistance elements alone. It may be mentioned, with reference to the arrangement in Fig. 9, that the same results are obtained if the current source i_sk is assumed to be in parallel with the passive link a-b alone rather than with the series combination of this link and the voltage source e_sk. If i_sk = 0, the link is activated by a series voltage source alone; if e_sk = 0, one has the representation of a passive branch activated by a current source alone. For e_sk = i_sk = 0, the arrangement reduces to the usual passive branch. Thus the volt-ampere relations 43 are sufficiently general to take care of any functional dependence between net branch voltages and currents that can arise in the present discussions. The method of including the effect of sources in the derivation of equilibrium equations is now easily stated. Namely, one proceeds precisely as described in the previous articles for the unactivated network except that the relations between branch voltages and branch currents are considered in the form of Eqs. 43, so as to take account of the presence of any voltage or current sources.
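For a purely resistive branch, Eqs. 43 can be sketched directly in Python, with z(.) a multiplication by the branch resistance R and y(.) a multiplication by the conductance 1/R. The function names and default arguments below are our own illustrative choices, not the text's:

```python
# A minimal sketch of Eqs. 43 for a resistive branch (names are ours).
def branch_voltage(j_k, R, e_sk=0.0, i_sk=0.0):
    """Net branch voltage drop v_k from  v_k + e_sk = R * (j_k + i_sk)."""
    return R * (j_k + i_sk) - e_sk

def branch_current(v_k, R, e_sk=0.0, i_sk=0.0):
    """Net branch current j_k from  j_k + i_sk = (v_k + e_sk) / R."""
    return (v_k + e_sk) / R - i_sk
```

With e_sk = i_sk = 0 both functions reduce to Ohm's law, and for any source values the two forms are inverses of one another, as the text's remark about Eqs. 55 and 62 being mutual inverses illustrates.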
This statement applies alike to the determination of equilibrium equations on the loop or the node basis. Thus, regardless of the nature and distribution of sources throughout the network, the procedure remains straightforward and is essentially the same as for the unexcited network.
8 Summary of the Procedures for Deriving Equilibrium Equations
At this point it is effective to bring together in compact symbolic form the steps involved in setting up equilibrium equations. Thus we have on the loop basis:
(a) The Kirchhoff voltage-law equations in terms of branch voltages:
Σ ± vk = 0 (44)
(b) The relations between branch voltages and branch currents (Eqs. 43):
vk = −e_sk + z(jk + i_sk) (45)
(c) The branch currents in terms of the loop currents:
jk = Σ ± ir (46)
The rows of a tie-set schedule (like 13, for example) place in evidence the Kirchhoff Eqs. 44, while the columns of this schedule yield the branch currents in terms of the loop currents, Eqs. 46. The expressions for the vk's in terms of the jk's, Eqs. 45, are obtained from a knowledge of the circuit parameters and the associated voltage and current sources, as illustrated in Fig. 9. The desired equilibrium equations are the Kirchhoff Eqs. 44 expressed in terms of the loop currents. One accomplishes this end through substituting the jk's given by Eqs. 46 into Eqs. 45, and the resulting expressions for vk into Eqs. 44. Noting that the linearity of the network permits one to write z(jk + i_sk) = z(jk) + z(i_sk), the result of this substitution among Eqs. 44, 45, 46 leads to
Σ ± z(Σ ± ir) = Σ ± [e_sk − z(i_sk)] = e_sl (47)
Interpretation of this formidable looking result is aided by pointing out that z(Σ ± ir) represents the passive voltage drop in any branch k due to the superposition of loop currents ir in that branch, and that the left-hand side of Eq.
47 is the algebraic summation of such passive branch voltage drops around a typical closed loop l. The right-hand side, which is abbreviated by the symbol e_sl, is the net apparent source voltage acting in the same loop. It is given by an algebraic summation of the voltage sources present in the branches comprising this closed contour (tie set) and the additional voltages induced in these branches by current sources that may simultaneously be associated with them. The latter voltages, which are represented by the term −z(i_sk), must depend upon the circuit parameter relations in the same way as do the passive voltage drops caused by the loop currents, except that their algebraic signs are reversed because they are rises. Thus the resulting equilibrium Eqs. 47 state the logical fact that the net passive voltage drop on any closed contour must equal the net active voltage rise on that contour. If we imagine that the loops are determined through selecting a tree and identifying the link currents with loop currents, then we can interpret the source voltages e_sl as equivalent link voltages in the sense that, if actual voltage sources having these values are placed in the links and all original current and voltage sources are removed, the resulting loop currents remain the same. Or we can say that, if the negatives of the voltages e_sl are placed in the links, then the effect of all other sources becomes neutralized, and the resulting network response is zero; that is, the loop currents or link currents are zero, the same as they would be if all links were opened. Hence we have a physical interpretation of the e_sl in that they may be regarded as the negatives of the voltages appearing across gaps formed by opening all the links. In many situations to which the simplified procedure discussed in Art.
6 is relevant, this physical interpretation of the net excitation quantities e_sl suffices for their determination by inspection of the given network. An entirely analogous procedure and corresponding process of physical interpretation applies to the derivation of equilibrium equations on the node basis. Here one has
(a) The Kirchhoff current-law equations in terms of branch currents:
Σ ± jk = 0 (48)
(b) The relations between the branch currents and branch voltages (Eqs. 43):
jk = −i_sk + y(vk + e_sk) (49)
(c) The branch voltages in terms of the node-pair voltages:
vk = Σ ± er (50)
The rows of a cut-set schedule (like 20, for example) place in evidence the Kirchhoff Eqs. 48, while the columns of this schedule yield the branch voltages in terms of the node-pair voltages, Eqs. 50. The expressions for the jk's in terms of the vk's, Eqs. 49, are obtained from a knowledge of the circuit parameters and the associated voltage and current sources, as illustrated in Fig. 9. The desired equilibrium equations are the Kirchhoff Eqs. 48 expressed in terms of the node-pair voltages. One obtains this end by substituting the vk's given by Eqs. 50 into Eqs. 49, and the resulting expressions for jk into Eqs. 48. Noting that the linearity of the network permits one to write y(vk + e_sk) = y(vk) + y(e_sk), the result of this substitution among Eqs. 48, 49, 50 leads to
Σ ± y(Σ ± er) = Σ ± [i_sk − y(e_sk)] = i_sn (51)
Interpretation of this formidable looking result is aided through recognizing that y(Σ ± er) represents the passive current in any branch k due to the algebraic sum of node-pair voltages er acting upon it, and hence the left-hand side of Eq.
51 is the summation of such branch currents in all branches of a typical cut set; for example, the set of branches divergent from a given node n if the node-pair voltages are chosen as a node-to-datum set. The right-hand side of Eq. 51, which is abbreviated by the symbol i_sn, is the net apparent source current for this cut set; for example, it is the net apparent source current entering node n in a node-to-datum situation. The net source current is given by an algebraic summation of the current sources associated with the branches comprising the pertinent cut set and the additional currents induced in these branches by voltage sources that may simultaneously be acting in them. The latter currents, which are represented by the term −y(e_sk), must depend upon the circuit-parameter relations in the same way as do the passive currents caused by the node-pair voltages except that their algebraic signs are reversed because they represent a flow of charge into the cut set rather than out of it. Thus the resulting equilibrium Eqs. 51 state the logical fact that the net current in the several branches of a cut set must equal the total source current feeding this cut set. If we imagine that the cut sets have been determined through selecting a tree and identifying the tree-branch voltages with node-pair voltages, then we can interpret the source currents i_sn as equivalent sources bridged across the tree branches in the sense that, if actual current sources having these values are placed in parallel with the tree branches and all original current and voltage sources are removed, the resulting node-pair voltages remain the same. Or we can say that, if the negatives of the currents i_sn are placed across the tree branches, then the effect of all other sources becomes neutralized, and the resulting network response is zero; that is, the node-pair voltages or tree-branch voltages are zero, the same as they would be if all tree branches were short-circuited.
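The scalar sums of Eqs. 47 and 51 can be restated compactly in matrix form for a linear resistive network. The sketch below is our own restatement, not the text's notation: B is the tie-set matrix, Q the cut-set matrix, R and G diagonal branch resistance and conductance matrices, and e_s and i_s the branch source vectors (signed according to the reference arrows of Fig. 9). As a check, the data of the first example of Art. 9 (Fig. 10, tie-set rows from Eqs. 53 and columns from Eqs. 54) reproduce the equilibrium Eqs. 56.

```python
import numpy as np

def loop_equations(B, R, e_s, i_s):
    """Eq. 47 in matrix form:  (B R B^T) i_loop = B (e_s - R i_s)."""
    return B @ R @ B.T, B @ (e_s - R @ i_s)

def node_equations(Q, G, e_s, i_s):
    """Eq. 51 in matrix form:  (Q G Q^T) e_node = Q (i_s - G e_s)."""
    return Q @ G @ Q.T, Q @ (i_s - G @ e_s)

# Fig. 10 data: tie-set matrix (rows = loops, columns = branches), branch
# resistances in ohms, e_s = 5 V in branch 1, and i_s = -10 A in branch 5
# (the sign follows the reference arrows, since v5 = 2 j5 - 20 in Eqs. 55).
B = np.array([[1, 0, 0, -1, 0, 0],
              [0, 1, 0, 0, -1, 0],
              [0, 0, 1, 0, 0, -1],
              [0, 0, 0, 1, 1, 1]], dtype=float)
R = np.diag([1.0, 1.0, 1.0, 2.0, 2.0, 2.0])
e_s = np.array([5.0, 0, 0, 0, 0, 0])
i_s = np.array([0, 0, 0, 0, -10.0, 0])

A, rhs = loop_equations(B, R, e_s, i_s)  # reproduces Eqs. 56
i_loop = np.linalg.solve(A, rhs)         # the loop currents of Eq. 57
```

Note that the coefficient matrix B R B^T is symmetric, a property that holds for any reciprocal (bilateral) resistive network.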
Hence we have a physical interpretation of the i_sn in that they may be regarded as the negatives of the currents appearing in short circuits placed across all the tree branches. In a node-to-datum choice of node pairs, the i_sn may be regarded as the negatives of the currents appearing in a set of short circuits placed across these node pairs, and a node-to-datum set of current sources having these values can be used in place of the original voltage and current sources in computing the desired network response. In many situations to which the simplified procedure discussed in Art. 6 is relevant, this physical interpretation of the net excitation quantities i_sn suffices for their determination by inspection of the given network.
9 Examples
The complete procedure for setting up equilibrium equations will now be illustrated for several specific examples. Consider first the resistance network of Fig. 10. The element values in part (a) are in ohms, and the source values are i_s = 10 amperes, e_s = 5 volts (both constant).
Fig. 10. A resistance network (element values in ohms) and its graph showing the choice of meshes as loops.
In part (b) of the same figure is shown the graph with its branch numbering and a choice of meshes to define loop currents. The tie-set schedule corresponding to this choice is given in 52. The rows give us the voltage-law equations:
v1 − v4 = 0
v2 − v5 = 0
v3 − v6 = 0 (53)
v4 + v5 + v6 = 0
and the columns yield the branch currents in terms of the loop currents:
j1 = i1, j4 = −i1 + i4
j2 = i2, j5 = −i2 + i4 (54)
j3 = i3, j6 = −i3 + i4
These correspond respectively to Eqs. 44 and 46 in the above summary. With regard to Eqs.
45 relating branch voltages to branch currents, we observe that, if we associate the current source with branch 5 (we could alternately associate it with branch 2), then all branches except 1 and 5 are passive and no special comment is needed for them. The net voltage drop in branch 1 is v1 = −e_s + j1, and the net current in the arrow direction in branch 5 is j5 = i_s + (v5/2), the term (v5/2) being the current in the 2-ohm resistance which is the passive part of this branch. Noting the source values given above, the relations expressing net branch-voltage drops in terms of net branch currents read
v1 = j1 − 5, v4 = 2j4
v2 = j2, v5 = 2j5 − 20 (55)
v3 = j3, v6 = 2j6
The relations involving the active branches are seen to contain terms that are independent of current. The desired equilibrium equations are found through substitution of Eqs. 54 into 55, and the resulting expressions for the v's into the voltage-law equations 53. After proper arrangement this gives
3i1 + 0i2 + 0i3 − 2i4 = 5
0i1 + 3i2 + 0i3 − 2i4 = −20 (56)
0i1 + 0i2 + 3i3 − 2i4 = 0
−2i1 − 2i2 − 2i3 + 6i4 = 20
These are readily solved for the loop currents. One finds
i1 = 5, i2 = −10/3, i3 = 10/3, i4 = 5 (57)
whence substitution into Eqs. 54 yields all the branch currents
j1 = 5, j2 = −10/3, j3 = 10/3, j4 = 0, j5 = 25/3, j6 = 5/3 (58)
The value of j5 is the net current in branch 5. That in the passive part of this branch is smaller than j5 by the value of the source current, and hence is (25/3) − 10 = −5/3. Now let us solve the network given in Fig. 10 by the node method, choosing as node-pair voltages the potentials of nodes a and b respectively, with the bottom node as a reference. The appropriate cut-set schedule is 59. The rows give us the current-law equations,
−j1 + j2 − j4 + j5 = 0
−j2 + j3 − j5 + j6 = 0 (60)
and the columns yield the branch voltages in terms of the node-pair voltages, thus:
v1 = −e1, v2 = e1 − e2, v3 = e2
v4 = −e1, v5 = e1 − e2, v6 = e2 (61)
These correspond respectively to Eqs. 48 and 50 in the above summary. Regarding Eqs. 49 relating the branch currents to the branch voltages, we note as before that j1 = v1 + e_s and j5 = i_s + 0.5v5, so that the complete set of these equations reads
j1 = v1 + 5, j4 = 0.5v4
j2 = v2, j5 = 0.5v5 + 10 (62)
j3 = v3, j6 = 0.5v6
which are simply the inverse of Eqs. 55. The desired equilibrium equations are found through substitution of Eqs. 61 into 62, and the resulting expressions for the j's into the current-law equations 60. After proper arrangement one finds
3e1 − 1.5e2 = −5
−1.5e1 + 3e2 = 10 (63)
The solution is readily found to be
e1 = 0, e2 = 10/3 (64)
and the branch voltages are then computed from Eqs. 61 to be
v1 = 0, v2 = −10/3, v3 = 10/3, v4 = 0, v5 = −10/3, v6 = 10/3 (65)
With regard to branch 1 it must be remembered that the value of v1 is for the total branch, including the voltage source. The drop in the passive part, therefore, is 5 volts. As a second example we shall consider the network graph shown in Fig. 11(a). The sources in series with the branches are voltages having the values indicated.
Fig. 11. Graph of a resistance network (a) with branch conductance values given by Eqs. 69. Choice of node-pair voltage variables is indicated in (b).
Since for this graph b = 10, n = 3, and l = 7, it will be advantageous to choose the node method. A geometrical specification of node-pair voltages is shown in part (b) of the same figure. In cut-set schedule 66 pertaining to this choice of node pairs a last column
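The node-method solution of the Fig. 10 example can be checked numerically with a few lines of Python. This is only a verification sketch (using zero-based array indices rather than the book's subscripts); the data are Eqs. 60 through 63:

```python
import numpy as np

# Equilibrium equations on the node basis, Eqs. 63:
A = np.array([[3.0, -1.5],
              [-1.5, 3.0]])
rhs = np.array([-5.0, 10.0])
e1, e2 = np.linalg.solve(A, rhs)       # Eq. 64: e1 = 0, e2 = 10/3

# Branch voltages from the cut-set columns, Eqs. 61:
v = [-e1, e1 - e2, e2, -e1, e1 - e2, e2]

# Branch currents from Eqs. 62:
j = [v[0] + 5, v[1], v[2], 0.5 * v[3], 0.5 * v[4] + 10, 0.5 * v[5]]

# The currents must satisfy the current-law Eqs. 60 (both residuals vanish):
kcl = (-j[0] + j[1] - j[3] + j[4], -j[1] + j[2] - j[4] + j[5])
```

The computed branch currents also agree with the loop-method result, Eqs. 58, which is the expected consistency between the two procedures.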
indicating the corresponding "picked-up" nodes is added to facilitate understanding its construction. (The picked-up nodes for the three cut sets are: a, c; a, c, d; and b, c, d.)
104 THE EQUILIBRIUM EQUATIONS
According to the rows of this schedule one obtains the Kirchhoff current-law equations,
j1 + j2 − j3 − j4 + j5 + j6 − j7 − j8 = 0
j1 + j2 − j3 − j4 + j9 = 0 (67)
−j1 − j2 + j7 + j8 + j10 = 0
while the columns yield the following relations for the branch voltages in terms of the node-pair voltages:
v1 = e1 + e2 − e3, v6 = e1
v2 = e1 + e2 − e3, v7 = −e1 + e3
v3 = −e1 − e2, v8 = −e1 + e3 (68)
v4 = −e1 − e2, v9 = e2
v5 = e1, v10 = e3
The branches are again considered to be resistive. Let us assume for their conductances the following values in mhos:
g1 = 2, g2 = 2, g3 = 1, g4 = 3, g5 = 4, g6 = 5, g7 = 1, g8 = 3, g9 = 2, g10 = 6 (69)
The relations expressing the branch currents in terms of the net branch-voltage drops are then readily found by noting the appropriate expression for the drop in the passive part of each branch and multiplying this by the corresponding conductance. For example, the voltage drop in the passive part of branch 1 is v1 + 10, in branch 3 it is v3 + 2, in branch 5 it is v5 − 8, and so forth. Thus we see that
j1 = 2v1 + 20, j6 = 5v6
j2 = 2v2, j7 = v7 + 6
j3 = v3 + 2, j8 = 3v8 (70)
j4 = 3v4 − 12, j9 = 2v9
j5 = 4v5 − 32, j10 = 6v10 − 30
Substitution of the v's from Eqs. 68 into Eqs. 70 and the resulting expressions for the j's into Eqs. 67 gives the desired equilibrium equations. After proper arrangement these read
21e1 + 8e2 − 8e3 = 8
8e1 + 10e2 − 4e3 = −30 (71)
−8e1 − 4e2 + 14e3 = 44
Their solution yields
e1 = 3.48, e2 = −4.22, e3 = 3.93 (72)
from which the net branch-voltage drops may readily be computed using Eqs.
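The second example can likewise be verified numerically. The following sketch solves Eqs. 71, then substitutes back through Eqs. 68 and 70 and confirms that the current-law Eqs. 67 are satisfied (array indices are zero-based, so j[0] stands for j1, and so on):

```python
import numpy as np

# Equilibrium equations of the Fig. 11 example, Eqs. 71:
A = np.array([[21.0, 8.0, -8.0],
              [8.0, 10.0, -4.0],
              [-8.0, -4.0, 14.0]])
rhs = np.array([8.0, -30.0, 44.0])
e = np.linalg.solve(A, rhs)           # approx. (3.48, -4.22, 3.93), Eq. 72
e1, e2, e3 = e

# Branch voltages, Eqs. 68 (branches 1..10):
v = np.array([e1 + e2 - e3, e1 + e2 - e3, -e1 - e2, -e1 - e2, e1,
              e1, -e1 + e3, -e1 + e3, e2, e3])

# Branch currents, Eqs. 70: j_k = g_k * v_k + (source term)
g = np.array([2.0, 2.0, 1.0, 3.0, 4.0, 5.0, 1.0, 3.0, 2.0, 6.0])
offset = np.array([20.0, 0, 2.0, -12.0, -32.0, 0, 6.0, 0, 0, -30.0])
j = g * v + offset

# Current-law residuals, Eqs. 67 (all three must vanish):
kcl = np.array([
    j[0] + j[1] - j[2] - j[3] + j[4] + j[5] - j[6] - j[7],
    j[0] + j[1] - j[2] - j[3] + j[8],
    -j[0] - j[1] + j[6] + j[7] + j[9],
])
```

That the residuals vanish is no accident: Eqs. 71 are precisely Eqs. 67 after the substitutions, so solving one enforces the other.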
68, and the branch currents are then found from Eqs. 70.

1. Regarding the independence of Kirchhoff voltage-law equations, it might be supposed that, if the number of equations equals l = b − n, and if collectively they involve all of the branch voltages, then they must form an independent set. Show that this conclusion is false by constructing a counterexample. Thus, with regard to the accompanying graph, consider equations written for the combined contours of meshes 1 and 2, 2 and 3, 3 and 4, 4 and 1. Although all branch voltages are involved, show that these equations do not form an independent set.

[Prob. 1.]  [Prob. 3.]

2. Prove or disprove the statement: "The number of independent Kirchhoff voltage-law equations equals the smallest number of closed paths that traverse all of the branches."

3. With reference to the graph shown, determine whether a set of voltage-law equations written for the following combined mesh contours is an independent one: (1 + 2 + 3), (4 + 5 + 6), (7 + 8 + 9), (1 + 4 + 7), (2 + 5 + 8), (3 + 6 + 9), (1 + 2 + 4 + 5), (2 + 3 + 5 + 6), (5 + 6 + 8 + 9)

4. Construct the dual to Prob. 1, and solve it.

5. In Prob. 1 show that voltage-law equations written for the following combined mesh contours do form an independent set: (1 + 2 + 3), (2 + 3 + 4), (3 + 4 + 1), (4 + 1 + 2). Is the following set independent: (1 − 2), (2 − 3), (3 − 4), (4 − 1)?

6. In a 5-mesh mappable network, are voltage-law equations written for the following mesh combinations independent: (1 + 2), (2 + 3), (3 + 4), (4 + 5), (5 + 1)? or (1 − 2), (2 − 3), (3 − 4), (4 − 5), (5 − 1)?

7. Translate Prob. 6 into its dual. Make appropriate sketches and answer the pertinent questions involved.

8.
Prove that voltage-law equations written for the mesh contours in a mappable network always form an independent set by constructing the dual situation and carrying out the corresponding proof. In which situation is the proof more readily carried out?

9. Consider the graph of Prob. 1, Ch. 1, and choose branches 5, 6, 7, 8 as constituting a tree. For the meshes, which become the closed paths upon which the link currents circulate, write Kirchhoff voltage-law equations, and use these to express the link voltages in terms of the tree-branch voltages. Now write a voltage-law equation for an additional closed path, say, for the mesh combination (1 + 2 − 3) or any other one. In this equation substitute the expressions for the link voltages obtained above, and note that it reduces to the trivial identity 0 = 0.

10. Construct the dual to the situation described in Prob. 9, and thus give an illustrative example showing that no more than n Kirchhoff current-law equations are independent.

11. In the sketch below, the series source is a voltage, and the parallel one is a current. Numerical values are in volts and amperes. The passive element is a resistance of 3 ohms, as indicated. Using the superposition principle, which allows us to add separate effects, treating each as though the others did not exist, and remembering that a nonexistent current is an open circuit, demonstrate the correctness of each of the following relations

v = 5 + 3 × 2 + 3j = 3j + 11

and check them, using Eq. 43. Thus show that the given active branch is replaceable by either of the following ones:

12. Using the ideas brought out in the preceding problem, reduce the following to (a) an equivalent single passive element with a series voltage source, (b) an equivalent single passive element with a parallel current source.

[Prob. 12.]

13.
Apply the statement of Prob. 12 to the following:

[Prob. 13.]

14. Apply the statement of Prob. 12 to the arrangement of sources and passive elements shown below.

[Prob. 14.]

15. In the following circuit the central source is a current. The other sources are understood to be voltages or currents, according to their series or parallel association with the pertinent passive element. Element values are in ohms. Through appropriate manipulation, reduce this problem to one involving a single loop current, and, after finding its value, obtain the four currents i1, i2, i3, i4 in terms of this one.

16. In the pertinent graph, the branch numbers may be regarded as also indicating branch conductance values in mhos. Construct two cut-set schedules, one for the choice of node-pair voltages e1 = v1, e2 = v2, e3 = v5, and the other one for the picked-up nodes a-c, b-c, d. Using the first schedule for the definition of variables and the second one for the determination of the Kirchhoff current-law equations, obtain the equilibrium equations (having a nonsymmetrical parameter matrix), and solve. Alternately obtain symmetrical equilibrium equations through use of the first schedule alone. Solve these, and check the previous solutions.

17. Construct the complete dual to Prob. 16 and solve.

18. Consider the 2-, 4-, 5-ohm branches as forming a tree. (a) Find equivalent voltage sources in the links alone. Set up loop equations, and solve. (b) Find an equivalent set of current sources across the tree branches alone. Set up node equations, and solve. Obtain all currents and voltages in the passive branches by each method and check.
Find the equivalent voltage sources in (a), first, by replacing the −4-volt and 2-volt sources in the tree branches by respectively equal sources in the links and combining these with the other link-voltage sources and converted current sources; second, by opening all the links and noting the net voltages across the gaps thus formed (the desired link-voltage sources are the negatives of these). Check the results found by these two methods. Similarly in part (b) find the desired equivalent current sources, first, through conversion of voltage to current sources and then replacing current sources across links by equal ones across tree branches and combining these with other sources across these branches; second, by short-circuiting all the tree branches and noting the net currents in these short circuits (the desired current sources are the negatives of these). Again check the results found by the two methods.

[Prob. 15.]  [Prob. 16.]

Note carefully that the sources in (a) yield the correct loop currents but that the voltages across the tree branches, which are now purely passive, are not the actual net tree-branch voltages. Hence, if we convert the voltage sources in (a) to equivalent current sources and transfer these across the tree branches, we should not expect to check the current sources found in part (b). Similarly, we cannot expect from the results of (b) to find those of (a) through source transformation methods alone. Discuss this aspect of the problem.

[Prob. 18.]

19. The sketch below shows the graph of a network consisting of seven 1-ohm branches and a 1-volt source. Find the values of the node potentials e1, e2, e3 with respect to that of the common node at 0.
Although any valid method is acceptable, it is suggested that you use the technique of source transformations in order to avoid deriving and solving a set of algebraic equations.

[Prob. 19.]  [Prob. 20.]

20. (a) In the network shown consider branches 1, 3, and 4 as forming a tree. Identify the link currents with the loop currents, and write a tie-set schedule for the network. Write down explicitly the three sets of equations: (1) Kirchhoff's voltage-law equations, (2) the appropriate volt-ampere relations for the branches, (3) the branch currents in terms of the loop currents. Substitute (3) into (2) and then (2) into (1) to obtain the equilibrium equations on a loop basis. (b) Write down this last set of equations directly, using mesh currents as variables and the simplified procedure discussed in Art. 6.

21. (a) For the network of Prob. 20 consider the node-pair voltages from a, b, and c to ground as an independent set. Write a cut-set schedule for them. Then obtain the three sets of equations: (1) Kirchhoff's current-law equations, (2) the appropriate volt-ampere relations for the branches, (3) the branch voltage drops in terms of the node-pair voltages. By substitution of (3) into (2), and then these into (1), obtain the equilibrium equations on a node basis. (b) Write down this last set of equations directly, using the same node-pair voltages as variables and the simplified procedure discussed in Art. 6.

22. Choosing the link currents 1, 2, 3, 4, 11 as variables, repeat parts (a) and (b) of Prob. 20 for the network shown here. Branches 1 through 10 are 2-ohm resistances. Branch 11 is a 1-ohm resistance in parallel with a 1-ampere current source.

23. When a branch with its associated sources as shown in Fig.
9 becomes degenerate through having its passive resistance assume an infinite value, then its voltage source is trivial, and its current is constrained by the associated current source to the value jk = −isk. One way of dealing with this situation is to revise the network geometry and dispose of the current source as shown in Fig. 8. Show, however, that one may alternately meet this situation by treating this branch in the normal manner. Thus on a node basis this type of degeneracy creates no problem since terms in the Kirchhoff current-law equations involving the current jk = −isk simply become known quantities and are transposed to the right-hand sides. On a loop basis, show that one can construct the tie-set schedule so that its first l − 1 rows do not involve this branch, thus identifying loop current il with the known branch current and rendering the first l − 1 of the loop equations sufficient for the determination of all unknowns. As an illustration, treat the following circuit in this manner. Let the branch numbers equal resistance values in ohms.

[Prob. 22.]  [Prob. 23.]

24. When a branch with its associated sources as shown in Fig. 9 becomes degenerate through having its passive resistance assume a zero value, then its current source is trivial, and its voltage is constrained by the associated voltage source to the value vk = −esk. One way of dealing with this situation is to revise the network geometry and dispose of the voltage source in the manner shown in Fig. 7. Show, however, that one may alternately meet this situation by treating this branch in the normal manner.
Thus on a loop basis this type of degeneracy creates no problem since terms in the Kirchhoff voltage-law equations involving the voltage vk = −esk simply become known quantities and are transposed to the right-hand sides. On a node basis, show that one can construct the cut-set schedule so that its first n − 1 rows do not involve this branch, thus identifying node-pair voltage en with the known branch voltage and rendering the first n − 1 of the node equations sufficient for the determination of all unknowns. As an illustration, treat the following circuit in this manner. Let the branch numbers equal conductance values in mhos.

[Prob. 24.]

25. For the circuit shown in the accompanying sketch, assume the branch numbers to indicate also the resistance values in ohms, and let is be one ampere. Choosing branches 1, 2, 3, 4, 5 as links, find a set of link-voltage sources equivalent to the given current source as being the negatives of the voltages appearing at gaps cut simultaneously into all links. With these replacing the current source is, write down by inspection the equilibrium equations on a mesh basis using the simplified procedure given in Art. 6 and inserting the net source voltages around meshes as the right-hand members. Alternately obtain these same equations using the procedure described in Prob. 23 in which the current source is treated as a normal branch, and solve. Now replace is by identical sources in parallel with branches 3 and 4; convert to voltage sources in series with these branches, and again write mesh equations. Will these yield the same mesh currents as above? Explain in detail.

[Prob. 25.]

Methods of Solution and Related Topics

1 Systematic Elimination Methods

Having written the equilibrium equations for a given network, the next task is to carry through their solution.
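Before turning to solution methods, the independence questions raised in Probs. 1 and 5 above can be checked mechanically: collect the mesh combinations into a matrix and compute its rank. A minimal sketch with NumPy (the rows encode only which of the four mesh equations enter each combined contour, independent of any particular graph):

```python
import numpy as np

# A set of combined-contour equations can be independent only if this
# combination matrix has full rank.

# Prob. 1: contours (1+2), (2+3), (3+4), (4+1)
combo_a = np.array([[1, 1, 0, 0],
                    [0, 1, 1, 0],
                    [0, 0, 1, 1],
                    [1, 0, 0, 1]])

# Prob. 5: contours (1+2+3), (2+3+4), (3+4+1), (4+1+2)
combo_b = np.array([[1, 1, 1, 0],
                    [0, 1, 1, 1],
                    [1, 0, 1, 1],
                    [1, 1, 0, 1]])

print(np.linalg.matrix_rank(combo_a))  # 3: row1 - row2 + row3 - row4 = 0, dependent
print(np.linalg.matrix_rank(combo_b))  # 4: full rank, an independent set
```

The first matrix is rank-deficient no matter what the underlying graph is, which is exactly why all branch voltages being "involved" is not enough for independence.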
Here one may proceed in several ways, the proper choice depending largely upon the objective for which the analysis is done. Thus, one may be interested merely in the numerical solution to a specific situation, or in a more general solution in which some or all of the network parameters enter symbolically. The latter type of problem is actually equivalent to the simultaneous study of an infinity of specific numerical situations and consequently presents greater algebraic difficulties which can be overcome only through the use of correspondingly more general methods of analysis. An effective tool for dealing with such problems is given in the next article. For the moment we shall concern ourselves with the less difficult task of solving a specific numerical case. Suppose we choose as an example the Eqs. 24 appropriate to the network of Fig. 3 in Ch. 2, with arbitrary nonzero right-hand members, thus:

1.142e1 − 0.976e2 + 0.643e3 + 0.500e4 = 1
−0.976e1 + 2.326e2 − 1.893e3 − 0.750e4 = 2
0.643e1 − 1.893e2 + 2.218e3 + 0.950e4 = 3     (1)
0.500e1 − 0.750e2 + 0.950e3 + 1.061e4 = 4

The straightforward method of solving a set of simultaneous equations like these consists in systematically eliminating variables until an equation with a single unknown is obtained. After its value is found, an equation involving this and one other variable is used to compute the value of a second unknown, and so forth. Unless the entire process is systematized, however, a considerable amount of lost motion may result. The following procedure is an effective one. It is clear that only the numerical coefficients enter into the computational procedure. Therefore it is sensible to omit writing the symbols e1, e2, … altogether and consider only the numerical matrix 2.
1.142   −0.976    0.643    0.500    1.000
−0.976    2.326   −1.893   −0.750    2.000
0.643   −1.893    2.218    0.950    3.000     (2)
0.500   −0.750    0.950    1.061    4.000

We now contemplate the detailed manner in which one may carry out the following plan in terms of the Eqs. 1: First, we undertake to eliminate e1 from all but the first of these equations; this step leaves us with three equations involving e2, e3, e4. From all but the first of these, we now eliminate e2, so that we have two equations with e3 and e4. From one of these we eliminate e3 and have a single equation in e4. Note at this stage that we also have an equation involving e3 and e4, one involving e2, e3, and e4, and the first of the original equations involving all four unknowns. We can, therefore, readily solve these equations in sequence and obtain all the unknowns without further difficulty. Specifically, we solve first the equation in e4 alone. Next, the one involving e3 and e4 is solved for e3. Then, with e3 and e4 known, the equation involving e2, e3, and e4 yields the value of e2, and the first of the original equations, lastly, is used to find e1. With reference to matrix 2, the process of eliminating e1 from all but the first of Eqs. 1 is evidently equivalent to an elimination of the second, third, and fourth elements in the first column. This end is accomplished by operating directly upon the rows of matrix 2 as one would upon the corresponding Eqs. 1. Thus, if we add to the elements of the second row the respective a-multiplied elements of the first row, with a = 0.976/1.142, the result reads

0.000    1.492   −1.343   −0.323    2.855

which we regard as a new second row.
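The row operation just performed is easy to reproduce; a minimal sketch with NumPy, using the first two rows of matrix 2:

```python
import numpy as np

row1 = np.array([1.142, -0.976, 0.643, 0.500, 1.000])
row2 = np.array([-0.976, 2.326, -1.893, -0.750, 2.000])

a = 0.976 / 1.142              # the multiplier "a" chosen in the text
new_row2 = row2 + a * row1     # add the a-multiplied first row to the second

print(new_row2)                # matches the new second row of the text
```

Rounded to three decimals this reproduces 0.000, 1.492, −1.343, −0.323, 2.855.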
Similarly, a new third row is formed by adding to the elements of the present third row the respective a-multiplied elements of the first row with a = −0.643/1.142, yielding

0.000   −1.343    1.856    0.668    2.437

Finally a new fourth row is analogously formed with a = −0.500/1.142:

0.000   −0.323    0.668    0.842    3.562     (5)

These steps are summarized by observing that the original matrix 2 has thus been transformed into the following equivalent one:

1.142   −0.976    0.643    0.500    1.000
0.000    1.492   −1.343   −0.323    2.855
0.000   −1.343    1.856    0.668    2.437     (6)
0.000   −0.323    0.668    0.842    3.562

If we were to write down the equations corresponding to this matrix, it would become clear that the numerical operations just carried out are equivalent to the elimination of e1 from the last three of the original Eqs. 1. We now proceed to eliminate e2 from the last two equations corresponding to matrix 6. To this end we add to the elements of the third row of this matrix the respective a-multiplied elements of the second row with a = 1.343/1.492, obtaining the new third row:

0.000    0.000    0.646    0.378    5.008     (7)

Next, multiplying the elements of the second row in 6 by a = 0.323/1.492 and adding to the respective elements of the fourth row gives

0.000    0.000    0.378    0.772    4.180     (8)

The original matrix now has assumed the form

1.142   −0.976    0.643    0.500    1.000
0.000    1.492   −1.343   −0.323    2.855
0.000    0.000    0.646    0.378    5.008     (9)
0.000    0.000    0.378    0.772    4.180

corresponding to a set of equations in which e1 does not appear in the second, while e1 and e2 do not appear in the third and fourth.
We now carry out a step equivalent to eliminating e3 from the last of the set of equations represented by the matrix 9 by adding the a-multiplied elements of the third row to the respective ones of the fourth row, with a = −0.378/0.646, giving a final fourth row that reads

0.000    0.000    0.000    0.551    1.251     (10)

and the following final form for the matrix:

1.142   −0.976    0.643    0.500    1.000
0.000    1.492   −1.343   −0.323    2.855
0.000    0.000    0.646    0.378    5.008     (11)
0.000    0.000    0.000    0.551    1.251

The last row represents the equation

0.551e4 = 1.251     (12)

from which

e4 = 2.27     (13)

The third row in matrix 11 implies the equation

0.646e3 + 0.378e4 = 5.008     (14)

which, through use of the value 13 for e4, becomes

0.646e3 = 5.008 − 0.857 = 4.150     (15)

and hence yields

e3 = 6.42     (16)

From the second row in matrix 11 we next have the equation

1.492e2 − 1.343e3 − 0.323e4 = 2.855     (17)

or, in view of the values 13 and 16,

1.492e2 = 2.855 + 8.629 + 0.732 = 12.216     (18)

from which

e2 = 8.189     (19)

Finally the first of Eqs. 1, corresponding to the first row in matrix 11, together with the values for e2, e3, and e4 already found, becomes

1.142e1 − 7.992 + 4.130 + 1.135 = 1     (20)

and thus

1.142e1 = 3.727,  or  e1 = 3.264     (21)

The basic process in this systematic elimination method is the transformation of the original matrix 2 into the so-called triangular form 11, whence the unknowns are obtained through an obvious recursion process which begins with the computation of the last of the unknowns, e4, and successively yields all the others. It may readily be seen that this computational procedure involves a minimum of lost motion and hence is the best method to apply in any numerical example.
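The whole triangularization-plus-back-substitution procedure can be written out compactly; a sketch in NumPy that follows the text's steps (the function name and the loose tolerance against the text's rounded values are assumptions of this sketch):

```python
import numpy as np

def solve_by_elimination(A, b):
    # Triangularize the augmented matrix [A | b] by the row operations of
    # the text, then back-substitute, beginning with the last unknown.
    M = np.hstack([np.asarray(A, float), np.asarray(b, float).reshape(-1, 1)])
    n = len(b)
    for col in range(n - 1):
        for row in range(col + 1, n):
            a = -M[row, col] / M[col, col]   # the multiplier "a" of the text
            M[row] += a * M[col]
    x = np.zeros(n)
    for row in range(n - 1, -1, -1):
        x[row] = (M[row, n] - M[row, row + 1:n] @ x[row + 1:]) / M[row, row]
    return x

A = [[ 1.142, -0.976,  0.643,  0.500],
     [-0.976,  2.326, -1.893, -0.750],
     [ 0.643, -1.893,  2.218,  0.950],
     [ 0.500, -0.750,  0.950,  1.061]]
b = [1.0, 2.0, 3.0, 4.0]

e = solve_by_elimination(A, b)
print(e)   # close to the text's e1 = 3.264, e2 = 8.189, e3 = 6.42, e4 = 2.27

# The same routine reproduces solution 72 of the earlier node example:
print(solve_by_elimination([[21, 8, -8], [8, 10, -4], [-8, -4, 14]],
                           [8, -30, 44]))
```

The printed values differ from the text's only in the last retained digit, since the text carries three to four significant figures at each step.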
If some terms in the given equations already have zero coefficients, it may be necessary first to rearrange the equations in order that the elimination method be applicable in precisely the form described above while enabling one to take advantage of the simplifications implied by such missing terms. These modifications in procedure, however, the reader can readily supply for himself as he carries out actual examples, and further pertinent discussion of them will not be given here.

2 Use of Determinants

Although the determinant method of solving simultaneous algebraic equations may be used in numerical examples, the amount of computation involved is usually greater than in the systematic elimination process just described. It does, on the other hand, afford a means for expressing the solutions in a compact symbolic form that enables one to study their functional properties and thus deduce with little effort a number of important and useful general network characteristics, some of which will be pointed out in the latter part of this chapter. Our immediate objective is to discuss briefly some of the more important algebraic properties of determinants. The so-called determinant of the system of equations

a11x1 + a12x2 + ⋯ + a1nxn = y1
a21x1 + a22x2 + ⋯ + a2nxn = y2     (22)
⋯
an1x1 + an2x2 + ⋯ + annxn = yn

is written in the form

| a11  a12  ⋯  a1n |
| a21  a22  ⋯  a2n |     (23)
|  ⋯               |
| an1  an2  ⋯  ann |

In appearance it is much like the corresponding matrix (differing only in that the array of coefficients is enclosed between vertical lines instead of square brackets), but in its algebraic significance it is entirely different from the matrix in that it is a function of its elements and has a value corresponding to the values of these elements as does any function of several variables. The elements are the coefficients aik in Eqs. 22. For n equations the determinant has n² elements and is said to be of order n.
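Before developing the properties of determinants, their role in solving such a system may be previewed on the small Eqs. 63 of the earlier node example; a sketch assuming Cramer's rule, which the text has not yet derived at this point:

```python
import numpy as np

G = np.array([[3.0, -1.5],
              [-1.5, 3.0]])
s = np.array([-5.0, 10.0])

d = np.linalg.det(G)            # system determinant
G1 = G.copy(); G1[:, 0] = s     # replace column 1 by the right-hand members
G2 = G.copy(); G2[:, 1] = s     # replace column 2 likewise

e1 = np.linalg.det(G1) / d      # each unknown is a ratio of determinants
e2 = np.linalg.det(G2) / d
print(e1, e2)                   # e1 -> 0, e2 -> 10/3, agreeing with Eq. 64
```

Even for this 2-by-2 case, three determinant evaluations are needed, which is why the text recommends elimination for purely numerical work.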
The determinant is a particular kind of function of many variables that was created by mathematicians for the sole purpose of its being useful in the solution of simultaneous equations. Hence it was given those properties that turn out to serve best this objective. These may be summarized in the following three statements:

The value of a determinant is unchanged if the elements of any row (or column) are added to the respective ones of another row (or column).     (24)

The value of a determinant is multiplied by k if all the elements of any row or column are multiplied by k.     (25)

The value of a determinant is unity if the elements on the principal diagonal are unity and all others are zero.     (26)

The last statement may be written in the form

| 1  0  0  ⋯  0 |
| 0  1  0  ⋯  0 |  = 1
|  ⋯            |
| 0  0  0  ⋯  1 |

Through combining the properties 24 and 25 it follows that the value of the determinant remains unchanged if the k-multiplied elements of any row (or column) are added to the respective ones of another row (or column). Since k may be numerically negative, this statement includes the subtraction as well as the addition of respective elements. It also follows from these properties that a determinant has the value zero (a) if the elements of any row or column are all zero, or (b) if the elements of any two rows (or columns) are respectively equal or proportional, for a row or column of zeros implies k = 0, and a condition of equal or proportional rows (or columns) immediately leads to a row (or column) of zeros through appropriate manipulations of the sort just mentioned. The value of a numerical determinant may readily be found through use of these properties since, by means of them, one can consecutively reduce to zero all but the diagonal elements (after the fashion that matrix 2 in the previous article is transformed to form 11).
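The three defining properties and their consequences can be spot-checked numerically; a small sketch (the random 4-by-4 matrix is merely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
d = np.linalg.det(A)

B = A.copy(); B[1] += B[0]     # property 24: add one row to another
C = A.copy(); C[2] *= 3.0      # property 25: multiply a row by k = 3
E = A.copy(); E[3] = E[0]      # consequence (b): two equal rows

print(np.isclose(np.linalg.det(B), d))            # True: value unchanged
print(np.isclose(np.linalg.det(C), 3.0 * d))      # True: value multiplied by k
print(np.isclose(np.linalg.det(np.eye(4)), 1.0))  # True: property 26
print(np.isclose(np.linalg.det(E), 0.0))          # True: equal rows give zero
```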
Once the determinant has this diagonal form, properties 25 and 26 show that the value equals the product of the diagonal elements. In fact it can be shown that the determinant in triangular form has this same property; that is,

| a11  a12  a13  ⋯  a1n |
|  0   a22  a23  ⋯  a2n |
|  0    0   a33  ⋯  a3n |  = a11 a22 a33 ⋯ ann
|  ⋯                    |
|  0    0    0   ⋯  ann |

y22 = z11/|z|,  y12 = −z12/|z|     (104)

where the determinant of Eqs. 101 is

|z| = z11z22 − z12²     (105)

and the symmetry condition z12 = z21 holds. The determinants |y| and |z|, of course, have reciprocal values; that is,

|y| = |z|⁻¹     (106)

as is clear, incidentally, from a comparison of relations 102 and 104. These results contain the interesting and useful relationship expressed by

y11z11 = y22z22     (107)

A simple example will illustrate the unusual character of this result. With reference to the network of Fig. 32, in which the element values are in ohms, we note that z11 is the resistance of the series combination of the two branches, while y11 is the conductance of the 2-ohm branch alone. Hence

y11 = 1/2,  z11 = 10     (108)

[Fig. 32. A simple example of a dissymmetrical two terminal-pair network for which the property expressed by Eq. 107 is illustrated.]

At the opposite end of this two terminal-pair we observe that z22 is given by the 8-ohm resistance alone while y22 is the conductance of the two branches in parallel. Thus we have

y22 = 5/8,  z22 = 8     (109)

Physically and numerically the pairs of quantities 108 and 109 seem unrelated; yet it is clear that they do fulfill the condition expressed by Eq. 107, as indeed they must, since this relationship holds for any two terminal-pair network. Returning to Fig.
31 again, it is often useful to express the quantities e1, i1 in terms of e2, i2 or vice versa. Such relations are readily obtained through an appropriate manipulation of Eqs. 100 or 101. It is customary to write them as

e1 = Ae2 − Bi2
i1 = Ce2 − Di2     (110)

It is a simple matter to determine the coefficients A, B, C, D, called the general circuit parameters, in terms of the y's or the z's. The following relations are self-explanatory, and make use of Eqs. 100, 101, and Eqs. 102 and 104 relating the y's and z's:

A = (e1/e2) for i2 = 0, = z11/z21     (111)

the current function is(t) approaches a step of the value I, and e = n dφ/dt, n being the number of turns in the coil and φ the flux linking it. Since by definition L = nφ/i, we note that a current of the value 1/L corresponds to a flux linkage nφ of unity. The statement in the second sentence of this paragraph may now be made more precise: A unit current impulse applied to a capacitance instantaneously places unit charge (1 coulomb) in that capacitance; a unit voltage impulse applied to an inductance instantaneously places unit flux linkage (1 weber-turn) in that inductance. These two statements, one about a current impulse and the other about a voltage impulse, are identical except for an interchange of quantities in the pairs: e and i, C and L, charge and flux linkage. Or we may say that only one statement is made, and that this one remains true upon interchange of the dual quantities in the pairs mentioned. Here again we have an example of the principle of duality which we shall elaborate further as our discussions continue. The sudden introduction of electric charge into a capacitance represents the sudden addition of a finite amount of energy to the system of which that capacitance is a part.
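Returning briefly to the two terminal-pair relations, Eqs. 104 through 107 can be verified for the Fig. 32 example; a sketch that assumes the open-circuit impedance matrix z = [[10, 8], [8, 8]] implied by the values in Eqs. 108 and 109 (a 2-ohm series branch followed by an 8-ohm shunt branch):

```python
import numpy as np

z = np.array([[10.0, 8.0],
              [ 8.0, 8.0]])     # assumed z for the Fig. 32 network

y = np.linalg.inv(z)            # short-circuit admittance matrix

print(y[0, 0])                  # y11 -> 1/2, Eq. 108
print(y[1, 1])                  # y22 -> 5/8, Eq. 109
print(np.isclose(np.linalg.det(y), 1 / np.linalg.det(z)))   # True, Eq. 106
print(y[0, 0] * z[0, 0], y[1, 1] * z[1, 1])                 # equal, Eq. 107
```

Both products in the last line come out to 5, although the four quantities entering them look physically unrelated.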
Q coulombs in C farads represents an energy of Q²/2C joules, which may alternatively be written CEc²/2 if Ec = Q/C denotes the voltage produced in the capacitance by the charge Q. Similarly, the sudden introduction of flux linkage into an inductance represents the addition of energy to the network of which that inductance is a part. nφ weber-turns in L henrys represents an energy of (nφ)²/2L joules, which may alternatively be written LIL²/2 if IL = nφ/L denotes the current produced in the inductance by the flux linkage nφ. A remark somewhat apart from the present topic but nevertheless appropriate at this point is to the effect that some readers may not like the statement about flux linkage producing current. They may feel that it is quite the other way about, that current produces flux linkage. While it is true that teachers of electricity and magnetism have consistently presented the situation in this way for as long as the subject has been taught, there is actually more reason based upon physical interpretation (if "physical" interpretation of such purely mathematical fictions as electric and magnetic fields makes any sense at all) to adhere to the view that the electromagnetic field produces voltage and current rather than that the reverse is true. For purposes of analysis it does not matter one jot how we interpret the mathematical relationships. It is best to take a very flexible view of such things and be ready to accept either interpretation, whichever is consistent with the tenor of reasoning at the moment.
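The charge-and-energy statements can be illustrated by approximating the impulse with rectangular current pulses of unit area; a sketch (the capacitance value is illustrative, not from the text):

```python
import numpy as np

C = 0.25   # farads; an illustrative value

for T in [1.0, 0.1, 0.001]:          # ever narrower pulses of unit area
    dt = T / 10000.0
    i = np.full(10000, 1.0 / T)      # pulse of height 1/T over width T
    q = float(np.sum(i) * dt)        # charge = integral of i dt -> 1 coulomb
    v = q / C                        # capacitance voltage E_C = Q/C
    w = q * q / (2.0 * C)            # stored energy Q^2/2C -> 1/2C joules
    print(T, q, v, w)
```

However narrow the pulse becomes, the delivered charge stays at 1 coulomb and the inserted energy at 1/2C joules, which is the content of the summary statement above.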
To summarize the statements about current or voltage impulses and the energy they impart to network elements we may say: A unit current impulse applied to a capacitance of C farads establishes instantly a charge of 1 coulomb and inserts 1/2C joules of energy; a unit voltage impulse applied to an inductance of L henrys establishes instantly a flux linkage of 1 weber-turn (hence a current of 1/L amperes) and inserts 1/2L joules of energy.

3 The Family of Singularity Functions; Some Physical Interpretations

The impulse and step functions introduced in the previous article are found to be practically useful because many actual excitation functions can be represented in terms of them. In this regard, the step function is probably the more widely known of the two, for it has been discussed and employed in the literature on circuit theory for many years, having been introduced through the writings of Oliver Heaviside during the latter part of the nineteenth century. In order to appreciate the usefulness of such a concept as the step function, consider the commonly occurring situation pictured in part (a) of Fig. 7 where some passive network (shown by the box) is assumed to be connected to a battery with the constant value of E volts through the switch S. The problem usually is to study the network response that takes place following closure of the switch, with the assumption
However, if the interest Lies chiefly in the behavior of the network immediately following the switch closure, then it is obviously not appropriate to regard the excitation as THE FAMILY OF SINGULARITY FUNCTIONS 197 a constant, for it is the discontinuity in this function occurring at the switching instant that is its outstanding characteristic. That is to say, it is the sudden change in the excitation from the zero value prior to the switching instant to its nonzero constant value afterward that charac- terizes the nature of the network response near this time instant. (a) (b) Fig. 7. Application of a constant voltage E through closure of switch 5 at the instant ( = that is to say, they are node flux linkages. Denoting the branch currents as usual byji, 32, • • . we have accord- ing to the relations between current and flux linkage in an induct- ance element (as given in Eq. 20) ii = -fa h = 2(0i — fa) h = 3 fit, the result for this case is immediately obtained from Eq. 90, thus: i(t) =Vc/Lu0tX e-ai (91) Again it may be pointed out that a variety of modifications of the circuit arrangement of Fig. 21 may be carried out without affecting the nature of the response, through use of one or more of the source transformations shown in Figs. 12, 13, and 14. Thus, if the capaci- tance C in Fig. 21 is split into Ci + C2 = C and the transforma- tion of Fig. 14 applied to i,(c) and Ci in parallel to convert this Fio. 24. Circuit arrangement yielding combination into a voltage source Generated on 2013-12-08 01:36 GMT / http://hdl.handle.net/2027/mdp.39015000463490 Public Domain, Google-digitized / http://www.hathitrust.org/access_use#pd-google the same t( 0 is fixed by the value of the current for / = 0 (Eq. 14 is the result for i(0) = l/L). Similarly the behavior of the series RC circuit for t > 0 is fixed by the value of the charge for t = 0 (in Eq. 47 the capacitance voltage is given for t > 0 when the initial charge is 1 coulomb). 
The series RLC circuit has a definite behavior for t > 0 in terms of known values of charge and current at t = 0 (as will be shown in further detail presently). That is to say, the state of the network at t = 0 is adequately described by the values of the initial charges and currents; it is not necessary to know how these values came about! Although a given set of values may have come about as a result of many completely different behavior patterns before the initial instant, the behavior that this set determines for t > 0 can have only one pattern, because the solution to the pertinent differential equation of equilibrium involves as many integration constants as there are independent initial charges and currents, so that the latter determine these constants uniquely, and nothing else can exert an influence on the resulting solution.* This point is now further illustrated for the series RLC circuit discussed in Art. 5. Let us return to the point in the discussion of the RLC circuit where the formal solutions for current and charge (Eqs. 57 and 58) are obtained from the differential Eq. 52. The reason we refer to these as "formal" solutions is that they are not yet explicit relations for the current and charge but merely represent these quantities in functional form, because the integration constants A1 and A2 appearing in them are as yet not fixed. Since only two unknown constants are involved, two special conditions suffice to render the formal solutions explicit. For these conditions we may choose two arbitrarily specified values of the current at selected instants of time, or two values of the charge at chosen instants; or, what is more commonly done, we may specify values for the current and the charge at t = 0.

* See Eq. 36 for a specific example of this sort.
We call these values the "initial conditions" since they determine the state of the network at t = 0. Thus, through considering Eqs. 57 and 58 for t = 0, we have for the determination of the integration constants (in place of Eqs. 59)

A1 + A2 = i(0),  A1/p1 + A2/p2 = q(0)    (99)

Solving these we have

A1 = [p1 i(0) - p1 p2 q(0)]/(p1 - p2),  A2 = [p2 i(0) - p2 p1 q(0)]/(p2 - p1)    (100)

Since the characteristic values p1 and p2 are conjugate complex, we see that the A's are conjugate complex. By Eqs. 74 and 75 we find

p1 = -α + jωd = j(ωd + jα) = jω0 e^{jφ}
p2 = -α - jωd = -j(ωd - jα) = -jω0 e^{-jφ}

with cos φ = ωd/ω0. These relations are useful in that they are the representation for any unfinished business that the RLC circuit may find itself in the process of carrying out if at t = 0 a fresh excitation is applied to it. Thus, for the completion of this unfinished business, one writes Eqs. 106 and 107 with i(0) and q(0) equal to the appropriate values (these must be part of the given data), and then adds the response due to the fresh excitation (for tacitly assumed rest conditions) to obtain the complete behavior for t > 0. It is interesting also to note that Eqs. 106 and 107 give the response of the series RLC circuit for a number of special excitation functions. For example, if we want the response of this circuit to an applied unit voltage impulse, we observe according to the discussion given earlier that this excitation instantly establishes a current in the inductance of the value 1/L. Hence we need merely consider Eqs. 106 and 107 for i(0) = 1/L and q(0) = 0 to have the response appropriate to this excitation. In Art. 5 it is shown (through consideration of Figs. 20 and 21) that an applied unit step voltage is equivalent to starting from an initial capacitance charge q(0) = -C coulombs. Hence Eqs.
106 and 107 for i(0) = 0 and q(0) = -C yield results appropriate to this case, as may be verified by comparison with Eq. 77. These considerations lead us to recognize that the process of taking arbitrary initial conditions into account in a transient-network problem may be done in an alternate way. Thus the existence of a current in an inductance at t = 0 is equivalent to inserting a voltage impulse (of appropriate value and occurring at t = 0) in series with this inductance, while the existence of a charge in a capacitance at t = 0 is equivalent to bridging a current impulse (of appropriate value and occurring at t = 0) across this capacitance. In other words, any set of arbitrary initial currents and charges may be replaced by an appropriate set of voltage and current impulse sources connected into the network. Superposition of their individually produced responses and that due to some specific excitation, all computed for initial rest conditions, yields the desired net response. Thus it may be seen that a discussion of network response that tacitly considers only initial rest conditions is nevertheless sufficient to deal with problems involving arbitrary initial conditions.
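Eqs. 99 and 100 above can be verified numerically. The following sketch (in Python; the element values R, L, C are illustrative assumptions, chosen to give an underdamped circuit) evaluates the closed forms of Eq. 100 and checks the properties claimed in the text: the A's satisfy Eq. 99, they are conjugate complex, and p1 = jω0 e^{jφ}.

```python
# A check of Eqs. 99-100 for the series RLC circuit; values illustrative.
import cmath

R, L, C = 1.0, 1.0, 1.0
alpha = R / (2 * L)
w0 = 1 / (L * C) ** 0.5
wd = (w0**2 - alpha**2) ** 0.5       # underdamped case
p1 = -alpha + 1j * wd
p2 = -alpha - 1j * wd

i0, q0 = 1 / L, 0.0                  # e.g. the unit-voltage-impulse case

# Closed forms of Eq. 100.
A1 = (p1 * i0 - p1 * p2 * q0) / (p1 - p2)
A2 = (p2 * i0 - p2 * p1 * q0) / (p2 - p1)

# They satisfy the initial-condition equations, Eq. 99.
assert abs((A1 + A2) - i0) < 1e-12
assert abs((A1 / p1 + A2 / p2) - q0) < 1e-12
# A1 and A2 are conjugate complex, as stated in the text.
assert abs(A1 - A2.conjugate()) < 1e-12
# p1 = j*w0*exp(j*phi) with cos(phi) = wd/w0.
phi = cmath.phase(wd + 1j * alpha)
assert abs(p1 - 1j * w0 * cmath.exp(1j * phi)) < 1e-12
```

The same four lines of algebra carry over unchanged to any other initial state i(0), q(0).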
[Chart: impulse and step-function responses of one-, two-, and three-element combinations, for both voltage-source and current-source excitation; (A) single elements; (C) two elements R, C; (D) two elements L, C, with ω0 = 1/√(LC). IMPULSE AND STEP-FUNCTION RESPONSE, pp. 258-260.]

10 Vector Diagrams

Although the Ohm's law relation linking complex voltage and current amplitudes through the impedance of the circuit is so simple that it hardly needs any further clarification to be fully understood, yet it may in some cases be found additionally helpful to give to this relation its corresponding graphical interpretation. Such representation is particularly useful when a given problem involves more voltages and currents (those in other branches of the network), for it lends circumspection and unity to the sum total of volt-ampere relations involved and enables one more readily to recognize significant special amplitude and phase relationships and the circuit conditions for which they arise.

Fig. 22. Vector diagrams illustrating relative magnitude and phase-angle relationships between complex voltage and current amplitudes. In (a) the current lags while in (b) it leads the voltage, which is chosen as the reference vector.

Figure 22 shows such a graphical representation (called a vector diagram) for the simplest case in which only one voltage vector E and current vector I are involved.
Part (a) of the figure represents a situation in which the angle of the impedance Z is positive (specifically θ = +30°), while in part (b) of the same figure the angle of the impedance is assumed negative (specifically θ = -60°). In the first case the current vector lags the voltage vector; in the second it leads. The relative lengths of the voltage and current vectors in these diagrams are completely arbitrary, for, although they are related through the magnitude of the impedance Z, the scales determining length may be chosen independently for voltage and current. Thus, suppose the voltage magnitude were 1 and the impedance magnitude 10, so that the current magnitude becomes 0.1. If we choose a voltage scale of 2 inches per volt and a current scale of 20 inches per ampere, the vectors E and I have equal length; whereas, if we change to a current scale of 10 inches per ampere, the I vector has half the length of the E vector. Unless the diagram contains several voltage or current vectors, relative lengths have not much meaning, but relative angles have. Elaborating upon this theme, we may say that a choice of scales for the quantities E and I fixes a scale for the associated impedance Z (which may or may not appear on the same diagram).

* Although frequency scaling has no effect upon the amplitude of an impedance or admittance function since it involves only the independent variable (the frequency s), a closer study reveals that the time function characterizing the transient response of the pertinent network not only has its independent variable (the time t) affected, but its amplitude becomes multiplied by a constant also. These matters are discussed in Art. 6 of Ch. 9 and are summarized there in the statement 132.
Or a choice of scales for I and Z fixes that for E; while a choice of scales for E and Z fixes that for I. For example, a choice of 10 volts per inch and 2 amperes per inch implies a scale of 5 ohms per inch if the scaled length (in inches) of a voltage vector divided by the scaled length (in inches) of a current vector is to yield the appropriate length in inches for the associated impedance vector. For these scales, a voltage vector 2.5 inches long represents 25 volts; a current vector 2.0 inches long represents 4 amperes; the length of the associated impedance vector is 2.5/2.0 = 1.25 inches and represents 5 × 1.25 = 6.25 = 25/4 ohms.

Fig. 23. The diagram of Fig. 22(a) redrawn with the current chosen as the reference vector.

One may tacitly assume for Z the scale of 1 ohm per inch, whence it follows that the scales for E and I become equal; that is, the number of volts per inch equals the number of amperes per inch. This tacit condition need, however, not always apply; and in fact it may in many problems be difficult to accommodate. Observe, with regard to angles, that we specifically use the term relative angles. Thus the diagram of Fig. 22(a) could just as well be drawn as shown in Fig. 23, or in any one of an infinite number of additional possible angular orientations. The one significant fact which this simple diagram portrays is that the current lags the voltage by θ radians. Since the angular orientation of the diagram as a whole is thus perfectly arbitrary, one is free to choose that orientation which seems to be appropriate to the physical conditions of the problem. For example, if the source is a voltage, then it is customary to choose the angle of E as being zero; if the source is a current, the angle of I is usually taken to be zero.
In the first of these choices the vector E serves as phase reference for the diagram; in the second choice the vector I becomes the phase reference. Whichever vector is chosen to have zero angle is designated as the reference vector. Although several different voltages and currents may be involved in a given problem, it is clear that the angle of only one voltage or of one current vector may arbitrarily be set equal to zero. When the impedance is represented in rectangular form, the volt-ampere relation may be separated into a sum of terms corresponding to the resistive and the reactive components of Z, as in

E = IZ = IR(ω) + jIX(ω)    (161)

The separate vector components of E represented by the terms IR(ω) and IX(ω), called the resistance drop and reactance drop respectively, may be indicated in the corresponding vector diagram. If this is done for the situation depicted in Fig. 22(a), the result has the appearance shown in Fig. 24. Observe that the vector IR(ω) must have the same angular orientation as the vector I, since R(ω) is merely a positive real number. We express this fact by stating that the IR drop is in phase with the vector I. The voltage component given by jIX(ω), on the other hand, clearly is π/2 radians in advance of I; that is, it leads the vector I by 90°. This fact is alternatively expressed by stating that the IX drop is in quadrature with the vector I, although this terminology is a bit ambiguous since quadrature merely implies a right-angle relationship without regard to lead or lag. Observe that the resistive and reactive components of E vectorially add to yield E. The lengths of these component vectors are fixed, for a given impedance angle θ, as soon as a length for the vector E is chosen. The vector I in the diagram must coincide in direction (must be in phase) with the IR vector; its length (as already mentioned) is arbitrary.
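The vector relationships of Figs. 22 and 24 and of Eq. 161 can be exercised with complex arithmetic. The following sketch (in Python; the values of R, X, and the current angle are illustrative assumptions, not from the text) confirms that the resistance drop is in phase with I, that the reactance drop leads I by π/2 radians, and that the two add vectorially to E.

```python
# Complex numbers play the role of the "vectors" of the diagram.
import cmath, math

R, X = 8.0, 6.0                              # impedance with positive angle
Z = complex(R, X)
I = 2.0 * cmath.exp(1j * math.radians(15))   # arbitrary current vector

drop_R = I * R                               # resistance drop
drop_X = 1j * I * X                          # reactance drop
E = drop_R + drop_X                          # Eq. 161: vector sum of the drops

assert abs(E - I * Z) < 1e-12
# IR is in phase with I; jIX is pi/2 radians in advance of I.
assert abs(cmath.phase(drop_R / I)) < 1e-12
assert abs(cmath.phase(drop_X / I) - math.pi / 2) < 1e-12
# Positive impedance angle: the current lags the voltage, as in Fig. 22(a).
assert cmath.phase(I) < cmath.phase(E)
```

Replacing X by a negative value reproduces the Fig. 22(b) situation, in which the current leads.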
In dealing with certain problems it may be convenient or useful to decompose the current vector I into components that are respectively in phase with E and in quadrature with E; or it may be expedient to subdivide the components of Z into subcomponents. A common example of the latter procedure arises in dealing with the series RLC circuit for an impressed voltage E. Here

Z = R + jωL + 1/(jωC)

which may be written

Z = R + jXL + jXC

with XL and XC as given in Eq. 147.

Fig. 24. The diagram of Fig. 22(a) with the resistive and reactive components of the voltage drop added according to their definition in Eq. 161.

In drawing the vector diagram for this example, it is effective to choose the current as phase reference, notwithstanding the fact that the voltage may be the source function. The resulting diagram, shown in Fig. 25, is drawn for a condition in which the capacitive reactance XC predominates so that the net voltage E lags the current I (the latter leads the voltage). Note that the net reactance drop IX is small compared with either component IXL or IXC, so that even the total voltage E, which includes (vectorially) the IR drop, is smaller than either reactive component drop. If XL + XC = 0, we have the resonant condition for which E = IR alone. The vector diagram makes more evident the fact that, at resonance, one may have voltages across the inductance and capacitance elements separately that can be many times larger than the net applied voltage. For this reason it is important that caution be exercised when experimenting with resonance in the laboratory unless the power source used is small enough to preclude the possibility of dangerous shock due to accidental contact with the apparatus.
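The resonance situation just described can be checked numerically. The following sketch (in Python; the element values R, L, C are illustrative assumptions, chosen so that ω0 = 10 and Q = ω0L/R = 10) shows that at resonance the net reactance vanishes, E = IR alone, and the drop across the inductance by itself is Q times the applied voltage.

```python
# Series RLC circuit driven at resonance; element values illustrative.
R, L, C = 1.0, 1.0, 1.0e-2
w0 = 1 / (L * C) ** 0.5
w = w0                            # drive exactly at the resonance frequency

XL = w * L
XC = -1 / (w * C)                 # capacitive reactance (negative)
Z = complex(R, XL + XC)
I = 1.0 / Z                       # unit applied voltage, E = 1

assert abs(XL + XC) < 1e-9        # net reactance zero at resonance
assert abs(Z - R) < 1e-9          # E = I*R alone
# The voltage across the inductance alone is Q = w0*L/R times E.
V_L = abs(I * 1j * XL)
assert abs(V_L - w0 * L / R) < 1e-9
```

For this illustrative circuit a 1-volt source produces 10 volts across the inductance (and across the capacitance), which is the laboratory hazard mentioned in the text.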
Another example that illustrates the circumspection afforded through use of a vector diagram is the circuit schematically shown in Fig. 26, which consists of the three impedances Z1, Z2, Z3 in series. Suppose we write each in its rectangular form as

Z1 = R1 + jX1,  Z2 = R2 + jX2,  Z3 = R3 + jX3

Fig. 25. Vector diagram for the series RLC circuit showing the capacitive and inductive reactance drops as well as the net reactance and the resistance drops. Note that the net reactance drop is smaller than either of its components (at resonance it is zero).

Fig. 26. Schematic representation of a circuit involving three arbitrary impedances in series.

Figure 27 shows the vector diagram in which the current is chosen as phase reference and the impedances Z1 and Z2 are assumed to be inductively reactive (X1 > 0 and X2 > 0), while Z3 is assumed to be capacitively reactive (X3 < 0). The diagram shows all three impedance drops (that is, voltages across the separate impedances) broken down into resistive and reactive components, as well as their vector sum, which equals the net voltage E. The circumspection which this diagram affords relative to magnitudes and phase relationships of all voltages with respect to the common current I cannot be had in equal measure from the purely analytic relationship involved. It is this property of the vector diagram that justifies its use. Although these remarks have been made with specific reference to the impedance as parameter linking E and I, it is evident that one may equally well carry through the graphical procedure in terms of the reciprocal parameter Y. Thus, if the impedances of Fig. 26 were connected in parallel, such a "switch" to an admittance basis would be indicated.
The details of this situation would then be exactly analogous to the ones given above with the roles of E and I interchanged, R's replaced by G's, and X's by B's.

Fig. 27. The vector diagram associated with the circuit of Fig. 26 showing all resistance and reactance drops as well as the net current and voltage vectors.

11 More Elaborate Impedance Functions; Their Properties and

As pointed out in Eq. 37, Art. 2 of Ch. 5, the differential equation linking current i(t) in some part of a network with voltage e(t) at the same point or at any other point is always of the form

a_n (d^n i/dt^n) + a_{n-1} (d^{n-1} i/dt^{n-1}) + ... + a_1 (di/dt) + a_0 i
    = b_m (d^m e/dt^m) + b_{m-1} (d^{m-1} e/dt^{m-1}) + ... + b_1 (de/dt) + b_0 e    (165)

in which a_n ... a_0 and b_m ... b_0 are real constants. They are all positive if e(t) and i(t) refer to the same point in the network; otherwise either some a's or some b's may be negative according to whether i(t) or e(t) is the excitation function.*

* These matters are fully elaborated upon in Arts. 4 and 5 of Ch. 9.

For an excitation of the form e^{st}, the particular integral yielding the steady-state response must have the same form. Hence for the steady-state solution to the differential Eq. 165 it is appropriate to substitute

e(t) = Ee^{st}  and  i(t) = Ie^{st}    (166)

with the result

(a_n s^n + a_{n-1} s^{n-1} + ... + a_1 s + a_0) Ie^{st} = (b_m s^m + b_{m-1} s^{m-1} + ... + b_1 s + b_0) Ee^{st}    (167)

After canceling the common factor e^{st}, one has

E/I = Z(s) = (a_n s^n + a_{n-1} s^{n-1} + ... + a_1 s + a_0)/(b_m s^m + b_{m-1} s^{m-1} + ... + b_1 s + b_0) = P(s)/Q(s)    (168)

If the polynomials P(s) and Q(s) are factored in terms of their zeros, the impedance Z(s) assumes the form

Z(s) = H (s - s1)(s - s3) ... (s - s_{2n-1}) / [(s - s2)(s - s4) ... (s - s_{2m})]    (169)

in which H = a_n/b_m is a positive real constant. If the excitation is e(t), the transient (force-free) part of the solution is determined by Eq. 165 with e = 0.
Assuming for the solution to this homogeneous differential equation the expression

i0(t) = Ae^{pt}    (170)

leads through direct substitution to

P(p)·Ae^{pt} = 0    (171)

whence a nontrivial solution (A ≠ 0) demands

P(p) = a_n p^n + a_{n-1} p^{n-1} + ... + a_1 p + a_0 = 0    (172)

This is the characteristic equation determining the complex natural frequencies associated with the transient current. We observe that they are the critical frequencies s1, s3, ..., s_{2n-1} appearing in the numerator of the impedance 169. The complete response (transient plus steady state) is thus given by

i(t) = A1 e^{s1 t} + A3 e^{s3 t} + ... + A_{2n-1} e^{s_{2n-1} t} + [E/Z(s)] e^{st}    (173)

If the excitation is i(t), the transient (force-free) part of the solution is determined by Eq. 165 with i = 0. Assuming for the solution to this homogeneous differential equation the expression

e0(t) = Be^{pt}    (174)

leads to

Q(p)·Be^{pt} = 0

and a nontrivial solution (B ≠ 0) demands

Q(p) = b_m p^m + b_{m-1} p^{m-1} + ... + b_1 p + b_0 = 0    (175)

This is the characteristic equation determining the complex natural frequencies associated with the transient voltage. We observe that they are the critical frequencies s2, s4, ..., s_{2m} appearing in the denominator of the impedance 169. The complete response (transient plus steady state) is in this case given by

e(t) = B2 e^{s2 t} + B4 e^{s4 t} + ... + B_{2m} e^{s_{2m} t} + Z(s) Ie^{st}    (176)

The transient amplitudes A1 ... A_{2n-1} in Eq. 173 and B2 ... B_{2m} in Eq. 176 are determined from the known state of the network at the time the excitation is applied and the demands made by the steady-state response function at that same instant, the discrepancies between these two factors being the quantities upon which the sizes of these amplitudes depend.
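Finding the natural frequencies thus reduces to finding the roots of the characteristic polynomials P(p) and Q(p). The following sketch (in Python; the series RLC circuit with R = 2, L = 1, C = 1 is an illustrative assumption, giving Z(s) = (s² + 2s + 1)/s) computes both sets of roots numerically.

```python
# Natural frequencies as roots of the characteristic polynomials (Eq. 172
# and its dual). Illustrative circuit: series RLC, R = 2, L = 1, C = 1,
# so Z(s) = (s^2 + 2s + 1)/s, i.e. P(s) = s^2 + 2s + 1 and Q(s) = s.
import numpy as np

P = [1, 2, 1]        # a2, a1, a0 -- numerator polynomial of Z(s)
Q = [1, 0]           # b1, b0    -- denominator polynomial of Z(s)

current_nat_freqs = np.roots(P)   # roots of P(p) = 0: voltage excitation
voltage_nat_freqs = np.roots(Q)   # roots of Q(p) = 0: current excitation

# This circuit is critically damped: a double natural frequency at s = -1.
assert np.allclose(current_nat_freqs, [-1.0, -1.0])
assert np.allclose(voltage_nat_freqs, [0.0])
```

The roots of P lie in the numerator of Z(s) and govern the force-free current; the roots of Q lie in the denominator and govern the force-free voltage, exactly as the text states.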
The details of their determination do not interest us at the moment.* It is significant to point out, however, that, for a nonzero initial state, the results 173 and 176 are meaningful even when the respective excitation functions are zero. When e( ...

3. ... When |B| > |A|, show from a geometrical construction that |B + A| ≥ |B| - |A|.

4. Consider the complex numbers A = a1 + ja2, B = b1 + jb2. (a) Show that Re [A ± B] = Re [A] ± Re [B] and Im [A ± B] = Im [A] ± Im [B]. (b) Show that Re [AB] = Re [A] Re [B] - Im [A] Im [B] and Im [AB] = Re [A] Im [B] + Im [A] Re [B]. Note: Re [AB] ≠ Re [A] Re [B]. (c) Find Re [Ae^{jθ}] where A is real. What does this result become when A = ... (d) Find Re [(3 + j4)(0.2 + j0.2)].

5. Given the complex impedance Z = R + jX ohms: (a) Find the impedance in the polar form Z = |Z| ∠θ. (b) Express it in the form Z = Ae^{jθ}. (c) Find Z^n and Z^{1/n}.

6. Given a current I = a + jb amperes in an impedance Z = R + jX ohms, find the voltage drop E in the direction of the current. Express the result in (a) rectangular form, (b) polar form, (c) exponential form, (d) as an instantaneous cosine function assuming ...

... φ3 = 20°, ..., φ1 = 60°. You are to find appropriate contents for the boxes labeled z1, z2, etc., not exceeding two-element combinations in complexity. Assume ω = 1 radian per second to start with, and later convert your design to the frequency ω = 2π × 60, and again to ω = 2π × 1000. If there exist other solutions of no greater complexity, state what they are. Draw a vector diagram showing E, I1, I2, ..., and the resultant current I0. Obtain an exact analytic expression for the latter. If the phase angles involved are replaced by lag angles, what changes in the circuits are needed?

41. The element values in the circuits shown in the accompanying sketches are in ohms and henrys. Determine the impedance Z in each case as a function of the complex frequency s, and put it into the normal form of a quotient of frequency factors.
Make a sketch of the s plane, showing the critical frequencies and some point s = jω on the j axis. By inspection of this diagram, what are |Z| and θ (the angle of Z) for ω = 1 and ω = 2? At what ω value is θ largest, and what is this largest value? Sketch |Z| and θ versus ω. If a unit step current is applied to each circuit, find the resultant transient voltage at the input terminals. If the frequency scale is stretched so that the point ω = 1 becomes ω = 1000 (and all other points are changed in the same ratio), what do the circuit element values become in ohms and henrys? What is the effect upon the critical frequencies of Z? What is the effect upon the transient response obtained above?

[Circuit sketches: Prob. 41.]

42. The element values in the circuits shown in the sketches are in ohms and henrys. Determine the expression for the impedance Z(s) in each case, and put it into the normal form of a quotient of frequency factors. Plot the critical frequencies in the s plane appropriate to each Z. Do the results suggest anything of interest or possible practical value? If the frequency scale is stretched so that ω = 1 becomes ω = 10⁶, what do the element values become?

[Circuit sketches: Prob. 42.]

43. Find networks that are dual to those given in Prob. 41; that is, ones that will have reciprocal Z values. If a unit voltage step is applied to either one of these, what is the resultant transient current (using the results found in Prob. 41, of course)? Find the new element values corresponding to a stretch of the frequency scale that shifts ω = 1 to ω = 1000.

44. Find the networks that are dual to those given in Prob. 42; that is, those that will have reciprocal Z values. Write their Z functions as a quotient of frequency factors.
Find the revised element values in these networks corresponding to a stretch of the frequency scale that shifts ω = 1 to ω = 10⁶. How do the critical frequencies change?

45. For the circuits of Prob. 41 find networks which, when placed respectively in series with each given network, will yield a resultant impedance equal to unity at all frequencies; that is to say, find (by the method discussed in the text) those networks having complementary impedances. May these networks be used interchangeably to form constant-resistance resultants?

46. Find networks that are complementary to those given in Prob. 42.

47. The element values in the networks shown are in ohms, henrys, and farads. Find the expressions for Z1(s) and Z2(s) as quotients of frequency factors. For each impedance, sketch the locations of its critical frequencies in the s plane. Form Z1 + Z2. How are these impedances related? Compute the real and imaginary parts of these impedances, and sketch (neatly) versus ω for the range 0 < ω < 3, putting both real parts on one sheet and both imaginary parts on another.

[Circuit sketches: Prob. 47.]

48. For the networks of Prob. 47 determine the transfer impedances Z12 = E2/I1 in the form of quotients of frequency factors, and sketch the critical frequencies in the s plane. Compute the squared magnitude of Z12(jω) in each case, and sketch versus ω on the same sheet for the range 0 < ω < 3. Compare with the real-part plots of Prob. 47.

49. Obtain the duals of the networks given in Prob. 47, and combine these so as to yield a constant-resistance combination. Using the results of Prob. 48, what are the transfer admittances Y12 = I2/E1 for the dual networks, and what are their squared magnitudes as functions of ω? Does the constant-resistance combination suggest any practical application?
How would you revise this resultant network corresponding to a stretch of the frequency scale that puts the point ω = 1 at ω = 2π × 1000?

50. The admittance of a series RLC circuit has the form

Y(s) = s/[L(s - s1)(s - s2)]

with s1 = -0.1 + j1, s2 = -0.1 - j1. If you drew the resonance curve for this circuit, what would be the resonance frequency and the width of the curve at its half-power points? What is the Q of the circuit? What are the values of its parameters in ohms, henrys, and farads? How do these parameter values change if the frequency scale is stretched by a factor 10,000 (so as to make the resonance frequency 10,000 times higher)? How do the critical frequencies change? How does the width of the resonance curve at the half-power points change, and what is the effect upon Q? How do the parameter values change if Y is to become 1000 times larger (at all frequencies)? Does this change have any effect upon the shape of the resonance curve or upon Q?

Returning to the original situation, suppose the real parts only of the critical frequencies are changed from -0.1 to -0.01; what are (a) the resonance frequency, (b) the width of the resonance curve at the half-power points, (c) the Q of the circuit, (d) the parameter values?

51. For the circuit shown in the sketch, show that the admittance is given by the expression

Y(s) = C(s - s1)(s - s2)/(s - s3)

and determine the critical frequencies s1, s2, s3 in terms of the parameters R, L, C, G.

[Circuit sketch: Prob. 51.]

If s1 = -0.1 + j10, s2 = -0.1 - j10, s3 = -0.1, what are the values of R, L, and G relative to C? If the circuit is driven by a current source and a resonance curve is taken for the voltage, what are the resonance frequency and the width at the half-power points? What is the value of Q?
What is the magnitude of the impedance 5 per cent above or below resonance relative to its value at resonance? Suppose the values of R and G are changed to R′ and G′, keeping the quantity (R′/L) + (G′/C) = (R/L) + (G/C); what is the net effect upon the impedance or admittance? If the Q of the circuit is large, is this net effect significant so far as the resonance behavior is concerned?

52. A circuit of the sort shown in Prob. 51 but with G = 0 is to be designed to have an impedance with a maximum absolute value of 100,000 ohms at a frequency of 1.5 × 10⁶ cycles per second. At frequencies 10 per cent above and below resonance, the impedance magnitude should be not more than one-tenth of 1 per cent of its resonance value. What are the appropriate parameter values? What is the Q of this circuit? Suppose the data are changed by requiring that the impedance magnitude need not be smaller than 1 per cent of its resonance value at 10 per cent above or below resonance; what then are the answers to the above questions?

53. In the circuit shown (element values in ohms, henrys, farads), the current and voltage sources are ...

[Circuit sketch: Prob. 53.]

½IE* = Pav + j2ω(Vav - Tav)    (71)

Hence one can express the admittance as

Y(ω) = [2Pav + j4ω(Vav - Tav)]/|E|²    (72)

and the impedance as

Z(ω) = [2Pav + j4ω(Tav - Vav)]/|I|²    (73)

If the functions Pav, Vav, Tav in Eq. 72 are assumed to be evaluated for E = 1 volt, then the admittance Y(ω) is expressed explicitly in terms of these power and energy functions. A similar interpretation may be given the impedance expression 73 on the tacit assumption that Pav, Tav, Vav are evaluated for I = 1 ampere.
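Eq. 73 can be checked directly on the series RLC circuit. The following sketch (in Python; the element values and the drive frequency are illustrative assumptions) computes Pav, Tav, and Vav for a unit current and confirms that the energy expression reproduces R + jωL + 1/(jωC).

```python
# Check of Eq. 73: impedance from average power and stored energies.
R, L, C = 3.0, 2.0, 5.0e-3        # illustrative element values
w = 7.0                            # illustrative drive frequency
I = 1.0 + 0j                       # take I = 1 ampere, as the text suggests

Pav = R * abs(I)**2 / 2            # average power dissipated
Tav = L * abs(I)**2 / 4            # average magnetic stored energy
Vav = abs(I)**2 / (4 * w**2 * C)   # average electric stored energy

Z_energy = (2 * Pav + 4j * w * (Tav - Vav)) / abs(I)**2   # Eq. 73
Z_direct = R + 1j * w * L + 1 / (1j * w * C)

assert abs(Z_energy - Z_direct) < 1e-12
# When Tav = Vav the imaginary part vanishes: the resonance condition.
```

Eq. 72 may be verified in the same way for the admittance, with the functions evaluated for E = 1 volt.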
In connection with the simple RLC circuit, these results are only of nominal interest since the conventional expressions for Y(ω) and Z(ω) in this case are even more compact than the relations 72 and 73, and so it is only the novelty of seeing these functions expressed in terms of power and energy that makes them interesting. It is significant to mention that Eqs. 72 and 73 are found to apply as well to linear passive networks of arbitrary complexity,* the expressions for Pav, Tav, and Vav being correspondingly more elaborate. In terms of these results one may see again that a condition of resonance implies Tav = Vav. That is to say, when the average energies stored by the electric and magnetic fields are equal, the impedance or admittance at the driving point reduces to a real quantity; the system is in resonance. Conversely, whenever the driving-point impedance or admittance has a zero imaginary part, then one may conclude that the average electric and magnetic stored energies are equal; the power factor is unity, and the reactive power is zero. Since the quantities Pav, Tav, Vav are implicit functions of the frequency ω, the expressions 72 and 73 are not useful in the study of Y(ω) or Z(ω) as functions of ω except in some very special circumstances. A case in point is the consideration of the behavior of Z(ω) in the vicinity of a resonance frequency. In the simple RLC circuit considered here, Eq. 64 shows that Tav/|I|² is a constant. In more elaborate circuits one finds that the current ratios throughout the network are almost constant over any frequency range near a resonance point, and hence that Tav/|I|², which depends only upon the current distribution, is in general almost constant in the vicinity of resonance. Since Vav must equal Tav for ω = ω0, Eq. 63 shows that we can write for this vicinity†

Vav ≅ Tav ω0²/ω²    (74)

where ω0 is the resonance frequency in question, and thus have in place of Eq.
73

Z(ω) ≅ [2Pav + j4Tav(ω² - ω0²)/ω]/|I|²    (75)

For values of ω near ω0, one may use the approximation

(ω² - ω0²)/ω = (ω + ω0)(ω - ω0)/ω ≅ 2(ω - ω0)    (76)

* See Art. 8, Ch. 10.
† For the simple RLC circuit treated here, this expression, as well as the one given by Eq. 75, is of course exact, but in more general situations these are approximate relations which are, however, very nearly correct throughout any pronounced resonance vicinity.

356 ENERGY AND POWER IN THE SINUSOIDAL STEADY STATE

and thus obtain for the impedance Z(ω) the following explicit function valid near ω0

Z(ω) ≅ [2Pav + j8Tav(ω - ω0)]/|I|² = R + jX    (77)

R = 2Pav/|I|²,  X = (8Tav/|I|²)(ω - ω0)    (78)

The expression for R (which is exact) checks with Eq. 52 as, of course, it should; the expression for X, through use of Eq. 64, checks with the approximate expression for the reactance of the RLC circuit given by Eq. 153 in Ch. 6. Again the significant feature about this result is that one finds it to apply generally for all low-loss networks. As pointed out in Ch. 6 and illustrated there in Fig. 19, the half-power points on the associated resonance curve lie where X = ±R or, using Eqs. 78, where

(ω - ω0) = ±Pav/4Tav    (79)

Hence the radian-frequency increment W between the half-power frequencies (width of the resonance curve) becomes

W = Pav/2Tav    (80)

and the Q of the circuit is found to be expressible as

Q = ω0/W = 2ω0Tav/Pav    (81)

The behavior of T vs. time shown in Fig. 1 (applying to the simple RLC circuit) is found to be representative of any low-loss system near resonance. Thus, 2Tav = Tpeak, and Eq.
81 can be written

Q = ω0Tpeak/Pav = 2πTpeak/(t0Pav) = 2πTpeak / (loss per cycle) = 2πVpeak / (loss per cycle)   (83)

since at or near resonance the stored energy merely swaps back and forth between the electric and magnetic fields, and so the peak value of this energy is the same whether expressed electrically or magnetically. The loss per cycle clearly equals the average rate of loss (Pav) times the period t0 = 2π/ω0.

Thus the factor Q which characterizes the critical behavior of a low-loss network near resonance may be computed entirely on an energy basis. The result 83 is useful not only because it provides an independent approach to the computation of this important figure of merit (an approach that is found to be usable in situations where parameter calculations are difficult or not feasible) but also because it provides an excellent basis for describing what is meant by a "low-loss" or "high-Q" system. Namely, it is one in which the loss per cycle is small compared with the peak value of the total stored energy. In order to obtain a circuit with an extremely sharp resonance curve, one must strive to obtain as large an energy storage as possible relative to the associated loss per cycle.

7 Computation of the Energy Functions for More Complex Networks

When the network under consideration has several inductive and capacitive branches, the expressions for the total instantaneous stored energies T and V are obtained through simply summing the relations 40 and 41 over all pertinent branches.* Symbolically we may indicate this procedure by writing

T = ¼ Σk Lk|Ik|² + ¼ Re[ε^(j2ωt) Σk LkIk²]   (84)

V = (1/4ω²) Σk Sk|Ik|² − (1/4ω²) Re[ε^(j2ωt) Σk SkIk²]   (85)

In Eq. 84, Ik denotes the vector current in an inductive branch having the inductance Lk, and the summation extends over all inductive branches in the network. In Eq.
85, Ik denotes the vector current in a capacitive branch having the elastance (reciprocal capacitance) Sk, and the summation extends over all capacitive branches in the network.

The first terms in Eqs. 84 and 85 are Tav and Vav respectively for the total network. Note that the sums yielding these quantities involve the squared absolute values of the branch currents, while the second terms in Eqs. 84 and 85, which are double-frequency sinusoids, involve the squared complex values of the branch currents. The sums in these terms, therefore, involve complex addition (not merely the addition of absolute values), and it is the angle of the resultant complex number that determines the time phase of the pertinent sinusoid. Since the sum of a set of complex values has a resultant magnitude that is always less than or at most equal to the sum of the absolute values of this set of complex numbers, it is clear that the amplitude of the sinusoidal component of either T or V is in general less than Tav or Vav respectively, and can equal this constant component only if all squared branch currents are in phase, a condition that exists in all lossless networks (for a single sinusoidal excitation) and is nearly attained in low-loss networks operating at or near a resonance frequency.

* Mutual coupling between inductive branches is here assumed to be absent. A treatment not subject to this restriction is given in Art. 6 of Ch. 10.

In computing V it is sometimes more convenient to do so in terms of the branch voltages instead of the branch currents. Since such a branch voltage is related to its current by the expression

Ek = SkIk/jω   (86)

so that

SkIk²/ω² = −CkEk²   (87)

we see that Eq. 85 may be written

V = ¼ Σk Ck|Ek|² + ¼ Re[ε^(j2ωt) Σk CkEk²]   (88)

which looks like Eq.
84 except for an interchange of E with I and C with L, as we might have predicted through use of the principle of duality.

8 Some Illustrative Examples

In order to show how these results are applied to a specific circuit, consider the one in Fig. 5. If we denote by Ek the voltage drop in a branch in which the current is Ik, and assume E4 = 1 volt, then the following sequence of calculations for an assumed ω = 1 radian per second is self-explanatory:

I3 = I4 = 1 + j0,   E3 = j1
E2 = E3 + E4 = 1 + j1,   I2 = jE2 = −1 + j1   (89)
I1 = I2 + I3 = j1
E1 = jI1 = −1,   E0 = E1 + E2 = j1

Fig. 5. Circuit to which the computations 89 are pertinent. (Element values in henrys, farads, and ohms.)

From these values we readily have

I1² = −1,   |I1|² = 1
I2² = −j2,   |I2|² = 2   (90)
I3² = 1,   |I3|² = 1

and so Eqs. 84 and 85 yield

T = ½ + 0·cos 2t,   V = ½ − ½ sin 2t   (91)

Since Pav = ½|I4|², we then have

Pav = ½ watt,   Tav = Vav = ½ joule   (92)

We see that the circuit is evidently not a low-loss system, for the stored energies are not large compared with the loss. Although the circuit is in resonance, Tpeak = ½ is not equal to Vpeak = 1. There is no point in computing a Q since it would have little meaning anyway. It is interesting to find the impedance from the energy functions according to Eq. 73; thus

Z = (1 + j4×0)/1 = 1   (93)

which checks with Z = E0/I1 according to the values 89.

Now suppose we change the value of the resistance in Fig. 5 to 1/10th ohm. The computations 89 then become

I3 = I4 = 10 + j0,   E3 = j10
E2 = 1 + j10,   I2 = −10 + j1   (94)
I1 = j1,   E1 = −1,   E0 = j10

and in place of the results 90 we have

I1² = −1,   |I1|² = 1
I2² = 99 − j20,   |I2|² = 101   (95)
I3² = 100,   |I3|² = 100

The magnetic and electric stored energy functions, according to Eqs.
84 and 85, become

T = 25.25 + 24.75 cos 2t,   V = 25.25 − 25.25 cos(2t − 11.5°)   (96)

So now

Tav = Vav = 25.25 joules,   Pav = 5 watts   (97)

which looks a bit more like the results for a low-loss system should. We observe from Eqs. 96 also that

Tpeak = 50.0,   Vpeak = 50.5   (98)

The loss per cycle equals t0 × Pav = 2π × 5 = 10π joules, and so the factor Q, computed from either Eq. 81 or Eq. 83, yields

Q = 2π × 50 / 10π = 10

Although the resonance is not extremely sharp, it is well defined. The input impedance, according to Eq. 73, for this case becomes

Z = (10 + j4×0)/1 = 10

while from the values 94 we get Z = E0/I1 = 10, thus substantiating again the equivalence of these relationships.

Suppose now we restore the resistance to the value of 1 ohm, but add two more reactive branches as shown in Fig. 6. This procedure should increase the stored energy relative to the loss, and hence yield a sharper resonance.

Fig. 6. Circuit to which the computations 101 are pertinent. (Element values in henrys, farads, and ohms.)

In order to maintain resonance at ω = 1 radian per second, the first inductance now needs to be 1/2 henry as shown. This result is easily arrived at through first computing the currents in all of the other branches, following the pattern used above, and then noting the value of the first inductance needed to make Tav = Vav. The sequence of calculations appropriate to this circuit, assuming E6 = 1 volt, reads

I6 = I5 = 1 + j0,   E5 = j1
E4 = 1 + j1,   I4 = −1 + j1,   I3 = j1   (101)
E3 = −1,   E2 = j1,   I2 = −1
I1 = I2 + I3 = −1 + j1,   E1 = −(½) − j(½)

From these we get

I1² = −j2,   |I1|² = 2
I2² = 1,   |I2|² = 1
I3² = −1,   |I3|² = 1
I4² = −j2,   |I4|² = 2
I5² = 1,   |I5|² = 1

and Eqs.
84 and 85 then give

T = ¾ + ¼ sin 2t
V = ¾ − (√5/4) cos(2t − 63.5°)   (103)

Since Pav = 1/2 watt, and the loss per cycle is 3.14 joules, it is clear that this situation, although somewhat better than the one in the first example above, is still not a low-loss case. Thus Tpeak = 1 is only moderately equal to Vpeak = 1.31. If we compute a Q at all, it is better to use Eq. 81, which gives

Q = 2 × (¾) / (½) = 3   (104)

For the input impedance we have, using Eq. 73,

Z = (1 + j4×0)/2 = ½   (105)

while from the values 101 we get

Z = E0/I1 = (−½ + j½)/(−1 + j1) = ½   (106)

These examples show that it is a straightforward matter to compute Tav, Vav, Pav from a given current distribution. Since the latter or its equivalent must in any event be determined in the course of an impedance computation, it turns out that it is no more tedious to find the impedance in terms of the energy functions than in the normal manner. The result in terms of energy functions contains more information. For example, if Tav ≠ Vav but Tav − Vav is small compared with either Tav or Vav, then we can conclude that the frequency considered is near a pronounced resonance, especially if Pav is small compared with either Tav or Vav. Through making a single computation at a resonance frequency, we are able to perceive the entire character of the resonance curve, which is much more than the value of Z at resonance can tell us. To compute Q in the normal manner, we must compute many values of Z near resonance and plot a curve. In terms of energy considerations we get the same information from a single calculation made at the resonance frequency.
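The ladder computations in the examples above are easy to check numerically. The sketch below redoes the Fig. 5 example and its 1/10-ohm variant with complex arithmetic; the topology is inferred from the sequence of calculations 89, and it is assumed, consistently with the worked results 93, 100, and 105, that Eq. 73 has the form Z = [2Pav + j4ω(Tav − Vav)]/|I|².

```python
import math

j = 1j

def ladder(R):
    """Work back from E4 = 1 V through the Fig. 5 ladder at w = 1 rad/s."""
    E4 = 1 + 0j
    I4 = E4 / R          # resistive branch
    I3 = I4
    E3 = j * I3          # 1-H inductor (wL = 1)
    E2 = E3 + E4
    I2 = j * E2          # 1-F capacitor (wC = 1)
    I1 = I2 + I3
    E1 = j * I1          # 1-H inductor
    E0 = E1 + E2
    # Eqs. 84, 85: branches 1 and 3 are inductive, branch 2 is capacitive
    Tav = (abs(I1) ** 2 + abs(I3) ** 2) / 4
    Vav = abs(I2) ** 2 / 4
    Pav = R * abs(I4) ** 2 / 2
    Z_energy = (2 * Pav + 4j * 1.0 * (Tav - Vav)) / abs(I1) ** 2
    return Tav, Vav, Pav, Z_energy, E0 / I1

# First example (R = 1 ohm): Tav = Vav = 1/2 J, Pav = 1/2 W, Z = 1
Tav, Vav, Pav, Z_e, Z_d = ladder(1.0)

# Second example (R = 1/10 ohm): Tav = Vav = 25.25 J, Pav = 5 W, Z = 10
Tav2, Vav2, Pav2, Z_e2, Z_d2 = ladder(0.1)
Tpeak2 = 50.0                                       # from Eq. 96
Q2 = 2 * math.pi * Tpeak2 / (2 * math.pi * Pav2)    # Eq. 83; loss/cycle = t0 * Pav with w0 = 1
```

Both routes to the impedance (the energy functions and E0/I1) agree, and the energy route delivers Q as well, from the same single computation.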
Another way of expressing these thoughts is to call attention to the fact that, when we calculate the impedance of the series RLC circuit through noting the values of the resistance R, the inductive reactance Lω, the capacitive reactance −1/Cω, and the net reactance X = Lω − (1/Cω), their relative magnitudes not only enable us to see whether the frequency in question is at or near resonance but also determine the character of the resonance curve. All this information is ours for the trouble of making a calculation at only one frequency. In more elaborate circuits such as those shown in Figs. 5 and 6, we are not in a position to get this much per unit of computing effort unless we avail ourselves of the technique of expressing impedance in terms of energy functions, for this scheme virtually reduces the impedance of any circuit to the basic form that it has for the series RLC circuit.

1. Given e1(t) = 100 cos 377t ...

... l31, l41, etc., experimentally, each determination is concerned with one pair of coils only and can wholly ignore the presence of the others (except to see to it that they remain open-circuited during the experiment so that there will be no other nonzero di/dt's except the one specifically intended to be nonzero). For this reason the determination of the mutual-inductance coefficients for a large group of coils is every bit as simple and straightforward as it is for just two coils, because one considers only two coils at a time and the others are meanwhile ignored. For a chosen set of reference arrows on the coils, as shown in Fig. 4, the set of self- and mutual-inductance coefficients is completely fixed as to both sign and magnitude.
Specifically, if i1, i2, i3, i4 are the coil currents and v1, v2, v3, v4 are the voltage drops, both with regard to the same set of reference arrows, then we can relate these currents and voltages through the equations

v1 = l11(di1/dt) + l12(di2/dt) + l13(di3/dt) + l14(di4/dt)
v2 = l21(di1/dt) + l22(di2/dt) + l23(di3/dt) + l24(di4/dt)   (30)
v3 = l31(di1/dt) + l32(di2/dt) + l33(di3/dt) + l34(di4/dt)
v4 = l41(di1/dt) + l42(di2/dt) + l43(di3/dt) + l44(di4/dt)

Equations 30 may formally be integrated with respect to time, yielding

∫v1 dt = l11i1 + l12i2 + l13i3 + l14i4
∫v2 dt = l21i1 + l22i2 + l23i3 + l24i4   (31)
∫v3 dt = l31i1 + l32i2 + l33i3 + l34i4
∫v4 dt = l41i1 + l42i2 + l43i3 + l44i4

The quantities involved here are flux linkages (since their time derivatives are voltages). These equations may be solved for the coil currents in terms of the flux linkages by any algebraic process applying to the solution of simultaneous linear equations (such as the determinant method), yielding

i1 = γ11ψ1 + γ12ψ2 + γ13ψ3 + γ14ψ4
i2 = γ21ψ1 + γ22ψ2 + γ23ψ3 + γ24ψ4   (32)
i3 = γ31ψ1 + γ32ψ2 + γ33ψ3 + γ34ψ4
i4 = γ41ψ1 + γ42ψ2 + γ43ψ3 + γ44ψ4

in which the flux linkages are denoted by

ψk = ∫vk dt

and the γik denote the numerical coefficients found in the process of solving Eqs. 31 for the ik's. For example, if the determinant of the coefficients in 31 is denoted by

Δ = | l11 ··· l14 |
    |  ···        |
    | l41 ··· l44 |

and its cofactors by Δik, then by Cramer's rule

γik = Δki/Δ

Whether the student completely understands the details of solving simultaneous equations is at the moment of little importance. The point in writing these things down here is rather to be able to call attention to the fact that one can (through well-defined algebraic methods) express the currents in the set of mutually coupled coils (Fig.
4) in terms of their voltage drops (specifically, in terms of the voltage integrals) as straightforwardly as one can express the voltage drops in these coils in terms of their currents (specifically, in terms of the current derivatives). The latter is done in Eqs. 30, the former in Eqs. 32. In Eqs. 30 the coefficients are the self- and mutual inductances for the given group of coils; in Eqs. 32 the coefficients are the self- and mutual reciprocal inductances for the same group of coils. The latter coefficients are related to the former in a manner expressed by Eqs. 34 and 35, namely, as are the coefficients in inverse sets of simultaneous linear equations. Thus, while the reciprocal inductance coefficients γik are not simply the respective reciprocals of the inductance coefficients lik, they are nevertheless related in a one-to-one rational algebraic manner, which, once understood, is simple and straightforward in its application (although tedious if the number of coefficients is large).

Fig. 5. Relevant to the determination of the algebraic sign of a mutual inductance.

Before continuing with the discussion of how the present relations are used in the process of setting up equilibrium equations when a group of mutually coupled coils such as those in Fig. 4 is imbedded in a given network, a number of additional remarks may be in order with regard to the determination of algebraic signs for mutual inductances in situations where the relative directions of coil windings and mutual magnetic fields are indicated schematically. A situation of this sort is shown in Fig. 5.
Here the preferred path taken by the magnetic field is indicated as a closed rectangular core structure (which may be the iron core of a transformer), and the windings of the coils are drawn in such a manner that one recognizes the directions in which they encircle the core. If a battery is applied to the left-hand winding so as to make the indicated terminal positive, current in this winding increases in the arrow direction, and, according to the right-hand screw rule, the flux φ in the core increases in the direction shown by its arrow.

MUTUAL INDUCTANCE AND HOW TO DEAL WITH IT 379

By the rule for induced voltages (which is a left-hand screw rule because of Lenz's law), we see that the increasing core flux φ induces a voltage in the right-hand winding so as to make the bottom terminal plus with respect to the top. If we place a reference arrow on the right-hand winding as indicated, we note that the induced voltage there is a voltage rise or a negative drop. Hence, for the reference arrows shown, the mutual inductance is seen to be numerically negative; it becomes positive, however, if the reference arrow on either winding (not both) is reversed.

We may say in this example that the plus-marked ends of the two windings are corresponding ends in the sense that they will always become plus together or minus together when a voltage is induced in one winding by a changing current in the other, regardless of which winding is doing the inducing. Since the marked ends may become negative as well as positive, the plus sign might be regarded as inappropriate.

Fig. 6. Relevant to the sign determination of a set of three mutual inductances.

For this reason many writers (and apparatus manufacturers) prefer to mark corresponding winding ends simply with dots instead of plus signs, and this is a widely accepted practice. Note, however, that this scheme of relative polarity marking cannot always be used without modification when more than two windings are associated with the same magnetic structure, as the following discussion of the example in Fig. 6 will show.

If we assume the top terminal in winding 1 to be positive with respect to the bottom one, current enters this coil and increases in the arrow direction, thus producing a flux that increases upward in the core of winding 1 and downward in the cores of windings 2 and 3. From their winding directions relative to their cores, one deduces that it is the bottom ends of coils 2 and 3 that become positive. Hence we would place a dot at the top of coil 1, and corresponding dots at the bottoms of coils 2 and 3. If we now move the source from coil 1 to coil 2 and make the bottom terminal (the dot-marked one) positive, we see that flux increases downward in the core of coil 2 and hence upward in the cores of coils 1 and 3. Thus the top terminals of both of these coils become positive. For coil 1 this terminal is the dot-marked one, but for coil 3 it isn't. Therefore, it becomes clear that dot-marked terminals can in general indicate relative polarities correctly only for a specific pair of coils. One would have to use a different set of dots for the pair of coils 2 and 3 from those that are already placed upon these coils in pairing them separately with coil 1.
While the method of marking relative polarities of mutually coupled coils by means of dots is thus seen to become prohibitively confusing where many coupled coils are involved, the determination of a set of self- and mutual-inductance coefficients consistent with assumed reference arrows remains simple and unambiguous, as already explained. In the example of Fig. 6 we clearly find all three mutual-inductance coefficients l12, l13, l23 numerically negative. Once these are known, the volt-ampere relations for the group of coils are unambiguously written down as is done in Eqs. 30 or 32.

5 Coupling Coefficients

Suppose we consider the simple case of just two mutually coupled coils, and let the associated inductance coefficients be denoted by l11, l22, l12 = l21. The volt-ampere relations read

v1 = l11(di1/dt) + l12(di2/dt)
v2 = l21(di1/dt) + l22(di2/dt)   (36)

If we multiply these equations respectively by i1 and i2 and add, we obtain

v1i1 + v2i2 = l11i1(di1/dt) + l12i1(di2/dt) + l21i2(di1/dt) + l22i2(di2/dt)   (37)

which we may alternatively write as

v1i1 + v2i2 = dT/dt   (38)

with

2T = l11i1² + l12i1i2 + l21i2i1 + l22i2²,   that is,   T = ½(l11i1² + 2l12i1i2 + l22i2²)   (39)

Equation 38 states a simply understandable physical fact, namely, that the instantaneous power absorbed by the pair of coils (v1i1 + v2i2) is equal to the time rate of change of the energy T stored in the associated magnetic fields, the latter being given by expression 39. Algebraically this expression is homogeneous and quadratic in the current variables i1 and i2 (it is known as a quadratic form). Physically it is clear that T must be positive no matter what values (positive or negative) the currents i1 and i2 may have. Mathematicians have found that this requirement on T imposes conditions on the coefficients lik.
Specifically one can show that, if 39 is to be a positive definite quadratic form, it is necessary and sufficient that l11 > 0, l22 > 0, and in addition

l11l22 − l12² > 0   (40)

which can be written

|l12|/√(l11l22) < 1   (41)

Since the quantity

k = |l12|/√(l11l22)   (42)

is defined as the coupling coefficient for the pair of coils in question, the requirement that the associated stored energy be positive for all values of the coil currents leads to the condition

|k| < 1   (43)

The limiting condition expressed by |k| = 1, which is approachable but never attainable in a pair of physical coils, is spoken of as a condition of perfect coupling or close coupling. Physically it represents a situation in which all the flux links all of the windings of both coils. If the coupling coefficient k (Eq. 42) is derived from the standpoint of flux linkages, condition 43 is arrived at on the basis that the state of perfect coupling is manifestly an upper limit. A difficulty with this method of deriving condition 43 is that it does not lend itself to generalization, while the method based upon stored energy is readily extended to any number of coupled coils.

A logical extension of the reasoning leading from Eq. 36 to Eq. 39 shows that the stored energy is in general expressible as*

2T = l11i1² + l12i1i2 + ··· + l1ni1in
   + l21i2i1 + l22i2² + ··· + l2ni2in
   + ···
   + ln1ini1 + ln2ini2 + ··· + lnnin²

Since the self-inductances l11, l22, etc. are positive in any case, the conditions assuring that T is positive are expressed by stating that the determinant

| l11 l12 ··· l1n |
| l21 l22 ··· l2n |
|  ···            |
| ln1 ln2 ··· lnn |

and all minors formed through cancelation of the first row and column, the first two rows and columns, the first three rows and columns, etc. (called the principal minors) are positive.

* See Art. 6, Ch. 10.
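A small numeric illustration of conditions 40–43, using the equivalent leading-minor (Sylvester) form of the positive-definiteness test; the inductance values below are assumed for illustration only.

```python
import math

def det(m):
    # Laplace expansion along the first row; adequate for the small matrices here
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** c * m[0][c] * det([row[:c] + row[c + 1:] for row in m[1:]])
               for c in range(len(m)))

def leading_minors(m):
    # determinants of the upper-left 1x1, 2x2, ..., nxn submatrices
    return [det([row[:n] for row in m[:n]]) for n in range(1, len(m) + 1)]

def energy(m, i):
    # 2T = sum over i,k of l_ik * i_i * i_k, so T is half that sum
    n = len(m)
    return 0.5 * sum(m[a][b] * i[a] * i[b] for a in range(n) for b in range(n))

# Two coupled coils with assumed values l11 = 4 H, l22 = 9 H, l12 = -3 H
l11, l22, l12 = 4.0, 9.0, -3.0
pair = [[l11, l12], [l12, l22]]
k = abs(l12) / math.sqrt(l11 * l22)   # Eq. 42: coupling coefficient, here 0.5

# A three-coil inductance matrix (values likewise assumed) tested the same way
triple = [[3.0, 1.0, -1.0],
          [1.0, 4.0, 2.0],
          [-1.0, 2.0, 5.0]]
```

Since every leading minor of both matrices comes out positive, the stored energy T is positive for any nonzero set of coil currents; for the coil pair this is the same statement as |k| < 1.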
Although it is not the purpose of the present discussion to go deeply into matters of this sort, it is nevertheless useful to point out (wherever this can easily be done) what methods are available for extending our considerations to more elaborate situations.

6 Forming the Equilibrium Equations When Mutual Inductances Are Present

The procedure is most easily presented in terms of a specific example. For this purpose consider the network of Fig. 7, for which the equilibrium is to be formulated on the loop basis.

Fig. 7. A circuit for which the equilibrium equations are to be found on the loop basis. Numerical element values are in ohms and darafs. The coupled coils are characterized by the self- and mutual-inductance values in matrix 46.

So far as the resistance and elastance parameter matrices are concerned, there is no new problem presented here. Hence we need concern ourselves only with the formation of the inductance parameter matrix. In this regard we are given the three mutually coupled coils L1, L2, L3, which, for the reference arrows indicated, shall be characterized by the self- and mutual-inductance matrix

[lik] = |  2  −1   2 |
        | −1   3  −2 |   (46)
        |  2  −2   5 |

That is to say, the self-inductance of L1 is 2 henrys, the mutual between it and L2 is −1 henry, and so forth. If the voltage drops in these coils are denoted by v1, v2, v3, then, since the corresponding currents are respectively i1, (i1 − i2), and i2, we have

v1 = 2(di1/dt) − 1·d(i1 − i2)/dt + 2(di2/dt) = (di1/dt) + 3(di2/dt)
v2 = −1(di1/dt) + 3·d(i1 − i2)/dt − 2(di2/dt) = 2(di1/dt) − 5(di2/dt)   (47)
v3 = 2(di1/dt) − 2·d(i1 − i2)/dt + 5(di2/dt) = 0(di1/dt) + 7(di2/dt)

The total inductive voltage drop around loop 1 is v1 + v2, and that around loop 2 is −v2 + v3. From Eq. 47 this gives

v1 + v2 = 3(di1/dt) − 2(di2/dt)
−v2 + v3 = −2(di1/dt) + 12(di2/dt)   (48)

whence the loop inductance matrix is seen to be

|  3  −2 |
| −2  12 |

The fact that L12 must be equal to L21 serves as a partial check on the numerical work.

Now let us consider a simple example on the node basis. Let the network be that shown in Fig. 8. Here only the method of finding the ...

The instantaneous power delivered by this source is

p1 = e1i1 = |E1I1| cos ωt cos(ωt + φ) = (|E1I1|/2)[cos φ + cos(2ωt + φ)]   (84)

In the other two phases the voltages and currents are given by the expressions 83 advanced (or retarded) by 120° and 240° respectively; or we can say that the quantity (ωt) in 83 is replaced by (ωt ± 120°) and (ωt ± 240°) respectively. The corresponding expressions for instantaneous power in these phases are, therefore, the same as p1 in Eq. 84, except that the double-frequency terms become cos(2ωt + φ ± 240°) and cos(2ωt + φ ± 480°), so that

p1 + p2 + p3 = (|E1I1|/2)[3 cos φ + cos(2ωt + φ) + cos(2ωt + φ ± 240°) + cos(2ωt + φ ± 480°)]

Since the last three terms in this expression cancel, we have simply

p1 + p2 + p3 = (3|E1I1|/2) cos φ   (86)

The important part about this result is that the pulsating components in the several phases neutralize each other, so that the net instantaneous power is composed of the steady component alone. It is simply three times the average active power per phase. In three-phase rotating machinery this feature results in a steady torque rather than one containing a pulsating component. The practical advantage thus gained is significant.

1. Two inductances are characterized by the matrix

|  4  −3 |
| −3   · |

Find the value of the net inductance when they are connected in the ways shown in the diagrams (a) through (d).

... Find the impedance function Z(s), and sketch a network, giving element values, having this impedance.

26. For the circuit shown in the following sketch, find E2/I1 as a quotient of polynomials in s, and sketch the corresponding pole-zero configuration in the s plane.
Determine the analytic expression for the instantaneous output voltage e2(t) if the input current i1(t) is a unit step. (Element values in ohms, henrys, farads. Prob. 26.)

27. A given network has the transfer impedance

Z12(s) = 10/(s + 10)

When the input current i1(
Pythagorean Theorem

First draw a triangle; don't worry about the right angle, that comes later.

Now we'll worry about the right angle... Set the lengths of AB and BC to be x and y.

Open the numeric panel and add an expression. Type in distance(A,C)^2.

Add another expression. Type in x^2+y^2.

Change the numeric values of x and y.
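The point of the construction is that the two expressions stay equal as x and y vary, so long as the angle at B remains a right angle. A quick numeric stand-in for the geometry (coordinates chosen by me, with B at the origin and the legs along the axes):

```python
import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

checks = []
for x, y in [(3.0, 4.0), (5.0, 12.0), (2.5, 7.1)]:
    B = (0.0, 0.0)      # B is the right-angle vertex
    A = (x, 0.0)        # AB has length x
    C = (0.0, y)        # BC has length y
    # distance(A,C)^2 should equal x^2 + y^2 for every choice of x and y
    checks.append(abs(distance(A, C) ** 2 - (x ** 2 + y ** 2)))
```

Every entry of `checks` is (numerically) zero, mirroring what the two expressions in the numeric panel show as you change x and y.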
How to group even and odd nodes together in a linked list

In this article, we are going to learn how to rearrange the nodes in a singly linked list so that its odd elements come before its even elements. The basic idea is to split the list into two separate lists, one of odd elements and one of even elements, and then to append the even list to the odd list.

Complexity analysis

The time complexity of this algorithm is O(N), where N is the number of nodes in the input linked list.

Code example

#include <iostream>
#include "Node.h"
using namespace std;

// Rearrange a linked list such that all the odd elements come before the even elements
Node *rearrange(Node *head) {
    Node *current = head;
    Node *prevOdd = nullptr, *prevEven = nullptr;
    Node *evenHead = nullptr, *oddHead = nullptr;

    while (current != nullptr) {
        if (current->data % 2 == 1) {
            // add current to the odd elements list
            if (prevOdd == nullptr)
                oddHead = current;       // first odd node seen
            else
                prevOdd->next = current;
            prevOdd = current;
        } else {
            // add current to the even elements list
            if (prevEven == nullptr)
                evenHead = current;      // first even node seen
            else
                prevEven->next = current;
            prevEven = current;
        }
        current = current->next;
    }

    // prevEven->next may be nullptr or an odd node; cut it to prevent a loop.
    if (prevEven != nullptr)
        prevEven->next = nullptr;

    // Append the even elements to the odd elements; either list may be empty.
    if (prevOdd == nullptr)
        return evenHead;
    prevOdd->next = evenHead;
    return oddHead;
}

int main() {
    // arr holds the values used to build the list
    int arr[] = {1, 2, 4, 3, 5, 7, 8, 6, 9};
    int size = 9;
}

In the code above, we do the following:

• We traverse the list with current pointing at the node under examination.
• When current holds an odd value, we append it to the odd list: oddHead is set if that list is still empty; otherwise prevOdd->next is linked to current. In either case prevOdd is updated to point to current.
• When current holds an even value, we append it to the even list in the same way, using evenHead and prevEven.
• After the traversal, prevEven->next may be nullptr or may still point at an odd node; it is set to nullptr to prevent a loop.
• Finally, we append the even elements to the odd elements. Since oddHead or evenHead may point to an empty list, the function returns evenHead when there are no odd nodes at all.
Timber Retaining Wall Calculator Online

The Timber Retaining Wall Calculator is an online tool that simplifies the calculations needed to design and build a timber retaining wall. It takes into account factors such as the height, length, and thickness of the timber boards, as well as the total number of boards and support posts needed. By inputting these measurements, the calculator provides an estimate of the materials required, including the volume of backfill material, gravel, and soil. This tool is useful for homeowners, gardeners, and construction professionals alike, offering a clear overview of the project scope and material needs.

Formula of Timber Retaining Wall Calculator

The Timber Retaining Wall Calculator uses a set of formulas to calculate the necessary quantities of materials. Here is a breakdown of these formulas for those who prefer manual calculations or wish to understand the calculator's logic:

Height of Retaining Wall (H) = Desired height of the wall
Length of Retaining Wall (L) = Total length of the wall
Width of Retaining Wall (W) = Width of the wall (usually 6 inches)
Thickness of Timber Boards (T) = Thickness of each timber board (usually 6 inches)
Number of Timber Boards (N) = (H / T) + 1
Total Length of Timber Boards (TL) = N * L
Total Area of Timber Boards (A) = TL * W
Number of Support Posts (P) = (L / 4) + 1
Total Length of Support Posts (SL) = P * H
Volume of Backfill Material (V) = L * W * H
Total Amount of Gravel or Drainage Stone (G) = V * 0.1 (10% of the volume)
Total Amount of Soil (S) = V - G

These calculations are essential for estimating the quantity of materials needed and ensuring the structural integrity of the retaining wall.
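A direct transcription of these formulas as a sketch (the variable names are mine, and since the board and post counts come out fractional as written, the sketch rounds them up to whole units):

```python
import math

def timber_wall_estimate(H, L, W=0.5, T=0.5):
    """Material estimate per the formulas above; all lengths in feet.

    W and T default to 0.5 ft, i.e. the standard 6 inches.
    """
    N = math.ceil(H / T) + 1        # number of timber boards (rounded up)
    TL = N * L                      # total length of boards
    A = TL * W                      # total board area
    P = math.ceil(L / 4) + 1        # one post per 4 ft of length, plus one (rounded up)
    SL = P * H                      # total post length
    V = L * W * H                   # volume of backfill material
    G = 0.1 * V                     # gravel or drainage stone: 10% of the volume
    S = V - G                       # remaining soil
    return {"boards": N, "board_length": TL, "board_area": A,
            "posts": P, "post_length": SL,
            "backfill": V, "gravel": G, "soil": S}

# The worked example below: a wall 3 ft high and 10 ft long
est = timber_wall_estimate(H=3.0, L=10.0)
```

For the 3 ft by 10 ft example this gives 7 boards, 4 posts, 15 cubic feet of backfill, of which 1.5 cubic feet is gravel and 13.5 cubic feet is soil.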
General Terms and Calculations

Term/Measurement | Value/Conversion | Notes/Description
Timber Board Thickness | 6 inches (15.24 cm) | Standard thickness for timber boards used in retaining walls.
Width of Retaining Wall | 6 inches (15.24 cm) | Standard width for the construction of the wall itself.
Support Post Interval | 4 feet (1.22 meters) | Distance between support posts for structural integrity.
Volume of Backfill Material | L * W * H | Calculated in cubic feet or cubic meters for material estimate.
Gravel or Drainage Stone | 10% of Backfill Volume | Essential for drainage behind the wall to prevent water pressure buildup.
Soil Replacement | Volume of Backfill Material - Volume of Gravel | Amount of soil needed after accounting for gravel.
Weight of Soil per Cubic Yard | Approximately 2,200 lbs (997 kg) | Useful for estimating delivery needs and structural calculations.
Average Cost of Timber Board | Variable based on market and quality | Important for budgeting; consult local suppliers for current prices.
Conversion: Inches to Centimeters | 1 inch = 2.54 cm | For international projects where metric is preferred.

Example of Timber Retaining Wall Calculator

Imagine you want to build a retaining wall that is 3 feet high and 10 feet long. Assuming a standard width of 6 inches for the wall and timber boards that are 6 inches thick, here's how you would use the formulas:

1. Calculate the number of timber boards needed.
2. Determine the total length of timber boards.
3. Estimate the total area of timber boards.
4. Find out the number of support posts required.
5. Calculate the total length of support posts.
6. Estimate the volume of backfill material, gravel, and soil.

This example helps visualize the process and emphasizes the calculator's practical application in planning and executing a timber retaining wall project.

Most Common FAQs

What is the ideal thickness for timber boards in a retaining wall?

The ideal thickness typically depends on the wall's height and the soil's pressure.
However, 6 inches is commonly used for residential projects.

How many support posts do I need?
You generally need one support post for every 4 feet of wall length, plus one extra for the end of the wall.

Can I use the calculator for walls of any size?
Yes, the Timber Retaining Wall Calculator is versatile and can be adjusted for walls of various sizes, making it suitable for a wide range of projects.
{"url":"https://calculatorshub.net/construction/timber-retaining-wall-calculator/","timestamp":"2024-11-06T21:23:31Z","content_type":"text/html","content_length":"118930","record_id":"<urn:uuid:37a5cb79-55c0-4be1-978c-45990411b5f9>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00506.warc.gz"}
Which countries can I make transfers to (and how long does it take)?

Below is a list of countries we support. For the most up to date fees and rates, see our handy calculator. Please note, the settlement guidelines provided are calculated after your payment reaches our account and your transfer is updated to 'Awaiting Settlement'. For each currency, the linked pages cover fees, delivery times, sending limits, and whether you will need to verify your identity; for some currencies (CNY, DKK, MYR, NPR) there is also guidance on sending money to businesses, charities or trusts, and for EUR you can log in to see current rates.

• Australian Dollar (AUD)
• Bangladeshi Taka (BDT)
• Canadian Dollar (CAD)
• Chinese Yuan (CNY)
• Danish Krone (DKK)
• Euro (EUR)
• Fijian Dollar (FJD)
• Hong Kong Dollar (HKD)
• Indian Rupee (INR)
• Indonesian Rupiah (IDR)
• Malaysian Ringgit (MYR)
• Nepalese Rupee (NPR)
• New Zealand Dollar (NZD)
• Norwegian Krone (NOK)
• Philippine Peso (PHP)
• Polish Złoty (PLN)
• Samoan Tala (WST)
• Singapore Dollar (SGD)
• South African Rand (ZAR)
• South Korean Won (KRW)
• Sri Lankan Rupee (LKR)
• Swedish Krona (SEK)
• Thai Baht (THB)
• Pound Sterling (GBP)
• US Dollar (USD)
• Vietnamese Dong (VND)
{"url":"https://support.orbitremit.com/hc/en-us/articles/360005825334-Which-countries-can-I-make-transfers-to-and-how-long-does-it-take","timestamp":"2024-11-04T05:18:26Z","content_type":"text/html","content_length":"86936","record_id":"<urn:uuid:01643b38-998b-412f-af2e-f23f78c8c6b6>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00490.warc.gz"}
Search result: Catalogue data in Autumn Semester 2020

Computer Science Master
Master Studies (Programme Regulations 2020)
Major in Theoretical Computer Science
Elective Courses

Number | Title | Type | ECTS | Hours | Lecturers
252-1407-00L | Algorithmic Game Theory | W | 7 credits | 3V + 2U | P. Penna

Abstract: Game theory provides a formal model to study the behavior and interaction of self-interested users and programs in large-scale distributed computer systems without central control. The course discusses algorithmic aspects of game theory.

Learning objective: Learning the basic concepts of game theory and mechanism design, acquiring the computational paradigm of self-interested agents, and using these concepts in the computational and algorithmic setting.

Content: The Internet is a typical example of a large-scale distributed computer system without central control, with users that are typically only interested in their own good. For instance, they are interested in getting high bandwidth for themselves, but don't care about others, and the same is true for computational load or download rates. Game theory provides a mathematical model for the behavior and interaction of such selfish users and programs. Classic game theory dates back to the 1930s and typically does not consider algorithmic aspects at all. Only a few years back were algorithms and game theory first considered together, in an attempt to reconcile selfish behavior of independent agents with the common good. This course discusses algorithmic aspects of game-theoretic models, with a focus on recent algorithmic and mathematical developments. Rather than giving an overview of such developments, the course aims to study selected important topics in depth.
- Introduction to classic game-theoretic concepts.
- Existence of stable solutions (equilibria), algorithms for computing equilibria, computational complexity.
- Speed of convergence of natural game playing dynamics such as best-response dynamics or regret minimization.
- Techniques for bounding the quality-loss due to selfish behavior versus optimal outcomes under central control (a.k.a. the 'Price of Anarchy').
- Design and analysis of mechanisms that induce truthful behavior or near-optimal outcomes at equilibrium.
- Selected current research topics, such as Google's Sponsored Search Auction, the U.S. FCC Spectrum Auction, Kidney Exchange.

Lecture notes: Lecture notes will usually be posted on the website shortly after each lecture.

Literature: "Algorithmic Game Theory", edited by N. Nisan, T. Roughgarden, E. Tardos, and V. Vazirani, Cambridge University Press, 2008; "Game Theory and Strategy", Philip D. Straffin, The Mathematical Association of America, 5th printing, 2004. Several copies of both books are available in the Computer Science library.

Prerequisites / Notice: Audience: Although this is a Computer Science course, we encourage participation from all students who are interested in this topic. Requirements: You should enjoy precise mathematical reasoning. You need to have passed a course on algorithms and complexity. No knowledge of game theory is required.

Number | Title | Type | ECTS | Hours | Lecturers
401-3054-14L | Probabilistic Methods in Combinatorics | W | 6 credits | 2V | B. Sudakov

Abstract: This course provides a gentle introduction to the Probabilistic Method, with an emphasis on methodology. We will try to illustrate the main ideas by showing the application of probabilistic reasoning to various combinatorial problems.

Content: The topics covered in the class will include (but are not limited to): linearity of expectation, the second moment method, the local lemma, correlation inequalities, martingales, large deviation inequalities, Janson and Talagrand inequalities and pseudo-randomness.

Literature:
- The Probabilistic Method, by N. Alon and J. H. Spencer, 3rd Edition, Wiley, 2008.
- Random Graphs, by B. Bollobás, 2nd Edition, Cambridge University Press, 2001.
- Random Graphs, by S. Janson, T. Luczak and A. Rucinski, Wiley, 2000.
- Graph Coloring and the Probabilistic Method, by M. Molloy and B. Reed, Springer, 2002.

Number | Title | Type | ECTS | Hours | Lecturers
401-3901-00L | Mathematical Optimization | W | 11 credits | 4V | R. Zenklusen

Abstract: Mathematical treatment of diverse optimization techniques.

Learning objective: The goal of this course is to get a thorough understanding of various classical mathematical optimization techniques with an emphasis on polyhedral approaches. In particular, we want students to develop a good understanding of some important problem classes in the field, of structural mathematical results linked to these problems, and of solution approaches based on this structural understanding.

Content: Key topics include:
- Linear programming and polyhedra;
- Flows and cuts;
- Combinatorial optimization problems and techniques;
- Equivalence between optimization and separation;
- Brief introduction to Integer Programming.

Literature:
- Bernhard Korte, Jens Vygen: Combinatorial Optimization. 6th edition, Springer, 2018.
- Alexander Schrijver: Combinatorial Optimization: Polyhedra and Efficiency. Springer, 2003. This work has 3 volumes.
- Ravindra K. Ahuja, Thomas L. Magnanti, James B. Orlin: Network Flows: Theory, Algorithms, and Applications. Prentice Hall, 1993.
- Alexander Schrijver: Theory of Linear and Integer Programming. John Wiley, 1986.

Prerequisites / Notice: Solid background in linear algebra.

Number | Title | Type | ECTS | Hours | Lecturers
401-4521-70L | Geometric Tomography - Uniqueness, Statistical Reconstruction and Algorithms | W | 4 credits | 2V | J. Hörrmann

Abstract: Self-contained course on the theoretical aspects of the reconstruction of geometric objects from tomographic projection and section data.

Learning objective: Introduction to geometric tomography and understanding of various theoretical aspects of reconstruction problems.

Content: The problem of reconstruction of an object from geometric information like X-ray data is a classical inverse problem on the overlap between applied mathematics, statistics, computer science and electrical engineering.
We focus on various aspects of the problem in the case of prior shape information on the reconstruction object. We will answer questions on uniqueness of the reconstruction and also cover statistical and algorithmic aspects.

Literature:
- R. Gardner: Geometric Tomography
- F. Natterer: The Mathematics of Computerized Tomography
- A. Rieder: Keine Probleme mit inversen Problemen

Prerequisites / Notice: A sound mathematical background in geometry, analysis and probability is required, though a repetition of relevant material will be included. The ability to understand and write mathematical proofs is mandatory.
{"url":"https://www.vorlesungen.ethz.ch/Vorlesungsverzeichnis/sucheLehrangebot.view?abschnittId=88583&semkez=2020W&ansicht=2&lang=en&seite=1","timestamp":"2024-11-11T04:37:20Z","content_type":"text/html","content_length":"19177","record_id":"<urn:uuid:c7fffbdc-2f44-4d49-ac8e-8d24ee5afe03>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00860.warc.gz"}
Calculating Risk: A Guide to Determining Your Trade Size

This post contains affiliate links. If you use these links to register at one of the trusted brokers, I may earn a commission. This helps me to create more free content for you. Thanks!

Are you looking to get started with trading, but unsure of how much to invest in each trade? One of the most important considerations when trading is risk management, and a key aspect of that is determining the appropriate trade size based on your risk tolerance and account size. In this article, we'll take a closer look at how to calculate your risk before opening a trade.

What is Trade Size?

Trade size refers to the amount of money you are investing in a trade. It is determined by the number of units of the currency pair you are trading, and can vary based on your account size and risk tolerance. Trade size is a critical component of risk management, as it determines the potential profit or loss of a trade.

Calculating Risk

Before opening a trade, it is important to determine how much you are willing to risk on the trade. One common rule of thumb is to risk no more than 1-2% of your account balance on any given trade. For example, if you have a $10,000 account balance, you may choose to risk no more than $100-$200 per trade.

To calculate the appropriate trade size based on your risk tolerance, you'll need to know the stop loss level for the trade. The stop loss is the level at which you will exit the trade if it moves against you, in order to limit your losses. A stop loss should be placed at a level that is both reasonable and allows for sufficient room for the trade to move in your favor.

Once you have determined your stop loss level, you can calculate the appropriate trade size based on your risk tolerance. The formula for calculating trade size is as follows:

Trade Size = (Account Balance x Risk Percentage) / (Stop Loss Distance x Pip Value)

Let's break down each of the components of this formula.
Account Balance

Your account balance is the amount of money you have available to trade in your account. This should be the total amount of funds in your trading account, including any open trades.

Risk Percentage

The risk percentage is the percentage of your account balance that you are willing to risk on the trade. As mentioned earlier, a common rule of thumb is to risk no more than 1-2% of your account balance on any given trade.

Stop Loss Distance

The stop loss distance is the distance between your entry price and your stop loss price, measured in pips. This distance should be determined based on technical analysis and the volatility of the currency pair you are trading.

Pip Value

The pip value is the amount of money you stand to gain or lose for each pip movement in the currency pair you are trading. The pip value varies based on the currency pair and the currency of your account.

Once you have determined these components, you can use the formula to calculate the appropriate trade size. Let's walk through an example.

Example Calculation

Suppose you have a $10,000 account balance and are willing to risk 1% of your account on a trade. You are trading the EUR/USD currency pair, and your stop loss is 50 pips away from your entry price. The pip value for the EUR/USD pair in a USD account is $10 per pip (per standard lot). Using the formula, we can calculate the appropriate trade size as follows:

Trade Size = ($10,000 x 0.01) / (50 x $10) = $100 / $500 = 0.2

This means that you should enter a trade of 0.2 standard lots (20,000 units) of the EUR/USD currency pair in order to risk 1% of your account balance with a 50 pip stop loss.

In conclusion, computing and managing risk is a crucial part of successful trading. By calculating your potential risk and reward, setting stop-loss and profit target levels, and using position sizing to control your exposure, you can effectively manage your risk and increase your chances of success in the markets.
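The formula translates directly into code. Here is a minimal Python sketch (the function name and structure are our own, not from any trading library); note that with a $10,000 balance, 1% risk, a 50-pip stop and a $10-per-pip value, the arithmetic works out to $100 / $500 = 0.2 lots:

```python
def trade_size(account_balance, risk_percentage, stop_loss_pips, pip_value):
    """Trade Size = (Account Balance * Risk %) / (Stop Loss Distance * Pip Value).

    pip_value is the money value of one pip per lot, so the result is in lots.
    """
    risk_amount = account_balance * risk_percentage  # dollars put at risk
    risk_per_lot = stop_loss_pips * pip_value        # dollars lost per lot if stopped out
    return risk_amount / risk_per_lot

# The worked example: $10,000 balance, 1% risk, 50 pip stop, $10/pip per standard lot
print(trade_size(10_000, 0.01, 50, 10))  # 0.2 standard lots
```

Keeping the calculation in a small function like this makes it easy to re-run whenever the account balance or stop distance changes.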
It is important to remember that risk management should be an ongoing process, and you should regularly review and adjust your risk management strategies as needed. Additionally, it is essential to have a solid understanding of the markets and the instruments you are trading, as well as a disciplined approach to executing your trades. While there is no foolproof way to eliminate risk entirely, by following these principles and guidelines, you can minimize your risk and increase your chances of success as a trader. Remember to always prioritize risk management in your trading strategy and stay disciplined in your approach to the markets. Still have no trading account yet? Open an account at one of my trusted brokers suitable for algorithmic trading completely for free and start testing today!
{"url":"https://www.getforexstrategy.com/calculating-risk-a-guide-to-determining-your-trade-size/","timestamp":"2024-11-08T08:48:07Z","content_type":"text/html","content_length":"126856","record_id":"<urn:uuid:daf90e6f-bf70-4e12-8998-52507916df9a>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00409.warc.gz"}
r-Matrices for Relativistic Deformations of Integrable Systems

We include the relativistic lattice KP hierarchy, introduced by Gibbons and Kupershmidt, into the r-matrix framework. An r-matrix account of the nonrelativistic lattice KP hierarchy is also provided for the reader's convenience. All relativistic constructions are regular one-parameter perturbations of the nonrelativistic ones. We derive in a simple way the linear Hamiltonian structure of the relativistic lattice KP, and find for the first time its quadratic Hamiltonian structure. Amazingly, the latter turns out to coincide with its nonrelativistic counterpart (a phenomenon known previously only for the simplest case of the relativistic Toda lattice).

Journal of Nonlinear Mathematical Physics
Pub Date: November 1999
Nonlinear Sciences - Exactly Solvable and Integrable Systems
J. Nonlinear Math. Phys. 6 (1999), no. 4, 411-447
{"url":"https://ui.adsabs.harvard.edu/abs/1999JNMP....6..411S/abstract","timestamp":"2024-11-12T19:07:09Z","content_type":"text/html","content_length":"36934","record_id":"<urn:uuid:23d79ef6-8b5a-4e0e-ba9c-77320678ced7>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00067.warc.gz"}
KSEEB Solutions for Class 6 Maths Chapter 5 Understanding Elementary Shapes Ex 5.5

Students can download Chapter 5 Understanding Elementary Shapes Ex 5.5 Questions and Answers, Notes Pdf. KSEEB Solutions for Class 6 Maths helps you to revise the complete Karnataka State Board Syllabus and score more marks in your examinations.

Karnataka State Syllabus Class 6 Maths Chapter 5 Understanding Elementary Shapes Ex 5.5

Question 1.
Which of the following are models for perpendicular lines:
a) The adjacent edges of a table top
The adjacent edges of a table top are perpendicular to each other.
b) The lines of a railway track
The lines of a railway track are parallel to each other.
c) The line segments forming the letter 'L'
The line segments forming the letter 'L' are perpendicular to each other.
d) The letter V
The sides of the letter V are inclined at an acute angle to each other.
Hence (a) and (c) are the models for perpendicular lines.

Question 2.
Let \(\overline{\mathbf{P Q}}\) be the perpendicular to the line segment \(\overline{\mathbf{X Y}}\). Let \(\overline{\mathbf{P Q}}\) and \(\overline{\mathbf{X Y}}\) intersect in the point A. What is the measure of ∠PAY?
From the figure it can easily be observed that the measure of ∠PAY is 90°.

Question 3.
There are two set-squares in your box. What are the measures of the angles that are formed at their corners? Do they have any angle measure that is common?
One has angles measuring 90°, 45°, 45°; the other has angles measuring 90°, 30°, 60°. Therefore, the 90° angle measure is common to both.

Question 4.
Study the diagram. The line l is perpendicular to line m.
a) Is CE = EG?
Yes, as CE = EG = 2 units.
b) Does PE bisect CG?
Yes, PE bisects CG since CE = EG.
c) Identify any two line segments for which PE is the perpendicular bisector.
DF and BH
d) Are these true?
i) AC > FG
True, as the lengths of AC and FG are 2 units and 1 unit respectively.
ii) CD = GH
True, as both have a length of 1 unit.
iii) BC < EH
True, as the lengths of BC and EH are 1 unit and 3 units respectively.
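For readers who want to check perpendicularity numerically rather than from a figure: two directions are perpendicular exactly when their dot product is zero. A tiny Python sketch (our own illustration, not part of the KSEEB text; the direction vectors chosen are hypothetical):

```python
def dot(u, v):
    """Dot product of two 2D direction vectors."""
    return u[0] * v[0] + u[1] * v[1]

# The strokes of the letter 'L' run horizontally and vertically: perpendicular.
print(dot((1, 0), (0, 1)))   # 0 -> perpendicular
# The strokes of the letter 'V' meet at an acute angle: not perpendicular.
print(dot((1, 2), (-1, 2)))  # 3 -> not perpendicular
```

This mirrors the reasoning in Question 1: 'L' is a model for perpendicular lines, while 'V' is not.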
{"url":"https://www.kseebsolutions.com/kseeb-solutions-for-class-6-maths-chapter-5-ex-5-5/","timestamp":"2024-11-07T15:36:42Z","content_type":"text/html","content_length":"67457","record_id":"<urn:uuid:120a52a9-d28c-4658-b395-bd069701d7eb>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00515.warc.gz"}
The Windsor Boys' School Year 9 Foundation Maths

The Windsor Boys' School Mathematics Curriculum Map, Yr 9 (Foundation). The map is organised into six half-terms: Autumn 1, Autumn 2, Spring 1, Spring 2, Summer 1 and Summer 2.

Topics covered across the six half-terms include: Integers and place values; Fractions, decimals and percentages; Factors and multiples; Collecting Data; Indices, powers and roots; Representation of Data; Averages and range; Algebra the basics; Setting up and solving equations; Sequences; Pythagoras theorem; Probability; Percentages; Linear graphs; Scatter graphs; 2D and 3D shapes; Perimeter, area and circles; Ratio and proportion; Volume and surface area; Reflections, rotations, translations and enlargement; Compound measures; Angles; and a Project on Money Matters.

Key skills across the year include:
- Add and subtract positive and negative integers. Multiply or divide any number by powers of 10. Multiply and divide signed integers. Round to a given number of decimal places or significant figures. Convert between fractions, decimals and percentages. Find the prime factor decomposition of positive integers and write it as a product using index notation. Find LCM and HCF.
- Simplify expressions by cancelling. Substitute positive and negative numbers into expressions using brackets and powers. Solve linear equations, with integer coefficients, in which the unknown appears on either side or both sides of the equation. Solve angle or perimeter problems using algebra. Find the nth term for a pattern or linear sequence.
- Create histograms with equal class intervals; stem and leaf diagrams. Interpret simple pie charts using fractions and percentages. Complete 2-way tables. Estimate the mean from a grouped data frequency table. Identify possible sources of bias in the design and use of data collection sheets and questionnaires. Understand primary and secondary data sources. Use stratified sampling methods.
- Find the length of a shorter side in a right-angled triangle. Apply Pythagoras's Theorem to worded problems. Calculate percentage increase and decrease. Find a percentage of a quantity, including using a multiplier. Recognise and describe single translations using column vectors on a coordinate grid.
- Sketch nets of cuboids and prisms. Identify and sketch planes of symmetry of 3D solids. Find the perimeter and area of parallelograms, trapezia and compound shapes. Calculate the circumference and area of circles. Find the surface area and volume of cylinders, prisms, cubes and cuboids. Understand and use the angle properties of triangles and intersecting lines.
- Be able to list all outcomes for single events. Work out probabilities from frequency tables and two-way tables. Add simple probabilities.
- Solve a ratio problem in context. Compare ratios. Write ratios in the form 1:m or m:1. Calculate pressure. Calculate area under velocity-time graphs. Extend knowledge of shapes and of indices to fractional and negative. Reflect, revise and extend.

Assessment: each of the first five half-terms ends with a 60-minute timed assessment on topics covered since the beginning of the year; the year ends with an End of Year Exam consisting of two papers, one non-calculator and one calculator.
{"url":"https://www.twbs.co.uk/year-9-foundation-maths/","timestamp":"2024-11-13T18:27:38Z","content_type":"text/html","content_length":"172280","record_id":"<urn:uuid:dfb9ecc5-49d7-4738-be10-32910ca8f93f>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00060.warc.gz"}
Careers in Science Detail Page

Content provider: the Science Buddies
Published by the Science Buddies

This web page lets students browse more than 100 science and engineering careers to discover what scientists really do and what it takes to prepare for these careers. Each career profile provides basic information such as salary, job outlook, nature of the work, and degree requirements. Also included are videos featuring interviews with real scientists or on-the-job footage. The Science Buddies website aims to promote deep understanding of scientific research in the K-12 classroom and through informal education. Please note that this resource requires Java Applet Plug-in, or Flash.

Subjects: Education Practices (Career Information - General); General Physics (General); Other Sciences (Engineering)
Levels: High School; Middle School; Informal Education
Resource Types: Instructional Material (Student Guide); Reference Material; Audio/Visual (Image/Image Set, Movie/Animation)
Appropriate Courses: Physical Science; Physics First; Conceptual Physics; Algebra-based Physics; AP Physics
Categories: Activity; New teachers
Intended Users:
Access Rights: Free access
© 2002 Science Buddies. Duplication of images or graphics from the Science Buddies website is forbidden.
Keywords: Career preparation, Practice of Science, STEM careers, engineering careers, science career videos, science careers, scientist profiles
Record Cloner: Metadata instance created September 18, 2013 by Caroline Hall
Record Updated: August 12, 2016 by Lyle Barbato
Last Update when Cataloged: September 2, 2011

Next Generation Science Standards

Crosscutting Concepts (K-12)
Influence of Engineering, Technology, and Science on Society and the Natural World (K-12)
• Engineers improve existing technologies or develop new ones to increase their benefits (e.g., better artificial limbs), decrease known risks (e.g., seatbelts in cars), and meet societal demands (e.g., cell phones). (3)
Interdependence of Science, Engineering, and Technology (K-12)
• Science and engineering complement each other in the cycle known as research and development (R&D). Many R&D projects may involve scientists, engineers, and others with wide ranges of expertise. (9-12)

NGSS Nature of Science Standards (K-12)
Science is a Human Endeavor (3-12)
• Science affects everyday life. (3-4)
• Advances in technology influence the progress of science and science has influenced advances in technology. (6-8)
• Scientists and engineers are guided by habits of mind such as intellectual honesty, tolerance of ambiguity, skepticism, and openness to new ideas. (6-8)
• Science is a result of human endeavors, imagination, and creativity. (9-12)

AAAS Benchmark Alignments (2008 Version)
1. The Nature of Science
1B. Scientific Inquiry
• 3-5: 1B/E1. Scientific investigations may take many different forms, including observing what things are like or what is happening somewhere, collecting specimens for analysis, and doing experiments.
1C. The Scientific Enterprise
• 3-5: 1C/E1. Science is an adventure that people everywhere can take part in, as they have for many centuries.
• 3-5: 1C/E3. Doing science involves many different kinds of work and engages men and women of all ages and backgrounds.
• 6-8: 1C/M4. Scientists are employed by colleges and universities, business and industry, hospitals, and many government agencies. Their places of work include offices, classrooms, laboratories, farms, factories, and natural field settings ranging from space to the ocean floor.
• 6-8: 1C/M9. Scientists are linked to other scientists worldwide both personally and through international scientific organizations.
• 9-12: 1C/H4. Science disciplines differ from one another in what is studied, techniques used, and outcomes sought, but they share a common purpose and philosophy, and all are part of the same scientific enterprise.
Although each discipline provides a conceptual structure for organizing and pursuing knowledge, many problems are studied by scientists using information and skills from many disciplines. Disciplines do not have fixed boundaries, and it happens that new scientific disciplines are being formed where existing ones meet and that some subdisciplines spin off to become new disciplines in their own right.

ComPADRE is beta testing Citation Styles!

- Science Buddies. Science Buddies: Careers in Science. Carmel: Science Buddies, September 2, 2011. (https://www.compadre.org/precollege/items/detail.cfm?ID=13018)
- (Science Buddies, Carmel, 2002), WWW Document, (https://www.sciencebuddies.org/science-engineering-careers).
- Science Buddies: Careers in Science (Science Buddies, Carmel, 2002), <https://www.sciencebuddies.org/science-engineering-careers>.
- Science Buddies: Careers in Science. (2011, September 2). Retrieved November 12, 2024, from Science Buddies: https://www.sciencebuddies.org/science-engineering-careers
- Science Buddies. Science Buddies: Careers in Science. Carmel: Science Buddies, September 2, 2011. https://www.sciencebuddies.org/science-engineering-careers (accessed 12 November 2024).
- Science Buddies: Careers in Science. Carmel: Science Buddies, 2002. 2 Sep. 2011. Science Buddies. 12 Nov. 2024 <https://www.sciencebuddies.org/science-engineering-careers>.

BibTeX:
@misc{
  Title = {Science Buddies: Careers in Science},
  Publisher = {Science Buddies},
  Volume = {2024},
  Number = {12 November 2024},
  Month = {September 2, 2011},
  Year = {2002}
}

%T Science Buddies: Careers in Science
%D September 2, 2011
%I Science Buddies
%C Carmel
%U https://www.sciencebuddies.org/science-engineering-careers
%O text/html

%0 Electronic Source
%D September 2, 2011
%T Science Buddies: Careers in Science
%I Science Buddies
%V 2024
%N 12 November 2024
%8 September 2, 2011
%9 text/html
%U https://

ComPADRE offers citation styles as a guide only.
We cannot offer interpretations about citations as this is an automated procedure. Please refer to the style manuals in the Citation Source Information area for clarifications.

Citation Source Information
- The AIP Style presented is based on information from the AIP Style Manual.
- The APA Style presented is based on information from APA Style.org: Electronic References.
- The Chicago Style presented is based on information from Examples of Chicago-Style Documentation.
- The MLA Style presented is based on information from the MLA FAQ.
{"url":"https://www.compadre.org/Precollege/items/detail.cfm?ID=13018","timestamp":"2024-11-12T17:09:20Z","content_type":"application/xhtml+xml","content_length":"45701","record_id":"<urn:uuid:a393d3d5-34c1-4347-b51d-e056d42d111f>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00780.warc.gz"}
Math Drills Printable Worksheet | Order of Operations Worksheets

You may have heard of an order of operations worksheet, but what exactly is it? Worksheets are a great way for students to practice new skills and review old ones.

What is an Order of Operations Worksheet? An order of operations worksheet is a type of math worksheet that asks students to carry out arithmetic operations in the correct sequence. These worksheets are divided into three main sections: subtraction, addition, and multiplication. They also cover the evaluation of exponents and parentheses. Students who are still learning how to do these tasks will find this kind of worksheet valuable. The main purpose of an order of operations worksheet is to help students learn the proper way to solve math expressions; if a student does not yet understand the concept of order of operations, they can review it on an accompanying explanation page. Worksheets can also be grouped into several categories by difficulty. Another important function of an order of operations worksheet is to teach students how to carry out the PEMDAS operations: the worksheets begin with simple problems covering the basic rules and build up to more complex problems involving all of the rules, which makes them a good way to introduce young learners to solving algebraic expressions.

Why is the Order of Operations Important? The order of operations is one of the most important conventions you can learn in mathematics: it ensures that everyone who solves the same math problem gets a consistent result.

An order of operations worksheet is an excellent way to teach students the proper way to solve math expressions. Before students begin using such a worksheet, they may need to review the concepts behind the order of operations, and the practice can also help them develop their addition and subtraction skills. Teachers can use Prodigy as an easy way to differentiate practice and deliver engaging content; its worksheets are well suited to helping students learn about the order of operations. Teachers can start with the basic ideas of addition, multiplication, and division to help students build an understanding of parentheses. Math Drills printable worksheets are a good resource for young students: they can be easily customized for specific needs, downloaded free of charge, and printed, and they can then be reviewed using addition, subtraction, multiplication, and division problems, including the order of operations and the use of exponents.
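Since the worksheets above drill PEMDAS, it may help to see the convention in action. The following minimal Python snippet simply relies on Python's operator precedence, which matches PEMDAS, to contrast a naive left-to-right reading with the correct one:

```python
# Order of operations (PEMDAS) as applied by Python's own expression
# evaluator: exponents bind tighter than multiplication, which binds
# tighter than addition.
naive_left_to_right = ((2 + 3) * 4) ** 2   # ignoring precedence: 400
with_pemdas = 2 + 3 * 4 ** 2               # 2 + 3*16 = 50

print(naive_left_to_right)  # 400
print(with_pemdas)          # 50
```

The gap between the two answers (400 vs. 50) is exactly the kind of mistake these worksheets are designed to train away.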
The renormalized volume of quasifuchsian manifolds Quasifuchsian manifolds are an important class of hyperbolic 3-manifolds, classically parametrized by two copies of Teichmüller space. Their volume is infinite, but they have a well-defined finite "renormalized volume" which has nice properties, both analytic and "coarse". In particular, considered as a function over Teichmüller space, the renormalized volume provides a Kähler potential for the Weil-Petersson metric; moreover, it is within bounded additive constants of the volume of the convex core and is bounded from above by the Weil-Petersson distance between the conformal structures at infinity. After describing these properties, we will outline some recent applications (by Kojima, McShane, Brock, Bromberg, Bridgeman, and others) to the Weil-Petersson geometry of Teichmüller space or the geometry of hyperbolic 3-manifolds that fiber over the circle. We will then explain how properties of the renormalized volume suggest new questions and viewpoints on quasifuchsian manifolds. The talk will be accessible to nonexperts.
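The "coarse" properties listed above can be summarized schematically. The following LaTeX fragment is a sketch only: the constants are placeholders not taken from the abstract, and the precise statements (and best constants) are in the literature the talk surveys.

```latex
% Schematic summary of the coarse properties of the renormalized
% volume V_R of a quasifuchsian manifold M = M(c_-, c_+): V_C is the
% volume of the convex core and d_{WP} the Weil-Petersson distance
% between the conformal structures at infinity. The constants
% C_1, C_2, C_3 > 0 are unspecified placeholders.
\[
  \bigl| V_R(M) - V_C(M) \bigr| \le C_1,
  \qquad
  V_R(M) \le C_2 \, d_{WP}(c_-, c_+) + C_3 .
\]
```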
Frequently Asked Questions How can I access and use the TreeScan software? TreeScan is free software, available for download at www.treescan.org. The TreeScan User Guide provides detailed instructions on how to use the software. It is compatible with Windows, Macintosh, and Linux Operating Systems. How can I generate a cohort and analytic dataset that can be used with TreeScan software? The input dataset structure required for TreeScan is printed in the TreeScan User Guide. Within the Sentinel System, version-controlled, quality-checked cohort identification programs enable creation of an analytic dataset compatible with most TreeScan analyses. These SAS programs require data formatted into the Sentinel Common Data Model and a SAS® license to execute. Additionally, TreeScan requires pre-specification of the hierarchical tree structure. Tree files based on the Agency for Healthcare Research and Quality’s (AHRQ’s) Multi-level Clinical Classification System (MLCCS) are available here. Where can I find more information about the TreeScan statistical method? How is the tree generated from the data? It is not. The tree is pre-specified by the user, before collecting the data and doing the analysis. What qualifies as a “tree” that can be used for TreeScan? Any structure that aggregates data into logical, hierarchical groupings is considered a “tree” and would be sufficient for our purpose. For example, MedDRA is a standardized medical terminology to share regulatory information internationally, and its concepts are organized in a tree structure. The International Classification of Diseases (ICD) coding system is also organized in a tree by body system. Finally, the Agency for Healthcare Research and Quality has created the MLCCS, which groups the ICD-9/10-Clinical Modification (CM) outcome codes into a smaller number of clinically meaningful categories. The MLCCS is a hierarchical system that is defined using both aggregate groupings and ICD-9-CM codes. 
Four levels exist in the multi-level diagnosis CCS in the ICD-9-CM era. That is, there are four levels of groupings in addition to one level of the base codes. These base codes are referred to as "leaves" or "leaf nodes" on the tree and are the most granular data available. The least granular of the four levels in the MLCCS are the root nodes, which are aggregated into 18 body system categories (e.g., Diseases of the Nervous System and Sense Organs, as shown in the figure below). The specificity of the categories increases with each level, so that second-level categories are more specific than first-level categories, third-level categories are more specific than second-level categories, and fourth-level categories are more specific than third-level categories. An example table is shown below. Is it possible to execute a TreeScan analysis with both ICD-9-CM and ICD-10-CM codes in the same study, e.g., because the study spans periods when both ICD-9-CM and ICD-10-CM codes were in use? Yes. One common tree must be used. This can be created by mapping ICD-9-CM codes onto ICD-10-CM codes or vice versa, or by basing the common tree on a combination of ICD-9-CM and ICD-10-CM codes. How does the TreeScan method account for potential differences in how outcomes are coded in administrative data? One of TreeScan's strengths is that an investigator is not required to pre-specify how a clinical concept will be coded in administrative data. Different physicians could code the same outcome slightly differently, e.g., coding a febrile seizure as convulsions (780.3), febrile convulsions NOS (780.31), or complex febrile convulsions (780.32). Hypothesis testing done at more coarsely aggregated levels of the tree can combine strength even when a single outcome or leaf-level node does not have a large enough sample size to independently cause a rejection of the null hypothesis.
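To make the level structure concrete, here are two hypothetical Python helpers operating on dotted MLCCS codes. The dotted form (e.g. 13.08.00.00) and the ".00" zero-filling follow an example given later in this FAQ; the helper names are illustrative and are not part of TreeScan.

```python
# Hypothetical helpers for dotted MLCCS codes such as "13.08.00.00".

def mlccs_ancestors(code):
    # Nodes from the root-level body-system category down to the code
    # itself, one per level of the hierarchy.
    parts = code.split(".")
    return [".".join(parts[:k + 1]) for k in range(len(parts))]

def pad_to_fourth_level(code):
    # Zero-fill a code whose MLCCS branch stops at the second or
    # third level, as described later in this FAQ.
    parts = code.split(".")
    return ".".join(parts + ["00"] * (4 - len(parts)))

print(mlccs_ancestors("13.08.00.00"))
# ['13', '13.08', '13.08.00', '13.08.00.00']
print(pad_to_fourth_level("13.08"))  # 13.08.00.00
```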
However, there are fundamental limitations to using administrative claims data for medical product safety surveillance that will not be automatically overcome with TreeScan (no matter what its inherent strengths), such as undercoding or missingness of various common conditions (e.g., fever) for which medical care is not sought and claims are not filed. Administrative data is also subject to “upcoding” and is heavily influenced by reimbursement policies. Why is an incident outcome defined based on the second, third, or fourth level of the outcome tree rather than on ICD-9/10-CM codes? Many ICD-9/10-CM codes are similar, and the same disease condition may be recorded using slightly different ICD-9/10-CM codes during follow-up visits. Using TreeScan, an outcome is incident if there were no other outcomes in the same user-specified level of the tree during the preceding pre-specified number of days. What are the pros and cons of using the second vs. third vs. fourth level (i.e., numerically higher levels are closer to the leaf level) of the hierarchical tree to identify incident outcomes? How does the definition of incident outcome impact the TreeScan results and the ability to detect alerts? A trade-off exists in choosing any tier of the outcome tree (e.g., second, third, fourth levels) for defining incident outcomes. Defining incident events at the second level results in fewer diagnostic events compared to defining them at the third or fourth level, leading to a smaller sample size. Using the fourth level leads to a larger sample size. The additional diagnostic events can either reduce statistical power if they are random noise due to pre-existing conditions recorded during follow-up visits, or they may increase statistical power if they are informative diagnostic events unrelated to pre-existing conditions. Increased power will increase the sensitivity of the method, that is, the ability to detect true adverse reactions. 
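The incident-outcome rule discussed above, namely that an outcome counts only if no outcome in the same user-specified grouping occurred in the preceding pre-specified number of days, can be sketched as follows. The event representation here is hypothetical and is not TreeScan's input format.

```python
# Sketch of the incident-outcome rule: an event on a given day is
# "incident" if no event in the same tree grouping occurred in the
# preceding `washout_days` days. Days are integer day numbers; the
# grouping key (e.g. a third-level MLCCS node) is assumed given.
def incident_events(events, washout_days):
    # events: list of (day, group) pairs in chronological order
    last_seen = {}
    incident = []
    for day, group in events:
        prior = last_seen.get(group)
        if prior is None or day - prior > washout_days:
            incident.append((day, group))
        # Any event, incident or not, resets the washout clock.
        last_seen[group] = day
    return incident

events = [(1, "13.08"), (5, "6.01"), (10, "13.08"), (400, "13.08")]
print(incident_events(events, washout_days=183))
# [(1, '13.08'), (5, '6.01'), (400, '13.08')]
```

The event on day 10 is dropped because it falls within 183 days of the day-1 event in the same grouping, while the day-400 event is far enough from day 10 to count as new-onset.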
Wouldn’t the choice of the third level to define incident outcomes cause an increased frequency of second- or fourth-level outcomes in the risk window to be missed? In this example, the third level is used to determine if an outcome is incident or not. Hypothesis tests are not performed at levels higher than the outcome incidence level to prevent data dependencies in the tree; that is, there would be no hypothesis testing at the second level. The TreeScan method can generate an alert at the incident outcome level or numerically higher (i.e., more granular) levels of the tree. What happens if there are outcomes that are not more finely differentiated than at the second level of the MLCCS tree? How is such an outcome taken into consideration if incident outcomes are determined using a higher level? We refer to the ICD-9-CM codes that form the basis of the tree that we use as the fifth-level (i.e., in tree parlance, the leaf level), and hence, that level exists for all outcomes. Aggregation of multiple leaves (i.e., into a branch or trunk of the tree) occurs at the first through fourth levels. Sometimes, the MLCCS tree ends at the second or third levels. In such cases, we have artificially filled in the tree up to the finest level of aggregation, i.e., the fourth level. For example, ICD-9-CM code 729.5, “pain in soft tissues of the limb,” corresponds to a second level outcome on the tree, 13.08, in which the first level 13 is “diseases of the musculoskeletal system and connective tissue” and the second level 08 is “other connective tissue disease.” There is no finer MLCCS differentiation of this outcome, but additional levels were created for it by adding zeroes — the third level is 13.08.00 and the fourth level is 13.08.00.00. What is the purpose of defining and using incident outcomes, rather than using all recorded outcomes? An outcome is an incident outcome if there is no prior outcome of the same or similar kind in the prior “X” days. 
The primary goal of the incident outcome definition is to identify only new-onset events and distinguish them from ongoing conditions that existed before medical product exposure. By removing such ongoing conditions, the signal-to-noise ratio increases, which in turn increases statistical power. A second reason for using only incident outcomes is to ensure that a person can only contribute one event during the follow-up period in the same grouping of outcomes in the tree. This is further accomplished by ensuring that the number of days used in the incident outcome definition is longer than the follow-up time. Together, these operational characteristics help to ensure the independence assumptions that underlie the TreeScan probability models. Instead of using the same rules to define incident outcomes, set the observation period, etc., for all outcomes, wouldn’t it make more sense to tailor the rules to the various outcomes, some of which would have more acute and/or immediate onsets than others? Customization of the rules for specific outcomes is not feasible with a method of this kind, where there are thousands of different outcomes being assessed. This is an early-warning, alert-detection system. Definitions of incident outcomes, risk intervals, and other parameters can and should be customized in subsequent alert investigations. What does a “TreeScan alert” mean? A TreeScan alert occurs when TreeScan detects a statistically significant difference in the observed frequency of a coded outcome relative to the expected counts depending on the specific tree-based scan statistic being utilized. The tree-based scan statistic is a maximum log likelihood ratio and the threshold is given based on the percentile value of the maximum log likelihood ratio distribution. The threshold is user-specified. Adjustments are made for multiple testing, so the alerts are unlikely to be due to chance. An alert could be expected (i.e., a labeled outcome), or unexpected. 
If unexpected, the TreeScan alert should be further evaluated for its likelihood of being due to confounding using traditional pharmacoepidemiologic methods. Is it likely or unlikely that there will be statistical alerts just due to chance? It is unlikely, since the method adjusts for the multiple testing inherent in the thousands of outcomes and groups of related outcomes that are evaluated. Suppose you use an alpha level of 0.05, declaring an alert when p≤0.05. If the null hypothesis is true and you perform 100 TreeScan analyses, the expected number of analyses without any alerts is 95. In other words, in the long run, 5 percent of your TreeScan analyses will contain a false positive alert that is due to chance. If this is considered too high, you can instead use an alpha level of 0.01. Then, in the long run, only 1 percent of your TreeScan analyses will contain a false positive alert that is due to chance. What kind of investigations will be undertaken if and when unexpected safety alerts arise from TreeScan analysis, and how will decisions about those investigations be made? In general, the approach is as follows: We will first identify whether alerts correspond to a known or expected association, such as those noted in the product label. Then, of the remaining alerts, we will consider the potential biases that might have been operating and the strength of the bias needed to generate a false alert. We may perform additional alert follow-up under the guidance of the Food and Drug Administration (FDA) using other analyses or the Patient Episode Profile Retrieval Tool. If TreeScan alerts arise, how can we be sure that the alert is not related to one particular Data Partner? Any alert investigation will consider the possibility that a TreeScan alert is Data Partner-specific and will include stratification of data by Data Partner.
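The expected-count arithmetic in the answer above is just linearity of expectation: each fully adjusted analysis has probability alpha of containing a false alert under the null, so a fraction alpha of analyses is expected to contain one. A trivial sketch:

```python
# Expected number of TreeScan analyses with no false-positive alert,
# assuming each analysis is internally adjusted for its own multiple
# testing and alerts at level alpha under the null.
def expected_clean_analyses(n_analyses, alpha):
    return n_analyses * (1 - alpha)

print(expected_clean_analyses(100, 0.05))  # about 95 of 100
print(expected_clean_analyses(100, 0.01))  # about 99 of 100
```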
How is the FDA going to use data that have been used for a TreeScan analysis to follow-up an alert? Any alert investigation will include a) pre-specification and transparency of the follow-up analysis plan and subsequent results, and b) proper attention to statistical testing (i.e., overall type I and type II error control). In general, an alert investigation is not intended to replicate a TreeScan finding, but rather to refine evidence generation and understand any biases or confounding that may have contributed to an alert. A higher frequency of healthcare visits and outcome codes has been observed in the first two weeks after medical product initiation than subsequently, usually due to workup of conditions that were present at the time of medical product initiation and diagnosed shortly thereafter. Wouldn’t you expect false TreeScan alerts to arise due to this phenomenon? In a self-controlled design when time periods are being explicitly compared, it is easy to test by comparing the total outcomes in different periods after medical product initiation. When present, it can bias the unconditional Bernoulli (i.e., when applied to a self-controlled design) and the unconditional tree-temporal scan statistics. On the other hand, the conditional versions of the tree-based scan statistics are explicitly designed to adjust for this type of bias, although they will only adjust for the general phenomenon across all outcomes, not for an exceptionally strong time bias for a specific outcome. The latter must be dealt with using a follow-up analysis that is specific to that outcome and accounts for time-varying variables. How is the adjustment for multiple testing implemented? Considering the thousands of overlapping disease outcome definitions evaluated, adjustment for multiple testing is critical. This is accomplished through the simulation component of the method. 
The maximum log likelihood ratio test statistic in the real dataset is compared with the maximum log likelihood ratio test statistics in each of, say, 999 random datasets, and we note its rank. For example, if it has the fifth highest test statistic, its rank is 5. Note that we are not comparing the specific log likelihood ratios for specific nodes on the tree, but rather, comparing the maxima of the log likelihood ratios obtained over all possible nodes. Since the random datasets were all generated under the null hypothesis, and if the null hypothesis is true in the real dataset, then the test statistics come from exactly the same probability distribution. This means that, if the null hypothesis is true, then the rank of the test statistic from the real dataset will range uniformly from 1 to 1000, and the probability of having a rank in the top 5% is exactly 5%. If the test statistic from the real dataset is in the top 5%, we will reject the null hypothesis; we have a 5% probability of falsely rejecting the null. Why isn't a Bonferroni type of adjustment used to adjust for the multiple testing? Because the overlapping data are obtained from different nodes on the tree, where the data for one node are a subset of the data in another node, there is a lot of dependence between the likelihoods calculated for closely related nodes. With such dependence, a Bonferroni type of adjustment becomes too conservative. Could bias enter a TreeScan analysis as a result of differences in recommendations for use of the product (including the timing of rollout) among different demographic subgroups, such as males vs. females?
In propensity-score or multi-factor matched analyses, these biases are mitigated by the use of active comparator groups and attention to the time periods under consideration. What if only one subgroup of the population has an excess product-associated risk of an outcome, while there is no excess risk for the rest of the population? For example, maybe a medical product increases the risk of seizures in just females or in just a particular age group. Will TreeScan be able to detect the risk and identify the subgroup? This would be a case of effect modification (interaction). If an excess risk were limited to a particular demographic subgroup of the population, the statistical power to see it would be reduced due to the random noise caused by the random timing of the outcome events in the unaffected subgroups, reducing the signal-to-noise ratio. If the sample size were big enough, TreeScan could detect it, but without identifying the subgroup. The group at risk could then be identified during an investigation of the TreeScan alert. That being said, if an investigator were concerned about subgroup-specific effects a priori, then the investigator could subset the population under investigation at the outset. For example, the population of interest could be defined as pregnant women, provided such a population could be identified in administrative data. What are likely sources of confounding in TreeScan analysis? Confounding adjustment is all done through design in a TreeScan analysis. Self-controlled designs automatically adjust for all time-invariant confounders, but they do not adjust for time-varying confounding except for overall patterns of medical care utilization in the conditional version of the tree-based scan statistics. Examples of possible sources of time-varying confounding are seasonality, concomitant drug exposures, and varying health status over the time period of observation. 
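Returning to the simulation-based multiplicity adjustment described two answers above, the rank comparison can be sketched in a few lines. This is a hypothetical helper; TreeScan's internal implementation may differ.

```python
# Sketch of the simulation-based multiplicity adjustment: the
# observed maximum log likelihood ratio is ranked among the maxima
# from datasets generated under the null, and the Monte Carlo
# p-value is rank / (number of null replicates + 1).
def monte_carlo_p(observed_max_llr, null_max_llrs):
    rank = 1 + sum(1 for x in null_max_llrs if x >= observed_max_llr)
    return rank / (len(null_max_llrs) + 1)

# Toy example with 9 null replicates: the observed statistic beats
# all of them, so rank = 1 and p = 1/10.
print(monte_carlo_p(8.2, [1.1, 0.4, 2.7, 3.3, 0.9, 1.8, 2.2, 0.1, 1.5]))
# 0.1
```

With 999 null replicates, as in the FAQ's example, a rank in the top 50 gives p ≤ 0.05, matching the "top 5%" rejection rule described above.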
For example, there is time-varying confounding by indication when a person is temporarily indicated or contraindicated to receive a particular medical product, e.g., certain medications avoided during pregnancy. In a propensity-score matched or multi-factor matched design, confounding adjustment relies on the specification of the propensity score model or the exact matching criteria. These active comparator designs are less vulnerable to time-varying confounding. Can TreeScan be used to identify statistically protective effects as well as excess risks? If so, how? Currently, this cannot be done when using the Poisson or tree-temporal versions of the TreeScan software, because a one-sided statistical test is implemented. A future version of TreeScan will be able to do two-sided tests. Theoretically, when using the TreeScan Bernoulli option, it is possible to do a one-sided test for a protective effect by simply redefining the risk interval as the control interval and the control interval as the “protective interval.” In propensity-score or multi-factor matched designs, one might simply perform two analyses, and change the “referent” cohort in each analysis. Since there is no gold standard against which to compare the TreeScan results, could some or all of the following kinds of analyses be done for validation purposes? • Conduct a power evaluation by creating simulated data spiked with an excess frequency of a particular outcome of interest during the risk window using different relative risks, different sample sizes, and different baseline frequencies of the outcome of interest. • Divide the dataset into two random subsets, perform a TreeScan analysis on each one, and compare the results. □ Not recommended for either validating the TreeScan method or validating TreeScan findings. Since the two datasets are randomly obtained from the same larger dataset, any difference between the results of the two analyses will, by definition, be due to chance. 
Moreover, once we have shown that the results from the comparison are the same or very similar, we will still not have proven that they are both accurate results instead of both erroneous results. Hence, it is not a useful way to evaluate the method. Using only part of the available data would reduce statistical power, so it is also not a recommended approach to use only half the data for the actual analysis and save the other half for validating the findings. • Divide the dataset in two (e.g., 2006-2011 vs. 2012-2014, or Data Partners A and B vs. C and D), perform TreeScan analysis on each half, and compare the results. □ Not recommended for validating either TreeScan findings or the method. Using only part of the available data would reduce statistical power, so it is not a recommended approach for validating actual findings. However, the investigator might choose to subset the data by time period based on their awareness of secular trends (e.g., significant changes in coding practices over time) or of other forms of time-varying confounding that could introduce a bias. That being said, unlike the question above, subsets of data originating from different periods of time and/or different Data Partners could differ. Hence, any difference in TreeScan results could be due either to that reason or to random chance. Also, as in the question above, even if we were to find that the results from the comparison were the same or very similar, we would not have proven that they were both accurate results instead of both erroneous results. • Divide the available data into two random subsets and perform a TreeScan analysis on one and a conventional regression analysis on the other. □ Not recommended for validating TreeScan findings, but could be used for methods validation. 
We could run TreeScan on one randomly selected half of the dataset, note the statistically significant alerts, and then use a non-data-mining method, such as standard logistic regression, on the other half of the dataset to demonstrate that the TreeScan alerts are statistically significant using standard methods. When using the standard method, we would not have to adjust for multiple testing, since we would be running the analysis on a completely independent, non-overlapping dataset, for a few previously detected outcomes. This exercise could serve an educational purpose in confirming the TreeScan alerts using a widely known and established method. It must be noted, however, that this approach should not be used for validating actual TreeScan findings, as power would be greatly reduced. • Use another established signal detection method on the same dataset as TreeScan and compare results. □ Recommended for methods validation, and acceptable, with a caveat, for data analysis and validation of actual findings. One such comparison has already been done, for evaluation purposes, in which TreeScan was compared with the Gamma Poisson Shrinker (GPS), a method that is commonly applied in spontaneous reporting systems.1 Another is planned for self-controlled designs: Evaluation of Three Self-Controlled Methods for Signal Detection: TreeScan, Sequence Symmetry Analysis, and Information Component Temporal Pattern Discovery. For actual analysis, it is sometimes good to use multiple methods, but one must also be careful not to use so many different methods that at least one of them will generate a statistically significant result just by chance.

References
1. Brown JS, Petronis KR, Bate A, et al. Drug adverse event detection in health plan data using the Gamma Poisson Shrinker and comparison to the tree-based scan statistic. Pharmaceutics, 2013.
2. Jacobsen SJ, Ackerson BK, Sy LS, et al. Observational safety study of febrile convulsion following first dose MMRV vaccination in a managed care setting. Vaccine, 2009;27:4656-4661.
3. Klein NP, Fireman B, Yih WK, et al.; Vaccine Safety Datalink. Measles-mumps-rubella-varicella combination vaccine and the risk of febrile seizures. Pediatrics, 2010;126:e1-8. doi: 10.1542/
4. Kulldorff M, et al. A Tree-Temporal Scan Statistic for Drug and Vaccine Safety Data Mining. Manuscript, 2014.
5. Kulldorff M, Dashevsky I, Avery TR, et al. Drug safety data mining with a tree-based scan statistic. Pharmacoepidemiology and Drug Safety, 2013;22:517-523.
6. Kulldorff M, Fang Z, Walsh SJ. A tree-based scan statistic for database disease surveillance. Biometrics, 2003;59:323-331.
7. McClure DL, Raebel MA, Yih WK, et al. Mini-Sentinel methods: Framework for assessment of positive results from signal refinement. Pharmacoepidemiology and Drug Safety, 2014;23:3–8. doi: 10.1002/
8. Yih WK, Kulldorff M, Fireman BH, et al. Active surveillance for adverse events: The experience of the Vaccine Safety Datalink project. Pediatrics, 2011;127:S54–S64.
MA 363: Probability in higher dimensions
Credits: 3:0
Pre-requisites:
1. This is a graduate-level topics course in probability theory.
2. Graduate-level measure-theoretic probability will be useful, but is not a requirement.
3. Students are expected to be familiar with basic probability theory and linear algebra.
4. The course will be accessible to advanced undergraduates who have had sufficient exposure to probability and linear algebra.

This course is aimed at understanding the behavior of random geometric objects in high-dimensional spaces, such as random vectors, random graphs, random matrices, and random subspaces. Topics will include the concentration of measure phenomenon, non-asymptotic random matrix theory, chaining and Gaussian processes, empirical processes, and some related topics from geometric functional analysis and convex geometry. Towards the latter half of the course, a few applications of the topics covered in the first half will be considered, such as community detection, covariance estimation, randomized dimension reduction, and sparse recovery problems.

Suggested books and references:
1. Roman Vershynin, High-Dimensional Probability: An Introduction with Applications in Data Science, Cambridge Series in Statistical and Probabilistic Mathematics (Series Number 47), 2018.
2. Roman Vershynin, Introduction to the non-asymptotic analysis of random matrices, in Compressed Sensing, 210-268, Cambridge University Press, 2012.
3. Stéphane Boucheron, Gábor Lugosi, and Pascal Massart, Concentration Inequalities: A Nonasymptotic Theory of Independence, Oxford University Press, 2013.
4. Michel Ledoux and Michel Talagrand, Probability in Banach Spaces, Springer Science & Business Media, 2013.
5. Avrim Blum, John Hopcroft, and Ravindran Kannan, Foundations of Data Science, Cambridge University Press, 2020.
6. Joel Tropp, An Introduction to Matrix Concentration Inequalities, Foundations and Trends in Machine Learning, Vol. 8, No. 1-2, pp. 1-230, 2015.
Linearized rigid link constraints. Rigid links are often used to model stiff connections in finite element models. One generates a set of linear constraints that relate the 6 DOFs of master M and slave S nodes by u_S = u_M + theta_M x (x_S - x_M) and theta_S = theta_M (the slave translations and rotations follow the master rigidly). Resolution of linear constraints is performed using fe_case or model assembly (see section 4.10.7) calls. The theory is discussed in section 7.14. Note that the master node of a rigid link has 6 DOF, even if the model may only need fewer (3 DOF for volumes). If coordinate systems are defined in field model.bas (see basis), PID (position coordinate system) and DID (displacement coordinate system) declarations in columns 2 and 3 of model.Node are properly taken into account.

Although rigid links are linear constraints rather than true elements, such connections can be declared using an element group of rigid connections with a header row of the form [Inf abs('rigid')] followed by as many element rows as connections, of the form

[n1 n2 DofSel MatId ProId EltId]

where node n2 will be rigidly connected to node n1, which remains free. DofSel lets you specify which of the 3 translations and 3 rotations are connected (thus 123 connects only translations while 123456 connects both translations and rotations). The rigid elements thus defined can then be handled as standard elements.

With this strategy you can use penalized rigid links (celas elements) instead of truly rigid connections. This requires the selection of a stiffness constant but can be easier to manipulate. To change a group of rigid elements into celas elements and set a stiffness constant Kv, one can do

model=feutil('SetGroup rigid name celas',model);
model.Elt(feutil('findelt group i',model),7) = Kv; % celas in group i

The other rigid definition strategy is to store them as a case entry. rigid entries are rows of the Case.Stack cell array giving {'rigid', Name, Elt}, where Name is a string identifying the entry and Elt is a model description matrix containing rigid elements.
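The master/slave relation that SDT resolves is the standard linearized (small-rotation) rigid-body one. As an illustration only, in plain Python rather than SDTools/MATLAB code, the 6x6 transformation from master to slave DOFs can be built as:

```python
# Illustrative sketch (not SDTools code): the linearized rigid-link relation
# u_S = u_M + theta_M x (x_S - x_M), theta_S = theta_M, written as a 6x6
# matrix T such that [u_S; theta_S] = T @ [u_M; theta_M].

def rigid_link_matrix(x_master, x_slave):
    """6x6 T relating slave DOFs to master DOFs for a rigid master-slave link."""
    r = [xs - xm for xs, xm in zip(x_slave, x_master)]   # moment arm x_S - x_M
    T = [[0.0] * 6 for _ in range(6)]
    for i in range(6):
        T[i][i] = 1.0              # u_S inherits u_M, theta_S inherits theta_M
    # translation picks up the moment arm: u_S += theta_M x r (small rotations)
    T[0][4], T[0][5] = r[2], -r[1]
    T[1][3], T[1][5] = -r[2], r[0]
    T[2][3], T[2][4] = r[1], -r[0]
    return T

def apply_link(T, v):
    return [sum(T[i][j] * v[j] for j in range(6)) for i in range(6)]

# A slave 1 unit along +x from the master; rotate the master 0.1 rad about z:
T = rigid_link_matrix((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
u = apply_link(T, [0.0, 0.0, 0.0, 0.0, 0.0, 0.1])
print(u[:3])   # the slave translates in +y: approximately [0, 0.1, 0]
```

The columns of T are exactly the constraint coefficients a solver eliminates when the slave DOFs are condensed out.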
Command option Append allows concatenating a new list of rigid constraints to a preexisting list in Case.Stack. The call

model=fe_case(model,'rigidAppend','Name',Elt1);

would thus concatenate the previously defined list Name with the new rigid element matrix Elt1. Using the fe_case call to implement rigid allows an alternative rigid constraint input that can be more comprehensive in some applications. You may use a list of the form

[MasterNode slaveDOF slaveNode_1 slaveNode_2 ... slaveNode_i]

instead of the element matrix. Command option Append is also valid here. The following sample calls are thus equivalent; both implement a rigid link between nodes 1 and 2, and 1 and 3 (with 1 as master) for all six DOF in a sample model:

model=fe_case(model,'rigid','Rigid edge',...
 [Inf abs('rigid'); 1 2 123456 0 0 0; 1 3 123456 0 0 0]);
% or
model=fe_case(model,'rigid','Rigid edge',[1 123456 2 3]);

In some cases, interactions with feplot visualization may transform the Elt matrix into a structure with field Elt, which contains the original data, and field Sel, which is internally used by feplot to display the rigid constraint on the mesh.
The following example generates the mesh of a square plate with a rigid edge; the rigid constraint is here declared with rigid elements.

% generate a sample plate model
model=femesh('testquad4 divide 10 10');
% generate beam1 elements based on the edge
% of the underlying 2D model at x=0
elt=feutil('selelt seledge & innode{x==0}',model);
% remove element header from selection,
% we only use the node connectivity
% assign the rigid element property
elt(2:end,3)=123456; % all 6 DOF are slave
% remove old data from the previous element selection
% add rigid elements to the model
%
% alternative possible: define as a case entry
% model=fe_case(model,'rigid','Rigid edge',[Inf abs('rigid'); elt]);
% Compute and display modes
def=fe_eig(model,[6 20 1e3]);

The rigid function itself is only used for low-level access, by generating the subspace T that verifies rigid constraints:

[T,cdof] = rigid(node,elt,mdof)
[T,cdof] = rigid(Up)

See also Section 7.14, celas
©1991-2024 by SDTools
F.INV.RT - Excel docs, syntax and examples

The F.INV.RT function calculates the inverse of the right-tailed F probability distribution: it returns the value x at which the upper-tail probability P(F > x) equals the given probability.

=F.INV.RT(probability, degrees_freedom1, degrees_freedom2)

probability: The probability at which you want to evaluate the F-distribution.
degrees_freedom1: The number of degrees of freedom in the numerator.
degrees_freedom2: The number of degrees of freedom in the denominator.

About F.INV.RT 🔗

When diving into statistical analysis and scrutinizing the F-distribution, Microsoft Excel's F.INV.RT function steps in as a handy tool for determining critical values. This function extracts the precise value at which a given cumulative F-distribution probability is achieved, allowing for insightful interpretations of statistical significance and variance in data sets. By leveraging F.INV.RT, you can make informed decisions based on the probabilities associated with the F-distribution, contributing to the accuracy of statistical evaluations within your spreadsheets.

Examples 🔗

If you are analyzing an F-distribution with 3 and 5 degrees of freedom for the numerator and denominator respectively, and you wish to find the value at which the upper-tail probability of 0.05 is met, the F.INV.RT formula would be: =F.INV.RT(0.05, 3, 5). This returns the critical value for the specified F-distribution probability. Ensure that the provided probability is between 0 and 1 for accurate calculations. The degrees of freedom parameters should also be positive integers. Additionally, familiarize yourself with the context of your statistical analysis to appropriately apply the F.INV.RT function in your calculations.

Questions 🔗

What is the main purpose of the F.INV.RT function?
The F.INV.RT function is primarily used to determine critical values of the F-distribution based on specified probabilities and degrees of freedom, aiding in statistical analysis and hypothesis testing.

Can the F.INV.RT function handle probabilities outside the range of 0 to 1?

No, the F.INV.RT function requires the probability argument to be within the range of 0 to 1 for accurate evaluation of the F-distribution.

How crucial are the degrees of freedom parameters in the F.INV.RT function?

The degrees of freedom parameters (degrees_freedom1 and degrees_freedom2) define the shape of the F-distribution and are essential for obtaining the correct critical value in statistical analyses.
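For readers reproducing =F.INV.RT(0.05, 3, 5) outside Excel: with SciPy this is simply scipy.stats.f.isf(0.05, 3, 5). As a dependency-free sketch (the standard incomplete-beta continued fraction plus bisection; the helper names here are mine, not Excel's), the right-tailed inverse can be computed as:

```python
import math

def _betacf(a, b, x, eps=3e-12, max_iter=200):
    """Continued fraction for the regularized incomplete beta function."""
    fpmin = 1e-300
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c, d = 1.0, 1.0 - qab * x / qap
    d = 1.0 / (d if abs(d) >= fpmin else fpmin)
    h = d
    for m in range(1, max_iter + 1):
        m2 = 2 * m
        for aa in (m * (b - m) * x / ((qam + m2) * (a + m2)),
                   -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))):
            d = 1.0 + aa * d
            d = 1.0 / (d if abs(d) >= fpmin else fpmin)
            c = 1.0 + aa / c
            c = c if abs(c) >= fpmin else fpmin
            delta = d * c
            h *= delta
        if abs(delta - 1.0) < eps:
            break
    return h

def _betainc(a, b, x):
    """Regularized incomplete beta I_x(a, b)."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    ln_bt = (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
             + a * math.log(x) + b * math.log(1.0 - x))
    bt = math.exp(ln_bt)
    if x < (a + 1.0) / (a + b + 2.0):
        return bt * _betacf(a, b, x) / a
    return 1.0 - bt * _betacf(b, a, 1.0 - x) / b

def f_cdf(x, d1, d2):
    """P(F <= x) for an F(d1, d2) random variable."""
    if x <= 0.0:
        return 0.0
    return _betainc(d1 / 2.0, d2 / 2.0, d1 * x / (d1 * x + d2))

def f_inv_rt(p, d1, d2):
    """Right-tailed inverse, matching Excel's F.INV.RT: x with P(F > x) = p."""
    lo, hi = 0.0, 1.0
    while 1.0 - f_cdf(hi, d1, d2) > p:   # expand until the tail is below p
        hi *= 2.0
    for _ in range(200):                 # plain bisection
        mid = 0.5 * (lo + hi)
        if 1.0 - f_cdf(mid, d1, d2) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(f_inv_rt(0.05, 3, 5), 4))   # critical value from the example, approx 5.41
```

This matches the standard F tables (F with 3 and 5 degrees of freedom has a 5% upper-tail critical value of about 5.41).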
Logitech G230 and G35 Headsets Review: Immersive Gaming Nirvana? - PC Perspective
Manufacturer: Logitech

Logitech Focuses in on Gaming

Logitech's latest headsets promise maximum bang for the buck. Logitech has been around seemingly forever. The Swiss-based company is ubiquitous in the peripherals market, providing products ranging from keyboards and mice to speakers and headsets. There is not much that the company does not offer when it comes to PC peripherals. Their 3-button mice back in the day were considered cutting edge and also happened to be semi-programmable. Since that time we have seen them go from ball mice, to optical mice, to the latest laser-based products that offer a tremendous amount of precision. Gaming has become one of the bigger movers for Logitech, and they have revamped their entire lineup as well as added a few new products to hopefully cash in on the popularity of modern gaming. To further address this market Logitech has designed and marketed a new batch of gaming headsets. These promise to be moderately priced but high-quality products that bear the Logitech name. We go from the very basic up to the top 7.1 wireless products. Originally these covered a pretty significant price range, but lately the discounts have been extremely deep. The lowest-end gaming headset is at $40 US while the 7.1 wireless model comes in around $90 US. I am looking at two models today that span the lowest end to the second highest. The first headset is the G230 analog set. The second is the G35 wired 7.1 USB with Dolby Headphone technology. I have never been a fan of wireless headphones, but the G35 should be a fairly good approximation of the performance of that part. My goal is to look at these two wired units and see what Logitech can offer at these two very affordable price points.

The G230

These are the introductory analog models that Logitech offers in their G series collection.
These are an eye-catching shade of red that are entirely analog based. Users will need to utilize their sound card or integrated audio on their motherboard to support these products. There is an inline volume control that also features a microphone mute toggle. The headphones are relatively lightweight and comfortable to wear. The ear pads are very reminiscent of “sand knit” jerseys that I remember from my more athletic youth. The material and padding stay fairly comfortable throughout long hours of gaming, but like any headset the user will take time to get used to the feel and the pressure it puts on the ears. The more a user will use these headsets, the more comfortable they will become. The headset utilizes 40 mm paper speakers to provide sound. These are about average size for this particular market, but we see some competing brands utilize 50 mm speakers in a variety of materials that may or may not provide better sound. The woven cable that connects the headset to the computer is a very small diameter unit which handles left, right, and microphone wires. We all know that the larger the diameter, the lower the resistance, so it is a little concerning to me that the cable is as slight as it is. There is supposedly 35 ohms of resistance with these headsets, so a better than average audio setup that could include amplified output might be key for good performance with this particular headset. Some people might recoil from the thought of paper cones, but it is nothing to be afraid of. Paper is still used in a wide variety of audio applications due to its light weight and natural rigidity. Much has been done over the years to improve durability of paper cones so that they continue to stay very rigid after many years of use. 
High end audio will often use composite or metal cones (New England Audio Research had some fantastic aluminum alloy units that were extremely pricey, but worth it), but when we look at applications which will be sold for less than $100 those exotic materials are just too expensive. Paper speakers are still very common even in high end headphones. The combination of light weight and rigidity makes for a compelling argument. The boom microphone is a nicely constructed, flexible unit that is rubberized on the outside. The microphone is a noise cancelling unit that again can be muted by the inline pod on the cable. The boom does not have any of the LED lighting that the higher end models utilize to tell a user when their mic is muted or active. It swivels easily into place and the boom can be bent to a decent degree to more adequately place the mic near the mouth. The inline pod also handles headset volume via a rheostat. These are the most basic gaming headphones that Logitech makes. Do not expect a lot of bells and whistles for $40 US.

26 Comments

Rick0502 on July 8, 2015 at 3:45 pm
is this a new model of the g35? ive had this headset for about 5 years now.

Josh Walrath on July 8, 2015 at 4:21 pm
Looks much the same as the original. Obviously the packaging has changed, but I will check and see if there are differences or revisions.

Ben on July 9, 2015 at 8:11 am
Me too. Terrible twist on the cable that was impossible to avoid. Caused all sorts of metal fatigue. Never again.

homerdog on July 8, 2015 at 4:52 pm
I have the G930 which looks identical to the G35 except it is wireless. It can be found easily for $100 so I dunno why you wouldn't spend the little bit extra to lose the wire. Battery life is very good and of course if it does die you can always plug it in and keep using it.
Josh Walrath on July 8, 2015 at 5:03 pm
I would of course be curious to hear if there are any differences in audio quality going with a battery/wireless unit vs. wired USB? As mentioned in this article there is a big difference between their pure analog version and the USB based.

Brett Suydam on July 8, 2015 at 6:23 pm
I cannot say for the USB based version but I have the 930 and the wireless version's audio is pretty good. I'm no audiophile so it works for me quite well. The only negative is that the battery life is never as good as the application says it will be and occasionally I'll get crack noises and such from the wireless.

homerdog on July 10, 2015 at 5:36 pm
I can't compare the sound quality to the wired version since I don't have it, but I will say the 930 sounds very good to me. No complaints in the slightest.

homerdog on July 10, 2015 at 5:39 pm
To expand on this, I assume the 930 gets the exact same digital audio stream as the G35 so any potential differences in sound quality would likely come from the speakers themselves. I also suspect that the speakers are identical but have no way to prove that.

Josh Walrath on July 11, 2015 at 7:01 pm
I'm thinking more about the bandwidth of a wired vs. wireless solution. Do we see downsampling on the wireless side due to potentially lower bandwidth?

MarkB on July 13, 2015 at 5:38 pm
I've had both the G35 and the G930 after the right earcup broke. I've noticed that there seems to be less overall power in the G930 compared to the G35, and people whom I talk to using the microphone said the quality was a bit lower. It's still good, but if you want a better quality sound and microphone, go with the G35.

Anonymous on July 8, 2015 at 5:03 pm
Have owned the G230's for about a year or more and the ear cups definitely feel hard at first before they gradually sink on your ears. Not uncomfortable though, more awkward at first. Being my first headset/mic combo I'm not sure if this is normal or not, but out of the box it was practically useless without going into your audio configuration and adding a +10db gain. Then people could actually hear me, but at the cost of having this very low static in the background. Any further increase in gain made the static on your end very annoying. I'm not sure if this is normal of all headset mic combos or not. If it is then disregard my lack of knowledge on the subject, but if not then you might want to think twice unless you're really budget minded. They're great for the price, but the mic gain part can get super annoying. You don't really have a choice either since the default +0db gain makes you pretty much unhearable to anyone on Steam/Skype from my testing.

Josh Walrath on July 8, 2015 at 5:12 pm
In my review I did mention how poorly the built-in mic for the G230s fared in our testing. While people on Skype on the other end heard me fine, the quality was just sub-par.
Anonymous on July 8, 2015 at 6:04 pm
Oh that's why you don't sound as good in the podcast as Ryan 😉

Josh Walrath on July 8, 2015 at 6:15 pm
I really should get a nice desk mic soon…

Anonymous on July 8, 2015 at 11:43 pm
We must hear your laughter and inappropriate comments in their full glory

razor512 on July 8, 2015 at 7:33 pm
How does the quality compare to other headphones in its price range? (how much quality are you giving up to the gamer tax?) Why are the mics always so horrible on these headsets? Gaming headsets are rarely ever worth even a quarter of their price. The build quality is often lower, and most of the parts are non user replaceable. They often use enamel coated wires in the ear cups (thus shortening the life of the headphone by allowing earcup flex to eventually break the wire). Companies use enamel coating instead of individual insulators as solid core copper wire breaks too quickly, but stranded wire does not break soon enough, so they use enamel which eventually rubs off and shorts, thus causing the device to fail more quickly. Overall, it is best to invest in a proper desktop mic, and then get a good pair of headphones. A quality desktop mic such as the Blue Yeti, Audio-Technica AT2020, Audio-Technica ATR2500, or Rode NT-USB all give very good quality, and best of all, you can upgrade to whatever headphone you want without worrying about the microphone. The microphones in most gaming headsets are some of the lowest quality mics possible. Really, why does the mic on a $200 gaming headset such as the Sennheiser G4ME series sound worse than a $3 mic made by Zalman? (that is not even an audio company) Gaming headset mics just suck, and they tack on a massive price premium just to essentially throw in a cheap mic.
G4me zero: https://youtu.be/6JBbGLDRVVw?t=434
Logitech G230: https://youtu.be/BhnMVUe_X2I?t=62
Logitech G35: https://youtu.be/DFyW7_glgRA?t=251
Zalman ZM-mic 1: https://youtu.be/yZSZcgiHeyA?t=37

Josh Walrath on July 8, 2015 at 8:05 pm
Mics like that are cheap, and the beancounters at Logi probably ask themselves how many people actually need a high quality mic while screaming over Vent or Teamspeak? As mentioned in my article, Plantronics seems to put good quality mics in even their lower end products.

Anonymous on July 8, 2015 at 9:45 pm
Illustrates the background noise on the G230 perfectly. At least some of the other mics sound more clear despite the drastically higher price.

Boggins on July 8, 2015 at 7:52 pm
I've had a pair of the G35s for around 5 years now. They were originally wired USB. Mine are still kicking, but the cheap vinyl covering over the foam earpieces has long disintegrated. Are they still using that cheap paper thin vinyl wrap?

Josh Walrath on July 8, 2015 at 8:04 pm
The covering on the earpieces feels fairly thick. Certainly not paper thin. When I get home from work I will check it out a bit further.

Prodeous on July 8, 2015 at 9:41 pm
Any recommendation for true 5.1 or 7.1 headsets?

Josh Walrath on July 8, 2015 at 10:12 pm
None? I have messed with some other multi-speaker headsets, and none of them were overly impressive. I would rather have a clean set of 2 channel headphones.

Anonymous on July 8, 2015 at 10:50 pm
If you wanted a pair of headphones with microphone which had good sound for music (not rap, I said music) including rock as well as gaming, how much would I need to spend to get into the "good" and "very good" range?
I would like a pair which exclude outside noise so I can use them on flights to watch movies and listen to music including Queen, AC/DC and other 70s/80s/90s rock/popular music. Be very interested in your thoughts if you wouldn't mind Josh.. Western Australia

Josh Walrath on July 9, 2015 at 3:40 pm
Hmm, good questions. You probably need to look in the $150 range for a good set that will have the sound quality you desire. The G35 is certainly a nice set, and they have some terrific sales going on. Another option that I like are the Corsair headsets. They have pretty balanced sound and good microphone quality.

Hipolito Garcia on July 10, 2015 at 6:10 am
Just a quick comment on the article: at the moment of writing this, the G35 are just $59.99 USD vs the $49.99 for the G230, so the "double the price" disadvantage of the G35 is gone for now. I was thinking of getting the G230 (got red+black PC) but I'm probably switching to the G35. Thanks for the review!
During editing, how do I enter exact coordinates when moving objects?

FAB 3000 supports both RELATIVE and ABSOLUTE coordinates in all editing commands. During Move, Copy, or similar commands, FAB 3000 will first ask for a base point. If you're using relative coordinates it can be anything, such as 0,0, or just picking any point on the screen. The prompt for the "Displacement" is where you enter the relative coordinates.

Note: To use relative coordinates, please make sure to use the @ prefix only when prompted for the "Displacement". Relative coordinates start with the character '@'. There are 2 formats for relative coordinates: Offset and Polar.

OFFSET: @X,Y
POLAR: @Distance<Angle

Examples:
@4.5,2 : Moves selection 4.5 to the right, and 2 up
@-3,-3 : Moves selection 3 to the left, and 3 down
@4<33 : Moves selection a distance of 4 at 33 degrees

Here's a quick tutorial movie, using Copy with Polar relative coordinates for the destination.
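To make the two formats concrete (this is an illustrative Python sketch, not part of FAB 3000), both relative forms resolve to a plain (dx, dy) displacement:

```python
import math

def parse_relative(coord):
    """Resolve a FAB 3000-style relative coordinate ('@X,Y' or '@Dist<Angle')
    into a (dx, dy) displacement. The angle is in degrees, as in the examples."""
    if not coord.startswith("@"):
        raise ValueError("relative coordinates must start with '@'")
    body = coord[1:]
    if "<" in body:                                  # polar: @Distance<Angle
        dist, angle = (float(p) for p in body.split("<"))
        return (dist * math.cos(math.radians(angle)),
                dist * math.sin(math.radians(angle)))
    dx, dy = (float(p) for p in body.split(","))     # offset: @X,Y
    return (dx, dy)

print(parse_relative("@4.5,2"))   # (4.5, 2.0)
print(parse_relative("@4<33"))    # about (3.35, 2.18)
```

So @4<33 ends up moving the selection roughly 3.35 to the right and 2.18 up.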
Modeling Stock Market Volatility Using GARCH Models: A Case Study of Nairobi Securities Exchange (NSE)
Open Journal of Statistics, Vol. 07, No. 02 (2017), Article ID: 76003, 13 pages
Arfa Maqsood^1, Suboohi Safdar^1, Rafia Shafi^1, Ntato Jeremiah Lelit^2
^1 Department of Statistics, University of Karachi, Karachi, Pakistan
^2 Actuarial Science and Risk Management, University of Karachi, Karachi, Pakistan
Copyright © 2017 by authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).
Received: March 21, 2017; Accepted: April 27, 2017; Published: April 30, 2017

The aim of this paper is to use General Autoregressive Conditional Heteroscedastic (GARCH) type models to estimate the volatility of daily returns of the Kenyan stock market, the Nairobi Securities Exchange (NSE). The conditional variance is estimated using data from March 2013 to February 2016. We use both symmetric and asymmetric models to capture the most common features of stock markets, such as the leverage effect and volatility clustering. The results show that the volatility process is highly persistent, thus giving evidence of the existence of a risk premium for the NSE index return series. This in turn supports the hypothesis of a positive correlation between volatility and expected stock returns. Another fact revealed by the results is that the asymmetric GARCH models provide a better fit for the NSE than the symmetric models. This proves the presence of the leverage effect in the NSE return series.

Keywords: Nairobi Securities Exchange (NSE), Symmetric and Asymmetric GARCH Models, Volatility, Leverage Effect

1. Introduction

Modeling and forecasting volatility in financial time series has become an area that has attracted a lot of research, in both empirical and theoretical aspects.
In this situation, the models established to capture the variations in the conditional mean of a financial time series are no longer sufficient on their own, and the accuracy of such mean models in estimation is reduced. Engle [1] first identified this dilemma and presented an equation for the variance which helped to capture the volatility in the series. He observed the autoregressive effect in the conditional variance and suggested an autoregressive conditional heteroscedasticity (ARCH) process using lagged disturbances. Since the publication of the ARCH model, there has been a lot of research work done in this area. Some limitations of the ARCH model are identified and listed by Brooks [2] and Tsay [3]. The empirical evidence based on Engle's work showed that an ARCH process of high order was needed to capture the dynamic behavior of the conditional variance. Thus, Bollerslev [4] proposed an extension of the ARCH-type model, called the Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model. The GARCH model fulfills this requirement since it is based on an infinite ARCH specification. Many useful properties of GARCH models are given by Posedel [5], Enders [6], and Patterson [7]. ARCH and GARCH models are used to capture both volatility and leptokurtosis. The so-called "leverage effect" is also often observed in financial time series (see Black [8]). This usually occurs when stock price changes are negatively correlated with changes in volatility. Since ARCH and GARCH models are symmetric in nature, they fail to capture the leverage effect. In order to address this problem, many nonlinear extensions of the GARCH model have been proposed. These include the asymmetric class of GARCH models, such as the exponential GARCH (EGARCH) model by Nelson [9], the so-called GJR model by Glosten et al. [10], and the power GARCH (PGARCH) model by Ding et al. [11].
In light of these observations about financial time series, a wide range of varying-variance models have been used to estimate and predict volatility. Financial time series have tails that are thicker than those of the normal distribution. Another problem encountered by GARCH models is that they do not fully embrace this property of thick/heavy tails, which is so evident in the behavior of financial time series. To address this problem, Bollerslev [12] and Baillie and Bollerslev [13] used the Student's t-distribution and other non-normal distributions. Liu and Brorsen [14] used an asymmetric stable density to capture skewness. Fernandez and Steel [15] used the skewed Student's t-distribution to model both skewness and kurtosis. Today, the emphasis is mostly on the application of heteroscedastic models to financial data. A large number of empirical studies have addressed the volatility of stock markets using the family of ARCH/GARCH processes. Such studies serve the purpose of estimating and predicting the conditional variance of stock returns over a specified period. The volatility analysis of stock markets is important for investors in measuring and managing market risks more accurately, which in turn is useful in pricing capital assets, pricing financial securities, and selecting portfolios. For instance, the reader may benefit from the research done by Ahmed and Suliman [16], Naimy [17], Shamiri and Isa [18], and Kalu [19]. They used both symmetric and asymmetric models from the GARCH family to capture stock market volatility. Ahmed and Suliman [16] worked with reference to the Sudan stock market, while Kalu [19] provides a volatility analysis of the Nigerian stock exchange. Naimy [17] modeled the volatility of the Paris stock market using GARCH (1,1) and compared it with the exponentially weighted moving average (EWMA).
Similarly, Shamiri and Isa [18] provide a comparison of the usual GARCH model with the nonlinear asymmetric NAGARCH models based on the Malaysian stock market. Another study, by Wagala et al. [20], considers the Nairobi Securities Exchange (NSE) weekly returns using ARCH-type models. See also Sharma and Vipul [21]. The main objective of this paper is to model stock return volatility for Kenya's Nairobi Securities Exchange (NSE) by applying different univariate specifications of GARCH-type models. The rest of this paper is organized as follows. Following this introduction, Section 2 provides a brief review of the methodology of modeling volatility using some well-known symmetric and asymmetric GARCH models. A general overview of the Nairobi Securities Exchange is provided in Section 3. The description of the data and summary statistics are also presented in the same section. The results of the estimated GARCH models are discussed in Section 3.3. Lastly, Section 4 concludes the paper.

2. Methodology of Modeling Volatility

Since the development of GARCH models, a number of extensions and variants have been proposed. These variants fall into one of two broad classes: symmetric and asymmetric GARCH models. Before we discuss these extensions, let us set up some notation that is useful for describing the general GARCH framework. Let $\epsilon_t$ denote a real-valued discrete-time stochastic process and let $\psi_t$ be the set of all information through time t. We define the model that contains the features of both the conditional mean and the conditional variance as given below.
$r_t = E(r_t \mid \psi_{t-1}) + \epsilon_t, \quad \epsilon_t \sim N(0, \sigma_t^2)$. (1)

Equation (1) can be rewritten as

$r_t = \mu_t + \epsilon_t$. (2)

Here, $\mu_t = E(r_t \mid \psi_{t-1})$ is the expression used to model the conditional mean of $r_t$ given the information through time t − 1. The error is assumed to be a non-constant quantity with respect to time and is given by

$\epsilon_t = \sigma_t a_t$ (3)

where $\sigma_t = \sqrt{V(r_t \mid \psi_{t-1})}$ and $a_t \sim N(0, 1)$. Keeping Equation (2) for modeling the conditional mean of returns, we briefly present a number of specifications of GARCH models representing different ways of expressing the conditional variance. These are given in the following sub-sections.

2.1. Symmetric GARCH Models

In the symmetric GARCH models, the conditional variance depends only on the magnitude of the underlying asset returns and not on their sign. This ignores any effect of the sign of a return shock on the conditional variance. The widely used symmetric GARCH models include the generalized autoregressive conditional heteroscedasticity (GARCH) model and the GARCH-in-mean model. These are discussed below.

2.1.1. GARCH Models

A generalized autoregressive conditional heteroscedasticity (GARCH) model is the first and basic symmetric model, developed by Bollerslev [4]. It defines the conditional variance as a linear function of past squared residuals and lagged conditional variances:

$\sigma_t^2 = \alpha_0 + \sum_{i=1}^{q} \alpha_i \epsilon_{t-i}^2 + \sum_{i=1}^{p} \beta_i \sigma_{t-i}^2$ (4)

where $\alpha_0$ is the constant term, $\alpha_1, \alpha_2, \ldots, \alpha_q$ are the coefficients of the ARCH terms, and $\beta_1, \beta_2, \ldots, \beta_p$ are the coefficients of the GARCH terms.
The q and p are the respective orders of the ARCH and GARCH processes. The simplest specification of this model is the GARCH (1,1) model, that is, ${\sigma }_{t}^{2}={\alpha }_{0}+{\alpha }_{1}{\epsilon }_{t-1}^{2}+{\beta }_{1}{\sigma }_{t-1}^{2}.$ (5) 2.1.2. GARCH-in-Mean (GARCH-M) Models Another well-known symmetric model is the GARCH-in-Mean (GARCH-M) model developed by Engle et al. [22]. In most financial markets, we expect risk to be compensated by a higher return, and hence the return of a security may depend on its volatility. To model such a phenomenon one might consider the GARCH-M model. This variant of the GARCH family allows the conditional mean of the return series to depend on its conditional variance. A simple GARCH-M (1,1) model is defined by two equations; the one for the conditional mean is given by ${r}_{t}={\mu }_{t}+{\epsilon }_{t}$ where ${\mu }_{t}=\mu +\lambda {\sigma }_{t}^{2}.$ (6) The equation for the conditional variance is the same as that of the GARCH (p, q) model in Equation (4), with the special case GARCH (1,1) given by Equation (5). 2.2. Asymmetric GARCH Models This section deals with the asymmetric models, which are motivated by the need to distinguish between good news and bad news and their impact on volatility in financial markets. Engle and Ng [23] introduced the news impact curve and summarized the effect of negative shocks (bad news) and positive shocks (good news) on the time-varying conditional variance. For many stock returns, there is a negative correlation between the current return and future volatility. The tendency of volatility to decline when returns rise and to rise when returns fall is called the leverage effect. Symmetric models fail to explain the leverage effect because their conditional variance cannot respond asymmetrically. Consequently, many asymmetric variants of GARCH models have been established to deal with this phenomenon. Some of them are discussed below. 2.2.1.
Exponential GARCH (EGARCH) Models Nelson [9] proposed the exponential GARCH (EGARCH) model, designed to allow an asymmetric effect between positive and negative asset returns. The EGARCH (p,q) specification is given by $\mathrm{log}\left({\sigma }_{t}^{2}\right)={\alpha }_{0}+\sum _{i=1}^{q}{\alpha }_{i}\left[|\frac{{\epsilon }_{t-i}}{{\sigma }_{t-i}}|\right]+\sum _{i=1}^{p}{\beta }_{i}\mathrm{log}\left({\sigma }_{t-i}^{2}\right)+\sum _{i=1}^{q}{\gamma }_{i}\frac{{\epsilon }_{t-i}}{{\sigma }_{t-i}}$ (7) where ${\gamma }_{i}$ is the asymmetric or leverage effect parameter. The conditional variance is guaranteed to be positive even if the parameters are negative, because the model works with the log of the conditional variance. If the relationship between the current return and future volatility is negative, then $\gamma$ will be negative and the leverage effect is confirmed. 2.2.2. Threshold GARCH (TGARCH) Models Another important volatility model commonly used to handle the leverage effect is the threshold GARCH (TGARCH) model. It was developed by Glosten, Jagannathan, and Runkle in 1993, so it is also called the GJR model (see Zakoian [24]). The TGARCH (p, q) specification of the conditional variance is given by ${\sigma }_{t}^{2}={\alpha }_{0}+\sum _{i=1}^{q}{\alpha }_{i}{\epsilon }_{t-i}^{2}+\sum _{i=1}^{p}{\beta }_{i}{\sigma }_{t-i}^{2}+\sum _{i=1}^{q}{\gamma }_{i}{\epsilon }_{t-i}^{2}{I}_{t-i}$ (8) where ${I}_{t-i}=1$ if ${\epsilon }_{t-i}<0$, otherwise ${I}_{t-i}=0$, and ${\gamma }_{i}$ is the leverage effect parameter. If ${\gamma }_{i}=0$, the model collapses to the classical GARCH (p, q) process. Otherwise, when the shock is positive, the effect on volatility is ${\alpha }_{i}$ (i.e. ${I}_{t-i}=0$), and when the shock is negative, the effect on volatility is ${\alpha }_{i}+{\gamma }_{i}$ (i.e. ${I}_{t-i}=1$). Hence, for ${\gamma }_{i}>0$, bad news has a larger impact on the conditional variance than good news does. 2.2.3.
Power GARCH (PGARCH) Models Ding et al. [11] proposed another asymmetric variant of the GARCH family, the power GARCH (PGARCH) model. Unlike the rest of the GARCH family, it can model the conditional standard deviation as well as the conditional variance. The PGARCH (p, q) specification is ${\sigma }_{t}^{\delta }={\alpha }_{0}+\sum _{i=1}^{q}{\alpha }_{i}{\left(|{\epsilon }_{t-i}|-{\gamma }_{i}{\epsilon }_{t-i}\right)}^{\delta }+\sum _{i=1}^{p}{\beta }_{i}{\sigma }_{t-i}^{\delta }$ (9) where $\delta >0$ is the power parameter. For $\delta =2$, the model becomes a standard GARCH model that allows for a leverage effect. For $\delta =1$, the model estimates the conditional standard deviation rather than the conditional variance. 3. Modeling Volatility of Nairobi Securities Exchange 3.1. An Overview of Nairobi Securities Exchange The Nairobi Securities Exchange (NSE) is the only stock exchange in Kenya. It was started as the Nairobi Stock Exchange in 1954, when Kenya was still a British colony, as a voluntary association of stockbrokers in the European community registered under the Societies Act. It changed its name to Nairobi Securities Exchange in July 2011. It is based in Nairobi, the capital city of Kenya. The principal index of the NSE is the NSE 25 Share Index. It uses the local currency (Kenyan shilling) for its operations, and on 24th October 2014 it had a market capitalization of about Ksh 2.248 trillion. The NSE is the leading securities exchange in East Africa. Securities traded on the NSE are ordinary shares and investment units. Other products are also available and traded at the NSE, including derivative securities and debt securities such as government bonds. NEXT is the Nairobi Securities Exchange derivatives market, where members can trade futures contracts across a variety of asset classes such as equities, currencies and fixed income securities.
It is regulated by the Central Bank of Kenya (CBK). The self-listing of NSE stock through an IPO on 27th June 2014 made the NSE join the Johannesburg Stock Exchange as one of the only self-listed exchanges in Africa. In March 2015, the NSE officially joined the United Nations Sustainable Stock Exchanges (SSE) initiative, making a voluntary pledge to inform its stakeholders of the importance of integrating sustainability in capital markets. 3.2. Data Description and Basic Statistics The time series data used for modeling volatility in this paper are the daily closing prices of the Nairobi Securities Exchange (NSE) index over the period from 18th March 2013 to 18th February 2016, resulting in a total of 730 observations excluding public holidays. The daily returns (r[t]) are calculated as continuously compounded returns, i.e., the first differences of the log prices of the NSE index on successive days: ${r}_{t}=\mathrm{log}\left(\frac{{p}_{t}}{{p}_{t-1}}\right)$ (10) where ${p}_{t}$ and ${p}_{t-1}$ are respectively the closing market index of the NSE on the current day and the previous day. Various descriptive statistics are calculated and reported in Table 1, in order to characterize the properties of the daily NSE return series ( ${r}_{t}$ ) during the period of the study. Table 1. Various descriptive statistics of NSE daily return series. The mean return is 0.000297 with a standard deviation of 0.007429. There is also a clear excess kurtosis: the high kurtosis value of 52.377 indicates a leptokurtic distribution, an apparent departure from normality. Another important test of normality is the Jarque-Bera (JB) statistic, which confirms that the null hypothesis of normality for the daily NSE returns should be rejected at the 1% level of significance. We can thus summarize that the NSE return series does not conform to normality and exhibits positive skewness (i.e., the distribution has a heavier right tail).
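The return calculation in Equation (10) and the first two statistics in Table 1 are straightforward to reproduce from any price series. A minimal sketch in Python (the price values below are hypothetical, for illustration only; they are not the NSE data):

```python
import math

def log_returns(prices):
    # continuously compounded returns, Equation (10): r_t = log(p_t / p_{t-1})
    return [math.log(pt / pt1) for pt1, pt in zip(prices, prices[1:])]

def mean_std(returns):
    # sample mean and sample standard deviation of the return series
    n = len(returns)
    m = sum(returns) / n
    s2 = sum((r - m) ** 2 for r in returns) / (n - 1)
    return m, math.sqrt(s2)

# hypothetical closing index levels
prices = [100.0, 100.5, 100.2, 101.1, 100.9]
r = log_returns(prices)
m, s = mean_std(r)
```

Skewness and kurtosis are obtained the same way from the third and fourth central moments; note that 730 closing prices yield 729 returns.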
The plot of the NSE daily returns is displayed in Figure 1 (Time plot of NSE daily return series). We observe a large swing in the period of April 2015, with maximum and minimum returns of 0.0373 and −0.0374, respectively. 3.3. Analysis of NSE Return In order to analyze the return series, the first step is to check its stationarity. For this purpose, we employed the familiar Augmented Dickey Fuller (ADF) test and Phillips Perron (PP) test for the original series ( ${p}_{t}$ ) and the return series ( ${r}_{t}$ ). The test results are presented in Table 2. The series ( ${p}_{t}$ ) is not stationary; however, the results for the returns ( ${r}_{t}$ ) lead to the rejection of the null hypothesis of a unit root, and hence the return series is stationary. It is important to examine the residuals for evidence of possible heteroscedasticity before applying the methodology of modeling the conditional variance. In order to test for the presence of heteroscedasticity in the residuals of the NSE index return series, the Lagrange Multiplier (LM) test is applied to test the hypothesis that ${\alpha }_{1}={\alpha }_{2}=\cdots ={\alpha }_{q}=0$, where q is the order of the ARCH effect. The test procedure entails first obtaining the residuals ${\stackrel{^}{\epsilon }}_{t}$ from the ordinary least squares regression of the conditional mean equation, which might be an autoregressive (AR) process, a moving average (MA) process, or a combination of the two termed an ARMA process. We assume a constant mean model for the conditional mean, and the LM test is applied to compute the test statistic $T{R}^{2}$, where $T$ is the number of observations and ${R}^{2}$ is the coefficient of multiple correlation obtained from regressing the squared residuals on q of their own lagged values. The test statistic is evaluated against the chi-square ${\chi }^{2}(q)$ distribution (see Patterson (2000) for details on the ARCH-LM test).
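For intuition, the LM statistic TR² can be computed by hand. The sketch below illustrates the simplest case q = 1, regressing the squared residuals on a single lag by ordinary least squares (a simplified illustration of the test logic, not the exact routine used in the paper):

```python
def arch_lm_stat(resid):
    # Engle's ARCH-LM test for q = 1:
    # regress e_t^2 on e_{t-1}^2 and return T * R^2,
    # which is asymptotically chi-square with 1 degree of freedom
    e2 = [e * e for e in resid]
    y, x = e2[1:], e2[:-1]
    T = len(y)
    mx, my = sum(x) / T, sum(y) / T
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx          # OLS slope
    a = my - b * mx        # OLS intercept
    ss_res = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r_squared = 1.0 - ss_res / ss_tot
    return T * r_squared
```

Large values of the statistic relative to the χ²(1) critical value reject the null of no ARCH effect; residuals whose squares are strongly predictable from their own lag give a statistic close to T.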
The results of the LM test for various ARCH orders q are presented in Table 3, which provide strong evidence for rejecting the null hypothesis of constant variance at all lags included. Rejecting H[0] indicates the presence of an ARCH effect in the residual series, and therefore we can conclude that the variance of the NSE index return is non-constant over all periods specified. Once volatility is confirmed in the data, we proceed to estimate the parameters of both the conditional mean and conditional variance equations. For this purpose, we employed the symmetric and asymmetric GARCH models GARCH (1,1), GARCH-M (1,1), EGARCH (1,1), TGARCH (1,1), and PGARCH (1,1). The estimation procedure uses the Broyden-Fletcher-Goldfarb-Shanno (BFGS) optimization method, useful for solving unconstrained non-linear problems. The estimation results are presented in Table 4. The constant mean parameter in the mean equation is not significant in any of these models. However, we observe a significant constant ( ${\alpha }_{0}$ ), significant ARCH effect ( ${\alpha }_{1}$ ), and significant GARCH effect ( ${\beta }_{1}$ ) in the conditional variance equations of GARCH (1,1), GARCH-M (1,1), EGARCH (1,1), and TGARCH (1,1). The contribution of the GARCH effect to the conditional mean of the GARCH-M (1,1) model is insignificant, and the estimates of the conditional variance equations of GARCH (1,1) and GARCH-M (1,1) are similar. Therefore, owing to the insignificant GARCH-in-mean term, one should prefer the simpler, parsimonious GARCH (1,1) model. The estimated power parameter ( $\delta$ ) in the power GARCH model is found to be 3.697, which is significant at the 5% level. However, the PGARCH model provides only a significant GARCH effect and does not offer a considerable contribution of lagged squared disturbances, that is, the ARCH effect. The leverage effect ( $\gamma$ ) is estimated for the three asymmetric GARCH models. Table 2.
Results of unit root test for original NSE index series, and NSE return series. *Significant at both levels of 1% and 5%. Table 3. Results of ARCH-LM test for different values of q. Table 4. Estimation results of GARCH (1,1), GARCH-M (1,1), EGARCH (1,1), TGARCH (1,1), and PGARCH (1,1) models. *Significant at 5% level. **Significant at 1% level. The estimated leverage effect takes the values 0.0506, −0.1076, and 0.165 for the EGARCH, TGARCH, and PGARCH models, respectively. We found significance only for the TGARCH (1,1) process, which confirms the leverage effect. The value of the asymmetric parameter for the TGARCH (1,1) process is negative, which means that volatility increases more with bad news (negative shocks) than with good news (positive shocks) of the same magnitude for the NSE index. This result is consistent with the finding of Wagala et al. [20], who used ARCH-type processes to model the volatility of NSE weekly returns and provided evidence of volatility clustering and a leverage effect. In our analysis, both the GARCH (1,1) and TGARCH (1,1) models performed well in analyzing the volatility clustering of stock returns; however, TGARCH (1,1) outperformed the GARCH (1,1) model, mainly due to its ability to capture the asymmetry exhibited by the stock data. The performance of these estimated models is assessed on the basis of some accuracy measures. In our study, we compute the Akaike information criterion (AIC), Schwarz criterion (SC), and root mean square error (RMSE). The results are displayed in Table 5. A look at the table reveals few differences among the accuracy measures obtained for the five estimated models. Based on these measures, we suggest that TGARCH (1,1) is the more suitable process for capturing the main features of NSE returns, such as volatility and the leverage effect. We plot the observed and estimated NSE prices for the period from 19th March 2013 to 18th February 2016 in Figures 2(a)-(e).
These graphs show a close match to the data, exhibiting that these estimated econometric models provide a good fit to the observed NSE time series. However, a closer look at these graphs reveals that Figure 2(d) shows a better fit of the TGARCH (1,1) model to the observed NSE index, as a smoother fitted curve can be seen. Table 5. Some accuracy measures for different specifications of GARCH models. Figure 2. (a) Plot of observed and estimated NSE prices from GARCH (1,1) Model. (b) Plot of observed and estimated NSE prices from GARCH-M (1,1) Model. (c) Plot of observed and estimated NSE prices from EGARCH (1,1) Model. (d) Plot of observed and estimated NSE prices from TGARCH (1,1) Model. (e) Plot of observed and estimated NSE prices from PGARCH (1,1) Model. 4. Conclusion In this paper, we presented an empirical study modeling the Nairobi Securities Exchange (NSE) using the family of GARCH models. Among many symmetric and asymmetric heteroscedastic processes, we estimated the GARCH (1,1), GARCH-M (1,1), EGARCH (1,1), TGARCH (1,1), and PGARCH (1,1) models. On the one hand, the presence of volatility clustering is strongly confirmed by all these estimated models, as we obtained significant estimates of the ARCH effect and GARCH effect parameters. On the other hand, the leverage effect in NSE returns is also confirmed: the asymmetric TGARCH (1,1) model has significant estimates of the leverage effect. From this analysis, we may say that the TGARCH (1,1) model is more appropriate in terms of capturing the volatility clustering and leverage effect of the NSE stock market. The authors wish to thank the Department of Statistics, University of Karachi for providing the computing and research facilities. The authors thank the anonymous referees of the journal “Open Journal of Statistics” for their helpful suggestions and comments. Cite this paper Maqsood, A., Safdar, S., Shafi, R. and Lelit, N.J.
(2017) Modeling Stock Market Volatility Using GARCH Models: A Case Study of Nairobi Securities Exchange (NSE). Open Journal of Statistics, 7, 369-381. https://doi.org/10.4236/ojs.2017.72026
How to Create and Deploy a Clarity Smart Contract on the Stacks Blockchain | QuickNode Guides Stacks is an open-source layer-1 blockchain that utilizes the Proof of Transfer (PoX) consensus mechanism. The Stacks blockchain leverages Bitcoin's security and allows direct read access to the Bitcoin chain. Stacks is also exciting because it supports smart contracts, so developers can build dApps such as decentralized exchanges, DAOs, NFT marketplaces, and more! This guide will demonstrate how to create and deploy a Clarity smart contract on the Stacks Testnet using Clarinet, a command line tool for developing, testing, and deploying smart contracts. Let us get started! What You Will Need What You Will Do • Learn about Clarity & Clarinet • Set up a Clarinet project • Create a Storage smart contract using Clarity • Test the smart contract in a local environment • Deploy the smart contract to the Stacks Testnet Clarity 101 Clarity is a Lisp-styled smart contract language designed for the Stacks blockchain. It differs from other smart contract languages in that it is interpreted rather than compiled, and it is non-Turing complete. What does this mean exactly? Well, on most EVM-based chains, you need to compile your Solidity contract code into bytecode so that the EVM can understand it. The Stacks blockchain is different: you don't need to compile your code. Instead, the executed code is human-readable, which makes it easily auditable. Another benefit for users is that when you deploy a smart contract on the Stacks blockchain, the source code for the contract is automatically available, which is not always the case on EVM-based chains. Other properties of Clarity include: • No reentrancy attack vulnerabilities • No arithmetic underflows or overflows • Precise gas estimation • Built-in custom token support • Post conditions (i.e., prompts that help you verify what you're doing) To learn about the language of Clarity, check out the following Clarity language book resource.
In the next section, we will talk a bit about the smart contract development tool, Clarinet. What is Clarinet? Now that you have some familiarity with Stacks and Clarity, let us dive into the tool we will use to build our smart contract, Clarinet. Clarinet is a command-line tool useful for efficient smart contract development. It can be used for developing, testing, debugging, and deploying your contracts to local and production environments. It's similar to other popular web3 smart contract libraries such as Hardhat. Hiro Web Wallet Set Up In order to deploy our Clarity smart contract to the Stacks Testnet network, we will need to have the Hiro Web Wallet installed. Once you have set up your wallet, you will also need to change the wallet's network configuration to Testnet. You can do this by clicking the ellipses ("...") button on your wallet's home page, then clicking Change Network. Next, you'll want to ensure you have some STX Testnet tokens. You can request some tokens on the Stacks Explorer Faucet page. Just connect your wallet, then click the Request STX button. Once you have your wallet and STX Testnet tokens, feel free to move on to the next section! Set Up the Clarinet Project Once the prerequisites are installed, open up your terminal and run the following command to create a new Clarinet project. clarinet new storage-project && cd storage-project The clarinet command above creates a boilerplate project directory called storage-project that we can use to develop smart contracts quickly. Next, we will need to create the smart contract file that will hold our storage logic. We can run the following command in the same terminal window to create a .clar file for our smart contract and a .ts file for our tests (we will ignore the tests for now, but in production, it's best practice to create unit tests).
clarinet contract new storage-contract Create and Define the Clarity Smart Contract In your code editor of choice, open the storage-contract.clar file located in the contracts folder, and then delete the boilerplate comments. We will be using our own comments for the purpose of this tutorial. Then, input the following code in the file.

;; define variables
(define-data-var storage (string-utf8 500) u"initial value")

;; define public get function
(define-read-only (get-storage)
  (var-get storage)
)

;; define public write function
;; #[allow(unchecked_data)]
(define-public (set-storage (message (string-utf8 500)))
  (ok (var-set storage message))
)

Explanation of the code above: Lines 1-2: We define a comment starting with ";;", then define our public string variable storage (with a maximum length of 500 characters). Lines 4-7: We define a comment and a public read-only function that returns our storage variable's value. Lines 9-13: A public write function that takes a string parameter and sets our storage variable's value. Testing the Smart Contract in a Local Environment To verify the syntax and completeness of our storage contract, run the command clarinet check in your terminal. After your contract has been checked, run the command clarinet console to open up the Clarinet REPL console. The Clarinet REPL will help us test and execute our Clarity code. The Clarinet REPL will automatically list the available contracts and the simulated wallets in memory on startup. The identifier (also known as the contract address) of our smart contract is ST1PQHQKV0RJXZFY1DGX8MNSNYVE3VGZJSRTPGZGM.storage-contract, and the public functions are get-storage and set-storage. Now to interact with the get-storage function of our storage contract, we can use the following syntax: (contract-call? .storage-contract get-storage) The response should be the default value, which we set as "initial value". Let us set a new value in our storage contract via the set-storage function.
The code should look like this: (contract-call? .storage-contract set-storage u"new value") The response should be (ok true). We can re-run the first function (i.e., get-storage) to verify our storage variable has been updated. Now that we confirmed our storage contract works as expected, we can move on to the Testnet deployment! Deploy the Smart Contract to Testnet Before moving forward, remember to complete the wallet setup instructions in the Hiro Web Wallet Set Up section of this guide. Next, navigate to the Write and Deploy tab on the Stacks Explorer Sandbox and connect your wallet. Once your wallet is connected, click the Network tab at the top-right of the page and verify you have the Testnet option selected. Then, paste the code from your storage-contract.clar file into the clarity code editor. Note: By default, the Sandbox will create a random name for your contract. Feel free to adjust this if you wish. Next, click the Deploy button and then confirm the transaction in your Hiro wallet. It may take a couple of minutes for the transaction to verify. You can watch the status of the transaction in the activity tab of your Hiro wallet or through the Stacks Explorer by searching the transaction details. Once your transaction is mined, head over to the Call a Contract tab (the f in the left sidebar) on the Stacks Sandbox Explorer and input the full address of your storage smart contract (e.g., ST12KGMZCKXERR1VG1TFEQQZ3VQXSMVVC3J31S604.nft-factory). The contract name field should automatically populate the name of your contract. Now all that's left is clicking the Get Contract button. Once your contract is loaded, it should look something like this: The Stacks Sandbox automatically loads all the public functions available to us. In order to test the contract, we can follow the same process as we did in the local environment. First, we'll check the value of the storage variable by clicking the function get-storage and then clicking the Call function button. 
The output value should be u"initial value". Next, go back, click the set-storage function and input a string that you want stored in your storage variable. For this test, we will use the value Stacks is gmi. The sandbox will automatically format our string into the format our contract accepts (i.e., u"Stacks is gmi"). Once you've inputted the string, click the Call function button and sign the transaction in your Hiro wallet. Once the transaction is mined, you can verify the value by calling the get-storage function again. The value should match your inputted string (i.e., Stacks is gmi). Kudos for making it all the way! You now know how to create and test a Clarity smart contract using Clarinet. To learn more, check out our How to Mint NFTs on Stacks guide or some of our other tutorials here. We ❤️ Feedback! If you have any feedback or questions on this guide, let us know. We'd love to hear from you!
Rest in peace - 21-1! | Hackerearth practice problem solution The grandest stage of all, Wrestlemania XXX, recently happened. And with it came one of the biggest heartbreaks for WWE fans around the world: The Undertaker's undefeated streak was finally broken. Now as an Undertaker fan, you're disappointed, disheartened and shattered to pieces. And Little Jhool doesn't want to upset you in any way possible. (After all you are his only friend, true friend!) Little Jhool knows that you're still sensitive to the loss, so he decides to help you out. Every time you come across a number, Little Jhool carefully manipulates it. He doesn't want you to face numbers which have "21" as a part of them, or, in the worst case possible, are divisible by 21. If you end up facing such a number you feel sad... and no one wants that - because you start chanting "The streak is broken!". If the number doesn't make you feel sad, you say, "The streak lives still in our heart!" Help Little Jhool so that he can help you! Input Format: The first line contains a number, t, denoting the number of test cases. After that, for t lines there is one number in every line. Output Format: Print the required string, depending on how the number will make you feel. Constraints: 1 ≤ t ≤ 100 1 ≤ n ≤ 1000000 Time Limit: 1 Memory Limit: 256

#include <iostream>
using namespace std;
int main()
{
    int t, n;
    cin >> t;
    while (t--)
    {
        cin >> n;
        // sad if the number is divisible by 21...
        int flag = (n % 21 == 0);
        // ...or if its decimal digits contain "21"
        for (int m = n; m > 0 && !flag; m /= 10)
            if (m % 10 == 1 && (m / 10) % 10 == 2)
                flag = 1;
        cout << (flag ? "The streak is broken!"
                      : "The streak lives still in our heart!") << endl;
    }
    return 0;
}
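Before writing the C++ submission, the condition is easy to prototype in another language. A quick Python sketch of the same rule (a number is "sad" if it is divisible by 21 or contains "21" among its decimal digits):

```python
def is_sad(n):
    # divisible by 21, or "21" appears in the decimal representation
    return n % 21 == 0 or "21" in str(n)

def answer(n):
    return "The streak is broken!" if is_sad(n) else "The streak lives still in our heart!"
```

For example, 42 (divisible by 21) and 121 (contains "21") are sad, while 12 is not.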
Create empty array of specified class The empty method creates empty arrays of a given class. For an introduction to empty arrays in MATLAB®, see Empty Arrays. To test if an existing array is an empty array, use isempty. A = ClassName.empty returns an empty 0-by-0 array of the specified class. Replace ClassName with the actual name of the class. For more information on how empty arrays behave, see Empty Arrays in MATLAB. A = ClassName.empty(sz1,...,szN) returns an empty array with the specified dimensions. At least one of the dimensions must be 0. A = ClassName.empty(sizeVector) returns an empty array with the specified dimensions. At least one of the dimensions must be 0. Create Empty Array Call the empty method on uint8 with no size specified. A = uint8.empty A = 0x0 empty uint8 matrix Assigning a value to the empty array expands it to a nonempty array. The value you assign to the empty array must be of the same class as the array or convertible to that class. MATLAB fills the other elements of the array with the default value of the array type, which for uint8 is 0. A = 3x3 uint8 matrix Initializing a Nonempty Array To initialize a nonempty array, use a function such as zeros or ones to fill the array with initial values. MATLAB does not have a null value, so all nonempty arrays must have values for all elements. You cannot use empty to create a 3-by-3 array, for example, because at least one dimension must have length 0. Empty Array with Nonzero Dimension Sizes MATLAB allows for empty arrays that have dimensions with nonzero sizes, as long as at least one dimension is 0. These empty arrays, such as a 0-by-5 array, can arise naturally in many iterative algorithms, and they follow the same rules as 0-by-0 empty arrays. The array has a class but does not contain any elements. You can create empty arrays with nonzero dimension sizes as a way of documenting your code. For example, create an int16 empty array with dimensions 0-by-5.
A = int16.empty(0,5) A = 0x5 empty int16 matrix Use A as the initial value for a 6-by-5 matrix of integers, built by vertical concatenation of 1-by-5 vectors. for i = 1:6 A = [A; randi(9,[1 5],"int16")]; end A = 6x5 int16 matrix Create Empty Array Using Vector of Dimensions Use a vector to define the dimensions of an empty array. V = [0 0 6]; Bdouble = double.empty(V) Bdouble = 0x0x6 empty double array Input Arguments sz1,...,szN — Dimensions of array 0 (default) | integers Dimensions of array, specified as integers. At least one dimension must be 0. Negative values are treated as 0. Trailing dimensions of 1 are not included in the size of the array. sizeVector — Vector of dimensions 0 (default) | row vector of integers Vector of dimensions, specified as a row vector of integers. At least one element must be 0. Negative values are treated as 0. Trailing dimensions of 1 are not included in the size of the array. Output Arguments A — Empty array empty array Empty array, returned as an empty array of the specified class and dimensions. More About Empty Arrays in MATLAB In MATLAB, an empty array is an array that has at least one dimension of size 0. An empty array has no elements. The empty method enables you to initialize arrays of a specific class. You can expand an empty array into a nonempty array by assigning a specific value into the empty array. Any object you assign to the array must be of the same class as, or convertible to the class of, the empty array. For example, create an empty array of uint8 and try to assign a string value to it. A = uint8.empty; A(3,3) = "word" Unable to perform assignment because value of type 'string' is not convertible to 'uint8'. Because empty arrays must have at least one dimension of size 0, you cannot use empty to create a 3-by-3 array, for example. MATLAB does not have a null value, so all nonempty arrays must have values assigned to all elements. To quickly initialize a nonempty numeric array, consider using functions like zeros or ones.
Empty Arrays of a User-Defined Class

You can initialize an empty array of a user-defined class. For example, the empty static method is a hidden method of the ColorInRGB class defined here:

classdef ColorInRGB
    properties
        Color (1,3) = [1,0,0];
    end
    methods
        function obj = ColorInRGB(c)
            if nargin > 0
                obj.Color = c;
            end
        end
    end
end

Call the empty method. You can expand this empty array into a nonempty array by assigning a value to it. For more information on how MATLAB fills arrays with objects, see Create and Initialize Object Arrays.

Identify Empty Arrays

You can use the isempty, size, and length functions to identify empty object arrays. For example, create an empty array of the ColorInRGB class defined in the previous section:

A = ColorInRGB.empty(0,5);

Concatenation of Empty Arrays

Empty arrays follow array concatenation behavior. For example, create an empty array of double and concatenate it with itself to create a longer empty array:

A = double.empty(0,5);
B = [A A]

B = 0×10 empty double matrix

• empty is a hidden, public, static method of all nonabstract MATLAB classes. You can override the empty method in class definitions.
• This method is useful for creating empty arrays of data types that do not have a special syntax for creating empty arrays, such as [] for double arrays.

Version History

Introduced in R2008a
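The expansion and identification behaviors described above can be sketched together in a short session. This is a minimal sketch, not taken from the original page; the value 5 is an arbitrary example value.

```matlab
% Sketch: expanding an empty array and identifying empty arrays.
A = uint8.empty;      % 0-by-0 empty uint8 array
isempty(A)            % returns logical 1
A(3,3) = 5;           % expands A to 3-by-3; MATLAB fills the
                      % remaining elements with the uint8 default, 0
size(A)               % returns [3 3]

B = int16.empty(0,5); % empty despite a nonzero dimension size
isempty(B)            % returns logical 1, because one dimension is 0
length(B)             % returns 0 for any empty array
```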
Query commands

Unlike other commands, query_commands may be prefixed with a goal selector (natural:) to specify which goals they apply to. If no selector is provided, the command applies to the current goal. If no proof is open, then the command only applies to accessible objects (see Section Invocation of tactics). Eval and Compute are also query_commands; they are described elsewhere.

Requests to the environment

Example: Locate examples

Locate nat.
Inductive Coq.Init.Datatypes.nat

Locate Datatypes.O.
Constructor Coq.Init.Datatypes.O (shorter name to refer to it in current context is O)

Locate Init.Datatypes.O.
Constructor Coq.Init.Datatypes.O (shorter name to refer to it in current context is O)

Locate Coq.Init.Datatypes.O.
Constructor Coq.Init.Datatypes.O (shorter name to refer to it in current context is O)

Locate I.Dont.Exist.
No object of suffix I.Dont.Exist

Printing flags

Flag Fast Name Printing

When this flag is turned on, Coq uses an asymptotically faster algorithm for the generation of unambiguous names of bound variables while printing terms. While faster, it is also less clever and results in a typically less elegant display, e.g. it will generate more names rather than reusing certain names across subterms.
This flag is not enabled by default: because Ltac observes bound names, turning it on can break existing proof scripts.

Loading files

Coq offers the possibility of loading different parts of a whole development stored in separate files. Their contents will be loaded as if they were entered from the keyboard. This means that the loaded files are text files containing sequences of commands for Coq's toplevel. This kind of file is called a script for Coq. The standard (and default) extension of Coq's script files is .v.

Command Load Verbose? string | ident

Loads a file. If ident is specified, the command loads a file named ident.v, searching successively in each of the directories specified in the load path (see Section Logical paths and the load path).

If string is specified, it must specify a complete filename. ~ and .. abbreviations are allowed as well as shell variables. If no extension is specified, Coq will use the default extension .v.

Files loaded this way can't leave proofs open, nor can Load be used inside a proof.

We discourage the use of Load; use Require instead. Require loads .vo files that were previously compiled from .v files.

Verbose displays the Coq output for each command and tactic in the loaded file, as if the commands and tactics were entered interactively.

Error Can't find file ident on loadpath.
Error Load is not supported inside proofs.
Error Files processed by Load cannot leave open proofs.

Compiled files

This section describes the commands used to load compiled files (see Chapter Coq commands for documentation on how to compile a file). A compiled file is a particular case of a module called a library file.

Load paths

Changed in version 8.18: Commands to manage load paths within Coq have been removed. Load paths can be managed using Coq command line options or environment variables (see Logical paths and the load path).

Command Print LoadPath dirpath?

Displays the current Coq load path.
If dirpath is specified, displays only the paths that extend that prefix. In the output, the logical path <> represents an empty logical path.

Command Print ML Path

Displays the current OCaml loadpath, as provided by the command line option -I string (cf. Declare ML Module).

Extra Dependencies

Dependencies on external files, i.e. non-.v files, can be declared as follows:

Command From dirpath Extra Dependency string as ident?

Adds an additional dependency of the current .v file on an external file. This information is included in the list of dependencies generated by the coqdep tool. The file name string must exist relative to one of the top directories associated with dirpath. string can include directory separators (/) to select a file in a subdirectory. Path elements in string must be valid Coq identifiers, e.g. they cannot contain characters such as - or ,. See Lexical conventions.

When ident is provided, that name can be used by OCaml code, typically in a plugin, to access the full path of the external file via the API ComExtraDeps.query_extra_dep.

Warning File ... found twice in ...

The file is found more than once in the top directories associated with the given dirpath. In this case the first occurrence is selected.

The backtracking commands described in this section can only be used interactively; they cannot be part of a Coq file loaded via Load or compiled by coqc.

Quitting and debugging

Time, Redirect, Timeout, Fail and Succeed are control_commands. For these commands, attributes and goal selectors, when specified, are part of the sentence argument, and thus come after the control command prefix and before the inner command or tactic. For example:

Time #[ local ] Definition foo := 0.

or

Fail Timeout 10 all: auto.
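As an illustrative sketch only (the dirpath myplugin and the file name below are hypothetical, not taken from this manual), a declaration following the Extra Dependency grammar above might read:

```coq
(* Declare that this .v file depends on an external, non-.v file.      *)
(* "myplugin" is a dirpath bound on the command line; "data/table.txt" *)
(* must exist under one of the top directories associated with it.     *)
From myplugin Extra Dependency "data/table.txt" as table_file.
```

coqdep would then include the file in the generated dependency list, and OCaml plugin code could retrieve its full path via ComExtraDeps.query_extra_dep using the name table_file.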
Printing constructions in full

Flag Printing All

Coercions, implicit arguments, the type of pattern matching, but also notations (see Syntax extensions and notation scopes) can obfuscate the behavior of some tactics (typically the tactics applying to occurrences of subterms are sensitive to the implicit arguments). Turning this flag on deactivates all high-level printing features such as coercions, implicit arguments, returned type of pattern matching, notations and various syntactic sugar for pattern matching or record projections. In other words, Printing All combines the effects of the flags Printing Implicit, Printing Coercions, Printing Synth, Printing Projections, and Printing Notations. To reactivate the high-level printing features, use the command Unset Printing All.

In some cases, setting Printing All may display terms that are so big they become very hard to read. One technique to work around this is to use Undelimit Scope and/or Close Scope to turn off the printing of notations bound to particular scopes. This can be useful when notations in a given scope are getting in the way of understanding a goal, but turning off all notations with Printing All would make the goal unreadable.

Controlling Typing Flags

Unset Guard Checking.
Print Typing Flags.
  check_guarded: false
  check_positive: true
  check_universes: true
  definitional uip: false

Fixpoint f (n : nat) : False := f n.
  f is defined
  f is recursively defined (guarded on 1st argument)

Fixpoint ackermann (m n : nat) {struct m} : nat :=
  match m with
  | 0 => S n
  | S m =>
    match n with
    | 0 => ackermann m 1
    | S n => ackermann m (ackermann (S m) n)
    end
  end.
  ackermann is defined
  ackermann is recursively defined (guarded on 1st argument)

Print Assumptions ackermann.
  Axioms:
  ackermann is assumed to be guarded.
Note that the proper way to define the Ackermann function is to use an inner fixpoint:

Fixpoint ack m :=
  fix ackm n :=
    match m with
    | 0 => S n
    | S m' =>
      match n with
      | 0 => ack m' 1
      | S n' => ack m' (ackm n')
      end
    end.
  ack is defined
  ack is recursively defined (guarded on 1st argument)

Typing flags may not be changed while inside sections.

Internal registration commands

Due to their internal nature, the commands that are presented in this section are not for general use. They are meant to appear only in standard libraries and in support libraries of plug-ins.

Exposing constants to OCaml libraries

Command Register qualid[1] as qualid[2]

Makes the constant qualid[1] accessible to OCaml libraries under the name qualid[2]. The constant can then be dynamically located in OCaml code by calling Coqlib.lib_ref "qualid[2]". The OCaml code doesn't need to know where the constant is defined (what file, module, library, etc.).

As a special case, when the first segment of qualid[2] is kernel, the constant is exposed to the kernel. For instance, the PrimInt63 module features the following declaration:

Register bool as kernel.ind_bool.

This makes the kernel aware of the bool type, which is used, for example, to define the return type of the #int63_eq primitive.

Command Print Registered

Inlining hints for the fast reduction machines

Command Register Inline qualid

Gives a hint to the reduction machines (VM and native) that the body of the constant qualid should be inlined in the generated code.

Registering primitive operations

Command Primitive ident_decl : term? := #ident

Makes the primitive type or primitive operator #ident defined in OCaml accessible in Coq commands and tactics. For internal use by implementors of Coq's standard library or standard library replacements. No space is allowed after the #. Invalid values give a syntax error.
For example, the standard library files PrimInt63.v and PrimFloat.v use Primitive to support, respectively, the features described in Primitive Integers and Primitive Floats.

The types associated with an operator must be declared to the kernel before declaring operations that use the type. Do this with Primitive for primitive types, and with Register using the kernel prefix for other types. For example, in PrimInt63.v, #int63_type must be declared before the associated operations.

Error The type ident must be registered before this construction can be typechecked.

The type must be defined with the Primitive command before this Primitive command (declaring an operation that uses the type) will succeed.
Similar search terms for Diameter:

• What is a diameter?
A diameter is a straight line passing from one side of a circle to the other through the center of the circle. It is the longest chord of a circle and divides the circle into two equal halves. The diameter of a circle is always twice the length of the radius.

• Are there everyday objects whose diameter is one billionth of Earth's diameter?
Yes. Earth's diameter is about 12,742 km, so one billionth of it is roughly 12.7 mm, about the width of a fingertip or a small marble, and many everyday objects are around this size. (A grain of sand, at well under a millimetre, is actually much smaller than one billionth of Earth's diameter, and bacteria and viruses are smaller still.)

• What is the diameter of a sphere with a diameter of 2?
By definition, 2. Equivalently, the radius of such a sphere is 1 (the radius is half the diameter), and the diameter is twice the radius: 2 × 1 = 2.

• Are there everyday objects whose diameter is one billionth of the Earth's diameter?
Yes. For scale, a typical human hair has a diameter of about 100 micrometers, one ten-thousandth of a meter, which makes Earth's diameter of about 12,742 kilometers more than a hundred billion times larger than a hair's. An object one billionth of Earth's diameter would be about 12.7 mm across, roughly the size of a marble, so such everyday objects certainly exist.

• What is the diameter for sockets and what is the diameter for lights?
The standard diameter for light bulb sockets is about 1 inch (25.4 mm), which is the size of the opening where the light bulb is inserted. The diameter of lights varies with the type of fixture: recessed lights typically have a diameter of 4-6 inches, while pendant lights can range from 6-12 inches or more. Check the specifications of the specific light fixture to determine its diameter.

• Does water flow faster through a larger diameter than through a thin hose diameter?
Yes. For a given pressure, water flows at a higher rate through a larger-diameter hose than through a thin one, because the larger diameter provides more cross-sectional area for the water to pass through, reducing the resistance to flow. A thin hose diameter restricts the flow of water, resulting in a slower flow rate.

• Does water flow faster in a larger diameter than in a thin hose diameter?
As above: a larger-diameter hose offers less resistance to flow, so for the same pressure it delivers a higher flow rate than a thin hose. The diameter of the hose directly affects how much water flows through it.

• What is the diameter of 1m?
If a circle is described as a "1 m circle" by its diameter, then its diameter is simply 1 m: measuring from one side of the circle, through the center, to the other side gives a distance of 1 m.

• What is the pitch circle diameter?
The pitch circle diameter is the diameter of the imaginary circle at which two meshing gears effectively make contact. It is used to determine the gear ratio and to calculate gear tooth dimensions.
The pitch circle diameter is crucial in gear design and is used to ensure proper gear meshing and transmission efficiency.

• How do I calculate this diameter?
To calculate the diameter of a circle, use the formula diameter = 2 × radius. First determine the radius, which is the distance from the center of the circle to any point on its circumference; then multiply it by 2 to find the diameter. This formula works for any circle, regardless of its size.

• What diameter to pre-drill wood?
The diameter to pre-drill depends on the size of the screw being used. As a general rule of thumb, the pilot hole should be slightly smaller than the diameter of the screw, so the screw grips the wood securely without splitting it. A common recommendation is a drill bit about 1/64 inch smaller than the screw diameter for hardwoods, and about 1/32 inch smaller for softwoods.

• How do you calculate the diameter?
Use the formula diameter = 2 × radius, where the radius is the distance from the center of the circle to any point on its circumference. Multiplying the radius by 2 gives the diameter, the distance across the circle passing through its center.
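The relationship diameter = 2 × radius used in the answers above, and the "one billionth of Earth's diameter" comparison, can be checked with a short script (written in Python purely for illustration; the Earth diameter figure of 12,742 km is the one quoted above):

```python
def diameter_from_radius(radius):
    """The diameter of a circle is twice its radius."""
    return 2 * radius

# A circle of radius 1 has diameter 2.
print(diameter_from_radius(1))

# One billionth of Earth's diameter, in metres:
earth_diameter_m = 12_742_000
print(earth_diameter_m / 1e9)  # about 0.0127 m, i.e. roughly 1.3 cm
```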