Math word problems by LogicLike. Learn to add and subtract in game form! The LogicLike team created a collection of fun math word problems for 1st graders. Bill brought 7 apples and 3 bananas with him, but 2 of the apples were rotten. How many apples are left to eat? There are 4 stacks of berry puddings in the cafe, 7 stacks of cakes and 5 stacks of pasta salad. How many stacks of dessert are there? David had 8 toy cars. He got 4 more toy cars. How many toy cars did David have in all? Sam has 5 dinosaurs. Robert gave him 4 more dinosaurs. How many dinosaurs does Sam have? A family combo includes 4 drinks, 4 fries and 4 burgers. A party combo includes 7 drinks, 5 fries and 7 burgers. How many burgers can you get if you order one family combo and two party combos? Mom cat and her kitten together weigh 8 kilograms. Mom alone weighs 7 kilograms. How much does her kitten weigh? A family packed 6 packs of chips to go to the cinema and the children ate 4 packs. How many packs of chips are left for the parents? Maggie has 7 candies. How many more candies does she need to have 10 candies altogether? Mark had 4 puppies and Brian had some also. They had 7 in all. How many did Brian have? Emma has 8 toy puppies. How many more does she have to get to have 12 toy puppies? Isabella had 6 cupcakes. She gave some cupcakes to Mia. Now she has 2 left. How many cupcakes did Isabella give Mia? You have 7 pencils and your friend has 5 pencils. How many more pencils do you have than your friend? James has 7 green balls and 5 yellow balls. He throws 4 of his balls into the basket. How many balls does he have left to throw? There were 8 white T-shirts and 5 red T-shirts on sale. Customers bought 4 T-shirts. How many T-shirts are left? There are 7 ladybirds and 2 caterpillars on a flower. 5 ladybirds fly off. How many insects are left? Luis has 7 cents. Mia has 12 cents. The cost of a pen is 20 cents. If they put their money together, do they have enough to buy a pen? David had ten pennies and found 5 more.
Then he gave 3 pennies to Jack. How many pennies did he have left?
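Several of the problems above reduce to one line of arithmetic each. The sketch below (ours, not part of the original page) checks a few of the answers, assuming in the first problem that the two rotten fruits were apples:

```python
# Quick arithmetic checks for a few of the word problems above.
# (These expressions are ours, for illustration only.)

# Bill's apples: 7 apples, 2 rotten (assuming the rotten fruit were apples)
assert 7 - 2 == 5

# Dessert stacks: berry puddings and cakes are desserts, pasta salad is not
assert 4 + 7 == 11

# David's toy cars: 8 to start, 4 more
assert 8 + 4 == 12

# Burgers in one family combo plus two party combos
assert 4 + 7 + 7 == 18

# David's pennies: ten to start, found 5 more, gave 3 away
assert 10 + 5 - 3 == 12
```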
{"url":"https://logiclike.com/en/1st-grade-math-word-problems","timestamp":"2024-11-07T23:53:19Z","content_type":"text/html","content_length":"55011","record_id":"<urn:uuid:9ae3337a-8bf5-497f-a4a0-f5ad686577fa>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00438.warc.gz"}
Graphing in context of grade slope 30 Aug 2024 Title: Graphing Grade Slope: A Mathematical Exploration

This article delves into the mathematical concept of graphing grade slope, a fundamental aspect of linear algebra and geometry. The grade slope, also known as the slope of a line, is a measure of how steeply a line rises or falls. This paper provides an in-depth examination of the formulae and techniques used to graph grade slopes, shedding light on their significance in various mathematical contexts.

The concept of grade slope is rooted in the study of lines and planes in geometry. In essence, it represents the ratio of vertical change (rise) to horizontal change (run) between two points on a line. The formula for calculating the grade slope is:

gs = rise / run

where gs denotes the grade slope, rise is the vertical distance between two points, and run is the horizontal distance between the same two points.

Graphing Grade Slopes: To graph a grade slope, one must first determine the equation of the line in the form y = mx + b, where m represents the slope (grade slope) and b is the y-intercept. The formula for calculating the slope from two points on the line is:

m = (y2 - y1) / (x2 - x1)

where (x1, y1) and (x2, y2) are two points on the line.

Properties of Grade Slopes: The grade slope has several important properties that facilitate graphing and manipulation. These include:

• Positive and Negative Slopes: A positive grade slope indicates a rising line, while a negative grade slope indicates a falling line.
• Zero Slope: A zero grade slope corresponds to a horizontal line.
• Infinite Slope: An undefined ("infinite") grade slope corresponds to a vertical line.

Graphing grade slopes is a fundamental aspect of linear algebra and geometry. By understanding the formulae and properties of grade slopes, mathematicians can effectively graph lines and manipulate their equations.
This article has provided an in-depth exploration of the mathematical concepts underlying grade slopes, shedding light on their significance in various mathematical contexts.
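The two formulas above can be sketched in a few lines of code. The function name and sample points below are ours, for illustration:

```python
# A minimal sketch of the slope formula m = (y2 - y1) / (x2 - x1).

def slope(p1, p2):
    """Grade slope between two points (x, y); undefined for vertical lines."""
    (x1, y1), (x2, y2) = p1, p2
    if x2 == x1:
        # A vertical line has an undefined ("infinite") slope.
        raise ValueError("vertical line: slope is undefined")
    return (y2 - y1) / (x2 - x1)

# A rising line through (0, 1) and (2, 5): rise 4, run 2, slope 2
assert slope((0, 1), (2, 5)) == 2.0
# A horizontal line has zero slope
assert slope((0, 3), (5, 3)) == 0.0
```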
{"url":"https://blog.truegeometry.com/tutorials/education/e6c379629100f26e5c97e8f5a64e1908/JSON_TO_ARTCL_Graphing_in_context_of_grade_slope_.html","timestamp":"2024-11-11T19:39:52Z","content_type":"text/html","content_length":"16692","record_id":"<urn:uuid:93221ba3-91f2-4aa0-89c8-bd16209b1b1c>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00616.warc.gz"}
AMBER Archive (2006) Subject: Re: AMBER: Cosine content calculation From: David A. Case (case_at_scripps.edu) Date: Thu Mar 09 2006 - 10:33:53 CST On Thu, Mar 09, 2006, matteo filandri wrote: > Excuse me but in the paper cited, and in the Hess formula (Hess B, Physical > Review E vol 62, no. 6, p. 8438), lambda is not the wavelength but the > eigenvalue (nm^2). > In a few words, I need the eigenvalue expressed as in the NO mass-weighted > matrix (nm^2) > >> I have tried to apply this eq. from Andricioaei I. J.Chem.Physics > >> 2001,115,6289 : > >> > >> omega(i)=SQRT[ kT/lambda(i) ] > >> In the Andricioaei/Karplus paper, "lambda" has units of mass*length**2, as you can see from the equation above. The symbols and units in the other paper may be different -- I'm not familiar with it. The AMBER Mail Reflector To post, send mail to amber_at_scripps.edu To unsubscribe, send "unsubscribe amber" to majordomo_at_scripps.edu
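The relation discussed in the thread, omega(i) = sqrt(kT / lambda(i)), can be sketched numerically. Everything below is illustrative: the eigenvalue value is made up, and the units follow the Andricioaei/Karplus convention (lambda in mass times length squared, so kT/lambda has units of inverse time squared):

```python
import math

# Hedged sketch of omega_i = sqrt(kT / lambda_i), where lambda_i is a
# mass-weighted covariance eigenvalue with units of kg*m^2 (per the
# Andricioaei/Karplus convention cited in the thread).

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0           # temperature, K (illustrative)

def quasiharmonic_frequency(lam):
    """Angular frequency (rad/s) from a mass-weighted eigenvalue lam (kg*m^2)."""
    return math.sqrt(kB * T / lam)

# e.g. a mode with an (illustrative) eigenvalue of 1e-46 kg*m^2
w = quasiharmonic_frequency(1e-46)
assert w > 0
```

Note that larger eigenvalues (softer, larger-amplitude modes) map to lower frequencies, which is the expected behavior.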
{"url":"https://structbio.vanderbilt.edu/archives/amber-archive/2006/0620.php","timestamp":"2024-11-11T14:27:43Z","content_type":"text/html","content_length":"10542","record_id":"<urn:uuid:31230133-f723-4691-9af1-df911fec64d3>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00163.warc.gz"}
The Stacks project Lemma 51.12.2 (tag 0BLT). Let $X$ be a locally Noetherian scheme. Let $j : U \to X$ be the inclusion of an open subscheme with complement $Z$. Let $n \geq 0$ be an integer. If $R^p j_*\mathcal{O}_U$ is coherent for $0 \leq p < n$, then the same is true for $R^p j_*\mathcal{F}$, $0 \leq p < n$, for any finite locally free $\mathcal{O}_U$-module $\mathcal{F}$.
{"url":"https://stacks.math.columbia.edu/tag/0BLT","timestamp":"2024-11-03T15:55:07Z","content_type":"text/html","content_length":"14416","record_id":"<urn:uuid:5ac1f4b0-e011-4404-8a1b-febaf2933eb3>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00189.warc.gz"}
A Closer Look at Deep Policy Gradients (Part 1: Intro) Deep reinforcement learning is behind some of the most publicized advances in machine learning, powering algorithms that can dominate human Go players and beat expert DOTA 2 teams. However, despite these successes, recent work has uncovered that deep reinforcement learning methods are plagued by training inconsistency, poor reproducibility, and overall brittleness. These issues are troubling, especially in a field that has such an undeniable potential. What’s more troubling though is that with our current methods of evaluating RL algorithms, it might be hard to even detect these problems. Further, when we do detect them, we tend to get very little insight into what is happening under the hood. Our recent work aims to take a step towards understanding this brittleness. Specifically, we take a re-look at current deep RL algorithms and ask: to what extent does the behavior of these methods line up with the conceptual framework we use to develop them? In this three-part series (this is part 1, part 2 is here, and part 3 is here), we’ll walk through our investigation of deep policy gradient methods, a particularly popular family of model-free algorithms in RL. This part is meant to be an overview of the RL setup, and how we can use policy gradients to solve reinforcement learning problems. We’ll then discuss the actual implementation of one of the most popular variants of these algorithms: proximal policy optimization (PPO). Throughout the series, we’ll be introducing concepts and notation as we need them, keeping required background knowledge to a minimum. The RL Setup In the reinforcement learning setting we want to train an agent that interacts via actions with a stateful environment, with the goal of maximizing total reward. In robotics, for example, we might have a humanoid robot (the agent) that actuates motors in its legs and arms (the actions), and attains a constant reward for staying upright (the reward).
(In this example, the environment is really just the laws of physics.) Concretely, introducing the notation with policy gradients (the algorithms we’re focusing on) in mind, we define the RL setting as follows. Agent behavior is specified by a policy function \(\pi\) that maps states (like the current position, in the robot example) to a probability distribution over possible actions to take. The agent interacts with the environment by repeatedly observing the current state \(s_t\); passing it to \(\pi\); sampling an action from \(\pi(\cdot\vert s_t)\); and, finally, receiving a reward \(r_t = r(s, a)\) and a next state \(s_{t+1}\). This repeated game stops when some pre-defined failure condition is met (e.g. the robot falls over) or after a certain predefined number of steps. The entire interaction of the agent with the environment can be represented as a sequence of states, actions and rewards. This sequence is called a rollout, episode, or trajectory, and is typically denoted by \(\tau\). Then, \(r(\tau)\) is the total reward accumulated over that trajectory \(\tau\). Hence, more formally, the goal of reinforcement learning is to find a policy which maximizes this total reward: $$\max_\pi \mathbb{E}_{\tau \sim \pi} \left[\sum_{(s,a) \in \tau} r(s, a) \right].$$ So, how should we go about finding a policy that achieves this goal? One standard approach is to parameterize the policy by some vector \(\theta\), where \(\theta\) can be anything from entries in a table to weights in a neural network. (In the deep RL setting, we focus on the latter.) The policy induced by the parameter \(\theta\) is then usually denoted by \(\pi_\theta\). The above parameterization allows us to rewrite the reinforcement learning objective in a clean new way.
Specifically, our goal becomes to find \(\theta^*\) such that: $$\theta^* = \arg\max_\theta \mathbb{E}_{\tau \sim \pi_\theta} \left[\sum_{(s,a) \in \tau} r(s, a) \right].$$ RL with Policy Gradients Expressing our objective in the way above allows us to essentially view reinforcement learning purely as an optimization problem. This immediately suggests a natural way to maximize this objective: gradient descent (or really, gradient ascent). However, it is unfortunately not that easy, as it’s unclear how to access the gradient of the objective above. In fact, this issue is precisely one of the key challenges in RL. Is there a way to circumvent this difficulty? It turns out that we indeed can. To this end, we recall that first-order methods fare well even with estimates of the gradient. As a result, obtaining such an estimate is exactly what policy gradient methods do. To do so, they take advantage of the following identity: $$\tag{1} \label{eq:hello} g_\theta := \nabla_\theta \mathbb{E}_{\tau \sim \pi_\theta}[r(\tau)] = \mathbb{E}_{\tau \sim \pi_\theta} \left[ \nabla_\theta \log(\pi_\theta(\tau))\, r(\tau)\right],$$ where \(g_\theta\) denotes the policy gradient we are interested in. With some work (see the derivation below), we can simplify and manipulate this identity into a sort of “standard form” (which one might typically encounter, for example, in a presentation or a textbook): $$\tag{2} \label{eq:pg} g_\theta = \mathbb{E}_{\tau \sim \pi_\theta} \left[ \sum_{t=1}^{|\tau|} \nabla_\theta \log(\pi_\theta(a_t|s_t)) Q^\pi(s_t, a_t)\right].$$ Observe first that, by definition: $$g_\theta = \nabla_\theta \mathbb{E}_{\tau \sim \pi_\theta}[r(\tau)] = \mathbb{E}_{\tau \sim \pi_\theta} \left[ \left(\sum_{(s, a) \in \tau} \nabla_\theta \log(\pi_\theta(a|s)) \right) \left(\sum_{(s, a) \in \tau}r(s, a)\right)\right].$$ Next, note that the above product of the sums effectively sums over the Cartesian product of the sets \(\{\nabla_\theta\log\pi(a_t\vert s_t)\}\) and \(\{r(s_t, a_t)\}\).
Since future transition probabilities cannot affect past rewards, we can omit terms \(\nabla_\theta\log\pi(a_t\vert s_t)\cdot r(s_{t'}, a_{t'})\) where \(t > t'\), yielding: $$g_\theta = \mathbb{E}_{\tau \sim \pi_\theta} \left[ \sum_{t=1}^{|\tau|} \nabla_\theta \log(\pi_\theta(a_t|s_t)) \left(\sum_{t'=t}^{|\tau|} r(s_{t'}, a_{t'})\right)\right].$$ Now, if we follow the standard assumption that agents prioritize short-term rewards over long-term projected rewards, we usually consider the return of a policy: $$R_t = \sum_{t'=t}^\infty \gamma^{t'-t} r_{t'},$$ where \(\gamma \in (0, 1)\) is a "discount factor". Integrating this discount factor into the policy gradient, i.e., focusing on such returns instead of total rewards, gives: $$g_\theta = \mathbb{E}_{\tau \sim \pi_\theta} \left[ \sum_{t=1}^{|\tau|} \nabla_\theta \log(\pi_\theta(a_t|s_t)) \sum_{t'=t}^{|\tau|} \gamma^{t'-t} r(s_{t'}, a_{t'})\right].$$ Finally, we can re-write the above expression as: $$g_\theta = \mathbb{E}_{\tau \sim \pi_\theta} \left[ \sum_{t=1}^{|\tau|} \nabla_\theta \log(\pi_\theta(a_t|s_t)) Q^\pi(s_t, a_t)\right],$$ where \(Q^\pi(s_t, a_t) = \mathbb{E}[R_t\vert a_t, s_t, \pi]\) denotes the expected return from a policy that starts in the state \(s_t\) and first takes the action \(a_t\). Now, we typically estimate the expectation above using a simple finite-sample mean. (Note that a rather nice way to think about this construction is that we sample some trajectories, and then try to increase the log-probability of the most successful ones.) Implementing PPO The policy gradient framework allows us to apply first-order techniques like gradient descent without having to differentiate through the environment. (This might remind us of zeroth-order optimization, and indeed there are interesting connections between the policy gradient and finite-difference techniques.) Despite its elegance, however, in practice we rarely use the above form \eqref{eq:pg}.
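The finite-sample estimator just described can be sketched on a deliberately tiny example. Below is a toy of our own construction (a one-state, two-action bandit with a single-parameter softmax policy); it is meant only to illustrate estimating \(g_\theta\) as a Monte Carlo mean of \(\nabla_\theta \log \pi_\theta(a)\cdot r(a)\), not any particular codebase:

```python
import math
import random

# Toy REINFORCE-style gradient estimate for a one-state, two-action bandit.
# pi_theta(a=1) = sigmoid(theta); action 1 yields reward 1, action 0 yields 0.
# (Environment, rewards, and parameterization are all illustrative.)

random.seed(0)
theta = 0.0                   # single logit
rewards = {0: 0.0, 1: 1.0}    # action 1 is the better action

def p1():
    return 1.0 / (1.0 + math.exp(-theta))

def sample_action():
    return 1 if random.random() < p1() else 0

def grad_log_pi(a):
    # d/dtheta log pi_theta(a) = a - sigmoid(theta) for this parameterization
    return a - p1()

# Monte Carlo estimate of g_theta = E[ grad_log_pi(a) * r(a) ]
n = 10000
g_hat = sum(grad_log_pi(a) * rewards[a]
            for a in (sample_action() for _ in range(n))) / n

# Increasing theta increases the probability of the rewarded action,
# so the gradient estimate should be positive (true value: p1*(1-p1) = 0.25).
assert g_hat > 0
```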
State-of-the-art algorithms employ instead several algorithmic refinements on top of the standard policy gradient. (We’ll go over these as we encounter them in the next few posts.) So, let us consider one of the algorithms at the forefront of deep reinforcement learning: proximal policy optimization (PPO). Specifically, let us just start by taking a look at how these algorithms are implemented. (Thanks to RL’s popularity there are now a plethora of available codebases we can refer to – for example, [1, 2, 3, 4].) Reviewing a standard implementation of PPO (e.g. from the OpenAI baselines) reveals, however, something surprising. Throughout the code, beyond the core algorithmic components, there are a number of additional, non-trivial optimizations. Such optimizations are not unique to this implementation, or even to the PPO algorithm itself, but in fact abound in almost every policy gradient repository we looked at. But do these optimizations really matter? To examine this, we consider the following experiment. We train PPO agents on the Humanoid-v2 MuJoCo task (a popular deep RL benchmark involving simulated robotic control). For four of the optimizations found in the implementation, we consider their 16 possible on/off configurations, and assess the corresponding agent performance. For reference, the optimizations that we consider are: • Adam learning rate annealing: we look at the effect of decaying the Adam learning rate while training progresses (this decaying is enabled by default for Humanoid-v2). • Reward scaling: the implementation applies a specific reward scaling scheme, where rewards are scaled down by roughly the rolling standard deviation of the total reward. We test with and without this scaling. • Orthogonal initialization and layer scaling: we also test with and without the custom initialization scheme found in the implementation (which involves orthogonal initialization and rescales certain layer weights in the network before training).
• Value function clipping: in the implementation, a “clipping-based” modification is made to a component of the policy gradient framework called the value loss (we discuss and explain the value loss in our next post); we test with and without this modified loss function. For each configuration, we find the best performing hyperparameters (determined by a learning rate gridding averaged over 3 random seeds). Then, for each optimization, we plot the cumulative histogram of final agent rewards for when the optimization is on, and when it is off: A bar at \((x, y)\) in the blue (red) graph below represents that when the optimization is turned off (on) respectively, exactly \(y\) of the tuned agents (with varying on/off configurations for the other optimizations) achieved a final reward of \(\geq x\). The results of this ablation study are clear: these additional optimizations have a profound effect on PPO’s performance. So, what is going on? If these optimizations matter so much, to what extent can PPO’s success be attributed to the principles of the framework that this algorithm was derived from? To address these questions, in our next post, we’ll start looking at the tenets of this policy gradient framework, and how exactly they are reflected in deep policy gradient algorithms.
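For readers unfamiliar with PPO's core clipping idea (which the value-function clipping above mirrors), here is a sketch of the textbook per-sample clipped surrogate objective, \(\min(r A, \mathrm{clip}(r, 1-\epsilon, 1+\epsilon) A)\), where \(r\) is the probability ratio \(\pi_{\text{new}}(a|s)/\pi_{\text{old}}(a|s)\) and \(A\) is the advantage estimate. This is the standard published form, not a transcription of any specific codebase:

```python
# Per-sample PPO clipped surrogate objective (standard textbook form).

def ppo_clip_objective(ratio, advantage, eps=0.2):
    # Clip the probability ratio to [1 - eps, 1 + eps] ...
    clipped = max(1.0 - eps, min(ratio, 1.0 + eps))
    # ... and take the pessimistic (minimum) of the two surrogate terms.
    return min(ratio * advantage, clipped * advantage)

# When the ratio drifts above 1 + eps with a positive advantage, the clipped
# term caps the objective, removing the incentive to move further:
assert ppo_clip_objective(1.5, 1.0) == 1.2
# With a negative advantage and a ratio below 1 - eps, the min keeps the
# unclipped (more pessimistic) term from being masked:
assert ppo_clip_objective(0.5, -1.0) == -0.8
```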
{"url":"http://gradientscience.org/policy_gradients_pt1/","timestamp":"2024-11-01T23:55:04Z","content_type":"text/html","content_length":"22729","record_id":"<urn:uuid:34f6741c-635d-47b9-9b45-0b2d03cb5472>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00837.warc.gz"}
mathematical physics problems

Quick Access: Soft phases in 2D O(N) models. Entropy production. Fermi gas. Extended states. Physics Problem 4: The Tuning Problem. The book discusses problems on the derivation of equations and boundary conditions.

MATHEMATICAL PHYSICS PROBLEMS AND SOLUTIONS. The Students Training Contest Olympiad in Mathematical and Theoretical Physics (on May 21st – 24th, 2010). Special Issue № 3 of the Series "Modern Problems of Mathematical Physics". Samara: Samara University Press, 2010.

Kinematic equations relate the variables of motion to one another. The variables include acceleration (a), time (t), displacement (d), final velocity (vf), and initial velocity (vi). Each equation contains four variables; if the values of three variables are known, the fourth can be calculated using the equations.

A theoretical physics model is a mathematical framework that, in order to make predictions, requires that certain parameters be set. In the standard model of particle physics, the parameters are represented by the 18 particles predicted by the theory, meaning that the parameters are measured by observation.

3000-solved problems in physics by schaums.pdf

Open Problems in Mathematical Physics: Quantum Hall Conductance. Exponents & dimensions. LRO for Quantum Heisenberg Ferr. Impossibility theorems. Navier-Stokes. Spin glass. Separatrix separation. Optimal flux.

A Collection of Problems on Mathematical Physics is a translation from the Russian and deals with problems and equations of mathematical physics.
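The kinematics remark above (each equation uses four of the five variables a, t, d, vf, vi) can be made concrete with two of the standard equations. The helper names below are ours:

```python
# Two of the standard kinematic equations for constant acceleration.

def final_velocity(vi, a, t):
    """vf = vi + a*t"""
    return vi + a * t

def displacement(vi, a, t):
    """d = vi*t + (1/2)*a*t^2"""
    return vi * t + 0.5 * a * t * t

# Given vi = 0 m/s, a = 2 m/s^2, t = 3 s: three knowns determine the fourth.
assert final_velocity(0, 2, 3) == 6    # m/s
assert displacement(0, 2, 3) == 9.0    # m
```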
{"url":"http://wordsworthcentre.co.uk/62vop/mathematical-physics-problems-cf9aaf","timestamp":"2024-11-04T12:10:57Z","content_type":"text/html","content_length":"14122","record_id":"<urn:uuid:386fa975-871a-4a94-ba01-82cee44c62c4>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00534.warc.gz"}
Plane Beam Approximations: Timoshenko Beam

The Timoshenko beam formulation is intentionally derived to better describe beams whose shear deformations cannot be ignored. Short beams are a prime example of such beams, and thus the Timoshenko beam approximation is better suited to describe their behaviour. The basic physical assumptions behind the Timoshenko beam are similar to those described for the Euler Bernoulli beam, except that shear deformations are allowed. The following are the three basic assumptions behind the Timoshenko beam theory. (Compare with those described above for the Euler Bernoulli beam.)

1. Plane sections perpendicular to the neutral axis before deformation remain plane, but not necessarily perpendicular to the neutral axis after deformation (Figure 6).
2. The deformations are small.
3. The beam is linear elastic isotropic and Poisson’s ratio effects are ignored.

Figure 6. Timoshenko beam deformation shape. The cross sections perpendicular to the neutral axis before deformation stay plane after deformation but are not necessarily perpendicular to the neutral axis after deformation.

Similar to the Euler Bernoulli beam, the Timoshenko beam is assumed to lie such that its long axis is aligned with the horizontal axis (Figure 6). Ignoring any effect due to Poisson’s ratio, the coordinates of an arbitrary point on the cross section are given in terms of the transverse displacement of the neutral axis and the rotation of the cross section [equations not recoverable from the source]. We can drop the superscript p since this applies to any arbitrary point. In essence, the deformation assumptions result in all of the strain components being zero except for the axial normal strain and the transverse shear strain (Figure 2). Therefore, the normal stress component distribution on the cross section is proportional to the distance from the neutral axis; the internal bending moment on the cross section is obtained by integrating the normal stress over the cross-sectional area; and the shear force is the integral of the (average) shear stress over the cross section [integral expressions not recoverable from the source].

Equilibrium Equations: The equilibrium equations (Equation 4) developed for the Euler Bernoulli beam also apply to the Timoshenko beam (Figure 4). Substituting the constitutive relations (Equation 8) into the equilibrium equations yields a pair of coupled differential equations in the beam displacement y and the cross-section rotation \psi (Equation 9 and Equation 10). These two equations can be solved once four boundary conditions on the displacement y and the cross-section rotation \psi are specified.

Quiz – Timoshenko Beam
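Since the rendered equations above did not survive extraction, the standard relations they refer to can be sketched in the textbook form below. This is a reconstruction under the usual conventions (y is the transverse displacement, \psi the cross-section rotation, E Young's modulus, I the second moment of area, k the shear correction factor, G the shear modulus, A the cross-sectional area, q the distributed load); the original text's exact notation and sign conventions may differ:

```latex
% Constitutive relations (bending moment and shear force):
M = EI\,\frac{\mathrm{d}\psi}{\mathrm{d}x},
\qquad
V = kAG\left(\frac{\mathrm{d}y}{\mathrm{d}x} - \psi\right)

% Equilibrium (one common sign convention):
\frac{\mathrm{d}M}{\mathrm{d}x} = V,
\qquad
\frac{\mathrm{d}V}{\mathrm{d}x} = -q
```

Substituting the first pair into the second yields the coupled differential equations for y and \psi mentioned in the text.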
{"url":"https://engcourses-uofa.ca/books/introduction-to-solid-mechanics/plane-beam-approximations/timoshenko-beam/","timestamp":"2024-11-13T01:52:53Z","content_type":"text/html","content_length":"92005","record_id":"<urn:uuid:043d5bbc-639b-4772-bfb1-138f474999cd>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00839.warc.gz"}
5 Best Ways to Handle Multi-Dimensional Lists in Python

Problem Formulation: When dealing with complex data structures like grids or matrices in Python, it’s common to use multi-dimensional lists, also known as lists of lists. These structures are essential when working with data in multiple dimensions, such as in scientific computing, graphics, or in games that require a board representation. Let’s say we want to represent a 3×3 grid to initialize a game of tic-tac-toe. Our input would be the dimensions (3×3), and the desired output is a list with three lists, each containing three placeholders for our game pieces.

Method 1: List Comprehensions

List comprehensions provide a concise way to create lists in Python, including multi-dimensional lists. They are often more readable and faster than using traditional loops, making them a powerful feature for list creation. Here’s an example:

grid = [[0 for _ in range(3)] for _ in range(3)]

[[0, 0, 0], [0, 0, 0], [0, 0, 0]]

This snippet uses a nested list comprehension to create a 3×3 grid. The inner comprehension [0 for _ in range(3)] creates a row with three zeroes, while the outer comprehension replicates this row three times, resulting in the 3×3 structure.

Method 2: Using the * Operator

The * operator in Python can be used to repeat elements when creating a multi-dimensional list. However, this method requires caution: repeating the outer list itself, as in [[0]*3]*3, creates three references to the same sublist, leading to potential bugs. Here’s an example of the safe form:

grid = [[0]*3 for _ in range(3)]

[[0, 0, 0], [0, 0, 0], [0, 0, 0]]

This piece of code creates each row in the grid as an independent list with three zeroes using [0]*3. The comprehension for _ in range(3) repeats this process three times to generate a 3×3 grid. Be wary of using the * operator with mutable objects!

Method 3: Append Method with Loops

Traditional loops with the append() method offer a more explicit way of creating multi-dimensional lists.
This can be advantageous for readability and when the creation logic of the lists is complex. Here’s an example:

grid = []
for i in range(3):
    row = []
    for j in range(3):
        row.append(0)
    grid.append(row)

[[0, 0, 0], [0, 0, 0], [0, 0, 0]]

The above code initializes an empty list named grid. It then iterates three times to create each row using a second loop, which appends three zeroes to each row. After constructing a row, it’s appended to the grid list, resulting in the final multi-dimensional list.

Method 4: Using the numpy Library

Python’s NumPy library provides a high-performance multidimensional array object, and tools for working with these arrays. Using NumPy is more efficient for numerical computation with multi-dimensional lists. Here’s an example:

import numpy as np
grid = np.zeros((3, 3))

[[0. 0. 0.]
 [0. 0. 0.]
 [0. 0. 0.]]

This example uses NumPy’s zeros() function, which returns a new array of given shape and type, filled with zeros. Here, we pass the shape as (3, 3), resulting in a 3×3 grid of floating-point zeros.

Bonus One-Liner Method 5: Using the copy Module

Python’s copy module provides the deepcopy() function, which can be used to create multi-dimensional lists where each sublist is a separate copy, ensuring that changes in one do not affect the others. Here’s an example:

import copy
grid = copy.deepcopy([[0]*3]*3)

[[0, 0, 0], [0, 0, 0], [0, 0, 0]]

This concise one-liner starts from [[0]*3]*3, in which all sublists reference the same object; deepcopy() then ensures each sublist in the result is a true copy, making the grid safe from referenced changes.

Method 1: List Comprehensions. Quick and concise. Can be less intuitive for complex structures.
Method 2: Using the * Operator. Simple one-liners. Risks creating shared references to sublists if not used carefully.
Method 3: Append Method with Loops. Explicit and clear logic. More verbose and can be slower than list comprehensions.
Method 4: Using the numpy Library.
Optimized for numerical computation. Requires installing an additional library, and not all functionality is necessary for simple tasks. Method 5: Using the copy Module’s deepcopy(). Ensures true copies of sublists. More overhead and not as intuitive as direct list creation methods.
{"url":"https://blog.finxter.com/5-best-ways-to-handle-multi-dimensional-lists-in-python/","timestamp":"2024-11-03T06:00:15Z","content_type":"text/html","content_length":"71855","record_id":"<urn:uuid:1e927b47-3f74-4b2d-adfd-4f96e8d84ac4>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00886.warc.gz"}
Step By Step Math Solver - Math Problems Solver | MathCrave

MathCrave step by step math solver is a math-solving platform that offers free calculators and a convenient answer checker for math problems. Get quick answers to math problems faster using the MathCrave quick math calculator. The MathCrave AI Math Solver effortlessly solves simple to complex math problems in record time, for free. The math calculators solve a variety of mathematical problems and make complex calculations a breeze, solving math problems faster.

Calculus Step by Step Solver: The MathCrave calculus step by step solver is an advanced tool that finds the differential coefficient of single terms, except for those involving summation or subtraction.

Popular MathCrave Solvers: The vector calculator solver simplifies complex calculations by providing step-by-step solutions for vector problems such as vector addition, subtraction, the angle between vectors, and more. The statistics solvers simplify calculations like mean, median, mode, standard deviation, and regression coefficients. A user-friendly interface and guided worksheets ensure clarity and accuracy in a few easy steps.

Do more with the MathCrave AI Math Solver: MathCrave also helps students solve problems in physics, chemistry, English language, and business-related studies.

Step by Step Math Solvers Q & A

What math topics does MathCrave cover? MathCrave covers various topics including algebra, complex numbers, calculus, geometry, trigonometry, statistics, probability, matrices, and the Laplace transform. MathCrave is undoubtedly a game-changer in the realm of mathematics education. With its step-by-step solver, unlimited equation generation, and efficient math calculators, this platform empowers users to conquer even the most complex mathematical challenges.
The fact that MathCrave is completely free and does not require registration further enhances its accessibility and usability.

How does MathCrave provide math solutions? The MathCrave step by step math solver handles everything from basic to complex math, generates unlimited equations, and offers quick math calculators to solve difficult math faster, in record time. The good thing is, no registration is required and there are no strings attached.

Does MathCrave offer quick math answers? Yes, MathCrave offers quick answers in addition to step-by-step solutions with its math calculators. MathCrave’s math calculators are like the superheroes of the math world, swooping in to save the day with lightning-fast answers and step-by-step solutions. Say goodbye to hours of head-scratching and hello to quick, painless problem-solving.

Can MathCrave solve complex number equations? Yes, MathCrave can solve complex number equations. MathCrave is a powerhouse when it comes to tackling those pesky complex number equations. With its algorithms and formulas, it can crack even the toughest problems with ease.

Master Math with MathCrave, Your Free Step-by-Step Math Solver. Use MathCrave AI, the step by step math solver, and its calculators. Struggling with math? Whether it’s a tricky calculus problem or an algebraic equation that just won’t balance, MathCrave is here to help. Our free math solver offers detailed, step-by-step solutions to guide you through even the most challenging math problems. No matter where you are in your mathematical journey, from basic algebra to advanced calculus, we’ve got you covered.

The math solver covers a wide range of topics, including: the area of any shape and volumes, calculated with ease; geometry and trigonometry, from basic shapes to complex trigonometric identities; matrices and determinants; integrals, both definite and indefinite; and ordinary differential equations, solved step by step.
Statistics and Probability Analyze data, calculate probabilities, and master statistical concepts. Laplace Transforms Simplify complex functions into manageable problems. Fourier Series Break down periodic functions into simpler trigonometric functions. Introducing MathCrave Calculators—your go-to tool for fast, accurate results. Whether you're calculating investment returns, compound interest, future or present value of annuities, or ROI, our financial calculators have you covered. Plus, our math calculators handle everything from basic arithmetic to algebra, geometry, trigonometry, and even calculus. MathCrave also offers engineering solutions and health and fitness calculations—all free, all fast, and all designed for accuracy.
For what values of x is f(x)=(3x-2)(4x+2)(x+3) concave or convex? | Socratic For what values of x is #f(x)=(3x-2)(4x+2)(x+3)# concave or convex? 1 Answer for $x < -\frac{17}{18}$ convexity and $x > -\frac{17}{18}$ concavity For a twice continuously differentiable function like the one proposed, the concavity or convexity is determined by the sign of the second derivative. Expanding the product gives $f(x) = 12x^3 + 34x^2 - 10x - 12$, so $\frac{d^2}{dx^2} f(x) = 72x + 68$. If $\frac{d^2}{dx^2} f(x) < 0$ the curvature is considered convex because the region contained below the curve is a convex set. If $\frac{d^2}{dx^2} f(x) > 0$ it is concave. Solving $\frac{d^2}{dx^2} f(x) = 0$ we get $x = -\frac{68}{72} = -\frac{17}{18}$, so for $x < -\frac{17}{18}$ we have convexity and for $x > -\frac{17}{18}$ concavity.
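As a quick sanity check on the algebra (my own sketch, not part of the original answer), the expansion and the root of the second derivative can be verified in a few lines of Python:

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (constant term first)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# f(x) = (3x - 2)(4x + 2)(x + 3)
f = poly_mul(poly_mul([-2, 3], [2, 4]), [3, 1])
# coefficients of -12 - 10x + 34x^2 + 12x^3

# Second derivative of a cubic: f''(x) = 2*c2 + 6*c3*x
fpp = [2 * f[2], 6 * f[3]]      # [68, 72], i.e. f''(x) = 72x + 68
root = -fpp[0] / fpp[1]         # -68/72 = -17/18 ≈ -0.944
```

The sign of f''(x) = 72x + 68 then confirms the split at x = -17/18.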
Let's imagine a soccer player who intercepts the ball and redirects it towards the goal (see Figure 1). He acts on the ball with a force impulse: with a burst of force, he diverts the ball towards the goal. He changes the direction and speed of the ball. We say that he changes its momentum. It would be more difficult for the soccer player to change the direction and speed of the ball if it were a medicine ball. It also makes a difference to a soccer player whether he kicks a medicine ball that moves slowly or quickly. These experiences help us to understand momentum itself: the greater the mass of the body and the greater its speed, the greater its momentum, p = mv. Momentum is also a vector. It has its own magnitude (which is equal to the product of mass and velocity) and also direction. It has the same direction as the velocity vector. When adding (or subtracting), the same rules of calculation with vectors apply to momentum, as we learned in the chapter Force as a vector, or in more mathematical material, where we discuss calculation operations between vectors. Impulse - momentum theorem A force impulse and a change in momentum are closely related; their relationship is derived from Newton's second law. Let's take a closer look at this relationship. Through the equations, we confirm what we already suspected in the introduction: a force impulse is equal to a change in momentum, FΔt = Δp, where the change in momentum is Δp = mv₂ − mv₁. The example is available to registered users free of charge. Sign up for free access to the example » Law of conservation of momentum The law of conservation of momentum states that momentum is conserved if the sum of all impulses of forces acting on a body is zero. It is conserved even if the bodies collide with each other. But let's look at the conservation law in a little more detail. Let's first discuss the following: • types of collisions between particles; • the difference between internal and external impulses of forces (collisions).
Elastic and Inelastic collisions Let's take a look at them. Elastic Collision An elastic collision (also called an ideal collision) is a collision in which the entire kinetic energy is considered to be conserved during the collision. The example is available to registered users free of charge. Sign up for free access to the example » Inelastic collision An inelastic collision is a collision in which it is considered that the bodies stick together after the collision. The kinetic energy of the colliding bodies is not conserved. The example is available to registered users free of charge. Sign up for free access to the example » Partially elastic collision In nature, however, collisions are never completely elastic. Let's take a ball and drop it from a certain height. If the ball were perfectly elastic, it would reach the same height as the height from which we dropped it after bouncing (disregarding air resistance). But the ball reaches a lower height after each bounce until it comes to rest on the ground. Such a collision is partially elastic since the ball does bounce, but with each bounce, some kinetic energy is converted into internal energy. For comparison, let's take a plasticine ball. If we dropped it on the ground, it wouldn't bounce at all. The collision of the plasticine ball with the ground would be inelastic. External and internal force impulses Let's go back to the billiards example. In order to be able to determine what is an internal and what is an external force impulse, we must first determine what a system is. The choice of system is completely arbitrary; usually we choose the system in such a way that it is easiest for us to calculate or observe something happening. In the case of billiards, we choose the billiard balls as the system. Now that we have determined the system, we can determine the external and internal forces: • an external force impulse is anything that acts on the selected system from the outside.
Let's list a few such external impulses of force: □ We hit one of the balls with the stick: the hit with the stick counts as an external impulse of force on the system. The stick is not part of our chosen system, and by hitting the balls we change their momentum. If the balls were stationary before the hit, now they are moving. □ The ball bounces off the edge. The edge is not part of our system, so the action of the edge on the ball is also an external force impulse on the system. Even in the ideal case, when the collision with the edge is elastic, the direction of the momentum changes. □ The frictional force between the ball and the base is also considered an external force impulse. The base is not part of our system, but it causes a frictional force that reduces the total momentum of the system. • An internal force impulse, however, is everything that happens inside the system. For example, we count the collision between two balls as an internal impulse of force, since both balls are part of our chosen system. Let's take a closer look at the internal force impulse. Let one ball collide with another. When the first ball strikes the second, it acts on it with an impulse of force that changes the second ball's momentum. Due to the mutual interaction of forces, the second ball also acts on the first with an equal and oppositely directed impulse of force. If we add both equations, we see that the momentum of each ball has changed, but the total momentum has not. External impulses of force change the total momentum of bodies. This distinguishes them from the internal impulses of forces that act during collisions: these change the momentum of individual bodies within the system, while the vector sum of all momenta remains unchanged.
During collisions, impulses of force act on individual bodies, changing their speed and direction of motion. But such impulses occur inside a closed space (within the system), so we call them internal impulses. Let's write down the law of conservation of momentum: if no impulse of force acts on the bodies from the outside, the total momentum of all bodies in the system is conserved. The number, size, and type of internal collisions and forces are not important. Let's demonstrate the momentum conservation law on the example of an inelastic collision of two bodies. Since in this case both bodies form the system, the collision between them is internal and therefore no external force acts on the system. Equating the total momentum before and after the collision gives the conservation law: if there is no external force acting on the system of moving bodies, the total momentum is conserved. The total momentum of all bodies after the collision is equal to the total momentum before it. The example is available to registered users free of charge. Sign up for free access to the example »
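To make the conservation law concrete, here is a small framework-free sketch in Python; the masses and velocities are made-up illustrative values, not taken from the text:

```python
def inelastic_collision(m1, v1, m2, v2):
    """Final common velocity of two bodies that stick together,
    from conservation of momentum: (m1 + m2) * v = m1*v1 + m2*v2."""
    return (m1 * v1 + m2 * v2) / (m1 + m2)

m1, v1 = 2.0, 3.0   # body 1: 2 kg moving at 3 m/s
m2, v2 = 1.0, 0.0   # body 2: 1 kg at rest
v = inelastic_collision(m1, v1, m2, v2)   # 2.0 m/s

# The collision is internal to the two-body system, so the
# total momentum before and after is the same:
assert m1 * v1 + m2 * v2 == (m1 + m2) * v
```

Kinetic energy, by contrast, drops from 9 J to 6 J here, which is exactly the loss the text attributes to inelastic collisions.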
Bioequivalence and Bioavailability Forum Still not sure what you are aiming at… [General Statistics] Hi ElMaestro, ❝ 1. Let us look at the wikipedia page for the t test: ❝ "Most test statistics have the form t = Z/s, where Z and s are functions of the data." OK, so far. ❝ 2. For the t-distribution, here Z=sample mean - mean and s=sd/sqrt(n) Wait a minute. You are referring to the one-sample \(t\)-test, right? At the linked page we find$$t=\frac{Z}{s}=\frac{\bar{X}-\mu}{\hat{\sigma}/\sqrt{n}}$$That’s a little bit strange because WP continues with \(\hat{\sigma}\) is the estimate of the standard deviation of the population. I beg your pardon? Most of my textbooks give the same formula but with \(s\) in the denominator as the sample standard deviation. Of course, \(s/\sqrt{n}\) is the standard error and sometimes we find \(t=\frac{\bar{X}-\mu}{\textrm{SE}}\) instead. Nevertheless, further down we find $$t=\frac{\bar{x}-\mu_0}{s/\sqrt{n}}$$THX a lot, soothing! ❝ 3. Why are Z and s independent in this case? I added another plot to the code of this post; a modified plot of 5,000 samples is shown to the right. ❝ Or more generally, and for me much more importantly, if we have two functions (f and g, or Z and s), then which properties of such functions or their input would render them independent?? ❝ Wikipedia links to a page about independence, key here is: […] ❝ I am fully aware that when we simulate a normal dist. with some mean and some variance, then that defines their expected estimates in a sample. I.e. if a sample has a mean that is higher than the simulated mean, then that does not necessarily mean the sampled sd is higher (or lower, for that matter; that was where I was going with "perturbation"). It sounds right to think of the two as independent, in that case. Correct. Anything is possible. ❝ Now, how about the general case, for example if we know nothing about the nature of the sample, but just look at any two functions of the sample?
What property would we look for in those two functions to think they are independent? ❝ A general understanding of the idea of independence of any two quantities derived from a sample, that is what I am looking for; point #3 above defines my question. Still not sure whether I understand you [DEL: correctly :DEL] at all. Think about the general formulation of a test statistic from above $$t=\frac{Z}{s},$$where \(Z\) and \(s\) are functions of the data. I think that this formulation is unfortunate because it has nothing to do with either the standard normal distribution \(Z\) or the sample standard deviation \(s\). For continuous variables I would prefer sumfink like$$test\;statistic=\frac{measure\;of\;location}{measure\;of\;dispersion}$$for clarity. If a test were constructed in such a way that the independence is not correctly represented, it would be a piece of shit. Dif-tor heh smusma 🖖🏼 Long live Ukraine! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked.
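The independence being discussed can be probed empirically. The following simulation (my own illustration, not code from the thread) draws many normal samples and checks that the sample mean and the sample standard deviation are essentially uncorrelated, as the classical result for normal data predicts:

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps = 10, 5000

# reps samples of size n drawn from N(0, 1)
samples = rng.normal(loc=0.0, scale=1.0, size=(reps, n))
means = samples.mean(axis=1)
sds = samples.std(axis=1, ddof=1)       # ddof=1: sample standard deviation

# For normally distributed data, the sample mean and sample sd are
# independent (a classical result for normal samples), so their
# empirical correlation should be close to zero.
corr = np.corrcoef(means, sds)[0, 1]
```

Note that the zero correlation here is specific to the normal distribution; for a skewed parent distribution (e.g. exponential), the same simulation would show a clearly nonzero correlation between the two statistics.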
Chapter 11, Exploration 44 The tool below models a predator-prey system. Use it to answer the following questions. The point that indicates the initial condition is labelled \(A\) and is colored red. • Move the point \(A\) around to choose a variety of initial conditions and describe what you see. In particular, how would you characterize the fixed point at \((\frac{2}{15}, \frac{22}{15})\) using the terminology of theorem 11.2? • Choose an initial condition near the fixed point at \((\frac{1}{2},0)\) and describe these orbits. How would you characterize the fixed point at \((\frac{1}{2},0)\) using the terminology of theorem 11.2? • Choose an initial condition very near the \(y\)-axis (especially with \(y \leq \frac{1}{2}\)) and describe the resulting orbits. How would you characterize the fixed point at (0,0) using the terminology of theorem 11.2?
Ratio of income of Jayesh to the expenditure of Suresh is 4:3 and Jayesh spends 85% of his income whereas Suresh spends 90% of his income. If savings of Suresh is Rs. 6000 less than that of Jayesh, then the income of Suresh (in Rs) is
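For reference, a worked sketch of the algebra in Python (the variable setup is my own; only the numbers from the problem are used):

```python
from fractions import Fraction

def suresh_income(savings_gap=6000):
    # Let Jayesh's income = 4x and Suresh's expenditure = 3x (ratio 4:3).
    # Jayesh saves 15% of his income; Suresh saves 10% of his.
    # Suresh's income = expenditure / 0.9 = 3x / (9/10) = 10x/3.
    # Savings: Jayesh = 0.15 * 4x = 3x/5; Suresh = 10x/3 - 3x = x/3.
    # Gap: 3x/5 - x/3 = 4x/15 = savings_gap  =>  x = 15 * gap / 4.
    x = Fraction(15, 4) * savings_gap
    return Fraction(10, 3) * x

print(suresh_income())  # 75000
```

Checking: with x = 22500, Jayesh earns 90,000 and saves 13,500; Suresh earns 75,000, spends 67,500 (keeping the 4:3 ratio), and saves 7,500, which is exactly Rs. 6,000 less.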
Mandelbrot fractal in Rust — KeiruaProd Mandelbrot fractal is a classic. Here is a WIP implementation in Rust; I first ported mandelb0t’s version to Python 3 in order to run it, but it was terribly slow. We’ll need to save images (image crate) and to use complex numbers (num-complex). Also we’ll put a sprinkle of parallelism for speed using rayon. $ cargo new mandelbrot --lib $ cd mandelbrot $ cargo add image num-complex rayon Updating 'https://github.com/rust-lang/crates.io-index' index Adding image v0.23.14 to dependencies Adding num-complex v0.4.0 to dependencies Adding rayon v1.5.0 to dependencies ├── Cargo.lock ├── Cargo.toml └── src ├── bin │ └── mandelbrot.rs └── lib.rs One of the examples of the image crate is an implementation of the Julia fractal and we will use this as a base to save images. In order to implement this fractal, we need to do 4 things: # lib.rs extern crate image; extern crate num_complex; use rayon::prelude::*; fn compute_iterations_mandelbrot(complex_x: f32, complex_y: f32, max_iterations: usize) -> usize { // Counts whether the complex point c(cx, cy) diverges (its norm is > 2.0) within a finite // number of iterations let c = num_complex::Complex::new(complex_x, complex_y); let mut z = num_complex::Complex::new(0f32, 0f32); let mut nb_iterations = 0; while nb_iterations < max_iterations && z.norm() < 2.0 { z = z * z + c; nb_iterations += 1; } nb_iterations } pub fn compute_iterations( width: u32, height: u32, xa: f32, xb: f32, ya: f32, yb: f32, max_iterations: usize, ) -> Vec<usize> { (0..width * height) .into_par_iter() .map(|offset| { // extract the x, y coordinates out of the linear offset let image_x = offset % width; let image_y = offset / width; // convert the x, y pixel coordinates into values of the complex plane in the area [xa, xb], [ya, yb] let complex_x = (image_x as f32) * (xb - xa) / (width as f32 - 1.0f32) + xa; let complex_y = (image_y as f32) * (yb - ya) / (height as f32 - 1.0f32) + ya;
compute_iterations_mandelbrot(complex_x, complex_y, max_iterations) }) .collect() } pub fn save_image( nb_iterations: &[usize], width: u32, height: u32, max_iterations: usize, path: &str, ) { let mut imgbuf = image::ImageBuffer::new(width, height); for (x, y, pixel) in imgbuf.enumerate_pixels_mut() { let i = nb_iterations[(y * width + x) as usize]; // Shade pixel based on the number of iterations somehow let red: u8 = (((max_iterations as f32 - i as f32) / (max_iterations as f32)) * 255f32) as u8; let (green, blue) = (red, red); *pixel = image::Rgb([red as u8, green as u8, blue as u8]); } imgbuf.save(path).unwrap(); } For now, shading is minimalistic. We mostly want to display whether a point is part of the Mandelbrot set or not, using shades of black and white. The nuances based on the number of iterations may not be very visible. Then, we can use our library in a sample binary: # bin/mandelbrot.rs extern crate mandelbrot; use mandelbrot::{compute_iterations, save_image}; fn main() { let width: u32 = 900; let height: u32 = 900; let xa: f32 = -2.0; let xb: f32 = 1.0; let ya: f32 = -1.5; let yb: f32 = 1.5; let max_iterations = 256; let nb_iterations = compute_iterations(width, height, xa, xb, ya, yb, max_iterations); save_image( &nb_iterations, width, height, max_iterations, "mandelbrot.png", ); } Then, we can run our program, and it’s reasonably fast: $ cargo run --bin mandelbrot $ cargo run --bin mandelbrot --release $ cargo build --release && time ./target/release/mandelbrot Finished release [optimized] target(s) in 0.01s real 0m0,054s user 0m0,438s sys 0m0,000s
518 RWTH Publication No: 834667 2021 IGPM518.pdf TITLE On derivations of evolving surface Navier-Stokes equations AUTHORS Philip Brandner, Arnold Reusken, Paul Schwering ABSTRACT In recent literature several derivations of incompressible Navier-Stokes type equations that model the dynamics of an evolving fluidic surface have been presented. These derivations differ in the physical principles used in the modeling approach and in the coordinate systems in which the resulting equations are represented. This paper has overview character in the sense that we put five different derivations of surface Navier-Stokes equations into one framework. This then allows a systematic comparison of the resulting surface Navier-Stokes equations and shows that some, but not all, of the resulting models are the same. Furthermore, based on a natural splitting approach in tangential and normal components of the velocity we show that all five derivations that we consider yield the same tangential surface Navier-Stokes equations. KEYWORDS surface PDEs, Navier-Stokes on surfaces DOI 10.4171/IFB/483 PUBLICATION Interfaces Free Bound. 24 (2022), 533–563
How to Divide in Excel in 2020 (+Examples and Screenshots) Looking to learn how to divide in Microsoft Excel? You’ve come to the right place. In this article I’ll cover: How to divide in Excel shortcut To divide in Excel you'll need to write your formula with the arithmetic operator for division, the slash symbol (/). You can do this three ways: with the values themselves, with the cell references, or using the QUOTIENT function. Ex. =20/10, =A2/A3, =QUOTIENT(A2,A3) Related: Be sure to check out how to multiply in Excel, how to subtract in Excel, and how to add in Excel! How to divide in Excel If you’re looking to learn how to divide in Excel, you’ll need to know how to write Excel formulas. Excel formulas are expressions used to display values that are calculated from the data you’ve entered and they’re generally used to perform various mathematical, statistical, and logical operations. So before we jump into the specifics of how to divide in Excel, here are a few reminders to keep in mind when writing Excel formulas: • All Excel formulas should begin with the equal sign (=). This is a must in order for Excel to recognize it as a formula. • The cells will display the result of the formula, not the actual formula. So by using cell references rather than just entering the data in the cell, if you need to go back later the values will be easier to understand. • When using arithmetic operators in your formula, remember the order of operations (I personally reference the acronym PEMDAS: Parenthesis, Exponentiation, Multiplication, Division, Addition, Subtraction.) Below you’ll find an example of how the order of operations is applied to formulas in Excel. How to divide in a cell To divide numbers in a cell, you'll need to write your formula in the designated cell. Your formula should start with the equal sign (=), and contain the arithmetic operator (/) needed for the calculation. In this case, you want to divide 20 by 2, so your formula is =20/2.
Once you’ve entered the formula, press ‘Enter’ and your result will populate. In this example, the division formula, =20/2, yields a result of 10 (see below). How to divide cells To divide cells, the data will need to be in its own cells so that you can use the cell references in the formula. You can see below that a division formula was written in cell C1 using cell references, but the data had to be entered separately in cells A1 and B3. Once you’ve entered the formula, press ‘Enter’ and your result will populate. In this example, the division formula, =A1/B3, yields a result of 5 (see below). How to divide a range of cells by a constant number Want to divide each cell in a range by a constant number? By writing your formula with the dollar sign symbol ($), you’re telling Excel that the reference to the cell is “absolute,” meaning when you copy the formula to different cells, the reference will always be to that cell. In this example, cell C2 is absolute. That being said, to divide each cell in column A by the constant number in C2, start by writing your formula as you would if you were dividing cells, but add ‘$’ in front of the column letter and row number. In this case, your formula is =A2/$C$2. Next, drag the formula to the other cells in the range. This copies the formula to the new cells, updating the data, but keeping the constant. The results will populate as you drag and drop the formula (see below). How to use the QUOTIENT function With division, sometimes results will have a remainder. For example, if you enter the division formula =10/6 in Excel, your result would be 1.6667. The QUOTIENT function, though, will only return the integer portion of a division – not the remainder. Therefore, you should use this function if you want to discard the remainder. To use this function, navigate to the ‘Formulas’ tab and click ‘Insert Function’ or start typing ‘=QUOTIENT’ in the cell you want the result to populate in.
Select ‘QUOTIENT.’ From there, enter your cell references (or numbers) in the function. Be sure to separate the cell references with a comma (,). Press ‘Enter’ and the integer result – without the remainder – will populate. So, rather than =10/6 equaling 1.6667, using the QUOTIENT function yields 1. You’re ready to divide in Excel! And that’s it! In this article, we’ve covered everything you need to know about how to divide in Excel using formulas and functions. Now grab your computer and dive into Excel to give it a try for yourself! Did you enjoy this article? Check out our articles on how to merge cells in Excel and how to lock cells in Excel!
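As an aside, QUOTIENT's truncate-toward-zero behavior can be mimicked outside Excel. The Python sketch below is an analogy, not Excel itself; note that Python's own // operator floors rather than truncates, so the two disagree for negative inputs:

```python
import math

def quotient(numerator, denominator):
    """Integer part of a division, truncated toward zero,
    mirroring the behavior of Excel's QUOTIENT function."""
    return math.trunc(numerator / denominator)

print(quotient(10, 6))    # 1   (10/6 = 1.6667, remainder discarded)
print(quotient(-10, 6))   # -1  (whereas Python's -10 // 6 gives -2)
```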
VCE Maths Methods External Assessment Practice Questions Looking for some more practice questions for the VCE Maths Methods External Assessment? You’ve come to the right place. Whether you’re just beginning to study or looking to complete some last-minute revision, these questions will help refresh and lock in the knowledge you’ve already built, as well as offering some helpful tips on how to answer the questions. So, let’s dive in! VCE Maths Methods External Assessment Practice Questions Question 1 Differentiate 3e^x^2 with respect to x. (1 mark) Question 2 Evaluate f'(-1) where f(x) = \frac{e^x}{2+x} (2 marks) Question 3 Assume that f'(x)=x^2 + 2x. Given that f(0) = 1, determine f(x). (2 marks) Question 4 The following information applies to questions 4–6. Let h:R → R, h(x) = \frac{1}{2}Cos(3x)+1 State the range of h. (1 mark) Question 5 What is the period of h? (1 mark) Question 6 Solve h(x) = 1 for x ∈ R. (2 marks) Question 7 The following information applies to questions 7 and 8. Consider the function: f:R\{-1} → R, f(x)= \frac{3}{1+x} -1. Identify the asymptotes of f. (1 mark) Question 8 For what values of x is f(x) < -2. (2 marks) Question 9 The following information applies to questions 9 and 10. A web service company runs a number of servers. Each year, it is known that the probability a hard drive fails is \frac{9}{16}, the probability that the water cooling system fails is \frac{3}{16} and the probability that both fail is \frac{1}{16}. What is the probability that a server suffers from only hard drive failure? (1 mark) Question 10 If four servers are selected at random, what is the probability that exactly three servers suffer hard drive failure, given that at least one of these four servers have undergone hard drive failure? Leave all numbers in exponent form (i.e. do not calculate 9^3). (2 marks) Question 11 Let f:[0,∞) → R, where f(x)=2e^{-3x}+1. Determine the domain and rule of f^{-1}, the inverse function of f.
(2 marks) Question 12 For the function g:[0,∞) → R, g(x) = \frac{1}{2}x^{2}, find the area enclosed between g and the line y=x. (2 marks) Question 13 Now consider the functions g(x) = \frac{1}{2}x^{2} and f(x) = αx, where α ∈ (0,1). What value of α ensures that the area enclosed between g and f is equal to \frac{1}{2}? (4 marks) Question 14 The following information concerns questions 14–18: Define p(x) = ln(\frac{1}{x})+ln(1 - x) State the maximal domain and range of p. (2 marks) Question 15 Find the gradient to p at the point x=a. (1 mark) Question 16 Find the equation of the tangent to p at x=\frac{1}{2}. (1 mark) Question 17 Find the equation of the line perpendicular to p at the point (1/2, p(1/2)). (1 mark) Question 18 Why is p a one-to-one function? (1 mark) Question 19 Consider a line y=b-x which intersects the function f:[0,∞) → R, f(x)=x^{3}-2x. At which x values (correct to two decimal places) is the acute angle between this line and the tangent to f 50^o? (3 marks) Question 20 Consider the transformation defined by the following sequence of steps: a reflection about the y axis, a dilation by 2 in the y direction, a translation of +3 in the x direction, a translation by +1 in the y direction. What is the image of the function y=x^{2}-x under this transformation? (3 marks) Question 21 Does the function above intersect the image you determined in question 20? If so, what are the intersections? (2 marks) Question 22 The following information applies to questions 22–25. A zoologist is investigating the lifespan of tigers in captivity. She has discovered that, on average, tigers live for 15 years in captivity, and that only 10% of tigers live for more than 20 years. Assuming that the lifespan of tigers is normally distributed, what is the standard deviation of this distribution (to 2 decimal places)? (2 marks) Question 23 What is the probability, to two decimal places, that a tiger survives for at least 18 years, given that it has survived for 16 years?
(2 marks) Question 24 Given a zoo population with 5 tigers that are older than 16 years, what is the probability that fewer than 3 of these tigers survive to reach their 18th birthday? (2 marks) Question 25 Assume there are n tigers (of any age) in a zoo. Find the probability that at least one of these tigers survives to the age of 17, and hence determine the number of tigers the zoo must acquire to ensure that there is at least a 95% chance that at least one survives to the age of 21. (2 marks) Answers for VCE Maths Methods Questions Studying the night before your VCE Maths Methods exam? Check out our 7-step night routine to acing your exam here! And, you’ve finished all the questions! Check out some extra resources for VCE Maths Methods here: Also studying for other VCE external assessments? Check out our practice questions! Are you looking for some extra help with preparing for your VCE Maths Methods External Assessment Practice Questions? We have an incredible team of VCE tutors and mentors! Don’t do it alone! Get help from one of our Melbourne Math tutors in the lead up to your external exams! We can help you master the VCE Maths Methods study design and ace your upcoming VCE assessments with personalised lessons conducted one-on-one in your home or online! We’ve supported over 8,000 students over the last 11 years, and on average our students score mark improvements of over 20%! To find out more and get started with an inspirational VCE tutor and mentor, get in touch today or give us a ring on 1300 267 888! Scott McColl is a content writer with Art of Smart and a Civil Engineering student at Monash University. In between working and studying, Scott enjoys playing music and working on programming
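As a worked check of Questions 22 and 23 (my own sketch using Python's standard library, not the official solutions):

```python
from statistics import NormalDist

mu = 15.0
# Question 22: P(X > 20) = 0.10 means (20 - mu) / sigma equals the
# 90th percentile of the standard normal distribution.
z = NormalDist().inv_cdf(0.90)      # ≈ 1.2816
sigma = (20 - mu) / z               # ≈ 3.90 years

# Question 23: conditional probability P(X > 18 | X > 16)
# for X ~ N(mu, sigma^2).
X = NormalDist(mu, sigma)
p = (1 - X.cdf(18)) / (1 - X.cdf(16))   # ≈ 0.55

print(round(sigma, 2), round(p, 2))
```

`statistics.NormalDist` is available from Python 3.8; the same numbers can of course be read off a z-table or a CAS calculator, as the exam expects.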
Bayesian Changepoint Detection of COVID-19 Cases in Pyro With the current global pandemic and its associated resources (data, analyses, etc.), I’ve been trying for some time to come up with an interesting COVID-19 problem to attack with statistics. After looking at the number of confirmed cases for some countries, it was clear that at some date, the number of new cases stopped being exponential and its distribution changed. However, this date was different for each country (obviously). This post introduces and discusses a Bayesian model for estimating the date that the distribution of new COVID-19 cases in a particular country changes. An important reminder before we get into it is that all models are wrong, but some are useful. This model is useful for estimating the date of change, not for predicting what will happen with COVID-19. It should not be mistaken for an amazing epidemiology model that will tell us when the quarantine will end, but rather as a way of describing what we have already observed with probability distributions. All the code for this post can be found here. We want to describe $y$, the log of the number of new COVID-19 cases in a particular country each day, as a function of $t$, the number of days since the virus started in that country. We’ll do this using a segmented regression model. The point at which we segment will be determined by a learned parameter, $\tau$.
This model is written below: \[\begin{equation*} \begin{split} y = wt + b + \epsilon \end{split} \text{, } \qquad \qquad \begin{split} \epsilon \sim N(0, \sigma^2) \\[10pt] y \mid w, b, \sigma \sim N(wt + b, \sigma^2) \end{split} \\[15pt] \end{equation*}\] \[\begin{equation*} \begin{split} \text{Where: } \qquad \qquad \end{split} \begin{split} w &= \begin{cases} w_1 & \text{if } t \le \tau\\ w_2 & \text{if } t \gt \tau\\ \end{cases} \\ b &= \begin{cases} b_1 & \text{if } t \le \tau\\ b_2 & \text{if } t \gt \tau\\ \end{cases} \end{split} \\[10pt] \end{equation*}\] \[\begin{equation*} w_1 \sim N(\mu_{w_1}, \sigma_{w_1}^2) \qquad \qquad w_2 \sim N(\mu_{w_2}, \sigma_{w_2}^2) \\[10pt] b_1 \sim N(\mu_{b_1}, \sigma_{b_1}^2) \qquad \qquad b_2 \sim N(\mu_{b_2}, \sigma_{b_2}^2) \\[10pt] \tau \sim Beta(\alpha, \beta) \qquad \qquad \sigma \sim U(0, 3) \end{equation*}\] In other words, $y$ will be modeled as $w_1t + b_1$ for days up until day $\tau$. After that it will be modeled as $w_2t + b_2$. The model was written in Pyro, a probabilistic programming language built on PyTorch. Chunks of the code are included in this post, but the majority of the code is in this notebook. The data used was downloaded from Kaggle. Available to us is the number of daily confirmed cases in each country, and Figure 1 shows this data for Italy. It is clear that there are some inconsistencies in how the data is reported; for example, in Italy there are no new confirmed cases on March 12th, but nearly double the expected cases on March 13th. In cases like this, the data was split between the two days. The virus also starts at different times in different countries. Because we have a regression model, it is inappropriate to include data prior to the virus being in a particular country. This date is chosen by hand for each country based on the progression of new cases and is never the date the first patient is recorded.
The “start” date is better interpreted as the date the virus started to consistently grow, as opposed to the date that patient 0 was recorded. Prior Specification Virus growth is sensitive to population dynamics of individual countries and we are limited in the amount of data available, so it is important to supplement the model with appropriate priors. Starting with $w_1$ and $w_2$, these parameters can be loosely interpreted as the growth rate of the virus before and after the date change. We know that the growth will be positive in the beginning and is not likely to be larger than $1$. With these assumptions, $w_1 \sim N(0.5, 0.25)$ is a suitable prior. We’ll use similar logic for $p(w_2)$, but will have to keep in mind flexibility. Without a flexible enough prior here, the model won’t do well in cases where there is no real change point in the data. In these cases, $w_2 \approx w_1$, and we’ll see an example of this in the Results section. For now, we want $p(w_2)$ to be symmetric about $0$, with the majority of values lying between $(-0.5, 0.5)$. We’ll use $w_2 \sim N(0, 0.25)$. Next are the bias terms, $b_1$ and $b_2$. Priors for these parameters are especially sensitive to country characteristics. Countries that are more exposed to COVID-19 (for whatever reason) will have more confirmed cases at their peak than countries that are less exposed. This will directly affect the posterior distribution for $b_2$ (which is the bias term for the second regression). In order to automatically adapt this parameter to different countries, we use the means of the first and fourth quartiles of $y$ as $\mu_{b_1}$ and $\mu_{b_2}$ respectively. The standard deviation for $b_1$ is taken as $1$, which makes $p(b_1)$ a relatively flat prior. The standard deviation of $p(b_2)$ is taken as $\frac{\mu_{b_2}}{4}$ so that the prior scales with larger values of $\mu_{b_2}$.
\[b_1 \sim N(\mu_{q_1}, 1) \qquad \qquad b_2 \sim N(\mu_{q_4}, \frac{\mu_{q_4}}{4}) \notag\] As for $\tau$, since at this time we don’t have access to all the data (the virus is ongoing), we’re unable to have a completely flat prior and have the model estimate it. Instead, the assumption is made that the change is more likely to occur in the second half of the date range at hand, so we use $\tau \sim Beta(4, 3)$.

    class COVID_change(PyroModule):
        def __init__(self, in_features, out_features, b1_mu, b2_mu):
            super().__init__()
            # first regression line (before the changepoint)
            self.linear1 = PyroModule[nn.Linear](in_features, out_features, bias=False)
            self.linear1.weight = PyroSample(dist.Normal(0.5, 0.25).expand([1, 1]).to_event(1))
            self.linear1.bias = PyroSample(dist.Normal(b1_mu, 1.))
            # second regression line (after the changepoint)
            self.linear2 = PyroModule[nn.Linear](in_features, out_features, bias=False)
            self.linear2.weight = PyroSample(dist.Normal(0., 0.25).expand([1, 1]))  # .to_event(1)
            self.linear2.bias = PyroSample(dist.Normal(b2_mu, b2_mu / 4))

        def forward(self, x, y=None):
            tau = pyro.sample("tau", dist.Beta(4, 3))
            sigma = pyro.sample("sigma", dist.Uniform(0., 3.))
            # fit lm's to data based on tau
            sep = int(np.ceil(tau.detach().numpy() * len(x)))
            mean1 = self.linear1(x[:sep]).squeeze(-1)
            mean2 = self.linear2(x[sep:]).squeeze(-1)
            mean = torch.cat((mean1, mean2))
            obs = pyro.sample("obs", dist.Normal(mean, sigma), obs=y)
            return mean

Hamiltonian Monte Carlo is used for posterior sampling. The code for this is shown below.

    model = COVID_change(1, 1, b1_mu=bias_1_mean, b2_mu=bias_2_mean)
    num_samples = 800
    # MCMC with the NUTS kernel
    nuts_kernel = NUTS(model)
    mcmc = MCMC(nuts_kernel, num_samples=num_samples, warmup_steps=100, num_chains=4)
    mcmc.run(x_data, y_data)
    samples = mcmc.get_samples()

Since I live in Canada and have exposure to the dates precautions started, modeling will start here. We’ll use February 27th as the date the virus “started”.
\[w_1, w_2 \sim N(0, 0.5) \qquad b_1 \sim N(1.1, 1) \qquad b_2 \sim N(7.2, 1) \notag\] Posterior Distributions Starting with the posteriors for $w_1$ and $w_2$, if there was no change in the data we would expect to see these two distributions close to each other, as they govern the growth rate of the virus. It is a good sign that these distributions, along with the posteriors for $b_1$ and $b_2$, don’t overlap. This is evidence that the change point estimated by our model is real. This change point was estimated as: 2020-03-28 As a side note, with no science attached, my company issued a mandatory work from home policy on March 16th. Around this date is when most companies in Toronto would have issued mandatory work from home policies where applicable. Assuming the reported incubation period of the virus is up to 14 days, this estimated date change makes sense, as it is 12 days after widespread social distancing measures began! The model fit along with 95% credible interval bands can be seen in the plot below. On the left is the log of the number of daily cases, which is what we used to fit the model, and on the right is the true number of daily cases. It is very difficult to visually determine a change point by simply looking at the number of daily cases, and even more difficult by looking at the total number of confirmed cases. Assessing Convergence When running these experiments, the most important step is to diagnose the MCMC for convergence. I adopt 3 ways of assessing convergence for this model: observing mixing and stationarity of the chains, and $\hat{R}$. $\hat{R}$ is the factor by which the scale of each posterior distribution would shrink as the number of samples tends to infinity. A perfect $\hat{R}$ value is 1, and values less than $1.1$ are indicative of convergence. We observe mixing and stationarity of the Markov chains in order to know if the HMC is producing appropriate posterior samples. Below are trace plots for each parameter.
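The $\hat{R}$ diagnostic described above can be computed by hand from a set of chains. Here is a plain-Python sketch of the classic (non-split) Gelman-Rubin computation; it is purely illustrative, since in practice you would read $\hat{R}$ off the sampler's summary:

```python
import random
from statistics import mean, variance

def gelman_rubin(chains):
    """Classic (non-split) R-hat from several equal-length chains."""
    n = len(chains[0])                        # samples per chain
    chain_means = [mean(c) for c in chains]
    W = mean(variance(c) for c in chains)     # within-chain variance
    B = n * variance(chain_means)             # between-chain variance
    var_plus = (n - 1) / n * W + B / n        # pooled variance estimate
    return (var_plus / W) ** 0.5

# Four well-mixed chains drawn from the same distribution -> R-hat near 1
random.seed(0)
chains = [[random.gauss(0, 1) for _ in range(500)] for _ in range(4)]
print(round(gelman_rubin(chains), 3))
```

When the chains disagree (e.g. one chain stuck in a different mode), B dominates W and $\hat{R}$ rises well above 1.1.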
Each chain is stationary and mixes well. Additionally, all $\hat{R}$ values are less than $1.1$. After convergence, the last thing to check before moving on to other examples is how appropriate the model is for the data. Is it consistent with the assumptions made earlier? To test this we’ll use a residual plot and a QQ-plot, as shown below. I’ve outlined the estimated change point in order to compare residuals before and after the change to test for homoscedasticity. The residuals follow a Normal distribution with zero mean and show no dependence on time, before and after the date of change. What About no Change? To test the model’s robustness to a country that has not begun to flatten the curve, we’ll look at data from Canada up until March 28th. This is the day that the model estimated curve flattening began in Canada. Just because there isn’t a true change date doesn’t mean the model will output “No change”. We’ll have to use the posterior distributions to reason that the change date provided by the model is inappropriate, and consequently that there is no change in the data. \[w_1, w_2 \sim N(0, 0.5) \qquad b_1 \sim N(0.9, 1) \qquad b_2 \sim N(6.4, 1.6) \notag\] Posterior Distributions The posteriors for $w_1$ and $w_2$ have significant overlap, indicating that the growth rate of the virus hasn’t changed significantly. Posteriors for $b_1$ and $b_2$ are also overlapping. These show that the model is struggling to estimate a reasonable $\tau$, which is good validation for us that the priors aren’t too strong. Although we have already concluded that there is no change date for this data, we’ll still plot the model out of curiosity. Similar to the previous example, the MCMC has converged. The trace plots below show sufficient mixing and stationarity of the chains, and most $\hat{R}$ values are less than $1.1$. Next Steps and Open Questions This model is able to describe the data well enough to produce a reliable estimate of the day flattening the curve started.
An interesting byproduct of this is the coefficient term for the 2nd regression line, $w_2$. By calculating $w_2$ and $b_2$ for different countries, we can compare how effective their social distancing measures were. The logical next modelling step would be to fit a hierarchical model in order to use partial pooling of data between countries. Thank you for reading, and definitely reach out to me by e-mail or other means if you have suggestions or recommendations, or even just to chat!
Exercises - Central Limit Theorem A sample is chosen randomly from a population that can be described by a Normal model. a. Describe the shape, center, and spread of the distribution of sample means for some given sample size $n$. b. If we increase the sample size, what's the effect on the distribution of sample means? a. The distribution of sample means follows a normal distribution, with a mean identical to that of the original distribution, and with a standard deviation equal to the standard deviation of the original distribution divided by $\sqrt{n}$. b. The standard deviation of the distribution of sample means decreases (i.e., the distribution becomes more narrow.) What does the notation $\mu_{\overline{x}}$ and $\sigma_{\overline{x}}$ represent? $\mu_{\overline{x}}$ is the mean of the population of all sample means (for samples of some given size $n$), while $\sigma_{\overline{x}}$ is the standard deviation for this same population. Compare the probability distribution for rolling a single 6-sided die to the probability distribution for the mean of two 6-sided dice (draw the histograms). The distribution for rolling a single 6-sided die is uniform, while the distribution for the mean of two 6-sided dice is unimodal (notably more normal than the uniform distribution) with mean 3.5, and a smaller standard deviation. A survey found that the American family generates an average of 17.2 pounds of glass garbage each year. Assume the standard deviation of the distribution is 2.5 pounds. a. Find the probability that the mean of a sample of 55 families will be between 17 and 18 pounds. b. Why can the central limit theorem be applied? a. For the distribution of sample means, $\mu = 17.2$, while $\sigma = 2.5/\sqrt{55} = 0.3371$. We want $P(17 \lt \overline{x} \lt 18)$, so we find $z_{17} = (17-17.2)/0.3371 = -0.5933$ and $z_{18} = (18-17.2)/0.3371 = 2.373$ and the related probability $P(-0.5933 \lt z \lt 2.373) = 0.7146$ is our answer. b.
We are considering a distribution of sample means, so the Central Limit Theorem applies. (Also, as $55 \gt 30$, we can approximate this distribution of sample means as a normal distribution.) The average teacher's salary in New Jersey is $\$52,174$. Suppose that the distribution is normal with standard deviation $\$7500$. a. What is the probability that a randomly selected teacher makes less than $\$50,000$ per year? b. If we sample 100 teachers' salaries, what is the probability that the sample mean is less than $\$50,000$ per year? c. Why is the probability in part (a) higher than the probability in part (b)? a. $\mu = 52174$ and $\sigma = 7500$. Finding $z_{50,000} = (50000 - 52174)/7500 = -0.2899$, we seek $P(x \lt 50000) = P(z \lt -0.2899) = 0.3860$ b. In the distribution of sample means of size $100$, we have $\mu = 52174$, while $\sigma = 7500/\sqrt{100} = 750$. So, we find $z_{50,000} = (50000 - 52174)/750 = -2.8987$, and calculate $P(\overline{x} \lt 50000)$ as $P(z \lt -2.8987) = 0.0019$. c. The Central Limit Theorem suggests that the distribution of sample means is narrower than the distribution for the population -- leaving less area (and hence probability) in the tails. Assume SAT scores are normally distributed with mean 1518 and standard deviation 325. a. If one SAT score is randomly selected, find the probability that it is between 1440 and 1480. b. If 16 SAT scores are randomly selected, find the probability that they have a mean between 1440 and 1480. c. Why can the central limit theorem be used in part (b) even though the sample size does not exceed 30? a. $\mu = 1518$ and $\sigma = 325$. Finding $z_{1440} = (1440-1518)/325 = -0.2400$ and $z_{1480} = (1480-1518)/325 = -0.1169$, we calculate $P(1440 \lt x \lt 1480)$ as $P(-0.2400 \lt z \lt -0.1169) = 0.0483$. b. In the distribution of sample means of size $16$, we have $\mu = 1518$, while $\sigma = 325/\sqrt{16} = 81.25$.
Finding $z_{1440} = (1440-1518)/81.25 = -0.96$ and $z_{1480} = (1480 - 1518)/81.25 = -0.4677$, we calculate $P(1440 \lt \overline{x} \lt 1480)$ as $P(-0.96 \lt z \lt -0.4677) = 0.1515$. c. The Central Limit Theorem tells us that the distributions of the sample means tend towards a normal distribution as the sample size increases. In this case, the original population distribution was already normally distributed, so all of the distributions of sample means must already be normal. The lengths of pregnancies are normally distributed with a mean of 268 days and a standard deviation of 15 days. a. If one pregnant woman is randomly selected, find the probability that her length of pregnancy is less than 260 days. b. If 25 pregnant women are put on a special diet just before they become pregnant, find the probability that their lengths of pregnancy have a mean that is less than 260 days (assuming that the diet has no effect). c. If the 25 women do have a mean of less than 260 days, does it appear that the diet has an effect on the length of pregnancy, and should the medical supervisors be concerned? a. $\mu = 268$ and $\sigma = 15$. Finding $z_{260} = (260-268)/15 = -0.5333$, we calculate $P(x \lt 260)$ as $P(z \lt -0.5333) = 0.2969$. b. In the distribution of sample means of size $25$, we have $\mu = 268$, while $\sigma = 15/\sqrt{25} = 3$. Finding $z_{260} = (260 - 268)/3 = -2.6667$, we calculate $P(\overline{x} \lt 260)$ as $P(z \lt -2.6667) = 0.0038$. c. Seeing a sample like this (i.e., with a mean of less than 260 days) is clearly a rare event ($0.0038$ is less than one percent). So if the one and only sample we found had this mean pregnancy length, it casts doubt as to whether or not the mean for these women is still $268$ days (much like seeing the incredibly rare event of 99 out of 100 coin flips resulting in heads casts doubt on your belief that the coin flipped is fair).
The only thing that separates these women from the general population is their special diet -- so yes, it appears the diet had an effect on the length of their pregnancy. Medical supervisors should be concerned. Assume that a test has a mean score of 75 and a standard deviation of 10. Assume the distribution of scores is approximately normal. a. What is the probability that a person chosen at random will make 100 or above on the test? b. What score should be used to identify the top 2.5%? c. In a group of 100 people, how many would you expect to score below 60? d. What is the probability that the mean of a group of 100 will score below 70? a. $\mu = 75$ and $\sigma = 10$. Finding $z_{100} = (100-75)/10 = 2.5$, we calculate $P(x \gt 100)$ as $P(z \gt 2.5) = 0.0062$. b. Note that the top $2.5\%$ corresponds to $0.025$ in area right of some $z$-score. But then the area left of this $z$-score is $1-0.025 = 0.975$. Using a table or technology, we find this corresponds to $z = 1.960$. Recalling that a $z$ score is a number of standard deviations away from the mean (with positive $z$-scores associated with being to the right of the mean and negative ones being to the left of the mean), the cut-off test score we seek is $\mu + z\sigma = 75 + (1.960)(10) = 94.6$ c. Note, this problem does NOT ask about an average score of the 100 people -- so we are NOT looking at the distribution of sample means. Instead, we simply find the probability that a score is below $60$ and then multiply by $100$. Note $\mu = 75$ and $\sigma = 10$, so finding $z_{60} = (60-75)/10 = -1.5$, we calculate $P(x \lt 60)$ as $P(z \lt -1.5) = 0.0668$. Finally, multiplying by $100$ we get the expected number in a group of $100$ people to do this poorly -- namely, about 7 people. d. This problem IS asking about the mean of a group of $100$, so we ARE talking about the distribution of sample means. Thus, for the distribution of sample means, $\mu = 75$, while $\sigma = 10/\sqrt{100} = 1$.
Finding $z_{70} = (70 - 75)/1 = -5$, we calculate $P(\overline{x} \lt 70)$ as $P(z \lt -5) \approx 2.8 \times 10^{-7}$, which is very, very small! Carbon monoxide (CO) emissions for a certain kind of car vary with mean 2.9 gm/mi and standard deviation 0.4 gm/mi. A company has 80 of these cars in its fleet, acquired from various (i.e., random) sources. a. What's the probability that a randomly selected car from the fleet has CO emissions in excess of 3.1 gm/mi? b. What's the probability that the average CO emissions for all 80 cars is in excess of 3.1? c. There is only a 1% chance that the fleet's mean CO level is greater than what value? a. If the emissions are normally distributed, then $0.3085$ -- but we don't know how this population is distributed, so we can't say this for sure. b. $0.0000038$ c. $3.0040$ Although most of us buy milk by the quart or gallon, farmers measure daily production in pounds. Ayrshire cows average 47 pounds of milk a day, with a standard deviation of 6 pounds. For Jersey cows, the mean daily production is 43 pounds, with a standard deviation of 5 pounds. Assume that Normal models describe milk production for these breeds. a. If we select an Ayrshire at random, what's the probability that she averages more than 50 pounds of milk a day? b. A farmer has 20 Jerseys. What's the probability that the average production for this small herd exceeds 45 pounds of milk a day? c. A farmer has 20 Ayrshires. There's a $99\%$ chance each day that this small herd produces at least how many pounds of milk? a. $0.3085$ b. $0.0368$ c. $877.5$ pounds
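Several of the answers above can be double-checked numerically. A small stdlib-only sketch using the error function for the standard normal CDF (values may differ from the table-based answers in the last decimal place due to rounding):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Glass garbage: mu = 17.2, sigma = 2.5, n = 55
se = 2.5 / sqrt(55)
p_glass = phi((18 - 17.2) / se) - phi((17 - 17.2) / se)   # ~0.715

# Teacher salaries: P(sample mean < 50000) for n = 100
p_salary = phi((50000 - 52174) / (7500 / sqrt(100)))      # ~0.0019

# Pregnancies: P(sample mean < 260) for n = 25
p_preg = phi((260 - 268) / (15 / sqrt(25)))               # ~0.0038

print(round(p_glass, 4), round(p_salary, 4), round(p_preg, 4))
```

The same two-line pattern (standardize, then take the CDF) covers every part of every exercise here; only the standard error changes between "one observation" and "sample mean" questions.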
Helmholtz Coil Design Resources Helmholtz Coil Design Calculations Use the equation below to calculate the Helmholtz field. This equation is for an ideal Helmholtz coil with no wire thickness. Eq. 1: B = (4/5)^(3/2) × μ0nI/R B is the magnetic field in units of Tesla, μ0 is the permeability of free space (4π × 10^-7 T·m/A), n is the number of turns, I is the current in amperes, and R is the coil radius in meters. This page is dedicated to providing excellent Helmholtz coil design resources for anyone who wants to design their own coil. We have links to a number of articles for generating a high magnetic field even at high frequency. We included resources such as electromagnetic field and inductance calculators. In case you want to buy off-the-shelf or custom-made coils, we have a link to our Helmholtz coil sale and manufacture list. Helmholtz Coil Design Articles Helmholtz Coil Calculators Driving Helmholtz Coils A Helmholtz field can be produced by using a function generator amplifier such as the TS200 or the TS250. These amplifiers drive high current through coils, which generates a strong electromagnetic field. You can read the Helmholtz Coil article to learn how to use resonant techniques to produce a high-frequency, strong electromagnetic field. This article details several techniques for generating a high magnetic field. The first method is directly driving the coil using a current amplifier. The second method is using a series capacitor to resonate with the coil to produce a high magnetic field even at high frequency. This article discusses how to use series resonance to generate a high magnetic field while achieving high frequency. The discussion is for electromagnets in general, but applies equally to Helmholtz coils. A novel technique that uses a special resonant circuit to amplify the current by a factor of two. It therefore magnifies the magnetic field by a factor of two. Detailed mathematical models are presented in this article.
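The standard ideal-pair result, B = (4/5)^(3/2) · μ0 · n · I / R (with μ0 the vacuum permeability), is easy to evaluate; a quick calculator sketch, where the example values (100 turns, 1 A, 10 cm radius) are arbitrary:

```python
from math import pi

MU0 = 4 * pi * 1e-7  # vacuum permeability, T*m/A

def helmholtz_field(n_turns, current_a, radius_m):
    """Field at the center of an ideal Helmholtz pair (coil spacing = radius)."""
    return (4 / 5) ** 1.5 * MU0 * n_turns * current_a / radius_m

# Example: 100 turns per coil, 1 A, 10 cm radius -> roughly 0.9 mT
b = helmholtz_field(100, 1.0, 0.10)
print(f"{b * 1e3:.3f} mT")
```

Note that the field scales inversely with the coil radius, so halving R doubles the field for the same ampere-turns.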
a. One mole of an ideal monoatomic gas (closed system, Cv,m) initially at 1 atm and 273.15 K experiences a reversible process in which the volume is doubled. The nature of the process is unspecified, but the following quantities are known: ΔH = 2000.0 J and q = 1600.0 J. Calculate the initial volume, the final temperature, the final pressure, ΔU, and w for the process. b. Suppose the above gas was taken from the same initial state to the same final state as in the first part, but by a two-step process consisting of first a reversible, isobaric step and then a reversible, isothermal step. Draw a diagram of P vs T indicating the overall change of state and the two-step pathway. Calculate the overall ΔH, ΔU, w, and q for the two-step process.
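For part (a), everything follows from the ideal gas law and ΔH = n·Cp,m·ΔT, with Cp,m = (5/2)R and Cv,m = (3/2)R for a monoatomic ideal gas. A sketch of the arithmetic, assuming the ΔU = q + w sign convention (w is work done on the gas):

```python
R = 8.314                            # J/(mol K)
n, T1, P1 = 1.0, 273.15, 101325.0    # mol, K, Pa (1 atm)
Cp, Cv = 2.5 * R, 1.5 * R            # monoatomic ideal gas, J/(mol K)
dH, q = 2000.0, 1600.0               # J (given)

V1 = n * R * T1 / P1                 # initial volume: ~0.0224 m^3 = 22.4 L
T2 = T1 + dH / (n * Cp)              # final temperature: ~369.4 K
dU = n * Cv * (T2 - T1)              # ~1200 J (equals dH * Cv/Cp)
w = dU - q                           # ~-400 J: the gas does 400 J of work
P2 = n * R * T2 / (2 * V1)           # final pressure: ~0.676 atm
print(V1, T2, dU, w, P2 / 101325.0)
```

Since ΔU and ΔH are state functions, part (b) must yield the same ΔU and ΔH; only q and w change along the two-step path.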
wplmm: Weighted Partially Linear Mixed Effects Model in plmm: Partially Linear Mixed Effects Model

Estimate the regression error variance function nonparametrically from a partially linear mixed effects model fitted using the model fitting function plmm, and refit the model applying the weighted least squares procedure. wplmm returns an object of the ‘wplmm’ class.

Usage

wplmm(object, heteroX, data, nonpar.bws = "h.select", poly.index = 1,
      var.fun.bws = "ROT", var.fun.poly.index = 0, scale.h = 1, trim = 0.01,
      lim.binning = 100, ...)

Arguments

object: a model fitted with plmm.
heteroX: at most two variables conditioning the heteroskedasticity of the regression error variance. If there are two variables, they can be passed either as a 2-element list or a 2-column matrix.
data: an optional data frame containing the variables in the model. If relevant variables are not found in data, they are taken from the environment from which wplmm was called.
nonpar.bws: the bandwidth selection method for the kernel regression of the nonparametric component. The default method “h.select” (cross validation (CV) using binning technique), “hcv” (ordinary CV), “GCV” (generalized CV) and “GCV.c” (generalized CV for correlated data) are available.
poly.index: the degree of polynomial for the kernel regression of the nonparametric component: either 0 for local constant or 1 (default) for local linear.
var.fun.bws: the bandwidth selection method for kernel regression of the variance function. A rule-of-thumb type method “ROT” (default), “h.select” (cross validation using binning technique) and “hcv” (ordinary cross validation) are available.
var.fun.poly.index: the degree of polynomial of the kernel regression to estimate the nonparametric variance function: either 0 (default) for local constant or 1 for local linear.
scale.h: a scalar or 2-dimensional vector to scale the bandwidths selected for kernel regression of the nonparametric component. The default is 1. When a scalar is given for a nonparametric component of two covariates, it scales the bandwidths in both directions by the same factor.
trim: if estimated variance function values are below the value of trim, they are set to this value. The default is 0.01.
lim.binning: the smallest sample size below which binning techniques are not used to calculate the degrees of freedom of the estimated nonparametric component. Then, the ordinary cross-validation is used instead. This option doesn't apply if “GCV.c” is used for nonpar.bws.
...: optional arguments relevant to h.select or hcv, which include nbins, hstart and hend. See sm.options and hcv.

Details

There are three methods to select bandwidths for kernel regression of the nonparametric variance function: “h.select” and “hcv” call h.select and hcv, respectively, which are functions of the sm package; “ROT” selects the bandwidths for heteroskedasticity conditioning variable w by sd(w)N^{-1/(4+q)} where q is the number of the conditioning variables (1 or 2) and N is the sample size.

Value

coefficients: estimated regression coefficients.
fitted.values: conditional predictions of the response, defined as the sum of the estimated fixed components and the predicted random intercepts.
residuals: residuals of the fitted model, defined as the response values minus the conditional predictions of the response.
var.comp: variance component estimates.
nonpar.values: estimated function values of the nonparametric component at the data points.
h.nonpar: the bandwidths used to estimate the nonparametric component.
var.fun.values: estimated variance function values. Original computations less than the value of trim have been set to the value of trim.
h.var.fun: the bandwidths used to estimate the nonparametric variance function.
rank: the degrees of freedom of the parametric component, which doesn't include the intercept term.
df.residual: the residual degrees of freedom defined as N-p-tr(2S-S^T) where N is the sample size, p is the rank of the parametric component, and S is the smoother matrix for the nonparametric component. If “GCV.c” is used for nonpar.bws, the alternative definition N-p-tr(2SR-SRS^T) is applied with R being the estimated correlation matrix of the data.
nbins: the number of bins (which would have been) used for binning for CV and the calculation of the degrees of freedom.
formula: formula passed to wplmm.
call: the matched call to wplmm.
h0.call: the matched call to select.h0 underlying the plmm that yielded the object.
plmm.call: the matched call to the plmm that yielded the object.
xlevels: if there are factors among the covariates in the parametric component, the levels of those factors.
heteroX: the names of the heteroskedasticity conditioning variables.

Examples

data(plmm.data)
model <- plmm(y1~x1+x2+x3|t1, random=cluster, data=plmm.data)
model2 <- wplmm(model, heteroX=x3, data=plmm.data)
summary(model2)
Machine Learning – WZB Data Science Blog

I gave a presentation on Topic Modeling from a practical perspective*, using data about the proceedings of plenary sessions of the 18th German Bundestag as provided by offenesparlament.de. The presentation covers preparation of the text data for Topic Modeling, evaluating models using a variety of model quality metrics, and visualizing the complex distributions in the models. You can have a look at the slides here: Probabilistic Topic Modeling with LDA – Practical topic modeling: Preparation, evaluation, visualization

The source code of the example project is available on GitHub. It shows how to perform the preprocessing and model evaluation steps with Python using tmtoolkit. The models can be inspected using PyLDAVis and some (exemplary) analyses on the data are performed.

* This presentation builds on a first session on the theory behind Topic Modeling.

Slides on Topic Modeling – Background, Hyperparameters and common pitfalls

I just uploaded my slides on probabilistic Topic Modeling with LDA that give an overview of the theory, the basic assumptions and prerequisites of LDA, and some notes on common pitfalls that often happen when trying out this method for the first time. Furthermore, I added a Jupyter Notebook that contains a toy implementation of the Gibbs sampling algorithm for LDA, with lots of comments and plots to illustrate each step of the algorithm.

Topic Model Evaluation in Python with tmtoolkit

Topic modeling is a method for finding abstract topics in a large collection of documents. With it, it is possible to discover the mixture of hidden or “latent” topics that varies from document to document in a given corpus. As an unsupervised machine learning approach, topic models are not easy to evaluate since there is no labelled “ground truth” data to compare with.
However, since topic modeling typically requires defining some parameters beforehand (first and foremost the number of topics k to be discovered), model evaluation is crucial in order to find an “optimal” set of parameters for the given data. Several metrics exist for this task and some of them will be covered in this post. Furthermore, as calculating many models on a large text corpus is a computationally intensive task, I introduce the Python package tmtoolkit, which lets you utilize all available CPU cores in your machine by computing and evaluating the models in parallel.
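The pattern of evaluating several candidate topic counts in parallel and picking the best one can be sketched in a few lines. This is a generic stand-in, not tmtoolkit's actual API: the scoring function and its fake metric are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_model(k):
    """Stand-in for: fit an LDA model with k topics, return a quality score.
    The fake metric below simply peaks at k = 20."""
    return k, 1.0 / (1 + abs(k - 20))

candidate_ks = [5, 10, 20, 40, 80]
with ThreadPoolExecutor() as pool:   # tmtoolkit parallelizes across processes
    scores = dict(pool.map(evaluate_model, candidate_ks))

best_k = max(scores, key=scores.get)
print(best_k)  # 20
```

In a real run the scoring function would fit a model per k and compute a metric such as coherence; the surrounding parallel-map structure stays the same.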
What is the number of each subatomic particle in copper? | Socratic

What is the number of each subatomic particle in copper?

1 Answer

There are 29 protons, 35 neutrons, and 29 electrons in a copper atom. You can find these numbers yourself by looking at a periodic table of the elements: copper is the 29th element in the table, reading from left to right, top to bottom. The number in the top-left corner of copper's box is called the atomic number, and that is how many protons and electrons are in a neutral copper atom. The number at the bottom is the atomic mass. If you round that number and subtract the atomic number from it, you will be left with the number of neutrons. Like this:

number of neutrons = atomic mass - atomic number
$35 = 64 - 29$

(In copper's case the most common isotope is Cu-63, so the number of neutrons would really be 34, but for middle/high school chemistry the teacher will just expect you to round the atomic mass.)
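The same arithmetic in a couple of lines of Python (63.546 is copper's standard atomic weight):

```python
# Copper: neutrons = rounded atomic mass minus atomic number.
atomic_number = 29      # protons (and electrons, in a neutral atom)
atomic_mass = 63.546    # standard atomic weight of copper
neutrons = round(atomic_mass) - atomic_number
print(neutrons)  # 35
```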
axiom of choice

I’m not entirely happy with the introduction (“Statement”) to the page axiom of choice. On the one hand, it implies that the axiom of choice is something to be considered relative to a given category $C$ (which is reasonable), but it then proceeds to give the external formulation of AC for such a $C$, which I think is usually not the best meaning of “AC relative to $C$”. I would prefer to give the Statement as “every surjection in the category of sets splits” and then discuss later that analogous statements for other categories (including both internal and external ones) can also be called “axioms of choice” — but with emphasis on the internal ones, since they are what correspond to the original axiom of choice (for sets) in the internal logic. (I would also prefer to change “epimorphism” to “surjection” or “regular/effective epimorphism”, especially when generalizing away from sets.)

I agree. Since no one objected, I went ahead and made this change. I added the characterization of IAC toposes as Boolean étendues.

diff, v66, current

Added the fact (thanks to Alizter for finding it) that AC is equivalent to the statement “if two free groups have equal cardinality, then so do their generating sets”.

diff, v70, current

Oops, of course the sets have to be infinite.

diff, v70, current

Seems like a somewhat roundabout way of putting it: can’t we just say that for infinite $X$, $X$ and $F(X)$ have the same cardinality? Am I missing something?

Okay, I guess never mind my question. The direction that the indicated statement plus ZF implies AC doesn’t look easy.

Fixed broken image link in References–General; image uploaded.

diff, v73, current

Some of the equivalent statements to AC listed, like

• That every essentially surjective functor is split essentially surjective.
• That every fully faithful and essentially surjective functor between strict categories is a strong equivalence of categories.

are suspicious, as they talk about classes rather than sets.
Isn’t it that AC for classes is a stronger statement? Should one just put the modifier “small”?

Added equivalent statement

That any cartesian product of any family of inhabited sets is inhabited.

and its type-theoretic analogue in the section on type theory

That any dependent product of any family of pointed sets is pointed.

diff, v76, current

Surjections are families of inhabited sets, not families of sets.

diff, v76, current

Re #10: These two clauses (here and here) were added in rev 75 by Anonymous and in rev 48 by Mike, respectively. It seems clear that they are meant to be applied to small categories. For the clause mentioning strict categories this is almost explicit, since the entry strict category speaks as if smallness is the default assumption. I have now added the smallness qualifier to both items, and also the strictness qualifier to the former.

diff, v77, current

adding a paragraph explaining that the traditional axiom of choice using bracket types implies function extensionality in type theory.

diff, v78, current

adding reference

• Egbert Rijke, section 17.4 of Introduction to Homotopy Type Theory, Cambridge Studies in Advanced Mathematics, Cambridge University Press (pdf) (478 pages)

diff, v78, current

updated reference

diff, v80, current

In May, an Anonymous editor added the following as an equivalent of AC:

That every subset $A \subseteq B$ in a universe $\mathcal{U}$ comes with a choice of injection $i:A \hookrightarrow B$. Constructive mathematicians usually use subsets equipped with the structure of an injection, as those are usually more useful than general subsets with the mere property of being a subset.

I don’t know what this means. How can a subset fail to come with a choice of injection defined by $i(a)=a$? Perhaps by “subset $A\subseteq B$” is meant a pair $A$, $B$ such that $|A|\le |B|$ — or, I guess, that there merely exists an injection, and this is quantified over all pairs satisfying this cardinality condition.
This feels like a Choice principle, but perhaps too strong?

I suppose that’s possible. That should follow from (global) choice, since you are choosing a set of specific injections from a family of nonempty sets of injections, and it certainly implies choice taking $A=1$.

Yes, it felt like Global Choice to me, which is stronger than the passage claims (plain AC). So, regardless, it will need editing a bit more. Perhaps the relativization to “a universe” was intended to weaken it from global choice to ordinary choice?
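For what it’s worth, the formulation preferred near the top of the thread — every surjection in the category of sets splits — is a one-liner in a proof assistant. A sketch in Lean 4, assuming Mathlib (the `choose` tactic invokes the axiom of choice):

```lean
import Mathlib.Tactic

-- AC packaged as "every surjection splits": from a proof that f is
-- surjective, `choose` (which uses choice) extracts a section g of f.
example {α β : Type*} (f : α → β) (hf : Function.Surjective f) :
    ∃ g : β → α, ∀ b, f (g b) = b := by
  choose g hg using hf
  exact ⟨g, hg⟩
```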
Resonance Structures Organic Chemistry Tutorial Series

To help you take away the guesswork I've put together a brand new series taking you through the basics, starting with the question: What IS Resonance, all the way to rules, tips, tricks, practice, and even radical resonance. Think you've mastered it all? Jump right to my Resonance Structures Practice Quiz.

Included in this series:

The first step to mastery is understanding. Don't just memorize ‘these arrows move here'. Instead, make sure you get why molecules resonate to delocalize their electrons, and understand the difference between ‘resonance hybrid' and ‘resonance intermediates'. This and more is covered in Video 1 of this mini series.

When drawing resonance structures you'll have to keep an eye on the change in formal charge. But you won't have time to go through the lengthy/complex equation in your book. Instead be sure to utilize my Formal Charge Formula Shortcut.

Now that you have a resonance foundation, it's time to put your curved arrows to work. The key to finding resonance structures is knowing which electrons to move, WHERE to move them, and WHAT NOT TO DO!

Not all resonance structures are created equally. While an arrow CAN move onto a given atom, that doesn't mean it WANTS to. Once you've come up with all possible resonance forms, you'll have to understand the stability of each structure to determine which are major and which are minor.

When you think of resonance, you're likely thinking of 2 electrons as a lone pair or pi bond. But radicals, or unpaired single electrons, can also participate in resonance to increase their stability. Video 4 shows you how to draw the ‘fishhook' resonance for allylic and benzylic radical structures.

Benzene is a unique molecule when it comes to resonance structures.
Learn how to draw its resonance, as well as resonance intermediates for substituted aromatic compounds, including Electron Donating Groups which resonate into the ring and Electron Withdrawing Groups which cause resonance out of the ring.

First, complete the Resonance Structures Practice Quiz, and then watch this video where I go over the first three question solutions and explanations step-by-step!

Having gone through this series, how do you feel about resonance structures? If you still have questions, WATCH THE SERIES AGAIN! If you feel confident then test your understanding by trying my free Resonance Structures Practice Quiz.
What this package is about

• LDA: limiting dilution assay
• coop: (cellular) cooperation

The LDA is the gold standard for quantification of the clonogenic capacity of non-adherent cells. The most common method for calculating this capacity from an LDA experiment implicitly assumes independence of all cells. Interestingly, this assumption does not hold for many cell lines (i.e. cooperatively and competitively growing cells).

The effect of cellular cooperation

Cellular cooperation induces a non-linearity to the dose-response curve (the (log) fraction of non-responding wells over the number of cells seeded) and thereby biases the estimate of the gold standard analysis method, which assumes linearity. In other words: when surrounded by many other cells, the combined clonogenic activity of 100 cells is higher than the activity of 100 separated single cells. In terms of single cell activity, 100 cells act as if they were more. Thus, the gold standard analysis method is syntactically applicable, but its result is biased (i.e. meaningless) for cooperatively growing cells.

Robust analysis in presence (and absence) of cellular cooperation

Therefore, this package equips you with the tools you need to robustly quantify the clonogenic capacity of cell lines as well as the clonogenic survival of those cell lines after treatment.

How to use this package

As known from tools for quantification of the clonogenic capacity from LDA data, the input is a data frame (table, matrix) with entries for

• cells: the number of cells seeded in a well
• wells: the number of replicate wells
• response: the number of wells with positive response (growth)
• (optional) group: treatment group - e.g. irradiation dose

All commands to get tables with numbers or plots of the activity are explained below.

A full experiment

Let’s assume you have conducted an experiment with a set of experimental conditions and biological replicates.
Your data will look something like this:

#>        name replicate Group S-value # Tested # Clonal growth
#> 1 MDA.MB321         1     0      32       12              12
#> 2 MDA.MB321         1     0      16       12              12
#> 3 MDA.MB321         1     0       8       12              12
#> 4 MDA.MB321         1     0       4       12              12
#> 5 MDA.MB321         1     0       2       12              10
#> 6 MDA.MB321         1     0       1       12               7

A table of clonogenic activities, cooperativity coefficients and survival fractions can be generated from such a data table as follows:

BT20 <- subset.data.frame(x = LDAdata, subset = name == "BT.20")
BT20 <- BT20[,c("S-value","# Tested","# Clonal growth","Group","replicate")]
round(LDA_table(x = BT20),digits = 3)
#> reference class is 0
#>   treatment     act act.CI.lb act.CI.ub     b b.pvalue    SF SF.CI.lb SF.CI.ub
#> 1         0  29.613    24.941    35.336 1.135    0.176    NA       NA       NA
#> 2         1  33.420    28.333    39.634 1.220    0.043 0.886    0.695    1.129
#> 3         2  40.632    34.393    48.222 1.206    0.062 0.729    0.572    0.929
#> 4         4  72.997    62.607    85.467 1.434    0.003 0.406    0.321    0.512
#> 5         6 180.201   152.873   213.446 1.234    0.038 0.164    0.129    0.209
#> 6         8 423.447   375.474   481.978 2.266    0.000 0.070    0.057    0.086

In this table, we find

• treatment: the experimental ‘Group’
• act: the calculated activity for the treatment
• act.CI.lb and act.CI.ub: its uncertainty (lower and upper bound of the 95%-confidence interval)
• b: the cooperativity coefficient (1: no cooperativity)
• b.pvalue: the p-value for the test hypothesis of b=1
• SF: the calculated survival fraction (SF)
• SF.CI.lb and SF.CI.ub: SF-uncertainty (lb: lower bound, ub: upper bound)

A plot of the experiment and the estimated survival fractions can be generated as follows:

In case the figure looks overcrowded, uncertainty.band = FALSE can switch off the confidence bands and xlim (e.g. xlim = c(0,100)) can be used to adjust the plotted range of seeded cells.

Mathematical background

Clonogenicity refers to the capacity of single cells to grow into new colonies. Since not every single cell will successfully form a new colony, the fraction of cells in a given culture that do so is often of interest.
This fraction is then called clonogenic activity. Mathematically it is a probability \(p\). Often the activity is communicated by the reciprocal, which corresponds to a number of cells (e.g. activity \(p = 1/42\) written as \(a = 1/p = 42\) for ‘one active cell among \(a = 42\) non-active cells’).

In the LDA, distinct numbers of cells are seeded in wells. The number of seeded cells per well is often called dose. Since we do not want to get confused with the irradiation dose, which is a frequent treatment in the field of radiation oncology, we denote the number of cells seeded with \(S\). The readout of each well in the LDA is dichotomous (growth / no growth). So, of the binomially distributed number of active cells in a well \(X\) with

\[P(X=k) = \left( \begin{matrix} S \\ k \end{matrix}\right) p^{k} (1-p)^{S-k} \]

only \(X=0\) and \(X \neq 0\) can be observed experimentally. Thus, it is about the frequency of positive wells as a function of the number of cells seeded (\(k/n = f(S)\)). For dealing with high cell numbers, the binomial distribution is approximated by the asymptotically identical (for \(S \to \infty\)) Poisson distribution (with \(\lambda = S \cdot p\)):

\[P_{\lambda}(X=k) = \frac{\lambda^k}{k!} e^{-\lambda}.\]

The basic concept for estimating the clonogenic activity of a cell suspension through diluting the number of cells in different wells is the following:

• imagine a number of cells in a well which is so high that we expect many active cells among them; then this well will be positive.
• imagine a number of cells so low that the chance of an active cell among them is very low; then this well will be negative.
• thus, between those two numbers there will be cell numbers where no one can say whether a colony will grow or not. It is rather a matter of chance.
• from Poisson statistics we know that a frequency of 37% negative wells will be observed when the average number of active cells per well is one (\(e^{-1} \approx 0.368\)).
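The Poisson approximation and the 37% rule are easy to check numerically. A small sketch, with S and p chosen for illustration so that λ = p·S = 1:

```python
import math

S, p = 100, 0.01                 # one expected active cell per well
binom_p0 = (1 - p) ** S          # exact binomial P(X = 0)
poisson_p0 = math.exp(-p * S)    # Poisson approximation, lambda = p*S = 1

print(round(binom_p0, 4))    # 0.366
print(round(poisson_p0, 4))  # 0.3679, i.e. e^-1: the "37% negative wells" mark
```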
So taking the graph ‘observed fraction of positive wells over the number of cells per well’, we find the clonogenic activity as the number of cells (x-axis) just where the curve passes the expectation of 37% negative wells.

Mathematical model

Let \(n\) be the number of wells, and let \(\mu\) denote the expected fraction of negative wells (failures). Following the single-hit Poisson model (SHPM: \(\lambda_{SHPM} = p \cdot S\)), the expectation of active cells in a well is approximately Poisson distributed and depends linearly on the number of cells seeded \(S\) as well as the probability \(p\) of each cell being active (i.e., clonogenic). We therefore find \(P_{\lambda}(k=0) = e^{-p S}\), and thus:

\[\mu = e^{-pS}.\]

The number of failures \(r\) is binomially distributed

\[ P(Y=r) = \left( \begin{matrix} n \\ r \end{matrix}\right) \mu^{r} (1-\mu)^{n-r}. \]

We find (for \(\alpha = ln(p)\), \(\xi = ln(S)\))

\[\begin{align*} \mu &= e^{- p S}\\ \Leftrightarrow ln(\mu) &= - p S \\ \Leftrightarrow -ln(\mu) &= p \cdot S \\ \Leftrightarrow ln(-ln(\mu)) &= \alpha + \xi \end{align*}\]

Therefore, technically the identification of the required number of cells is done via a generalized linear regression model (binomial family, loglog link function). The clonogenic activity is \(p = e^{\alpha}\).

In the presented model, \(p\) is independent of \(S\). So implicitly it is assumed that 100 single cells in single wells have the identical chance of resulting in at least one colony as a single well with those 100 cells would have. But this is often not the case. Cells export or release substances into the medium and thereby communicate and cooperate (or compete). The combined chance of a group of cells to grow into a clone often exceeds the sum of the individual chances. (See CFAcoop: for a number of cells seeded \(S\) and a number of resulting colonies \(C\), we find \(C = a \cdot S^b\).)
Thus, allowing for the same communication and cooperativity (or competition), we replace \(\lambda = p \cdot S\) with \(\lambda = p \cdot S^b\), in analogy to the CFA. The modification reads in both cases as ‘a number of \(S\) cells shows the same activity as if they were \(S^b\) single cells’. With this generalization we find

\[\begin{align*} P_{\lambda}(k=0) &= e^{-\lambda}\\ \Leftrightarrow \mu &= e^{- p S^b}\\ \Leftrightarrow ln(\mu) &= - p S^b \\ \Leftrightarrow ln(-ln(\mu)) &= \alpha + \xi \cdot b. \end{align*}\]

Interestingly, this equation is the same as in the special case of non-cooperativity, just without the restriction of \(b=1\), which corresponds to the non-cooperative case. The degree of cooperativity is quantified by the coefficient \(b\).

When the probability of a cell to grow into a colony is not independent of the number of surrounding cells, the earlier definition of clonogenic activity (the chance of an isolated single cell) is pointless. Thus, the approach to deal with cooperatively growing cells requires a generalized definition of clonogenic activity. For a fixed outcome of e.g. \(\lambda = 1\), which is equivalent to \(\mu \approx 0.37\), we still may calculate the required number of cells to be seeded from the regression coefficients:

\[s_{\lambda = 1} = e^{\frac{-\alpha}{b}} = \left(p^{-1}\right)^{\frac{1}{b}}.\]

On average, one out of these cells will grow into a colony. Therefore, we define this number \(s_{\lambda=1}\) as the (generalized) clonogenic activity. This is analogous to the CFA approach for cellular cooperation (which sets the reference at an expectation of \(20\) colonies). Please note, the non-cooperative case (\(b=1\)) results in the identical activity values as the previous definition.

Given the data of an LDA experiment, we find the clonogenic activity where expected. The only change in contrast to conventional LDA data analysis is the curve, which replaces the linear model.
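A quick numerical check of the generalized activity \(s_{\lambda=1} = e^{-\alpha/b}\) (the parameter values below are made up for illustration): for \(b = 1\) it reduces to \(1/p\), while \(b > 1\) (cooperation) means fewer cells are needed for one expected colony:

```python
import math

def generalized_activity(alpha, b):
    """Cells needed for an expectation of one active cell: s = exp(-alpha/b)."""
    return math.exp(-alpha / b)

alpha = math.log(1 / 42)   # alpha = ln(p) for a hypothetical activity p = 1/42
print(round(generalized_activity(alpha, 1.0), 1))  # 42.0 (non-cooperative case)
print(round(generalized_activity(alpha, 1.2), 1))  # 22.5 (cooperative case)
```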
cell.line <- unique(LDAdata$name)[2]
AD <- subset.data.frame(LDAdata, subset = (name==cell.line) & (replicate==1) & (Group == 2))[,4:6]
LDA_plot(LDA_tab = AD,uncertainty = "act", uncertainty.band = T)
LDA_table(x = AD)
#> $`activity^-1 [N]`
#> [1] 37.432
#> $`confidence interval`
#> [1] 25.742 54.773
#> $`cooperativity coefficient b`
#> [1] 0.992
#> $`p-value cooperativity`
#> [1] 0.962

full_model_fit <- LDA_activity_single(x = AD)

In case there are biological replicates, the replicate may be indicated by the fifth column. Thereby the frequency of responding wells can be plotted separately.

AD <- subset.data.frame(LDAdata, subset = (name==cell.line) & (Group == 2))[,c(4:6,3,2)]
LDA_plot(LDA_tab = AD,uncertainty = "act", uncertainty.band = T)
LDA_table(x = AD[,1:3],ref_class = 0)
#> $`activity^-1 [N]`
#> [1] 40.632
#> $`confidence interval`
#> [1] 34.393 48.222
#> $`cooperativity coefficient b`
#> [1] 1.206
#> $`p-value cooperativity`
#> [1] 0.062

Please note that the starting motivation of a concept ‘one active cell among how many?’ is still the same. It is just restricted to the special outcome of 37% negative wells. The choice of ‘outcome of 37% negative wells’ is quite artificial, and the activity for higher or lower active well ratios would change. But when it comes to survival fractions, this shift is mostly cut out, and thereby this approach filters (in parts) a cooperativity bias in those survival fractions.

Survival fraction

Even though the concept of clonogenic activity is somewhat fuzzy in the presence of cellular cooperation, the effect of treatments on this clonogenic capacity can be investigated quite accurately in the same way as for the CFA. The only requirement is a statement on the outcome of interest. For the CFA, a number of 20 colonies can be agreed upon - and a comparison of the number of required cells seeded with and without treatment is reasonable. Analogously, the ratio of cell numbers required to observe the same \(\mu\) (e.g.
\(\mu = e^{-1}\)) is of interest when investigating the effect of certain treatments.

\[SF_x = \frac{S_0(\lambda_0 = 1)}{S_x(\lambda_x = 1)}\]

can be calculated from the fitted GLM parameters as

\[SF_x = \exp\left( \frac{\log(p_x)}{b_x} - \frac{\log(p_0)}{b_0}\right).\]

Therefore, the interpretation of survival fractions is straightforward. If the number of cells needed to be seeded with treatment is ten times the number without treatment, the survival fraction is calculated as \(0.1\). Note that it is implicitly assumed that those cells that are non-survivors of the treatment do not contribute to the cooperative stimulation of the survivors. Otherwise the treatment effect will be underestimated.

Uncertainty analysis

Uncertainties for the parameters of the generalized linear model are returned by the fitting procedure and can be transferred to the clonogenic activity via the general predict.glm function. Thus, uncertainties of clonogenic activities are calculated the same way as the activities are (cutting \(y=-1\)). Uncertainties of the survival fractions can generally be assessed in two ways:

• robust approximation by combination of the 84%-confidence intervals of the activity (method (a), see Austin et al. (2002), Payton et al. (2003), Knol et al. (2011))
• calculation following the law of error propagation through first-order Taylor series expansion (method (b))

In cases of extreme cooperativity, method (b) can be numerically unstable, and therefore we recommend the use of method (a) instead. Analyzing the calculated uncertainty bounds of the survival fractions of the 10 non-extreme cell lines under all treatments (see data delivered with this package), we find a mean ratio (uncertainty bound method (a) / uncertainty bound method (b)) of 0.9969 and a standard deviation of 0.0049. Thus, the deviation of these two methods is very stable, in the order of 1%.
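Since each activity equals \(e^{-\alpha/b}\), the SF formula collapses to the ratio of the control activity to the treated activity. This can be checked in a few lines against the BT-20 numbers reported earlier (act and b for treatment 0 vs. treatment group 2):

```python
import math

def survival_fraction(alpha0, b0, alphax, bx):
    """SF_x = exp(log(p_x)/b_x - log(p_0)/b_0), with alpha = log(p)."""
    return math.exp(alphax / bx - alpha0 / b0)

act0, actx = 29.613, 40.632          # BT-20 activities: treatment 0 vs. 2
b0, bx = 1.135, 1.206                # fitted cooperativity coefficients
alpha0 = -b0 * math.log(act0)        # invert act = exp(-alpha/b)
alphax = -bx * math.log(actx)

print(round(survival_fraction(alpha0, b0, alphax, bx), 3))  # 0.729
print(round(act0 / actx, 3))                                # 0.729 (same thing)
```

Both prints agree with the SF of 0.729 reported in the table for treatment group 2.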
LDA_table_act <- LDA_table(x = BT20,uncertainty = "act")
#> reference class is 0
cbind(round(LDA_table_act[,1:5],digits = 1), round(LDA_table_act[,6:9],digits = 4))
#>   treatment   act act.CI.lb act.CI.ub   b b.pvalue     SF SF.CI.lb SF.CI.ub
#> 1         0  29.6      24.9      35.3 1.1   0.1762     NA       NA       NA
#> 2         1  33.4      28.3      39.6 1.2   0.0430 0.8861   0.6954   1.1288
#> 3         2  40.6      34.4      48.2 1.2   0.0620 0.7288   0.5717   0.9293
#> 4         4  73.0      62.6      85.5 1.4   0.0033 0.4057   0.3213   0.5124
#> 5         6 180.2     152.9     213.4 1.2   0.0378 0.1643   0.1291   0.2092
#> 6         8 423.4     375.5     482.0 2.3   0.0000 0.0699   0.0565   0.0863

LDA_table_ep <- LDA_table(x = BT20,uncertainty = "ep")
#> reference class is 0
cbind(round(LDA_table_ep[,1:5],digits = 1), round(LDA_table_ep[,6:9],digits = 4))
#>   treatment   act act.CI.lb act.CI.ub   b b.pvalue     SF SF.CI.lb SF.CI.ub
#> 1         0  29.6      24.9      35.3 1.1   0.1762     NA       NA       NA
#> 2         1  33.4      28.3      39.6 1.2   0.0430 0.8861   0.6959   1.1282
#> 3         2  40.6      34.4      48.2 1.2   0.0620 0.7288   0.5720   0.9286
#> 4         4  73.0      62.6      85.5 1.4   0.0033 0.4057   0.3213   0.5121
#> 5         6 180.2     152.9     213.4 1.2   0.0378 0.1643   0.1292   0.2091
#> 6         8 423.4     375.5     482.0 2.3   0.0000 0.0699   0.0565   0.0866
Quadratic Formula Lesson Plan for 9th - 12th Grade

This worksheet is part of the TI-Nspire lesson on the quadratic formula. Pupils determine the solutions of a quadratic function by looking at a graph and the discriminant. They use the quadratic formula to solve quadratic functions on their TI-Nspire.
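The discriminant logic the worksheet practices can be sketched in a few lines (a generic illustration, not part of the lesson materials):

```python
import math

def solve_quadratic(a, b, c):
    """Return real roots of ax^2 + bx + c = 0; the discriminant decides how many."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                      # no real roots: parabola misses the x-axis
    if disc == 0:
        return [-b / (2 * a)]          # one repeated root: vertex on the x-axis
    r = math.sqrt(disc)
    return [(-b - r) / (2 * a), (-b + r) / (2 * a)]

print(solve_quadratic(1, -3, 2))  # [1.0, 2.0]
```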
Unlocking the Power of Financial Analysis: What is the Net Present Value of an Investment?

When it comes to making informed investment decisions, understanding the concept of net present value (NPV) is crucial. As a fundamental tool in financial analysis, NPV helps investors and businesses evaluate the potential return on investment (ROI) of a project or investment opportunity. In this article, we’ll delve into the world of NPV, exploring its definition, calculation, and significance in investment decision-making.

What is Net Present Value (NPV)?

Net present value is the difference between the present value of expected future cash flows and the initial investment amount. In simpler terms, NPV is a measure of the value of an investment today, considering the time value of money. It takes into account the idea that a dollar received today is worth more than a dollar received in the future, due to the potential to earn interest or returns on that dollar.

NPV is calculated by discounting the expected future cash inflows and outflows of an investment using a discount rate, which represents the cost of capital or the expected rate of return. A positive NPV indicates that the investment is expected to generate more value than its cost, making it a viable opportunity. On the other hand, a negative NPV suggests that the investment may not be worth pursuing.

The Formula: How to Calculate NPV

The NPV formula is as follows:

NPV = Σ (CFt / (1 + r)^t) – Initial Investment

• NPV = Net present value
• CFt = Cash flow at time t
• r = Discount rate
• t = Time period
• Initial Investment = The initial amount invested

For example, let’s say you’re considering an investment that requires an initial outlay of $10,000. The expected cash flows for the next five years are $2,000, $3,000, $4,000, $5,000, and $6,000, respectively.
If the discount rate is 10%, what is the NPV of this investment? Using the formula, we can calculate the NPV as follows:

NPV = ($2,000 / (1 + 0.10)^1) + ($3,000 / (1 + 0.10)^2) + ($4,000 / (1 + 0.10)^3) + ($5,000 / (1 + 0.10)^4) + ($6,000 / (1 + 0.10)^5) – $10,000

NPV = $14,443.38 – $10,000 = $4,443.38

In this scenario, the NPV is positive, indicating that the investment is expected to generate more value than its cost.

Importance of Net Present Value in Investment Decision-Making

NPV is a powerful tool for investors and businesses, as it helps them:

Evaluate Investment Opportunities

NPV allows investors to compare different investment opportunities and determine which ones are likely to generate the highest returns. By calculating the NPV of each option, investors can prioritize their investments and allocate their resources more effectively.

Assess Risk and Uncertainty

NPV takes into account the risks associated with an investment, including the uncertainty of future cash flows and the potential for default. By using a discount rate that reflects the level of risk, investors can adjust the NPV calculation to account for these uncertainties.

Optimize Investment Portfolios

NPV can be used to optimize investment portfolios by identifying the most valuable projects or investments. By ranking investments based on their NPV, investors can create a diversified portfolio that maximizes returns while minimizing risk.

Improve Capital Budgeting

NPV is a key component of capital budgeting, as it helps businesses evaluate the potential return on investment (ROI) of different projects. By using NPV to prioritize projects, businesses can ensure that they’re allocating their capital to the most valuable opportunities.

Investment   NPV       Ranking
Project A    $10,000   1
Project B    $5,000    2
Project C    -$2,000   3

In this example, Project A has the highest NPV, making it the top priority for investment. Project C has a negative NPV, indicating that it may not be a viable opportunity.
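As a quick sanity check of the worked example, the formula translates directly to code. Note that carrying the discounting through gives a present value of about $14,443 for the five inflows, so the NPV is about $4,443:

```python
def npv(rate, cashflows, initial_investment):
    """NPV = sum of CF_t / (1+r)^t for t = 1..n, minus the initial outlay."""
    pv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))
    return pv - initial_investment

result = npv(0.10, [2000, 3000, 4000, 5000, 6000], 10_000)
print(round(result, 2))  # 4443.38: positive, so the investment adds value
```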
Common Applications of Net Present Value NPV is a versatile tool with applications across various industries and sectors, including: Corporate Finance NPV is used in corporate finance to evaluate the potential ROI of projects, such as capital expenditures, mergers and acquisitions, and research and development initiatives. Real Estate In real estate, NPV is used to evaluate the potential return on investment for properties, taking into account rental income, maintenance costs, and potential appreciation in value. Finance and Banking Banks and financial institutions use NPV to evaluate the creditworthiness of borrowers and to determine the expected return on investment for loans and other financial instruments. Entrepreneurs use NPV to evaluate the potential viability of their business ideas, taking into account start-up costs, revenue projections, and cash flow requirements. Limitations and Challenges of Net Present Value While NPV is a powerful tool, it’s not without its limitations and challenges, including: Sensitivity to Discount Rate The NPV calculation is highly sensitive to the discount rate used, which can be subject to estimation errors or biases. Estimating Cash Flows Accurately estimating future cash flows can be challenging, particularly in uncertain or volatile markets. Ignores Qualitative Factors NPV focuses solely on quantitative factors, ignoring qualitative considerations such as strategic fit, competitive advantage, and social impact. Assumes Constant Discount Rate NPV assumes a constant discount rate over the life of the investment, which may not reflect changes in market conditions or interest rates. In conclusion, net present value is a fundamental concept in financial analysis that helps investors and businesses evaluate the potential return on investment of a project or opportunity. 
By understanding the NPV formula, its importance in investment decision-making, and its common applications, investors can make more informed decisions and optimize their investment portfolios. While NPV has its limitations and challenges, it remains a powerful tool for unlocking the power of financial analysis and driving business success. What is the Net Present Value (NPV) of an Investment? The Net Present Value (NPV) of an investment is the difference between the present value of expected cash inflows and the present value of expected cash outflows over a period of time. It’s a metric used to evaluate the profitability of an investment or project by calculating the value of future cash flows in today’s dollars. NPV takes into account the time value of money, which means that a dollar received today is worth more than a dollar received in the future. This is because money received today can be invested to earn interest, whereas money received in the future has already lost some of its value due to inflation and opportunity cost. By discounting future cash flows to their present value, NPV provides a more accurate picture of an investment’s potential return. How is NPV Calculated? The NPV calculation involves discounting each expected cash flow by a discount rate, which reflects the risk-free rate of return and the project’s risk premium. The formula for NPV is: NPV = Σ (CFt / (1 + r)^t), where CFt is the expected cash flow at period t, r is the discount rate, and t is the number of periods. The discount rate is a critical component of the NPV calculation, as it determines the present value of each cash flow. A higher discount rate will result in a lower NPV, while a lower discount rate will result in a higher NPV. The choice of discount rate depends on the project’s risk profile and the cost of capital. What is the Purpose of NPV Analysis? The primary purpose of NPV analysis is to determine whether an investment is expected to generate more value than it costs. 
If the NPV is positive, it indicates that the investment is expected to generate returns that exceed its costs, and it’s considered a good investment opportunity. On the other hand, if the NPV is negative, it suggests that the investment will lose value over time, and it may not be a good investment. NPV analysis is also used to compare alternative investment opportunities and evaluate their relative merits. By calculating the NPV of each investment, investors can prioritize projects with higher NPVs and allocate resources more effectively. How Does NPV Differ from Other Investment Metrics? NPV differs from other investment metrics, such as Internal Rate of Return (IRR) and Payback Period, in several ways. While IRR measures the rate of return on an investment, NPV provides a more comprehensive picture of an investment’s profitability by considering the absolute value of cash flows. Payback Period, on the other hand, focuses on the time it takes for an investment to break even, ignoring the cash flows that occur after that point. NPV is a more nuanced metric that provides a more accurate picture of an investment’s potential return. It’s particularly useful for evaluating investments with complex cash flow profiles or those that involve multiple periods. What are the Limitations of NPV Analysis? One of the main limitations of NPV analysis is that it relies on estimates of future cash flows, which may be uncertain or inaccurate. Other limitations include the choice of discount rate, which can be subjective and influence the outcome of the analysis. Additionally, NPV analysis assumes that cash flows occur at the end of each period, which may not always be the case. Despite these limitations, NPV remains a powerful tool for evaluating investment opportunities. To overcome these limitations, it’s essential to use robust forecasting models, consider multiple scenarios, and perform sensitivity analysis to test the robustness of the results. 
How is NPV Used in Real-World Applications? NPV is widely used in real-world applications, such as capital budgeting, project evaluation, and investment appraisal. It’s commonly used by corporations, governments, and financial institutions to evaluate the viability of projects, such as infrastructure developments, mergers and acquisitions, and research and development initiatives. In addition to these applications, NPV is also used in personal finance to evaluate investment opportunities, such as real estate investments, stocks, and bonds. By calculating the NPV of an investment, individuals can make more informed decisions about where to allocate their resources. What are the Best Practices for NPV Analysis? One of the best practices for NPV analysis is to use a rigorous and consistent approach to estimating cash flows and discount rates. It’s essential to consider multiple scenarios and perform sensitivity analysis to test the robustness of the results. Additionally, it’s crucial to use a discount rate that reflects the project’s risk profile and the cost of capital. Another best practice is to use NPV analysis in conjunction with other investment metrics, such as IRR and Payback Period, to gain a more comprehensive understanding of an investment’s potential return. By following these best practices, investors can make more informed decisions and unlock the full potential of financial analysis.
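As a rough illustration of pairing NPV with IRR, the sketch below finds the internal rate of return by bisection, assuming the NPV curve changes sign exactly once between the bracketing rates. The names `npv_stream` and `irr_bisect` are invented for this example; real toolkits expose equivalent functions.

```python
def npv_stream(rate, flows):
    """flows[0] is the (negative) outlay at t = 0; later entries are yearly inflows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr_bisect(flows, lo=0.0, hi=1.0, tol=1e-7):
    """Bisect for the rate where NPV crosses zero, assuming one sign change in [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv_stream(mid, flows) > 0:
            lo = mid  # NPV still positive: the IRR is higher
        else:
            hi = mid  # NPV negative: the IRR is lower
    return (lo + hi) / 2
```

For the article's cash flows, the IRR lands between 20% and 25%, consistent with the positive NPV found at a 10% discount rate.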
To get started with machine learning, it is helpful to have a strong foundation in certain prerequisite areas. Here are some key prerequisites for machine learning:
1. Programming: Proficiency in a programming language is essential for implementing machine learning algorithms and working with data. Python is the most commonly used programming language in the machine learning community due to its extensive libraries and frameworks, such as TensorFlow and scikit-learn. Familiarity with Python or another programming language like R or Julia will be beneficial.
2. Mathematics and Statistics: A good understanding of mathematics and statistics is crucial for machine learning. You should have knowledge of linear algebra, calculus, probability theory, and statistics. Linear algebra is important for understanding concepts like vectors, matrices, and matrix operations, which are fundamental to many machine learning algorithms. Calculus is used in optimization algorithms, which are used to train machine learning models. Probability theory and statistics are essential for understanding the probabilistic foundations of machine learning and evaluating model performance.
3. Data Manipulation and Analysis: Machine learning often involves working with large datasets. It is essential to be comfortable with data manipulation and analysis techniques. This includes skills such as cleaning and preprocessing data, handling missing values, dealing with outliers, and performing exploratory data analysis (EDA) to gain insights from the data. Proficiency in libraries like Pandas and NumPy can be valuable for data manipulation tasks.
4. Algorithms and Data Structures: Having a good understanding of basic algorithms and data structures is beneficial for implementing and optimizing machine learning algorithms. Concepts like sorting, searching, linked lists, arrays, and trees are commonly used in machine learning.
Familiarity with these concepts can help you understand the underlying principles of various machine learning algorithms and their computational complexities.
5. Linear Regression and Statistics: Linear regression is a fundamental technique in machine learning for modeling the relationship between variables. Understanding linear regression and basic statistical concepts such as hypothesis testing, confidence intervals, and p-values is important for interpreting the results and evaluating models.
6. Probability and Bayes' Rule: Probability theory forms the foundation of many machine learning algorithms. Familiarity with concepts like conditional probability, Bayes' rule, and probability distributions (e.g., Gaussian, Bernoulli) is important for understanding probabilistic models and techniques like Naive Bayes and Bayesian inference.
7. Machine Learning Concepts: Familiarize yourself with the core concepts of machine learning, including supervised learning, unsupervised learning, reinforcement learning, overfitting, underfitting, bias-variance tradeoff, feature selection, model evaluation, and cross-validation. Understanding these concepts will help you grasp the principles behind different machine learning algorithms and how to apply them effectively.
8. Domain Knowledge: Having domain knowledge related to the problem you are trying to solve with machine learning can be highly valuable. Understanding the specific characteristics, challenges, and nuances of the domain can help you make better decisions in data preprocessing, feature engineering, and model evaluation.
While these are important prerequisites, it's worth noting that machine learning is a vast field with various subfields and specialized techniques. As you delve deeper into specific areas of machine learning, you may need to acquire additional knowledge and skills.
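As a small illustration of the linear regression prerequisite, ordinary least squares for a straight line can be written from scratch. This sketch assumes a single predictor and uses only the standard closed-form formulas for slope and intercept; the function name `fit_line` is invented for this example.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = intercept + slope * x (one predictor)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance of (x, y) divided by the variance of x.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Points lying exactly on y = 1 + 2x recover intercept 1 and slope 2.
intercept, slope = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
```

In practice you would reach for a library (e.g., scikit-learn), but writing the formulas out once makes the statistics behind the model concrete.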
Continuous learning and staying up-to-date with the latest developments in the field are also essential for success in machine learning.
Machine learning (ML) is a subfield of artificial intelligence (AI) that focuses on the development of algorithms and models that enable computers to learn and make predictions or decisions without being explicitly programmed. In other words, machine learning algorithms allow systems to automatically learn patterns and relationships from data and make intelligent decisions based on that data.
Artificial intelligence, on the other hand, is a broader field that encompasses the development of intelligent systems capable of performing tasks that typically require human intelligence. AI aims to build intelligent machines that can perceive their environment, reason, learn, and make decisions. Machine learning is a key component of AI because it provides the algorithms and techniques that enable systems to learn from data and improve their performance over time. By analyzing large amounts of data, machine learning algorithms can identify patterns, extract meaningful insights, and make predictions or decisions. There are different types of machine learning algorithms, including:
1. Supervised Learning: In supervised learning, the algorithm learns from labeled training data, where each data point is associated with a corresponding label or output. The algorithm learns to map input data to the correct output and can then make predictions for new, unseen data.
2. Unsupervised Learning: Unsupervised learning algorithms work with unlabeled data, where the algorithm aims to discover hidden patterns or structures in the data without explicit guidance. Clustering and dimensionality reduction are common unsupervised learning techniques.
3. Reinforcement Learning: Reinforcement learning involves an agent learning to interact with an environment and take actions to maximize cumulative rewards.
The agent learns through trial and error, receiving feedback in the form of rewards or penalties based on its actions. Machine learning techniques are used in various AI applications, including:
1. Image and Speech Recognition: Machine learning algorithms can analyze and classify images, enabling systems to recognize objects, faces, or patterns. Similarly, in speech recognition, machine learning is used to convert spoken words into written text.
2. Natural Language Processing (NLP): NLP involves the understanding and processing of human language by machines. Machine learning algorithms are used to analyze and interpret text, perform sentiment analysis, language translation, and build chatbots, among other tasks.
3. Recommendation Systems: Machine learning algorithms are commonly used in recommendation systems to analyze user preferences and behavior, and make personalized recommendations for products, movies, music, and more.
4. Autonomous Vehicles: Machine learning plays a crucial role in enabling autonomous vehicles to perceive and interpret their environment, make decisions, and navigate safely.
These are just a few examples of how machine learning contributes to the broader field of artificial intelligence. Machine learning techniques and algorithms are foundational tools for building intelligent systems and enabling AI applications to learn, adapt, and improve their performance over time.
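A minimal supervised-learning example, assuming no external libraries: a one-nearest-neighbor classifier that labels a new point by its closest training example. This is a sketch for intuition, not production code, and the names (`predict_1nn`, the toy data) are invented for illustration.

```python
def predict_1nn(examples, point):
    """examples: list of (feature_tuple, label) pairs; returns the nearest label."""
    def dist2(a, b):
        # Squared Euclidean distance; the square root is unnecessary for ranking.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(examples, key=lambda ex: dist2(ex[0], point))
    return label

# Two labeled clusters; a query near (2, 2) should pick up label "A".
training = [((1.0, 1.0), "A"), ((1.5, 1.2), "A"),
            ((9.0, 9.0), "B"), ((8.5, 9.5), "B")]
prediction = predict_1nn(training, (2.0, 2.0))  # → "A"
```

The "learning" here is simply memorizing labeled examples, which is why supervised methods need labeled training data in the first place.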
'Tanikawa, Ataru' Searching for codes credited to 'Tanikawa, Ataru'
[ascl:1209.008] Phantom-GRAPE: SIMD accelerated numerical library for N-body simulations
Phantom-GRAPE is a numerical software library to accelerate collisionless $N$-body simulation with the SIMD instruction set on the x86 architecture. Newtonian forces, and also central forces with an arbitrary shape f(r) that have a finite cutoff radius r_cut (i.e. f(r)=0 at r>r_cut), can be quickly computed.
[ascl:1604.011] FDPS: Framework for Developing Particle Simulators
FDPS provides the necessary functions for efficient parallel execution of particle-based simulations as templates independent of the data structure of particles and the functional form of the interaction. It is used to develop particle-based simulation programs for large-scale distributed-memory parallel supercomputers. FDPS includes templates for domain decomposition, redistribution of particles, and gathering of particle information for interaction calculation. It uses algorithms such as the Barnes-Hut tree method for long-range interactions; methods that limit the calculation to neighbor particles are used for short-range interactions. FDPS reduces the time and effort necessary: the developer writes a simple, sequential and unoptimized program of O(N^2) calculation cost, and FDPS produces compiled programs that will run efficiently on large-scale parallel supercomputers.
Uniform Random Search
Using uniform random search and taking the maximum value for P gives us a minimum figure of 450,000 for programs of length 18. However, if we allow longer programs, P falls, producing a corresponding rise in E to 1,200,000 with programs of size 25, 2,700,000 with programs of size 50, and 4,900,000 for sizes of 500. (In the ant problem, as well as reducing the chance of success, longer random programs also require more machine resources to evaluate.)
William B Langdon Tue Feb 3 19:53:29 GMT 1998
21 June 2024 - Image Compression: Minimizing the size of an image using Singular Value Decomposition
05 May 2024 - The best version of all adaptive learning rate optimization algorithms
04 May 2024 - Reducing the aggressive learning rate decay in Adagrad using the twin sibling of Adadelta
03 May 2024 - Reducing the aggressive learning rate decay in Adagrad
01 May 2024 - Parameter updates with a unique learning rate for each parameter
30 April 2024 - SGD with Nesterov: A more conscious version of Stochastic Gradient Descent with Momentum
27 April 2024 - SGD with Momentum: Fast convergence using Stochastic Gradient Descent with Momentum
17 February 2024 - Blinking LED: My attempt at learning embedded software development
08 February 2024 - Ridge Regression: Detecting and handling multicollinearity using L2 regularization
05 February 2024 - Lasso Regression: Detecting and handling multicollinearity using L1 regularization
24 December 2023 - Logistic Regression: Binary prediction using the logit function made from scratch
Skills Review for Power Series and Functions Learning Outcomes • Use summation notation • Apply factorial notation • Simplify expressions using the Product Property of Exponents In the Power Series and Functions section, we will look at power series and how they basically represent infinite polynomials. Here we will review how to expand sigma (summation) notation, apply factorial notation, and use the product rule for exponents. Expand Sigma (Summation) Notation (also in Module 1, Skills Review for Approximating Areas) Summation notation is used to represent long sums of values in a compact form. Summation notation is often known as sigma notation because it uses the Greek capital letter sigma to represent the sum. Summation notation includes an explicit formula and specifies the first and last terms of the sum. An explicit formula for each term of the series is given to the right of the sigma. A variable called the index of summation is written below the sigma. The index of summation is set equal to the lower limit of summation, which is the number used to generate the first term of the sum. The number above the sigma, called the upper limit of summation, is the number used to generate the last term of the sum. If we interpret the given notation, we see that it asks us to find the sum of the terms in the series [latex]{a}_{i}=2i[/latex] for [latex]i=1[/latex] through [latex]i=5[/latex]. We can begin by substituting the terms for [latex]i[/latex] and listing out the terms. 
[latex]\begin{array}{l} {a}_{1}=2\left(1\right)=2 \\ {a}_{2}=2\left(2\right)=4\hfill \\ {a}_{3}=2\left(3\right)=6\hfill \\ {a}_{4}=2\left(4\right)=8\hfill \\ {a}_{5}=2\left(5\right)=10\hfill \end{array}[/latex] We can find the sum by adding the terms: [latex]\displaystyle\sum _{i=1}^{5}2i=2+4+6+8+10=30[/latex] A General Note: Summation Notation The sum of the first [latex]n[/latex] terms of a series can be expressed in summation notation as follows: [latex]\displaystyle\sum _{i=1}^{n}{a}_{i}[/latex] This notation tells us to find the sum of [latex]{a}_{i}[/latex] from [latex]i=1[/latex] to [latex]i=n[/latex]. [latex]i[/latex] is called the index of summation, 1 is the lower limit of summation, and [latex]n[/latex] is the upper limit of summation. Example: Expanding Summation Notation Evaluate [latex]\displaystyle\sum _{i=3}^{7}{i}^{2}[/latex]. Try It Evaluate [latex]\displaystyle\sum _{i=2}^{5}\left(3i - 1\right)[/latex]. Apply Factorial Notation (also in Module 5, Skills Review for Alternating Series and Ratio and Root Tests) Recall that [latex]n[/latex] factorial, written as [latex]n![/latex], is the product of the positive integers from 1 to [latex]n[/latex]. For example, [latex]\begin{array}{l}4!=4\cdot 3\cdot 2\cdot 1=24 \\ 5!=5\cdot 4\cdot 3\cdot 2\cdot 1=120\end{array}[/latex] An example of a formula containing a factorial is [latex]{a}_{n}=\left(n+1\right)![/latex]. The sixth term of the sequence can be found by substituting 6 for [latex]n[/latex]. [latex]{a}_{6}=\left(6+1\right)!=7!=7\cdot 6\cdot 5\cdot 4\cdot 3\cdot 2\cdot 1=5040[/latex] The factorial of any whole number [latex]n[/latex] is [latex]n\left(n - 1\right)![/latex] We can therefore also think of [latex]5![/latex] as [latex]5\cdot 4!\text{.}[/latex] n factorial is a mathematical operation that can be defined using a recursive formula.
The factorial of [latex]n[/latex], denoted [latex]n![/latex], is defined for a positive integer [latex]n[/latex] as [latex]\begin{array}{l}0!=1\\ 1!=1\\ n!=n\left(n - 1\right)\left(n - 2\right)\cdots \left(2\right)\left(1\right)\text{, for }n\ge 2\end{array}[/latex] The special case [latex]0![/latex] is defined as [latex]0!=1[/latex]. Try It Expand [latex](n+3)![/latex]. Use the Product Rule for Exponents (also in Module 5, Skills Review for Alternating Series and Ratio and Root Tests) A General Note: The Product Rule of Exponents For any real number [latex]a[/latex] and natural numbers [latex]m[/latex] and [latex]n[/latex], the product rule of exponents states that [latex]{a}^{m}\cdot {a}^{n}={a}^{m+n}[/latex] Example: Using the Product Rule Write each of the following products with a single base. Do not simplify further. 1. [latex]{t}^{5}\cdot {t}^{3}[/latex] 2. [latex]\left(-3\right)^{5}\cdot \left(-3\right)[/latex] 3. [latex]{x}^{2}\cdot {x}^{5}\cdot {x}^{3}[/latex] For any real number [latex]a[/latex] and positive integers [latex]m[/latex] and [latex]n[/latex], the power rule of exponents states that [latex]{\left({a}^{m}\right)}^{n}={a}^{m\cdot n}[/latex] For an expression like [latex](a^2)^3[/latex], you have a base of [latex]a[/latex] raised to the power of [latex]2[/latex], which is then raised to another power of [latex]3[/latex]. Multiply the exponents [latex]2[/latex] and [latex]3[/latex] to find the new exponent for [latex]a[/latex]. This gives you [latex]a^{2\cdot3}[/latex] or [latex]a^6[/latex]. Always remember: when an exponent is raised to another exponent, multiply the exponents to simplify the expression. Try It Simplify the expression [latex](3a^2b)^3 \cdot (2ab^4)[/latex]. Try It Simplify the expression [latex](2y^2)^3 \cdot (4y^5)[/latex].
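The summation and factorial examples above can be verified directly in Python; `math.factorial` matches the recursive definition given here, and the product rule can be checked numerically on a concrete base.

```python
import math

# Expand the sigma notation directly: the sum of i^2 for i from 3 to 7.
sigma_sum = sum(i ** 2 for i in range(3, 8))  # 9 + 16 + 25 + 36 + 49 = 135

# Factorial from the examples above.
fact_7 = math.factorial(7)                    # 7! = 5040

# Product rule of exponents: a^m * a^n == a^(m+n), checked for a = 3, m = 2, n = 5.
product_rule_holds = (3 ** 2) * (3 ** 5) == 3 ** 7
```

Translating sigma notation into a `sum` over a `range` is a useful habit when checking series computations by hand.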
Tola to Kilograms Converter (Switch to Kilograms to Tola Converter)
How to use this Tola to Kilograms Converter
Follow these steps to convert given weight from the units of Tola to the units of Kilograms.
1. Enter the input Tola value in the text field.
2. The calculator converts the given Tola into Kilograms in real time using the conversion formula, and displays it under the Kilograms label. You do not need to click any button. If the input changes, the Kilograms value is re-calculated, just like that.
3. You may copy the resulting Kilograms value using the Copy button.
4. To view a detailed step by step calculation of the conversion, click on the View Calculation button.
5. You can also reset the input by clicking on the button present below the input field.
What is the Formula to convert Tola to Kilograms?
The formula to convert given weight from Tola to Kilograms is: Weight[(Kilograms)] = Weight[(Tola)] / 85.73526
Substitute the given value of weight in tola, i.e., Weight[(Tola)] in the above formula and simplify the right-hand side value. The resulting value is the weight in kilograms, i.e., Weight[(Kilograms)].
Consider that an investment in gold jewelry is weighed at 15 tola. Convert this weight from tola to Kilograms. The weight of gold jewelry in tola is: Weight[(Tola)] = 15 The formula to convert weight from tola to kilograms is: Weight[(Kilograms)] = Weight[(Tola)] / 85.73526 Substitute given weight of gold jewelry, Weight[(Tola)] = 15 in the above formula. Weight[(Kilograms)] = 15 / 85.73526 Weight[(Kilograms)] = 0.175 Final Answer: Therefore, 15 tola is equal to 0.175 kg. The weight of gold jewelry is 0.175 kg, in kilograms.
Consider that a traditional Indian gold necklace weighs 25 tola. Convert this weight from tola to Kilograms.
The weight of gold necklace in tola is: Weight[(Tola)] = 25 The formula to convert weight from tola to kilograms is: Weight[(Kilograms)] = Weight[(Tola)] / 85.73526 Substitute given weight of gold necklace, Weight[(Tola)] = 25 in the above formula. Weight[(Kilograms)] = 25 / 85.73526 Weight[(Kilograms)] = 0.2916 Final Answer: Therefore, 25 tola is equal to 0.2916 kg. The weight of gold necklace is 0.2916 kg, in kilograms.
Tola to Kilograms Conversion Table
The following table gives some of the most used conversions from Tola to Kilograms.
Tola (tola) | Kilograms (kg)
0.01 tola | 0.00011663813 kg
0.1 tola | 0.00116638125 kg
1 tola | 0.01166381253 kg
2 tola | 0.02332762506 kg
3 tola | 0.0349914376 kg
4 tola | 0.04665525013 kg
5 tola | 0.05831906266 kg
6 tola | 0.06998287519 kg
7 tola | 0.08164668772 kg
8 tola | 0.09331050025 kg
9 tola | 0.105 kg
10 tola | 0.1166 kg
20 tola | 0.2333 kg
50 tola | 0.5832 kg
100 tola | 1.1664 kg
1000 tola | 11.6638 kg
The tola is a traditional unit of mass used in South Asia, equivalent to approximately 11.66 grams. It is commonly used in the jewelry industry to weigh gold and other precious metals. A kilogram is the base unit of mass in the International System of Units (SI). The kilogram (kg) is used as a unit of mass in various fields and applications globally like science, healthcare, education, commerce and trade, agriculture, etc.
Frequently Asked Questions (FAQs)
1. How do I convert tolas to kilograms? To convert tolas to kilograms, multiply the number of tolas by 0.0116638, since one tola is approximately equal to 0.0116638 kilograms. For example, 50 tolas multiplied by 0.0116638 equals approximately 0.58319 kilograms.
2. What is the formula for converting tolas to kilograms? The formula is: \( \text{kilograms} = \text{tolas} \times 0.0116638 \).
3. How many kilograms are in a tola? There are approximately 0.0116638 kilograms in one tola.
4. Is 1 tola equal to 0.0116638 kilograms? Yes, 1 tola is approximately equal to 0.0116638 kilograms.
5.
How do I convert kilograms to tolas? To convert kilograms to tolas, multiply the number of kilograms by 85.7353. For example, 1 kilogram multiplied by 85.7353 equals approximately 85.7353 tolas.
6. Why do we multiply by 0.0116638 to convert tolas to kilograms? Because one tola is defined as approximately 0.0116638 kilograms, multiplying by this factor converts the mass from tolas to kilograms.
7. How many kilograms are there in 100 tolas? Multiply 100 tolas by 0.0116638 to get approximately 1.16638 kilograms.
8. What is a tola, and where is it commonly used? A tola is a traditional unit of mass used in South Asian countries like India, Pakistan, and Nepal. It is often used for measuring precious metals like gold and silver.
Weight Converter Android Application
We have developed an Android application that converts weight between kilograms, grams, pounds, ounces, metric tons, and stones. Click on the following button to see the application listing in Google Play Store, please install it, and it may be helpful in your Android mobile for conversions offline.
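The conversions in the FAQs can be expressed as a pair of small functions. This is a sketch using the constant 85.73526 tola per kilogram stated in the formula on this page; the function names are illustrative.

```python
TOLA_PER_KG = 85.73526  # conversion constant used throughout this page

def tola_to_kg(tola):
    """Weight[(Kilograms)] = Weight[(Tola)] / 85.73526"""
    return tola / TOLA_PER_KG

def kg_to_tola(kg):
    """The inverse conversion: multiply kilograms by the same constant."""
    return kg * TOLA_PER_KG

# The gold jewelry example: 15 tola is about 0.175 kg.
jewelry_kg = round(tola_to_kg(15), 3)  # → 0.175
```

Converting in one direction and back should return the starting value, which is a quick sanity check on any unit-conversion pair.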
What is carbon dating (radiometric dating)?
Radiometric dating is any technique that uses the known rate of decay of a radioactive isotope to determine the age of a material. Radiocarbon (carbon-14) dating is the best-known example: it is based upon the proportion of radioactive carbon-14 remaining in once-living material such as bone or shell, and it is widely used to date organic remains from archaeological and historic sites. Carbon-14 is produced when cosmic rays strike the atmosphere, and because it is only weakly radioactive, it decays slowly enough to be useful for dating. Other radiometric methods, such as potassium-argon dating and uranium-lead dating (based on uranium-238), are applied to rocks rather than organic remains.
Scientists apply radiometric dating to a variety of materials. Radiocarbon samples are usually charcoal, wood, bone, or shell, and because carbon-14 decays at a known rate, measuring how much 14C remains in a specimen indicates how long ago the organism died. Rocks over 100,000 years old, such as solidified lava, must be dated with longer-lived isotopes instead.
What is the meaning of carbon dating?
Carbon dating is the determination of the age of organic material, such as bone or wooden artefacts, from its carbon-14 content. Over millennia the remaining carbon-14 in a sample declines, so measurements made today can be converted into an age. Dates from very old or poorly preserved samples can be inaccurate, which is why calibration and careful sample handling matter.
Radiocarbon dating compares the unstable carbon-14 isotope against the stable carbon isotopes, and it has been used to date organic artifacts such as linen and bones. If a sample retains no measurable carbon-14, it lies beyond the range of the method.

What type of radiation does carbon dating use

Carbon occurs in several forms. Its stable isotopes, carbon-12 and carbon-13, make up nearly all natural carbon; carbon-14, whose nucleus contains 6 protons and 8 neutrons, is radioactive and very rare. When carbon-14 decays it emits beta radiation — not the visible light our eyes detect, nor the high-level sources used in radiotherapy. The method goes back to Willard Libby and colleagues, who introduced radiocarbon dating in 1949. Carbon combines with oxygen to form carbon dioxide, which enters the food chain, so every living thing carries a very low level of natural radioactivity from carbon-14 — and it is this weak radiation that laboratories measure.

What does the carbon dating mean

Carbon dating means determining the age of biological specimens from the radioactivity of their carbon. Carbon-14 is continuously created in the atmosphere, and an object loses half of its carbon-14 roughly every 5,730 years, so the ratio of carbon-14 to carbon-12 in the material dates it.
When an organism dies it stops exchanging carbon with its surroundings; from the moment of death its carbon-14 decays to nitrogen while the normal carbon content stays fixed. Since the half-life of carbon-14 is 5,730 years, scientists can calculate how much time has passed from the fraction of carbon-14 remaining. Used by scientists for roughly the past 70 years, the technique provides objective age estimates for historical artefacts — moa bones, for example — and for other organic remains; in very old samples the carbon-14 is almost completely gone, which sets the practical limit of the method. In practice, laboratories first clean a sample to remove contaminating carbon, then measure its carbon-14 content relative to carbon-12 and calibrate the result to obtain a calendar date. Advances in measurement over the years have steadily improved the precision of these age determinations.
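The decay arithmetic behind these dates can be sketched in a few lines. A minimal illustration in plain Python — the 5,730-year half-life comes from the text above; the function name and interface are my own:

```python
import math

HALF_LIFE_C14 = 5730  # years; half-life of carbon-14, as stated above

def radiocarbon_age(remaining_fraction):
    """Estimate an age in years from the fraction of carbon-14 remaining.

    Solves remaining_fraction = (1/2) ** (t / HALF_LIFE_C14) for t.
    """
    if not 0 < remaining_fraction <= 1:
        raise ValueError("remaining fraction must be in (0, 1]")
    return -HALF_LIFE_C14 * math.log2(remaining_fraction)
```

For example, a sample retaining half of its original carbon-14 dates to one half-life ago: radiocarbon_age(0.5) gives 5730.0 years.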
{"url":"https://complejidadhumana.com/what-is-carbon-dating-radiometric-dating/","timestamp":"2024-11-07T07:27:23Z","content_type":"text/html","content_length":"66734","record_id":"<urn:uuid:ef8a5213-f99d-4b0a-9c46-84ca06338f91>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00658.warc.gz"}
GCSE Maths Higher Ch.13 More Trigonometry | Education Auditorium

Chapter 13 More Trigonometry
13.5 Calculating Areas and the Sine Rule

The sine rule for the area of a triangle is Area = ½ab sin C, where a and b are two sides of a triangle and C is the angle between them. In this section we cover the sine and cosine rules: the sine rule, a discussion of the sine rule in a general triangle, using the sine rule to find a missing side, using the sine rule to find a missing angle, and the area of a triangle.

Chapter 13 More Trigonometry
13.6 The Cosine Rule and 2D Trigonometric Problems

The cosine rule, a^2 = b^2 + c^2 − 2bc cos A, can be used in any triangle to calculate an unknown side. In this section we cover the cosine rule, rearranging the cosine rule to find an angle, and using the cosine rule to find a missing angle.
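Both rules from this chapter are easy to check numerically. A small sketch in plain Python (angles taken in degrees; the function names are mine):

```python
import math

def triangle_area(a, b, angle_c_deg):
    """Area = (1/2) * a * b * sin(C), where C is the included angle in degrees."""
    return 0.5 * a * b * math.sin(math.radians(angle_c_deg))

def cosine_rule_side(b, c, angle_a_deg):
    """a^2 = b^2 + c^2 - 2bc cos(A); returns the side a opposite angle A."""
    a_squared = b * b + c * c - 2 * b * c * math.cos(math.radians(angle_a_deg))
    return math.sqrt(a_squared)
```

With a right angle both rules reduce to familiar results: triangle_area(3, 4, 90) gives the area 6 of the 3-4-5 right triangle, and cosine_rule_side(3, 4, 90) recovers the hypotenuse 5 (Pythagoras, since cos 90° = 0).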
{"url":"https://www.education-auditorium.co.uk/gcse-maths-higher-ch-13-more-trigonometry","timestamp":"2024-11-08T12:23:21Z","content_type":"text/html","content_length":"1050514","record_id":"<urn:uuid:2e4c5491-c1bd-4037-9006-5d9972b38f69>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00633.warc.gz"}
Error importing ZZ

I'm trying to switch to a more software-development-like approach for one of my projects. To this end I'll be writing several files, and I'll be trying to keep imports to a minimum to speed up module loading.

At first I started with a file foo.sage and a Makefile which preparses this using sage -min -preparse foo.sage. But the resulting foo.sage.py still starts with from sage.all_cmdline import *. I thought the point of the -min switch was to avoid just that. Am I missing something here?

Next I tried to write Python code instead. But there I got problems, apparently because I was loading modules in the wrong order. Take for example a file foo.py containing just the line from sage.rings.integer_ring import ZZ. My Sage 7.4 on Gentoo will print the following when running said file as sage foo.py:

Traceback (most recent call last):
File "foo.py", line 1, in <module>
from sage.rings.integer_ring import ZZ
File "sage/rings/integer.pxd", line 7, in init sage.rings.integer_ring (…/rings/integer_ring.c:14426)
File "sage/rings/rational.pxd", line 8, in init sage.rings.integer (…/rings/integer.c:49048)
File "sage/rings/fast_arith.pxd", line 3, in init sage.rings.rational (…/rings/rational.c:36533)
File "sage/libs/pari/gen.pxd", line 5, in init sage.rings.fast_arith (…/rings/fast_arith.c:8139)
File "sage/libs/pari/gen.pyx", line 91, in init sage.libs.pari.gen (…/libs/pari/gen.c:135191)
File "/usr/lib64/python2.7/site-packages/sage/rings/infinity.py", line 228, in <module>
from sage.rings.integer_ring import ZZ
ImportError: cannot import name ZZ

Is there a way to reasonably import things like this without too much experimentation, and without importing far more than I actually need here?

This is a very good question and could really use an answer from an expert!

1 Answer

You first need to import sage.all.
The following should work (I'm including version info for reference):

$ sage -v
SageMath version 8.1, Release Date: 2017-12-07
$ sage -python
Python 2.7.14 (default, Dec 9 2017, 17:25:34)
[GCC 7.2.0] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sage.all
>>> from sage.rings.integer_ring import ZZ

Edit: you could even directly import ZZ from sage.all:

$ sage -python
Python 2.7.14 (default, Dec 9 2017, 17:25:34)
[GCC 7.2.0] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from sage.all import ZZ

For me, it takes about 3.5 seconds to from sage.all import * and about 3.5 seconds to import sage.all and about 3.5 seconds to from sage.all import ZZ, which makes this suggested strategy defeat the purpose of loading modules individually as-needed in order to save time. ml9nn ( 2018-02-03 17:44:16 +0100 )
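As the comment above measures, importing anything from sage.all costs as much as importing everything. One general workaround in plain Python — not Sage-specific — is to defer the import until the name is first needed. The helper below is my own sketch, using a stdlib module as a stand-in for a Sage one:

```python
import importlib

_lazy_cache = {}

def lazy_get(module_name, attr):
    """Import module_name only on first request, then cache the attribute."""
    key = (module_name, attr)
    if key not in _lazy_cache:
        module = importlib.import_module(module_name)  # deferred until here
        _lazy_cache[key] = getattr(module, attr)
    return _lazy_cache[key]

# Stand-in for something like: ZZ = lazy_get("sage.rings.integer_ring", "ZZ")
Fraction = lazy_get("fractions", "Fraction")
```

Sage itself ships a lazy_import utility in the same spirit, so the startup cost is paid only when a name is actually used.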
{"url":"https://ask.sagemath.org/question/35522/error-importing-zz/?answer=40118","timestamp":"2024-11-12T12:51:12Z","content_type":"application/xhtml+xml","content_length":"60044","record_id":"<urn:uuid:3c11cbba-bbb4-416f-8a89-8f7837a4291c>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00899.warc.gz"}
MAT101 | Algebra in Mathematics - Trident University

Excelsior MAT101 Module 7 Assignment Problem Write-up
Tutorial # 00619332
Posted On: 12/21/2020 04:18 AM

Excelsior MAT101 Module 8 Problem Write-up
Based on what you learned from the problem you solved with your group, you will submit a written report describing your solution to a similar problem, posted as an Announcement on the first day of thi …
{"url":"https://www.homeworkjoy.com/questions/excelsior-mat101-module-7-assignment-problem-write-up-620606/","timestamp":"2024-11-13T14:47:47Z","content_type":"text/html","content_length":"144857","record_id":"<urn:uuid:6fce0ed9-976c-416f-99a3-f58f38a940c0>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00711.warc.gz"}
The Highest Common Factor of Three Numbers by using Division Method

How to find the highest common factor of three numbers by using the division method is discussed here step by step.

Step I: First of all, find the highest common factor (H.C.F.) of any two of the given numbers.

Step II: Now find the highest common factor (H.C.F.) of the third given number and the H.C.F. obtained in Step I from the first and second numbers.

Let us consider some examples of finding the highest common factor (H.C.F.) of three numbers by the division method.

1. Find the highest common factor (H.C.F.) of 184, 230 and 276 by using the division method.
Let us find the highest common factor (H.C.F.) of 184 and 230.
Highest common factor of 184 and 230 = 46.
Now find the H.C.F. of 276 and 46.
Highest common factor of 276 and 46 = 46.
Therefore, the required highest common factor (H.C.F.) of 184, 230 and 276 = 46.

2. Find the highest common factor (H.C.F.) of 136, 170 and 255 by using the division method.
Let us find the highest common factor (H.C.F.) of 136 and 170.
Highest common factor of 136 and 170 = 34.
Now find the H.C.F. of 34 and 255.
Highest common factor of 34 and 255 = 17.
Therefore, the required highest common factor (H.C.F.) of 136, 170 and 255 = 17.

3. Using the long division method, find the H.C.F. of 891, 1215 and 1377.
The highest common factor of 891 and 1215 is 81, and now we shall find the H.C.F. of 81 and 1377.
Therefore, the H.C.F. of 891, 1215 and 1377 is 81.

4. Find the HCF of 216, 468 and 828 by the division method.
Step 1: We will first find the HCF of 216 and 828.
Step 2: Now find the HCF of 36 and 468.
Hence, the HCF of 216, 468 and 828 is 36.

● Factors.
● Highest Common Factor (H.C.F).
● Examples on Highest Common Factor (H.C.F).
● Greatest Common Factor (G.C.F).
● Examples of Greatest Common Factor (G.C.F).
● To find Highest Common Factor by using Prime Factorization Method.
● Examples to find Highest Common Factor by using Prime Factorization Method.
● To find Highest Common Factor by using Division Method.
● Examples to find Highest Common Factor of two numbers by using Division Method.
● To find the Highest Common Factor of three numbers by using Division Method.
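The two-step procedure above maps directly onto the Euclidean (division) algorithm. A short sketch in Python — the function names are mine:

```python
def hcf(a, b):
    """Highest common factor of two numbers by the division method.

    Repeatedly divide and replace the pair with (divisor, remainder)
    until the remainder is 0; the last divisor is the H.C.F.
    """
    while b:
        a, b = b, a % b
    return a

def hcf_of_three(a, b, c):
    """Step I: H.C.F. of any two numbers; Step II: H.C.F. of that result and the third."""
    return hcf(hcf(a, b), c)
```

Running it on the worked examples reproduces the answers in the text: hcf_of_three(184, 230, 276) = 46, hcf_of_three(136, 170, 255) = 17, hcf_of_three(891, 1215, 1377) = 81, and hcf_of_three(216, 468, 828) = 36.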
{"url":"https://www.math-only-math.com/to-find-the-highest-common-factor-of-three-numbers-by-using-division-method.html","timestamp":"2024-11-04T04:44:11Z","content_type":"text/html","content_length":"47728","record_id":"<urn:uuid:96aa39e9-0a87-4d3d-b132-34490aa05546>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00455.warc.gz"}
Viscosity (2013)

When an object moves in a fluid -- a liquid or gas -- it drags bits of the fluid with it along its surface. This results in a layer of fluid sliding over a neighboring layer of fluid. The interactions of the molecules in the fluid result in a kind of internal friction that acts to slow the relative motion of neighboring layers of fluid. The full description of the mechanism of viscosity is rather complex, so we will treat it phenomenologically.

Let's start by looking at the simplest possible example. Imagine a liquid that is sandwiched between two plates. When the top plate is dragged sideways, the liquid is sheared -- pulled so that the amount of deformation of the fluid changes perpendicular to the direction of flow. Some of the fluid sticks to and moves with the top plate, while the parts of the liquid next to the bottom plate remain at rest. This sets up a gradient (a rate of change with respect to space) in velocity across the liquid. This requires the layers of the fluid to move past one another as shown in the lower picture.

An equation for the viscous force

To get an equation for the resistive force that the fluid exerts on the plates as a result of the internal sliding, consider the example shown in the figure. Suppose the two plates have an area A and are separated by a thickness y of liquid. Suppose we are holding the bottom plate fixed and are dragging the top plate with a constant velocity u. If there were no internal resistance, any force on the top plate would continue to speed it up. But if we apply a constant force, F_app, on the plate, we discover that it will speed up, but then its velocity will increase more slowly until it approaches a constant value, u. What's happening is that the acceleration of the plate is determined by Newton's second law for the plate:

m a = F_app - F_viscous

When the plate reaches a steady speed, the acceleration is 0, so the force we are applying is equal (and opposite) to the viscous force.
What we discover from our experiments is:

• the viscous force is proportional to the speed of the plate
• the viscous force is proportional to the area of the plate
• the viscous force is inversely proportional to the distance between the moving plate and the fixed plate.

The proportionality constant, μ, is called the viscosity of the fluid and is defined by:

F_viscous = μ A u / y

The viscous force on a moving sphere

While the two-plate experiment is a good way to figure out what's going on in the phenomenon of viscosity, it's a bit tricky to apply to an object moving in a fluid. From our experiments with the plates, we see that our viscosity coefficient has the dimensionality of M/LT.

Now suppose we have a small sphere of radius R moving through a fluid with a velocity v. We expect there to be a viscous force holding the sphere back, and we expect it to depend on the viscosity coefficient, μ, the velocity of the object, v, and the size of the object -- some function of R. We want to construct a force, which has dimensionality the same as "ma", of ML/T^2. We expect it to be proportional to μ, which has dimensionality M/LT. We expect it to be proportional to velocity, v, which has dimensionality L/T. So μ*v has dimensionality (M/LT) * (L/T) = M/T^2. We're missing a factor with dimensionality "L". The only one we have is the radius. So we expect our force to look something like μ*v*R. This turns out to be right -- but with a dimensionless factor of 6π -- something we couldn't know by dimensional analysis. The result is

Viscous force on a sphere of radius R moving in a fluid: F_viscous = 6π μ R v

Viscosity of different fluids

Since viscosity has dimensions of M/LT, it will have units (in the SI system) of kg/m-s. Sometimes it's convenient to express this unit in different forms. For example, since we will typically be building a force with it, we might want to rearrange this so it looks like Newtons: 1 N = 1 kg-m/s^2.
So we can make the units of viscosity include a Newton by multiplying by m^2-s and dividing by the same factor. The result is

1 kg/m-s = 1 N-s/m^2

The combination unit Newton/m^2 is a pressure and is often called a Pascal. So the units you will see for viscosity are typically Pascal-seconds (Pa-s). The viscosity of air and water is:

μ[air] = 1.81 × 10^-5 Pa-s
μ[water] = 1.00 × 10^-3 Pa-s
μ[seawater] = 1.07 × 10^-3 Pa-s

Karen Carleton and Joe Redish 9/29/11
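The sphere formula above is simple to evaluate. A minimal sketch using the viscosities from the table (values from the text; the function name is mine):

```python
import math

# Viscosities from the table above, in Pa-s
MU_AIR = 1.81e-5
MU_WATER = 1.00e-3

def stokes_drag(mu, radius, speed):
    """Viscous force on a sphere moving in a fluid: F = 6 * pi * mu * R * v."""
    return 6 * math.pi * mu * radius * speed
```

As the dimensional argument predicts, the force scales linearly in each of μ, R, and v — doubling any one of them doubles the drag.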
{"url":"http://umdberg.pbworks.com/w/page/68392703/Viscosity%20(2013)","timestamp":"2024-11-06T20:53:40Z","content_type":"application/xhtml+xml","content_length":"28504","record_id":"<urn:uuid:8b6d54c0-d178-4325-977c-028b1ce6a8e2>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00035.warc.gz"}
Cite as: Roberto Grossi, Costas S. Iliopoulos, Chang Liu, Nadia Pisanti, Solon P. Pissis, Ahmad Retha, Giovanna Rosone, Fatima Vayani, and Luca Versari. On-Line Pattern Matching on Similar Texts. In 28th Annual Symposium on Combinatorial Pattern Matching (CPM 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 78, pp. 9:1-9:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik.

BibTeX:
@InProceedings{grossi_et_al:LIPIcs.CPM.2017.9,
author = {Grossi, Roberto and Iliopoulos, Costas S. and Liu, Chang and Pisanti, Nadia and Pissis, Solon P. and Retha, Ahmad and Rosone, Giovanna and Vayani, Fatima and Versari, Luca},
title = {{On-Line Pattern Matching on Similar Texts}},
booktitle = {28th Annual Symposium on Combinatorial Pattern Matching (CPM 2017)},
pages = {9:1--9:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-039-2},
ISSN = {1868-8969},
year = {2017},
volume = {78},
editor = {K\"{a}rkk\"{a}inen, Juha and Radoszewski, Jakub and Rytter, Wojciech},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2017.9},
URN = {urn:nbn:de:0030-drops-73379},
doi = {10.4230/LIPIcs.CPM.2017.9},
annote = {Keywords: string algorithms, pattern matching, degenerate strings, elastic-degenerate strings, on-line algorithms}
}
{"url":"https://drops.dagstuhl.de/search/documents?author=Iliopoulos,%20Costas%20S.","timestamp":"2024-11-08T07:27:26Z","content_type":"text/html","content_length":"147426","record_id":"<urn:uuid:2c44aa03-15ec-4314-9dc2-40eadfaf4927>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00801.warc.gz"}
Equation for The Plane and Math Questions

Math 20C: Midterm exam

Instructions. You are allowed to consult your textbook or ebook, your notes, and the lecture videos. Show all of your work. No credit will be given for unsupported answers, even if correct. You are not allowed to collaborate or communicate with any other humans while working on this exam. This exam is worth 40 points.

1. (6 points) Find an equation for the line which passes through (1, −2, 0) and is parallel to the line through (3, −4, 2) and (2, −3, 0).

2. (8 points) Find an equation for the plane which passes through (−2, 0, 1) and is parallel to the lines L1(t) = ⟨t + 1, 2t − 1, −t + 2⟩ and L2(s) = ⟨−2s + 5, −3, s + 1⟩.

3. (6 points) Let f(x, y) = x²y²/(x² + y²). Evaluate the limit lim_{(x,y)→(0,0)} f(x, y) or determine that it does not exist.

4. (10 points) Denote by r(t) and v(t) the position and velocity vectors of a particle at time t. Assume that r(0) = ⟨1, 0, 1⟩ and v(t) = ⟨cos t, sin t, √(t + 2)⟩.
(a) (5 points) Find r(t).
(b) (5 points) Find the length of the curve r(t) over the interval 0 ≤ t ≤ 1.

5. (10 points) Let f(x, y) = x²y³.
(a) (4 points) Find the partial derivatives ∂f/∂x and ∂f/∂y.
(b) (6 points) Find the points on the graph of z = f(x, y) at which the vector n = ⟨4, −3, 4⟩ is normal to the tangent plane.
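For problem 3 above, the limit is 0: in polar coordinates f = r² cos²θ sin²θ, which is at most r²/4. A quick numeric sanity check of that bound (an illustration, not a proof; the helper names are mine):

```python
import math

def f(x, y):
    """The function from problem 3: f(x, y) = x^2 y^2 / (x^2 + y^2)."""
    return (x * x) * (y * y) / (x * x + y * y)

def max_on_circle(r, samples=360):
    """Largest |f| sampled on the circle of radius r around the origin."""
    return max(
        abs(f(r * math.cos(2 * math.pi * k / samples),
              r * math.sin(2 * math.pi * k / samples)))
        for k in range(samples)
    )
```

Since cos²θ sin²θ ≤ 1/4, the sampled maximum on a circle of radius r never exceeds r²/4, and it shrinks to 0 as r → 0 — consistent with the limit being 0.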
{"url":"https://www.studypool.com/discuss/38392016/tuesday-nov2-from-8-00-9-20","timestamp":"2024-11-06T21:31:48Z","content_type":"text/html","content_length":"280341","record_id":"<urn:uuid:7a859d9f-d7b7-4e42-8a38-714457d3e206>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00331.warc.gz"}
Angular Momentum Calculator - Savvy Calculator

About Angular Momentum Calculator (Formula)

Angular momentum is a fundamental concept in physics, representing the quantity of rotation of an object. It's an essential parameter in understanding the dynamics of rotating systems, from spinning wheels to orbiting planets. The Angular Momentum Calculator helps you determine this important value using the object's mass, radius, and velocity.

The formula to calculate angular momentum is:

Angular Momentum = mass * radius * velocity

• Mass is the object's mass (in kilograms).
• Radius is the distance from the axis of rotation to the object (in meters).
• Velocity is the tangential velocity of the object (in meters per second).

How to Use

Using the Angular Momentum Calculator is straightforward:
1. Enter Mass: Input the mass of the object in kilograms.
2. Enter Radius: Provide the radius or distance from the axis of rotation in meters.
3. Enter Velocity: Input the tangential velocity of the object in meters per second.
4. Calculate: The calculator will instantly provide the angular momentum based on the inputs.

Suppose you have a rotating object with the following parameters:
• Mass: 5 kg
• Radius: 2 m
• Velocity: 10 m/s

Using the formula:
Angular Momentum = 5 kg * 2 m * 10 m/s = 100 kg·m²/s
The angular momentum of the object is 100 kg·m²/s.

1. What is angular momentum? Angular momentum is the rotational equivalent of linear momentum, representing the momentum of a rotating object.
2. Why is angular momentum important? Angular momentum is conserved in isolated systems, making it a crucial concept in analyzing rotational dynamics and understanding physical systems.
3. What are the units of angular momentum? Angular momentum is typically measured in kilogram meter squared per second (kg·m²/s).
4. How does mass affect angular momentum? Greater mass increases angular momentum, assuming the radius and velocity are constant.
5.
What is the relationship between radius and angular momentum? Angular momentum increases with the radius; the further the mass is from the axis of rotation, the greater the angular momentum. 6. How does velocity influence angular momentum? Higher velocity leads to greater angular momentum, assuming mass and radius remain constant. 7. Is angular momentum conserved? Yes, in an isolated system without external torques, angular momentum is conserved. 8. Can angular momentum be negative? Yes, angular momentum can be negative, depending on the direction of rotation relative to a chosen reference axis. 9. What happens to angular momentum if the radius decreases? If the radius decreases and no external torque acts, the velocity must increase to conserve angular momentum, as seen in a figure skater pulling in their arms during a spin. 10. What is the difference between linear and angular momentum? Linear momentum deals with straight-line motion, while angular momentum concerns rotational motion. 11. How is angular momentum related to torque? Torque is the rate of change of angular momentum; a greater torque results in a faster change in angular momentum. 12. What are some practical applications of angular momentum? Angular momentum is used in analyzing the motion of celestial bodies, designing rotating machinery, and understanding the behavior of subatomic particles. 13. How does angular momentum apply to planetary motion? Planets conserve angular momentum as they orbit the sun, meaning their speed changes as their distance from the sun changes. 14. What is the role of angular momentum in quantum mechanics? In quantum mechanics, angular momentum is quantized and plays a key role in determining the behavior of particles in atoms. 15. Can an object have zero angular momentum? Yes, if either the mass, velocity, or distance from the axis is zero, the angular momentum will also be zero. 16. How does the direction of rotation affect angular momentum? 
The direction of rotation determines the sign of the angular momentum vector, which can be positive or negative depending on the reference frame.
17. What is the significance of the moment of inertia in angular momentum? The moment of inertia is the rotational equivalent of mass in linear motion and plays a crucial role in calculating angular momentum, especially for extended bodies.
18. How do you calculate angular momentum for a system of particles? For a system of particles, the total angular momentum is the sum of the angular momenta of all individual particles.
19. What happens to angular momentum in a collision? In the absence of external torques, angular momentum is conserved during collisions, just like linear momentum.
20. Why is angular momentum important in sports? Understanding angular momentum is crucial in sports like gymnastics, figure skating, and diving, where athletes control their rotation to achieve desired movements.

The Angular Momentum Calculator is an essential tool for students, engineers, and scientists working with rotational dynamics. By understanding and calculating angular momentum, you can analyze the behavior of rotating objects, whether in everyday applications or complex physical systems. This calculator simplifies the process, providing quick and accurate results for various scenarios.
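The formula Angular Momentum = mass * radius * velocity from the page is a one-liner in code. A minimal sketch in plain Python (the function name is mine):

```python
def angular_momentum(mass_kg, radius_m, speed_m_s):
    """Angular momentum of a point mass: L = m * r * v, in kg·m²/s.

    Assumes the velocity is tangential, as in the calculator above.
    """
    return mass_kg * radius_m * speed_m_s
```

It reproduces the worked example: angular_momentum(5, 2, 10) gives 100 kg·m²/s.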
{"url":"https://savvycalculator.com/angular-momentum-calculator","timestamp":"2024-11-08T09:38:09Z","content_type":"text/html","content_length":"147491","record_id":"<urn:uuid:b7f59609-7fa8-4525-a64f-9de7214ec044>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00001.warc.gz"}
Bolus dispersion phenomena affect the residue function computed via deconvolution of DSC-MRI data. Indeed, the obtained effective residue function can be expressed as the convolution of the true one with a Vascular Transport Function (VTF) that characterizes the dispersion. The state-of-the-art technique CPI+VTF makes it possible to estimate the actual residue function by assuming a model for the VTF. We propose to perform deconvolution representing the effective residue function with Dispersion-Compliant Bases (DCB) without assumptions on the VTF, and then apply the CPI+VTF on DCB results. We show that DCB improve robustness to noise and allow better characterization of the VTF. To improve robustness of dispersion kernel characterization in DSC-MRI by means of Dispersion-Compliant Bases. The residual amount of tracer, i.e., the residue function $$$R(t)$$$ computed from deconvolution of the measured arterial $$$C_a(t)$$$ and tissular $$$C_{ts}(t)$$$ concentrations, characterizes the tissue perfusion. However, the actual arterial concentration may undergo dispersion. This causes the effective residue function to reflect additional vascular properties mathematically described by the convolution $$$R^{eff}(t)=R(t)\otimes{VTF(t)}$$$ where VTF is the Vascular Transport Function^1,2. This severely affects the estimation of hemodynamic parameters^3 such as the blood flow $$$BF$$$, corresponding to the peak of $$$R(t)$$$, and the mean transit time $$$MTT=BV/BF$$$ with $$$BV$$$ the blood volume. Indeed, only effective parameters $$$BF^{eff}$$$ (peak of $$$R^{eff}(t)$$$) and $$$MTT^{eff}$$$ are computed. A recent state-of-the-art technique^4 based on control point interpolation, CPI+VTF, makes it possible to recover the actual $$$R(t)$$$ assuming that it is convolved with a VTF described by a Gamma Dispersion Kernel (GDK) $$VTF(t,s,p)=GDK(t,s,p)=\frac{s^{1+sp}}{\Gamma(1+sp)}t^{sp}e^{-st}$$ where $$$s,p$$$ are unknown. This allows the estimation of the actual $$$BF$$$^4.
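The Gamma Dispersion Kernel defined above is a normalized gamma density (shape 1 + sp, rate s), so it should integrate to 1 over t ≥ 0. A quick numeric sketch of the kernel and that check (plain Python; the parameter choices and helper names are mine):

```python
import math

def gdk(t, s, p):
    """Gamma Dispersion Kernel: s**(1+s*p) / Gamma(1+s*p) * t**(s*p) * exp(-s*t)."""
    sp = s * p
    return s ** (1 + sp) / math.gamma(1 + sp) * t ** sp * math.exp(-s * t)

def gdk_integral(s, p, t_max=100.0, n=100000):
    """Trapezoidal approximation of the kernel's integral over [0, t_max]."""
    h = t_max / n
    total = 0.5 * (gdk(0.0, s, p) + gdk(t_max, s, p))
    total += sum(gdk(k * h, s, p) for k in range(1, n))
    return total * h
```

For s = p = 1 the kernel reduces to t·e^(−t), whose integral over [0, ∞) is 1; the trapezoidal sum recovers this to high accuracy, reflecting the fact that the VTF redistributes, rather than creates or destroys, tracer.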
The estimation of $$$s,p,BF$$$ is not an easy task and requires a non-linear optimization routine whose results are sensitive to noise. We propose to improve robustness and precision of some of the estimates by performing deconvolution with Dispersion-Compliant Bases^5 (DCB), and subsequently fitting the CPI+VTF^4 model to the obtained effective residue function. We perform DCB deconvolution representing $$$R^{eff}(t)$$$ on a sampling grid $$$t_1,t_2,..,t_M$$$ as^5 $$R_{DCB}(t) = \Theta(t-\tau) \sum_{n=1}^{N} [a_n + b_n (t-\tau)] e^{-\alpha_n (t-\tau)}$$ with order $$$N$$$ ($$$6$$$ here), $$$\tau,a_n,b_n$$$ unknown and $$$\alpha_n$$$ predefined^5. The solution was constrained via quadratic programming to $$$R(t_m)\ge0\forall{t_m\in[t_1,t_{M-1}]}$$$.

The CPI+VTF deconvolution technique was implemented as in the literature^4 with 12 control points and initial parameters $$$p,s$$$ for the optimization routine of $$$log2\pm2$$$ ($$$mean\pm{SD}$$$). In order to decouple the influence of the estimation framework from the model, the estimation was performed non-linearly, bounding parameters to $$$mean\pm3SD$$$. The CPI+VTF model was also fitted on the effective residue function computed with DCB by minimizing $$$||R_{DCB}(t)-{[CPI+VTF]}_{model}||^2$$$ over the control points, time-instants separations, and parameters $$$BF,s,p$$$^4.

We performed synthetic experiments generating $$$C_a(t)$$$ in $$$[0:1:90]s$$$ as a gamma-variate function^1. The tissular concentration $$$C_{ts}(t)$$$ was generated as $$$C_{ts}(t)=C_a\otimes[R\otimes {VTF}(t)](t)$$$ with bi-exponential^5 $$$R(t)$$$. Three ground-truth VTF models were used^4: gamma (GDK), exponential, and log-normal. For each, three dispersion levels were tested: low, medium, high^4. A total of 100 repetitions were generated for each combination of dispersion kernel, level, $$$BF\in[5:10:65]ml/100g/min$$$, $$$MTT\in[2:4:18]s$$$, and $$$delay\in[0,5]s$$$^4, with noise added^5 with $$$SNR=50$$$^4.
For each repetition, DCB and CPI+VTF deconvolutions were performed, as well as the CPI+VTF model fitting on $$$R_{DCB}(t)$$$, henceforth DCB+VTF. We then proceed with the following experiments: 1. we compare DCB^5, CPI+VTF^4 and oSVD^6 deconvolutions and calculate the relative errors of the recovered effective parameters $$$BF^{eff}$$$ (Fig. 1), $$$MTT^{eff}=BV/BF^{eff}$$$ (Fig. 2), and the time-to-maximum $$$T2MAX$$$ of $$$R^{eff}$$$ (Fig. 3); comparisons are performed on all of the ground-truth dispersion kernels (left images) and just on the GDK (right images); 2. we compare estimates of $$$p,s,BF$$$ obtained with CPI+VTF and DCB+VTF in the case of the GDK (Fig. 4); 3. we apply DCB+VTF to stroke MRI data and show maps of $$$p,s,BF$$$ (Fig. 5).

Results in Figs. 1, 2, 3-left show that DCB-based estimates of $$$BF^{eff},MTT^{eff},T2MAX$$$ have considerably lower relative error than those obtained with CPI+VTF and oSVD. When the ground-truth kernel is GDK (right columns), CPI+VTF improves considerably. Still, DCB perform comparably or better. DCB generally reduce errors and their variability. In addition, DCB results appear more stable than those of CPI+VTF with respect to the ground-truth kernel. Results in Fig. 4 show that CPI+VTF and DCB+VTF render similar results for $$$BF,p$$$, but DCB+VTF markedly improves $$$s$$$ estimates at medium and high dispersion levels. Maps in Fig. 5 clearly depict the infarcted area, which is especially highlighted in the $$$s$$$-map.

The use of DCB deconvolution renders a better estimation of the effective hemodynamic parameters. The DCB do not assume any model for the dispersion kernel (VTF) and can handle the exponential and log-normal kernels better than CPI+VTF. The use of these functional bases improves robustness to noise and reduces variability in the results (Figs. 1-3).
This leads to a marked improvement also when the CPI+VTF technique is applied directly on the DCB results (DCB+VTF), specifically in the estimation of the $$$s$$$ parameter of the gamma kernel (Fig. 4). The same parameter proves very effective in delimiting the infarcted area in the results (Fig. 5).

Perfusion deconvolution of DSC-MRI data by means of Dispersion-Compliant Bases (DCB) provides more robust results in quantifying the effective residue function and improves subsequent VTF assessments. This allows a better characterization of dispersion phenomena and a consequent deeper understanding of the vascular dynamics.

We thank Olea Medical and the PACA Regional Council for providing grant and support.

1. Calamante et al., "Delay and dispersion effects in dynamic susceptibility contrast MRI: simulations using singular value decomposition," Magn Reson Med, vol. 44(3), pp. 466–473, 2000.
2. Calamante et al., "Estimation of bolus dispersion effects in perfusion MRI using image-based computational fluid dynamics," NeuroImage, vol. 19, pp. 341–353, 2003.
3. Willats et al., "Improved deconvolution of perfusion MRI data in the presence of bolus delay and dispersion," Magn Reson Med, vol. 56, pp. 146–156, 2006.
4. Mehndiratta et al., "Modeling and correction of bolus dispersion effects in dynamic susceptibility contrast MRI: Dispersion correction with CPI in DSC-MRI," Magn Reson Med, vol. 72, pp. 1762–1774.
5. Pizzolato et al., "Perfusion MRI deconvolution with delay estimation and non-negativity constraints," in 12th International Symposium on Biomedical Imaging (ISBI). IEEE, 2015, pp. 1073–1076.
6. Wu et al., "Tracer arrival timing-insensitive technique for estimating flow in MR perfusion-weighted imaging using singular value decomposition with a block-circulant deconvolution matrix," Magn Reson Med, vol. 50(1), pp. 164–174, 2003.
Current-Controlled Current Sources
Next: Current-Controlled Voltage Sources Up: Dependent Sources Previous: Voltage-Controlled Voltage Sources Contents Index

This is a special case of the general source specification included for backward compatibility.

General Form:
  fname n+ n- vnam expr srcargs
  fname n+ n- function | cur [=] expr srcargs
  fname n+ n- poly poly_spec srcargs
where srcargs = [ac table(name)]

Examples:
  f1 13 5 vsens 5
  f2 13 5 1-x*x ac table(acdata)
  f3 13 5 function 1-i(vsens)*i(vsens)

The n+ and n- are the positive and negative nodes, respectively. Current flow is from the positive node, through the source, to the negative node. The parameter vnam is the name of a voltage source or inductor through which the controlling current flows. If vnam refers to a voltage source, the direction of positive controlling current flow is from the positive node, through the source, to the negative node. If vnam names an inductor, the current flow is from the first node specified for the inductor, through the inductor, to the second node. In the first form, if the expr is a constant, it represents the linear current gain. If no expression is given, a unit constant value is assumed. Otherwise, the expr computes the source current, where the variable ``x'', if used in the expr, is taken to be the controlling current (i(vnam)). In this case only, the pwl construct, if used in the expr, takes as its input variable the value of ``x'' rather than time; thus a piecewise linear transfer function can be implemented using a pwl statement. The second form is similar, but ``x'' is not defined. The keywords ``function'' and ``cur'' are equivalent. The third form allows use of the SPICE2 poly construct. More information on the function specification can be found in 2.15, and the poly specification is described in 2.15.2.
If the ac parameter is given and the table keyword follows, then the named table is taken to contain complex transfer coefficient data, which will be used in ac analysis (and possibly elsewhere, see below). For each frequency, the source output will be the interpolated transfer coefficient from the table multiplied by the input. The table must be specified with a .table line, and must have the ac keyword given. If an ac table is specified, and no dc/transient transfer function or coefficient is given, then in transient analysis the source transfer will be obtained through Fourier analysis of the table data. This is somewhat experimental, and may be prone to numerical errors. In ac analysis, the transfer coefficient can be real or complex. If complex, the imaginary value follows the real value. Only constants or constant expressions are valid in this case. If the source function is specified in this way, the real component is used in dc and transient analysis. This will also override a table, if given.

Stephen R. Whiteley 2024-10-26
How to calculate the annual solar energy output of a photovoltaic system?

Here you will learn how to calculate the annual energy output of a photovoltaic solar installation. The global formula to estimate the electricity generated at the output of a photovoltaic system is:

E = A * r * H * PR

E = Energy (kWh)
A = Total solar panel Area (m2)
r = solar panel yield or efficiency (%)
H = Annual average solar radiation on tilted panels (shadings not included)
PR = Performance ratio, coefficient for losses (range between 0.5 and 0.9, default value = 0.75)

r is the yield of the solar panel, given by the ratio: electrical power (in kWp) of one solar panel divided by the area of one panel. Example: the solar panel yield of a PV module of 250 Wp with an area of 1.6 m2 is 15.6%. Be aware that this nominal ratio is given for standard test conditions (STC): radiation = 1000 W/m2, cell temperature = 25 degrees Celsius, wind speed = 1 m/s, AM = 1.5. The unit of the nominal power of the photovoltaic panel in these conditions is called "watt-peak" (Wp, or kWp = 1000 Wp, or MWp = 1000000 Wp).

H is the annual average solar radiation on tilted panels, between 200 kWh/m2.y (Norway) and 2600 kWh/m2.y (Saudi Arabia). You can find this global radiation value here: Solar radiation databases. You have to find the global annual radiation incident on your PV panels with your specific inclination (slope, tilt) and orientation (azimuth).

PR (Performance Ratio) is a very important value to evaluate the quality of a photovoltaic installation, because it gives the performance of the installation independently of the orientation and inclination of the panel. It includes all losses. Examples of the detailed losses that make up the PR value (depending on the site, the technology, and the sizing of the system):

- Inverter losses (4% to 10%)
- Temperature losses (5% to 20%)
- DC cable losses (1% to 3%)
- AC cable losses (1% to 3%)
- Shading losses: 0% to 80% (specific to each site)
- Losses at weak radiation: 3% to 7%
- Losses due to dust, snow, etc. (2%)
- Other losses

Download: Excel file to compute the annual solar electrical energy output of a photovoltaic system.

Of course, in order to simulate the energy production of a PV system with better accuracy, and to get monthly, hourly or instantaneous electric values, you have to use the tools and software listed here: PV Softwares and calculators.
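As a worked example of E = A * r * H * PR, here is a small sketch in Python (the panel count, site radiation H and the performance ratio are hypothetical values, not data from the article):

```python
# E = A * r * H * PR, with hypothetical inputs (panel count, site radiation
# and performance ratio below are assumptions, not data from the article).

def annual_energy_kwh(area_m2: float, panel_yield: float,
                      radiation_kwh_m2y: float, performance_ratio: float = 0.75) -> float:
    """Annual electricity output E (kWh) of a PV system."""
    return area_m2 * panel_yield * radiation_kwh_m2y * performance_ratio

# Ten 250 Wp panels of 1.6 m2 each; yield r = 0.250 kWp / 1.6 m2 = 15.625 %
area = 10 * 1.6                      # m2
r = 0.250 / 1.6                      # 0.15625
H = 1200.0                           # kWh/m2.year, hypothetical site value
E = annual_energy_kwh(area, r, H)    # PR left at the default 0.75
print(round(E, 1))                   # 2250.0 kWh/year
```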
GDS and how to find similarities using GDS - CruzeTechs - Data Science & Cloud

Graph Data Science

In Neo4j, GDS (Graph Data Science) is a collection of tools and procedures for performing graph analytics using Cypher, the query language of the Neo4j graph database. It includes algorithms for graph traversal, pathfinding, centrality, community detection, and more. GDS is designed to make it easy to incorporate graph analytics into your applications and processes, whether you are building a recommendation engine, analyzing social networks, or trying to detect fraud or other patterns in your data. It is built on top of the Neo4j graph database, which is a powerful and scalable platform for storing, querying, and analyzing connected data. Some of the key features of GDS include:

• A library of procedures and functions for performing various types of graph analytics, including centrality measures, community detection, and similarity calculations.
• Integration with the Cypher query language, which makes it easy to express complex graph analytics tasks in a simple and intuitive way.
• Scalability and performance, with the ability to handle large graphs and run complex analytics tasks quickly.
• Support for real-time analytics, with the ability to stream data into the graph and execute continuous queries.
• A plugin architecture that allows you to extend GDS with custom procedures and functions.

Overall, GDS is a powerful toolkit for anyone looking to perform advanced analytics on graph data, and it is an important part of the Neo4j ecosystem.

What is similarity between nodes?

In the context of a graph, the similarity between two nodes refers to how closely related or connected the two nodes are. There are many different ways to measure similarity, and the specific method used can depend on the type of data represented by the nodes and the type of relationship between the nodes.
Some common methods for calculating node similarity include cosine similarity, Jaccard similarity, and the Pearson correlation coefficient.

Cosine similarity is a measure of similarity between two vectors, and it can be used to compare the characteristics or attributes of two nodes in a graph. It is calculated by taking the dot product of the vectors and dividing it by the product of the magnitudes of the vectors.

Jaccard similarity is a measure of the overlap or commonality between two sets. In a graph, it can be used to compare the sets of neighbors of two nodes, for example. It is calculated by dividing the size of the intersection of the two sets by the size of the union of the two sets.

The Pearson correlation coefficient is a measure of the linear relationship between two variables. In a graph, it can be used to compare series of values attached to two nodes. It is calculated by dividing the covariance of the two variables by the product of their standard deviations.

Here is how we can find similarities between two nodes using each of these methods (the examples follow the style of the Neo4j website):

Cosine similarity

The GDS cosine function operates on two lists of numbers, so assume each node stores its attributes in a list property (here called vector). To calculate the cosine similarity between two nodes n1 and n2, you can use the following Cypher query:

MATCH (n1:Node {id: $id1}), (n2:Node {id: $id2})
WITH gds.alpha.similarity.cosine(n1.vector, n2.vector) AS similarity
RETURN similarity

This query will match the two nodes with the given id values and pass their vector properties to the gds.alpha.similarity.cosine function, which will calculate the cosine similarity between them. The similarity value returned by the function will be a float, with higher values indicating greater similarity between the two nodes (for non-negative vectors it lies between 0 and 1).
Jaccard similarity

To calculate the Jaccard similarity between two nodes n1 and n2, you can use the following Cypher query (note that the division must be done in floating point, and that union is a reserved word in Cypher, so it cannot be used as an alias):

MATCH (n1:Node {id: $id1}), (n2:Node {id: $id2})
WITH [(n1)-[:CONNECTED_TO]-(x) | x] AS s1, [(n2)-[:CONNECTED_TO]-(x) | x] AS s2
WITH size([x IN s1 WHERE x IN s2]) AS intersection, size(s1) + size(s2) AS total
RETURN toFloat(intersection) / (total - intersection) AS similarity

This query collects the sets of neighbors of the two nodes, then uses the size function to calculate the sizes of the intersection and of the union of the two sets. The Jaccard similarity is then calculated by dividing the size of the intersection by the size of the union. The similarity value returned by the query will be a float between 0 and 1, with higher values indicating greater similarity between the two nodes.

Pearson correlation coefficient

The Pearson correlation coefficient compares two lists of values, so each node needs a list-valued property (here called attribute, as in the original example). To calculate it between two nodes n1 and n2, you can use the following Cypher query:

MATCH (n1:Node {id: $id1}), (n2:Node {id: $id2})
WITH n1.attribute AS x, n2.attribute AS y
RETURN gds.alpha.similarity.pearson(x, y) AS similarity

This query will match the two nodes and extract the values of the attribute property for each node. It will then pass these lists to the gds.alpha.similarity.pearson function, which will calculate the Pearson correlation coefficient between them. The similarity value returned by the function will be a float between -1 and 1, with values near 1 indicating a strong positive correlation and values near -1 a strong negative correlation.

For any kind of help with the Graph database and graph data science project, please contact us.
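For readers who want to sanity-check the Cypher results, the three measures can be implemented in a few lines of plain Python (illustrative reference implementations, not the GDS library code):

```python
import math

# Plain-Python reference implementations of the three similarity measures
# described above (illustrative only; not the GDS library code).

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def jaccard(s1, s2):
    s1, s2 = set(s1), set(s2)
    return len(s1 & s2) / len(s1 | s2)

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(cosine([1, 2], [2, 4]))         # same direction, so ~1.0
print(jaccard({1, 2, 3}, {2, 3, 4}))  # 2 shared of 4 total -> 0.5
print(pearson([1, 2, 3], [2, 4, 6]))  # perfectly linear, so ~1.0
```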
Force involving crumple zone

I am trying to figure out, if 2 bikes collided, one weighing 65 kg and the other 75 kg, with a crumple zone of 0.5 m, and came to a complete rest before separating, what would be the force of each and what formula should I use? Also, what would be the combined force? Appreciate any help, thank you.

Do you want to assume that except for weight each is an exact mirror image of the other? Will the crumple be even or will it vary? No riders? If riders, then they and their differences need to be considered. There is a reason they use crash tests. Computations are just too limited and not very lifelike.

Motorcycles have "crumple zones"?

Take your best shot at what formula you think should be used; you must have some idea.

Well, I'm not real clear on exactly what you're after. What kind of a collision is involved: head on, rear end or broadside? Using the conservation of momentum formulas one can deduce the pre-crash speeds of both vehicles. There is a transfer of momentum at impact. Are you looking for the speed before the collision? There is no simple equation for this. Check out this website for a better understanding of determining your answer. http://spiff.rit.edu/classes/phys311.old/lectures/impulse/impulse.html

In general, the velocities of both bikes will determine the linear momentum of each. At the point of collision, a large impulse will be applied to each to reduce their forward momentum. The compression of the 'crumple zone' will increase the duration of the impulse by a fraction, which reduces the maximum force applied by a fraction. However, this will create an integral equation, since the force applied will change over time as the collision occurs. I'm assuming you're looking for the maximum of the force applied. Typically, a simple approach for one bike would be to calculate the velocity before the collision and after. Multiply each velocity by the total mass (bike & rider) to get the momentum before the collision (Pi) and the momentum after (Pf).
The impulse is the change in momentum (I = Pf - Pi), and impulse is also force times time. The time is the actual time of the collision event. To get the force, you need the actual time of the collision event, from when the bikes first touch to when the bikes separate.

I was asking the OP what he was looking for, trying to determine what level he was at.

Hey fireblade, how about some communication? We are talking to you.

Sorry guys for the delay in responding, I did not realise the site was so quick to respond. I am mightily impressed. To clarify my question: first I was told to watch a short film of a motorcycle accident. There were passengers involved, but the total weight of each bike was 60 kg and 75 kg. Yes, it was a head-on accident and the crumple zone was an even 0.5 metres per bike. This is my first year foundation and I am new to this, but I have looked into first solving the kinetic energy and then using the work-energy formula F_avg x d = 0.5 x m x v^2. Would this be correct? Many thanks to you all so far for your replies.

Forgot to add: the speed of each bike was 13.4112 m/s.

Hello Bustedknuckles. Hope this is not too long winded, but the actual assignment is as follows: given that 2 motorcycles weigh 65 kg and 75 kg and are travelling around 30 mph, have a 0.5 m crumple zone, and come to a complete rest before separating, perform the following:
a. Calculate the force of each bike
b. Calculate the combined force of the two bikes
c. Calculate the change in momentum of each bike
d. Calculate the change in energy of each bike
e. Construct a velocity-time graph of the motion of both bikes
f. Construct an acceleration-time graph of the motion of the 2 bikes
g. Investigate and explain the g-force on the riders of the bikes
h. Give a detailed explanation of why the modelling of momentum and energy in accordance with Newton's laws is not appropriate for this example
Many thanks.

I see two problems using kinetic energy. One, your formula assumes all the kinetic energy is used at the time of collision.
And two, it doesn't take into account the conservation of momentum. If both bikes have different mass but the same speed, one has more momentum than the other. When they collide, there is a transfer of momentum, and the heavier bike will have more. Momentum is a vector, meaning it has magnitude and direction; kinetic energy is a scalar. At the point of collision, the heavier bike should still have momentum, and that means there is additional kinetic energy transferred to the lighter bike. Your simple equation will overestimate for the heavy bike, and underestimate for the lighter one.

There are people who like to play with math and physics who, I'm sure, would love to help you. Go to www.physicsforums.com. Tell 'em CarTalk sent you.

Thanks for that. Just curious if you read the assignment above. Many thanks.

All of my crash reconstruction formulas are not in metric, so you would have to do your own conversions. Velocity, for instance, is figured in feet per second, derived from miles per hour. You will need to convert all velocities and weights. You are indeed wanting to use linear momentum in a head-on or rear-end crash. A broadside crash will use vector momenta, and the on-scene evidence is much more complicated, needing the post-impact departure angles of both vehicles. Most of the time I was interested in determining the vehicles' pre-crash speeds for litigation of criminal and civil trials. You are going more for the physics end of it, but the same formulas will solve some of your assignment. The following site will have most of your information:

Thank you so much all, I will look into the sites provided. Just to say I think I figured out the force of each bike and the change in momentum. I am now stuck on the change of energy of each bike. I have a kinetic energy for bike (a) of 5845.4593 J and bike (b) of 6744.7607 J, but am unsure how to figure the change. I thank you all. PS: sorry for the delays in replying, I am from the United Kingdom, different time zones.
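For reference, the work-energy estimate discussed in this thread can be written out numerically. This is an illustrative back-of-envelope sketch using the thread's numbers, and it ignores the momentum-transfer caveats raised above:

```python
# Work-energy estimate: F_avg * d = 1/2 * m * v^2, using the thread's numbers
# (65 kg and 75 kg bikes at 30 mph = 13.4112 m/s, each crushing through 0.5 m).

def avg_force(mass_kg: float, speed_ms: float, crush_m: float) -> float:
    kinetic_energy = 0.5 * mass_kg * speed_ms ** 2    # joules
    return kinetic_energy / crush_m                   # newtons (average force)

v = 13.4112                       # 30 mph in m/s
d = 0.5                           # crumple distance per bike, metres

f_light = avg_force(65.0, v, d)   # about 11.7 kN
f_heavy = avg_force(75.0, v, d)   # about 13.5 kN

# Constant-deceleration assumption: a = v^2 / (2 d), expressed in g's
g_force = (v ** 2 / (2 * d)) / 9.81   # about 18.3 g on each rider
```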
Free 8th Grade Math Worksheets & Word Problems Generator

Grade 8 Exponents Worksheets

Exponents practice strengthens comprehension of the powers and properties of exponents. It helps students solve equations with variable bases, apply rules for multiplication and division, and evaluate expressions involving exponents, which is essential for higher-level math.

Skills Focused: Understanding exponents, Evaluate powers, Solve equations with variable exponents, Powers with negative bases, Powers with decimal and fractional bases, Understanding negative exponents, Evaluate powers with negative exponents, Evaluate powers with negative or zero exponents, Multiply powers: integer bases, Divide powers: integer bases, Power of a power: integer bases, Evaluate expressions using properties of exponents, Identify equivalent expressions involving exponents, Multiply powers: variable bases, Divide powers: variable bases, Multiply and divide powers, Powers of a power: variable bases, Properties of exponents
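The exponent rules these worksheets drill can be spot-checked in a few lines of Python (a small illustrative script, not part of the worksheet generator):

```python
# Core grade-8 exponent rules, spot-checked on sample values.
a, m, n = 3, 4, 2

assert a**m * a**n == a**(m + n)          # product of powers: 3^4 * 3^2 = 3^6
assert a**m / a**n == a**(m - n)          # quotient of powers: 3^4 / 3^2 = 3^2
assert (a**m)**n == a**(m * n)            # power of a power: (3^4)^2 = 3^8
assert a**0 == 1                          # zero exponent
assert abs(a**-n - 1 / a**n) < 1e-12      # negative exponent: 3^-2 = 1/9
assert (-2)**3 == -8 and (-2)**4 == 16    # powers with negative bases
```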
International Scientific Journal

In this article, we propose a new technique based on 2-D shifted Legendre polynomials, through the operational matrix integration method, to find the numerical solution of the stochastic heat equation with Neumann boundary conditions. For the proposed technique, the convergence criteria and the error estimation are also discussed in detail. The new technique is tested on two examples, and it is observed that the method handles such problems easily, as the initial and boundary conditions are taken care of automatically. The time complexity of the proposed approach is also discussed and is shown to be O(k(N + 1)^4), where N denotes the degree of the approximate function and k is the number of simulations. This method is very convenient and efficient for solving other partial differential equations.

PAPER SUBMITTED: 2022-04-01
PAPER REVISED: 2022-05-24
PAPER ACCEPTED: 2022-06-14
PUBLISHED ONLINE: 2023-04-08
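As background on the basis mentioned in the abstract: the 2-D basis is built from 1-D shifted Legendre polynomials on [0, 1], defined by P~_n(x) = P_n(2x - 1). Below is a minimal sketch of evaluating them and checking their orthogonality (illustrative only; the paper's operational matrix construction is not reproduced here):

```python
import numpy as np
from numpy.polynomial import legendre

# Shifted Legendre polynomial on [0, 1]: P~_n(x) = P_n(2x - 1)
def shifted_legendre(n: int, x):
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                        # select the degree-n Legendre polynomial
    return legendre.legval(2.0 * np.asarray(x) - 1.0, coeffs)

x = np.linspace(0.0, 1.0, 5)
p0 = shifted_legendre(0, x)                # constant 1 on the whole interval
p1 = shifted_legendre(1, x)                # 2x - 1: [-1, -0.5, 0, 0.5, 1]

# Orthogonality on [0, 1]: integral of P~_m * P~_n dx = 0 for m != n
xs = np.linspace(0.0, 1.0, 100001)
inner = np.trapz(shifted_legendre(1, xs) * shifted_legendre(2, xs), xs)
```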
5 Best Ways to Find Fibonacci Series Up to n Using Lambda in Python

Problem Formulation: The Fibonacci series is a sequence where each number is the sum of the two preceding ones, often starting with 0 and 1. Given a value n, the challenge is to generate the Fibonacci series up to the nth element using concise Python code involving lambda functions. For example, input 5 should yield an output of [0, 1, 1, 2, 3].

Method 1: Using a Lambda with the Reduce Function

The reduce function from the functools module allows us to apply a lambda function cumulatively to the items of a sequence, from left to right, thus generating the Fibonacci series. Here's an example (note that it assumes n >= 2, since the seed list already contains the first two terms):

from functools import reduce
fibonacci = lambda n: reduce(lambda x, _: x + [x[-2] + x[-1]], range(n - 2), [0, 1])
print(fibonacci(5))  # [0, 1, 1, 2, 3]

This code snippet sets up a lambda function that applies a cumulative process: starting with [0, 1], it appends the sum of the last two elements, iterated n - 2 times to account for the two initial terms.

Method 2: Using a Lambda with List Comprehension

In this method, we leverage list comprehension to create a new list by applying an operation to each item of a sequentially-ordered range. The original version of this snippet recursively called the list-returning outer function, which concatenated lists instead of adding numbers; the corrected version below recurses through a self-applied scalar lambda inside the comprehension:

fibonacci = lambda n: [(lambda f, k: f(f, k))(lambda rec, k: k if k < 2 else rec(rec, k - 1) + rec(rec, k - 2), x) for x in range(n)]
print(fibonacci(5))  # [0, 1, 1, 2, 3]

This piece of code recursively evaluates the lambda for each index, summing the results for the two preceding indices in the sequence.

Method 3: Using a Recursive Lambda Function

We can define a lambda function that calls itself recursively to compute the Fibonacci numbers. This can be done by setting the lambda function to a variable within the function's environment.
Here's an example (corrected so the series starts from 0; the original seeded the recursion with [1] and produced [1, 1, 2, 3, 5] instead of the stated output):

fibo = (lambda f, n: f(f, n))(lambda rec, n: [0] if n == 1 else [0, 1] if n == 2 else rec(rec, n - 1) + [rec(rec, n - 1)[-1] + rec(rec, n - 1)[-2]], 5)
print(fibo)  # [0, 1, 1, 2, 3]

This code illustrates a higher-order lambda expression that invokes itself, constructing the Fibonacci series as an accumulation of its previous values.

Method 4: Using Lambda in a Generator Expression

For memory efficiency, we drive a lazy iterator with a lambda. It yields Fibonacci numbers one at a time, as required, rather than storing the entire sequence in memory at once. The original snippet referenced an undefined name and did not run; a working lazy version built on itertools.accumulate (whose initial parameter requires Python 3.8+) is:

from itertools import accumulate, islice, repeat
fibonacci = lambda n: [a for a, _ in islice(accumulate(repeat(None), lambda acc, _: (acc[1], acc[0] + acc[1]), initial=(0, 1)), n)]
print(fibonacci(5))  # [0, 1, 1, 2, 3]

This snippet lazily accumulates (current, next) pairs with a lambda, and islice materialises only the first n of them, yielding the next number in the series on demand.

Bonus One-Liner Method 5: Using Lambda with Recursion in a List

For a clever one-liner, we fully exploit the recursive nature of lambda, growing the list of computed values until the sequence is complete. The original version called itself with a list argument and never terminated; a corrected form threads the partial series through a default argument:

fibonacci = lambda n, acc=(0, 1): list(acc)[:n] if len(acc) >= n else fibonacci(n, acc + (acc[-1] + acc[-2],))
print(fibonacci(5))  # [0, 1, 1, 2, 3]

This one-liner extends the accumulator until it contains at least n elements, at which point the recursion halts and the list is returned with the desired Fibonacci sequence.

• Method 1: Reduce With Lambda. Provides a concise, iterative approach. May lack clarity for those unfamiliar with reduce.
• Method 2: List Comprehension With Lambda. Maximizes readability and Pythonic style. Can be inefficient due to repeated recalculation.
• Method 3: Recursive Lambda. Demonstrates the power of self-referential anonymous functions. The complexity may be overkill for simple use cases.
• Method 4: Generator Expression With Lambda. Efficient memory usage. Syntax can be complex and difficult to understand. • Method 5: Recursive Lambda in a List. A one-liner that showcases Python’s ability to handle recursion and lists together. Not the most readable or efficient.
CBSE Class 12th Chemistry Chemical Kinetics Notes - Wisdom TechSavvy Academy

Class 12 Chemical Kinetics: in this post we will study Chemical Kinetics notes and the important topics students need to excel in the exam.

The branch of chemistry that deals with the rates of chemical reactions and their mechanisms is termed chemical kinetics, derived from a Greek word meaning chemical movement.

The combination of two or more reactants to produce a new product is called a reaction.

Elementary reaction: a reaction that occurs in a single step to give the product is called an elementary reaction.

Complex reaction: a reaction that occurs as a result of a sequence of elementary reactions to give the product is called a complex reaction.

Rate of reaction

The rate at which the concentration of a reactant or product participating in a chemical reaction changes is called the rate of reaction.

Rate of reaction = change in concentration / time = (mol/litre)/time

For Reactant (R) -> Product:

Rate ∝ [R], so Rate = k[R], where k is the rate constant (velocity constant).

Let one mole of the reactant A produce one mole of the product B.
At time t1, let [A]1 and [B]1 be the concentrations of A and B. At time t2, let [A]2 and [B]2 be the concentrations of A and B.

Rate of disappearance of A = decrease in concentration of A / time taken = -∆[A]/∆t
Rate of appearance of B = increase in concentration of B / time taken = +∆[B]/∆t

When two or more reactants combine with each other, the molecules of the respective reactants collide to form the product. The number of collisions between the molecules increases with the increase in concentration of the reactants, thereby increasing the rate of reaction.

A + B -> C + D

Here molecules of reactants A and B collide to produce molecules of products C and D. Therefore we can conclude that the rate of reaction is directly proportional to the concentration of the participating reactants:

Rate ∝ [A]^x [B]^y, or Rate = k[A]^x [B]^y

Example: Hg(l) + Cl2(g) -> HgCl2(s)

Rate of reaction = -∆[Hg]/∆t = -∆[Cl2]/∆t = +∆[HgCl2]/∆t

Factors affecting rate of reaction

Nature of reactant: the nature of bonding in the reactants determines the rate of a reaction. Ionic compounds react faster than covalent compounds, due to the energy required in covalent compounds to cleave the existing bonds. Example of a reaction between ionic compounds, the precipitation of AgCl: AgNO3 + NaCl -> AgCl + NaNO3. Reactions between covalent compounds proceed comparatively slowly.

Temperature: the rate of reaction increases with a rise in temperature, due to the increase in average kinetic energy, which in turn increases the number of molecules having energy greater than the threshold energy, and consequently increases the number of effective collisions. As a rule of thumb, the rate of a reaction is doubled (i.e., increased by 100%) with a 10 °C rise in temperature.

Pressure: an increase in partial pressure increases the number of collisions. Therefore, the rate of reactions involving gaseous reactants increases with an increase in partial pressures.
Catalyst: a catalyst increases the rate of reaction by providing an alternative path with lower activation energy (Ea') for the reaction to proceed.

Concentration of reactants: an increase in concentration increases the number of collisions, and of activated collisions, between the reactant molecules. According to collision theory, the rate is directly proportional to the collision frequency. Consequently, the rate of a reaction increases with a rise in the concentration of the reactants.

Surface area: the rate of a reaction increases with an increase in the surface area of a solid reactant.

PROBLEM. For the reaction 2A + B → A2B, the rate = k[A][B]^2 with k = 2.0 x 10^-6 mol^-2 L^2 s^-1. Calculate the initial rate of the reaction when [A] = 0.1 mol L^-1, [B] = 0.2 mol L^-1. Calculate the rate of reaction after [A] is reduced to 0.06 mol L^-1.

SOLUTION. Rate = k[A][B]^2 = (2.0 × 10^-6 mol^-2 L^2 s^-1)(0.1 mol L^-1)(0.2 mol L^-1)^2 = 8.0 × 10^-9 mol L^-1 s^-1

Reduction of [A] from 0.1 mol L^-1 to 0.06 mol L^-1:
The concentration of A reacted = (0.1 - 0.06) mol L^-1 = 0.04 mol L^-1
The concentration of B reacted = 1/2 x 0.04 mol L^-1 = 0.02 mol L^-1
The concentration of B available, [B] = (0.2 - 0.02) mol L^-1 = 0.18 mol L^-1

After reduction of [A] to 0.06 mol L^-1, the rate of the reaction is
Rate = k[A][B]^2 = (2.0 × 10^-6 mol^-2 L^2 s^-1)(0.06 mol L^-1)(0.18 mol L^-1)^2 = 3.89 × 10^-9 mol L^-1 s^-1

Average rate of reaction

The average rate of reaction is the ratio of the change in concentration of reactants (or products) to the time interval over which the change occurs. As the reaction proceeds, the collisions between the molecules of the participating reactants decrease, thereby decreasing the average rate of the reaction. Mathematically,

Average rate of reaction = change in concentration / time = (mol/litre)/time

PROBLEM.
For the reaction R → P, the concentration of a reactant changes from 0.03 M to 0.02 M in 25 minutes. Calculate the average rate of reaction using units of time both in minutes and seconds.

SOLUTION. [R]2 = 0.02 M, [R]1 = 0.03 M, t2 - t1 = 25 minutes
Average rate = -∆[R]/∆t = -([R]2 - [R]1)/(t2 - t1) = -(0.02 - 0.03)/25 = 4 × 10^-4 M min^-1 = 6.67 × 10^-6 M s^-1

PROBLEM. In a reaction, 2A → Products, the concentration of A decreases from 0.5 mol L^-1 to 0.4 mol L^-1 in 10 minutes. Calculate the rate during this interval.

SOLUTION. Rate = -1/2 (∆[A]/∆t) = -1/2 × (0.4 - 0.5)/10 = 0.005 M min^-1 = 5 × 10^-3 M min^-1

Instantaneous rate of reaction

The instantaneous rate of reaction is the rate of change of concentration over a very short time interval:

-d[R]/dt = change in concentration over a short period of time / the short time elapsed = (mol/litre)/time

It can be calculated from the slope of the tangent on a concentration-time graph. For example, the rate of reaction at t = 40 s in such a graph can be calculated as:

Rate of reaction = gradient of the tangent at 40 s = (120 - 70)/(65 - 5) = 50/60 = 0.83 cm^3 s^-1

Rate expression

The representation of the rate of reaction in terms of the concentrations of the reactants is called the rate equation or rate expression. For example, in the reaction

2NO(g) + O2(g) -> 2NO2(g)

the rate expression is given as Rate = k[NO]^2[O2].

Let us consider another reaction:

BrO3^- + 5Br^- + 6H^+ -> 3Br2 + 3H2O

The rate expression for this reaction is Rate = k[BrO3^-][Br^-][H^+]^2.

Order of a reaction

The sum of the powers of the concentrations of the reactants in the rate law expression gives the order of the reaction. Let A + 2B -> C + D be a chemical reaction with rate law R = k[A]^x [B]^y. The order of reaction is defined as the sum of the orders with respect to all the reactants participating in the chemical reaction: order w.r.t. A = x, order w.r.t. B = y, and the overall order of the given reaction = (x + y).
Units of the rate constant: k = rate/[A]^n = (mol L^-1 s^-1)/(mol L^-1)^n
For a 1st order reaction, n = 1: k = rate/[A] = (mol L^-1 s^-1)/(mol L^-1) = s^-1
For a 2nd order reaction, n = 2: k = rate/[A]^2 = (mol L^-1 s^-1)/(mol L^-1)^2 = L mol^-1 s^-1

PROBLEM. From the rate expressions for the following reactions, determine their order of reaction and the dimensions of the rate constants.
(i) 3NO(g) → N2O(g); Rate = k[NO]^2
(ii) H2O2(aq) + 3I^-(aq) + 2H^+ → 2H2O(l) + I3^-; Rate = k[H2O2][I^-]
(iii) CH3CHO(g) → CH4(g) + CO(g); Rate = k[CH3CHO]^3/2
(iv) C2H5Cl(g) → C2H4(g) + HCl(g); Rate = k[C2H5Cl]

SOLUTION.
(i) Rate = k[NO]^2. Order of the reaction = 2. Dimension of k = Rate/[NO]^2 = (mol L^-1 s^-1)/(mol L^-1)^2 = L mol^-1 s^-1
(ii) Rate = k[H2O2][I^-]. Order of the reaction = 2. Dimension of k = Rate/([H2O2][I^-]) = (mol L^-1 s^-1)/((mol L^-1)(mol L^-1)) = L mol^-1 s^-1
(iii) Rate = k[CH3CHO]^3/2. Order of the reaction = 3/2. Dimension of k = Rate/[CH3CHO]^3/2 = (mol L^-1 s^-1)/(mol L^-1)^3/2 = L^1/2 mol^-1/2 s^-1
(iv) Rate = k[C2H5Cl]. Order of the reaction = 1. Dimension of k = Rate/[C2H5Cl] = (mol L^-1 s^-1)/(mol L^-1) = s^-1

PROBLEM. For a reaction A + B → Product, the rate law is given by r = k[A]^1/2[B]^2. What is the order of the reaction?
SOLUTION. The order of the reaction = 1/2 + 2 = 5/2 = 2.5

Zero order reaction
If the rate of a reaction is independent of the concentration of the reactants participating in the reaction, the reaction is of zero order.
A → B
At time t = 0 the concentration of A (reactant) is a and that of B (product) is 0. At time t = t the concentration of A is (a − x) and that of B is x.
−d[A]/dt = k0[A]^0, i.e. dx/dt = k0(a − x)^0 = k0
∫0^x dx = k0 ∫0^t dt
x = k0 t

PROBLEM. The decomposition of NH3 on a platinum surface is a zero order reaction.
What are the rates of production of N2 and H2 if k = 2.5 × 10^-4 mol L^-1 s^-1?

SOLUTION. 2NH3 → N2 + 3H2
Rate of the zero order reaction: −1/2 (d[NH3]/dt) = d[N2]/dt = 1/3 (d[H2]/dt) = k = 2.5 × 10^-4 mol L^-1 s^-1
Rate of production of N2: d[N2]/dt = 2.5 × 10^-4 mol L^-1 s^-1
Rate of production of H2: d[H2]/dt = 3 × 2.5 × 10^-4 mol L^-1 s^-1 = 7.5 × 10^-4 mol L^-1 s^-1

First order reaction
If the rate of a reaction depends on the concentration of a single reactant raised to the first power, it is a first order reaction.
A → B
At time t = 0 the concentration of A (reactant) is a and that of B (product) is 0. At time t = t the concentration of A is (a − x) and that of B is x.
dx/dt ∝ (a − x), i.e. dx/dt = k1(a − x)
∫0^x dx/(a − x) = k1 ∫0^t dt
ln [a/(a − x)] = k1 t
t = (1/k1) ln [a/(a − x)] = (2.303/k1) log [a/(a − x)]
k1 = (2.303/t) log [a/(a − x)]

PROBLEM. A first order reaction has a rate constant of 1.15 × 10^-3 s^-1. How long will 5 g of this reactant take to reduce to 3 g?

SOLUTION. From the question:
Initial amount = 5 g; final amount = 3 g; rate constant = 1.15 × 10^-3 s^-1
For a 1st order reaction, t = (2.303/k) log([R]0/[R]) = (2.303/1.15 × 10^-3) × log(5/3) = (2.303/1.15 × 10^-3) × 0.2219 = 444.38 s ≈ 444 s

Second order reaction
A reaction whose order is equal to two is called a second order reaction: r = k[A]^2 or r = k[A][B].

Pseudo order reaction
A reaction that appears to be of nth order but actually belongs to a different order is called a pseudo order reaction. For example, a pseudo first order reaction is a reaction between two reactants that should be second order, but behaves as first order because one reactant is present in such large excess that its concentration remains effectively constant.
Let A + B → P with Rate = k[A]^1[B]^1; the true order of the reaction is 2.
Consider the reaction CH3Br + OH^- → CH3OH + Br^-. The rate law for this reaction is Rate = k[OH^-][CH3Br]. When OH^- is present in large excess, its concentration is effectively constant, so Rate = k[OH^-][CH3Br] = k′[CH3Br]. As only the concentration of CH3Br changes appreciably during the reaction, the rate depends solely on the change in CH3Br concentration.

Molecularity of a reaction
• Since molecules must collide to bring about a chemical reaction, the number of reacting species (molecules, atoms or ions) that collide simultaneously in an elementary reaction is called the molecularity of the reaction.
• The molecularity of a reaction is always a positive whole number.
• Consider the reaction A + 2B → C + D. Here the reactants are 1 molecule of A and 2 molecules of B; the products are 1 molecule of C and 1 molecule of D. If elementary, this reaction is trimolecular.
• A reaction with molecularity 1 is called unimolecular. Example: PCl5 → PCl3 + Cl2
• A reaction with molecularity 2 is called bimolecular. Example: Cl + CH4 → HCl + CH3
• A reaction with molecularity 3 is called trimolecular. Example: 2FeCl3 + SnCl2 → 2FeCl2 + SnCl4
• Molecularity is a theoretical quantity; it does not determine the rate of reaction, nor does it depend on external factors such as temperature or pressure.

Integrated rate equation
Consider the reaction aA + bB → cC + dD with Rate = k[A]^x[B]^y. Then −d[R]/dt = k[A]^x[B]^y, where −d[R]/dt is the instantaneous rate of disappearance of the reactant.
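The pseudo-first-order behaviour described earlier in this section can be illustrated numerically. A minimal sketch — the rate constant and concentrations below are illustrative assumptions:

```python
import math

# With [B] (e.g. OH-) in large excess, a second-order reaction
# rate = k2[A][B] behaves as first order with k' = k2 * [B].

k2 = 1.0e-2           # second-order rate constant, L mol^-1 s^-1 (illustrative)
b_excess = 1.0        # excess reactant concentration, mol/L (effectively constant)
k_prime = k2 * b_excess   # effective (pseudo) first-order constant, s^-1

# With [B] constant, [A](t) = [A]0 * exp(-k' t): a first-order decay.
a0 = 1.0e-3
for t in (0, 100, 200):
    print(t, a0 * math.exp(-k_prime * t))
```

The printed concentrations fall by the same factor over each equal time interval, which is the signature of first-order kinetics even though the underlying reaction is bimolecular.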
Integrated rate equation for a zero order reaction
−d[R]/dt = k[R]^0 = k
d[R] = −k dt; ∫ d[R] = −k ∫ dt
[R] = −kt + I
At t = 0, [R] = [R]0, so I = [R]0. The equation becomes [R] = −kt + [R]0: a plot of [R] against t is a straight line with slope −k and intercept [R]0.

Integrated rate equation for a first order reaction
Rate = −d[R]/dt = k[R]
∫ d[R]/[R] = −k ∫ dt
ln [R] = −kt + I
At t = 0, [R] = [R]0, so I = ln [R]0. The equation becomes ln [R] = −kt + ln [R]0, i.e. ln([R]0/[R]) = kt.

Rate law
• The representation of the rate of reaction in terms of the molar concentrations of the reactants, each raised to some power, is called the rate law. It is also called the rate expression or rate equation.
• Consider the reaction 2NO(g) + O2(g) → 2NO2(g). Experimentally, Rate ∝ [NO]^2 and Rate ∝ [O2]; combining these, Rate ∝ [NO]^2[O2], so Rate = k[NO]^2[O2].
• Here k is the proportionality constant, with a definite value at a given temperature for a given reaction, called the rate constant.
• Rate law expression: −d[R]/dt = k[NO]^2[O2]

Half-life of a reaction
The time taken for the concentration of a reactant to fall to one half of its initial value is called the half-life of the reaction, represented by t1/2.

Half-life for zero order reactions:
A → B; at time t the concentration of A is (a − x), with x = k0 t.
At t = t1/2, x = a/2, so a/2 = k0 t1/2, giving t1/2 = a/(2k0).

Half-life for first order reactions:
t = (2.303/k1) log [a/(a − x)]
For the half-life, x = a/2 and t = t1/2:
t1/2 = (2.303/k1) log [a/(a − a/2)] = (2.303/k1) log 2 = (2.303/k1) × 0.3010 = 0.693/k1

Rate determining step
• The slowest step in a reaction mechanism, which determines the overall rate of the reaction, is called the rate determining step.
• Consider the reaction NO2(g) + CO(g) → NO(g) + CO2(g). The elementary steps of the reaction are:
Step 1: NO2 + NO2 → NO + NO3 (rate constant k1, slow)
Step 2: NO3 + CO → NO2 + CO2 (rate constant k2, fast)
• As the first step is the slowest, it determines the rate of the overall reaction. Step 1 is therefore the rate determining step, and the rate expression for the reaction is the product of the rate constant and the reactant concentrations of this step:
Rate = k1[NO2][NO2] = k1[NO2]^2

Activation energy
• The minimum quantity of external energy required to convert reactants into products, or to produce an unstable intermediate (activated complex), is called the activation energy, denoted Ea.
• The rate of a reaction decreases as the activation energy increases: a greater activation energy means a lower rate of reaction and a stronger influence of temperature on the rate constant.

PROBLEM. The rate of a chemical reaction doubles for an increase of 10 K in absolute temperature from 298 K. Calculate Ea.

SOLUTION. T1 = 298 K, so T2 = (298 + 10) K = 308 K. Since the rate doubles when the temperature increases by 10 K, take k1 = k and k2 = 2k. Also R = 8.314 J K^-1 mol^-1. Substituting these values into
log(k2/k1) = [Ea/(2.303R)] × [(T2 − T1)/(T1 T2)]
log(2k/k) = [Ea/(2.303 × 8.314)] × [10/(298 × 308)]
Ea = (2.303 × 8.314 × 298 × 308 × log 2)/10 = 52897.78 J mol^-1 = 52.9 kJ mol^-1

Arrhenius equation
• The equation used to calculate the activation energy and to describe the effect of temperature on the rate of reaction is the Arrhenius equation:
k = A e^(−Ea/RT)
where k is the rate constant, A the frequency factor, e the base of natural logarithms, Ea the activation energy, R the gas constant and T the absolute temperature.
Taking logarithms: ln k = ln A − Ea/(RT), or equivalently log k = log A − Ea/(2.303RT).
A plot of ln k against 1/T is a straight line with slope −Ea/R.
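The worked example above (rate doubling between 298 K and 308 K) can be reproduced in a few lines:

```python
import math

# Two-temperature form used in the solution above:
# log(k2/k1) = Ea/(2.303 R) * (T2 - T1)/(T1 * T2), rearranged for Ea.

def activation_energy(k_ratio, t1, t2, r_gas=8.314):
    """Ea in J/mol from the ratio k2/k1 measured at temperatures t1, t2 (K)."""
    return 2.303 * r_gas * math.log10(k_ratio) * t1 * t2 / (t2 - t1)

ea = activation_energy(2.0, 298.0, 308.0)
print(f"Ea = {ea / 1000:.1f} kJ/mol")   # ≈ 52.9 kJ/mol, matching the solution
```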
• When Ea = 0, or as T → ∞, k = A e^0 = A. The factor e^(−Ea/RT) is the Boltzmann factor: the fraction of molecules with kinetic energy greater than Ea.
• For many chemical reactions the rate constant approximately doubles for a rise of 10° in temperature, in accordance with the Arrhenius equation k = A e^(−Ea/RT).
Taking logarithms of both sides: ln k = ln A − Ea/(RT).
Comparing with the equation of a straight line y = mx + c (m = slope, c = y-intercept):
y = ln k, x = 1/T, m = −Ea/R, c = ln A
So plotting ln k against 1/T gives a straight line.

PROBLEM. Find the activation energy (in kJ/mol) of a reaction if the rate constant is 3.4 M^-1 s^-1 at 600 K and 31.0 M^-1 s^-1 at 750 K.

SOLUTION. Writing ln k = ln A − Ea/(RT) at both temperatures and subtracting eliminates ln A:
ln(k2/k1) = (Ea/R)(1/T1 − 1/T2)
ln(31.0/3.4) = 2.21; 1/600 − 1/750 = 3.33 × 10^-4 K^-1
Ea = (8.314 × 2.21)/(3.33 × 10^-4) ≈ 5.51 × 10^4 J/mol ≈ 55.1 kJ/mol

Collision theory
• According to collision theory, molecules must collide with sufficient kinetic energy to bring about a chemical reaction. The molecules of the reacting species move through space in rectilinear motion between collisions.
• The rate of a chemical reaction is proportional to the number of collisions between the molecules of the reacting species, and the colliding molecules must be properly oriented.
• Rate of successful collisions ∝ fraction of successful collisions × overall collision frequency.
• The number of collisions per second per unit volume between the reacting molecules is called the collision frequency (Z). For A + B → C + D,
Rate = Z_AB e^(−Ea/RT)
where Z_AB is the collision frequency of A and B.
• For many reactions, Rate = P Z_AB e^(−Ea/RT), where P is the steric factor, which accounts for the proper orientation of the colliding molecules.

PROBLEM. The activation energy for the reaction 2HI(g) → H2(g) + I2(g) is 209.5 kJ mol^-1 at 581 K. Calculate the fraction of molecules of reactants having energy equal to or greater than the activation energy.

SOLUTION.
Ea = 209.5 kJ mol^-1 = 209500 J mol^-1; T = 581 K; R = 8.314 J K^-1 mol^-1
The fraction of molecules with energy equal to or greater than the activation energy is
x = e^(−Ea/RT)
log x = −Ea/(2.303RT) = −209500/(2.303 × 8.314 × 581) = −18.8323
x = antilog(−18.8323) = 1.471 × 10^-19

Important questions (Chemical Kinetics):

Question 1. A reaction is of second order with respect to a reactant. How will the rate of reaction be affected if the concentration of this reactant is (i) doubled, (ii) reduced to half?
Answer: Rate = k[A]^2. (i) The rate of reaction becomes 4 times. (ii) The rate of reaction becomes 1/4th.

Question 2. Define: 1. Elementary step in a reaction 2. Rate of a reaction 3. Order of a reaction 4. Activation energy of a reaction 5. Rate expression 6. Rate constant
Answer:
1. Elementary step in a reaction: a reaction step that takes place in a single stage. Example: H2 + I2 → 2HI
2. Rate of a reaction: the change in the concentration of any one of the reactants or products per unit time.
3. Order of a reaction: the sum of the powers of the molar concentrations of the reacting species in the rate equation of the reaction. It can be a whole number, zero, fractional, positive or negative, and is determined experimentally. It applies to the overall reaction, not to its individual steps. For r = k[A]^x[B]^y, order = x + y.
4. Activation energy of a reaction: the minimum extra amount of energy that the reactant molecules must absorb to form the activated complex.
5. Rate expression: expresses the rate of reaction in terms of the molar concentrations of the reactants, each raised to a power that may or may not equal the stoichiometric coefficient of that reactant in the balanced chemical equation.
6. Rate constant: the rate of reaction when the molar concentration of each reactant is taken as unity.

Question 3. A reaction is of first order in reactant A and of second order in reactant B.
How is the rate of this reaction affected when (i) the concentration of B alone is increased to three times, (ii) the concentrations of A as well as B are doubled?
Answer: (i) r = 9 times (ii) r = 8 times

Question 4. The thermal decomposition of HCO2H is a first order reaction with a rate constant of 2.4 × 10^-3 s^-1 at a certain temperature. Calculate how long it will take for three-fourths of the initial quantity of HCO2H to decompose. (log 0.25 = −0.6021)
Answer: t = 577.6 seconds (Hint: use t = (2.303/k) log [a/(a − x)])

Question 5. (a) For a reaction A + B → P, the rate law is given by r = k[A]^1/2[B]^2. What is the order of this reaction? (b) A first order reaction is found to have a rate constant k = 5.5 × 10^-14 s^-1. Find the half-life of the reaction.
Answer: (a) 5/2 (b) t1/2 = 1.26 × 10^13 s (Hint: t1/2 = 0.693/k)

Question 6. A first order gas phase reaction A2B2(g) → 2A(g) + 2B(g) at 400°C has the rate constant k = 2.0 × 10^-4 s^-1. What percentage of A2B2 is decomposed on heating for 900 seconds? (antilog 0.0781 = 1.197)
Answer: percentage of A2B2 decomposed = 16.45%

Question 7. For the reaction H2 + Cl2 → 2HCl, Rate = k. (i) Write the order and molecularity of this reaction. (ii) Write the unit of k.
Answer: (i) This is a zero order reaction and its molecularity is two. (ii) Unit of k = mol L^-1 s^-1

Question 8. A first order reaction has a rate constant of 0.0051 min^-1. If we begin with a 0.10 M concentration of the reactant, what concentration of reactant will remain in solution after 3 hours?
Answer: [R] = 0.0399 M (Hint: use k = (2.303/t) log([R]0/[R]))

Question 9. For a decomposition reaction the values of the rate constant k at two different temperatures are given below:
k1 = 2.15 × 10^-8 L mol^-1 s^-1 at 650 K
k2 = 2.39 × 10^-7 L mol^-1 s^-1 at 700 K
Calculate the value of the activation energy for this reaction.
(R = 8.314 J K^-1 mol^-1)
Answer: Ea = 182.2 kJ (Hint: use log(k2/k1) = [Ea/(2.303R)] × [(T2 − T1)/(T1 T2)])

Question 10. Nitrogen pentoxide decomposes according to the equation 2N2O5(g) → 4NO2(g) + O2(g). This first order reaction was allowed to proceed at 40°C and the data below were collected:

N2O5 (M)   Time (min)
0.400      0.0
0.289      20.0
0.209      40.0
0.151      60.0
0.109      80.0

(a) Calculate the rate constant. Include units with your answer. (b) What will be the concentration of N2O5 after 100 minutes? (c) Calculate the initial rate of reaction.
Answer: (a) k = 0.0163 min^-1 (b) [A] = 0.078 M (c) R = 0.00652 M min^-1

Question 11. The rate of a reaction becomes four times when the temperature changes from 293 K to 313 K. Calculate the energy of activation (Ea) of the reaction, assuming that it does not change with temperature. (R = 8.314 J K^-1 mol^-1, log 4 = 0.6021)
Answer: Ea = 52.8 kJ mol^-1 (Hint: use log(k2/k1) = [Ea/(2.303R)] × [(T2 − T1)/(T1 T2)])

Question 12. For the first order thermal decomposition reaction C2H5Cl(g) → C2H4(g) + HCl(g), the following data were obtained:

Time (s)   Total pressure (atm)
0          0.30
300        0.50

Calculate the rate constant. (Given: log 2 = 0.301, log 3 = 0.4771, log 4 = 0.6021)
Answer: k = 3.66 × 10^-3 s^-1 (Hint: k = (2.303/t) log[P0/(2P0 − Pt)])

Question 13. If the half-life period of a first order reaction in A is 2 minutes, how long will it take [A] to reach 25% of its initial concentration?
Answer: t = 4 min (Hint: t = (2.303/k) log([A]0/[A]))

Question 14. Define: 1. Average rate of a reaction 2. Instantaneous rate of a reaction 3. Molecularity of a reaction 4. Pseudo first order reaction 5. Half-life period of a reaction (t1/2) 6. Specific rate of a reaction
Answer:
1. Average rate of a reaction: the change in concentration of reactants or products divided by the time taken for the change to occur.
R → P: Average rate = −Δ[R]/Δt, or equivalently Average rate = +Δ[P]/Δt
2. Instantaneous rate of a reaction: the rate of change in concentration of any one reactant or product at a particular moment of time: limit as Δt → 0 of (−Δ[R]/Δt) = −d[R]/dt
3. Molecularity of a reaction: the number of atoms, ions or molecules that must collide with one another simultaneously to result in a chemical reaction. Molecularity is always a whole number.
4. Pseudo first order reaction: a reaction that is not truly of first order but that, under certain conditions, behaves as a first order reaction.
5. Half-life period of a reaction (t1/2): the time taken for half of the reaction to complete.
6. Specific rate of a reaction: the rate of reaction when the molar concentration of each of the reactants is unity.

Mohd. Sharif, B.Tech (Mechanical Engineering), Founder of Wisdom Academy
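Several of the numerical answers above can be checked with a short first-order-kinetics script; a sketch using the same formulas as the hints:

```python
import math

def time_to_fraction(k, fraction_remaining):
    """Time for [A]/[A]0 to reach fraction_remaining, first order:
    t = (2.303/k) log(1/fraction_remaining)."""
    return (2.303 / k) * math.log10(1.0 / fraction_remaining)

def remaining(k, c0, t):
    """Concentration left after time t, first order: [A] = [A]0 e^(-kt)."""
    return c0 * math.exp(-k * t)

print(time_to_fraction(2.4e-3, 0.25))        # Question 4:  ≈ 577.6 s
print(remaining(0.0051, 0.10, 180))          # Question 8:  ≈ 0.0399 M after 3 h
print(time_to_fraction(0.693 / 2.0, 0.25))   # Question 13: ≈ 4 min (t1/2 = 2 min)
```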
Extending the Latent Variable Model to Non-Independent Longitudinal Dichotomous Response Data
Matthew M. Hutmacher
Ann Arbor Pharmacometrics Group, Ann Arbor, MI, USA

Background: Sheiner and Sheiner et al. brought attention to generalized nonlinear mixed effects modeling of ordered categorical data, and its utility for drug development. Since the publication of these articles, exposure-response analyses of such data have been increasingly performed to inform decision making. Hutmacher et al. expanded upon this work, relating the models reported to the concept of a latent variable (LV). The LV approach assumes an underlying unobserved continuous variable, which can be mapped to the probability of observing a response using an unknown threshold parameter. The objective was to promote the incorporation of pharmacological concepts when postulating models for dichotomous data by providing a framework for including, for example, pharmacokinetic (effect compartment) or pharmacodynamic onset (indirect response) of drug effect. The LV approach was developed assuming independence between the dichotomous responses within a subject. Recently, Lacroix et al. reported that fewer transitions between response values were observed than would be predicted by assuming the responses are independent. The authors implemented methods developed by Karlsson et al., and incorporated a Markov component to address this dependence between responses. The probability of observing the current response was shown to be related to prior responses. The focus of the current work is to extend the LV approach to accommodate non-independent longitudinal dichotomous response data. This multivariate latent variable (MLV) approach attributes the dependence between responses to correlations between latent (unobserved) residuals. The latent residuals are assumed to be distributed as a multivariate normal.
General correlation structures can be applied to the latent residuals, but the first-order auto regressive and the spatial power structure, which relates the degree of correlation to the time (distance) between the responses, are obvious choices. The method is convenient with respect to testing for correlation. Setting the correlation parameters to 0 yields a model in which the responses are considered independent; thus, the LV approach is nested within the MLV approach. Additionally the MLV parameters are interpretable relative to the LV parameters. The MLV approach is flexible in that it can generate data that range from independent (correlations equal to 0) to complete dependence (correlations equal to 1), and it is parsimonious in that the amount of dependence can be governed by very few parameters. Methods: Simulation using the MLV framework is straightforward. However, model fitting and estimation is complicated by the intractability of the cumulative multivariate normal distribution. The likelihood, conditioned on the subject-specific random effects, is constructed using a sequence of probabilities, each probability conditioned on the previous latent residuals (Cappellari and Jenkins). The latent residuals in the probability statements are translated to independence using the Cholesky factorization of the correlation matrix. This permits each probability statement to be considered separately, simplifying estimation. The conditional probabilities are approximated using a pseudo stochastic approximation which uses samples from truncated normal distributions. Adaptive Gaussian quadrature is used to construct the overall marginal likelihood, which is unconditional on the subject-specific random effects. A simulation study was performed to evaluate the MLV method. The design was based on the ACR20 trial reported in Hutmacher, but the model used to generate the data was simplified. 
A first-order auto regressive structure with a correlation parameter of 0.5 was used to simulate the dependent data. LV and MLV models were fitted using the NLMIXED procedure in SAS to the dependent data as well as independent data for comparison. Biases in the fixed and random effects parameters for both approaches were quantified. Results: No appreciable biases of the estimates were noted for either method fitted to the independent data. However, biases greater than 20% for the fixed effects and 100% for the random effects parameters were reported for the LV approach fitted to the dependent data. Conclusion: Failure to address the dependence between dichotomous response data can lead to biased parameter estimates. The MLV approach is a viable method to handle such data and it is not difficult to implement. The approach is not likely to be practical however when subjects have large numbers of observations unless the latent variable correlation structure is simplified. [1]. Sheiner LB. A new approach to the analysis of analgesic drug trials, illustrated with bromfenac data. Clinical Pharmacology and Therapeutics 1994; 56:309-322. [2]. Sheiner LB, Beal SL, Dunne A. Analysis of nonrandomly censored ordered categorical longitudinal data from analgesic trials. Journal of the American Statistical Association 1997; 92:1235-1244. [3]. Hutmacher MM, Krishnaswami S, Kowalski KG. Exposure-response modeling using latent variables for the efficacy of a JAK3 inhibitor administered to rheumatoid arthritis patients. Journal of Pharmacokinetics and Pharmacodynamics 2008; 35:139-157. [4]. Lacroix BD, Lovern MR, Stockis A, Sargentini-Maier ML, Karlsson MO, Friberg LE. A pharmacodynamic Markov mixed-effects model for determining the effect of exposure to certolizumab pegol on the ACR20 score in patients with rheumatoid arthritis. Clinical Pharmacology and Therapeutics 2009; 86:387-395. [5]. Karlsson MO, Schoemaker RC, Kemp B, Cohen AF, van Gerven JM, Tuk B, Peck CC, Danhof M. 
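A minimal sketch (not the author's code) of the kind of simulation described in the Methods: dependent dichotomous responses generated from a latent variable with AR(1)-correlated standard-normal residuals. The drug effect, threshold, and trial dimensions below are illustrative assumptions:

```python
import random
import math

random.seed(0)

def ar1_residuals(n_obs, rho):
    """Unit-variance residuals with corr(eps_i, eps_j) = rho**|i-j|
    (first-order autoregressive / spatial-power structure)."""
    eps = [random.gauss(0.0, 1.0)]
    for _ in range(n_obs - 1):
        eps.append(rho * eps[-1] + math.sqrt(1.0 - rho**2) * random.gauss(0.0, 1.0))
    return eps

def simulate_subject(n_obs, rho, drug_effect):
    eta = random.gauss(0.0, 1.0)           # subject-specific random effect
    eps = ar1_residuals(n_obs, rho)
    # Observed response is 1 when the latent variable crosses the (zero) threshold.
    return [1 if drug_effect + eta + e > 0 else 0 for e in eps]

# 200 subjects, 6 visits each, correlation parameter 0.5 as in the simulation study.
subjects = [simulate_subject(6, rho=0.5, drug_effect=0.5) for _ in range(200)]
transitions = sum(abs(y[i + 1] - y[i]) for y in subjects for i in range(5))
print(f"responder fraction ≈ {sum(map(sum, subjects)) / (200 * 6):.2f}, "
      f"transitions = {transitions}")
```

With rho > 0 the simulated series shows fewer response transitions than an independent-residual model would produce, which is the empirical pattern the abstract attributes to Lacroix et al.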
A pharmacodynamic Markov mixed-effects model for the effect of temazepam on sleep. Clinical Pharmacology and Therapeutics 2000; 68:175-188.
[6]. Cappellari L, Jenkins SP. Multivariate probit regression using simulated maximum likelihood. Stata Journal 2003; 3:278-294.

Reference: PAGE 19 (2010) Abstr 1693 [www.page-meeting.org/?abstract=1693]
Oral Presentation: Methodology
Electric Baseboard Heater with Radiation and Convection

Model Description

The input object ZoneHVAC:Baseboard:RadiantConvective:Electric is used to specify electric baseboard heaters that include both convective and radiant heat additions to a zone from a baseboard heater that uses electricity rather than hot water as a heating source. The radiant heat transfer to people as well as surfaces within a zone is determined in the same fashion as for the hot water and steam baseboard heaters with radiation and convection. The electric baseboard heater receives energy via electric resistance heating. The radiant heat, calculated as a user-defined fraction of the heating capacity of the baseboard unit, impacts the surface heat balances and the thermal comfort of occupants in the zone. EnergyPlus then assumes that the remaining convective gains from the unit are evenly spread throughout the space, thus having an immediate impact on the zone air heat balance, which is used to calculate the mean air temperature (MAT) within the space.

Convective Electric Baseboard Heater Inputs

Like many other HVAC components, the electric baseboard model requires a unique identifying name and an availability schedule. The availability schedule defines when the unit can provide heating to the zone. The input also requires a capacity and efficiency for the unit. While the efficiency is a required input that defaults to unity, the capacity can be chosen to be autosized by EnergyPlus.

All inputs for the radiant heat calculation are the same as for the water and steam baseboard heaters with radiation and convection in EnergyPlus. Users are required to input fractions that specify the fraction of the total radiant heat delivered directly to surfaces as well as people in a space.
These radiant distribution fractions must sum to unity, and each electric baseboard heater is allowed to distribute energy to up to 20 surfaces.

Simulation and Control

The simulation and control of this model is fairly straightforward. When the unit is available and there is a heating load within a space, the electric baseboard unit will attempt to meet the entire remaining heating load if it has enough capacity to do so. If the zone heating load is greater than the baseboard heating capacity, then the baseboard unit will run at its capacity. The model then determines the radiant heat emitted by the baseboard unit using the following equation:

qrad = q · Fracrad

where qrad is the total radiant heat transfer from the baseboard unit, q is the lesser of the heating capacity of the unit and the zone heating load, and Fracrad is the user-defined radiant fraction for the baseboard unit. The radiant heat additions to people and surfaces are thus:

qpeople = qrad · Fracpeople
qsurface,i = qrad · Fracsurface,i

where qpeople is the radiant heat transfer to people, qsurface,i is the heat radiated to surface i, Fracpeople is the fraction of the heat radiated to people, and Fracsurface,i is the fraction of the heat radiated to surface i.

Based on the above equations, the model then distributes the radiant heat additions to the appropriate surfaces and people in the zone and the convective heat addition to the air in the zone. The surface heat balances are then recalculated to account for the radiant heat added to the zone surfaces by the baseboard unit. It is assumed that the radiant heat incident on people in the zone is taken into account in the thermal comfort models and is then converted to convection to the zone so that the zone heat balance includes this amount of heat which would otherwise be lost (see the High Temperature Radiant Heater Model for more information about how radiant energy added by these types of systems affects thermal comfort).
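The radiant/convective split described above can be sketched as follows; this is not the EnergyPlus source, and the load and fractions are illustrative inputs:

```python
def distribute_heat(q, frac_rad, frac_people, frac_surface):
    """Split baseboard output q [W] into convective and radiant parts.

    frac_rad:     user-defined radiant fraction of the total output.
    frac_people:  fraction of the radiant part incident on people.
    frac_surface: per-surface fractions of the radiant part.
    frac_people + sum(frac_surface) must equal 1 (radiant fractions sum to unity).
    """
    assert abs(frac_people + sum(frac_surface) - 1.0) < 1e-9
    q_rad = q * frac_rad
    q_conv = q - q_rad                       # remainder goes to the zone air
    q_people = q_rad * frac_people
    q_surfaces = [q_rad * f for f in frac_surface]
    return q_conv, q_people, q_surfaces

q_conv, q_people, q_surfaces = distribute_heat(
    1000.0, frac_rad=0.3, frac_people=0.2, frac_surface=[0.5, 0.3])
print(q_conv, q_people, q_surfaces)   # ≈ 700.0, 60.0, [150.0, 90.0]
```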
The load met, the actual convective system impact of the baseboard heater, qreq, is calculated using the following equation:

qreq = (qsurf,c − qsurf,z) + qconv + qpeople

where qsurf,c is the convection from the surfaces to the air in the zone once the radiation from the baseboard unit is taken into account; qsurf,z is the convection from the surfaces to the air in the zone when the radiation from the baseboard unit was zero; qconv is the convective heat transfer from the heater to the zone air; and qpeople is the radiant heat to the people.

The accounting of the radiant heat added to the zone (surfaces) by the electric baseboard heater is very similar to the method used in the high temperature radiant system model. After all the system time steps have been simulated for the zone time step, an “average” zone heat balance calculation is done (similar to the radiant systems). The energy consumption of the baseboard heater is calculated using the user-supplied efficiency and the actual convective system impact, as

Qelec = qreq / η

where Qelec is the energy consumption and η is the efficiency of the unit. If the unit is scheduled off or there is no heating load for the zone, then there will be no heat transfer from the unit. The model assumes no heat storage in the baseboard unit itself and thus no residual heat transfer in future system time steps due to heat storage in the metal of the baseboard unit. While there are no specific references for this model as it is fairly intuitive, the user can always refer to the ASHRAE Handbook series for general information on different system types as needed.

Steam Baseboard Heater with Radiation and Convection

Model Description

The input object ZoneHVAC:Baseboard:RadiantConvective:Steam is used to specify steam baseboard heaters that include both convective and radiant heat additions to a zone from a baseboard heater that uses steam rather than hot water or electricity as a heating source.
The radiant heat transfer to people as well as surfaces within a zone is determined in the same fashion as for the hot water and electric baseboard heaters with radiation and convection. The steam baseboard heater produces heat mostly as a result of the condensation of steam inside the baseboard unit. Some sensible heat is also extracted through subcooling of the condensed steam (water) below the phase change temperature. Since it is assumed that the steam temperature is much greater than the zone temperature, all of the heat generated by the phase change and subcooling is assumed to be transferred to the zone. The radiant heat, calculated as a user-defined fraction of the heating provided by the baseboard unit, impacts the surface heat balances and the thermal comfort of occupants in the zone. EnergyPlus then assumes that the remaining convective gains from the unit are evenly spread throughout the space, thus having an immediate impact on the zone air heat balance, which is used to calculate the mean air temperature (MAT) within the space.

This model determines the heating provided by the unit from the sum of the latent heat transfer and the sensible cooling of water, in a similar fashion to the steam coil model in EnergyPlus. Overall energy balances on the steam and air handle the heat exchange between the steam loop and the zone air. The mass flow rate of steam is determined based on the heating demand in the zone. The model requires the user to input the desired degree of subcooling, from which it determines the heating rate contributed by the cooling of the condensate.

Steam Baseboard Heater Inputs

The steam baseboard model requires a unique identifying name, an availability schedule, and steam inlet and outlet nodes. These define the availability of the unit for providing heating to the zone and the node connections that relate to the primary system.
It also requires the desired degree of subcooling to calculate the heating capacity and temperature of the condensate. A maximum design flow rate is required, and the user can request this parameter to be auto-sized by EnergyPlus. In addition, a convergence tolerance is requested of the user to control the system output. In almost all cases, the user should simply accept the default value for the convergence tolerance unless engaged in an expert study of controls logic in EnergyPlus. All of the inputs used to characterize the radiant heat transfer are the same as for the water and electric radiant-convective baseboard heater models in EnergyPlus. User inputs for the radiant fraction and for the fraction of radiant energy incident both on people and on surfaces are required to calculate radiant energy distribution from the heater to the people and surfaces. These radiant distribution fractions must sum to unity, and each steam baseboard heater is allowed to distribute energy to up to 20 surfaces. Simulation and Control[LINK] The algorithm for the steam baseboard model with radiation and convection is similar to the steam coil model in EnergyPlus, while the simulation of radiant components is the same as the water radiant-convective baseboard model. This model initializes all conditions at the inlet node such as mass flow rate, temperature, enthalpy, and humidity ratio. The model then calculates the heating output of the steam baseboard (q) using: where ˙ms is the mass flow rate of steam in kg/s, hfg is the heat of vaporization of steam in J/kg, cpw is the specific heat of water in J/kg.K, and Δt is the degree of subcooling in ∘C.
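The heating-output equation elided above ("using: where ˙ms is …") can be reconstructed from the variable definitions that follow it, as latent heat of condensation plus sensible cooling of the condensate. This is a hedged reconstruction, not the original typeset equation:

```latex
q = \dot{m}_s \left( h_{fg} + c_{p,w}\,\Delta t \right)
```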
The outlet steam temperature is then calculated from: The model then determines the radiant heat emitted by the baseboard unit using the following equation: where qrad is the total radiant heat transfer from the baseboard unit, q is the lesser of the heating capacity of the unit and the zone heating load, and Fracrad is the user-defined radiant fraction for the baseboard unit. The radiant heat additions to people and surfaces are thus: where qpeople is the radiant heat transfer to people, qsurface,i is the heat radiated to surface i, Fracpeople is the fraction of the heat radiated to people, and Fracsurface,i is the fraction of the heat radiated to surface i. Based on these above equations, the model then distributes the radiant heat additions to the appropriate surfaces and people in the zone and the convective heat addition to air in the zone. The surface heat balances are then recalculated to account for the radiant heat added to the zone surfaces by the baseboard unit. It is assumed that the radiant heat incident on people in the zone is taken into account in the thermal comfort models and then is converted to convection to the zone so that the zone heat balance includes this amount of heat which would otherwise be lost (see the High Temperature Radiant Heater Model for more information about how radiant energy added by these types of systems affect thermal comfort). The load met, the actual convective system impact for the baseboard heater, qreq, is calculated using the following equation: where qsurf,c is the convection from the surfaces to the air in the zone once the radiation from the baseboard unit is taken into account; qsurf,z is the convection from the surfaces to the air in the zone when the radiation from the baseboard unit was zero; qconv is the convective heat transfer from the heater to the zone air; and qpeople is radiant heat to the people. 
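The equations elided in the passage above ("calculated from:", "using the following equation:", and the radiant-distribution relations) can be reconstructed from the variable definitions given around them. This is a hedged reconstruction consistent with those definitions, not a quote of the original typeset equations:

```latex
\begin{aligned}
T_{out} &= T_{in} - \Delta t \\
q_{rad} &= q \cdot Frac_{rad} \\
q_{people} &= q_{rad} \cdot Frac_{people}, \qquad
q_{surface,i} = q_{rad} \cdot Frac_{surface,i} \\
q_{req} &= \left(q_{surf,c} - q_{surf,z}\right) + q_{conv} + q_{people}
\end{aligned}
```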
The accounting of the radiant heat added to the zone (surfaces) by the steam baseboard heater is very similar to the method used in the high temperature radiant system model. After all the system time steps have been simulated for the zone time step, an “average” zone heat balance calculation is done (similar to the radiant systems). Note that if the unit was scheduled off or there is no steam flow rate through the baseboard unit, then there will be no heat transfer from the unit. The model assumes no heat storage in the unit itself and thus no residual heat transfer in future system time steps due to heat storage in the steam or metal of the unit. While there are no specific references for this model as it is fairly intuitive, the user can always refer to the ASHRAE Handbook series for general information on different system types as needed. The user can also consult information on the EnergyPlus steam coil for further details.
4 Digit By 1 Digit Multiplication Worksheets Pdf With Answers Mathematics, particularly multiplication, forms the foundation of many academic subjects and real-world applications. Yet, for many students, grasping multiplication can be a challenge. To address this hurdle, educators and parents have embraced a powerful tool: 4 Digit By 1 Digit Multiplication Worksheets Pdf With Answers. Introduction to 4 Digit By 1 Digit Multiplication Worksheets Pdf With Answers Our free 4-digit by 1-digit multiplication word problems worksheets require kids to solve interactive word problems on multiplication by reading, understanding, and multiplying numbers in the thousands by single-digit numbers. Young learners can greatly expand their computation skills by solving all six word problems in each set. Horizontal 4 Digit x 1 Digit FREE: This worksheet has a series of 4-digit by 1-digit horizontal multiplication problems. Students rewrite the math problems vertically and solve (example: 6,741 x 4). 4th and 5th Grades. View PDF. Value of Multiplication Practice Understanding multiplication is critical, laying a solid foundation for advanced mathematical concepts. 4 Digit By 1 Digit Multiplication Worksheets Pdf With Answers offer structured and targeted practice, cultivating a deeper understanding of this fundamental arithmetic operation.
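The kind of practice sheet described above (six 4-digit by 1-digit problems with an answer key) can be sketched programmatically. This is a hypothetical generator for illustration only; the function name, seed, and ranges are my own choices, not taken from any worksheet publisher:

```python
import random

def make_problems(count=6, seed=0):
    """Generate (factor_a, factor_b, answer) triples for 4-digit x 1-digit practice."""
    rng = random.Random(seed)           # fixed seed -> reproducible worksheet
    problems = []
    for _ in range(count):
        a = rng.randint(1000, 9999)     # 4-digit factor
        b = rng.randint(2, 9)           # 1-digit factor
        problems.append((a, b, a * b))  # the answer key is just the product
    return problems

for a, b, ans in make_problems():
    print(f"{a:>5} x {b} = {ans}")
```

Printing the first two fields without the answer gives the worksheet; printing all three gives the answer key.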
Evolution of 4 Digit By 1 Digit Multiplication Worksheets Pdf With Answers There are seven groups of multiplication worksheets: Multiply by 1-9; Multiply by Multiples of 10 (e.g. 6 x 30); Multiplying Multiples of 10, 100, 1000 (e.g. 70 x 400); Multiply 3 digit by 1 digit (e.g. 435 x 6); Multiply Multi-digit by 1 digit (e.g. 6,435 x 7); Multiply 2 digit by 2 digit (e.g. 35 x 24); Multiply 3 digit by 2 digit (e.g. 215 x 32); Multiply 4 digit by 1 digit. From standard pen-and-paper exercises to interactive digital formats, 4 Digit By 1 Digit Multiplication Worksheets Pdf With Answers have evolved to suit diverse learning styles and preferences. Types of 4 Digit By 1 Digit Multiplication Worksheets Pdf With Answers Basic Multiplication Sheets: Easy exercises concentrating on multiplication tables, helping learners build a strong arithmetic base. Word Problem Worksheets: Real-life scenarios incorporated into problems, boosting critical thinking and application skills. Timed Multiplication Drills: Tests designed to improve speed and accuracy, aiding rapid mental math.
Benefits of Using 4 Digit By 1 Digit Multiplication Worksheets Pdf With Answers This is a collection of our 4-digit by 1-digit multiplication worksheets with answer keys. It includes various types of multiplication practice sheets to help your students learn to multiply numbers; view our MULTIPLICATION Bundle to save and get all types of worksheets. There are five sets included in this packet. In one worksheet, learners multiply 4-digit multiples of 10 by 1-digit numbers. The Multiply 4 Digit and 1 Digit Numbers: Horizontal Multiplication Worksheet makes math practice a joyride by solving problems that multiply 4-digit and 1-digit numbers. Boosted Mathematical Abilities: Consistent practice builds multiplication proficiency, improving overall math capability. Enhanced Problem-Solving Abilities: Word problems in worksheets develop logical thinking and strategy application. Self-Paced Learning Advantages: Worksheets accommodate individual learning paces, fostering a comfortable and flexible learning environment. How to Create Engaging 4 Digit By 1 Digit Multiplication Worksheets Pdf With Answers Integrating Visuals and Colors: Vibrant visuals and colors capture attention, making worksheets visually appealing and engaging. Including Real-Life Scenarios: Relating multiplication to everyday situations adds relevance and practicality to exercises. Tailoring Worksheets to Different Ability Levels: Customizing worksheets for varying proficiency levels ensures inclusive learning. Interactive and Online Multiplication Resources Digital Multiplication Tools and Games: Technology-based resources offer interactive learning experiences, making multiplication engaging and enjoyable. Interactive Sites and Apps: Online platforms provide varied and accessible multiplication practice, supplementing standard worksheets.
Customizing Worksheets for Various Learning Styles Visual Learners: Visual aids and diagrams support students inclined toward visual learning. Auditory Learners: Spoken multiplication problems or mnemonics suit learners who grasp concepts through hearing. Kinesthetic Learners: Hands-on activities and manipulatives support kinesthetic learners in comprehending multiplication. Tips for Effective Implementation in Learning Consistency in Practice: Regular practice strengthens multiplication skills, promoting retention and fluency. Balancing Repetition and Variety: A mix of repeated exercises and varied problem formats keeps interest and understanding. Providing Constructive Feedback: Feedback helps identify areas for improvement, encouraging continued progress. Challenges in Multiplication Practice and Solutions Motivation and Engagement Hurdles: Monotonous drills can lead to disinterest; creative approaches can reignite motivation. Overcoming Fear of Math: Negative perceptions of math can hinder progress; creating a positive learning atmosphere is essential. Influence of 4 Digit By 1 Digit Multiplication Worksheets Pdf With Answers on Academic Performance Studies and Research Findings: Research shows a positive correlation between consistent worksheet use and improved math performance. 4 Digit By 1 Digit Multiplication Worksheets Pdf With Answers emerge as versatile tools, fostering mathematical proficiency in students while accommodating diverse learning styles. From standard drills to interactive online resources, these worksheets not only strengthen multiplication skills but also promote critical thinking and problem-solving abilities.
Check more of 4 Digit By 1 Digit Multiplication Worksheets Pdf With Answers below: 2 Digit By 2 Digit Multiplication Worksheets With Answers Free Printable; Multiplying 3 Digit By 2 Digit Numbers With Comma Separated Thousands A; 3 Digit By 2 Digit Multiplication Word Problems Worksheets Pdf Free Printable; Multiplying 4 Digit by 1 Digit Numbers L; Single Digit Multiplication Worksheets Free Printable; 4 Digit By 4 Digit Multiplication Worksheets Pdf Times Tables Worksheets; 2 Digit Multiplication Worksheet School; Multiplication Worksheets 4 Digits Times 1 Digit. Horizontal 4 Digit x 1 Digit FREE: This worksheet has a series of 4-digit by 1-digit horizontal multiplication problems. Students rewrite the math problems vertically and solve (example: 6,741 x 4). 4th and 5th Grades. View PDF. Multiply 4 digit by 1 digit numbers (K5 Learning): Below are six versions of our grade 5 multiplication worksheet on multiplying 4-digit by 1-digit numbers. These worksheets are pdf files: Worksheet 1, Worksheet 2, Worksheet 3, Worksheet 4, Worksheet 5, Worksheet 6.
Frequently Asked Questions (FAQs) Are 4 Digit By 1 Digit Multiplication Worksheets Pdf With Answers suitable for all age groups? Yes, worksheets can be tailored to different ages and skill levels, making them adaptable for diverse learners. How often should students practice using 4 Digit By 1 Digit Multiplication Worksheets Pdf With Answers? Consistent practice is crucial; regular sessions, ideally a few times a week, can yield significant improvement. Can worksheets alone improve math skills? Worksheets are a valuable tool but should be supplemented with varied learning methods for comprehensive skill development. Are there online platforms offering free 4 Digit By 1 Digit Multiplication Worksheets Pdf With Answers? Yes, many educational websites offer free access to a wide range of 4 Digit By 1 Digit Multiplication Worksheets Pdf With Answers. How can parents support their children's multiplication practice at home? Encouraging consistent practice, providing guidance, and creating a positive learning environment are beneficial steps.
Dynamical correlation functions for products of random matrices We introduce and study a family of discrete-time random processes related to products of random matrices. Such processes are formed by the singular values of random matrix products, and the number of factors in a random matrix product plays the role of the discrete time. We consider in detail the case when the (squared) singular values of the initial random matrix form a polynomial ensemble, and the initial random matrix is multiplied by standard complex Gaussian matrices. In this case, we show that the random process is a discrete-time determinantal point process. For three special cases (the case when the initial random matrix is a standard complex Gaussian matrix, the case when it is a truncated unitary matrix, or the case when it is a standard complex Gaussian matrix with a source) we compute the dynamical correlation functions explicitly, and find the hard edge scaling limits of the correlation kernels. The proofs rely on the Eynard-Mehta theorem, and on contour integral representations for the correlation kernels suitable for an asymptotic analysis. Bibliographical note Publisher Copyright: © 2015 World Scientific Publishing Company. Keywords: • Products of random matrices • determinantal point processes
Videnskabshistorisk Selskab - [dato]; Leonhard Euler was the most prolific mathematician of all time – but what did he do? In this talk I survey his life in Basel and at the Science Academies of St Petersburg and Berlin, and outline some of the contributions he made to the many areas in which he worked – from the very pure (number theory, the geometry of a circle, and combinatorics), via mechanics and the calculus, to the very applied (astronomy and ballistics). Robin Wilson is Emeritus Professor of Pure Mathematics at the Open University, Emeritus Professor of Geometry at Gresham College, a former Fellow of Keble College, Oxford, and currently Visiting Professor at the London School of Economics. He has written and edited many books on graph theory, including 'Introduction to Graph Theory' and 'Four Colours Suffice', and on the history of mathematics, including 'Lewis Carroll in Numberland', and is currently writing a popular biography of Leonhard Euler. He is actively involved with the popularization and communication of mathematics and its history, and was awarded the Mathematical Association of America's Lester Ford and Pólya prizes for 'outstanding expository writing'. He is currently President of the British Society for the History of Mathematics. Tuesday 11 November, 5 pm
Expanding the Expression (5x+3)(x-2) This article will guide you through the steps of expanding the expression (5x+3)(x-2). This process is often referred to as multiplying binomials. Understanding the Concept When we multiply binomials, we are essentially applying the distributive property twice. The distributive property states that multiplying a sum by a number is the same as multiplying each term of the sum by that number. Step-by-Step Solution 1. Distribute the first term of the first binomial: Multiply 5x by both terms in the second binomial: 5x * x = 5x² and 5x * -2 = -10x. 2. Distribute the second term of the first binomial: Multiply 3 by both terms in the second binomial: 3 * x = 3x and 3 * -2 = -6. 3. Combine the terms: The expanded expression is: 5x² - 10x + 3x - 6. 4. Simplify by combining like terms: -10x + 3x = -7x. Final Result Therefore, the expanded form of (5x+3)(x-2) is 5x² - 7x - 6. Key Takeaways • Multiplying binomials involves applying the distributive property twice. • Each term in the first binomial must be multiplied by each term in the second binomial. • Combine like terms after the multiplication to simplify the expression.
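The expansion can be sanity-checked numerically: two degree-2 polynomials that agree at three or more points are identical, so testing a handful of values confirms the algebra. A stdlib-only sketch (function names are my own):

```python
# Check that (5x + 3)(x - 2) equals 5x^2 - 7x - 6 at several sample points.
def factored(x):
    return (5 * x + 3) * (x - 2)

def expanded(x):
    return 5 * x**2 - 7 * x - 6

# Degree-2 polynomials agreeing at 3+ points must be equal everywhere.
for x in [-2, -1, 0, 1, 2, 10]:
    assert factored(x) == expanded(x), x
print("expansion verified")
```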
Simplifying (6x³y⁵)² In mathematics, simplifying expressions is a crucial skill. Let's break down how to simplify the expression (6x³y⁵)². Understanding the Concept The expression involves a power of a product, meaning we have a product of terms raised to a power. The rule for simplifying this type of expression is: (ab)² = a²b² This rule essentially states that we can distribute the exponent to each term within the parentheses. Applying the Rule 1. Distribute the exponent: (6x³y⁵)² = 6² (x³)² (y⁵)² 2. Simplify the exponents: 6² (x³)² (y⁵)² = 36x⁶y¹⁰ Final Answer Therefore, the simplified form of (6x³y⁵)² is 36x⁶y¹⁰. Key Points to Remember • When raising a power to another power, multiply the exponents: (a^m)^n = a^(m*n). • The exponent applies to every term inside the parentheses. • Pay attention to the order of operations: exponents are performed before multiplication.
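The simplification can likewise be checked by evaluating both forms over a grid of integer values; matching results at every point confirms the exponent arithmetic. A stdlib-only sketch (function names are my own):

```python
# Verify that (6 x^3 y^5)^2 equals 36 x^6 y^10 at integer sample points.
def original(x, y):
    return (6 * x**3 * y**5) ** 2

def simplified(x, y):
    return 36 * x**6 * y**10

for x in range(-3, 4):
    for y in range(-3, 4):
        assert original(x, y) == simplified(x, y), (x, y)
print("simplification verified")
```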
What Are Equivalent Fractions and How Are They Calculated? - Factor Calculator Imagine you’re baking a cake and need to divide it into equal slices. How would you ensure that each slice is the same size? The answer lies in the concept of equivalent fractions. These fractions represent the same value but in different forms. But what exactly are equivalent fractions, and how do we find them? Let’s dive into this fascinating topic and discover the secrets of fraction equivalence. Key Takeaways • Equivalent fractions represent the same fractional value using different numerators and denominators. • Equivalent fractions can be calculated by multiplying or dividing the numerator and denominator by the same non-zero number. • Understanding equivalent fractions is key for solving problems, comparing fractions, and improving math skills. • Equivalent fractions have many practical uses, such as in cooking, measurement, and finance. • Working with equivalent fractions involves simplifying, comparing, and converting between different fractions. What Are Equivalent Fractions? The concept of equivalent fractions is key to grasping fractions. These fractions have the same value, despite their different numerators and denominators. They look different but are mathematically identical. Equivalent fractions are vital for many mathematical tasks. This includes comparing fractions, adding and subtracting them, and solving problems. Understanding them well is essential for both students and professionals. It helps in grasping fractional relationships and simplifies complex math. Equivalent fractions can be produced by: 1. Multiplying the numerator and denominator of a fraction by the same non-zero number 2. Dividing the numerator and denominator of a fraction by the same non-zero number 3. Finding a common denominator between two or more fractions Knowing about equivalent fractions makes math operations smoother. It makes comparing fractions easy and builds a strong foundation in math.
This knowledge is essential for solving problems. The Concept of Fraction Equivalence Equivalent fractions are a cornerstone in mathematics, vital for mastering fractions. They share the same value but differ in their numerators and denominators. Grasping the essence of fraction equivalence is key to excelling in various mathematical realms. To create equivalent fractions, one multiplies or divides the numerator and denominator by the same non-zero number. This method keeps the fraction’s value intact but changes its appearance. Recognizing and working with equivalent fractions enhances one’s ability to solve fraction-based problems efficiently. Here are ways to identify equivalent fractions: • Simplifying Fractions: Reduce a fraction to its simplest form by dividing both the numerator and denominator by their greatest common factor. • Multiplying Fractions: Multiply the numerator and denominator by the same non-zero number to create a fraction with a larger appearance. • Dividing Fractions: Divide the numerator and denominator by the same non-zero number to create a fraction with a smaller appearance. By understanding fraction equivalence, individuals can confidently tackle mathematical operations and real-world applications involving fractions. Mastering these principles is essential for building a strong foundation in mathematics.
Original Fraction | Equivalent Fraction | Multiplication/Division Process
1/2 | 2/4 | Multiply both numerator and denominator by 2
3/6 | 1/2 | Divide both numerator and denominator by 3
4/8 | 1/2 | Divide both numerator and denominator by 4
How Are Equivalent Fractions Calculated? To calculate equivalent fractions, follow a simple step-by-step process. It’s essential to grasp the concept of fraction equivalence and apply it methodically. Here’s the method to calculate equivalent fractions: 1. Identify the original fraction. Begin with a fraction, like 1/2 or 3/4. 2. Multiply both the numerator and denominator by the same non-zero number.
This creates a new fraction with the same value as the original. For instance, if the fraction is 1/2, multiplying both parts by 2 yields 2/4. 3. Repeat the process to generate more equivalent fractions. Continuing to multiply both parts by the same non-zero number, you can create an endless number of equivalent fractions for any given fraction. This straightforward technique enables you to generate an unlimited number of equivalent fractions. This is useful in many mathematical contexts and problem-solving scenarios. Mastering the art of calculating equivalent fractions is a key skill in mathematics. It enhances your ability to work with fractions efficiently and effectively. By mastering this concept and following the step-by-step process, you gain a deeper understanding of fractions and their practical applications. What Are Equivalent Fractions and How Are They Calculated? Equivalent fractions are a cornerstone in mathematics, essential for simplifying and manipulating fractions. They share the same value, despite differing numerators and denominators. Developing proficiency in working with equivalent fractions is vital, applicable in numerous mathematical contexts. To find equivalent fractions, identifying the greatest common factor (GCF) of the numerator and denominator is key. After finding the GCF, divide both the numerator and denominator by it. This simplifies the fraction, retaining its original value. This method, known as reducing fractions, is fundamental. For instance, take the fraction 3/6. The GCF of 3 and 6 is 3. Dividing both by 3 yields 1/2. This fraction retains the original value but is now in its simplest form.
Original Fraction | GCF | Equivalent Fraction
3/6 | 3 | 1/2
6/12 | 6 | 1/2
9/15 | 3 | 3/5
Mastering the art of finding equivalent fractions enhances understanding of fraction operations. This skill is critical for advanced concepts like proportions, ratios, and algebra. It’s a foundational element in mathematics.
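The reduce-by-GCF process described above maps directly onto Python's `math.gcd`. A minimal sketch (the `simplify` helper is my own name, not a standard function) that reproduces the values in the table:

```python
from math import gcd

def simplify(num, den):
    """Reduce a fraction to lowest terms by dividing out the greatest common factor."""
    g = gcd(num, den)
    return num // g, den // g

# Reproduce the table: 3/6 -> 1/2, 6/12 -> 1/2, 9/15 -> 3/5
assert simplify(3, 6) == (1, 2)
assert simplify(6, 12) == (1, 2)
assert simplify(9, 15) == (3, 5)
```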
Real-Life Applications of Equivalent Fractions Equivalent fractions are not just for school; they’re essential in everyday life. They help us in the kitchen and on construction sites. These mathematical tools are versatile and play a key role in our daily activities. Cooking and Baking – In the kitchen, equivalent fractions are vital for measuring ingredients. For instance, knowing that 1/2 cup equals 4/8 cups helps bakers adjust recipes. This ensures the right proportions of ingredients without changing the recipe’s balance. Construction and Woodworking – In construction and woodworking, equivalent fractions aid in precise measurements. Carpenters use them to cut materials accurately. This precision leads to more efficient and successful projects. Finance and Investing – In finance, equivalent fractions help with calculations like comparing interest rates. They assist in making informed financial decisions and managing money effectively. Art and Design – In art and design, equivalent fractions help create balanced compositions. Artists and designers use them to scale designs and mix colors. This ensures harmonious and proportional elements in their work.
Application | Example
Cooking and Baking | Measuring ingredients using equivalent fractions (e.g., 1/2 cup = 4/8 cups)
Construction and Woodworking | Calculating precise measurements and cutting materials accurately
Finance and Investing | Comparing interest rates and calculating returns on investments
Art and Design | Creating proportional relationships and harmonious compositions
Learning about equivalent fractions opens doors to solving problems and making informed decisions. It helps individuals excel in various personal and professional areas. Strategies for Identifying and Working with Equivalent Fractions Mastering the art of identifying and working with equivalent fractions is vital in mathematics. It requires various techniques to overcome mathematical hurdles effectively.
These methods are essential for both students and professionals. Visual representations, like fraction models or number lines, are key. They help learners understand fraction equivalence. By seeing fractions, they can better comprehend the relationship between numerators and denominators. This makes identifying and working with equivalent fractions easier. Recognizing common factors is another critical technique. Finding the greatest common factor (GCF) between fractions’ numerators and denominators shows if they are equivalent. This skill is useful for simplifying or converting fractions. 1. Practice fraction conversions: Improve at converting fractions to their equivalent forms. This involves multiplying both the numerator and denominator by the same number. It enhances the ability to recognize and work with equivalent fractions. 2. Use visual models: Fraction models, such as fraction bars or diagrams, help illustrate fraction equivalence. They provide a tangible way to see how fractions with different numerators and denominators can represent the same quantity. 3. Recognize common factors: Identify the greatest common factor (GCF) between fractions’ numerators and denominators to check for equivalence. This method is very helpful when simplifying or converting fractions. By mastering these strategies, learners can improve their problem-solving abilities. They can better tackle challenges involving equivalent fractions. Through visual aids, recognizing common factors, and fraction conversion practice, a deeper understanding of this critical math concept is achieved. Conclusion: The Importance of Equivalent Fractions in Mathematics In conclusion, equivalent fractions are a cornerstone in mathematics, essential for various operations and real-life scenarios. They enhance problem-solving skills, enabling more informed decisions and smoother navigation through mathematical hurdles. 
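The "recognizing common factors" strategy above can be automated: two fractions a/b and c/d are equivalent exactly when a*d == b*c (for nonzero denominators). Python's `fractions.Fraction` normalizes to lowest terms, so comparing Fractions gives the same answer. A sketch (the `equivalent` helper is my own name):

```python
from fractions import Fraction

def equivalent(a, b, c, d):
    """a/b == c/d  <=>  a*d == b*c (valid for nonzero denominators b and d)."""
    return a * d == b * c

assert equivalent(1, 2, 4, 8)        # 1/2 and 4/8 are equivalent
assert not equivalent(2, 3, 3, 4)    # 2/3 and 3/4 are not
# Fraction reduces to lowest terms automatically, so it agrees:
assert Fraction(4, 8) == Fraction(1, 2)
```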
This understanding is key to mastering fraction equivalence, a skill that broadens our appreciation for mathematics. The article highlights the significance of fraction equivalence, the method to find equivalent fractions, and their practical uses. These fractions are vital in everyday tasks, from cooking and baking to comparing discounts and measuring quantities. They help us understand and interact with the world more effectively. For students, professionals, or anyone looking to improve their math skills, mastering equivalent fractions is a game-changer. It unlocks a deeper appreciation for mathematics’ elegance and utility. By grasping this concept, you’ll become more confident and adept at solving problems, ready to face challenges in both personal and professional spheres. What are equivalent fractions? Equivalent fractions are fractions that have the same value, despite their different forms. They look different but are mathematically the same. This makes them essential in mathematics. How are equivalent fractions calculated? To find equivalent fractions, multiply the numerator and denominator by the same non-zero number. This keeps the fraction’s value the same but changes how it looks. What is the concept of fraction equivalence? Fraction equivalence means that fractions with different numbers can have the same value. This happens when you multiply or divide both the numerator and denominator by the same number. It keeps the fraction’s value unchanged. How can equivalent fractions be used to simplify fractions? To simplify fractions, find their equivalent fractions in the lowest terms. This means dividing both numbers by their greatest common factor. It results in a fraction with the same value but in simpler form. What are some real-life applications of equivalent fractions? Equivalent fractions are used in many everyday situations. They help in measuring ingredients for cooking and baking, and in comparing costs in finance.
Knowing how to work with them can lead to better decision-making and problem-solving in various fields.

What strategies can be used to identify and work with equivalent fractions?

Developing strategies for working with equivalent fractions is key to success in math. Techniques include using visual aids, recognizing common factors, and practicing conversions. These skills improve problem-solving abilities and help navigate complex math challenges.
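The two mechanical operations described above (multiplying both parts of a fraction by the same number, and dividing both parts by their greatest common factor) can be sketched in a few lines of Python. This is an illustrative sketch; the function names are my own, not from the article:

```python
from math import gcd

def scale(numerator, denominator, k):
    """Make an equivalent fraction by multiplying both parts by k."""
    return numerator * k, denominator * k

def simplify(numerator, denominator):
    """Reduce a fraction to lowest terms by dividing both parts
    by their greatest common factor (GCF)."""
    g = gcd(numerator, denominator)
    return numerator // g, denominator // g

print(scale(1, 2, 4))   # -> (4, 8): 1/2 and 4/8 are equivalent
print(simplify(8, 12))  # -> (2, 3): both parts divided by the GCF, 4
```

Both operations leave the fraction's value unchanged, which is exactly what makes the two forms "equivalent".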
python-deltasigma v0.2.2 documentation

Source code for deltasigma._zinc

# -*- coding: utf-8 -*-
# _zinc.py
# The zinc function.
# Copyright 2013 Giuseppe Venturini
# This file is part of python-deltasigma.
#
# python-deltasigma is a 1:1 Python replacement of Richard Schreier's
# MATLAB delta sigma toolbox (aka "delsigma"), upon which it is heavily based.
# The delta sigma toolbox is (c) 2009, Richard Schreier.
#
# python-deltasigma is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# LICENSE file for the licensing terms.

"""This module provides the zinc() function which calculates the magnitude
response of a cascade of comb filters.
"""

import numpy as np


def zinc(f, m=64, n=1):
    """Calculate the magnitude response of a cascade of ``n`` ``m``-th
    order comb filters.

    The magnitude of the filter response is calculated mathematically as:

    .. math::

        \\left|H(f)\\right| = \\left|\\frac{\\mathrm{sinc}(m f)}{\\mathrm{sinc}(f)}\\right|^n

    Parameters:

    f : ndarray
        The frequencies at which the magnitude response is evaluated.
    m : int, optional
        The order of the comb filters.
    n : int, optional
        The number of comb filters in the cascade.

    Returns:

    HM : ndarray
        The magnitude of the frequency response of the cascade filter.
    """
    return np.fabs(np.sinc(m * f) / np.sinc(f)) ** n
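As a quick sanity check of the formula (this snippet is my own, not part of the module), the cascade response is unity at DC and has nulls at integer multiples of 1/m. Note that `np.sinc` is the normalized sinc, sin(pi x)/(pi x):

```python
import numpy as np

def zinc(f, m=64, n=1):
    # Same formula as in the module: |sinc(m f) / sinc(f)|^n.
    return np.fabs(np.sinc(m * f) / np.sinc(f)) ** n

f = np.array([0.0, 1.0 / 64, 0.1])
hm = zinc(f, m=64, n=2)
assert np.isclose(hm[0], 1.0)        # unity gain at DC
assert np.isclose(hm[1], 0.0)        # null at f = 1/m
```

The nulls at multiples of 1/m are what make comb-filter cascades useful as decimation filters in delta-sigma converters.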
Grammar, context-sensitive

From Encyclopedia of Mathematics

grammar of direct components, context grammar, grammar of components

A special case of a generative grammar (cf. Grammar, generative) each rule of which has the form given in Figure g044790a (not reproduced in this copy). The set of all segments of the last string of the derivation, obtained by "expanding" the non-terminal symbols — or, in other words, "originating" from (non-terminal) vertices of the tree — forms a system of components of this string after all the one-point segments have been added (cf. Syntactic structure); hence also the name "grammar of components". If all the one-point segments are also obtained by the replacement of the occurrences of non-terminal symbols, it is possible to obtain a marked system of components by assigning to each component, as marks, the non-terminal symbols from the occurrences of which it "originates". Thus, in the example above, the following marked system of components is obtained (here the boundaries of the components are shown by parentheses, while the marks follow the right parenthesis). The assignment of the components to the strings of marked systems forms the foundation of the linguistic applications of context-sensitive grammars. Thus, a grammar whose rules include (among others)

The mathematical significance of context-sensitive grammars stems, first and foremost, from the fact that the languages they generate (the so-called context-sensitive languages) are a simple subclass of the class of primitive recursive sets: the class of context-sensitive languages coincides with the class of languages recognized by linearly-bounded Turing machines with one tape and one head (cf. Turing machine).
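As a concrete illustration (my own example, not from the article), the language {a^n b^n c^n : n >= 1} is the textbook context-sensitive language that is not context-free; membership is easy to test directly, even though the language needs a linear bounded automaton rather than a pushdown automaton to recognize:

```python
def is_anbncn(s):
    """Check membership in {a^n b^n c^n : n >= 1}, the standard example
    of a context-sensitive (but not context-free) language."""
    n = len(s) // 3
    if n == 0 or len(s) != 3 * n:
        return False
    return s == "a" * n + "b" * n + "c" * n

assert is_anbncn("abc")
assert is_anbncn("aaabbbccc")
assert not is_anbncn("aabbc")   # letter counts don't match
assert not is_anbncn("abcabc")  # right counts, wrong order
```

That such membership checks run in linear space is the concrete content of the equivalence with linear bounded automata stated above.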
"Concrete" numerical sets often turn out to be context-sensitive languages when ordinary methods of coding natural numbers are applied (these include, for example, the set of perfect squares, the set of prime numbers, the set of decimal approximations of the number

For each context-sensitive grammar it is possible to construct an equivalent left-context (or right-context) sensitive grammar, i.e. a context-sensitive grammar all rules of which have the form (cf. Grammar, context-free). The class of context-sensitive languages is closed under union, intersection, concatenation, truncated iterations, and permutations; it is not known if it is closed under complementation.

Complexity of derivation. The time complexity (number of elementary derivation steps) of a derivation in a context-sensitive grammar is bounded from above by an exponential function. There exist languages generated by a context-sensitive grammar with time complexity of order there exists a context-sensitive grammar equivalent to it. It can be effectively constructed if

Algorithmic problems. If a certain class of languages contains even one context-sensitive language, and if for at least one context-sensitive language

See also Grammar, context-free. See also Formal languages and automata.

The language

With respect to context-free languages the main achievement of the past decade has been the solution of the problem on the closure of context-sensitive languages under complementation. It was shown by N. Immerman [a3] and (independently) R. Szelepcsényi [a4] that the complement of a context-sensitive language is again context sensitive. In fact, the result holds for general classes of languages recognized by space-bounded non-deterministic Turing machines; the context-sensitive languages being the class of languages recognized by non-deterministic linear bounded automata being a typical

References:

[a1] J.E. Hopcroft, J.D.
Ullman, "Introduction to automata theory, languages and computation", Addison-Wesley (1979)

[a2] H.R. Lewis, C.H. Papadimitriou, "Elements of the theory of computation", Prentice-Hall (1981)

[a3] N. Immerman, "Nondeterministic space is closed under complementation", Proc. 3rd IEEE Conf. Structure in Complexity Theory (Georgetown, June), IEEE (1988) pp. 112–115

[a4] R. Szelepcsényi, "The method of forcing for nondeterministic automata", EATCS Bulletin, 33 (1987) pp. 96–100

How to Cite This Entry:
Grammar, context-sensitive. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Grammar,_context-sensitive&oldid=18208

This article was adapted from an original article by A.V. Gladkii (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
Learning Resources for Machine Learning: A Comprehensive List

Machine learning and data science are very technical fields. While the payoff for those who become masters of the trade is potentially very high, finding the right guidance and learning resources is crucial for succeeding in the field. On this page, you’ll find a list of the best resources I’ve come across to learn data science and machine learning. The resources include online courses, tutorials, Youtube channels, and books. I’ve grouped the resources by topics and classified them by level ranging from beginner to advanced.

Disclosure: Some of the links in the following sections are affiliate links. This means I may earn a small commission at no additional cost to you if you decide to purchase. As an Amazon affiliate, I earn from qualifying purchases. I could be an affiliate for many online education products. I’ve specifically chosen to partner with the providers of courses and books that I recommend based on my own experience. By using my links, you help me provide information on this blog for free.

Online Courses

There are tons of courses on machine learning on the web. Unfortunately, I don’t have the time to test all of them. I primarily rely on Coursera, Pluralsight, and Datacamp for continuous learning because I’ve repeatedly had great experiences with their courses and they are reasonably priced. Accordingly, my recommendations will focus on these platforms. Coursera offers a great balance between lectures and hands-on labs. Datacamp is mainly hands-on. The video courses on Pluralsight are great for understanding more advanced topics.

General Machine Learning

The New Machine Learning Specialization by Andrew Ng on Coursera Level: Beginner Prerequisites: High school math and basic programming skills In June 2022 Andrew Ng and his team released a brand new machine learning specialization.
It is an improved and updated version of the immensely popular introductory Coursera machine learning course also taught by Andrew Ng. Andrew’s videos explaining machine learning are legendary for his clarity and ability to break down complex concepts. The videos come with quizzes and programming assignments. The new specialization also features programming assignments in Python. This is a very welcome change from the old course where the programming assignments were in Octave. Take the new “Machine Learning Specialization” on Coursera. Machine Learning Specialization by the University of Washington on Coursera Level: Beginner Prerequisites: High school math and basic programming skills The machine learning specialization teaches you the foundational concepts in a combination of videos, quizzes, and programming assignments. The explanations are kept relatively high-level, so you don’t need advanced math. The programming assignments are fairly easy compared to other courses and certainly not enough to prepare you for the real world. Overall, it is a great high-level introduction to the field. I highly recommend supplementing the course with end-to-end machine learning projects to solidify your learnings. Take the “Machine Learning Specialization” on Coursera. Machine Learning Fundamentals with Python on Datacamp Level: Beginner Prerequisites: High school math and basic programming skills The machine learning fundamentals skills track is probably the best program for absolute beginners who want to learn by doing. It really takes you by the hand and spoon-feeds you the concepts in short videos followed immediately by programming assignments and quizzes. Take the “Machine Learning Fundamentals with Python” on Datacamp. Deep Learning Deep Learning Specialization on Coursera Level: Beginner-Intermediate Prerequisites: High school math, basic Python programming skills. 
Basic knowledge of machine learning is helpful but not necessary The deep learning specialization also taught by Andrew Ng is one of the best introductions to deep learning and neural networks. The series of courses is very accessible even if you don’t have an advanced math background in calculus and statistics. You only need basic Python programming skills and high-school math. Andrew breaks the concepts down in a series of in-depth videos followed by quizzes. At the end of each week, you get to practice what you’ve learned in programming assignments in Python. Take the “Deep Learning Specialization” on Coursera. Advanced Machine Learning Specialization on Coursera Update March 2022: Coursera has decided to suspend all courses by Russian universities. Unfortunately, this means that the advanced machine learning specialization is no longer available. Level: Advanced Prerequisites: Good foundations in linear algebra, differential calculus, and basic statistics. Basic Python programming skills, an introductory machine learning course. This specialization is a tour through various machine learning techniques such as reinforcement learning and Bayesian methods. The courses are pretty technical. So a good math foundation and at least one of the introductory courses mentioned above or a similar machine learning background are highly recommended. If you have gone beyond the basics in machine learning and want to get a tour through its various subfields to decide which one to focus on, I recommend taking this specialization. Take the “Advanced Machine Learning Specialization” on Coursera. TensorFlow and Pytorch are the most important frameworks for building deep learning solutions. TensorFlow is the dominant framework in industry. If you want to get a deep learning job in industry, there is probably no way around TensorFlow. 
Building Machine Learning Solutions with TensorFlow Path on PluralSight Level: Beginner-Advanced Prerequisites: A basic understanding of neural networks and deep learning as well as Python programming. This learning path consists of 11 video courses taking you from beginner all the way to advanced TensorFlow professional. Take “Building Machine Learning Solutions with TensorFlow” on PluralSight. TensorFlow Advanced Techniques on Coursera Level: Intermediate Prerequisites: A basic understanding of neural networks and deep learning as well as Python programming. Familiarity with basic Tensorflow. If you’ve taken the Deep Learning Specialization, you’ll have everything you need. This specialization will turn you into a proficient user of TensorFlow. It teaches you how to build your own custom architectures with the functional API, override default settings, and define your own loss functions and classes. The latter courses focus on computer vision and generative deep learning. Take the “TensorFlow Advanced Techniques” specialization on Coursera. In recent years PyTorch has emerged as a serious contender to TensorFlow in the battle of deep learning frameworks. As of this writing, it is rapidly growing in the research community and also making inroads in industries such as autonomous driving. To be a future-proof deep learning engineer, definitely learn how to use it. Building Machine Learning Solutions with PyTorch Path on PluralSight Level: Beginner-Advanced Prerequisites: A basic understanding of neural networks and deep learning as well as Python programming. This learning path consists of 11 video courses taking you from beginner all the way to advanced PyTorch professional. Take “Building Machine Learning Solutions with PyTorch” on PluralSight. Reinforcement Learning Reinforcement Learning Specialization on Coursera Level: Intermediate-Advanced Prerequisites: Good foundations in calculus, and basic statistics are highly recommended.
I also recommend getting a foundational understanding of machine learning and some experience implementing machine learning algorithms in Python. In the reinforcement learning specialization on Coursera, you develop a foundational understanding of reinforcement learning. Overall, the specialization closely follows the classic textbook “Reinforcement Learning” by Sutton and Barto. If the book is a bit too theoretical and generally over your head because of the dense math, the course is a great choice. The instructors do a great job at making the concepts in the book more accessible and you’ll reinforce (pun intended) what you’ve learned with programming assignments. It still is a tough series of courses and you should definitely plan 2-4 months to get through it if you don’t have more than an hour or two to invest every day. Take the “Reinforcement Learning Specialization” on Coursera. MLOps & Machine Learning Engineering Being able to put a machine learning model into production and making sure that it runs reliably goes far beyond the ability to train models. MLOps and machine learning engineering combine DevOps, software engineering, and machine learning to productionize models. The following courses teach you those valuable skills. Machine Learning Engineering for Production Specialization on Coursera Level: Intermediate-Advanced Prerequisites: Good foundational understanding of machine learning, and some experience implementing machine learning algorithms and neural networks in Python, TensorFlow, or PyTorch. Basic knowledge about software engineering is also helpful. This course, co-taught by Andrew Ng, teaches you all about building machine learning pipelines, managing data, and deploying models into production. Take the “Machine Learning Engineering for Production Specialization” on Coursera. 
TensorFlow: Data and Deployment Specialization on Coursera Level: Intermediate-Advanced Prerequisites: Good foundational understanding of machine learning, and building neural networks in TensorFlow. This course teaches you how to build machine learning pipelines and deploy models into production using TensorFlow. It also covers how to deploy models to edge devices such as mobile phones and how to run them in Web browsers. Take the “TensorFlow: Data and Deployment Specialization” on Coursera. Machine Learning Engineering on PluralSight Level: Intermediate-Advanced Prerequisites: Good foundational understanding of machine learning. This learning path consists of 5 short video courses that give you a conceptual understanding of machine learning engineering and the important issues and best practices involved in deploying a model into production. Take “Machine Learning Engineering” on PluralSight. Hands-On Books The following books are applied machine learning books teaching you both concepts and code. I recommend working through them cover-to-cover if you are a beginner. If you already have some background in machine learning and look to learn something about specific topics, these books are valuable for working through selected chapters. The books can be purchased via Amazon or most booksellers. General Machine Learning An Introduction to Statistical Learning Level: Beginner Prerequisites: High school math and basic programming skills. Familiarity with basic statistics and mathematical notation is helpful. An Introduction to Statistical Learning is one of the best introductory textbooks on classical machine learning techniques such as linear regression. It was the first machine learning book I’ve bought and has given me a great foundation. The explanations are held on a high level, so you don’t need advanced math skills. Every chapter comes with code examples and labs in R. It is a great book to work through cover-to-cover. 
Applied Predictive Modeling Level: Beginner Prerequisites: High school math and basic programming skills. Familiarity with basic statistics and mathematical notation is helpful. Applied predictive modeling is one of my favorite books on machine learning foundations. Most machine learning books are either rigorous academic math-heavy textbooks that lack hands-on code examples, or they are a series of programming tutorials that are shallow on the theoretical foundations. Most books largely ignore the crucial steps that happen before modeling such as data preprocessing and feature engineering. Applied predictive modeling is one of the rare stand-outs that discusses the entire modeling process from start to finish, is rigorous in its treatment of algorithms, and has extensive practical examples in R. Python Machine Learning by Sebastian Raschka Level: Beginner Prerequisites: High school math and basic programming skills in Python. Sebastian Raschka has written a great introduction to applied machine learning. The book puts more emphasis on foundations and gaining a conceptual understanding of machine learning algorithms than most other applied books. At the same time, the book offers extensive examples in Python. The earlier parts of “Python Machine Learning” focus on classical machine learning while the latter parts contain an introduction to deep learning. Much like “Applied Predictive Modelling”, “Python Machine Learning” is one of the few books that also discuss data preprocessing and feature engineering. If you want a comprehensive resource that helps you understand machine learning algorithms from scratch while also teaching you machine learning practice in Python, this is the book to get. The latest edition runs at more than 700 pages. Deep Learning Deep Learning With Python by Francois Chollet Level: Beginner-Intermediate Prerequisites: High school math and basic programming skills in Python. Some basic understanding of machine learning is helpful. 
“Deep Learning With Python” is probably the most comprehensive applied deep learning book. Francois Chollet is the inventor of Keras, an extremely popular deep learning API that runs on top of TensorFlow. In addition to deep learning concepts, readers will also gain a thorough understanding of the working mechanics underlying Keras. It is a great book for beginners to work through, as well as for practitioners who want to develop production models with TensorFlow. Deep Learning with PyTorch by Stevens, Antiga, and Viehmann Level: Beginner-Intermediate Prerequisites: High school math and basic programming skills in Python. A basic understanding of neural networks and some experience constructing networks in TensorFlow is helpful. In recent years, PyTorch has emerged as a serious alternative to TensorFlow among frameworks for building deep learning models. For all of those who want to learn to build deep neural networks in PyTorch instead of TensorFlow, this book is a great option. While you can work through the book as a deep learning novice, I believe you will benefit more from it if you already have an understanding of neural networks. The focus clearly is mainly on the mechanics of PyTorch rather than in-depth explanations of neural networks. If you already know TensorFlow, PyTorch will be easier to pick up. Reinforcement Learning Mastering Reinforcement Learning with Python by Enes Bilgin Level: Intermediate Prerequisites: Good foundations in linear algebra, differential calculus, and basic statistics, as well as some familiarity with mathematical notation. Some experience with applied machine learning is highly recommended before picking up this book. The field of reinforcement learning is not yet as present in industry as deep learning and neural networks. Accordingly, the books on applied reinforcement learning are few and far between.
“Mastering Reinforcement Learning with Python” walks you through basic RL concepts such as sequential decision-making and value functions. The book teaches you how to implement these concepts in Python and TensorFlow and also discusses how to scale and parallelize implementations. Compared to the classic textbook on RL by Sutton and Barto, “Mastering Reinforcement Learning with Python” is more accessible and definitely much more practice-oriented. Overall, this book is a solid choice if you want to get started with Reinforcement Learning. Books on Machine Learning Theory The following books are academic textbooks. They all have been invaluable reference books for me that sit on the bookshelf next to my desk. You probably don’t want to read them cover-to-cover, but they are great for in-depth explanations of certain topics. Many of these books were written before the deep learning boom and strongly focus on classical machine learning techniques. I’ve classified most of the books as being for intermediate to advanced readers mainly because they require good mathematical foundations. But if you are a beginner, each of these books is a valuable reference along a beginner course or textbook to explore some topics more in-depth. General Machine Learning The Elements of Statistical Learning Level: Intermediate-Advanced Prerequisites: Good foundations in linear algebra, differential calculus, and basic statistics, as well as strong familiarity with mathematical notation. An introductory course to applied machine learning is highly recommended before picking up this book The Elements of Statistical Learning was written by a group of Stanford statistics professors (the same authors who wrote the “Introduction to Statistical Learning”). It is very dense and really goes in-depth on many statistical machine learning techniques. You need to have a strong math background to be able to follow many of the explanations. 
For a practicing data scientist, I recommend getting this book as a reference. Pattern Recognition and Machine Learning Level: Intermediate-Advanced Prerequisites: Good foundations in linear algebra, differential calculus, and basic statistics, as well as strong familiarity with mathematical notation, are a prerequisite. An introductory course to applied machine learning is highly recommended before picking up this book. Pattern Recognition and Machine Learning is another great reference book. It offers thorough explanations of some foundational topics such as probability distributions and many of the classical machine learning algorithms. I found Bishop’s explanations of kernel methods and kernel-based learning especially valuable. Machine Learning: A Probabilistic Perspective Level: Intermediate-Advanced Prerequisites: Good foundations in linear algebra, differential calculus, and basic statistics, as well as strong familiarity with mathematical notation, are a prerequisite. An introductory course to applied machine learning is highly recommended before picking up this book. This book is probably the most comprehensive and in-depth machine learning book on my bookshelf. Its first 200 pages are dedicated to foundations in statistics and probability. You’ll learn a lot about Gaussian processes and why they are fundamental to most ML models, as well as Bayesian statistics and various probability distributions. There are also chapters on issues that are related to machine learning but often left out of curricula, such as Markov models and Monte Carlo processes. If you work in research, this is definitely one of the best reference books to get your hands on. Deep Learning Deep Learning by Ian Goodfellow, Yoshua Bengio, et al. Level: Intermediate-Advanced Prerequisites: Good foundations in linear algebra, differential calculus, and basic statistics, as well as strong familiarity with mathematical notation, are highly recommended before attempting to read this book. 
An introductory course to applied machine learning would also be helpful. This is the classic textbook on deep learning and neural networks. It was written by some of the best researchers in the field. Like most other machine learning theory books, this features lots of mathematical notation to illustrate the concepts along with some pseudocode to explain algorithms. The first few chapters briefly cover the foundations that you need to have in place including the mathematics and machine learning basics. Further chapters introduce neural networks and the issues involved in training them such as regularization and optimization. The latter chapters discuss various network architectures in the field of computer vision and natural language processing as well as areas of ongoing research. The book was written in 2016, so newer network architectures like transformers are not covered. I have often referred to this book while developing machine learning architectures and also while writing many of the posts on this blog. I highly recommend getting “Deep Learning” as a reference if you develop deep learning-based systems. If you take the time to really understand what the authors are saying, it will undoubtedly make you a much better practitioner. Reinforcement Learning Reinforcement Learning: An Introduction by Sutton and Barto Level: Intermediate-Advanced Prerequisites: Good foundations in linear algebra, differential calculus, and basic statistics, as well as strong familiarity with mathematical notation, are highly recommended before attempting to read this book. An introductory course to applied machine learning would also be helpful. This is the classic textbook on Reinforcement Learning. It discusses all the foundational topics but it is quite theoretical and uses lots of mathematical notation to explain the concepts. It is a great reference book but I wouldn’t recommend it as a standalone resource for a complete beginner. 
Instead, I would encourage you to take the Reinforcement Learning specialization on Coursera, which closely follows the book, and get the book as a reference.

Blogs and YouTube Channels

The web is full of free content on machine learning. Here is a selection of blogs and YouTube channels that I found especially helpful.

Machine Learning Mastery

Machine Learning Mastery is probably the best and most comprehensive applied machine learning blog on the web today. The author, Jason Brownlee, has written hundreds of posts on applied machine learning with examples in Python.

SentDex is a YouTube channel and blog by Harrison Kinsley about Python programming. But Kinsley has also produced many videos and tutorials on machine learning with Python.

Towards Data Science

Towards Data Science is a Medium publication where many authors submit posts. Unfortunately, this also means that the quality varies dramatically. Some articles offer great explanations, but many of them are written by newbies, and the quality is often subpar.
David Wood I work in the Discrete Mathematics Research Group of the School of Mathematics at Monash University in Melbourne, Australia. I am also Deputy Director of the Mathematical Research Institute MATRIX. My research interests lie in discrete mathematics and theoretical computer science, especially structural graph theory, extremal graph theory, geometric graph theory, graph colouring, poset dimension, graph drawing, and combinatorial geometry. For recent news and events, see my mathstodon page.
The fathers of spectroscopy, a series: James Clerk Maxwell - LKI Consulting

Last post we covered the foundation on which spectroscopy was built, the work of Sir William Herschel, who discovered the infrared through his experiments. Continuing our journey, James Clerk Maxwell was a Scottish mathematician and scientist responsible for the classical theory of electromagnetic (EM) radiation, which was the first theory to describe electricity, magnetism, and light as different manifestations of the same phenomenon. Maxwell's equations for EM are recognized as the "second great unification in physics" (the first being the one realized by Isaac Newton), and were set out in his 1865 publication "A Dynamical Theory of the Electromagnetic Field". In his publication Maxwell demonstrated that electric and magnetic fields travel through space as waves moving at the speed of light (note that we will revisit light traveling as a wave in a later post). Importantly, the unification of light and electricity led to his prediction of the existence of radio waves. Maxwell had studied and commented on electricity and magnetism as early as 1855, when his paper "On Faraday's lines of force" was read to the Cambridge Philosophical Society. This paper presented a simplified model of Faraday's work about how electricity and magnetism are related. Importantly, Maxwell reduced all current knowledge into a linked set of differential equations with 20 equations and 20 variables. Later Oliver Heaviside reduced the complexity of Maxwell's theory down to four partial differential equations, known now collectively as Maxwell's Laws or Maxwell's Equations. Maxwell's quantitative connection between light and EM is considered one of the greatest accomplishments of 19th-century mathematical physics.
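For reference (these are not reproduced in the original post), Heaviside's four equations can be written in modern SI differential form as:

```latex
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
```

Taken together, these admit wave solutions propagating at speed $c = 1/\sqrt{\mu_0 \varepsilon_0}$, which is the quantitative link between electromagnetism and light that Maxwell identified.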
Before we go, we are leaving you all with one last fun fact about Maxwell… Maxwell was interested in the study of color vision, and from 1855 to 1872 he published a series of investigations concerning the perception of color, color-blindness, and color theory. Maxwell used then-recently developed linear algebra to prove Thomas Young's theory that two complex lights (i.e., composed of more than one monochromatic light) can look alike while being physically different because colors are perceived through three channels in the eyes (i.e., the trichromatic color theory). He did this by inventing color matching experiments and colorimetry. These experiments further extended to Maxwell's development of color photography, whereby he proposed that if: 1. Three black-and-white photographs of a scene were taken through red, green, and blue filters, and 2. Transparent prints of the images were projected onto a screen using three projectors equipped with the same filters and superimposed on the screen… then 3. The result would be perceived by the human eye as a complete reproduction of all the colors in the scene. This series will be continued with Max Planck…
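Maxwell's three-filter recipe is essentially how digital color images are stored today. As a rough illustration (the tiny 2x2 "plates" below are made-up values, not real photographic data), superimposing the three filtered exposures amounts to using them as the red, green, and blue channels of a single image:

```python
# Hypothetical grayscale exposures (values in [0, 1]) taken through red,
# green, and blue filters -- tiny made-up 2x2 "plates", not real data.
red_plate   = [[1.0, 0.2], [0.0, 0.5]]
green_plate = [[0.1, 0.9], [0.0, 0.5]]
blue_plate  = [[0.0, 0.1], [1.0, 0.5]]

# Superimposing the three projections amounts to using the plates as the
# R, G, B channels of one image: each pixel becomes an (r, g, b) triple.
color_image = [
    [(r, g, b) for r, g, b in zip(row_r, row_g, row_b)]
    for row_r, row_g, row_b in zip(red_plate, green_plate, blue_plate)
]

print(color_image[1][1])  # bottom-right pixel: (0.5, 0.5, 0.5), a mid gray
```

A pixel bright in all three plates renders white, one dark in all three renders black, and mixtures reproduce the intermediate hues — exactly the superposition Maxwell described.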
{"url":"https://lkiconsulting.com/2023/07/the-fathers-of-spectroscopy-a-series-james-clerk-maxwell/","timestamp":"2024-11-04T21:19:45Z","content_type":"text/html","content_length":"69578","record_id":"<urn:uuid:ec24a775-7ef7-4ab2-9eb0-3e7eb20aa18d>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00773.warc.gz"}
Overview and Objective In this lesson, students will construct three-dimensional figures using unit cubes on the isometric grid to generate the isometric views of the figures. It is not easy to draw 3D objects on paper. To do this, we create a view of the object on the paper (the 2D plane). This is called a projection. Using an isometric grid can help us to create the illusion of depth on the paper. Change the grid on Polypad using the toolbar on the right side of the canvas. Isometric drawings (isometric projections) are often used by designers, engineers, and illustrators who specialize in technical drawings. Start by inserting a cube on the isometric canvas and rotating it to show students the different side views. Use the ruler-pen option to draw the outline of the cube. You may ask students to draw different sizes of cubes using the ruler-pen tool. After the drawings, students can use the rhombus or the custom polygon tools to fill the faces of the cube. They might want to use different colors to emphasize the top and side views of the cube. Main Activity Clarify with the students that, during this activity, the blue rhombus from the polygons toolbar will be used to create the unit cubes. Invite them to draw an L-shaped 3D figure using the unit cubes on the isometric grid. Students may either clone the unit cubes as needed or just clone the faces of the cube. You may show them that sometimes the cubes' faces coincide and, when they have selected the entire figure, it can be seen differently. To avoid this, they can construct their figure using individual cubes. Discuss with students how many unit cubes are used to construct the L shape. How many are in the first row and how many in the second? Use this Polypad to demonstrate another example. When a diagram showing the number of cubes in each grid is given, the respective 3D object can be constructed using these numbers.
To let students explore the orientation of the cubes on their own, a simpler number diagram will be used throughout the activity. You may want to clarify the number of cubes in each row and column using the example before sharing the Polypad with the students. Share the same Polypad with students and invite them to work on creating 3D objects on the isometric grid. Discuss how the diagrams showing the number of cubes in each grid help students construct the object. Share some student work with the class. Invite students to share which approaches they found most useful when drawing the 3D figures. You may also let students work in pairs and create number diagrams for each other to construct additional 3D figures. You may end the lesson with an isometric puzzle. Show the drawing to students and ask how many cubes they see, then ask the same question after rotating the whole figure upside down. Isometric drawings can be tricky to the eye, which is why illustrators usually also add side views to their drawings to make sure that their target audience fully understands them. Isometric Drawings: Part 2 focuses on the side views and orthographic projections. Support and Extension For students ready for additional extension in this lesson, consider asking them to create the 3D L-shape figure drawing and fold a possible net of the L prism. Ask students who need additional support with these ideas to construct 2 x 2 and 3 x 3 cubes. Polypads for This Lesson To assign these to your classes in Mathigon, save a copy to your Mathigon account. Click here to learn how to share Polypads with students and how to view their work. Drawing 3D Part 1 – Polypad Drawing 3D Part 1 Answers – Polypad Isometric Puzzle – Polypad
{"url":"https://polypad.amplify.com/de/lesson/isometric-1","timestamp":"2024-11-02T18:38:12Z","content_type":"text/html","content_length":"25951","record_id":"<urn:uuid:39d3d6e9-865e-409d-a9b7-d4257456259a>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00433.warc.gz"}
Lens Maker Equation Calculator Our lens maker equation calculator is a tool that helps to choose the appropriate parameters to obtain a specific focal length of the lens. You can change the material's geometric settings and the refractive index. Continue reading to learn about lens design and applications and how you can use our lens calculator to determine the focal length. If you want to hear more about the light refraction mechanism, check out our Snell's law calculator. Why do we need lenses? • The human eye is a natural lens whose muscles control the focal length (they can change the shape of the lens). However, some people have eyes whose lenses do not focus light correctly, and therefore they need to use glasses - artificial lenses. • With the appropriate arrangement of lenses, we can construct microscopes that can magnify tiny objects and telescopes that can magnify objects that are far away. Check our thin lens calculator if you want to learn about the magnification of a simple lens and our telescope magnification calculator if you wish to learn more about how a telescope works. • Another application of lenses is a camera. Just like the eye muscles, a system of lenses can change its focal length (by sliding lenses along the camera) to focus the image on the camera film. Focal length calculator You can estimate the focal length of a lens in air using the mathematical formula below: 1/f = (n - 1) * (1/R1 - 1/R2 + (n - 1) * d / (n * R1 * R2)) where: • f is the focal length; • n is the refractive index of the lens material; • R1 is the radius of curvature of the lens surface closest to the light source; • R2 is the radius of curvature of the lens surface farthest from the light source; and • d is the thickness of the lens.
The above equation reduces to the simpler version if we assume that the lens is very thin (d = 0): 1/f = (n - 1) * (1/R1 - 1/R2) In most cases, lenses are thin enough to justify the use of the simplified formula. If you want to change the thickness of the lens, too, enter the appropriate number in the Lens thickness field of our calculator. We encourage you to check the numerical difference between both equations. Radii of curvature The radius of curvature can be either a positive or a negative number. Briefly, a spherical lens consists of two surfaces, left and right, each of which can be convex or concave. In our calculator, we have used the Cartesian sign convention: • Convex lens: left surface R1 > 0, right surface R2 < 0; • Concave lens: left surface R1 < 0, right surface R2 > 0.
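As a cross-check of the two formulas above, here is a minimal Python sketch. The function name and the sample lens parameters are our own, purely illustrative; setting d = 0 recovers the thin-lens result:

```python
def lensmaker_focal_length(n, R1, R2, d=0.0):
    """Focal length (in the same units as R1, R2, d) of a lens in air.

    Uses the thick-lens lensmaker's equation; with d = 0 it reduces to
    the thin-lens form 1/f = (n - 1) * (1/R1 - 1/R2). Signs follow the
    Cartesian convention described above (biconvex: R1 > 0, R2 < 0).
    """
    inv_f = (n - 1) * (1 / R1 - 1 / R2 + (n - 1) * d / (n * R1 * R2))
    return 1 / inv_f

# Symmetric biconvex glass lens, n = 1.5, |R| = 200 mm:
f_thin = lensmaker_focal_length(1.5, 200, -200)          # thin-lens limit
f_thick = lensmaker_focal_length(1.5, 200, -200, d=10)   # 10 mm thick

print(f_thin)   # 200 mm
print(f_thick)  # slightly longer, roughly 201.7 mm
```

Comparing `f_thin` and `f_thick` is exactly the "numerical difference between both equations" the article suggests checking: for this lens the 10 mm thickness shifts the focal length by under 1%.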
{"url":"https://www.omnicalculator.com/physics/lensmakers-equation","timestamp":"2024-11-02T00:07:05Z","content_type":"text/html","content_length":"407721","record_id":"<urn:uuid:f46540c1-709b-426a-8529-d532a50c11aa>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00066.warc.gz"}
Solve the system of equations below. y = 2x - 3, 8x - 5y = 1. A) (7, 11) B) (-2, -7) C) (-1/3, -3 2/3) D) (2 The answer is A, (7, 11). Substituting y = 2x - 3 into 8x - 5y = 1 gives 8x - 5(2x - 3) = 1, so -2x + 15 = 1, hence x = 7 and y = 2(7) - 3 = 11. slope = 1 Step-by-step explanation: To find the slope of the line, use the slope formula, $m = \frac{y_2-y_1}{x_2-x_1}$, in which $m$ represents the slope. The $x_1$ and $y_1$ represent the x and y values of one point, while the $x_2$ and $y_2$ represent the x and y values of another point. Thus, locate two points on the line. We can see that the line intersects (-1, 0) and (0, 1), so we can use them in the formula. (Any other two points that are also on the line will work too.) Substitute their x and y values into the appropriate places in the formula and solve: $m = \frac{1 - 0}{0 - (-1)} = \frac{1}{1} = 1$. Thus, the slope is 1.
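The substitution can be double-checked mechanically; this small Python snippet (ours, purely illustrative) just re-runs the arithmetic:

```python
# Substituting y = 2x - 3 into 8x - 5y = 1 by hand gives
# 8x - 5(2x - 3) = 1  ->  -2x + 15 = 1  ->  x = 7, y = 11.
# A quick check that (7, 11) satisfies both equations:
x, y = 7, 11
assert y == 2 * x - 3
assert 8 * x - 5 * y == 1
print("(7, 11) solves the system")
```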
{"url":"https://diemso.unix.edu.vn/question/solve-the-system-of-equations-belowbr-y2x-3-br-8x-5y1br-a-7-xz1z","timestamp":"2024-11-09T04:32:36Z","content_type":"text/html","content_length":"64143","record_id":"<urn:uuid:96fa5025-c80e-4741-89c7-00179ca6769f>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00884.warc.gz"}
The Schrödinger equation on the cheap April 17, 2021. A quick post explaining how matter waves lead to natural definitions of momentum and energy operators, and hence the Schrödinger equation. There is nothing new, just (I hope) a clear development of the topic. Light and matter We learnt in a previous post that Einstein’s famous formula $E = mc^2$ can also be written \[E^2 = m_0^2 c^4 + p^2 c^2,\] where $m_0$ is the rest mass of the object we’re considering and $p$ its momentum. If the object happens to be a massless particle, say a photon of light, then $m_0 = 0$ and we end up with the curious identity: \[E = pc.\] Now, it just so happens we have a different expression for the energy of a photon, also proposed by Einstein in 1905. Turns out you can make a battery from a light bulb and a lump of metal, but only when the light is blue enough; in this case, the light shines the electrons out of the metal and into a current. But if light delivered energy continuously, like we would expect for a continuous electromagnetic wave, then only the wattage should matter, not the colour. What gives? Einstein’s brilliant explanation was that light isn’t interacting with electrons as a continuous wave, but as a discrete particle whose energy is determined by colour! From the experimental results, he deduced that the energy per particle of light is \[E = hf,\] where $f$ is the frequency (oscillations per second) and $h = 6.63 \times 10^{-34} \text{ J s}$ is Planck’s constant. Using our two expressions for the energy, we find that the momentum of a photon \[p = \frac{hf}{c}.\] You can use this to compute how many laser pointers you would need to repel an incoming asteroid before it hits the earth (exercise left to the reader). Plane waves If we zoom out from the photon, or rather, measure it differently, say by getting it to diffract through some slits rather than dislodging individual electrons from a metal, it will be described by a wave. 
For simplicity, let's consider a single spatial dimension $x$. A plane wave is a simple sinusoidal displacement of some medium (e.g. air pressure, the surface of a body of water, electromagnetic fields) with amplitude \[\Psi (x, t) = \Psi_0 \sin \left[k(x - vt)\right].\] The constant $\Psi_0$ is just the maximum size of this displacement, but $k$ and $v$ require a bit more explanation. Here, $v$ is the speed of the wave, since a point of fixed $C = x - vt$ in time $\Delta t$ must move $\Delta x$ obeying \[C = x - vt = (x + \Delta x) - v (t + \Delta t) = C + \Delta x -v \Delta t \quad \Longrightarrow \quad \frac{\Delta x}{\Delta t} = v.\] If we take a snapshot of the wave at fixed time $t$, then it will repeat itself when the argument of the sine function is increased by $2\pi$, or \[k(x - vt) \mapsto k(x - vt) + 2\pi = k\left(x + \frac{2\pi}{k} - vt\right).\] Since $t$ is fixed, it follows that when we increment $x$ by $2\pi/k$, the wave repeats itself. In other words, $\lambda = 2\pi/k$ is the wavelength. By the same reasoning, if we freeze $x$ the wave repeats itself with a period in time, \[T = \frac{2\pi}{vk}.\] Since the frequency $f = 1/T$ is the inverse of the period, we find that frequency times wavelength equals speed: \[f\lambda = \frac{vk}{2\pi}\cdot \frac{2\pi}{k} = v.\] At this point, it will simplify things dramatically to use the exponential instead of the sine wave. Since Euler's marvellous formula tells us that \[e^{i\theta} = \cos\theta + i \sin\theta,\] we can replace the sinusoid with \[\Psi (x, t) = \Psi_0 e^{i k(x - vt)}.\] If we really want the sine, we just take the imaginary part. The momentum operator Let's now consider a photon, moving at speed $v = c$ and with momentum obeying \[p = \frac{hf}{c} = \frac{hf}{f \lambda} = \frac{h}{\lambda} = \frac{h}{2\pi}\cdot \frac{2\pi}{\lambda} = \hbar k,\] where we used $c = f\lambda$, $k = 2\pi/\lambda$, and defined a new constant $\hbar = h/2\pi$, called Planck's reduced constant.
This lets us rewrite the plane wave as \[\Psi(x, t) = \Psi_0 e^{i (p/\hbar)(x - ct)} = \Psi_0 e^{i(px - Et)/\hbar},\] since $E = pc$. In 1924, Louis de Broglie made the bold suggestion that matter could also be described as a wave, with momentum and energy obeying the same relation. If measured properly, these matter waves could exhibit wavelike behaviour, like bending around obstacles and interfering. This has been experimentally observed, not only with electrons, but with giant molecules called buckyballs! We won't be concerned with this, however. Instead, we would like to think about how to extract the momentum from a plane wave. One way is to rearrange $\Psi(x, t)$ to find $p$. Instead, we will define a procedure which simply pulls $p$ out. It is, as you have probably already guessed, simply the derivative with respect to $x$: \[\frac{\partial}{\partial x}\Psi(x, t) = \Psi_0 \frac{\partial}{\partial x}e^{i (px - Et)/\hbar} = \frac{ip}{\hbar} \Psi(x, t).\] The momentum operator $\hat{p}$ is simply defined as the operation which gives us $p$ (in front of the original function) without the extra constants $i$ and $\hbar$. More precisely, \[\hat{p} = -i\hbar \frac{\partial}{\partial x}.\] Often, we treat $t$ as fixed, so that the partial derivative becomes $d/dx$. One of the distinctive characteristics of waves is that the displacements add and subtract nicely. This is called the superposition principle. What happens with momentum then? A cute thing about this operator, as opposed to the algebraic expression which isolates $p$, is that we can apply it to any combination of plane waves.
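You can sanity-check the eigenvalue relation numerically by standing in a central finite difference for $\partial/\partial x$. The values of $p$, $E$, and $x_0$ below are arbitrary, and we set $\hbar = 1$ for convenience:

```python
import cmath

hbar = 1.0        # natural units, just for the check
p, E = 2.5, 3.7   # arbitrary momentum and energy

def psi(x, t=0.0):
    """Plane wave Psi_0 e^{i(px - Et)/hbar} with Psi_0 = 1."""
    return cmath.exp(1j * (p * x - E * t) / hbar)

# Apply p_hat = -i hbar d/dx via a central finite difference:
h = 1e-6
x0 = 0.8
dpsi_dx = (psi(x0 + h) - psi(x0 - h)) / (2 * h)
p_hat_psi = -1j * hbar * dpsi_dx

# The result is p times the original wave, to numerical precision:
assert abs(p_hat_psi - p * psi(x0)) < 1e-6
```

The assertion passes because $-i\hbar \cdot (ip/\hbar) = p$: differentiating brings down $ip/\hbar$, and the prefactor cancels everything but $p$.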
For instance, if we consider \[\Psi(x, t) = \Psi_1(x, t) + \Psi_2(x, t) = \Psi_{0}^{(1)} e^{i (p_1x - E_1t)/\hbar} + \Psi_0^{(2)}e^{i (p_2x - E_2t)/\hbar},\] then applying the momentum operator gives \[\hat{p}\Psi(x, t) = p_1 \Psi_1(x, t) + p_2 \Psi_2(x, t).\] No single nice momentum sits out the front of the whole wave (unless $p_1 = p_2$), but this tells us something important: such a combination doesn't have a well-defined momentum! The momentum will be uncertain. We might guess that the uncertainty depends on the relative size of the amplitudes $\Psi_{0}^{(1)}$ and $\Psi_{0}^{(2)}$, and this guess is correct. We won't pursue the generalisations (the uncertainty principle and the Born rule) here. The energy operator Let's end, briefly, with the Schrödinger equation. In 1925, Erwin Schrödinger went on vacation in the Swiss Alps, taking only de Broglie's thesis with him. By the end of his getaway, he had derived the fundamental equation of quantum mechanics. How did he do it? He was guided by many subtleties we won't care about, but the basic observation is simple: define an energy operator in the same way we defined the momentum operator. For a plane wave $\Psi(x, t) = \Psi_0 e^{i(px - Et)/\hbar}$, we have \[i\hbar \frac{\partial}{\partial t} \Psi(x, t) = E \Psi(x, t),\] suggesting we define the energy operator \[\hat{H} = i\hbar \frac{\partial}{\partial t}.\] For historical reasons, we use $\hat{H}$ for "Hamiltonian" instead of $\hat{E}$ for "energy". This is more or less it, but we will say a few more words to connect it to other versions of the Schrödinger equation you might encounter. The kinetic energy and momentum of a classical particle are related by \[E = \frac{1}{2}mv^2 = \frac{(mv)^2}{2m} = \frac{p^2}{2m}.\] Schrödinger simply guessed this held as an operator equation. Substituting the actual expressions in terms of partial derivatives: \[i\hbar \frac{\partial}{\partial t}\Psi = -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} \Psi.\] Of course, a particle can get energy from other places.
If it rolls around on a slope, for instance, there will be some potential energy $V$, and the classical energy is \[E = \frac{p^2}{2m} + V.\] Once again, we promote this to an operator. Schrödinger’s guess means that this is still identified with the time derivative, so \[i\hbar \frac{\partial}{\partial t}\Psi = \left(-\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} + \hat{V}\right)\Psi,\] where $\hat{V}$ is now promoted as well. Actually doing this promotion is technical and ambiguous, so we won’t worry about it. Anyway, the simplest and most elegant way to write the Schrödinger equation is just \[\hat{H}\Psi = i\hbar \frac{\partial}{\partial t}\Psi. \tag{1} \label{H}\] Although I’ve drawn a slightly facile analogy between momentum and energy, there is a fundamental difference. The momentum operator is defined as the derivative with respect to $x$, so applying the momentum operator will always be the same as differentiating. The energy operator is defined in a totally different way! For instance, we often take a classical expression for energy and promote stuff to operators. There is no need for this to equal the time derivative, so Schrödinger’s equation is telling us that not any old wavefunction $\Psi$ is allowed, just the ones that evolve according to (\ref{H}). Still, the whole shebang drops out of the plane wave! Thanks to J.A. for motivating discussion as usual!
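To see the whole shebang drop out of the plane wave concretely, here is a numerical check (ours, in natural units with $\hbar = m = 1$ and an arbitrary $p$) that a plane wave with $E = p^2/2m$ satisfies the free Schrödinger equation, using finite differences for both derivatives:

```python
import cmath

hbar, m, p = 1.0, 1.0, 2.5   # natural units, arbitrary momentum
E = p**2 / (2 * m)           # free-particle dispersion relation

def psi(x, t):
    """Plane wave Psi_0 e^{i(px - Et)/hbar} with Psi_0 = 1."""
    return cmath.exp(1j * (p * x - E * t) / hbar)

h = 1e-4
x0, t0 = 0.3, 0.7

# Left side: i hbar dPsi/dt (central difference in t)
lhs = 1j * hbar * (psi(x0, t0 + h) - psi(x0, t0 - h)) / (2 * h)

# Right side: -(hbar^2 / 2m) d^2 Psi/dx^2 (central difference in x)
d2psi = (psi(x0 + h, t0) - 2 * psi(x0, t0) + psi(x0 - h, t0)) / h**2
rhs = -hbar**2 / (2 * m) * d2psi

assert abs(lhs - rhs) < 1e-5
```

Both sides reduce to $E\,\Psi$: the time derivative brings down $-iE/\hbar$, and the second space derivative brings down $-p^2/\hbar^2$, so the dispersion relation $E = p^2/2m$ is exactly what makes the two sides agree.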
{"url":"https://hapax.github.io/assets/2021-04-17-schro/","timestamp":"2024-11-04T15:19:15Z","content_type":"text/html","content_length":"15324","record_id":"<urn:uuid:644f022e-e7df-493b-9dfa-400f99ded3a1>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00434.warc.gz"}
Jeremiah P. Ostriker Research Papers, Astrophysics 1990-1999 1963-1969 | 1970-1979 | 1980-1989 | 1990-1999 | 2000-2009 | 2010-2019 125. "Dust in QSO Absorption-Line Systems" (with D.G. York and M.S. Vogeley), Astrophysical Journal, 364, 405-411 (1990). 126. "On the Generation of a Bubbly Universe: A Quantitative Assessment of the CfA Slice" (with M. Strassler), Astrophysical Journal, 338, 579-593 (1989). 127. "Superclustering in the Explosion Scenario" (with D. Weinberg and A. Dekel), Astrophysical Journal, 336, 9-45 (1989). 128. The Column-Density Distribution of Lyman-α Clouds: A "Break" in f(N) Due to Neutral Core Formation (with R.C. Duncan), Astrophysical Journal, submitted/not published (1988). 129. "Voids in the Lyα Forest" (with R.C. Duncan and S. Bajtlik), Astrophysical Journal, 345, 39-51 (1989) 130. "Superclustering in the Explosion Scenario. II. Prolate Spheroidal Shells from Superconducting Cosmic Strings" (with D. Borden and D. H. Weinberg), Astrophysical Journal, 345, 607-617 (1989). 131. "An Estimate of the Velocity Correlation Tensor: Cosmological Implications" (with E. Groth and R. Juszkiewicz), Astrophysical Journal, 346, 558 (1989). 132. "The Effect of Galaxy Triaxiality on Globular Clusters" (with J. J. Binney and P. Saha), Monthly Notices of the Royal Astronomical Society, 241, 849-871 (1989). 133. "The Stability of a Collisionless Cosmological Shell" (with S. White), Astrophysical Journal, 349, 22-34 (1990). 134. "Self-Consistent Spherical Accretion Shocks Around Black Holes" (with A. Babul and P. Meszaros), Astrophysical Journal, 347, 59-67 (1989). 135. "Spherical Accretion onto Black Holes: A New Higher Efficiency Type of Solution with Significant Pair Production" (with M. Park), Astrophysical Journal, 347, 679-683 (1989). 136. "The Mach Number of the Cosmic Flow: A Critical Test for Current Theories" (with Y. Suto), Astrophysical Journal, 348, 378-382 (1990). 137. "What Produces the Ionizing Background at Large Redshift?" (with J.
Miralda-Escudé), Astrophysical Journal, 350, 1-22 (1990). 138. "Are Some BL Lacs Artefacts of Gravitational Lensing: A Reprise?" (with M. Vietri), Nature, 344, 45-47 (1990). 139. "Pulsar Populations and their Evolution" (with R. Narayan), Astrophysical Journal, 352, 222-246 (1990). 140. "Would a Galactic Bar Destroy the Globular Cluster System?" (with K. Long and L. Aguilar), Astrophysical Journal, 388, 362-371 (1992). 141. "The Universe in a Box: Thermal Effects in the Standard Cold Dark Matter Scenario" (with R.Y. Cen, A. Jameson, and F. Liu), Astrophysical Journal Letters, 362, L41-L45 (1990). 142. "Lyman-Alpha Depression of the Continuum from High-Redshift Quasars: A New Technique Applied in Search of the Gunn-Peterson Effect" (with E.B. Jenkins), Astrophysical Journal, 376, 33-42 (1991). 143. "Expansion-Cooled Lyman-Alpha Clouds" (with R.C. Duncan and E.T. Vishniac), Astrophysical Journal Letters, 368, L1-L5 (1991). 144. "The Velocity Dispersion of Giant Molecular Clouds II: Mathematical and Numerical Refinements" (with C.F. Gammie, and C.J. Jog), Astrophysical Journal, 378, 565-575 (1991). 145. "A Hydrodynamic Approach to Cosmology: Texture Seeded CDM and HDM Cosmogonies (with R. Cen, D.N. Spergel, and N. Turok), Astrophysical Journal, 383, 1-18 (1991). 146. "Galactic Disks, Infall and the Global Value of Omega" (with G. Toth), Astrophysical Journal, 389, 5-26 (1992). 147. "A Self-Consistent Field Method for Galactic Dynamics" (with L. Hernquist), Astrophysical Journal, 386, 375-397 (1992). 148. "HeI Absorption by Lyman-Alpha Clouds and Low-Redshift Lyman-Alpha Clouds" (with J. Miralda-Escudé), Astrophysical Journal, 392, 15-22 (1992). 149. "A Hydrodynamic Treatment of the Cold Dark Matter Cosmological Scenario" (with R. Cen), Astrophysical Journal, 393, 22-41 (1992). 150. "Statistics of the Cosmic Mach Number from Numerical Simulations of a CDM Universe" (with Y. Suto and R. Cen), Astrophysical Journal, 395, 1-20 (1992). 151. 
"The Relation of Local Measures of Hubble's Constant to Its Global Value" (with E.L. Turner and R. Cen), Astronomical Journal, 103, 1427-1437 (1992). 152. "A 3-D Hydrodynamic Treatment of the Hot Dark Matter Cosmological Scenario" (with R. Cen), Astrophysical Journal, 399, 331-344 (1992). 153. "The X-ray Properties of Optically Selected Quasars" (with A. Chokshi and E.L. Turner), Astrophysical Journal, (submitted). 154. "Light Element Nucleosynthesis: A False Clue?" (with N. Gnedin), Astrophysical Journal, 400, 1-20 (1992). 155. "A Tilted Cold Dark Matter Cosmological Scenario" (with R. Cen, N. Y. Gnedin and L.A. Kofman), Astrophysical Journal Letters, 399, L11-L14 (1992). 156. "The Cosmic Mach Number: Direct Comparisons of Observations and Models" (with M. Strauss and R. Cen), Astrophysical Journal, 408, 389-402 (1993). 157. "Galaxy Formation and Physical Bias" (with R. Cen), Astrophysical Journal Letters, 399, L113-L116 (1992). 158. "Post Collapse Evolution of Globular Clusters Dominated by Tidal Binary Heating" (with H. M. Lee), Astrophysical Journal, 409, 617-623 (1993). 159. "A Hydrodynamic Approach to Cosmology: Non-Linear Effects on Cosmic Backgrounds in CDM" (with R. Scaramella and R. Cen), Astrophysical Journal, 416, 399-409 (1993). 160. "Production of Soft Cosmic X-ray Background During Structure Formation in the Intergalactic Medium" (with A. Loeb), Astrophysical Journal, (resubmitted). 161. "A Cosmological Hydrodynamic Code Based on the Total Variation Diminishing Scheme" (with D. Ryu, H. Kang, and R. Cen), Astrophysical Journal, 414, 1-19 (1993). 162. "A Hydrodynamic Treatment of the Tilted Cold Dark Matter Cosmological Scenario" (with R. Cen), Astrophysical Journal, 414, 407-420 (1993). 163. "CDM Cosmogony with Hydrodynamics and Galaxy Formation: The Evolution of the IGM and Background Radiation Fields" (with R. Cen), Astrophysical Journal, 417, 404-414 (1993). 164.
"CDM Cosmogony with Hydrodynamics and Galaxy Formation: Galaxy Properties at Redshift Zero" (with R. Cen), Astrophysical Journal, 417, 415-426 (1993). 165. "A Hydrodynamic Approach to Cosmology: The Primeval Baryon Isocurvature Model" (with R. Cen, and P.J.E. Peebles), Astrophysical Journal, 415, 423-444 (1993). 166. "A Hydrodynamic Treatment of the Cold Dark Matter Cosmological Scenario with a Cosmological Constant" (with R. Cen, and N. Gnedin), Astrophysical Journal, 417, 387-403 (1993). 167. "Strong Gravitational Lensing Statistics as a Test of Cosmological Scenarios" (with R. Cen, J. R. Gott, and E. L. Turner), Astrophysical Journal, 423, 1-11 (1994). 168. "Testing the Gravitational Instability Hypothesis" (with A. Babul, D.H. Weinberg, and A. Dekel), Astrophysical Journal, 427, 1-24 (1994). 169. "Hot Gas in the CDM Scenario: X-ray Clusters from a High Resolution Numerical Simulation" (with H. Kang, R. Cen, and D. Ryu), Astrophysical Journal, 428, 1-16 (1994). 170. "X-Ray Clusters from a High Resolution Hydrodynamic PPM Simulation of the CDM Universe" (with G. Bryan, R. Cen, M.L. Norman and J.M. Stone), Astrophysical Journal, 428, 405-418 (1994). 171. "A Hydrodynamic Approach to Cosmology: the Mixed Dark Matter Cosmological Scenario" (with R. Cen), Astrophysical Journal, 431, 451-477 (1994). 172. "X-ray Clusters in a CDM+Λ Universe: A Direct Large-Scale, High Resolution, Hydrodynamic Simulation" (with R. Cen), Astrophysical Journal, 429, 4-21 (1994). 173. "A Comparison of Cosmological Hydrodynamic Codes" (with H. Kang, R. Cen, D. Ryu, L. Hernquist, A.E. Evrard, G. L. Bryan and M. L. Norman), Astrophysical Journal, 430, 83-100 (1994). 174. "A Piecewise Parabolic Method for Cosmological Hydrodynamics" (with G. L. Bryan, M. L. Norman, J. M. Stone and R. Cen), Computer Physics Communications, 89, 149-168 (1995). 175. "Hot Gas in Superclusters and Microwave Background Distortions" (with F. M. Persi, D. N. Spergel and R. Cen), Astrophysical Journal, 442, 1-9 (1995).
176. "Dynamics of Massive Black Holes as a Possible Candidate for Galactic Dark Matter" (with G. Xu), Astrophysical Journal, 437, 184-193 (1994). 177. "Massive Black Holes and Light Element Nucleosynthesis in a Baryonic Universe" (with N. Y. Gnedin and M. J. Rees), Astrophysical Journal, 438, 40-48 (1995). 178. "Tidal-shock Relaxation: A Re-examination of Tidal Shocks in Stellar Systems" (with T. Kundic), Astrophysical Journal, 438, 702-707 (1995). 179. "Can Standard Cosmological Models Explain the Observed Abell Cluster Bulk Flow?" (with M. A. Strauss, R. Cen, T. R. Lauer and M. Postman) Astrophysical Journal, 444, 507-519 (1995). 180. "Gravitational Collapse of Small Scale Structure as the Origin of the Lyman Alpha Forest" (with R. Cen, J. Miralda-Escude and M. Rauch), Astrophysical Journal (Letters), 437, L9 (1994). 181. "Testing Cosmogonic Models with Gravitational Lensing" (with J. Wambsganss, R. Cen, and E. L. Turner), Science, 268, 274-276 (1995). 182. "Using X-rays to Determine which Compact Groups are Illusory" (with L. Lubin and L. Hernquist), Astrophysical Journal Letters, 444, L61-64 (1995). 183. "Background X-ray Emission from Hot Gas in CDM and CDM+Λ Universes: Spectral Signatures" (with R. Cen, H. Kang and D. Ryu), Astrophysical Journal, 451, 436 (1995). 184. "Stable and Unstable Accretion Flows with Angular Momentum Near a Point Mass" (with D. Ryu, G. Brown, and A. Loeb), Astrophysical Journal, 452, 364 (1995). 185. "The Baryon Fraction and Velocity-Temperature Relation in Galaxy Clusters: Models versus Observations" (with L. M. Lubin, R. Cen and N.A. Bahcall), Astrophysical Journal Letters, 460, L10 186. "Topology of Large-Scale Structure by Galaxy Type: Hydrodynamic Simulations" (with J.R. Gott and R. Cen), Astrophysical Journal, 465, 499 (1996). 187. "The observational case for a low-density Universe with a non-zero cosmological constant" J.P. Ostriker and P. J. Steinhardt, Nature, 377, 600-602 (1995). 188.
"Patterns in Nonlinear Gravitational Clustering: A Numerical Investigation" (with T. Padmanabhan, R. Cen, and F. Summers), Astrophysical Journal, 466, 604-613 (1996). 189. "Hydrodynamic Simulations of Growth of Cosmological Structure: Summary and Comparisons among Scenarios" (with R. Cen), Astrophysical Journal, 464, 270 (1996). 190. "Cosmological Simulations Using Special Purpose Computers: Implementing P3M on GRAPE" (with P. P. Brieu and F. J. Summers), Astrophysical Journal, 453, 566-573 (1995). 191. "Testing Cosmological Models by Gravitational Lensing: I. Method and First Applications" (with J. Wambsganss and R. Cen), Astrophysical Journal, 494, 29-46 (1998). 192. "Galaxies Collide on the I-way: An Example of Heterogeneous Wide-Area Collaborative Supercomputing" M.L. Norman, P. Beckman, G. Bryan, J. Dubinski, D. Gannon, L. Hernquist, K. Keahey, J.P. Ostriker, J. Shalf, J. Welling and S. Young, The International Journal of Supercomputer Applications and High Performance Computing, Vol. 10, No. 2/3, 132 (1996). 193. "Destruction of the Galactic Globular Cluster System" (with O. Gnedin), Astrophysical Journal, 474, 223 (1997). 194. "The Lyα Forest from Gravitational Collapse in the Cold Dark Matter + Λ Model" (with J. Miralda-Escudé, R. Cen and M. Rauch), Astrophysical Journal, 471, 528-616 (1996). 195. "Are the Hubble Deep Field Galaxy Counts Whole Numbers?" (with W. Colley, J. Rhoads and D.N. Spergel), Astrophysical Journal Letters, 473, L63-L66 (1996). 196. "Effects of Weak Gravitational Lensing on the Determination of q0" (with J. Wambsganss, R. Cen and G. Xu), Astrophysical Journal Letters, 475, L81 (1997). 197. "The Protogalactic Origin for Cosmic Magnetic Fields" (with R.M. Kulsrud, R. Cen and D. Ryu), Astrophysical Journal, 480, 481 (1997). 198. "Reheating of the Universe and Population III" (with N. Gnedin), Astrophysical Journal Letters, 472, L63-L67 (1996). 199.
"The Globular Cluster Luminosity Function as a Distance Indicator: Dynamical Effects" (with O. Gnedin), Astrophysical Journal Letters, 487, 667-671 (1997). 200. "Reionization of the Universe and the Early Production of Metals" (with N. Gnedin), Astrophysical Journal, 486, 581-598 (1997). 201. "The Opacity of the Lyman-alpha Forest and Implications for Ω_baryon and the Ionizing Background" (with M. Rauch, J. Miralda-Escudé, W. Sargent, T. Barlow, D. Weinberg, L. Hernquist and R. Cen), Astro-ph/9612245, Astrophysical Journal, 489, 7 (1997). 202. "Dynamics of 'Small Galaxies' in the Hubble Deep Field" (with W. N. Colley, O. Y. Gnedin and J. E. Rhoads), Astrophysical Journal, 488, 579-584 (1997). 203. "On the Self-Consistent Response of Stellar Systems to Gravitational Shocks," O. Y. Gnedin and J.P. Ostriker, (Astro-ph/9902326), Astrophysical Journal, 513, 626 (1999). 204. "Cooling Flows and Quasars: Different Aspects of the Same Phenomenon? I. Concepts" (with L. Ciotti), Astrophysical Journal Letters, 487, L105-L108 (1997). 205. "The Galaxy Pairwise Velocity Dispersion as a Function of Local Density" (with M.S. Strauss and R. Cen), Astrophysical Journal, 494, 20-28 (1998). 206. "On the Clustering of Lyman Alpha Clouds, High Redshift Galaxies and Underlying Mass" (with R. Cen, S. Phelps and J. Miralda-Escudé), Astrophysical Journal, 496, 479 (1998) 207. "Using Cluster Abundances and Peculiar Velocities to Test the Gaussianity of the Cosmological Density Field" (W.A. Chiu, J.P. Ostriker and M.A. Strauss), Astrophysical Journal, 494, 479-490 208. "Tidal Shocking by Extended Mass Distributions," O. Gnedin, L. Hernquist and J.P. Ostriker, Astrophysical Journal, 514, 109 (1999). 209. "Effects of Tidal Shocks on the Evolution of Globular Clusters," O.Y. Gnedin, H-M Lee and J.P. Ostriker, Astrophysical Journal, 522, 935 (1999). 210. "Accuracy of Mesh Based Cosmological Hydrocodes: Tests and Corrections," R. Cen and J.P. Ostriker, Astrophysical Journal, 517, 31 (1999). 211.
"Where are the Baryons?" R. Cen and J.P. Ostriker, Astrophysical Journal Letters, 514, 1-6 (1999). 212. "The Santa Barbara cluster comparison project: a test of cosmological hydrodynamics codes" (with C.S. Frenk, S.D.M. White, P. Bode, R.J. Bond, G.L. Bryan, R. Cen, H.M.P. Couchman, A.E. Evrard, N. Gnedin, A. Jenkins, A.M. Khokhlov, A. Klypin, J.F. Navarro, M.L. Norman, J.M. Owen, F.R. Pearce, U-L Pen, M. Steinmetz, P.A. Thomas, J.V. Villumsen, J.W. Wadsley, M.S. Warren, G. Xu and G. Yepes) Astrophysical Journal, 525, 554-582 (1999). 213. "The Physical Origin of Scale Dependent Bias in Cosmological Simulations," M. Blanton, R. Cen, J.P. Ostriker and M.A. Strauss, Astrophysical Journal, 522, 590 (1999). 214. "The Effect of Cooling on the Density Profile of Hot Gas in Clusters of Galaxies: Is Additional Physics Needed?" (T. Suginohara and J.P. Ostriker), Astrophysical Journal, 507, 16 (1998). 216. "Hydrodynamics of Accretion onto Black Holes" (with M-G. Park), Adv. Space Res., 7, 951-960 (1998). 217. "Relaxation in stellar systems and the shape and rotation of the inner dark halo," S. Tremaine and J.P. Ostriker, MNRAS, 306, 622 (1999). 218. "Thermal Properties of Two-Dimensional Advection Dominated Accretion Flow" M-G Park and J.P. Ostriker, Astrophysical Journal, 527, 247-253 (1999). 220. "Cosmic Chemical Evolution" (R. Cen and J.P. Ostriker), Astrophysical Journal (Letters), 519, 109 (1999).
{"url":"https://www.astro.princeton.edu/people/webpages/jpo/pubs90.htm","timestamp":"2024-11-11T19:24:52Z","content_type":"text/html","content_length":"26895","record_id":"<urn:uuid:f0a24b71-4ff3-41bd-98c0-eb6a4834ea66>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00436.warc.gz"}
Nosov Valentin Alexandrovich
Ph.D., Senior Researcher

Valentin Aleksandrovich Nosov was born on July 29, 1940 in the city of Vologda. He graduated from the Department of Mechanics and Mathematics of MSU in 1963.

Area of scientific interests. Properties of finite automata in Boolean parametrization: some regularity criteria are obtained and wide classes of regular automata are constructed. Classification of problems by complexity: a number of problems in control systems are demonstrated to be hard. Combinatorial analysis of fuzzy sets: Hall's theory of transversals of systems of sets is extended to the case of fuzzy systems of sets.

V.A. Nosov lectures the course "Introduction to the theory of intelligent systems", holds special courses in algebraic cryptography, information security and coding, combinatorics and combinatorial optimization, and manages a special seminar in modern problems of cryptography. Among his disciples there are 10 Philosophy Doctors and one Doctor of Science. V.A. Nosov has more than 100 publications.

Email: v dot a dot nosov at intsys dot msu dot ru

Publications of Valentin Alexandrovich Nosov
Nosov V.A. Information Security. Teaching, training, research.
Nosov V.A. History of Cryptography in Lomonosov Moscow State University. A Historical Review.
Nosov V.A. On systems of distinct representatives for families of fuzzy sets. Intelligent Systems, vol. 3, no. 1–2, 1998, p. 265–276.
Nosov V.A. A criterion for the regularity of a Boolean non-autonomous automaton with separated inputs. Intelligent Systems, vol. 3, no. 3–4, 1998, p. 269–280.
Nosov V.A. On the construction of classes of Latin squares in the Boolean database. Intelligent Systems, vol. 4, no. 3–4, 1999, p. 307–320.
Alekseev V.B., Nosov V.A. NP-complete problems and their polynomial variants. Obozrenie prikladnoj i promyshlennoj matematiki (Review of Applied and Industrial Mathematics), vol. 4, no. 2, 1997, p.
165–193.
Gizunov S.A., Nosov V.A. On classification of all Boolean functions of four variables by Scheffer classes. Obozrenie prikladnoj i promyshlennoj matematiki (Review of Applied and Industrial Mathematics), vol. 2, no. 3, 1995, p. 440–467.
Nosov V.A. Combinatorics and theory of graphs. Textbook. Moscow State Institute of Electronics and Mathematics, Moscow, 1999. 116 p.
Nosov V.A. Principles of the theory of algorithms and analysis of their complexity. Course of lectures. Moscow, 1992. 140 p.
{"url":"http://intsys.msu.ru/en/staff/vnosov/","timestamp":"2024-11-10T01:51:50Z","content_type":"text/html","content_length":"14589","record_id":"<urn:uuid:ad60aea7-7c75-4aec-a86c-35eafae68f6e>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00593.warc.gz"}
Design and Validation of a Ten-Port Waveguide Reflectometer Sensor: Application to Efficiency Measurement and Optimization of Microwave-Heating Ovens Departamento de Tecnologías de la Información y Comunicaciones, Universidad Politécnica de Cartagena / Campus Muralla del Mar s/n, 30202 Cartagena, Murcia, Spain Departamento de Tecnología Electrónica, Universidad Politécnica de Cartagena / Campus Muralla del Mar s/n, 30202 Cartagena, Murcia, Spain Author to whom correspondence should be addressed. Submission received: 25 October 2008 / Revised: 19 November 2008 / Accepted: 20 November 2008 / Published: 3 December 2008 This work presents the design, manufacturing process, calibration and validation of a new microwave ten-port waveguide reflectometer based on the use of neural networks. This low-cost novel device solves some of the shortcomings of previous reflectometers such as non-linear behavior of power sensors, noise presence and the complexity of the calibration procedure, which is often based on complex mathematical equations. These problems, which imply the reduction of the reflection coefficient measurement accuracy, have been overcome by using a higher number of probes than usual six-port configurations and by means of the use of Radial Basis Function (RBF) neural networks in order to reduce the influence of noise and non-linear processes over the measurements. Additionally, this sensor can be reconfigured whenever some of the eight coaxial power detectors fail, still providing accurate values in real time. The ten-port performance has been compared against a high-cost measurement instrument such as a vector network analyzer and applied to the measurement and optimization of energy efficiency of microwave ovens, with good results. 1. 
Introduction Microwave-heating applications are slowly increasing their importance due to the recent applications to the food industry, sanitary sector, chemical and pharmaceutical engineering and polymer production, among others. Although this technology is mature and can offer several advantages, such as reduction of processing times, usage of clean energy and the resulting reduction of atmospheric pollution, it has to compete against cheaper energies based on combustion to generate heat. Therefore, one of the main goals of microwave heating technology in industrial applications is the monitoring of energy efficiency for the optimization and detection of malfunctions of the system. The power efficiency of a microwave oven can be easily related to the reflection coefficient at the feeding port. The conventional non-invasive measurement techniques for the reflection coefficient are often based on directional couplers that separate incident and reflected power within the waveguide. The comparison of both contributions allows the estimation of the magnitude and phase of the reflection coefficient. To measure the reflection coefficient, Vector Network Analyzers (VNAs) and Six-Port Reflectometers (SPRs) are by far the most widely used instruments. Calibration is an essential step to guarantee accurate measurements with such instruments, since noise, the phase error introduced by the cables and the non-linear behavior of detectors may lead to high error levels. VNAs are very high precision instruments that can be used at laboratory stages but, due to their high price, they are very seldom used at industrial sites. Additionally, the VNA configuration does not allow handling high power levels easily. Therefore, SPRs are often used and are the preferred sensors for monitoring the reflection coefficient both at high and low power levels.
The SPR is particularly interesting thanks to the use of power detectors instead of mixers and directional couplers, thus providing simpler circuits when compared to VNA configurations. Several techniques for the calibration of SPRs have been previously published [ ]. These studies consider aspects such as the dynamic range and non-linearity of power-measuring diodes [ ]. The most extended calibration technique is based on the use of four standard loads whose reflection coefficients are very precisely known [ ]. With this technique, the numerical solution for the calibration equations can be represented in the complex plane by means of three circles. The intersection of the circles provides the desired solution for the calibration process. Due to inherent noise and other measurement errors, the intersection point of the circles is extended in practice to a less precise area, and error minimization techniques must be used in order to reduce the influence of these errors. A new calibration method based on Fourier coefficient proximity for SPR parameters was presented in [ ] in order to reduce the calibration uncertainty. A calibration method based on the use of phase shifters and attenuators was also proposed in [ ]. In [ ], the calibration method is based on the analytical description of the communication system behavior and the measurements performed when two signals with a slight frequency difference are connected to the reflectometer's inputs. Several calibration techniques propose the characterization of the diode behavior in order to improve calibration and measurement performance [ ]. Some examples of linear approximations for diode response versus frequency and operation temperature can be found in [ ]. Other alternatives that use thermistors for temperature control and monitoring are described in [ ]. Calibration methods based on artificial neural networks (ANNs) were proposed in [ ].
This approach is advantageous for permitting automatic calibration procedures, although ANNs require a large number of known standards for training the network. However, automatic reflection coefficient generators can be easily built. In fact, some recent studies have shown that sample movement within microwave ovens may generate large variations of the reflection coefficient at the feeding port [ ] and that they can be used as low-cost impedance generators. In this work, a new ten-port waveguide reflectometer based on low-cost power detectors is presented. Eight coaxial probes are inserted within the waveguide in order to sample the standing wave present in the waveguide and thus to estimate the reflection coefficient. The device is analyzed with CST Microwave Studio electromagnetic (EM) commercial software in order to ensure monomode working conditions. The electronic design of the built power measuring circuits is shown. The calibration procedure, based on ANN learning techniques, is also described. The ten-port sensor has been built and validated by obtaining both magnitude and phase values of the reflection coefficient and comparing them to VNA performance, using the Industrial, Scientific and Medical (ISM) 2.45 GHz frequency for demonstration purposes. Finally, as an example of application, this sensor has been used to measure and optimize the energy efficiency of a microwave oven. The optimization process is based on placing the sample at the optimal position within the microwave cavity. An example of this procedure is described in this paper. 2. Basic Theory and design principles for a ten-port reflectometer 2.1. Six-port reflectometer review A six-port reflectometer consists of a simple circuit with two ports for signal input and output, and four ports with their corresponding power detectors that sample the standing wave within the transmission line. The input port is connected to the EM source and the output port is connected to the load.
This load can generate a mismatch that leads to reflections whose magnitude must be measured by the reflectometer. Figure 1 shows a simplified scheme of a six-port reflectometer, where the a[i] signals always go into the port and the b[i] signals go out from the port. Ports ranging from 3 to 6 are matched and therefore no reflected wave is considered there. In this scheme, Port 1 is the input port where the EM source is connected, whereas Port 2 is the output port of the sensor. The complex reflection coefficient (S[11]), also called the scattering parameter, provides the relationship between the incident wave amplitude (a[1]) and the reflected wave amplitude (b[1]) at the input port (Port 1 in Figure 1). The bigger the magnitude of S[11], the more energy is reflected back to the EM source, and thus the less energy is absorbed by the load. A numerical relationship can be obtained from the power detected at Ports 3 to 6 to determine the reflection coefficient in the load. This numerical expression combines the value of nine complex parameters, very sensitive to measurement noise and non-linearities, in order to determine the desired reflection coefficient value. The equation system shown in (1) relates the incident and reflected wave amplitudes at Port 2 to the sampling amplitudes. The calibration procedure of this six-port reflectometer is based on finding the numerical solution for this equation system, where the M[i] and N[i] are complex constants. As described earlier, several calibration methods and load standards can be used.

b[3] = M[3]·a[2] + N[3]·b[2]
b[4] = M[4]·a[2] + N[4]·b[2]
b[5] = M[5]·a[2] + N[5]·b[2]
b[6] = M[6]·a[2] + N[6]·b[2]    (1)

To solve this system, eight linear equations must be considered. A theoretical linear dependence of all important parameters could be obtained if noise, measurement perturbations and non-linear effects are not considered. However, the calibration procedure becomes a hard task under real conditions, where the above-mentioned effects cannot be neglected.
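To make the geometry of system (1) concrete, the sketch below simulates four ideal power readings |b[i]|² for a known load and recovers the reflection coefficient Γ = b[2]/a[2] by a grid search over the unit disk. The M[i] and N[i] values here are made-up placeholders standing in for constants that a real calibration with standard loads would supply, and the ideal model deliberately ignores the noise and detector non-linearity discussed in the text.

```python
import numpy as np

# Hypothetical calibration constants for ports 3..6 (stand-ins for the complex
# constants of equation system (1); real values come from measuring standards).
M = np.array([1.0 + 0.2j, 0.8 - 0.1j, 1.1 + 0.05j, 0.9 + 0.3j])
N = np.array([0.5 - 0.4j, 0.6 + 0.5j, 0.4 - 0.6j, 0.7 + 0.1j])

def port_powers(gamma, a2=1.0):
    """Ideal detected powers |b_i|^2 = |M_i*a2 + N_i*b2|^2, with b2 = gamma*a2."""
    return np.abs(M * a2 + N * gamma * a2) ** 2

def recover_gamma(powers, a2=1.0, grid=801):
    """Brute-force search of the unit disk for the gamma matching the powers."""
    re, im = np.meshgrid(np.linspace(-1, 1, grid), np.linspace(-1, 1, grid))
    g = re + 1j * im
    p = np.abs(M[:, None, None] * a2 + N[:, None, None] * g * a2) ** 2
    err = ((p - np.asarray(powers)[:, None, None]) ** 2).sum(axis=0)
    err[np.abs(g) > 1] = np.inf       # passive loads stay inside the unit disk
    i, j = np.unravel_index(np.argmin(err), err.shape)
    return g[i, j]

true_gamma = 0.30 + 0.20j
estimate = recover_gamma(port_powers(true_gamma))
```

Real six-port instruments invert these relations in closed form after calibration; the brute-force search is only meant to show that each detected power constrains Γ to a circle in the complex plane, and that four such circles pin the solution down.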
Additionally, this configuration cannot be used when any of the power detectors is broken, damaged or saturated. This is due to the fact that the six-port uses the minimum number of detecting ports required to find a precise solution. This may lead to delays both in laboratory measurements and industrial monitoring, since the power detector must be repaired or changed, and the whole device recalibrated, before being able to measure again. 2.2. Ten-port description In this work a new ten-port reflectometer configuration is presented. In this case, a standard WR-340 waveguide section (4.3 cm × 8.6 cm) has been used. Eight equally spaced coaxial probes have been inserted at the center of the wide wall of the waveguide. These coaxial probes sample the standing wave within the waveguide and therefore provide an estimation of the reflection coefficient. The output of these coaxial probes is connected to a non-linear low-cost power meter. Figure 2 shows the scheme of the proposed configuration, where Ports 1 and 2 are respectively connected to the power source and load. Ports ranging from 3 to 10 correspond to the ports of the coaxial probes. The main advantage of this measurement configuration is that only a very small part of the delivered power is absorbed by the sampling probes, which ensures that almost all the delivered power arrives at the sample. This is important mainly for high-power applications such as microwave ovens, radar, etc. This structure is therefore different from that used by conventional laboratory equipment such as VNAs. Figure 3 shows the measurement system diagram, where the power meters convert the radiofrequency power collected at the coaxial probes into a DC voltage. A personal computer and a data acquisition board are used in order to collect those voltages in real time.
Additionally, a VNA is used at Port 1 in order to both generate the incident microwave power and to measure the reference value of the reflection coefficient for calibration purposes. A variable load is employed at Port 2 in order to generate changing values for the reflection coefficient at Port 1. Both the VNA reference measurements and the voltages at each coaxial probe are stored and subsequently processed by a RBF neural network in order to accomplish the calibration process. Contrary to previous work, no linearization process for the power meters is carried out, since this procedure is supposed to be inherently provided by the neural network calibration process. The measurement procedure is accomplished as follows: The microwave source provides the incident wave (a[1]) to the port. The electromagnetic energy propagates along the waveguide reflectometer until it reaches the sample. A reflected wave (b[2]) is generated at the variable load thus generating a standing wave within the reflectometer. The power sensors sample the energy of this microwave standing wave and convert the detected power into voltage in a logarithmic way. Simultaneously, a VNA measures the reference value for the reflection coefficient. These voltages obtained from power detectors and the reference reflection coefficient value are then introduced to a neural network. This neural network learns the relationship between the reference value of the reflection coefficient (output of the network) and the output voltages from power detectors (fed as inputs to the neural net). 2.3. Simulation of ten-port performance and design principles The commercial CST Microwave Studio electromagnetic software has been used in order to test the performance of the ten-port reflectometer and to ensure that a monomode behavior is observed at Port 1, where the reflection coefficient is to be measured. 
In this case it is required that only the TE[10] mode, which is the first propagating or fundamental mode of the rectangular waveguide, propagates along the ten-port, whose rectangular cross section is a = 8.6 cm and b = 4.3 cm, and the working frequency is 2.45 GHz. Figure 4 shows the electric field distribution of the TE[10] mode: Figure 4a shows the electric field distribution at the cross section perpendicular to the propagation direction, and Figure 4b the distribution along the propagation direction. It can be observed that the maximum value of the electric field is located at the center of the waveguide and that the polarization of the electric field is oriented in the same direction as the coaxial probes. Therefore, a good coupling is obtained between the sampling coaxial probes and the electric field. Figure 5 shows the absolute value of the electric field along the waveguide. Although only one plane is shown, it should be borne in mind that the electric field keeps constant along the direction of the probes (the narrow dimension of the waveguide), as shown in Figure 4a. It can be observed that the introduction of metallic probes within the waveguide does not change the TE[10] mode spatial distribution. In the WR-340 waveguide, the guide wavelength at 2.45 GHz is 174.28 mm. Therefore, the ten-port length was fixed at 222.46 mm in order to contain at least this wavelength for the fundamental mode. Figure 6 shows the ten-port length and the distances between probes. In Figure 7, a prototype of the real implementation of the ten-port sensor in aluminum can be seen.
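The guide wavelength quoted above follows from the standard TE[10] dispersion relation, λg = λ0 / sqrt(1 − (λ0/λc)²) with cutoff λc = 2a. The short check below reproduces the ~174 mm figure for the cross section stated in the text at 2.45 GHz.

```python
import math

c = 299_792_458.0   # speed of light (m/s)
f = 2.45e9          # working frequency (Hz)
a = 0.086           # broad-wall dimension used in the text (m)

lam0 = c / f                                         # free-space wavelength
lam_c = 2 * a                                        # TE10 cutoff wavelength
lam_g = lam0 / math.sqrt(1 - (lam0 / lam_c) ** 2)    # guide wavelength

print(round(lam_g * 1000, 2), "mm")   # about 174 mm, matching the text
```

Because λg (174 mm) exceeds the free-space wavelength (about 122 mm), probe spacings must be judged against the guide wavelength, which is why the 222.46 mm device length is chosen to span at least one full λg.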
This was supposed to reduce the error during the reflection coefficient estimation. In addition, this also permits the sensor to reconfigure whenever a detector fails, still providing accurate results, as it will be shown later. Linear Technology LTC5530 (Schottky Diode RF Detector) power detectors were employed at each coaxial port. These detectors are non-linear and their working bandwidth covers from 300 MHz up to 7GHz. The input dynamic range allows power levels from -32 dBm to 10 dBm and the output DC-voltage range operates from 2.7V to 6V. Therefore, these power detectors transform the input radiofrequency power into a DC signal with a non-linear relationship. It was necessary to include these integrated circuit power detectors within a microstrip board, in order to provide a coaxial to microstrip transition and to make available the necessary DC bias and ground for proper detector working. Commercial Microwave Office software was employed to design the microstrip board. Figure 8 shows the corresponding schematic circuit and Figure 9 its layout. Circuit Cam software was used in order to manufacture the physical circuit. 2.5. Neural network for calibration As described previously, conventional calibration of six-port reflectometers is based on the use of standard loads and several mathematical relationships which deal with complex numbers and that usually require optimization tasks to reduce the influence of measurement noise and power detectors non-linearity on final sensor accuracy. On the contrary, in this work we present a different calibration process based on the usage of neural networks to relate the DC voltage values provided by each power detector to the reference reflection coefficient measurement. Therefore, a ZVRE Rohde & Schwarz VNA was employed during the calibration procedure both for providing the 2.45 GHz incident signal at Port 1 and to measure the reference value of the reflection coefficient also at Port 1. 
The power level used both in calibration procedures and measurements was set to 0.5 watts. A conventional RBF architecture has been used due to its simplicity. Figure 10 shows the structure of this kind of neural network. This neural architecture consists of a non-linear hidden layer and a linear output one, in which the contributions of the signals weighted by the Gaussian activation functions of the neurons are combined to provide the S[11] parameter at the working frequency. These functions are defined by the adaptive centroids and a constant variance value for all the Gaussians. The c[j] centroids determine the segmentation of the input space of the x input vector. The components x[i] of vector x are fed as inputs to the neural network. This input is processed by the Gaussians to give G[j], the output of the hidden level. The output of the network is obtained by means of the vectorial expression W·G in the output linear level, W being the weights of the linear layer. In this case, the input vector is formed by the eight voltage values provided by the LTC5530 detectors, whereas the output of the network estimates |S[11]|. A similar scheme is used for the estimation of the S[11] phase. RBFs are supervised neural networks whose structure provides a solution to the local interpolation of non-linear functions [ ]. This is the case of the mapping from detector voltages to S[11] considered in this work – see Figure 14 – since it does not have a linear behaviour. Although the activation of the neurons in the RBF model is carried out by radial basis functions, this model has a linear expression for the estimation of S[11]. Therefore, for each input vector x(k), the estimation of S[11] is given by equations (2) and (3):

$S_{11}(k) = \sum_{j=1}^{M} w_j \, G_j(k)$   (2)

$G_j(k) = \exp\left( -\sum_i \frac{(x_i(k) - c_{ji})^2}{\sigma_j^2} \right)$   (3)

where G[j](k) is the output of the j-th Gaussian radial function at input x(k), c[j] and σ[j] are the centre and standard deviation of the j-th Gaussian, w[j] is the weight value associated to the j-th neuron, k indexes the training example, and M is the number of neurons of the network.
The estimation of the weights is carried out by using the gradient descent algorithm to minimize the cost function described in Equation (4):

$H = \sum_{k=0}^{N} \left( S_{11}^{T}(k) - S_{11}^{M}(k) \right)^2$   (4)

where S[11]^T(k) and S[11]^M(k) represent the theoretical and measured values of the S[11] parameter, for both magnitude and phase, and N is the number of examples used for training the neural network. In conclusion, the application of the RBF neural network to the estimation of S[11] permits obtaining, after the training stage, the optimal values for the weights w[j]. In the operation stage, Equation (2) supplies the approximation of S[11] from the DC voltages provided by the power sensors. 3. Experimental set-up Figure 11 shows the experimental set-up implemented for the ten-port calibration and validation. A multimode 60 × 60 × 60 cm cavity has been employed for testing the sensor under low power conditions, and to provide different S[11] values for calibration purposes. Two DC supply sources were employed for biasing the power sensors. A power level of 0.5 watts was introduced by the ZVRE Rohde & Schwarz VNA at 2.45 GHz across Port 1. The neural network was implemented on a personal computer running Matlab™ Neural Network Toolbox™ routines. The personal computer was connected to the VNA by a USB GPIB communication board, and to the data acquisition board through conventional USB connectors. The error for RBF training was obtained by comparing the reference value provided by the VNA and the value computed by the neural network as shown in Equation (2). This error was minimized when optimal values were found after the optimization process. Figure 12 shows the circuit boards and common ground and biasing connections. The power-DC voltage conversion curve was measured by using the VNA as an RF signal generator, and a conventional multimeter for measuring the output voltage. This was needed since conversion curves were provided at 2 GHz but not at 2.45 GHz. Figure 13 shows the experimental power voltage conversion curve at 2.45 GHz.
This verified the correct implementation of the power sensor boards. Coaxial probes were designed to obtain a maximum input power level at the power sensor equal to -10 dBm. The coupling of the waveguide transmitted power into the coaxial probes was adjusted by changing the length of these coaxial probes within the waveguide. In this case, the coaxial probe coupling was adjusted to -17 dBm with a coaxial probe length equal to 16 mm. During the calibration process, the RBF neural network was trained with N = 255 training patterns. These patterns were formed by an input matrix with N × 8 elements, corresponding to the DC voltages from the power sensors, and an output matrix with dimensions N × 2, corresponding to the magnitude and phase of the reference value of S[11] provided by the VNA. A variable load provided different values for S[11]. The phase and magnitude of the training data for S[11] covered the ranges [0, 2π] and [0, 1], respectively. The number of Gaussian neurons was fixed to 60. This number was selected after some trials, because it gave an adequate balance between learning and generalization. The centers of the Gaussian neurons were chosen to be equally spaced, covering the whole range of the inputs. For all the neurons, σ[j] was set to 0.3466. That is equivalent to saying that the spread factor of the Gaussians is 0.1 inside the hypercube of the normalized inputs. 4. Results and Discussion Figure 14 shows the training data for the S[11] parameter provided by the VNA. These reference data are represented in the so-called Smith chart, which shows S[11] both in magnitude and phase. Once the network has been trained, the weights of the RBF neural network are optimized and the sensor is ready to estimate the value of S[11] only from the output values of the power sensors. In this case, 74 patterns that had not been used during the training process have been employed for validation purposes.
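A minimal numerical sketch of the calibration scheme above, with made-up data standing in for the real detector voltages and VNA references: a toy nonlinear target replaces the voltage-to-S[11] mapping, the 60 Gaussian neurons and σ[j] = 0.3466 follow the paper, and for brevity the output-layer weights are obtained by linear least squares rather than the gradient descent the authors use.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_layer(X, centers, sigma):
    # Gaussian activations G_j(x) = exp(-sum_i (x_i - c_ji)^2 / sigma^2), eq. (3)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / sigma**2)

def target(X):
    # Made-up smooth nonlinear mapping standing in for voltages -> |S11|
    return 0.5 + 0.4 * np.sin(2 * X[:, 0]) * np.cos(X[:, 1])

# 255 training patterns, as in the paper (2 toy inputs instead of 8 voltages)
X_train = rng.uniform(-1, 1, size=(255, 2))
y_train = target(X_train)

# 60 neurons; centers drawn over the input range (the paper uses an equal grid)
centers = rng.uniform(-1, 1, size=(60, 2))
sigma = 0.3466                                     # spread used by the authors
G = rbf_layer(X_train, centers, sigma)
W, *_ = np.linalg.lstsq(G, y_train, rcond=None)    # linear output layer, eq. (2)

# 74 held-out validation patterns, mirroring the paper's validation set
X_val = rng.uniform(-1, 1, size=(74, 2))
mean_abs_err = np.abs(rbf_layer(X_val, centers, sigma) @ W - target(X_val)).mean()
```

Because the hidden layer is fixed once the centers and spreads are chosen, fitting the output weights is a linear problem; gradient descent, as used by the authors, reaches the same minimum of the cost (4) iteratively.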
The estimated S[11] values provided by the ten-port reflectometer have been compared to the ones measured by the VNA, both in magnitude and phase. Figure 15 shows the absolute error obtained for this comparison, resulting in an average absolute error of 6.2 × 10⁻³ for the magnitude and of 2.3 × 10⁻² for the phase of S[11]. As can be observed in Figure 15, this error is very small and therefore very accurate measurements can be made with the proposed reflectometer. The phase error is bigger than the magnitude one, since the phase can vary from 0 to 2π radians whereas the magnitude variation ranges from 0 to 1. In order to evaluate the behaviour of the ten-port when some detectors are unable to measure because of electronics faults, several tests have been carried out by using only a reduced number of diodes as valid sensors to measure the magnitude and phase of S[11]. The output of the faulty detectors is defined as a constant value for all the training positions. The defective sensors were chosen in a random way. Some of the results obtained for the absolute error of |S[11]| when compared to VNA measurements are represented in Figure 16, where 255 training patterns and 74 validation measurements have been considered. From the curves in Figure 16, it can be observed that the ten-port structure is able to correctly predict S[11] even when some detectors do not work properly, provided that the working detectors allow an appropriate sampling of the stationary wave, as stated by the Shannon theorem [ ]. It is noticeable that the error observed when all the power sensors are working is lower than that obtained with 4 power sensors, which is the configuration of a conventional SPR. However, the error observed when only two power detectors are in use is not acceptable, and a minimum of 4 power detectors should be used when carrying out the measurements. This is an expected result, because at least four samples along its wavelength are needed to correctly sample a wave without losing information.
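The sampling argument can be illustrated with an idealized lossless model of the standing wave the probes see: the normalized power along the line is P(z) = |1 + Γ·exp(−2jkz)|² = (1 + |Γ|²) + 2·Re(Γ)·cos(2kz) + 2·Im(Γ)·sin(2kz), which is linear in three unknowns, so three (in practice four or more, for robustness) well-placed probes suffice while two cannot pin down Γ. This toy model ignores probe coupling, noise and detector non-linearity, which is exactly what the neural network absorbs in the real sensor.

```python
import numpy as np

lam_g = 0.17428                 # guide wavelength quoted in the text (m)
k = 2 * np.pi / lam_g

def probe_powers(z, gamma):
    """Idealized normalized power sampled at positions z: |1 + gamma*exp(-2jkz)|^2."""
    return np.abs(1 + gamma * np.exp(-2j * k * z)) ** 2

def recover_gamma(z, p):
    """Fit P(z) = c0 + c1*cos(2kz) + c2*sin(2kz); then gamma = (c1 + j*c2)/2."""
    A = np.column_stack([np.ones_like(z), np.cos(2 * k * z), np.sin(2 * k * z)])
    _, c1, c2 = np.linalg.lstsq(A, p, rcond=None)[0]
    return (c1 + 1j * c2) / 2

# Four probes spaced lambda_g/8 apart give 90-degree steps in the 2kz phase
z = np.arange(4) * lam_g / 8
gamma = 0.3 * np.exp(1j * 1.1)
est = recover_gamma(z, probe_powers(z, gamma))
```

With only two probes the design matrix is rank-deficient and infinitely many Γ values reproduce the readings, which is consistent with the unacceptable two-detector error reported above.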
Therefore, these results show that it is possible to reconfigure the sensor software structure, by training the RBF neural network again, in order to operate correctly even when some of the power sensors are broken or faulty. 5. Sensor application to microwave oven optimization Finally, as an application of the sensor capacities, the ten-port reflectometer has been used in order to optimize the behaviour of a 60 × 60 × 60 cm microwave oven. The sensor was used to monitor the value of the |S[11]| magnitude in real time for different positions of a 250 cm³ cylindrical water sample. Figure 17 shows the scheme of the employed cavity. As can be observed, the optimization process is focused only on one dimension, along the main axis of the oven. It gives a partial solution to the problem for the considered oven, because the optimization process is completed only when the three dimensions are considered by the algorithm. Although the symmetry of the process permits considering the main axis as the dimension in which the maximum variations of the electromagnetic field are produced, a more accurate result can be found for experimental platforms with a cart displacing along the 3 axes inside the oven. Optimization was carried out by properly placing the sample at the optimum position along the PTFE carrying system. Figure 18 shows the results obtained by the sensor for the |S[11]| magnitude when 0.5 watts were employed as the microwave power source and the sample was moved with the PTFE carrying system. The movement was carried out by placing the sample as near as possible to the WR-340 coupling aperture and then moving the sample away from that position. As can be observed in Figure 18, values lower than 0.2 can be obtained for the |S[11]| magnitude for sample distances around 46 cm away from the coupling aperture. On the contrary, other sample positions may lead to reflection coefficient values up to 0.75.
Therefore, the employment of this ten-port reflectometer allows, once it is calibrated, the monitoring of the reflection coefficient of microwave ovens in real time. This makes it possible to improve energy use and to protect the microwave sources from undesired reflections that may damage them. 6. Conclusions In this work the simulation and validation of a ten-port prototype microwave sensor has been presented. This low-cost sensor allows the reflection coefficient estimation in real time and has been used in order to optimize microwave multimode ovens. The ten-port has been implemented in waveguide technology, which is the usual transmission medium used in power applications. An RBF neural network has been used to learn the relationship between the power sensor output signals and the reference values of the reflection coefficient. Tests have shown that a very good estimation of the reflection coefficient can be obtained both for magnitude and phase, with accuracy comparable to high-cost laboratory equipment such as VNAs. Although results are shown for low power levels, the procedure can be readily extended to higher power levels by changing the coupling level of the coaxial probes within the waveguide. An advantage shown by the sensor versus conventional waveguide reflectometers is that it can be readjusted even if some of the power detectors fail, still providing precise results. Additionally, linearization processes or theoretical modeling of the device are no longer needed for calibration, since neural networks have the ability to learn and correct such problems. Figure 2. Ten-port scheme with input and output ports and sampling coaxial ports, d being the distance between consecutive coaxial probes within the waveguide. Figure 4. Electric field distribution for the TE[10] mode at the waveguide. (a) Perpendicular cross section. (b) Along the propagating direction. Figure 12. a) Manufactured power sensor boards. b) Rack disposition with biasing and ground connections.
© 2008 by the authors; licensee Molecular Diversity Preservation International, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/). Pedreño-Molina, J.L.; Monzó-Cabrera, J.; Lozano-Guerrero, A.; Toledo-Moreo, A. Design and Validation of a Ten-Port Waveguide Reflectometer Sensor: Application to Efficiency Measurement and Optimization of Microwave-Heating Ovens. Sensors 2008, 8, 7833-7849. https://doi.org/10.3390/s8127833
Calculate Distance at Acceleration Calculator for the length of the distance that is covered at a constant acceleration in a certain time. The time starts at the beginning of the acceleration, from the rest position. Please enter two of the three values and choose the units; the third value will be calculated. The formula is: distance = ½ · acceleration · time², i.e. d = ½ · a · t². Example: at an acceleration of 0.5 g, it takes about 20 seconds to cover the first kilometer and almost 29 seconds to cover the first two kilometers.
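The page's formula and example can be checked with a short script (assuming standard gravity g = 9.80665 m/s² for the "0.5 g" case):

```python
import math

G = 9.80665  # standard gravity in m/s^2 (assumed for the "0.5 g" example)

def distance_for_time(accel, t):
    """d = 1/2 * a * t^2, starting from rest."""
    return 0.5 * accel * t ** 2

def time_for_distance(distance, accel):
    """Invert the formula: t = sqrt(2 * d / a)."""
    return math.sqrt(2.0 * distance / accel)
```

For a = 0.5 g this gives roughly 20.2 s for the first kilometre and 28.6 s for the first two kilometres, matching the example.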
DMTH237 Discrete Mathematics II Macquarie University Department of Mathematics DMTH237 Discrete Mathematics II S1 2018 Assignment 1 1. (a) (i) How many degrees are there in 29/12 π radians? (Exact answer required.) (ii) How many radians are there in 3456°? (Express your answer in the form p/q π, with p/q a fraction in lowest terms.) (b) Give your answers in the form p/q π, with p/q a fraction in lowest terms. (i) Find x, satisfying -1/2 π ≤ x ≤ 1/2 π, such that sin x = sin(-83/19 π). (ii) Find x, satisfying 0 ≤ x ≤ π, such that cos x = cos(59/9 π). (c) Suppose sin x = -1/3, and π ≤ x ≤ 3/2 π. Use trigonometric identities to find exact values for cos x, for sin 2x, and for sin(x/2). 2. (a) If sin x = 1/3 and x is in the second quadrant, find an exact value for sin(x + 1/3 π). (b) Find a formula for sin(arccos x) without any trigonometric or inverse trigonometric functions in it. 3. (a) Bring the following matrix to reduced row-echelon form. 4. (a) I took my entourage to IKEA. I bought 4 chairs, a table, and 3 bookcases for $121. My lawyer bought 6 chairs, 2 tables and 2 bookcases for $148. My bodyguard bought 2 chairs, 3 tables, and a bookcase for $109. My agent bought 5 chairs, 4 tables, and 4 bookcases — how much did she spend? (b) Let (c) Find u·(3v + 4w) if u·v = 5 and u·w = 6. (d) Let 5. In this question, make sure that you follow the proper definition of the power L^n of a language L. Suppose that A = {1,2,3,4}. Suppose further that L, M ⊆ A^* are given by L = {λ, 1, 3} and M = {1, 12, 13, 31}. Determine how many distinct strings there are in each of the following languages. Write them all down when finite in number. (a) ML (b) L^2 M (c) M^2 + L^3 In each case, identify explicitly those strings that can be constructed in more than one way. 6. A file named randomstrings.txt can be downloaded from the ‘Assignments’ panel on the DMTH237 iLearn site.
It contains 100 binary strings, each of length 100 characters. For each of the regular expressions given below, find the line numbers of all the strings in that file which match the given regular expression, perhaps in more than one way. You may use whatever software you choose to answer this question; e.g., it can be done using the Find panel of most text-processing applications, though other specialised utilities may prove easier to use, but may first require you to learn how to adapt to the specific language employed to denote a regular expression. You should describe briefly what software you have used, and how you have used it to determine the required line numbers. Include a screenshot, or other graphic, to help the markers understand what you did to get your answers. a. Find matches to: (0 + 1)^* 0000000000 (0 + 1)^*. b. Find matches to: (0 + 1)^* (00000000 + 1111111)111 (0 + 1)^*. c. Find matches to: (0 + 1)^* 111111 (0 + 1)^12 00000 (0 + 1)^*. Here a numerical exponent (...)^k means “k consecutive matches to the ‘...’”. 7. Write down a regular expression for the language L consisting of all binary strings where every non-empty block of 0s has even length and every non-empty block of 1s has odd length. (Notice that the empty string is in this language.) 8. Suppose that S = {A, B, ..., F} is a set of states and I = O = {0,1} are the input and output alphabets for the Mealy machine described by the transition table below. a. Construct a state diagram for this Mealy machine. (Lay out the states so that there is no need to have transition arrows crossing each other.) b. Find the output string corresponding to the input string ‘0101011010011’, when starting in state A. In which state does the machine finish? c. Design a Moore machine which produces the same output as the Mealy machine in the previous question.
In removing any inaccessible states, discuss which state or states are valid as the initial state, choosing the one that keeps the total number of states to a minimum, if possible.
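Question 6 can be tackled programmatically. The sketch below translates the assignment's patterns into Python's `re` syntax under my own reading of the notation ('+' is alternation, (0 + 1)* is any binary string); the sample strings are made up for demonstration, not taken from randomstrings.txt.

```python
import re

# Question 6's patterns in Python syntax. The leading and trailing
# (0 + 1)* parts are handled implicitly by re.search.
patterns = {
    "a": re.compile(r"0{10}"),            # (0+1)* 0000000000 (0+1)*
    "b": re.compile(r"(0{8}|1{7})1{3}"),  # (0+1)* (00000000 + 1111111)111 (0+1)*
    "c": re.compile(r"1{6}[01]{12}0{5}"), # (0+1)* 111111 (0+1)^12 00000 (0+1)*
}

def matching_lines(lines, pattern):
    """Return the 1-based line numbers of strings containing a match."""
    return [i for i, s in enumerate(lines, start=1) if pattern.search(s)]
```

Reading the real file would then be `matching_lines(open("randomstrings.txt").read().split(), patterns["a"])`.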
Find the compound interest on Rs. $6000$ at $10\%$ per annum for one year, compounded half-yearly. Hint: Compound interest is the interest obtained by multiplying the initial amount by one plus the periodic interest rate, raised to the power of the number of compounding periods. Use the formula $A=P{{(1+\dfrac{i}{n})}^{n}}$; the compound interest equals the compounded amount minus the principal amount. Here P is the principal amount, i is the annual rate of interest and n is the number of times the interest is compounded per year. Complete step-by-step answer: Given: Principal, $P=6000\text{ Rs}$. Interest rate, $i=10\%$. Number of times the interest is compounded per year, $n=2$. Now use the formula $A=P{{(1+\dfrac{i}{n})}^{n}}$ and substitute the known values: $A=6000{{[1+\dfrac{10}{100\times 2}]}^{2}}$. Simplify: $A=6000{{[1+\dfrac{1}{20}]}^{2}}$. Taking the LCM inside the bracket gives $A=6000{{\left( \dfrac{21}{20} \right)}^{2}}$, so $A=6615\text{ Rs}$. ………………..(a) The compound interest is the amount left after subtracting the principal amount from the compounded amount. Therefore, using the value from equation (a): Compound Interest $=A-P=6615-6000=615\text{ Rs}$. Therefore, the compound interest on Rs. $6000$ at $10\%$ per annum for one year, compounded half-yearly, is Rs. $615$. So, the correct answer is “Option C”. Note: Compound interest is the result of compounding and is often referred to as “interest on interest”. Always check the compounding frequency, which may be yearly, half-yearly, quarterly, monthly or daily.
Use the standard formula $A=P{{(1+\dfrac{R}{100})}^{n}}$ for interest compounded yearly, and $A=P{{(1+\dfrac{R/2}{100})}^{2n}}$ when the interest is compounded half-yearly, where P is the principal, R is the annual rate and n is the time in years.
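The half-yearly calculation can be verified in a few lines of Python; the function below is a generic sketch of A = P(1 + r/n)^(nt) with the rate given as a decimal.

```python
def compound_amount(principal, annual_rate, years=1, periods_per_year=2):
    """A = P * (1 + r/n)^(n*t), with the annual rate r as a decimal."""
    n = periods_per_year
    return principal * (1 + annual_rate / n) ** (n * years)

amount = compound_amount(6000, 0.10)   # half-yearly: 6000 * 1.05^2 = 6615
interest = amount - 6000               # 615
```

Changing `periods_per_year` to 1 reproduces the yearly-compounding formula from the note (6600 for one year, i.e. interest of 600 instead of 615).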
98 Millimeter/Minute Squared to Attometer/Second Squared
Millimeter/Minute Squared [mm/min2] Output
98 millimeter/minute squared in meter/second squared is equal to 0.000027222222222222
98 millimeter/minute squared in attometer/second squared is equal to 27222222222222
98 millimeter/minute squared in centimeter/second squared is equal to 0.0027222222222222
98 millimeter/minute squared in decimeter/second squared is equal to 0.00027222222222222
98 millimeter/minute squared in dekameter/second squared is equal to 0.0000027222222222222
98 millimeter/minute squared in femtometer/second squared is equal to 27222222222.22
98 millimeter/minute squared in hectometer/second squared is equal to 2.7222222222222e-7
98 millimeter/minute squared in kilometer/second squared is equal to 2.7222222222222e-8
98 millimeter/minute squared in micrometer/second squared is equal to 27.22
98 millimeter/minute squared in millimeter/second squared is equal to 0.027222222222222
98 millimeter/minute squared in nanometer/second squared is equal to 27222.22
98 millimeter/minute squared in picometer/second squared is equal to 27222222.22
98 millimeter/minute squared in meter/hour squared is equal to 352.8
98 millimeter/minute squared in millimeter/hour squared is equal to 352800
98 millimeter/minute squared in centimeter/hour squared is equal to 35280
98 millimeter/minute squared in kilometer/hour squared is equal to 0.3528
98 millimeter/minute squared in meter/minute squared is equal to 0.098
98 millimeter/minute squared in centimeter/minute squared is equal to 9.8
98 millimeter/minute squared in kilometer/minute squared is equal to 0.000098
98 millimeter/minute squared in kilometer/hour/second is equal to 0.000098
98 millimeter/minute squared in inch/hour/minute is equal to 231.5
98 millimeter/minute squared in inch/hour/second is equal to 3.86
98 millimeter/minute squared in inch/minute/second is equal to 0.064304461942257
98 millimeter/minute squared in inch/hour squared is equal to 13889.76
98 millimeter/minute squared in inch/minute squared is equal to 3.86
98 millimeter/minute squared in inch/second squared is equal to 0.001071741032371
98 millimeter/minute squared in feet/hour/minute is equal to 19.29
98 millimeter/minute squared in feet/hour/second is equal to 0.32152230971129
98 millimeter/minute squared in feet/minute/second is equal to 0.0053587051618548
98 millimeter/minute squared in feet/hour squared is equal to 1157.48
98 millimeter/minute squared in feet/minute squared is equal to 0.32152230971129
98 millimeter/minute squared in feet/second squared is equal to 0.000089311752697579
98 millimeter/minute squared in knot/hour is equal to 0.190496761
98 millimeter/minute squared in knot/minute is equal to 0.0031749460166667
98 millimeter/minute squared in knot/second is equal to 0.000052915766944444
98 millimeter/minute squared in knot/millisecond is equal to 5.2915766944444e-8
98 millimeter/minute squared in mile/hour/minute is equal to 0.0036536626103555
98 millimeter/minute squared in mile/hour/second is equal to 0.000060894376839259
98 millimeter/minute squared in mile/hour squared is equal to 0.21921975662133
98 millimeter/minute squared in mile/minute squared is equal to 0.000060894376839259
98 millimeter/minute squared in mile/second squared is equal to 1.6915104677572e-8
98 millimeter/minute squared in yard/second squared is equal to 0.000029770584232526
98 millimeter/minute squared in gal is equal to 0.0027222222222222
98 millimeter/minute squared in galileo is equal to 0.0027222222222222
98 millimeter/minute squared in centigal is equal to 0.27222222222222
98 millimeter/minute squared in decigal is equal to 0.027222222222222
98 millimeter/minute squared in g-unit is equal to 0.0000027758941353288
98 millimeter/minute squared in gn is equal to 0.0000027758941353288
98 millimeter/minute squared in gravity is equal to 0.0000027758941353288
98 millimeter/minute squared in milligal is equal to 2.72
98 millimeter/minute squared in kilogal is equal to 0.0000027222222222222
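Every entry in such a table derives from a single factor to the SI base unit m/s². A minimal sketch (the unit keys are my own shorthand for a small subset of the table, not the site's notation):

```python
# Acceleration unit conversion, routed through the SI base unit m/s^2.
TO_MS2 = {
    "mm/min2": 1e-3 / 60 ** 2,  # millimetre per minute squared
    "cm/s2": 1e-2,              # centimetre per second squared (= 1 gal)
    "m/s2": 1.0,
    "km/h2": 1e3 / 3600 ** 2,   # kilometre per hour squared
}

def convert(value, src, dst):
    """Convert `value` from unit `src` to unit `dst`."""
    return value * TO_MS2[src] / TO_MS2[dst]
```

For example, `convert(98, "mm/min2", "m/s2")` reproduces the table's 0.000027222222222222.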
38 research outputs found We study sequences of functions of the form F_p^n -> {0,1} for varying n, and define a notion of convergence based on the induced distributions from restricting the functions to a random affine subspace. Using a decomposition theorem and a recently proven equidistribution theorem from higher order Fourier analysis, we prove that the limits of such convergent sequences can be represented by certain measurable functions. We are also able to show that every such limit object arises as the limit of some sequence of functions. These results are in the spirit of similar results which have been developed for limits of graph sequences. A more general, albeit substantially more sophisticated, limit object was recently constructed by Szegedy in [Sze10]. (Comment: 12 pages) In this note, we study the behavior of independent sets of maximum probability measure in tensor graph powers. To do this, we introduce an upper bound using measure-preserving homomorphisms. This work extends some previous results about independence ratios of tensor graph powers. (Comment: 5 pages) We study the structure of bounded-degree polynomials over finite fields. Haramaty and Shpilka [STOC 2010] showed that biased degree three or four polynomials admit a strong structural property. We confirm that this is the case for degree five polynomials also. Let F = F_q be a prime field. Suppose f: F^n -> F is a degree five polynomial with bias(f) = delta. We prove the following two structural properties for such f. 1. We have f = sum_{i=1}^{c} G_i H_i + Q, where the G_i and H_i are nonconstant polynomials satisfying deg(G_i) + deg(H_i) <= 5 and Q is a polynomial of degree < 5. Moreover, c does not depend on n. 2. There exists an Omega_{delta,q}(n)-dimensional affine subspace V subseteq F^n such that f|_V is a constant. Cohen and Tal [Random 2015] proved that biased polynomials of degree at most four are constant on a subspace of dimension Omega(n). Item 2 extends this to degree five polynomials.
A corollary to Item 2 is that any degree five affine disperser for dimension k is also an affine extractor for dimension O(k). We note that Item 2 cannot hold for degrees six or higher. We obtain our results for degree five polynomials as a special case of structure theorems that we prove for biased degree d polynomials when d < |F| + 4. While the d < |F| + 4 assumption seems very restrictive, we note that prior to our work such structure theorems were only known for d < |F|, by Green and Tao [Contrib. Discrete Math. 2009] and Bhowmick and Lovett [arXiv:1506.02047]. Using algorithmic regularity lemmas for polynomials developed by Bhattacharyya et al. [SODA 2015], we show that whenever such a strong structure exists, it can be found algorithmically in time polynomial in n. Several theorems and conjectures in communication complexity state or speculate that the complexity of a matrix in a given communication model is controlled by a related analytic or algebraic matrix parameter, e.g., rank, sign-rank, discrepancy, etc. The forward direction is typically easy, as the structural implications of small complexity often imply a bound on some matrix parameter. The challenge lies in establishing the reverse direction, which requires understanding the structure of Boolean matrices for which a given matrix parameter is small or large. We will discuss several research directions that align with this overarching theme. (Comment: This is a column to be published in the complexity theory column of SIGACT News)
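For readers unfamiliar with the bias appearing in these abstracts: over F_2 it is the absolute average of (-1)^f(x), so balanced functions have bias 0 and constants have bias 1. A brute-force sketch for small n (the abstracts' general F_q setting uses complex characters instead of ±1):

```python
from itertools import product

def bias_f2(f, n):
    """bias(f) = |E_x[(-1)^f(x)]| over all x in F_2^n, by enumeration.
    Feasible only for small n (2^n evaluations)."""
    total = sum((-1) ** f(x) for x in product((0, 1), repeat=n))
    return abs(total) / 2 ** n
```

For instance, the degree-3 monomial x1·x2·x3 equals 1 on a single input out of 8, giving bias (7 - 1)/8 = 0.75, while any nonzero linear form is balanced.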
Practice Calculating p-Values Hypothesis testing can seem like a plug-and-chug operation, but that can take you only so far. Remember that a small p-value comes from a large test statistic, and both mean rejecting H[0]. Calculate p-values in the following problems. Sample questions 1. A researcher has a less than alternative hypothesis and wants to run a single sample mean z-test. The researcher calculates a test-statistic of z = –1.5 and then uses a Z-table to find a corresponding area of 0.0668, which is the area under the curve to the left of that value of z. What is the p-value in this case? Answer: 0.0668 Using the z-table, find –1.5 in the left-hand column, and then go across the row to the column for 0.00, where the value is 0.0668. This is the proportion of the curve area that's to the left of (less than) the test statistic value of z that you're looking up. In this case, the alternative hypothesis is a less than hypothesis, so you can read the p-value from the table without doing further calculations. 2. Suppose that a researcher has a not equal to alternative hypothesis and calculates a test statistic that corresponds to z = –1.5 and then finds, using a Z-table, a corresponding area of 0.0668 (the area under the curve to the left of that value of z). What is the p-value in this case? Answer: 0.1336 Using the z-table, find –1.5 in the left-hand column, and then go across the row to the column for 0.00, where the value is 0.0668. This is the proportion of the curve area that's to the left of (less than) the value of z you're looking up. In this case, the alternative hypothesis is a not equal to hypothesis, so you double the outlying tail quantity (area below the z-value of –1.5) to get the p-value. 3. A researcher has a not equal to alternative hypothesis and calculates a test statistic that corresponds to z = –2.0. Using a Z-table, the researcher finds a corresponding area of 0.0228 to the left of –2.0. What is the p-value in this case?
Answer: 0.0456 Using the z-table, find –2.0 in the left-hand column, and then go across the row to the column for 0.0, where the value is 0.0228. This is the proportion of the curve area that's to the left of (less than) the value of z you're looking up. In this case, the alternative hypothesis is a not equal to hypothesis, so you double the outlying tail quantity (area below the z-value of –2.0) to get the p-value. 4. A scientist with a not equal to alternative hypothesis calculates a test statistic that corresponds to z = 1.1. Using a Z-table, the scientist finds that this corresponds to a curve area of 0.8643 (to the left of the test statistic value). What is the p-value in this case? Answer: 0.2714 Using the z-table, find 1.1 in the left-hand column, then go across the row to the column for 0.0, where the value is 0.8643. This is the area under the curve to the left of the z value of 1.1. Because total area under the curve equals 1, the area above z in this case is 1 – 0.8643 = 0.1357. In this case, the alternative hypothesis is a not equal to hypothesis, so you double the outlying tail quantity (area above the z-value of 1.1) to get the p-value.
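The Z-table lookups in these answers can be reproduced without a table, since the standard normal CDF can be written in terms of the error function:

```python
import math

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_value(z, alternative="two-sided"):
    """p-value for a z test statistic under the given alternative."""
    if alternative == "less":
        return phi(z)
    if alternative == "greater":
        return 1.0 - phi(z)
    # "not equal to": double the outlying tail area
    return 2.0 * (1.0 - phi(abs(z)))
```

`p_value(-1.5, "less")` gives 0.0668 and `p_value(-1.5)` gives 0.1336, matching questions 1 and 2; note that exact values can differ from doubled four-digit table entries in the last decimal place.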
Mortality limits used in wind energy impact assessment underestimate impacts of wind farms on bird populations The consequences of bird mortality caused by collisions with wind turbines are increasingly receiving attention. So-called acceptable mortality limits of populations, that is, thresholds that treat 1%-5% additional mortality as acceptable, and the potential biological removal (PBR) provide seemingly clear-cut methods for establishing the reduction in population viability. We examine how the application of these commonly used mortality limits could affect populations of the Common Starling, Black-tailed Godwit, Marsh Harrier, Eurasian Spoonbill, White Stork, Common Tern, and White-tailed Eagle using stochastic density-independent and density-dependent Leslie matrix models. Results show that population viability can be very sensitive to proportionally small increases in mortality. Rather than having a negligible effect, a 1% additional mortality in postfledging cohorts of the studied populations resulted in a 2%-24% decrease in the population level after 10 years. Allowing a 5% increase over existing mortality resulted in a 9%-77% reduction in the populations after 10 years. When the PBR method is used in the density-dependent simulations, the proportional change in the resulting growth rate and carrying capacity is species-independent and largely determined by the recovery factor (Fr). When Fr = 1, a value typically used for robust populations, additional mortality resulted in a 50%-55% reduction in the equilibrium density and the resulting growth rate. When Fr = 0.1, used for threatened populations, the reduction in the equilibrium density and growth rate was about 5%. Synthesis and applications. Our results show that even when a mortality increase from wind farm collisions is allowed under both criteria, the population impacts of these collisions can still be severe.
We propose a simple new method as an alternative, able to estimate mortality impacts in age-structured stochastic density-dependent matrix models.
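The amplification effect described in the abstract (a 1% mortality increase producing a much larger population decline after 10 years) can be illustrated with a toy density-independent Leslie projection. All vital rates below are invented for illustration only; the paper's species-specific stochastic models are far richer.

```python
# Toy 2-age-class Leslie projection: [juveniles, adults].
# All vital rates are hypothetical, not taken from the paper.
def project(survival_juv, survival_ad, fecundity, n0, years=10):
    juv, ad = n0
    for _ in range(years):
        # next year's juveniles come from adult fecundity;
        # next year's adults are surviving juveniles plus surviving adults
        juv, ad = fecundity * ad, survival_juv * juv + survival_ad * ad
    return juv + ad

base = project(0.5, 0.8, 1.2, (100.0, 100.0))
# the same population with 1% additional mortality on every survival rate
hit = project(0.5 * 0.99, 0.8 * 0.99, 1.2, (100.0, 100.0))
reduction = 1.0 - hit / base   # proportional population reduction after 10 years
```

Because the survival reduction compounds every year, `reduction` comes out several times larger than 1%, which is the qualitative point of the abstract.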
Zadanie Hazard (haz) Memory limit: 32 MB A gambling machine consists of generators of integers: , where . The generator can generate integers only from a certain set , or , which means that the game is over. The set can be empty. Let be the number of elements of the set . The sum of all the integers , for , cannot exceed . When is activated for the first time, it generates an integer from the set . Each subsequent activation generates an integer from the set that was not chosen before. If there is no such integer, generates zero. The machine starts by activating . Then the generators are activated according to the following rule: if a generator generated a positive integer , the next activated generator is . If zero is generated the machine stops. The machine loses if a generator generates zero while the other generators have not yet exhausted their sets of integers. The machine is well constructed if it may generate a sequence of integers ending with zero, but not leading to a defeat. Write a program that: • reads from the standard input the description of the machine, i.e. the number of generators , the integers and the sets , • verifies whether the machine described in the input is well constructed, • if it is well constructed, writes to the standard output a sequence of integers which may be generated by the machine, is ended with zero, and does not lead to a defeat; if not, writes one word NIE (which means “no” in Polish). In the first line of the standard input there is written one positive integer , . This is the number of generators. In the -st line (for ) there is written an integer followed by all the elements of the set (written in arbitrary order). The integers in each line are separated by single spaces. In the standard output there should be written one word NIE (if the machine is not well constructed) or one line containing an appropriate finite series of integers separated by single spaces.
For the input data: the correct result is: whereas for the input data: the correct output is: Task author: Wojciech Plandowski.
Highest Volume Stocks Implement a data structure StockMarket which has the following methods: • StockMarket(String[] stocks, int[] amounts) which creates a new instance. stocks and amounts have the same length, and each stock stocks[i] initially has volume amounts[i] in the market • add(String stock, int amount) which adds amount to the accumulated volume of stock • top(int k) which returns the top k highest-volume stocks, sorted in descending order by volume. If there are ties in volume, return the lexicographically smallest stocks first. • n ≤ 100,000 where n is the number of calls to add and top. Example 1 • methods = ['constructor', 'add', 'add', 'add', 'top'] • arguments = [[['NFLX'], [300]], ['AMZN', 100], ['GOOG', 300], ['AMZN', 300], [2]] • answer = [None, None, None, None, ['AMZN', 'GOOG']] s = StockMarket(["NFLX"], [300]) s.add("AMZN", 100) s.add("GOOG", 300) s.add("AMZN", 300) s.top(2) == ["AMZN", "GOOG"] At the end "AMZN"'s volume is 400, "GOOG"'s volume is 300, and "NFLX"'s volume is 300. Since "AMZN" has the most volume we return it first. Then, since "GOOG" and "NFLX" are tied in volume, we return the lexicographically smaller stock, which is "GOOG".
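One straightforward sketch in Python: keep a dict of volumes and let `top` sort by the key (-volume, name), which handles both the descending order and the lexicographic tie-break in a single comparison.

```python
class StockMarket:
    """Sketch implementation: a dict of volumes; `top` sorts by
    descending volume, breaking ties by the smaller name."""

    def __init__(self, stocks, amounts):
        self.volume = dict(zip(stocks, amounts))

    def add(self, stock, amount):
        # accumulate, creating the stock entry on first sight
        self.volume[stock] = self.volume.get(stock, 0) + amount

    def top(self, k):
        ranked = sorted(self.volume, key=lambda s: (-self.volume[s], s))
        return ranked[:k]
```

`add` is O(1); `top` is O(m log m) in the number of distinct stocks. For small k, `heapq.nsmallest(k, self.volume, key=...)` over the same key would avoid the full sort.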
8.1 Tracking Inflation Learning Objectives By the end of this section, you will be able to do the following: • Calculate the annual rate of inflation • Explain and use index numbers and base years when simplifying the total quantity spent over a year for products • Calculate inflation rates using index numbers Dinner table conversations where you might have heard about inflation usually entail reminiscing about when everything seemed to cost so much less, and “You used to be able to buy three gallons of gasoline for a dollar and then go see an afternoon movie for another dollar.” Table 8.1 compares some prices of common goods in 1970 and 2014. Of course, the average prices shown in this table may not reflect the prices where you live. The cost of living in New York City is much higher than in Houston, Texas, for example. In addition, certain products have evolved over recent decades. A new car in 2014, loaded with antipollution equipment, safety gear, computerized engine controls, and many other technological advances, is a more advanced and fuel-efficient machine than your typical 1970s car. However, put details like these to one side for the moment, and look at the overall pattern. The primary reason behind the price rises in Table 8.1—and all the price increases for the other products in the economy—is not specific to the market for housing or cars or gasoline or movie tickets. Instead, it is part of a general rise in the level of all prices. In 2014, $1 had about the same purchasing power in overall terms of goods and services as 18 cents did in 1972, because of the amount of inflation that has occurred over that time period.
| Items | 1970 | 2014 |
|---|---|---|
| Pound of ground beef | $0.66 | $4.16 |
| Pound of butter | $0.87 | $2.93 |
| Movie ticket | $1.55 | $8.17 |
| Sales price of new home (median) | $22,000 | $280,000 |
| New car | $3,000 | $32,531 |
| Gallon of gasoline | $0.36 | $3.36 |
| Average hourly wage for a manufacturing worker | $3.23 | $19.55 |
| Per capita GDP | $5,069 | $53,041.98 |

Moreover, the power of inflation does not affect just goods and services, but wages and income levels, too. The second-to-last row of Table 8.1 shows that the average hourly wage for a manufacturing worker increased nearly six-fold from 1970 to 2014. Sure, the average worker in 2014 is better educated and more productive than the average worker in 1970, but not six times more productive. Sure, per capita GDP increased substantially from 1970 to 2014, but is the average person in the U.S. economy really more than eight times better off in just 44 years? Not likely. A modern economy has millions of goods and services whose prices are continually quivering in the breezes of supply and demand. How can all of these shifts in price be boiled down to a single inflation rate? As with many problems in economic measurement, the conceptual answer is reasonably straightforward: Prices of a variety of goods and services are combined into a single price level; the inflation rate is simply the percentage change in the price level. Applying the concept, however, involves some practical difficulties. The Price of a Basket of Goods To calculate the price level, economists begin with the concept of a basket of goods and services, consisting of the different items individuals, businesses, or organizations typically buy. The next step is to look at how the prices of those items change over time. In thinking about how to combine individual prices into an overall price level, many people find that their first impulse is to calculate the average of the prices. Such a calculation, however, could easily be misleading because some products matter more than others.
Changes in the prices of goods for which people spend a larger share of their incomes will matter more than changes in the prices of goods for which people spend a smaller share of their incomes. For example, an increase of 10 percent in the rental rate on housing matters more to most people than whether the price of carrots rises by 10 percent. To construct an overall measure of the price level, economists compute a weighted average of the prices of the items in the basket, where the weights are based on the actual quantities of goods and services people buy. The following Work It Out feature walks you through the steps of calculating the annual rate of inflation based on a few products. Work It Out Calculating an Annual Rate of Inflation Consider the simple basket of goods with only three items, represented in Table 8.2. Say that in any given month, a college student spends money on 20 hamburgers, one bottle of aspirin, and five movies. Prices for these items over four years are given in the table through each time period (Pd). Prices of some goods in the basket may rise while others fall. In this example, the price of aspirin does not change over the four years, while movies increase in price and hamburgers bounce up and down. Each year, the cost of buying the given basket of goods at the prices prevailing at that time is shown.

| Items | Hamburger | Aspirin | Movies | Total | Inflation Rate |
|---|---|---|---|---|---|
| Qty | 20 | 1 bottle | 5 | - | - |
| (Pd 1) Price | $3 | $10 | $6 | - | - |
| (Pd 1) Amount Spent | $60 | $10 | $30 | $100 | - |
| (Pd 2) Price | $3.20 | $10 | $6.50 | - | - |
| (Pd 2) Amount Spent | $64 | $10 | $32.50 | $106.50 | 6.5% |
| (Pd 3) Price | $3.10 | $10 | $7 | - | - |
| (Pd 3) Amount Spent | $62 | $10 | $35 | $107 | 0.5% |
| (Pd 4) Price | $3.50 | $10 | $7.50 | - | - |
| (Pd 4) Amount Spent | $70 | $10 | $37.50 | $117.50 | 9.8% |

Use the following to calculate the annual rate of inflation in this example: Step 1. Find the percentage change in the cost of purchasing the overall basket of goods between the time periods.
The general equation for percentage changes between two years, whether in the context of inflation or in any other calculation, is

(Level in new year − Level in previous year) / (Level in previous year) = Percentage change.

Step 2. From Pd 1 to Pd 2, the total cost of purchasing the basket of goods in Table 8.2 rises from $100 to $106.50. Therefore, the percentage change over this time—the inflation rate—is

(106.50 − 100) / 100 = 0.065 = 6.5%.

Step 3. From Pd 2 to Pd 3, the overall cost of purchasing the basket rises from $106.50 to $107. Thus, the inflation rate over this time, again calculated by the percentage change, is

(107 − 106.50) / 106.50 = 0.0047 = 0.47%.

Step 4. From Pd 3 to Pd 4, the overall cost rises from $107 to $117.50. The inflation rate is thus

(117.50 − 107) / 107 = 0.098 = 9.8%.

This calculation of the change in the total cost of purchasing a basket of goods takes into account how much is spent on each good. Hamburgers are the lowest-priced good in this example, and aspirin is the highest-priced. If an individual buys a greater quantity of a low-price good, then it makes sense that changes in the price of that good should have a larger impact on the buying power of that person's money. The larger impact of hamburgers shows up in the amount spent rows, where, in all time periods, hamburgers are the largest item.

Index Numbers

The numerical results of a calculation based on a basket of goods can get a little messy. The simplified example in Table 8.2 has only three goods and the prices are in even dollars, not numbers like 79 cents or $124.99.
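The basket-cost arithmetic in the steps above can be sketched in a few lines of Python; the quantities and prices come from Table 8.2, and the rounding to one decimal place mirrors the rates reported in the table:

```python
# Quantities and per-period prices from Table 8.2
quantities = {"hamburger": 20, "aspirin": 1, "movies": 5}
prices = [
    {"hamburger": 3.00, "aspirin": 10.00, "movies": 6.00},  # Pd 1
    {"hamburger": 3.20, "aspirin": 10.00, "movies": 6.50},  # Pd 2
    {"hamburger": 3.10, "aspirin": 10.00, "movies": 7.00},  # Pd 3
    {"hamburger": 3.50, "aspirin": 10.00, "movies": 7.50},  # Pd 4
]

def basket_cost(period_prices):
    # Total cost of buying the whole basket at one period's prices
    return sum(quantities[item] * period_prices[item] for item in quantities)

costs = [basket_cost(p) for p in prices]
print(costs)  # [100.0, 106.5, 107.0, 117.5]

# Percentage change between consecutive periods = the inflation rate
for old, new in zip(costs, costs[1:]):
    print(round(100 * (new - old) / old, 1))  # 6.5, then 0.5, then 9.8
```

Note that the 0.47% computed in Step 3 appears as 0.5% here only because of rounding to one decimal place, matching the table.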
If the list of products were much longer, and more realistic prices were used, the total amount spent over a year might be some messy-looking number like $17,147.51. To simplify the task of interpreting the price levels for more realistic and complex baskets of goods, the price level in each period is typically reported as an index number, rather than as the dollar amount for buying the basket of goods. Price indices are created to calculate an overall average change in relative prices over time. To convert the money spent on the basket to an index number, economists arbitrarily choose one year to be the base year, or starting point from which we measure changes in prices. The base year, by definition, has an index number equal to 100. This sounds complicated, but it is really a simple math trick. In the example above, say that Pd 3 is chosen as the base year. Since the total amount of spending in that year is $107, we divide that amount by itself ($107) and multiply by 100. Mathematically, that is equivalent to dividing $107 by 1.07. Doing either will give us an index in the base year of 100. Again, this is because the index number in the base year always has to have a value of 100. Then, to figure out the values of the index number for the other years, we divide the dollar amounts for the other years by 1.07 as well. Note also that the dollar signs cancel out, so that index numbers have no units. Calculations for the other values of the index number, based on the example presented in Table 8.2, are shown in Table 8.3. Because the index numbers are calculated so that they are in exactly the same proportion as the total dollar cost of purchasing the basket of goods, the inflation rate can be calculated based on the index numbers, using the percentage change formula.
So, the inflation rate from Pd 1 to Pd 2 would be

(99.5 − 93.4) / 93.4 = 0.065 = 6.5%.

This is the same answer that was derived when measuring inflation based on the dollar cost of the basket of goods for the same time period.

Table 8.3:

Period   | Total Spending | Index Number           | Inflation Rate Since Previous Period
---------|----------------|------------------------|-------------------------------------
Period 1 | $100           | 100 / 1.07 = 93.4      | -
Period 2 | $106.50        | 106.50 / 1.07 = 99.5   | (99.5 − 93.4) / 93.4 = 0.065 = 6.5%
Period 3 | $107           | 107 / 1.07 = 100       | (100 − 99.5) / 99.5 = 0.005 = 0.5%
Period 4 | $117.50        | 117.50 / 1.07 = 109.8  | (109.8 − 100) / 100 = 0.098 = 9.8%

If the inflation rate is the same whether it is based on dollar values or index numbers, then why bother with the index numbers? The advantage is that indexing allows easier eyeballing of the inflation numbers. If you glance at two index numbers like 107 and 110, you know automatically that the rate of inflation between the two years is about, but not quite exactly equal to, three percent. By contrast, imagine that the price levels were expressed in absolute dollars of a large basket of goods, so that when you looked at the data, the numbers were $19,493.62 and $20,009.32. Most people find it difficult to eyeball those kinds of numbers and say that it is a change of about three percent. However, the two numbers expressed in absolute dollars are exactly in the same proportion of 107 to 110 as the previous example. If you're wondering why simple subtraction of the index numbers wouldn't work, read the following Clear It Up feature.

Clear It Up: Why do you not just subtract index numbers?

A word of warning: When a price index moves from, say, 107 to 110, the rate of inflation is not exactly three percent. Remember, the inflation rate is not derived by subtracting the index numbers, but rather through the percentage-change calculation.
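The index-number construction in Table 8.3 is easy to check numerically, including the fact that the inflation rates are the same whether computed from dollar costs or from index numbers built on any base period:

```python
# Total basket cost per period, from Table 8.2
spending = [100.0, 106.50, 107.00, 117.50]

def index_numbers(series, base):
    # Rescale so the chosen base period gets an index of exactly 100
    return [100 * s / series[base] for s in series]

def inflation_rates(series):
    # Percentage change between consecutive entries
    return [100 * (new - old) / old for old, new in zip(series, series[1:])]

rates_from_dollars = inflation_rates(spending)
print([round(r, 1) for r in rates_from_dollars])  # [6.5, 0.5, 9.8]

# Index numbers with Pd 3 as the base year, as in Table 8.3
idx = index_numbers(spending, base=2)
print(round(idx[2], 1))  # 100.0

# Any choice of base period yields the same measured inflation rates
for base in range(len(spending)):
    scaled = inflation_rates(index_numbers(spending, base))
    assert all(abs(a - b) < 1e-9 for a, b in zip(scaled, rates_from_dollars))
```

Because an index number is just the dollar series multiplied by a constant, the constant cancels in the percentage-change formula, which is why the base-year choice is arbitrary.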
The precise inflation rate as the price index moves from 107 to 110 is calculated as (110 − 107) / 107 = 0.028 = 2.8%. When the base year is fairly close to 100, a quick subtraction is not a terrible shortcut to calculating the inflation rate—but when precision matters down to tenths of a percent, subtracting will not give the right answer.

Two final points about index numbers are worth remembering. First, index numbers have no dollar signs or other units attached to them. Although index numbers can be used to calculate a percentage inflation rate, the index numbers themselves do not have percentage signs. Index numbers just mirror the proportions found in other data. They transform the other data so that the data are easier to work with. Second, the choice of a base year for the index number—that is, the year that is automatically set equal to 100—is arbitrary. It is chosen as a starting point from which changes in prices are tracked. In the official inflation statistics, it is common to use one base year for a few years, and then to update it, so that the base year of 100 is relatively close to the present. But any base year that is chosen for the index numbers will result in exactly the same inflation rate. To see this in the previous example, imagine that period 1, when total spending was $100, was also chosen as the base year, and given an index number of 100. At a glance, you can see that the index numbers would now exactly match the dollar figures, the inflation rate in the first period would be 6.5 percent, and so on.

Now that we see how indexes work to track inflation, the next module will show us how the cost of living is measured.

Link It Up: Watch this video from the cartoon Duck Tales to view a mini-lesson on inflation.
class sklearn.covariance.EmpiricalCovariance(*, store_precision=True, assume_centered=False)

Maximum likelihood covariance estimator. Read more in the User Guide.

Parameters:

store_precision : bool, default=True
    Specifies if the estimated precision is stored.

assume_centered : bool, default=False
    If True, data are not centered before computation. Useful when working with data whose mean is almost, but not exactly, zero. If False (default), data are centered before computation.

Attributes:

location_ : ndarray of shape (n_features,)
    Estimated location, i.e. the estimated mean.

covariance_ : ndarray of shape (n_features, n_features)
    Estimated covariance matrix.

precision_ : ndarray of shape (n_features, n_features)
    Estimated pseudo-inverse matrix (stored only if store_precision is True).

n_features_in_ : int
    Number of features seen during fit.

feature_names_in_ : ndarray of shape (n_features_in_,)
    Names of features seen during fit. Defined only when X has feature names that are all strings.

See also: EllipticEnvelope (an object for detecting outliers in a Gaussian distributed dataset), GraphicalLasso (sparse inverse covariance estimation with an l1-penalized estimator), LedoitWolf (Ledoit-Wolf estimator), MinCovDet (Minimum Covariance Determinant, a robust estimator of covariance), OAS (Oracle Approximating Shrinkage estimator), ShrunkCovariance (covariance estimator with shrinkage).

Examples:

>>> import numpy as np
>>> from sklearn.covariance import EmpiricalCovariance
>>> from sklearn.datasets import make_gaussian_quantiles
>>> real_cov = np.array([[.8, .3],
...                      [.3, .4]])
>>> rng = np.random.RandomState(0)
>>> X = rng.multivariate_normal(mean=[0, 0],
...                             cov=real_cov,
...                             size=500)
>>> cov = EmpiricalCovariance().fit(X)
>>> cov.covariance_
array([[0.7569..., 0.2818...],
       [0.2818..., 0.3928...]])
>>> cov.location_
array([0.0622..., 0.0193...])

Methods:

error_norm(comp_cov[, norm, scaling, squared]) — Compute the Mean Squared Error between two covariance estimators.
fit(X[, y]) — Fit the maximum likelihood covariance estimator to X.
get_metadata_routing() — Get metadata routing of this object.
get_params([deep]) — Get parameters for this estimator.
get_precision() — Getter for the precision matrix.
mahalanobis(X) — Compute the squared Mahalanobis distances of given observations.
score(X_test[, y]) — Compute the log-likelihood of X_test under the estimated Gaussian model.
set_params(**params) — Set the parameters of this estimator.
set_score_request(*[, X_test]) — Request metadata passed to the score method.

error_norm(comp_cov, norm='frobenius', scaling=True, squared=True)

Compute the Mean Squared Error between two covariance estimators.

comp_cov : array-like of shape (n_features, n_features)
    The covariance to compare with.

norm : {"frobenius", "spectral"}, default="frobenius"
    The type of norm used to compute the error. Available error types:
    - 'frobenius' (default): sqrt(tr(A^t.A))
    - 'spectral': sqrt(max(eigenvalues(A^t.A)))
    where A is the error (comp_cov - self.covariance_).

scaling : bool, default=True
    If True (default), the squared error norm is divided by n_features. If False, the squared error norm is not rescaled.

squared : bool, default=True
    Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned.

Returns the Mean Squared Error (in the sense of the Frobenius norm) between self and comp_cov covariance estimators.

fit(X, y=None)

Fit the maximum likelihood covariance estimator to X.

X : array-like of shape (n_samples, n_features)
    Training data, where n_samples is the number of samples and n_features is the number of features.
y : Ignored
    Not used, present for API consistency by convention.

Returns the instance itself.

get_metadata_routing()

Get metadata routing of this object. Please check the User Guide on how the routing mechanism works. Returns a MetadataRequest encapsulating routing information.

get_params(deep=True)

Get parameters for this estimator.

deep : bool, default=True
    If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns parameter names mapped to their values.

get_precision()

Getter for the precision matrix.
Returns precision_ : array-like of shape (n_features, n_features) — The precision matrix associated to the current covariance object.

mahalanobis(X)

Compute the squared Mahalanobis distances of given observations.

X : array-like of shape (n_samples, n_features)
    The observations, of which we compute the Mahalanobis distances. Observations are assumed to be drawn from the same distribution as the data used in fit.

Returns dist : ndarray of shape (n_samples,) — Squared Mahalanobis distances of the observations.

score(X_test, y=None)

Compute the log-likelihood of X_test under the estimated Gaussian model. The Gaussian model is defined by its mean and covariance matrix, which are represented respectively by self.location_ and self.covariance_.

X_test : array-like of shape (n_samples, n_features)
    Test data of which we compute the likelihood, where n_samples is the number of samples and n_features is the number of features. X_test is assumed to be drawn from the same distribution as the data used in fit (including centering).
y : Ignored
    Not used, present for API consistency by convention.

Returns the log-likelihood of X_test with self.location_ and self.covariance_ as estimators of the Gaussian model mean and covariance matrix respectively.

set_params(**params)

Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

params : dict
    Estimator parameters.

Returns self : estimator instance.

set_score_request(*, X_test: bool | None | str = '$UNCHANGED$') -> EmpiricalCovariance

Request metadata passed to the score method. Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see the User Guide on how the routing mechanism works. The options for each parameter are:

- True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to score.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others. This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

X_test : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
    Metadata routing for X_test parameter in score.

Returns the updated object.

Examples using sklearn.covariance.EmpiricalCovariance
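As a usage sketch (not part of the original page; it assumes scikit-learn and NumPy are installed), the following fits the estimator on synthetic data and exercises `mahalanobis` and `error_norm`:

```python
import numpy as np
from sklearn.covariance import EmpiricalCovariance

# Synthetic Gaussian data with a known covariance
rng = np.random.RandomState(0)
real_cov = np.array([[0.8, 0.3],
                     [0.3, 0.4]])
X = rng.multivariate_normal(mean=[0, 0], cov=real_cov, size=500)

cov = EmpiricalCovariance().fit(X)

# Squared Mahalanobis distances of the training observations
d2 = cov.mahalanobis(X)
print(d2.shape)               # (500,)
print(bool((d2 >= 0).all()))  # True: squared distances are non-negative

# Mean squared error (Frobenius sense, default settings) against the
# covariance that generated the data; small for a 500-sample fit
err = cov.error_norm(real_cov)
print(err < 0.1)  # True
```

With `squared=True` and `scaling=True` (the defaults), `error_norm` returns the squared Frobenius norm of the difference divided by the number of features.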
Self-adjoint operator - (Physical Sciences Math Tools) - Vocab, Definition, Explanations | Fiveable

A self-adjoint operator is a linear operator that is equal to its own adjoint, meaning that it satisfies the property \( A = A^* \). This concept is crucial because self-adjoint operators have real eigenvalues and orthogonal eigenvectors, which play a significant role in Sturm-Liouville problems and eigenvalue equations, allowing us to solve differential equations with boundary conditions.

5 Must Know Facts For Your Next Test

1. Self-adjoint operators guarantee that all eigenvalues are real, which is essential in physical applications like quantum mechanics.
2. The eigenfunctions corresponding to distinct eigenvalues of a self-adjoint operator are orthogonal, meaning they can be used as a basis for function spaces.
3. In Sturm-Liouville problems, the differential operator can often be transformed into a self-adjoint form to facilitate finding solutions.
4. Self-adjointness implies that the operator has nice mathematical properties, such as being continuous and bounded from below.
5. For boundary value problems, ensuring that the operator is self-adjoint helps in applying variational methods to find approximate solutions.

Review Questions

• How does the property of being self-adjoint influence the eigenvalues and eigenvectors of an operator?

Being self-adjoint ensures that the eigenvalues of an operator are real numbers. Additionally, if two eigenvalues are distinct, their corresponding eigenvectors are orthogonal. This property is crucial for constructing solutions to differential equations in various physical contexts, as it allows for a clear interpretation of the mathematical results and their implications.

• Discuss how the concept of self-adjoint operators is applied in solving Sturm-Liouville problems.
In Sturm-Liouville problems, we often encounter differential equations that can be expressed in the form suitable for self-adjoint operators. By rewriting these equations in this way, we ensure that the resulting eigenvalues are real and that the eigenfunctions are orthogonal. This transformation simplifies finding solutions and allows for the application of powerful techniques such as separation of variables or Fourier series expansion.

• Evaluate the significance of self-adjoint operators in the context of physical systems modeled by differential equations.

Self-adjoint operators play a pivotal role in modeling physical systems described by differential equations because they guarantee real and discrete eigenvalues that correspond to measurable quantities. For instance, in quantum mechanics, observables are represented by self-adjoint operators, ensuring that measurement outcomes are real. This property also aids in stability analysis and ensures convergence in variational methods used for approximating solutions to complex problems.
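A finite-dimensional illustration of these two facts (a sketch added here, not part of the original page): a real symmetric matrix equals its own adjoint, so NumPy's `eigh` returns real eigenvalues and an orthonormal set of eigenvectors.

```python
import numpy as np

# A real symmetric matrix is self-adjoint: A equals its (conjugate) transpose
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
assert np.allclose(A, A.T)

eigenvalues, eigenvectors = np.linalg.eigh(A)

# Fact 1: the eigenvalues are real (here (5 - sqrt(5))/2 and (5 + sqrt(5))/2)
print(eigenvalues.dtype)  # float64
print(np.allclose(eigenvalues, [(5 - 5**0.5) / 2, (5 + 5**0.5) / 2]))  # True

# Fact 2: eigenvectors for distinct eigenvalues are orthogonal;
# eigh returns an orthonormal basis, so V^T V = I
print(np.allclose(eigenvectors.T @ eigenvectors, np.eye(2)))  # True
```

The infinite-dimensional Sturm-Liouville case generalizes exactly this picture, with the matrix replaced by a differential operator and the dot product by an integral inner product.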
A quantity is defined as a value of an arbitrary value type that is associated with a specific unit. For example, while meter is a unit, 3.0 meters is a quantity. Quantities obey two separate algebras: the native algebra for their value type, and the dimensional analysis algebra for the associated unit. In addition, algebraic operations are defined between units and quantities to simplify the definition of quantities; it is effectively equivalent to algebra with a unit-valued quantity. Quantities are implemented by the quantity template class defined in boost/units/quantity.hpp : template<class Unit,class Y = double> class quantity; This class is templated on both unit type (Unit) and value type (Y), with the latter defaulting to double-precision floating point if not otherwise specified. The value type must have a normal copy constructor and copy assignment operator. Operators +, -, *, and / are provided for algebraic operations between scalars and units, scalars and quantities, units and quantities, and between quantities. In addition, integral and rational powers and roots can be computed using the pow<R> and root<R> functions. Finally, the standard set of boolean comparison operators ( ==, !=, <, <=, >, and >= ) are provided to allow comparison of quantities from the same unit system. All operators simply delegate to the corresponding operator of the value type if the units permit. For most common value types, the result type of arithmetic operators is the same as the value type itself. For example, the sum of two double precision floating point numbers is another double precision floating point number. However, there are instances where this is not the case. A simple example is given by the natural numbers where the operator arithmetic obeys the following rules (using the standard notation for number systems): This library is designed to support arbitrary value type algebra for addition, subtraction, multiplication, division, and rational powers and roots. 
It uses Boost.Typeof to deduce the result of these operators. For compilers that support typeof, the appropriate value type will be automatically deduced. For compilers that do not provide language support for typeof it is necessary to register all the types used. For the case of natural numbers, this would amount to something like the following:

Conversion is only meaningful for quantities as it implies the presence of at least a multiplicative scale factor and, possibly, an affine linear offset. Macros for simplifying the definition of conversions between units can be found in boost/units/conversion.hpp and boost/units/absolute.hpp (for affine conversions with offsets). The macro BOOST_UNITS_DEFINE_CONVERSION_FACTOR specifies a scale factor for conversion from the first unit type to the second. The first argument must be a base_unit. The second argument can be either a base_unit or a unit. Let's declare a simple base unit:

struct foot_base_unit : base_unit<foot_base_unit, length_dimension, 10> { };

Now, we want to be able to convert feet to meters and vice versa. The foot is defined as exactly 0.3048 meters, so we can write the following:

BOOST_UNITS_DEFINE_CONVERSION_FACTOR(foot_base_unit, meter_base_unit, double, 0.3048);

Alternately, we could use the SI length typedef:

BOOST_UNITS_DEFINE_CONVERSION_FACTOR(foot_base_unit, SI::length, double, 0.3048);

Since the SI unit of length is the meter, these two definitions are equivalent. If these conversions have been defined, then converting between scaled forms of these units will also automatically work. The macro BOOST_UNITS_DEFAULT_CONVERSION specifies a conversion that will be applied to a base unit when no direct conversion is possible.
This can be used to make arbitrary conversions work with a single specialization:

struct my_unit_tag : boost::units::base_unit<my_unit_tag, boost::units::force_type, 1> {};

// define the conversion factor
BOOST_UNITS_DEFINE_CONVERSION_FACTOR(my_unit_tag, SI::force, double, 3.14159265358979323846);

// make conversion to SI the default.
BOOST_UNITS_DEFAULT_CONVERSION(my_unit_tag, SI::force);

This library is designed to emphasize safety above convenience when performing operations with dimensioned quantities. Specifically, construction of quantities is required to fully specify both value and unit. Direct construction from a scalar value is prohibited (though the static member function from_value is provided to enable this functionality where it is necessary). In addition, a quantity_cast to a reference allows direct access to the underlying value of a quantity variable. An explicit constructor is provided to enable conversion between dimensionally compatible quantities in different unit systems. Implicit conversions between unit systems are allowed only when the reduced units are identical, allowing, for example, trivial conversions between equivalent units in different systems (such as SI seconds and CGS seconds) while simultaneously enabling unintentional unit system mismatches to be caught at compile time and preventing potential loss of precision and performance overhead from unintended conversions. Assignment follows the same rules. An exception is made for quantities for which the unit reduces to dimensionless; in this case, implicit conversion to the underlying value type is allowed via class template specialization. Quantities of different value types are implicitly convertible only if the value types are themselves implicitly convertible. The quantity class also defines a value() member for directly accessing the underlying value.
To summarize, conversions are allowed under the following conditions : • implicit conversion of quantity<Unit,Y> to quantity<Unit,Z> is allowed if Y and Z are implicitly convertible. • assignment between quantity<Unit,Y> and quantity<Unit,Z> is allowed if Y and Z are implicitly convertible. • explicit conversion between quantity<Unit1,Y> and quantity<Unit2,Z> is allowed if Unit1 and Unit2 have the same dimensions and if Y and Z are implicitly convertible. • implicit conversion between quantity<Unit1,Y> and quantity<Unit2,Z> is allowed if Unit1 reduces to exactly the same combination of base units as Unit2 and if Y and Z are convertible. • assignment between quantity<Unit1,Y> and quantity<Unit2,Z> is allowed under the same conditions as implicit conversion. • quantity<Unit,Y> can be directly constructed from a value of type Y using the static member function from_value. Doing so, naturally, bypasses any type-checking of the newly assigned value, so this method should be used only when absolutely necessary. Of course, any time implicit conversion is allowed, an explicit conversion is also legal. Because dimensionless quantities have no associated units, they behave as normal scalars, and allow implicit conversion to and from the underlying value type or types that are convertible to/from that value type.
Arrangements without repetitions online An application for calculating permutations without repetition is a mathematical tool that determines how many different ways there are to arrange a set of distinct objects in a specific order, without allowing the repetition of the same objects. This type of calculation is widely used in mathematics, statistics, computer science, and various scientific fields. In this guide, we will explore what an application for calculating permutations without repetition is, how it works, and where it can be applied. What is an Application for Calculating Permutations without Repetition? An application for calculating permutations without repetition is a mathematical tool that calculates the number of different ways in which a set of distinct objects can be arranged in a specific order, without allowing the repetition of the same objects. This means that once an object has been used, it cannot be selected again. Permutations without repetition are often indicated by the notation "nPr", where "n" represents the total number of objects and "r" represents the number of objects selected at a time. How Does an Application for Calculating Permutations without Repetition Work? An application for calculating permutations without repetition follows a series of fundamental steps: Input of Objects The user provides a set of distinct objects from which selections will be made. These objects can be letters, numbers, colors, or anything that requires the calculation of permutations. Specify the Number of Selections The user specifies the number of objects they want to select simultaneously (r). This value determines how many different permutations will be calculated. Calculation of Permutations The application performs the calculation of permutations without repetition using a specific mathematical formula. This formula involves the combination of factors and permutations of the objects. 
Displaying the Results The application returns the total number of possible permutations (nPr) and, if requested, lists the permutations themselves. These results can be displayed directly in the application or exported to a file for further analysis. Applications of Permutations without Repetition Permutations without repetition are applied in various fields: In the field of statistics, permutations without repetition are used to calculate probabilities, frequencies, and outcomes in experiments and studies. Computer Science In computer science, permutations without repetition are used in search algorithms, cryptography, password generation, and data organization. Natural Sciences In the natural sciences, permutations without repetition are used to analyze molecular configurations and chemical structures. In mathematics, permutations without repetition are a fundamental concept used to understand permutations of objects and principles of combination. An application for calculating permutations without repetition is an essential mathematical tool for anyone working with data sets and needing to determine how many different ways objects can be organized in a specific order, without the possibility of selecting the same object multiple times. This tool greatly simplifies the calculation process, saving time and simplifying the management of complex data. Whether you are a statistics student, a programmer, a data analyst, or a professional in any field involving permutations, a dedicated application for calculating permutations without repetition can be an indispensable tool. With this guide, you are ready to explore and effectively use these applications in your projects and analyses.
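The specific formula such a tool applies is nPr = n! / (n − r)!. A minimal Python sketch (the function name is illustrative) both counts the arrangements with the formula and lists them explicitly, so the two results can be checked against each other:

```python
from itertools import permutations
from math import factorial

def n_p_r(n, r):
    # Number of ordered selections of r distinct objects from n: n! / (n - r)!
    return factorial(n) // factorial(n - r)

items = ["red", "green", "blue", "yellow"]
r = 2

# Each object is used at most once, so there is no repetition
arrangements = list(permutations(items, r))

print(n_p_r(len(items), r))  # 12
print(len(arrangements))     # 12, matching the formula
print(arrangements[0])       # ('red', 'green')
```

When r equals n this reduces to the full permutation count n!, e.g. 5P5 = 120.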
Early life and education Felix Earl Browder was born in 1927 in Moscow, Russia, while his American father Earl Browder, born in Wichita, Kansas, was living and working there. He had gone to the Soviet Union in 1927. His mother was Raissa Berkmann, a Russian Jewish woman from St. Petersburg whom Browder met and married while living in the Soviet Union.^[3] As a child, Felix Browder moved with his family to the United States, where his father Earl Browder for a time was head of the American Communist Party and ran for US president in 1936 and 1940.^[3] A 1999 book by Alexander Vassiliev, published after the fall of the Soviet Union, said that Earl Browder was recruited in the 1940s as a spy for the Soviet Union.^[4] Felix Browder was a child prodigy in mathematics; he entered MIT at age 16 in 1944 and graduated in 1946 with his first degree in mathematics. In 1946, at MIT he achieved the rank of a Putnam Fellow in the William Lowell Putnam Mathematical Competition.^[5] In 1948 (at age 20), he received his doctorate from Princeton University. Browder had an academic career, encountering difficulty in the 1950s in getting work during the McCarthy era because of his father's communist activities. Browder headed the University of Chicago's mathematics department for 12 years. He also held posts at MIT, Boston University, Brandeis and Yale. In 1986 he became the first vice president for research at Rutgers University.^[6] Browder received the 1999 National Medal of Science.^[7]^[6] He also served as president of the American Mathematical Society from 1999 to 2000. In his outgoing presidential address at the American Mathematical Society, Browder noted, "ideas and techniques from one set of mathematical sources imping[ing] fruitfully on the same thing from another set of mathematical sources" as illustration of bisociation (a term from Arthur Koestler). 
He also recounted the moves against mathematics in France by Claude Allègre as problematic.^[8] Browder was known for his personal library, which contained some thirty-five thousand books. "The library has a number of different categories," he said. "There is mathematics, physics and science as well as philosophy, literature and history, with a certain number of volumes of contemporary political science and economics. It is a polymath library. I am interested in everything and my library reflects all my interests."^[9]

References

1. ^ O'Connor, John J.; Robertson, Edmund F., "Felix Browder", MacTutor History of Mathematics Archive, University of St Andrews
2. ^ ^a ^b "Brown University Mathematics Department". Math.brown.edu. Archived from the original on August 31, 2018. Retrieved December 16, 2016.
3. ^ ^a ^b Levy, Clifford J. (July 24, 2008). "An Investment Gets Trapped in Kremlin's Vise". The New York Times. Retrieved 2008-07-24. "For Mr. Browder, 44, Russia was more than a place to do business. His grandfather Earl Browder was a Communist from Kansas who moved to the Soviet Union in 1927, staying for several years and marrying a Russian. He returned with her to the United States to lead the Communist Party for a time, even running for president."
4. ^ Alexander Vassiliev (1999). The Haunted Wood: Soviet Espionage in America--the Stalin Era. Random House.
5. ^ "Putnam Competition Individual and Team Winners". Mathematical Association of America. Retrieved December 10, 2021.
6. ^ ^a ^b "Mathematics Department - News Item: Felix Browder Receives Nation's Highest Science Honor". Math.rutgers.edu. Archived from the original on May 16, 2000. Retrieved December 16, 2016.
7. ^ "The President's National Medal of Science: Recipient Details | NSF - National Science Foundation". www.nsf.gov. Retrieved 2018-08-30.
8. ^ F. Browder (2002) "Reflections on the Future of Mathematics", Notices of the American Mathematical Society 49(6): 658–62
9.
^ M Cook (2009), Mathematicians : An Outer View of an Inner World, Princeton University Press 10. ^ "Home page for Tom Browder". Phys.hawaii.edu. Retrieved December 16, 2016. 11. ^ Schudel, Matt (December 15, 2016). "Felix Browder, mathematician shadowed by his father's life as a Communist, dies at 89". Washington Post. Retrieved December 16, 2016. External links
Computational Experiment in Teaching Higher Mathematics. Combinatorics and its Applications // Modelling and Data Analysis — 2024. Vol. 14, no. 3
Computational Experiment in Teaching Higher Mathematics. Combinatorics and its Applications
The article continues the cycle ([1] – [12]) of methodological developments by the authors. It discusses some problems related to ways of improving the culture of mathematical thinking of mathematics students. The authors rely on their experience of working at the Faculty of Information Technology of MSUPE.
General Information
Keywords: higher education, methods of teaching mathematics, sets and operations on them, combinatorics, enumeration theory, general algebra, algebra of polynomials, probability theory
Journal rubric: Method of Teaching
Article type: scientific article
DOI: https://doi.org/10.17759/mda.2024140310
Received: 09.07.2024
For citation: Kulanin Y.D., Stepanov M.E. Computational Experiment in Teaching Higher Mathematics. Combinatorics and its Applications. Modelirovanie i analiz dannikh = Modelling and Data Analysis, 2024. Vol. 14, no. 3, pp. 174–202. DOI: 10.17759/mda.2024140310. (In Russ., abstr. in Engl.)
1. Kulanin E.D., Nurkaeva I.M. On two geometric extremum problems. Mathematics at school. 2019. No. 4. pp. 35–40.
2. Kulanin E.D., Nurkaeva I.M. Once again about the Mavlo problem. Mathematics at school. 2020. No. 2. pp. 76–79.
3. Kulanin E.D., Stepanov M.E., Nurkaeva I.M. Propaedeutics of solving extremum problems in the school mathematics course. Modelling and Data Analysis. 2019. No. 4. pp. 127–144.
4. Kulanin E.D., Nguyen Wu Quang, Stepanov M.E. Tangible objectivity with computer support. Modelling and Data Analysis. 2019. No. 4. pp. 145–156.
5. Kulanin E.D., Stepanov M.E., Nurkaeva I.M. The role of imaginative thinking in scientific thinking. Modelling and Data Analysis. 2020. Vol. 10. No. 2. pp. 110–128.
6. Kulanin E.D., Stepanov M.E., Nurkaeva I.M.
On various approaches to solving extremum problems. Modelling and Data Analysis. 2020. Vol. 11. No. 1. pp. 40–60.
7. Lungu K.N., Norin V.P., Pisny D.T., Shevchenko Yu.A., Kulanin E.D. Collection of problems in higher mathematics with control papers. Moscow, 2013. Volume 2 (8th edition).
8. Stepanov M.E. Some questions of the methodology of teaching higher mathematics. Modelling and Data Analysis. 2017. No. 1. pp. 54–94.
9. Kulanin E.D., Stepanov M.E. From the experience of working in remote mode. Modelling and Data Analysis. 2022. Vol. 12. No. 3. pp. 58–70.
10. Kulanin E.D., Stepanov M.E. Comprehensive consideration of mathematical concepts as a methodological technique. Modelling and Data Analysis. 2022. Vol. 12. No. 4. pp. 67–84.
11. Kulanin E.D., Stepanov M.E. On visualization of solutions to some extremum problems. Modelling and Data Analysis. 2022. Vol. 12. No. 4. pp. 94–104.
12. Kulanin E.D., Stepanov M.E., Panfilov A.D., Potonyshev I.S. A systematic approach to the methodology of typhlopedagogy on the example of mathematical analysis problems. Modelling and Data Analysis. 2022. Vol. 12. No. 2.
13. Kulanin E.D., Stepanov M.E. Computational experiment in teaching higher mathematics by the example of number theory. Modelling and Data Analysis. 2024. Vol. 14. No. 1. pp. 170–195.
14. Hall M. Combinatorics. M., Mir, 1970.
15. Natanson I.P. Summation of infinitesimal quantities. M., Fizmatgiz, 1960.
16. Kurosh A.G. Course of higher algebra. M., Fizmatgiz, 1962.
ML Aggarwal Class 8 Solutions for ICSE Maths Chapter 18 Mensuration Ex 18.2 - CBSE Tuts
Question 1. Each side of a rhombus is 13 cm and one diagonal is 10 cm. Find (i) the length of its other diagonal (ii) the area of the rhombus.
Question 2. The cross-section ABCD of a swimming pool is a trapezium. Its width AB = 14 m, depth at the shallow end is 1.5 m and at the deep end is 8 m. Find the area of the cross-section.
Question 3. The area of a trapezium is 360 m², the distance between the two parallel sides is 20 m and one of the parallel sides is 25 m. Find the other parallel side.
Question 4. Find the area of a rhombus whose side is 6.5 cm and altitude is 5 cm. If one of its diagonals is 13 cm long, find the length of the other diagonal.
Question 5. From the given diagram, calculate (i) the area of trapezium ACDE (ii) the area of parallelogram ABDE (iii) the area of triangle BCD.
Question 6. The area of a rhombus is equal to the area of a triangle whose base and the corresponding altitude are 24.8 cm and 16.5 cm respectively. If one of the diagonals of the rhombus is 22 cm, find the length of the other diagonal.
Question 7. The perimeter of a trapezium is 52 cm. If its non-parallel sides are 10 cm each and its altitude is 8 cm, find the area of the trapezium.
Question 8. The area of a trapezium is 540 cm². If the ratio of the parallel sides is 7 : 5 and the distance between them is 18 cm, find the lengths of the parallel sides.
Question 9. Calculate the area enclosed by the given shapes. All measurements are in cm.
Question 10. From the adjoining sketch, calculate (i) the length AD (ii) the area of trapezium ABCD (iii) the area of triangle BCD.
Question 11. Diagram of the adjacent picture frame has outer dimensions 28 cm × 32 cm and inner dimensions 20 cm × 24 cm. Find the area of each section of the frame, if the width of each section is the same.
Question 12.
In the given quadrilateral ABCD, ∠BAD = 90° and ∠BDC = 90°. All measurements are in centimetres. Find the area of the quadrilateral ABCD.
Question 13. Top surface of a raised platform is in the shape of a regular octagon as shown in the given figure. Find the area of the octagonal surface.
Question 14. There is a pentagonal shaped park as shown in the following figure. For finding its area, Jaspreet and Rahul divided it in two different ways. Find the area of this park using both ways. Can you suggest some other way of finding its area?
Question 15. In the diagram, ABCD is a rectangle of size 18 cm by 10 cm. In ∆BEC, ∠E = 90° and EC = 8 cm. Find the area enclosed by the pentagon ABECD.
Question 16. Polygon ABCDE is divided into parts as shown in the given figure. Find its area if AD = 8 cm, AH = 6 cm, AG = 4 cm, AF = 3 cm and perpendiculars BF = 2 cm, CH = 3 cm, EG = 2.5 cm.
Question 17. Find the area of polygon PQRSTU shown in the given figure, if PS = 11 cm, PY = 9 cm, PX = 8 cm, PW = 5 cm, PV = 3 cm, QV = 5 cm, UW = 4 cm, RX = 6 cm, TY = 2 cm.
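Readers who want to check their answers numerically can script the geometry. The sketch below (the helper name is our own) works out Question 1, using the fact that the diagonals of a rhombus bisect each other at right angles, so each half-diagonal pair and a side form a right triangle.

```python
import math

def rhombus_other_diagonal(side, d1):
    # Diagonals of a rhombus bisect each other at right angles,
    # so (d1/2)^2 + (d2/2)^2 = side^2 by the Pythagorean theorem.
    d2 = 2 * math.sqrt(side ** 2 - (d1 / 2) ** 2)
    area = d1 * d2 / 2      # area of a rhombus = half the product of its diagonals
    return d2, area

d2, area = rhombus_other_diagonal(13, 10)   # Question 1: side 13 cm, one diagonal 10 cm
print(d2, area)                             # 24.0 120.0
```

So the other diagonal is 24 cm and the area is 120 cm², matching the textbook answer.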
Math Lab
In-person tutoring is offered for Leeward CC MATH or QM discipline coursework to currently enrolled students, at no charge, on a first-come-first-serve basis. Math Lab Tutors are able to assist with:
• Routine computerized homework
• Odd-numbered textbook problems
• Study guides that are not graded
• Other assignments with written instructor permission
To help us better serve you, please come in prepared with a textbook and any class notes. Without written instructor directions to the contrary, the Math Lab Tutors are unable to assist students with:
• Take-home exams or quizzes
• Departmental or exit assessments
• Graded or extra-credit projects
• Bonus problems
Tutors are available circulating around the Math Lab assisting students or checking resources in/out at the Help Station. Math Lab staff is limited, so patience is needed and appreciated if all the tutors on duty are working with other students. Budgetary and staffing constraints limit the Math Lab’s tutoring services to students currently enrolled in Leeward CC MATH and QM discipline courses this semester:
MATH 75, MATH 78, MATH 82, MATH 88, MATH 100, MATH 100c, MATH 103, MATH 111, MATH 112, MATH 115, MATH 135, MATH 140, MATH 140x, MATH 203, MATH 241, MATH 242, MATH 243, MATH 244, QM 107c
UH and UHCC students interested in remote tutoring for subjects such as math, physics, and chemistry can try out the online tutoring service Tutor.com. Just log in to MyUH Services and click on or search for Tutor.com Online Tutoring.
Please Keep in Mind: Most of the Math Lab Tutors are Leeward CC students who have done well in their MATH courses. Like all other students, they can occasionally misread a problem or make an error in computations. Not all tutors will be able to assist with upper-level calculus coursework. Tutors may not be familiar with the content from courses that are not included in the STEM sequence. When in doubt, tutors will refer students to the appropriate instructor.
The Math Lab appreciates patience and understanding under such circumstances.
Basic, scientific, and graphing calculators are available to Leeward CC MATH students for same-day use. The Math Lab carries the following calculator models that are used or required in current Leeward CC courses:
• Basic (Casio HS8VA)
• Scientific (TI-30XIIS)
• Graphing (TI-84)
• TI-Nspire CAS
Please see a tutor to borrow a calculator. A picture ID must be left with the Math Lab to borrow a current calculator model: NO ID — NO CALCULATOR. All calculators must be returned on the same day borrowed by the Math Lab’s closing time. The Math Lab has a restricted number of other models for same-day loan to students who are not enrolled in a Leeward CC MATH course. These calculators must also be checked in and out by a Math Lab staff member.
19 computer stations are available for students to use for Leeward CC MATH and QM discipline courses and are located in the back room of the Math Lab. There is no specific time limit placed on computer use in the Math Lab as long as students are working on Leeward CC MATH or QM discipline coursework. If there is a waitlist for seating in the Computer Lab, the Math Lab has no choice but to require students using computers to be actively working on MATH or QM coursework. Printed copies are available for students in the Math Lab.
Math Lab Textbooks and Solution Manuals
Textbooks and solution manuals currently used in Leeward CC MATH and QM discipline courses are available for students to borrow. Current textbooks and solution manuals MAY NOT be taken home and must be returned on the day borrowed by the Math Lab’s closing time. Reference textbooks (usually older editions) are also available for longer-term student use as a course supplement or for self-study. These books are targeted to pre-algebra, algebra, MATH 103, pre-calculus, and calculus. These reference books may be taken home and used for students’ personal use for the entire semester.
Please see a tutor to check out current and reference textbooks/solution manuals.
Tools for Math Placement Test Review
Although the Math Lab does not provide active tutoring services for placement testing, we do offer reference textbooks that students may borrow for self-study in refreshing their math skills. The UH System and Leeward Community College offer several opportunities for students who want to prepare for the MATH portion of the placement test. Each of these can be accessed from home:
• ALEKS Assessment & Free Trial Practice Period: A limited 14-day free trial for any UHCC student. This program allows students to take a computerized assessment to help determine which areas/concepts they need to work on. Based on the assessment, ALEKS will compose a study plan for each student to work on targeted problems. Students can practice in the program for 2 weeks from the date of initial registration at no charge. For registration directions, please contact Math Discipline Coordinator Eric Matsuoka at ematsuok@hawaii.edu
Math Lab Policies and FAQs
If I am taking a MATH course from another college or university, am I still able to get help from the Math Lab? What about tutoring for a course that uses math but isn’t a MATH or QM discipline course (e.g. PHYS, CHEM, BUSN)? No. Unfortunately, due to budget restrictions, Math Lab tutoring services are reserved for students currently enrolled in a Leeward CC MATH/QM course only.
If I need help, how will I know who I can ask? The Math Lab tutors who are on duty will be wearing name tags and carrying clipboards. These staff members will be circulating around the room or seated at the Help Station to answer student questions.
Do I need to make an appointment to see a math tutor? No. Tutoring is done on a first-come-first-serve basis during posted hours.
Could I schedule a private tutoring session in the Math Lab? No. Tutoring in the Math Lab is done strictly on a first-come-first-serve basis.
The Math Lab does not offer or fund private tutoring sessions. Students seeking a private tutor must agree to and sign a private tutoring request form in the Math Lab, which will then be forwarded to the tutors by the Math Lab Manager.
Why do tutors help with some problems but not others? Tutors regularly provide assistance with routine computerized homework and odd-numbered textbook problems. In support of the UH Student Conduct Code, Math Lab tutors will decline to provide assistance to students working on quizzes, tests, examinations, etc. without specific written permission from instructors. If there is any uncertainty as to whether assistance on a problem is allowed or not, tutors will refer students to their respective instructors.
How busy does it get? The Math Lab can get quite busy at times. Each semester, there are over 2000 students taking Leeward CC MATH and QM courses. In contrast, the Math Lab can seat around 50 students, and there are generally one to three tutors on duty at a time. The tutors will try their best to divide their time evenly among the students who are studying in the Math Lab. If the Math Lab is crowded and all the tutors are helping other students, patience and understanding are appreciated.
What should I bring with me to the Math Lab? Students will be required to leave a valid ID to borrow a Math Lab calculator. It is preferred that students bring their UH system ID to use Math Lab services. In addition, it is important for those interested in receiving tutoring help to come in with questions ready, the course textbook, any notes from class, and a working pen or pencil.
I need to borrow a calculator, but I do not have a student ID card to leave behind. What can I do? Leeward CC student ID cards are created and validated in the Learning Commons. Hours and directions for student ID card processing are listed on the Student ID Card web page:
I am currently taking a Leeward CC MATH course that starts before the Math Lab opens or ends after the Math Lab closes.
May I come in during non-operational hours to borrow or return a calculator or other Math Lab resources? No. All Math Lab resources must be checked in and out during posted Math Lab hours of operation. If a student needs to borrow a calculator for a quiz/exam, they should make arrangements with their designated instructor prior to the testing date.
How do I submit a question, comment, or suggestion concerning the Math Lab? The Math Lab Manager’s office is located in MS-205 and may be reached by calling (808) 455-0400 or emailing lccmath@hawaii.edu
General Policies
Cell Phone Policy
Everyone entering the Math Lab is expected to be courteous and responsible in the use of cell phones. Cell phones must be turned off or set to silent mode. All cell phone conversations, even short ones, are to be conducted outside. Individuals who repeatedly fail to practice cell phone etiquette will no longer be welcome to use Math Lab facilities.
Personal Electronics
Many MATH courses have multi-media educational materials. The Math Lab welcomes the use of personal electronics for such materials provided students use headphones to prevent distractions to others.
Food and Drink
Light, neat, non-distracting snacks and covered beverages are allowed in the Math Lab. Items that might distract others in the Lab, such as those listed below, must be enjoyed outside.
• No messy or greasy food items
• No full meals (plate lunch, bento, etc.)
• No distracting foods (noisy, aromatic, etc.)
After enjoying a snack or beverage, students are expected to clean their areas. The Math Lab is a working lab, so ongoing and active tutoring and discussion of MATH and QM topics, procedures, and problems are expected. Discussion of other topics should be kept quiet and brief to help maintain an atmosphere conducive to the study and learning of mathematics. Loud or distracting conversations must be held outside. Any extreme behavior will not be tolerated and will lead to immediate removal from the Math Lab.
Other Expectations
Scratch paper is available for students working on their Leeward CC MATH or QM coursework; however, students are expected to bring in their own writing instrument. Students needing a pencil should visit the Leeward CC Bookstore, which sells supplies in its vending machines. The Math Lab is a designated learning facility; therefore, students are expected to dress appropriately. In addition, the Math Lab is air conditioned and can get quite cold, so students should plan to dress accordingly.
Policy on Smoking
In accordance with the State’s No Smoking Act, Act 108, SLH 1976 and Act 245, SLH 1987, and University policy, smoking is prohibited in any of the classrooms, laboratories, conference rooms, and other covered structures of the College. Additional restrictions can be found online at http://www.hawaii.edu/smokingpolicy
Contact Info
Location: MS 204
Hours of Operation
Fall 2024 (September 3 – December 19)
Monday – Friday, 8:00AM – 4:30PM
CLOSED on non-instructional days including weekends and state holidays
Math Lab Manager
Office: MS 205
Office Phone: (808) 455-0400
Problem #72
For more information on this collection, see Pillow-Problems by Charles L. Dodgson (Lewis Carroll). Note that Carroll most likely intended this problem, in the subject area of what he describes as "transcendental probabilities", to be a joke, not to be taken seriously.
A bag contains 2 counters, as to which nothing is known except that each is either black or white. Ascertain their colours without taking them out of the bag.
One is black, and the other white.
We know that, if a bag contained 3 counters, 2 being black and one white, the chance of drawing a black one would be ⅔; and that any other state of things would not give this chance. Now the chances, that the given bag contains (α) BB, (β) BW, (γ) WW, are respectively ¼, ½, ¼. Add a black counter. Then the chances, that it contains (α) BBB, (β) BWB, (γ) WWB, are, as before, ¼, ½, ¼. Hence the chance, of now drawing the black one,
= ¼ · 1 + ½ · ⅔ + ¼ · ⅓ = ⅔.
Hence the bag now contains BBW (since any other state of things would not give this chance); hence, before the black counter was added, it contained BW, i.e. one black counter and one white.
The end.
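The arithmetic in Carroll's tongue-in-cheek argument can be checked mechanically. The sketch below (variable names are our own) uses exact fractions for the prior chances of BB, BW, WW and the draw probabilities after the black counter is added.

```python
from fractions import Fraction as F

# Prior chances that the bag holds BB, BW, WW
priors = {"BB": F(1, 4), "BW": F(1, 2), "WW": F(1, 4)}

# After adding one black counter, probability of drawing a black counter
p_black = {"BB": F(3, 3), "BW": F(2, 3), "WW": F(1, 3)}

chance = sum(priors[s] * p_black[s] for s in priors)
print(chance)  # 2/3 -- exactly the chance a BBW bag would give
```

Of course, the fallacy lies in the final inference, not the arithmetic: a ⅔ chance averaged over several possible states of the bag does not mean the bag is literally BBW.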
Least Mean Square (LMS) Equalizer - A Tutorial | Wireless Pi
The LMS algorithm was first proposed by Bernard Widrow (a professor at Stanford University) and his PhD student Ted Hoff (the architect of the first microprocessor) in the 1960s. Due to its simplicity and robustness, it has been the most widely used adaptive filtering algorithm in real applications. An LMS equalizer in communication system design is just one of those beautiful examples, and its other applications include noise and echo cancellation, beamforming, neural networks and so on.
The wireless channel is a source of severe distortion in the received (Rx) signal and our main task is to remove the resulting Inter-Symbol Interference (ISI) from the Rx samples. Equalization refers to any signal processing technique in general, and filtering in particular, that is designed to eliminate or reduce this ISI before symbol detection. In essence, the output of an equalizer should be a Nyquist pulse for a single symbol case. A conceptual block diagram of the equalization process is shown in the figure below, where the composite channel includes the effects of the Tx/Rx filters and the actual channel.
A classification of equalization algorithms was described in an earlier article. Here, we start with the motivation to develop an automatic equalizer with self-adjusting taps. A few reasons for an adaptive equalizer are as follows.
Channel state information
In developing the coefficients for an equalizer, we usually assume that perfect channel information is available at the Rx. While this information, commonly known as Channel State Information (CSI), can be gained from a training sequence embedded in the Rx signal, the channel characteristics are unknown in many other situations. In any case, the quality of the channel estimate is only as good as the channel itself. As the wireless channel deteriorates, so does the reliability of its estimate.
Time variation
Even when the wireless channel is known to a reasonably accurate level, it eventually changes after some time. We saw in detail how time variations in the channel unfold on the scale of the coherence time and how this impacts the Rx signal. For that reason, an equalizer needs to automatically adjust its coefficients in response to the channel variations. Nature favours those who adapt.
Utilization of available information
In situations where the channel is estimated from a training sequence and a fixed equalizer is employed, it is difficult to incorporate further information obtained from the data symbols. For an adaptive equalizer, the taps can be adjusted first from the training sequence and then easily driven through the data symbols out of the detector in a decision-directed manner in real time. We next develop an intuitive understanding of its operation.
An Intuitive Understanding
Before we discuss the LMS algorithm, let us understand this concept through an analogy that appeals to intuition. For what follows, a gradient is just a mere generalization of the derivative (slope of the tangent to a curve) for a multi-variable function. We need a ‘multi-variable derivative’ (i.e., a gradient) in our case because the equalizer has multiple taps, all of which need to be adjusted.
Assume that you are on a holiday with your family and spending the day in a nice theme park. You are going upwards on a roller coaster and hence the gradient is in the upward direction, as shown in the figure below. Also assume that you are sitting in the front seat, have access to a (hypothetical) set of brakes installed, and there is no anti-rollback mechanism to prevent the coaster from sliding down the hill. Suddenly the electricity in the park goes out.
Leaving the roller coaster to slide all the way down the hill would be catastrophic, so your strategy is to
• first apply the brakes, preventing the sudden and rapid drop,
• release the brakes to slightly descend in the direction opposite to the gradient,
• apply the brakes again, and
• repeat this process.
Repeating the above steps in an iterative manner, you will safely reach the equilibrium point. With this intuition in place, we can discuss the LMS algorithm next.
The LMS Algorithm
Refer to the top figure in this article and assume that the relevant parameters have the following notations.
• The $m$-th data symbol is denoted by $a[m]$ that represents an amplitude modulated system.
• The corresponding matched filter output is written as $z[m]$.
• The equalizer coefficients are given by $q[m]$.
• The equalizer output is the signal $y[m]$.
• The error signal at this moment is denoted by $e[m]$.
We start with a performance function that has a similar valley-type shape, e.g., the squared error.
$$\begin{aligned} \text{Mean}~ |e[m]|^2 &= \text{Mean}~ \left|a[m] - y[m]\right|^2 = \text{Mean}~ \left|a[m] - z[m]*q[m]\right|^2\\ &= \text{Mean}~ \left|a[m] - \sum _{l=-L_q}^{L_q} q[l]\cdot z[m-l]\right|^2 \end{aligned}$$
The figure on the left below draws this quadratic curve as a function of one equalizer tap $q_0[m]$, which is similar to the roller coaster analogy we saw before. On the same note, the right figure draws the same relationship as a quadratic surface of two equalizer taps, $q_0[m]$ and $q_1[m]$, where a unique minimum point can be identified. Extending the same concept further, Mean $|e[m]|^2$ can be dealt with as a function of $2L_q+1$ equalizer taps $q_l[m]$ and a unique minimum point can be reached. It is unfortunate that we cannot graphically draw the same relationship for a higher number of taps. We can start at any point on this Mean $|e[m]|^2$ curve and take a small step in a direction where the decrease in the squared error is the fastest, thus proceeding as follows.
• If the equalizer taps $q[m]$ were constant, we could use the symbol time index $m$ for the equalizer tap $q[m]$ (because we are treating it as a discrete-time sequence). But our equalizer taps are changing with each symbol time, so we need to bring in two indices: $m$ for time and $l$ for the equalizer tap number, which we assign as a subscript. Each tap for $l=-L_q,\cdots,L_q$ is updated at symbol time $m+1$ according to
$$q_l[m+1] = q_l[m] + \text{a small step}$$
Here, $q_l[m]$ means the $l^{th}$ equalizer tap at symbol time $m$.
• The fastest reduction in error happens when our direction of update is opposite to the gradient of Mean $|e[m]|^2$ with respect to the equalizer tap weights.
$$q_l[m+1] = q_l[m] + \text{a small step opposite to the gradient of Mean}~ |e[m]|^2$$
• The gradient is a mere generalization of the derivative and we bring in a minus sign for moving in its opposite direction.
$$\begin{equation}\label{eqEqualizationTapUpdate} q_l[m+1] = q_l[m] - \text{Mean}~ \frac{d}{dq_l[m]} |e[m]|^2 \end{equation}$$
• Utilizing the definition of $e[m]$ as
$$e[m] = a[m] - y[m] = a[m] - \sum _{l=-L_q}^{L_q} q[l]\cdot z[m-l]$$
the derivative for each $l$ can be written as
$$\frac{d}{dq_l[m]} |e[m]|^2 = -2\, e[m]\cdot z[m-l], \qquad \text{for each } l = -L_q,\cdots,L_q$$
Substituting this value back in Eq (\ref{eqEqualizationTapUpdate}), we can update the set of equalizer taps at each step as
$$q_l[m+1] = q_l[m] + 2~ \text{Mean}~ \Big\{e[m]\cdot z[m-l]\Big\}$$
• Now let us remove the mean altogether, a justification for which we will shortly see, and get
$$q_l[m+1] = q_l[m] + 2\, e[m]\cdot z[m-l]~?$$
• The question mark above is there to indicate that we are probably forgetting something. Remember the brakes in the roller coaster analogy? We need to include an effect similar to the brakes here, otherwise the effect of the gradient on its own will result in large swings in the updated taps. Let us call this parameter representing the brakes a step size and denote it as $\mu$. We will see the effect of varying $\mu$ below.
$$\begin{equation}\label{eqEqualizationLMS} q_l[m+1] = q_l[m] + 2\mu \cdot e[m]\cdot z[m-l], \qquad \text{for each } l = -L_q,\cdots,L_q \end{equation}$$
A brilliant trick
In an actual derivation of an optimal filter, it is the statistical expectation of the squared error that is minimized, i.e., $\text{Mean}~ |e[m]|^2$, and hence the term mean squared error. Now if Widrow and Hoff wanted to derive the adaptive algorithm that minimizes the mean squared error, they needed to obtain the statistical correlations between the Rx matched filtered samples themselves as well as their correlations with the actual data symbols. This is a very difficult task in most practical applications. So they proposed a completely naive solution for such a specialized problem by removing the statistical expectation altogether, i.e., just employ the squared error $|e[m]|^2$ instead of the mean squared error, Mean $|e[m]|^2$. This is what we saw in the algorithm equation derived above. To come up with such a successful workhorse for filter tap adaptation, when everyone else was running after optimal solutions, was a brilliant feat in itself. It turns out that as long as the step size $\mu$ is chosen sufficiently small, i.e., the brakes are tight enough in our analogy, the LMS algorithm is very stable — even though $|e[m]|^2$ at each single shot is a very coarse estimate of its mean.
The Two Stages
Figure below illustrates a block diagram for implementing an LMS equalizer. The matched filter output $z[m]$ is input to a linear equalizer with coefficients $q_l[m]$ at symbol time $m$. These taps are updated for symbol time $m+1$ by the LMS algorithm that computes the new taps through Eq (\ref{eqEqualizationLMS}). The curved arrow indicates the updated taps being delivered to the equalizer at each new symbol time $m$. The input to the LMS algorithm is the matched filter output $z[m]$ and the error signal $e[m]$. In general, there are two stages of the equalizer operation.
1.
Training stage: In the training stage, the error signal $e[m]$ is computed through subtracting the equalizer output $y[m]$ from the known training sequence $a[m]$.
2. Decision-directed stage: When the training sequence ends, the LMS algorithm uses the symbol decisions $\hat a[m]$ in place of the known symbols and continues henceforth.
The three steps performed by an LMS equalizer are summarized in the table below. It is important to know that since the update process continues with the input, the equalizer taps — after converging at the optimal solution given by the MMSE solution — do not stay there. Instead, they just keep hovering around the optimal values (unlike the roller coaster analogy, which comes to rest at the end) and add some extra error to the possible optimal solution.
If you are a radio/DSP beginner, you can ignore the next lines. Using $y[m] = z[m] * q_l[m]$ (where $*$ denotes convolution) and applying the multiplication rule of complex numbers, $I\cdot I - Q\cdot Q$ and $Q\cdot I + I\cdot Q$, we can see that these are $4$ real convolutions for a complex modulation scheme. While the inphase coefficients $q_{l,I}[m]$ combat the inter-symbol interference in the inphase and quadrature channels, the quadrature coefficients $q_{l,Q}[m]$ battle the cross-talk between the two channels. This cross-talk is usually caused by an asymmetric channel frequency response around the carrier frequency.
On the Step Size $\mu$
To observe the effect of the step size $\mu$ on the performance of the LMS equalizer, we incorporate $4$-QAM modulated symbols shaped by a Square-Root Raised Cosine pulse with excess bandwidth $\alpha=0.25$. The impulse response of the wireless channel is given by
$$h[m] = [0.1 ~~ 0.1 ~~ 0.67 ~~ 0.19 ~~ -0.02]$$
and the resulting curves are averaged over 100 simulation runs for a symbol-spaced equalizer. We consider the following three different values of $\mu$.
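The two stages above can be sketched in a few lines of Python (the function and variable names are our own, and a hard-decision 4-QAM slicer with unit-amplitude levels is assumed): during training the reference is the known symbol $a[m]$, and afterwards it is the detector decision $\hat a[m]$.

```python
def error_signal(y, m, train_syms):
    # e[m] = reference symbol - equalizer output y[m]
    if m < len(train_syms):
        a_ref = train_syms[m]                    # training stage: known a[m]
    else:                                        # decision-directed stage: a_hat[m]
        a_ref = complex(1 if y.real >= 0 else -1,
                        1 if y.imag >= 0 else -1)
    return a_ref - y

train = [1 + 1j, -1 + 1j]
print(error_signal(0.9 + 1.1j, 0, train))   # training error against known 1+1j
print(error_signal(0.8 - 1.2j, 9, train))   # past training: slicer decides 1-1j
```

The slicer here is a minimal stand-in for the symbol detector; in a real receiver the decision device matches whatever constellation is in use.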
$$
\mu = 0.01, \quad \mu = 0.04, \quad \mu = 0.1
$$

The figure below draws the corresponding curves for all three values of $\mu$, regarding which a few comments are in order.

• A large value of $\mu$ generates an equalizer response that converges faster than that for a smaller value of $\mu$. This makes sense from the form of the equalizer tap update, where the new tap at each symbol time $m$ is generated by adding $2\mu e[m]z[m-l]$ to the previous tap value. The larger the $\mu$, the larger the update value and hence the faster the convergence.

• So why not choose $\mu$ as large as possible? From the roller coaster analogy, it can flip over in any direction if it is thrown towards the equilibrium point too quickly without properly applying the brakes. Recall that the LMS algorithm, after converging to the minimum error, does not stay there and fluctuates around that value. The difference between this minimum error and the actual error is the excess mean square error. While it is not clear from the figure above, a larger $\mu$ results in a greater excess error, and hence there is a tradeoff between faster convergence and a lower error.

• From the update expression $2\mu e[m]z[m-l]$, which dictates how each tap is generated from the previous tap value, we also infer that for a given $\mu$, the convergence behaviour (and stability) of the LMS algorithm also depends on the signal power at the equalizer input. In a variant known as the normalized LMS equalizer, this signal power is estimated to normalize $\mu$ at each symbol time $m$.

• For $\mu=0.1$, the corresponding equalizer taps are shown converging to their final values in the figure below. The equalizer takes many hundreds of symbols before approaching the tap values with an acceptable $\text{Mean}~|e[m]|^2$. This is typical of iterative processing of this kind.

• The convergence time of the LMS equalizer also depends on the actual channel frequency response.
If there is not enough power in a particular spectral region, it becomes difficult for the equalizer to prepare a compensation response. As a result, a spectral null reduces the convergence speed and hence requires a significantly larger number of symbols. This has been one of the bottlenecks in high-rate wireless communication systems and was the motivation behind designing frequency-domain equalizers.

• An LMS equalizer is also extensively used in high-speed serial links in conjunction with Decision Feedback Equalization (DFE).

• Finally, the process of equalizer tap convergence is more interesting to watch from a constellation point of view as it unfolds in a 3D space for the equalizer output $y[m]$, which I call a dynamic scatter plot. If this page were a 3D box and we could go towards one side of the scatter plot, we would view it as in the figure below. Since the equalizer taps are initialized as all zeros, the equalizer output $y[m]$ starts from zero. Then, the four $4$-QAM constellation points trace a trajectory similar to that taken by $\text{Mean}~|e[m]|^2$ and $q_l[m]$ before. The figure has been plotted for $\mu=0.1$. Note that the varying amount of gap between some constellation points arises from the randomness of the data symbols, consisting of one out of four possible symbols during each $T_M$.

Accelerating the Convergence

An LMS equalizer has been the workhorse for wireless communication systems throughout the previous decades. However, we saw in the above example that it takes many hundreds or thousands of symbols before the LMS equalizer converges to the optimum tap values. As a general convergence property, remember that the shortest settling time is obtained when the power spectrum of the symbol-spaced equalizer input (which suffers from aliasing due to 1 sample/symbol) is flat and the step size $\mu$ is chosen to be the inverse of the product of the Rx signal power and the number of equalizer coefficients.
A smaller step size $\mu$ should be chosen when the variation in this folded spectrum is large, which leads to slower convergence. In the last two decades of the 20th century, there was significant interest in accelerating its convergence rate. Two of the most widely employed methods are explained below.

Variable step size $\mu$

It is evident that the convergence rate is controlled by the step size $\mu$. Just as we saw in the case of a Phase Locked Loop, where the PLL constants are reconfigured on the fly, it makes sense to start the LMS equalizer with a large value of $\mu$ to ensure faster convergence at the expense of significant fluctuation during this process. After converging closer to the optimal solution, $\mu$ can be reduced in steps such that its value during the final tracking stage is small enough to satisfy the targeted excess mean square error.

Cyclic equalization

Instead of modifying $\mu$, this method focuses on the training sequence that is sent at the start of the transmission to help the Rx determine the synchronization parameters as well as the equalizer taps. It was discovered that if this training sequence is made periodic with the same period as the equalizer length, the taps can be computed almost instantly. This is done by exploiting the periodicity in the stored sequence $a[m]$ and the incoming signal $z[m]$. The periodicity implies that a Discrete Fourier Transform (DFT) of these sequences can be taken and multiplied point-by-point for each DFT index $k$. After normalizing the result for each $k$ by the Rx power in that spectral bin, an inverse Discrete Fourier Transform (iDFT) is taken to determine the equalizer taps $q[m]$. (On a side note, this same method can also be used to estimate the channel impulse response $h[m]$ instead of the equalizer taps, and then another equalizer, such as the maximum likelihood sequence estimator or the MMSE equalizer, can be used to remove the channel distortion.)
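The DFT recipe just described can be sketched in a few lines of NumPy. This is an illustrative, noise-free toy setup of my own (the channel taps, the equalizer length, and the use of an m-sequence as the periodic training signal are assumptions, not the article's simulation); it shows the taps coming out of one DFT, a per-bin normalization, and one iDFT:

```python
import numpy as np

# Periodic training sequence with period N = equalizer length. A ±1
# m-sequence is used because its DFT has no zero bins, so the per-bin
# division below is always well defined.
N = 15
reg = [1, 1, 1, 1]                               # 4-stage LFSR -> period-15 m-sequence
bits = []
for _ in range(N):
    bits.append(reg[-1])
    reg = [reg[-1] ^ reg[0]] + reg[:-1]
a = np.where(np.array(bits) == 1, 1.0, -1.0)     # stored sequence a[m]

h = np.array([0.67, 0.19, -0.02])                # example channel (noise-free)
# Steady-state response to a periodic input is a circular convolution:
z = np.fft.ifft(np.fft.fft(a) * np.fft.fft(h, N)).real

A, Z = np.fft.fft(a), np.fft.fft(z)
Q = A * np.conj(Z) / np.abs(Z) ** 2              # point-by-point product, normalized
q = np.fft.ifft(Q).real                          # by the Rx power in each bin

# Applying q (circularly) to z should recover the training symbols exactly
a_hat = np.fft.ifft(np.fft.fft(z) * np.fft.fft(q)).real
print(np.max(np.abs(a_hat - a)))                 # essentially zero
```

With no noise, $Q[k]$ reduces algebraically to $1/H[k]$, so the recovered symbols match the stored sequence to machine precision; with noise, the same normalization acts as a one-shot least-squares fit.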
Although cyclic equalization is a very interesting technique, it has largely been abandoned in favour of a better alternative for high-speed wireless communications, namely the frequency domain equalizer.

LMS Variants

In summary, the LMS equalizer has been incorporated into many commercial high-speed modems due to its simplicity and its ability to adapt the coefficients to a time-varying channel. For further simplification, some of its variants employ only the sign of the error signal or that of the input samples. Three such possible variants for tap adaptation are as follows.

$$
\begin{aligned}
q_l[m+1] &= q_l[m] + 2 \mu \cdot \text{sign}(e[m])\cdot z[m-l]\\
q_l[m+1] &= q_l[m] + 2 \mu \cdot e[m]\cdot \text{sign}(z[m-l])\\
q_l[m+1] &= q_l[m] + 2 \mu \cdot \text{sign}(e[m])\cdot \text{sign}(z[m-l])
\end{aligned}
$$

each for $l = -L_q,\cdots,L_q$. It is evident that the last variant is the simplest of all, consisting of just the signs of both quantities. On the downside, it also exhibits the slowest rate of convergence. The LMS algorithm, like other adaptive algorithms, behaves similarly to natural selection, with the difference that we cannot afford to generate hundreds of variants at each step to find the best one. Instead of iteration, variation and then selection, we select the likely best variant in a serial evolutionary manner.
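As a rough illustration of how the tap adaptation behaves in practice, here is a minimal real-valued NumPy sketch of the training stage. The channel, step size, number of taps, and BPSK symbol alphabet are my own choices for the example, not the article's exact simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Known training symbols a[m] (BPSK for simplicity) through a short channel
a = rng.choice([-1.0, 1.0], size=5000)
h = np.array([0.1, 0.1, 0.67, 0.19, -0.02])      # example channel taps
z = np.convolve(a, h)[: len(a)]                  # equalizer input z[m], noise-free

L_q = 7                                          # taps span l = -L_q, ..., L_q
q = np.zeros(2 * L_q + 1)                        # taps initialized to all zeros
mu = 0.01                                        # step size

sq_err = []
for m in range(L_q, len(a) - L_q):
    z_vec = z[m - L_q : m + L_q + 1][::-1]       # z[m-l] for l = -L_q, ..., L_q
    y = q @ z_vec                                # equalizer output y[m]
    e = a[m] - y                                 # training-stage error e[m]
    q += 2 * mu * e * z_vec                      # LMS update; swap in np.sign(e) or
    sq_err.append(e * e)                         # np.sign(z_vec) for the sign variants

print(np.mean(sq_err[:200]), np.mean(sq_err[-200:]))
```

Running this, the average squared error over the last few hundred symbols is far below that of the first few hundred, illustrating the convergence (and residual hovering) discussed above; increasing `mu` speeds up the drop at the cost of a larger excess error.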
9.3.7.16 The (Plot Details) Vector Tab

The 2D vectors in rectangular, polar and ternary coordinates are customized on the Plot Details Vector tab. To customize the vectors in a 3D vector graph, go to the 3D Vector tab instead.

[Screenshots: the Vector tab for XYXY graphs in rectangular and polar coordinates; the Vector tab for the Compass Plot; the Vector tab for XYAM graphs in rectangular coordinates; the Vector tab for XYZXYZ vector graphs in ternary coordinates.]

Arrowheads group controls

Scale Length with Magnitude: Scales the lengths of the arrowheads according to changes in the magnitude of the vectors. Available options include:
• None: All the arrowheads share the same length, specified by Length.
• Log: The lengths of the arrowheads have a logarithmic relationship with the lengths of the vectors.
• Linear: The lengths of the arrowheads have a linear relationship with the lengths of the vectors.

Length: Determines the length of the arrowheads. Units are measured in points.

Angle: Determines the arrowhead angles in degrees. Select the Closed radio button to display filled arrowheads, or the Open radio button to display hollow (transparent) arrowheads.

Color: Select a vector color from the Color button.

Style: Change the line style for the vector. You can also make these changes with the Line/Border tool in the Style toolbar.

Width: Type or select the desired vector line width from the Width combination box. The line width is measured in points, where 1 point = 1/72 inch. For all 2D vector plots with this Line tab, in the Width drop-down list you can select a column to map the line width to that column. Please note that only columns in the current worksheet will be listed in the drop-down list. Once a column is selected, you can also specify a Scaling Factor to multiply the width column by a value to define the vector line width. You can refer to the Scaling Factor for Symbol Size.

Transparency: Adjust the slider or enter a number in the combo box to apply transparency to vectors.
Scale adjusts from 0 (not transparent) to 100 (fully transparent). For special points, an Auto check box is used to follow the transparency setting of the other vectors. Clear the check box to adjust the transparency of the special point.

End Point group controls (XYXY or Ternary)

X End: Select the column that contains the X end point values from the X End drop-down list, for both the XYXY vector and the ternary vector graph.
Y End: Select the column that contains the Y end point values from the Y End drop-down list, for both the XYXY vector and the ternary vector graph.
Z End: Select the column that contains the Z end point values from the Z End drop-down list, for the ternary vector graph.

• For the Polar Vector theta r theta r graph, X End is used to specify the column which contains the angle coordinates of the end points on the Angular Axis, and Y End is used to specify the column which contains the radial coordinates of the end points on the Radial Axis.
• For the Compass Plot, the center of the polar graph is the start point, and the source XY columns are the columns which contain the angle coordinates and the radial coordinates of the end points, respectively. So, there are no X End and Y End options in the Vector tab.

Vector Data group controls (XYAM)

These controls apply only to XYAM graphs.

Angle: Select the column that contains the vector angle values from the Angle combination box. Alternatively, select a value from this combination box. The angle is measured counterclockwise from a line parallel to the X axis, bisecting the vector. The units are controlled by the Angular Unit drop-down list below.

Magnitude: Select the column that contains the vector magnitude values from the Magnitude combination box. Alternatively, select a value from this combination box. Units are measured in points.

Magnitude (XYAM)

These controls apply only to XYAM graphs.

Magnitude Multiplier: Select or type a value to proportionally increase or decrease the length of the vectors from the Magnitude Multiplier combination box.
For example, type 0.5 to draw the vectors at half their original length. The default value is 1, so that the Magnitude combination box value determines the vector lengths.

Magnitude in real world space: If this check box is selected, the magnitudes are interpreted in real-world coordinates and are used to compute fixed x, y values of the end point of the vector. The magnitudes displayed in the graph will change when the X or Y axis scales are changed, but they will remain constant in real-world coordinates. If this check box is not selected, the magnitudes are interpreted relative to the graph layer.

Position group controls (XYAM)

Select the desired radio button to apply the XY coordinate values to the Head, Midpoint, or Tail of the vector.

Angle (XYAM)

These controls apply only to XYAM graphs.

Angle Units: Select the unit for the angles.
• System: Uses the setting in the Angular Units group in Preferences: Options as the unit for angles.
• Radians: Uses radians as the unit for angles.
• Degrees: Uses degrees as the unit for angles.
• Grads: Uses grads as the unit for angles.

• Polar: If this radio button is selected, the Offset Angle is set to 0 and the Direction to CCW. Both options are grayed out, but updated.
• Meteorological: If this radio button is selected, the Offset Angle is set to 90 and the Direction to CW. Both options are grayed out, but updated.
• Custom: If this radio button is selected, the Offset Angle and Direction options are editable.

Offset Angle (deg.): Type or select the degree of offset angle for the vectors. When the vector Angle value and the Offset Angle are 0, the vector direction will be at 3 o'clock.

Direction: Specify the direction for the Offset Angle.
• CW: The vectors will rotate in the clockwise direction.
• CCW: The vectors will rotate in the counter-clockwise direction.
Simple Excel Formula to Split Integer Values into Bit Fields | Oxmaint Community

I'm in search of a simple and efficient Excel formula to split integer values into distinct bit fields. Can anyone suggest a straightforward formula for this task?

Top Replies

One way to obtain a binary value is by using the DEC2BIN function in Excel. After getting the binary value, you can easily extract specific characters or bits of interest using the MID function. This method is commonly used for data manipulation and analysis in spreadsheets.

To extract a specific binary digit from a number, use the formula: =MID(TEXT(DEC2BIN(123,8),"00000000"),X,1). In this formula, 123 represents the number (which can be a cell reference) and 8 indicates the number of bits being used. X denotes the position of the binary digit you wish to extract, counted from the left (starting from the most significant bit, not the least significant). Make sure to enable the Analysis ToolPak for the DEC2BIN function to work properly.

The spreadsheet linked below contains a VBA custom function for bitwise AND operations between integers. This custom function, BITAND, located in Module 1 of the VBA sheets, enables users to determine the truth value of specific bits by ANDing integers with 2^n. By using this function in conjunction with the Excel IF function, users can easily obtain binary results without relying on TEXT functions. This tool is especially helpful for handling unsigned integers. Additionally, for quick binary conversions, consider downloading the free LL-SOLVER tool available in MRPLC's download section. This tool makes converting and viewing 16-bit integers in hex, decimal, and binary formats simpler than in Excel. Ensure that your Excel security settings allow macros to run.
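For readers who want to sanity-check the two spreadsheet ideas above outside Excel, both can be mirrored in a few lines of Python (the helper names here are my own, not part of any library):

```python
def dec2bin_mid(value: int, width: int, x: int) -> str:
    """Mirror of =MID(TEXT(DEC2BIN(value, width), "0...0"), x, 1):
    x counts from the left, i.e., from the most significant bit, starting at 1."""
    return format(value, f"0{width}b")[x - 1]

def bitand_test(value: int, n: int) -> bool:
    """Mirror of the VBA BITAND trick: AND the integer with 2^n to get
    the truth value of bit n (counted from the least significant, 0-based)."""
    return (value & (1 << n)) != 0

print(dec2bin_mid(123, 8, 1))   # most significant bit of 0b01111011 -> "0"
print(bitand_test(123, 0))      # least significant bit is set -> True
```

Note the two conventions differ: the MID/DEC2BIN formula indexes from the most significant bit, while the BITAND mask indexes from the least significant.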
Take advantage of these resources to streamline your data manipulation tasks. Visit http://forums.mrplc.com/index.php?download=606 for the LL-SOLVER download.

I am grateful for this excellent code. Thank you!

An innovative approach to streamlining Excel functions with VBA code automation: this method uses a formula structure that can be applied across multiple cells for efficient computation. By inputting values in specific cells, the formula generates the desired output, which can optimize workflows and improve productivity. If you require further customization, the formulas can be tailored to suit your specific requirements.

More Replies

Unfortunately, copying data from Excel was unsuccessful. I will go ahead and attach the file instead.

A Different Approach to Solve the Problem

Here is a long-standing function I have been using for binary sequence generation. This code accepts an input integer value (H) and the desired length of the binary sequence (L). It systematically calculates the binary sequence and returns it as a string. The function starts by initializing variables and setting the initial value of the binary sequence as "B". It then loops through each bit of the binary sequence, appending either a "0" or a "1" based on the division of the input integer value by 2. The loop continues until the input value becomes less than 1. After generating the basic binary sequence, the code ensures that the length matches the desired output length by adding extra "0"s to the beginning if necessary. Even though this code may seem complex, it has been tried and tested over time. I hope it proves useful to you, despite my delayed sharing.

This comprehensive Excel guide covers a wide range of functions. Please note that there are some Russian comments throughout the book.
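The divide-by-2 routine described above translates almost line for line into Python; this is a hypothetical re-implementation of the poster's VBA logic, not the original code:

```python
def binary_sequence(h: int, l: int) -> str:
    """Return the binary string of h, left-padded with zeros to length l,
    using the repeated divide-by-2 loop described in the reply."""
    b = ""
    while h >= 1:
        b = str(h % 2) + b        # append "0" or "1" from the remainder
        h //= 2                   # integer-divide by 2 and continue
    return "0" * (l - len(b)) + b # pad with leading zeros to length l

print(binary_sequence(123, 8))    # -> "01111011"
```

The same result comes from Python's built-in formatting, `format(123, "08b")`, which is the usual idiom outside a spreadsheet.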
At this point, we have discovered numerous methods for achieving the same result. We just need to find one more to round it up to a nice round number and complete the list. When it comes to decoding binary to/from decimal, I typically rely on the ^ (power of) operator. This spreadsheet includes three key functions: converting decimal to binary, converting binary to decimal, and checking the status of a bit within a word.

Looking for a quick Excel solution to extract integer values into separate bit fields? Sparky_289 asked for help with this, and thanks to Panic Mode and Kolyur's suggestions, a method was found. The formula used was =EXTRAE((DEC.A.BIN(H3,8)),H2,1) in a Latam configuration. In this formula, EXTRAE works like MID, H3 represents the integer, 8 indicates the number of positions (use 16 for a DINT), and H2 specifies the position of the bit to be extracted. This formula was used in a project to simplify certification tracking for technicians with different roles. By encoding the required documents as integers and expanding them into reports using barcodes or QR codes, specific position profiles such as welders, electricians, and programmers can be easily managed.

Frequently Asked Questions (FAQ)

FAQ 1. Is there a way to split integer values into bit fields using Excel formulas?
Answer: Yes, it is possible to split integer values into bit fields using Excel formulas. You can achieve this by using bitwise functions such as BITAND together with text functions such as LEFT and MID in Excel.

FAQ 2. Can you provide an example of a simple Excel formula to split integer values into bit fields?
Answer: Sure! One way to split integer values into bit fields in Excel is by using a combination of bitwise operations and functions. For example, you can use the formula =BITAND(A1, 1) to extract the least significant bit of the integer in cell A1.

FAQ 3. How can I efficiently split integer values into distinct bit fields in Excel?
Answer: To efficiently split integer values into distinct bit fields in Excel, you can use a combination of bitwise operations and Excel functions tailored to extract specific bits from the integer.

FAQ 4. Are there any specific considerations to keep in mind when splitting integer values into bit fields in Excel?
Answer: When splitting integer values into bit fields in Excel, it's important to understand the binary representation of the integers and the bitwise operations needed to extract specific bits accurately. It's also crucial to handle any potential overflow or underflow scenarios that may arise during the bit field splitting process.
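Putting the FAQ answers together, splitting a whole integer into its bit fields is a loop over bit positions; a short Python sketch (helper names are my own) makes the intended semantics concrete:

```python
def bit_from_msb(value: int, x: int, width: int = 8) -> int:
    """Bit at position x, counted from the most significant bit (1-based),
    matching the MID/DEC2BIN convention used in the replies above."""
    return (value >> (width - x)) & 1

def bits(value: int, width: int = 8) -> list[int]:
    """Split an integer into its bit fields, most significant bit first."""
    return [bit_from_msb(value, x, width) for x in range(1, width + 1)]

print(bits(123))        # 123 = 0b01111011 -> [0, 1, 1, 1, 1, 0, 1, 1]
print(bits(5, 4))       # 5 = 0b0101 -> [0, 1, 0, 1]
```

In a spreadsheet, the equivalent is to fill one cell per position X with the MID/DEC2BIN formula, or one =BITAND(A1, 2^n) per bit when indexing from the least significant end.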
Learning Curves: Theory, Models, and Applications (Industrial Innovation)

E-Book Overview

Written by international contributors, Learning Curves: Theory, Models, and Applications first draws a learning map that shows where learning is involved within organizations, then examines how it can be sustained, perfected, and accelerated. The book reviews empirical findings in the literature in terms of different sources for learning and partial assessments of the steps that make up the actual learning process inside the learning curve. Traditionally, books on learning curves have focused either on cost accounting or production planning and control. In these books, the learning curve has been treated as a forecasting tool. This book synthesizes current research and presents a clear picture of organizational learning curves. It explores how organizations improve other measures of organizational performance including quality, inventory, and productivity, then looks inside the learning curve to determine the actual processes through which organizations learn.

E-Book Content

LEARNING CURVES: Theory, Models, and Applications
Edited by Mohamad Y. Jaber

Industrial Innovation Series
Series Editor: Adedeji B. Badiru, Department of Systems and Engineering Management, Air Force Institute of Technology (AFIT), Dayton, Ohio

PUBLISHED TITLES
• Computational Economic Analysis for Engineering and Industry, Adedeji B. Badiru & Olufemi A. Omitaomu
• Conveyors: Applications, Selection, and Integration, Patrick M. McGuire
• Global Engineering: Design, Decision Making, and Communication, Carlos Acosta, V. Jorge Leon, Charles Conrad, and Cesar O. Malave
• Handbook of Industrial Engineering Equations, Formulas, and Calculations, Adedeji B. Badiru & Olufemi A. Omitaomu
• Handbook of Industrial and Systems Engineering, Adedeji B. Badiru
• Handbook of Military Industrial Engineering, Adedeji B. Badiru & Marlin U. Thomas
• Industrial Project Management: Concepts, Tools, and Techniques, Adedeji B. Badiru, Abidemi Badiru, and Adetokunboh Badiru
• Inventory Management: Non-Classical Views, Mohamad Y. Jaber
• Kansei Engineering (2-volume set): Innovations of Kansei Engineering, Mitsuo Nagamachi & Anitawati Mohd Lokman; Kansei/Affective Engineering, Mitsuo Nagamachi
• Knowledge Discovery from Sensor Data, Auroop R. Ganguly, João Gama, Olufemi A. Omitaomu, Mohamed Medhat Gaber, and Ranga Raju Vatsavai
• Learning Curves: Theory, Models, and Applications, Mohamad Y. Jaber
• Moving from Project Management to Project Leadership: A Practical Guide to Leading Groups, R. Camper Bull
• Quality Management in Construction Projects, Abdul Razzak Rumane
• Social Responsibility: Failure Mode Effects and Analysis, Holly Alison Duckworth & Rosemond Ann Moore
• STEP Project Management: Guide for Science, Technology, and Engineering Projects, Adedeji B. Badiru
• Systems Thinking: Coping with 21st Century Problems, John Turner Boardman & Brian J. Sauser
• Techonomics: The Theory of Industrial Evolution, H. Lee Martin
• Triple C Model of Project Management: Communication, Cooperation, Coordination, Adedeji B. Badiru

FORTHCOMING TITLES
• Essentials of Engineering Leadership and Innovation, Pamela McCauley-Bush & Lesia L. Crumpton-Young
• Industrial Control Systems: Mathematical and Statistical Models and Techniques, Adedeji B. Badiru, Oye Ibidapo-Obe, & Babatunde J. Ayeni
• Modern Construction: Productive and Lean Practices, Lincoln Harding Forbes
• Project Management: Systems, Principles, and Applications, Adedeji B. Badiru
• Statistical Techniques for Project Control, Adedeji B. Badiru
• Technology Transfer and Commercialization of Environmental Remediation Technology, Mark N. Goltz

LEARNING CURVES: Theory, Models, and Applications
Edited by Mohamad Y.
Jaber

Boca Raton • London • New York

CRC Press is an imprint of the Taylor & Francis Group, an informa business

CRC Press, Taylor & Francis Group, 6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742

© 2011 by Taylor & Francis Group, LLC. CRC Press is an imprint of Taylor & Francis Group, an Informa business. No claim to original U.S. Government works. Version Date: 20110621. International Standard Book Number-13: 978-1-4398-0740-8 (eBook - PDF)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com

Dedication

To the soul of my father, and to my wife and sons.

Contents

Preface
Editor
Contributors

Part I: Theory and Models

Chapter 1. Learning Curves: The State of the Art and Research Directions (Flavio S. Fogliatto and Michel J. Anzanello)
Chapter 2. Inside the Learning Curve: Opening the Black Box of the Learning Curve (Michael A. Lapré)
Chapter 3. Learning and Thinking Systems (J. Deane Waldman and Steven A. Yourstone)
Chapter 4. From Men and Machines to the Organizational Learning Curve (Guido Fioretti)
Chapter 5. Management at the Flat End of the Learning Curve: An Overview of Interaction Value Analysis (Walid F. Nasrallah)
Chapter 6. Log-Linear and Non-Log-Linear Learning Curve Models for Production Research and Cost Estimation (Timothy L. Smunt)
Chapter 7. Using Parameter Prediction Models to Forecast Post-Interruption Learning (Charles D. Bailey and Edward V. McIntyre)
Chapter 8. Introduction to Half-Life Theory of Learning Curves (Adedeji B. Badiru)
Chapter 9. Influence of Breaks in Learning on Forgetting Curves (Sverker Sikström, Mohamad Y. Jaber, and W. Patrick Neumann)
Chapter 10. Learning and Forgetting: Implications for Workforce Flexibility in AMT Environments (Corinne M. Karuppan)
Chapter 11. Accelerated Learning by Experimentation (Roger Bohn and Michael A. Lapré)
Chapter 12. Linking Quality to Learning – A Review (Mehmood Khan, Mohamad Y. Jaber, and Margaret Plaza)
Chapter 13. Latent Growth Models for Operations Management Research: A Methodological Primer (Hemant V. Kher and Jean-Philippe Laurenceau)

Part II: Applications

Chapter 14. The Lot Sizing Problem and the Learning Curve: A Review (Mohamad Y. Jaber and Maurice Bonney)
Chapter 15. Learning Effects in Inventory Models with Alternative Shipment Strategies (Christoph H. Glock)
Chapter 16. Steady-State Characteristics under Processing-Time Learning and Forgetting (Sunantha Teyarachakul)
Chapter 17. Job Scheduling in Customized Assembly Lines Affected by Workers' Learning (Michel J. Anzanello and Flavio S. Fogliatto)
Chapter 18. Industrial Work Measurement and Improvement through Multivariate Learning Curves (Adedeji B. Badiru)
Chapter 19. Do Professional Services Learn, Sustain, and Transfer Knowledge? (Tonya Boone, Ram Ganeshan, and Robert L. Hicks)
Chapter 20. Learning Curves in Project Management: The Case of a "Troubled" Implementation
(Margaret Plaza, Daphne Diem Truong, and Roger Chow)
Chapter 21. Timing Software Upgrades to Maximize Productivity: A Decision Analysis Model Based on the Learning Curve (Aziz Guergachi and Ojelanki Ngwenyama)
Chapter 22. Learning Curves for CAD Competence Building of Novice Trainees (Ramsey F. Hamade)
Chapter 23. Developments in Interpreting Learning Curves and Applications to Energy Technology Policy (Bob van der Zwaan and Clas-Otto Wene)

Preface

Early investigations of the learning phenomenon focused on the behavior of individual subjects who were learning-by-doing. These investigations revealed that the time required to perform a task declined, but at a decreasing rate, as experience with the task increased. Such behavior was experimentally recorded and its data then fitted to an equation that adequately describes the relationship between the learning variables, namely that performance (output) improves as experience (input) increases. Such an equation is known as a "learning curve" equation. The learning curve has more general applicability and can describe the performance of an individual in a group, a group in an organization, and of an organization itself. Learning in an organization takes a more complex form than learning-by-doing. Learning in an organization occurs at different levels, involving functions such as strategic planning, personnel management, product planning and design, process improvement, and technological progress. Many experts today believe that for an organization (manufacturing or service) to sustain its competitive advantage, it has to have a steeper learning curve than its competitors. If the organization fails to have this, then it will forget and decay.
As predicted by Stevens (Management Accounting 77, 64–65, 1999), learning curves continue to be used widely today because of the demand for sophisticated high-technology systems and the increasing interest in refurbishment to extend asset life. Understanding and quantifying the learning process can therefore provide vital means to observe, track, and continuously improve processes in organizations across various sectors. Since the seminal review paper of Yelle (Decision Sciences 10(2), 302–328, 1979), two books have been published on learning curves. These books treated the learning curve as a forecasting tool with applications to accounting (A. Riahi-Belkaoui, 1986, The Learning Curve: A Management Accounting Tool, Quorum Books: Westport, CT) and as an industrial engineering tool (E. Dar-El, 2000, Human Learning: From Learning Curves to Learning Organizations, Kluwer: Dordrecht, the Netherlands), respectively. For the past decade or so, some research has focused on opening the black box of learning curves in order to understand how learning occurs within organizations in many sectors. Some of these studies have resulted from the careful examination of organizational systems in the manufacturing and service sectors. Recent studies show that applications of learning curves extend beyond engineering to include healthcare, information technology, technology assessment, postal services, the military, and more. This book is a collection of chapters written by international contributors who have for years been researching learning curves and their applications. It will help draw a learning map that shows the reader where learning is involved within organizations and how it can be sustained, perfected, and accelerated. The book is a suitable reference for graduate students and researchers in the areas of operations research/management science, industrial engineering, management (e.g., healthcare, energy), and the social sciences.
It is divided into two parts. The first part, consisting of Chapters 1–13, describes the theory and models of learning curves. The second part, consisting of Chapters 14–23, describes applications of learning curves. During the preparation of this book, one of the contributors, Professor Leo Schrattenholzer, sadly and unexpectedly passed away. Leo had initially asked Clas-Otto Wene and Bob van der Zwaan to be the co-authors of his book chapter and, thankfully and honorably, Bob and Clas-Otto carried on with the task of contributing Chapter 23 in memory of Leo. I would like to thank all those who have encouraged me to edit this book, in particular Professor A. Badiru and Professor M. Bonney. Finally, I would also like to thank my wife for her continued support, which made completing this book possible.

Mohamad Y. Jaber
Ryerson University
Toronto, ON, Canada

Editor

Mohamad Y. Jaber is a professor of Industrial Engineering at Ryerson University. He obtained his PhD in manufacturing and operations management from The University of Nottingham. His research expertise includes modeling human learning and forgetting curves, workforce flexibility and productivity, inventory management, supply chain management, reverse logistics, and the thermodynamic analysis of production and inventory systems. Dr. Jaber has published extensively in internationally renowned journals, such as Applied Mathematical Modelling, Computers & Industrial Engineering, Computers & Operations Research, European Journal of Operational Research, Journal of the Operational Research Society, International Journal of Production Economics, and International Journal of Production Research. His research has been well cited by national and international scholars. Dr. Jaber’s industrial experience is in construction management. He is the area editor (logistics and inventory systems) for Computers & Industrial Engineering, and a senior editor for Ain Shams Engineering Journal.
He is also on the editorial boards of the Journal of Operations and Logistics, the Journal of Engineering and Applied Sciences, and the Research Journal of Applied Sciences. Dr. Jaber is the editor of the book Inventory Management: Non-Classical Views, published by CRC Press. He is a member of the European Institute for Advanced Studies in Management, European Operations Management Association, Decision Sciences Institute, International Institute of Innovation, Industrial Engineering and Entrepreneurship, International Society for Inventory Research, Production & Operations Management Society, and Professional Engineers Ontario.

Contributors

Michel J. Anzanello, Department of Industrial Engineering, Federal University of Rio Grande do Sul, Rio Grande do Sul, Brazil
Flavio S. Fogliatto, Department of Industrial Engineering, Federal University of Rio Grande do Sul, Rio Grande do Sul, Brazil
Adedeji B. Badiru, Department of Systems and Engineering Management, Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio
Ram Ganeshan, Mason School of Business, College of William and Mary, Williamsburg, Virginia
Charles D. Bailey, School of Accountancy, University of Memphis, Memphis, Tennessee
Roger Bohn, School of International Relations and Pacific Studies, University of California, San Diego, California
Maurice Bonney, Nottingham University Business School, University of Nottingham, Nottingham, United Kingdom
Tonya Boone, Mason School of Business, College of William and Mary, Williamsburg, Virginia
Christoph H. Glock, Faculty of Economics, University of Wuerzburg, Wuerzburg, Germany
Aziz Guergachi, Ted Rogers School of Management, Ryerson University, Toronto, Ontario, Canada
Ramsey F. Hamade, Department of Mechanical Engineering, American University of Beirut, Beirut, Lebanon
Robert L. Hicks, Department of Economics and Thomas Jefferson School in Public Policy, College of William and Mary, Williamsburg, Virginia
Roger Chow, Dymaxium Inc., Toronto, Ontario, Canada
Mohamad Y. Jaber, Department of Mechanical and Industrial Engineering, Ryerson University, Toronto, Ontario, Canada
Guido Fioretti, Department of Management Science, University of Bologna, Bologna, Italy
Corinne M. Karuppan, Department of Management, Missouri State University, Springfield, Missouri
Mehmood Khan, Department of Mechanical and Industrial Engineering, Ryerson University, Toronto, Ontario, Canada
Hemant V. Kher, Alfred Lerner College of Business and Economics, University of Delaware, Newark, Delaware
Michael A. Lapré, Owen Graduate School of Management, Vanderbilt University, Nashville, Tennessee
Jean-Philippe Laurenceau, Department of Psychology, University of Delaware, Newark, Delaware
Edward V. McIntyre, Department of Accounting, Florida State University, Tallahassee, Florida
Walid F. Nasrallah, Faculty of Engineering and Architecture, American University of Beirut, Beirut, Lebanon
W. Patrick Neumann, Department of Mechanical and Industrial Engineering, Ryerson University, Toronto, Ontario, Canada
Ojelanki Ngwenyama, Ted Rogers School of Management, Ryerson University, Toronto, Ontario, Canada
Margaret Plaza, Ted Rogers School of Management, Ryerson University, Toronto, Ontario, Canada
Sverker Sikström, Department of Psychology, Lund University, Lund, Sweden
Timothy L. Smunt, Sheldon B. Lubar School of Business, University of Wisconsin–Milwaukee, Milwaukee, Wisconsin
Sunantha Teyarachakul, ESSEC Business School, Paris, France
Daphne Diem Truong, Royal Bank of Canada, Toronto, Ontario, Canada
J. Deane Waldman, Health Sciences Center, University of New Mexico, Albuquerque, New Mexico
Clas-Otto Wene, Wenergy AB, Lund, Sweden
Steven A. Yourstone, Anderson School of Management, University of New Mexico, Albuquerque, New Mexico
Bob van der Zwaan, Policy Studies Department, Energy research Centre of the Netherlands, Amsterdam, The Netherlands

Part I Theory and Models

1 Learning Curves: The State of the Art and Research Directions
Flavio S. Fogliatto and Michel J. Anzanello

CONTENTS
Introduction ........ 3
A Review of Learning and Forgetting Models ........ 4
Log-Linear Model and Modifications ........ 5
Hyperbolic Models ........ 9
Exponential Models ........ 10
Multivariate Models ........ 12
Forgetting Models ........ 12
Research Agenda ........ 13
References ........ 15

INTRODUCTION

Several authors have investigated, in various industrial segments, the way in which workers improve their performance as repetitions of a manual-based task take place; e.g., Anderson (1982), Adler and Clark (1991), Pananiswami and Bishop (1991), Nembhard and Uzumeri (2000a), Nembhard and Osothsilp (2002), Vits and Gelders (2002), and Hamade et al. (2007). A number of factors may impact the workers’ learning process, including: (1) task complexity, as investigated by Pananiswami and Bishop (1991) and Nembhard and Osothsilp (2002); (2) the structure of training programs (Terwiesch and Bohn 2001; Vits and Gelders 2002; Serel et al. 2003; Azizi et al. 2010); (3) workers’ motivation in performing the tasks (Kanfer 1990; Eyring et al. 1993; Natter et al. 2001; Agrell et al. 2002); and (4) prior experience with the task (Nembhard and Uzumeri 2000a, 2000b; Nembhard and Osothsilp 2002).
Other studies have focused on measuring knowledge and dexterity retention after task interruption; e.g., Dar-El and Rubinovitz (1991), Wickens et al. (1998), Nembhard and Uzumeri (2000b), and Jaber and Guiffrida (2008). The analyses presented in the works listed above were carried out by means of mathematical models suitable for describing the workers’ learning process. Learning curves (LCs) are deemed to be efficient tools for monitoring workers’ performance in repetitive tasks, leading to reduced process loss due to workers’ inability in the first production cycles, as reported by Argote (1999), Dar-El (2000), Salameh and Jaber (2000), and Jaber et al. (2008). LCs have been used to allocate tasks to workers according to their learning profiles (Teplitz 1991; Uzumeri and Nembhard 1998; Nembhard and Uzumeri 2000a; Anzanello and Fogliatto 2007), to analyze and control productive operations (Chen et al. 2008; Jaber and El Saadany 2009; Janiak and Rudek 2008; Wahab and Jaber 2010), to measure production costs as workers gain experience in a task (Wright 1936; Teplitz 1991; Sturm 1999), and to estimate costs of consulting and technology implementation (Plaza and Rohlf 2008; Plaza et al. 2010). In view of its wide applicability in production systems, and given the increasing number of publications on the subject, we discuss in this chapter the state of the art in relation to LCs, covering the most relevant models and applications. Mathematical aspects of univariate and multivariate LCs are discussed, describing their applications, modifications to suit specific purposes, and limitations. The chapter is divided into three sections, including the present introduction. The “A Review of Learning and Forgetting Models” section presents the main families of LC models and their mathematical aspects. The “Research Agenda” section closes the chapter by presenting three promising research directions on LCs.
A REVIEW OF LEARNING AND FORGETTING MODELS

An LC is a mathematical description of workers’ performance in repetitive tasks (Wright 1936; Teplitz 1991; Badiru 1992; Argote 1999; Fioretti 2007). Workers are likely to demand less time to perform tasks as repetitions take place, due to increasing familiarity with the operation and tools, and because shortcuts to task execution are found (Wright 1936; Teplitz 1991; Dar-El 2000). LCs were empirically developed by Wright (1936) after observing a decrease in the assembly costs of airplanes as repetitions were performed. Based on such empirical evidence, Wright proposed a rule, widely applied in the aeronautical industry of the time, according to which cumulative assembly costs were reduced on average by 20% each time the number of units manufactured was doubled (Teplitz 1991; Cook 1991; Badiru 1992; Argote 1999; Askin and Goldberg 2001). Measures of workers’ performance that have been used as dependent variables in LC models include: time to produce a single unit, number of units produced per time interval, cost to produce a single unit, and percentage of non-conforming units (Teplitz 1991; Franceschini and Galetto 2002). LC parameters may be estimated through a non-linear optimization routine aimed at minimizing the sum of squared errors. The model’s goodness of fit may be measured through the coefficient of determination (R²), the sum of squared errors, or the model’s adherence to a validation sample. The wide range of applications conceived for LCs has yielded univariate and multivariate models of varying complexity, which enable a mathematical representation of the learning process in different settings. The log-linear, exponential, and hyperbolic models are the best-known univariate models. These are described in the sections that follow.
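As an illustration not taken from the chapter, the estimation step just described can be sketched in a few lines. Wright’s log-linear curve y = C1·x^b (presented in the next section) is linear in log-log space, so its parameters can also be recovered by ordinary least squares on the logged data, a common alternative to a non-linear routine; the data values below are invented for the example:

```python
import math

def fit_loglinear(x, y):
    """Fit y = C1 * x**b by least squares on log y = log C1 + b * log x."""
    u = [math.log(v) for v in x]
    w = [math.log(v) for v in y]
    n = len(x)
    ub, wb = sum(u) / n, sum(w) / n
    # OLS slope and intercept in log-log space
    b = (sum(ui * wi for ui, wi in zip(u, w)) - n * ub * wb) / \
        (sum(ui * ui for ui in u) - n * ub * ub)
    c1 = math.exp(wb - b * ub)
    # Coefficient of determination (R^2) in the original scale
    y_hat = [c1 * v ** b for v in x]
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))
    ss_tot = sum((yi - sum(y) / n) ** 2 for yi in y)
    return c1, b, 1.0 - ss_res / ss_tot

# Noise-free data from an 80% curve (cost falls 20% per doubling,
# so b = log2(0.8)) are recovered essentially exactly:
x = list(range(1, 11))
y = [100.0 * v ** math.log(0.8, 2) for v in x]
c1, b, r2 = fit_loglinear(x, y)
```

With noisy field data the two approaches differ slightly, because least squares on logs weights the errors multiplicatively rather than additively.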
Log-Linear Model and Modifications

Wright’s model, also referred to as the “log-linear model,” is generally viewed as the first formal LC model. Its mathematical representation is given in Equation 1.1:

y = C1 * x^b,   (1.1)

where y is the average time or cost per unit demanded to produce x units, and C1 is the time or cost to produce the first unit. Parameter b, ranging from −1 to 0, describes the workers’ learning rate and corresponds to the slope of the curve. Values of b close to −1 denote a high learning rate and fast adaptation to task execution (Teplitz 1991; Badiru 1992; Argote 1999; Dar-El 2000). The following modification of Wright’s model enables the estimation of the total time or cost to produce x units:

y(1→x) = C1 * x^(b+1).   (1.2)

The time or cost required to produce a specific unit i may be determined by further modifying the model in Equation 1.1 as follows:

y_i = C1 * [i^(b+1) − (i − 1)^(b+1)].   (1.3)

Numerical results from Equations 1.1 through 1.3 are summarized in tables for different learning rates (Wright 1936; Teplitz 1991), enabling prompt estimation of the time required to complete a task. The log-linear model has several reported applications in the literature: for example, estimation of the time to task completion (Teplitz 1991), estimation of a product’s life cycle (Kortge et al. 1994), evaluation of the effect of interruptions on the production rate (Jaber and Bonney 1996; Argote 1999), and assessment of the production rate as product specifications are changed through the process (Towill 1985). These applications are detailed in the paragraphs that follow. Some industrial segments are well known for applying log-linear LCs and modifications to model the workers’ learning process.
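Equations 1.1 through 1.3 can be exercised with a small sketch (the first-unit cost of 100 is an assumed value, not from the text); for an 80% curve, b = log2(0.8), so the average cost drops 20% at each doubling of output:

```python
import math

C1 = 100.0            # illustrative time/cost of the first unit
b = math.log(0.8, 2)  # 80% curve: approximately -0.322

def avg_per_unit(x):
    """Equation 1.1: average time/cost per unit over the first x units."""
    return C1 * x ** b

def cumulative(x):
    """Equation 1.2: total time/cost of the first x units (x times Eq. 1.1)."""
    return C1 * x ** (b + 1)

def unit_time(i):
    """Equation 1.3: time/cost of unit i alone (difference of Eq. 1.2)."""
    return C1 * (i ** (b + 1) - (i - 1) ** (b + 1))
```

Because Equation 1.3 is a telescoping difference of Equation 1.2, summing unit_time over units 1..x reproduces cumulative(x), which is a quick consistency check when tabulating the curves.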
Examples include the semiconductor industry (Cook 1991; Gruber 1992, 1994, 1996, 1998), electronic and aerospace component manufacturers (Garvin 2000), the chemical industry (Lieberman 1984), automotive parts manufacturers (Baloff 1971; Dar-El 2000), and truck assemblers (Argote 1999). Use of the log-linear LC for cost monitoring is reported by Spence (1981), Teng and Thompson (1996), Teplitz (1991), and Rea and Kerzner (1997). The log-linear curve is the most used LC model for predicting the production rate in repetitive operations (Blancett 2002; Globerson and Gold 1997). Globerson and Levin (1987) and Vits and Gelders (2002) state that, although it presents a non-complex mathematical structure, the log-linear model describes most manual-based operations with acceptable precision. Blancett (2002) applied the model in several sectors of a building company, evaluating workers’ performance from product development to final manufacturing. With similar purposes, Terwiesch and Bohn (2001) analyzed the learning effect throughout the production process of a new product model. Finally, productivity in different cellular layouts was compared in Kannan and Palocsay (1999), using modifications of the log-linear LC. Production planning activities may also benefit from applications of the log-linear LC, as reported by Kopcso and Nemitz (1983), Muth and Spremann (1983), Salameh et al. (1993), Jaber, Rachamadugu, and Tan (1997), Jaber and Bonney (1999, 2001, 2003), and Pratsini (2000). These authors investigated the impact of workers’ learning on inventory policies, optimal lot size determination, and other production planning activities. The integration of log-linear LCs with tools designed to assist production control has also been reported in the literature. Yelle (1980, 1983), Kortge (1993), and Kortge et al.
(1994) combined LCs and product life cycle models aiming at improving production planning (for a review of product life cycle models see Cox [1967] and Rink and Swan [1979], among others). Pramongkit et al. (2000) proposed combining an LC with the Cobb-Douglas function to assess how specific factors, such as invested capital and an expert workforce, affected workers’ learning in Thai companies. Similarly, Pramongkit et al. (2002) used a log-linear LC associated with the total factor productivity tool to assess workers’ learning in large Thai companies. Finally, Karaoz and Albeni (2005) integrated LCs and indices describing technological aspects to evaluate workers’ performance in long production runs. The combination of log-linear-based LCs and quality control techniques was suggested by Koulamas (1992) to evaluate the impacts of product redesign on process quality and cost. Tapiero (1987) established an association between the learning process and quality control in production plants. Teng and Thompson (1996) assessed the way workers’ learning rate influences the quality and costs of new products in automotive companies. Further, Franceschini and Galetto (2002) used LCs to estimate the reduction of non-conformities in a juice production plant as workers increased their skills. Jaber and Guiffrida (2004) proposed modifications to Wright’s LC for processes generating defects that require rework; the resulting model was named the “quality learning curve” (QLC). Jaber and Guiffrida (2008) investigated the QLC under the assumption that production is interrupted for quality maintenance aimed at bringing it back to an in-control state. Finally, Yang et al. (2009) proposed a quality control approach integrating on-line statistical process control (SPC) and LCs. Applications of log-linear LCs have also been reported in the service sector. Chambers and Johnston (2000) applied LC modeling in two service providers: a large airline and a small bank.
Saraswat and Gorgone (1990) evaluated the performance of software installers in companies and private residences. Sturm (1999) verified a 15% cost reduction in the process of filling out clinical forms as the number of forms doubled, roughly adhering to Wright’s rule. Log-linear LCs have been thoroughly investigated regarding their limitations and modifications for specific purposes (Baloff 1966; Zangwill and Kantor 1998, 2000; Waterworth 2000). Modifications generally aim at eliminating inconsistencies in the mathematical structure of the log-linear model. Hurley (1996) and Eden et al. (1998) state that Wright’s model yields execution times asymptotically converging to zero as a function of the number of repetitions, which is not verified in practice. To overcome this, the authors propose the inclusion of a constant term in Wright’s model. Another drawback of Wright’s model is pointed out by Globerson et al. (1989): they claim that the model does not take into account workers’ prior experience, which clearly impacts production planning and workforce allocation. A further limitation of Wright’s LC is related to inconsistencies in the definition of, and inferences regarding, LC outputs. Towill (1985, 1990) and Waterworth (2000) claim that many applications treat the mean execution time up to unit x and the specific execution time of unit i as analogous. To correct this, Smunt (1999) proposed an alternative definition of repetition based on continuous learning theory. A factor that may undermine the fit of LC models is the variability in performance data collected from a given process (Yelle 1979). Globerson (1984) and Vigil and Sarper (1994) state that imprecise estimation of the learning parameter b jeopardizes the LC’s predictive ability. They suggest using confidence intervals on the response estimates for predicting a process production rate.
Globerson and Gold (1997) developed equations for estimating the LC’s variance, coefficient of variation, and probability density function. Finally, Smunt (1999) proposed modifications to Wright’s model in order to embrace situations where parameter b changes as the process takes place, while Smunt and Watts (2003) proposed the use of data aggregation techniques to reduce the variance of LC-predicted values. The use of cumulative units as the independent variable has also been investigated in the LC literature. Fine (1986) argues that the number of produced units may hide learning deficiencies, since it does not take into account the units’ quality. To overcome this, the author modified the LC to consider only data from conforming units. Li and Rajagopalan (1997) extended Fine’s (1986) idea to include both conforming and non-conforming data in the LC model. Finally, Jaber and Guiffrida (2004) proposed modifications to Wright’s model aimed at monitoring processes with a high percentage of non-conforming and reworked units. Modifications to Wright’s model were initially proposed to adapt the equation to specific applications, and some later became recognized as alternative models. An example is the Stanford-B model, presented in Equation 1.4, which incorporates workers’ prior experience:

y = C1 * (x + B)^b.   (1.4)

Parameter B, corresponding to the number of units of prior experience, shifts the LC downward with respect to the time/unit axis (Teplitz 1991; Badiru 1992; Nembhard and Uzumeri 2000a). The model was tested in the assembly stages of the Boeing 707, as well as in improvement activities performed later on the product (Yelle 1979; Badiru 1992; Nembhard and Uzumeri 2000a). DeJong’s model, presented in Equation 1.5, incorporates the influence of machinery in the learning process:
y = C1 * [M + (1 − M) * x^b],   (1.5)

where M (0 ≤ M ≤ 1) is the incompressibility factor, which gives the fraction of the task executed by machines (Yelle 1979; Badiru 1992); M = 0 denotes a situation where there is no machinery involved in the task, while M = 1 denotes a task fully executed by machinery, where no learning takes place (Badiru 1992). The S-curve model aims at describing learning when machinery intervention occurs and when the first cycles of operation demand in-depth analysis. The model is a combination of DeJong’s and the Stanford-B models, as presented in Equation 1.6; parameters in the model maintain their original definitions (Badiru 1992; Nembhard and Uzumeri 2000a):

y = C1 * [M + (1 − M) * (x + B)^b].   (1.6)

The plateau model in Equation 1.7 displays an additive constant, C, which describes the steady state of workers’ performance. The steady state is reached after learning is concluded or when machinery limitations block workers’ improvement (Yelle 1979; Teplitz 1991; Li and Rajagopalan 1998):

y = C + C1 * x^b.   (1.7)

A comprehensive comparison of several of the LCs discussed above is reported in Hackett (1983). In addition to the LC models presented above, other less-cited log-linear-based LCs have been proposed in the literature. Levy’s (1965) adapted function is one such model:

y = [1/β − (1/β − x^b/C1) * k^(−kx)]^(−1),   (1.8)

where β is a task-defined production coefficient for the first unit, and k is the workers’ performance in steady state. The remaining parameters are as previously defined. Focusing on production runs of long duration, Knecht (1974) proposed an alternative adapted function that allows evaluating the production rate as parameter b changes during the production run:

y = C1 * x^(b+1) / (1 + b).   (1.9)

Parameters are as previously defined. A summation of LCs characterized by n different learning parameters b, proposed by Yelle (1976), is given in Equation 1.10.
y = C1 * x1^b1 + C2 * x2^b2 + ··· + Cn * xn^bn.   (1.10)

The resulting model could be applied to production processes comprising n different tasks. However, Howell (1980) claims that the model in Equation 1.10 leads to imprecise production rate estimates. Alternative LC models were developed following the log-linear model’s principles, although relying on more elaborate mathematical structures to describe complex production settings. We refrain from exposing the mathematical details of those models in order to avoid overloading the exposition; only their purpose is presented. Klenow (1998) proposed an LC model to support decisions on updating production technology. Demeester and Qi (2005) developed an LC customized to situations in which two generations of the same product (i.e., two models) are being produced simultaneously. Their LC helps identify the best moment to allocate learning resources (e.g., training programs and incentive policies) to the production of the new model. Mazzola et al. (1998) developed an LC-based algorithm to synchronize multiproduct manufacturing in environments characterized by workers’ learning and forgetting processes. Gavious and Rabinowitz (2003) proposed an LC-based approach to evaluate the training efficiency of internal resources in comparison with that of outsourced resources. Similarly, Fioretti (2007) suggested a disaggregated LC model to analyze complex production environments in terms of time reduction for task completion. Finally, Park et al. (2003) proposed a multiresponse LC aimed at evaluating knowledge transfer at distinct production stages in a liquid crystal display (LCD) factory. The integration of LCs and scheduling techniques was introduced by Biskup (1999), analyzing the effect of learning on the position of jobs on a single machine.
Mosheiov and Sidney (2003) extended that approach by combining job-dependent LCs (i.e., LCs with a different parameter for each job) with programming formulations aimed at minimizing flow time and makespan on a single machine, as well as flow time on unrelated parallel machines.

Hyperbolic Models

An LC model relating the number of conforming units to the total number of units produced is reported in Mazur and Hastie (1978). In that model, x describes the number of conforming units and r the number of non-conforming units; thus, y corresponds to the fraction of conforming units multiplied by a constant k. The model is called the “2-parameter hyperbolic curve” and is represented by:

y = k * x / (x + r).   (1.11)

For learning modeling purposes, y describes the workers’ performance in terms of the number of items produced after x units of operation time (y ≥ 0 and x ≥ 0), k quantifies the maximum performance level (k ≥ 0), and r denotes the learning rate, given in time units (Nembhard and Uzumeri 2000a). A more complete model can be generated by including workers’ prior experience in executing the task. To that end, Mazur and Hastie (1978) suggested the inclusion of a parameter p in Equation 1.11, leading to the 3-parameter hyperbolic LC in Equation 1.12, where p refers to workers’ prior experience evaluated in time units (p ≥ 0):

y = k * (x + p) / (x + p + r).   (1.12)

Uzumeri and Nembhard (1998) and Nembhard and Uzumeri (2000a) improved the definition of parameter r, associating it with the time required to achieve production level k/2, which is half the maximum performance level k. A worker presenting high values of r requires much practice in order to achieve k, thus displaying slow learning.
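A minimal sketch of the two hyperbolic forms (the parameter values are invented for illustration); it also checks the interpretation of r given above: performance reaches k/2 after r time units of practice, and prior experience p shifts early performance upward:

```python
def hyperbolic2(x, k, r):
    """2-parameter hyperbolic LC (Equation 1.11)."""
    return k * x / (x + r)

def hyperbolic3(x, k, r, p):
    """3-parameter hyperbolic LC (Equation 1.12); p = prior experience."""
    return k * (x + p) / (x + p + r)

k, r = 120.0, 40.0  # max performance level and learning rate (time units)

# After r time units, performance is exactly half of k:
halfway = hyperbolic2(r, k, r)

# A worker with prior experience starts further up the curve:
novice = hyperbolic2(10.0, k, r)
veteran = hyperbolic3(10.0, k, r, 25.0)
```

Setting p = 0 in hyperbolic3 recovers hyperbolic2, so the 2-parameter curve is simply the special case of no prior experience.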
The authors also state that r acts as the shape factor in the hyperbolic model, leading to three possible learning profiles: (1) r > 0—the curve presents an increasing profile toward k, representing the typical behavior of workers performing new tasks; (2) r → 0—the curve follows a horizontal pattern, denoting an absence of workers’ improvement; and (3) r < 0—the curve presents a decreasing profile.

Note that Equation 4.2 is quite a good estimate of r when routines are under construction, but it is definitely wrong once good routines have been established. In fact, it captures the extent to which organizational units try novel paths. This is appropriate when describing the beginning of organizational learning, but loses its relevance once a good routine has already been found. Equation 4.2 makes sense in a theoretical model that estimates possible learning curves, but it would not fit a simulation logic as in the “Learning Curves: Routing” section. The implications of Equations 4.1 and 4.2 become clear when one plots them for various values of H and K. For obvious reasons, interesting values appear when the difference K − H is not too small with respect to the absolute values of H and K. Equation 4.1 is defined for K − H ≥ 1 but is trivial if K − H = 1. Equation 4.2 is defined for K − H > 1. Thus, the smallest possible value of K − H is 2. Correspondingly, the range of values of H and K should be close to the origin. Figure 4.6 illustrates p and r for H ∈ [1, 10] and K ∈ [3, 12]. The higher the parameter p, the more attempts are made at improving on the current arrangement of production. Equivalently, the higher the value of p, the steeper the learning curve. Thus, Figure 4.6 shows that the greater the number of operation sequences and categories, the more possibilities there are for improvement. In short, the more there is to learn, the more that can be learned.
FIGURE 4.6 Parameters p (solid line) and r (dashed line) for K − H = 2.

However, learning may not proceed if the search for better arrangements of production becomes stuck in a vicious circle. The parameter r captures the likelihood of this possibility. Figure 4.6 shows that the greater the number of sequences and categories, the more likely it is that no improvement will take place at all. To be more concise: the more there is to learn, the more likely it is that nothing will be learned at all. Thus, Figure 4.6 illustrates a trade-off between the possibility of improving the arrangement of an organization and the danger of getting lost in an endless search. In fact, the more possibilities that exist for improvement, the more difficult it is to realize them. Let us consider an organization with fewer categories, which often implies more generic categories. This means that workers have a wider knowledge so they can do more diverse jobs, or that machines are more flexible so they can process a wider range of semi-finished goods, or both. Let us choose K − H = 3. Figures 4.7 and 4.8 show the ensuing effect on p and r, respectively. Even with so small a change in the number of categories, the differences are impressive. The possibilities for improvement—captured by the parameter p—have slightly decreased. On the contrary, the likelihood that better arrangements are found—captured by the parameter r—has increased dramatically. Furthermore, the greater H, the more pronounced these effects are. Figures 4.7 and 4.8 suggest that, by employing a few general categories, a large gain in effectiveness can be attained at the expense of a small loss in the possibilities for improvement. An organization of open-minded generalists and flexible machines may lose a fraction of the learning possibilities afforded by specialization, but will not get stuck in meaningless routines that lead nowhere.
FINAL DISCUSSION

Learning curves would be a valuable tool for business planning if they were predictable. The trouble is that this is generally not the case. The slope of the learning curve is something of a guess, and it may even happen that no decreasing pattern sets in. Given that there is always a small but positive probability that the learning curve will not descend at all, it is hard for managers to rely on it in the evaluation of future costs. In order to predict whether a learning curve will arise or not, it is necessary to understand the reasons why learning curves arise in the first place.

From Men and Machines to the Organizational Learning Curve

FIGURE 4.7 Parameter p when K − H = 2 (thin line) and K − H = 3 (bold line).

This chapter moved from the idea that organizational learning is grounded in the formation of routines and attempted to highlight some features on which the shape of learning curves depends. In the “Learning Curves: Routing” section, we found that orders must be produced randomly in order for learning curves to exist, and that organizational units must be sufficiently many for learning curves to be effective. In the “Learning Curves: Sequencing” section, we found that there must be a sufficient amount of things to do for learning curves to set in (large H and large K), and that organizational units must be sufficiently flexible to enable the formation of routines (large difference between K and H).

FIGURE 4.8 Parameter r when K − H = 2 (thin line) and K − H = 3 (bold line).

It is possible that these findings point to a common pair of principles for organizational learning to take place; namely, that (i) there is a sufficient number of novel possibilities for conceiving novel routines; and (ii) organizational units are sufficiently high in number and sufficiently flexible to implement novel routines.
Point (i) is exemplified by the case of technical innovations, quite often taking place at the same time that organizational units are striving to develop better routines (Baloff 1966). A few cases where innovation was totally absent (Baloff 1971; Dutton and Thomas 1984; Reis 1991) highlighted that without continuous stimulation and the injection of novelties, the learning curve reaches a plateau (Hirschmann 1964; Levy 1965; Baloff and McKersie 1966; Adler and Clark 1991). On the contrary, a changing and stimulating environment is beneficial to both production time (Shafer et al. 2001; Macher and Mowery 2003; Schilling et al. 2003) and qualitative improvements (Levin 2000). Point (ii) is exemplified by the fact that, among industrial plants, learning curves are most pronounced where assembling operations are involved (Hirsch 1952, 1956). Assembling operations require a large number of units that must be flexible enough to interact with one another in multiple configurations—a circumstance that facilitates the emergence and modification of routines. On the contrary, plants based on conveyor belts are not the typical settings where organizational learning takes place. More detailed simulations are in order. It is necessary to integrate all factors giving rise to organizational learning curves and to investigate their consequences beyond the stylized models amenable to mathematical formalization, and this is only possible by means of numerical simulations. The application of concepts derived from numerical simulations to real cases poses yet another kind of problem, for organizational units are, in general, not just machines but compounds of men and machines. The features of machines can be easily measured; those of human beings often cannot. 
Human beings exert a large influence on learning curves, as testified by the fact that the slope of the learning curve may differ across identical plants of the same firm (Billon 1966; Yelle 1979; Dutton and Thomas 1984; Dutton et al. 1984), or even across shifts in the same plant (Argote and Epple 1990; Epple et al. 1991; Argote 1999). These episodes suggest that there are some limits to the extent to which learning curves can be managed and predicted.

REFERENCES

Adler, P.S., and Clark, K.B., 1991. Behind the learning curve: A sketch of the learning process. Management Science 37(3): 267–281.
Argote, L., 1999. Organizational learning: Creating, retaining and transferring knowledge. Norwell: Kluwer Academic Publishers.
Argote, L., and Epple, D., 1990. Learning curves in manufacturing. Science 247(4945): 920–924.
Baloff, N., 1966. The learning curve – Some controversial issues. The Journal of Industrial Economics 14(3): 275–282.
Baloff, N., 1970. Start-up management. IEEE Transactions on Engineering Management 17(4): 132–141.
Baloff, N., 1971. Extension of the learning curve – Some empirical results. Operational Research Quarterly 22(4): 329–340.
Baloff, N., and McKersie, R., 1966. Motivating start-ups. The Journal of Business 39(4): 473–484.
Billon, S.A., 1966. Industrial learning curves and forecasting. Management International Review 1(6): 65–79.
Birkler, J., Large, J., Smith, G., and Timson, F., 1993. Reconstituting a production capability: Past experience, restart criteria, and suggested policies. Technical Report MR–273, RAND Corporation.
Brown, J.S., and Duguid, P., 1991. Organizational learning and communities-of-practice: Toward a unified view of working, learning, and innovation. Organization Science 2(1): 40–57.
Cicourel, A.V., 1990. The integration of distributed knowledge in collaborative medical diagnosis. In Intellectual teamwork: Social and technological foundations of cooperative work, eds. J. Galegher, R.E. Kraut, and C. Egido, 221–243. Hillsdale: Lawrence Erlbaum Associates.
Dubuisson, S., 1998. Regard d’un sociologue sur la notion de routine dans la théorie évolutionniste. Sociologie du Travail 40(4): 491–502.
Dutton, J.M., and Thomas, A., 1984. Treating progress functions as a managerial opportunity. The Academy of Management Review 9(2): 235–247.
Dutton, J.M., Thomas, A., and Butler, J.E., 1984. The history of progress functions as a managerial technology. Business History Review 58(2): 204–233.
Epple, D., Argote, L., and Devadas, R., 1996. Organizational learning curves: A method for investigating intra-plant transfer of knowledge acquired through learning by doing. In Organizational learning, eds. M.D. Cohen and L.S. Sproull, 83–100. Thousand Oaks: Sage Publications.
Fioretti, G., 2007a. A connectionist model of the organizational learning curve. Computational and Mathematical Organization Theory 13(1): 1–16.
Fioretti, G., 2007b. The organizational learning curve. European Journal of Operational Research 177(3): 1375–1384.
Hirsch, W.Z., 1952. Manufacturing progress functions. The Review of Economics and Statistics 34(2): 143–155.
Hirsch, W.Z., 1956. Firm progress ratios. Econometrica 24(2): 136–143.
Hirschmann, W.B., 1964. Profit from the learning curve. Harvard Business Review 42(1): 125–139.
Huberman, B.A., 2001. The dynamics of organizational learning. Computational and Mathematical Organization Theory 7(2): 145–153.
Hutchins, E., 1990. The technology of team navigation. In Intellectual teamwork: Social and technological foundations of cooperative work, eds. J. Galegher, R.E. Kraut, and C. Egido, 191–220. Hillsdale: Lawrence Erlbaum Associates.
Hutchins, E., 1991. Organizing work by adaptation. Organization Science 2(1): 14–39.
Hutchins, E., 1995. Cognition in the wild. Cambridge: The MIT Press.
Levin, D.Z., 2000. Organizational learning and the transfer of knowledge: An investigation of quality improvement. Organization Science 11(6): 630–647.
Levitt, B., and March, J.G., 1988. Organizational learning. Annual Review of Sociology 14: 319–340.
Levy, F.K., 1965. Adaptation in the production process. Management Science 11(6): 136–154.
Macher, J.T., and Mowery, D.C., 2003. “Managing” learning by doing: An empirical study in semiconductor manufacturing. Journal of Product Innovation Management 20(5): 391–410.
Pentland, B.T., 1992. Organizing moves in software support hot lines. Administrative Science Quarterly 37(4): 527–548.
Pentland, B.T., and Rueter, H.H., 1994. Organizational routines as grammars of action. Administrative Science Quarterly 39(3): 484–510.
Reis, D.A., 1991. Learning curves in food services. The Journal of the Operational Research Society 42(8): 623–629.
Schilling, M.A., Vidal, P., Ployhart, R.E., and Marangoni, A., 2003. Learning by doing something else: Variation, relatedness, and the learning curve. Management Science 49(1): 39–56.
Shafer, S.M., Nembhard, D.A., and Uzumeri, M.V., 2001. The effects of worker learning, forgetting, and heterogeneity on assembly line productivity. Management Science 47(12): 1639–1653.
Shrager, J., Hogg, T., and Huberman, B.A., 1988. A graph-dynamic model of the power law of practice and the problem-solving fan-effect. Science 242(4877): 414–416.
Sikström, S., and Jaber, M.Y., 2002. The power integration diffusion model for production breaks. Journal of Experimental Psychology: Applied 8(2): 118–126.
Weick, K.E., 1991. The non-traditional quality of organizational learning. Organization Science 2(1): 116–124.
Wenger, E., 1998. Communities of practice. Cambridge: Cambridge University Press.
Wenger, E., 2000. Communities of practice and social learning systems. Organization 7(2): 225–246.
Wright, T.P., 1936. Factors affecting the cost of airplanes. Journal of the Aeronautical Sciences 3(2): 122–128.
Yelle, L.E., 1979. The learning curve: Historical review and comprehensive survey. Decision Sciences 10(2): 302–328.

5 Management at the Flat End of the Learning Curve: An Overview of Interaction Value Analysis

Walid F. Nasrallah

CONTENTS

Introduction
Models of Optimal Networks in Organizations
Who Does the Search?
Simplifying
Interaction Value
Varying Usefulness
How Many Categories Are Searched For?
Categories per Task
Homogeneity of Resource Value
Popularity
Why Does an Interaction Fail?
Seeking an Interaction at the Wrong Time
Going to a Competent Person Who is Too Busy
Someone Who Has Time, Just Not for You
Summary of the Model
Variables
Calculation
Model Output
Interpretation of Results
Validation Study
References

INTRODUCTION

There are always new things to learn in most organizations. In order to function in a certain role within an organization, an individual must constantly find out new things about tasks to be performed, resources needed in order to do the work, and methods of performing these tasks under different circumstances. More significantly, one must also learn how to navigate one’s way around the organization to find information, authorization, access to resources, and any other useful thing that the organization can provide via the mediation of different gatekeepers. Learning about who in the organization can be useful in what situation is an example of a network search. You find out about someone’s (broadly defined) capabilities on reaching the node in the organization’s network of relationships that represents that person.
To reach this point, you must first learn about the other nodes representing the people who have relationships that can bridge the gap between you and the person you are seeking. Searching through a network to find increasingly valuable nodes can lead to an exponential decay in the incremental benefits (as a function of the amount of searching); that is, what we know as the standard learning curve. Indeed, it could be argued that the source of the learning curve effect is a confluence of several such processes (Adler and Clark 1991). This chapter considers how best to find and use the information that needs to be learned before one is competent enough to fulfill a certain role within an organization. This could be as simple a task as finding out who within the organization can do what, and how well, in order to help one fulfill one’s duties. As long as changes take place both inside and outside the organization, this learning process can become a continuous one, because there are always changes occurring, which in turn create new things to learn about. It is also possible to envision a static situation, which may last for a long time or a short time, where everyone has to discover the fixed value that can be derived from interacting with everyone else: task-specific skills, capabilities, organizational privileges, legal and contractual powers, and so forth. The amount of output that an organization generates during this learning process follows a curve that goes up quickly at first, then more slowly, until it levels out. Through examining this static form of the organizational learning problem, we can evaluate different ways of conducting the search that leads to the flat end of the learning curve (the steady state). 
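The diminishing-returns character of such a search can be illustrated with a toy simulation (an illustrative sketch only, not the model developed in this chapter): a searcher probes partners whose usefulness values are drawn uniformly at random and keeps the best found so far. The average gain from each additional probe shrinks rapidly, tracing the familiar shape of a curve that rises quickly at first and then levels out.

```python
import random

def best_so_far_curve(n_steps, n_trials, seed=0):
    """Average best-value-found-so-far over many independent random searches."""
    rng = random.Random(seed)
    totals = [0.0] * n_steps
    for _ in range(n_trials):
        best = 0.0
        for step in range(n_steps):
            best = max(best, rng.random())  # probe one more partner
            totals[step] += best
    return [t / n_trials for t in totals]

curve = best_so_far_curve(n_steps=50, n_trials=5000)
early_gain = curve[9] - curve[0]    # improvement over probes 2-10
late_gain = curve[49] - curve[40]   # improvement over probes 42-50
```

With uniform values, the expected best after n probes is n/(n + 1), so early probes yield most of the attainable value and late probes add almost nothing — the flat end of the learning curve.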
Even if the situation changes in future, what we learn about how to reach the steady state of the learning curve the first time will be useful if those changes force us to ascend another new learning curve in order to reach its new steady state, or flat end.

MODELS OF OPTIMAL NETWORKS IN ORGANIZATIONS

Huberman and Hogg (1995) wrote about a model of a community where various levels and niches of expertise exist, making it expedient to ask for “hints” from different members when someone encounters a need to do something new. One assumption of this model was that meta-knowledge—that is, knowledge of where to go in order to ask certain questions—is freely available. This could be through experience (i.e., prior learning by the members of the community) or through some sort of written record or directory that someone maintains and shares. This model was named “communities of practice” because it envisioned a diffuse group of people with sparse interactions, such as members of a public interest group or hobbyists on an informal network. The reason for this requirement is that each member who receives requests for “hints” must be assumed to have enough time to satisfactorily answer these requests. If the community was replaced by an organization, or the hobby replaced by a day job, then the model becomes more complex. In Huberman and Hogg’s model, the requesting party will either acquire the information needed or will not, depending on whether or not the information source is in possession of it. This led the authors to conclude that community members would do best to keep a “shortlist” of the people most likely to have useful information, and to rotate requests through different names on that list in order to give different people a chance to offer more information.
The model that will be described in this chapter builds on the Huberman and Hogg model, but will allow for requests to be unsuccessful, even if the information is available. People working within organizations are often very busy (Nasrallah 2006)—not only with the requirements of their jobs, but with the demands of responding to prior requests from other sources. This means that many requests may need to wait until previous requests have been fulfilled, and this delay can last beyond the period of original need for the information requested. With this simple change in the model assumptions, the request can now be for something other than simple information. The same model could equally well represent a request for anything work-related that some organization member needs to spend time to generate. One commonality with the Huberman and Hogg (1995) model is that the meta-knowledge is still assumed to be there. The members of the organization being modeled must have already gone through a learning process to find out not only who can offer what, but also who is likely to be so busy that the request is likely to fail. The model is only useful if this learning has happened and if changes have not overtaken the meta-knowledge gained from this learning. Job turnover must be low enough that the person relied on for, say, access to a certain system, or advice on certain features of a technology, is still there when needed next. External circumstances must be similar enough that someone with the expertise to deal with a certain government bureaucracy or to sway a certain external stakeholder is also still as effective in that job as the last time. In this relative lull, interesting conclusions can be gleaned from assuming that interactions may fail due to either being requested from someone who is too busy to respond in time, or due to being requested from someone who has already yielded what they can from similar interactions in the very recent past.
What follows will be a step-by-step description of the model and its results, based on the four journal papers that built up to the full model (Nasrallah et al. 1998, 2003; Nasrallah and Levitt 2001; Nasrallah 2006).

WHO DOES THE SEARCH?

The raison d’être of any model is to answer some research question. The interaction value analysis (IVA) model has provided an answer to the following question: “How much value can management add to an organization?” We therefore start the description of the model with the part that pertains to the role of management. It should be said from the outset that management fulfils many roles (Mintzberg 1989) and no single model can capture all of the value that is added through these various contributions. However, we also know that developments in education, contract law, technology, popular culture, and even human rights have reduced the need for many functions that used to be performed by managers. It is possible to envision, and sometimes even observe, an organization without managers (Davidow 1992). However, one role of managers that will never go away is the role of “traffic policeman,” by which I mean the role of preventing people from going in different directions that, taken individually, seem perfectly consistent with getting where the organization needs them to go, but which could result in a gridlock or collision that neither party can unilaterally prevent. Only a central entity, namely management, can simultaneously see the learning that has taken place among all organization members and deduce that leaving each member to behave according to his or her local knowledge may or may not best serve the goals of the organization.
However, if the search for knowledge is conducted only by individual organization members, and the results of this search were to yield behavior (i.e., the selection of favorite interaction partners for various needs) that cannot be improved on by a central actor, then one can conclude that management would not be adding value in that particular way. As long as the organization provides certain minimal criteria for its own existence, such as incentives and a communication infrastructure, the role of management should be reduced, or else the organization would be less efficient than a comparable organization that had reduced the role of management and encouraged individual initiative. The converse case is when individual decisions about the selection of favorite interaction partners for various needs did result in some sort of gridlock. This might occur when popular individuals are swamped with requests and cannot properly respond to them all. In that case, an organization that made sure that it centrally coordinated its learning—or, more specifically, its reaction to this learning—would be more likely to perform well. More management control is needed to accomplish this coordination, and investing in additional (managerial) personnel, systems, or procedures will pay off in greater efficiency.

SIMPLIFYING

IVA is an extremely simplified representation of what it takes to get work done, and hence to add value to an organization.

Interaction Value

Coase (1988) and Williamson (1979) argued that organizations add economic value by allowing repeated transactions between individuals while incurring part of the transaction cost only once. Individuals who join the organization get some knowledge about each other and set constraints on each other’s future behavior, incurring a fixed cost. This cost is then amortized over all future transactions that rely on this knowledge or these constraints.
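This amortization argument can be illustrated with a back-of-the-envelope calculation (the numbers here are invented purely for illustration): the effective cost per transaction falls toward the pure variable cost as a relationship is reused.

```python
def cost_per_transaction(fixed_cost, variable_cost, n_transactions):
    """One-time relationship-building cost amortized over repeated interactions."""
    return (fixed_cost + variable_cost * n_transactions) / n_transactions

# Hypothetical numbers: 100 units to establish the relationship, 2 per interaction.
for n in (1, 10, 100):
    print(n, cost_per_transaction(100, 2, n))
```

The average cost falls from 102 for a one-off transaction toward the variable cost of 2 as the fixed cost is spread over more and more interactions — the economic rationale for organizing.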
Adopting this point of view allows us to measure the effectiveness of the organization by summing up the value of all the transactions that occur between the members of the organization, which we now call “interactions” since they happen within the organization. IVA dispenses with the content of the interaction and simply keeps count of how many interactions are successfully completed. The assumption is that the different values of the various interactions will average out; that is, that the variance in these values will have no significant effects. The effect of higher or lower variance on the model results is an open research question, as are the effects of relaxing the other simplifying assumptions described in this section.

Varying Usefulness

Since all interactions are homogenized to add the same (normalized) value when they succeed, every individual or team that initiates an interaction is as capable of adding value as every other. In addition, if the partners who are sought for this interaction are ranked in order of decreasing usefulness for the purposes of this interaction, the value of selecting the most useful partner is also assumed to be the same. Since both large and small organizations might be represented by small or large models, model granularity has to be taken out of the equation affecting success (Nasrallah and Levitt 2001). This is accomplished by reducing the value of an interaction by a known proportion when the respondent’s suitability for the interaction is reduced by one rank. In other words, if the second most useful partner contributes 80% of what the most useful partner contributed, then the third most useful partner contributes 80% of that (i.e., 64% of the highest value), and so on.
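This geometric decay of usefulness with rank can be made concrete in a few lines (the function name and the value 0.8 are illustrative, not part of the published model):

```python
def partner_value(rank, differentiation=0.8):
    """Value contributed by the partner of a given rank (1 = most useful).

    Each step down the ranking retains `differentiation` of the value
    contributed by the partner one rank above.
    """
    return differentiation ** (rank - 1)

print(partner_value(1))  # 1.0 (most useful partner, normalized)
print(partner_value(2))  # 0.8
print(partner_value(3))  # 0.64, i.e., 80% of 80%
```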
This constant rate of usefulness loss is the first of the IVA model’s six variables and it is referred to as the differentiation level within the organization (see the “Variables” section for a fuller explanation). Why would one not want to always seek the most useful partner? The primary reason is that this partner might be very busy as he/she is sought out by everyone. If partners are less popular, then they are less likely to be busy, and if they are not busy and already members of the same organization, then they should be under an obligation to engage in the interaction. However, if conditions are such that popular partners exist and are busy, then different factors come into play, such as how likely the partner is to prioritize your request over others, and how much management intervention is applied to change this prioritization.

How Many Categories Are Searched For?

Another thing to think about when designing the IVA model is the number of different types of needs that a typical job will entail. Every different sort of resource (e.g., access to materials, budget approval, past customer feedback) leads to a different ordering of the possible set of interaction partners based on how much they can help access that particular resource. Several simplifying approximations were made in this regard, again based on a general assumption that any trends revealed by comparing averages are likely to be significant even when all the detailed data are considered.

Categories per Task

Although a typical organization might have thousands of members controlling hundreds of resources, if the average task needs six resources, then studying an organization with six resources can tell a lot about all organizations.

Homogeneity of Resource Value

Each resource might contribute a different proportion of the end task’s value, and most resources only add value in the presence of other “catalytic” resources.
This distinction was also omitted from the IVA model in order to concentrate on more generalized effects of the number of resources needed for a typical or average task. If there are three resources needed, then getting half the value of one of them reduces the task’s value by one-half of one-third (i.e., by one-sixth).

Popularity

If we are looking for the value added by management in the context of mitigating resource gridlock, then there must be some contention for resources in order for gridlock to be a problem in the first place. If there are only two cars passing an intersection per minute in each direction, then no one would need stop signs or traffic police. Similarly, in an organization where everyone requires different resources from everyone else, everyone can help themselves without fearing an adverse effect on other members. Nasrallah et al. (1998) derived the relationship between the number of different criteria for ranking an interaction partner and the highest number of seekers likely to want an interaction with the same partner. This can be called the “popularity” of the partner, since a partner who is preferred by 100 interaction seekers is more “popular” than one who is preferred by 50 interaction seekers. In a population of P individuals, absolute homogeneity means that all follow one ranking criterion, whereas absolute heterogeneity means that there are P independent ranking criteria, one per person. If the number of ranking criteria is N, then N = 1 implies that the most popular individual in that population is preferred by all P of them (including him or herself). If N = P, then the number preferring the most popular individual is much smaller. In general:

Popularity = log N / log(log N).

To put this into perspective, if we imagine that five billion people each followed independent methods of choosing a preferred interaction partner for any reason (conversation, entertainment, leadership), it would then take only seven people selecting the same friend/performer/politician to make that person the planet’s most popular friend/performer/politician. If, on the other hand, the number of equally weighted and statistically independent criteria was six, then we would have log(6)/log(log(6)) = 3 criteria pointing to the same individual. Since the criteria have equal followership among the 5 billion, 3/6 × 5 billion would select the same most popular individual. There has been a lot of recent research into the properties of networks, such as social links between people and hypertext links between web pages (Barabási 2003). Popular individuals, or popular websites, exist even when the average person or website has very few links to others. One way in which this comes about in nature is when popularity itself is a reason for establishing a new link. This common metaphor for describing selective attachment is that “the rich get richer.” There are many possible avenues of future research into whether these types of networks characterize the links between colleagues who need to interact in order to do their jobs. Even on a purely theoretical front, there is much to be learned about the value and saturability of central versus distributed learning in networks that are, or are not, for example:

1. Small world networks, where every node is linked via a relatively short path to all, or most, other nodes.
2. Clustered networks, where nodes linked to the same neighbor are more likely to be linked to one another too.
3.
Scale-free networks, where the incidence of a certain number of nodes with a high number of links implies that there must be at least one node with some multiple of this already large number of links.

Having said all that, we can get back to the very simple IVA model that focuses, as a first step, on the average organization member and whether that member’s own work is dependent on a popular individual or not. In real life, I can accomplish all of my goals without ever meeting the pope or the president of the United States. Similarly, organization members, on average, are only likely to need to interact with someone too busy to help them if these popular individuals are a sizable contingent in the organization’s population. As implemented by Nasrallah (2006), the IVA model considers random networks with one of two values of N: 6 or 3. There are distinct independent criteria of selection, corresponding to types of resources needed on a typical task, and yielding a population where either one-third or one-half of the organization members favor the same interaction partner. It is as if there were three or six roles that each of the thousands of organization members take on at any point, and in that role they need someone else in one of the remaining roles to help them add value to the organization. The number 3 or the number 6 is the diversity of the organization, as will be further explained in the “Variables” section below.

WHY DOES AN INTERACTION FAIL?

Not all interaction attempts add value. This is true both in real life and in IVA. The IVA model only becomes non-obvious when it combines the effects of different ways in which an interaction attempt can fail.

Seeking an Interaction at the Wrong Time

Huberman and Hogg (1995) proposed that interactions within a community of far-flung individuals with a common interest serve to transfer a fresh insight.
This implied that a certain amount of time must elapse before the same individuals had generated new insights that they could share. This amount of time is, of course, not a constant but varies for each interaction. Huberman and Hogg (1995) used a probability distribution to investigate the average success rate over a large number of requests for a given average value of this time. IVA uses the same approach but gives it a different interpretation (Nasrallah et al. 2003). An interaction still fails if the same interaction partner had been used too recently, but this is because, within an organization, most value-adding tasks involve multiple interactions. If value accrues to the organization from each individual interaction, then the incremental value of subsequent interactions with the same partner will go down. As the learning process settles on an optimal frequency of interactions with each partner, the effect of going to an individual partner too frequently is balanced against the value lost by going to other partners who add less value per interaction, but whose contributions are valuable components of the whole task. The component of the aggregate value equation that accounts for loss from going too often to the same partner comes from the expression for the outcome of a race between two random (Poisson-distributed) processes. There is one random process producing the interaction request, and another random process representing everything else that needs to be in place in order for the interaction to succeed. If the random process producing the request beats the random process producing readiness, then the interaction fails: it went out too soon! (The probability that the two events occur at exactly the same instant is zero, because the inter-event times of a Poisson process follow a continuous exponential distribution, under which ties have zero probability.)
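The outcome of this race between two exponential (Poisson) processes can be checked with a quick Monte Carlo sketch (the rate values below are invented for illustration; p, ρ, and ω follow the chapter's notation, where requests arrive at rate p·ρ and readiness at rate ω):

```python
import random

def race_success_rate(p, rho, omega, n_trials=200_000, seed=1):
    """Fraction of races where readiness (rate omega) beats the request (rate p*rho)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_trials):
        request_time = rng.expovariate(p * rho)  # the request goes out
        ready_time = rng.expovariate(omega)      # everything else falls into place
        if ready_time < request_time:
            wins += 1
    return wins / n_trials

p, rho, omega = 0.3, 2.0, 1.5
simulated = race_success_rate(p, rho, omega)
theoretical = omega / (omega + p * rho)  # closed form for exponential races
```

For these rates the closed form gives about 0.714, and the simulated frequency agrees to a couple of decimal places.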
If we define $S1_{ij}$ as the probability of success of an interaction requested by party i of party j, then this depends on two things: (1) the proportion $p_{ij}$ of i's requests directed to j; and (2) the ratio of the two process rates: $\rho$, the average rate at which requests are made, and $\omega$, the average rate at which interactions become ready to add value:

\[
S1_{ij} = \frac{\omega}{\omega + p_{ij}\rho} = \frac{1}{1 + p_{ij}\rho/\omega}.
\]

The ratio $\rho/\omega$ represents the interdependence of the organization's work (see the "Variables" section below).

Going to a Competent Person Who Is Too Busy

Another reason why an interaction might fail is that the respondent is so busy that, by the time he or she finally gives a response, the requestor no longer needs the interaction. Again, the "deadline" for the interaction can vary randomly, and for lack of more specific information we can assume that it is equally likely to occur at any given future moment, if it has not occurred already. Again, we have a Poisson process, and again a race between the process that gives rise to the requestor's deadline and the process that generates the responder's response. The latter can be a Poisson process too, but only if the respondent starts working on his or her response as soon as the request is received. This was true in the Huberman and Hogg (1995) model of a community of experts who seldom interact, and would be true in an organization of full-time employees only if the respondent was not very popular (that is, busy), or if the organization as a whole was not very busy. Otherwise, there will be a queue of requests ahead of each incoming request and we would need to use a different kind of random distribution. Again, for lack of additional information, we will make the simplest possible assumption: namely, that each item in the queue is as likely to be processed at any point in time as at any other point in time.
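The closed-form success probability above can be checked with a short simulation. This is a sketch (function names are ours, not from the chapter) that races an exponential request clock of rate $p_{ij}\rho$ against an exponential readiness clock of rate $\omega$:

```python
import random

def s1_closed_form(p_ij, rho, omega):
    """Closed-form success probability: readiness (rate omega) must
    beat the next request sent to partner j (rate p_ij * rho)."""
    return omega / (omega + p_ij * rho)

def s1_simulated(p_ij, rho, omega, trials=200_000, seed=0):
    """Monte Carlo estimate of the same race between two Poisson clocks."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        t_request = rng.expovariate(p_ij * rho)  # next request to j
        t_ready = rng.expovariate(omega)         # interaction becomes ready
        if t_ready < t_request:                  # readiness wins: success
            wins += 1
    return wins / trials

print(s1_closed_form(0.5, 3.0, 1.0))  # 0.4
print(s1_simulated(0.5, 3.0, 1.0))    # close to 0.4
```

The simulated frequency converges on the ratio-of-rates formula, which is the standard result for a race between two independent exponential clocks.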
This leads us to what queuing theorists call the "M/M/1 queue," in which the waiting time follows what probability texts call the "Erlang distribution." The equation for the outcome of a race between a Poisson process and an Erlang process is more complex than just a ratio of rates, and involves the incomplete gamma function, as worked out by Ancker and Gafarian (1962). There are three rates involved:

1. The rate at which an interaction request becomes stale; that is, no longer capable of benefiting from a response (let us call this β)
2. The rate at which requests are received (λ)
3. The rate at which an average interaction request is processed (μ)

Since multiplying all the rates by the same constant does not change the probability of one process beating the other, it is the ratios of these rates that are used as parameters in IVA:

• Load is the ratio of the request arrival rate to the request processing rate (λ/μ). An organization with low load has less work to do for its customers, so people in the organization have fewer reasons for issuing interaction requests relative to the large number of people and other resources (e.g., computers) that can respond to these requests. This excess capacity might be a result of having had to deal with higher loads in the past, or it might be due to prior preparation for higher loads in the future.
• Urgency is the ratio of the request staleness rate to the request processing rate (β/μ). Under high-urgency conditions, the value that the organization provides to its clients drops off rapidly with the passage of time, whereas low-urgency markets might be equally able to benefit from the organization's efforts a few days later.

It is important at this point to distinguish between the rate at which average members generate requests, which is used to calculate load, and the rate at which a popular member receives requests. The former is the same as the rate at which the average member receives requests.
To calculate the failure rate due to request staleness, popular members matter more, so we need to multiply the average arrival rate by a factor that reflects the recipient's popularity. Recalling that $p_{ij}$ represents the proportion of interaction requests made by member i that are sent to member j, this factor is $\sum_{i=1}^{\text{Diversity}} p_{ij}$, namely, the sum over all senders i of the request proportions that i selects to send to any particular recipient j. This will be higher than 1 for popular members and lower than 1 for less popular members, since by definition $\sum_{j=1}^{\text{Diversity}} p_{ij} = 1$ for any given i. (Note: Diversity, as defined earlier, is the number of distinct roles salient for the average task. It is the distinct roles that have distinct $p_{ij}$ values, hence the summation over the diversity parameter.) The last thing that is needed to adapt the Ancker and Gafarian (1962) equation to the IVA model is to consider how many prior requests are likely to be ahead of any given request.

Someone Who Has Time, Just Not for You

The most straightforward queuing models assume "first-in-first-out" (FIFO) discipline. Every request is treated equally, and the only requests ahead of a given request are those that arrived before it. It is possible to imagine organizations where this is indeed the way that requests are treated. It is also not too difficult to think of reasons why some organizations might include members who do not act like that. Some requests are simply given more priority than others. Out of the near-infinite variety of patterns of precedence when it comes to responding to a request, IVA focuses on three broad cases that have very different effects on behavior. These are:

1. The FIFO case, which represents a "disciplined" organizational climate where people uphold the principle of waiting one's turn and expect to be treated similarly.
2. A priority scheme where the request whose response has the greatest potential to add value is picked ahead of other requests. We say that an organization that follows this sort of system has a "fraternal" climate, since resources are preferentially given to those in the greatest need. (Of course, "need" here refers to a thing required for professional rather than personal purposes.)
3. A priority scheme where the greatest priority is given to requests from the member whose future responses are most valuable to the current respondent. We could call this a "cynical" climate (a change from the term "capitalist" climate used in Nasrallah 2006).

Note that words like "disciplined," "fraternal," and "cynical" are merely labels for the general behavior patterns, not an implication that any other characteristics associated with these words need to be attached to the organizations in question. The use of the word "climate," on the other hand, is intended to suggest something similar to what management scientists refer to as "corporate climate" or sometimes, incorrectly, as "culture." To see how this is the case, consider the simplifying assumptions of the model, namely, that all members act in the same way, together with the strategic nature of the optimization process, where every member acts with the foreknowledge of how all the other members will also react to this course of action. The combination of a principle that describes what constitutes acceptable behavior, and the expectation that others will also follow that principle, is a good definition of what it feels like to operate in a certain organizational climate.

SUMMARY OF THE MODEL

Variables

Each of the variables introduced in the "Simplifying" section above has something to say about the nature of the organization, its work, or its external environment. Putting an interpretation on each model variable is a subjective process and can be said, if one happens to be in a charitable mood, to be more of an art than a science.
Only validation against actual measurements can lend credence to the nominative process that ties a numeric parameter in a simple mathematical model to an observable description of an organization or its environment (see the "Validation Study" section below). As discussed above:

Diversity is the number of different sorts of resources needed for the organization's work. This represents things like division of labor, system complexity, and technical advancement level. IVA distinguishes between low diversity with 3, and medium diversity with 6. Higher levels of diversity always lead to zero value added by management intervention into individual choices, but this is only because of the assumption of relative homogeneity in the connectivity of different members made in the current version of the model.

Differentiation is the amount of value added by the interaction that adds the most value, divided by the amount of value added by a successful interaction with the partner who adds the least value. This corresponds to the range of qualifications or educational levels of organization members. A chain gang or a Navy SEAL unit will both have low differentiation because the members have similar levels of preparation. A medical team consisting of a doctor, a medical technician, and a porter/driver has high differentiation. The numerical value used in generating the results reported here was varied from 2 for low differentiation, to 30 for high differentiation, with 10 used as the value for medium. Note: Using the highest-to-lowest rather than the first-to-second ratio as the definition of differentiation allows us to model the same organization at different diversity levels without changing the differentiation value. At a higher diversity of 6, there are four intermediate levels between the highest and the lowest, but there is only one intermediate level under a low diversity of 3.
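The exact value levels are not spelled out in the text; the sketch below assumes they are geometrically spaced between highest and lowest, which is consistent with the intermediate-level description above:

```python
def partner_values(differentiation, diversity, top=1.0):
    """Value added by each ranked partner type, spaced geometrically
    so that highest/lowest equals `differentiation` (an assumption)."""
    step = differentiation ** (1.0 / (diversity - 1))  # adjacent-rank ratio
    return [top / step ** k for k in range(diversity)]

low = partner_values(differentiation=10, diversity=3)
med = partner_values(differentiation=10, diversity=6)
print(round(low[0] / low[-1], 3), round(med[0] / med[-1], 3))  # both about 10.0
print(round(low[0] / low[1], 3))  # about 3.162: the square root of 10
print(round(med[0] / med[1], 3))  # about 1.585: the fifth root of 10
```

Under this spacing the highest-to-lowest ratio stays fixed at the differentiation value, while the step between adjacent ranks shrinks as diversity grows.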
Going from the highest to the second highest successful interaction under a diversity of 3 means losing a factor of the value equal to the square root of the differentiation, but going from the highest to the second highest under a diversity of 6 loses only a factor equal to the fifth root of the differentiation.

Interdependence is the degree to which interactions with the different types of parties (i.e., those who control the different types of resources) complement one another in order to add value. Task decomposability is another way of referring to the same property. The values used to generate the results ranged from 1 for low interdependence, meaning that, on average, one more interaction with someone else was required between successful interactions with a particular partner, to 9, indicating a requirement for 10 interaction attempts with others. The value chosen for medium was 3.

Load is the ratio of the interaction request generation rate to the interaction processing rate. It corresponds to how busy an organization is, relative to its capacity to do work. Load is the factor most likely to vary over time in any given organization in real life, since adding capacity by hiring or through technology is a slow process, but market demand for a product can spike or decay more quickly. Low load was defined as a ratio of 15%, meaning that 15% of the time between requests was needed to process the request. Medium load was defined as 75%; and 125% was used for high load, meaning that, on average, five requests would appear in the time it takes to process four.

Urgency is another ratio of rates of random processes, and it measures how quickly things need to be done in order to effectively add value. Some industries are inherently higher in urgency, due to perishable materials or life-threatening outcomes.
Others only experience high urgency sporadically, such as when a deadline approaches or progress needs to be accelerated following a delay or failure. Low urgency was defined as 5%, meaning that 1 out of 20 requests was likely to be no longer needed in the time it took to process the request by someone who was not otherwise occupied. Medium urgency was defined as 50%, and high as 90%.

Climate can be a contentious term in organization science (Denison 1996), but it can be loosely defined as the way people tend to act when there is no hard rule that specifies their behavior. IVA models climate as the "queuing discipline," that is, the method used to select among requests when several are waiting to be processed. FIFO selection, corresponding to a climate where there is no expectation of preference between different types of members, is given the name "disciplined." The other two types of climate defined in IVA, "cynical" and "fraternal," are meant to correspond to two extremes of systematic preferential treatment. The "cynical" organization has a climate of preference for those who have more power to punish or reward when the time comes to request things back from them, but of course this expectation is illusory because, at that time, the powerful will defer to those that they perceive to be most useful to them, not to those who had sought approval from them in the past. (An interesting future study might include the effects of past loyalty, but doing so would probably entail a full simulation approach instead of a mathematical solution of game-theory equilibria.) Finally, the "fraternal" climate means a bias, and an expectation of bias, in favor of the most needy, defined as those for whom the interaction represents a higher proportion of their expected contribution to the organization. This list is summarized in Table 5.1.
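The three queuing disciplines can be sketched as selection rules over a pool of waiting requests. This is an illustrative reading of the climates, not code from the model, and the Request fields are our own names:

```python
from dataclasses import dataclass

@dataclass
class Request:
    arrival_order: int      # position in the arrival sequence
    value_added: float      # value the response would add ("need")
    requester_worth: float  # future worth of the requester to the respondent

def pick_next(queue, climate):
    """Select which waiting request gets answered first under each climate."""
    if climate == "disciplined":   # FIFO: strict arrival order
        return min(queue, key=lambda r: r.arrival_order)
    if climate == "fraternal":     # neediest first: greatest value added
        return max(queue, key=lambda r: r.value_added)
    if climate == "cynical":       # most powerful requester first
        return max(queue, key=lambda r: r.requester_worth)
    raise ValueError(f"unknown climate: {climate}")

queue = [Request(0, 0.2, 0.9), Request(1, 0.8, 0.1), Request(2, 0.5, 0.5)]
print(pick_next(queue, "disciplined").arrival_order)  # 0
print(pick_next(queue, "fraternal").arrival_order)    # 1
print(pick_next(queue, "cynical").arrival_order)      # 0
```

The same pool of requests is served in a different order under each climate, which is exactly the mechanism through which climate changes the model's failure rates.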
Calculation

The final model consists of a matrix of preference rankings, namely, three permutations of 1, 2, and 3 for the three-party version, and six permutations of 1 through 6 for the high-diversity six-party version. The learning that takes place begins with the discovery of which parties are best for which sort of interaction (the entries in the matrix). Next, it is necessary to predict which other members will reach similar conclusions and have similar preferences, and hence make certain parties busier and possibly less useful. The "flat end" of the learning curve is reached only when every party has reacted to all the expected reactions of other parties, and equilibrium (in the game-theoretic sense) has been reached.

TABLE 5.1 Summary of IVA Parameters

Diversity. In the model: number of independent, equally weighted criteria used for evaluating potential interaction partners. In the organization: number of different skill types or disciplines needed for the organization's work.
Differentiation. In the model: ratio of the highest value added to the lowest value added from an interaction evaluated by the same criterion. In the organization: spread in skill levels for a given skill type used in the organization.
Interdependence. In the model: proportion of the organization's disciplines that must be consulted in the course of any value-adding activity. In the organization: degree of coordination between concurrent activities necessary for success.
Load. In the model: ratio of the rate at which interactions are proposed to the rate at which responses are received. In the organization: resources available to meet demand for information.
Urgency. In the model: ratio of the rate at which interaction requests cease to be relevant to the rate at which responses are received. In the organization: incidence of deadlines and other situations where delay results in failure.
Climate. In the model: queuing service discipline (FIFO versus different priority schemes). In the organization: shared and perceived values extant in the organization.

When management intervention is high, the cooperative (Von Neumann) equilibrium determines where the learning curve becomes flat.
This is because management can force members to go along with the most globally efficient allocation of their interaction requests, possibly reducing the proportion of their requests sent to the party they find the most useful, and sending more requests to the second or third most useful (to a greater extent than they already need to spread requests around in order to account for interdependence). The laissez faire approach leads to a learning curve that ends at the non-cooperative (Nash) equilibrium because each member is only concerned with maximizing their own productivity, not that of the whole organization. Every other member knows this to be the case and acts accordingly. The equations for finding these two equilibrium points for each of the three climate regimes were derived by Nasrallah (2006). The solutions were found numerically for 486 combinations of the six variables above. Finding an analytic solution to these equations remains an open research question.

MODEL OUTPUT

The main result of this whole exercise is that the total value added by the organization under the two different learning curve equilibria might be the same, or it might be different. The non-cooperative equilibrium cannot be higher than the cooperative. When the two are equal, we can conclude that laissez faire management, in all its popular modern names and forms, is sufficient for coordination, leading to a learning curve that reaches as high as it can. This means that it is better to avoid the extra expenditure and frustrations of heavy-handed management in those situations. (One can also argue that the learning curve might take longer to reach equilibrium when one point of learning has to process all the information for all organization members. Deriving the exact shape of the learning curve under the group-learning assumptions of IVA is also an open research question.)
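The 486 numerical cases follow from crossing the parameter levels given above: two diversity levels and three levels of each of the other five variables. A quick sketch (the numeric encodings mirror the values quoted in the "Variables" section):

```python
from itertools import product

levels = {
    "diversity": [3, 6],
    "differentiation": [2, 10, 30],  # highest-to-lowest value ratio
    "interdependence": [1, 3, 9],
    "load": [0.15, 0.75, 1.25],      # lambda / mu
    "urgency": [0.05, 0.50, 0.90],   # beta / mu
    "climate": ["disciplined", "fraternal", "cynical"],
}

grid = list(product(*levels.values()))
print(len(grid))  # 2 * 3 * 3 * 3 * 3 * 3 = 486
```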
Otherwise, the higher the ratio of the cooperative to the non-cooperative equilibrium value, the more management needs to do to both discover and enforce the higher-value behavior. What the IVA model must do, therefore, is this: find the value added by the whole organization under the cooperative (Von Neumann) equilibrium, and divide that by the value added by the whole organization under the non-cooperative (Nash) equilibrium. For clarity, we can express this as a percentage improvement by subtracting 1. The table of these percentage improvements for each of the 486 combinations of the six parameters is the output of the model. These values are tabulated in Figures 5.1 through 5.3, with darker shading for the higher values of management usefulness. Figure 5.1 shows the values under low load; Figure 5.2 shows medium load; and Figure 5.3 shows high load.

Interpretation of Results

The low load plot in Figure 5.1 shows hardly any value from management playing the role of regulator, as might be expected since there is very little activity to regulate under low load. Organizations typically form to carry out processes that require more than 15% of the members' processing capacity, but it is possible for many organizations to go through periods of low load, during which the value added by layers of management is hard to gauge. For example, a standing army is typically lampooned as the epitome of bureaucratic overkill because the load on it outside of wartime does not call for the large hierarchy that is typically in place. In business, organizations that no longer have the load present during their creation typically shed management layers or go out of business. Figures 5.2 and 5.3 show very similar patterns, with the higher load in Figure 5.3 leading to higher values of the usefulness of management. We can focus on Figure 5.3 to note the patterns. Observations include:

1. In general, a "cynical" climate needs management to centralize learning more than a "disciplined" climate does, and a "fraternal" climate needs it less than a disciplined one.
2. In general, low urgency coupled with high or medium load gives management an advantage over high urgency, possibly because high-urgency work needs the quick response of decentralized learning.
3. In general, high differentiation gives management an advantage.
4. High interdependence gives management a big advantage under a "cynical" climate, and a small advantage under a "disciplined" climate. Under a "fraternal" climate, the effect of interdependence is mixed: under low diversity and high or medium differentiation, decentralized learning gains an advantage as interdependence goes up.

FIGURE 5.1 Value added by centralized learning under low load.
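The quantity tabulated in these figures is straightforward to express; in a sketch (names and example values ours):

```python
def management_usefulness(v_cooperative, v_noncooperative):
    """Percentage improvement available from centrally managed learning:
    the ratio of cooperative (Von Neumann) to non-cooperative (Nash)
    equilibrium value, minus 1, expressed as a percentage."""
    if v_noncooperative <= 0:
        raise ValueError("equilibrium values must be positive")
    return 100.0 * (v_cooperative / v_noncooperative - 1.0)

# Hypothetical equilibrium values, for illustration only.
print(round(management_usefulness(104.7, 100.0), 1))  # 4.7 (percent)
print(round(management_usefulness(100.0, 100.0), 1))  # 0.0: laissez faire suffices
```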
FIGURE 5.2 Value added by centralized learning under medium load.
FIGURE 5.3 Value added by centralized learning under high load.
VALIDATION STUDY

Since the IVA model is fairly recent, only one validation study has been performed on it (Nasrallah and Al-Qawasmeh 2009). The study involved interviewing the managers of 23 companies in the Kingdom of Jordan using a structured questionnaire. The questionnaire assessed the operating environment of the company as well as the extent to which different mechanisms were used to specify how employees should act. These mechanisms were combined in a somewhat arbitrary fashion into one "management control" measure, accounting for the effects of:

1. High formalization, which means that rules and procedures are written down and enforced
2. High centralization, which means that a greater proportion of operating decisions are made by higher managers
3. Bureaucratic structure, which means that one permanent manager oversees a person's work rather than delegating some authority to a temporary team or project leader
4. Micro-management, which means that managers ask for reports and give instructions beyond what is required by the formal structure

The interview responses about the work of the organization were mapped to the six parameters of the IVA model, and the tables in Figures 5.1 to 5.3 were used to deduce management usefulness. The companies that had an amount of management control (the sum of the four factors in the list above) proportional to the management usefulness indicated by IVA were expected to be doing better financially than those that did not match practice to need. Financial performance was derived from public stock market listings of the companies. It was necessary to exclude 3 of the 23 companies because they were losing money, which indicates things being wrong with the company other than over- or under-management. Another eight company managers indicated that they faced an unpredictable competitive environment, which indicates that their learning curves were not at the flat part yet, thus making the IVA model useless. The remaining 12 companies showed a statistically significant correlation between positive growth in the "return on assets" (ROA) measure over three years and the degree of fit between management practice and IVA prescription, as shown in Figure 5.4. While this finding is promising, much more remains to be done in order to improve the mapping of the model to real and measurable variables encountered in management practice.
It is nevertheless safe to conclude that two distinct organizational learning curves can and do coexist, and that the higher-cost learning curve that involves centralizing all the learning is only beneficial in certain limited situations. The cheaper decentralized learning model is likely to be the key to profitability and growth in other situations. For those organizations using the mode of learning that is not appropriate for their organizational climate, workload, degree of urgency, work interdependence, skill diversity, and differentiation, it will be profitable to change. This can be done either through consultants or internally, by shifting the organizational structure to add or subtract formalization, centralization, bureaucracy, and micro-management, and thereby to nudge the learning mode from distributed to centralized, or vice versa.

FIGURE 5.4 Correlation between adherence to IVA recommendation and ROA growth (performance versus misfit count for stable profitable firms; R² = 0.447).

REFERENCES

Adler, P.S., and Clark, K.B., 1991. Behind the learning curve: A sketch of the learning process. Management Science 37(3): 267–281.
Ancker, C.J., and Gafarian, A.V., 1962. Queuing with impatient customers who leave at random. Journal of Industrial Engineering 13: 84–90.
Barabási, A.L., 2003. Linked: How everything is connected to everything else and what it means. New York: Plume, Penguin Group.
Coase, R.H., 1988. The firm, the market and the law. Chicago: University of Chicago Press.
Davidow, W.H., 1992. The virtual corporation: Structuring and revitalizing the corporation for the 21st century. New York: Harper Business.
Denison, D.R., 1996. What is the difference between organizational culture and organizational climate? A native's point of view on a decade of paradigm wars. Academy of Management Review 21(3): 1–36.
Huberman, B.A., and Hogg, T., 1995. Communities of practice. Computational and Mathematical Organization Theory 1(1): 73–92.
Mintzberg, H., 1989. Mintzberg on management. New York: The Free Press.
Nasrallah, W., 2006. When does management matter in a dog-eat-dog world: An "interaction value analysis" model of organizational climate. Computational and Mathematical Organization Theory 12(4): 339–359.
Nasrallah, W.F., and Al-Qawasmeh, S.J., 2009. Comparing n-dimensional contingency fit to financial performance of organizations. European Journal of Operational Research 194(3): 911–921.
Nasrallah, W.F., and Levitt, R.E., 2001. An interaction value perspective on firms of differing size. Computational and Mathematical Organization Theory 7(2):
Nasrallah, W.F., Levitt, R.E., and Glynn, P., 1998. Diversity and popularity in organizations and communities. Computational and Mathematical Organization Theory 4(4): 347–372.
Nasrallah, W.F., Levitt, R.E., and Glynn, P., 2003. Interaction value analysis: When structured communication benefits organizations. Organization Science 14(5): 541–557.
Williamson, O.E., 1979. Transaction-cost economics: The governance of contractual relations. Journal of Law and Economics 22(2): 233–261.

6 Log-Linear and Non-Log-Linear Learning Curve Models for Production Research and Cost Estimation*

Timothy L. Smunt

* Reprinted from International Journal of Production Research 1999, vol. 37, no. 17, 3901–3911.

CONTENTS
Introduction .......................................................... 91
Mid-unit Learning Curve Model ......................................... 92
Derivation of Mid-unit Formula ........................................ 94
Projections from Batch to Batch ....................................... 95
Dog-leg Learning Curves ............................................... 96
Advantages of the Mid-unit Model in Regression Analysis ............... 98
Guidelines for Future Research ........................................ 100
References ............................................................ 101

INTRODUCTION

Learning curve models attempt to explain the phenomenon of increasing productivity with experience. The first reported use of the learning curve phenomenon was by Wright (1936), and since then an extensive number of papers have reported its use in industrial applications and research settings (e.g., see Adler and Clark 1991; Gruber 1992; Bohn 1995; Rachamadugu and Schriber 1995; Epple et al. 1996; Mazzola and McCardle 1996). Wright's model, which assumed that costs decrease by a certain percentage as the number of produced units doubled, is still widely used and forms the basis for most other adaptations of the learning curve concept. Extensions of Wright's model to account for work in progress (Globerson and Levin 1995) and for use in project control (Globerson and Shtub 1995) have also been proposed to consider typical data-gathering problems and scenarios in industry settings. Wright's learning curve model, $y = ax^{-b}$, a log-linear model, is often referred to as the "cumulative average" model, since y represents the average cost of all units produced up to the xth unit. Crawford (see Yelle 1979) developed a similar log-linear model, using the same function as shown for the cumulative average model. However, in this case, y represents the unit cost for the particular xth unit. For this reason, the Crawford approach is often referred to as the "unit cost" model.
Both of these simple log-linear models are discrete in unit time for at least one type (unit cost or cumulative average cost) of cost calculation. In either the cumulative average or unit cost approach, an approximation is required to convert one type of cost to the other. This approximation can create difficulties both in empirical studies of production costs and in formulating analytical models for production planning that include learning. However, the use of the continuous form of the log-linear model, explained in detail in the next section, overcomes this discrete formulation problem. By making the assumption that learning can occur continuously, learning curve projections can be made from mid-units, thus eliminating any approximation error. This model, sometimes called the "mid-unit" approach, also provides for the reduction of computation time, which can become important in empirical analysis of manufacturing cost data due to the numerous items typically found in such datasets. There has been a fair amount of confusion in the literature and operations management textbooks concerning the appropriate use of either a unit or a cumulative average learning curve for cost projections. The main objectives of this chapter are to illustrate that no conflicts exist in this regard under the assumption of continuous learning, and to provide guidelines for this model's application in research and cost estimation. We also present variations of the log-linear models that are designed to analyze production cost data when the log-linear assumption may not hold, especially during production start-up situations. This chapter is organized as follows. First, the continuous log-linear (mid-unit) model is presented. Second, examples of the use of the mid-unit model are provided to illustrate the relative ease of this learning curve approach for estimating both unit and cumulative average costs of labor effort.
Finally, we discuss the relative advantages and disadvantages of using these models, including more complicated non-log-linear models, in the empirical research of productivity trends and in proposing analytical models for normative research in production.

MID-UNIT LEARNING CURVE MODEL

If learning can occur continuously—that is, if learning can occur within unit production as well as across unit production—a continuous form of the log-linear learning curve can be modeled. It is not necessary in this case, then, to force the unit cost and the cumulative average cost of the first unit to be identical. We note that within-unit learning is a reasonable assumption, especially when the product is complex and requires a number of tasks for an operator to complete. In fact, this assumption was previously made by McDonnell Aircraft Company for the purposes of their cost estimates in fighter aircraft production. Furthermore, as will be illustrated later, most cost data is gathered by production batch and, therefore, projecting from a mid-unit of the batch to another batch mid-unit does not require the assumption of continuous learning within a single unit of production. A continuous form of the unit cost function permits the use of learning curves to be more exact and convenient. According to the cumulative average model, the total cost of x units is:

Ytc(x) = (a)(x^(1−b)),

where

b = −[log(learning rate) / log(2.0)].

The rate of increase of a total cost function can be described by the first derivative of that total cost function. Because the rate of increase of a total cost function can be explained as the addition of unit costs over time, the unit cost function becomes:

Yu(x) = d(tc)/d(x) = (1 − b)(a)(x^(−b)).

Thus, the cumulative average cost is the same as in the cumulative average model:

Yca(x) = (a)(x^(−b)).

The mid-unit model is illustrated in Figure 6.1.
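A minimal numerical check of these three continuous functions (with illustrative values only: a = 100 and an 80% learning rate) confirms that Yu is the derivative of Ytc and that Yca is simply total cost divided by cumulative units:

```python
import math

# Illustrative parameters (not from the text): a = 100, 80% learning rate.
a, rate = 100.0, 0.80
b = -math.log(rate) / math.log(2.0)

def total_cost(x):      # Ytc(x) = a * x**(1 - b)
    return a * x ** (1 - b)

def unit_cost(x):       # Yu(x) = (1 - b) * a * x**(-b), the derivative of Ytc
    return (1 - b) * a * x ** (-b)

def cum_avg_cost(x):    # Yca(x) = a * x**(-b) = Ytc(x) / x
    return a * x ** (-b)

# Finite-difference check that Yu is the slope of Ytc at an arbitrary point.
x, h = 25.0, 1e-6
numeric_slope = (total_cost(x + h) - total_cost(x - h)) / (2 * h)
```

The central difference `numeric_slope` agrees with `unit_cost(25.0)` to within the step-size error, and `cum_avg_cost` equals `total_cost` divided by units exactly.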
A question that frequently arises concerning the mid-unit learning model is: "How can the first unit have a different value for the cumulative average cost than for the unit cost?" This disparity can easily be resolved by taking into consideration the relationship between the rates of total cost increase and the learning curve concept. The first derivative provides a continuous unit cost function. The continuous cost function assumes that learning exists, either within a unit or within a batch of units (or both). For example, consider a "production batch of units" experiencing a 90% learning curve, and then consider a case of no learning effect where all units require 10 h each to produce. If no learning takes place within the first five units, the midpoint of effort would occur at the middle of the third unit. This midpoint, also known as the "mid-unit," is the point of a unit or batch of units where the average cost theoretically occurs.

FIGURE 6.1 Mid-unit model. (Cost per unit versus cumulative output, showing the Yca and Yu curves.)

However, when learning takes place, the initial units will cost more than the later units. If the first unit requires 10 h to produce, the expected costs for the next four units, assuming a 90% learning curve, are as shown in Table 6.1. In Table 6.1, the total cost for the batch of five units is 38.3 h. The mid-unit in production is reached in 19.1 (38.3 h/2) total hours, which occurs somewhere between unit 2 and unit 3. Interestingly, the unit cost for the first unit plotted at the midpoint of unit 1 is identical to the average cost to produce the whole unit (Figure 6.1).

Derivation of Mid-unit Formula

The use of the mid-unit model requires the determination of the mid-unit for most calculations and for regression analysis. Normally, the mid-unit is calculated for a production batch so that average costs can be projected from one batch to another.
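The Table 6.1 figures quoted above can be reproduced directly from the continuous unit cost formula (a sketch; the 10 h first-unit time is taken as given, as in the text, and the formula is applied to units 2–5):

```python
import math

# 90% learning curve: b = 0.152, so 1 - b = 0.848, matching Table 6.1.
b = -math.log(0.90) / math.log(2.0)
unit_costs = [10.0] + [(1 - b) * 10.0 * x ** (-b) for x in range(2, 6)]
totals = [sum(unit_costs[:i + 1]) for i in range(len(unit_costs))]
mid_hours = sum(unit_costs) / 2.0   # half the batch effort, ~19.1 h
```

Rounded to one decimal, the unit costs come out as 10.0, 7.6, 7.2, 6.9, and 6.6 h, the batch total is 38.3 h, and the half-effort point of 19.1 h falls between the cumulative totals for units 2 and 3, as the text states.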
TABLE 6.1
90% Learning Curve Calculations

Unit        Unit Cost (h)                         Total Cost (h)
1st unit    10.0                                  10.0
2nd unit    (0.848)(10 h) × 2^(−0.152) = 7.6      17.6
3rd unit    (0.848)(10 h) × 3^(−0.152) = 7.2      24.8
4th unit    (0.848)(10 h) × 4^(−0.152) = 6.9      31.7
5th unit    (0.848)(10 h) × 5^(−0.152) = 6.6      38.3

In essence, the average cost for a batch is the unit cost for the mid-unit of the batch. Therefore, the projection of a previous batch average cost simply requires that unit costs be projected from one mid-unit to another mid-unit. To derive the mid-unit formula, it is important to consider two ways of calculating batch costs. First, within any given batch where x2 is the quantity at the end of the batch, and x1 is the quantity prior to the beginning of the batch, the total cost of the batch (utilizing the cumulative cost equation) would be:

Total batch cost = (a)(x2^(1−b)) − (a)(x1^(1−b)) = a[x2^(1−b) − x1^(1−b)].

Alternatively, the unit cost is defined as:

Yu = (1 − b)(a)(x^(−b)).

The unit cost at the midpoint of effort, multiplied by the quantity in the batch, results in the total cost for the batch. Defining xM as the midpoint or mid-unit of the batch, then:

Batch cost = (1 − b)(a)(xM^(−b))(x2 − x1).

Setting Equations 6.4 and 6.6 for the batch cost equal to each other and solving for xM:

(1 − b)(a)(xM^(−b))(x2 − x1) = (a)[x2^(1−b) − x1^(1−b)],

or

xM = [ (1 − b)(x2 − x1) / (x2^(1−b) − x1^(1−b)) ]^(1/b).

Projections from Batch to Batch

Predicting batch costs at some future point from the most current cost data is accomplished in a similar manner to projecting from unit to unit. The mid-unit formula is used to determine "from" and "to" units for the cost projection. Figure 6.2 shows a unit cost curve based on an 80% learning curve ratio, with unit 50 having a cost of 10 labor hours. To project a cost for unit 100 we use the formula:

Yu(x2) = (x2^(−b) / x1^(−b))(Yu(x1)).

When the learning curve rate is 80%,

b = −log(0.80) / log(2.0) = 0.3219.
FIGURE 6.2 Mid-unit model example. (Cost per unit versus cumulative output, showing Batch 1 and Batch 2 on the unit cost curve.)

Therefore,

Yu(100) = (100^(−0.3219) / 50^(−0.3219))(10) = 8 labor hours.

Similarly, batch costs are projected by:

Batch average2 = (xM2^(−b) / xM1^(−b))(batch average1).

Assume that the most current production batch shows a total cost of 200 labor hours for units 11–20, for a batch average of 20 labor hours per unit. The task then becomes one of projecting the cost for next month's production from unit 101 to unit 500. First, find the corresponding mid-units of each batch. Assuming an 80% learning curve:

xM1 = [ (0.6781)(20 − 10) / (20^0.6781 − 10^0.6781) ]^(1/0.3219) = 14.6,

xM2 = [ (0.6781)(500 − 100) / (500^0.6781 − 100^0.6781) ]^(1/0.3219) = 266.5.

Then,

Batch average (units 101–500) = (266.5^(−0.3219) / 14.6^(−0.3219))(20) = 7.85 h.

The total cost for the batch is 7.85 h/unit × 400 units = 3140 h.

Dog-leg Learning Curves

Frequently, a production process will experience a change in the learning rate. When a break in the learning curve is expected, a method to project beyond this breaking point to a different learning curve is needed. Again, this cost projection takes place on the unit cost line. To derive a formula to project on a dog-leg learning curve, it is possible to make two separate projections for the same purpose. First, projecting from Batch 1 to the breaking point, BP, on the first-leg curve with index b1:

Yu(BP) = (BP^(−b1) / xM1^(−b1))(average batch cost1).

Then, projecting from BP to Batch 2 on the second-leg curve with index b2:

Yu(xM2) = (xM2^(−b2) / BP^(−b2))(Yu(BP)).

Expanding Equation 6.12 to include Equation 6.11 we have:

Yu(xM2) = (xM2^(−b2) / BP^(−b2))(BP^(−b1) / xM1^(−b1))(average batch cost1).
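The units 11–20 to units 101–500 projection worked above can be sketched as a small routine (tolerances in the checks allow for the rounding used in the text):

```python
import math

# 80% learning curve, as in the batch-to-batch example in the text.
b = -math.log(0.80) / math.log(2.0)

def mid_unit(x1, x2):
    """Equation 6.8: mid-unit of the batch of units x1+1 .. x2."""
    return ((1 - b) * (x2 - x1) / (x2 ** (1 - b) - x1 ** (1 - b))) ** (1 / b)

x_m1 = mid_unit(10, 20)                 # ~14.6, for the units 11-20 batch
x_m2 = mid_unit(100, 500)               # ~266.5, for the units 101-500 batch
avg2 = (x_m2 / x_m1) ** (-b) * 20.0     # projected batch average, ~7.85 h/unit
total2 = avg2 * (500 - 100)             # projected batch total, ~3140 h
```

The projection is simply a unit-cost projection from one mid-unit to the other, which is the central convenience of the mid-unit model.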
Substituting the mid-unit formula (Equation 6.8) into Equation 6.13, and defining QP1 and QT1 as the previous and total units for the first batch and QP2 and QT2 as the previous and total units for the second batch, we have:

Yu(xM2) = { ( [ (1 − b2)(QT2 − QP2) / (QT2^(1−b2) − QP2^(1−b2)) ]^(1/b2) )^(−b2) / BP^(−b2) } × { BP^(−b1) / ( [ (1 − b1)(QT1 − QP1) / (QT1^(1−b1) − QP1^(1−b1)) ]^(1/b1) )^(−b1) } × (average batch cost1).

Simplifying:

Yu(xM2) = [ (1 − b1)(QT1 − QP1)(BP^b2)(QT2^(1−b2) − QP2^(1−b2)) ] / [ (1 − b2)(QT2 − QP2)(BP^b1)(QT1^(1−b1) − QP1^(1−b1)) ] × (average batch cost1).

To illustrate the use of this formula, assume that we wish to estimate the cost of a batch from unit 201 to unit 500. Furthermore, assume that the average cost for an earlier batch (from unit 11 to unit 20) was 20 h/unit, the learning curve for the first leg was 80%, and the second leg learning curve is projected to be 90%. The break point is 100 units. The average cost for units 201–500 can then be calculated as:

[ (0.6781)(20 − 10)(100^0.152)(500^0.848 − 200^0.848) ] / [ (0.848)(500 − 200)(100^0.3219)(20^0.6781 − 10^0.6781) ] × (20 h/unit) = 8.95 h/unit.

While the derivation of the above formula may have appeared complicated, note that the application is quite straightforward, as shown in the above example. It is also possible to decompose this formula into smaller parts in order to reduce its apparent complexity. For example, projections can first be made from the mid-unit of the production batch to the dog-leg point. Projections can then be made from the dog-leg point to the projection batch using the new learning curve.

ADVANTAGES OF THE MID-UNIT MODEL IN REGRESSION ANALYSIS

The advantages of a continuous form of a log-linear learning curve model, the mid-unit model, fall into two categories: (1) computation speed and (2) computation accuracy.
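The simplified dog-leg formula (Equation 6.15) is easy to mechanize. This sketch checks it against the numerical example from the previous subsection (80% first leg, 90% second leg, break at unit 100; the tolerance allows for the text's rounding):

```python
import math

def b_index(rate):
    """Learning index b for a given learning rate, e.g. 0.3219 for 80%."""
    return -math.log(rate) / math.log(2.0)

def dog_leg_batch_average(avg1, qp1, qt1, rate1, qp2, qt2, rate2, bp):
    """Equation 6.15: project a batch average cost across a learning-rate break.

    (qp1, qt1) and (qp2, qt2) are the previous and total units for the known
    and target batches; bp is the unit number where the rate changes.
    """
    b1, b2 = b_index(rate1), b_index(rate2)
    num = (1 - b1) * (qt1 - qp1) * bp ** b2 * (qt2 ** (1 - b2) - qp2 ** (1 - b2))
    den = (1 - b2) * (qt2 - qp2) * bp ** b1 * (qt1 ** (1 - b1) - qp1 ** (1 - b1))
    return num / den * avg1

# Units 11-20 averaged 20 h/unit on the 80% leg; project units 201-500
# on the 90% leg beyond the break at unit 100.
avg_201_500 = dog_leg_batch_average(20.0, 10, 20, 0.80, 200, 500, 0.90, 100)
```

The result agrees with the 8.95 h/unit given in the example above.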
The computational speed advantage is the result of the ability to project all types of cost (unit, batch average, cumulative average, and total) from the continuous unit cost function. For example, assume that the learning rate and unit cost for x1 are known, and projections are required for the average cost of the production batch from units x2 to x3, for the cumulative average cost of x3 units, and for the total cost for x2 to x3 units. Using the continuous form of the learning curve, the average cost for x2 to x3 is simply the unit cost projection from x1 to the mid-unit of the batch from x2 to x3. This could be accomplished using the unit cost model, but the unit cost model would necessitate the summation of x3 units to determine the cumulative average cost of x3.* An additional computational problem arises when the underlying learning behavior is log-linear with the cumulative average, but regression analysis is performed on actual unit cost data without adjusting for the mid-unit of effort. For example, using an 80% learning curve and the cumulative average model, cumulative average costs are calculated from units 2 to 200. The difference between the total costs at xn and xn−1 becomes the unit cost for this model. The unit cost function is non-log-linear due to its discrete nature. An estimation of the learning curve using the unit costs from units 2 to 50 (call this our historical database) would indicate approximately a 79% learning curve. The upper half of Table 6.2 illustrates the projection error from performing regression analysis on unit cost data in this fashion. We also calculated the true unit costs when the cumulative average costs follow 70% and 90% learning curves. In all three cases, regression analyses on the actual unit costs provide slope estimates that are steeper than those actually occurring.
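This estimation bias can be demonstrated with a small simulation (a sketch; the value a = 100 and the 80% curve are illustrative): generate exact cumulative-average costs, difference the totals into discrete unit costs, and fit an ordinary least-squares log-log regression to units 2–50.

```python
import math

# True model: cumulative average follows an exact 80% log-linear curve.
a, rate = 100.0, 0.80
b = -math.log(rate) / math.log(2.0)

def total_cost(x):
    """Cumulative total cost under the cumulative-average model."""
    return a * x ** (1 - b)

# "Actual" discrete unit costs for units 2..50, as differences of totals.
units = range(2, 51)
xs = [math.log(x) for x in units]
ys = [math.log(total_cost(x) - total_cost(x - 1)) for x in units]

# Ordinary least-squares slope of log unit cost on log unit number.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys))
         / sum((xi - mx) ** 2 for xi in xs))
fitted_rate = 2.0 ** slope   # learning rate implied by the fitted slope
```

The fitted rate comes out close to 79%, steeper than the true 80%, consistent with the discussion above.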
Note that the cost projection error using these incorrect slope and intercept estimates increases as the cost estimate is made for production units further down the learning curve. The maximum error (7.24%) occurs for the largest unit number for the estimate (500) and for the steepest learning curve (70%). Clearly, this estimate error is most pronounced when the actual unit costs are taken from the earliest production units. The lower half of Table 6.2 illustrates the estimation error when actual unit costs for units 26–50 are used (the latter half of the historical data in this example). The maximum percent error is reduced considerably using this approach (1.24%), but is still sufficiently large to induce substantial profit margin reductions if pricing has been based on these projections. Therefore, we see that the use of the mid-unit approach is most important in reducing projection errors in low- to medium-volume production environments.

TABLE 6.2
Projection Error Caused by the Use of the Discrete Unit Learning Curve Model

Historical Data: Units 2–50
Learning Curve    Error % (by unit number for estimate, increasing to 500)
70%               4.84    5.96    6.99    7.24
80%               3.09    3.81    4.32    4.71
90%               1.43    1.77    2.03    2.22

Historical Data: Units 26–50
Learning Curve    Error % (by unit number for estimate, increasing to 500)
70%               0.67    0.89    1.11    1.24
80%               0.42    0.60    0.70    0.73
90%               0.20    0.28    0.36    0.40

* We note, however, that if the true underlying learning behavior follows a discrete pattern, as assumed in either the cumulative average or unit cost approach, using the mid-unit approach can induce approximation error (see Camm et al. 1987). In prior research projects, however, the author has found that when the mid-unit formulation with a continuous learning assumption is used to initially estimate learning curve parameters via least squares regression analysis, the subsequent application of the mid-unit approach eliminates the potential approximation error as discussed by Camm et al. (1987).
Projection errors due to the use of discrete versions of learning curve models become negligible for high-volume production (e.g., appliance and automobile manufacturing). Perhaps most important is that the ability to project from mid-unit to mid-unit also provides for an efficient regression analysis of typical historical cost data found in most firms, regardless of the production volume. Cost accounting systems usually capture the total cost needed to produce a batch of units, and therefore, the average cost for the batch. Also typical of "real" data is the omission of early production cost data, or incorrect production cost reporting later in the production cycle. When these problems cause missing data points (i.e., missing batch averages), cumulative averages cannot be calculated without approximations, thus forcing the use of the unit cost function. Of course, the use of tables or careful adjustments to reduce approximation errors is possible, but the use of the mid-unit model eliminates any such effort. The ability to formulate continuous cost functions for both unit and cumulative average costs provides fast and accurate learning curve analysis of most production cost databases.

GUIDELINES FOR FUTURE RESEARCH

This chapter illustrated the mid-unit approach for log-linear productivity trend analysis. The mid-unit model is a continuous form of the log-linear learning curve that allows production cost projections from both the cumulative average cost function and the unit cost function. Cost projection errors caused by the discrete unit formulations for either the cumulative average or unit cost functions are eliminated. The formulation of the model requires negligible computational capabilities to accomplish even the most difficult learning curve projections. Generally, this chapter has shown that a log-linear learning curve model provides good "fits" of empirical data for many products and processes.
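The batch-average regression described above can be sketched as follows. The batch records here are synthetic, generated from an exact 80% curve with a = 100 (values chosen only for illustration), and one batch (units 41–60) is deliberately missing, as is common in industrial databases; regressing log batch average on log mid-unit still recovers the true learning rate despite the gap.

```python
import math

# Synthetic "true" process: exact 80% curve, a = 100.
a, rate = 100.0, 0.80
b = -math.log(rate) / math.log(2.0)

def batch_average(x1, x2):
    """Average cost of units x1+1 .. x2 under the continuous model."""
    return a * (x2 ** (1 - b) - x1 ** (1 - b)) / (x2 - x1)

def mid_unit(x1, x2):
    """Equation 6.8: the unit whose cost equals the batch average."""
    return ((1 - b) * (x2 - x1) / (x2 ** (1 - b) - x1 ** (1 - b))) ** (1 / b)

# Batch records as (prior units, total units); the 41-60 batch is missing.
batches = [(0, 20), (20, 40), (60, 100), (100, 150)]
xs = [math.log(mid_unit(x1, x2)) for x1, x2 in batches]
ys = [math.log(batch_average(x1, x2)) for x1, x2 in batches]

# Ordinary least-squares slope of log batch average on log mid-unit.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys))
         / sum((xi - mx) ** 2 for xi in xs))
recovered_rate = 2.0 ** slope
```

Because the batch average is exactly the unit cost at the batch's mid-unit, the log-log relationship is exactly linear and the regression recovers the 80% rate regardless of which batches happen to be present.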
In other cases, typically start-up phases of production, it may be found that non-log-linear learning curve models better estimate productivity trends. Here, additional terms are typically used to change the log-linear shape. Note, however, that the more complex the model, the more difficult it is to compute the terms' coefficients. There are five basic patterns of non-log-linear learning curves: plateau, hybrid or "dog-leg," S-shaped, convex-asymptotic, and concave-asymptotic. The first two patterns are simple adaptations of the log-linear model in order to consider specific effects. The plateau effect, discussed by Baloff (1970, 1971), assumes that the learning effect is associated with the "start-up phase" of a new process, and that a steady-state phase, which exhibits negligible productivity improvements, occurs as a distinct interruption of the start-up phase. The hybrid model, or "dog-leg," is similar to the plateau model since it is a combination of two or more log-linear segments. Normally, the slopes of the succeeding segments are flatter. The change in slopes can be explained by the implementation over time of more automated processes in place of labor-intensive ones. The change in slopes indicates different learning rates associated with various production processes. The use of the mid-unit approach for these special cases remains valid. Other models explicitly use non-log-linear functions to estimate the change in learning rates over time. For example, an S-shaped curve was proposed by Carlson (1973) and provides a robust approach to fitting data with both log-concave and log-convex characteristics. Levy (1965) also proposed an S-shaped learning curve model, later discussed by Muth (1986). Other researchers, including Knecht (1974) and De Jong (1957), have investigated convex-asymptotic learning curves that project a convergence to a steady-state as more units are produced.
Still others (e.g., Garg and Milliman 1961) proposed learning curve models that are concave-asymptotic. Finally, it should be pointed out that non-linearities might be caused by inherent forgetting in the process resulting from interruptions in the performance of tasks. Dar-El et al. (1995a, 1995b) provide modifications to the log-linear model that specifically address the potential for forgetting in both short and long cycle time tasks. Researchers utilizing empirical data from company databases should initially consider the use of the mid-unit log-linear model for two main reasons. First, the mid-unit approach is appropriate if the database of production costs holds either batch total cost or batch average cost information and has at least one missing data point—which is common in most industrial databases. Because the mid-unit approach does not require all unit cost data to be available in order to determine both the cumulative average and unit costs, regression analysis utilizing the mid-unit approach provides the most accurate method to estimate the intercepts and slopes of the component production costs. Second, the mid-unit approach is appropriate when the productivity trend is assumed to be log-linear, but the learning rate has the potential to change at identified time points (i.e., events), resulting in a dog-leg learning curve scenario. Normative research can also benefit from the use of the mid-unit approach. For example, many of the studies using analytical approaches to the optimal lot-sizing problem have used the continuous learning model, but assume that the unit cost occurs at the unit number minus 0.5. This approximation will always result in some level of error, no matter how learning is assumed to occur (i.e., either continuously or discretely). (See Camm et al. [1987] for a discussion of the errors caused under the discrete learning assumption.)
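The size of the minus-0.5 approximation error can be illustrated by comparing it with the exact mid-unit of a single-unit "batch" (a sketch, using an 80% curve for illustration): the discrepancy is largest for the earliest units and nearly vanishes further down the curve.

```python
import math

# 80% learning curve, chosen only for illustration.
b = -math.log(0.80) / math.log(2.0)

def mid_unit(x1, x2):
    """Equation 6.8 applied to the batch consisting of units x1+1 .. x2."""
    return ((1 - b) * (x2 - x1) / (x2 ** (1 - b) - x1 ** (1 - b))) ** (1 / b)

m2 = mid_unit(1, 2)     # exact mid-unit of unit 2; the approximation gives 1.5
m10 = mid_unit(9, 10)   # exact mid-unit of unit 10; the approximation gives 9.5
```

For unit 2 the exact mid-unit is about 1.46 rather than 1.5, while by unit 10 the approximation error has shrunk to a small fraction of that, which is why the approximation matters most in early, low-volume production.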
The use of the exact mid-unit calculation eliminates any error in such normative studies. In addition, the previously published normative studies of optimal lot-sizing under learning and forgetting can be extended to dog-leg learning conditions using the mid-unit log-linear approach in order to examine the impact of either start-up or new technology implementations. It should be understood, however, that non-log-linear learning curve approaches might be most appropriate for special conditions, especially during the start-up phase of production. Prior research indicates that these non-log-linear trends have been found to exist for highly aggregated data—such as the total production costs for complex products (e.g., aircraft). Ultimately, the choice of using either the mid-unit log-linear approach or one of the non-log-linear models should be made with full recognition that it is sometimes impossible to determine the true, underlying learning behavior of a production process. In the end, simplicity may prove to be the best guide, since production cost estimation can become complicated due to a number of other confounding factors.

REFERENCES

Adler, P.S., and Clark, K.B., 1991. Behind the learning curve: A sketch of the learning process. Management Science 37(3): 267–281.
Baloff, N., 1970. Start-up management. IEEE Transactions on Engineering Management EM-17(4): 132–141.
Baloff, N., 1971. Extension of the learning curve – some empirical results. Operations Research Quarterly 22(4): 329–340.
Bohn, R.E., 1995. Noise and learning in semiconductor manufacturing. Management Science 41(1): 31–42.
Camm, J.D., Evans, J.R., and Womer, N.K., 1987. The unit learning curve approximation of total cost. Computers and Industrial Engineering 12(3): 205–213.
Carlson, J.G., 1973. Cubic learning curves: Precision tool for labor estimating. Manufacturing Engineering Management 71(5): 22–25.
Dar-El, E.M., Ayas, K., and Gilad, I., 1995a.
A dual-phase model for the individual learning process in industrial tasks. IIE Transactions 27(3): 265–271.
Dar-El, E.M., Ayas, K., and Gilad, I., 1995b. Predicting performance times for long cycle time tasks. IIE Transactions 27(3): 272–281.
De Jong, J.R., 1957. The effects of increasing skill on cycle time and its consequences for time standards. Ergonomics 1(1): 51–60.
Epple, D., Argote, L., and Murphy, K., 1996. An empirical investigation of the microstructure of knowledge acquisition and transfer through learning by doing. Operations Research 44(1): 77–86.
Garg, A., and Milliman, P., 1961. The aircraft progress curve modified for design changes. Journal of Industrial Engineering 12(1): 23–27.
Globerson, S., and Levin, N.L., 1995. A learning curve model for an equivalent number of units. IIE Transactions 27(3): 716–721.
Globerson, S., and Shtub, A., 1995. Estimating the progress of projects. Engineering Management Journal 7(3): 39–44.
Gruber, H., 1992. The learning curve in the production of semiconductor memory chips. Applied Economics 24(8): 885–894.
Knecht, G.R., 1974. Costing technological growth and generalized learning curves. Operations Research Quarterly 25(3): 487–491.
Levy, F., 1965. Adaptation in the production process. Management Science 11(6): 136–154.
Mazzola, J.B., and McCardle, K.F., 1996. A Bayesian approach to managing learning-curve uncertainty. Management Science 42(5): 680–692.
Muth, J.F., 1986. Search theory and the manufacturing progress function. Management Science 32(8): 948–962.
Rachamadugu, R., and Schriber, T.J., 1995. Optimal and heuristic policies for lot sizing with learning in setups. Journal of Operations Management 13(3): 229–245.
Wright, T.P., 1936. Factors affecting the cost of airplanes. Journal of Aeronautical Sciences 3(2): 122–128.
Yelle, L.E., 1979. The learning curve: Historical review and comprehensive survey. Decision Sciences 10(2): 302–328.
7 Using Parameter Prediction Models to Forecast Post-Interruption Learning*

Charles D. Bailey and Edward V. McIntyre

CONTENTS
Introduction
Previous Research
Learning Curves
Relearning Curves and Forgetting
Factors Affecting Loss of Skill and thus Relearning Curve Parameter aR
Factors Affecting the Relearning Rate, Parameter bR
Choice of Learning Curve Forms and Development of Parameter Prediction Models
Choice of Curve Forms
Models to Predict the Relearning Curve Parameters
Variables in Parameter Prediction Model to Predict Parameter aR
Equations to Predict Relearning Curve Parameter aR
Variables in Parameter Prediction Models to Predict Parameter bR
Equations to Predict the Parameter bR
Method
Results
Regressions for aR and bR
An Example
Predictive Ability
First Test of Predictive Ability
Second Test of Predictive Ability
Discussion, Conclusions, and Implications
References

* Reprinted from IIE Transactions, 2003, vol. 35, 1077–1090.

INTRODUCTION

Industrial learning curves (LCs) have found widespread use in aerospace and military cost estimation since World War II, and use in other sectors is increasing (e.g., Bailey 2000). Given a stable production process, such curves are excellent forecasting tools. However, a number of factors can disturb the stable environment and therefore raise issues for research. One such issue is how to deal with interruptions of production and subsequent forgetting and relearning. The earliest advice in the area involved "backing up" the LC to a point that would represent forgetting—assuming forgetting to be the reverse of learning—and then resuming progress down the same curve (e.g., Anderlohr 1969; Adler and Nanda 1974; Carlson and Rowe 1976; Cherrington et al. 1987; Globerson et al. 1989). Subsequent research has tried to refine the modeling of both forgetting and relearning and has constructed relearning curves (RLCs) to describe the relearning of skills that have diminished during breaks in production. In addition, a limited body of research exists on predicting the parameters of RLCs using parameter prediction models (PPMs).
The current chapter develops and tests PPMs for RLCs as a procedure for providing early estimates of post-interruption production times. If managers wish to use these PPMs to predict RLC parameters, they must somehow estimate the parameter values of the PPMs, which they may be able to do based on past experience. This estimation problem is similar to the one that managers face when they first employ an LC; they must somehow estimate the LC parameters. The analysis presented here should provide information on the relevant variables and forms of equations that are useful in predicting the parameters of RLCs. Candidate predictive variables include all information available when production resumes, such as the original LC parameters, the amount of learning that originally occurred, the length of the production interruption, and the nature of the task. The potential usefulness of PPMs is that they can provide a means of estimating post-break production times well before post-break data become available. Other procedures, such as fitting an RLC to post-break data, or backing up a pre-break LC to a point that equals the time for the first post-break iteration, can only be employed after post-break data are available.

PREVIOUS RESEARCH

This section briefly discusses the nature and variety of LCs/RLCs, and then reviews the LC literature relevant to forgetting, relearning, and parameter prediction.

Learning Curves

LCs are well-established cost-estimation tools, and a variety of curves have been developed to model the empirically observed improvement that comes from practice at a task (Yelle 1979). Almost all of the models reflect a pattern of large early improvements followed by slower returns to practice. In the popular "Wright" log-log LC (Wright 1936; Teplitz 1991; Smith 1989), the cumulative average time (and related cost) per unit produced declines by a constant factor each time cumulative production doubles, so that it is also called the "cumulative average" model.
It is a power curve, in the form:

y = ax^b,

where

y = the cumulative average time (or related cost) after producing x units,
a = the hours (or related cost) to produce the first unit,
x = the cumulative unit number,
b = log R / log 2 = the learning index (b < 0), where R is the learning rate.

Equations to Predict Relearning Curve Parameter aR

The greater the loss of skill, the higher should be the value of aR. Equation 7.6 combines aL, L, and B in a log form, while Equation 7.7 combines them in a power form. Signs required for the expected marginal effects are shown below each equation.

aR = q0 + q1 ln(aL) + q2 ln(L) + q3 ln(B),    q1, q3 > 0, q2 < 0.    (7.6)

aR = q0 (aL)^q1 (L)^q2 (B)^q3,    q0 > 0, q2 < 0, 0 < q1, q3 < 1.    (7.7)

Equation 7.7 is made linear by a log transformation; that is,

ln aR = ln q0 + q1 ln(aL) + q2 ln(L) + q3 ln(B).    (7.7a)

Equation 7.8 is the "best of seven" curves tested by Globerson et al. (1989):

aR = q0 (ETm+1)^q1 (B)^q2,    (7.8)

which, in linear form, is

ln aR = ln q0 + q1 ln(ETm+1) + q2 ln(B),    (7.8a)

where, as noted above, ETm+1 is the estimated time for the first post-break unit assuming no break, based on the LC. Globerson et al. (1989) found parameter values in the following ranges:

q0 > 0, 0 < q1, q2 < 1.

Equation 7.8 uses as an independent variable the predicted next time, ETm+1; however, the predicted initial time from the LC, ET1, may also have information content, because the difference between the two reflects the improvement, as L = ET1 − ETm+1. This observation is in keeping with our comment, above, that perhaps both the amount learned and the skill level achieved should be relevant. This leads to the following simple extension of Equations 7.8 and 7.8a:

aR = q0 (ETm+1)^q1 (B)^q2 (ET1)^q3,    q0 > 0, 0 < q2 < 1,    (7.9)

and

ln aR = ln q0 + q1 ln(ETm+1) + q2 ln(B) + q3 ln(ET1).    (7.9a)
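Equations 7.7 and 7.7a are algebraically equivalent, which is what justifies fitting the power-form PPM by ordinary least squares on logged data. A minimal sketch, with hypothetical parameter and variable values chosen only for illustration:

```python
import math

# Hypothetical PPM coefficients and predictors (not estimates from the study):
# q0..q3 are illustrative, as are the first-unit time aL, amount learned L,
# and break length B.
q0, q1, q2, q3 = 2.0, 0.5, -0.3, 0.4
aL, L, B = 30.0, 12.0, 90.0

aR_power = q0 * aL ** q1 * L ** q2 * B ** q3                  # Equation 7.7
aR_linear = math.exp(math.log(q0) + q1 * math.log(aL)
                     + q2 * math.log(L) + q3 * math.log(B))   # Equation 7.7a
```

Because the two expressions agree for any positive inputs, regressing ln aR on ln aL, ln L, and ln B recovers the q coefficients of the power form directly.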
Equations 7.9 and 7.9a use as separate variables the two components of the amount learned (L = ET1 − ETm+1), allowing for the independent variability of these components across subjects. In these equations, the expected values of q1 and q3 are less clear. In other PPMs we expect a negative coefficient on L, but various combinations of values for q1 and q3 are consistent with this result. For example, a positive coefficient on both q1 and q3 would be consistent with a negative coefficient on L if the coefficient on ETm+1 is greater than the coefficient on ET1 (which is what we find). The positive coefficient on ETm+1 is consistent with the findings of Globerson et al. (1989).

Learning Curves: Theory, Models, and Applications

Variables in Parameter Prediction Models to Predict Parameter bR

Drawing again on the findings of Bailey (1989), we use the amount learned (L) and the length of break (B) as independent variables to estimate the relearning rate. Bailey (1989) did not use L and B directly to estimate the relearning rate, but used "lost time," which was a function of L and B. Finally, although that earlier work found no relationship between the learning rate (bL) and the relearning rate (bR), we believe that this relationship is worth testing again in some other functional form and with the benefit of the Bailey-McIntyre LC (Equation 7.2), which models this type of task better.

Amount Learned Before Break (L)

Since we use a start-anew RLC, the relearning rate will depend on the number of repetitions performed before the break. For example, if the change between units 1 and 2 reflects an 80% LC, an RLC after no interruption (or a short interruption) will not reflect this same 80% for the tenth and eleventh units (renumbered 1 and 2 in an RLC), but will show a much slower rate. This effect suggests an inverse relationship between the amount learned (L) and the steepness of the RLC.
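The renumbering effect just described can be illustrated with a small simulation. This sketch assumes a unit-time power curve y = a·x^b (an illustrative form, not the chapter's data): units 10-19 of an 80% curve are renumbered 1-10, as a start-anew RLC would do, and the refitted slope is much flatter than the original index.

```python
import math

b_true = math.log(0.8, 2)                           # 80% LC index, about -0.3219
times = [100.0 * x**b_true for x in range(10, 20)]  # unit times for units 10-19

# Renumber these units 1..10 and fit ln y = ln a + b ln x by simple OLS.
xs = [math.log(i) for i in range(1, len(times) + 1)]
ys = [math.log(t) for t in times]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b_fit = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
```

The fitted slope comes out near -0.09, far shallower than the original -0.32, which is the "much slower rate" described above.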
Ceteris paribus, the more learned before the break, the less the opportunity for learning after the break, which implies higher values for the relearning rate and for bR. However, greater learning during the pre-break period implies a faster learning rate, which may also be reflected in a faster relearning rate (a lower value of bR). The first effect would produce a positive relationship between bR and L; the second, a negative one. Since we do not know which of these effects is stronger, we do not specify expected signs on the coefficient of L.

Length of Break (B)

We expect a positive relationship between the length of the break and the "steepness" of the RLC. A substantial amount of learning places the worker on a relatively flat part of the curve. As forgetting occurs, regression "back up the curve" places the worker on a steeper part of the curve. After a short break, the individual should resume learning in the relatively flat part of the curve, and an RLC fit to post-break observations should reflect slower learning. A longer break would move the worker back into the steeper portion of the curve, and an RLC fitted to these observations should show a steeper relearning rate; that is, more rapid relearning, implying lower values for the relearning rate and bR. Moreover, because memory traces of original skills are retained, relearning can be faster than initial learning of the same material by a person starting at that same performance level (Farr 1987). Thus, when all learning appears to be forgotten, as measured by time on the first post-break performance, the improvement between units 1, 2, 3, and so on, will appear much steeper than for the corresponding original units. Thus, because we use a start-anew RLC beginning with units designated 1, 2, 3, and so forth, bR can easily be more negative than bL. Figure 7.3 shows the expected relationship between bR and B.
Additionally, we include terms for the interaction between B and L, since each may alter the effect of the other; e.g., a worker who has learned a great deal may still be "down the curve" despite a long break.

Pre-Break Value of b (bL)

The nature of the relationship between bR and bL is ambiguous. One might argue for a positive association reflecting the innate ability of the individual. On the other hand, the faster one learns, the further down the curve he or she will be, and by the arguments above, other things equal, the slower will be the rate of further learning. Because either or both of these effects are possible, we do not specify an expected sign on the coefficients for bL.

FIGURE 7.3 Expected relationship between bR and B.

Equations to Predict the Parameter bR

Using the same criteria specified earlier, we test the following PPMs for bR. The first candidate uses logs to show the expected diminishing marginal effects and includes all three independent variables plus a term that allows for interaction between B and L. Because signs of the coefficient on L are not specified, we do not specify expected signs for the coefficients of interaction terms.

bR = q0 + q1 ln B + q2 ln L + q3 (ln B × ln L) + q4 bL,    q1 < 0.    (7.10)

A slight modification, using the interaction term with L in its arithmetic form, which worked well in Bailey (1989), is:

bR = q0 + q1 ln B + q2 ln L + q3 (ln B × L) + q4 bL,    q1 < 0.    (7.11)

We also test several power forms that model the expected diminishing marginal effects. The first is:

bR/bL = q0 B^q1 L^q2,    q0 > 0, 0 < q1 < 1,    (7.12)

which in log form becomes linear:

ln(bR/bL) = ln q0 + q1 ln B + q2 ln L.    (7.12a)

An alternative construction is to treat bL as an independent variable, yielding:

bR + 1 = q0 B^q1 L^q2 (bL + 1)^q3,    q0 > 0, q1 < 0,    (7.13)

where the addition of the constant, +1, is necessary in order to take logs.
In log form, this becomes:

ln(bR + 1) = ln q0 + q1 ln B + q2 ln L + q3 ln(bL + 1).    (7.13a)

Finally, we include Shtub, Levin, and Globerson's (1993) equation (20):

bR = q0 + q1 bL + q2 a′R + q3 (a′R/aL) + q4 ln B,    (7.14)

where a′R is estimated using their equation, which is our Equation 7.8. Shtub et al. found that: q0 > 0; q1, q2, q3, q4 < 0.

METHOD

The PPMs described above were tested on 116 pairs of LCs and RLCs constructed from experimental data as reported in Bailey and McIntyre (1992) (61 subjects) and Bailey and McIntyre (1997) (55 subjects). The experimental task involved repeatedly assembling a product (a crane, a dump truck, or a radar station) from an erector set for a period of approximately four hours. After breaks of approximately two to five weeks, the subjects repeated the exercise. Assembly times to the nearest second were recorded by one of the authors or a research assistant. The average number of assembly repetitions was 9.9 before the break and 10.0 after the break. Subjects were undergraduate and graduate students of normal working age. They were paid a realistic wage using either a flat rate, a piece rate, or a fixed fee plus bonus. The average pay under all methods was approximately $5 per hour, and supplementary analysis indicated that the method of payment did not affect the results reported here. The two articles referred to above provide further descriptions of the experiments and report statistical data on the LCs and RLCs that were fit to the experimental data.

The data available to test PPMs were the parameters from the 116 LCs fit to the pre-break data and from the 116 RLCs fit to the post-break data, the length of the break (B) for each subject, and the amount learned (L) during the pre-break period. We compute both goodness-of-fit statistics and measures of predictive ability for the PPMs. The results are reported below.
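The log-linear PPMs above can be estimated by ordinary least squares after the log transformation. A minimal sketch on synthetic, noiseless data follows; the "true" coefficients and variable ranges are invented for illustration and are not the study's values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 116  # same sample size as the study; the data here are synthetic

# Hypothetical coefficients for ln aR = ln q0 + q1 ln aL + q2 ln L + q3 ln B (Eq. 7.7a)
ln_q0, q1, q2, q3 = 0.2, 0.9, -0.25, 0.15

aL = rng.uniform(20, 60, n)
L = rng.uniform(5, 40, n)
B = rng.uniform(10, 40, n)
ln_aR = ln_q0 + q1 * np.log(aL) + q2 * np.log(L) + q3 * np.log(B)  # noiseless

# OLS on the log-transformed equation recovers the coefficients.
X = np.column_stack([np.ones(n), np.log(aL), np.log(L), np.log(B)])
coef, *_ = np.linalg.lstsq(X, ln_aR, rcond=None)
```

With real data the fit would of course carry residual error; the point is only that the power-form PPM becomes a standard linear regression after taking logs.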
RESULTS

Regressions for aR and bR

Because the dependent variables differ across our PPMs for aR and bR, we make the comparisons consistent by computing an adjusted R2 value for the correlation between the RLC values of aR (bR) and the PPM estimates of aR (bR) for each PPM (see Kvålseth 1985). Table 7.1 summarizes the results of the PPMs used to estimate aR. All the models are highly statistically significant (F-statistic p-values ≤ 1.0E-16). All of the PPMs, except Equation 7.5, fit the data very well, especially when used with the Bailey-McIntyre model. Equation 7.9a, however, has the highest adjusted R2 for both equation forms.

TABLE 7.1
Fit and Significance Levels of Coefficients of PPMs for aR

Equation   Adjusted R2 (LL form)   Adjusted R2 (BM form)
7.5        0.539                   0.463
7.6        0.743                   0.887
7.7a       0.828                   0.902
7.8a       0.831                   0.913
7.9a       0.866                   0.921

Significance levels of the independent variables were 0.0000 throughout, except 0.0002 for ln(ET1) in Equation 7.9a with the Bailey-McIntyre form.
Note: Adjusted R2 is computed consistently across equations based on the arithmetic terms aR and estimated aR, using Kvålseth's (1985) recommended "Equation #1."

The details of the two regressions for Equation 7.9a appear in Table 7.2. The coefficients in Panel B reflect that, for the Bailey-McIntyre curve, aR = exp{−0.1243 + 0.2827 ln(ETm+1) + 0.0437 ln B + 0.0698 ln(ET1)}. The signs of the coefficients of the intercept and ln B are as predicted. As discussed in the "Equations to predict RLC parameter aR" section, the expected signs of q1 and q3 are somewhat ambiguous. The difference, q3 − q1, is negative for PPM Equation 7.9a applied to both forms of RLCs, which is consistent with a negative coefficient on L = ET1 − ETm+1 and supports our predicted inverse relationship between L and aR.
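Kvålseth's (1985) "Equation #1" is the ordinary coefficient of determination computed on the arithmetic (untransformed) values. A sketch follows; the degrees-of-freedom adjustment shown is the standard one, which is an assumption about the authors' exact adjustment procedure:

```python
def r_squared(actual, predicted):
    # Kvålseth's Equation 1: R^2 = 1 - SSE/SST on the arithmetic values.
    mean = sum(actual) / len(actual)
    sse = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    sst = sum((a - mean) ** 2 for a in actual)
    return 1.0 - sse / sst

def adjusted_r_squared(r2, n, k):
    # Standard adjustment for n observations and k regressors (excluding intercept).
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

# Toy illustration: near-perfect predictions give R^2 close to 1.
r2 = r_squared([2.0, 4.0, 6.0, 8.0], [2.1, 3.9, 6.2, 7.8])
adj = adjusted_r_squared(r2, n=116, k=3)
```

Computing R2 this way, rather than on the log scale of each regression, is what makes the fit statistics comparable across PPMs with different dependent variables.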
TABLE 7.2
Regression Results for PPM Equation 7.9a, for Both RLC Forms

Panel A: Log-log RLC
Variable      Coeff.     S.E.      t Stat.    p-value
Intercept     −0.3188    0.1374    −2.3196    0.0222
ln(ETm+1)      0.6359    0.0552    11.5223    0.0000
ln B           0.1638    0.0242     6.7720    0.0000
ln(ET1)        0.2967    0.0576     5.1471    0.0000

Panel B: Bailey-McIntyre RLC
Variable      Coeff.     S.E.      t Stat.    p-value
Intercept     −0.1243    0.0384    −3.2345    0.0016
ln(ETm+1)      0.2827    0.0189    14.9590    0.0000
ln B           0.0437    0.0067     6.4963    0.0000
ln(ET1)        0.0698    0.0184     3.8011    0.0002

The coefficients of Equation 7.9a for the log-log form of RLC (Table 7.2, Panel A) are directionally similar. They imply that the a parameter for the RLC is strongly related to the expected position on the original LC, ETm+1, but is higher as a function of elapsed break time and also higher as a function of the original starting time.

The results of the regressions to predict the relearning slope parameter, bR, are summarized in Table 7.3. Although the R2 values are lower than for aR, all of the models are highly statistically significant (F-statistic p-values ≤ 1.0E-7). Equation 7.12a, which uses the ratio of bR to bL, performs poorly; it assumes a fixed relationship between bR and bL that does not appear to be appropriate. The remaining equations (7.10, 7.11, 7.13a, and 7.14) are competitive alternatives. Equation 7.10, which expresses most simply and directly the relationships between the relearning rate and the length of break, the amount learned, and their interaction, produces the highest adjusted R2, so the variations introduced in Equations 7.11 and 7.13a are not helpful for our data. Most of the coefficients of Equation 7.14 are not significant because the regressions suffer from multicollinearity; a′R/aL is highly correlated with a′R, ln(B), and bL (R2 ranging from .60 to .80). The regression results for Equation 7.10, for both curve forms, appear in Table 7.4.
The negative coefficients for ln B are consistent with faster relearning after "regressing" up the curve. The negative coefficient on ln L suggests that, of the two possible effects of L on bR discussed previously, the second is the stronger. That is, a larger amount learned in the pre-break period indicates faster learning ability, which is also reflected in a faster relearning rate (i.e., a lower value of bR). Regression results for Equation 7.10 display some multicollinearity, especially between the interaction term and each of its separate components.* Despite the effects of this multicollinearity, which biases against finding significance, each of the independent variables in Equation 7.10 is significant at traditional levels when Equation 7.10 is used with the Bailey-McIntyre form of RLC. Furthermore, although multicollinearity limits the usefulness of the coefficients separately, it does not adversely affect predictions of the dependent variable made using the whole equation. Thus, despite the multicollinearity, we believe each of the independent variables should remain in Equation 7.10.
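Why an interaction term correlates strongly with its components can be demonstrated on synthetic data. The variable ranges below are loosely motivated by the study's breaks of two to five weeks but are otherwise invented:

```python
import math
import random

random.seed(1)
n = 116
lnB = [math.log(random.uniform(14, 35)) for _ in range(n)]  # breaks of 2-5 weeks, in days
lnL = [math.log(random.uniform(5, 40)) for _ in range(n)]   # illustrative amounts learned
inter = [b * l for b, l in zip(lnB, lnL)]                   # the ln B x ln L interaction

def corr(x, y):
    # Pearson correlation coefficient.
    m = len(x)
    mx, my = sum(x) / m, sum(y) / m
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

r_B = corr(lnB, inter)  # correlation of ln B with the interaction
r_L = corr(lnL, inter)  # correlation of ln L with the interaction
```

Even with independently drawn B and L, the interaction term is substantially correlated with each component (strongly so with the more variable one), which inflates coefficient standard errors without harming whole-equation predictions.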
An Example

As an example of how these PPMs may be used, suppose that a Bailey-McIntyre-form LC was fit to a worker's (or team's) initial production times for seven units, with the following results:

ln y = 3.6166 [ln(x + 1)]^(−0.2144)

* Correlation coefficients among independent variables: bL, ln(B), ln(B) × ln(L).

TABLE 7.3
Fit and Significance Levels of Coefficients of PPMs for bR

Significance levels of independent variables, and adjusted R2:

Equation  RLC Form  ln B     ln L     ln B×ln L  L×ln B   bL       a′R      a′R/aL   Adjusted R2
7.10      LL        0.0071   0.0643   0.1237     –        0.6738   –        –        0.3166
7.10      BM        0.0003   0.0514   0.0120     –        0.0000   –        –        0.3168
7.11      LL        0.0000   0.2591   –          0.6274   0.4147   –        –        0.3032
7.11      BM        0.0000   0.7690   –          0.1366   0.0207   –        –        0.2911
7.12a     LL        0.0000   0.0030   –          –        –        –        –        0.0824
7.12a     BM        0.0004   0.0000   –          –        –        –        –        0.1999
7.13a     LL        0.0000   0.1398   –          –        0.6634   –        –        0.3071
7.13a     BM        0.0000   0.0607   –          –        0.0001   –        –        0.2760
7.14      LL        0.0090   –        –          –        0.7839   0.4018   0.8129   0.2760
7.14      BM        0.0004   –        –          –        0.0001   0.1426   0.6475   0.2763

Note: Adjusted R2 is computed consistently based on the arithmetic terms bR and estimated bR, using Kvålseth's (1985) recommended "Equation 1."

TABLE 7.4
Regression Results for PPM Equation 7.10, for Both RLC Forms

Panel A: Log-log curve
Variable       Coefficient   S.E.      t Stat    p-value
Intercept       0.2518       0.1268     1.9861   0.0495
ln(B)          −0.0995       0.0363    −2.7430   0.0071
ln(L)          −0.0870       0.0465    −1.8687   0.0643
ln(B) × ln(L)   0.0196       0.0126     1.5510   0.1237
bL              0.0351       0.0832     0.4221   0.6738

Panel B: Bailey-McIntyre curve
Variable       Coefficient   S.E.      t Stat    p-value
Intercept       0.2926       0.1156     2.5307   0.0128
ln B           −0.1242       0.0332    −3.7447   0.0003
ln L           −0.0821       0.0417    −1.9690   0.0514
ln B × ln L     0.0295       0.0115     2.5527   0.0120
bL              0.3215       0.0757     4.2497   0.0000

Management wishes to estimate production time following a break of 56 days. To use Equations 7.9a and 7.10, the following data are required:

aL = 3.6166;
bL = −0.2144;
B = planned break = 56 days;
ET1 = exp(3.6166 (ln 2)^(−0.2144)) = 50.0 hours;
ETm+1 = ET8 = exp(3.6166 (ln 9)^(−0.2144)) = 21.2 hours;
L = ET1 − ET8 = 28.8 hours.
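The quantities just listed, and the PPM estimates they feed into, can be verified with a short script; the regression coefficients are taken from Panel B of Tables 7.2 and 7.4:

```python
import math

aL, bL, B = 3.6166, -0.2144, 56

def bm_time(a, b, x):
    # Bailey-McIntyre form: ln y = a * [ln(x + 1)]**b, so y = exp(a * ln(x+1)**b).
    return math.exp(a * math.log(x + 1) ** b)

ET1 = bm_time(aL, bL, 1)   # estimated time for unit 1
ET8 = bm_time(aL, bL, 8)   # ET_{m+1} with m = 7 pre-break units
L = ET1 - ET8              # amount learned before the break

# PPM Equation 7.9a with Table 7.2, Panel B coefficients:
ln_aR = (-0.1243 + 0.2827 * math.log(ET8)
         + 0.0437 * math.log(B) + 0.0698 * math.log(ET1))
aR = math.exp(ln_aR)

# PPM Equation 7.10 with Table 7.4, Panel B coefficients:
bR = (0.2926 - 0.1242 * math.log(B) - 0.0821 * math.log(L)
      + 0.0295 * math.log(B) * math.log(L) + 0.3215 * bL)

# Estimated post-break marginal hours from the RLC ln y = aR * [ln(x + 1)]**bR:
post_break = [bm_time(aR, bR, x) for x in range(1, 5)]
```

Running this reproduces the example's intermediate values (ET1 = 50.0, ET8 = 21.2, L = 28.8) and the estimated RLC parameters to within rounding.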
Using regression results for Equations 7.9a and 7.10 applied to the Bailey-McIntyre curve, as reported in Panel B of Tables 7.2 and 7.4, respectively, we obtain:

ln aR = −0.1243 + 0.2827 ln 21.2 + 0.0437 ln 56 + 0.0698 ln 50 = 1.188, or âR = 3.2806,

and

bR = 0.2926 − 0.1242 ln 56 − 0.0821 ln 28.8 + 0.0295 (ln 56)(ln 28.8) + 0.3215(−0.2144) = −0.1531.

The estimated RLC after a 56-day break is therefore:

ln y = 3.2806 [ln(x + 1)]^(−0.1531)

Substituting x = 1, 2, 3, 4, and so on, into the above equation yields estimated marginal post-break production hours of 32.1, 25.4, 22.7, 21.1, and so forth. In addition to providing early estimates of post-break production times, these equations can be used to estimate the costs (in terms of increased production times) of breaks of various lengths.

Predictive Ability

Tests of predictive ability are an important evaluation method for models that will be used for prediction. We test the predictive ability of our PPMs in two ways. First, we take the best combination of our PPMs and use them to predict RLC parameters, and then use these RLCs to predict post-break marginal times. We compare these predictions to the actual post-break times and compute the mean absolute percentage error (MAPE).* While this comparison provides a test of the ability of the PPMs to predict useful RLC parameters, it is not a test of predictive ability in the strictest sense, because the tests use the same post-break data that were used to determine the PPM parameter values. Therefore, we conduct a second test of predictive ability using a hold-out sample. The results of these two tests are reported below.

First Test of Predictive Ability

Based on adjusted R2 values, the best combination of PPMs is Equations 7.9a and 7.10. Using these two PPMs, we constructed the two forms of RLCs for each subject and used each RLC to predict post-break marginal times.
The results were an average overall MAPE of 12.51% for the log-log RLC and 11.59% for the Bailey-McIntyre RLC. Because the best-fitting model is not always the most accurate predictor (e.g., Lorek et al. 1983), we also compared the MAPEs using the next-best combination of PPMs (Equations 7.8a and 7.11). Table 7.5 reports the MAPEs for all combinations of these four PPMs.

TABLE 7.5
Comparisons of MAPEs for Combinations of PPMs

              Log-Log Curve          Bailey-McIntyre Curve
              PPM for aR             PPM for aR
PPM for bR    7.8a      7.9a         7.8a      7.9a
7.10          14.12     12.51        11.86     11.59
7.11          14.10     12.70        11.84     11.61

Note: MAPE = 100 × average(|projected time − actual time|/actual time).

For both types of RLCs, the MAPEs are lower for the 7.9a-7.10 combination than for any other combination of PPMs. For the log-log RLC, the MAPEs of all three alternative combinations are significantly higher than the 7.9a-7.10 combination at p < .001 in paired t-tests. The Bailey-McIntyre RLC appears more robust to the choice of PPMs, with only the 7.8a-7.10 and 7.8a-7.11 combinations having significantly higher MAPEs than the 7.9a-7.10 combination (with both p values ≈ .01). Overall, this finding that the best-predicting model is the same as the best-fitting model is consistent with Bailey and McIntyre (1997).

Next, we compare the MAPEs computed above for the combination of 7.9a and 7.10 to MAPEs from two alternative approaches: (1) a "start-anew" RLC using the available relearning data to forecast future relearning times, and (2) a backing-up procedure in which we use the forecasted first relearning time from our "best" Equation 7.9a to establish the nearest corresponding iteration on the original log-linear LC, and then use that original curve to forecast future relearning times, in accordance with Anderlohr (1969). Figure 7.4 provides this comparison of MAPEs.
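The MAPE defined in the note to Table 7.5 is straightforward to implement; the sample values below are invented for illustration:

```python
def mape(projected, actual):
    # MAPE = 100 * average(|projected - actual| / actual), per the Table 7.5 note.
    errs = [abs(p - a) / a for p, a in zip(projected, actual)]
    return 100.0 * sum(errs) / len(errs)

# Illustrative call with made-up projected and actual unit times (hours):
example = mape([32.1, 25.4, 22.7], [30.0, 26.0, 24.0])
```

Because each error is scaled by its own actual time, MAPE is comparable across subjects whose production times differ in level, which is what allows averaging it over all 116 curves.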
The percentage on the y axis is the MAPE for predicting all future performance times after having completed the number of relearning iterations shown on the x axis. Because an RLC requires a minimum of two data points, the start-anew RLC begins with the second iteration. The MAPE for a two-unit RLC is based on its forecast errors for units 3 to n; the MAPE for a three-unit RLC is based on its forecast errors for units 4 to n, and so on. The RLCs constructed from PPM-based forecasts, however, as well as the backing-up approach using forecasted time on resumption, can begin with zero relearning observations, at which point they can predict units 1, 2, 3, and so on. The PPM-based forecasts exhibit fairly stable MAPEs across iterations. The start-anew RLC forecasts are based on the RLC that performed best as reported by Bailey and McIntyre (1997); that is, a curve in the form of Equation 7.2 fitted to the relearning data.

FIGURE 7.4 MAPEs for PPMs vs. back-up RLC and start-anew RLC. (Series plotted: start-anew RLC (BM); all-data PPM & BM RLC; all-data PPM & LL; back-up LL curve. x axis: relearning iterations; y axis: MAPE, 10%-30%.)

Based on MAPEs for predicting times 1 through n (i.e., after zero relearning iterations), the PPMs performed better when using the Bailey-McIntyre model (MAPE = 11.59%) than when using the traditional log-log model (MAPE = 12.51%), paired t = 7.11, p < .0001. The advantage continues at a similar level for predicting 2 through n, 3 through n, and so forth, as more data become available. The performance of the start-anew RLC depends on the number of new data observations available to fit it. After two observations, both PPM-based curves are markedly better than the new RLC. After three units, both PPM-based curves retain a clear advantage (p < .0001 in a two-tailed paired t-test); after four units, the difference is still significant only for the Bailey-McIntyre curve (p = .025).
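The iteration-by-iteration evaluation scheme described above can be sketched as follows. The relearning times are synthetic, and the refit uses a simple log-log RLC for brevity rather than the Equation 7.2 form:

```python
import math

def fit_loglog(times):
    # OLS fit of ln y = ln a + b ln x to the observed relearning units 1..k.
    xs = [math.log(i) for i in range(1, len(times) + 1)]
    ys = [math.log(t) for t in times]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = math.exp(ybar - b * xbar)
    return a, b

# Synthetic post-break times for units 1..10 (illustrative, not the study's data):
actual = [32.0, 26.1, 23.4, 21.0, 19.9, 18.8, 18.1, 17.5, 17.0, 16.6]

# After k observed iterations (k >= 2), refit the RLC and score it on units k+1..n.
mapes = {}
for k in range(2, len(actual)):
    a, b = fit_loglog(actual[:k])
    preds = [a * x ** b for x in range(k + 1, len(actual) + 1)]
    errs = [abs(p - t) / t for p, t in zip(preds, actual[k:])]
    mapes[k] = 100.0 * sum(errs) / len(errs)
```

This mirrors the figure's x axis: each entry of `mapes` is the forecast error after a given number of relearning iterations, so the start-anew curve only enters the comparison from k = 2 onward.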
After five observations, the start-anew curve is significantly better than the log-log-based PPM curve (p = .005) but not the Bailey-McIntyre-based curve. After six units, the advantage has shifted marginally in favor of the newly fitted curve (two-tailed p = .065, or one-tailed p = .033, assuming the direction should be in favor of the new RLC). Finally, the backing-up approach produces poor results, because the original LC slope does not reflect the more rapid improvement during relearning. Given that the relearning slope will be steeper, we also tried a naïve model in which all subjects' bL parameters were multiplied by a constant factor k > 1. Optimizing k for this data set (k ≈ 1.15) to minimize overall MAPE reduced the MAPE for this forecasting approach from about 18% to about 15%, still substantially worse than the PPM-based results. Although we label this model a naïve model, by using the value of k that minimizes the overall MAPE, the model uses data that are computed after the predictions have been made, giving it an advantage over any actual forecasting model. Even with this advantage, it does not predict as well as our PPM-based RLCs.

Second Test of Predictive Ability

Our second test of predictive ability involves a hold-out sample, so that our predictions are for post-break times that were not used in the computation of PPM parameters. We randomly selected 20 subjects from our 116, fit curves 7.9a and 7.10 to their data, used the resulting PPMs to specify RLCs, and forecast post-break times for the remaining 96 subjects. The use of a much smaller estimation sample to test predictive ability for the hold-out sample represents a rigorous test of predictive ability, despite the fact that the forms of the PPMs were derived from the full sample. The results of these tests appear in Figure 7.5. This figure also shows, for comparison, the MAPEs from the two PPM-based RLCs and the start-anew RLC from Figure 7.4.
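The hold-out design can be sketched end-to-end on synthetic data. For brevity this uses a one-variable PPM (ln aR regressed on ln aL), which is a simplification of Equation 7.9a; all parameter values are invented:

```python
import math
import random

random.seed(7)

# Synthetic subjects: ln aR = 0.3 + 0.85 ln aL + small noise (illustrative only).
subjects = []
for _ in range(116):
    aL = random.uniform(20, 60)
    aR = math.exp(0.3 + 0.85 * math.log(aL) + random.gauss(0, 0.05))
    subjects.append((aL, aR))

random.shuffle(subjects)
estimation, holdout = subjects[:20], subjects[20:]   # 20 to fit, 96 to predict

# Fit ln aR = c0 + c1 ln aL on the 20-subject estimation sample (simple OLS).
xs = [math.log(aL) for aL, _ in estimation]
ys = [math.log(aR) for _, aR in estimation]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
c1 = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
      / sum((x - xbar) ** 2 for x in xs))
c0 = ybar - c1 * xbar

# Predict aR for the 96 hold-out subjects and compute the MAPE.
preds = [math.exp(c0 + c1 * math.log(aL)) for aL, _ in holdout]
mape = 100.0 * sum(abs(p - aR) / aR for p, (_, aR) in zip(preds, holdout)) / len(holdout)
```

Because the hold-out subjects play no role in estimating c0 and c1, the resulting MAPE is a genuine out-of-sample error, which is the point of the second test.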
The average MAPEs of the 96 Bailey-McIntyre RLCs, using parameter values derived from the 20-observation PPMs, are very close to the average MAPEs of both the start-anew RLCs and the RLCs that used parameters from PPMs developed using all 116 subjects. Much higher average MAPEs are evident for the 96 log-log RLCs developed from the 20-observation PPMs. Note that in Table 7.3 the coefficients for Equation 7.10 developed using the log-log form of LC are not significant at the .05 level, except for ln B, whereas the coefficients developed using the Bailey-McIntyre LC are all significant at levels of .00 to .05. Thus, it is not surprising that the predictive ability of the log-log curve suffers much more from the reduction in sample size.

FIGURE 7.5 MAPEs for 20-observation PPMs vs. new RLCs. (Series plotted: start-anew; all-data PPM & BM; 20-obs PPM & LL; 20-obs PPM & BM; all-data PPM & LL. x axis: relearning iterations; y axis: MAPE, 10%-30%.)

The robustness of forecasts using the Bailey-McIntyre model when using a smaller sample indicates that users could derive useful PPM-based forecasts from a relatively small number of experiences. Further, for manual assembly tasks similar to the ones used in our experiments, the parameter values in Tables 7.2 and 7.4 represent potential starting points for users who wish to employ the PPMs of Equations 7.9a and 7.10 with either the Wright or Bailey-McIntyre form of RLC.

DISCUSSION, CONCLUSIONS, AND IMPLICATIONS

PPMs emerge from this study as good predictors of RLC parameters and, subsequently, as good predictors of future performance during the relearning phase. The PPM for the aR parameter is surprisingly good. It is better for the Bailey-McIntyre curve than for the log-log curve, probably because the former was developed using this type of production assembly data, but also because it is better able to model relearning.
Past research indicates that the progress of learning is quite predictable, given that the model captures the nature of the underlying learning process. Unexplained variations will certainly occur from one production time to the next, but since the a parameter captures the "average" level of performance for the learning (or relearning) phase, it is plausible that aR and aL would be as closely related as the PPMs indicate. Although the relearning rate (bR) is less predictable, the best PPM shows that it is related to the original learning rate and the length of the break. The relatively low explanatory power of this PPM indicates potential for further research; however, Equation 7.10 provides a better fit to the data than the four other equations that we tested, including the equation suggested by Shtub et al. (1993). Thus, our PPM provides an incremental benefit relative to these other models, and could serve as a benchmark for future research. Furthermore, the RLCs derived using our best PPMs for both aR and bR show good predictive ability, indicating that they have the potential to improve predictive accuracy of post-break production time. Since the purpose of PPMs is ultimately to provide useful estimates of post-break times, our best PPMs meet this objective despite the differences in the R2 values of the two models. In contrast to other approaches, PPMs provide a basis for estimating post-break production times well before post-break data become available. At some point after production resumes and observations of relearning data become available, a curve fitted to the new data will prevail. Our research suggests that even when several data points are available (about five for our data), the PPM-based estimates are more accurate, and they continue to be competitive after that point.
Additionally, the predicted parameters would provide a benchmark against which users could evaluate the early results of the relearning process. Specifically, users may be able to apply judgment, supported by the PPM estimates, to avoid projecting unrealistic trends (Bailey and Gupta 1999). Since our PPM parameters were estimated from a common mechanical-assembly task setting, the parameters of our PPMs might be directly applicable to similar settings, where users do not have sufficient data to develop their own PPMs. This can only be determined empirically as the research progresses, but experience in applying the method could lead to tabulated PPM coefficients similar to the tabulated learning rates currently available for over 40 industrial tasks (Konz and Johnson 2000). Our study tested PPMs after only one interruption. The success of our PPMs for repeated asynchronous interruptions is an empirical question for which we have no direct evidence; however, to the extent that the forgetting-relearning process remains relatively stable, we would expect the models to retain their usefulness. An interesting question for future research is whether, in repeated applications, estimates from the LC that are used as independent variables in the PPMs should come from the original LC or from the immediately preceding RLC. Future research should address the consistency of the PPMs across tasks. That is, can the parameters estimated for our PPMs be relied on for a variety of tasks? Bohlen and Barany (1976) identified seven characteristics of operators (workers) and five characteristics of operations (tasks) that they used to predict the parameters of LCs for new bench assembly operations. These variables might also moderate the prediction of RLC parameters. The meta-analytical review by Arthur et al. (1998) is an important resource related to future research on skill decay, which relates to the aR parameter. 
Undoubtedly, further progress is possible in refining the PPMs, particularly for the learning rate. One additional possibility is to incorporate actual relearning observations into the PPMs as such observations become available (Towill 1973).

REFERENCES

Adler, G.L., and Nanda, R., 1974. The effects of learning on optimal lot size determination – Single product case. AIIE Transactions 6(1): 14–20.
Anderlohr, G., 1969. What production breaks cost. Industrial Engineering 20(9): 34–36.
Argote, L., Beckman, S.L., and Epple, D., 1990. The persistence and transfer of learning in industrial settings. Management Science 36(2): 140–154.
Arthur, W., Jr., Bennett, W., Jr., Stanush, P.L., and McNelly, T.L., 1998. Factors that influence skill decay and retention: A quantitative review and analysis. Human Performance 11(1): 57–101.
Arzi, Y., and Shtub, A., 1997. Learning and forgetting in mental and mechanical tasks: A comparative study. IIE Transactions 29(9): 759–768.
Bailey, C.D., 1989. Forgetting and the learning curve: A laboratory study. Management Science 35(3): 340–352.
Bailey, C.D., 2000. Learning-curve estimation of production costs and labor hours using a free Excel plug-in. Management Accounting Quarterly 1(4): 25–31.
Bailey, C.D., and Gupta, S., 1999. Judgment in learning-curve forecasting: A laboratory study. Journal of Forecasting 18(1): 39–57.
Bailey, C.D., and McIntyre, E.V., 1992. Some evidence on the nature of relearning curves. The Accounting Review 67(2): 368–378.
Bailey, C.D., and McIntyre, E.V., 1997. The relation between fit and prediction for alternative forms of learning curves and relearning curves. IIE Transactions 29(6): 487–495.
Benkard, C.L., 2000. Learning and forgetting: The dynamics of aircraft production. The American Economic Review 90(4): 1034–1054.
Bohlen, G.A., and Barany, J.W., 1976. A learning curve prediction model for operators performing industrial bench assembly operations.
International Journal of Production Research 14(2): 295–303.
Carlson, J.G., and Rowe, A.J., 1976. How much does forgetting cost? Industrial Engineering 8(9): 40–47.
Chen, J.T., and Manes, R.P., 1985. Distinguishing the two forms of the constant percentage learning curve model. Contemporary Accounting Research 1(2): 242–252.
Cherrington, J.E., Lippert, S., and Towill, D.R., 1987. The effect of prior experience on learning curve parameters. International Journal of Production Research 25(3): 399–411.
Conway, R., and Schultz, A., 1959. The manufacturing progress function. Journal of Industrial Engineering 10(1): 39–53.
Dar-El, E.M., Ayas, K., and Gilad, I., 1995. Predicting performance times for long cycle time tasks. IIE Transactions 27(3): 272–281.
Farr, M.J., 1987. The long-term retention of knowledge and skills. New York: Springer-Verlag.
Globerson, S., and Levin, N., 1987. Incorporating forgetting into learning curves. International Journal of Operations & Production Management 7(4): 80–94.
Globerson, S., Levin, N., and Shtub, A., 1989. The impact of breaks on forgetting when performing a repetitive task. IIE Transactions 10(3): 376–381.
Globerson, S., Nahumi, A., and Ellis, S., 1998. Rate of forgetting for motor and cognitive tasks. International Journal of Cognitive Ergonomics 2(3): 181–191.
Hancock, W.M., 1967. The prediction of learning rates for manual operations. The Journal of Industrial Engineering 18(1): 42–47.
Jaber, M.Y., and Bonney, M., 1997. A comparative study of learning curves with forgetting. Applied Mathematical Modelling 21(8): 523–531.
Konz, S., and Johnson, S., 2000. Work design: Industrial ergonomics, 5th ed., Scottsdale: Holcomb Hathaway.
Kvålseth, T.O., 1985. Cautionary note about R2. The American Statistician 39(4): 279–285.
Lorek, K.S., Icerman, J.D., and Abdulkader, A.A., 1983. Further descriptive and predictive evidence on alternative time-series models for quarterly earnings. Journal of Accounting Research 21(1): 317–328.
Mazur, J.E., and Hastie, R., 1978. Learning as accumulation: A reexamination of the learning curve. Psychological Bulletin 85(6): 1256–1274.
Nembhard, D.A., 2000. The effects of task complexity and experience on learning and forgetting: A field study. Human Factors 42(2): 272–286.
Nembhard, D.A., and Uzumeri, M.V., 2000. Experiential learning and forgetting for manual and cognitive tasks. International Journal of Industrial Ergonomics 25(4): 315–326.
Newell, A., and Rosenbloom, P.S., 1981. Mechanisms of skill acquisition and the law of practice. In Cognitive skills and their acquisition, ed. J.R. Anderson, pp. 1–55. Hillsdale: Lawrence Erlbaum.

Using Parameter Prediction Models to Forecast Post-Interruption Learning

Shtub, A., Levin, N., and Globerson, S., 1993. Learning and forgetting industrial tasks: An experimental model. International Journal of Human Factors in Manufacturing 3(3): 293–305.
Smith, J., 1989. Learning curve for cost control. Norcross: Industrial Engineering and Management Press.
Snoddy, G.S., 1926. Learning and stability. Journal of Applied Psychology 10(1): 1–36.
Swezey, R.W., and Llaneras, R.E., 1997. Models in training and instruction. In Handbook of human factors and ergonomics, 2nd ed., ed. G. Salvendy, pp. 514–577. New York: Wiley.
Teplitz, C.J., 1991. The learning curve deskbook: A reference guide to theory, calculations, and applications. New York: Quorum Books.
Towill, D.R., 1973. An industrial dynamics model for start-up management. IEEE Transactions on Engineering Management EM-20(2): 44–51.
Uzumeri, M., and Nembhard, D., 1998. A population of learners: A new way to measure organizational learning. Journal of Operations Management 16: 515–528.
Wickelgren, W.A., 1972. Trace resistance and the decay of long-term memory. Journal of Mathematical Psychology 9(4): 418–455.
Wright, T.P., 1936. Factors affecting the cost of airplanes. Journal of the Aeronautical Sciences 3(2): 122–128.
Yelle, L.E., 1979.
The learning curve: Historical review and comprehensive survey. Decision Sciences 10(2): 302–328.
Zangwill, W.I., and Kantor, P.B., 1998. Toward a theory of continuous improvement and the learning curve. Management Science 44(7): 910–920.

8 Introduction to Half-Life Theory of Learning Curves
Adedeji B. Badiru

CONTENTS
Introduction
Literature on Learning Curves
Analysis of System Performance and Resilience
Half-Life Property of Natural Substances
Half-Life Application to Learning Curves
Derivation of Half-Life of the Log-Linear Model
Computational Examples
Cost Expressions for the Log-Linear Model
Alternate Formulation of the Log-Linear Model
Half-Life Analysis of Selected Classical Models
  S-curve Model
  Stanford-B Model
  Derivation of Half-Life for Badiru’s Multi-Factor Model
  DeJong’s Learning Formula
  Levy’s Adaptation Function
  Glover’s Learning Model
  Pegels’ Exponential Function
  Knecht’s Upturn Model
  Yelle’s Combined Product Learning Curve
Contemporary Learning–Forgetting Curves
  Jaber–Bonney Learn–Forget Curve Model (LFCM)
  Nembhard–Uzumeri Recency (RC) Model
  Sikström–Jaber Power Integration Diffusion Model
Potential Half-Life Application to Hyperbolic Decline Curves
Conclusions
References

INTRODUCTION

The military is very much interested in training troops quickly, thoroughly, and effectively. Team training is particularly important as a systems approach to enhancing military readiness. Thus, the prediction of team performance is of great importance in any military system. In military training systems that are subject to the variability and complexity of interfaces, the advance prediction of performance is useful for designing training programs for efficient knowledge acquisition and the sustainable retention of skills. Organizations invest in people, work processes, and technology for the purpose of achieving increased and enhanced production capability.
The systems nature of such an investment strategy requires that the investment is a carefully planned activity stretching over multiple years. Learning curve analysis is one method through which system enhancement can be achieved in terms of cost, time, and performance vis-à-vis the strategic investment of funds and other assets. The predictive capability of learning curves is helpful in planning for system performance enhancement and resilience. Formal analysis of learning curves first emerged in the mid-1930s in connection with the analysis of the production of airplanes (Wright 1936). Learning refers to the improved operational efficiency and cost reduction obtained from the repetition of a task, and it has a significant impact on training and the design of work. Workers learn and improve by repeating operations. Thus, a system’s performance and resilience depend on the learning characteristics of its components, with workers being a major component of the system. Learning is time dependent and externally controllable. The antithesis of learning is forgetting. Thus, just as a learning curve leads to increasing performance through cost reduction, forgetting tends to diminish performance. Considering the diminishing impact of forgetting, the half-life measure is of interest for assessing the resilience and sustainability of a system. Deriving the half-life equations of learning curves can reveal more about the properties of the various curves that have been reported in the literature. This chapter, which is based on Badiru and Ijaduola (2009), presents the half-life derivations for some of the classical learning curve models available in the literature. Several research and application studies have confirmed that human performance improves with reinforcement or with frequent and consistent repetitions. Badiru (1992, 1994) provides a computational survey of learning curves as well as their industrial application to productivity and performance analysis.
Reductions in operation processing times achieved through learning curves can translate directly to cost savings. The wealth of literature on learning curves shows that they are referred to by several names, including progress function, cost-quantity relationship, cost curve, production acceleration curve, performance curve, efficiency curve, improvement curve, and learning function. In all of these different perspectives, a primary interest is whether or not a level of learning, once achieved, can be sustained. The sustainability of a learning curve is influenced by several factors, such as natural degradation, forgetting, and reduction due to work interruption. Thus, it is of interest to predict the future state and behavior of learning curves. In systems planning and control, the prediction of performance is useful for determining the line of corrective action that should be taken. Learning curves are used extensively in business, science, technology, engineering, and industry to predict performance over time, and their behavior has therefore attracted considerable interest over the past several decades. This chapter introduces the concept of the half-life analysis of learning curves as a predictive measure of system performance. Half-life is the amount of time it takes for a quantity to diminish to half its original size through natural processes. The quantity of interest may be cost, time, performance, skill, throughput, or productivity. Duality is of natural interest in many real-world processes. We often speak of “twice as much” and “half as much” as benchmarks for process analysis. In economic and financial principles, the “rule of 72” refers to the length of time required for an investment to double in value. These common “double” or “half” concepts provide the motivation for the proposed half-life analysis. The usual application of half-life analysis is in the natural sciences.
For example, in physics, the half-life is a measure of the stability of a radioactive substance. In practical terms, the half-life attribute of a substance is the time it takes for one-half of the atoms in an initial magnitude to disintegrate. The longer the half-life of a substance, the more stable it is. This provides a good analogy for modeling learning curves with the aim of increasing performance or decreasing cost with respect to the passage of time. This approach provides another perspective to the body of literature on learning curves. It has application not only in the traditional production environment, but also in functions such as system maintenance, safety, security skills, marketing effectiveness, sports skills, cognitive skills, and resilience engineering. The positive impacts of learning curves can be assessed in terms of cost improvement, the reduction in production time, or the increase in throughput time. The adverse impacts of forgetting can be assessed in terms of declining performance. We propose the following formal definitions:

For learning curves: Half-life is the production level required to reduce the cumulative average cost per unit to half its original size.
For forgetting curves: Half-life is the amount of time it takes for a performance to decline to half its original magnitude.

LITERATURE ON LEARNING CURVES

Although there is an extensive collection of classical studies of the improvement due to learning curves, only very limited attention has been paid to performance degradation due to the impact of forgetting. Some of the classical works on process improvement due to learning include Smith (1989), Belkaoui (1976, 1986), Nanda (1979), Pegels (1969), Richardson (1978), Towill and Kaloo (1978), Womer (1979, 1981, 1984), Womer and Gulledge (1983), Camm et al. (1987), Liao (1979), McIntyre (1977), Smunt (1986), Sule (1978), and Yelle (1979, 1980, 1983).
It is only in recent years that the recognition of “forgetting” curves has begun to emerge, as can be seen in the more recent literature: Badiru (1995), Jaber and Sikström (2004), Jaber et al. (2003), Jaber and Bonney (2003, 2007), and Jaber and Guiffrida (2008). The new and emerging research on the forgetting components of learning curves provides the motivation for studying the half-life properties of learning curves. Performance decay can occur due to several factors, including a lack of training, a reduced retention of skills, lapses in performance, extended breaks in practice, and natural degradation.

ANALYSIS OF SYSTEM PERFORMANCE AND RESILIENCE

Resilience engineering is an emerging area of systems analysis that relates to the collection of activities designed to develop the ability of a system (or organization) to continue operating under extremely adverse conditions such as a “shock” or an attack. Thus, a system’s resilience is indicative of the system’s level of performance under shock. If the learning characteristic of the system is stable and retainable, then the system is said to be very resilient. The ability to predict a system’s performance using learning curve analysis provides an additional avenue to develop corrective strategies for managing a system. For example, suppose that we are interested in how fast an IT system responds to service requests from clients. We can model the system’s performance in terms of its response time with respect to the passage of time. In this case, it is reasonable to expect the system to improve over time because of the positive impact of the learning curve of the IT workers. Figure 8.1 shows a graphical representation of the response time as a function of time. The response time decreases as time progresses, thus indicating increasing levels of performance.
The shorter the response time, the more resilient we can expect the system to be in the event of an attack or shock to the system. Typical learning curves measure cost or time reduction, but the reduction can be translated to, and represented as, performance improvement. Consequently, computing the half-life of the system can be used to measure how long it will take the system’s response time to reduce to half its starting value. Figure 8.2 shows a generic profile for the case where the performance metric (e.g., number of requests completed per unit time) increases with respect to the passage of time.

FIGURE 8.1 Representation of system response time with respect to passage of time (system performance curve P(t), with levels P0 and P1 marking the half-life point).

FIGURE 8.2 System performance growth curve (system throughput versus time).

HALF-LIFE PROPERTY OF NATURAL SUBSTANCES

The half-life concept of learning curves measures the amount of time that it takes for performance to degrade by half. Degradation of performance occurs through both natural and imposed processes. The idea of using the half-life approach comes from physics, where half-life is a measure of the stability of a radioactive substance. The longer the half-life of a substance, the more stable it is. By analogy, the longer the half-life of a learning curve model, the more sustainable the fidelity of the learning curve effect. If learning is not very sustainable, then the system will be more vulnerable to the impact of learning curve decline brought on by such random events as system interruptions. To appreciate the impact of half-life computations, consider an engineering reactor that converts the relatively stable uranium 238 into the isotope plutonium 239. After 15 years, it is determined that 0.043% of the initial amount A0 of the plutonium has disintegrated. We are interested in determining the half-life of the isotope.
In physics, the initial value problem is stated as:

dA/dt = kA, with A(0) = A0.

This has a general solution of the form:

A(t) = A0 e^(kt).

If 0.043% of the atoms in A0 have disintegrated, then 99.957% of the substance remains. To find k, we solve:

α A0 = A0 e^(15k),

where α is the remaining fraction of the substance. With α = 0.99957, we obtain k = −0.00002867. Thus, for any time t, the amount of plutonium isotope remaining is represented as:

A(t) = A0 e^(−0.00002867t).

This has a general decay profile similar to the plot of P(t) in Figure 8.1. Now we can compute the half-life as the corresponding value of t for which A(t) = A0/2. That is:

A0/2 = A0 e^(−0.00002867t),

which yields a t (half-life) value of 24,180 years. With this general knowledge of the half-life, several computational analyses can be done to predict the behavior and magnitude of the substance over time. The following examples further illustrate the utility of half-life computations. Let us consider a radioactive nuclide that has a half-life of 30 years. Suppose that we are interested in computing the fraction of an initially pure sample of this nuclide that will remain undecayed at the end of a time period of, for example, 90 years. From the equation of half-life, we can solve for k:

A0/2 = A0 e^(−k t_half),  so  k = (ln 2)/t_half,

which yields k = 0.0231049. Now, we can use this value of k to obtain the fraction we are interested in computing. That is:

A/A0 = e^(−(0.0231049)(90)) = 0.125.

As another example, let us consider a radioactive isotope with a half-life of 140 days. We can compute the number of days it would take for the sample to decay to one-seventh of its initial magnitude. Thus:

A0/2 = A0 e^(−k t_half),  so  k = (ln 2)/t_half,

which yields k = 0.004951. Now, using the value of k obtained above, we need to find the time for A = (1/7) A0. That is:

(1/7) A0 = A0 e^(−kt),  which gives  t = (ln 7)/k = 393 days.
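The three decay calculations above can be reproduced with a few lines of standard-library Python (a quick numeric sketch using the same constants):

```python
import math

# Plutonium example: 0.043% decays in 15 years, so 99.957% remains.
k = math.log(0.99957) / 15            # decay constant (negative)
half_life = math.log(2) / -k
print(round(half_life))               # ≈ 24,180 years (to rounding)

# 30-year nuclide: fraction left after 90 years (three half-lives).
k30 = math.log(2) / 30                # ≈ 0.0231049
print(math.exp(-k30 * 90))            # ≈ 0.125

# 140-day isotope: days to decay to one-seventh of the initial amount.
k140 = math.log(2) / 140              # ≈ 0.004951
print(round(math.log(7) / k140))      # ≈ 393 days
```

The small spread around 24,180 in the first result comes only from how far k is rounded before the final division.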
For learning curves, analogous computations can be used to predict the future system performance level and to conduct a diagnostic assessment of previous performance levels given a present observed performance level. Since there are many alternate models of learning curves, each one can be analyzed to determine its half-life. Thus, a comparative analysis of the different models can be conducted. This general mathematical approach can become the de facto approach for the computational testing of learning curve models.

HALF-LIFE APPLICATION TO LEARNING CURVES

Learning curves present the relationship between cost (or time) and the level of activity on the basis of the effect of learning. An early study by Wright (1936) disclosed the “80% learning” effect, which indicates that a given operation is subject to a 20% productivity improvement each time the activity level or production volume doubles. The proposed half-life approach is the antithesis of this double-level milestone. A learning curve can serve as a predictive tool for obtaining time estimates for tasks that are repeated within a project’s life cycle. A new learning curve does not necessarily commence each time a new operation is started, since workers can sometimes transfer previous skills to new operations. The point at which the learning curve begins to flatten depends on the degree of similarity of the new operation to previously performed operations. Typical learning rates that have been encountered in practice range from 70% to 95%. Several alternate models of learning curves have been presented in the literature. Some of the classical models are:

• Log-linear model
• S-curve model
• Stanford-B model
• DeJong’s learning formula
• Levy’s adaptation function
• Glover’s learning formula
• Pegels’ exponential function
• Knecht’s upturn model
• Yelle’s product model

The basic log-linear model is the most popular learning curve model.
It expresses a dependent variable (e.g., production cost) in terms of some independent variable (e.g., cumulative production). The model states that the improvement in productivity is constant (i.e., it has a constant slope) as output increases. That is:

C(x) = C1 x^(−b),

or

log C(x) = −b (log x) + log C1,

where:
C(x) = cumulative average cost of producing x units
C1 = cost of the first unit
x = cumulative production unit
b = learning curve exponent.

Notice that the expression for C(x) is practical only for x > 0. This makes sense because the learning effect cannot realistically kick in until at least one unit (x ≥ 1) has been produced. For the standard log-linear model, the expression for the learning rate, p, is derived by considering two production levels where one level is double the other. For example, given the two levels x1 and x2 (where x2 = 2x1), we have the following expressions:

C(x1) = C1 (x1)^(−b),
C(x2) = C1 (2x1)^(−b).

The percent productivity gain, p, is then computed as:

p = C(x2)/C(x1) = [C1 (2x1)^(−b)] / [C1 (x1)^(−b)] = 2^(−b).

The performance curve, P(x), shown earlier in Figure 8.1 can now be defined as the reciprocal of the average cost curve, C(x). Thus, we have:

P(x) = 1/C(x),

which will have an increasing profile compared to the asymptotically declining cost curve. In terms of practical application, learning to drive is one example where a maximum level of performance can be achieved in a relatively short time compared with the half-life of performance. That is, learning is steep, but the performance curve is relatively flat after steady state is achieved. The application of half-life analysis to learning curves can help address questions such as the ones below:

• How fast and how far can system performance be improved?
• What are the limitations to system performance improvement?
• How resilient is a system to shocks and interruptions to its operation?
• Are the performance goals that are set for the system achievable?

DERIVATION OF HALF-LIFE OF THE LOG-LINEAR MODEL

Figure 8.3 shows a pictorial representation of the basic log-linear model, with the half-life point indicated as x_1/2. The half-life of the log-linear model is computed as follows. Let:

C0 = initial performance level
C1/2 = performance level at half-life

FIGURE 8.3 General profile of the basic learning curve model, C(x) = C1 x^(−b).

Then:

C0 = C1 x0^(−b)  and  C1/2 = C1 x_1/2^(−b).

However, C1/2 = (1/2) C0. Therefore, C1 x_1/2^(−b) = (1/2) C1 x0^(−b), which leads to x_1/2^(−b) = (1/2) x0^(−b). Taking the (−1/b)th exponent of both sides yields the general half-life expression for the standard log-linear learning curve model:

x_1/2 = (1/2)^(−1/b) x0,  x0 ≥ 1,

where x_1/2 is the half-life and x0 is the initial point of operation. We refer to x_1/2 (Figure 8.3) as the first-order half-life. The second-order half-life is computed as the production level corresponding to half the preceding half. That is:

C1 x_1/2(2)^(−b) = (1/4) C1 x0^(−b),

which simplifies to yield:

x_1/2(2) = (1/2)^(−2/b) x0.

Similarly, the third-order half-life is derived to obtain:

x_1/2(3) = (1/2)^(−3/b) x0.

In general, the kth-order half-life for the log-linear model is represented as:

x_1/2(k) = (1/2)^(−k/b) x0.

The characteristics of half-life computations are illustrated in Figures 8.4 and 8.5.

FIGURE 8.4 Learning curve with b = −0.75: C(x) = 250 x^(−0.75).

COMPUTATIONAL EXAMPLES

Figures 8.4 and 8.5 show examples of log-linear learning curve profiles with b = 0.75 and b = 0.3032, respectively. The graphical profiles reveal the characteristics of learning, which can dictate the half-life behavior of the overall learning process.
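The half-life expressions above are easy to check numerically. The sketch below evaluates the first-order half-life for the two example exponents (b = 0.75 from Figure 8.4 and b = 0.3032), and then confirms that the kth-order half-life point is exactly where C(x) has fallen to C(x0)/2^k:

```python
# kth-order half-life of the log-linear model: x_{1/2(k)} = (1/2)^(-k/b) * x0,
# illustrated with the Figure 8.4 parameters C1 = 250, b = 0.75.
C1, b, x0 = 250.0, 0.75, 2.0

def C(x):
    return C1 * x ** -b           # cumulative average cost

def half_life(k, b, x0):
    return 0.5 ** (-k / b) * x0

print(round(half_life(1, 0.75, x0), 4))    # 5.0397
print(round(half_life(1, 0.3032, x0), 4))  # 19.6731

# At the kth-order half-life, the curve has halved k times.
for k in (1, 2, 3):
    assert abs(C(half_life(k, b, x0)) - C(x0) / 2 ** k) < 1e-9
```

These two first-order values reappear in the worked examples that follow.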
Knowing the point where the half-life of each curve occurs can be very useful in assessing learning retention for the purpose of designing training programs or designing work. For Figure 8.4 (C(x) = 250 x^(−0.75)), the first-order half-life is computed as:

x_1/2 = (1/2)^(−1/0.75) x0,  x0 ≥ 1.

If the above expression is evaluated for x0 = 2, the first-order half-life yields x_1/2 = 5.0397, which indicates a fast drop in the value of C(x). Table 8.1 summarizes values of C(x) as a function of the starting point, x0. The specific case of x0 = 2 is highlighted in Table 8.1. It shows C(2) = 148.6509 corresponding to a half-life of 5.0397. Note that C(5.0397) = 74.7674, which is about half of 148.6509. The arrows in the table show how the various values are linked. The conclusion from this analysis is that if we are operating at the point x = 2, we can expect this particular curve to reach its half-life decline point at x = 5.

TABLE 8.1 Numeric Calculation of Half-Lives for C(x) = 250 x^(−0.75)

For Figure 8.5 (C(x) = 250 x^(−0.3032)), the first-order half-life is computed as:

x_1/2 = (1/2)^(−1/0.3032) x0,  x0 ≥ 1.

FIGURE 8.5 Learning curve with b = −0.3032: C(x) = 240.03 x^(−0.3032).

If we evaluate the above function for x0 = 2, the first-order half-life yields x_1/2 = 19.6731. This does not represent as precipitous a drop as in Figure 8.4. These numeric examples agree with the projected profiles of the curves in Figures 8.4 and 8.5, respectively.

COST EXPRESSIONS FOR THE LOG-LINEAR MODEL

For the log-linear model, using the basic expression for cumulative average cost, the total cost of producing x units is computed as:

TC(x) = x C(x) = x C1 x^(−b) = C1 x^(−b+1).

The unit cost of producing the xth unit is given by:

UC(x) = C1 x^(−b+1) − C1 (x − 1)^(−b+1) = C1 [x^(−b+1) − (x − 1)^(−b+1)].
The marginal cost of producing the xth unit is given by:

MC(x) = d[TC(x)]/dx = (−b + 1) C1 x^(−b).

If desired, one can derive half-life expressions for the cost expressions above. For now, we will defer those derivations for interested readers. An important application of learning curve analysis is the calculation of expected production time, as illustrated by the following example. Suppose, in a production run of a complex technology component, it was observed that the cumulative hours required to produce 100 units were 100,000 hours, with a learning curve effect of 85%. For future project planning purposes, an analyst needs to calculate the number of hours required to produce the fiftieth unit. Following the standard computations, we have p = 0.85 and x = 100 units. Thus, 0.85 = 2^(−b), which yields b = 0.2345. The cumulative average over 100 units is 100,000/100 = 1,000 hours, so we have 1,000 = C1 (100)^(−b), which yields C1 = 2,944.42 hours. Since b and C1 are now known, we can compute the cumulative average hours required to produce 49 and 50 units, respectively, to obtain C(49) = 1,182.09 hours and C(50) = 1,176.50 hours. Consequently, the total hours required to produce the fiftieth unit are 50[C(50)] − 49[C(49)] = 902.59 (approximately 113 work days). If we are interested in knowing when these performance metrics would reach half of their original levels in terms of production quantity, we would use half-life calculations.

ALTERNATE FORMULATION OF THE LOG-LINEAR MODEL

An alternate formulation for the log-linear model is called the unit cost model, which is expressed in terms of the specific cost of producing the xth unit, instead of the conventional cumulative average cost expression. The unit cost formula specifies that the individual cost per unit will decrease by a constant percentage as cumulative production doubles.
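Returning to the fiftieth-unit example above, the arithmetic can be verified with a short script; the small differences from the text's figures come only from how far b is rounded:

```python
import math

# 85% learning curve: 100 units took 100,000 hours cumulatively,
# so the cumulative average is C(100) = 1,000 hours per unit.
p = 0.85
b = -math.log2(p)              # 0.85 = 2^-b  =>  b ≈ 0.2345
C1 = 1000 * 100 ** b           # ≈ 2,944 hours for the first unit

C49 = C1 * 49 ** -b            # ≈ 1,182 hours cumulative average
C50 = C1 * 50 ** -b            # ≈ 1,177 hours cumulative average
unit50 = 50 * C50 - 49 * C49   # hours for the fiftieth unit alone
print(round(unit50, 1))        # ≈ 903, matching the text's 902.59 to rounding
```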
The functional form of the unit cost model is the same as for the average cost model, except that the interpretations of the terms are different. It is expressed as:

UC(x) = C1 x^(−b),

where:
UC(x) = cost of producing the xth unit
C1 = cost of the first unit
x = cumulative production count
b = the learning curve exponent, as discussed previously.

From the unit cost formula, we can derive expressions for the other cost elements. For the discrete case, the total cost of producing x units is given by:

TC(x) = Σ (q = 1 to x) UC(q) = C1 Σ (q = 1 to x) q^(−b),

and the cumulative average cost per unit is given by:

C(x) = (C1/x) Σ (q = 1 to x) q^(−b).

The marginal cost is found as follows:

MC(x) = d[TC(x)]/dx = C1 d[1 + 2^(−b) + 3^(−b) + ... + x^(−b)]/dx = −b C1 x^(−b−1).

For the continuous case, the corresponding cost expressions are:

TC(x) = ∫ (0 to x) UC(z) dz = ∫ (0 to x) C1 z^(−b) dz = C1 x^(−b+1) / (−b + 1),

C(x) = (1/x) · C1 x^(−b+1) / (−b + 1),

MC(x) = d[TC(x)]/dx = d[C1 x^(−b+1) / (−b + 1)]/dx = C1 x^(−b).

As in the previous illustrations, the half-life analysis can be applied to the foregoing expressions to determine when each cost element of interest will decrease to half its starting value. This information can be useful for product pricing purposes, particularly for technology products, which are subject to rapid price reductions due to declining product cost. Several models and variations of learning curves have been reported in the literature (see Badiru, 1992; Jaber and Guiffrida, 2008). Models are developed through one of the following approaches:

1. Conceptual models
2. Theoretical models
3. Observational models
4. Experimental models
5. Empirical models

HALF-LIFE ANALYSIS OF SELECTED CLASSICAL MODELS

S-curve Model

The S-curve (Towill and Cherrington, 1994) is based on an assumption of a gradual start-up.
The function has the shape of the cumulative normal distribution function for the start-up curve and the shape of an operating characteristics function for the learning curve. The gradual start-up is based on the fact that the early stages of production are typically in a transient state, with changes in tooling, methods, materials, design, and even changes in the work force. The basic form of the S-curve function is:

C(x) = C1 + M (x + B)^(−b),

MC(x) = C1 [M + (1 − M)(x + B)^(−b)],

where:
C(x) = learning curve expression
MC(x) = marginal cost expression
b = learning curve exponent
C1 = cost of first unit
M = incompressibility factor (a constant)
B = equivalent experience units (a constant).

Assumptions about at least three of the four parameters (M, B, C1, and b) are needed in order to solve for the fourth one. Using the C(x) expression and the derivation procedure outlined earlier for the log-linear model, the half-life equation for the S-curve learning model is derived to be:

x_1/2 = (1/2)^(−1/b) [(M (x0 + B)^(−b) − C1) / M]^(−1/b) − B,

where:
x_1/2 = half-life expression for the S-curve learning model
x0 = initial point of evaluation of performance on the learning curve.

In terms of practical applications of the S-curve model, consider when a worker begins learning a new task. The individual is slow initially, at the tail end of the S-curve. However, the rate of learning increases as time goes on, with additional repetitions. This helps the worker to climb the steep-slope segment of the S-curve very rapidly. At the top of the slope, the worker is classified as being proficient with the learned task. From then on, even if the worker puts much effort into improving on the task, the resultant learning will not be proportional to the effort expended. The top end of the S-curve is often called the slope of diminishing returns.
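The S-curve half-life expression above can be sanity-checked numerically. In the sketch below, the parameter values (C1, M, B, b, x0) are illustrative assumptions only, chosen so that M(x0 + B)^(−b) > C1 and a half-life therefore exists:

```python
# Numeric check of the S-curve half-life (illustrative parameters).
C1, M, B, b = 50.0, 200.0, 2.0, 0.75   # assumed values, not from the text

def C(x):
    # S-curve learning model: C(x) = C1 + M * (x + B)^-b
    return C1 + M * (x + B) ** -b

x0 = 2.0
x_half = 0.5 ** (-1 / b) * ((M * (x0 + B) ** -b - C1) / M) ** (-1 / b) - B
print(C(x_half) / C(x0))               # ≈ 0.5: the cost has halved at x_half
```

Substituting the formula back into C(x) confirms algebraically that C(x_1/2) = C(x0)/2 for any parameter set satisfying the existence condition.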
At the top of the S-curve, workers succumb to the effects of forgetting and other performance-impeding factors. As the work environment continues to change, a worker’s level of skill and expertise can become obsolete. This is an excellent reason for the application of half-life computations.

Stanford-B Model

An early form of a learning curve is the Stanford-B model, which is represented as:

UC(x) = C1 (x + B)^(−b),

where:
UC(x) = direct cost of producing the xth unit
b = learning curve exponent
C1 = cost of the first unit when B = 0
B = slope of the asymptote for the curve (1 < B < 10). This is equivalent to the units of previous experience at the start of the process, which represents the number of units produced prior to first unit acceptance.

It is noted that when B = 0, the Stanford-B model reduces to the conventional log-linear model. Figure 8.6 shows the profile of the Stanford-B model with B = 4.2 and b = −0.75. The general expression for the half-life of the Stanford-B model is derived to be:

x_1/2 = (1/2)^(−1/b) (x0 + B) − B,

where:
x_1/2 = half-life expression for the Stanford-B learning model
x0 = initial point of the evaluation of performance on the learning curve.

FIGURE 8.6 Stanford-B model with parameters B = 4.2 and b = −0.75: C(x) = 250 (x + 4.2)^(−0.75).

Derivation of Half-Life for Badiru’s Multi-Factor Model

Badiru (1994) presents applications of learning and forgetting curves to productivity and performance analysis. One empirical example used production data to develop a predictive model of production throughput. Two data replicates are used for each of 10 selected combinations of cost and time values. Observations were recorded for the number of units representing double production levels. The resulting model has the functional form below and the graphical profile shown in Figure 8.7.
C(x) = 298.88 x_1^{-0.31} x_2^{-0.13},

where:
C(x) = cumulative production volume
x_1 = cumulative units of Factor 1
x_2 = cumulative units of Factor 2
b_1 = first learning curve exponent = -0.31
b_2 = second learning curve exponent = -0.13.

FIGURE 8.7 Bivariate model of learning curve.

A general form of the multi-factor learning curve model is:

C(x) = C_1 x_1^{-b_1} x_2^{-b_2},

and the half-life expressions for the multi-factor learning curve were derived to be:

x_{1(1/2)} = (1/2)^{-1/b_1} [ x_{1(0)} x_{2(0)}^{b_2/b_1} / x_{2(1/2)}^{b_2/b_1} ],

x_{2(1/2)} = (1/2)^{-1/b_2} [ x_{2(0)} x_{1(0)}^{b_1/b_2} / x_{1(1/2)}^{b_1/b_2} ],

where:
x_{i(1/2)} = half-life component due to factor i (i = 1, 2)
x_{i(0)} = initial point of factor i (i = 1, 2) along the multi-factor learning curve.

Knowledge of the value of one factor is needed in order to evaluate the other factor. Just as in the case of single-factor models, the half-life analysis of the multi-factor model can be used to predict when the performance metric will reach half of its starting value.

DeJong's Learning Formula

DeJong's learning formula is a power function that incorporates a parameter for the proportion of manual activity in a task. When operations are controlled by manual tasks, the time will be compressible as successive units are completed. If, by contrast, machine cycle times control operations, then the time will be less compressible as the number of units increases. DeJong's formula introduces an incompressibility factor, M, into the log-linear model to account for the man-machine ratio. The model is expressed as:

C(x) = C_1 + M x^{-b},

MC(x) = C_1 [M + (1 - M) x^{-b}],

where:
C(x) = learning curve expression
MC(x) = marginal cost expression
b = learning curve exponent
C_1 = cost of first unit
M = incompressibility factor (a constant).

When M = 0, the model reduces to the log-linear model, which implies a completely manual operation.
In completely machine-dominated operations, M = 1. In that case, the unit cost reduces to a constant equal to C_1, which suggests that no learning-based cost improvement is possible in machine-controlled operations. This represents a condition of high incompressibility. Figure 8.8 shows the profile of DeJong's learning formula for hypothetical parameters of M = 0.55 and b = -0.75. This profile suggests impracticality at higher values of production. Learning is very steep and the average cumulative production cost drops rapidly. The horizontal asymptote for the profile is below the lower bound on the average cost axis, suggesting an infeasible operating region as the production volume increases. The analysis above agrees with the fact that no significant published data are available on whether or not DeJong's learning formula has been successfully used to account for the degree of automation in any given operation. Using the expression MC(x), the marginal cost half-life of DeJong's learning model is derived to be:

x_{1/2} = (1/2)^{-1/b} [ ((1 - M) x_0^{-b} - M) / (2(1 - M)) ]^{-1/b},

where:
x_{1/2} = half-life expression for DeJong's learning curve marginal cost model
x_0 = initial point of the evaluation of performance on the marginal cost curve.

If the C(x) model is used to derive the half-life, then we obtain the following derivation:

x_{1/2} = (1/2)^{-1/b} [ (M x_0^{-b} - C_1) / M ]^{-1/b},

where:
x_{1/2} = half-life expression for DeJong's learning curve model
x_0 = initial point of the evaluation of performance on DeJong's learning curve.

FIGURE 8.8 DeJong's learning formula with M = 0.55 and b = -0.75: MC(x) = 250[0.55 + 0.45 x^{-0.75}].
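The C(x)-based half-life above can be verified numerically in the same way as for the S-curve model. The parameter values below are hypothetical illustrations, chosen only so that M x_0^{-b} > C_1 (required for a real-valued result).

```python
def dejong_cost(x, C1, M, b):
    """DeJong C(x) form: C(x) = C1 + M*x**(-b)."""
    return C1 + M * x ** (-b)

def dejong_half_life(x0, C1, M, b):
    """x_1/2 = (1/2)**(-1/b) * [(M*x0**(-b) - C1)/M]**(-1/b)."""
    return 0.5 ** (-1.0 / b) * ((M * x0 ** (-b) - C1) / M) ** (-1.0 / b)

# Hypothetical illustration values (not from the text).
C1, M, b, x0 = 50.0, 500.0, 0.4, 1.0
x_half = dejong_half_life(x0, C1, M, b)
ratio = dejong_cost(x_half, C1, M, b) / dejong_cost(x0, C1, M, b)
print(round(x_half, 4), round(ratio, 6))  # the cost ratio should be 0.5
```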
Levy's Adaptation Function

Recognizing that the log-linear model does not account for the leveling off of the production rate and the factors that may influence learning, Levy (1965) presented the following learning cost function:

MC(x) = [ 1/β - (1/β - x^{-b}/C_1) k^{-kx} ]^{-1},

where:
β = production index for the first unit
k = constant used to flatten the learning curve for large values of x.

The flattening constant, k, forces the curve to reach a plateau instead of continuing to decrease or turning in the upward direction. Figure 8.9 shows alternate profiles of Levy's adaptation function over different ranges of production for β = 0.75, k = 5, C_1 = 250, and b = 0.75. The profiles are arranged in an increasing order of ranges of operating intervals.

FIGURE 8.9 Profiles of Levy's adaptation function over different production ranges.

The half-life expression for Levy's learning model is a complex non-linear expression, derived as shown below:

(1/β - x_{1/2}^{-b}/C_1) k^{-k x_{1/2}} = 1/β - 2 [ 1/β - (1/β - x_0^{-b}/C_1) k^{-k x_0} ],

where:
x_{1/2} = half-life expression for Levy's learning curve model
x_0 = initial point of the evaluation of performance on Levy's learning curve.

Knowledge of some of the parameters of the model is needed in order to solve for the half-life as a closed-form expression.

Glover's Learning Model

Glover's (1966) learning formula is a learning curve model that incorporates a work commencement factor. The model is based on a bottom-up approach that uses individual worker learning results as the basis for plant-wide learning curve standards.
The functional form of the model is expressed as:

Σ_{i=1}^{n} y_i + a = C_1 [ Σ_{i=1}^{n} x_i ]^{m},

where:
y_i = elapsed time or cumulative quantity
x_i = cumulative quantity or elapsed time
a = commencement factor
n = index of the curve (usually 1 + b)
m = model parameter.

This is a complex expression for which a half-life expression is not easily computable. We defer the half-life analysis of Glover's learning curve model for further research by interested readers.

Pegels' Exponential Function

Pegels (1969) presented an alternate algebraic function for the learning curve. His model, a form of an exponential function of marginal cost, is represented as:

MC(x) = α a^{x-1} + β,

where α, β, and a are parameters based on empirical data analysis. The total cost of producing x units is derived from the marginal cost as follows:

TC(x) = ∫ (α a^{x-1} + β) dx = α a^{x-1} / ln(a) + βx + c,

where c is a constant to be derived after the other parameters have been found. The constant can be found by letting the marginal cost, total cost, and average cost of the first unit all be equal. That is, MC_1 = TC_1 = AC_1, which yields:

c = α - α / ln(a).

The model assumes that the marginal cost of the first unit is known. Thus:

MC_1 = α + β = y_0.

Pegels also presented another mathematical expression for the total labor cost in start-up curves, which is expressed as:

TC(x) = (a / (1 - b)) x^{1-b},

where:
x = cumulative number of units produced
a, b = empirically determined parameters.

The expressions for marginal cost, average cost, and unit cost can be derived, as shown earlier for other models. Figure 8.10 shows alternate profiles of Pegels' exponential function for α = 0.5, β = 125, and a = 1.2. The functions seem to suggest an unstable process, probably because the hypothetical parameters are incongruent with the empirical range for which the model was developed.
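The identification of the constant c can be checked directly: with c = α - α/ln(a), the first-unit marginal and total costs coincide. The sketch below reuses the hypothetical Figure 8.10 parameters (α = 0.5, β = 125, a = 1.2).

```python
import math

def pegels_mc(x, alpha, beta, a):
    """Marginal cost: MC(x) = alpha*a**(x-1) + beta."""
    return alpha * a ** (x - 1) + beta

def pegels_tc(x, alpha, beta, a):
    """Total cost: TC(x) = alpha*a**(x-1)/ln(a) + beta*x + c,
    with c = alpha - alpha/ln(a) chosen so that MC1 = TC1 = AC1."""
    c = alpha - alpha / math.log(a)
    return alpha * a ** (x - 1) / math.log(a) + beta * x + c

alpha, beta, a = 0.5, 125.0, 1.2
mc1 = pegels_mc(1, alpha, beta, a)
tc1 = pegels_tc(1, alpha, beta, a)
print(mc1, tc1)  # both equal alpha + beta = 125.5
```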
Using the total cost expression, TC(x), we derive the expression for the half-life of Pegels' learning curve model to be as shown below:

x_{1/2} = (1/2)^{-1/(1-b)} x_0.

FIGURE 8.10 Alternate forms of Pegels' exponential function for α = 0.5, β = 125, and a = 1.2.

Knecht's Upturn Model

Knecht (1974) presents a modification to the functional form of the learning curve in order to analytically express the observed divergence of actual costs from those predicted by learning curve theory when the number of units produced exceeds 200. This permits the consideration of non-constant slopes for the learning curve model. If UC_x is defined as the unit cost of the xth unit, then it approaches 0 asymptotically as x increases. To avoid a zero limit unit cost, the basic functional form is modified. In the continuous case, the formula for cumulative average costs is derived as:

C(x) = ∫_0^x C_1 z^{b} dz = C_1 x^{b+1} / (1 + b).

This cumulative cost also approaches zero as x goes to infinity. Knecht alters the expression for the cumulative curve to allow for an upturn in the learning curve at large cumulative production levels. He suggested the functional form below:

C(x) = C_1 x^{-b} e^{cx},

where c is a second constant. Differentiating the modified cumulative average cost expression gives the unit cost of the xth unit, as shown below:

UC(x) = d/dx [ C_1 x^{-b} e^{cx} ] = C_1 x^{-b} e^{cx} (c - b/x).

Figure 8.11 shows the cumulative average cost plot of Knecht's function for C_1 = 250, b = 0.25, and c = 0.25.

FIGURE 8.11 Knecht's cumulative average cost function for C_1 = 250, b = 0.25, and c = 0.25.

The half-life expression for Knecht's learning model turns out to be a non-linear complex function, as shown below:

x_{1/2} e^{-c x_{1/2} / b} = (1/2)^{-1/b} x_0 e^{-c x_0 / b},
where:
x_{1/2} = half-life expression for Knecht's learning curve model
x_0 = initial point of the evaluation of performance on Knecht's learning curve.

Given that x_0 is known, iterative, interpolation, or numerical methods may be needed to solve for the half-life value.

Yelle's Combined Product Learning Curve

Yelle (1976) proposed a learning curve model for products by aggregating and extrapolating the individual learning curves of the operations making up a product on a log-linear plot. The model is expressed as shown below:

C(x) = k_1 x_1^{-b_1} + k_2 x_2^{-b_2} + ... + k_n x_n^{-b_n},

where:
C(x) = cost of producing the xth unit of the product
n = number of operations making up the product
k_i x_i^{-b_i} = learning curve for the ith operation.

The deficiency of this aggregated model is that a product-specific learning curve seems to be a more reasonable model than an integrated product curve. For example, an aggregated learning curve with a 96.6% learning rate obtained from individual learning curves with the respective learning rates of 80%, 70%, 85%, 80%, and 85% does not appear to represent reality. If this type of composite improvement were possible, then one could always improve the learning rate for any operation by decomposing it into smaller integrated operations. The additive and multiplicative approaches of reliability functions support the conclusion of the impracticality of Yelle's integrated model.

CONTEMPORARY LEARNING–FORGETTING CURVES

Several factors can, in practice, influence the learning rate. A better understanding of the profiles of learning curves can help in developing forgetting intervention programs and in assessing the sustainability of learning. For example, shifting from learning one operational process to another can influence the half-life profile of the original learning curve. Important questions that half-life analysis can address include the following:

1. What factors influence learning retention and for how long?
2. What factors foster forgetting and at what rate?
3. What joint effects exist to determine the overall learning profile for worker performance and productivity?
4. What is the profile and rate of decline of the forgetting curve?

The issues related to the impact of forgetting on performance and productivity analysis are brought to the forefront by Badiru (1994, 1995) and the references therein. Figure 8.12 shows some of the possible profiles of the forgetting curve. The impact of forgetting can occur continuously over time or discretely over bounded intervals of time. Also, forgetting can occur as random interruptions in the system performance or as scheduled breaks (Anderlohr 1969). The profile of the forgetting curve and its mode of occurrence can influence the half-life measure. This is further evidence that the computation of half-life can help distinguish between learning curves, particularly if a forgetting component is involved. Recent literature has further highlighted the need to account for the impact of forgetting. Because of the recognition of the diminishing impacts of forgetting curves, these curves are very amenable to the application of the half-life concept. Jaber and Sikström (2004) present computational comparisons of three learning and forgetting curves based on previous models available in the literature:

1. LFCM (learn–forget curve model) provided by Jaber and Bonney (1996)
2. RC (recency model) provided by Nembhard and Uzumeri (2000)
3. PID (power integration diffusion) provided by Sikström and Jaber (2002).

All three models assume that learning conforms to the original log-linear model presented by Wright (1936), denoted here as Wright's learning curve (WLC):

T(x) = T_1 x^{-b},

where T(x) is the time to produce the xth unit, T_1 is the time to produce the first unit, x is the cumulative production unit, and b is the learning curve constant (0 < b < 1).
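Since all three models build on the WLC, its defining property is worth sketching numerically: each doubling of cumulative output multiplies the unit time by the constant factor 2^{-b} (the learning rate). The values of T_1 and b below are hypothetical illustrations.

```python
def wlc_time(x, T1=25.0, b=0.32):
    """Wright's learning curve: T(x) = T1 * x**(-b)."""
    return T1 * x ** (-b)

learning_rate = 2 ** (-0.32)          # roughly 0.80, i.e., an 80% curve
ratio = wlc_time(20) / wlc_time(10)   # doubling cumulative output from 10 to 20
print(round(learning_rate, 4), round(ratio, 4))  # the two values coincide
```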
Jaber–Bonney Learn–Forget Curve Model (LFCM)

Jaber and Bonney (1996) present the learn–forget curve model (LFCM), which suggests that the forgetting curve exponent could be computed as:

f_i = [ b(1 - b) log(u_i + n_i) ] / [ log(1 + D / t(u_i + n_i)) ],

where 0 ≤ f_i ≤ 1, n_i is the number of units produced in cycle i up to the point of interruption, D is the break time for which total forgetting occurs, and u_i is the number of units producible due to retained learning at the beginning of cycle i from producing x_{i-1} units in the previous i - 1 cycles. Note that in i production cycles, there are i - 1 production breaks, where x_{i-1} = Σ_{j=1}^{i-1} n_j and 0 < u_i < x_{i-1}. That is, if the learning process is interrupted for a break of length D, then the performance reverts to a threshold value, usually equivalent to T_1.

FIGURE 8.12 Alternate profiles of the declining impact of forgetting (concave decline, convex decline, and S-curve decline).

Denote t(u_i + n_i) as the time to produce u_i + n_i units (the equivalent units of cumulative production accumulated by the end of cycle i), where b is the learning curve constant. Then t(u_i + n_i) is computed as presented by Jaber and Sikström (2004):

t(u_i + n_i) = Σ_{x=1}^{u_i + n_i} T(x) ≅ ∫_0^{u_i + n_i} T_1 x^{-b} dx = T_1 (u_i + n_i)^{1-b} / (1 - b).

This function is plotted in Figure 8.13 for T_1 = 25 and b = 0.65. The number of units producible due to retained learning at the beginning of cycle i + 1 is given from Jaber and Bonney (1996) as:

u_{i+1} = (u_i + n_i)^{(1 + f_i / b)} y_i^{-f_i / b},

where u_1 = 0, and y_i is the number of units that would have been accumulated if production had not ceased for d_i units of time. y_i is computed as:

y_i = { (1 - b) [ t(u_i + n_i) + d_i ] / T_1 }^{1 / (1 - b)}.

When total forgetting occurs, we have u_{i+1} = 0. However, u_{i+1} → 0 only as y_i → +∞, or alternatively as d_i → +∞, where all the other parameters are of non-zero positive values.
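The recursion above can be traced through a single production cycle. The sketch below applies the LFCM expressions with hypothetical values for T_1, b, D, the cycle size, and the break length (none of them taken from the text).

```python
import math

# One LFCM production cycle (all numerical values are hypothetical).
T1, b = 25.0, 0.3     # time for the first unit, learning exponent
D = 5000.0            # break length at which total forgetting occurs
u, n = 0.0, 50.0      # units retained at cycle start, units produced in the cycle
d = 200.0             # actual break length after the cycle

t = T1 * (u + n) ** (1 - b) / (1 - b)                    # time to produce u + n units
f = b * (1 - b) * math.log(u + n) / math.log(1 + D / t)  # forgetting exponent f_i
y = ((1 - b) * (t + d) / T1) ** (1 / (1 - b))            # units had production continued
u_next = (u + n) ** (1 + f / b) * y ** (-f / b)          # units retained after the break
print(round(f, 4), round(u_next, 2))  # u_next < u + n because of the break
```

With d = 0 (no break), y reduces to u + n and the recursion returns u_{i+1} = u_i + n_i, i.e., full retention.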
Thus, we deduce that total forgetting occurs only when d_i holds a very large value. This does not necessarily contradict the assumption of a finite value of D at which total forgetting occurs. By this formulation, u_{i+1} < 1 when d_i = D, and it flattens out at zero for increasing values of d_i > D. Anderlohr (1969), McKenna and Glendon (1985), and Globerson et al. (1998) reported findings on the impact of production breaks through empirical studies.

FIGURE 8.13 Plot of the Jaber–Sikström learn–forget model.

As reported by Jaber and Sikström (2004), the intercept of the forgetting curve could be determined as:

T̂_{1i} = T_1 (u_i + n_i)^{-(b + f_i)}.

The time to produce the first unit in cycle i could then be predicted as:

T_{1i}^{LFCM} = T_1 (u_i + n_i)^{-b}.

Nembhard–Uzumeri Recency (RC) Model

The recency (RC) model presented by Nembhard and Uzumeri (2000) has the capability of capturing multiple breaks. Nembhard and Uzumeri (2000) modified the three hyperbolic learning functions of Mazur and Hastie (1978) by introducing a measure of the "recency" of experiential learning, R. For each unit of cumulative production, x, Nembhard and Uzumeri (2000) determined the corresponding recency measure, R_x, by computing the ratio of the average elapsed time to the elapsed time of the most recent unit produced. Nembhard and Osothsilp (2001) suggested that R_x could be computed as:

R_x = [ 2 Σ_{i=1}^{x} (t_i - t_0) ] / [ x (t_x - t_0) ],

where x is the accumulated number of produced units, t_x is the time when unit x is produced, t_0 is the time when the first unit is produced, t_i is the time when unit i is produced, and R_x ∈ (1, 2). The performance of the first unit after a break could be computed as:

T_{1i}^{RC} = T_1 (x R_x)^{α},

where α is a fitted parameter that represents the degree to which the individual forgets the task.
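A sketch of the recency computation (the timestamps, T_1, and α below are hypothetical): evenly spaced production keeps R_x near 1, while production clustered just before t_x pushes R_x toward 2.

```python
def recency(times):
    """R_x = 2 * sum_{i=1..x}(t_i - t_0) / (x * (t_x - t_0)).

    `times` holds the production timestamps t_0, t_1, ..., t_x."""
    t0, tx, x = times[0], times[-1], len(times) - 1
    return 2 * sum(t - t0 for t in times[1:]) / (x * (tx - t0))

even = [0, 1, 2, 3, 4, 5, 6, 7, 8]             # evenly spaced units
recent = [0, 93, 94, 95, 96, 97, 98, 99, 100]  # units clustered near t_x
print(recency(even), recency(recent))  # -> 1.125 1.93

# Predicted first-unit time after a break: T1_RC = T1 * (x * R_x)**alpha
T1, alpha = 25.0, 0.1  # hypothetical fitted values
x = len(even) - 1
print(round(T1 * (x * recency(even)) ** alpha, 2))
```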
Sikström–Jaber Power Integration Diffusion Model

The power integration diffusion (PID) model presented by Sikström and Jaber (2002) advocates that each time a task is performed, a memory trace is formed. The strength of this trace decays as a power function over time. For identical repetitions of a task, an aggregated memory trace can be found by integrating the strength of the memory trace over the time interval of the repeated task. The integral of the power-function memory trace is itself a power function. Therefore, the memory strength of an uninterrupted set of repetitions can be described as the difference between a power function of the retention interval at the start of the repetitions and a power function of the retention interval at the end of the repetitions. The time it takes to perform a task is determined by a "diffusion" process in which the strength of the memory constitutes the signal. To simplify the calculation, the noise in the diffusion process is disregarded, and the time to perform a task is the inverse of the aggregated memory strength plus a constant reflecting the start time of the diffusion process. The strength of the memory trace follows a power function of the retention interval since training was given. That is, the strength of a memory trace (at which t time units have passed between learning and forgetting) encoded during a short time interval (dt) is:

S'(t) = S_0 t^{-a} dt,

where a is the forgetting parameter, a ∈ (0, 1), and S_0 is a scaling parameter > 0 (to be compared with the parameter in other models that represents the time to produce the first unit). The strength of a memory trace encoded for an extended time period is S(t_{e,1}, t_{e,2}), where t_{e,1} time units have passed since the start of encoding of unit e, t_{e,2} time units have passed since the end of encoding of unit e, and t_{e,1} > t_{e,2}. This memory strength can be calculated by the integral over the time of encoding.
S(t_{e,1}, t_{e,2}) = ∫_{t_{e,2}}^{t_{e,1}} S'(t) dt = (S_0 / (1 - a)) [ t_{e,1}^{1-a} - t_{e,2}^{1-a} ].

The profile of the above function is plotted in Figure 8.14 for hypothetical values of S_0 = 20 and a = 0.35.

FIGURE 8.14 Plot of the Jaber–Sikström power integration diffusion model.

The strength of the memory trace following encoding during N time intervals is the sum over these intervals, and it is determined as:

S(t_{e,1}, t_{e,2}) = (S_0 / (1 - a)) Σ_{e=1}^{N} [ t_{e,1}^{1-a} - t_{e,2}^{1-a} ].

The time to produce a unit is calculated with a diffusion model where the strength of the memory trace is conceived of as a signal. For simplification, the noise in the diffusion process is set to zero. The time to produce a unit is the inverse of the memory strength. The start time of the diffusion process constitutes a constant (t_0) that is added to the total time to produce a unit:

T(t_r) = [ S(t_{e,1}, t_{e,2}) ]^{-1} + t_0
       = { (S_0 / (1 - a)) Σ_{e=1}^{N} [ t_{e,1}^{1-a} - t_{e,2}^{1-a} ] }^{-1} + t_0
       = S_0' { Σ_{e=1}^{N} [ t_{e,1}^{a'} - t_{e,2}^{a'} ] }^{-1} + t_0,

where S_0' = (1 - a) / S_0, which is a rescaling of S_0, and a' = 1 - a, a' ∈ (0, 1), which is a rescaling of a. The rescaling of the parameters is introduced for convenience, to simplify the final expression. Sikström and Jaber (2002) showed that, without production breaks, the predictions of PID are a good approximation of Wright's learning curve model. That is, the predictions are identical, given that the accumulated time to produce x units can be approximated as:

t(x) = T_1 Σ_{n=1}^{x} n^{-b} ≈ T_1 ∫_0^{x} n^{-b} dn = T_1 x^{1-b} / (1 - b).
Thus, in this approximation, Wright's original learning curve model is a special case of PID, where:

T_x = dt(x)/dx = { [ (1 + a') S_0' ]^{1/(1+a')} x^{-a'/(1+a')} } / (1 + a') = T_1 x^{-b},

and t_0 = 0, from which Jaber and Sikström (2004) deduce the following relationships between T_1, a', and S_0', and between a' and b, respectively:

T_1 = [ (1 + a') S_0' ]^{1/(1+a')} / (1 + a'), where x = 1,

and

b = a' / (1 + a'), for every x > 1,

where 0 < b < 1/2 for 0 < a' < 1, with a' = 1 - a and S_0' = (1 - a) / S_0.

The early studies of learning curves did not address the forgetting function. The contemporary functions that address the impact of forgetting tend to be more robust and more representative of actual production scenarios. These models can be further enhanced by carrying out a half-life analysis on them.

POTENTIAL HALF-LIFE APPLICATION TO HYPERBOLIC DECLINE CURVES

Over the years, the decline curve technique has been used extensively by the oil industry to evaluate future oil and gas predictions. These predictions are used as the basis for economic analysis to support development, property sale or purchase, industrial loan provisions, and also to determine whether a secondary recovery project should be carried out. It is expected that the profile of hyperbolic decline curves can be adapted for application to learning curve analysis. The graphical solution of the hyperbolic equation is through the use of log-log paper, which sometimes provides a straight line that can be extrapolated for a useful length of time to predict future production levels. This technique, however, sometimes fails to produce the straight line needed for the extrapolation required for some production scenarios. Furthermore, the graphical method usually involves some manipulation of the data, such as shifting, correcting, and/or adjusting scales, which eventually introduces bias into the actual data.
In order to avoid the noted graphical problems of hyperbolic decline curves, and to predict accurately the future performance of a producing well, a non-linear least-squares technique is often considered. This method does not require any straight-line extrapolation for future predictions. The mathematical analysis proceeds as follows. The general hyperbolic decline equation for the oil production rate, q, as a function of time, t, can be represented as:

q(t) = q_0 (1 + m D_0 t)^{-1/m}, 0 < m < 1,

where:
q(t) = oil production at time t
q_0 = initial oil production
D_0 = initial decline
m = decline exponent.

Also, the cumulative oil production at time t, Q(t), can be written as:

Q(t) = (q_0 / ((m - 1) D_0)) [ (1 + m D_0 t)^{(m-1)/m} - 1 ],

where Q(t) is the cumulative production as of time t. By combining the above equations and performing some algebraic manipulations, it can be shown that:

q(t)^{1-m} = q_0^{1-m} + (m - 1) D_0 q_0^{-m} Q(t),

which shows that the production at time t is a non-linear function of its cumulative production level. By rewriting the equations in terms of cumulative production, we have:

Q(t) = (q_0^{m} / ((m - 1) D_0)) q(t)^{1-m} + q_0 / ((1 - m) D_0).

This function is plotted in Figure 8.15.

FIGURE 8.15 Plot of hyperbolic decline curve for cumulative production over time.

It is evident that the model can be investigated in terms of conventional learning curve techniques, a forgetting decline curve, and half-life analysis, in a procedure similar to the techniques presented earlier in this chapter.

CONCLUSIONS

Degradation of performance occurs naturally, either due to internal processes or due to externally imposed events, such as extended production breaks. For productivity assessment purposes, it may be of interest to determine the length of time it takes for a production metric to decay to half of its original magnitude.
For example, for a career planning strategy, one may be interested in how long it takes for skill sets to degrade by half in relation to the current technological needs of the workplace. The half-life phenomenon may be due to intrinsic factors, such as forgetting, or due to external factors, such as a shift in labor requirements. Half-life analysis can have application in intervention programs designed to achieve the reinforcement of learning. It can also have application in assessing the sustainability of skills acquired through training programs. Further research on the theory of the half-life of learning curves should be directed to topics such as the following:

• Half-life interpretations
• Learning reinforcement programs
• Forgetting intervention and sustainability programs.

In addition to the predictive benefits of half-life expressions, they also reveal the ad hoc nature of some of the classical learning curve models that have been presented in the literature. We recommend that future efforts to develop learning curve models should also attempt to develop the corresponding half-life expressions to provide the full operating characteristics of the models. Readers are encouraged to explore the half-life analyses of other learning curve models not covered in this chapter.

REFERENCES

Anderlohr, G., 1969. What production breaks cost. Industrial Engineering 20(9): 34–36.
Badiru, A.B., 1992. Computational survey of univariate and multivariate learning curve models. IEEE Transactions on Engineering Management 39(2): 176–188.
Badiru, A.B., 1994. Multifactor learning and forgetting models for productivity and performance analysis. International Journal of Human Factors in Manufacturing 4(1): 37–54.
Badiru, A.B., 1995. Multivariate analysis of the effect of learning and forgetting on product quality. International Journal of Production Research 33(3): 777–794.
Badiru, A.B., and Ijaduola, A., 2009. Half-life theory of learning curves.
In Handbook of military industrial engineering, eds. A.B. Badiru and M.U. Thomas, 33-1–33-28. Boca Raton: CRC Press/Taylor and Francis.
Belkaoui, A., 1976. Costing through learning. Cost and Management 50(3): 36–40.
Belkaoui, A., 1986. The learning curve. Westport: Quorum Books.
Camm, J.D., Evans, J.R., and Womer, N.K., 1987. The unit learning curve approximation of total costs. Computers and Industrial Engineering 12(3): 205–213.
Globerson, S., Nahumi, A., and Ellis, S., 1998. Rate of forgetting for motor and cognitive tasks. International Journal of Cognitive Ergonomics 2(3): 181–191.
Glover, J.H., 1966. Manufacturing progress functions: An alternative model and its comparison with existing functions. International Journal of Production Research 4(4): 279–300.
Jaber, M.Y., and Bonney, M., 1996. Production breaks and the learning curve: The forgetting phenomenon. Applied Mathematics Modeling 20(2): 162–169.
Jaber, M.Y., and Bonney, M., 2003. Lot sizing with learning and forgetting in set-ups and in product quality. International Journal of Production Economics 83(1): 95–111.
Jaber, M.Y., and Bonney, M., 2007. Economic manufacture quantity (EMQ) model with lot size dependent learning and forgetting rates. International Journal of Production Economics 108(1–2): 359–367.
Jaber, M.Y., and Guiffrida, A., 2008. Learning curves for imperfect production processes with reworks and process restoration interruptions. European Journal of Operational Research 189(1): 93–104.
Jaber, M.Y., Kher, H.V., and Davis, D., 2003. Countering forgetting through training and deployment. International Journal of Production Economics 85(1): 33–46.
Jaber, M.Y., and Sikström, S., 2004. A numerical comparison of three potential learning and forgetting models. International Journal of Production Economics 92(3): 281–294.
Knecht, G., 1974. Costing, technological growth and generalized learning curves. Operations Research Quarterly 25(3): 487–491.
Levy, F., 1965. Adaptation in the production process. Management Science 11(6): 136–154.
Liao, W.M., 1979. Effects of learning on resource allocation decisions. Decision Sciences 10(1): 116–125.
Mazur, J.E., and Hastie, R., 1978. Learning as accumulation: A re-examination of the learning curve. Psychological Bulletin 85(6): 1256–1274.
McIntyre, E., 1977. Cost-volume-profit analysis adjusted for learning. Management Science 24(2): 149–160.
McKenna, S.P., and Glendon, A.I., 1985. Occupational first aid training: Decay in cardiopulmonary resuscitation (CPR) skills. Journal of Occupational Psychology 58(2): 109–117.
Nanda, R., 1979. Using learning curves in integration of production resources. Proceedings of 1979 IIE Fall Conference, 376–380.
Nembhard, D.A., and Osothsilp, N., 2001. An empirical comparison of forgetting models. IEEE Transactions on Engineering Management 48(3): 283–291.
Nembhard, D.A., and Uzumeri, M.V., 2000. Experiential learning and forgetting for manual and cognitive tasks. International Journal of Industrial Ergonomics 25(3): 315–326.
Pegels, C., 1969. On start-up or learning curves: An expanded view. AIIE Transactions 1(3): 216–222.
Richardson, W.J., 1978. Use of learning curves to set goals and monitor progress in cost reduction programs. Proceedings of 1978 IIE Spring Conference, 235–239. Norcross, GA: Institute of Industrial Engineers.
Sikström, S., and Jaber, M.Y., 2002. The power integration diffusion (PID) model for production breaks. Journal of Experimental Psychology: Applied 8(2): 118–126.
Smith, J., 1989. Learning curve for cost control. Norcross, GA: Industrial Engineering and Management Press.
Smunt, T.L., 1986. A comparison of learning curve analysis and moving average ratio analysis for detailed operational planning. Decision Sciences 17(4): 475–495.
Sule, D.R., 1978. The effect of alternate periods of learning and forgetting on economic manufacturing quantity. AIIE Transactions 10(3): 338–343.
Towill, D.R., and Cherrington, J.E., 1994. Learning curve models for predicting the performance of advanced manufacturing technology. International Journal of Advanced Manufacturing Technology 9(3): 195–203.
Towill, D.R., and Kaloo, U., 1978. Productivity drift in extended learning curves. Omega 6(4): 295–304.
Womer, N.K., 1979. Learning curves, production rate, and program costs. Management Science 25(4): 312–319.
Womer, N.K., 1981. Some propositions on cost functions. Southern Economic Journal 47(4): 1111–1119.
Womer, N.K., 1984. Estimating learning curves from aggregate monthly data. Management Science 30(8): 982–992.
Womer, N.K., and Gulledge, T.R., Jr., 1983. A dynamic cost function for an airframe production program. Engineering Costs and Production Economics 7(3): 213–227.
Wright, T.P., 1936. Factors affecting the cost of airplanes. Journal of the Aeronautical Sciences 3(2): 122–128.
Yelle, L.E., 1979. The learning curve: Historical review and comprehensive survey. Decision Sciences 10(2): 302–328.
Yelle, L.E., 1980. Industrial life cycles and learning curves: Interaction of marketing and production. Industrial Marketing Management 9(2): 311–318.
Yelle, L.E., 1983. Adding life cycles to learning curves. Long Range Planning 16(6): 82–87.

9 Influence of Breaks in Learning on Forgetting Curves

Sverker Sikström, Mohamad Y. Jaber, and W. Patrick Neumann

CONTENTS

Introduction
Breaks and Forgetting
Theories of Massed and Spaced Learning
Breaks and the Brain
Sleeping Breaks
Conclusion
Acknowledgments
References

INTRODUCTION

Forgetting is one of the most fundamental aspects of cognitive performance. Typically, forgetting is studied across time, where performance on a memory test plotted as a function of time is called a "forgetting curve." Forgetting is central to all forms of behavioral study because it greatly influences how much information can be accessed. On very short timescales, most, if not all, information is retained in the cognitive system, but this information is rapidly lost over time, with only a fraction of the original memory maintained over an extended period. These memories are maintained in different memory systems that span various timescales, are subject to different types of interference, and are possibly based on different neurophysiological substrates. On very short timescales, memory is maintained in a modality-specific perceptual system that holds the incoming sensory information so that it can be processed and summarized by the cognitive systems. The details of these sensory systems depend on the modality; for example, the visual system maintains information more briefly than the auditory system (Martin and Jones 1979). In fact, the visual sensory system holds information for so short a period that its content typically disappears before subjects can report it efficiently.
Memories maintained for periods longer than sensory memory are typically divided into short-term and long-term memory, where the retained information is increasingly based on semantic or episodic aspects of the studied material. Short-term memory, and the similar concept of working memory, are functionally seen as the memory systems required for processing and manipulating information directly related to the current task. Working memory can be further divided into systems maintaining visual/auditory information and a more central executive component. The visual and auditory systems may be seen as slave systems to the central executive, where higher-order and controlled processes occur, including elaboration of the information carried in the slave systems (Baddeley and Hitch 1974). Memories in working memory are maintained for time periods measured in seconds, whereas consolidation or elaboration promotes the storing of memories in a persistent long-term memory system, where information is maintained for hours, days, and years. Perhaps the most important factor influencing forgetting is time. The most significant effect is that the more time that has elapsed since encoding, the less well the learnt material can be retrieved. However, a few exceptions to this rule exist, as will be elaborated on later. Another factor is the rate of forgetting, which depends largely on the age of the memory: an older memory is better maintained than a younger one. This empirical phenomenon is described by Jost's law (1897), which states that if two memories are of equal strength but of different ages, the older one will be forgotten more slowly. In other words, younger memories are forgotten faster than older memories. Other studies have also observed that older memories are less likely to be lost than newer memories (e.g., Ribot 1881; Squire et al. 2001).
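Jost's law can be illustrated with a short numeric sketch, assuming power-function decay of trace strength (the functional form argued for later in this chapter); the decay exponent and the two ages are illustrative values, not fitted to data:

```python
# Jost's law under power-function decay m(age) = s * (1 + age)**-b:
# two memories of equal strength "now" but of different ages diverge,
# with the older one decaying more slowly. All values are illustrative.
b = 0.6                        # assumed decay exponent
young_age, old_age = 1.0, 10.0

def strength(s, age):
    """Current strength of a trace encoded with strength s, `age` units ago."""
    return s * (1.0 + age) ** -b

# pick encoding strengths so the two traces are equally strong right now
s_young = 1.0
s_old = s_young * ((1 + old_age) / (1 + young_age)) ** b
assert abs(strength(s_young, young_age) - strength(s_old, old_age)) < 1e-12

# one time unit later, the older memory has retained a larger share
young_kept = strength(s_young, young_age + 1) / strength(s_young, young_age)
old_kept = strength(s_old, old_age + 1) / strength(s_old, old_age)
# old_kept > young_kept: the older memory is forgotten more slowly
```

Under pure exponential decay, by contrast, both retention ratios would be identical whatever the ages, which is precisely the tension examined next.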
A common and important theoretical implication of these studies, as well as others in the literature, is that memories require time to consolidate in order to become resistant to loss. Jost's law has important implications for how forgetting can be summarized. If the rate of forgetting is lower for older than for younger memories, then this seems to rule out the possibility that a fixed proportion of the memory is forgotten in each time interval; stated in mathematical terms, forgetting does not follow an exponential function. However, before accepting this conclusion another possibility needs to be ruled out, namely that the strengthening of memories decreases the rate of forgetting. That is, it could be the case that memories are forgotten at an exponential rate, but that older memories (possibly due to more efficient encoding) decay more slowly than younger (and weaker) memories. This hypothesis was investigated by Slamecka and McElree (1983), who studied forgetting following either strong or weak initial learning. They found that initial learning influenced neither the shape of the forgetting curve nor the rate of forgetting; the forgetting curves were reasonably parallel across the retention interval, which is defined as the time from the last presentation of the material (the last trial) to the test. The rate of decay also decreased over the retention interval: 36% of the facts learned were lost on the first day, whereas the average daily rate over the subsequent five days decreased to 11.5%. These data support the suggestion that forgetting cannot be described by a simple exponential decay function. Given that the rate of forgetting decreases with the age of the memory, a power function seems to be a good candidate for describing forgetting data.
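This argument can be checked numerically: a single exponential loses a constant fraction per day regardless of the memory's age, whereas a power function loses a shrinking fraction, as the data above require. A minimal sketch (the power exponent 0.8 is an assumed value, not fitted to the chapter's data):

```python
import math

# Exponential decay m(t) = exp(-lam*t): the fraction lost in any one-day
# window is constant, 1 - exp(-lam), whatever the age of the memory.
lam = -math.log(1 - 0.36)          # rate chosen so 36% is lost on day one
daily_loss = [1 - math.exp(-lam * (t + 1)) / math.exp(-lam * t)
              for t in range(6)]   # every entry equals 0.36

# Power-law decay m(t) = (1 + t)**-b: the daily loss fraction shrinks with age.
b = 0.8                            # illustrative exponent
power_loss = [1 - ((2 + t) / (1 + t)) ** -b for t in range(6)]
# power_loss is strictly decreasing: older memories lose less per day
```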
Indeed, a number of empirical studies have found forgetting curves that are well described by power functions (Anderson and Schooler 1991; Wickelgren 1974, 1977; Wixted and Ebbesen 1991). In an extensive meta-review of over 210 datasets published over the span of a century, Rubin and Wenzel (1996) tested over 100 forgetting functions and concluded that these data were consistent with the power function. However, based on these data they were not able to distinguish this candidate from three other functions, namely the logarithmic function, the exponential in the square root of time, and the hyperbola in the square root of time, thus indicating that more precise data are required to identify a single forgetting function. Another interpretation of forgetting curves is that memories are stored in different memory systems, where each system decays at an exponential rate with widely different half-time constants, and that the forgetting found in behavioral data is an aggregation of these curves. This interpretation is appealing for at least three reasons. First, exponential curves occur naturally in most simple systems. For example, an exponential is consistent with the idea that memories decay with a certain probability at each time step, while under different circumstances forgetting might occur due to interference from learning other material; in both cases, these theories are most easily expressed as exponential functions. Second, it is consistent with the view of memories being contained in different memory systems (e.g., perceptual memory, short- and long-term memory, working memory, and so forth), where each memory system could be described by an exponential function with a different time constant.
Third, it is consistent with the theory that several biological mechanisms, operating at different timescales, are relevant for memory, and that these will affect forgetting in somewhat different ways. Each of these learning mechanisms may be forgotten at exponential rates with widely different time constants, giving rise to an aggregated forgetting curve on the behavioral level that is well summarized as a power function. Given the theory that the forgetting curve is an aggregation of underlying memory traces with exponential decay, a significant question is how these underlying exponential traces could be combined to yield a forgetting curve that describes the empirical data at the behavioral level. This question was studied in detail by Sikström (1999, 2002), who found that power-function forgetting curves can be obtained at the behavioral level, provided that the half-time constants of the underlying exponential functions are drawn from a power-function distribution. The parameters describing this distribution of half-time constants directly determine the rate of forgetting of the behavioral power-function forgetting curve, and power functions with any rate of forgetting can be obtained with this model. Sikström (2002) implemented such a forgetting model in a Hopfield neural network and found that it reproduced empirical power-function forgetting curves well. It can therefore be concluded from the literature that forgetting curves are well described by power functions.

BREAKS AND FORGETTING

An interesting aspect of memory function is how breaks in a repetitive task can influence the learning of that task. Data show that two conditions studied over the same length of time may be remembered quite differently depending on whether the learning is spread out over time or massed into one session.
This phenomenon suggests that repetitive learning of the same material cannot be described simply as the sum of learning from each independent learning episode. Extensive research has shown that spaced learning yields superior performance compared with massed learning: continuous repetition of a to-be-remembered item yields lower performance on a delayed test than if a short break is introduced between successive repetitions. Uninterrupted learning of material is typically referred to as "massed" learning, whereas learning that is interrupted by short or longer breaks is called "spaced" or "distributed." The benefits of spaced learning have been observed for verbal tasks such as list recall, paragraph recall, and paired-associates learning (Janiszewski et al. 2003), and during skill learning, for example mirror tracing or video game acquisition (Donovan and Radosevich 1999). The effect is typically large and well reproduced (Cepeda et al. 2006). It is worth noting that massed and spaced learning have their parallel in industrial engineering as massed and intermittent production (or production with breaks), respectively, where studying the effects of production breaks on the productivity of production systems has gained considerable attention (Jaber 2006). However, boundary conditions on the beneficial effects of spaced learning are known, and massed repetition can sometimes be better than spaced repetition. The retention interval, or the time between the second presentation and retrieval, needs to be large in comparison with the break time. For example, lower performance in the spaced condition can be obtained on an immediate test combined with a very long break time, compared to a massed condition immediately followed by a test.
In this case the massed condition effectively functions as an extended presentation time, while the first presentation in the spaced condition is largely forgotten. In particular, when performance is plotted against the retention interval, immediate testing tends to give superior performance for massed presentation, whereas the standard finding of better performance for spaced repetition emerges as the retention interval increases. An important theoretical and practical question is how to schedule learning so that people retain as much as possible of the knowledge learnt in previous sessions. Empirical data available in the literature suggest that there exists a break length that maximizes performance (Sikström and Jaber, in revision). This finding is also supported by Glenberg and Lehmann (1980). For example, if the retention interval (storage of learning over a break) is less than one minute, then an inter-stimulus interval (ISI) of less than one minute maximizes performance; however, if the retention interval is six months or more, then an inter-stimulus interval of at least one month maximizes performance. The scheduling of the ISI is also important for learning. This has been studied by comparing fixed and expanding ISIs. With a fixed ISI the same time passes between each relearning, so that subjects know more about the material at each repetition. Expanding ISIs are scheduled so that the time between stimuli increases, and thus counteract the aggregation of memory strength over repeated learning. However, the empirical results for fixed and expanding ISIs are mixed. Some researchers have shown that expanding ISIs are beneficial for long-term retention (Hollingworth 1913; Kitson 1921; Landauer and Bjork 1978; Pyle 1913), whereas others have found the opposite effect (Cull 1995, 2000).
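The way the optimal break length grows with the retention interval can be illustrated with a toy model (our own sketch, not a model from the literature): assume power-function forgetting, and assume that the second study episode adds less new strength the better the first trace is still remembered at that moment ("deficient processing"). All parameter values below are assumptions chosen for the illustration.

```python
# Toy two-presentation model: power-function forgetting plus weaker encoding
# of the second presentation when the first trace is still fresh.
b, a = 0.5, 0.9     # decay exponent and deficient-processing weight (assumed)

def strength_at_test(isi, retention):
    """Summed trace strength from two study episodes separated by `isi`,
    tested `retention` time units after the second episode."""
    trace1 = (1 + isi + retention) ** -b
    encoding2 = 1 - a * (1 + isi) ** -b   # less is encoded if trace1 is fresh
    trace2 = encoding2 * (1 + retention) ** -b
    return trace1 + trace2

def best_isi(retention):
    """Grid-search the break length that maximizes strength at test."""
    grid = [g / 2 for g in range(401)]    # ISIs from 0 to 200 in steps of 0.5
    return max(grid, key=lambda g: strength_at_test(g, retention))

short_opt = best_isi(1.0)     # short retention interval -> short optimal break
long_opt = best_isi(100.0)    # long retention interval -> long optimal break
```

With these assumed values the optimal ISI grows by roughly an order of magnitude as the retention interval grows from 1 to 100 time units, qualitatively matching the empirical pattern described above.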
THEORIES OF MASSED AND SPACED LEARNING

Massed and spaced effects have implications for theories of memory. The inattention theory (Hintzman 1974) suggests that when ISIs are short, subjects pay less attention because the item to be remembered is more familiar. This theory thus accounts for inferior performance in massed conditions by less attention being paid at the second presentation. By contrast, Sikström and Jaber (in revision) have suggested that encoding requires resources, and that these resources are consumed, or depleted, during repetitive learning. In massed conditions, fewer resources are available at the second presentation, leading to poorer performance. This depletion model was implemented in a computational model in which a memory trace is formed at every time unit of encoding, and the depletion of encoding resources diminishes the overall strength of these traces. All traces formed at encoding are then summed into an aggregated memory strength, which determines performance at retrieval. Sikström and Jaber (in revision) fitted this computational model to several different datasets, including motoric and verbal learning with short and long retention intervals, and found that the model fit the data well, providing support for the suggested theory. This suggests that the depletion of encoding resources could be an important mechanism in accounting for the differences observed between spaced and massed learning. Another theory describing learning performance is the context variability theory (Glenberg 1979; Melton 1970), which suggests that performance depends heavily on the contextual overlap between encoding and retrieval. Furthermore, the context is assumed to drift over time, so that performance drops as the difference between the encoding and retrieval contexts increases.
A spaced superiority effect is predicted by this context variability model because, as the spacing between the two presentations of the to-be-encoded item increases, so does the likelihood that the retrieval context matches at least one of the encoding contexts. Cepeda et al. (2006) simulated a version of the context variability theory and were able to reproduce several of the basic findings regarding massed and spaced effects. However, the basic version of the context variability model makes specific predictions that are not directly supported by data. In particular, it predicts that the probability of retrieving two words in a list increases with the spacing between the words in the list. This prediction has not been supported, suggesting a problem with the context variability theory (Bellezza et al. 1975). The consolidation theory (Wickelgren 1972) suggests that memories are first encoded in a fragile state and then, as time passes, change to a relatively more stable state. This process is called "consolidation." The theory proposes that the memory generated at the second, delayed presentation inherits the consolidated state, and is therefore less prone to forgetting. Finally, if the retention interval is too long (e.g., a year) then no initial memory trace will be left, and retention will be poor due to the lack of consolidation of the memories.

BREAKS AND THE BRAIN

The possibility of measuring, or manipulating, states of the brain has added to our understanding of how breaks influence forgetting. Muellbacher et al. (2002) directly investigated consolidation in a skilled motor task by applying either direct or delayed interference to the brain. Subjects learned a motoric task and were tested again after a 15-minute break. In a condition where subjects had rested during this break, performance was well maintained.
However, if the break was filled with repetitive transcranial magnetic stimulation (rTMS; a treatment tool for various neurological conditions) applied to the motor cortex, performance dropped to the level prior to the initial learning. By contrast, if the same rTMS treatment was given after a delay of 6 hours, no loss in performance was found. This experiment clearly shows that the memory moves from a fragile to a stable representation during an interval ranging from 15 minutes to 6 hours. Susceptibility to interference may also be introduced by performing a similar task during the retention interval. Brashers-Krug et al. (1996) showed that learning skills similar to the original task interfered with the original learning; however, this interference only occurred during the first 4 hours following learning, whereas later skill learning did not interfere. These studies indicate that what happens during the retention interval is critical to final learning. Beyond external influences during this interval, the brain may also restructure the representation of memories as time passes, indicating that "forgetting" may be a much more active process than previously thought. For example, studies using functional imaging of the brain have shown that, although performance in skill learning may be unaffected during a 6-hour period, the brain areas supporting the task may change significantly. Following a 6-hour delay in the waking state, activity was higher than it had been during the initial learning in the premotor, parietal, and cerebellar regions (Shadmehr and Brashers-Krug 1997), indicating that memory is reorganized in the brain in ways that may not be directly evident from the behavioral data alone.

SLEEPING BREAKS

Humans spend one-third of their lifetimes sleeping, so any break longer than a day will most likely include sleep. The question of how sleep influences performance therefore becomes very relevant.
A fundamental theory of forgetting is that memories are lost because we learn new material that "overwrites," or interferes with, previously learned memories. One of the first tests of this theory was to compare sleeping with wakefulness; results showed that memories are spared more during sleep than during the waking state (Jenkins and Dallenbach 1924). A perhaps more intriguing finding is that memories are not only preserved but can actually be enhanced during a retention interval. This phenomenon has been found most clearly in motoric tasks following sleep. Walker et al. (2002, 2003) had subjects perform a finger-tapping motoric task at 12-hour intervals that either included sleep or did not. Within the first session, subjects typically increased their performance level by 60%. Performance was largely unaffected by a 12-hour retention interval in the waking state, whereas an increase of 20% in speed and 39% in accuracy was found if the retention interval included sleep. Furthermore, these effects did not depend on the time of day of the first training session. If this session was in the morning, the first 12 hours of wakefulness did not influence performance, whereas the following 12 hours of sleep did. If the first training occurred in the evening, the first 12 hours of sleep improved performance, while the second 12 hours of wakefulness did not. These remarkable effects cannot be directly accounted for by interference from the motoric activities that naturally occur during the wakeful state. In a control condition, subjects wore mittens during the retention interval, which effectively eliminated hand movements and minimized the possibility of interference; however, this manipulation did not improve performance in the waking condition.
A number of additional findings provide further clues to what is happening during sleep. First, improvements in skill performance do not necessarily have to follow a night's sleep: Fischer et al. (2002) found similar benefits in performance from shorter sleeps during the day. Second, the largest improvement in skill learning occurs after the first night, although additional nights do show a small further benefit. Third, doubling the amount of training does not seem to influence the amount of improvement from sleep. Fourth, prior to sleeping, each additional session provides additional learning, whereas following sleep this effect is diminished. Fifth, the amount of training-induced learning does not correlate with the amount of sleep-induced learning, indicating that they tap into different processes (Walker et al. 2003). Given that sleep has been shown to improve performance, one may ask what type of process in sleep provides this effect. Sleep can be divided into several levels, where perhaps the most salient difference is between the more superficial REM sleep and the deeper non-REM sleep. Gais et al. (2000) investigated a visual-perceptual task and found that both types of sleep, as well as their order, are essential for consolidation. The sleep pattern early in the night, which is dominated by non-REM and slow-wave sleep (SWS), is important for the initiation of consolidation, whereas the pattern later in the night, dominated by REM sleep, causes additional consolidation, which only occurs if the earlier sleep patterns are preserved. This indicates that memory consolidation depends on at least two sleep-dependent processes that must occur in the right order. Maquet et al. (2003) investigated sleep-deprived subjects on a procedural visuomotor task and found that they did not show any improvement over time on this task, while the sleeping control group did.
Furthermore, the control group showed increased activation in the superior temporal sulcus at the later test compared with the first test, while the sleep-deprived group did not show this increase. Sleep is not necessary for increasing performance, however. Breaks of 5–15 minutes have been found to increase performance on a visuomotor task, a phenomenon referred to as "reminiscence." This effect is short-lived, lasting for only a few minutes, and performance falls back to baseline as the break period becomes longer (Denny 1951). In comparison, a 24-hour delay including sleep also produced increased performance, which did not fall back to baseline during continuous testing (Holland 1963). This suggests that the short- and long-term enhancements from breaks depend on different processes: the short-term increase in performance may result from the release of inhibition following repetitive testing, whereas the long-term, sleep-dependent effects depend on consolidation.

CONCLUSION

How learning interacts with breaks is a complex but fascinating topic. In this chapter, we have shown that forgetting can be modeled as a power function, and that this function is influenced by the length of the study time, the length of the retention interval, and the amount of interference; learning can also be modified by sleep. These phenomena are important because learning and breaks from tasks occur continuously in everyday life, and a deeper understanding of this field may provide opportunities to plan our lives so that we retain more of the information that we encounter. A combination of methods based on behavioral data, neurophysiological measures, and computational modeling will shed further light on this field. In particular, these findings have important real-life applications in the everyday work environment. All work schedules include breaks.
These breaks may be long (e.g., vacations and changes between different work tasks or even jobs), or of intermediate length, such as weekends or successive work days, which typically involve sleep. Finally, short breaks such as coffee breaks, or interruptions such as telephone calls, may also influence the speed at which we gain or lose skills and memories. An understanding of these phenomena may thus have important real-life implications.

ACKNOWLEDGMENTS

The authors wish to thank the Social Sciences and Humanities Research Council (SSHRC) of Canada (Standard Grant) and the Swedish Research Council for supporting this research. They also wish to thank Dr. Frank Russo from the Department of Psychology at Ryerson University for his valuable comments and suggestions.

REFERENCES

Anderson, J.R., and Schooler, L.J., 1991. Reflections of the environment in memory. Psychological Science 2(6): 396–408.
Baddeley, A.D., and Hitch, G.J., 1974. Working memory. In Recent advances in learning and motivation, ed. G. Bower, Vol. VIII, 47–90. London: Academic Press.
Bellezza, F.S., Winkler, H.B., and Andrasik, F., Jr., 1975. Encoding processes and the spacing effect. Memory and Cognition 3(4): 451–457.
Brashers-Krug, T., Shadmehr, R., and Bizzi, E., 1996. Consolidation in human motor memory. Nature 382(6588): 252–255.
Cepeda, N.J., Pashler, H., Vul, E., Wixted, J.T., and Rohrer, D., 2006. Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychological Bulletin 132(3): 354–380.
Cull, W.L., 1995. How and when should information be restudied? Chicago: Loyola University.
Cull, W.L., 2000. Untangling the benefits of multiple study opportunities and repeated testing for cued recall. Applied Cognitive Psychology 14(3): 215–235.
Denny, L.M., 1951. The shape of the post-rest performance curve for the continuous rotary pursuit task. Motor Skills Research Exchange 3: 103–105.
Donovan, J.J., and Radosevich, D.J., 1999. A meta-analytic review of the distribution of practice effect: Now you see it, now you don't. Journal of Applied Psychology 84(5): 795–805.
Fischer, S., Hallschmid, M., Elsner, A.L., and Born, J., 2002. Sleep forms memory for finger skills. Proceedings of the National Academy of Sciences USA 99(18): 11987–11991.
Gais, S., Plihal, W., Wagner, U., and Born, J., 2000. Early sleep triggers memory for early visual discrimination skills. Nature Neuroscience 3(12): 1335–1339.
Glenberg, A.M., 1979. Component-levels theory of the effects of spacing of repetitions on recall and recognition. Memory and Cognition 7(2): 95–112.
Glenberg, A.M., and Lehmann, T.S., 1980. Spacing repetitions over one week. Memory and Cognition 8(6): 528–538.
Hintzman, D.L., 1974. Theoretical implications of the spacing effect. In Theories in cognitive psychology: The Loyola symposium, ed. R.L. Solso, 77–97. Potomac, MD: Erlbaum.
Holland, H.C., 1963. Massed practice and reactivation inhibition, reminiscence and disinhibition in the spiral after-effect. British Journal of Psychology 54: 261–272.
Hollingworth, H.L., 1913. Advertising and selling: Principles of appeal and response. New York: D. Appleton.
Jaber, M.Y., 2006. Learning and forgetting models and their applications. In Handbook of industrial and systems engineering, ed. A.B. Badiru, 1–27 (Chapter 32). Boca Raton, FL: CRC Press-Taylor and Francis Group.
Janiszewski, C., Noel, H., and Sawyer, A.G., 2003. A meta-analysis of the spacing effect in verbal learning: Implications for research on advertising repetition and consumer memory. Journal of Consumer Research 30(1): 138–149.
Jenkins, J.G., and Dallenbach, K.M., 1924. Obliviscence during sleep and waking. American Journal of Psychology 35(4): 605–612.
Jost, A., 1897. Die Assoziationsfestigkeit in ihrer Abhängigkeit von der Verteilung der Wiederholungen [The strength of associations in their dependence on the distribution of repetitions]. Zeitschrift für Psychologie und Physiologie der Sinnesorgane 16: 436–472.
Kitson, H.D., 1921. The mind of the buyer: A psychology of selling. New York: Macmillan.
Landauer, T.K., and Bjork, R.A., 1978. Optimum rehearsal patterns and name learning. In Practical aspects of memory, eds. M.M. Gruneberg, P.E. Morris, and R.N. Sykes, 625–632. London: Academic Press.
Maquet, P., Schwartz, S., Passingham, R., and Frith, C., 2003. Sleep-related consolidation of a visuomotor skill: Brain mechanisms as assessed by functional magnetic resonance imaging. Journal of Neuroscience 23(4): 1432–1440.
Martin, M., and Jones, G.V., 1979. Modality dependency of loss of recency in free recall. Psychological Research 40(3): 273–289.
Melton, A.W., 1970. The situation with respect to the spacing of repetitions and memory. Journal of Verbal Learning and Verbal Behavior 9(5): 596–606.
Muellbacher, W., Ziemann, U., Wissel, J., Dang, N., Kofler, M., Facchini, S., Boroojerdi, B., Poewe, W., and Hallett, M., 2002. Early consolidation in human primary motor cortex. Nature 415(6872): 640–644.
Pyle, W.H., 1913. Economical learning. Journal of Educational Psychology 4(3): 148–158.
Ribot, T., 1881. Les maladies de la mémoire [Diseases of memory]. Paris: Germer Baillière.
Rubin, D.C., and Wenzel, A.E., 1996. One hundred years of forgetting: A quantitative description of retention. Psychological Review 103(4): 734–760.
Shadmehr, R., and Brashers-Krug, T., 1997. Functional stages in the formation of human long-term motor memory. Journal of Neuroscience 17(1): 409–419.
Sikström, S., 1999. A connectionist model for frequency effects in recall and recognition. In Connectionist models in cognitive neuroscience: The 5th Neural Computation and Psychology Workshop, eds. D. Heinke, G.W. Humphreys, and A. Olson, 112–123. London: Springer-Verlag.
Sikström, S., 2002. Forgetting curves: Implications for connectionist models. Cognitive Psychology 45(1): 95–152.
Sikström, S., and Jaber, M.Y. (in revision). The depletion, power, integration, diffusion model of spaced and massed repetition.
Slamecka, N.J., and McElree, B., 1983. Normal forgetting of verbal lists as a function of their degree of learning. Journal of Experimental Psychology: Learning, Memory, and Cognition 9(3): 384–397.
Squire, L.R., Clark, R.E., and Knowlton, B.J., 2001. Retrograde amnesia. Hippocampus 11(1): 50–55.
Walker, M.P., Brakefield, T., Morgan, A., Hobson, J.A., and Stickgold, R., 2002. Practice with sleep makes perfect: Sleep-dependent motor skill learning. Neuron 35(1): 205–211.
Walker, M.P., Brakefield, T., Seidman, J., Morgan, A., Hobson, J.A., and Stickgold, R., 2003. Sleep and the time course of motor skill learning. Learning and Memory 10(4): 275–284.
Wickelgren, W.A., 1972. Trace resistance and the decay of long-term memory. Journal of Mathematical Psychology 9(4): 418–455.
Wickelgren, W.A., 1974. Single-trace fragility theory of memory dynamics. Memory and Cognition 2(4): 775–780.
Wickelgren, W.A., 1977. Learning and memory. Englewood Cliffs, NJ: Prentice Hall.
Wixted, J.T., and Ebbesen, E.B., 1991. On the form of forgetting. Psychological Science 2(6): 409–415.

10 Learning and Forgetting: Implications for Workforce Flexibility in AMT Environments

Corinne M. Karuppan

CONTENTS
Introduction
Workforce Flexibility
Activation Theory
Learning and Forgetting
Uncontrollable Factors
178 Individual Characteristics.................................................................................. 178 Task Complexity ............................................................................................... 179 Length of Interruption Intervals ........................................................................ 181 Controllable Factors............................................................................................... 181 Training Methods .............................................................................................. 181 Over-Learning via Training and Usage............................................................. 182 Test Contents..................................................................................................... 183 Conclusion ............................................................................................................. 183 References.............................................................................................................. 185 INTRODUCTION For decades, workforce flexibility has been heralded as a strategic weapon to respond to and anticipate changing demand in a competitive marketplace. Despite the overall positive effects of workforce flexibility, there has been growing evidence that its deployment should be measured rather than unbounded. Not only has empirical research shown that “total” flexibility may not be a worthy pursuit, but it has also underscored its limitations when tasks are complex. These limitations have been mostly attributed to efficiency losses when individuals are slow to learn new tasks or when they have to relearn tasks that they had forgotten. In the backdrop of the deep recession, two realities cannot be ignored: (1) the demand for flexible, knowledge workers is more likely to increase rather than subside, and (2) the pressure for high returns on training investments will continue to intensify. 
The dual purpose of this chapter is therefore to understand the nature of performance decrements related to workforce flexibility in environments with complex, advanced manufacturing technologies (AMTs), and to highlight the reliable mechanisms that will expand an individual’s flexibility potential with minimum drawbacks. The first part of the chapter describes the concept of workforce flexibility. The nature of the relationships between flexibility and performance is then explained in the context of activation theory. These relationships subsume learning and forgetting phenomena. A review of the factors affecting these phenomena is presented, and mental models emerge as intervening factors in the relationship between training/learning and forgetting. Theories from various fields of inquiry and empirical validations undergird the formulation of a conceptual framework. Implications for research and practice conclude the chapter.

WORKFORCE FLEXIBILITY

Workforce flexibility is a strategic capability that organizations deploy to address the realities of increased market diversity, shorter product life cycles, and fierce global competition. In a nutshell, flexibility is the ability to adapt quickly to change with minimum penalty (Gerwin 1987; Koste and Malhotra 1999). Based on a thorough review of the literature, Koste and Malhotra (1999) found that the construct was mapped along four elements: range-number (RN), range-heterogeneity (RH), mobility (MOB), and uniformity (UN). In the context of workforce flexibility, RN refers to the number of tasks or operations performed by an individual, the number of technologies that he/she operates, and so on. It is similar to the notion of horizontal loading, job enlargement, or intradepartmental flexibility, according to which individuals master a larger sequence of upstream and/or downstream activities related to their job.
RH involves the degree of differentiation among the tasks, operations, and technologies. It has been construed as interdepartmental flexibility and is often used interchangeably with cross-training. MOB denotes the ease, in terms of the cost and time expended, of moving from one option to another, that is, of learning or relearning tasks. UN is the consistency of performance outcomes (quality, time, cost, etc.) under different options. The first two elements of workforce flexibility deal with worker deployment, whereas the other two capture the notions of efficiency and effectiveness. The benefits of a flexible workforce have been praised extensively in the literature. Employees with a greater variety of skills are better equipped to respond to demand fluctuations and load imbalances on the shop floor. Let us assume there are two departments, A and B. If the workload decreases in department A and increases in department B, workers can be pulled from department A and re-assigned to department B (RH flexibility), while the remaining workers in department A can perform a wider range of tasks (RN flexibility) previously performed by others. Empirical evidence certainly supports the advantages of a flexible workforce. A diverse skill set enables the production of a more varied mix (e.g., Parthasarthy and Sethi 1992; Upton 1997) and is also credited with reduced inventories, lower lead times, and better due-date performance (e.g., Allen 1963; Fryer 1976; Hogg et al. 1977; Felan et al. 1993).
Despite an overwhelmingly favorable picture of flexibility, Treleven (1989) questioned the early practice of simulating labor flexibility RH (interdepartmental) with no loss of efficiency, or with a temporary drop followed by a return to 100% efficiency, essentially implying that the results might be “too good to be true.” This concern was confirmed by the results of a field study in a printed circuit-board plant (Karuppan and Ganster 2004), where greater levels of labor flexibility RN increased the probability of producing a varied mix, whereas labor flexibility RH had the opposite effect. Further curbing the enthusiasm for large increases in both types of labor flexibility deployment are the diminishing returns observed for complex jobs. In a simulation of a dual resource constrained (DRC) job shop, Malhotra et al. (1993) demonstrated that high levels of workforce flexibility (captured as interdepartmental flexibility, or flexibility RH) could worsen shop performance when learning was slower and initial processing times were higher. In their simulation modeling of an assembly line, McCreery and Krajewski (1999) operationalized worker flexibility as the number of different adjacent tasks that each worker was trained to perform (flexibility RN). Task complexity was characterized by slower learning; more rapid forgetting; a higher proportion of learning possible, expressed in multiples of standard time (time to produce the first unit divided by the standard time); and product as opposed to process learning. Their results indicated that greater flexibility led to mixed performance in an environment characterized by high variety and high complexity, as increases in throughput came at the expense of worker utilization. Extending the research to work teams, McCreery et al. (2004) concluded that high-complexity environments called for restrictive flexibility deployment approaches.
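The log-linear learning curve that commonly underlies such DRC and assembly-line simulations can be sketched as follows; the parameter values are illustrative assumptions, not figures from the studies cited above.

```python
# Log-linear (Wright) learning curve: T_n = T_1 * n^(-b), where T_1 is the
# time for the first unit and b is the learning exponent. The values of t1
# and b below are illustrative assumptions, not figures from the chapter.

def time_for_unit(n, t1=10.0, b=0.322):
    """Time required to produce the n-th unit."""
    return t1 * n ** (-b)

def learning_rate(b):
    """Fraction of unit time remaining after each doubling of output: 2^(-b)."""
    return 2 ** (-b)

# b = 0.322 corresponds to a roughly 80% learning curve: every doubling of
# cumulative output cuts unit time to about 80% of its previous value.
assert abs(learning_rate(0.322) - 0.80) < 0.001
assert abs(time_for_unit(2) / time_for_unit(1) - 0.80) < 0.001
```

Under this convention, the “slower learning” that characterizes complex tasks in these simulations corresponds to a learning rate closer to 100%, that is, a smaller exponent b.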
The above studies suggest that high levels of both labor flexibility RN and RH are undesirable when jobs are complex. However, they offered little or no theoretical rationale for their findings. Conceptually, very few researchers have considered the possibility that high levels of “good” job characteristics, such as flexibility or task variety, might have adverse rather than beneficial effects. Traditionally, these good job characteristics have been those identified in Hackman and Oldham’s (1975, 1976) job characteristics model (JCM): task variety, task identity, skill utilization, feedback, and autonomy. The JCM postulated a strictly positive, linear relationship between these job characteristics and psychological responses to the job. As an extension of their work, Champoux (1978) suggested that very high levels of such “good” job characteristics might cause a dysfunctional, overstimulation effect. His rationale was based on activation theory.

ACTIVATION THEORY

Lindsley’s (1951) neurophysiological approach to activation theory, subsequently refined by others, is based on the notion of activation level; namely, the degree of neural activity in the reticular activation system. As Malmo (1959, 368) explained, “[…] the greater the cortical bombardment, the higher the activation. Furthermore, the relation between activation and behavioral efficiency is described by an inverted U-curve.” From low activation to an optimal level, performance increases monotonically. Beyond the optimal point, further increases in activation result in performance decrements. In other words, low activation levels should produce low levels of performance; moderate activation levels should produce optimal performance; and high activation levels should result in low performance.
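Malmo’s inverted U can be captured by a toy quadratic model; the functional form and the numbers below are illustrative assumptions, not part of activation theory itself.

```python
# Toy inverted-U relation between activation and performance: performance
# peaks at a moderate activation level and declines on either side. The
# quadratic form and the parameter values are illustrative assumptions only.

def performance(activation, optimum=0.5, peak=1.0):
    """Maximal at `optimum`; falls off under both under- and over-stimulation."""
    return peak - (activation - optimum) ** 2

under = performance(0.1)   # understimulation
best = performance(0.5)    # moderate, optimal activation
over = performance(0.9)    # overstimulation

# Moderate activation outperforms both extremes.
assert under < best and over < best
```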
Malmo recommended that the term “moderate” be construed in relative rather than absolute terms: low < moderate < high, with moderate/optimal levels varying across tasks and individuals. The optimal level of activation for an individual allows the cerebral cortex to operate most efficiently, resulting in improved behavioral (e.g., response time) or cerebral performance. Stimulation from internal or external sources may cause deviations from this optimal level and therefore diminish performance (Gardner and Cummings 1988). When applying this theory to job design, both low and high levels of task variety/labor flexibility would be harmful: the former due to understimulation, the latter due to overstimulation (French and Caplan 1973; Martin and Wall 1989; Schaubroeck and Ganster 1993; Xie and Johns 1995). At low levels of variety, a lack of alertness or activation hampers performance, and variety increases are initially matched by increases in performance. A leveling-off effect then occurs as variety increases no longer yield proportionate increases in job outcomes. Eventually, overstimulation occurs, and further increases in variety become counterproductive (Singh 1998) as a result of an impaired information-processing capability (Easterbrook 1959; Humphreys and Revelle 1984) and perceptions of lack of focus and work/information overload, which are well-known stressors (Kahn and Byosiere 1992). These arguments corroborate a critical review of the literature indicating that “the case for job enlargement has been drastically overstated and over generalized” (Hulin and Blood 1968, 50). Several empirical findings support the curvilinear relationship between good job characteristics and: (a) psychological responses (e.g., Champoux 1980, 1992); (b) stress (e.g., French et al. 1982; Xie and Johns 1995); and (c) task performance (Gardner 1986). Karuppan (2008) tested this relationship for labor flexibility RN and RH.
She found that labor flexibility RN and RH exhibited a significant quadratic relationship with the worker’s production of a varied mix, that is, mix flexibility. Plots of curve estimations supported the inverted-U pattern. The leveling effect occurred more rapidly for the production of a more highly varied mix (mix flexibility RH), for which more extensive product learning was required. Increasing flexibility essentially amounts to increases in learning within and across knowledge domains. When learning new tasks, there is competition for cognitive and physiological resources (Sünram-Lea et al. 2002). Divided attention at this encoding stage leads to reductions in memory performance (e.g., Craik et al. 1996). The number of tasks learned concurrently further taxes resources by reducing the amount of time available for encoding (Naveh-Benjamin 2001). Supporting this argument in industrial settings, Nembhard (2000) and Nembhard and Uzumeri (2000) found that rapid learning was associated with rapid forgetting. Jaber and Kher (2002) studied this phenomenon further. They showed that the forgetting slope increased with the learning exponent b in the range 0 < b ≤ 0.5, meaning that as the learning rate goes from 99% to about 70%, forgetting becomes faster. Complexity raises activation levels beyond those induced by flexibility (Gardner and Cummings 1988). This combination strains information-processing capacity, hampering both encoding and retrieval. It also leads to heightened perceptions of quantitative (too much work), qualitative (too complex), and role (too many responsibilities) overload, which are detrimental to workers’ psychological and physiological well-being (e.g., Bolino and Turnley 2005; Schultz et al. 1998). As a result, the range of optimal performance is much broader for simple than for complex jobs (Gardner 1986). Energy should therefore be expended on training workers effectively and efficiently.
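As an aside on Jaber and Kher’s numbers: in the log-linear learning-curve convention, the exponent b and the percentage learning rate are tied by rate = 2^(−b), so the 99% and roughly 70% figures for the range 0 < b ≤ 0.5 can be recovered directly. A minimal sketch:

```python
import math

# Conversion between the learning exponent b and the "learning rate", i.e.,
# the fraction of unit time remaining after each doubling of output.

def rate_from_exponent(b):
    return 2 ** (-b)

def exponent_from_rate(rate):
    return -math.log2(rate)

# b = 0.5 gives a ~70.7% curve; a 99% curve corresponds to b near 0.
assert abs(rate_from_exponent(0.5) - 0.707) < 0.001
assert abs(exponent_from_rate(0.99) - 0.0145) < 0.0005
```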
As shown in the next section, individuals who build a coherent knowledge base are better able to integrate new, related knowledge and are less susceptible to forgetting.

LEARNING AND FORGETTING

In cognitive science, it has been widely acknowledged that human beings form mental representations of the world, of themselves, of their capabilities, and of the tasks that they need to perform (Johnson-Laird 1983; Norman 1983). Their interactions with the environment, with others, and with the artifacts of technology shape such mental conceptualizations or models. Essentially, mental models are knowledge representations that enable individuals to describe the purpose of a system, explain its functioning, and predict its future state (Rouse and Morris 1986). The formation of these cognitive structures (or knowledge-acquisition processes) has been described by several theories, one of which is assimilation theory. Assimilation theory postulates that a learner assimilates new knowledge by connecting it to knowledge that already exists in the learner’s memory (Bransford 1979). This assimilation process involves three steps. In the first step, the reception step, the learner must pay attention to the new information so that it can reach short-term memory. In the second step, the availability step, prerequisite concepts must be stored in long-term memory for the new information to be assimilated. In the third step, the activation step, the learner must actively use the prerequisite knowledge to enable the connections between new and existing knowledge (Ausubel 1968; Mayer 1981). Assimilation theory has been deemed appropriate for describing the process of learning new information systems (IS) (Mayer 1981; Santhanam and Sein 1994; Davis and Bostrom 1993). Much of the IS literature involving the development of mental models applies to new manufacturing technologies as well.
With the increased automation of manual tasks, the workforce has shifted toward cognitive tasks involving interactions with AMTs (Nembhard and Uzumeri 2000; Arzi and Shtub 1997). AMTs combine production and information-processing capabilities and call for the use of cognitive rather than physical skills. The narrowing skill divide between office and shop-floor jobs thus warrants extending IS research findings to production. When learning new information systems, trainees may develop models based on prior experience with similar technologies (Young 1983). In this case, they learn by analogy. With new information technologies, however, it is not rare for trainees to have little or no a priori knowledge. This is why the use of external aids or “advance organizers” (Ausubel 1968) is necessary to provide a framework for the construction of new knowledge structures. Conceptual models of new systems refer to the external aids that are used in training (Mayer 1983) and help users form internal mental models of the system in question (Santhanam and Sein 1994). The quality of the external aids used in training affects the quality of mental model development which, in turn, affects performance (Sauer et al. 2000). In the IS field, accurate mental models have indeed been associated with both efficiency (Staggers and Norcio 1993) and effectiveness (Sein and Bostrom 1989; Karuppan and Karuppan 2008). Accurate mental models are also more resilient to the passage of time, as better comprehension greatly influences one’s ability to recall information (Thorndyke 1977). Using the suitcase metaphor, Towse et al. (2005) explained that the amount of learning that can be “packed” or stored in memory depends on packing efficiency and on the arrangement of contents. In other words, greater amounts of knowledge can be stored in one’s memory if it is well organized.
Therefore, when flexible workers are assigned to new jobs or tasks, a hiatus between successive assignments will cause workers to focus on other cognitive demands, and performance will suffer (e.g., Peterson and Peterson 1959) unless they have developed robust mental models of their tasks/jobs. The development of mental models and the preservation of their integrity over time both depend on two categories of factors: (i) those that cannot be controlled by the trainer (individual characteristics, task complexity, and length of interruption intervals), and (ii) the methodological factors that can be manipulated during training (amount and type of training, and test contents).

UNCONTROLLABLE FACTORS

Individual Characteristics

There are multiple indications in various learning theories that individual characteristics shape the learning process. Skinner (1969) underscored the impact of a learner’s past history on behavior in the theory of operant conditioning. Traits such as prior knowledge, skills, and previous experience affect the learning process (Cronbach and Snow 1977). Social learning theory (Bandura 1977, 1986, 1997) further buttresses these arguments by claiming that different individuals need different conditions to model others’ behaviors and, by extension, to learn. Demographic/situational variables, personality variables, and cognitive/learning style shape the learning process and its effectiveness (Zmud 1979). Demographic/situational variables include, but are not limited to, age, gender, education, intellectual abilities, prior experience, personality, and learning styles. In this chapter, the focus is on the last four characteristics.
Higher-ability individuals have consistently been found to process information and make decisions faster (Taylor and Dunnette 1974), develop more coherent mental models (Hunt and Lansman 1975), and therefore retain more knowledge and skills (especially abstract and theoretical) over periods of non-use than do lower-ability individuals (Arthur et al. 1998; Farr 1987). Individuals with higher working-memory capacity also perform better because they retrieve information more efficiently in the presence of interference (e.g., Bunting et al. 2004; Cantor and Engle 1993). Prior experience and task knowledge facilitate the integration of new knowledge. They are related to more gradual learning, but also to slower forgetting, leading Nembhard and Uzumeri (2000) to conclude that individuals with prior experience are good candidates for cross-training. Personality traits are fully covered by the “big five” factor system (De Raad and Schouwenburg 1996). The five traits are extraversion, agreeableness, conscientiousness, emotional stability, and openness to experience (also known as intellect/creativity/imagination) (De Raad and Van Heck 1994). Conscientiousness captures the traits of a good learner: well organized, efficient, systematic, and practical (Goldberg 1992). It is therefore not surprising that conscientiousness is a significant predictor of school (Schuerger and Kuna 1987), college (Wolfe and Johnson 1995), and job (Barrick and Mount 1993) performance. Although extraversion has been negatively related to undergraduate and high school grade point average (Goff and Ackerman 1992), it is positively related to job performance (e.g., Barrick and Mount 1991). Emotional stability and intellect are also prominent traits associated with effective learning (De Raad and Schouwenburg 1996). Moreover, the last trait, intellect, reflects the close alliance between intelligence and personality (Ackerman 1988).
Personality indeed influences information processing or cognitive strategies. This connection has been discussed in terms of cognitive styles, thinking styles, learning styles, and intellectual engagement (De Raad and Schouwenburg 1996). Learning styles refer to an individual’s preferred way of learning (Anis et al. 2004). This choice is determined by the individual’s goals and objectives. Abstract learners (Sein and Bostrom 1989; Bostrom et al. 1990) and active experimenters (Bostrom et al. 1990; Frese et al. 1988; Karuppan and Karuppan 2008) have been found to develop more accurate mental models of new technologies. The rationale is that abstract thinkers are more predisposed to decode and understand basic system configurations. Similarly, actual use of and experimentation with a system or technology is a requirement for effective learning (Brown and Newman 1985; Carroll and Mack 1984). Further influencing learning and forgetting phenomena is the type of task being performed (Farr 1987).

Task Complexity

A great deal of literature has dealt with the categorization of tasks into various types. A popular dichotomy is simple versus complex. The complexity of a task affects the learning rate. Bishu and Chen (1989) found that simple tasks are learned more quickly than difficult tasks. They also found that performance improvements were greater when tasks were learned from difficult to simple rather than the other way around. However, such performance improvements seem to occur only in the short run (Bohlen and Barany 1976). There is strong evidence to suggest that complex knowledge tends to be quickly forgotten (Hall et al. 1983; McCreery and Krajewski 1999; McCreery et al. 2004; Nembhard and Osothsilp 2002; Shields et al. 1979). Nembhard and Uzumeri’s (2000) findings also indicated that complexity (measured in their study in terms of sewing assembly times) affected learning and forgetting, but that prior experience was a moderator.
Experienced workers learned (in essence, they “relearned”) complex tasks faster than they did simple tasks; they also forgot these complex tasks more rapidly than inexperienced workers did, a finding the authors recommended be studied further. Arzi and Shtub (1997) discovered that the forgetting effect was stronger for tasks requiring a higher degree of decision making, but that individual capabilities and motivation played an important role in this process. These results suggest that personal and task characteristics interact to predict learning and forgetting. These considerations are especially relevant in AMT environments. The sophisticated technologies implemented in many of today’s production environments have contributed to increasing job complexity in multiple ways. Not only have they placed greater demands on attention and concentration (Aronsson 1989), but they have also increased cognitive requirements. Information technologies are believed to promote innovation and problem solving through learning, creating a “fusion” of work and learning (Torzadeh and Doll 1999). When complex technologies fail on the shop floor, problem referrals to supervisors are no longer considered viable alternatives because the delays they engender threaten the fluidity of the production system (Kathuria and Partovi 1999; Cagliano and Spina 2000; Karuppan and Kepes 2006). Consequently, operators must use cognitive skills under time pressure to restore the system to normal operation and prevent further problem occurrences (Pal and Saleh 1993). As mentioned earlier, high levels of complexity and the flexibility demands placed on the workforce contribute to excessive stimulation. Although job simplification through redesign is an attractive solution to overstimulation, it is a partial and temporary one, as work demands always shift. Process redesigns are aimed at boosting productivity.
The elimination of steps or tasks in a process frees up resources, which are then re-allocated to new tasks and jobs. In parallel, the constant push for product and process innovation will continue to intensify learning requirements, casting overstimulation as a permanent threat. Unfortunately, task complexity has been operationalized very loosely in the literature, making comparisons across studies difficult. Study-specific examples include: slower learning rate, total proportion of learning possible, speed of forgetting, and product learning (e.g., McCreery and Krajewski 1999; McCreery et al. 2004), which are actually the outcomes of complexity; method, machine, and material attributes (Nembhard 2000); and fault states (Sauer et al. 2000). Wood (1986) pointed out the failure of the empirical approach to produce definitions of tasks with sufficient construct validity. Through a review of the various frameworks used to define task complexity, he proposed a theoretical approach that describes task complexity along task inputs (acts and information cues) and outputs (products). Products are the final results of the task (Naylor et al. 1980). Examples include a part, a report, or a treated patient. Acts are the patterns of behaviors or activities required for the creation of the product. Information cues are the pieces of information that the individual uses to make judgments while performing the task. Based on these elements, Wood (1986) proposed three types of task complexity, two of which, component and dynamic, are especially relevant to the study of workforce flexibility. Component complexity is determined by the number of distinct acts required to complete the task and the number of distinct information cues that must be processed. Component redundancy (i.e., the degree of overlap among demand requirements for the task) reduces component complexity, whereas the number of subtasks required to do a particular task increases it.
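Component redundancy can be sketched with simple sets of acts. The task contents below are illustrative, and the Jaccard-style index is an assumed similarity measure for the sketch, not Wood’s or Jaber et al.’s exact formulation.

```python
# Two tasks described by their distinct acts. Acts shared between the tasks
# are redundant components: they reduce the effective complexity of learning
# the second task. Task contents are illustrative assumptions.

task1 = {"A", "B", "C"}
task2 = {"A", "C", "H", "I", "J"}

shared = task1 & task2            # redundant acts common to both tasks
to_learn = task2 - task1          # acts genuinely new in task 2
similarity = len(shared) / len(task1 | task2)   # Jaccard-style similarity

assert shared == {"A", "C"}
assert to_learn == {"H", "I", "J"}
assert abs(similarity - 1 / 3) < 1e-9
```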
Component redundancy is not a measure of task complexity, but it influences it. It evokes the notion of task similarity. The similarity between old and new tasks eases the learning process as well as the development (and even reinforcement) of the mental model. By the same token, it counters forgetting. Jaber et al.’s (2003) operationalization of task similarity exemplifies the relationship between task complexity and similarity. If task 1 comprises acts A, B, and C, and task 2 includes A, C, H, I, and J, the complexity (in terms of the number of acts) of either task will be reduced by the redundancy of acts A and C in both tasks. Dynamic complexity is the result of external changes affecting the relationships between task inputs and outputs. Non-stationary acts and information cues (e.g., as the result of the process changes required for the task) place new demands on the worker’s skills and knowledge. The above considerations have important implications for flexibility deployments, since they hint at the degree of additional stimulation that complexity may produce.

Length of Interruption Intervals

Once workers are trained for a task/job, the length of the interruption interval between tasks/jobs is one of the most important contributors to forgetting (Anderson et al. 1999; Badiru 1995; Bailey 1989; Jaber and Bonney 1996; Jaber and Kher 2004). The period of skill non-use has, without fail, been negatively related to performance in the literature. Skill decay is commensurate with the length of these non-practice intervals. However, the effect of the interruption interval on skill decay is moderated by a host of factors, some of which are mentioned above. For example, in their meta-analysis of factors influencing skill decay and retention, Arthur et al. (1998) found that the effect of interruption intervals was exacerbated when the tasks involved were more demanding.
Adding to the difficulty of gauging the impact of this variable on skill decay is the fact that the interruption interval may not be clear-cut. For example, workers may be assigned to new tasks that share some common components with old tasks. This component redundancy not only facilitates the learning of new tasks, but also preserves some knowledge of the old ones through continued practice. Finally, some individuals may mentally practice some tasks even when they are not performing them. Consequently, there are claims that cognitive tasks should be more resilient to the passage of time than physical tasks because they naturally lend themselves to mental practice (e.g., Ryan and Simon 1981). Weakening these claims are the facts that the opposite has been observed in the absence of practice (Arthur et al. 1998) and that the mental practice of unneeded tasks is hardly realistic in actual work settings (Farr 1987). In summary, interactions with other factors complicate the relationship between the length of the interruption interval and retention. However, there seems to be a consensus that the duration of the interruption and task complexity exacerbate forgetting. The learning and forgetting research relating to uncontrollable factors is certainly intellectually appealing, but its practical value may be limited. The many interactions allow for a multitude of task-individual-interval combinations, which are daunting for trainers and production managers alike. Interventions known to influence the accuracy of mental models (method of training, over-learning, and test contents) are more easily manageable.

CONTROLLABLE FACTORS

Training Methods

The following discussion does not address training delivery methods (e.g., online versus face-to-face) but rather focuses on the development of mental models via conceptual models or procedural training.
Such methods are most suitable for learning how to interact with information technologies and systems (e.g., Santhanam and Sein 1994). Conceptual models refer to the external aids given to new users of a system. They can be analogical or abstract. Analogical models provide analogies between the system on which the worker is being trained and a system with which the worker is familiar. For example, computer numerical control (CNC) compensation may be explained in terms of a marksman compensating for the distance to the target before firing a shot. Analogies are powerful tools to help trainees connect new knowledge to existing knowledge (Lakoff and Johnson 1980). They are rather concrete compared to abstract conceptual models, which use schemas, structure diagrams, hierarchical charts, or mathematical expressions to represent the new system being learned (Gentner 1983). The contribution of either type of conceptual model to the development of an accurate mental model depends on personal characteristics, such as cognitive ability and learning style. It seems that abstract conceptual models benefit abstract and active learners, whereas analogical models are better suited for concrete learners (Bostrom et al. 1990; Sein and Bostrom 1989). The effectiveness of either model may also be subject to the degree of component redundancy between old and new assignments, which would naturally lend itself to meaningful analogies. As its name suggests, procedural training involves the instruction of step-by-step procedures or predetermined sequences that guide the user’s interaction with a system. No relationship between the system and its components is provided; the user mentally establishes the linkages between the components and confirms or modifies them as learning progresses. This is consistent with knowledge assembly theory (Hayes-Roth 1977). The concept is similar to using the step-by-step instructions of a GPS system to drive to various destinations.
If the driver routinely uses the GPS system, he/she will eventually develop a mental model of the city. On the other hand, a map would provide a conceptual model. Some have argued that procedural training is more appropriate than a conceptual model for the instruction of simple systems (Kieras 1988). In their experiment involving an e-mail system, Santhanam and Sein (1994) did not find any difference between the training methods, but they noted that users who had formed a conceptual mental model of the system outperformed those who had developed a more procedural representation of the system.

Over-Learning via Training and Usage

Additional training beyond that required for initial mastery is referred to as over-learning (e.g., Farr 1987). Consistent with early claims that the longer a person studies, the longer he/she will be able to retain knowledge (Ebbinghaus 1885), over-learning is considered to be the dominant factor of knowledge retention. According to Hurlock and Montague (1982, 5), "[…] any variable that leads to high initial levels of learning, such as high ability or frequent practice, will facilitate skill retention." Frequent practice connotes over-learning. With subsequent task performance, initial assumptions about the system can be rectified or solidified, leading to the enrichment of the mental model (de Kleer and Brown 1983; Norman 1983). Over-learning also increases the automaticity of responses and self-confidence, which reduce overstimulation and subsequent stress (Arthur et al. 1998; Martens 1974). Since stress elicits the secretion of cortisol, which compromises memory and cognitive functions such as knowledge retrieval (Bremner and Narayan 1998), over-learning shields memory from decay. The implementation of over-learning seems to call for longer training sessions. This is especially true for complex tasks for which learning is slower.
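The claim that over-learning protects against decay can be illustrated with a simple exponential forgetting curve in the spirit of Ebbinghaus (1885). This is an illustrative sketch only: the retention function and every parameter value below are assumptions for exposition, not a model or data from this chapter.

```python
import math

def retention(t, initial_mastery, decay_rate):
    """Fraction of initial mastery retained after t weeks of interruption,
    using a simple exponential forgetting curve (illustrative only)."""
    return initial_mastery * math.exp(-decay_rate * t)

# Assumed parameters: over-learning raises initial mastery and, per the
# chapter's argument, also slows decay (smaller decay_rate).
just_mastered = retention(t=8, initial_mastery=1.0, decay_rate=0.15)
over_learned = retention(t=8, initial_mastery=1.2, decay_rate=0.08)

print(f"after 8 weeks, just-mastered skill: {just_mastered:.2f}")
print(f"after 8 weeks, over-learned skill:  {over_learned:.2f}")
```

Under these assumed numbers, the over-learned skill retains roughly twice as much mastery after the same interruption, which is the qualitative point of the passage above.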
In light of difficult economic times, many organizations will be tempted to rush training even though it will probably result in performance decrements. Malhotra et al. (1993) found that workers who were not fully trained on a particular task were less productive. They advocated speeding up the learning process. However, since rapid learning is associated with rapid forgetting, their recommendations may be risky. Indeed, one should ensure that trainees have developed adequate mental models prior to the actual re-assignment of work duties. This can be accomplished through an assessment of mental model quality via testing.

Test Contents

Performance is the only measure of mental model quality. However, performance on a simple task is not comparable to performance on complex tasks. Reminders of this argument are the consistent reports of overstimulation effects for complex tasks. As a result, the adequacy of a mental model is best judged by performance in creative, far-transfer tasks that require the completion of complex command sequences, which are not presented in training manuals (Mayer 1981; Sein and Bostrom 1989). These tasks differ from simple, near-transfer tasks to which trainees are usually exposed during training and whose performance demonstrates the ability to repeat commands, rather than the acquisition of a robust mental model (Weimin 2001). Far-transfer tasks force users to adapt the knowledge learned in training to novel situations. High performance on these tasks thus demonstrates better comprehension and an individual's likelihood to integrate new knowledge in a logical fashion. Therefore, gauging the necessary amount of training or over-learning can be easily accomplished by setting benchmarks on performance tests of far-transfer tasks.

CONCLUSION

To summarize, the learning of new jobs or tasks occurs via training and/or usage.
The degree of mastery acquired through training and usage is reflected in the accuracy of the trainee's mental model of the job/task. Individual and task/job characteristics influence this process, thereby moderating the relationship between training/usage and the accuracy of the mental model. Accurate mental models lead to better performance on the job, operationalized here as workforce flexibility (RN, RH) with minimal losses of efficiency and effectiveness (MOB, UN). Accurate mental models also provide insulation against memory seepage following the assignment of new tasks. In other words, a solid mental model of a particular set of tasks would influence the quality of the mental model after interruption. Further affecting retention and therefore the quality of the mental model after interruption are the length of the interruption interval, individual characteristics, and task complexity. As Arthur et al. (1998) demonstrated, individual characteristics and task complexity also moderate the relationship between the length of the interruption interval and retention. All the linkages between variables are depicted in the framework in Figure 10.1. Although not exhaustive, it extends the learning process model proposed by Santhanam and Sein (1994) and captures the elements most prevalent in the learning and forgetting literature.

Much of the empirical research supporting the theory leading to the development of this framework relies on controlled laboratory and simulated experiments, which enable a "clean" manipulation of variables and are well suited to explore causality. This high degree of "purity" in the research methods has generated numerous insights. Nevertheless, the lack of generalizability of experiments to "noisier" environments is a weakness that needs to be addressed.
Paradoxically, the popularity of the research stream in multiple disciplines has contributed to its richness, but it has also led to fragmentation. In the absence of a unified approach, the operationalization of variables (e.g., task characteristics) has been study-specific and even questionable in terms of construct validity. The above considerations suggest that validations of the vast experimental research in field settings with diverse individuals, truly complex systems, and well-defined constructs constitute the next logical step. There have been some isolated attempts to do so (e.g., Karuppan and Karuppan 2008; Nembhard 2000; Nembhard and Osothsilp 2002); they just need to be more widespread and more interdisciplinary.

The primary difficulty emerging in field research is the interference of a host of individual, task, and site characteristics with the relationships that are under investigation. A serious problem in research, it is even more acute in practice. Besides the controversy that a distinction between low- and high-ability individuals would create, the intricacy of the relationships among various characteristics poses challenges for the training decision makers.

FIGURE 10.1 Proposed integrative framework of training for flexibility. (The figure links individual characteristics: prior experience, ability, personality, learning style; task complexity: component complexity, dynamic complexity; training and usage: amount (over-learning), type (conceptual vs. analogical); and the length of the interruption interval to mental models 1 and 2, and in turn to flexibility: RN, RH, MOB, UN.)

Regarding the methods of instruction, a great deal of research has identified the interactions between the methods of instruction and the individual and task characteristics, and has also suggested that training be customized accordingly. Unfortunately, and at least for the time being, the pressures for cost control and resource utilization in real-world settings limit the applicability of this research. Loo (2004) recognized the burden of instruction customization and instead advocated the use of a variety of instructional methods to appeal to a variety of individuals. Consequently, both conceptual and procedural models of AMT jobs/tasks should be made available to learners.

Construing task complexity in terms of the "amount of learning possible" (e.g., McCreery et al. 2004) raises an interesting question. What does "fully trained" really mean? Does getting a passing grade on a test constitute complete training? One could easily argue that one learns something on the job every single day. In such cases, over-learning is not confined to the actual training session; it extends to daily practice and becomes confounded with experience. This observation fits with existing evidence regarding flexibility deployments. The rules suggest that flexibility deployments should favor workers with more extensive experience. Experience gives workers the time to build and refine a robust mental model, which is more resilient to decay. In order to account for task complexity in a pragmatic fashion, the extent of flexibility RN and RH should also be determined by the degree of component redundancy among tasks. When component redundancy is limited, so should deployments be. Component redundancy is also confounded with experience since it assumes that workers have already learned and practiced elements of the new assignments.
In the presence of dynamic complexity, extra time for relearning should be granted in order to avoid significant performance decrements following the return to a job that has been altered by external events. These rules are simple. Yet personal communication with industrial partners suggests that workforce flexibility is not measured and is therefore not examined as rigorously as cost and quality are. The old adage of "more is better" seems to prevail until major performance decrements and employee resistance reach unacceptable levels. Clearly, academic research in the field has not filtered through to other professional communication vehicles. They may not be as prestigious as academic outlets, but professional publications have the advantage of increasing the visibility of a topic and ultimately the opportunities for external research funding.

REFERENCES

Ackerman, P.L., 1988. Determinants of individual differences during skill acquisition: Cognitive abilities and information processing. Journal of Experimental Psychology: General 117(3): 288–318.
Allen, M., 1963. Efficient utilization of labor under conditions of fluctuating demand. In Industrial scheduling, eds. J. Muth and G. Thompson, 252–276. Englewood Cliffs, NJ: Prentice-Hall.
Anderson, J.R., Fincham, J.M., and Douglass, S., 1999. Practice and retention: A unifying analysis. Journal of Experimental Psychology: Learning, Memory, and Cognition 25(5): 1120–1136.
Anis, M., Armstrong, S.J., and Zhu, Z., 2004. The influence of learning styles on knowledge acquisition in public sector management. Educational Psychology 24(4): 549–571.
Aronsson, G., 1989. Changed qualifications in computer-mediated work. Applied Psychology: An International Review 38(1): 57–71.
Arthur, W., Jr., Bennett, W., Jr., Stanush, P.L., and McNelly, T.L., 1998. Factors that influence skill decay and retention: A quantitative review and analysis. Human Performance 11(1): 57–101.
Arzi, Y., and Shtub, A., 1997.
Learning and forgetting in mental and mechanical tasks: A comparative study. IIE Transactions 29(9): 759–768.
Ausubel, D.P., 1968. Educational psychology: A cognitive view. New York: Holt, Rinehart, and Winston.
Badiru, A.B., 1995. Multivariate analysis of the effect of learning and forgetting on product quality. International Journal of Production Research 33(3): 777–794.
Bailey, C.D., 1989. Forgetting and the learning curve: A laboratory study. Management Science 35(3): 340–352.
Bandura, A., 1977. Social learning theory. Englewood Cliffs, NJ: Prentice-Hall.
Bandura, A., 1986. Social foundations of thought and action: A social cognitive theory. Upper Saddle River, NJ: Prentice-Hall.
Bandura, A., 1997. Self-efficacy: The exercise of control. New York, NY: W.H. Freeman and Company.
Barrick, M.R., and Mount, M.K., 1991. The big five personality dimensions and job performance: A meta-analysis. Personnel Psychology 44(1): 1–26.
Barrick, M.R., and Mount, M.K., 1993. Autonomy as a moderator of the relationships between the big five personality dimensions and job performance. Journal of Applied Psychology 78(1): 111–118.
Bishu, R.R., and Chen, Y., 1989. Learning and transfer effects in simulated industrial information processing tasks. International Journal of Industrial Ergonomics 4(3): 237–243.
Bohlen, G.A., and Barany, J.W., 1976. A learning curve prediction model for operators performing industrial bench assembly operations. International Journal of Production Research 14(2): 295–302.
Bolino, M.C., and Turnley, W.H., 2005. The personal costs of citizenship behavior: The relationship between individual initiative and role overload, job stress, and work-family conflict. Journal of Applied Psychology 90(4): 740–748.
Bostrom, R.P., Olfman, L., and Sein, M.K., 1990. The importance of learning style in end-user training. MIS Quarterly 14(1): 101–119.
Bransford, J.D., 1979. Human cognition. Monterey: Wadsworth.
Bremner, J.D., and Narayan, M., 1998.
The effects of stress on memory and the hippocampus throughout the life cycle: Implications for childhood development and aging. Development and Psychopathology 10: 871–885.
Brown, J.S., and Newman, S.E., 1985. Issues in cognitive and social ergonomics: From our house to Bauhaus. Human-Computer Interaction 1(4): 359–391.
Bunting, M.F., Conway, A.R.A., and Heitz, R.P., 2004. Individual differences in the fan effect and working memory capacity. Journal of Memory and Language 51(4): 604–622.
Cagliano, R., and Spina, G., 2000. Advanced manufacturing technologies and strategically flexible production. Journal of Operations Management 18(2): 169–190.
Cantor, J., and Engle, R.W., 1993. Working memory capacity as long-term memory activation: An individual differences approach. Journal of Experimental Psychology: Learning, Memory, and Cognition 19(5): 1101–1114.
Carroll, J.M., and Mack, R.L., 1984. Learning to use a word processor: By doing, by thinking and by knowing. In Human factors in computing systems, eds. J.C. Thomas and M.L. Schneider, 13–51. Norwood: Ablex Publishing Company.
Champoux, J.E., 1978. A preliminary examination of some complex job scope-growth need strength interactions. Paper read at Academy of Management Proceedings.
Champoux, J.E., 1980. A three sample test of some extensions to the job characteristics model of work motivation. Academy of Management Journal 23(3): 466–478.
Champoux, J.E., 1992. A multivariate analysis of curvilinear relationships among job scope, work context satisfaction, and affective outcomes. Human Relations 45(1): 87–111.
Craik, F.I.M., Govoni, R., Naveh-Benjamin, M., and Anderson, N.D., 1996. The effects of divided attention on encoding and retrieval processes in human memory. Journal of Experimental Psychology: General 125(2): 159–180.
Cronbach, L.J., and Snow, R.E., 1977. Aptitudes and instructional methods: A handbook for research on interactions.
New York: Irvington.
Davis, S.A., and Bostrom, R.P., 1993. Training end-users: An experimental investigation of the roles of the computer interface and training methods. MIS Quarterly 17(1): 61–81.
de Kleer, J., and Brown, J.S., 1983. Assumptions and ambiguities in mechanistic mental models. In Mental models, eds. D. Gentner and A.L. Stevens, 15–34. Hillsdale: Lawrence Erlbaum.
De Raad, B., and Schouwenburg, H.C., 1996. Personality in learning and education: A review. European Journal of Personality 10(5): 303–336.
De Raad, B., and Van Heck, G.L., 1994. The fifth of the big five. European Journal of Personality 8(Special Issue): 225–356.
Easterbrook, J.A., 1959. The effect of emotion on cue utilization and the organization of behavior. Psychological Review 66(3): 183–201.
Ebbinghaus, H., 1885. Memory: A contribution to experimental psychology. New York: Dover (translated in 1962).
Farr, M.J., 1987. Long-term retention of knowledge and skills: A cognitive and instructional perspective. New York: Springer-Verlag.
Felan, J.T., III, Fry, T.D., and Philipoom, P.R., 1993. Labour flexibility and staffing levels in a dual-resource constrained job shop. International Journal of Production Research 31(10): 2487–2506.
French, J.R.P., and Caplan, R.D., 1973. Organizational stress and individual strain. In The failure of success, ed. A.J. Marrow, 30–66. New York: AMACOM.
French, J.R.P., Jr., Caplan, R.D., and Van Harrison, R., 1982. The mechanisms of job stress and strain. Chichester: Wiley.
Frese, M., Albrecht, K., Altmann, A., Lang, J., Papstein, P.V., Peyerl, R., Prumper, J., Schulte-Gocking, H., Wankmuller, I., and Wendel, R., 1988. The effects of an active development of the mental model in the training process: Experimental results in a word processing system. Behaviour and Information Technology 7(3): 295–304.
Fryer, J.S., 1976. Organizational segmentation and labor transfer policies in labor and machine limited production systems. Decision Sciences 7(4): 725–738.
Gardner, D.G., 1986. Activation theory and task design: An empirical test of several new predictions. Journal of Applied Psychology 71(3): 411–418.
Gardner, D.G., and Cummings, L.L., 1988. Activation theory and job design: Review and reconceptualization. Research in Organizational Behavior 10: 81–122.
Gentner, D., 1983. Structure-mapping: A theoretical framework for analogy. Cognitive Science 7(2): 155–170.
Gerwin, D., 1987. An agenda for research on the flexibility of manufacturing processes. International Journal of Operations and Production Management 7(1): 38–49.
Goff, M., and Ackerman, P.L., 1992. Personality-intelligence relations: Assessment of typical intellectual engagement. Journal of Educational Psychology 84(4): 532–552.
Goldberg, L.R., 1992. The development of markers for the big-five factor structure. Psychological Assessment 4(1): 26–42.
Hackman, J.R., and Oldham, G.R., 1975. Development of the job diagnostic survey. Journal of Applied Psychology 60(2): 159–170.
Hackman, J.R., and Oldham, G.R., 1976. Motivation through the design of work: Test of a theory. Organizational Behavior and Human Performance 16(2): 250–279.
Hall, E.R., Ford, L.H., Whitten, T.C., and Plyant, L.R., 1983. Knowledge retention among graduates of basic electricity and electronics schools. Training Analysis and Evaluation Group, Department of the Navy (AD-A131855): Orlando.
Hayes-Roth, B., 1977. Evolution of cognitive structures and processes. Psychological Review 84(3): 260–278.
Hogg, G.L., Phillips, D.T., and Maggard, M.J., 1977. Parallel-channel dual-resource-constrained queuing systems with heterogeneous resources. AIIE Transactions 9(4): 352–362.
Hulin, C.L., and Blood, M.R., 1968. Job enlargement, individual differences, and worker responses. Psychological Bulletin 69(1): 41–55.
Humphreys, M.S., and Revelle, W., 1984.
Personality, motivation, and performance: A theory of the relationship between individual differences and information processing. Psychological Review 91(2): 153–184.
Hunt, E., and Lansman, M., 1975. Cognitive theory applied to individual differences. In Handbook of learning and cognitive processes, ed. W.K. Estes, 81–107. Hillsdale, NJ: Lawrence Erlbaum.
Hurlock, R.E., and Montague, W.E., 1982. Skill retention and its implications for Navy tasks: An analytical review. San Diego: Navy Personnel Research and Development Center.
Jaber, M.Y., and Bonney, M., 1996. Production breaks and the learning curve: The forgetting phenomenon. Applied Mathematical Modelling 20(2): 162–169.
Jaber, M.Y., and Kher, H.V., 2002. The dual-phase learning-forgetting model. International Journal of Production Economics 76(3): 229–242.
Jaber, M.Y., and Kher, H.V., 2004. Variant versus invariant time to total forgetting: The learn-forget curve model revisited. Computers & Industrial Engineering 46(4): 697–705.
Jaber, M.Y., Kher, H.V., and Davis, D., 2003. Countering forgetting through training and deployment. International Journal of Production Economics 85(1): 33–46.
Johnson-Laird, P.N., 1983. Mental models: Towards a cognitive science of language, inference, and consciousness. Cambridge: Cambridge University Press.
Kahn, R.L., and Byosiere, P., 1992. Stress in organizations. In Handbook of industrial and organizational psychology, eds. M.D. Dunnette and L.M. Hough, 571–650. Palo Alto: Consulting Psychologists Press.
Karuppan, C., 2008. Labor flexibility: Rethinking deployment. International Journal of Business Strategy 8(2): 108–113.
Karuppan, C.M., and Ganster, D.C., 2004. The labor-machine dyad and its influence on mix flexibility. Journal of Operations Management 22(4): 533–556.
Karuppan, C.M., and Karuppan, M., 2008. Resilience of super users' mental models of enterprise-wide systems. European Journal of Information Systems 17(1): 29–46.
Karuppan, C.M., and Kepes, S., 2006.
The strategic pursuit of flexibility through operators' involvement in decision making. International Journal of Operations & Production Management 26(9): 1039–1064.
Kathuria, R., and Partovi, F.Y., 1999. Workforce management practices for manufacturing flexibility. Journal of Operations Management 18(1): 21–39.
Kieras, D.E., 1988. What mental models should be taught: Choosing instructional content for complex engineering systems. In Intelligent tutoring systems: Lessons learned, eds. J. Psotka, L.D. Massey, and S.A. Mutter, 85–111. Hillsdale: Lawrence Erlbaum.
Koste, L.L., and Malhotra, M., 1999. A theoretical framework for analyzing the dimensions of manufacturing flexibility. Journal of Operations Management 18(1): 75–93.
Lakoff, G., and Johnson, M., 1980. Metaphors we live by. Chicago: The University of Chicago Press.
Lindsley, D.B., 1951. Emotion. In Handbook of experimental psychology, ed. S.S. Stevens, 473–516. New York: Wiley.
Loo, R., 2004. Kolb's learning styles and learning preferences: Is there a linkage? Educational Psychology 24(1):
Malhotra, M.K., Fry, T.D., Kher, H.V., and Donahue, J.M., 1993. The impact of learning and labor attrition on worker flexibility in dual resource constrained job shops. Decision Sciences 24(3): 641–663.
Malmo, R.B., 1959. Activation: A neuropsychological dimension. Psychological Review 66(6): 367–386.
Martens, R., 1974. Arousal and motor performance. In Exercise and sport sciences review, ed. J.H. Wilmore, 155–188. New York: Academic Press.
Martin, R., and Wall, T., 1989. Attentional demand and cost responsibility as stressors. Academy of Management Journal 32(1): 69–86.
Mayer, R.E., 1981. The psychology of how novices learn computer programming. Computing Surveys 13(1): 121–141.
Mayer, R.E., 1983. Can you repeat that? Qualitative effects of repetition and advance organizers on learning from scientific prose. Journal of Educational Psychology 75(1): 40–49.
McCreery, J.K., and Krajewski, L.J., 1999. Improving performance using workforce flexibility in an assembly environment with learning and forgetting effects. International Journal of Production Research 37(9): 2031–2058.
McCreery, J.K., Krajewski, L.J., Leong, G.K., and Ward, P.T., 2004. Performance implications of assembly work teams. Journal of Operations Management 22(4): 387–412.
Naveh-Benjamin, M., 2001. and H.L. Roediger, III, 193–207. New York: Psychology Press.
Naylor, J.C., Pritchard, R.D., and Ilgen, D.R., 1980. A theory of behavior in organizations. New York: Academic Press.
Nembhard, D.A., 2000. The effects of task complexity and experience on learning and forgetting: A field study. Human Factors 42(2): 272–286.
Nembhard, D.A., and Osothsilp, N., 2002. Task complexity effects on between-individual learning/forgetting variability. International Journal of Industrial Ergonomics 29(5): 297–306.
Nembhard, D.A., and Uzumeri, M.V., 2000. Experiential learning and forgetting for manual and cognitive tasks. International Journal of Industrial Ergonomics 25(3): 315–326.
Norman, D.A., 1983. Some observations on mental models. In Mental models, eds. D. Gentner and A.L. Stevens, 7–14. Hillsdale: Lawrence Erlbaum.
Pal, S.P., and Saleh, S., 1993. Tactical flexibility of manufacturing technologies. IEEE Transactions on Engineering Management 40(4): 373–380.
Parthasarthy, R., and Sethi, S.P., 1992. The impact of flexible automation on business strategy and organizational structure. Academy of Management Review 17(1): 86–111.
Peterson, L.R., and Peterson, M.J., 1959. Short-term retention of individual verbal items. Journal of Experimental Psychology 58: 193–198.
Rouse, W.B., and Morris, N.M., 1986. On looking into the black box: Prospects and limits in the search for mental models. Psychological Bulletin 100(3): 349–363.
Ryan, E.D., and Simons, J., 1981. Cognitive demand, imagery, and the frequency of mental rehearsal as factors influencing acquisition of motor skills.
Journal of Sport Psychology 3(1): 35–45.
Santhanam, R., and Sein, M.K., 1994. Improving end-user proficiency: Effects of conceptual training and nature of interaction. Information Systems Research 5(4): 378–399.
Sauer, J., Hockey, G.R.J., and Wastell, D.G., 2000. Effects of training on short- and long-term skill retention in a complex multi-task environment. Ergonomics 43(12): 2043–2064.
Schaubroeck, J., and Ganster, D.C., 1993. Chronic demands and responsivity to challenge. Journal of Applied Psychology 78(1): 73–85.
Schuerger, J.M., and Kuna, D.L., 1987. Adolescent personality and school performance: A follow-up study. Psychology in the Schools 24(3): 281–285.
Schultz, P., Kirschbaum, C., Prüßner, J., and Hellhammer, D., 1998. Increased free cortisol secretion after awakening in stressed individuals due to work overload. Stress and Health 14(2): 91–97.
Sein, M.K., and Bostrom, R.P., 1989. Individual differences and conceptual models in training novice users. Human-Computer Interaction 4(3): 197–229.
Shields, J.L., Goldberg, S.I., and Dressel, J.D., 1979. Retention of basic soldering skills. Alexandria, VA: US Army Research Institute for the Behavioral and Social Sciences.
Singh, J., 1998. Striking a balance in boundary-spanning positions: An investigation of some unconventional influences of role stressors and job characteristics on job outcomes of salespeople. Journal of Marketing 62(3): 69–86.
Skinner, B.F., 1969. Contingencies of reinforcement: A theoretical analysis. Englewood Cliffs, NJ: Prentice-Hall.
Staggers, N., and Norcio, A.F., 1993. Mental models: Concepts for human-computer interaction research. International Journal of Man-Machine Studies 38(4): 587–605.
Sünram-Lea, S.I., Foster, J.K., Durlach, P., and Perez, C., 2002. Investigation into the significance of task difficulty and divided allocation of resources on the glucose memory facilitation effect. Psychopharmacology 160(4): 387–397.
Taylor, R.N., and Dunnette, M.D., 1974. Relative contribution of decision-maker attributes to decision process. Organizational Behavior and Human Performance 12(2): 286–298.
Thorndyke, P., 1977. Cognitive structures in comprehension and memory for narrative discourse. Cognitive Psychology 9(1): 77–110.
Torkzadeh, G., and Doll, W.J., 1999. The development of a tool for measuring the perceived impact of information technology on work. Omega 27(3): 327–339.
Towse, J.N., Hitch, G.J., Hamilton, Z., Peacock, K., and Hutton, U.M.Z., 2005. Working memory period: The endurance of mental representations. The Quarterly Journal of Experimental Psychology 58A(3): 547–571.
Treleven, M., 1989. A review of the dual resource constrained system research. IIE Transactions 21(3): 279–287.
Upton, D.M., 1997. Process range in manufacturing: An empirical study of flexibility. Management Science 43(8): 1079–1092.
Weimin, W., 2001. The relative effectiveness of structured questions and summarizing on near and far transfer tasks. Paper read at 24th National Convention of the Association for Educational Communications and Technology, Atlanta, GA.
Wolfe, R.N., and Johnson, S.D., 1995. Personality as a predictor of college performance. Educational and Psychological Measurement 55(2): 177–185.
Wood, R.E., 1986. Task complexity: Definition of the construct. Organizational Behavior and Human Decision Processes 37(1): 60–82.
Xie, J.L., and Johns, G., 1995. Job scope and stress: Can job scope be too high? Academy of Management Journal 38(5): 1288–1309.
Young, R.M., 1983. Surrogates and mappings: Two kinds of conceptual models for interactive devices. In Mental models, eds. D. Gentner and A.L. Stevens, 32–52. Hillsdale: Lawrence Erlbaum.
Zmud, R.W., 1979. Individual differences and MIS success: A review of the empirical literature. Management Science 25(10): 966–979.

11 Accelerated Learning by Experimentation
Roger Bohn and Michael A. Lapré

CONTENTS
Deliberate Learning
Sequential Experimentation
Learning in Semiconductor Manufacturing
Methods of Accelerating Learning
Types of Experiments and their Characteristics
Criteria for Evaluating Experimentation Methods
Approximate Reality: The Location of Experiments
Focused/Comprehensive Spectrum
Case Studies
Experimentation on the Internet
Apple Breeding: Five-fold Reduction of Information Cycle Time
Short-Loop Experiments for AIDS: The Critical Role of Measurement
Fidelity Problems Due to Location in Clinical Trials
Astronomy and other Observational Fields
Conclusions

DELIBERATE LEARNING

Experimentation was a key part of the Scientific Revolution. Galileo (1564–1642) is often credited with being the first to use experiments for both discovery and proof of scientific relationships, although this could also be said of Ibn al-Haytham (965–1040) who lived some 600 years earlier.
Experimentation as a core concept in management was introduced by Herbert Simon, who discussed both management and engineering as processes of systematic search over a field of alternatives (March and Simon 1958; Newell and Simon 1972). Deliberate systematic experimentation to improve manufacturing probably goes back to chemical engineering in the late nineteenth century. Frederick Taylor ran thousands of metal-cutting experiments over several decades, and in many ways was a pioneer in systematic learning, a decade before his controversial research on managing people (Bohn 2005). Systematic experimentation in marketing began around 1940 (Applebaum and Spears 1950).

A general definition of an experiment is: "A deliberate comparison of outcomes from a varied but repeatable set of conditions, with an effort to explain different outcomes by differences in conditions." This definition includes controlled experiments (in which possible causal conditions are deliberately manipulated), and natural experiments (in which they are measured but not deliberately altered). The definition excludes purely descriptive investigations such as surveys or satellite photographs in which the results are solely tabulated or displayed. For example, a control chart by itself is not an experiment, although control charts can provide data for natural experiments.

Experiments are a key mechanism for industrial learning, and are therefore an important managerial lever to accelerate learning curves (Adler and Clark 1991; Dutton and Thomas 1994; Lapré et al. 2000). The learning curve phenomenon has been observed frequently for measures of organizational performance such as quality and productivity. The rate of improvement is called the "learning rate." Learning rates show tremendous variation across industries, organizations, and organizational units (Dutton and Thomas 1984; Lapré and Van Wassenhove 2001).
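The notion of a "learning rate" is conventionally made concrete with the classic power-law learning curve, under which unit cost falls by a fixed percentage with each doubling of cumulative output. A minimal sketch follows; the first-unit cost and the exponent value are illustrative assumptions, not figures from the studies cited.

```python
def unit_cost(x, first_unit_cost, b):
    """Power-law learning curve: cost of the x-th unit produced."""
    return first_unit_cost * x ** (-b)

def learning_rate(b):
    """Fraction of unit cost remaining after each doubling of cumulative
    output. An '80% learning curve' means costs fall 20% per doubling."""
    return 2 ** (-b)

b = 0.322  # assumed exponent; 2**-0.322 is roughly 0.80, an 80% curve
print(learning_rate(b))           # roughly 0.80
print(unit_cost(1, 100.0, b))     # 100.0 for the first unit (assumed cost)
print(unit_cost(2, 100.0, b))     # roughly 80 after one doubling
print(unit_cost(4, 100.0, b))     # roughly 64 after two doublings
```

The tremendous variation in observed learning rates mentioned above corresponds to wide variation in the exponent b across industries and organizations.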
Bohn (1994) and Lapré (2011) discuss the model inside the learning curve: both experience and deliberate activities can be sources of learning; learning can yield better organizational knowledge, which in turn can lead to changed behavior, and subsequently to improved performance. None of these steps is automatic. Dutton and Thomas (1984) call learning from experience “autonomous learning” and learning from deliberate activities “induced learning.” The typical examples of deliberate activities are quality improvement projects and productivity improvement projects. Such projects often rely on a series of experiments. Other induced learning methods at the level of the firm are deliberate knowledge transfers from outside the organization and the training of workers—but these are available only for knowledge that already exists and is accessible. Even when firms transfer knowledge from the outside, some adaptation to local circumstances—and thus experimentation—is almost always required (Leonard-Barton 1988). Hence, sound management of experimentation is important for effective management of the learning rate.

In an extreme example, experimentation can be the sole driver of a learning curve. Lapré and Van Wassenhove (2001) studied productivity learning curves at four production lines at Bekaert, the world’s largest independent producer of steel wire. One production line was run as a learning laboratory inside a factory. Its productivity learning curve was explained by the cumulative number of productivity improvement projects, which consisted of a series of natural and controlled experiments. The other three production lines were set up to replicate the induced learning. Interestingly, the other three lines struggled with learning from experimentation and relied on autonomous learning instead.
Even within the same organization, it can be difficult to manage the required scientific understanding and human creativity (we will later refer to this as the “value of the underlying ideas”).

Accelerated Learning by Experimentation

Experiments are used in a variety of settings. Generally, the experiments themselves are embedded into broader processes for deliberate learning, such as line start-ups, quality programs, product development, or market research. Examples of situations dealt with by experimentation include:

• Diagnosing and solving a newly observed problem in a complex machine
• Improving the layout or contents of web pages that are displayed to consumers
• Breeding new varieties of plants
• Developing a new product by building and testing prototypes
• Conducting a clinical trial of a new medication on a population of patients
• Scientific modeling via simulation, such as global climate change models

Experimentation is not the only method of performance improvement in industry. Other approaches start from existing knowledge and attempt to employ it more widely or more effectively. These include training workers, installing improved machinery (with the knowledge embedded in the machines), and improving process monitoring to respond faster to deviations. In such approaches, experiments are still needed to validate changes, but they do not play a central role.

Sequential Experimentation

Experiments are generally conducted in a series, rather than individually. Each cycle in the series consists of planning the experiment, setting up the experimental apparatus, actually running the trial and collecting data, and analyzing the results (plan, set-up, run, analyze). For example, in product development, experiments focus on prototypes, and the prototyping cycle consists of designing (the next prototype), building, testing (running trials with the prototype), and analyzing (e.g., see Thomke 1998).
The “planning” stage includes deciding what specific topic to investigate, deciding where and how to experiment (discussed below), and the detailed design of the experiment. The analysis of results from each experiment in the series suggests further ideas to explore. Experiments can be designed for the general exploration of a situation, to compare discrete alternative actions, to estimate the coefficients of a mathematical model that will be used for optimization or decision making, or to test a hypothesis about causal relationships in a complex system. The goals of experiments in a series usually evolve over time, such as moving from general to specific knowledge targets. For example, to fix a novel problem in a multistage manufacturing process, engineers may first isolate the location of the cause (general search), and then test hypotheses about the specific causal mechanism at work. They may then test possible interventions to find out whether or not they fix the problem and what, if any, are the side effects. Finally, they may run a series of experiments to quantitatively optimize the solution.

The goals of experiments depend on how much is already known. Knowledge about a particular phenomenon is not a continuous variable; rather, it passes through a series of discrete stages. The current stage of knowledge determines the kinds of experiments needed in order to move to the next stage (Table 11.1). For example, to reach Stage 3 of causal knowledge, one must learn the direction of an effect, which may only require a simple two-level experiment, while at Stage 4 the full effects are understood quantitatively. This requires an elaborate design, often involving the changing of multiple variables. Whether the additional knowledge is worth the additional effort depends on the economics of the particular situation. Moving from rudimentary to complete knowledge often takes years because advanced technologies have complex networks of relationships, with hundreds of variables to consider.

TABLE 11.1
Stages of Causal Knowledge and the Types of Experiments Needed to Progress
(each stage of causal knowledge, how cause xi affects outcome Y, is paired with the experimental design needed to reach it)

6. Integrated multivariate causal system: multivariate experiment with initial, intermediate, and final variables all measured
5. Scientific model (formulaic relationship derived from scientific theory): test fit to an equation derived from first principles
4. Magnitude and shape of effect (empirical relationship): compare a range of levels of xi (response surface estimation)
3. Direction of effect on Y: compare two selected alternatives
2. Awareness of variable xi: screening experiment on multiple possible causes; general exploration
1. (Starting condition)

Learning in Semiconductor Manufacturing

Deliberate learning takes place almost constantly in semiconductor fabrication. Fixed costs of fabrication are very high, so the production rate (throughput) and yield (the fraction of output that is good) are critical. Yields of new processes sometimes start well below 50%, and a percentage point of yield improvement is worth millions of dollars per month (Weber 2004). Because of the very high complexity of the fabrication process, no changes are made unless they have been tested experimentally.

Hundreds of engineers in each wafer fabrication facility (called a “fab”) engage in constant learning cycles. Some seek changes that will permanently improve yields. Learning targets can include the product design at several levels (from circuit design to specific masks), changes in the process recipe, changes in the tooling recipe (such as a time/intensity profile for a deposition process), or alterations in a $10 million tool. Other problem-solving activities can locate and diagnose temporary problems, such as contamination.
Natural experiments (discussed below) may be adequate to find such problems, but engineers always test the solution by using a controlled experiment before putting the tool back into production. Other experiments are used to test possible product improvements. Finally, some experiments are run for development purposes, either of new products or of new processes.

Learning curves for yield improvement can differ considerably, even within the same company. In the case of one semiconductor company for which we had data, the cumulative production required to raise yields by 10 percentage points from their initial level ranged from below 10 units to more than 200 units.* There were many reasons behind this large range, including better systems put in place for experimentation and observation (natural experiments), the transfer of knowledge across products, and more resources for experiments at some times than at others.

METHODS OF ACCELERATING LEARNING

Deliberate (induced) learning is a managed process. This section discusses the drivers of effective learning by experimentation. There is no perfect way to experiment, and choosing methods involves multiple tradeoffs. In some situations, a better strategy can have a 10-fold effect on the rate of learning. Even when using a single strategy, effectiveness can vary dramatically.

We divide the analysis into three sections. First, we show that there are four basic types of experiments. Second, we present criteria for predicting the rate of learning from experiments, including cost per experiment, speed, and specific statistical properties. Third, we discuss the choice of where to experiment (location). At one extreme are full-scale experiments in the real world, but more controlled locations are usually superior. Different combinations of experiment type and location can improve some of the criteria while worsening others.
This three-part framework (types, criteria, and locations) was first developed for manufacturing (Bohn 1987) and was then applied to product development (Thomke 1998). The framework also fits market research and other industrial learning about behavior. It can also be used in clinical trials.

Types of Experiments and their Characteristics

There are four main types of experiments, each with a different way of manipulating the causal variables.

(1) Controlled experiments make deliberate changes to treatments for several groups of subjects, and compare their outcomes. For example, medical clinical trials treat separate groups of patients with different drugs or doses. The “control group” captures the effects of unobserved variables; the difference in outcomes between the control and treated groups is the estimated effect of the treatment. Treatments in controlled experiments can be elaborated indefinitely. A classic reference on this subject is Box et al. (1978).

(2) Natural experiments use normal production as the data source.† The natural variation in causal variables is measured carefully and related to the natural variation in outcomes using regression or related techniques. Natural experiments are generally inexpensive; the main costs involved are for analysis and, if necessary, special measurements. As a result, very large sample sizes are possible. On the other hand, natural experiments can only measure the impact of changes that occur due to natural variations in the process.

* Arbitrary production units used for disguise.
† The term “natural experiment” has apparently never been formally defined, although various authors have used it, or have discussed a similar concept under a different name. Murnane and Nelson (1984) refer to natural experiments but without defining them. Box et al. (1978) referred to them as “happenstance data.”
They cannot predict the effects of radical changes, such as a new type of equipment or a completely new procedure. A fundamental problem with natural experiments is confounding. If A and B vary together, does A cause B, does B cause A, or are both caused by an invisible third variable (Box et al. 1978)? There are also tricky questions about causality. Suppose that the length of time customers spend in a grocery store is measured and turns out to be positively correlated with the amount of money spent. Do the customers: (a) spend more because they had more time to look at merchandise, or (b) spend more time shopping because they intended to buy more when they entered the store? Gathering additional data via a questionnaire might resolve this while still remaining a natural experiment, but even if the first explanation is correct, an intervention that increases the time spent in the store will not necessarily lead to increased spending.* This simple example highlights the importance of an explicit causal model for learning from experiments; an appropriately complex causal model is needed to understand the effect of interventions that change A, as opposed to merely establishing statistically that A is associated with higher B (Pearl 2001).† The causal model can be determined from outside knowledge, or by appropriate controlled experiments, but it cannot generally be tested purely by natural experiments.

(3) Ad hoc experiments, like controlled experiments, use deliberate changes. However, the changes are made without a careful control group or experimental design. A simple “before-and-after” comparison is used to estimate the impact of the treatment. Because many unobserved variables can also change over time, ad hoc experiments can be very misleading, and they have a correspondingly poor reputation. However, young children playing with blocks learn very effectively in this way, quickly acquiring basic cause-and-effect relationships.
This form of learning is sometimes called trial and error (Nelson 2008).

(4) Evolutionary operation (EVOP) experiments are a hybrid between controlled and natural experiments. Slight changes in the production process are made deliberately, and the resulting changes in output are measured and statistically associated with the process shifts. The changes are small enough that the process still works and the output is still good. Subsequent changes can move further in whichever directions yield the most improved results—this is the “evolution.” EVOP was proposed decades ago for factories but, as far as we know, it was little used (Hunter and Kittrell 1966).‡ Recently, however, it has become a very common approach to learning on the Internet. As discussed below, Amazon and Google both use multiple EVOP experiments to tune their user-interface (web page) designs, search algorithms for responding to queries, the selection and placement of ads on the page, and so forth. Seemingly minor design issues, such as colors and the precise location of “hot spots” on the page, can be experimented on very easily, quickly, and cheaply.

* For example, management could slow down the checkout process, or show a free movie. These might seem far-fetched in a grocery store, but similar problems would cloud the results of a natural experiment in a casino, a theme park, or a bookstore.
† The standard statistical and mathematical notations are not even capable of distinguishing among the different types of causes (see Pearl 2001).
‡ It was also proposed for marketing, as a form of “adaptive control” (Little 1966).

All four types of experiments are generally feasible for learning in ongoing operations, but only controlled and ad hoc experiments are available for novel situations. Deciding which type to use depends on the interactions among several criteria, to which we turn next.
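The EVOP logic (small deliberate perturbations around the current operating point, then moving in whichever direction measured output improves) can be sketched as a toy simulation. The quadratic yield function, noise level, and step size below are all invented for illustration:

```python
import random

random.seed(42)

def process_yield(temp):
    """Hypothetical process: true optimum at temp = 250, plus measurement noise."""
    return 90.0 - 0.002 * (temp - 250.0) ** 2 + random.gauss(0.0, 0.2)

setting = 200.0  # current operating point
step = 5.0       # perturbation small enough that output stays saleable (EVOP premise)

for cycle in range(60):
    # Run a few lots slightly below and slightly above the current setting...
    low = sum(process_yield(setting - step) for _ in range(5)) / 5
    high = sum(process_yield(setting + step) for _ in range(5)) / 5
    # ...then shift the process in whichever direction gave the better average yield.
    setting += step if high > low else -step

print(f"final setting: {setting:.0f}")  # drifts toward the true optimum near 250
```

Because each perturbation stays inside the process tolerance, every "experimental" lot is still sellable output, which is why the variable cost of EVOP is close to zero.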
Criteria for Evaluating Experimentation Methods

There are advantages and drawbacks to the different types of experiments, and to the different designs within a type. There is, as yet, no “theory of optimal experimentation,” except within very stylized models. However, a small number of high-level properties are essential to predicting the overall effectiveness of the different approaches. Key characteristics include speed, signal-to-noise (S/N) ratio, cost per cycle, value and variety of ideas, and fidelity.

• Speed can be defined as the inverse of the information cycle time from the beginning to the end of each plan–set-up–run–analyze cycle. Shorter cycle times directly accelerate learning by requiring less time to run a series of experiments. Less directly, a faster cycle helps the experimenters “keep track of” the experiment, the reasoning behind it, any unrecorded subtleties in the experimental conditions, and any results that may be important only in retrospect.

• Signal-to-noise ratio (S/N) can be defined as the ratio of the true (unknown) effect of the experimental change to the standard deviation of measured outcomes. The S/N drives common significance tests such as the t-test, which measure the probability of a false positive result. It also drives the statistical power, which measures the probability of a false negative (overlooking a genuine improvement) (Bohn 1995a). The S/N can generally be improved by increasing the sample size, but more subtle methods are usually available and are often less expensive.*

• Cost per cycle. The lower the cost, the more experimental cycles can be run (or the more conditions can be tested in parallel).
The financial costs of controlled experiments usually include the cost of the materials used in the experiment, but the most important costs are often non-financial—notably the opportunity costs of busy engineers, production capacity, computer time (for simulation), or laboratory-quality metrology equipment. Controlled experiments carried out on production equipment often require elaborate set-ups, which increase the opportunity costs. One of the great benefits of natural experiments and EVOP is that the only costs are for special measurements, if any, and for the analysis of the results. Costs are divisible into variable costs (proportional to the sample size) and fixed costs (which depend on the complexity of the experimental set-up, but not on the sample size). A third category is capital costs: expenditures to create the experimental system itself. The cost of a pilot line can be considered a capital cost incurred to enable more and better experiments.

* S/N is a core concept in communications engineering.

In semiconductor fabs, experimental lots are interspersed with normal production lots, and the cost of experimentation is managed by quotas rather than by a financial budget. So-called “hot” lots are accelerated through the process by putting them at the front of the queue at each process step, giving roughly a two-fold reduction in the information cycle time, but increasing the cycle time for normal lots. For example, a development project could be given a quota of five normal lots and two hot lots per week. It is up to the engineers to assign these lots and their wafers to different questions within the overall development effort. Even for hot lots, the information cycle is generally more than a month, so one fab could have 50 experiments in progress at one time.

• Value and variety of the underlying ideas being tested.
Ideas for experiments come from a variety of sources, including prior experiments, outside organizations, scientific understanding of the problem, and human creativity. Strong ideas convey a double benefit: they improve the S/N of the experiment and, if the experiment reaches the correct conclusion, they increase the benefit derived from the new knowledge. In mature production processes and markets, most ideas will have a negative expected value—they make the situation worse. A higher variety of underlying ideas raises the total value of experimentation. This follows from thinking of experiments as real options, where the cost of the experiment is the price of buying the option, the current value of “best practice” is its exercise price, and the revealed value of the new method is the value of the underlying asset.* According to the Black-Scholes formula and its variants, a higher variance of the asset increases the value of the option. There is a further benefit from a higher S/N that is not captured in the standard formulas, namely the reduced probabilities of statistical errors.

• Fidelity of the experiment can be defined as the degree to which the experimental conditions emulate the world in which the results will be applied. Fidelity is a major concern in choosing both the type and the location of experiments, which we discuss later.

Ideal experimentation strategies and tactics would improve all five criteria. More commonly, though, the choices of how to experiment involve tradeoffs between the criteria. For example, the S/N ratio increases with sample size, but so does the cost (except for natural experiments). Speed can usually be increased by paying a higher price. The degree to which this is worthwhile depends on the opportunity cost of slower learning, which depends on both business and technical factors.
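The real-options argument can be made concrete with the Black-Scholes formula itself: holding the expected value of an idea fixed, a higher variance (a wider variety of ideas) raises the option value of testing it. A sketch with purely illustrative parameters (none of these numbers come from the text):

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call: S = expected value of the new
    method, K = value of current best practice (the exercise price),
    sigma = volatility, standing in for the variety of the idea pool."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Same expected value; only the variance of the idea pool differs.
low_variety = bs_call(S=100, K=100, T=1.0, r=0.0, sigma=0.1)
high_variety = bs_call(S=100, K=100, T=1.0, r=0.0, sigma=0.5)
print(f"option value, low variety:  {low_variety:.2f}")
print(f"option value, high variety: {high_variety:.2f}")
```

Quintupling the volatility here roughly quintuples the option value, even though the expected value of the idea is unchanged; this is the sense in which variety, not just average quality, of ideas matters.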
Terwiesch and Bohn (2001) show that under some conditions of factory ramp-up, the optimal policy is bang-bang: at first, carry out zero production and devote all resources to experiments; later, carry out no experiments and devote all resources to production. At least in semiconductors, the normal pattern for debugging a new process is to start with 100% experiments, then shift to a quota, such as 5% experiments, and go to zero experiments near the end of life.

* Terwiesch and Ulrich (2009) show how higher variance is good in tournaments. Tournaments are a highly structured form of experimentation, in which only one proposal will be selected out of many.

The recent popularity of “data mining” reflects the power of natural experiments under the right circumstances. For example, Harrah’s Casino uses natural experiments for insights that drive a variety of controlled experiments (Lee et al. 2003). In data mining, a company has invested in a database of historical data and in analytic hardware, software, and expertise. Once this large investment is operational, each experiment costs only a few hours, or days, of people’s time and server processing time. The sample size can be in the millions, so the S/N ratio may be excellent even if the environment has high variability and the data contain measurement errors. So, three of the five criteria are extremely good: S/N, speed, and cost. On the other hand, fidelity is unclear, since much of the data are old and subject to the causal ambiguity problems discussed earlier.

Approximate Reality: The Location of Experiments

The goal of experimentation in industry is to develop knowledge that can be applied in real environments, such as a high-volume manufacturing plant, a network of service delivery points (automated teller machines, web servers, stores, etc.), the end-use environments of a product, a population of customers, or sufferers from a particular disease.
Yet it is usually preferable to do experiments elsewhere—in an environment that emulates key characteristics of the real world, but suppresses other characteristics in order to improve the S/N, the cost, or the information cycle time. Scientists have experimented on models of the real world for centuries, and new product developers have used models at least since the Wright brothers’ wind tunnels. The literature on product development identifies two orthogonal choices for how to model: analytical to physical, and focused to comprehensive (Ulrich and Eppinger 2008). Both of these axes apply to all learning domains.*

Organizations that do a lot of learning by experimentation often designate special facilities for the purpose, referred to as pilot lines, model stores, laboratories, test chambers, beta software versions, and other names. These facilities usually have several special characteristics. First, they are more carefully controlled environments than the real world.† For example, clinical trials use prescreened subjects with clear clinical symptoms, no other diseases, and other characteristics that increase the likely signal size and decrease the noise level. Second, they are usually better instrumented, with more variables measured, more accurately and more often. For example, test stores may use video cameras to study shopper behavior. Third, these facilities are more flexible, with more skilled workers and different tools.

Moving even further away from the real world are virtual environments, such as simulation models and mathematical equations. Finite element models have revolutionized experimentation in a number of engineering disciplines, because they allow science-derived first principles to be applied to complex systems.

* The original literature on manufacturing experimentation conflated them into a single axis, “location” (Bohn 1987).
† But see the Taguchi argument for using normal quality materials, discussed in the

Ulrich and Eppinger (2008) call the degree of reality or abstraction the “analytical/physical” dimension. They apply it to prototypes in product development, but their spectrum from physical to analytical also applies in manufacturing, medicine, and other domains, as illustrated in Table 11.2. For example, a steel company found that water and molten steel had about the same flow characteristics, and therefore prototyped new equipment configurations using scale models and water (Leonard-Barton 1995). In drug development, test tube and animal experiments precede human trials.

There are many benefits of moving away from full reality toward analytical approximations (top to bottom in Table 11.2). Cost per experiment, information cycle time, and the S/N ratio all generally improve. The S/N ratio can become very high in deterministic mathematical models. The disadvantage of moving toward analytical models is the loss of fidelity: the results of the experiment will be only partly relevant to the real world. In principle, the best approach is to run an experiment in the simplest possible (most analytical) environment that will still capture the essential elements of the problem under study. In practice, there is no single level of abstraction that is correct for every part of a problem, and learning therefore requires a variety of locations, with preliminary results developed at more analytical levels and then checked in more physical environments.

Full-scale manufacturing can be very complex. Lapré et al. (2000) studied organizational learning efforts at Bekaert, the world’s largest independent producer of steel wire.
Bekaert’s production process can be characterized by detail complexity (hundreds of machines, and hundreds of process settings), dynamic complexity (lots of dependencies between the production stages), and incomplete knowledge concerning the relevant process variables and their interactions. They found that Bekaert factories sometimes used the results from experiments run in laboratories at a central research and development (R&D) facility. However, on average, these laboratory insights actually slowed down the rate of learning in full-scale manufacturing. Small-scale laboratories at the central R&D facility were too different from the reality of full-scale manufacturing. Ignoring the complexity of full-scale manufacturing (such as equipment configurations) actually caused deterioration in performance. Thus, in manufacturing environments such as Bekaert’s, fidelity issues mean that the locations used for experiments need to be more physical and less analytical.

TABLE 11.2
The Spectrum of Locations from Physical to Analytical

Full-scale reality (most physical): manufacturing line (manufacturing); flight test (aviation product development); human trial (drug testing); full-scale trial (retail behavior)
Scale model: pilot line (manufacturing); wind tunnel (aviation); animal model (drug testing); test market (retail)
Laboratory: lab (manufacturing); in vitro test (drug testing); focus group (retail)
Complex mathematical model: finite-element simulation (manufacturing); CAD (finite element) simulation with graphical output (aviation); rational drug design model (drug testing); advertising response model (retail)
Simple mathematical model (most analytical): simultaneous equation model for annealing in a furnace (manufacturing); strength of materials model (aviation); half-life model of drug metabolism (drug testing)

Note: Example locations shown for four learning situations. Fidelity is highest at the top (full scale), but information cycle times, signal-to-noise ratio, and cost per experiment get better as the domain moves toward analytical (at the bottom).
Bekaert, therefore, did most of its experiments in a special “learning line” set up inside its home factory.

Focused/Comprehensive Spectrum

The second aspect of experimental “location” is what Ulrich and Eppinger (2008) call the “focused/comprehensive” dimension. An experiment can be run on a subsystem of the entire process or product, rather than on the entire system. It is easier to try out different automobile door designs by experimenting on a door than on an entire automobile, and easier still to experiment on a door latch. The effects of focused studies are similar to the effects of moving from physical to analytical locations. Once again, the tradeoff is a loss of fidelity: subsystems have interactions with the rest of the system that will be missed. This technique is therefore most applicable in highly modular systems. At Bekaert, there were high interactions among the four main process stages, so single-stage trials were risky.

Experimenting on a subsystem has always been common in product development. In manufacturing and other complex processes, it is more difficult and more subtle, yet it can have an order-of-magnitude effect on the speed of learning. The key is to understand the process well enough to measure variables that predict the final outcome before it actually happens. The case study “Short-Loop Experiments for AIDS” in the next section gives a dramatic example in which the learning cycle time was reduced from years to months.

In semiconductor fabrication, experiments that only go through part of the process, such as a single tool or a single layer, are sometimes called short-loop experiments (Bohn 1995b).* Suppose that a megahertz-reducing mechanism has been tentatively diagnosed as occurring in layer 10 out of 30, in a process with an average cycle time of two days per layer.
Mathematical models of the product/process interaction suggest a solution, which will be tested by a split-lot experiment in which 14 wafers are treated by the proposed new method and 10 by the old method. Clearly, 18 days can be saved by splitting a previously routine production lot at the end of layer 9, rather than at the beginning of the line. However, must the test lot be processed all the way to the end before measuring the yields of each wafer? If so, the information cycle time will be 42 days, plus a few more for setting up the experiment and testing.

* Juran referred to such experiments as “cutting a new window” into the process (Juran and Gryna 1988).

Furthermore, megahertz differences among the 24 wafers will be due to the effect of the process change plus all the normal wafer-to-wafer variation in all 30 layers, leading to a poor S/N ratio (Bohn 1995b). The information cycle time and S/N will be far better if the effect of the change can be measured directly after the tenth layer, without processing the wafers further. This requires a good understanding of the intermediate cause of the problem, and the ability to measure it accurately in a single layer. Furthermore, it requires confidence that the proposed change will not interact with any later process steps.

Good (fast and accurate) measurement of key properties is critical to focused experiments. To allow better short-loop experiments, most semiconductor designs include special test structures for measuring the electrical properties of key layers and features. The electrical properties of these test structures can be measured and compared across all wafers. Running the trial and measuring the test structure properties can be done in a few days. Depending on how well the problem and solution are understood, this may be sufficient time to go ahead and make a process change.
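The S/N penalty of full-flow measurement can be illustrated numerically: if each layer contributes independent wafer-to-wafer noise, the standard deviation of the split-lot estimate grows with the square root of the number of layers processed before measurement. A toy simulation (the noise level and effect size are invented, and the split is simplified to 12 wafers per arm rather than the 14/10 split in the text):

```python
import random
import statistics

random.seed(1)
LAYER_NOISE = 1.0  # hypothetical per-layer wafer-to-wafer noise (sd)
EFFECT = 1.0       # hypothetical true gain from the change made at layer 10

def measure(treated, layers):
    """Outcome measured after `layers` layers: treatment effect (if any)
    plus independent noise from every layer processed so far."""
    noise = sum(random.gauss(0.0, LAYER_NOISE) for _ in range(layers))
    return (EFFECT if treated else 0.0) + noise

def estimated_effect(layers, n=12):
    """Split-lot estimate: mean(treated wafers) minus mean(control wafers)."""
    treated = [measure(True, layers) for _ in range(n)]
    control = [measure(False, layers) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

def estimator_sd(layers, reps=2000):
    """Standard deviation of the split-lot estimate over many replications."""
    return statistics.stdev(estimated_effect(layers) for _ in range(reps))

sd_short = estimator_sd(10)  # measure right after layer 10
sd_full = estimator_sd(30)   # measure at end of line, 20 layers later
print(f"estimator sd, short loop: {sd_short:.2f}")
print(f"estimator sd, full flow:  {sd_full:.2f}")
```

Under these assumptions the full-flow estimate is about sqrt(3), roughly 1.7 times, noisier than the short-loop one, on top of an information cycle of a few days instead of roughly 42.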
Even then, engineers will need to pay special attention to the megahertz tests of the first production lots using the new method. This checks the fidelity of the early short-loop results against the full effect of the change. If there is a discrepancy, it means that the model relating the test structures to the performance of the full device needs to be revised. An analogous measurement for focused consumer behavior experiments is the ability to measure the emotional effect of advertisements on consumers in real time, and the ability to prove that the measured emotional state predicts their later purchasing behavior. Once these conditions exist, consumers can be shown a variety of advertisements quickly, with little need to measure their actual purchasing behavior. Measuring emotional response using functional magnetic resonance imaging (fMRI) is still in its infancy, but as it becomes easier it will have a big effect on the rate of progress in consumer marketing situations (for a review of fMRI, see Logothetis 2008).

CASE STUDIES

This section illustrates how learning has been accelerated through more effective experimentation in a variety of situations.

Experimentation on the Internet

Amazon, Google, and other dot-com companies have exploited the favorable properties of the web for relentless experimentation on the interaction between their users and websites. Google has set up a substantial infrastructure to run experiments quickly and cheaply on its search site.* For example, two search algorithms can be interleaved to compare results. The dependent (outcome) variables include the number of results clicked, how long the user continues to search, and other measures of user satisfaction. Multiple page layout variables such as colors, white space, and the number of items per page are also tested.

* Presentation by Hal Varian, Google chief economist, UCSD May 2007. See also Shankland (2008) and Varian (2006).

Accelerated Learning by Experimentation
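Much of the statistical leverage of such web experiments comes from sample size. A rough sketch of the standard two-sample arithmetic for a proportion metric follows; the baseline rate, traffic figures, and power targets are illustrative assumptions, not Google's actual parameters:

```python
import math

def min_detectable_diff(n_per_arm, p, z_alpha=1.96, z_beta=0.84):
    """Smallest absolute difference in a proportion metric that a
    two-arm comparison can reliably detect, using the normal
    approximation (roughly a 5% false-positive rate and 80% power)."""
    return (z_alpha + z_beta) * math.sqrt(2 * p * (1 - p) / n_per_arm)

# One day of traffic at a million queries per arm, 50% baseline rate:
mde_1day = min_detectable_diff(1_000_000, 0.5)

# Pooling a month of traffic shrinks the detectable difference
# by a factor of sqrt(30).
mde_30days = min_detectable_diff(30_000_000, 0.5)
```

How fine a difference is ultimately resolvable depends on the metric's variance and on how many days of traffic are pooled; the point of the sketch is only that the detectable effect shrinks with the square root of the sample size, so very large samples reveal improvements far too small to see in conventional experiments.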
Through such tests, Google discovers the "real estate value" of different parts of the search page. Google tested 30 candidate logos for the Google checkout service, and, overall, it now runs up to several hundred experiments per day.* Such experiments have excellent properties with respect to most of the criteria we have discussed. They have very fast information cycle times (a few days), very low variable cost because they use EVOP, and high fidelity because they are run in the real world. Even if individual improvements are very small relative to the standard deviation of the outcome, the S/N ratio can still be excellent because of the very large sample sizes. Google runs experiments on approximately 1% of the relevant queries, which in some cases can give a sample of one million in a single day. This can reliably identify a performance improvement of only 1 part in 10,000. However, Internet-based experiments still face potential problems on the focused/comprehensive dimension. In many experiments the goal is to affect the long-term behavior of customers, but in a single user session only their immediate behavior is visible. A change could have different effects in the short and long run, and sometimes even opposite effects. Thus, experiments that measure immediate behavioral effects are essentially short-loop experiments, with a corresponding loss of fidelity. A company like Amazon can instead track the long-term buying behavior of customers in response to minor changes, but such experiments are slower and noisier. It is also difficult to measure the long-term effects on customers who do not log in to sites.

Apple Breeding: Five-fold Reduction of Information Cycle Time

Breeding plants for better properties is an activity that is probably millennia old. Norman Borlaug's "Green Revolution" bred varieties of rice and other traditional crops that were faster growing, higher yielding, and more resistant to drought and disease.
However, some plants, including apple trees and related fruits, have poor properties for breeding (Kean 2010). One difficulty is that apple trees take about five years to mature and begin to bear fruit, leading to very long experimental cycles. Finding out whether a seedling is a dwarf or full-sized tree can take 15 years. A second difficulty is that, because of Mendelian inheritance, traditional breeding produces a high percentage of undesirable genetic combinations, which cannot be identified until the trees begin bearing. Finally, the underlying genetic variation in American apples is small, as virtually all domestic plants are cloned from a small number of ancestors, which themselves originated from a narrow stock of seeds promulgated by Johnny Appleseed and others 200 years ago. As a result, apple-breeding experiments have been slow and expensive, with low variation in the underlying "ideas." Botanists are now using four techniques to accelerate the rate of learning about apples. First, a USDA scientist collected 949 wild variants of the ancestor to domestic apples from Central Asia. This approximately doubled the stock of apple-specific genes available as raw material for learning (idea variety). Second, the time from planting to first apples is being shortened from about five years to one by inserting "fast flowering" genes from a poplar tree (information cycle time). Third, when two trees with individual good traits are crossed to create a combination of the two traits, DNA screening will be able to select the seedlings that combine both of the favored traits, without waiting until they mature. This reduces the number that must be grown by 75% (short-loop experiment to reduce cost).

* Erik Brynjolfsson, quoted in Hopkins (2010, 52). The figure of 50 to 200 experiments at once is given in Gomes (2008).
Finally, some of the wild trees were raised in a special greenhouse deliberately full of pathogens (specialized learning location, to improve cycle time and S/N).

Short-Loop Experiments for AIDS: The Critical Role of Measurement

The human immunodeficiency virus (HIV) is the virus that eventually leads to acquired immunodeficiency syndrome (AIDS), but the delay between the initial HIV infection and AIDS onset can be many years, even in untreated individuals. This delay made experiments on both prevention and AIDS treatment quite slow. It was not even possible to be sure someone was infected until they developed AIDS symptoms. Both controlled and natural experiments were very slow in consequence. In the early 1990s, new tests allowed a quantitative measurement of HIV viral loads in the bloodstream. Researchers hypothesized that viral load might be a proxy measurement for the occurrence and severity of infection, potentially permitting short-loop experiments that would be years faster than waiting for symptoms to develop. There is, however, always the question of the fidelity of short-loop experiments: is viral load truly a good proxy for severity of infection? Using a controlled experiment to validate the viral load measure would be impossible for ethical and other reasons. As an alternative, a natural experiment consists of measuring viral loads in a number of individuals who may have been exposed, and then waiting to see what happens to them. Such an experiment would take years, and it would require a large sample to get a decent S/N ratio. Fortunately, it was possible to run a quick natural experiment using historical data (Mellors 1998). Mellors and colleagues measured viral loads in stored blood plasma samples from 1600 patients. These samples had been taken 10 years earlier, before the measurement technique existed.
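Computationally, a retrospective analysis of this kind amounts to stratifying patients by the viral load measured in their stored samples and tabulating survival within each stratum. A minimal sketch; the patient records below are invented for illustration and are not the Mellors data:

```python
# Each record: (baseline viral load from the stored sample,
#               survived the follow-up period: True/False).
records = [
    (200, True), (350, True), (480, True), (120, True),
    (45_000, False), (60_000, False), (38_000, True), (90_000, False),
]

def survival_rate(records, predicate):
    """Fraction surviving among patients whose load satisfies predicate."""
    group = [alive for load, alive in records if predicate(load)]
    return sum(group) / len(group)

low_load_survival = survival_rate(records, lambda v: v < 500)
high_load_survival = survival_rate(records, lambda v: v > 30_000)
```

Because the blood samples and the mortality outcomes already existed, the whole stratified comparison could be run as soon as the stored samples were assayed, rather than after a decade of prospective follow-up.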
From the samples plus mortality data on the individuals, they calculated survival and life expectancy rates as a function of the initial viral load. They found very strong relationships. For example, if the viral load was below 500, the six-year survival rate was over 99%, versus 30% if the load was over 30,000. This natural experiment on historical data achieved a high S/N ratio in a relatively short period. However, even with strong results, "correlation does not prove causation." In medicine the use of historical data is called a "retrospective trial." Fortunately, several large-scale prospective and controlled trials were able to demonstrate that if a treatment reduced viral load, it also reduced the likelihood of the disease progressing from infection to full AIDS. Therefore, viral load became a useful proxy for short-loop experiments on possible treatments, as well as for treating individual patients. An ongoing example of developing a new measurement to permit shorter-loop experiments is the recent discovery of seven biomarkers for kidney damage (Borrell 2010). These biomarkers permit faster detection and are sensitive to lower levels of damage in animal studies, and presumably in humans as well. This will allow better assessment of toxicity at an earlier stage of drug trials. Both examples highlight how identifying new variables and developing practical measurements for them can accelerate learning rates several-fold. Especially good variables can predict final outcomes early, thus allowing for more focused experiments.

Fidelity Problems Due to Location in Clinical Trials

Sadly, searches for short-loop measurements that predict the course of illness are not always so successful. A particularly insidious example of the problem with short-loop experiments is the use of selective health effects as the outcome measurement in controlled clinical drug trials.
If a drug is designed to help with disease X, in a large clinical trial should the outcome measure be symptoms related to X, clinical markers for X, mortality due to X, or measures of total health? Should immediate effects be measured, or should patients be followed over multiple years? The assumed causal chain is: new medication → clinical markers for disease X → symptoms for X → outcomes for X → total health of those taking the medication. The target outcome is the total health effect of the new medication, and if the disease is a chronic one that requires long-term treatment, this can take years to measure. Looking at just the markers for disease X, or even at the outcomes of disease X alone, is a form of short-loop experiment. However, the problem with short-loop experiments is fidelity: is the result for disease X a reliable predictor of overall health effects? In the human body, with its variety of interlocking homeostatic (feedback) systems, multiple complex effects are common. Even if X gets better, overall health may not. Although clinical trials are supposed to look for side effects, side effect searches generally have a low S/N. They are also subject to considerable conflict of interest, as the drug trials are usually paid for by the pharmaceutical company. There is ample evidence that this sometimes affects the interpretation and publication of study results (see, e.g., DeAngelis and Fontanarosa 2008). A tragic example of this problem was the anti-diabetes drug Avandia. It was one of the world's highest-selling drugs, with sales of $3.2 billion in 2006 (Harris 2010). Untreated diabetes raises the mean and variance of blood glucose levels, and better control of the glucose level is viewed as a good indicator of effectiveness in treating diabetes. However, once Avandia was on the market, it became clear that the drug caused serious heart problems. In fact, one estimate was that it raised the odds of death from cardiovascular causes by 64% (Nissen and Wolski 2007).
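Raising "the odds of death ... by 64%" means an odds ratio of about 1.64, computed from a 2x2 table of outcomes. A sketch with hypothetical counts chosen to land near that figure (these are not the actual Nissen and Wolski data):

```python
def odds_ratio(events_treated, n_treated, events_control, n_control):
    """Odds ratio from a 2x2 outcome table: odds of the event in the
    treated group divided by the odds in the control group."""
    odds_treated = events_treated / (n_treated - events_treated)
    odds_control = events_control / (n_control - events_control)
    return odds_treated / odds_control

# Hypothetical counts: 41 cardiovascular deaths among 10,000 treated
# patients versus 25 among 10,000 controls.
or_estimate = odds_ratio(41, 10_000, 25, 10_000)
```

Note that an odds ratio for one cause of death says nothing by itself about overall mortality, which is exactly the fidelity gap the text goes on to describe.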
Given the already high risk of heart attacks for diabetics, this indicates that it decreased overall survival. Unfortunately, many drug trial reports do not even list overall mortality for the treatment and placebo populations, making it impossible to calculate the overall effects. Doctors are essentially forced to assume that the short-loop experiment has adequate fidelity for overall patient health. Using intermediate disease markers for a focused clinical trial is not the only problem with the fidelity of clinical trials. Trial sample populations are carefully screened to improve the experimental properties for the primary (desired) effect. This is a move along the analytical/physical spectrum, because the trial population is not the same as the general population, which will take the drug if it is approved. For example, trials often reject patients who are suffering from multiple ailments or are already taking another medication for the targeted disease. For ethical reasons, they almost always avoid juvenile patients. These changes potentially reduce the fidelity of the experiment. An example is the use of statins to reduce cholesterol and heart disease. While multiple clinical studies show that statins lower the risk of heart attack, they also have side effects that include muscle problems, impotence, and cognitive impairment (Golomb and Evans 2008). Few of the clinical trials of statins measure overall mortality, and the available evidence suggests that they improve it only in men with pre-existing heart disease. Most of the clinical trials have been conducted on this population, yet statins are now widely prescribed for women and for men with high cholesterol but without heart disease. The evidence suggests that for these populations, they do not actually improve overall mortality (Parker-Pope 2008). Since they also have side effects, this suggests that they are being widely over-prescribed.
Astronomy and Other Observational Fields

In some situations, causal variables cannot be altered, and so only natural experiments are possible. However, when observations are expensive and difficult, the learning process is very similar to a series of controlled experiments. Often the experiments require elaborate data collection, and the choice of where to observe and what to collect raises the same issues as setting up controlled experiments. The only difference is that there is usually no choice of location, which is at the "real world" end of the physical-analytical spectrum.* The classic observational science is astronomy, since extra-planetary events cannot be altered, and observation time on instruments is scarce and rationed, especially for wavelengths blocked by Earth's atmosphere. One approach to finding extra-solar planets is to look for periodic dimming in the light from a star. The magnitude of the dimming gives an indication of planetary size. The S/N ratio of the measurements is critical, as small planets will have little effect. Choosing stars to observe that are more likely to have planets and to have high S/N ratios is therefore crucial. Even so, a large sample is needed, because if the planet's orbital plane is not aligned with our line of sight, no occlusion will be visible. A special satellite mission, "Kepler," has been launched for this purpose (Basri et al. 2005). A recent paper posits that the Kepler mission can also detect Earth-sized moons in habitable zones (Kipping et al. 2009). The authors used mathematical models of hypothetical moons to evaluate different detection strategies and their statistical efficacy.

* Even in observational fields like astronomy, controlled experiments are possible using mathematical models, though the results cannot be validated by controlled experiments in the real world.
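The "magnitude of the dimming" follows from simple geometry: the fractional loss of light during a transit is the square of the planet-to-star radius ratio, which is why Earth-sized planets place such severe demands on the S/N ratio. A quick calculation with rounded standard radii:

```python
# Fractional dimming during a transit is (R_planet / R_star)**2.
R_EARTH_KM = 6_371
R_JUPITER_KM = 69_911
R_SUN_KM = 695_700

depth_earth = (R_EARTH_KM / R_SUN_KM) ** 2      # ~8.4e-5, i.e. ~0.008%
depth_jupiter = (R_JUPITER_KM / R_SUN_KM) ** 2  # ~0.010, i.e. ~1%
```

A Jupiter-sized planet dims a Sun-like star by about 1%, measurable from the ground; an Earth-sized planet dims it by less than one part in 10,000, which is what motivates a dedicated space photometer such as Kepler.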
CONCLUSIONS

This chapter shows how the rate of learning by experimentation—and, by extension, the slope of many learning curves—is heavily determined by the ways in which experiments are conducted. We identified five criteria that measure experimental effectiveness. They are: information cycle time (speed), S/N ratio, cost, value and variety of ideas, and fidelity. The statistical literature on experimentation deals formally with only one of these, the S/N ratio, but it offers insights that are helpful in dealing with the others. For example, the powerful method of fractional factorial experiments looks at the effects of multiple variables simultaneously rather than one at a time, thereby improving both speed and cost. These five criteria are, in turn, largely determined by how the experiment is designed, and we discussed two high-level design decisions: location and type of experiment. "Location" has two orthogonal axes, referred to as analytical to physical, and focused to comprehensive. Generally, more analytical or more focused locations reduce the fidelity of the experiment, but improve other criteria such as information cycle time and cost. We discussed four types of experiments: controlled, natural, evolutionary, and ad hoc. Although controlled experiments are often viewed as inherently superior, for some types of problems they are impossible, and in other cases they are dominated by other types of experiments. We have not discussed the meta-problem of designing good learning environments. For example, "just in time" manufacturing principles encourage exactly the key properties of fast information cycle time and good S/N ratio, while providing perfect fidelity (Bohn 1987). We have also said little about what is being learned, since it is heavily situation specific. However, G. Taguchi claimed that the knowledge being sought in experiments was often too narrow.
He pointed out that in many situations the variability in outcomes is just as important as the mean level of the outcome. He proposed specific experimental methods for simultaneously measuring the effects of process changes on both mean and variation. For example, he suggested that pilot lines and prototypes should be built with material of normal quality, rather than using high-quality material to improve the S/N ratio of experiments. Taguchi's insight reminds us that "the first step in effective problem solving is choosing the right problem to solve."

ACKNOWLEDGMENTS

A multitude of colleagues, engineers, and managers have helped with these ideas over many years. Roger Bohn's research on this topic was supported by the Harvard Business School Division of Research, and the Alfred P. Sloan Foundation Industry Studies Program. His friends at HBS were especially influential, including Oscar Hauptman, Jai Jaikumar, Therese Flaherty, and Roy Shapiro. Gene Meiran and Rick Dehmel taught him about semiconductors. Jim Cook provided a number of comments and ideas. Michael Lapré's work was supported by the Dean's Fund for Faculty Research from the Owen School at Vanderbilt.

REFERENCES

Adler, P.S., and Clark, K.B., 1991. Behind the learning curve: A sketch of the learning process. Management Science 37(3): 267–281.
Applebaum, W., and Spears, R.F., 1950. Controlled experimentation in marketing research. Journal of Marketing 14(4): 505–517.
Basri, G., Borucki, W.J., and Koch, D., 2005. The Kepler mission: A wide-field transit search for terrestrial planets. New Astronomy Reviews 49(7–9): 478–485.
Bohn, R.E., 1987. Learning by experimentation in manufacturing. Harvard Business School Working Paper 88–001.
Bohn, R.E., 1994. Measuring and managing technological knowledge. Sloan Management Review 36(1): 61–73.
Bohn, R.E., 1995a. Noise and learning in semiconductor manufacturing. Management Science 41(1): 31–42.
Bohn, R.E., 1995b.
The impact of process noise on VLSI process improvement. IEEE Transactions on Semiconductor Manufacturing 8(3): 228–238.
Bohn, R.E., 2005. From art to science in manufacturing: The evolution of technological knowledge. Foundations and Trends in Technology, Information and Operations Management 1(2): 129–212.
Borrell, B., 2010. Biomarkers for kidney damage should speed drug development. Nature, May 10. http://www.nature.com/news/2010/100510/full/news.2010.232.html?s=news_rss (accessed July 9, 2010).
Box, G.E.P., Hunter, J.S., and Hunter, W.G., 1978. Statistics for experimenters. New York: Wiley.
DeAngelis, C.D., and Fontanarosa, P.B., 2008. Impugning the integrity of medical science: The adverse effects of industry influence. Journal of the American Medical Association 299(15): 1833–1835.
Dutton, J.M., and Thomas, A., 1984. Treating progress functions as a managerial opportunity. Academy of Management Review 9(2): 235–247.
Golomb, B.A., and Evans, M.A., 2008. Statin adverse effects: A review of the literature and evidence for a mitochondrial mechanism. American Journal of Cardiovascular Drugs 8(6): 373–418.
Gomes, B., 2008. Search experiments, large and small. Official Google blog. http://googleblog.blogspot.com/2008/08/search-experiments-large-and-small.html (accessed July 9, 2010).
Harris, G., 2010. Research ties diabetes drug to heart woes. The New York Times, February 19.
Hopkins, M.S., 2010. The four ways IT is revolutionizing innovation – Interview with Erik Brynjolfsson. MIT Sloan Management Review 51(3): 51–56.
Hunter, W.G., and Kittrell, S.R., 1966. Evolutionary operations: A review. Technometrics 8(3): 389–397.
Juran, J.M., and Gryna, F.M., 1988. Juran's quality control handbook. 4th ed. New York: McGraw-Hill.
Kean, S., 2010. Besting Johnny Appleseed. Science 328(5976): 301–303.
Kipping, D.M., Fossey, S.J., and Campanella, G., 2009. On the detectability of habitable exomoons with Kepler-class photometry.
Monthly Notices of the Royal Astronomical Society 400(1): 398–405.
Lapré, M.A., 2011. Inside the learning curve: Opening the black box of the learning curve. In Learning curves: Theory, models, and applications, ed. M.Y. Jaber, Chapter 2. Boca Raton: Taylor & Francis.
Lapré, M.A., Mukherjee, A.S., and Van Wassenhove, L.N., 2000. Behind the learning curve: Linking learning activities to waste reduction. Management Science 46(5): 597–611.
Lapré, M.A., and Van Wassenhove, L.N., 2001. Creating and transferring knowledge for productivity improvement in factories. Management Science 47(10): 1311–1325.
Lee, H., Whang, S., Ahsan, K., Gordon, E., Faragalla, A., Jain, A., Mohsin, A., Guangyu, S., and Shi, G., 2003. Harrah's Entertainment Inc.: Real-time CRM in a service supply chain. Stanford Graduate School of Business Case Study GS-50.
Leonard-Barton, D., 1988. Implementation as mutual adaptation of technology and organization. Research Policy 17(5): 251–267.
Leonard-Barton, D., 1995. Wellsprings of knowledge: Building and sustaining the sources of innovation. Cambridge: Harvard Business School Press.
Little, J.D.C., 1966. A model of adaptive control of promotional spending. Operations Research 14(6): 1075–1097.
Logothetis, N.K., 2008. What we can do and what we cannot do with fMRI. Nature 453: 869–878.
March, J.G., and Simon, H.A., 1958. Organizations. New York: Wiley.
Mellors, J.W., 1998. Viral-load tests provide valuable answers. Scientific American 279: 90–93.
Murnane, R., and Nelson, R.R., 1984. Production and innovation when techniques are tacit: The case of education. Journal of Economic Behavior and Organization 5(3–4): 353–373.
Nelson, R.R., 2008. Bounded rationality, cognitive maps, and trial and error learning. Journal of Economic Behavior and Organization 67(1): 78–87.
Newell, A., and Simon, H., 1972. Human problem solving. Englewood Cliffs: Prentice-Hall.
Nissen, S.E., and Wolski, K., 2007.
Effect of Rosiglitazone on the risk of myocardial infarction and death from cardiovascular causes. The New England Journal of Medicine 356(24): 2457–2471.
Parker-Pope, T., 2008. Great drug, but does it prolong life? The New York Times, January 29.
Pearl, J., 2001. Causality: Models, reasoning, and inference. Cambridge: Cambridge University Press.
Shankland, S., 2008. We're all guinea pigs in Google's search experiment. CNET News, May 29. http://news.cnet.com/8301-10784_3-9954972-7.html (accessed July 9, 2010).
Terwiesch, C., and Bohn, R.E., 2001. Learning and process improvement during production ramp-up. International Journal of Production Economics 70(1): 1–19.
Terwiesch, C., and Ulrich, K.T., 2009. Innovation tournaments. Boston: Harvard Business School Press.
Thomke, S.H., 1998. Managing experimentation in the design of new products. Management Science 44(6): 743–762.
Ulrich, K.T., and Eppinger, S.D., 2008. Product design and development. New York: McGraw-Hill.
Varian, H.R., 2006. The economics of internet search. Rivista di Politica Economica 96(6): 9–23.
Weber, C., 2004. Yield learning and the sources of profitability in semiconductor manufacturing and process development. IEEE Transactions on Semiconductor Manufacturing 17(4): 590–596.

12 Linking Quality to Learning – A Review

Mehmood Khan, Mohamad Y. Jaber, and Margaret Plaza

CONTENTS

Introduction
What is Quality?
Learning Behavior
Literature on the Quality-Learning Relationship
Mathematical relationship
Wright (1936) model
Fine (1986)
Tapiero (1987)
Fine (1988)
Chand (1989)
Urban (1998)
Jaber and Guiffrida (2004)
Jaber and Guiffrida (2008)
Jaber and Khan (2010)
Empirical Relationship
Li and Rajagopalan (1997)
Foster and Adam (1996)
Forker (1997)
Badiru (1995)
Mukherjee et al. (1998)
Lapré et al. (2000)
Jaber and Bonney (2003)
Hyland et al. (2003)
Jaber et al. (2008)
Future Directions of Research
Acknowledgments
References

INTRODUCTION

Managers have long been looking for ways to improve the productivity of their companies. It has become imperative for an enterprise to look for tools that reduce costs and improve productivity at the same time. The world saw American and Japanese companies struggling to achieve this goal in the final quarter of the last century. It was an eye for enhanced quality that gave Japanese manufacturers an edge. On the other hand, researchers also believe that an organization that learns faster will have a competitive advantage in the future (Kapp 1999). In the most simplistic model of learning, in which learning is a by-product of doing, cumulative output is functionally related to average cost reduction. However, firms often consciously focus on learning in order to trigger technological advancement and quality improvements beyond simply reducing average cost (Cohen and Levinthal 1989). Malerba (1992) identified six key types of learning in organizations, which can be closely interrelated and play a dominant role in the product life cycle: (1) learning by doing, (2) learning by using, (3) learning from advancements in science and technology, (4) learning from inter-industry spillovers, (5) learning by interacting, and (6) learning by searching. Types 1, 2, and 3 are internal to the firm, while the other three are external. In the case of a simple learning curve, only the first two types are linked together.
However, for more complex improvements of processes and products, the other types must also be taken into account. For example, since learning by searching is aimed at the improvement or generation of new products or processes, this type is dominant during research and development (R&D). Learning by searching is often coupled with learning by doing and targets improvements in quality, reliability, performance, and compatibility (Malerba 1992). This chapter will shed light on the relationship between quality and complex learning processes, which may incorporate some, or all, of these learning types. This linkage is crucial: it has become central to improving company productivity over the past few decades.

WHAT IS QUALITY?

There are many ways to define quality (Garvin 1987). A common definition in industry is "meeting or exceeding customer expectations" (Sontrop and MacKenzie 1995). Many companies refine their processes and products to meet customer expectations based on their surveys. The customers can be internal or external. For example, the internal customers for a fuel tank would be an assembly line or the paint shop, while its external customers would be a car dealer or the purchaser. The definition of quality emphasized in this chapter is "conformance to specifications." Specifications are target values and tolerances; for example, the length of a trunk lid might be specified as 150 ± 1 cm. That is, a conforming length falls in the interval from 149 to 151 cm. Targets and tolerances are set by the design and manufacturing engineers in a plant. Other characteristics of interest can be design configuration, such as weight, thickness, reliability, ease of fit, and so forth. Statistical quality control or statistical process control (SPC) tries to understand and reduce the variations in manufacturing processes (Yang et al. 2009). The measures of variation can be accuracy, precision, bias, stability, and so on.
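Under the "conformance to specifications" definition, the quality of a lot reduces to the fraction of units falling inside the tolerance band. A minimal sketch using the trunk-lid specification above; the measured lengths are invented for illustration:

```python
TARGET_CM = 150.0
TOLERANCE_CM = 1.0   # specification: 150 +/- 1 cm

def conforms(length_cm):
    """A length conforms if it lies within the tolerance of the target."""
    return abs(length_cm - TARGET_CM) <= TOLERANCE_CM

# Hypothetical measurements from one production lot; two of the six
# (151.3 and 148.8) fall outside the 149-151 cm band.
lot = [149.2, 150.4, 151.3, 150.0, 148.8, 150.9]
fraction_defective = sum(not conforms(x) for x in lot) / len(lot)
```

This fraction-defective figure is the operational sense of lot "quality" that the rest of the chapter tracks against learning.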
The role of quality control is to reduce the variability in a characteristic around its target. Quality control involves the employment of established procedures to assess, attain, uphold, and enhance the quality of a product or service by reducing the variability around a target. Therefore, the most useful definition for the technical application of quality control is conformance to specifications. SPC is the component of a continuous quality improvement system that consists of a set of statistical techniques to analyze a process or its output, and to take appropriate actions to achieve and maintain a state of statistical control. SPC is not a short-term fix. The most successful organizations today have learned that SPC only works when the operating philosophy is that everyone in the organization is responsible for, and committed to, quality. SPC focuses on the methods by which results are generated—on improvements to the processes that create products and services of the least variability. The traditional tools that SPC uses to improve on variability are: (i) flow charts, (ii) cause-and-effect diagrams, (iii) Pareto charts, (iv) histograms, (v) scatter plots, and (vi) control charts. With this notion of quality or quality control in mind, industry today refers to the fraction of defective items in their production, or in supplier lots, as the "quality" of the lot produced or received. This chapter will explore whether this quality improves by virtue of learning or, in other words, by experience and repetition. Let us first develop a formal understanding of the learning process.

LEARNING BEHAVIOR

Learning in an organization has been receiving more and more attention. Steven (1999) presented examples from the electronics, construction, and aerospace industries to conclude that learning curves will gain more interest in high-technology systems. Wright (1936) was the first to model the learning relationship in a quantitative form.
This complex behavior has been given different names over the last century, such as startup curves (Baloff 1970), progress functions (Glover 1965), and improvement curves (Steedman 1970). However, researchers have agreed that the power-form learning curve is the most widely used form to depict the learning phenomenon (Yelle 1979). It is very hard to define this complex behavior, but practitioners and researchers mostly believe that it can be defined as the trend of improvement in performance achieved by virtue of practice. Wright's (1936) learning curve states that the time to produce each successive unit keeps decreasing with repetition until plateauing occurs. Plateauing is a state in which a system, or a worker, ceases to improve in performance. The reason could be that the worker ceases to learn, or it could be the unwillingness of the organization to invest any more capital. The mathematical form of Wright's model is given by:

T_x = T_1 x^{-b},

where x is the tally of the unit being produced, T_x and T_1 are the times to produce the xth and the first unit, respectively, and b is the learning exponent.

Learning Curves: Theory, Models, and Applications

[FIGURE 12.1 Quality-related costs. Cost versus conformance level, showing the prevention cost c1(q), the failure cost c2(q), their sum c1(q) + c2(q), and the economic conformance level (ECL).]

The learning exponent in this expression is often referred to as an index called the "learning rate" (LR). Learning occurs each time the production quantity is doubled:

LR = T_{2x} / T_x = T_1 (2x)^{-b} / (T_1 x^{-b}) = 2^{-b}.

Thus, following the above learning curve, the time to produce x units, t(x), is given by:

t(x) = Σ_{i=1}^{x} T_i ≈ ∫_0^x T_1 i^{-b} di = T_1 x^{1-b} / (1 − b).

Although Figure 12.1 and the above expression represent the improvement in the time to process a unit, learning can also be shown in the cost, productivity, and other similar measures of a production system.
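Wright's model and the expressions above can be sketched numerically. The first-unit time T1 = 10 and the 80% learning rate are illustrative values, not taken from the chapter's applications.

```python
import math

# Wright's power-form learning curve, T_x = T1 * x**(-b), with an 80%
# learning rate so that b = -log2(0.8) ~= 0.322. Values are illustrative.
T1, LR = 10.0, 0.8
b = -math.log2(LR)

def unit_time(x):
    """Time to produce the x-th unit."""
    return T1 * x ** (-b)

# Doubling cumulative output multiplies the unit time by LR = 2**(-b).
assert abs(unit_time(40) / unit_time(20) - LR) < 1e-12

def cumulative_time(x):
    """Integral approximation t(x) = T1 * x**(1-b) / (1-b)."""
    return T1 * x ** (1 - b) / (1 - b)

print(round(unit_time(100), 2), round(cumulative_time(100), 1))
```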
Next, we categorize the linkage between quality and learning by exploring the literature that finds (i) a mathematical and/or (ii) an empirical relationship.

LITERATURE ON THE QUALITY-LEARNING RELATIONSHIP

Mathematical relationship

Wright (1936) model

Wright (1936) was probably the first to come up with a relationship that signifies the importance of experience, or learning, in a production facility. He studied the variation of production cost with production quantity. This issue was of increasing interest and importance because of a program sponsored by the Bureau of Air Commerce for the development of a small airplane. Wright had started working on the variation of cost with quantity in 1922. A curve depicting such variation was worked up empirically from the two or three points that previous production experience of the same model in differing quantities had made possible. Through the succeeding years, this original curve, which at first showed the variation in labor only, was used for estimation purposes and was corrected as more data became available. This curve was found to take an exponential form. Wright showed that the factors that make the reduction in cost possible with the increase in production quantity are labor, material, and overheads. The labor cost factor (F) and the production quantity (N) followed a logarithmic relationship:

X = log F / log N,

where log is a logarithm of base 10. A plot of this relationship resulted in a value of 0.322 for X; that is, it was an 80% curve: the average labor cost for any quantity, multiplied by 0.80, gives the average labor cost for twice that quantity of airplanes. Material cost, on the other hand, also decreases with the increase in quantity because waste is cut down. Wright showed these variations in price with an actual example and compared the savings in cost between the production of cars and airplanes.
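Wright's relation X = log F / log N can be checked against the 80% curve cited above. The base-10 logarithms follow the text; the cost figures are a worked example, not Wright's data.

```python
import math

# Worked example of X = log F / log N (base-10 logs): on an 80% curve,
# doubling the quantity (N = 2) multiplies average labor cost by F = 0.8.
F, N = 0.8, 2
X = math.log10(F) / math.log10(N)
print(round(abs(X), 3))  # 0.322, the value reported by Wright

# Hypothetical use: average labor cost at some quantity, then at double it.
avg_cost = 100.0
avg_cost_doubled = F * avg_cost  # 80% of the previous average cost
```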
It should be noted that the traditional way to obtain data for a learning cycle is often erroneous, as the individual data points are subject to variance. To counter this, Zangwill and Kantor (1998) proposed measuring the individual improvements directly and using the learning cycle repeatedly. This lets management observe which techniques produce greater improvement, and thus managers learn how to improve processes faster. They came up with a differential equation that comprised three forms of learning: the power form, the exponential form, and the finite form. Zangwill and Kantor (2000) extended their earlier work and emphasized that traditional learning curves cannot identify, on a period-by-period basis, which techniques are producing improvements and which are not. Their approach helps to boost the rate of improvement in every cycle and makes learning a dynamic process. The same analogy can be applied to emphasize the relationship of the quality of a product or service (Anderson 2001) with experience (learning), which is the objective of this chapter.

Fine (1986)

Levy (1965) is believed to be the first to capture the linkage between quality and learning. Fine (1986) advocated that learning bridges quality improvements to productivity increases. He found this to support the observation of Deming (1982) that quality and productivity go together, as productivity increases result from quality improvement efforts. Fine (1986) modeled a quality-learning relationship. He was of the view that product quality favorably influences the rate of cost reduction when costs are affected by quality-based learning. Thus, costs decline more rapidly with the experience of producing higher-quality products. He presented two models for quality-based learning. The first model assumes that quality-based experience affects the direct manufacturing costs.
The second model assumes that quality-based experience affects quality control costs. For the second model, Fine (1986) found that the optimal quality level increases over time. According to Fine (1986), a key feature of the second quality-based model is that it resolves the inconsistency between the cost trade-off analysis of Juran (1978), used to find the optimal quality level, and the position that "zero defects is the optimal quality level" as per Crosby (1979) and Deming (1982). Fine (1986) claimed that firms that choose to produce high-quality products will learn faster than firms producing lower-quality products. He discussed the price (cost per unit) and quality relationship of Lundvall and Juran (1974). Figure 12.1 shows that a firm may choose quality (fraction of defectives) levels greater than the economic conformance level (ECL, Juran 1978), but no rationale was provided as to why a firm would ever choose a quality level smaller than the ECL. Prevention and failure costs—c1(q) and c2(q), respectively—are also shown in Figure 12.1. Fine (1986) found that the prescription of an optimal conformance level with a strictly positive proportion of defects in Figure 12.1 is in direct opposition to the literature (Crosby 1979; Deming 1982) that recommends zero defects as the optimal conformance level. Fine's quality-based learning theory added a dynamic learning-curve effect to the static ECL model so that the modified model is consistent with the slogan that "higher quality costs less." In the quality-based learning formulation adopted by Fine (1986), two types of learning were modeled: induced and autonomous. Induced learning is the result of managerial and technical efforts to improve the efficiency of the production system, whereas autonomous learning is due to repetition, or learning by doing. Fine (1986) also reviewed two quality-based learning models—Lundvall and Juran (1974) and Spence (1981).
The first model assumed that learning reduces the direct costs of manufacturing output without affecting quality-related costs. The second model assumed that learning improves quality without affecting the direct manufacturing costs. He then presented two quality-based learning models to derive optimal quality, pricing, and production policies. Fine's (1986) first quality-based learning model was for manufacturing activities. He related the conformance level (q) and experience (z) as:

c(q, z) = c1(q) + c2(q) + c3(z),

where c3(z) is positive, decreasing, and convex, and approaches a constant, c3, as z goes to infinity. In Equation 12.5, cumulative experience affects direct manufacturing costs through the function c3(z). This suggests that as experience is gained from producing larger volumes at higher quality levels, manufacturing costs are reduced. In his second model, Fine (1986) assumed that quality-based learning benefits lead to a reduction in the appraisal and prevention expenditures required to attain any given quality level. That is, the learning benefits accumulate in the appraisal and prevention activities. This idea was modeled by assuming a cost function of the form:

c(q, z) = a(z)c1(q) + c2(q) + c3.

The function a(z) represents the learning in appraisal and prevention activities, while the constant a represents a limit to the improvement that is possible; a(z) is decreasing and convex for z ∈ (0, ∞). He also assumed that a(0) = 1 and lim_{z→∞} a(z) = a ≥ 0. Note that c3 is a constant in this formulation. Figure 12.2 illustrates the effect of increasing experience on the cost function described above. The solid curves represent the quality-related costs (the failure costs and the appraisal and prevention costs) for the case where z = 0. The dashed curves represent the quality-related costs for z > 0; that is, after some learning has taken place.
Note that a(0)c1(q) + c2(q) has its minimum at q*—as in the ECL model. For any z, a(z)c1(q) + c2(q) will have a unique minimum at q*(z), where q*(0) = q* and q*(z) increases in z. That is, if the experience level at time t is z(t), then the optimal quality is q*(z(t)), which rises with z(t). Fine (1986) concluded that (i) the optimal pricing policy under a quality-based learning curve is qualitatively similar to the optimal pricing policy under a volume-based learning curve; (ii) optimal quality levels decrease (or increase) over time if learning reduces direct manufacturing (appraisal and prevention) costs; and (iii) the optimal quality level under a quality-based learning curve exceeds the optimal quality level in the corresponding static, no-learning case.

[FIGURE 12.2 Quality-related costs. Cost versus conformance level, showing c2(q), a(0)c1(q) + c2(q), a(z)c1(q), and a(z)c1(q) + c2(q).]

Tapiero (1987)

Tapiero (1987) discussed the practice in manufacturing in which quality control is integrated into the production process, altering both the product design and the manufacturing techniques in order to prevent defective units. He developed a stochastic dynamic programming problem for determining the optimal parameters of a given quality control policy with learning. He defined the knowledge in a production facility with quality control through a learning function given by:

x_{t+1} = f(x_t, N_t, l_t);  x_0 > 0,

where x_t is the variable for knowledge at time t, N_t is the sales volume, and l_t is the variable denoting the quality control process.
This learning function had the following properties:

∂f/∂x < 0, reflecting the effect of forgetting on the learning process;
∂f/∂l > 0, reflecting the positive effect of quality control on the learning process;
∂f/∂N > 0, reflecting the positive effect of production volumes on the learning process.

With this, Tapiero developed a profit function comprising the costs, sales, failures, and reworks, to be solved through stochastic dynamic programming. The approach and the results obtained were interpreted and extended by Tapiero (1987) in several ways. In particular, the analytical solution for the risk-neutral, curtailed sampling case showed that:

a. When the costs of sampling are smaller than the expected failure costs, it is optimal to have full sampling, regardless of experience and learning.
b. When the costs of sampling are larger than the expected failure costs, the optimal sample size is bang-bang, a function of the experience accumulated through quality control learning: the larger the experience, the smaller the amount of quality control.
c. When the inspection costs are not "that much greater" than the expected failure costs, experience and high quality control can move together. This was also pointed out by Fine (1986).

Thus, he concluded that the optimal quality control policy a manufacturer may use is a function not only of the costs of inspection and the costs of product failures, but also of the manufacturer's ability to use the inspected samples to improve (through "experience") the production technology. This observation is in line with the Japanese practice of "full inspection" and of learning "as much as possible" in order to finally obtain a zero-defects production technology.
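Fine's (1986) second model, c(q, z) = a(z)c1(q) + c2(q) + c3, can be explored numerically. The sketch below uses hypothetical cost shapes and a hypothetical learning function that merely satisfy the properties stated above (c1 increasing and steep near q = 1, c2 decreasing, a(z) decreasing and convex with a(0) = 1); it is not Fine's parameterization.

```python
import math

def c1(q):
    """Hypothetical appraisal/prevention cost, steep as conformance q -> 1."""
    return 0.1 / (1.0 - q)

def c2(q):
    """Hypothetical failure cost, falling as conformance improves."""
    return 5.0 * (1.0 - q)

def a(z, a_inf=0.3, lam=0.5):
    """Hypothetical learning multiplier: a(0) = 1, a(z) -> a_inf as z grows."""
    return a_inf + (1.0 - a_inf) * math.exp(-lam * z)

def q_star(z):
    """Grid search for the conformance level minimizing a(z)*c1(q) + c2(q)."""
    grid = [i / 1000 for i in range(1, 1000)]
    return min(grid, key=lambda q: a(z) * c1(q) + c2(q))

# More experience (larger z) cheapens prevention, so the optimal conformance
# level rises -- Fine's finding that optimal quality increases over time.
print(q_star(0), q_star(2), q_star(10))
```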
Fine (1988)

To help explain how one firm can have both higher quality and lower costs than its competitors, Fine (1988) explored the role of inspection policies in quality-based learning, a concept which was introduced by Fine (1986) and extended by Tapiero (1987). He discussed an imperfect production process. Fine (1988) noted that the process may produce defective items for several reasons: poorly designed or poorly constructed materials and/or components, substandard workmanship, faulty or poorly maintained equipment, or ineffective process controls. The model that Fine (1988) developed permits the production process to be improved through "quality-based learning" by the operators. It imposes tight quality control standards on the time spent in the "out-of-control" state, enforced by intensive inspection procedures, thus resulting in faster learning. Fine (1988) found that the resulting reduction in long-run failure costs can outweigh the short-run inspection costs. This work differs from Fine (1986) and Tapiero (1987) in that cumulative output was not used as the measure of learning. Fine (1988) concluded that his model cautions managers responsible for quality policies that they may be choosing suboptimal inspection policies if the potential learning benefits from inspection are ignored. Fine (1988) imagined a work area, its inputs, and its outputs as a station. This station could be either in an "in-control" or an "out-of-control" state, denoted by I and Ω, respectively. Time was indexed by the production of output; each unit produced advances the clock by one. Before production in each time period, the firm may inspect the station to verify that it is not in an out-of-control state; the system is restored if it is. He denoted by X_t the state of the station prior to the tth decision of whether or not to inspect the station.
X̄_t would be the state of the station following any inspection and/or repair at time t but before production of the tth unit. Thus, X_t = Ω (X̄_t = Ω) would mean that the state before (after) the inspection decision at time t is out of control. This way, he defined the state of the station through a time-homogeneous Markov chain with the transition matrix (rows and columns ordered I, Ω):

            I           Ω
    I   1 − h(n)      h(n)
    Ω       0           1

with 0 ≤ h(n) ≤ 1 for all n ≥ 0. The symbol n serves as an index of cumulative learning, and h(n) is the probability that the station will go from the in-control to the out-of-control state. The probabilities of the tth unit being defective in the in-control and out-of-control states are q1 and q2, respectively, and he assumed that q1 < q2. Learning was captured in this model by taking h(n) as a decreasing function of n and incrementing n by one each time the station is inspected and found to be out of control. Inspecting the station and finding it out of control constitutes a learning event that allows the workers to improve the process, so that the probability of the station going out of control in the future is reduced. He assumed that h(n) takes the form h(n) = γ^n h(0), where 0 ≤ γ ≤ 1 and 0 ≤ h(0) ≤ 1. This may be interpreted as a station having a sequence of "layers" of quality problems: each layer must be discovered and fixed before the next layer of problems can be uncovered. Before producing the tth unit, Fine (1988) noted that the firm must decide whether or not to expend c (> 0) dollars to inspect the station and learn whether it is, in fact, truly out of control. He denoted this action by a_t: a_t = 0 for "do not inspect" and a_t = 1 for "inspect." Once the unit of time t has been produced, its quality (denoted by Y_t) is classified as either defective (Y_t = D) or non-defective (Y_t = N). Fine (1988) also assumed that a defective unit can be easily detected by visual inspection.
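The layered-learning dynamic h(n) = γ^n h(0) can be illustrated with a small Monte Carlo sketch. The parameter values and the fixed periodic inspection policy below are hypothetical; they are not Fine's (1988) optimal policy.

```python
import random

# Simulate the station: each period an in-control station goes out of control
# with probability h(n) = gamma**n * h0; an inspection that finds it out of
# control restores it and increments n (one "layer" of problems is fixed).
random.seed(1)
h0, gamma = 0.2, 0.5   # illustrative failure probability and learning factor
q1, q2 = 0.05, 0.40    # P(defective) in control / out of control

def simulate(periods, inspect_every=5):
    n, out_of_control, defects = 0, False, 0
    for t in range(1, periods + 1):
        if t % inspect_every == 0:     # hypothetical periodic inspection
            if out_of_control:
                n += 1                 # learning event
            out_of_control = False     # station restored to in control
        if not out_of_control:
            out_of_control = random.random() < gamma ** n * h0
        defects += random.random() < (q2 if out_of_control else q1)
    return n, defects / periods

layers_fixed, defect_rate = simulate(10_000)
print(layers_fixed, round(defect_rate, 3))
```

As n grows, h(n) shrinks geometrically and the long-run defect rate approaches the in-control rate q1, which is the sense in which inspection buys learning.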
He denoted the probability that item t is defective, given that the station is in control at time t, by q1—that is, P{Y_t = D | X̄_t = I} = q1. Similarly, P{Y_t = D | X̄_t = Ω} = q2, and he assumed that 1 ≥ q2 > q1 ≥ 0. He assumed a repair/replace cost of d (> 0) for each defective item. He further assumed that A_t = 1 if, and only if, the inspection action was taken (a_t = 1) and the station was found to be out of control (X_t = Ω); that is, "learning took place" at time t. Therefore, the history of the process immediately preceding the observation of the output at time t would be:

H_t = {X̄_0, a_1, A_1, Y_1, a_2, A_2, Y_2, …, a_t, A_t}.

Fine (1988) denoted the probability that the system is out of control as:

P_t = P{X̄_t = Ω | H_t}.

He noted that if the "inspect" action is chosen (a_t = 1), then the station will be in control for certain, either because it was found to be in control or because it was adjusted into control. If the "do not inspect" action is chosen (a_t = 0), then no additional information was assumed by Fine (1988) to be available to update the probability that the state is out of control. If P_t = p, then after the production of the tth unit and the observation of Y_t (the quality of that unit), he assumed that the probability that the station was out of control at the time of production is updated using Bayes' theorem:

P{X̄_t = Ω | H_t, Y_t = D} = q2 p / [q2 p + q1 (1 − p)],  for Y_t = D,

P{X̄_t = Ω | H_t, Y_t = N} = (1 − q2) p / [(1 − q2) p + (1 − q1)(1 − p)],  for Y_t = N.

Fine (1988) used the transition matrix (12.8) to compute the post-production probability that the station will be out of control for the next unit. That is,

P{X_{t+1} = Ω | H_t, Y_t = D} = [q2 p + h(n) q1 (1 − p)] / [q2 p + q1 (1 − p)],  for Y_t = D,

P{X_{t+1} = Ω | H_t, Y_t = N} = [(1 − q2) p + h(n)(1 − q1)(1 − p)] / [(1 − q2) p + (1 − q1)(1 − p)],  for Y_t = N.

He reviewed the "no learning" case before examining the effects of learning opportunities on the optimal inspection policy.

Chand (1989)

Chand (1989) discussed the benefits of small lot sizes in terms of reduced setup costs and improved process quality due to worker learning. He showed that the lot sizes do not have to be equal in the optimal solution, even if the demand rate is constant. Chand (1989) adopted the approach of Porteus (1986) in estimating the number of defective units in a lot. Porteus (1986) assumed that, for a very small probability (ρ) that the process goes out of control, the expected number of defective items in a lot of size Q could be approximated as:

d(Q) = ρQ²/2.

Like Porteus (1986), Chand (1989) assumed that the process is in control at the start of a production lot and that no corrective steps are taken if the process goes out of control while producing a lot. Both works also assume that, before starting a new production lot, the process is in control. Chand's (1989) expected cost with N setups (equal lot sizes) is written as:

TC(N) = Σ_{n=1}^{N} K(n) + (D/2N)(H + CρD),

where K(n) is the cost of the nth setup, H and C are the costs of the non-defective and defective units, respectively, D is the demand rate per period, and the objective is to find an optimal lot size. If there is learning in process quality with each setup, then:

ρ(1) ≥ ρ(2) ≥ ρ(3) ≥ …,

where the minimum and maximum probability values are ρ_∞ = lim_{n→∞} ρ(n) and ρ(1), respectively. Letting Q_n be the lot size of the nth production run, the objective function in Equation 12.14 for this case becomes:

TC(N) = Σ_{n=1}^{N} K(n) + Σ_{n=1}^{N} (Q_n²/2)(H/D + Cρ(n)),

such that Q_1 + Q_2 + Q_3 + … + Q_N = D. Improvement in the product quality favors a reduced setup frequency. The learning curve of quality was expressed as:

ρ(n − 1) − ρ(n) ≥ ρ(n) − ρ(n + 1).

Urban (1998)

Urban (1998) assumed the defect rate of a process to be a function of the run length.
Using this assumption, he derived closed-form solutions for the economic production quantity (EPQ). He also formulated models to account for either positive or negative learning effects in production processes. Urban (1998) studied the learning effect of run length on product quality and on production costs. He examined an EPQ model where the defect rate is a function of the production lot size. In his model, a constant and deterministic demand for a single item was discussed, without any backorders. A reciprocal relationship between the defect rate and the production quantity was taken by Urban (1998) as:

w = α + β/Q,  0 ≤ α ≤ 1.

Urban (1998) found this functional form to be very useful for the following reasons:

1. Using appropriate parameters, this functional form can represent the JIT (just-in-time) philosophy (β < 0), the disruptive philosophy (β > 0), or a constant defect rate independent of the lot size (β = 0).
2. It provides a bound for the defect rate—that is, as the lot size increases, the defect rate approaches a given value, w_∞ = lim_{Q→∞} w = α.
3. It is straightforward to estimate the model parameters in practice, using simple linear regression and generally readily available data on lot sizes and defect rates.
4. A closed-form solution can easily be obtained, which can then be examined to gain important insights into the problem.

Under the JIT philosophy, Equation 12.18 suggests that the defect rate decreases as the lot size decreases; at Q = −β/α the defect rate equals zero. On the other hand, under the disruptive philosophy (where β > 0), the defect rate increases as the lot size decreases; as Q → β/(1 − α), the defect rate approaches one, meaning that all the units are defective. Urban (1998) analyzed four possible scenarios in which he considered only the holding costs, the setup costs, and the costs associated with the defective units (i.e., scrap, shortage, or rework costs).
The first scenario assumes that all defective items are scrapped; that is, D/(1 − w) units are produced to satisfy a demand of D units. The total cost for this scenario is:

ξ1 = (1 − D/P) ch[(1 − α)Q − β]/2 + DK/[(1 − α)Q − β] + (αQ + β)cD/[(1 − α)Q − β],

where:

P = rate of production
c = unit production cost
h = unit holding cost
K = setup cost

The second scenario assumes that the defective units result in shortages at a cost of s dollars per unit. The total cost for this scenario is:

ξ2 = (1 − D/P) Qch/2 + DK/Q + sD(α + β/Q).

The third scenario assumes that the defective units reach the customer and cost r dollars per unit in compensation. The total cost for this scenario is:

ξ3 = (1 − D/P) Qch/2 + DK/Q + rD(α + β/Q).

The fourth and last scenario assumes that the defective units are reworked before they reach the customers. The total cost for this scenario is:

ξ4 = (1 − D/P) Q(c + w)h/2 + DK/Q + wD(α + β/Q).

Jaber and Guiffrida (2004)

Jaber and Guiffrida (2004) presented a quality learning curve (QLC), a modification of Wright's learning curve (WLC) for imperfect processes in which defectives can be reworked. They incorporated process quality into the learning curve by assuming no improvement in the reworks, and then modeled the same situation with this assumption relaxed. Their assumption was the same as that of Porteus (1986): the process is in control at the beginning of production and generates no defects. The cumulative time to produce x units is (Equation 12.3):

Y(x) = y1 x^{1−b} / (1 − b),

where y1, the time to produce the first unit, is estimated after a subject has had a training period.
The total time to produce x units, where there is no learning in the rework process, was given by:

Y(x) = y1 x^{1−b} / (1 − b) + rρ x²/2,

with marginal time

t(x) = y1 x^{−b} + rρx,

where b is the learning exponent in production, ρ is as defined in Porteus (1986) and Chand (1989), and r is the time to rework a unit. Jaber and Guiffrida's (2004) total time to produce x units, where there is learning in the rework process, is:

Y(x) = y1 x^{1−b} / (1 − b) + (r1 / (1 − ε)) (ρ/2)^{1−ε} x^{2−2ε},

where r1 is the time to rework the first defective unit and ε is the learning exponent of the rework learning curve. In the course of learning, there is a point at which there is no more improvement in performance. This phenomenon is known as "plateauing." The possible reasons for plateauing may be that (i) labor ceases to learn, (ii) management becomes unwilling to invest in learning efforts, or (iii) management becomes skeptical that learning improvement can continue. There is no strong empirical evidence to either support or contradict these hypotheses. The composite learning model of Jaber and Guiffrida (2004) resulted in the following findings:

a. For learning in reworks such that 0 ≤ ε < 0.5, the composite learning curve was found to be convex, with a local minimum x* that represents the cumulative production in a given cycle.
b. For ε = 0.5, the composite learning curve plateaus at a value of 2 r1 √(ρ/2) as the cumulative production approaches infinity.
c. When 0.5 < ε < 1, learning was found to behave in a similar manner to that of Wright (1936); that is, as the cumulative production approaches infinity, the time to produce a unit approaches zero.
d. In the case where there is learning in production only, if the cumulative production exceeds x*, then the time to produce each additional unit beyond x* starts increasing; that is, the result is a convex learning curve.
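Findings (a)-(c) can be checked numerically from the marginal time implied by the composite curve, t(x) = y1 x^{−b} + 2 r1 (ρ/2)^{1−ε} x^{1−2ε} (the derivative of Y(x) above). The parameter values below are illustrative, not from Jaber and Guiffrida (2004).

```python
# Marginal-time sketch for the composite quality learning curve;
# the parameter values are illustrative.
y1, b, r1, rho = 10.0, 0.32, 2.0, 0.1

def t(x, eps):
    return y1 * x ** (-b) + 2 * r1 * (rho / 2) ** (1 - eps) * x ** (1 - 2 * eps)

# (a) 0 <= eps < 0.5: the rework term grows with x, so t is convex with an
# interior minimum x*.
times = [t(x, eps=0.2) for x in range(1, 2001)]
x_star = times.index(min(times)) + 1
assert times[0] > times[x_star - 1] < times[-1]

# (b) eps = 0.5: t(x) plateaus at 2*r1*(rho/2)**0.5 as x grows.
plateau = 2 * r1 * (rho / 2) ** 0.5
assert abs(t(10**8, eps=0.5) - plateau) < 0.05

# (c) 0.5 < eps < 1: both terms vanish, Wright-like behavior.
assert t(10**8, eps=0.8) < 0.1

print(x_star, round(plateau, 3))
```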
Jaber and Guiffrida (2004) noted that their work has the following limitations:

a. It cannot be applied to cases where defects are discarded.
b. The rate of generating defects is constant.
c. The process can go out of control with a given probability each time an item is produced (Porteus 1986).
d. Only one stage of production and rework is considered.

Jaber and Guiffrida (2008)

In this paper, Jaber and Guiffrida (2008) assumed that an imperfect process can be interrupted in order to restore quality. In this way they addressed two shortcomings of the WLC model—that is, that (i) the learning phenomenon continues indefinitely, and (ii) all units produced are of acceptable quality. This model also addressed the third limitation of the work of Jaber and Guiffrida (2004), as stated above. They assumed that (i) a lot of size x is divided into n equal sub-lots corresponding to (n − 1) interruptions, and (ii) the restoration time is a constant percentage (0 < α < 1) of the production time; that is, S(x) = αY(x). The expected number of defectives in the lot of size x was computed as:

d(x, n) = Σ_{i=1}^{n} ρ(x/n)²/2 = ρx²/(2n).

The total restoration, or interruption, time was given by:

S(x, n) = (αy1/(1 − b)) ((n − 1)/n)^{1−b} x^{1−b}.

The equations of Jaber and Guiffrida (2004) for the total time to produce x units were modified, for the cases without and with learning in reworks, as:

T(x, n) = y1 x^{1−b}/(1 − b) + (αy1/(1 − b)) ((n − 1)/n)^{1−b} x^{1−b} + rρ x²/(2n),

and

T(x, n) = y1 x^{1−b}/(1 − b) + (αy1/(1 − b)) ((n − 1)/n)^{1−b} x^{1−b} + (r1 n/(1 − ε)) (ρ/(2n))^{1−ε} x^{2−2ε},

respectively. The results of Jaber and Guiffrida (2008) indicated that introducing interruptions into the learning process to restore the quality of the production process improves the system's performance. They found this to be possible when the total restoration time is a small enough percentage of the production time.
Otherwise, they recommended n = 1 (Jaber and Guiffrida 2004). One important outcome of Jaber and Guiffrida (2008) was that restoring the production process breaks the plateau barrier, thereby providing opportunities for improving performance.

Jaber and Khan (2010)

In this paper, Jaber and Khan relaxed the first and fourth limitations of the work of Jaber and Guiffrida (2004), as pointed out above. That is, they considered scrap after production and the rework of a lot in a serial production line. They also elaborated on the impact of splitting a lot into smaller ones on the performance of the process. Jaber and Khan (2010) defined the overall performance as the sum of measures of the average processing time and the process yield. The times to produce and rework at every stage in the series were assumed to follow learning. Accordingly, Jaber and Khan (2010) wrote the total time to process x_i units at stage i of a serial line as:

T_i(x_i) = Y_i(x_i) + R_i(x_i) = y_{1i} x_i^{1−b_i}/(1 − b_i) + (r_{1i}/(1 − ε_i)) (ρ_i/2)^{1−ε_i} x_i^{2−2ε_i}.

They noted that x_i is the number of non-defective units that enter stage i. The average processing time for N stages with n sub-lots would be:

P(x) = P(x_{N+1}) = T_{nN}(x_{N+1}) / (n x_{N+1}).

Two performance measures were used by Jaber and Khan (2010): one for the processing time and the other for the process yield (or quality). The first performance measure was given as:

Z_1 = 1 − P(x)/P_0,

where P_0 is the processing time with no learning in production and reworks and with no lot splitting. The second performance measure was given as:

Z_2 = π_p = 1 − Σ_{i=1}^{N} s_i(x_i) / x_1,

where s_i(x_i) is the number of scrap items at stage i when a lot of size x_1 enters the first stage. Jaber and Khan (2010) concluded the following:

a. The optimal performance improves as learning in production becomes faster, and deteriorates as learning in reworks becomes faster.
b.
The time spent in production or reworks was also found to affect the performance.
c. The system's performance deteriorates as the number of stages in the serial production line increases.

Empirical Relationship

Li and Rajagopalan (1997)

In this paper, Li and Rajagopalan (1997) collected about three years of data on quality levels, production, and labor hours from two manufacturing firms. The aim of their study was to answer three questions related to the impact of quality on learning, namely: (1) How well does the cumulative output of defective versus good units explain learning-curve effects? (2) Do defective units explain learning-curve effects better than good units? (3) How should cumulative experience be represented in the learning-curve model when the quality level may have an impact on learning effects? The data were taken from two plants making tire tread and medical instruments (kits and fixtures), respectively. The study of Li and Rajagopalan (1997) resulted in the following findings: (1) the learning rate slows down as quality improves; (2) over time, as defects become less frequent, the opportunities for learning are also fewer; and (3) managers pay less attention to process improvement at higher levels of experience. Li and Rajagopalan (1997) used defect levels as a proxy for the effort devoted to process improvement. In a later study, Li and Rajagopalan (1998) showed that the optimal investment in process improvement effort is proportional to the defect levels. This is contrary to the traditional learning-curve model (Wright 1936), in which cumulative production volume, comprising both good and defective units, is used as a proxy for knowledge or experience. They proposed a regression model complementary to the one in Fine (1986).
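The kind of log-log learning-curve regression Li and Rajagopalan estimate can be sketched as follows. The data below are synthetic and noise-free, "experience" is proxied by cumulative defective units rather than total output, and the `ols_slope` helper is a plain least-squares slope; none of this reproduces their actual estimation.

```python
import math

# Synthetic check: unit times generated from T = T1 * z**(-b), with z the
# cumulative number of *defective* units; a log-log regression recovers b.
T1, b = 10.0, 0.3
cum_defectives = [5, 12, 22, 35, 50, 70, 95, 120]      # hypothetical series
unit_times = [T1 * z ** (-b) for z in cum_defectives]  # noise-free here

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

slope = ols_slope([math.log(z) for z in cum_defectives],
                  [math.log(u) for u in unit_times])
print(round(-slope, 3))  # recovers the learning exponent b = 0.3
```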
Li and Rajagopalan (1997) showed that if the defect level in a period is very high, then it immediately gets attention and considerable effort is directed at identifying the source of the defectives. They found that if defect levels continue to be high for a few consecutive periods, then increased attention is paid and additional resources are devoted to investigating the cause of the defects. In their opinion, these efforts lead to a better understanding of the process variables and interactions, which is useful in helping to avoid such defects in the future. They also found that in the later stages, where defects are less frequent, the opportunities for learning from the analysis of these defects are also fewer. Li and Rajagopalan (1997) concluded that defective and good units do not explain learning curves equally. They further elaborated that defective units are statistically more significant than good units in explaining learning curve effects. Foster and Adam (1996) In this paper, Foster and Adam (1996) included the speed of quality improvement in Fine’s (1986) quality-based learning curve model. Their model demonstrated that, under different circumstances, rapid quality improvement effects are either beneficial or unfavorable to improvement in quality-related costs. They also demonstrated that sustained and permanent rapid quality improvement can lead to higher levels of learning. However, they found that under certain conditions the rapid speed of quality improvement can also impede organizational learning. In their opinion this suggests that when management imposes higher goals for the reduced number of defects while systems are not in place to achieve those goals, costs will increase. Foster and Adam (1996) developed two hypotheses from this analysis and tested them in an automotive parts manufacturing company with five similar plants. 
They found that fast quality improvements reduce the rate at which improvements in quality-related costs occur, and that the opposite behavior was also true. They referred to this behavior as "organizational learning," a type of learning that Foster and Adam (1996) found in many organizations. They also observed that with the passage of time, (i) inspection-related costs are reduced, (ii) the need for the acceptance sampling of raw materials is reduced, (iii) prevention-related costs decline, and (iv) prevention activities become more focused and specific. Foster and Adam (1996) cautioned that some quality-related efforts may be ineffective. They recommended that companies adopt slower and steadier rates of quality improvement. Their findings were supported empirically.

Forker (1997)

In this paper, the results of a survey of 348 aerospace component manufacturers were examined in order to investigate the factors that affect supplier quality performance. Forker discussed the process view of quality to depict the inconsistencies between practice and performance in a supplier firm. Forker (1997) noted that by linking quality management with process it would be possible to address issues of effectiveness and efficiency in firms. He linked the quality performance of a supplier with a variety of dimensions, such as features, reliability, conformance, durability, serviceability, and aesthetics. Forker (1997) emphasized in his study the importance of human learning for the following reasons: (i) the learning curve affected the transaction cost of different supplies, and (ii) suppliers' attitudes toward learning, and thus their efficiency, impacted on the magnitude of quality. Forker (1997) showed that as processes become more streamlined and capable, firms should invest their resources in product design and in training all employees in quality improvement concepts and techniques.
Badiru (1995)

In this paper, it was claimed that quality is a hidden factor in learning curve analysis. Badiru (1995) considered quality to be a function of performance, which in turn is a function of the production rate. He found that forgetting affects product quality in the sense that it can impair the proficiency of a worker in performing certain tasks. Badiru (1995) further explained that the loss in worker performance due to forgetting is reflected in the product quality through poor workmanship. He also noted that forgetting can take several different forms:

1. Intermittent forgetting (i.e., in scheduled production breaks)
2. Random forgetting (e.g., machine breakdown)
3. Natural forgetting (i.e., the effect of ageing)

Badiru (1995) emphasized that there are numerous factors that can influence how fast, how far, and how well a worker or an organization learns within a given time span. The multivariate learning curve he suggested was given by:

C_x = K \prod_{i=1}^{n} c_i x_i^{b_i},

where C_x = cumulative average cost per unit, K = cost for the first unit, n = number of variables in the model, x_i = specific value of the ith variable, b_i = learning exponent for the ith variable, and c_i = coefficient for the ith variable. Badiru (1995) tested this learning model on the four-year record of a troublesome production line. The production line he investigated was a new addition to an electronics plant and was thus subject to significant learning. Badiru (1995) observed that the company used to stop the production line temporarily if quality problems arose. He also hypothesized that the quality problems could be overcome if the downtime (forgetting) could be reduced so that workers could have a more consistent operation.
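As a rough numerical sketch of the multivariate form above, the snippet below evaluates C_x for hypothetical factor values; the parameter values and sign conventions (a negative exponent for cumulative output, a positive one for downtime hours) are illustrative assumptions, not Badiru's fitted estimates.

```python
def multivariate_cost(K, factors):
    """Cumulative average cost per unit, C_x = K * prod(c_i * x_i**b_i).
    `factors` is a list of (c_i, x_i, b_i) triples; all values used here
    are hypothetical, not Badiru's fitted estimates."""
    cx = K
    for c, x, b in factors:
        cx *= c * x ** b
    return cx

# Illustrative scenario: cost falls with cumulative output (b < 0) and
# rises with production downtime hours (b > 0).
K = 100.0
low_downtime = multivariate_cost(K, [(1.0, 500, -0.32), (1.0, 10, 0.20)])
high_downtime = multivariate_cost(K, [(1.0, 500, -0.32), (1.0, 40, 0.20)])
# More downtime (forgetting) raises the average unit cost, echoing
# Badiru's observation that ignoring downtime understates the cost.
```

The product form means each factor scales the cost multiplicatively, so omitting a significant factor (such as downtime) biases the estimate of every remaining coefficient.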
The four variables of interest in his study were: (i) the production level (X1), (ii) the number of workers (X2), (iii) the number of hours of production downtime (X3), and (iv) the dependent variable, the average production cost per unit, Cx. Badiru (1995) showed in his analysis of variance of the regression model that the fit is highly significant, explaining 95% of the variability in the average cost per unit. Badiru (1995) noticed that the average cost per unit would be underestimated if the effect of downtime hours were not considered. He concluded that the multivariate model provides a more accurate picture of the process performance.

Mukherjee et al. (1998)

Mukherjee et al. (1998) studied why some quality improvement projects are more effective than others. They explored this by studying 62 quality improvement projects undertaken in one factory over the course of a decade, and identified three learning constructs that characterize the learning process—namely, scope, conceptual learning, and operational learning. The purpose of their study was to establish a link between the pursuit of knowledge and the pursuit of quality. Mukherjee et al. (1998) followed the approach in Kim (1993) to distinguish between two types of effort: conceptual learning and operational learning. They defined conceptual learning as trying to understand why events occur (i.e., the acquisition of know-why). Operational learning, in their view, consists of implementing changes and observing the results of these changes. They further added that operational learning is the process of developing a skill for dealing with experienced events (i.e., the acquisition of know-how). Mukherjee et al. (1998) recommended that in order to establish links between learning and quality, field researchers should try to control potentially confounding factors such as variations in product and resource markets, general management policies, corporate culture, production technology, and geographical location.
They also recommended that field researchers must have access to detailed data about the systems that are used to improve quality. Mukherjee et al. (1998) further recommended that the plant take the following total quality management (TQM) measures to enhance quality:

• Training managers in problem-solving concepts and ensuring process capability
• Investing heavily in the training of plant personnel
• Creating a functional TQM organization, consisting of a plant-level TQM steering committee and departmental TQM teams
• Introducing SPC for the control of a few key parameters and attributes
• Adding accuracy and precision indices to the existing quality index
• Training foremen in creating, establishing, and monitoring standard operating procedures (SOPs)
• Installing a new information system, which economically provides standardized daily, weekly, and monthly production and quality data
• Emphasizing the behavioral (instead of the technical) component of TQM

Mukherjee et al. (1998) concluded the following: (i) management plays a role in addressing 80%–85% of quality problems, (ii) in dynamic production environments a cross-functional project team is in a better position to create technological knowledge, and (iii) operational and conceptual learning have different kinds of potential in a plant.

Lapré et al. (2000)

In this paper, the learning curve of TQM in a factory was explored. The link between learning and quality was extended from a cross-sectional, project-level analysis to a longitudinal, factory-level analysis. Lapré et al. (2000) cautioned that the power form of learning has many fundamental shortcomings. Using an exponential model, Lapré et al. (2000) focused on waste, which is a key driver of both quality and productivity.
They modeled the improvement in waste as:

\frac{dW(z)}{dz} = \mu [W(z) - P],

where W(z), P, z, and μ are the current waste rate, the desired waste rate, the proven capacity, and the learning rate, respectively. Lapré et al. (2000) took the learning rate as a function of autonomous and induced learning. They also assumed that y_{1t}, y_{2t}, …, y_{nt} are the managerial factors that affect the learning rate (e.g., the cumulative number of quality improvement projects), and modeled the learning rate at time t as:

\mu_t = \beta_0 + \sum_{i=1}^{n} \beta_i y_{it}.

Lapré et al. (2000) combined the two equations above to get:

\ln W(z_t) = a + \left( \beta_0 + \sum_{i=1}^{n} \beta_i y_{it} \right) z_t,

where a is a constant, while \beta_0 and \sum_{i=1}^{n} \beta_i y_{it} measure the autonomous and induced parts of the learning rate, respectively. The parameters of this equation were determined by analyzing a number of projects in a factory. Lapré et al. (2000) coded these projects on questions that dealt with their learning process and their performance, with responses on a five-point Likert scale (Bucher 1991). This allowed them to provide a systemic explanation, based on the dimensions of the learning process they used, of why induced learning that yields both know-why and know-how enhances the learning rate, while induced learning that only yields know-why disrupts the learning process.

Jaber and Bonney (2003)

Jaber and Bonney (2003) used the data in Badiru (1995) to show that the electronics production line follows two hypotheses: 1. The time required to rework a defective item reduces as production increases, and the rework times conform to a learning relationship. 2. Quality deteriorates as forgetting increases, due to interruptions in the production process. To validate the first hypothesis, Jaber and Bonney (2003) analyzed the effect of the cumulative production level, X, on the average time to rework a unit, Y, by fitting the model Y = \beta_1 X^{-\beta_2}.
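A fit of this power-form model can be sketched by linearizing it with logarithms and applying ordinary least squares; the synthetic data and "true" parameter values below are illustrative assumptions, not the Badiru (1995) data that Jaber and Bonney analyzed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data in the spirit of the first hypothesis: average rework
# time Y falls with cumulative production X as a power law. The "true"
# parameters below are made up for the sketch.
beta1, beta2 = 40.0, 0.3
X = rng.uniform(100, 5000, 60)
Y = beta1 * X ** (-beta2) * rng.lognormal(0.0, 0.05, 60)

# Taking logs linearizes the model: ln Y = ln(beta1) - beta2 * ln X,
# so ordinary least squares recovers the learning exponent.
A = np.column_stack([np.ones_like(X), np.log(X)])
(lnb1_hat, slope_hat), *_ = np.linalg.lstsq(A, np.log(Y), rcond=None)
beta2_hat = -slope_hat
```

The same log transformation extends to the two-regressor model with downtime D discussed next: adding a ln D column to the design matrix yields estimates of both exponents in one least-squares pass.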
The p values for the intercept β_1 and the slope β_2 indicated that the regression fit is significant. The analysis of variance showed that 84% of the variability in the average rework time per unit is explained by cumulative production as an independent variable. Similarly, to validate the second hypothesis, Jaber and Bonney (2003) analyzed the impact of forgetting due to production downtime, D, on the average rework time for a unit, Y, and the cumulative production level, X. The fitted regression model of Jaber and Bonney (2003) was of the form Y = \beta_1 X^{\beta_2} D^{\beta_3}. Again, the p values for the intercept β_1 and the slopes β_2, β_3 indicated that the regression fit is significant. Their analysis of variance showed that 88% of the variability in the average rework time per unit is explained by cumulative production and production downtime as independent variables.

Hyland et al. (2003)

In this paper, Hyland et al. (2003) reported research on continuous improvement (CI) and learning in the logistics of a supply chain. This research is based on a model of continuous innovation in the product development process and a methodology for mapping learning behaviors. Hyland et al. (2003) took learning to be crucial to innovation and improvement. To build innovative capabilities, Hyland et al. (2003) suggested that organizations need to develop and encourage learning behaviors. They believed that capabilities could only be developed over time by the progressive consolidation of behaviors, or by strategic actions aimed at the stock of existing resources. Hyland et al. (2003) identified four key capabilities that are central to learning and CI in a supply chain: (1) the management of knowledge; (2) the management of information; (3) the ability to accommodate and manage technologies and the associated issues; and (4) the ability to manage collaborative operations.

Jaber et al.
(2008)

Salameh and Jaber (2000) came up with a new line of research concerning defective items in an inventory model. They recommended the screening and disposal of the defective items in the basic economic order quantity model. This model has recently been widely extended to address the issues of shortages/backorders, quality, fuzziness of demand, and supply chains. However, Salameh and Jaber assumed the fraction of defectives to follow a known probability density function. Jaber et al. (2008) noticed that this fraction in an automotive industry reduces, over the number of shipments, according to a learning curve. They tried to fit several learning curve models to the collected data and found that the S-shaped logistic learning curve fitted the data well:

w_n = \frac{a}{g + e^{bn}},

where a and g are the model parameters, b is the learning exponent, and w_n is the fraction of imperfect items in the nth shipment. Jaber et al. (2008) developed two models similar to that of Salameh and Jaber (2000): one for an infinite planning horizon, and one for a finite planning horizon. They found that in the infinite planning model, the number of defective units, the shipment size, and the total cost decrease as learning increases. For the finite planning model, they found that increased learning favors ordering larger lots less frequently.

FUTURE DIRECTIONS OF RESEARCH

In this chapter, a number of scenarios have been presented. These scenarios relate the quality of a product to a number of parameters in a production facility—that is, the production quantity, production run, number of supplies, number of setups, and so on. Though we found a number of papers expressing the relationship between quality and learning in one way or another, a formal correlation between the two still seems to be missing. This association can be beneficial not only for inventory control, but also for the enhancement of coordination in a supply chain.
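The S-shaped logistic learning curve fitted by Jaber et al. (2008), w_n = a/(g + e^{bn}), can be sketched numerically as follows; the parameter values are illustrative, not their fitted estimates from the automotive data.

```python
import math

def defect_fraction(n, a, g, b):
    """S-shaped logistic learning curve of Jaber et al. (2008):
    w_n = a / (g + exp(b * n)), the fraction of imperfect items in the
    nth shipment. The parameter values used below are illustrative."""
    return a / (g + math.exp(b * n))

# Defective fraction over the first ten shipments.
w = [defect_fraction(n, a=0.8, g=1.0, b=0.3) for n in range(1, 11)]
# The fraction declines monotonically with the shipment number, which
# is the learning effect this curve is meant to capture.
```

Because e^{bn} grows without bound for b > 0, the curve drives the defective fraction toward zero over many shipments while a/(g + e^{b}) caps it early on, giving the characteristic S shape on a log scale.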
There is a dire need to clearly understand the difference between the enhancements in product quality achieved through induced and motor learning. The literature in the field of inventory and supply chain management has been dealing with these issues on distinct grounds, but the impact of joint learning on quality has never been examined. This can provide a rich field of research to be investigated. The issues of investment in different types of learning and the sharing of their benefits in a supply chain are also a vast arena to be studied. There has also been constant debate on the problem of how much to produce/order in manufacturing setups. The literature provides a substantial background in this area. As discussed in the chapter, the quantity and run length in production are directly linked to the quality of a product. Combined with experience (learning), this subject should be of particular interest to engineers and managers in the field of production management. A number of researchers have been investigating errors in screening, though not many have studied such errors in production environments. The relationship between quality, screening, and investment (in either, or both) is a topic that needs further research. This research should be of special importance for practitioners in the field of supply chain management, as it calls for the individual and mutual benefits of all the stakeholders. Quality should also be considered at the strategic level, where various investment decisions about new product development, process improvements, or system implementations are made. Those decisions are based on several assumptions, including the performance of a development project that is expected to deliver a product at the required quality level.
In contrast to industrial projects, which are usually repeated many times, learning in development projects causes increased productivity during the course of each activity and is referred to as "activity-specific learning" (Ash and Daniels 1999). Since development projects are hugely affected by learning (Robey et al. 2002; Plaza 2008), they are often delivered with significant schedule delays (Vandevoorde and Vanhoucke 2006). In an attempt to recover the schedule and reduce additional expenses, the testing of a prototype is reduced. If a prototype with a larger number of defects than would normally be acceptable is released, this impacts the quality of the product down the road. Therefore, it becomes critical to incorporate learning curves into a standard project management methodology, as this would allow for the assessment of the impact of learning curves on the quality of a new product, a new process, or a system. Learning influences the effectiveness of teams and increases future performance efficiency (Edmondson 2003); the strength of its impact is sufficient (Sarin and McDermott 2003) to warrant careful management (Garvin 1993).

ACKNOWLEDGMENTS

The authors thank the Natural Sciences and Engineering Research Council (NSERC) of Canada for supporting their research. They also thank Professor Michael A. Lapré of the Owen Graduate School of Management at Vanderbilt University for his valuable comments and suggestions.

REFERENCES

Anderson, E.G., 2001. Managing the impact of high market growth and learning on knowledge worker productivity and service quality. European Journal of Operational Research 134(3): 508–524.
Ash, R., and Daniels, D.E.S., 1999.
The effect of learning, forgetting, and relearning on decision rule performance in multiproject scheduling. Decision Sciences 30(1): 47–82.
Badiru, A.B., 1995. Multivariate analysis of the effect of learning and forgetting on product quality. International Journal of Production Research 33(3): 777–794.
Baloff, N., 1970. Startup management. IEEE Transactions on Engineering Management EM-17(4): 132–141.
Bucher, L., 1991. Consider a Likert scale. Journal for Nurses in Staff Development 7(5): 234–238.
Chand, S., 1989. Lot sizes and setup frequency with learning in setups and process quality. European Journal of Operational Research 42(2): 190–202.
Cohen, W.M., and Levinthal, D.A., 1989. Innovation and learning: The two faces of R & D. The Economic Journal 99(397): 569–596.
Crosby, P.B., 1979. Quality is free. New York: McGraw-Hill.
Deming, W.E., 1982. Quality, productivity, and competitive position. M.I.T. Center for Advanced Engineering Study, USA.
Edmondson, A.C., 2003. Framing for learning: Lessons in successful technology implementation. California Management Review 45(2): 34–54.
Fine, C.H., 1988. A quality control model with learning effects. Operations Research 36(3): 437–444.
Fine, C.H., 1986. Quality improvement and learning in productive systems. Management Science 32(10): 1301–1315.
Forker, L.B., 1997. Factors affecting supplier quality performance. Journal of Operations Management 15(4): 243–269.
Foster, S.J., and Adam, E.E., 1996. Examining the impact of speed of quality improvement on quality-related costs. Decision Sciences 27(4): 623–646.
Garvin, D.A., 1993. Building a learning organization. Harvard Business Review 71(4): 78–91.
Garvin, D.A., 1987. Competing on the eight dimensions of quality. Harvard Business Review 65(6): 101–109.
Glover, J.H., 1965. Manufacturing progress functions I: An alternative model and its comparison with existing functions.
International Journal of Production Research 4(4): 279–300.
Hyland, P.W., Soosay, C., and Sloan, T.R., 2003. Continuous improvement and learning in the supply chain. International Journal of Physical Distribution and Logistics Management 33(4): 316–335.
Jaber, M.Y., and Khan, M., 2010. Managing yield by lot splitting in a serial production line with learning, rework and scrap. International Journal of Production Economics 124(1): 32–39.
Jaber, M.Y., Goyal, S.K., and Imran, M., 2008. Economic production quantity model for items with imperfect quality subject to learning effects. International Journal of Production Economics 115(1): 143–150.
Jaber, M.Y., and Guiffrida, A.L., 2008. Learning curves for imperfect production processes with reworks and process restoration interruptions. European Journal of Operational Research 189(1): 93–104.
Jaber, M.Y., and Guiffrida, A.L., 2004. Learning curves for processes generating defects requiring reworks. European Journal of Operational Research 159(3): 663–672.
Jaber, M.Y., and Bonney, M., 2003. Lot sizing with learning and forgetting in set-ups and in product quality. International Journal of Production Economics 83(1): 95–111.
Juran, J.M., 1978. Japanese and Western quality - A contrast. Quality Progress (December): 10–18.
Kapp, K.M., 1999. Transforming your manufacturing organization into a learning organization. Hospital Material Management Quarterly 20(4): 46–55.
Kim, D.H., 1993. The link between individual and organizational learning. Sloan Management Review 35(1): 37–50.
Lapré, M.A., Mukherjee, A.S., and Wassenhove, L.N.V., 2000. Behind the learning curve: Linking learning activities to waste reduction. Management Science 46(5): 597–611.
Levy, F., 1965. Adaptation in the production process. Management Science 11(6): 136–154.
Li, G., and Rajagopalan, S., 1998. Process improvement, quality and learning effects. Management Science 44(11): 1517–1532.
Li, G., and Rajagopalan, S., 1997.
The impact of quality on learning. Journal of Operations Management 15(3): 181–191.
Lundvall, D.M., and Juran, J.M., 1974. Quality costs. In Quality control handbook, ed. J.M. Juran. Third edition. San Francisco: McGraw-Hill, pp. 1–22.
Malerba, F., 1992. Learning by firms and incremental technical change. The Economic Journal 102(413): 845–859.
Mukherjee, A.S., Lapré, M.A., and Wassenhove, L.N.V., 1998. Knowledge driven quality improvement. Management Science 44(11): 36–49.
Plaza, M., 2008. Team performance and information systems implementation: Application of the progress curve to the earned value method in an information system project. Information Systems Frontiers 10(3): 347–359.
Porteus, E.L., 1986. Optimal lot sizing, process quality improvement and setup cost reduction. Operations Research 34(1): 137–144.
Robey, D., Ross, J.W., and Boudreau, M.C., 2002. Learning to implement enterprise systems: An exploratory study of the dialectics of change. Journal of Strategic Information Systems 19(1): 17–46.
Salameh, M.K., and Jaber, M.Y., 2000. Economic production quantity model for items with imperfect quality. International Journal of Production Economics 64(1): 59–64.
Sarin, S., and McDermott, C., 2003. The effect of team leader characteristics on learning, knowledge application, and performance of cross-functional new product development teams. Decision Sciences 34(4): 707–739.
Sontrop, J.W., and MacKenzie, K., 1995. Introduction to technical statistics and quality control. Canada: Addison Wesley.
Spence, A.M., 1981. The learning curve and competition. Bell Journal of Economics 12(1): 49–70.
Steedman, I., 1970. Some improvement curve theory. International Journal of Production Research 8(3): 189–206.
Steven, G.J., 1999. The learning curve: From aircraft to spacecraft? Management Accounting 77(5): 64–65.
Tapiero, C.S., 1987. Production learning and quality control. IIE Transactions 19(4): 362–370.
Urban, T.L., 1998.
Analysis of production systems when run length influences product quality. International Journal of Production Research 36(11): 3085–3094.
Vandevoorde, S., and Vanhoucke, M., 2006. A comparison of different project duration forecasting methods using earned value metrics. International Journal of Project Management 24(4): 289–302.
Wright, T.P., 1936. Factors affecting the cost of airplanes. Journal of the Aeronautical Sciences 3(2): 122–128.
Yang, L., Wang, Y., and Pai, S., 2009. On-line SPC with consideration of learning curve. Computers and Industrial Engineering 57(3): 1089–1095.
Yelle, L.E., 1979. The learning curve: Historical review and comprehensive survey. Decision Sciences 10(2): 302–328.
Zangwill, W.I., and Kantor, P.B., 2000. The learning curve: A new perspective. International Transactions in Operational Research 7(6): 595–607.
Zangwill, W.I., and Kantor, P.B., 1998. Toward a theory of continuous improvement and the learning curve. Management Science 44(7): 910–920.

13 Latent Growth Models for Operations Management Research: A Methodological Primer

Hemant V. Kher and Jean-Philippe Laurenceau

CONTENTS
Introduction 237
Overview of Latent Growth Models 240
Unconditional and Conditional Linear Latent Growth Models 240
Data Requirements, Trajectory Modeling, and Invariance 241
Model Identification 243
Estimation, Sample Size, and Measures of Model Fit 243
Illustration of Latent Growth Model Application 245
Illustration of the Unconditional Latent Growth Model
245
Latent Growth Models for Two Groups: Multi-Group Analysis 246
Conditional Latent Growth Model 248
Piecewise Linear Latent Growth Model 250
Multivariate Latent Growth Models: Cross-Domain Analysis of Change 251
Summary of Latent Growth Model Illustrations 253
Other Longitudinal Data-Analysis Methods 253
Limitations and Possible Remedies for Latent Growth Models 254
Latent Growth Model Analysis in OM Research 255
Conclusions 257
Acknowledgments 259
References 259

INTRODUCTION

With an increased emphasis on empirical research in the field of operations management (OM), researchers are increasingly turning to the use of structural equation modeling (SEM) as a preferred method of data analysis. Shah and Goldstein (2006) reviewed nearly 100 papers in OM over two decades (1984–2003) that used SEM methodology. SEM allows researchers to study hypothesized relationships between unobservable constructs (i.e., latent variables) that are typically measured using two or more observable measures (i.e., manifest or indicator variables). The purpose of this chapter is to illustrate the use of a special type of SEM called a "latent variable growth curve model" (abbreviated as LCM by Bollen and Curran 2006, and LGM by Duncan et al.
2006), for studying longitudinal changes (i.e., changes over time) in observed or latent variables of interest to researchers in the field of OM. Traditional approaches to analyzing longitudinal data include techniques such as repeated measures analysis of variance (RANOVA), analysis of covariance (ANCOVA), multivariate analysis of variance (MANOVA), multivariate analysis of covariance (MANCOVA), and autoregressive models (for a review, see Hancock and Lawrence 2006). However, these methods have been used to describe changes at the group level of analysis rather than changes at the individual level of analysis. These traditional approaches are limited when the variation in the constructs describing change at the intra-individual level and inter-individual differences in intra-individual change are under focus. The following example illustrates the difference between traditional group level techniques and latent growth model (LGM) for analyzing longitudinal change. Suppose a call center is interested in measuring the improvements in productivity for its customer service representatives. Productivity improvements may be documented using objective measures such as the number of calls answered, accuracy in providing requested information, courtesy in handling calls, and so forth (Castilla 2005). If we were only interested in describing the changes in productivity at the group level, we could follow the two-step approach suggested in Uzumeri and Nembhard (1998). In Step 1, we would fit a common mathematical function (using a curve-fitting technique) to longitudinal productivity data for every individual in the sample. Then, in Step 2, we would use the resulting set of best-fit parameter estimates to describe the changes in productivity for the entire group. 
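The two-step, case-by-case approach above can be sketched as follows, using simulated call-center productivity data; the sample sizes, trajectory parameters, and the linear functional form are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 0: simulate weekly productivity for 50 reps over 8 weeks, each
# with their own latent intercept (starting level) and slope (growth).
n_reps, n_weeks = 50, 8
weeks = np.arange(n_weeks)
intercepts = rng.normal(30.0, 5.0, n_reps)
slopes = rng.normal(2.0, 0.8, n_reps)
Y = (intercepts[:, None] + slopes[:, None] * weeks
     + rng.normal(0.0, 1.0, (n_reps, n_weeks)))

# Step 1: fit a common functional form (here, a line) to each rep's data.
A = np.column_stack([np.ones(n_weeks), weeks])
est = np.linalg.lstsq(A, Y.T, rcond=None)[0]        # shape (2, n_reps)
est_intercepts, est_slopes = est

# Step 2: summarize the group with the best-fit parameter estimates.
# The spread of the estimated slopes hints at the inter-individual
# differences in change that an LGM would model explicitly.
group_summary = (est_slopes.mean(), est_slopes.std())
```

The case-by-case OLS fits recover each rep's trajectory, but only the LGM framework discussed next attaches formal variance estimates and significance tests to the intercept and slope spread.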
The approach adopted by Uzumeri and Nembhard may have been appropriate in their study if their intention was to fit a single functional form for growth to the "population of learners." However, if we also want to identify whether groups of individuals in the sample have growth patterns that differ from those fit to the entire sample, and to identify reasons for these differences, then the above-mentioned case-by-case approach is limited. Bollen and Curran (2006, 33–34) note five technical limitations of the case-by-case approach of using the ordinary least squares (OLS) regression procedure to estimate growth trajectories over time for sample members. First, overall tests of model fit are not readily available. Second, OLS imposes a restrictive structure on error variances. Third, the estimation of variances for random intercepts and slopes is difficult, and their significance tests are complicated. Fourth, the inclusion of certain covariates is possible, but only under the assumption that they are measured without error, which is not always tenable. Finally, the inclusion of time-varying covariates is impossible with OLS in a case-by-case approach. Latent variable growth curve models can replicate the OLS regression results of the case-by-case approach while overcoming the above-listed limitations. The strength of this technique is that it allows researchers to quantify statistically the variations in sample elements at the intra-individual level (e.g., changes in each individual's productivity over time) as well as the inter-individual level (i.e., changes in productivity across different individuals in the sample; Nesselroade and Baltes 1979). Latent growth models (LGMs) allow researchers to answer three important questions about the observed measure or the latent construct under focus (Bollen and Curran 2006). The first question concerns the nature of the trajectory at the group level.
This question is answered by fitting an unconditional LGM, and of interest in this analysis is the functional form of longitudinal changes in the manifest variable or the latent construct (e.g., linear, quadratic, cubic, exponential, etc.), along with the associated parameter estimates (e.g., intercept and slope for linear LGM). The second question allows researchers to determine whether a distinct trajectory of longitudinal changes in the observed measure or latent construct is needed for each case in the sample. As an example, statistically significant variability in the intercept and slope for a linear LGM would suggest that the starting point and the rate of change in productivity for each individual may be different from that fitted to the entire sample. Finally, the third question pertains to identifying the predictors of change in trajectories for the different cases. For example, if the answer to the first question suggests that a linear model fits changes in individual productivity over time, and the answer to the second question indicates that different individuals in the sample may have different linear trajectories, then the answer to the third question consists of identifying the reasons underlying the different trajectories for the different individuals in the sample. Identifying these variations may be important to understanding whether some individuals start out with a high (or low) level of productivity and show a high (or low) rate of productivity changes over time. For example, do individuals that received extensive training prior to starting their jobs have a higher starting productivity? Is the rate of improvement in the productivity for these individuals higher than for others that did not receive such training? As noted above, the unconditional LGM answers the first two questions, while a conditional LGM answers the third question. 
Stated somewhat differently, Willett and Sayer (1994, 365) note that “logically, individual change must be described before inter-individual differences in change can be examined, and inter-individual differences in change must be present before one can ask whether any heterogeneity is related to predictors of change.” Here, the individual change (i.e., Level 1) model posits that the form of change is the same for all individuals in the sample, while the inter-individual change (i.e., Level 2) model posits that different individuals may have different starting points and rates of change. LGMs have grown significantly in popularity across the behavioral and social sciences since the 1990s. Several textbooks (we have cited two: Bollen and Curran 2006; and Duncan et al. 2006) and hundreds of papers, both technically oriented and demonstrating applications, have been published on the topic. LGMs have been used extensively in the fields of psychology, sociology, health care, and education. Some examples of focal variables where change (growth or decline) has been studied in these fields include: (1) the behavioral aspects that cause increases in the consumption of tobacco, alcohol, and drugs (Duncan and Duncan 1994); (2) the relationship between crime rates and weather patterns (Hipp et al. 2004); (3) the behavioral aspects that cause changes in the level of physical activity among teenagers/adolescents (Li et al. 2007); and (4) the attitudes of middle- and high-school students towards science courses (George 2000). They are also starting to appear in the management literature where change is being assessed in focal variables such as: (1) the adjustment of new employees to the work environment (Lance et al. 2000); (2) the manner in which new employees seek information and build relationships (Chan and Schmitt 2000); (3) employee commitment and the intention to quit (Bentein et al.
2005); and (4) individual productivity (Ployhart and Hakel 1998). To the best of our knowledge, researchers in OM have not yet used LGMs for longitudinal analysis. The purpose of this chapter is to advocate for the use of LGMs for OM research and to provide a primer. Towards this end, we provide an overview of LGMs, and discuss the issues related to data requirements, model identification, estimation methods, sample size requirements, and model fit assessment statistics. We then illustrate the application of LGMs using simulated longitudinal data. We conclude by noting the advantages of using LGMs over other more traditional longitudinal approaches, and highlight areas in OM where researchers can use this technique effectively.

OVERVIEW OF LATENT GROWTH MODELS

Although a documented interest in modeling group- and individual-level growth exists from the early twentieth century, work in the area of LGMs is more recent. According to Bollen and Curran (2006), Baker (1954) presented the first known factor analytic model on repeated observations. They also note that Tucker (1958) and Rao (1958, 13) provided the approach to “parameterizing these factors to allow for the estimation of specific functional forms of growth.” Finally, Meredith and Tisak (1990) are credited for placing trajectory modeling in the context of confirmatory factor analysis.

Unconditional and Conditional Linear Latent Growth Models

The unconditional, linear LGM is shown in Figure 13.1. Following the path diagram convention used to represent SEMs, circles or ellipses represent latent constructs, while rectangular boxes represent observed (manifest) variables. Thus, in Figure 13.1, the rectangular boxes represent observed productivity for the years 2001 through 2005. Loadings of these measured variables on the intercept and slope constructs are as shown.
All variables have a constant loading (i.e., 1.0) on the intercept construct, while the loadings from observed productivity going from year 2001 to 2005 start at 0.0 and increase in steps of 1.0, representing linear growth. Finally, E1 through E5 represent the errors associated with observed productivity for the years 2001 through 2005, respectively. The double-headed arrow indicates the covariance between the intercept and slope constructs. Statistically significant variances for the intercept and slope constructs suggest significant variability of the individuals in the sample with regard to initial productivity (i.e., in year 2001) and the rate at which they improve, respectively. A statistically significant covariance between the slope and intercept constructs establishes the relationship between them; e.g., a negative covariance indicates that individuals with a high initial productivity improve at a slower rate.

FIGURE 13.1 Unconditional linear LGM.

A conditional, linear LGM is depicted in Figure 13.2a. Whether or not the individual received training prior to starting their job is included as the time invariant predictor in this model, entered as a 0/1 dummy variable. As with standard regression analysis, it is possible to use continuous as well as categorical predictors with LGMs. In addition to time invariant predictors, one can also use time-varying predictors with LGMs, as denoted in Figure 13.2b, where changes in individual productivity are hypothesized to covary with the extent of feedback (E-2001 through E-2005) provided in each period.

Data Requirements, Trajectory Modeling, and Invariance

The basic requirement for LGMs is that data on the same observed measure (or the same latent construct measured using the same indicators) should be collected over time. Researchers can use LGMs flexibly.
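Returning to Figure 13.1, the loading structure of the unconditional linear LGM can be written down directly. A small sketch (the factor means of 100 and 8 are illustrative assumptions, not values from the chapter):

```python
import numpy as np

# Factor loading matrix for the unconditional linear LGM of Figure 13.1:
# five yearly productivity measures (2001-2005) load on two latent factors.
years = np.arange(5)
Lambda = np.column_stack([np.ones(5),          # intercept loadings: all 1.0
                          years.astype(float)])  # slope loadings: 0, 1, 2, 3, 4

# Assumed factor means for illustration: initial productivity 100, yearly gain 8.
eta_mean = np.array([100.0, 8.0])
implied_means = Lambda @ eta_mean
print(implied_means)  # model-implied mean productivity for 2001..2005
```

The same matrix, with the slope column altered, encodes the quadratic, free-form, and piecewise variants discussed later.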
Where appropriate, using an exploratory approach, it may be possible to assume that a single functional form governs growth, but without an a priori specification of this functional form. Thus, the researcher may model different functional forms to identify the one that provides the best fit. As an example, Ployhart and Hakel (1998) tested the no-growth, linear, quadratic, and cubic (S-curve) trajectories and identified the latter as the best-fitting curve for describing the growth in sales commissions for a sample of salespersons. On the other hand, the researcher can hypothesize and confirm a specific functional form for the growth in individuals and the group. Moreover, if the functional form of change is expected to be different during one set of time points versus another, non-linear forms of change can also be approximated using piecewise growth modeling (Flora 2008).

FIGURE 13.2 (a) Conditional linear LGM with a time invariant covariate, and (b) time varying covariate.

Model Identification

Shah and Goldstein (2006) note significant concerns with just-identified and underidentified models in research utilizing SEM in OM, and recommend using overidentified models (degrees of freedom [DF] > 0) whenever possible. With LGMs (as with SEMs), the degrees of freedom are a function of the number of data waves (or measured variables in SEMs), the parameters to be estimated, and the parameter constraints placed on the model (e.g., assuming equal error variances versus allowing for a free estimation of the error terms). The expression (1/2)×{p(p+1)}−q, where p is the number of data waves and q is the number of parameters, is useful in calculating the degrees of freedom for LGMs.
Because LGMs incorporate both mean and covariance structure analyses, the degrees of freedom given by the above-stated expression are increased by p, since there are as many observed means as there are data waves. The degrees of freedom calculation allows researchers to determine the minimum number of data waves that they should work with. As an example, to fit a linear LGM, Bollen and Curran (2006) recommend using at least three waves of data (they note that this is a necessary, although not a sufficient, condition to fit an LGM). Figure 13.1 is useful in understanding the reason behind this. The LGM shown in Figure 13.1 is based on five waves of data (p = 5). These five waves provide a total of 15 variances and covariances ((1/2)×{5×(5+1)} = 15), plus five means, resulting in 20 observed pieces of information. Using these, we must estimate five error variances (one for each data wave), two factor variances, one covariance between the two factors, and two factor means, which represent 10 estimated pieces (q = 10). Thus, our model will have 10 degrees of freedom. If all error variances were constrained to be equal, then our model would have 14 degrees of freedom (20 observed pieces minus six estimated pieces). With only two waves of data, we would have a total of five observed pieces of information, which are insufficient to estimate six parameters, even for the restricted model with equal error variances. As the number of parameters to be estimated increases, so should the number of data waves. Thus, for an LGM describing a quadratic trajectory, the parameters to estimate would include the means and variances for the intercept, linear, and quadratic slope terms, plus three covariances (one each between the intercept and linear slope, the intercept and quadratic slope, and the linear and quadratic slope), and the error variances.
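The identification arithmetic above can be packaged as a small helper (a sketch; `lgm_degrees_of_freedom` is our name for it, not a standard function):

```python
def lgm_degrees_of_freedom(p, q):
    """Degrees of freedom for an LGM with p data waves and q free parameters.

    Observed information: (1/2)*p*(p+1) variances/covariances plus p means.
    """
    pieces = p * (p + 1) // 2 + p
    return pieces - q

# Linear LGM with 5 waves: q = 5 error variances + 2 factor variances
# + 1 factor covariance + 2 factor means = 10, giving 20 - 10 = 10 df.
print(lgm_degrees_of_freedom(5, 10))  # 10
# Constraining the 5 error variances to be equal reduces q to 6: 14 df.
print(lgm_degrees_of_freedom(5, 6))   # 14
# Two waves give only 5 pieces of information, too few for 6 parameters.
print(lgm_degrees_of_freedom(2, 6))   # -1 (underidentified)
```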
With only three data waves, we would have nine observed pieces of information (six variances and covariances plus three means), which are not sufficient to estimate 12 parameters. Thus, for the quadratic LGM, at least four waves of data are recommended. Using a similar approach, we can see that for a cubic LGM (which involves constructs for the intercept, and slopes for the linear, quadratic, and cubic components), at least five waves of data are recommended. More repeated measures, however, will increase the reliability of the estimates of the parameters of change (Singer and Willett 2003).

Estimation, Sample Size, and Measures of Model Fit

Maximum likelihood is the most commonly used approach to estimating LGMs. This approach assumes that the repeated measures come from a continuous (or approximately continuous) distribution. If repeated data are collected on ordinal measures, such as a dichotomous variable for a yes/no scale, or where subjects rate variables of interest on categorical scales (e.g., rating a car’s reliability on a five-point scale ranging from one = poor to five = excellent), the data have to be preprocessed before fitting the model. The first step involves calculating polychoric correlations and standard deviations, followed by generating an asymptotic covariance matrix for the variables. Then, maximum likelihood (ML) or another estimator, such as weighted least squares (WLS) or diagonally weighted least squares (DWLS), can be used to fit LGMs. We illustrate this approach in an example in the next section. Duncan et al. (2006) identify the approaches to fitting LGMs for categorical variables currently available in different SEM software packages. Muthén and Muthén (2002) provide guidelines on the influence of sample size on the statistical power of confirmatory factor analysis (CFA) and LGMs.
For the latter, their investigation considered unconditional models, as well as conditional models with a single, dichotomous time invariant predictor (covariate). Other conditions that varied in their study included the absence or presence of missing data, and the size of the population regression slope coefficient (low = 0.1, high = 0.2). The results of their simulation experiments (see Muthén and Muthén 2002, Table 2, 607) suggest that for unconditional models, samples with as few as 40 observations can provide a statistical power of 0.80 or better. The presence of a covariate increases the required sample size to about 150. Sample size requirements increase further in the presence of missing data and smaller values of the population regression coefficient. Hamilton et al. (2003) investigate the relationship between sample size (n = 25, 50, 100, 200, 500, and 1000), model convergence rates, and the propensity to generate improper solutions under conditions created by varying the number of data waves (four, five, and six time points), the variance of the intercept (high vs. low), and the variance of the slope parameter (high vs. low). Their results indicate that the convergence rate increases and the chances of generating improper solutions decrease with increased sample size. In the majority of cases, a sample size of 100 was sufficient to achieve a 100% convergence rate combined with a reduction in the number of instances where improper solutions were generated. An increased number of data waves and lower levels of intercept and slope variance also reduced the sample size requirements for proper convergence and reduced the instances of improper solutions. Because LGMs are a special case of SEM, the indices used to assess model fit are the same as those highlighted in the review of SEM research by Shah and Goldstein (2006).
Specifically, measures such as the chi-squared test statistic, goodness of fit index (GFI), adjusted goodness of fit index (AGFI), normed fit index (NFI), non-normed fit index (NNFI), comparative fit index (CFI), root mean squared error of approximation (RMSEA), and the root mean squared residual (RMR or SRMR), among others, are routinely reported in the context of assessing model fit. In addition, Bollen and Curran (2006) recommend using the Tucker–Lewis Index (TLI), incremental fit index (IFI), Akaike information criterion (AIC), and the Bayesian information criterion (BIC). It is interesting to note that different SEM software packages tend to report different fit measures. Duncan et al. (2006) provide a useful list of fit indices reported by current versions of Amos, EQS, LISREL, and Mplus. Default and independent model chi-squared statistics, AIC, CFI, NFI, and RMSEA appear to be reported by all four programs.

ILLUSTRATION OF LATENT GROWTH MODEL APPLICATION

We use simulated data to illustrate LGM-based analysis. The simulated data could represent a number of different scenarios relevant for OM researchers. As an example, the data sets could represent changes in the user acceptance of software designed to aid operational decisions in a manufacturing or service setting. Alternatively, the data could also represent changes in productivity in a manufacturing setting (e.g., the number of units produced by employees) or a service setting (e.g., the number of calls answered per period by call center employees, or the amount of sales recorded by salespersons per period, etc.). All data sets are generated using the Monte Carlo data simulation facilities in Mplus (Muthén and Muthén 1998–2004). The syntax and/or data are available to interested readers on request.
By using simulated data sets we illustrate how OM researchers can identify: (1) the form of growth that occurs at the group level (e.g., linear growth, non-linear growth, etc.); (2) whether two different groups with the same form of growth (e.g., linear) have similar growth parameters (e.g., intercepts, slopes, etc.); (3) predictors of growth (e.g., does training influence growth parameters?); (4) whether an intervention in the learning process (e.g., introducing new technology or methods) has an influence on growth parameters (e.g., the growth rate); and (5) whether the growth in one domain affects the growth in another (i.e., the parallel growth model).

Illustration of the Unconditional Latent Growth Model

We start by illustrating the unconditional LGM using a simulated data set that contains observations for four time points on 100 subjects. Data modeling follows the linear system of equations (e.g., see Hancock and Lawrence 2006, 175). In this data set, the intercept is generated from a normally distributed population with a mean of 30, while the growth rate is generated from a normal distribution with a mean of 10. We also generated the intercept and slope factors to be uncorrelated. In our second example, we consider another simulated data set for linear growth. When the data sets from the first two examples are merged, the intercept and slope factors of the resulting merged data turn out to be correlated, and we provide an interpretation of this correlation in that illustration. Researchers using the LGM technique usually plot and examine raw individual and summarized data in order to decide which functional form best fits the data. We show the correlations and average values for the time points in the first data set in Table 13.1. A researcher examining these results, especially the averages for the different times, would likely conclude that a linear model might fit these data.
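The chapter generates its data with the Monte Carlo facilities in Mplus; a rough plain-code equivalent of the first data set can be sketched as follows (the factor variances and the residual standard deviation are our assumptions; the text specifies only the population means of 30 and 10):

```python
import numpy as np

rng = np.random.default_rng(1)
n, waves = 100, 4

# Intercepts ~ N(30, 1) and slopes ~ N(10, 1), drawn independently so that
# the intercept and slope factors are uncorrelated, as in the first data set.
intercept = rng.normal(30, 1, n)
slope = rng.normal(10, 1, n)
t = np.arange(waves)
y = intercept[:, None] + slope[:, None] * t + rng.normal(0, 1, (n, waves))

print(y.mean(axis=0).round(2))                 # wave means, roughly 30, 40, 50, 60
print(np.corrcoef(y, rowvar=False).round(2))   # analogue of Table 13.1 correlations
```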
On the other hand, if the researcher decides to adopt an exploratory approach, different models can be fit to the data in order to identify the best-fitting form. Following Chan and Schmitt (2000), we fit three different LGMs: the no-growth model (the strict stability model suggested by Stoolmiller 1994), the free-form model, and the linear model. The no-growth model contains only the intercept, with a loading of 1.0 to each measured time point. A good fit for this model would support the hypothesis that there is no growth in observed values over time. In the free-form model, the loadings from all measured time points to the intercept are fixed at 1.0. The loadings for the first two time points to the slope are fixed at 0.0 and 1.0, respectively. The loadings from the subsequent time points to the slope are set free and are estimated by the SEM program, thus allowing for non-linear shapes of change. The loadings for the linear LGM were explained earlier (see Figure 13.1).

TABLE 13.1
Correlations and Means for a Linear Growth Data Set Generated with Intercept Mean = 30 and Slope Mean = 10

          Time-0   Time-1   Time-2   Time-3    Mean
Time-0     1.00     0.52     0.41     0.40    29.62
Time-1              1.00     0.73     0.73    39.99
Time-2                       1.00     0.88    50.10
Time-3                                1.00    60.69

When selecting the best-fitting model we compare alternate nested models using the likelihood ratio test (also known as the chi-square difference test). For example, the no-growth model is nested in the linear model, and the linear model is nested within the free-form model. The chi-square difference test is implemented by calculating the difference in model chi-square and degrees of freedom for a pair of models. If the chi-square difference (which is itself chi-square distributed) is significant, then one of the two models has a significantly better fit. The chi-square test results in Table 13.2 show that both the linear and free-form models provide a better fit than the no-growth model.
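The chi-square difference test for nested models is easy to reproduce (a sketch using the chi-square values reported in Table 13.2; the per-model degrees of freedom of 8, 5, and 3 follow from the identification arithmetic discussed earlier applied to four data waves):

```python
from scipy.stats import chi2

def chi_square_difference(chisq_restricted, df_restricted, chisq_full, df_full):
    """Likelihood ratio (chi-square difference) test for a pair of nested models."""
    delta_chisq = chisq_restricted - chisq_full
    delta_df = df_restricted - df_full
    p_value = chi2.sf(delta_chisq, delta_df)
    return delta_chisq, delta_df, p_value

# No-growth vs. linear: the difference is highly significant.
print(chi_square_difference(544.20, 8, 3.75, 5))
# Linear vs. free-form: the difference is not significant (p is roughly .65),
# so the more parsimonious linear model is retained.
print(chi_square_difference(3.75, 5, 2.88, 3))
```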
The differences in model fit are not significant between the linear and free-form models. As such, we adopt the linear LGM, given that fewer parameters are estimated and it is thus more parsimonious than the free-form LGM (the same rationale is used by Chan and Schmitt 2000 in choosing the linear LGM over other forms). Table 13.2 also shows model fit measures that include RMSEA, SRMR, AGFI, CFI, and NNFI. For the first two measures, values close to 0.0 indicate a good fit, while values greater than 0.10 are taken to imply a poor fit. For AGFI, CFI, and NNFI, values over 0.90 imply a good fit. The fit statistics associated with the linear LGM (RMSEA, CFI, NNFI, and SRMR values of 0.0, 1.0, 1.0, and 0.034, respectively) appear to be quite good and justify the choice of the linear form over other forms. Some recommended guidelines for model fit can be found in Hu and Bentler (1999). The estimated means for the intercept and slope are 29.60 and 10.35, respectively. The variances for both constructs (intercept and slope) are statistically significant, while the covariance between the intercept and slope constructs is, as expected, not significant (because we had generated uncorrelated intercept and slope values).

TABLE 13.2

Model           Chi-Square   p-Value   Model Comparison   Change in Chi-Square   Change in DF   p-Value
1. No-growth      544.20       .00            −                     −                  −            −
2. Linear           3.75       .59         1 vs. 2              540.45                 3           .00
3. Free form        2.88       .41         2 vs. 3                0.87                 2           .65

Note: n = 100. Numbers in bold italics are significant at 5%. The data set was generated such that the intercept mean = 30 and the slope mean = 10. The log likelihood test does not show a statistically significant improvement from the linear to the free-form model; the linear model is chosen over the free-form model as it offers a more parsimonious fit.

Latent Growth Models for Two Groups: Multi-Group Analysis

We also simulated another sample of data based on linear growth for 100 cases with four observations per case. For this set of data, the intercept was generated from a normally distributed population.
Mean and variance for the intercept are 29.60 and 1.78, respectively; both are statistically significant. The model fit statistics suggest a good fit (the model chi-square p-value is greater than .10, RMSEA = 0.05, SRMR = 0.024, and AGFI, NNFI, and CFI are greater than 0.95).

Piecewise Linear Latent Growth Model

One area of interest in OM research involves understanding and modeling the effect of interventions in the learning process (e.g., Bailey 1989; Dar-El et al. 1995; Shafer et al. 2001). An intervention can be expected to affect performance immediately following the intervention in different ways. If the intervention was designed to improve performance, and is implemented successfully, then the rate of learning following the intervention may increase. On the other hand, there may be a drop in performance following a disruptive interruption. A piecewise LGM can be useful for capturing the effect of an intervention that follows a baseline period (Flora 2008). We illustrate both the positive and negative effects of intervention on subsequent performance using two different data sets. In the first set of simulated data, we generated five repeated observations for 100 cases where the linear growth for the first three time points has a mean intercept of 30 and a mean slope of 5. An intervention occurs between the third and fourth time points, which causes the average rate of growth at the group level to double (i.e., the new slope for periods four and five has a mean of 10). In the second data set, the mean intercept and mean slope for the first three periods are the same (30 and 5, respectively); however, in this case the intervention between the third and fourth periods has a negative effect, causing the post-intervention growth rate to decline (modeled using a mean slope of 3 for the last two periods). The piecewise linear model uses three latent constructs: the intercept, the slope for the first three periods (Slope 1), and the slope for the last two periods (Slope 2).
There are two possible ways of fitting the piecewise model beyond this point. In both approaches, the coefficient of the intercept is fixed at 1 at all time points (as we did before with all linear LGMs). With the first approach, the coefficients for Slope 1 for periods one to five would be 0, 1, 2, 2, and 2, respectively, while the coefficients for Slope 2 for the same periods would be 0, 0, 0, 1, and 2, respectively. Thus, effectively, the growth attained at period three is “frozen in” and serves as the intercept for the second phase of the growth (Hancock and Lawrence 2006). With the second approach, the only difference is that the coefficients for Slope 1 for the five periods are 0, 1, 2, 3, and 4, respectively, while the coefficients for Slope 2 are the same as in the first approach. The value of Slope 2 under the second approach is interpreted as the “added growth” (Hancock and Lawrence 2006). As shown in Table 13.4a, a researcher attempting to fit a linear or free-form model to these data would find the fit statistics to be very poor (i.e., large chi-square, RMSEA, and SRMR values). Fit statistics improve significantly when the data are modeled using the two above-mentioned piecewise growth modeling approaches. Table 13.4b shows that the mean values for the intercept and Slope 1, as well as the model fit statistics, are identical for the two approaches. However, the estimated mean of Slope 2 differs between the two approaches because its interpretation differs: it is the absolute post-intervention slope in the first approach, and the added growth over Slope 1 in the second. As an example, consider the data set where the intercept and slope for growth during the first three periods have means of 30 and 5, respectively, while the slope has a mean of 10 for the last two periods. For both approaches the intercept and Slope 1 means are 30.02 (p < .01) and 5.16 (p < .01), respectively.
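The two parameterizations just described can be checked numerically (a sketch using the first data set's population means of 30, 5, and 10; both approaches imply the same mean trajectory):

```python
import numpy as np

# Slope loadings for the two piecewise parameterizations (5 time points,
# intervention between points 3 and 4); intercept loadings are all 1.
slope1_a = np.array([0, 1, 2, 2, 2])   # Approach 1: pre-intervention growth frozen at point 3
slope2   = np.array([0, 0, 0, 1, 2])   # post-intervention slope (same in both approaches)
slope1_b = np.array([0, 1, 2, 3, 4])   # Approach 2: pre-intervention slope continues

# Population means: intercept 30, pre-intervention slope 5, post-intervention
# slope 10 in absolute terms (Approach 1), or 10 - 5 = 5 "added growth" (Approach 2).
mu_a = 30 + 5 * slope1_a + 10 * slope2
mu_b = 30 + 5 * slope1_b + (10 - 5) * slope2
print(mu_a)  # [30 35 40 50 60]
print(mu_b)  # identical implied trajectory
```

This equivalence is why the two approaches yield identical fit statistics while the Slope 2 estimates differ in interpretation.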
With the first modeling approach, the Slope 2 mean is estimated at 10.30 (p < .01), and with the “additional growth” approach, the Slope 2 mean is estimated at 5.14 (p < .01). Multivariate Latent Growth Models: Cross-Domain Analysis of Change As a final illustration of LGMs, we present a model for the cross-domain analysis of change (Singer and Willett 2003; Willett and Sayer 1996). Also referred to as the “parallel” or “dual growth” model, this multivariate approach allows researchers to test whether growth in one domain (e.g., performing a particular type of task) is associated with growth in another domain (e.g., performing another task that shares traits with the first task). The richness of this analysis comes from the way in which the model is specified (the parallel growth model is shown in Figure 13.3). Specifically, for two separate growth processes (linear in our example), the researcher can test if the intercept and slope associated with one process impact on the intercept and slope associated with the other process. We constructed a data set to illustrate the dual growth for a sample of 100 cases. For the first process, linear growth was modeled with a mean intercept of 30 and a mean slope of 10. For the second process, the intercept mean was generated with loadings of 1 each from the intercept and slope for Task 1, and the slope was generated with loadings of 0.1 and 0.3 from the intercept and slope of Task 1, respectively. Thus, the expected values of the mean intercept and mean slope for the second growth process would be 40 (1 × 30 + 1 × 10) and 6 (0.1 × 30 + 0.3 × 10), respectively. Statistics for this model suggest an adequate fit with the p-value for the model chi-square being greater than 10%, RMSEA and SRMR less than 0.06, while AGFI, NNFI, and CFI are equal to or greater than 0.90. 
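The expected factor means for the second process can be verified with a small simulation (a sketch; the factor variances and disturbance standard deviations are our assumptions, while the loadings and the Task 1 means come from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100

# Task 1 growth factors: intercept mean 30, slope mean 10 (variances assumed).
i1 = rng.normal(30, 1, n)
s1 = rng.normal(10, 1, n)

# Task 2 factors built from the Task 1 factors with the loadings given in the
# text: intercept loads 1 and 1; slope loads 0.1 and 0.3.
i2 = 1.0 * i1 + 1.0 * s1 + rng.normal(0, 1, n)
s2 = 0.1 * i1 + 0.3 * s1 + rng.normal(0, 0.5, n)

print(i2.mean().round(1), s2.mean().round(1))  # approximately 40 and 6
```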
Path coefficients (shown in Figure 13.3) from the intercept and slope of the first growth process to the intercept and slope of the second growth process are close to the expected values and are statistically significant. Using the estimated means and standard errors, we can show that the loadings of the intercept and slope from the first growth process on the intercept of the second growth process are not significantly different from 1.0; similarly, the corresponding loadings for the slope of the second growth process are not significantly different from 0.1 and 0.3, respectively. These results show that the intercept and slope constructs associated with the first linear growth process have a statistically significant impact on the intercept and slope constructs associated with the second linear growth process (which may suggest a transfer of knowledge across processes).

TABLE 13.4a
Evaluation of Fit Statistics for Data with Intervention in the Learning Process

Model                    Chi-Square   p-Value   Change in Chi-Square   Change in DF   p-Value   NNFI    AGFI   CFI    RMSEA   SRMR
Linear growth model        285.20       .00             −                   −            −      −0.18   0.43   0.43   0.16    0.53
Free-form growth model      17.05       .02          268.15                 3           .00      0.86   0.97   0.98   0.083   0.12
Piecewise growth model       6.34       .39           10.71                 1           .00      0.95   1.00   1.00   0.046   0.024

Note: n = 100. Numbers in bold italics are significant at 5%. The free-form model is compared to the linear growth model; the piecewise model is compared to the free-form model. The log likelihood test is used to determine whether the difference between model chi-squares is significant at 5%.

TABLE 13.4b
Mean Values for the Intercept and Slopes for the Piecewise Growth Modeling Approaches

            Data-1       Data-1       Data-2       Data-2
            Approach-1   Approach-2   Approach-1   Approach-2
Intercept     30.02        30.02        30.01        30.01
Slope-1        5.16         5.16         5.16         5.16
Slope-2       10.30         5.14         3.21        −1.95

Note: For both data sets, n = 100. Expected intercept and Slope 1 means are 30 and 5, respectively, for both data sets. For Data 1, the expected Slope 2 mean is 10; for Data 2, the expected Slope 2 mean is 3.
With Approach 1, performance at the third time point serves as the intercept for time points 4 and 5. With Approach 2, the slope for time points 4 and 5 is interpreted as “additional growth” relative to the slope for time points 1, 2, and 3.

Summary of Latent Growth Model Illustrations

We illustrated the application of LGMs to simulated data sets. Our examples showed how LGMs can be used to identify the functional form of growth occurring for the entire sample using unconditional models, as well as the differences in the growth parameters for distinct groups within the overall sample using the multi-group and conditional models. We showed how the effect of an intervention on the growth process can be captured using LGMs. Finally, our illustration also showed how a researcher can test whether growth in one domain affects growth in another domain using the dual or parallel growth models (i.e., cross-domain analysis).

OTHER LONGITUDINAL DATA-ANALYSIS METHODS

LGMs are not the only way to study longitudinal changes in the measures and latent constructs of interest. According to Meredith and Tisak (1990), repeated measures ANOVA (RANOVA) and mixed-linear models (MLM) are special cases of LGMs. Duncan et al. (2006) compare three analytic approaches for longitudinal data: RANOVA in SPSS, mixed-linear models in HLM (hierarchical linear models, version 6, 2004), and LGM (using EQS version 6, 2005). They note that with a set of common constraints, the three approaches produce identical results for an unconditional model. The three approaches also provide comparable results when estimating conditional growth models that include predictors of growth. In HLM, the conditional growth model is referred to as the “intercept and slope as outcomes” model. However, the authors note that when modeling complex models of growth that simultaneously include predictors and sequelae (i.e., distal outcomes) of growth, the current versions of RANOVA and HLM cannot be used.
Moreover, RANOVA and HLM cannot handle latent variables as predictors of intercepts and slopes. Latent growth modeling, on the other hand, is flexible and allows for the testing of such models.

FIGURE 13.3 Path diagram for cross-domain analysis and related structural equations from LISREL. (Model fit: Chi-square = 32.59, df = 25, p-value = 0.14154, RMSEA = 0.055.)

Structural equations (standard errors in parentheses, t-values beneath):

I2 = 1.01*I1 + 0.96*S1, Error var. = 1.04, R2 = 0.89
    (0.019)    (0.057)              (0.23)
     52.48      16.97                4.52

S2 = 0.12*I1 + 0.25*S1, Error var. = 0.87, R2 = 0.23
    (0.015)    (0.045)              (0.14)
     7.69       5.65                 6.23

Chan (1998) and Hancock and Lawrence (2006) note an important limitation of the RANOVA procedure for handling growth data in a sample that includes individuals with variable growth rate parameters. The limitation involves the sphericity assumption inherent in RANOVA, which requires that the variances and covariances of the repeated measures be equal across time units. Thus, the very existence of intra-individual differences violates a statistical assumption of the RANOVA procedure, and hence it is not recommended when one wishes to statistically quantify individual differences in change over time.

A major difference between HLM and LGM is the treatment of measurement error terms. While HLM imposes constraints by assuming that measurement error terms are independent, LGMs allow the researcher to model measurement errors quite flexibly. For example, the researcher may fix the error terms to specific values, let the software estimate them as unknowns, allow for auto-correlated error terms, and so forth. The key differences between LGM and HLM, with the advantages and disadvantages of each approach over the other, are documented by Chan (1998), while Curran (2003) provides a more technical and detailed comparison.
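As a quick illustration of how the reported structural equations propagate first-process growth into the second process, the snippet below plugs hypothetical first-process values (an intercept of 30 and a slope of 5, matching the simulated data used throughout) into the point estimates for the second-process intercept; it ignores the error variance, so this is a conditional mean only.

```python
# Conditional mean of the second-process intercept implied by the first
# structural equation from the LISREL output (I2 = 1.01*I1 + 0.96*S1).
def second_intercept_mean(i1, s1, b_i1=1.01, b_s1=0.96):
    return b_i1 * i1 + b_s1 * s1

# Hypothetical first-process values matching the simulated data (30, 5):
print(second_intercept_mean(30.0, 5.0))  # ≈ 35.1
```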
LIMITATIONS AND POSSIBLE REMEDIES FOR LATENT GROWTH MODELS

Despite its many advantages, LGM suffers from many of the limitations that are inherent to the SEM approach. In addition, as with all longitudinal methods, a major limitation of the LGM approach is the extended time frame necessary to follow participants when acquiring longitudinal data. The cohort sequential design allows for compressing the time frame over which data are collected. Duncan et al. (1996) show that by using a cohort sequential design, longitudinal data that would normally require six years for a single cohort can be collected over a three-year period using multiple cohorts. In their example, comparisons are made between the annual tracking of a single cohort of individuals from the starting age of 12 until they reach the age of 17. With a single cohort, this approach would require six years of tracking (observations taken when the cohort is 12, 13, 14, 15, 16, and 17). However, with four cohorts (with starting ages of 12, 13, 14, and 15), data can be collected effectively over a three-year period, under the assumption that the cohorts are comparable. Their study compares the growth trajectories for the single-cohort and multi-cohort samples and finds the differences to be statistically insignificant.

Longitudinal studies also suffer from issues related to missing data and attrition. For example, if a researcher designs data collection to occur over four time points, it may not always be possible to obtain data from every sample unit on every occasion. There is also the concern that some subjects may provide data initially but drop out before the study has been completed. Advances in LGM allow for procedures that can account for some of these problems.
Bollen and Curran (2006) advise researchers against using the two procedures commonly adopted for dealing with missing data in practice—namely, list-wise deletion and pair-wise deletion. Instead, they recommend using either direct maximum likelihood or the multiple imputation method (for details, see Schafer and Graham 2002). Enders (2006) provides an excellent additional discussion of the handling of missing data within the context of SEM, while Duncan and Duncan (1994) demonstrate fitting LGMs in the presence of missing data.

Lastly, the temporal design (Collins and Graham 2002) of a longitudinal study is an important but often under-considered factor in methodological design. Temporal design refers to the timing and spacing of repeated assessments in a longitudinal study and has consequences for a study’s findings. Suppose the true course of change for a given outcome is cubic, or S-shaped. If the assessments are taken at only two points in time, the only functional form of change that can be recovered is linear (i.e., a straight line), suggesting a smooth, continuous change in the outcome over time, which is a misleading conclusion.

LATENT GROWTH MODEL ANALYSIS IN OM RESEARCH

To begin with, as an SEM-based methodology, LGMs allow researchers to study change in both observed variables and latent variables. To the best of our knowledge, where OM researchers have focused on change, they have always done so by focusing on changes in observable measures (e.g., Uzumeri and Nembhard [1998] fit a learning curve model to a population of learners where the observed measure is the amount of work done in standard hours; Pisano et al. [2001] study the improvements in the time required to perform cardiac surgery as a measure of organizational learning in order to understand if/why learning rates vary across organizations, i.e., hospitals).
LGM methodology allows OM researchers the opportunity to study longitudinal changes in observed variables as well as in multidimensional constructs such as product or service quality, customer service, employee satisfaction, and so forth. Multilevel LGMs (MLGMs) allow for the possibility of modeling change as a combination of its latent components. For example, researchers have viewed individual learning in industrial settings as a combination of motor and cognitive skill-related learning (e.g., Dar-El et al. 1995). Using instruments to measure cognitive and motor learning for relevant tasks over time, it may be possible to fit LGMs for each learning component individually at Level 1. Overall learning can then be viewed as a combination of learning in both of these components. Such a model would have a common learning intercept and a common learning slope at Level 2. A conceptual diagram of this learning model is shown in Figure 13.4. Duncan et al. (2006) refer to such models as the “factor of curves” model (in our example, the growth in motor and cognitive learning refers to the “curves” at Level 1, and the common intercept and growth constructs are the “factor” at Level 2).

FIGURE 13.4 Multilevel latent growth model (MLGM) depicting a factor of curves. (Adapted from Duncan et al., An introduction to latent variable growth curve modeling: Concepts, issues and applications. Mahwah: Lawrence Erlbaum, 2006, p. 70.)

In modeling performance changes as a function of experience (Wright 1936), Yelle (1979) notes that non-linear forms such as the S-curve and exponential functions have received extensive attention in the OM literature on learning. Ployhart and Hakel (1998) provide an application of the cubic (S-curve shaped) LGM to capture the growth in individual productivity, measured in sales commission (dollars), in a sample of 303 sales persons over a two-year (eight-quarter) period. Blozis (2004, 343) provides an example of fitting data on verbal and quantitative skills acquisition for 204 individuals using a structured LGM with a “negatively accelerated exponential function.” Thus, OM researchers have several choices with regard to modeling the form of longitudinal change using LGMs.

In addition to modeling individual-level learning effects, LGMs also allow for the modeling of interruptions (discontinuities) in learning. In recent years, several studies have focused on understanding the impact of interruptions in the learning process and how they can lead to forgetting effects (e.g., Bailey [1989] studied these effects in a laboratory setting, while Shafer et al. [2001] used organizational data to study them). As demonstrated in our illustrations of the piecewise growth models, OM researchers can use LGMs to model the effects of interventions that affect the rate of learning (e.g., the introduction of new technology or methods during the training period may help individuals learn at a faster rate).

Researchers in OM and management have also been interested in understanding the learning and growth that occurs at the organizational level (i.e., within and across organizations). Weinzimmer et al. (1998) provide a review of research on organizational growth within management. Their summarized results show that while researchers have collected repeated measures on financial and other publicly available data for samples of companies, data analysis has typically been based on a single measure and performed using an OLS approach. The LGM methodology allows the use of one or more measures (e.g., multiple measures can be aggregated to form a composite) in order to assess growth.
The benefits of LGM over OLS were noted in the introduction to this chapter. With organizations as the unit of analysis, another area where growth models can help is in analyzing the long-term effects of specific program adoptions on company performance. Consider, for example, the issue of the successful management of environmental responsibilities. If we were to focus on the effects of companies adopting programs aimed at improving their environmental performance, it would make sense to investigate the long-term impacts of such actions on performance levels (e.g., financial performance), simply because such programs often take a long time to adopt. Such a study would be based on tracking performance over a longer time frame (rather than analyzing cross-sectional data). Companies are now being encouraged to make their environmental management efforts public, which gives researchers access to such data. For example, data on organizations reporting their environmental performance is maintained by the U.S. Environmental Protection Agency’s National Environmental Performance Track program, which currently lists 450 reporting organizations. Similarly, there are other strategic programs that companies adopt which also warrant longitudinal analysis. Examples include examining the long-term effects of investing in enterprise resource planning (ERP) systems, adopting a just-in-time (JIT) manufacturing system, and adopting quality improvement programs such as Six Sigma and the ISO 9000 series.

CONCLUSIONS

Motivated by the recent popularity of SEM in OM research, our chapter illustrated the application of LGMs to simulated data sets. Using an exploratory approach, our illustrations show that the researcher can evaluate the efficacy of modeling different forms of growth in order to identify the one that fits best.
We demonstrated how conditional LGMs can be employed to identify groups of cases within the overall sample whose trajectories differ from the one fitted for the entire sample. Finally, we also illustrated cross-domain analysis, whereby LGMs are used to determine whether the growth in one measured variable (or latent construct) is influenced by the growth in another measured variable (or latent construct) over the same time frame. The primary benefits of using an LGM-based longitudinal analysis are listed in Table 13.5. In promoting the use of LGMs, we also noted the similarities and differences with other longitudinal analysis approaches. Existing research shows that there are certain types of advantages that only the LGM approach is able to provide, at least for now (future developments in other areas may offset these advantages). In particular, the flexibility of LGMs in allowing certain variables to be included simultaneously as dependent and independent variables in the same model, and the ability to allow for cross-domain analysis, both provide LGMs with advantages over other approaches. We also noted the limitations inherent in LGMs, and the remedies recommended by researchers to overcome them.

TABLE 13.5 Benefits of Using Latent Growth Models

Latent Growth Models:
1. Allow for the identification of a single form of growth for the entire group. For example, we can attempt to fit a hypothesized functional form for the growth, or use an exploratory approach to determine the best model for the group. The SEM framework allows us to compare model fits to identify the best functional form.
2. Provide an estimate of variance in the constructs for the form of growth across individuals. For example, with a linear growth model, a significant variance in the intercept suggests that individuals vary considerably with regard to initial performance.
Similarly, significant variance in the slope would suggest that individuals in the group vary with respect to their rate of growth. Growth parameter variances can then be explained by covariates.
3. Provide an estimate of covariance among the constructs that define the form of growth. For example, with a linear growth model, a significant positive covariance between the intercept and slope would suggest that high initial performance is associated with higher learning rates.
4. Allow for testing the effects of intervention. For example, did the intervention have a positive or negative effect on the post-intervention rate of growth?
5. Allow for testing whether the same form of growth exists across multiple groups. For example, if two different groups are learning the same task in similar environments, is the form of growth (e.g., linear, non-linear, etc.) the same for these two groups?
6. Allow for testing whether the growth constructs vary across groups where growth follows the same form. For example, we can test whether the intercepts and slopes are similar for two groups experiencing linear growth.
7. Allow for the inclusion of time-invariant and time-variant predictors of growth. Suppose one group was briefed about the task before learning commenced and the other group was not; a dummy variable denoting the presence/absence of training can be included as a time-invariant predictor of growth. Suppose also that individuals in a group were provided feedback on their performance for each period, and the amount of feedback provided declined as individuals became more proficient at the task over time; in this case we may include the amount of feedback in each period as a time-variant predictor of growth.
8. Allow for the inclusion of outcomes of growth.
For example, individual performance appraisal after learning may be statistically linked to the growth constructs; in this case, the performance appraisal is said to be a distal outcome of growth. This type of modeling cannot be conducted in HLM.
9. Allow for testing whether growth in one domain affects growth in another domain. For example, individuals who are learning a task well over time may also show increases in job satisfaction over the same time periods; the cross-domain model allows for a statistical test of linkages among the parameters that characterize growth across these two domains.

We highlighted some research areas within OM where LGM applications appear to be most promising. These include studying learning at the individual and organizational levels, examining the antecedents of growth, and examining the longitudinal effects of strategic programs that companies employ to better their performance. We believe that there are many other areas where LGMs will help in advancing our knowledge about growth processes. We conclude with a quote from Hancock and Lawrence (2006, 160) that “LGM innovations are both exciting and too numerous to address.” With an increased emphasis on SEM, and the growing expertise of OM researchers in SEM in general, LGMs offer the opportunity to analyze longitudinal changes in the measures and latent constructs that are relevant to OM.

ACKNOWLEDGMENTS

The first author would like to thank the Alfred Lerner School of Business, University of Delaware, for supporting this work through its summer research grant.

REFERENCES

Bailey, C., 1989. Forgetting and the learning curve: A laboratory study. Management Science 35(3): 340–352.
Baker, G.A., 1954. Factor analysis of relative growth. Growth 18(3): 137–143.
Bentein, K., Vandenberg, R., Vandenberghe, C., and Stinglhamber, F., 2005. The role of change in the relationship between commitment and turnover: A latent growth modeling approach. Journal of Applied Psychology 90(3): 468–482.
Blozis, S.A., 2004. Structured latent curve models for the study of change in multivariate repeated measures. Psychological Methods 9(3): 334–353.
Bollen, K.A., and Curran, P.J., 2006. Latent curve models: A structural equation perspective. Hoboken: John Wiley.
Castilla, E.J., 2005. Social networks and employee performance in a call center. American Journal of Sociology 111(5): 1243–1283.
Chan, D., 1998. The conceptualization and analysis of change over time: An integrative approach incorporating longitudinal mean and covariance structures analysis (LMACS) and multiple indicator latent growth modeling (MLGM). Organizational Research Methods 1(4): 421–483.
Chan, D., and Schmitt, N., 2000. Inter-individual differences in intra-individual changes in proactivity during organizational entry: A latent growth modeling approach to understanding newcomer adaptation. Journal of Applied Psychology 85(2): 190–210.
Collins, L.M., and Graham, J.W., 2002. The effect of the timing and spacing of observations in longitudinal studies of tobacco and other drug use: Temporal design considerations. Drug and Alcohol Dependence 68(1): S85–S96.
Curran, P.J., 2003. Have multilevel models been structural equation models all along? Multivariate Behavioral Research 38(4): 529–569.
Dar-El, E.M., Ayas, K., and Gilad, I., 1995. A dual-phase model for the individual learning process in industrial tasks. IIE Transactions 27: 265–271.
Duncan, T.E., and Duncan, S.C., 1994. Modeling incomplete longitudinal substance use data using latent variable growth curve methodology. Multivariate Behavioral Research 29(4): 313–338.
Duncan, S.C., Duncan, T.E., and Hops, H., 1996. Analysis of longitudinal data within accelerated longitudinal designs. Psychological Methods 1(3): 236–248.
Duncan, T.E., Duncan, S.C., and Strycker, L.A., 2006. An introduction to latent variable growth curve modeling: Concepts, issues and applications. Mahwah: Lawrence Erlbaum.
Enders, C.K., 2006. Analyzing structural equation models with missing data. In A second course in structural equation modeling, eds. G.R. Hancock and R.O. Mueller, 313–344. Greenwich: Information Age.
Flora, D.B., 2008. Specifying piecewise latent trajectory models for longitudinal data. Structural Equation Modeling 15(3): 513–533.
George, R., 2000. Measuring change in students’ attitudes toward science over time: An application of latent variable growth modeling. Journal of Science Education and Technology 9(3): 213–225.
Hamilton, J., Gagne, P.E., and Hancock, G.R., 2003. The effect of sample size on latent growth models. Paper presented at the annual meeting of the American Educational Research Association, April 21–25, Chicago, IL.
Hancock, G.R., and Lawrence, F.R., 2006. Using latent growth models to evaluate longitudinal change. In Structural equation modeling: A second course, 171–196. Greenwich: Information Age.
Hipp, J.R., Bauer, D.J., Curran, P.J., and Bollen, K.A., 2004. Crimes of opportunity or crimes of emotion? Testing two explanations of seasonal change in crime. Social Forces 82(4): 1333–1372.
Hu, L., and Bentler, P.M., 1999. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling 6(1): 1–55.
Lance, C.E., Vandenberg, R.J., and Self, R.M., 2000. Latent growth models of individual change: The case of newcomer adjustment. Organizational Behavior and Human Decision Processes 83(1): 107–140.
Li, C., Goran, M.I., Kaur, H., Nollen, N., and Ahluwalia, J.S., 2007. Developmental trajectories of overweight during childhood: Role of early life factors. Obesity 15(3): 760–771.
Meredith, W., and Tisak, J., 1990. Latent curve analysis. Psychometrika 55(1): 107–122.
Muthén, L.K., and Muthén, B.O., 2002. How to use a Monte Carlo study to decide on sample size and determine power. Structural Equation Modeling 9(4): 599–620.
Muthén, L.K., and Muthén, B.O., 1998–2004. Mplus user’s guide. Third edition. Los Angeles: Muthén and Muthén.
Nesselroade, J.R., and Baltes, P.B. (eds.), 1979. Longitudinal research in the study of behavior and development. New York: Academic Press.
Pisano, G.P., Bohmer, R.M.J., and Edmondson, A.C., 2001. Organizational differences in rates of learning: Evidence from the adoption of minimally invasive cardiac surgery. Management Science 47(6): 752–768.
Ployhart, R.E., and Hakel, M.D., 1998. The substantive nature of performance variability: Predicting inter-individual differences in intra-individual performance. Personnel Psychology 51(4): 859–901.
Rao, C.R., 1958. Some statistical methods for comparison of growth curves. Biometrics 14(1): 1–17.
Schafer, J.L., and Graham, J.W., 2002. Missing data: Our view of the state of the art. Psychological Methods 7(2): 147–177.
Shafer, S.M., Nembhard, D.A., and Uzumeri, M.V., 2001. The effects of worker learning, forgetting, and heterogeneity on assembly line productivity. Management Science 47(12): 1639–1653.
Shah, R., and Goldstein, S.M., 2006. Use of structural equation modeling in operations management research: Looking back and forward. Journal of Operations Management 24(2): 148–169.
Singer, J.D., and Willett, J.B., 2003. Applied longitudinal data analysis: Modeling change and event occurrence. Oxford: Oxford University Press.
Stoolmiller, M., 1994. Antisocial behavior, delinquent peer association and unsupervised wandering for boys: Growth and change from childhood to early adolescence. Multivariate Behavioral Research 29(3): 263–288.
Tucker, L.R., 1958. Determination of parameters of a functional relation by factor analysis. Psychometrika 23(1): 19–23.
Uzumeri, M., and Nembhard, D., 1998. A population of learners: A new way to measure organizational learning. Journal of Operations Management 16(5): 515–528.
Weinzimmer, L.G., Nystrom, P.C., and Freeman, S.J., 1998.
Measuring organizational growth: Issues, consequences and guidelines. Journal of Management 24(2): 235–262.
Willett, J.B., and Sayer, A.G., 1994. Using covariance structure analysis to detect correlates and predictors of change. Psychological Bulletin 116(2): 363–381.
Willett, J.B., and Sayer, A.G., 1996. Cross-domain analyses of change over time: Combining growth modeling with covariance structure modeling. In Advanced structural equation modeling: Issues and techniques, eds. G.A. Marcoulides and R.E. Schumacker, 125–157. Mahwah: Lawrence Erlbaum.
Wright, T., 1936. Factors affecting the cost of airplanes. Journal of Aeronautical Science 3(2): 122–128.
Yelle, L.E., 1979. The learning curve: Historical review and comprehensive survey. Decision Science 10(2): 302–328.

Part II Applications

14 The Lot Sizing Problem and the Learning Curve: A Review

Mohamad Y. Jaber and Maurice Bonney

CONTENTS
Introduction ... 265
The Learning Phenomenon ... 266
Lot Sizing with Learning (and Forgetting) in Production ... 268
  Effects of Learning on the Economic Manufacturing Quantity with Instantaneous Replenishment ... 269
  Effects of Learning on the Economic Manufacturing Quantity with Finite Delivery Time ... 271
  Intermediate Research (1990–2000) ... 272
  Recent Research (2001–2010) ... 275
Lot Sizing with Learning in Set-ups ... 278
Lot Sizing with Learning in Production ... 281
Lot Sizing with Improvement in Quality ... 281
Lot Sizing with Controllable Lead Time ... 282
Learning Curves in Supply Chains and Reverse Logistics ... 283
  Supply Chain Management ... 283
  Reverse Logistics ... 284
Summary and Conclusions ... 285
References ... 286

INTRODUCTION

Traditional models for determining the economic manufacture/order quantity assume a constant production rate. One result of this assumption is that the number of units produced in a given period is constant. In practice, the constant production rate assumption is not valid whenever the operator begins the production of a new product, changes to a new machine, restarts after a break, or implements a new production technique. In such situations, learning cannot be ignored. Learning suggests that the performance of a person or an organization engaged in a repetitive task improves with time. This improvement results in a decrease in the manufacturing cost of the product, but if the savings due to learning are significant, the effect on production time—and hence inventory—can also be significant. Factors contributing to this improved performance include more effective use of tools and machines, increased familiarity with operational tasks and the work environment, and enhanced management efficiency.
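The baseline that these lot sizing models start from is the classic economic order/manufacture quantity under constant rates. A minimal sketch of the textbook EOQ formula follows; the parameter values are illustrative only, not taken from any model in this chapter.

```python
import math

def eoq(demand_rate, setup_cost, holding_cost):
    """Classic EOQ: Q* = sqrt(2 * K * D / h) for annual demand D,
    setup (ordering) cost K, and unit holding cost h per year."""
    return math.sqrt(2 * setup_cost * demand_rate / holding_cost)

# Illustrative parameters: D = 1200 units/year, K = $50/setup, h = $6/unit/year
print(eoq(demand_rate=1200, setup_cost=50, holding_cost=6))  # ≈ 141.42 units
```

Learning invalidates the constant-unit-cost assumption behind this formula, which is precisely what the models surveyed below address.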
Today, manufacturing companies are adopting an “inventory is waste” philosophy using just-in-time (JIT) production, which usually combines the elements of total quality control and demand pull to achieve high productivity. JIT turns the economic manufacture/order quantity (EMQ/EOQ) formula around. Instead of accepting set-up times as fixed, companies work to reduce set-up times and thereby reduce lot sizes. The JIT concept of continuous improvement applies primarily to repetitive manufacturing processes, where the learning phenomenon is present. This chapter surveys work that deals with the effect of learning on the lot size problem. It also explores the possibility of incorporating some of the ideas adopted by JIT (reduction in set-up times, zero defectives, total preventive maintenance, etc.) into such models, with the intention of narrowing the gap between the “inventory is waste” and the “just-in-case” philosophies.

THE LEARNING PHENOMENON

Early investigations of learning focused on the behavior of individual subjects. These investigations revealed that the amount of time required to perform a task declined as experience with the task increased (Thorndike 1898; Thurstone 1919; Graham and Gagné 1940). The first attempt to formulate relations between learning variables in quantitative form (by Wright in 1936) resulted in the theory of the “learning curve.” The learning curve can describe group as well as individual performance, and the groups can comprise direct and indirect labor. Technological progress is a kind of learning. The industrial learning curve thus embraces more than the increasing skill of an individual gained by repetition of a simple operation. Instead, it describes a more complex organism—the collective efforts of many people, some in line and others in staff positions, but all aiming to accomplish a common task progressively better.
This may be why the learning phenomenon has also been called “progress functions” (Glover 1965), “start-up curves” (Baloff 1970), and “improvement curves” (Steedman 1970). The “aircraft learning curve” originated by Wright is known to some as “Wright’s model of progress.” Wright’s power function formulation can be represented as

T_Q = T_1 Q^(-b),  (1)

where T_Q is the time to produce the Qth unit of production, T_1 is the time to produce the first unit, and b is the learning exponent. Note that the cumulative time to produce Q units is determined from (1) as

sum_{i=1}^{Q} T_1 i^(-b) ≈ T_1 ∫_0^Q x^(-b) dx = T_1 Q^(1-b) / (1-b).

Figure 14.1 illustrates the Wright learning curve. It is worth noting that a few researchers (e.g., Glover 1966, 1967) adopted graphical solutions for negotiating the prices of military contracts based on Wright’s learning curve. Hackett (1983) compares a number of models, including that of Wright, and concludes that the time constant model is a good choice for general use since it fits a wide range of observed data. Although the time constant model may be better with regard to its predictive ability, it is more complicated to use and is less commonly encountered than the Wright learning curve (1936).

FIGURE 14.1 Wright’s learning curve. (Plot of time per unit against cumulative production in units.)

Jordan (1958), and Carlson and Rowe (1976), state that the learning curve is non-linear and that, in practice, the learning function is an S-shaped curve. The task “life cycle” can be described as consisting of three phases, as shown in Figure 14.2. The incipient phase, Phase 1, is the phase during which the worker is getting acquainted with the set-up, the tooling, instructions, blueprints, the workplace arrangement, and the conditions of the process. In this phase, improvement is slow.
Phase 2—the learning phase—is where most of the improvement takes place (e.g., reduction in errors, changes in the workplace, and/or changes in the distance moved). The third and final phase (Phase 3) represents maturity, or the levelling of the curve. Looking at the graph in Figure 14.2, it will be seen that the learning rate during the incipient or start-up phase is slower than during the learning phase.

FIGURE 14.2 The S-shaped learning curve. (Time per unit against cumulative production, showing the incipient, learning, and maturity phases.)

At the level of the firm, the learning curve is considered an aggregate model, since it includes learning from all sources in the organization. Learning curves, besides describing changes in labor performance, also describe changes in materials input, process or product technologies, or managerial technologies, from the level of the process to the level of the firm. Hirsch (1952) found that approximately 87% of the changes in direct labor requirements were associated with changes in technical knowledge, which can be considered a form of organizational learning. Hirschmann (1964) observed the learning curve in the petroleum-refining industry, where direct labor is non-existent. He suggested that the learning there is organizational, resulting from continuous investment in new technology to reduce the cost per unit of output.

When does the learning process cease? Crossman (1959) claimed that the process continues even after 10 million repetitions. Hirschmann (1964) made the point that skepticism on the part of management that improvement can continue may in itself be a barrier to its continuance. Conway and Schultz (1959) noted that two products that had plateaued in one firm continued down the learning curve when transferred to another firm.
Baloff (1970) concluded that plateauing is much more likely to occur in machine-intensive industries than it is in labor-intensive industries. One possible explanation for this is that plateauing could be strongly associated with labor ceasing to learn or machines having a fixed cycle that limits the reduction in time. Corlett and Morcombe (1970) related plateauing either to the necessity to consolidate what has already been learnt before making further progress, or to forgetting. Although there is almost unanimous agreement by scientists and practitioners that the form of the learning curve presented by Wright (1936) is an effective representation of learning, a full understanding of the behavior and factors affecting the forgetting process has not yet been developed. Many researchers in the field have attempted to model the forgetting process mathematically. Carlson and Rowe (1976) described the forgetting or interruption portion of the learning cycle by a negative decay function comparable to the decay observed in the electrical losses in condensers. Hoffman (1968) and Adler and Nanda (1974a, 1974b) presented two refined mathematical techniques for incorporating the effects of production breaks into planning and control models. Globerson et al. (1989) indicated that the degree of forgetting is a function of the break length and the level of experience gained prior to the break. Jaber and Bonney (1996a) developed a mathematical model for the learning-forgetting relationship, referred to as the “learn-forget curve model” (LFCM). The LFCM was tested and shown to be consistent with the model fitted to the experimental data of Globerson et al. (1989), with less than 1% deviation (Jaber and Bonney 1997a).
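The learn-forget interplay just described can be caricatured with a toy calculation. The intercept of each new cycle's learning curve is taken as T_{1i} = T_1(u_i + 1)^{-b}, the partial-transfer form used later in this chapter, where u_i is the experience (in units) remembered at the start of cycle i. The exponential retention rule used for u_i below is a simplifying assumption of this sketch, not the LFCM itself.

```python
# A toy sketch of learning with forgetting between production cycles.
# The intercept update T1i = T1 * (u + 1)**(-b) follows the partial-transfer
# idea discussed in this chapter; the exponential retention rule for u is a
# simplifying assumption for illustration, not the LFCM.
import math

def next_experience(u_prev, lot, break_len, tau=5.0):
    """Experience carried into the next cycle: what was accumulated,
    discounted for the length of the production break (assumed form)."""
    return (u_prev + lot) * math.exp(-break_len / tau)

t1, b = 10.0, 0.3
u = 0.0
intercepts = []
for _ in range(4):
    t1i = t1 * (u + 1.0) ** (-b)     # first-unit time of this cycle
    intercepts.append(t1i)
    u = next_experience(u, lot=20.0, break_len=1.0)

print([round(t, 2) for t in intercepts])
```

With short breaks, learning dominates and the cycle intercepts fall; a longer break discounts the retained experience more heavily, raising the next intercept — the qualitative behaviour reported by Globerson et al. (1989).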
The forgetting slope was assumed to be dependent on four factors: (1) the length of the interruption period, (2) the equivalent accumulated output by the point of interruption, (3) the time over which it is assumed that total forgetting will occur, and (4) the learning exponent. Unlike the “forgetting factor” model of Towill (1985), it is believed that the LFCM resolved the behavioral nature of the forgetting factor as a function of the break time. The LFCM has been shown to be a promising model for capturing the learning-forgetting process. For further background information, readers may refer to Jaber and Bonney (1997a), Jaber et al. (2003), and Jaber and Sikström (2004a, 2004b).

LOT SIZING WITH LEARNING (AND FORGETTING) IN PRODUCTION

The use of the learning curve has been receiving increasing attention due to its application to areas other than traditional learning. In general, learning is an important consideration whenever an operator begins the manufacture of a new product, changes to a new machine, or restarts production after some delay. Learning implies that the time (cost) needed to produce a product will reduce as the individual (or group) becomes more proficient with the new product or process. For previous reviews of learning curves, see Yelle (1979) and Jaber (2006a). Optimal lot size formulae are typically developed and used on the assumption that manufacturing unit costs are constant. With learning, this approximation is true only when the production rate has stabilized. This section categorizes the publications on learning curve effects related to the lot sizing problem into two groups. The first group includes those authors who studied the effects of learning on the economic manufacturing quantity (EMQ) with the assumption of instantaneous replenishment. The second group includes those authors who studied the effects of learning on the EMQ with a finite replenishment rate.
Effects of Learning on the Economic Manufacturing Quantity with Instantaneous Replenishment

Keachie and Fontana (1966) provided an extension of the theory of lot sizes to include “learning” related to three different cases that can occur. These cases are: (1) there is a total transfer of learning from period to period, (2) there is no transmission of learning, and (3) there is a partial transmission of learning. The total inventory cost function, C_T, developed by Keachie and Fontana (1966) is represented as:

C_T = S + h Q^2/(2D) + c_1 T_1 Q^{1-b}/(1-b),  (14.2)

where S is the set-up cost per lot, h is the holding cost per unit per unit of time, c_1 is the labor cost per unit of time, D is the demand rate, and T_1 and b are as defined in Equation 14.1. They assumed constant demand, that no shortages occur, and that set-up costs are independent of the number of pieces in the lot. Spradlin and Pierce (1967) describe lot sizing production runs whose production rate can be described by a learning curve. They considered the infinite planning horizon case. Their solution included the possibility of variable lot sizes, as well as the possibility that all units are produced in one lot. They assumed that the production rate is high enough, relative to the demand rate, that for the purpose of calculating holding costs, one can assume that a lot is not started before the preceding lot is depleted. They assumed a constant regression of learning (forgetting rate), which is related to the rate of learning at the point at which the interruption occurs. This is accomplished by taking the regression equal to the amount learned during the production of the last M units. At each interruption the rate of learning reverts back to the rate that applied M units earlier. Steedman (1970) extended the work of Keachie and Fontana by investigating the properties of the solution of Equation 14.2 and demonstrating that Q* > Q_0 = \sqrt{2SD/h}, where Q_0 is the economic order quantity (EOQ) (Harris 1990).
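Steedman's result can be checked numerically by minimizing the per-unit-time version of the Keachie-Fontana cost, i.e., the per-lot cost divided by the cycle length Q/D. This is a sketch, and all parameter values below are illustrative assumptions.

```python
# A numerical check that, with learning in production, the optimal lot size
# exceeds the classical EOQ (Steedman's observation). The per-unit-time cost
# divides the Keachie-Fontana per-lot cost by the cycle length Q/D.
# All parameter values are illustrative assumptions.
import math

S, h, D = 100.0, 2.0, 1000.0     # set-up cost, holding cost, demand rate
c1, t1, b = 20.0, 0.5, 0.32      # labor cost rate, first-unit time, exponent

def cost_per_unit_time(q):
    setup = S * D / q
    holding = h * q / 2.0
    labor = c1 * t1 * D * q ** (-b) / (1.0 - b)
    return setup + holding + labor

eoq = math.sqrt(2.0 * S * D / h)                 # classical EOQ
qs = [100.0 + i * 0.5 for i in range(3801)]      # grid search 100 .. 2000
q_star = min(qs, key=cost_per_unit_time)
print(round(eoq, 1), round(q_star, 1))           # q_star comes out above the EOQ
```

The labor term falls with Q, so its marginal effect offsets part of the holding cost and pushes the optimum above the square-root value.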
The improvement phenomenon makes the optimum lot size greater than that given by the traditional square-root formula. Steedman noted that the optimum lot size decreases as learning becomes faster. Carlson (1975) provided a method to determine “lost time” based on the classical learning curve relationship, which he added to the cost function. Carlson altered the EOQ formula accordingly as:

Q* = \sqrt{2D(S + LT)/(U(Q) I)},  (14.3)

where LT is the lost time cost, U(Q) is the average unit labor cost when producing a lot of Q units, and I is the carrying cost rate per time period. While the approach of Carlson (1975) considers the role of U(Q) in the carrying cost, it ignores its impact on the production cost. Wortham and Mayyasi (1972) proposed the use of the classical square-root formula for demonstrating the impact of learning on the EOQ and system inventory cost, using a decreasing average value of the holding cost. Muth and Spremann (1983) extended the works of Keachie and Fontana (1966), Steedman (1970), and Wortham and Mayyasi (1972). These extensions are: (1) production costs consist of two portions—one is learning dependent, the other is linear; (2) it is shown that the ratio Q*/Q_0 is a function of two parameters, the progress rate and the cost ratio; (3) an explicit numerical solution procedure is given for the transcendental equation defining Q*/Q_0 and specific values of the solution are tabulated; and (4) a simple approximation for Q* is given in the classical square-root format, whose form is:

Q* \approx \sqrt{(2D/h)\,[S + c_1 T_1 Q_0^{1-b}/2]}.  (14.4)

Kopcso and Nemitz (1983) explored the effect of learning in production on the lot sizing problem. They developed two models to describe two demand situations: dependent (Model 1) and independent (Model 2).
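The Muth-Spremann square-root approximation above is straightforward to evaluate; the sketch below uses illustrative parameter values (the same assumed ones as in the earlier examples).

```python
# A sketch of the Muth-Spremann square-root approximation for the optimal
# lot size under learning, as reconstructed above. It keeps the classical
# square-root format but inflates the set-up cost by a learning-dependent
# term evaluated at the EOQ. All parameter values are illustrative assumptions.
import math

S, h, D = 100.0, 2.0, 1000.0
c1, t1, b = 20.0, 0.5, 0.32

q0 = math.sqrt(2.0 * S * D / h)                  # classical EOQ
q_approx = math.sqrt(2.0 * D * (S + c1 * t1 * q0 ** (1 - b) / 2.0) / h)
print(round(q0, 1), round(q_approx, 1))          # approximation exceeds the EOQ
```

The approximation preserves the square-root form, so it is easy to tabulate, while still reproducing the qualitative result that learning enlarges the optimal lot.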
They found that by excluding the material costs (for an assembly operation, the cost of all components), the optimal lot size was seen to vary linearly with demand and inversely with the carrying cost rate. Kopcso and Nemitz also found that when material costs were included, a smaller optimal lot size was derived. Their cost function, with labor cost, is given as:

C_T = c_1 \frac{T_1 Q^{-b}}{1-b} \left( \frac{QI}{2} + D \right),  (14.5)

where I is the inventory carrying cost rate per time period (i.e., h = c_1 I) and Q = Q_L = 2bD/(I(1-b)) is the optimal lot size, where Q_L is linearly related to D and to 1/I. This is contrary to the EOQ model, where the order quantity is proportional to their square roots. Kopcso and Nemitz (1983) modified Equation 14.5 to include the material cost, c_2, as:

C_T = \left( c_2 + c_1 \frac{T_1 Q^{-b}}{1-b} \right) \left( \frac{QI}{2} + D \right).  (14.6)

No closed-form expression was found for Q = Q_M (including material cost) that optimizes Equation 14.6, which could be determined using numerical search techniques. However, they provided a relationship between Equations 14.5 and 14.6, where (Q_L - Q_M)/Q_M = c_2/(c_1 T_1 Q_M^{-b}). Kopcso and Nemitz (1983) ignored the set-up cost by referring to Freeland and Colley (1982), who proposed an improved variation on part period balancing, combining lots until the incremental (not total) carrying cost equals the savings resulting from eliminating one set-up.

Effects of Learning on the Economic Manufacturing Quantity with Finite Delivery Time

Adler and Nanda (1974a, 1974b) analyzed the effect of learning on production lot sizing models for both the single- and multiple-product cases. A general equation is developed for the average production time per unit, for producing the annual demand in batches with some percentage of learning not retained between lots.
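The Kopcso-Nemitz results discussed above can be verified numerically: the labor-only optimum matches the closed form Q_L = 2bD/(I(1-b)), the material-cost optimum Q_M found by search is smaller, and the two satisfy the stated relationship. All parameter values below are illustrative assumptions.

```python
# A numerical sketch of the Kopcso-Nemitz (1983) lot sizes: Q_L has a closed
# form without material cost; with material cost c2, Q_M must be searched for
# and comes out smaller, with (Q_L - Q_M)/Q_M = c2/(c1*T1*Q_M**(-b)).
# All parameter values are illustrative assumptions.
c1, t1, b = 20.0, 0.5, 0.32
D, I, c2 = 1000.0, 0.25, 15.0

def cost(q, material=0.0):
    return (material + c1 * t1 * q ** (-b) / (1.0 - b)) * (q * I / 2.0 + D)

q_l = 2.0 * b * D / (I * (1.0 - b))              # closed form, labor only
qs = [q / 10.0 for q in range(10, 100001)]       # grid search 1.0 .. 10000.0
q_l_num = min(qs, key=cost)                      # numerical check of Q_L
q_m = min(qs, key=lambda q: cost(q, c2))         # optimum with material cost
print(round(q_l, 1), round(q_l_num, 1), round(q_m, 1))
```

Including the material cost adds a term that grows with Q, which is why the optimum shrinks sharply relative to Q_L.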
Two models are developed: the first is restricted to equal lot sizes, Q_1 = Q_2 = Q_3 = … = Q_n = Q, and the second to equal production intervals, t_1 = t_2 = t_3 = … = t_n = t. Adler and Nanda defined the average production rate, \bar{P}_i, as the reciprocal of the average time per unit for the ith lot of Q units. They developed a technique for estimating the drop in labor productivity, as a decrease in the average production rate, as a result of production breaks. Adler and Nanda concluded that, when learning is not considered (b = 0), n* = \sqrt{hQ(1 - D/\bar{P}_i)/(2S)}, and when maximum learning is considered (b = 1), n_0^* = \sqrt{hQ/(2S)}, resulting in less than the optimal number of lots, n* < n_0^*. In the case of equal production intervals, when b = 0, then n* = \sqrt{hQ(1 - D/\bar{P}_i)/(2S)}, whereas when b = 1, the cost function yielded an infeasible solution. Sule (1978) developed a method for determining the optimal order quantity for the finite production model, which incorporates both learning and forgetting effects. Sule based his decision on a forget relationship assumed by Carlson (1975). The forgetting rate was arbitrarily selected and the production quantity was assumed to be equal for all lots. Sule suggested a linear relationship between the production time and the quantity produced for known learning and forgetting rates. Axsäter and Elmaghraby (1981) argued that Sule’s model is based on some assumptions that were either not demonstrated by the author or were of questionable validity. They showed that Sule’s cost function is based on an approximation of the inventory accumulation during the production interval, in that he assumed a linear accumulation, while in reality it is a convex function of time. This approximation casts further doubt on the validity of Sule’s optimizing procedure.
Sule (1981) responded to the argument of Axsäter and Elmaghraby by re-emphasizing that, under steady-state conditions, the time required to produce Q units (i.e., t) is given by a linear function of Q. He agreed with Axsäter and Elmaghraby that the expression he used for inventory accumulation in his basic model is a linear approximation. However, he argued that the substitution of the exact equation (a convex function) does not add any significant accuracy to the determination of the economic manufactured quantity, and restated that the linear relationship t = α + βQ is a very good estimate of the production time: it is both easy to use and well within the acceptable norms of accuracy. Fisk and Ballou (1982) presented models for solving the manufacturing lot sizing problem under two different learning situations. The first situation assumes that learning conforms to the power function formulation introduced by Wright. The second assumes the bounded power function formulation proposed by de Jong (1957). Fisk and Ballou developed a dynamic programming approach to solve for the optimal production lot size. Regression in learning refers to the loss of labor efficiency that can occur during the elapsed time between successive runs of a product. When regression in learning is operative, the recursive relation does not apply, and a different dynamic recursion and cost relationship is required. Using their models, Fisk and Ballou found that, relative to the classical production order quantity models, significant cost savings occur whenever the production rate is high relative to the demand rate. They suggested that an application of their models to an infinite horizon problem would yield satisfactory results since, as the cumulative number of units produced becomes large, production lot sizes tend to stabilize.
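Sule's claim that production time is well approximated by a linear function t = α + βQ can be probed against Wright's cumulative-time expression from earlier in the chapter. The fitting range and parameter values below are assumptions for illustration.

```python
# A quick check of Sule's linear approximation t = alpha + beta*Q for the
# production time of a lot of Q units. The "exact" time here is Wright's
# cumulative-time expression T1*Q**(1-b)/(1-b); the working range and
# parameter values are assumptions for illustration.
t1, b = 10.0, 0.32

def prod_time(q):
    return t1 * q ** (1 - b) / (1 - b)

# Fit the line through the endpoints of an assumed working range of lot sizes.
q_lo, q_hi = 200.0, 1000.0
beta = (prod_time(q_hi) - prod_time(q_lo)) / (q_hi - q_lo)
alpha = prod_time(q_lo) - beta * q_lo

worst = max(abs(alpha + beta * q - prod_time(q)) / prod_time(q)
            for q in range(200, 1001))
print(round(worst * 100, 2))   # worst-case relative error over the range, percent
```

Over this range the chord stays within a few percent of the concave cumulative-time curve, which is broadly consistent with Sule's "acceptable norms of accuracy" argument, though the error would grow for wider ranges or faster learning.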
Fisk and Ballou (1982) assumed forgetting to be a constant percentage; that is, u_i = \sum_{j=1}^{i-1} f Q_j, where 0 ≤ f ≤ 1, i ≥ 2, and u_1 = 0. Smunt and Morton (1985), unlike Fisk and Ballou (1982), relaxed the equal lot size and fixed carrying cost assumptions. Smunt and Morton’s relaxation of the carrying cost assumption implies that holding costs can approach zero as the total number of units requested, Q, increases, resulting in lot sizes that increase without bound as n gets large. Smunt and Morton also showed that the effect of forgetting was significant in the determination of optimal lot sizes, thus indicating that operations managers should consider longer, not shorter, production runs. They suggested the use of robotics and flexible manufacturing systems to overcome the need for larger lot sizes, which also reduces the forgetting effects. Klastorin and Moinzadeh (1989) considered the production of a single product with static demand and no shortages. They treated the problems defined by Fisk and Ballou (1982) by allowing lot size quantities to be real rather than integer numbers. Klastorin and Moinzadeh showed that: (1) lot sizes monotonically decrease over time, and (2) lot sizes approach the traditional EOQ in the limit as the total number of units ordered (N) approaches infinity. Two algorithms are presented for finding the lot sizes under a transitory learning effect. The first algorithm finds a solution for the case of no learning regression between production cycles; that is, production efficiency is the same at the end of cycle j−1 and at the beginning of cycle j. The second algorithm finds a solution for the case of a constant amount of forgetting occurring between order periods.

Intermediate Research (1990–2000)

Elmaghraby (1990) reviewed and critiqued two previously proposed models, corrected some minor errors in them, and expanded one of them to accommodate a finite horizon.
These models are those of Spradlin and Pierce (1967) and Sule (1978), respectively. He also proposed a different forgetting model from that of Carlson and Rowe (1976), which he suggested is more consistent with the learning-forgetting relationship. He then applied this to the determination of the optimal number and size of the lots in the finite and infinite horizon cases. Elmaghraby (1990) used a forgetting function, the variable regression invariant forgetting (VRIF) model, \hat{T}_1 x^f, which assumes that the forgetting exponent (f_i = f) and the intercept (\hat{T}_{1i} = \hat{T}_1) do not vary between cycles (i = 1, 2, …). Jaber and Bonney (1996a) showed that using the LFCM, which assumes f_i and \hat{T}_{1i} vary from cycle to cycle, produces better results, as using the VRIF overestimates the total cost. The limitations of the VRIF are discussed in Jaber and Bonney (1997a) and Jaber et al. (2003). Salameh et al. (1993) incorporated Wright’s learning curve equation into the total inventory system cost, resulting in a mathematical model that required a numerical search technique to determine the economic manufactured quantity (EMQ). In their model, the cycles were treated independently from one another. This assumption simplified the mathematical formulation. They assumed a new learning curve in each production cycle, with T_1 reducing while b remained constant. The EMQ was shown to decrease in successive lots as labor productivity increases because of learning. Their unit time cost function was given as:

c_T = \frac{SD}{Q_i} + c_2 D + c_1 \frac{T_{1i} D Q_i^{-b}}{1-b} + h \frac{Q_i}{2} - h \frac{T_{1i} D Q_i^{1-b}}{(1-b)(2-b)},  (14.7)

where Q_i is the lot size for cycle i and T_{1i} = T_1 (\sum_{j=1}^{i-1} Q_j + 1)^{-b}, with T_{11} = T_1. Jaber and Bonney (1996a) extended the work of Salameh et al. (1993) by incorporating the learn-forget curve model (LFCM) into the total inventory system cost.
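A rough numerical sketch of the Salameh et al. procedure follows, using the cost function and intercept update given above: each cycle's EMQ is found by a one-dimensional search, with the intercept reduced as cumulative output grows. All parameter values are illustrative assumptions.

```python
# A numerical sketch of the Salameh et al. (1993) approach as reconstructed
# above: each cycle gets a new intercept T1i = T1*(prior output + 1)**(-b),
# and the EMQ is found by a grid search on the unit-time cost. All parameter
# values are illustrative assumptions (T1 is in years per unit here, so that
# T1*D < 1, i.e., production outpaces demand).
S, h, D, c2 = 100.0, 2.0, 1000.0, 15.0
c1, t1, b = 40000.0, 0.0005, 0.32     # labor cost per unit time, first-unit time

def unit_time_cost(q, t1i):
    return (S * D / q + c2 * D
            + c1 * t1i * D * q ** (-b) / (1 - b)
            + h * q / 2.0
            - h * t1i * D * q ** (1 - b) / ((1 - b) * (2 - b)))

qs = [q / 2.0 for q in range(100, 8001)]   # grid search 50.0 .. 4000.0
produced, t1i, emqs = 0.0, t1, []
for _ in range(3):
    q_star = min(qs, key=lambda q: unit_time_cost(q, t1i))
    emqs.append(q_star)
    produced += q_star
    t1i = t1 * (produced + 1.0) ** (-b)    # learning carried into the next cycle

print([round(q, 1) for q in emqs])         # the EMQ shrinks in successive cycles
```

As labor productivity improves, the learning-related terms shrink and the EMQ falls toward the classical square-root value, reproducing the paper's qualitative finding.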
The effect of both learning and forgetting on the optimum production quantity and minimum total inventory cost was demonstrated by a numerical example. They showed that forgetting had an adverse effect on the productivity of labor-intensive manufacturing and on the total labor cost as a result of the drop in labor productivity. The difference between the two models is in computing T_{1i}, where it was given as T_{1i} = T_1(u_i + 1)^{-b}, with 0 ≤ u_i ≤ \sum_{j=1}^{i-1} Q_j (the partial transfer of learning) representing the equivalent number of units of experience remembered from the first i−1 cycles at the beginning of cycle i, and with u_i = 0 and u_i = \sum_{j=1}^{i-1} Q_j representing no and full transfer of learning, respectively. The mathematics of the LFCM is provided in Jaber and Bonney (1996a). So far, shortages have not been discussed. Jaber and Salameh (1995) extended the work of Salameh et al. (1993) by assuming shortages to be allowed and backordered, with the assumption of full transmission of learning between successive lots. Jaber and Bonney (1997b) assumed a case where it is sometimes possible that the improvement in the production rate is slow due to the complexity of the task performed, and it is possible for the demand rate to exceed the production rate for a portion of the production cycle. This situation was referred to as intra-cycle backorders. It has been shown that the presence of intra-cycle backorders results in longer cycle runs, which mean additional labor and inventory costs. These costs tend to be more critical when there is a partial transmission of learning, since intra-cycle shortages can appear in consecutive production runs. This shows the need for the proper estimation of Wright’s (1936) power learning curve parameters. A common theoretical drawback of Wright’s model is that the results obtained are not meaningful as the cumulative production approaches infinity.
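The zero-asymptote drawback, and the plateau obtained by bounding the curve as de Jong did, can be illustrated directly. The bounded form T_Q = T_1(M + (1−M)Q^{-b}) used below is stated here as an assumption consistent with the chapter's description of de Jong's three-parameter correction; the parameter values are illustrative.

```python
# An illustration of the zero-asymptote issue: Wright's curve drives the unit
# time toward zero as Q grows, while a bounded form in the style of de Jong
# (1957), T_Q = T1*(M + (1 - M)*Q**(-b)), plateaus at M*T1. The functional
# form and parameter values here are assumptions for illustration.
t1, b = 10.0, 0.32

def wright(q):
    return t1 * q ** (-b)

def bounded(q, m):
    return t1 * (m + (1.0 - m) * q ** (-b))

for q in (1, 10 ** 3, 10 ** 6):
    print(q, round(wright(q), 3), round(bounded(q, 0.25), 3))
```

Both curves start at T_1 for the first unit, but only the bounded form levels off at a positive value; raising M weakens the learning effect, in line with the observation below that a larger incompressibility factor reduces the benefit of learning.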
A correction of the zero asymptote was made by adding a “factor of compressibility” as a third parameter to Wright’s model (see de Jong 1957). This forces the learning curve to plateau at a value above zero. Although plateauing is usually observed in learning data, there is no consensus among researchers as to what causes the learning curve to plateau. Further discussion on this point is found in Jaber and Guiffrida (2004, 2008). Fisk and Ballou (1982) studied the manufacturing lot size problem under both the unbounded (Wright 1936) and the bounded (plateau) power function of de Jong’s (1957) learning situations. Jaber and Bonney (1996b) presented an efficient approximation for the closed algebraic form of the total holding cost expression developed by Fisk and Ballou (1982). This approximation simplified the solution of the lot size problem with bounded learning, relying on one numerical search rather than two. The estimated total cost recorded a maximum deviation of 0.5% from the estimate of Fisk and Ballou, which is almost negligible. It was also shown that, as the factor of incompressibility increases, the effect of learning decreases. This could be due to the decrease in the proportion of labor time out of the total time required to produce each consecutive unit. Shiue (1991) developed a model for determining the batch production quantity where a modified learning curve is used to fit the manufacture of products in the growth phase of their life cycle. The total cost function that was developed was dependent on two decision variables—the storage capacity and the batch size quantity—and was based on the values of set-up cost, holding cost, shortage cost, and production cost. Zhou and Lau (1998) claimed that the approximation of Jaber and Bonney (1996b) contained an error, which they had corrected in a previous paper (in Chinese).
They presented a similar model to that of Jaber and Bonney (1996b), where shortages are allowed and backordered. The claim by Zhou and Lau (1998) resulted in a response by Jaber and Bonney (2001a) and cross responses (Jaber and Bonney 2001b; Zhou and Lau 2001). In response, Jaber and Bonney (2001a) compared the models of Fisk and Ballou (1982), Jaber and Bonney (1996b), and Zhou and Lau (1998). Their paper rebuts the claims of Zhou and Lau that the approximation of Jaber and Bonney contained mathematical errors that rendered the developed model invalid, and they demonstrated that the results produced by Jaber and Bonney (1996b) are satisfactory. The cost function of Jaber and Bonney (1996b) is given as:

C_T = \frac{SD}{Q_i} + c_2 D + c_1 \left( T_{11} M D + \frac{(1-M) D T_{1i} Q_i^{-b}}{1-b} \right) + h \left( \frac{Q_i}{2} - \frac{T_{11} M D Q_i}{2} - \frac{(1-M) D T_{1i} Q_i^{1-b}}{2(1-b)} \right) - h \frac{(1-M) b D}{2(2-b)} \left( T_{11} M Q_i + \frac{1-M}{1-b} T_{1i} Q_i^{1-b} \right),  (14.8)

where T_{11} = T_1, 0 < M < 1 is the factor of compressibility in de Jong’s learning curve, and T_{1i} is as defined earlier. Chiu (1997) investigated lot sizing with learning and forgetting under the assumption of discrete time-varying demand. For the purpose of his study, he chose three lot sizing models from the literature, which are the EOQ (Harris 1990), the least unit cost (Gorham 1968), and the maximum part-period gain (Karni 1981), and then used the extended dynamic optimal lot sizing model (Wagner and Whitin 1958) to generate optimal solutions. Chiu’s numerical results suggested using the maximum part-period gain heuristic due to its simplicity and good performance. Chiu (1997) assumed the forgetting rate to be either a given percentage or an exponential function of the break length. These assumptions were not justified in the paper. Chiu and Chen (1997) extended the work of Chiu (1997) to examine the effect of the time-value of money.
Their computational results indicated that the learning coefficient, the forgetting rate, and the real discount rate all have significant effects on the determination of lot sizes and relevant costs. Eroglu and Ozdemir (2005) argued that the Wagner-Whitin approach does not generate an optimal solution for this problem, and developed a new recurrence relationship for this purpose. Jaber and Bonney (1998) described three mathematical models that incorporate the effects of learning and forgetting (LFCM) in the calculation of the economic manufactured quantity (EMQ). The first model assumes an infinite planning horizon, while the other two assume a finite planning horizon; one investigates equal lot sizes while the other investigates unequal lot sizes. The cost function for the first model is presented as:

C_T = \frac{SD}{Q_i} + c_2 D + c_1 \frac{T_1 D}{Q_i} \left[ \frac{(Q_i + u_i)^{1-b} - u_i^{1-b}}{1-b} \right] + h \frac{T_1 D u_i^{1-b}}{1-b} + h \frac{Q_i}{2} - h \frac{T_1 D}{Q_i} \left[ \frac{(Q_i + u_i)^{2-b} - u_i^{2-b}}{(1-b)(2-b)} \right].  (14.9)

Their results showed that under the partial transmission of learning, the optimal policy was to carry less inventory in later lots, and to extend the cycles’ lengths within the planned production horizon, so that the total cost per unit time and the total cost are at a minimum.

Recent Research (2001–2010)

The interest of some researchers in the lot sizing problem with learning (and forgetting) effects has continued into the new millennium. The researchers have extended the range of problems considered using different assumptions to gain additional insights. Jaber and Bonney (2001c) examined whether, when learning is considered, it is reasonable to ignore the effect of the continuous time discounting of costs, by investigating the effect of learning and time discounting on both the economic manufacturing quantity and the minimum total inventory cost.
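The role of the remembered experience u_i in the Jaber and Bonney (1998) infinite-horizon cost function above can be sketched numerically. The sketch uses the cost expression as reconstructed above, and all parameter values are illustrative assumptions.

```python
# A numerical look at the infinite-horizon cost function of Jaber and Bonney
# (1998) as reconstructed above, where u_i is the equivalent experience (in
# units) remembered from earlier cycles. All parameter values are illustrative
# assumptions (T1 in years per unit so that T1*D < 1).
S, h, D, c2 = 100.0, 2.0, 1000.0, 15.0
c1, t1, b = 40000.0, 0.0005, 0.32

def cost(q, u):
    labor = c1 * (t1 * D / q) * ((q + u) ** (1 - b) - u ** (1 - b)) / (1 - b)
    hold = (h * t1 * D * u ** (1 - b) / (1 - b) + h * q / 2.0
            - h * (t1 * D / q) * ((q + u) ** (2 - b) - u ** (2 - b))
            / ((1 - b) * (2 - b)))
    return S * D / q + c2 * D + labor + hold

qs = [q / 2.0 for q in range(100, 8001)]          # grid search 50.0 .. 4000.0
q_no_transfer = min(qs, key=lambda q: cost(q, u=0.0))
q_with_transfer = min(qs, key=lambda q: cost(q, u=2000.0))
print(q_no_transfer, q_with_transfer)
```

With more remembered experience, the marginal labor time is flatter, and the optimal lot shrinks toward the classical square-root value — consistent with the finding that learning lets the firm carry less inventory in later lots.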
Their results indicated that, although discounting and learning affect the optimal batch size—suggesting that one should make items in smaller batches more frequently—changing the batch size does not greatly affect the total discounted cost. They concluded that from a decision-making point of view there is considerable flexibility in the choice of lot size. Jaber and Abboud (2001) considered a situation where, on completion of the production run, the facility may not be available for a random amount of time for any of several reasons, or where the facility is leased by different manufacturers and the demand for the facility is random. As a result of machine unavailability, stock-out situations might arise. They extended the work of Abboud et al. (2000), who excluded the production cost component from the cost function, by assuming that the production capacity is continuously improving over time because of learning. In so doing, theirs was the first work to add a stochastic component to the lot sizing problem with learning and forgetting. Abboud et al. (2000) assumed machine unavailability time to be a uniformly distributed random variable. The results of Jaber and Abboud (2001) indicated that, as the production rate improves, the production policy was to produce smaller lots more frequently, resulting in shorter cycles. This enhanced service levels and machine availability. Forgetting had an adverse effect on service levels and machine availability, since with forgetting the production-inventory policy was to produce larger lots less frequently. Job complexity, as a measure of forgetting level, had an effect on machine availability. Results showed that machine availability increased as job complexity decreased. Ben-Daya and Hariga (2003) developed a continuous review inventory model where lead time is considered a controllable variable.
They assumed that the lead time is decomposed into all its components: set-up time, processing time, and non-productive time. These components reflect set-up cost reduction, the interaction between lot size and lead time, and lead time crashing, respectively. The learning effect in the production process was also included in the processing time component of the lead time. They argued that the finite investment approach for lead time and set-up cost reduction and their joint optimization, in addition to the lot size and lead time interaction, introduce a realistic direction in lead time management and control. They found that the learning effect, reflected by the parameter b, had no significant effect on the expected total cost. This awkward result had to do with their choice of input parameters, where the labor cost is overshadowed by the other costs, making learning insignificant. Balkhi (2003) studied the case of the full transmission of learning for the production lot size problem with an infinite planning horizon under the following assumptions: (1) items deteriorate while they are produced or stored; (2) both demand and deterioration rates are (known) functions of time; (3) shortages are allowed, but are partially backordered; and (4) the production rate is defined as the number of units produced per unit time. Although the paper presented interesting extensions, it lacked solid conclusions and meaningful managerial insights. Alamri and Balkhi (2007) studied the effects of learning and forgetting on the production lot size problem for the case of an infinite planning horizon and deteriorating items. They presented a generalization of the LFCM (GLFCM) that allows variable total forgetting breaks. Their forgetting slope was given as f_i = b \log(u_i + Q_i)/\log(1 + 1/T_{1i}), which is different from that of the LFCM (Jaber and Bonney 1996a).
Their forgetting model was found to be in conformance with six of the seven characteristics of forgetting that should be considered when developing forgetting curves (Jaber et al. 2003). Their numerical results suggested an optimal policy of producing small lots. This suggestion was found to have two properties: (1) it decreased the cycle length; and (2) it increased the experience gained, causing a further decrease in the time required to produce the unit for each consecutive cycle. Although the LFCM was tested against empirical data (Jaber and Sikström 2004b), the GLFCM was not. All the works that studied the effect of learning on the lot size problem commonly assumed an invariant learning slope throughout the production-planning horizon. Jaber and Bonney (2007) argued that when learning rates are dependent on the number of units produced in a production cycle, then the assumption of invariant learning rates might produce erroneous lot size policies. The paper investigated the effect of lot size-dependent learning and forgetting rates on the lot size problem by incorporating the dual-phase learning-forgetting model (DPLFM; Jaber and Kher 2002). Jaber and Kher (2002) developed the DPLFM by combining the dual-phase learning model (DPLM; Dar-El et al. 1995) with the learn-forget curve model (LFCM; Jaber and Bonney 1996a). The dual-phase learning model (DPLM) proposed by Dar-El et al. (1995) is a modification of Wright’s learning curve, which aggregates two curves, one cognitive and one motor. Jaber and Bonney (2007) did so by extending the work of Salameh et al. (1993), which used invariant learning and forgetting rates. Their results indicated that ignoring the cognitive and motor structure of a task can result in lot size policies with a high percentage of errors in costs.
This finding suggests that earlier work investigating the lot size problem in conjunction with learning and forgetting in production may be unreliable, and therefore should be revisited and possibly revised. All the models that investigated the effect of learning in production on the lot sizing problem have limitations. Jaber and Guiffrida (2007) addressed two of these limitations. The first limitation is that the models found in the literature do not address the problem of the learning exponent b approaching or exceeding the value 1: these models are mathematically invalid for the special cases of b = 1 and b = 2, and were not investigated for the cases 1 < b < 2. The second limitation is that the models found in the literature assume that the holding cost per unit is fixed even though the unit production cost is decreasing because of learning. Jaber and Guiffrida (2007) addressed the first limitation by reworking the mathematics of the economic order quantity model with learning and forgetting for a learning exponent approaching and exceeding the value of b = 1. The numerical results suggested that Wright’s learning curve may not be the appropriate curve to capture learning in processes characterized by excessively small initial processing times and/or very large learning exponent values, since the production time becomes insignificant. This implies adding a flattening (plateauing) factor to the learning curve in order to attain a minimum value of production time. They addressed the second limitation by allowing for a holding cost function that decreases as a result of learning effects. The numerical results also indicated that assuming a fixed holding cost underestimates the lot size quantity and slightly overestimates the total cost. Teyarachakul et al. (2008) analyzed the steady-state characteristics of batch production time for a constant-demand lot sizing problem with learning and forgetting in production time.
They report a new type of convergence, the alternating convergence, in which the batch production time alternates between two different values. This differs from the literature reporting that batch production time converges to a unique value; for example, Sule (1978), Axsäter and Elmaghraby (1981), and Elmaghraby (1990) used forgetting models in which the amount of forgetting is unbounded. Teyarachakul et al. (2008), in contrast, assumed forgetting to follow the model of Globerson and Levin (1987), which does not allow the amount of forgetting to exceed the amount of learning. They also developed several mathematical properties of the model to validate convergence. In a follow-up paper, Teyarachakul et al. (2011) allowed delayed forgetting, where forgetting starts slowly and then becomes faster with time. Their computational results show that it may be better to produce in smaller batches in the presence of learning and forgetting. Chen et al. (2008) investigated how the effect of learning (with no forgetting) on the unit production time affected the lot sizing problem when the production system is imperfect in the presence of shortages. They assumed that the process may go out of control and start producing defective items (Rosenblatt and Lee 1986), which are reworked at a cost. They also considered that the process generates defective items when in control, but at a lower rate than when out of control. The most recent paper along this line of research is that of Jaber et al. (2009), who introduced the concept of entropy cost to estimate the hidden costs of inventory systems, which are discussed in Jaber (2009). Jaber et al. (2004) used the laws of thermodynamics to model commodity (heat) flow (or demand) from the production (thermodynamic) system to the market (surrounding), where the price is analogous to temperature.
The demand rate is of the form D(t) = D = −K(P − P0), ∀ t > 0, an assumption that is consistent with the constant demand of the EMQ/EOQ. K (which is analogous to a thermal capacity) represents the change in the flux for a change in the price of a commodity and is measured in additional units of demand per year per change in unit price (e.g., units/year/$). P(t) = P is the unit price at time t, and P0(t) = P0 is the market equilibrium price at time t, where P(t) < P0(t). They suggested adding a third component, representing the entropy cost, to the order quantity cost function. They noted that when P < P0, the direction of the commodity flow is from the system to the surroundings, and the entropy generation rate must satisfy S(t) = K(P/P0 + P0/P − 2). The entropy cost per cycle is computed as E(T) = ∫₀ᵀ D(t)dt / ∫₀ᵀ S(t)dt = −P0P/(P − P0), where T = Q/D is the cycle time. Jaber et al. (2009) added the term E(T)/T to Equation 14.9. They used E(T)/Q as a cost measure of controlling the flow of one unit of commodity from the system to the market. Results indicate that not accounting for entropy cost may result in more expensive commodity control policies; in particular, for inventory policies that promote producing smaller batches of materials or products more frequently. As production becomes faster, the control cost increases. Forgetting was found to reduce the commodity flow cost (entropy) as it recommends producing larger batches. The results from this paper suggest that a firm that is unable to estimate its cost parameters properly may find ordering in larger lots an appropriate policy to counter entropy effects.
LOT SIZING WITH LEARNING IN SET-UPS
The Japanese approach to productivity demands producing in small lots. This can only be achieved if the set-up time is reduced. Instead of accepting set-up times as fixed, they attempted to reduce the set-up time, thereby allowing lot sizes to be reduced.
Their success in this area has motivated many researchers to think about the effect of decreasing set-ups. A simple EOQ cost function with learning in set-ups would be given as CT,i = S(i)D/Qi + hQi/2, where S(i) = S·i^(−a) is the set-up learning curve, with a being the learning exponent and i the set-up number. The optimal lot size quantity is determined from Equation 14.10 as Qi = √(2S(i)D/h), where Q1 ≥ Q2 ≥ Q3 ≥ … ≥ Qn for i ∈ [1, n] and Qn+1 = Qn+2 = … = Q∞ when S(i) = Smin. The following is a survey of the work that investigated the effects of learning (and forgetting) in set-ups on the lot sizing problem. Porteus (1985) developed an extension of the EOQ model in which the set-up cost is viewed as a decision variable, rather than as a parameter. He emphasized that lowered set-up costs can occur not only as a result of engineering effort. Porteus considered other benefits of lowered set-up costs (and times) and associated reduced lot sizes with improvements in quality control, flexibility, and effective capacity. Karwan et al. (1988) proposed a model for joint worker/set-up learning. The computational results suggest that optimal lot sizing policies continue to exhibit decreasing lot sizes over time under the total transmission of learning. Also, as the rate of set-up learning increases, optimal lot sizing policies tend to exhibit an increasing number of lots. Chand (1989) studied the effect of learning in set-ups and process quality on the optimal lot sizes and the set-up frequency. Chand's work differs from that of Karwan et al.
(1988) in the following respects: (1) it permits learning in process quality in addition to the learning in set-ups; (2) it allows any form of the learning function; (3) it provides an efficient algorithm for finding the optimal lot sizes, as opposed to the computationally demanding dynamic programming algorithm; and (4) it also provides a mathematical analysis and computational results to study the effect of the rate of learning on lot sizes and the set-up frequency. Chand's mathematical and computational results showed that the presence of learning in set-ups, and the effect of changes to the fraction of defectives (when the fraction of defectives for a production lot is proportional to the lot size), increased the optimal set-up frequency. He also showed that this effect can be quite significant for a company with a high production volume, a high cost per defective unit, and a large rate of learning. Chand found that learning in process quality has no effect on the optimal set-up frequency since it does not seem to have a definite pattern. The results of Chand's work support the arguments given by various authors in favor of stockless production, zero inventories, or the JIT approach. Replogle (1988) presented a revised EOQ model that recognizes the effect of learning on set-up costs, and permits the calculation of lot sizes that minimize the total inventory cost over any period. Reduced lot sizes mean more frequent set-ups, thereby moving more rapidly down the set-up learning curve and improving the competitive position of the firm. Cheng (1991) argued that the reduction in lot size and savings in total inventory cost based on Replogle's model seem to be overestimated due to the way in which Replogle defines the learning curve, which differs from the traditional definition. That is, he defined the total set-up cost as S·n^(1−a) rather than S·n^(1−a)/(1−a).
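Cheng's objection can be sketched numerically: under a power-form set-up learning curve with exponent a, the traditional (continuous) approximation of the total cost of n set-ups is S·n^(1−a)/(1−a), whereas Replogle's definition gives S·n^(1−a). The parameter values below are illustrative assumptions, not data from either paper.

```python
# Illustrative sketch of Cheng's (1991) objection; values are assumptions.
S = 500.0   # cost of the first set-up
a = 0.32    # set-up learning exponent
n = 25      # number of set-ups

# Traditional learning-curve total: continuous approximation of
# sum_{i=1..n} S * i**(-a)  ~=  S * n**(1 - a) / (1 - a).
traditional_total = S * n ** (1 - a) / (1 - a)

# Replogle's definition omits the 1/(1 - a) factor.
replogle_total = S * n ** (1 - a)

# Replogle's total is smaller by exactly the factor (1 - a), so inventory
# savings computed from it look larger than the traditional curve implies.
understatement = replogle_total / traditional_total
```

By construction the ratio equals (1 − a), which is where the overestimated savings Cheng points to come from.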
Cheng (1994) considered learning in batch production and set-ups in determining the EMQ. The results of the numerical examples solved by Cheng strongly indicate that the assumption of equal manufacturing lot sizes not only simplifies the process of determining the optimal solutions, but also provides close approximations to the optimal solutions. Cheng (1991) did not refer either to Karwan et al. (1988) or to Chand (1989). Chand and Sethi (1990) considered the dynamic lot sizing problem of Wagner-Whitin with the difference that the set-up cost in a period depends on the total number of set-ups required thus far and not on the period of the set-up. The total cost of n set-ups is a concave non-decreasing function of n, which could arise from worker learning in set-ups and/or technological improvements in set-up methods. They showed that the minimum holding cost for a given interval declines at a decreasing rate for an increasing number of set-ups. Pratsini et al. (1994) investigated the effects of set-up time and cost reduction through learning on optimal schedules in the capacitated lot sizing problem with time-varying demand. They illustrated how set-up cost and time reduction, through learning curve effects, can change the optimal production plans in the capacity-constrained setting. The reduction of set-up time, and proportionally the set-up cost, can cause an increase in the prescribed number of set-ups as they become more cost effective, resulting in less inventory. It was also found that the learning effect can be dominated by the capacity effect. Rachamadugu (1994) considered the problem of determining the optimal lot sizes when set-up costs decrease over time because of learning, and the running or processing costs remain constant.
To avoid forecasting the number of set-ups in the remaining planning horizon, they adopted a myopic policy (part period balancing) that sets the holding cost for the lot size in a cycle equal to the set-up cost in the same cycle. According to Rachamadugu (1994), this policy has the following advantages: (i) it is intuitively appealing to practitioners, (ii) it is easy to compute, and (iii) it does not require information on the future set-up costs. His computational results suggested that the performance of the myopic policy is influenced by the learning rate when the ratio of maximum to minimum set-up values is low, and showed that the myopic policy yielded good results even when the product has a short life cycle. In another paper, Rachamadugu and Schriber (1995) addressed the problem of determining optimal lot sizes when reductions in set-up costs persist due to the emphasis on continuous improvement, worker learning, and incremental process improvements. They suggested two heuristic procedures: (i) current set-up cost lot sizing policy (CURS), and (ii) minimum set-up cost lot sizing policy (MINS) (CURS and MINS policies), which can be used when information about set-up cost reduction trends is not available. Their results showed that CURS is better suited for situations in which improvements in set-up costs occur at a slow pace. For other situations, the MINS policy was found to be more appropriate. Their computational results have shown that such a policy can result in lot sizing costs that exceed the optimum by a considerable percentage. This implies that the average cost analysis could be inadequate for non-stationary cost parameter situations, such as when a production process is used for long periods of time while the reductions in set-up costs continue to occur over time due to an emphasis on kaizen (continuous improvement) and worker learning. 
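Rachamadugu's (1994) myopic balance can be sketched in the constant-demand case: with demand rate D and holding cost h, a cycle of length T carries holding cost h·D·T²/2, and equating this to the current cycle's set-up cost gives the cycle length without any information about future set-up costs. The constant-demand reading and all parameter values below are illustrative assumptions, not Rachamadugu's data.

```python
import math

# Sketch of the myopic (part period balancing) policy: each cycle's holding
# cost h*D*T**2/2 is set equal to that cycle's set-up cost S_i, using only
# the current (declining) set-up cost. All numbers are illustrative.
D, h = 1000.0, 4.0                                  # demand/yr, holding $/unit/yr
setup_costs = [200.0, 160.0, 130.0, 110.0, 100.0]   # falling with learning

schedule = []
for S_i in setup_costs:
    T_i = math.sqrt(2 * S_i / (h * D))  # cycle length balancing the two costs
    Q_i = D * T_i                       # lot size for that cycle
    schedule.append((T_i, Q_i))
```

Note that Q_i here coincides with the EOQ evaluated at the current set-up cost, which is why the policy needs no forecast of future set-ups, and why lot sizes shrink as set-up costs decline.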
Along the same line of research, Rachamadugu and Tan (1997) addressed the issue of determining lot sizes in finite horizon environments when learning effects and/or emphasis on continuous improvement result in decreasing set-up costs. They analyzed and evaluated a myopic lot sizing policy for finite horizons that does not require any information about future set-up costs. Their analytical results and computational experiments show that the policy is a good choice for machine-intensive environments. They further showed that the myopic policy yielded good results even when set-up cost changes cannot be completely modeled by the stylized learning curves used in earlier research studies.
Lot Sizing with Learning in Production
Li and Cheng (1994) is perhaps the first work to investigate the lot sizing problem for learning and forgetting in set-ups and production. They developed an EMQ model that accounts for the effects of learning and forgetting on both set-up and unit variable manufacturing time. Their EMQ model is basically the EOQ model, modified for the gradual delivery or availability of a product, and is justified for a completely integrated flow line and perhaps for a very fast-response JIT system. Li and Cheng (1994) modeled the learning effect in terms of the reduction in direct labor hours as production increases. They found this approach to result in a simpler formulation of the total cost, solvable by dynamic programming. Their computational results indicated that assuming equal lot sizes simplifies the process of determining solutions and provides close approximations to the optimal solutions. Chiu et al.
(2003) extended the work of Chiu (1997) by: (1) considering learning and forgetting in set-ups and production; (2) assuming that the forgetting rate in production is a function of the break length and the level of experience gained before the break; and (3) assuming that each production batch is completed as close as possible to the time of delivery in order to reduce the inventory carrying cost. They found that the effect of set-up forgetting increases with the set-up learning effect and with the horizon length, and also that production learning has the greatest effect on the total cost among all the effects of learning and forgetting in set-ups and production. Near-optimal solutions were found when the planning horizon and/or the total demand is moderately large. Chiu and Chen (2005) studied the problem of incorporating both learning and forgetting in set-ups and production into the dynamic lot sizing model to obtain an optimal production policy, including the optimal number of production runs and the optimal production quantities during the finite period planning horizon. They considered the rates of: (i) learning in set-ups, (ii) learning in production, (iii) forgetting in set-ups, and (iv) forgetting in production. The results indicated that the average optimal total cost increased with an increase in any of the exponents associated with the four rates. Their results also showed that the optimal number of production runs and the optimal total cost were insensitive to the demand pattern.
Lot Sizing with Improvement in Quality
Urban (1998) investigated a production lot size model that explicitly incorporates the effect of learning on the relationship, positive or negative, between the run length and the defect rate.
Urban intentionally kept the model simple in order to isolate the learning effect of this relationship on the optimal lot size and the resulting production costs, as well as, perhaps most importantly, to identify important implications of this relationship. He found that reductions in inventory levels can be achieved without corresponding reductions in set-up costs, as long as there is a significant inverse relationship between the run length and product quality. Jaber and Bonney (2003) investigated the effects that learning and forgetting in set-ups and product quality have on the economic lot sizing problem. Two quality-related hypotheses were empirically investigated using the data from Badiru (1995): (1) the time to rework a defective item reduces as production increases, conforming to a learning relationship; and (2) quality deteriorates as forgetting increases due to interruptions in the production process. Unlike the work of Chand (1989), Jaber and Bonney (2003) assumed a cost for reworking defective items, referred to as the "product quality cost", which is an aggregate of two components. The first represents a fixed cost, e.g., the material cost of repairing a defective item, whereas the second component represents the labor cost of reworking that defective item, taking account of learning and forgetting. Their results indicated that with learning and forgetting in set-ups and process quality, the optimal value of the number of lots is pulled in opposite directions. That is, learning in set-ups encourages smaller lots to be produced more frequently. Conversely, learning in product quality encourages larger lots to be produced less frequently. However, the total cost was shown not to be very sensitive to increasing values of the learning exponent, which means that it is possible to produce in smaller lots relative to the optimum value without incurring much additional cost.
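A two-component "product quality cost" of the kind Jaber and Bonney (2003) describe can be sketched as a fixed material cost plus a labor cost whose rework time falls with experience. The power-form labor term, the omission of forgetting, and all parameter values are illustrative assumptions, not their exact formulation.

```python
# Hypothetical sketch of a two-component rework cost: a fixed material cost
# plus a labor cost whose rework time declines with cumulative reworks along
# a Wright-type curve. Form and values are assumptions for illustration.
material_cost = 3.0    # fixed cost per defective item, $
wage = 24.0            # labor rate, $/hour
T1 = 0.5               # hours to rework the first defective item
b = 0.3                # rework learning exponent

def rework_cost(i):
    """Cost of reworking the i-th defective item: fixed part plus a labor
    part that declines with experience. (Forgetting, which would raise the
    rework time again after an interruption, is omitted for brevity.)"""
    return material_cost + wage * T1 * i ** (-b)

costs = [rework_cost(i) for i in (1, 10, 100)]
```

The labor component shrinks with experience while the material component does not, which is why learning in product quality favors longer runs in their results.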
Jaber (2006b) investigated the lot sizing problem for reduction in set-ups, with reworks and interruptions to restore process quality. In a JIT environment, workers are authorized to stop production if a quality or a production problem arises; for example, the production process going out of control. He assumed that the rate of generating defects benefits from the changes made to eliminate the defects, and thus reduces with each quality restoration action. Jaber (2006b) revised the work of Chand (1989) by assuming a realistic learning curve. The results indicate that learning in set-ups and improvement in quality reduce the costs significantly. His results also showed that accounting for the cost of reworking defective units when calculating the unit holding cost may not be unrealistic, given that some researchers suggest using a higher holding cost than the cost of money.
Lot Sizing with Controllable Lead Time
Pan and Lo (2008) investigated the impact of the learning curve effect on set-up costs for the continuous review inventory model with controllable lead time and a mixture of backorders and partial lost sales. They assumed that the inventory lead time is decomposed into multiple components, each having a different crashing cost for the shortened lead time. They compared their model to that of Moon and Choi (1998), which assumes no learning in set-ups. They found that the expected total inventory cost tends to increase as the learning rate increases, while all the other parameters remain unchanged, and to decrease as the backorder ratio increases, while all the other parameters stay fixed. Although this finding is interesting, the authors did not provide any discussion as to why their model behaved in this manner. Their results also showed that the lead time became shorter as the learning rate became faster for a given backorder ratio.
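The decomposition of lead time into crashable components, as in the Pan and Lo (2008) and Moon and Choi (1998) setting, yields a piecewise-linear crashing cost when components are shortened cheapest-first. A minimal sketch follows; the component durations and cost rates are illustrative assumptions, not data from either paper.

```python
# Each lead-time component can be shortened from its normal duration to a
# minimum duration at a linear cost per day; crashing the cheapest components
# first yields the classic piecewise-linear crashing-cost curve.
components = [   # (normal days, minimum days, crashing cost $/day) - assumed
    (20.0, 10.0, 0.4),
    (15.0, 5.0, 1.2),
    (10.0, 6.0, 5.0),
]

def crashing_cost(L):
    """Cost of reducing total lead time from its normal length to L days,
    crashing the cheapest components first."""
    normal = sum(b for b, _, _ in components)
    minimum = sum(a for _, a, _ in components)
    if not (minimum <= L <= normal):
        raise ValueError("target lead time outside the feasible range")
    reduction, cost = normal - L, 0.0
    for b_j, a_j, c_j in sorted(components, key=lambda comp: comp[2]):
        cut = min(reduction, b_j - a_j)   # shorten this component as far as
        cost += cut * c_j                 # needed (or as far as it allows)
        reduction -= cut
        if reduction <= 0:
            break
    return cost
```

The marginal cost of shortening rises as cheaper components are exhausted, which is what makes the optimal lead time an interior trade-off against holding and shortage costs.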
LEARNING CURVES IN SUPPLY CHAINS AND REVERSE LOGISTICS
Supply chain management emerged in the late 1990s and the beginning of this millennium as a source of sustainable competitive advantage for companies (Dell and Fredman 1999). Supply chain management involves functions such as production, purchasing, materials management, warehousing and inventory control, distribution, shipping, and transport logistics. Reverse logistics encompasses the same functions, with products flowing in the opposite direction (from downstream to upstream). To maintain sustainable competitiveness in these functions, the managers of these operations could benefit from introducing continuous improvement to foster organizational learning. Historically, learning curve theory has been applied to a diverse set of management decision areas, such as inventory control, production planning, and quality improvement (e.g., Jaber 2006a). Each of these areas exists within individual organizations of the supply chain and, because of the interdependencies among chain members, across the supply chain as a whole. By modeling these learning effects, management may then use established learning models to utilize capacity better, manage inventories, and coordinate production and distribution throughout the chain. Despite its importance, there are only a few quantitative studies that investigate learning in supply chain and reverse logistics contexts. These are surveyed below.
Supply Chain Management
Coordination among players in a supply chain is the key to a successful partnership. Some researchers showed that coordination could be achieved by integrating lot sizing models (e.g., Goyal and Gupta 1989), with the idea of joint optimization for buyer and vendor believed to have been introduced by Goyal (1977).
Jaber and Zolfaghari (2008) reviewed the literature on quantitative models for centralized supply chain coordination that emphasize inventory management. Nanda and Nam (1992) were the first to develop a joint manufacturer-retailer (vendor-buyer) inventory model (referred to as a two-level supply chain) for the case of a single buyer. Production costs were assumed to reduce according to a power-form learning curve (Wright 1936), with forgetting effects caused by breaks in production. A quantity discount schedule was proposed based on the change in the total variable costs of the buyer and manufacturer. To meet the demand of the buyer, the manufacturer considers either a lot-for-lot (LFL) production policy, or a production quantity that is a multiple of the buyer's order quantity (e.g., Jaber and Zolfaghari 2008). Nanda and Nam (1992) assumed an LFL policy, learning in production, and no defectives, and, like Fisk and Ballou (1982), assumed forgetting to be a constant percentage. They found that the joint total cost is decreased significantly when learning is fast. They also found that the joint total cost savings realized by the manufacturer are in the production and joint lot size inventory holding components when significant learning and learning retention are expected. They extended their work in a subsequent paper (Nanda and Nam 1993) to include multiple retailers. Kim et al. (2008) is the first study in the literature that examined the benefits of buyer-vendor partnerships over lot-for-lot (i.e., single set-up single delivery [SSSD]) systems; it suggests two policies that the supplier can pursue in order to meet customers' needs: (1) single set-up multiple delivery (SSMD), and (2) multiple set-up multiple delivery (MSMD). They found that if the buyer's fixed set-up cost is relatively high, the vendor would prefer to implement SSMD and produce an entire order with one set-up.
However, if the vendor can reduce the set-up cost (because of learning) and the vendor's capacity is greater than the threshold level (production rate equals twice the demand rate), it was found to be more beneficial for the vendor to implement the multiple set-ups and multiple deliveries (MSMD) policy, even though he/she pays for more frequent set-ups, because the savings in inventory holding costs are greater than the increased set-up costs. Jaber et al. (2008) extended the work of Salameh and Jaber (2000) by assuming that the percentage defective per lot reduces according to a learning curve, which was empirically validated by data from the automotive industry. Salameh and Jaber (2000) developed an inventory situation where items received are not of perfect quality (defective), and after 100% screening, imperfect quality items are withdrawn from the inventory and sold at a discounted price. Their data showed that the percentage defective per lot reduced for each subsequent shipment following an S-shaped learning curve similar to the one described in Jordan (1958). The developed learning curve was incorporated into the model of Salameh and Jaber (2000). Two models were developed. The first model of Jaber et al. (2008), like Salameh and Jaber (2000), assumed an infinite planning horizon, while the second model assumed a finite planning horizon. The results of the first model showed that the number of defective units, the shipment size, and the cost reduce as learning increases, following a form similar to the logistic curve. For the case of a finite horizon, the results show that as learning becomes faster it is recommended to order in larger lots less frequently. Although the model was discussed in a vendor-buyer context, Jaber et al. (2008) did not investigate a joint-order policy. This remains an immediate extension of their work. Jaber et al.
(2010) investigated a three-level supply chain (supplier-manufacturer-retailer) where the manufacturer undergoes a continuous improvement process. The continuous improvement process is characterized by reducing set-up times, increasing the production capacity, and eliminating rework (Jaber and Bonney 2003). The cases of coordination and no coordination were investigated. Traditionally, with coordination, the manufacturer entices the retailer to order in larger lots than its economic order quantity. In their recent paper, the opposite was shown to be true, as the manufacturer entices the retailer to order in smaller quantities than the retailer's economic order quantity. As improvement becomes faster, the retailer is recommended to order in progressively smaller quantities as the manufacturer offers larger discounts and profits. The results also showed that coordination allows the manufacturer to maximize the benefits from implementing continuous improvements. It was also shown that forgetting increases the supply chain cost.
Reverse Logistics
As shorter product life cycles became the norm for manufacturers to sustain their competitive advantage in a dynamic and global marketplace, the forward flow of products quickened, resulting in faster rates of product waste generation and the depletion of natural resources. Consequently, manufacturing and production processes have been viewed as culprits in harming the environment (e.g., Beamon 1999; Bonney 2009). This concern gave rise to the concept of reverse logistics (RL), or the backward flow of products from customers to manufacturers to suppliers (e.g., Gungor and Gupta 1999) for recovery. Recovery may take any of the following forms: repair, refurbishing, remanufacturing, and recycling (e.g., King et al. 2006). Schrady (1967) is believed to be the first to investigate the EOQ model in production/procurement and recovery contexts.
This line of research was revived in the mid-1990s by the work of Richter (1996a, b). Although the works of Schrady and Richter have been extended to different inventory situations, only two studies have considered the effects of learning. Maity et al. (2009) developed an integrated production-recycling system over a finite time horizon, where demand is satisfied by production and recycling. Used units are collected continuously from customers, either to be recycled (repaired to an as-good-as-new state) or disposed of (if not repairable). They assumed that the set-up cost reduces over time following a learning curve, and also that the rates of production and disposal are functions of time. Regrettably, no significant results regarding the effect of learning in set-ups on the production-recycling system were presented or discussed. The paper by Jaber and El Saadany (2011) extended the production, remanufacture, and waste disposal model of Dobos and Richter (2004) by assuming learning to occur in both production and remanufacturing processes. They assumed that improvements due to learning require capital investment. Their results showed that there exists a threshold learning rate beyond which investing in learning may bring savings. That is, unless the learning process proceeds beyond the threshold value, investment in learning may not be worthwhile. It was also shown that faster learning in production lowers the collection rate of used items; however, should there be governmental legislation requiring firms to increase their collection rates, accelerating the learning process may not be desirable. The results also showed that it may be better to invest in speeding up the learning process in remanufacturing than in production, as this reduces the costs of remanufacturing, making it attractive to recover used items. It was generally found that learning reduces the lot size quantity and, subsequently, the time interval over which new items are produced and used ones are remanufactured.
Their results suggested that, with learning, a firm can have some degree of flexibility in setting some of its cost parameters that are otherwise difficult to estimate.
SUMMARY AND CONCLUSIONS
This chapter is an updated version of Jaber and Bonney (1999); it includes references that Jaber and Bonney (1999) were unaware of at that time and those that have appeared since 1999 up to 2011. The Jaber and Bonney (1999) paper surveyed work that deals with the effect of learning (and learning and forgetting) on the lot size problem. In addition, this chapter explores the possibility of incorporating some of the ideas adopted by JIT into such models with the intention of narrowing the gap between the "inventory is waste" and the "just-in-case" philosophies. A common feature among all of the above-surveyed work is the simplicity in modeling the cost function; that is, the papers assume a single-stage production with the set-up and holding costs of a finished product. In reality, production is usually performed in multiple stages, so besides the holding cost of a finished product, modeling should account for the individual costs at each stage, such as set-up, learning in production, holding of work-in-process, reworks, scrap, and so on. One way of addressing this research limitation is to extend the work of Jaber and Khan (2010) to account for the inventory costs of work-in-process and finished items. Besides inventory management, learning curve theory has been applied to a diverse set of management decision areas such as production planning and quality improvement. Each of these areas exists within the individual organizations of the supply chain, but because of the interdependencies among chain members, they also exist across the supply chain as a whole.
By modeling these learning effects, management may then use established learning models to utilize capacity better, manage inventories, and coordinate production and distribution throughout the chain. However, there have been few quantitative studies that investigate learning in supply chain and reverse logistics contexts (Jaber et al. 2010; Jaber and El Saadany 2011). An interesting question that remains to be looked at is: what impact does the quality-learning relationship have on coordination and profitability in supply chains? Another interesting area of study is to quantify the cost of disorder (entropy) in a production-inventory system by using the second law of thermodynamics (Jaber et al. 2004). This line of research needs further study. For example, is the analogy between business systems and physical systems sufficiently close for valid conclusions to be drawn from the analysis, and are there places where the analogy does not hold? Can learning reduce the entropy of business systems? These research questions will be addressed in future works. Finally, most of the above models remain nice mathematical exercises, generally performed in the absence of actual learning and forgetting data. Many researchers have attempted to acquire such data with little success (e.g., Elmaghraby 1990). To develop models that represent reality faithfully, it is necessary that firms be more open to providing access to their data. Otherwise, as Elmaghraby (1990) put it, many of the studies related to learning and forgetting will continue to be mainly armchair philosophizing.
REFERENCES
Abboud, N.E., Jaber, M.Y., and Noueihed, N.A., 2000. Economic lot sizing with the consideration of random machine unavailability time. Computers and Operations Research 27(4): 335–351.
Adler, G.L., and Nanda, R., 1974a. The effects of learning on optimal lot determination – Single product case. AIIE Transactions 6(1): 14–20.
Adler, G.L., and Nanda, R., 1974b.
The effects of learning on optimal lot size determination – Multiple product case. AIIE Transactions 6(1): 21–27.
Alamri, A.A., and Balkhi, Z.T., 2007. The effects of learning and forgetting on the optimal production lot size for deteriorating items with time-varying demand and deterioration rates. International Journal of Production Economics 107(1): 125–138.
Axsäter, S., and Elmaghraby, S., 1981. A note on EMQ under learning and forgetting. AIIE Transactions 13(1): 86–90.
Badiru, A.B., 1995. Multivariate analysis of the effect of learning and forgetting on product quality. International Journal of Production Research 33(3): 777–794.
Balkhi, Z.T., 2003. The effects of learning on the optimal production lot size for deteriorating and partially backordered items with time-varying demand and deterioration rates. Applied Mathematical Modeling 27(10): 763–779.
Baloff, N., 1970. Startup management. IEEE Transactions on Engineering Management 17(4): 132–141.
Beamon, B.M., 1999. Designing the green supply chain. Logistics Information Management 12(4): 332–342.
Ben-Daya, M., and Hariga, M., 2003. Lead-time reduction in a stochastic inventory system with learning consideration. International Journal of Production Research 41(3): 571–579.
Bonney, M., 2009. Inventory planning to help the environment. In Inventory management: Non-classical views, ed. M.Y. Jaber, 43–74. Boca Raton: CRC Press (Taylor and Francis Group).
Carlson, J.G.H., 1975. Learning, lost time and economic production (The effect of learning on production lots). Production and Inventory Management 16(4): 20–33.
Carlson, J.G., and Rowe, R.G., 1976. How much does forgetting cost? Industrial Engineering 8(9): 40–47.
Chand, S., 1989. Lot sizes and set-up frequency with learning and process quality. European Journal of Operational Research 42(2): 190–202.
Chand, S., and Sethi, S.P., 1990. A dynamic lot sizing model with learning in set-ups.
Operations Research 38(4): 644–655. Chen, C.K., Lo, C.C., and Liao, Y.X., 2008. Optimal lot size with learning consideration on an imperfect production system with allowable shortages. International Journal of Production Economics 113(1): 459–469. Cheng, T.C.E., 1991. An EOQ model with learning effect on set-ups. Production and Inventory Management 32(1): 83–84. Cheng, T.C.E., 1994. An economic manufacturing quantity model with learning effects. International Journal of Production Economics 33(1–3): 257–264. Chiu, H.N., 1997. Discrete time-varying demand lot-sizing models with learning and forgetting effects. Production Planning and Control 8(5): 484–493. Chiu, H.N., and Chen, H.M., 1997. The effect of time-value of money on discrete time-varying demand lot-sizing models with learning and forgetting considerations. Engineering Economist 42(3): 203–221. Chiu, H.N., and Chen, H.M., 2005. An optimal algorithm for solving the dynamic lot-sizing model with learning and forgetting in set-ups and production. International Journal of Production Economics 95(2): 179–193. Chiu, H.N., Chen, H.M., and Weng, L.C., 2003. Deterministic time-varying demand lot-sizing models with learning and forgetting in set-ups and production. Production and Operations Management 12(1): 120–127. Conway, R., and Schultz, A., 1959. The manufacturing progress function. Journal of Industrial Engineering 10(1): 39–53. Corlett, N., and Morcombe, V.J., 1970. Straightening out the learning curves. Personnel Management 2(6): 14–19. Crossman, E.R.F.W., 1959. A theory of acquisition of speed skill. Ergonomics 2(2): 153–166. Dar-El, E.M., Ayas, K., and Gilad, I., 1995. A dual-phase model for the individual learning process in industrial tasks. IIE Transactions 27(3): 265–271. de Jong, J.R., 1957. The effect of increased skills on cycle time and its consequences for time standards. Ergonomics 1(1): 51–60. Dell, M., and Fredman, C., 1999. Direct from Dell: Strategies that revolutionized an industry.
London: Harper Collins. Dobos, I., and Richter, K., 2004. An extended production/recycling model with stationary demand and return rates. International Journal of Production Economics 90(3): 311–323. Elmaghraby, S.E., 1990. Economic manufacturing quantities under conditions of learning and forgetting (EMQ/LaF). Production Planning and Control 1(4): 196–208. Eroglu, A., and Ozdemir, G., 2005. A note on “The effect of time-value of money on discrete time-varying demand lot-sizing models with learning and forgetting considerations.” Engineering Economist 50(1): 87–90. Learning Curves: Theory, Models, and Applications Fisk, J.C., and Ballou, D.P., 1982. Production lot sizing under a learning effect. AIIE Transactions 14(4): 257–264. Freeland, J.R., and Colley, J.L., Jr., 1982. A simple heuristic method for lot sizing in a time-phased reorder system. Production and Inventory Management 23(1): 15–22. Globerson, S., and Levin, N., 1987. Incorporating forgetting into learning curves. International Journal of Operations and Production Management 7(4): 80–94. Globerson, S., Levin, N., and Shtub, A., 1989. The impact of breaks on forgetting when performing a repetitive task. IIE Transactions 21(4): 376–381. Glover, J.H., 1965. Manufacturing progress functions: An alternative model and its comparison with existing functions. International Journal of Production Research 4(4): 279–300. Glover, J.H., 1966. Manufacturing progress functions II: Selection of trainees and control of their progress. International Journal of Production Research 5(1): 43–59. Glover, J.H., 1967. Manufacturing progress functions III: Production control of new products. International Journal of Production Research 6(1): 15–24. Gorham, T., 1968. Dynamic order quantities. Production and Inventory Management 9(1): 75–81. Goyal, S.K., 1977. An integrated inventory model for a single supplier-single customer problem. International Journal of Production Research 15(1): 107–111. Goyal, S.K., and Gupta, Y.P., 1989.
Integrated inventory models: The buyer-vendor coordination. European Journal of Operational Research 41(3): 261–269. Graham, C.H., and Gagné, R.M., 1940. The acquisition, extinction, and spontaneous recovery of conditioned operant response. Journal of Experimental Psychology 26(3): 251–280. Gungor, A., and Gupta, S.M., 1999. Issues in environmentally conscious manufacturing and product recovery: A survey. Computers and Industrial Engineering 36(4): 811–853. Hackett, E.A., 1983. Application of a set of learning curve models to repetitive tasks. The Radio and Electronic Engineer 53(1): 25–32. Harris, F.W., 1990. How many parts to make at once? Operations Research 38(6): 947–950. [Reprinted from Factory: The Magazine of Management, Vol. 10, no. 2, 1913, pp. 135–36] Hirsch, W.Z., 1952. Manufacturing progress function. The Review of Economics and Statistics 34(2): 143–155. Hirschmann, W.B., 1964. Profit from the learning curve. Harvard Business Review 42(1): 125–139. Hoffman, T.R., 1968. Effect of prior experience on learning curve parameters. Journal of Industrial Engineering 19(8): 412–413. Jaber, M.Y., 2006a. Learning and forgetting models and their applications. In Handbook of industrial and systems engineering, ed. A.B. Badiru, Chapter 30, 1–27, Boca Raton: CRC Press (Taylor and Francis Group). Jaber, M.Y., 2006b. Lot sizing for an imperfect production process with quality corrective interruptions and improvements, and reduction in set-ups. Computers and Industrial Engineering 51(4): 781–790. Jaber, M.Y., 2009. Modeling hidden costs of inventory systems: A thermodynamics approach. In Inventory management: Non-classical views, ed. M.Y. Jaber, pp. 199–218, Boca Raton: CRC Press (Taylor and Francis Group). Jaber, M.Y., and Abboud, N.E., 2001. The impact of random machine unavailability on inventory policies in a continuous improvement environment. Production Planning and Control 12(8): 754–763. Jaber, M.Y., and Bonney, M., 1996a.
Production breaks and the learning curve: The forgetting phenomenon. Applied Mathematical Modeling 20(2): 162–169. Jaber, M.Y., and Bonney, M., 1996b. Optimal lot sizing under learning considerations: The bounded learning case. Applied Mathematical Modeling 20(10): 750–755. Jaber, M.Y., and Bonney, M.C., 1997a. A comparative study of learning curves with forgetting. Applied Mathematical Modeling 21(8). Jaber, M.Y., and Bonney, M., 1997b. The effect of learning and forgetting on the economic manufactured quantity (EMQ) with the consideration of intra-cycle shortages. International Journal of Production Economics 53(1): 1–11. Jaber, M.Y., and Bonney, M., 1998. The effects of learning and forgetting on the optimal lot size quantity of intermittent production runs. Production Planning and Control 9(1): 20–27. Jaber, M.Y., and Bonney, M., 1999. The economic manufacture/order quantity (EMQ/EOQ) and the learning curve: Past, present, and future. International Journal of Production Economics 59(1–3): 93–102. Jaber, M.Y., and Bonney, M., 2001a. A comment on “Zhou YW and Lau H-S (1998). Optimal production lot-sizing model considering the bounded learning case and shortages backordered. J Opl Res Soc 49: 1206–1211.” Journal of Operational Research Society 52(5): 584–590. Jaber, M.Y., and Bonney, M., 2001b. A comment on “Zhou YW and Lau H-S (1998). Optimal production lot-sizing model considering the bounded learning case and shortages backordered. J Opl Res Soc 49: 1206–1211 — Comments on Zhou & Lau’s reply.” Journal of Operational Research Society 52(5): 591–592. Jaber, M.Y., and Bonney, M., 2001c. Economic lot sizing with learning and continuous time discounting: Is it significant? International Journal of Production Economics 71(1–3): 135–143. Jaber, M.Y., and Bonney, M., 2003. Lot sizing with learning and forgetting in set-ups and in product quality. International Journal of Production Economics 83(1): 95–111.
Jaber, M.Y., and Bonney, M., 2007. Economic manufacture quantity (EMQ) model with lot-size dependent learning and forgetting rates. International Journal of Production Economics 108(1–2): 359–367. Jaber, M.Y., Bonney, M., and Guiffrida, A.L., 2010. Coordinating a three-level supply chain with learning-based continuous improvement. International Journal of Production Economics 127(1): 27–38. Jaber, M.Y., Bonney, M., and Moualek, I., 2009. Lot sizing with learning, forgetting and entropy cost. International Journal of Production Economics 118(1): 19–25. Jaber, M.Y., and El Saadany, A.M.A., 2011. An economic production and remanufacturing model with learning effects. International Journal of Production Economics 131(1): 115–127. Jaber, M.Y., Goyal, S.K., and Imran, M., 2008. Economic production quantity model for items with imperfect quality subject to learning effects. International Journal of Production Economics 115(1): 143–150. Jaber, M.Y., and Guiffrida, A.L., 2004. Learning curves for processes generating defects requiring reworks. European Journal of Operational Research 159(3): 663–672. Jaber, M.Y., and Guiffrida, A.L., 2007. Observations on the economic order (manufacture) quantity model with learning and forgetting. International Transactions in Operational Research 14(2): 91–104. Jaber, M.Y., and Guiffrida, A.L., 2008. Learning curves for imperfect production processes with reworks and process restoration interruptions. European Journal of Operational Research 189(1): 93–104. Jaber, M.Y., and Khan, M., 2010. Managing yield by lot splitting in a serial production line with learning, rework and scrap. International Journal of Production Economics 124(1): 32–39. Jaber, M.Y., and Kher, H.V., 2002. The dual-phase learning-forgetting model. International Journal of Production Economics 76(3): 229–242. Jaber, M.Y., Kher, H.V., and Davis, D., 2003. Countering forgetting through training and deployment. International Journal of Production Economics 85(1): 33–46. 
Jaber, M.Y., Nuwayhid, R.Y., and Rosen, M.A., 2004. Price-driven economic order systems from a thermodynamic point of view. International Journal of Production Research 42(24): 5167–5184. Jaber, M.Y., and Salameh, M.K., 1995. Optimal lot sizing under learning considerations: Shortages allowed and back ordered. Applied Mathematical Modeling 19(5): 307–310. Jaber, M.Y., and Sikström, S., 2004a. A numerical comparison of three potential learning and forgetting models. International Journal of Production Economics 92(3): 281–294. Jaber, M.Y., and Sikström, S., 2004b. A note on: An empirical comparison of forgetting models. IEEE Transactions on Engineering Management 51(2): 233–234. Jaber, M.Y., and Zolfaghari, S., 2008. Quantitative models for centralized supply chain coordination. In Supply chains: Theory and applications, ed. V. Kordic, 307–338, Vienna: I-Tech Education and Publishing. Jordan, R.B., 1958. How to use the learning curve. N.A.A. Bulletin 39(5): 27–39. Karni, R., 1981. Maximum part-period gain (MPG): A lot sizing procedure for unconstrained and constrained requirements planning systems. Production and Inventory Management 22(2): 91–98. Karwan, K., Mazzola, J., and Morey, R., 1988. Production lot sizing under set-up and worker learning. Naval Research Logistics 35(2): 159–175. Keachie, E.C., and Fontana, R.J., 1966. Effects of learning on optimal lot size. Management Science 13(2): B102–B108. Kim, S.L., Banerjee, A., and Burton, J., 2008. Production and delivery policies for enhanced supply chain partnerships. International Journal of Production Research 46(22): 6207–6229. King, A.M., Burgess, S.C., Ijomah, W., and McMahon, C.A., 2006. Reducing waste: Repair, recondition, remanufacture or recycle? Sustainable Development 14(4): 257–267. Klastorin, T.D., and Moinzadeh, K., 1989. Production lot sizing under learning effects: An efficient solution technique. AIIE Transactions 21(1): 2–10.
Kopcso, D.P., and Nemitz, W.C., 1983. Learning curves and lot sizing for independent and dependent demand. Journal of Operations Management 4(1): 73–83. Li, C.L., and Cheng, T.C.E., 1994. An economic production quantity model with learning and forgetting considerations. Production and Operations Management 3(2): 118–132. Maity, A.K., Maity, K., Mondal, S.K., and Maiti, M., 2009. A production-recycling-inventory model with learning effect. Optimization and Engineering 10(3): 427–438. Muth, E.J., and Spremann, K., 1983. Learning effect in economic lot sizing. Management Science 29(2): B102–B108. Moon, I., and Choi, S., 1998. A note on lead time and distribution assumptions in continuous review inventory models. Computers and Operations Research 25(11): 1007–1012. Nanda, R., and Nam, H.K., 1992. Quantity discounts using a joint lot size model under learning effects – Single buyer case. Computers and Industrial Engineering 22(2): 211–221. Nanda, R., and Nam, H.K., 1993. Quantity discounts using a joint lot size model under learning effects – Multiple buyer case. Computers and Industrial Engineering 24(3): 487–494. Pan, J.C.H., and Lo, M.C., 2008. The learning effect on set-up cost reduction for mixture inventory models with variable lead time. Asia-Pacific Journal of Operational Research 25(4): 513–529. Porteus, E., 1985. Investing in reduced set-ups in the EOQ model. Management Science 31(8): 998–1010. Pratsini, E., Camm, J.D., and Raturi, A.S., 1994. Capacitated lot sizing under set-up learning. European Journal of Operational Research 72(3): 545–557. Rachamadugu, R., 1994. Performance of a myopic lot size policy with learning in set-ups. IIE Transactions 26(5): 85–91. Rachamadugu, R., and Schriber, T.J., 1995. Optimal and heuristic policies for lot sizing with learning in set-ups. Journal of Operations Management 13(3): 229–245. Rachamadugu, R., and Tan, L.C., 1997. Policies for lot sizing with set-up learning.
International Journal of Production Economics 48(2): 157–165. Replogle, S., 1988. The strategic use of smaller lot sizes through a new EOQ model. Production and Inventory Management 29(3): 41–44. Richter, K., 1996a. The EOQ and waste disposal model with variable set-up numbers. European Journal of Operational Research 95(2): 313–324. Richter, K., 1996b. The extended EOQ repair and waste disposal model. International Journal of Production Economics 45(1–3): 443–448. Rosenblatt, M.J., and Lee, H.L., 1986. Economic production cycles with imperfect production processes. IIE Transactions 18(1): 48–55. Salameh, M.K., Abdul-Malak, M.U., and Jaber, M.Y., 1993. Mathematical modeling of the effect of human learning in the finite production inventory model. Applied Mathematical Modeling 17(11): 613–615. Salameh, M.K., and Jaber, M.Y., 2000. Economic production quantity model for items with imperfect quality. International Journal of Production Economics 64(1–3): 59–64. Schrady, D.A., 1967. A deterministic inventory model for repairable items. Naval Research Logistics Quarterly 14(3): 391–398. Shiue, Y.C., 1991. An economic batch production quantity model with learning curve-dependent effects: A technical note. International Journal of Production Economics 25(1–3): 35–38. Smunt, T.L., and Morton, T.E., 1985. The effect of learning on optimal lot sizes: Further developments on the single product case. AIIE Transactions 17(1): 33–37. Spradlin, B.C., and Pierce, D.A., 1967. Production scheduling under learning effect by dynamic programming. The Journal of Industrial Engineering 18(3): 219–222. Steedman, I., 1970. Some improvement curve theory. International Journal of Production Research 8(3): 189–206. Sule, D.R., 1978. The effect of alternate periods of learning and forgetting on economic manufactured quantity. AIIE Transactions 10(3): 338–343. Sule, D.R., 1981.
A note on production time variation in determining EMQ under the influence of learning and forgetting. AIIE Transactions 13(1): 91–95. Teyarachakul, S., Chand, S., and Ward, J., 2008. Batch sizing under learning and forgetting: Steady state characteristics for the constant demand case. Operations Research Letters 36(5): 589–593. Teyarachakul, S., Chand, S., and Ward, J., 2011. Effect of learning and forgetting on batch sizes. Production and Operations Management 20(1): 116–128. Thorndike, E.L., 1898. Animal intelligence: An experimental study of the associative process in animals. The Psychological Review: Monograph Supplements 2(4): 1–109. Thurstone, L.L., 1919. The learning curve equation. Psychological Monograph 26(3): 1–51. Towill, D.R., 1985. The use of learning curve models for prediction of batch production performance. International Journal of Operations and Production Management 5(2): 13–24. Urban, T.L., 1998. Analysis of production systems when run length influences product quality. International Journal of Production Research 36(11): 3085–3094. Wagner, H.M., and Whitin, T.M., 1958. Dynamic version of the economic lot size model. Management Science 5(1): 89–96. Wortham, A.W., and Mayyasi, A.M., 1972. Learning considerations with economic order quantity. AIIE Transactions 4(1): 69–71. Wright, T., 1936. Factors affecting the cost of airplanes. Journal of Aeronautical Science 3(4): 122–128. Yelle, L.E., 1979. The learning curve: Historical review and comprehensive survey. Decision Sciences 10(1): 302–328. Zhou, Y.W., and Lau, H.S., 1998. Optimal production lot-sizing model considering the bounded learning case and shortages backordered. The Journal of the Operational Research Society 49(11): 1206–1211. Zhou, Y.W., and Lau, H.S., 2001. A comment on “Zhou YW and Lau H-S (1998). Optimal production lot-sizing model considering the bounded learning case and shortages backordered. 
J Opl Res Soc 49: 1206–1211 — Reply to Jaber and Bonney.” Journal of Operational Research Society 52(5): 590–591.

15 Learning Effects in Inventory Models with Alternative Shipment Strategies

Christoph H. Glock

CONTENTS
Introduction
Problem Description
Alternative Modes of Transporting Lots between Production Stages
  Transfer of Complete Lots (Model C)
  Immediate Product Transfer (Model I)
  Equal-sized Batch Shipments (Model E)
  Unequal-sized Batch Shipments (Model U)
Learning Effects in Inventory Models with Batch Shipments
  Learning Effect
  Transfer of Complete Lots (Model C)
  Immediate Product Transfer (Model I)
  Equal-sized Batch Shipments (Model E)
  Unequal-sized Batch Shipments (Model U)
Numerical

INTRODUCTION

Learning effects have been analyzed in a variety of application areas, such as the leadership of employees, purchasing decisions, or corporate strategy (see, e.g., Manz and Sims 1984; De Geus 1988; Anderson and Parker 2002; Hatch and Dyer 2004; and Yelle 1979 for a review of the related literature).
The primary reason why the learning phenomenon has received increased attention in recent years is that learning has been identified as a source of sustainable competitive advantage by many researchers (see Hatch and Dyer 2004), which means that proactively transforming static organizations into learning organizations may help to differentiate a company from its competitors. In the domain of production planning, researchers have studied the impact of learning processes on the development of inventory levels (e.g., Elmaghraby 1990; Jaber and Bonney 1998), the failure rate in production processes (see Jaber and Bonney 2003; Jaber et al. 2008), and the costs of machine set-ups (see Chiu et al. 2003; Jaber and Bonney 2003). The basic idea behind learning curve theory is that the performance of an individual improves in a repetitive task, leading to fewer mistakes and to faster job completion (Jaber and Bonney 1999). Seen from the perspective of inventory management, learning effects increase the production rate over time, which may lead to faster inventory build-up, an earlier start of the consumption phase and, consequently, lower inventory carrying costs (e.g., Adler and Nanda 1974). However, as has been shown by Glock (2010, 2011), increasing the production rate in a system where batch shipments are transported between subsequent production stages may increase in-process inventory and thus lead to higher total costs. Taking a closer look at the literature on learning effects in inventory models, it becomes obvious that interdependencies between the transportation strategy implemented in a production system and learning effects in the production process have not yet been studied. This is a notable gap, since the timing and size of deliveries between subsequent production stages influence the relation between the production and consumption phases, which in turn impacts the learning effect.
In addition, learning effects may influence the manufacturing time of a given production quantity, which may increase waiting times in the production system and consequently the in-process inventory. To close the gap identified above, this chapter aims to analyze how the timing and size of deliveries between subsequent stages help to take advantage of the benefits of learning effects in production. The remainder of the chapter is organized as follows: In the next section, we describe the problem studied in this chapter and introduce the assumptions and definitions that will be used in the remaining parts of the chapter. We then develop formal models for different shipment strategies and integrate a learning effect into the model formulation.

PROBLEM DESCRIPTION

In this chapter, we study the interdependencies between learning effects in production and the timing of deliveries between subsequent stages of a production system. This study is motivated by two recent papers by Glock (2010, 2011), who has shown that increasing the production rate in a system where equal-sized batches are transported between subsequent stages may increase the waiting times for some of the batches, thereby leading to higher inventory carrying costs. We focus on a two-stage production system with a producing and a consuming stage, which is the basic building block of a broad variety of more sophisticated inventory models, and consider four basic transportation strategies that are frequently found in the literature. In developing the proposed models, the following assumptions were made:

1. All parameters are deterministic and constant over time.
2. The planning horizon is divided into J periods, and one lot is produced in each of the J periods.
3. Production lots are of equal sizes.
4. Learning and forgetting effects occur at the producing stage.
5. The production rate exceeds the demand rate.
6. Shortages are not allowed.

Furthermore, the following notation was used:

A        set-up costs per set-up
α_j      units of experience available in the production system at the beginning of period j
D        total demand in the planning period
d        demand rate in units per unit of time
F        transportation costs per shipment
f        slope of the forgetting curve
γ        cost per unit of production time
h        inventory carrying charges per unit per unit of time
J        number of set-ups in the planning horizon
k        the production count
l        slope of the learning curve, with 0 ≤ l ≤ 1
m        number of batch shipments per lot produced
p        production rate in units per unit of time
t_{p,i,j}  production time of batch j of production lot i
Q        production lot size, with Q = D/J
R_j      number of units that could have been produced during an interruption in period j
T_1      time required to produce the first unit
T̂_1      equivalent time for the first unit of the forgetting curve
T_k      time required to produce the kth unit
T̂_x      time for the xth unit of lost experience of the forgetting curve
t_c      time to consume a lot
t_p      time to produce a lot
t_{w,i}  waiting time of batch i
x        unit count of the forgetting curve
IC       inventory carrying costs in the planning period
PC       production costs
TC       total costs in the planning period
TWI      time-weighted inventory
Max[a,b] the maximum value of a and b
Min[a,b] the minimum value of a and b

ALTERNATIVE MODES OF TRANSPORTING LOTS BETWEEN PRODUCTION STAGES

The literature discusses a variety of alternative strategies for transporting lots between subsequent production stages. The mode of transport chosen in a production process determines the number of shipments and the shipment quantities, and may thus influence inventory and transfer costs in the production system.
In the following sections, we discuss four basic strategies for transporting lots between a producing and a consuming stage. This discussion will be extended to include learning effects in the later sections of this chapter.

Transfer of Complete Lots (Model C)

One alternative for transferring a lot from a producing to a consuming stage is to wait until the lot has been completed and then to ship the entire production quantity to the next stage. This strategy is often implemented where a production lot may not be split into smaller parts for technical reasons, or where high transportation costs prohibit more than one shipment per lot. The corresponding inventory-time plots for this strategy are shown in Figure 15.1a. The inventory carrying costs in the planning period may easily be calculated as follows:

$$IC^{C} = \frac{Q}{2}\left(t_p + t_c\right)hJ = \frac{D^2}{2J}\left(\frac{1}{p} + \frac{1}{d}\right)h.$$ (15.1)

To assure comparability with the models developed in the next sections, we further consider production costs, which amount to γD/p in the present case. By further considering set-up costs, the total cost function of this strategy amounts to:

$$TC^{C} = \frac{D^2}{2J}\left(\frac{1}{p} + \frac{1}{d}\right)h + \gamma\frac{D}{p} + AJ.$$ (15.2)

FIGURE 15.1 Alternative shipment strategies in a two-stage production system. (With kind permission from Springer Science+Business Media: Zeitschrift für Betriebswirtschaft (Journal of Business Economics), Rational inefficiencies in lot-sizing models with learning effects, 79(4), 2009, 37–57, Bogaschewsky, R.W., and Glock, C.H.)

Immediate Product Transfer (Model I)

Another alternative for transferring lots to the subsequent stage is to ship each (infinitesimal) unit separately. This strategy is often discussed in the context of the classical economic production quantity (EPQ) model (e.g., Silver et al.
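As a quick numerical illustration of the Model C expressions above, the following Python sketch (with invented parameter values; none of the numbers come from the chapter) computes the inventory carrying costs once from the triangle areas and once from the closed form, and checks that the two agree.

```python
# Illustrative sketch (not from the chapter): Model C costs, computed once from
# the triangle areas and once from the closed form. All parameter values are
# invented for the example.

def model_c_costs(D, J, p, d, h, gamma, A):
    """Return (inventory cost, production cost, total cost) for Model C."""
    Q = D / J                            # lot size, Q = D/J
    t_p, t_c = Q / p, Q / d              # time to produce / consume one lot
    ic = 0.5 * Q * (t_p + t_c) * h * J   # J triangle-shaped inventory cycles
    pc = gamma * D / p                   # production at the constant rate p
    return ic, pc, ic + pc + A * J

D, J, p, d, h, gamma, A = 1200.0, 4, 100.0, 50.0, 2.0, 5.0, 80.0
ic, pc, tc_total = model_c_costs(D, J, p, d, h, gamma, A)
ic_closed = D**2 / (2 * J) * (1 / p + 1 / d) * h   # closed-form IC^C
```

With these illustrative values, both routes give the same inventory carrying costs, as the closed form predicts.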
1998; Tersine 1998) and is applicable in cases where the producing and consuming stages are located in close proximity and/or where both stages are connected by automatic transportation equipment. The corresponding inventory-time plots for this case are shown in Figure 15.1b. The inventory carrying costs in the planning period may be obtained by calculating the area under the solid lines and multiplying the resulting expression by Jh. It follows that:

$$IC^{I} = \frac{Q}{2}\left(t_c - t_p\right)Jh = \frac{D^2}{2J}\left(\frac{1}{d} - \frac{1}{p}\right)h.$$ (15.3)

By comparing Equations 15.1 and 15.3, it can be seen that IC^C is always larger than IC^I, because consumption in Model C starts only after production has been finished, whereas in Model I production and consumption are initiated simultaneously. The total costs of this strategy are given as:

$$TC^{I} = \frac{D^2}{2J}\left(\frac{1}{d} - \frac{1}{p}\right)h + \gamma\frac{D}{p} + AJ.$$ (15.4)

Equal-sized Batch Shipments (Model E)

The two shipment strategies introduced above are not mutually exclusive and can be seen as extreme points on a continuum of hybrid transportation strategies. Instead of transporting only complete lots or each manufactured unit separately, the company may decide to aggregate several units into a batch, which is then transported to the subsequent stage. Where the transportation frequency is equal to 1, this strategy is identical to the case where only complete lots are transported to the next stage; where the transportation frequency approaches infinity, it corresponds to the case where each (infinitesimal) unit is shipped separately. A basic transportation strategy that includes batch shipments is due to Szendrovits (1975), who assumes that successive shipments are of equal sizes. The corresponding inventory-time plots for this case are illustrated in Figure 15.1c. As can be seen, the first batch is shipped to the subsequent stage directly after its completion.
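The claim that IC^C always exceeds IC^I can be made concrete: subtracting Equation 15.3 from Equation 15.1 gives D²h/(Jp), which is strictly positive. A minimal sketch with illustrative (made-up) values:

```python
# Sketch comparing Models C and I (illustrative values only). The gap
# IC^C - IC^I simplifies to D^2 * h / (J * p), so it is strictly positive.

def ic_model_c(D, J, p, d, h):
    return D**2 / (2 * J) * (1 / p + 1 / d) * h

def ic_model_i(D, J, p, d, h):
    return D**2 / (2 * J) * (1 / d - 1 / p) * h

D, J, p, d, h = 1200.0, 4, 100.0, 50.0, 2.0
gap = ic_model_c(D, J, p, d, h) - ic_model_i(D, J, p, d, h)
```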
Due to p > d, the second batch is finished before the first batch is completely used up; it therefore has to be kept in stock for t_{w,1} = t_c − t_p time units. As described in Glock (2010, 2011) and Bogaschewsky and Glock (2009), the time-weighted inventory in this case consists of a “regular” inventory and an inventory due to waiting times. The “regular” inventory may be obtained by calculating the area defined by the identical triangles shown in Figure 15.1c:

$$\frac{Q^2}{2m}\left(\frac{1}{p} + \frac{1}{d}\right).$$ (15.5)

Taking into account the waiting times of m − 1 sub-lots, we obtain:

$$\frac{Q}{m}\sum_{i=1}^{m-1} i\left(t_c - t_p\right) = \frac{Q^2\left(m-1\right)}{2m}\left(\frac{1}{d} - \frac{1}{p}\right).$$ (15.6)

The inventory carrying costs in the planning period may thus be calculated as follows:

$$IC^{E} = \frac{Q^2}{2m}\left(\frac{2-m}{p} + \frac{m}{d}\right)Jh = \frac{D^2}{2Jm}\left(\frac{2-m}{p} + \frac{m}{d}\right)h.$$ (15.7)

Equation 15.7 reduces to Equation 15.1 when m = 1, and to Equation 15.3 when m → ∞. In calculating the total cost function, it is necessary to consider transportation costs, which accrue with every shipment, since otherwise it would always be optimal to set m equal to infinity. To exclude this trivial case, which has already been considered above, we formulate the total cost function of this strategy as follows:

$$TC^{E} = \frac{D^2}{2Jm}\left(\frac{2-m}{p} + \frac{m}{d}\right)h + \gamma\frac{D}{p} + \left(A + mF\right)J.$$ (15.8)

Unequal-sized Batch Shipments (Model U)

While transporting equal-sized batch shipments to the subsequent stage may be beneficial in case the second strategy is infeasible or too costly, it has been shown by Goyal (1977) that the in-process inventory may be further reduced if batches of unequal size are shipped to the subsequent stage. The corresponding inventory-time plots for the case of unequal-sized batch shipments that follow a geometric series are illustrated in Figure 15.1d. As can be seen, batches 2 to m are manufactured during the consumption time of the respective preceding batch, which avoids the waiting-time-related inventory.
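A small sketch (illustrative values only) confirming the limiting behaviour of IC^E: m = 1 recovers the Model C expression, while a very large m approaches the Model I expression from above.

```python
# Sketch checking the limits of IC^E (Eq. 15.7) with invented parameters:
# m = 1 should equal Model C, and m -> infinity should approach Model I.

def ic_model_e(D, J, p, d, h, m):
    return D**2 / (2 * J * m) * ((2 - m) / p + m / d) * h

D, J, p, d, h = 1200.0, 4, 100.0, 50.0, 2.0
ic_c = D**2 / (2 * J) * (1 / p + 1 / d) * h    # Model C benchmark
ic_i = D**2 / (2 * J) * (1 / d - 1 / p) * h    # Model I benchmark
ic_m1 = ic_model_e(D, J, p, d, h, m=1)
ic_large_m = ic_model_e(D, J, p, d, h, m=10**6)
```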
The size of the jth batch can be calculated as follows (see Goyal 1977):

$$q_j = q_1\left(\frac{p}{d}\right)^{j-1}, \quad \text{with} \quad Q = \sum_{i=1}^{m} q_i = q_1\sum_{i=1}^{m}\left(\frac{p}{d}\right)^{i-1}.$$ (15.9)

The inventory carrying costs in the planning period take the following form (see Glock 2010):

$$IC^{U} = \frac{q_1^2}{2}\left(\frac{1}{p}+\frac{1}{d}\right)\frac{\left(p/d\right)^{2m}-1}{\left(p/d\right)^{2}-1}\,Jh = \frac{D^2}{2J}\left(\frac{1}{p}+\frac{1}{d}\right)\frac{\left(\left(p/d\right)^{m}+1\right)\left(p/d-1\right)}{\left(\left(p/d\right)^{m}-1\right)\left(p/d+1\right)}\,h.$$ (15.10)

The total costs of this strategy are thus given as:

$$TC^{U} = \frac{D^2}{2J}\left(\frac{1}{p}+\frac{1}{d}\right)\frac{\left(\left(p/d\right)^{m}+1\right)\left(p/d-1\right)}{\left(\left(p/d\right)^{m}-1\right)\left(p/d+1\right)}\,h + \gamma\frac{D}{p} + \left(A + mF\right)J.$$ (15.11)

LEARNING EFFECTS IN INVENTORY MODELS WITH BATCH SHIPMENTS

Learning Effect

In the following, we assume that learning and forgetting effects occur in the production process of the first stage. The learning effect is assumed to follow the power learning curve due to Wright (1936), which is of the form:

$$T_k = T_1 k^{-l}.$$ (15.12)

In this respect, T_k denotes the time needed to produce the kth unit, k the cumulative production quantity, T_1 the time required to produce the first unit, and l the slope of the learning curve. Furthermore, we consider the forgetting effect described by Carlson and Rowe (1976), which is of the form:

$$\hat{T}_x = \hat{T}_1 x^{f},$$ (15.13)

where T̂_x equals the time for the xth unit of lost experience on the forgetting curve, x the amount of output that would have been accumulated if the production process had not been interrupted, T̂_1 the equivalent time for the first unit of the forgetting curve, and f the slope of the forgetting curve. Both effects described above have been integrated by Jaber and Bonney (1996, 1998). Because the time it takes to produce one unit at the moment production stops equals the starting point of the forgetting curve, it follows from Equations 15.12 and 15.13 that:

$$\hat{T}_{1,i} = T_1 q_i^{-\left(l+f\right)}.$$ (15.14)
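The closed form for IC^U can be cross-checked by summing the production and consumption triangles batch by batch. The sketch below does so with invented parameter values; the closed-form expression coded here is our reading of the equation above, so treat the comparison as an internal consistency check rather than a definitive statement of the model.

```python
# Sketch: Model U with geometrically increasing batches (illustrative values).
# The direct triangle summation should reproduce the closed form.

def unequal_batches(Q, p, d, m):
    """Batch sizes q_1, ..., q_m growing geometrically with ratio p/d."""
    lam = p / d
    q1 = Q * (lam - 1) / (lam**m - 1)
    return [q1 * lam**(j - 1) for j in range(1, m + 1)]

def ic_u_direct(D, J, p, d, h, m):
    """Sum each batch's production and consumption triangles, for J lots."""
    Q = D / J
    return sum(q * q / 2 * (1 / p + 1 / d) for q in unequal_batches(Q, p, d, m)) * J * h

def ic_u_closed(D, J, p, d, h, m):
    """Closed form as read from the text (our reconstruction)."""
    lam = p / d
    frac = ((lam**m + 1) * (lam - 1)) / ((lam**m - 1) * (lam + 1))
    return D**2 / (2 * J) * (1 / p + 1 / d) * frac * h

D, J, p, d, h, m = 1200.0, 4, 100.0, 50.0, 2.0, 3
qs = unequal_batches(D / J, p, d, m)   # three batches with ratio p/d = 2
```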
If α_i denotes the equivalent number of produced units that the production system remembers at the beginning of production run i, the time to manufacture q_i units equals (see Jaber and Bonney 1998):

t_{p,i} = \int_{\alpha_i}^{\alpha_i + q_i} T_1 k^{-l}\, dk = \frac{T_1}{1-l}\left((q_i + \alpha_i)^{1-l} - \alpha_i^{1-l}\right). (15.15)

Further, we assume that R_i denotes the number of units that could have been produced during the interruption t_{f,i}. It follows that:

t_{f,i} = \int_{\alpha_i + q_i}^{\alpha_i + q_i + R_i} T_1 (\alpha_i + q_i)^{-(l+f)} z^{f}\, dz = \frac{T_1 (\alpha_i + q_i)^{-(l+f)}}{1+f}\left((\alpha_i + q_i + R_i)^{1+f} - (\alpha_i + q_i)^{1+f}\right). (15.16)

Solving Equation 15.16 for R_i yields:

R_i = \left((\alpha_i + q_i)^{1+f} + \frac{(1+f)(\alpha_i + q_i)^{f+l}\, t_{f,i}}{T_1}\right)^{1/(1+f)} - (\alpha_i + q_i). (15.17)

As the last point of the forgetting curve is equivalent to the starting point of the learning curve in the following production cycle, it follows that:

T_1 (\alpha_i + q_i)^{-(f+l)} (\alpha_i + q_i + R_i)^{f} = T_1 \alpha_{i+1}^{-l}. (15.18)

Solving for α_{i+1} yields:

\alpha_{i+1} = \left((\alpha_i + q_i)^{-(f+l)} (\alpha_i + q_i + R_i)^{f}\right)^{-1/l}. (15.19)

Since it is not reasonable to assume that more units can be remembered than have previously been produced, feasible values for α_i are restricted to the following interval:

0 \le \alpha_i \le \sum_{n=1}^{i-1} q_n. (15.20)

Note that α_1 = 0, since the production system has no prior experience in the first period.

Transfer of Complete Lots (Model C)

We first consider the case where only complete lots are transported to the subsequent stage. As can be seen in Figure 15.2a, the production rate now increases with each unit produced due to the learning effect. Looking first at the inventory carrying costs, it is again advantageous to derive the stock in the planning period by calculating the area under the triangle-like shapes shown in Figure 15.2a.

FIGURE 15.2 Alternative shipment strategies in a two-stage production system with learning effects.

By solving
(With kind permission from Springer Science+Business Media: Zeitschrift für Betriebswirtschaft (Journal of Business Economics), Rational inefficiencies in lot-sizing models with learning effects, 79(4), 2009, 37–57, Bogaschewsky, R.W., and Glock, C.H.)

Equation 15.15, which gives the time needed to manufacture q_i units, for q_i, we can obtain the number of units that can be produced in t_{p,i} time units given α_i units of experience:

q_i = \left(\frac{1-l}{T_1}\, t_{p,i} + \alpha_i^{1-l}\right)^{1/(1-l)} - \alpha_i. (15.21)

The time-weighted inventory per lot produced can be determined by integrating Equation 15.21 over the limits 0 and t_{p,i} and by further considering the inventory at the second stage, providing:

TWI_i^{C,L} = \frac{T_1}{2-l}\left\{\left((q_i + \alpha_i)^{1-l}\right)^{\delta} - \alpha_i^{2-l} - \delta\alpha_i\left((q_i + \alpha_i)^{1-l} - \alpha_i^{1-l}\right)\right\} + \frac{D^2}{2J^2 d}, (15.22)

where δ = (2 − l)/(1 − l). Note that q_i = D/J in this case. Inventory carrying costs in the planning period may now be formulated as follows:

IC^{C,L} = \sum_{i=1}^{J} TWI_i^{C,L}\, h. (15.23)

Since the time to produce the total demand D is affected by the lot sizing policy, it can be assumed that the production costs themselves will also be influenced by the chosen lot sizing policy. If γ denotes the costs per unit of production time, the production costs amount to:

PC^{C,L} = \gamma \sum_{i=1}^{J} t_{p,i}, (15.24)

where t_{p,i} is calculated according to Equation 15.15. The total cost function may be derived by further considering set-up costs:

TC^{C,L} = \sum_{i=1}^{J} TWI_i^{C,L}\, h + \gamma \sum_{i=1}^{J} t_{p,i} + AJ. (15.25)

Immediate Product Transfer (Model I)

In case each (infinitesimal) unit is shipped separately to the next stage, the inventory per lot may be calculated by subtracting the time-weighted inventory during the production phase from the time-weighted inventory during the consumption phase (cf. Figure 15.2b). It follows from Equation 15.22 that:

TWI_i^{I,L} = \frac{D^2}{2J^2 d} - \frac{T_1}{2-l}\left\{\left((q_i + \alpha_i)^{1-l}\right)^{\delta} - \alpha_i^{2-l} - \delta\alpha_i\left((q_i + \alpha_i)^{1-l} - \alpha_i^{1-l}\right)\right\}. (15.26)
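Equations 15.15 and 15.21 are inverses of one another: the first maps a lot size to its production time, the second maps the time back to the lot size. A quick numerical round-trip check (all parameter values invented for illustration, not taken from the chapter):

```python
def production_time(q, alpha, T1, l):
    """Equation 15.15: time to produce q units starting from alpha units of experience."""
    return T1 / (1 - l) * ((q + alpha) ** (1 - l) - alpha ** (1 - l))

def units_in_time(t, alpha, T1, l):
    """Equation 15.21: units producible in t time units given alpha units of experience."""
    return ((1 - l) / T1 * t + alpha ** (1 - l)) ** (1 / (1 - l)) - alpha

T1, l = 10.0, 0.3
t = production_time(40.0, alpha=25.0, T1=T1, l=l)
q = units_in_time(t, alpha=25.0, T1=T1, l=l)   # recovers the original 40 units
```

The same inversion is what turns the integral of Equation 15.21 into the closed-form time-weighted inventory of Equation 15.22.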
Note that, despite the learning effect, p > d or 1/T_1 > d is still a necessary condition to ensure that the demand at the second stage is satisfied without interruption. Inventory carrying costs in the planning period are thus given as:

IC^{I,L} = \sum_{i=1}^{J} TWI_i^{I,L}\, h.

The total cost function for this strategy is given by further considering production and set-up costs:

TC^{I,L} = \sum_{i=1}^{J} TWI_i^{I,L}\, h + \gamma \sum_{i=1}^{J} t_{p,i} + AJ.

Equal-sized Batch Shipments (Model E)

The case where equal-sized batch shipments are transported to the subsequent stage is illustrated in Figure 15.2c. As can be seen, the inventory may be differentiated into a "regular" inventory during the production and consumption phase of a batch, and an inventory due to waiting times. It is obvious that learning reduces the time it takes to build up inventory and consequently the regular inventory, but that the inventory due to waiting times is simultaneously increased.

The "regular" inventory for Q units produced in m batches can be determined by adapting Equation 15.22:

I^{T} = \frac{T_1}{2-l} \sum_{j=1}^{m} \left\{\left(\left(\frac{D}{Jm} + \alpha_{i,j}\right)^{1-l}\right)^{\delta} - \alpha_{i,j}^{2-l} - \delta\alpha_{i,j}\left(\left(\frac{D}{Jm} + \alpha_{i,j}\right)^{1-l} - \alpha_{i,j}^{1-l}\right)\right\} + \frac{D^2}{2mJ^2 d}, (15.27)

where α_{i,j+1} = α_{i,j} + D/(Jm) for j = 1,...,m − 1 and α_{1,1} = 0. α_{i,1} can be calculated from α_{i−1,m} according to Equation 15.19. The inventory due to waiting times may be obtained in a similar way to the case without learning effects (see Bogaschewsky and Glock 2009):

I^{wait} = \frac{Q}{m} \sum_{i=1}^{m-1} \left(i\, t_c - \sum_{j=2}^{i+1} t_{p,i,j}\right) = \frac{D}{Jm}\left(\frac{(m-1)D}{2Jd} - \sum_{i=1}^{m-1} \sum_{j=2}^{i+1} t_{p,i,j}\right), (15.28)

where t_{p,i,j} is the manufacturing time of the jth batch of production lot i (cf. Equation 15.15).
The time-weighted inventory per lot is now given as the sum of Equations 15.27 and 15.28:

TWI_i^{E,L} = \frac{T_1}{2-l} \sum_{j=1}^{m} \left\{\left(\left(\frac{D}{Jm} + \alpha_{i,j}\right)^{1-l}\right)^{\delta} - \alpha_{i,j}^{2-l} - \delta\alpha_{i,j}\left(\left(\frac{D}{Jm} + \alpha_{i,j}\right)^{1-l} - \alpha_{i,j}^{1-l}\right)\right\} + \frac{D^2}{2mJ^2 d} + \frac{D}{Jm}\left(\frac{(m-1)D}{2Jd} - \sum_{i=1}^{m-1} \sum_{j=2}^{i+1} t_{p,i,j}\right). (15.29)

Inventory carrying costs may now be calculated as follows:

IC^{E,L} = \sum_{i=1}^{J} TWI_i^{E,L}\, h. (15.30)

The total cost function can thus be expressed as:

TC^{E,L} = \sum_{i=1}^{J} TWI_i^{E,L}\, h + \gamma \sum_{i=1}^{J} \sum_{j=1}^{m} t_{p,i,j} + (A + mF)J. (15.31)

Unequal-sized Batch Shipments (Model U)

The case where unequal-sized batch shipments are transported to the subsequent stage is illustrated in Figure 15.2d. As can be seen, no inventory due to waiting times emerges in this case. As described above, the time-weighted inventory for the jth batch of production lot i may be calculated as follows:

TWI_{i,j}^{U,L} = \frac{T_1}{2-l}\left\{\left((q_{i,j} + \alpha_{i,j})^{1-l}\right)^{\delta} - \alpha_{i,j}^{2-l} - \delta\alpha_{i,j}\left((q_{i,j} + \alpha_{i,j})^{1-l} - \alpha_{i,j}^{1-l}\right)\right\} + \frac{q_{i,j}^2}{2d}. (15.32)

With the help of Equation 15.21, q_{i,j+1} can be expressed as:

q_{i,j+1} = \left(\frac{1-l}{T_1}\, t_{c,j} + \alpha_{i,j+1}^{1-l}\right)^{1/(1-l)} - \alpha_{i,j+1}, (15.33)

where t_{c,j} denotes the consumption time of batch j; that is, t_{c,j} = q_{i,j}/d. Further, we note that α_{i,j+1} = α_{i,j} + q_{i,j} for j = 1,...,m − 1 and α_{1,1} = 0. Again, α_{i,1} can be calculated from α_{i−1,m} according to Equation 15.19. Inventory carrying costs in the planning period are thus given as:

IC^{U,L} = \sum_{i=1}^{J} \sum_{j=1}^{m} TWI_{i,j}^{U,L}\, h. (15.34)

The total cost function may now be formulated as follows:

TC^{U,L} = \sum_{i=1}^{J} \sum_{j=1}^{m} TWI_{i,j}^{U,L}\, h + \gamma \sum_{i=1}^{J} \sum_{j=1}^{m} t_{p,i,j} + (A + mF)J. (15.35)

NUMERICAL STUDIES

Due to the complexity of the total cost functions 15.25, 15.28, 15.31, and 15.35, it is difficult to formally prove convexity in J and m.
However, numerical studies indicated that the cost functions are quasi-convex in both decision variables. We therefore applied a two-dimensional search algorithm that successively increased both decision variables until an increase in either J or m led to an increase in the total costs. The best solution found so far was taken as the optimal solution. To confirm our results, we used the NMinimize function of the software package Mathematica 7.0 (Wolfram Research, Inc.), a function that contains several methods for solving constrained and unconstrained global optimization problems, such as genetic algorithms, simulated annealing, and the simplex method (see Champion 2002). For each instance we tested, the results of our enumeration algorithm and the results derived by Mathematica were identical.

To illustrate how learning effects may impact the total costs under different shipment strategies, we considered the test problem shown in Table 15.1.

TABLE 15.1 Test Problem Used for Computational Experimentation. D = 1000.

First, we analyzed how the learning effect influences the time-weighted inventory in the models developed above. If we abstract from learning (i.e., if we assume that l = 0), it can be seen in Figure 15.3 that transferring each unit separately to the next stage leads to the lowest time-weighted inventory for fixed values of J and m, whereas shipping only complete lots results in the highest inventory. If the learning rate is gradually increased, the time-weighted inventory increases for Models I, E, and U, but is reduced for Model C. Further, it becomes obvious that for a high learning rate, the time-weighted inventory is almost identical for all four transportation strategies.
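The search procedure described above can be sketched as a greedy coordinate search. This is a generic illustration of the idea, not the authors' exact implementation; the bounds and the test function are invented:

```python
def search_J_m(total_cost, J_max=100, m_max=100):
    """Greedy coordinate search: starting from (1, 1), raise J or m by one
    step whenever doing so lowers the cost; stop when neither step helps.
    Sufficient when total_cost is quasi-convex in both variables."""
    J, m = 1, 1
    best = total_cost(J, m)
    improved = True
    while improved and J < J_max and m < m_max:
        improved = False
        for dJ, dm in ((1, 0), (0, 1)):
            cand = total_cost(J + dJ, m + dm)
            if cand < best:
                J, m, best = J + dJ, m + dm, cand
                improved = True
    return J, m, best

# On a quasi-convex test function the search walks to the minimizer:
J_opt, m_opt, cost = search_J_m(lambda J, m: (J - 4) ** 2 + (m - 6) ** 2 + 3)
```

Cross-checking such an enumeration against a global optimizer, as the chapter does with NMinimize, guards against the search stalling on a non-quasi-convex instance.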
Note that we only considered values for the learning rate in the range 0 ≤ l ≤ 0.9, since the model used to describe the learning effect in this chapter becomes invalid for the special case l = 1 (see Jaber and Guiffrida 2007). These results may be explained as follows: in case only complete lots are transferred to the second stage, it can be seen in Figure 15.1 that the inventory consists of an inventory during the production phase and an inventory during the consumption phase. If the learning rate is increased, the output per unit of time of the production system increases, which leads to a shorter production time for a lot of size Q. In the hypothetical case of a production rate that approaches infinity, the inventory has to be carried solely during the consumption phase t_c. In this case, the time-weighted inventory equals D²/(2Jd), which takes on a value of 2500 in the example introduced above. In case each unit is shipped immediately to the next stage, the inventory during the production phase increases according to the difference between the production rate p and the demand rate d. Consequently, if learning occurs and the production rate is increased, the inventory increases at a higher rate. In the hypothetical case where the production rate approaches infinity, the lot is available immediately and Models C and I are identical.

FIGURE 15.3 Time-weighted inventory for different learning rates and J = 2 and m = 3.

FIGURE 15.4 Total costs for alternative shipment strategies and different learning rates.

If equal-sized batches are transported to the second stage, the inventory consists of a "regular" inventory (i.e., an inventory during the production and consumption time of a batch), and an inventory due to waiting times.
If learning occurs and the production rate increases, the regular inventory is reduced and the inventory due to waiting times is increased, as has been outlined above. In the hypothetical case where the production rate approaches infinity, the production lot is again available immediately and Model E is identical to Models C and I. Finally, in case unequal-sized batches are shipped to the subsequent stage, an effect similar to the one described above occurs. However, due to the fact that batches are of unequal sizes, an increase in the production rate may be balanced by reducing the size of the first shipment, which reduces the minimum inventory in the system. If the production rate takes on a very high level, the whole production lot is again available immediately, in which case Model U leads to results similar to those of Models C, I, and E.

Figure 15.4 shows the development of the total costs for alternative values of l. It can be seen that for Model C, an increase in the learning rate leads to a reduction in the total costs. By contrast, if each unit is shipped separately to the next stage, an increase in the learning rate results in an increase in the total costs. For high values of l, Models C and I lead to identical costs. As to Models E and U, Figure 15.4 illustrates that an increase in the learning rate first increases and then slightly reduces the total costs. The reduction is due to a decrease in production costs, which more than compensates for the additional inventory carrying costs. The difference between the total costs of Models C and I on the one hand, and Models E and U on the other hand, is due to the fact that transportation costs have not been considered in the formulation of Models C and I.

CONCLUSIONS

In this chapter, we have analyzed the impact of learning effects on the total costs of a two-stage production system under different transportation strategies.
It has been shown that learning effects at the producing stage may increase in-process inventory and thus lead to higher total costs. The results are congruent with the logic of the theory of constraints, which suggests that increasing the capacity of a machine supplying a bottleneck increases waiting times and inventory in front of the bottleneck (see Goldratt and Cox 1989). In such a case, it is beneficial to synchronize the production rates of the bottleneck and the machines supplying it in order to avoid excessive inventory. When interpreting the results derived in this chapter, it has to be considered that the problems identified may be due to the specific definition of the problem domain and the implicit restrictions set by defining this domain (see Bogaschewsky and Glock 2009). The learning effects we considered in our model result in higher costs (in three out of four models) because they increase the production rate and thus induce additional inventory in front of the consuming stage. However, if the production materials that are kept in stock in the raw material inventory are taken into account, it becomes obvious that faster production leads to a faster transformation of input materials into semi-finished or finished products. Thus, the inventory is transferred from the raw material storage location to the sales warehouse, and no additional inventory emerges. Therefore, higher inventory holding costs in the sales area (or after finishing production of any amount of items) would be compensated by lower inventory holding costs for the material stock, provided that all the materials needed for production are in stock regardless of when production exactly starts. Obviously, problems arise only in a case where the production materials are delivered (and paid for) according to the actual work-in-process.
Further, we note that mechanisms exist that can partially reduce the problems described in this chapter. For example, in order to avoid excessive inventory, the machine could be switched off after a certain period of time in order to wait until the inventory in front of the consuming stage has been depleted (see Bogaschewsky and Glock 2009; Szendrovits 1987). This might result in a partial loss of experience, but could simultaneously reduce inventory in the system. The models presented in this chapter may be used to study more complex production systems, for example, multi-stage or integrated inventory systems. Further, alternative formulations for the learning effect could be used to analyze how other learning curves—e.g., in combination with a plateau—impact inventory levels and the total costs in production systems.

REFERENCES

Adler, G.L., and Nanda, R., 1974. The effects of learning on optimal lot size determination – Single product case. AIIE Transactions 6(1): 14–20.
Anderson, E.G., Jr., and Parker, G.G., 2002. The effect of learning on the make/buy decision. Production and Operations Management 11(3): 313–339.
Bogaschewsky, R.W., and Glock, C.H., 2009. Rational inefficiencies in lot-sizing models with learning effects. Zeitschrift für Betriebswirtschaft (Journal of Business Economics) 79(4): 37–57.
Carlson, J.G.H., and Rowe, R.G., 1976. How much does forgetting cost? Industrial Engineering 8(9): 40–47.
Champion, B., 2002. Numerical optimization in Mathematica: An insider's view of Minimize. Proceedings of the 2002 World Multiconference on Systemics, Cybernetics, and Informatics, Orlando. Available online at http://library.wolfram.com/infocenter/Conferences/4311/
Chiu, H.N., Chen, H.M., and Weng, L.C., 2003. Deterministic time-varying demand lot-sizing models with learning and forgetting in set-ups and production. Production and Operations Management 12(1): 120–127.
De Geus, A.P., 1988.
Planning as learning. Harvard Business Review 66(1): 70–74.
Elmaghraby, S.E., 1990. Economic manufacturing quantities under conditions of learning and forgetting (EMQ/LaF). Production Planning & Control 1(4): 196–208.
Glock, C.H., 2010. Batch sizing with controllable production rates. International Journal of Production Research 48(20): 5925–5942.
Glock, C.H., 2011. Batch sizing with controllable production rates in a multi-stage production system. International Journal of Production Research (doi: 10.1080/00207543.2010.528058).
Goldratt, E.M., and Cox, J., 1989. The goal (revised edition). Aldershot: Gower.
Goyal, S.K., 1977. Determination of optimum production quantity for a two-stage production system. Operations Research Quarterly 28(4): 865–870.
Hatch, N.W., and Dyer, J.H., 2004. Human capital and learning as a source of sustainable competitive advantage. Strategic Management Journal 25(12): 1155–1178.
Jaber, M.Y., and Bonney, M., 1996. Production breaks and the learning curve: The forgetting phenomenon. Applied Mathematical Modelling 20(2): 162–169.
Jaber, M.Y., and Bonney, M., 1998. The effects of learning and forgetting on the optimal lot size quantity of intermittent production runs. Production Planning and Control 9(1): 20–27.
Jaber, M.Y., and Bonney, M., 1999. The economic manufacture/order quantity (EMQ/EOQ) and the learning curve: Past, present, and future. International Journal of Production Economics 59(1–3): 93–102.
Jaber, M.Y., and Bonney, M., 2003. Lot sizing with learning and forgetting in set-ups and in product quality. International Journal of Production Economics 83(1): 95–111.
Jaber, M.Y., Goyal, S.K., and Imran, M., 2008. Economic production quantity model for items with imperfect quality subject to learning effects. International Journal of Production Economics 115(1): 143–150.
Jaber, M.Y., and Guiffrida, A.L., 2007. Observations on the economic manufacture quantity model with learning and forgetting.
International Transactions in Operational Research 14(2): 91–104.
Manz, C.C., and Sims, H.P., Jr., 1984. The potential for "groupthink" in autonomous work groups. Human Relations 35(9): 773–784.
Silver, E.A., Pyke, D.F., and Peterson, R., 1998. Inventory management and production planning and scheduling, 3rd ed. New York: Wiley.
Szendrovits, A.Z., 1975. Manufacturing cycle time determination for a multi-stage economic production quantity model. Management Science 22(3): 298–308.
Szendrovits, A.Z., 1987. An inventory model for interrupted multi-stage production. International Journal of Production Research 25(1): 129–143.
Tersine, R.J., 1998. Principles of inventory and materials management, 4th ed. Upper Saddle River: Prentice Hall.
Wright, T.P., 1936. Factors affecting the cost of airplanes. Journal of the Aeronautical Sciences 3(4): 122–128.
Yelle, L.E., 1979. The learning curve: Historical review and comprehensive survey. Decision Sciences 10(2): 302–328.

16 Steady-State Characteristics under Processing-Time Learning and Forgetting

Sunantha Teyarachakul

CONTENTS
Introduction and Major Assumptions
Steady-State Characteristics: The Case of Power Forgetting Functions
    Power Function with a Variable y-Intercept and a Fixed Slope
    Function with a Fixed y-Intercept and a Fixed Slope
Steady-State Characteristics: The Case of an Exponential Forgetting Function
Steady-State Characteristics: The Generalization Case
Discussion on Form of the Optimal Policy (FOOP)
Concluding Remarks
References

INTRODUCTION AND MAJOR ASSUMPTIONS

This chapter covers the long-term characteristics of batch production times in a repetitive work environment over an infinite horizon, in which the demand rate is constant and the production of a constant batch size of q units takes place at regular intervals (such as once every Monday). The model that we discuss differs from the traditional inventory models [i.e., economic order quantity (EOQ), economic manufacturing quantity (EMQ)] in that the production rate is no longer a constant; instead, it is influenced by workers' learning while processing units and their forgetting during the break between two successive batches. When returning to produce a batch of the same product, the worker is then relearning. Early work in this area (Sule 1978; Axsäter and Elmaghraby 1981; Elmaghraby 1990) reported a common observation: as the number of lots produced increases, the batch production time converges to a unique value. Sule (1978, 338) explains it well: "In the steady state condition […] the drop in productivity due to forgetting would be equal to the increase in productivity due to learning during the manufacture of Q units." This long-term characteristic implies that an operator starts every new batch with the same experience (skill) level, the forgetting effect canceling out the learning effect within a batch. Teyarachakul et al. (2008) named this characteristic the single-point convergence of the batch production time. More recent work (Teyarachakul 2003; Teyarachakul et al. 2008) found that single-point convergence is not the sole possibility. Instead, in the long term, all odd-numbered batches may require the same, longer production time than the even-numbered batches do, or vice versa.
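Sule's steady state (forgetting between batches exactly offsetting learning within a batch) can be reproduced with a deliberately crude toy model: Wright-type unit times within a batch and, unlike the power and exponential forgetting functions studied in this chapter, a simple carry-over rule in which a fixed fraction of experience survives each break. All parameter values here are invented for illustration:

```python
def batch_time(q, alpha, T0=1.0, m=0.3):
    """Time to produce a batch of q units when the operator already has
    alpha units' worth of remembered experience, under Wright's curve
    T(x) = T0 * x**(-m)."""
    return sum(T0 * (alpha + k) ** (-m) for k in range(1, q + 1))

def simulate(q=20, retention=0.6, n_batches=30):
    """Toy carry-over rule (NOT one of the chapter's forgetting functions):
    a fixed fraction `retention` of accumulated experience survives the
    break between batches."""
    alpha, times = 0.0, []
    for _ in range(n_batches):
        times.append(batch_time(q, alpha))
        alpha = retention * (alpha + q)
    return times

times = simulate()
# Batch times fall monotonically and settle to a single steady-state value:
# learning within each batch is exactly offset by forgetting during the break.
```

Because this carry-over map is a linear contraction, only single-point convergence can occur here; the alternating behavior described next requires the richer forgetting dynamics analyzed in the chapter.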
Specifically, in the steady state every other batch starts with the same high experience level, while the alternate batches begin with the same low experience level; therefore, the amount of time to complete each batch alternates between two different values. This characteristic is referred to as alternating convergence in batch production times by Teyarachakul et al. (2008). This chapter is devoted to the study of the long-term characteristics that were found to exist under power forgetting functions with a variable y-intercept and a fixed slope (Sule 1978; Axsäter and Elmaghraby 1981), power forgetting functions with a fixed y-intercept and a fixed slope (Elmaghraby 1990), and an exponential forgetting curve (Globerson and Levin 1987; Teyarachakul 2003). Later in the chapter, a generalization of the long-term characteristics is presented. These results—which are not specific to particular forgetting or learning functions—are based on a list of sensible characteristics of learning and forgetting. To summarize, the major common assumptions made by scholars who analyze similar problems in this area are as follows: (1) an infinite horizon; (2) a constant demand rate of d; (3) a constant lot size of q; and (4) the use of the original learning curve as the relearning curve. Additionally, many papers in this area have implicitly assumed equally spaced production cycles of length T = q/d time periods. Each cycle starts when a batch production begins and ends when the stored inventory is depleted (zero inventory policy). The learning function in Globerson and Levin's (1987) convergence study is Wright's (1936) learning curve, whereas the learning curve in Sule's (1978) and Elmaghraby's (1990) studies has a slight modification to the unit of productivity measurement.
They used the production rate (units/time) as a function of continuous production time, while Globerson and Levin (1987) used the unit production time (time/unit) as a function of the cumulative number of units of uninterrupted production. We next present a brief review of one of the most widely used learning curves—Wright's learning curve (1936). Its most important feature is that as the number of units produced doubles, the unit production time declines by a constant percentage, say (1 − δ)·100 percent. In the sections "Steady-State Characteristics: The Case of Power Forgetting Functions" and "Steady-State Characteristics: The Case of an Exponential Forgetting Function," it will be used to capture workers' learning, or the experience gained, in the convergence analysis for the cases of function-specific forgetting curves. Specifications of the continuous version of Wright's learning curve (Teyarachakul et al. 2008) are the following:

T(x) = T_0 x^{-m},

where T(x) is the instantaneous per-unit production time at the instant when the xth unit starts, x ≥ 1; T_0 is T(1), the initial instantaneous per-unit production time of an operator who has no prior experience or learning; and m is −log(δ)/log(2), with 0 < m < 1.

The RHS is increasing in α_n. We can conclude here that there is at most one interception between the LHS and the RHS; the corresponding value of α_n at the point of interception has the property that α_n = α_{n+1} = α. Additionally, for α_n < α we have α_n < α_{n+1}, and for α_n > α we have α_n > α_{n+1}. Next we will show that such an interception must exist. Recall the assumption that α_{n+1} ≥ 1 for all n; thus, at α_n = 1, α_{n+1} remains no less than 1. So, LHS(α_n = 1) ≥ RHS(α_n = 1).
As α_n → ∞, LHS → 0 and RHS → q/d, so that LHS < RHS; hence, by continuity, an interception must exist.
It was the famous game show that shocked the world when it made its return in 1984 - What is Jeopardy? Correct! Jeopardy is a game show based on not answering a question, but rather questioning an answer. Your mind will be tested, your brain will be challenged, and you will be in Jeopardy! The object of Jeopardy is to question answers on a board with six categories, with clues that range from $100 to $500 in the first round, then $200 to $1,000 in the second round. It's not as easy as it sounds. The scoring is dead simple. If you can give a correct response, you win the dollar amount selected and get to choose the next answer. However, if you don't give a correct response, you lose that amount of money, but it doesn't necessarily mean you lose your turn. If one of your opponents can give a correct response, they win the money and the choice of the next answer. There are 3 Daily Doubles throughout the game: one in the first round (Jeopardy), two in the second round (Double Jeopardy). If you are in control of the board and hit a Daily Double, you get to wager an amount not to exceed the total you have. If you're below zero (we've all been there), you can wager a maximum of $500 in the first round, $1,000 in the second. If you get a correct response, you win the money. If you don't, you lose the amount you wagered. In Final Jeopardy, the round after Double Jeopardy, contestants are given a category and must wager an amount not exceeding the total they have, and whoever has the most money at the end wins.
Complexity Explorer

About the Course:

We will begin by viewing fractals as self-similar geometric objects such as trees, ferns, clouds, mountain ranges, and river basins. Fractals are scale-free, in the sense that there is not a typical length or time scale that captures their features. A tree, for example, is made up of branches, off of which are smaller branches, off of which are smaller branches, and so on. Fractals thus look similar, regardless of the scale at which they are viewed. Fractals are often characterized by their dimension. You will learn what it means to say that an object is 1.6 dimensional and how to calculate the dimension for different types of fractals.

In addition to physical objects, fractals are used to describe distributions resulting from processes that unfold in space and/or time. Earthquake severity, the frequency of words in texts, the sizes of cities, and the number of links to websites are all examples of quantities described by fractal distributions of this sort, known as power laws. Phenomena described by such distributions are said to scale or exhibit scaling, because there is a statistical relationship that is constant across scales. We will look at power laws in some detail and will give an overview of modern statistical techniques for calculating power law exponents.

Next we will turn our attention to learning about some of the many processes that can generate fractals. Finally, we will critically examine some recent applications of fractals and scaling in natural and social systems, including metabolic scaling and urban scaling. These are, arguably, among the most successful and surprising areas of application of fractals and scaling. They are also areas of current scientific activity and debate.
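The claim that an object can be "1.6 dimensional" comes from the self-similarity dimension D = log N / log(1/r), for a shape made of N copies of itself, each scaled down by the factor r. A quick sketch using standard textbook examples (these are not course materials):

```python
import math

def similarity_dimension(n_copies, scale):
    """D = log(N) / log(1/r): N self-similar copies, each scaled by r."""
    return math.log(n_copies) / math.log(1 / scale)

print(similarity_dimension(3, 1 / 2))   # Sierpinski triangle: log 3 / log 2 ~ 1.585
print(similarity_dimension(4, 1 / 3))   # Koch curve: log 4 / log 3 ~ 1.262
print(similarity_dimension(4, 1 / 2))   # filled square: 2.0
```

For ordinary shapes (lines, squares, cubes) the formula returns the familiar integer dimension; fractals are precisely the shapes for which it does not.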
This course is intended for anyone who is interested in an overview of how ideas from fractals and scaling are used to study complex systems. The course will make use of basic algebra, but potentially difficult topics will be reviewed, and help is available in the course discussion forum. There will be optional units for more mathematically advanced students and pointers to additional resources for those who want to dig deeper.

Course Outline
1. Introduction to fractals. Self-similarity dimension. Review of logarithms and exponents.
2. Box-counting dimension. Further examples of fractals. Stochastic fractals.
3. Power laws and their relation to fractals. Rank-frequency plots. How to estimate power law exponents.
4. Empirical examples of power laws. Other long-tailed distributions: log normals and stretched exponentials. Implications of long tails.
5. Mechanisms for generating power laws. Rich-get-richer phenomena. Phase transitions. Other mechanisms.
6. Metabolic scaling. West-Brown-Enquist scaling theory.
7. Urban scaling.

About the Instructor(s):

Dave is Professor of Physics and Mathematics at College of the Atlantic. From 2004-2009 he was a faculty member in the Santa Fe Institute's Complex Systems Summer School in Beijing, China. He served as the school's co-director from 2006-2009. Dave is the author of Chaos and Fractals: An Elementary Introduction (Oxford University Press, 2012), a textbook on chaos and fractals for students with a background in high school algebra. He has thrice offered a MOOC on Chaos and Dynamical Systems on the Complexity Explorer site, in addition to this MOOC. Dave was a U.S. Fulbright Lecturer in Rwanda in 2011-12.

Course dates: Always available
Prerequisites: Some high school algebra

1. Introduction to Fractals and Dimension
2. Generating Fractals
3. Box-Counting Dimension
4. Introducing Power Laws
5. Power Laws in Empirical Data
6.
Generating Power Laws
7. Metabolic Scaling
8. Urban Scaling
9. Conclusion and Summary