Lani opened a savings account with $450. She saves $225 per month. Which equation shows how much money Lani has in her account after m months? Select yes or no for each option:

A. y = -450/225 m
B. y = 450m + 225m
C. y = 450 + 225m
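The setup is a linear model: a fixed starting amount plus a constant amount per month. A quick sketch (my own check, not part of the original question page) that tests each candidate equation against balances built month by month from the stated values:

```python
start, monthly = 450, 225

def expected(m):
    # Build the balance month by month from the problem statement.
    total = start
    for _ in range(m):
        total += monthly
    return total

# The three candidate equations from the question.
options = {
    "A": lambda m: -450 / 225 * m,
    "B": lambda m: 450 * m + 225 * m,
    "C": lambda m: 450 + 225 * m,
}

# Check each option against the first few months.
matches = {k: all(f(m) == expected(m) for m in range(5)) for k, f in options.items()}
print(matches)  # only option C reproduces the balances
```

Options A and B both give $0 at m = 0, so they cannot describe an account opened with $450; only C matches.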
RFC-0006: Dynamic Pricing for Bulk Coretime Sales

Start Date: July 09, 2023
Description: A dynamic pricing model to adapt the regular price for bulk coretime sales
Authors: Tommi Enenkel (Alice und Bob)
License: MIT

This RFC proposes a dynamic pricing model for the sale of Bulk Coretime on the Polkadot UC. The proposed model updates the regular price of cores for each sale period by taking into account the number of cores sold in the previous sale, as well as a limit of cores and a target number of cores sold. It ensures a minimum price and limits price growth to a maximum price increase factor, while also giving governance control over the steepness of the price change curve. It allows governance to address challenges arising from changing market conditions and should offer predictable and controlled price adjustments. Accompanying visualizations are provided at [1].

RFC-1 proposes periodic Bulk Coretime Sales as a mechanism to sell continuous regions of blockspace (suggested to be 4 weeks in length). A number of Blockspace Regions (compare RFC-1 & RFC-3) are provided for sale to the Broker-Chain each period and shall be sold in a way that provides value-capture for the Polkadot network. The exact pricing mechanism is out of scope for RFC-1 and shall be provided by this RFC.

A dynamic pricing model is needed. A limited number of Regions are offered for sale each period. The model needs to find the price for a period based on supply and demand of the previous period. The model shall give Coretime consumers predictability about upcoming price developments and confidence that Polkadot governance can adapt the pricing model to changing market conditions.

1. The solution SHOULD provide a dynamic pricing model that increases price with growing demand and reduces price with shrinking demand.
2.
The solution SHOULD have a slow rate of change for price if the number of Regions sold is close to a given sales target, and increase the rate of change as the number of sales deviates from the target.
3. The solution SHOULD provide the possibility to always have a minimum price per Region.
4. The solution SHOULD provide a maximum factor of price increase should the limit of Regions sold per period be reached.
5. The solution SHOULD allow governance to control the steepness of the price function.

The primary stakeholders of this RFC are:

• Protocol researchers and developers
• Polkadot DOT token holders
• Polkadot parachain teams
• Brokers involved in the trade of Bulk Coretime

The dynamic pricing model sets the new price based on supply and demand in the previous period. The model is a function of the number of Regions sold, piecewise-defined by two power functions.

• The left side ranges from 0 to the target. It represents situations where demand was lower than the target.
• The right side ranges from the target to the limit. It represents situations where demand was higher than the target.

The curve of the function forms a plateau around the target, falling off to the left and rising to the right. The shape of the plateau can be controlled via a scale factor for the left side and right side of the function respectively. From here on, we will also refer to Regions sold as 'cores' to stay congruent with RFC-1.

| Name | Suggested Value | Description | Constraints |
|------|-----------------|-------------|-------------|
| BULK_LIMIT | 45 | The maximum number of cores being sold | 0 < BULK_LIMIT |
| BULK_TARGET | 30 | The target number of cores being sold | 0 < BULK_TARGET <= BULK_LIMIT |
| MIN_PRICE | 1 | The minimum price a core will always cost | 0 < MIN_PRICE |
| MAX_PRICE_INCREASE_FACTOR | 2 | The maximum factor by which the price can change | 1 < MAX_PRICE_INCREASE_FACTOR |
| SCALE_DOWN | 2 | The steepness of the left side of the function | 0 < SCALE_DOWN |
| SCALE_UP | 2 | The steepness of the right side of the function | 0 < SCALE_UP |

$$
P(n) = \begin{cases}
(P_{\text{old}} - P_{\text{min}}) \left(1 - \left(\frac{T - n}{T}\right)^d\right) + P_{\text{min}} & \text{if } n \leq T \\
(F - 1) \cdot P_{\text{old}} \cdot \left(\frac{n - T}{L - T}\right)^u + P_{\text{old}} & \text{if } n > T
\end{cases}
$$

• $P_{\text{old}}$ is the old_price, the price of a core in the previous period.
• $P_{\text{min}}$ is the MIN_PRICE, the minimum price a core will always cost.
• $F$ is the MAX_PRICE_INCREASE_FACTOR, the factor by which the price can maximally change from one period to another.
• $d$ is the SCALE_DOWN, the steepness of the left side of the function.
• $u$ is the SCALE_UP, the steepness of the right side of the function.
• $T$ is the BULK_TARGET, the target number of cores being sold.
• $L$ is the BULK_LIMIT, the maximum number of cores being sold.
• $n$ is cores_sold, the number of cores being sold.

The left side is a power function that describes an increasing, concave-downward curve that approaches old_price. We realize this by using the form $y = a(1 - x^d)$, usually used as a downward-sloping curve, but in our case flipped horizontally by letting the argument $x = \frac{T-n}{T}$ decrease with $n$, doubly inverting the curve. This approach is chosen over a decaying exponential because it lets us better control the shape of the plateau, in particular allowing us to get a straight line by setting SCALE_DOWN to $1$. The right side is a power function of the form $y = a(x^u)$.

NEW_PRICE := IF CORES_SOLD <= BULK_TARGET THEN
    (OLD_PRICE - MIN_PRICE) * (1 - ((BULK_TARGET - CORES_SOLD)^SCALE_DOWN / BULK_TARGET^SCALE_DOWN)) + MIN_PRICE
ELSE
    ((MAX_PRICE_INCREASE_FACTOR - 1) * OLD_PRICE * ((CORES_SOLD - BULK_TARGET)^SCALE_UP / (BULK_LIMIT - BULK_TARGET)^SCALE_UP)) + OLD_PRICE

We introduce MIN_PRICE to control the minimum price. The left side of the function shall be allowed to come close to 0 if cores sold approaches 0.
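The piecewise function translates directly into code. The sketch below is an illustrative implementation of the formula, with parameter names and defaults taken from the parameter table; it is not the reference implementation:

```python
def new_price(cores_sold, old_price, *, target=30, limit=45, min_price=1.0,
              max_factor=2.0, scale_down=2.0, scale_up=2.0):
    """Price for the next sale period, per the piecewise formula.

    target, limit, min_price, max_factor, scale_down, scale_up correspond to
    BULK_TARGET, BULK_LIMIT, MIN_PRICE, MAX_PRICE_INCREASE_FACTOR,
    SCALE_DOWN, and SCALE_UP.
    """
    if cores_sold <= target:
        # Left side: rises from MIN_PRICE (0 sold) to OLD_PRICE (target sold).
        x = (target - cores_sold) / target
        return (old_price - min_price) * (1 - x ** scale_down) + min_price
    # Right side: rises from OLD_PRICE (target sold) to F * OLD_PRICE (limit sold).
    x = (cores_sold - target) / (limit - target)
    return (max_factor - 1) * old_price * x ** scale_up + old_price
```

At the boundaries the function behaves as required: `new_price(0, 1000)` returns MIN_PRICE, `new_price(30, 1000)` leaves the old price unchanged, and `new_price(45, 1000)` doubles it; with both scale factors set to 1 each branch becomes linear.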
The rationale is that if there are actually 0 cores sold, the previous sale price was too high and the price needs to adapt quickly. If the number of cores is close to BULK_TARGET, less extreme price changes might be sensible. This ensures that a drop or an increase in sold cores doesn't lead to immediate price changes, but rather a slow adaptation. Only when more extreme changes in the number of sold cores occur does the price slope increase.

We introduce SCALE_DOWN and SCALE_UP to control the steepness of the left and the right side of the function respectively.

We introduce MAX_PRICE_INCREASE_FACTOR as the factor that controls how much the price may increase from one period to another. Introducing this variable gives governance an additional control lever and avoids the necessity for a future runtime upgrade.

This example proposes the baseline parameters. If not mentioned otherwise, other examples use these values. The minimum price of a core is 1 DOT, the price can double every 4 weeks, and price change around BULK_TARGET is dampened slightly.

BULK_TARGET = 30
BULK_LIMIT = 45
MIN_PRICE = 1
MAX_PRICE_INCREASE_FACTOR = 2
SCALE_DOWN = 2
SCALE_UP = 2
OLD_PRICE = 1000

We might want more aggressive price growth, allowing the price to triple every 4 weeks, with a linear increase in price on the right side.

BULK_TARGET = 30
BULK_LIMIT = 45
MIN_PRICE = 1
MAX_PRICE_INCREASE_FACTOR = 3
SCALE_DOWN = 2
SCALE_UP = 1
OLD_PRICE = 1000

If governance considers the risk that a sudden surge in DOT price might price chains out of bulk coretime markets, it can ensure the model reacts quickly to a sharp drop in demand by setting 0 < SCALE_DOWN < 1 and choosing the max price increase factor more conservatively.
BULK_TARGET = 30
BULK_LIMIT = 45
MIN_PRICE = 1
MAX_PRICE_INCREASE_FACTOR = 1.5
SCALE_DOWN = 0.5
SCALE_UP = 2
OLD_PRICE = 1000

By setting the scaling factors to 1 and potentially adapting the max price increase factor, we can achieve a linear function.

BULK_TARGET = 30
BULK_LIMIT = 45
MIN_PRICE = 1
MAX_PRICE_INCREASE_FACTOR = 1.5
SCALE_DOWN = 1
SCALE_UP = 1
OLD_PRICE = 1000

None at present.

This pricing model is based on the requirements from the basic linear solution proposed in RFC-1, which is a simple dynamic pricing model intended only as a proof of concept. The present model adds additional considerations to make it more adaptable under real conditions. This RFC, if accepted, shall be implemented in conjunction with RFC-1.
Foundations of Chemical and Biological Engineering I

34 Introduction to Energy Balances

By the end of this section, you should be able to:

Identify relevant terms for energy balances for open and closed systems
Use thermodynamic data tables to identify enthalpy, internal energy, and other thermodynamic properties using system temperatures and pressures
Solve energy balance problems using thermodynamic data

Forms of Energy

Systems are typically divided into three main categories depending on how the system interacts with its surroundings:
• Isolated – No energy or mass transfer between system and surroundings; energy may change form within the system
• Closed – Energy, but no mass, transfer between system and surroundings
• Open – Energy and mass transfer between system and surroundings; we typically use a [latex]\dot{}[/latex] with quantities that change over time in these open systems to denote the flow rate of energy or mass

Kinetic Energy – [latex]E_{k}[/latex]

Kinetic energy is energy associated with motion, which can be described as translational or rotational energy.

[latex]E_{k} = \frac{1}{2} mu^{2}[/latex]

[latex]\dot{E}_{k} = \frac{1}{2} \dot{m} u^{2}[/latex]

[latex]m[/latex] is mass
[latex]u[/latex] is velocity relative to a reference. Generally, we take the earth's surface as stationary.

Potential Energy – [latex]E_{p}[/latex]

Potential energy can be described as energy present due to position in a field, such as gravitational or magnetic position. Usually, for chemical processes, we consider the potential energy change due to the gravitational position of the process equipment.
[latex]E_{p} = m g z[/latex]

[latex]\dot{E}_{p} = \dot{m} g z[/latex]

[latex]\Delta E_{p} = E_{p2} - E_{p1} = m g (z_{2} - z_{1})[/latex]

[latex]m[/latex] is mass
[latex]g[/latex] is the gravitational acceleration (approximately 9.8 [latex]m/s^{2}[/latex])
[latex]z[/latex] is the height above the point of reference

Internal Energy – [latex]U[/latex]

Internal energy can be described as all other energy present in a system, including molecular motion and molecular interactions.

Heat – [latex]Q[/latex]
• Heat is the energy flow due to a temperature difference
• Heat flows from higher temperatures to lower temperatures
• Heat is generally defined as positive when it is transferred from the surroundings to the system

Work – [latex]W[/latex]
• Work is energy resulting from driving forces other than temperature, such as force, torque, or voltage
• We will define work as positive when work is done by the surroundings on the system; this is a typical convention in chemistry. With this convention, we write "Q + W" in our energy balances. However, historically work has also been defined in physics as positive when work is done by the system on the surroundings. In that case, the energy balance would be written with "Q – W". Both can be used, and this is accounted for in the sign we use in front of the work term in energy balances.

Energy Transfer in Closed Systems

Closed systems are defined as systems with no mass transfer across the system's boundaries. All the energy forms described above are applicable to closed systems.

Exercise: Energy Balance Sign Conventions

Consider a system that consists of a stirred tank reactor where an exothermic reaction is taking place, and an external motor is mixing the contents in the reactor. What are the signs of [latex]Q[/latex] and [latex]W[/latex] for this system?

For an exothermic reaction, heat is produced by the system. Therefore, [latex]Q[/latex] is negative.
For an external motor that is mixing the contents in the reactor, work is being done by the surroundings on the system. Therefore, [latex]W[/latex] is positive.

Energy Balances on Closed Systems

Energy in closed systems follows the Law of Conservation of Energy:

[latex]Accumulation = Input - Output[/latex]

In terms of general energy:

[latex]E_{system,final} - E_{system,initial} = E_{system,transferred}[/latex]

The initial energy in the system can be defined as: [latex]U_{i} + E_{k,i} + E_{p,i}[/latex]

The final energy in the system can be defined as: [latex]U_{f} + E_{k,f} + E_{p,f}[/latex]

The energy transfer of the system can be defined as: [latex]Q + W[/latex]

This yields the following closed-system energy balance, known as the First Law of Thermodynamics:

[latex](U_{f} - U_{i}) + (E_{k,f} - E_{k,i}) + (E_{p,f} - E_{p,i}) = Q + W[/latex]

[latex]\Delta U + \Delta E_{k} + \Delta E_{p} = Q + W[/latex]

Assumptions commonly made when applying the First Law of Thermodynamics
☆ If no acceleration exists in the system, the change in the kinetic energy term will be 0 and can be omitted from the balance
☆ If no change in height (or other fields) exists in the system, the change in the potential energy term will be 0 and can be omitted from the balance
☆ Internal energy depends on chemical composition, state (solid, liquid, or gas), and temperature; pressure's effect is negligible
☆ If the system has the same temperature as its surroundings or is adiabatic, the heat term will be 0 and can be omitted from the balance
☆ If there are no moving parts, electrical currents, or radiation in the system, the work term will be 0 and can be omitted from the balance

Work in Open Systems

Open systems are defined as systems where both mass and energy cross the system's boundaries.
Two types of work are typically observed in these systems: • Shaft Work – [latex]W_{s}[/latex]or [latex]\dot{W}_{s}[/latex] Shaft work is work done on process fluid by a moving part, such as a pump, rotor, or a stirrer. • Flow Work – [latex]W_{fl}[/latex] or [latex]\dot{W}_{fl}[/latex] Flow work is work done on process fluid (inlet minus outlet). For the work flow in, the surroundings do work on the system, therefore it is positive. For the work flow out, the system does work on the surroundings, therefore it is negative. [latex]\dot{W}_{fl} = \dot{W}_{fl-in} - \dot{W}_{fl-out} = P_{in}\dot{V}_{in} - P_{out}\dot{V}_{out}[/latex] Steady-State Open-System Energy Balance Energy Conservation for a Steady-State System For stream ‘j’ in a system: [latex]\Sigma_{in} \dot{E}_{j} + \dot{Q} + \dot{W} = \Sigma_{out} \dot{E}_{j}[/latex] Rearranging the energy terms, we get: [latex]\dot{Q} + \dot{W} = \Sigma_{out} \dot{E}_{j} - \Sigma_{in} \dot{E}_{j}[/latex] Example: Energy Flow in a System Consider the following system: There are 2 streams with energy entering the system (streams 1 and 2), and 2 streams with energy exiting the system (streams 3 and 4). 
For this system:

[latex]\dot{E}_{1} + \dot{E}_{2} + \dot{Q} + \dot{W} = \dot{E}_{3} + \dot{E}_{4}[/latex]

Recall the three forms of energy:

[latex]\dot{E}_{j} = \dot{U}_{j} + \dot{E}_{k,j} + \dot{E}_{p,j}[/latex]

Each energy flow term can be further separated into:

[latex]\dot{U}_{j} = \dot{m} * \hat{U}_{j}[/latex]

Specific Property "[latex]\hat{ }[/latex]": This denotes an intensive property obtained by dividing an extensive property by a total amount or flow rate (which can be a total number of moles or a total mass), for example [latex]\hat{V} = \frac{V}{n}[/latex]

Combining all these terms:

[latex]\dot{Q} + \dot{W} = \Sigma_{out} \dot{m}_{j} * (\hat{U}_{j} + \hat{E}_{k,j} + \hat{E}_{p,j}) - \Sigma_{in} \dot{m}_{j} * (\hat{U}_{j} + \hat{E}_{k,j} + \hat{E}_{p,j})[/latex]

Recall the work terms expansion:

[latex]\dot{W} = \dot{W}_{fl} + \dot{W}_s[/latex]

where flow work is dependent on system pressure and volume:

[latex]\dot{W}_{fl} = \Sigma_{in} \dot{m}_{j} P_{j} \hat{V}_{j} - \Sigma_{out} \dot{m}_{j} P_{j} \hat{V}_{j}[/latex]

Now we have:

[latex]\dot{Q} + \dot{W}_{s} = \Sigma_{out} \dot{m}_{j} * (\hat{U}_{j} + P_{j} \hat{V}_{j} + \hat{E}_{k,j} + \hat{E}_{p,j}) - \Sigma_{in} \dot{m}_{j}*(\hat{U}_{j} + P_{j}\hat{V}_{j} + \hat{E}_{k,j} + \hat{E}_{p,j})[/latex]

Because [latex]\hat{U} + P\hat{V}[/latex] usually appear together in energy balances, we define their sum to be "enthalpy" ([latex]\hat{H}[/latex]):

[latex]\hat{H} = \hat{U} + P\hat{V}[/latex]

where [latex]\hat{U}[/latex] is the internal energy and [latex]P\hat{V}[/latex] is the flow work.

The following terms are defined:
• [latex]\Delta\dot{H} = \Sigma_{out}\dot{m}_{j}*\hat{H}_{j} - \Sigma_{in}\dot{m}_{j}*\hat{H}_{j}[/latex]
• [latex]\Delta\dot{E}_{k} = \Sigma_{out}\dot{m}_{j}*\hat{E}_{k,j} - \Sigma_{in}\dot{m}_{j}*\hat{E}_{k,j}[/latex]
• [latex]\Delta\dot{E}_{p} = \Sigma_{out}\dot{m}_{j}*\hat{E}_{p,j} - \Sigma_{in}\dot{m}_{j}*\hat{E}_{p,j}[/latex]

Finally, the open-system steady-state energy balance is defined:

[latex]\dot{Q} + \dot{W}_{s} = \Delta\dot{H} +
\Delta\dot{E}_{k} + \Delta\dot{E}_{p}[/latex]

Exercise: Heat for an Ideal Gas

Prior to entering a furnace, air is heated from [latex]25^{\circ}C[/latex] to [latex]150^{\circ}C[/latex], and the change in specific enthalpy for the whole heating process is 3640 J/mol. The flow rate of air at the outlet of the heater is [latex]1.5 m^3/min[/latex], and the air pressure at this point is 150 kPa absolute. Calculate the heat needed for the process in kW. Assume ideal gas behavior and that kinetic and potential energy changes from the heater inlet to the outlet are negligible.

Step 1: Calculate the molar flow rate using the ideal gas law.

[latex]\begin{align*} \dot{n} &= \frac{1.5\;m^{3}}{min}*\frac{273\;K}{(150+273)\;K}*\frac{150\;kPa}{101.3\;kPa}*\frac{1\;mol}{22.4\;L}*\frac{10^{3}\;L}{1\;m^{3}} \\ \dot{n} &= 64.0\;\frac{mol}{min} \end{align*}[/latex]

Step 2: Calculate the heat using the specific enthalpy. Since the potential and kinetic energy changes are zero, the following calculations are made:

[latex]\begin{align*} \dot{Q} &= \Delta\dot{H} = \dot{n}\Delta\hat{H} \\ \dot{Q} &= \frac{64.0\;mol}{min}*\frac{1\;min}{60\;s}*\frac{3640\;J}{mol}*\frac{1\;kW}{10^{3}\;J/s} \\ \dot{Q} &= 3.88\;kW \end{align*}[/latex]

Reference States

Reference State: a substance at some pressure, temperature, and state of aggregation (solid, liquid, gas; pure or mixture). It is much easier to estimate the energy of a system as a change from a reference state than as an absolute energy.

Exercise: Cooling in a Heat Exchanger

Water is used to cool a liquid in a heat exchanger. Water enters the heat exchanger at [latex]10^{\circ}C[/latex] and exits at [latex]100^{\circ}C[/latex]. Using the table below, find the change in enthalpy of water in its liquid state.

| Entry # | [latex]T (^{\circ}C)[/latex] | [latex]\hat{H}_{L} (\frac{kJ}{kg})[/latex] |
|---------|------------------------------|--------------------------------------------|
| 1 | 5 | 21.02 |
| 2 | 10 | 42.02 |
| 3 | 100 | 419.17 |

Step 1: Determine which reference state you are going to use. In this case, we are using [latex]10^{\circ}C[/latex] as the reference state.
[latex]\Delta\hat{H} = \hat{H}_{100^{\circ}C} - \hat{H}_{10^{\circ}C}[/latex]

Step 2: Find the change in enthalpy by taking the difference of the system's specific enthalpies at the two temperatures.

[latex]\begin{align*} \Delta\hat{H} &= \hat{H}_{3} - \hat{H}_{2} \\ \Delta\hat{H} &= (419.17 - 42.02)\;kJ/kg \\ \Delta\hat{H} &= 377.15\;kJ/kg \end{align*}[/latex]

Steam Tables

Since water is a commonly used resource in processes for heating and cooling, detailed information on its state properties at different temperatures and pressures is available.

How to Access Steam Tables on NIST
• Steam tables can be found on NIST
A. Select 'Water' from the 'Please select the species of interest:' drop-down menu
B. In Step 2, choose the steam table units you'd like to work with
C. In Step 3, choose what kind of data you're looking to obtain. For an isothermal system, select 'Isothermal properties'. For a constant-pressure system, select 'Isobaric properties'.
D. Select the desired standard state convention. This course will most likely only use the 'Default for fluid' convention.

Exercise: Adiabatic Turbine

Superheated steam at 40 bar absolute and [latex]500^{\circ}C[/latex] flowing at a rate of [latex]200 kg/min[/latex] is sent to an adiabatic turbine, where it expands to 5 bar. The turbine outputs [latex]1250 kW[/latex]. The expanded steam is then sent to a heat exchanger where isobaric heating occurs, resulting in the stream being reheated to its initial temperature. Assume no changes in kinetic energy. Write the energy balance for the turbine and determine the outlet stream temperature.

Step 1: Determine the specific enthalpy values for water vapor at [latex]500^{\circ}C[/latex] and [latex]40 bar[/latex], and at [latex]5 bar[/latex]. From the steam tables:
• For water vapor at [latex]500^{\circ}C[/latex] and [latex]40 bar[/latex], the specific enthalpy is [latex]3445 \;kJ/kg[/latex]
• For water vapor at [latex]500^{\circ}C[/latex] and [latex]5 bar[/latex], the specific enthalpy is [latex]3484 \;kJ/kg[/latex]

Step 2: Write the energy balance.
Since there are no changes in potential and kinetic energy and no heat transfer, the change in enthalpy will be equal to the negative of the shaft work output.

[latex]\begin{align*} \Delta\dot{H} &= -\dot{W}_{s} \\ \Delta\dot{H} &= \dot{m}*(\hat{H}_{2}-\hat{H}_{1}) \\ \hat{H}_{2} &= \hat{H}_{1}-\frac{\dot{W}_{s}}{\dot{m}} \end{align*}[/latex]

With [latex]\dot{m} = 200/60 \;kg/s[/latex], this gives [latex]\hat{H}_{2} = 3445 - \frac{1250}{200/60} = 3070 \;kJ/kg[/latex].

Step 3: Determine the temperature of the steam corresponding to [latex]5 bar[/latex] and [latex]3070 kJ/kg[/latex]. From the steam tables:
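The two worked examples above can be reproduced numerically. The sketch below is illustrative, using the same rounded constants as the text (101.3 kPa, 273 K, 22.4 L/mol); it recomputes the heater duty and the turbine outlet enthalpy:

```python
# Furnace pre-heater: molar flow from the ideal gas law, then Q = n_dot * dH.
V_dot = 1.5            # m^3/min, measured at the heater outlet
T_out = 150 + 273      # K
P_out = 150            # kPa absolute
n_dot = V_dot * (273 / T_out) * (P_out / 101.3) * 1e3 / 22.4   # mol/min, ~64.0
Q_dot = n_dot / 60 * 3640 / 1e3                                # kW, ~3.88

# Adiabatic turbine: H2 = H1 - Ws/m_dot, with steam-table H1 = 3445 kJ/kg.
m_dot = 200 / 60       # kg/s
H2 = 3445 - 1250 / m_dot   # kJ/kg, matches the 3070 kJ/kg used in Step 3
```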
Implementing Linked Lists in Python

In this tutorial, we will explore the concept of linked lists and implement them using Python. Linked lists are an essential data structure that represents a sequence of nodes, where each node contains a value and a reference to the next node in the sequence. Unlike arrays, linked lists offer dynamic size and efficient insertions and deletions. By the end of this tutorial, you will have a thorough understanding of linked lists and be able to implement them in your Python programs.

Before we dive into linked lists, let's ensure you have the necessary background knowledge. It is assumed that you are familiar with the basics of Python programming, including variables, loops, and functions. Additionally, a strong understanding of pointers and memory management will greatly benefit your understanding of linked lists.

Node Class

To begin implementing linked lists, we first need to define a class to represent a single node. Each node contains a value, which can be of any data type, and a reference to the next node in the sequence. Let's define our node class:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None
```

The __init__ method sets the initial value of the node and initializes the next reference to None. We will utilize this reference when connecting nodes in our linked list.

Linked List Class

Now that we have our node class defined, we can move on to implementing the linked list class. The linked list class will provide the necessary methods to manipulate the linked list, such as inserting, deleting, and searching for nodes.
Let's define our linked list class:

```python
class LinkedList:
    def __init__(self):
        self.head = None
```

The LinkedList class has a single attribute, head, which references the first node in the linked list. Initially, the linked list is empty, so the head reference is set to None.

Inserting Nodes

The first operation we will implement is inserting a new node at the beginning of the linked list. This is often called insertion at the head. To insert a node, we need to create a new node, update the next reference of the new node to point to the current head, and finally update the head reference to the new node.

```python
def insert_at_head(self, value):
    new_node = Node(value)
    new_node.next = self.head
    self.head = new_node
```

Let's take a closer look at the code. We first create a new node with the given value. Then we set the next reference of the new node to the current head, effectively connecting our new node to the existing linked list. Finally, we update the head reference to point to the newly inserted node, making it the first node in the linked list.

Deleting Nodes

The next operation we will implement is deleting a node from the linked list. There are several cases to consider: deleting the head node, deleting a node in the middle of the linked list, and deleting the tail node. Let's start with deleting the head node.

```python
def delete_at_head(self):
    if self.head is not None:
        self.head = self.head.next
```

In the delete_at_head method, we first check if the linked list is not empty. If the head reference is not None, we update the head reference to the next node in the linked list. This effectively removes the first node from the linked list.

Searching for Nodes

Searching for a specific value within a linked list is a common operation. Let's implement a method that searches for a given value and returns True if found, or False otherwise.
```python
def search(self, value):
    current = self.head
    while current is not None:
        if current.value == value:
            return True
        current = current.next
    return False
```

In the search method, we start traversing the linked list from the head node. We compare the value of each node with the given value. If we find a match, we return True. If we reach the end of the linked list without finding a match, we return False.

Congrats! You have successfully implemented linked lists in Python. In this tutorial, we covered the basics of linked lists, including the node and linked list classes. We learned how to insert and delete nodes, as well as search for values within the linked list. Linked lists are a fundamental data structure, and understanding how to implement them will greatly enhance your programming skills. Now that you have a solid foundation, feel free to explore further and implement more advanced operations on linked lists. Happy coding!
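The tutorial implements only head deletion, though it mentions the middle and tail cases. One possible extension (my sketch, not from the tutorial) deletes the first node holding a given value and covers all three cases; the `to_list` helper is also an addition for inspection:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    def insert_at_head(self, value):
        new_node = Node(value)
        new_node.next = self.head
        self.head = new_node

    def delete_value(self, value):
        """Delete the first node holding `value`; handles head, middle, and tail."""
        if self.head is None:
            return False
        if self.head.value == value:          # head case
            self.head = self.head.next
            return True
        prev, current = self.head, self.head.next
        while current is not None:            # middle and tail cases
            if current.value == value:
                prev.next = current.next      # unlink the matching node
                return True
            prev, current = current, current.next
        return False

    def to_list(self):
        # Helper for inspection: collect values front to back.
        values, current = [], self.head
        while current is not None:
            values.append(current.value)
            current = current.next
        return values
```

For example, inserting 3, 2, 1 at the head yields [1, 2, 3]; delete_value(2) then removes the middle node and delete_value(3) removes the tail.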
In a nonmagnetic medium, E = 50 cos(10^9 t − 8x) a_y + 40 sin(10^9 t − 8x) a_z V/m. Find the dielectric constant ε_r and the corresponding H.

Step by Step Answer:

Since the medium is nonmagnetic (μ = μ₀), the phase constant gives √ε_r = βc/ω = (8 × 3×10^8)/10^9 = 2.4, so ε_r = 5.76. H = …
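The dielectric constant follows from matching the phase constant β = 8 rad/m to the angular frequency ω = 10^9 rad/s. A numeric sketch (my own check, not the site's solution) of that step and of the resulting wave impedance and H amplitudes:

```python
import math

omega = 1e9        # rad/s, from the cos(1e9 t - 8x) argument
beta = 8.0         # rad/m
c = 3e8            # m/s

v_phase = omega / beta                  # phase velocity, 1.25e8 m/s
eps_r = (c / v_phase) ** 2              # dielectric constant, 5.76
eta = 120 * math.pi / math.sqrt(eps_r)  # intrinsic impedance, ~157.08 ohm

# Amplitudes of H (|H| = |E| / eta component-wise for a plane wave along +x)
H_from_Ey = 50 / eta   # A/m, pairs with the 50 V/m a_y term
H_from_Ez = 40 / eta   # A/m, pairs with the 40 V/m a_z term
```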
Radiation correction and uncertainty evaluation of RS41 temperature sensors by using an upper-air simulator

Articles | Volume 15, issue 5

© Author(s) 2022. This work is distributed under the Creative Commons Attribution 4.0 License.

An upper-air simulator (UAS) has been developed at the Korea Research Institute of Standards and Science (KRISS) to study the effects of solar irradiation on commercial radiosondes. In this study, the uncertainty in the radiation correction of a Vaisala RS41 temperature sensor is evaluated using the UAS at KRISS. First, the effects of environmental parameters including the temperature (T), pressure (P), ventilation speed (v), and irradiance (S) are formulated in the context of the radiation correction. The considered ranges of T, P, and v are −67 to 20 °C, 5–500 hPa, and 4–7 m s^−1, respectively, with a fixed S_0 = 980 W m^−2. Second, the uncertainties in the environmental parameters determined using the UAS are evaluated to calculate their contribution to the uncertainty in the radiation correction. In addition, the effects of rotation and tilting of the sensor boom with respect to the irradiation direction are investigated. The uncertainty in the radiation correction is obtained by combining the contributions of all uncertainty factors. The expanded uncertainty associated with the radiation-corrected temperature of the RS41 is 0.17 °C at the coverage factor k = 2 (approximately 95 % confidence level). The findings obtained by reproducing the environment of the upper air by using the ground-based facility can provide a basis to increase the measurement accuracy of radiosondes within the framework of traceability to the International System of Units.
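The combination step described in the abstract, summing the standard uncertainty contributions in quadrature and expanding with a coverage factor, can be sketched as follows. The budget values below are illustrative placeholders, not the paper's actual contributions:

```python
import math

def expanded_uncertainty(contributions, k=2.0):
    """Root-sum-square combination of standard uncertainties, expanded by k.

    k = 2 corresponds to an approximately 95 % confidence level.
    """
    u_combined = math.sqrt(sum(u ** 2 for u in contributions))
    return k * u_combined

# Hypothetical budget in degrees C (placeholders, not the KRISS values):
# temperature, pressure, ventilation, irradiance, rotation, tilting
budget = [0.05, 0.03, 0.03, 0.04, 0.02, 0.02]
U = expanded_uncertainty(budget)   # expanded uncertainty at k = 2
```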
Received: 11 Aug 2021 – Discussion started: 28 Sep 2021 – Revised: 19 Jan 2022 – Accepted: 25 Jan 2022 – Published: 04 Mar 2022

The measurement of temperature and humidity in the free atmosphere is of significance for weather prediction, climate monitoring, and aviation safety assurance. Radiosondes are telemetry devices that include various sensors to perform in situ measurements and transmit the measured data to a ground receiver while the device is carried by a weather balloon to an altitude of approximately 35 km. Since their development in the 1930s, radiosondes have been widely used to measure various essential climate variables (ECVs) such as the temperature, water vapour, pressure, wind speed, and wind direction in the upper-air atmosphere. Owing to their high accuracy of 0.3 to 0.4 K claimed by manufacturers (Vaisala), radiosonde measurements provide a reference for other remote sensing techniques such as those based on satellite and lidar. However, evaluation methods for their sensor accuracy are not fully disclosed to users. Operation principles of laboratory set-ups, algorithms to correct measurement errors, and corresponding uncertainty evaluations are prerequisites for a reference data product. The dependence of accuracy evaluation only on manufacturer data may lead to inhomogeneities in data records due to the use of different radiosonde models. To ensure the quality control of measurements in the upper air, the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) was founded in 2008. The key objective of the GRUAN is to perform high-quality measurements of selected ECVs from the surface to the stratosphere to monitor climate change. To this end, the required temperature measurement accuracy in the troposphere and stratosphere has been specified as 0.1 and 0.2 K, respectively (GCOS, 2007). The main source of error in the temperature measured by radiosondes is solar radiation during sounding in daytime.
The temperature sensors of most commercial radiosondes are exposed to solar radiation, which leads to radiative heating of the temperature sensor. According to the last intercomparison of high-quality radiosonde systems (Nash et al., 2011), radiation correction values applied by manufacturers were distributed from 0.6 to 2.3K at 10hPa. More recently, the radiation correction applied to the Vaisala RS41 (Vaisala, 2022) increases from 0.53 to 1.16K at 10hPa as the solar elevation angle rises from 0 to 90^∘. Correcting the radiation effect is challenging because the temperature of sensors is also affected by other thermal exchange processes such as conduction from the sensor boom, convective cooling by air ventilation, and long-wave radiation from sensors. To minimize the effect of radiative heating of radiosonde temperature sensors, the size of the sensors has been reduced (de Podesta et al., 2018), and highly reflective coatings are used (Luers and Eskridge, 1995; Schmidlin et al., 1986). Moreover, the sensor boom has been redesigned to reduce the thermal conduction to sensors. Nevertheless, the effect of solar irradiation cannot be eliminated and thus should be corrected properly. Many researchers have attempted to correct the radiation effect on radiosonde temperature sensors through theoretical and experimental techniques. The early theoretical approaches were based on heat transfer equations governing the thermal exchange between the sensor and surrounding media (Luers, 1990; McMillin et al., 1992). However, the application of these approaches requires complete knowledge of the material properties of the sensor and sensor boom and of the air characteristics over a wide range of temperatures, and the aerodynamic characteristics for a specific sensor geometry must be determined. A few researchers performed in-flight experiments to derive a formula to correct the radiation effect.
Radiation correction was estimated by using radiosondes equipped with four thermistors having coatings with different spectral responses, i.e. emissivities and absorptivities (Schmidlin et al., 1986). A correction formula was derived by establishing the relationship between the irradiance and increase in the temperature via radiative heating during daytime sounding (Philipona et al., 2013). Two identical thermocouples were used to measure the temperature difference when only one sensor was exposed to solar radiation, and the other was shielded. As a result, radiation correction was obtained by a linear function of geopotential height, which gives 1K at 32km. However, the effect of the shield could not be eliminated. Other groups adopted a chamber system for radiation correction by simulating the upper-air environments including the solar radiation. The GRUAN conducted experiments by using a chamber that could imitate the pressure, air ventilation, and solar irradiance by using a vacuum pump, fan, and lamp or sunlight, respectively (Dirksen et al., 2014). Recently, the same group conducted experiments by using a new laboratory set-up including a wind tunnel with various functionalities and improved uncertainties in processing the GRUAN data for the Vaisala RS41 sensors (von Rohden et al., 2022). However, these experiments were conducted at room temperature, and thus, the influence of the ambient temperature on the radiation error was not investigated. Notably, a previous study based on a chamber system reported that the solar-irradiation-induced temperature rise of sensors increases as the air temperature is decreased (Lee et al., 2018a). Recently, the Korea Research Institute of Standards and Science (KRISS) developed an upper-air simulator (UAS) that can simultaneously control the temperature, pressure, air ventilation, and irradiation (Lee et al., 2020). 
This UAS has also been used to calibrate the relative humidity sensors of commercial radiosondes at low temperatures (down to −67^∘C) (Lee et al., 2021). In this study, the uncertainty in the radiation correction of a Vaisala RS41 temperature sensor is evaluated using the UAS developed at KRISS (Lee et al., 2020). It is shown how the uncertainty in each environmental parameter and in the radiosonde movements in the UAS contributes to the uncertainty of the RS41 through a radiation correction formula obtained by a series of radiation experiments. The layout of the UAS is described in Sect. 2, along with the addition of new functions to consider the effect of the rotation and tilting of the sensor, which are important improvements over the previous version of the UAS. As described in Sect. 3, a radiation correction formula for the RS41 sensor is derived through a series of experiments involving varying temperature (T), pressure (P), and ventilation speed (v) values in the following ranges: −67 to 20^∘C, 5–500hPa, and 4–7ms^−1, respectively, with a fixed irradiance S[0]=980Wm^−2. The effects of sensor rotation and tilting with respect to the incident irradiation are also investigated. Section 4 describes the evaluation of the uncertainties associated with the environmental parameters and sensor motions and positions controlled in the UAS to calculate the contribution of these factors to the uncertainty in the radiation correction. This study can help enhance the measurement accuracy of radiosondes within the framework of traceability to the International System of Units (SI) by providing a methodology for radiation correction in an environment similar to that which may be encountered by radiosondes.

2.1 Temperature control of the radiosonde test chamber by using a climate chamber

Figure 1a shows the test chamber of the UAS with an installed radiosonde for the radiation correction.
The test chamber is inside a climate chamber (Tenney Environmental, Model C64RC) with a 1219mm×1219mm×1219mm working space. The temperature of the test chamber is controlled by the climate chamber. Air is precooled before entering into the climate chamber by passing through a heat exchanger in a separate bath (Kambič Metrology, Model OB-50/2 ULT) with a temperature lower than that of the climate chamber by about 5^∘C. The temperature of the precooled air is then adjusted to that of the climate chamber while passing through the second heat exchanger (9.3m in length) in the climate chamber before entering into the test chamber. The radiosonde is installed upside down, as shown in Fig. 1b, and the air flows into the test chamber from the bottom. The temperature of the test chamber is measured using a calibrated platinum resistance thermometer (PRT).

2.2 Pressure and ventilation speed control through sonic nozzles and a vacuum pump

To control the air ventilation speed at low pressures, sonic nozzles, also known as critical flow Venturis, are used. The sonic nozzles are fabricated as toroidal-throat Venturi nozzles to comply with the ISO 9300 standard (ISO, 2005) and calibrated using a low-pressure gas flow standard system at KRISS (Choi et al., 2010). Thus, the reference value and SI traceability of the ventilation speed are obtained by using the sonic nozzles in the UAS. Sonic nozzles can be used to achieve a specific maximum constant flow when the ratio of the downstream pressure (P[e]) to the upstream pressure (P[o]) is smaller than a certain critical point ($P_{\mathrm{e}}/P_{\mathrm{o}} < P_{\mathrm{c}}/P_{\mathrm{o}}$). The test chamber lies in the downstream region of the sonic nozzles, in which the pressure is lowered using a vacuum pump (WONVAC, Model WOVP-0600) to attain the critical condition. Six sonic nozzles with different throat diameters are used to generate air ventilation speeds ranging from 4 to 7ms^−1 in the pressure range of 5–500hPa.
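The choking condition above can be illustrated with the textbook ideal-gas isentropic relation; this is a minimal sketch of the critical pressure ratio for air, not the ISO 9300 calibration procedure itself:

```python
# Critical (choked) pressure ratio for an ideal diatomic gas such as air:
# P*/P0 = (2 / (gamma + 1)) ** (gamma / (gamma - 1)), with gamma = 1.4
gamma = 1.4
critical_ratio = (2.0 / (gamma + 1.0)) ** (gamma / (gamma - 1.0))

def is_choked(p_downstream_hpa, p_upstream_hpa):
    """Flow through a sonic nozzle stays at its maximum constant value
    while Pe/Po is below the critical ratio."""
    return p_downstream_hpa / p_upstream_hpa < critical_ratio

print(round(critical_ratio, 4))  # → 0.5283
print(is_choked(5.0, 500.0))     # → True: deep inside the choked regime
```

With a 5hPa test chamber downstream of a 500hPa upstream reservoir the pressure ratio is 0.01, far below ≈0.528, so the nozzle remains choked and the flow is fixed by the throat diameter alone.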
The generated airflow is measured through laser Doppler velocimetry (LDV) (Dantec, Model BSA F60) to investigate the spatial gradient in the test chamber. An Ar ion laser (3W) with a wavelength of 514.5nm is used for the LDV with a focal length of 400.1mm and nominal beam spacing of 33mm.

2.3 Irradiation control by using a solar simulator

Solar irradiation is imitated by using a solar simulator with a xenon DC arc lamp (Newport, Model 66926-1000XF-R07). The virtual sunlight is irradiated onto the radiosonde temperature sensor and the sensor boom through quartz windows of the test chamber. A constant irradiance of 980Wm^−2 at the position of the radiosonde sensors inside the test chamber is adopted throughout this study. The two-dimensional distribution of the irradiance is recorded at the radiosonde sensor location by using a calibrated Si photodiode (Thorlabs, Model SM05PD2A). The spatial uniformity of the irradiance around the sensor position is within ±5%. In addition, the irradiance is monitored to check its drift during the experiments by using a photodiode-based pyranometer (Apogee, Model SP-110-SS) installed behind the test chamber. The pyranometer is calibrated at KRISS, and the uncertainty is 1% of the measured value with a coverage factor k=1.

2.4 Installation of the RS41

The uncertainty associated with the radiation correction for a commercial radiosonde (Vaisala, RS41) is evaluated using the UAS. A complete RS41 unit including the sensor boom, antenna, and main body is installed upside down in the test chamber, as shown in Fig. 1a and b. The sensor boom is placed parallel to the airflow (dashed blue arrows). The sensor boom is irradiated (dotted red arrows) by the solar simulator in a perpendicular manner through quartz windows (50mm×70mm). The temperature recorded by the RS41 is collected through remote data transmission as in the case of soundings by the Vaisala sounding system MW41.
Radiation correction by the manufacturer is applied only during the sounding state. The RS41 unit remains at the pre-sounding state in the manual sounding mode throughout the data acquisition, and thus, raw temperature with no radiation correction is obtained.

2.5 Rotation and tilting of the sensor boom

A radiosonde exhibits continuous movements such as pendulum and rotational motions during sounding. The temperature sensor of the Vaisala RS41 is rod shaped, and thus rotation and tilt affect the effective irradiance and the direction of air ventilation. Other radiosondes using spherical bead thermistors would be less affected by the rotation and tilt. Thus, the angle of the sensor boom with respect to the radiation direction or airflow may constantly vary. To consider this aspect, the UAS is modified to be able to simulate these situations through rotating and tilting of the sensor boom in the test chamber. Figure 1c–e illustrate the mechanisms in the test chamber that enable the (Fig. 1d) rotation of the radiosonde around the vertical axis and (Fig. 1e) tilting of the sensor boom from the (Fig. 1c) normal position. The rotation cycle and tilt are controlled using stepper motors. Rotation cycles of 5, 10, and 15s are employed. The maximum tilt is 27^∘ with respect to the vertical axis. Effects of the rotation and incident angle of irradiation are studied and incorporated into the uncertainty evaluation of the radiation correction of the sensor.

3.1 Effect of pressure

Radiation error is the temperature difference between the sensor with irradiation and the air (T[on]−T[air]). However, the air temperature measured in the current chamber system does not represent that in the free atmosphere since the air is heated by irradiation for a short time while passing through the test section.
It is difficult to measure the true air temperature in a shaded area in the test chamber using an independent thermometer because the test section is also slightly heated by the irradiation. The temperature measured below the window increases continuously by a few tens of millikelvins while the experiments are repeated for 10min. Thus, the radiation correction value (ΔT[rad]) is obtained as the difference between the temperatures with irradiation (T[on]) and without irradiation (T[off]), as previously reported (Lee et al., 2020): $\Delta T_{\mathrm{rad}} = T_{\mathrm{on}} - T_{\mathrm{off}}$. The duration of irradiation is 120s, and the measurement is repeated three times. It has been reported that ΔT[rad] significantly increases as the pressure (P) decreases from 100 to 7hPa in the UAS (Lee et al., 2020). In this study, the pressure range is extended (5–500hPa) to formulate the corresponding effect at a more practical scale. Figure 2a shows ΔT[rad] as a function of pressure from 5 to 500hPa with the temperature (T) varying from −67 to 20^∘C. The data represent the mean and the standard deviation of three repeated measurements on a single RS41 unit. The biggest standard deviation was 0.014^∘C. The enhanced increase in ΔT[rad] is observed at low pressures for all measured temperatures because the convective cooling process is weakened as the air density decreases at low pressures. The effect of temperature is well distinguished in the low-pressure range (5 to 50hPa), whereas it is not clearly observable in the high-pressure range (100 to 500hPa). This phenomenon can be attributed to the fact that the uncertainty in ΔT[rad] becomes relatively large with respect to ΔT[rad] as ΔT[rad] decreases at high pressures in the UAS. To parameterize a radiation correction formula in terms of T and P, ΔT[rad] at each temperature is fitted individually by using an empirical polynomial function of log[10]P, as indicated by dashed lines in Fig. 2a.
The fitting equations represented in Fig. 2a are as follows:

$\Delta T_{\mathrm{rad}} = A_0(T) + B_0(T)\cdot\log(P) + C_0(T)\cdot[\log(P)]^2$ (1)

for 5hPa ≤ P ≤ 500hPa, S[0]=980Wm^−2, where A[0](T), B[0](T), and C[0](T) are fitting coefficients that are functions of T, with units of ^∘C, ^∘C⋅[log hPa]^−1, and ^∘C⋅[log hPa]^−2, respectively. The irradiation intensity S[0] is set as 980Wm^−2 throughout this study.

3.2 Effect of temperature

The following T values are used in the test chamber: −67, −55, −40, −20, 0, and 20^∘C. As shown in Fig. 2a, ΔT[rad] gradually increases as the temperature decreases, especially in the low-pressure range of 5–50hPa. To incorporate the temperature effect in Eq. (1), the coefficients are fitted with empirical linear functions as follows:

$A_0(T) = a_0\cdot T + a_1,$ (2)

$B_0(T) = b_0\cdot T + b_1,$ (3)

$C_0(T) = c_0\cdot T + c_1,$ (4)

where a[0], a[1], b[0], b[1], c[0], and c[1] are fitting coefficients. Information regarding these coefficients is summarized in Table 1. The residuals obtained using Eq. (1) and the associated fitting coefficients listed in Table 1 are presented in Fig. 2b. The fitted values agree with the measurement data within ±0.03^∘C. In order to understand the observed temperature effect theoretically, the temperature sensor is modelled as a sphere made of platinum (Pt) with a diameter (D) of 1mm. The Pt sphere is placed in the middle of an airflow (v) with varied temperature (T[a]) and pressure (P[a]), as shown in Fig. 3a.
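Equations (1)–(4) combine into a single evaluation routine. The sketch below uses hypothetical coefficient values (the real a[0]…c[1] are in Table 1, which is not reproduced in the text), so only the functional form and the qualitative trends are meaningful:

```python
import math

# Hypothetical fit coefficients for Eqs. (2)-(4); the real values are
# tabulated in Table 1 of the paper and are not reproduced in the text.
a0, a1 = -1.0e-3, 1.2    # A0(T) = a0*T + a1  [degC]
b0, b1 = 6.0e-4, -0.55   # B0(T) = b0*T + b1  [degC per log10(hPa)]
c0, c1 = -8.0e-5, 0.07   # C0(T) = c0*T + c1  [degC per log10(hPa)^2]

def delta_t_rad(T, P):
    """Eq. (1): radiation correction (degC) at S0 = 980 W m^-2,
    valid for 5 hPa <= P <= 500 hPa."""
    if not 5.0 <= P <= 500.0:
        raise ValueError("fit is only valid for 5-500 hPa")
    logp = math.log10(P)
    return (a0 * T + a1) + (b0 * T + b1) * logp + (c0 * T + c1) * logp ** 2
```

With these stand-in coefficients the correction grows toward low pressure and low temperature, mirroring Fig. 2a; substituting the published Table 1 values would reproduce the actual fit.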
The sphere is heated by the absorption of the solar irradiance (S=1000Wm^−2) and cooled by the forced air convection (5ms^−1), similar to the experiment. The radial and angular temperature distribution of the sphere is neglected and assumed to be uniform. Then, the steady-state temperature of the sphere (T[s]) is determined by the energy balance of the heat transfer exchange as follows:

$\alpha S = h(T_{\mathrm{s}} - T_{\mathrm{a}})$ (5)

$h = \frac{k}{D}\left[2 + \left(0.4\,Re^{1/2} + 0.06\,Re^{2/3}\right)\left(\frac{\mu C_p}{k}\right)^{0.4}\right],$

where α is the absorptivity of the metal sphere, S is the solar irradiance, and h is the heat transfer coefficient (Incropera and Dewitt, 2002; Luers and Eskridge, 1995). The net heat transfer by longwave radiation from the Pt sphere is not considered because it is negligible (∼10^−6W) compared to that by the convective heat transfer in Eq. (5). The heat transfer coefficient h is determined by several parameters concerning the diameter of the sphere (D) and the properties of air, including the thermal conductivity (k), viscosity (μ), heat capacity (C[p]), and Reynolds number (Re=ρvDμ^−1), where ρ and v are the density of air and wind speed, respectively. The radiation correction (T[s]−T[a]) at T[a]=20 and −70^∘C is calculated by Eq. (5) and displayed together with the experimental values (mean and standard deviation of three repeated experiments) as shown in Fig. 3b. The properties of air (N[2]) used for the calculation refer to the NIST Chemistry WebBook (Linstrom and Mallard, 2001). In general, the calculated radiation correction of the Pt sphere is elevated as the pressure decreases, as in the experiment. This is because the heat transfer coefficient is reduced by about 35% as the density of air decreases with the pressure varying from P[a]=50 to 5hPa.
Interestingly, the temperature effect on the calculated radiation correction is also observed to be similar to the experiment. The theoretical value is roughly consistent with the experimental value within the uncertainty in ΔT[rad] (0.1^∘C) as obtained in Sect. 4.9. A decrease in thermal conductivity of air by about 26% at −70^∘C is mainly responsible for the decrease in the heat transfer coefficient and thereby the increase in the radiation correction at low temperature (−70^∘C). The thermal conductivity of air plays an important role for the heat transfer at the boundary between the air and the Pt sphere. The same phenomenon was also observed for thermistors even though there is no apparent air ventilation (Lee et al., 2018a), which may emphasize the role of thermal conductivity of air. The parameters and their values used for the calculation of radiation correction at T[a]=20 and −70^∘C with P[a]=5hPa are summarized in Table 2. It was previously observed that the temperature rise of the RS92 was initially fast due to the small thermal mass of the sensor and subsequently slow (Dirksen et al., 2014). More recently, the temperature of the RS41 oscillated when the radiosonde was rotating under irradiation (von Rohden et al., 2022). These observations are attributed to the fact that the heating of the sensor boom with a comparably large area is coupled to the heating of the temperature sensor. Since the conductive heat transfer from the sensor boom is missing in the above theoretical calculation, the comparison in Fig. 3b may show the effect of the sensor boom on ΔT[rad]. Interestingly, the growth of ΔT[rad] of the theoretical calculation is less steep than that of the experiment as the pressure is decreased to 5hPa. This may imply that the heat transfer from the sensor boom becomes significant especially at low pressures. 
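The energy balance of Eq. (5) is straightforward to evaluate numerically. The sketch below assumes the Prandtl-number exponent 0.4 of the standard Whitaker sphere correlation (the exponent is truncated in the excerpt) and uses rough, illustrative air properties and an assumed Pt absorptivity rather than the NIST tabulated values, so the printed number only indicates the order of magnitude:

```python
import math

def heat_transfer_coeff(k, mu, cp, rho, v, D):
    """Sphere heat transfer coefficient as reconstructed from Eq. (5):
    h = (k/D) * [2 + (0.4*Re**0.5 + 0.06*Re**(2/3)) * Pr**0.4]."""
    Re = rho * v * D / mu   # Reynolds number
    Pr = mu * cp / k        # Prandtl number
    return (k / D) * (2.0 + (0.4 * math.sqrt(Re) + 0.06 * Re ** (2.0 / 3.0)) * Pr ** 0.4)

def radiation_error(alpha, S, h):
    """Steady-state sensor heating from the balance alpha * S = h * (Ts - Ta)."""
    return alpha * S / h

# Rough, illustrative air properties near -70 degC (NOT the NIST values used
# in the paper) and an assumed Pt absorptivity of 0.1:
k, mu, cp = 0.018, 1.4e-5, 1.0e3   # W m^-1 K^-1, Pa s, J kg^-1 K^-1
rho = 500.0 / (287.0 * 203.0)      # ideal gas at 5 hPa, 203 K -> ~8.6e-3 kg m^-3
h = heat_transfer_coeff(k, mu, cp, rho, 5.0, 1.0e-3)
print(round(radiation_error(0.1, 1000.0, h), 2))  # → 2.02 (K, order of magnitude only)
```

Re-evaluating at a higher pressure (larger ρ, hence larger Re and h) shrinks the error, reproducing the qualitative pressure trend of Fig. 3b.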
3.3 Estimation of the low-temperature effect

The effect of low temperature on ΔT[rad] is represented by the ratio (%) of ΔT[rad] to the corresponding value at 20^∘C (ΔT[rad_20]), as shown in Fig. 4a. The data represent the mean and the standard deviation of three repeated measurements on a single RS41 unit. The temperature effect (ΔT[rad]/ΔT[rad_20]) gradually increases as the temperature and pressure decrease and reaches 119% at T=−67^∘C and P=5hPa. To estimate the low-temperature effect by using only ΔT[rad] at 20^∘C with varied P, ΔT[rad]/ΔT[rad_20]×100 is fitted with empirical linear functions:

$\Delta T_{\mathrm{rad}}/\Delta T_{\mathrm{rad\_20}} \times 100\,(\%) = D(T)\cdot P + E(T),$ (6)

where D(T), expressed in hPa^−1, and E(T), which is dimensionless, are fitting coefficients that are functions of T. D(T) and E(T) are fitted by linear functions of T as follows:

$D(T) = d_0\cdot T + d_1,$ (7)

$E(T) = e_0\cdot T + e_1,$ (8)

where d[0], d[1], e[0], and e[1] are fitting coefficients. Information regarding these coefficients is summarized in Table 3. The residuals obtained using Eqs. (6), (7), and (8) are represented in Fig. 4b. The estimated values agree with the measurement data within ±1.5% (left y axis), corresponding to approximately ±0.01^∘C (right y axis). Using Eq. (6), the radiation correction for low temperatures can be estimated through only the room-temperature measurement.
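As a minimal sketch of this room-temperature scaling, the routine below implements Eqs. (6)–(8) with hypothetical coefficients (the real d[0], d[1], e[0], and e[1] are in Table 3), so only the trends are meaningful:

```python
# Hypothetical coefficients for Eqs. (7) and (8); the real d0, d1, e0, e1
# are given in Table 3 of the paper.
d0, d1 = 2.0e-3, -0.12   # D(T) = d0*T + d1  [% per hPa]
e0, e1 = -0.20, 104.0    # E(T) = e0*T + e1  [%]

def low_temp_ratio(T, P):
    """Eq. (6): percentage ratio dT_rad(T) / dT_rad(20 degC),
    applicable in the 5-50 hPa range only."""
    return (d0 * T + d1) * P + (e0 * T + e1)

def scale_room_temp_correction(dtrad_20, T, P):
    """Estimate the low-temperature radiation correction from a
    room-temperature measurement alone."""
    return dtrad_20 * low_temp_ratio(T, P) / 100.0
```

With any real Table 3 coefficients substituted, `scale_room_temp_correction` turns a single room-temperature ΔT[rad] measurement into an estimate for cold upper-air conditions.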
Since the temperature dependency is weak at higher pressures, there is no need to estimate the low-temperature effect at 50–500hPa, and the estimation using Eq. (6) is limited to 5–50hPa.

3.4 Effect of ventilation speed

To investigate the effect of the ascent speed of radiosondes, the air ventilation speed (v) in the test chamber is systematically varied in the range of 4–7ms^−1. Figure 5a shows ΔT[rad] as a function of the ventilation speed with the temperature varying from −67 to 20^∘C. ΔT[rad] decreases as the ventilation speed increases, primarily owing to the enhancement in the convective cooling. Because the pressure is fixed at 50hPa, the temperature effect is clearly visible in Fig. 5a. The measurement data at each temperature are fitted using a linear function (dashed lines) to formulate the effect of the ventilation speed. The slope of the linear functions indicates that an increase of 1ms^−1 in v induces a decrease of 0.02–0.03^∘C in ΔT[rad]. Figure 5b shows ΔT[rad] as a function of the ventilation speed with the pressure varying from 5 to 300hPa. The measurement data at each pressure are fitted using a linear function (dashed lines). The slopes are distributed from −0.04 to −0.02^∘C(ms^−1)^−1. Although the effect of the ventilation speed is coupled with the temperature and pressure effects, the coupling represented by the variation in slopes in Fig. 5a and b is minor in the range of 4–7ms^−1. Therefore, the effect of the ventilation speed can likely be treated as an independent parameter. Thus, the ventilation effect is formulated considering the average slope in Fig. 5a and b, which is −0.027^∘C(ms^−1)^−1. This result is incorporated into Eq.
(1) at v=5ms^−1:

$\Delta T_{\mathrm{rad}} = A_0(T) + B_0(T)\cdot\log(P) + C_0(T)\cdot[\log(P)]^2 - 0.027\cdot(v-5)$ (9)

for 5hPa ≤ P ≤ 500hPa, S[0]=980Wm^−2. The residual obtained by applying Eq. (9) is shown in Fig. 5c. The fitted values agree with the measurement data within ±0.04^∘C. The linear relationship between the ventilation speed and the radiation correction in Eq. (9) is only valid in the range of 4–7ms^−1. When v is higher than 7ms^−1 or lower than 4ms^−1, the formula underestimates the correction value.

3.5 Effect of irradiation intensity

The linear relationship between ΔT[rad] and the irradiance (S) is confirmed with reference to the existing studies based on theoretical and experimental approaches (Luers, 1990; McMillin et al., 1992; Lee et al., 2016). S is independent of T, P, and v. As previously observed, the variation in the other parameters results in a change in only the slope of the linear functions (h in Eq. 5), and the linearity is not altered (Lee et al., 2018c, b). Because all the experiments performed in this study adopt a fixed S[0]=980Wm^−2, and the empirical fitting coefficients are accordingly obtained, the effect of the irradiation intensity can be incorporated into Eq.
(9) by using the linear relationship between ΔT[rad] and S as follows:

$\Delta T_{\mathrm{rad}} = S S_0^{-1} \times \left[A_0(T) + B_0(T)\cdot\log(P) + C_0(T)\cdot[\log(P)]^2 - 0.027\cdot(v-5)\right]$ (10)

for 5hPa ≤ P ≤ 500hPa, S[0]=980Wm^−2. The radiation correction (ΔT[rad]) is then scaled with the actual irradiance (S) by the factor of $S S_0^{-1}$. Consequently, Eq. (10) considers the radiation correction of the RS41 temperature sensor under simultaneously varying T, P, v, and S.

3.6 Effect of sensor boom rotation

The spinning motion of radiosondes during sounding is imitated by rotating the radiosonde in the test chamber, as shown in Fig. 1d. The rotation axis is the temperature sensor itself, not the centre of the boom, in this work. Therefore, the temperature sensor only spins on the spot, and thus the distance between the sensor and the solar simulator does not change during the rotation. The amplitude of the temperature oscillation is investigated by varying the rotation cycle (5, 10, and 15s) under irradiation, as shown in Fig. 6a. The maximum peak (T[on_max]) and minimum peak (T[on_min]) appear alternately during the rotation. The difference between the peaks (T[on_max]−T[on_min]) for the 5s cycle is 0.01–0.02^∘C, which is around the measurement resolution of the RS41 (0.01^∘C), but it increases with the rotation period. Each peak appears twice in a single cycle, as clearly observed in the 15s cycle.
The exposed surface of the sensor boom depends on the incidence angle and passes through a maximum twice during a full rotation. The sensor boom experiences irradiation in the perpendicular and parallel directions at T[on_max] and T[on_min], respectively. This finding suggests that the conductive heat transfer from the boom to the sensor influences T[on_max]. Figure 6b shows T[on_max]−T[on_min] as a function of pressure under different rotation cycles. The pressure effect is clearly visible when the rotation cycle is 15s. Because the experiment is conducted at T=25^∘C and v=5ms^−1, the effect of rotation at the lowest considered temperature (−67^∘C) is estimated using Eqs. (6), (7), and (8). At P=5hPa, the value of T[on_max]−T[on_min] at −67^∘C is 20% higher than that at 25^∘C. The maximum value of T[on_max]−T[on_min] in the UAS (0.05^∘C) is much smaller than that of von Rohden et al. (2022) (0.3^∘C). In the work of von Rohden et al. (2022), although the distance from the light source to the sensor is constant, that to the sensor boom changes with rotation. This may be the reason why the maximum peak appears once in a full cycle, when the sensor boom is closest to the light source, and why T[on_max]−T[on_min] is larger than in this work. It should be highlighted that the relatively small T[on_max]−T[on_min] with respect to ΔT[rad] observed in this work suggests that the contribution of the thermal conduction to ΔT[rad] is small compared to that of the direct irradiation of the sensor.

3.7 Effect of solar incident angle

The incident angle of irradiation to sensors primarily depends on the solar elevation angle and, during soundings, may also vary due to the pendulum motion of the radiosonde. To investigate the effect of the solar incident angle, the sensor boom is tilted by θ with respect to the normal direction in the test chamber, as shown in Fig. 1e. Figure 7a shows ΔT[rad] as a function of pressure when the sensor boom is in the normal and tilted (θ=27^∘) positions.
ΔT[rad] in the tilted position (red circle) is lower than that in the normal position (black square) because the effective irradiance (S[eff]) is reduced by the tilting ($S_{\mathrm{eff}} = S \times \cos 27^{\circ}$). Because ΔT[rad] is proportional to S[eff], the ratio $\Delta T_{\mathrm{rad\_tilted}}/\Delta T_{\mathrm{rad\_normal}}$ should be cos27^∘. The ratio roughly follows the theoretical value (dotted blue line). However, this value is slightly higher and lower than cos27^∘ at pressure values less and more than 50hPa, respectively. At higher pressures, this deviation can be explained by the effect of ventilation, which intensifies in the case of tilting of the sensor boom. However, the reason for the deviation from the theoretical value at low pressures remains unclear. In this paper, the effect of the solar incident angle (or tilt angle, θ) is considered by using S[eff] (S×cosθ), and thus Eq. (10) is revised into its final form as follows:

$\Delta T_{\mathrm{rad}} = \left(S_{\mathrm{eff}} S_0^{-1}\right) \times \left[A_0(T) + B_0(T)\cdot\log(P) + C_0(T)\cdot[\log(P)]^2 - 0.027\cdot(v-5)\right]$ (11)

for 5hPa ≤ P ≤ 500hPa, S[0]=980Wm^−2. Figure 7b shows the difference between ΔT[rad_tilted] and $\Delta T_{\mathrm{rad\_normal}} \times \cos 27^{\circ}$ as a function of the pressure. Because the experiment is conducted at T=25^∘C, the effect of the solar incident angle at the lowest considered temperature (−67^∘C) is estimated using Eqs. (6), (7), and (8).
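Pulling the pieces together, Eq. (11) can be sketched as a single function; the coefficient tuple is hypothetical (the real values are in Table 1), so only the structure of the correction is shown:

```python
import math

def delta_t_rad_full(T, P, v, S, tilt_deg, coeffs):
    """Eq. (11): the base (T, P) fit is scaled by the effective irradiance
    S_eff = S*cos(tilt) relative to S0 = 980 W m^-2 and shifted by the
    ventilation term -0.027*(v - 5); valid for 5-500 hPa and 4-7 m s^-1."""
    a0, a1, b0, b1, c0, c1 = coeffs   # hypothetical stand-ins for Table 1
    logp = math.log10(P)
    base = (a0 * T + a1) + (b0 * T + b1) * logp + (c0 * T + c1) * logp ** 2
    s_eff = S * math.cos(math.radians(tilt_deg))
    return (s_eff / 980.0) * (base - 0.027 * (v - 5.0))
```

By construction, tilting the boom by 27^∘ at fixed S scales the correction by exactly cos27^∘, matching the ideal ratio discussed above.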
At P=5hPa, $\Delta T_{\mathrm{rad\_tilted}} - \Delta T_{\mathrm{rad\_normal}} \times \cos 27^{\circ}$ at −67^∘C is 20% higher than that at 25^∘C. This value is used for the uncertainty due to the tilting of the sensor boom in Sect. 4.7.

4.1 Uncertainty factors

The uncertainty factors that contribute to the uncertainty budget of the radiation correction are summarized in Table 4, in addition to the experimental ranges considered in this work.

4.2 Uncertainty in the temperature, u(T)

The temperature of the test chamber is measured using a PRT installed in a shaded area. The PRT is calibrated at KRISS, and the calibration uncertainty is 0.025^∘C with the coverage factor k=1. The resistance of the PRT is measured using a digital multimeter calibrated at KRISS. Moreover, the stability of the temperature measured using the PRT is considered in determining u(T). The uncertainty components and their contributions to u(T) are listed in Table 5.

4.3 Uncertainty in the pressure, u(P)

The pressure of the test chamber is measured using three pressure gauges for different pressure ranges. The gauges are calibrated at KRISS, and the calibration uncertainty is considered in determining u(P). Moreover, the stability of the pressure measured using each pressure gauge is considered to determine u(P). The uncertainty components and their contributions to u(P) are listed in Table 6.

4.4 Uncertainty in the ventilation speed, u(v)

The SI traceability of the ventilation speed in the test chamber of the UAS is ensured by calibrating the sonic nozzles at KRISS. The calibration uncertainty of the sonic nozzles is 0.09% (k=1). The stability of the ventilation speed in the test chamber is considered to determine u(v). The spatial gradient of the ventilation speed in the test chamber is measured through the LDV at KRISS.
The measurement dimension using the LDV was 30 mm × 30 mm around the sensor (central) location with 5 mm intervals (49 points). Thus, the outermost measurement points were spaced 10 mm apart from the walls of the test chamber (50 mm × 50 mm). The measurement was performed under the condition of v = 4.67 m s⁻¹ (reference value), P = 550 hPa, and room temperature. The flow regime is turbulent because the Reynolds number is high (∼10⁵) under this experimental condition. The average and the standard deviation by the LDV over the entire measurement area were 4.63 and 0.47 m s⁻¹, respectively. Although the flow rate at the outermost points tends to be smaller than at the others, no significant spatial gradient is observed. This may be because of the spacing (10 mm) between the outermost measurement points and the walls of the test chamber. The difference between the reference and the measurement average is assumed to have a rectangular probability distribution for the calculation of the uncertainty in the spatial gradient. Then, the standard uncertainty of this estimate is the half-width of the distribution divided by √3 (ISO, 2008). The uncertainty components and their contributions to u(v) are summarized in Table 7.

4.5 Uncertainty in the irradiance, u(S)

The irradiance in the test chamber is measured using a pyranometer. The pyranometer is calibrated at KRISS, and the calibration uncertainty is 9.8 W m⁻² at k = 1. The stability of the irradiance measured using the pyranometer is considered to determine u(S). The uncertainty of the solar simulator is expected to be negligible compared with that of the actual radiation field in atmospheric soundings, which is poorly known. In addition, the two-dimensional spatial uniformity of the irradiance in the test chamber is measured by moving the pyranometer. The spatial gradient is within ±5 %, and a rectangular probability distribution is assumed for the uncertainty calculation.
The uncertainty components and their contributions to u(S) are summarized in Table 8.

4.6 Uncertainty due to sensor rotation

Since the sensor boom position for T[on_max] during the rotation corresponds to the normal position, the uncertainty due to sensor rotation is obtained based on T[on_max] − T[on_min], as shown in Fig. 6b. The value estimated for T = −67 °C and P = 5 hPa is used to include sufficient uncertainty. The values are assumed to have a rectangular distribution, and thus the corresponding standard uncertainty (k = 1) is obtained by dividing the half-maximum value (0.03 °C) by √3. The reason for using the half maximum is that T[on_max] − T[on_min] is about double T[on_max] − T[on] or T[on] − T[on_min]. Consequently, the uncertainty due to sensor rotation is 0.017 °C (k = 1).

4.7 Uncertainty due to tilting of the sensor

The uncertainty due to tilting of the sensor boom is obtained using T[on_tilted] − T[on_normal] · cos27°, as shown in Fig. 7b. The value estimated for T = −67 °C and P = 5 hPa is used to include sufficient uncertainty. The values are assumed to have a rectangular distribution, and thus the corresponding standard uncertainty (k = 1) is obtained by dividing the maximum value (0.045 °C) by √3. Consequently, the uncertainty due to tilting of the sensor boom is 0.026 °C (k = 1).

4.8 Uncertainty due to fitting error

Because Eq. (11) is used for the final radiation correction, the residuals shown in Figs. 2b and 5c must be considered in determining the uncertainty. The residuals are assumed to have a rectangular distribution, and thus the corresponding standard uncertainty (k = 1) is obtained by dividing the maximum absolute value by √3. Consequently, the uncertainty due to the fitting error is 0.023 °C (k = 1).
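The rectangular-distribution rule used in Sects. 4.4 and 4.6–4.8 (standard uncertainty = half-width/√3, per the ISO GUM) reproduces the quoted values directly; a minimal sketch:

```python
import math

def rect_u(half_width):
    """k=1 standard uncertainty of a rectangular distribution (ISO GUM)."""
    return half_width / math.sqrt(3)

u_rotation = rect_u(0.03)   # Sect. 4.6: half-maximum 0.03 degC -> ~0.017 degC
u_tilting = rect_u(0.045)   # Sect. 4.7: maximum 0.045 degC   -> ~0.026 degC
```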
4.9 Uncertainty budget for radiation correction

The uncertainties in T, P, v, and S contribute to the uncertainty in the radiation correction through the law of propagation of uncertainty based on Eq. (11):

(∂ΔT[rad]/∂T) · u(T),  (12)
(∂ΔT[rad]/∂P) · u(P),  (13)
(∂ΔT[rad]/∂v) · u(v),  (14)
(∂ΔT[rad]/∂S) · u(S),  (15)

where u(parameter) represents the standard uncertainty in each parameter at k = 1, and the partial differential terms represent the sensitivity coefficients. The sensitivity coefficients of the uncertainties due to sensor rotation, tilting of the sensor, and fitting error are 1 because they directly contribute to the uncertainty in the radiation correction. The uncertainty budget for the radiation correction (ΔT[rad]) based on the conducted experiments is presented in Table 9.

4.10 Uncertainty budget for the corrected temperature, T[cor]

The corrected temperature (T[cor]) is obtained by subtracting ΔT[rad] from the raw temperature (T[raw]) as follows:

T[cor] = T[raw] − ΔT[rad].  (16)

Thus, the uncertainty in the corrected temperature, u(T[cor]), is calculated as follows:

u(T[cor])² = u(T[raw])² + u(ΔT[rad])²,  (17)

where u(T[raw]) is the standard uncertainty in the raw temperature (k = 1). The uncertainty in ΔT[rad], indicated in Table 9, must be rescaled in proportion to the actual solar irradiance for Eq. (17).
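The propagation in Eqs. (12)–(17) is a quadrature sum. In the sketch below, the first four contributions (sensitivity coefficient × standard uncertainty for T, P, v, S) are hypothetical example values, since the Table 9 entries are not reproduced here; the last three are the values derived in Sects. 4.6–4.8, and u(T[raw]) is likewise a placeholder.

```python
import math

# Hypothetical contributions for T, P, v, S (sensitivity x u, in degC),
# followed by the rotation, tilting, and fitting values of Sects. 4.6-4.8:
contrib = [0.010, 0.015, 0.020, 0.025,
           0.017, 0.026, 0.023]
u_dT_rad = math.sqrt(sum(c * c for c in contrib))  # Eqs. (12)-(15) in quadrature

# Rescale to solar-constant irradiance (Sect. 4.10); linear in S:
u_dT_rad_solar = u_dT_rad * 1360.0 / 980.0

# Eq. (17): combine with the raw-temperature uncertainty (placeholder value)
u_T_raw = 0.05
u_T_cor = math.sqrt(u_T_raw ** 2 + u_dT_rad_solar ** 2)
```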
Therefore, the uncertainty in ΔT[rad] is scaled up to the level of the solar constant (∼1360 W m⁻²) by a factor of 1360/980, based on the linear relationship between ΔT[rad] and S. The calibration uncertainty associated with the temperature sensor must be considered to account for the uncertainty in the raw temperature, u(T[raw]). Consequently, the expanded uncertainty in the corrected temperature of the RS41 is 0.138 °C (k = 2), as indicated in Table 10. The calibration uncertainty in the RS41 temperature sensor, U(T[raw]) (k = 2), is specified by the manufacturer (Vaisala, 2022). Since Vaisala specifies an additional reproducibility uncertainty in sounding (0.15 °C when P > 100 hPa, 0.3 °C when P < 100 hPa) (Vaisala, 2022), this should be added to the total uncertainty in the corrected temperature when the radiation correction formula in Eq. (11) is applied to soundings.

4.11 Comparison of RS41 radiation correction specified by Vaisala and that obtained through the UAS

The radiation correction of the RS41 by the UAS is based on Eq. (11) for different pressure ranges. In order to apply the correction formula to actual soundings, the effective irradiance at the sensor should be known. However, radiosondes constantly change position with respect to the solar irradiation through rotation and pendulum motion; the calculation therefore resorts to the mean of the effective irradiance over the motion of the radiosonde. Figure 8a shows a schematic diagram of a radiosonde with the parameters that affect the effective irradiance S[eff] on the sensor.
Then, the effective irradiance at the sensor can be calculated as follows:

S[eff] = S[dir] · |cosα · cosθ · cosφ − sinθ · sinα|,  (18)

where S[dir] is the solar direct irradiance, θ is the boom tilting angle, α is the solar elevation angle, and φ is the azimuthal angle. The effective irradiation area (A[eff]/A[0]) on the sensor boom is averaged over rotation (φ) with a fixed tilting angle θ = 45° and plotted as a function of the solar elevation angle, as shown in Fig. 8b. Using this effective irradiance, the radiation correction by the UAS is obtained and compared with that of the manufacturer at two different α values (45 and 90°), as shown in Fig. 9. For the UAS correction, the solar direct irradiance is assumed to be 1360 W m⁻² at all pressure values. To simulate the albedo effect, the radiation correction with an additional irradiance of 400 W m⁻² is also calculated. Consequently, the radiation correction of the UAS is smaller than that of Vaisala by about 0.5–0.7 °C at −70 °C and 5 hPa when only the solar direct irradiance (1360 W m⁻²) is considered with the solar elevation angle α = 45–90°. When the albedo effect is additionally included (400 W m⁻²), the gap between the two corrections is reduced to 0.04–0.4 °C at −70 °C and 5 hPa with α = 45–90°. Since the solar direct irradiance (1360 W m⁻²) and the additional diffuse irradiance (400 W m⁻²) are applied at all pressures, the radiation correction of this work can be exaggerated at high pressures. The radiation correction of the UAS is smaller than that of the manufacturer at low pressures, which is consistent with a recent finding using an independent laboratory set-up: in the work of von Rohden et al. (2022), the radiation correction was smaller than the manufacturer's by 0.35 K at 35 km.
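Equation (18), averaged over the azimuthal angle φ at a fixed tilt θ = 45°, can be sketched numerically as below (an illustration of the geometry only, not the authors' processing code):

```python
import math

def s_eff(S_dir, alpha, theta, phi):
    """Eq. (18): effective irradiance for elevation alpha, tilt theta,
    azimuth phi (all in degrees)."""
    a, t, p = (math.radians(x) for x in (alpha, theta, phi))
    return S_dir * abs(math.cos(a) * math.cos(t) * math.cos(p)
                       - math.sin(t) * math.sin(a))

def s_eff_mean(S_dir, alpha, theta=45.0, n=360):
    """Mean effective irradiance over one full rotation in phi."""
    return sum(s_eff(S_dir, alpha, theta, 360.0 * k / n) for k in range(n)) / n

# Effective-area ratios A_eff/A0 at two elevation angles (cf. Fig. 8b):
ratio_45 = s_eff_mean(1360.0, 45.0) / 1360.0
ratio_90 = s_eff_mean(1360.0, 90.0) / 1360.0
```

At α = 90° the rotation average reduces to sinθ, and at α = 45° with θ = 45° it comes out to exactly one half, illustrating how strongly the sensor orientation modulates S[eff].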
The radiation corrections of the manufacturer and the UAS at some representative conditions are summarized in Table 11.

The UAS developed at KRISS provides a unique opportunity to correct the solar radiation effect on commercial radiosondes by reproducing the environments that may be encountered by radiosondes while simultaneously controlling T, P, v, and S. The following ranges of T, P, and v are considered in this study: −67 to 20 °C, 5–500 hPa, and 4–7 m s⁻¹, respectively, with a fixed S[0] = 980 W m⁻². The functionalities of rotating and tilting the sensor boom were added relative to the previous report on the UAS (Lee et al., 2020) to investigate the effect of the radiosonde's motion with respect to the solar irradiation direction during ascent. The correction formula for the radiation effect on the Vaisala RS41 temperature sensor is derived through a series of experiments with varying environmental parameters as well as motions and positions of the radiosonde sensor. In addition, an empirical formula is derived to estimate the low-temperature effect by using only the inputs of room-temperature measurements. The uncertainty associated with the radiation correction is evaluated by combining the contribution of each uncertainty factor. The uncertainty factors considered for the radiation correction are T, P, v, and S as well as the sensor rotation, sensor tilting, and data-fitting-induced errors. The uncertainty budget for the radiation correction of the RS41 temperature sensor is 0.1 °C at k = 2. When the uncertainty in the absolute temperature measurement (calibration uncertainty) is included, the uncertainty in the corrected temperature is estimated to be 0.17 °C at k = 2. The radiation correction values by the UAS are provided with the solar constant (1360 W m⁻²) used for S, for comparison with those of the manufacturer. The radiation correction by the UAS depends on the effective solar irradiance.
Thus, the measurement of solar irradiance in situ and the calculation of effective irradiance are desirable to reflect conditions such as clouding, solar elevation angle, and radiosonde movement, thereby obtaining more accurate correction values. To measure the solar irradiance in situ, a radiosonde model using dual temperature sensors with different emissivity values has already been tested using the UAS. The temperature difference between the two sensors of the radiosonde is recorded with varying environmental parameters in the UAS, to be used in reverse to measure solar irradiance in situ during sounding. In this sense, the approach based on dual sensors is different from previous works that estimate the air temperature using several other temperatures measured by sensors with different emissivity (Schmidlin et al., 1986). As the UAS can support wired and wireless data acquisition, it can be used for any type of commercial radiosonde to derive the radiation correction along with the corresponding uncertainty. Therefore, the UAS can help enhance the measurement accuracy of commercial radiosondes within the framework of SI traceability.

The operation programme of the upper-air simulator based on LabVIEW software is available upon request. The laboratory experiment data used for Figs. 1 to 9 are available upon request.

SWL analysed the experimental data and wrote the manuscript. SKi and YSL conducted experiments. BIC built the humidity control system, WK and YKO built the airflow control system, and SP and JKY established the solar simulator set-up. JL conducted theoretical calculation. SL and SKw developed the measurement software. YGK designed the experiments.

The contact author has declared that neither they nor their co-authors have any competing interests.

Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This research has been supported by the Korea Research Institute of Standards and Science (grant no. GP2021-0005-02). This paper was edited by Roeland Van Malderen and reviewed by Ruud Dirksen and one anonymous referee.

Choi, H. M., Park, K.-A., Oh, Y. K., and Choi, Y. M.: Uncertainty evaluation procedure and intercomparison of bell provers as a calibration system for gas flow meters, Flow Meas. Instrum., 21, 488–496, https://doi.org/10.1016/j.flowmeasinst.2010.07.002, 2010.

de Podesta, M., Bell, S., and Underwood, R.: Air temperature sensors: dependence of radiative errors on sensor diameter in precision metrology and meteorology, Metrologia, 55, 229, https://doi.org/10.1088/1681-7575/aaaa52, 2018.

Dirksen, R. J., Sommer, M., Immler, F. J., Hurst, D. F., Kivi, R., and Vömel, H.: Reference quality upper-air measurements: GRUAN data processing for the Vaisala RS92 radiosonde, Atmos. Meas. Tech., 7, 4463–4490, https://doi.org/10.5194/amt-7-4463-2014, 2014.

GCOS: GCOS Reference Upper-Air Network (GRUAN): Justification, requirements, siting and instrumentation options, GCOS, https://library.wmo.int/doc_num.php?explnum_id=3821 (last access: 5 August 2021), 2007.

Incropera, F. and DeWitt, D.: Introduction to heat transfer, 4th edition, John Wiley & Sons, Inc., ISBN 10 0471386499, ISBN 13 978-0471386490, 2002.

ISO: Measurement of Gas Flow by Means of Critical Flow Venturi Nozzles, ISO, https://www.iso.org/obp/ui/#iso:std:iso:9300:ed-2:v1:en (last access: 2 March 2022), 2005.

ISO: Uncertainty of measurement – Part 3: Guide to the expression of uncertainty in measurement (GUM: 1995), ISO, https://www.iso.org/standard/50461.html (last access: 2 March 2022), 2008.

Lee, S. W., Choi, B. I., Kim, J. C., Woo, S. B., Park, S., Yang, S. G., and Kim, Y. G.: Importance of air pressure in the compensation for the solar radiation effect on temperature sensors of radiosondes, Meteorol. Appl., 23, 691–697, https://doi.org/10.1002/met.1592, 2016.

Lee, S. W., Park, E. U., Choi, B.
I., Kim, J. C., Woo, S. B., Park, S., Yang, S. G., and Kim, Y. G.: Correction of solar irradiation effects on air temperature measurement using a dual-thermistor radiosonde at low temperature and low pressure, Meteorol. Appl., 25, 283–291, https://doi.org/10.1002/met.1690, 2018a.

Lee, S. W., Park, E. U., Choi, B. I., Kim, J. C., Woo, S. B., Park, S., Yang, S. G., and Kim, Y. G.: Dual temperature sensors with different emissivities in radiosondes for the compensation of solar irradiation effects with varying air pressure, Meteorol. Appl., 25, 49–55, https://doi.org/10.1002/met.1668, 2018b.

Lee, S. W., Park, E. U., Choi, B. I., Kim, J. C., Woo, S. B., Kang, W., Park, S., Yang, S. G., and Kim, Y. G.: Compensation of solar radiation and ventilation effects on the temperature measurement of radiosondes using dual thermistors, Meteorol. Appl., 25, 209–216, https://doi.org/10.1002/met.1683, 2018c.

Lee, S. W., Yang, I., Choi, B. I., Kim, S., Woo, S. B., Kang, W., Oh, Y. K., Park, S., Yoo, J. K., and Kim, J. C.: Development of upper air simulator for the calibration of solar radiation effects on radiosonde temperature sensors, Meteorol. Appl., 27, e1855, https://doi.org/10.1002/met.1855, 2020.

Lee, S. W., Kim, S., Choi, B. I., Woo, S. B., Lee, S., Kwon, S., and Kim, Y. G.: Calibration of RS41 humidity sensors by using an upper-air simulator, Meteorol. Appl., 28, e2010, https://doi.org/10.1002/met.2010, 2021.

Linstrom, P. J. and Mallard, W. G.: The NIST Chemistry WebBook: A chemical data resource on the internet, J. Chem. Eng. Data, 46, 1059–1063, https://doi.org/10.1021/je000236i, 2001.

Luers, J. K.: Estimating the temperature error of the radiosonde rod thermistor under different environments, J. Atmos. Ocean. Tech., 7, 882–895, https://doi.org/10.1175/1520-0426(1990)007<0882:ETTEOT>2.0.CO;2, 1990.

Luers, J. K. and Eskridge, R. E.: Temperature corrections for the VIZ and Vaisala radiosondes, J. Appl. Meteorol.
Clim., 34, 1241–1253, https://doi.org/10.1175/1520-0450(1995)034<1241:TCFTVA>2.0.CO;2, 1995.

McMillin, L., Uddstrom, M., and Coletti, A.: A procedure for correcting radiosonde reports for radiation errors, J. Atmos. Ocean. Tech., 9, 801–811, https://doi.org/10.1175/1520-0426(1992)009<0801:APFCRR>2.0.CO;2, 1992.

Nash, J., Oakley, T., Vömel, H., and Li, W.: WMO intercomparisons of high quality radiosonde systems, WMO/TD-1580, WMO – World Meteorological Organization, https://library.wmo.int/doc_num.php?explnum_id=9467 (last access: 2 March 2022), 2011.

Philipona, R., Kräuchi, A., Romanens, G., Levrat, G., Ruppert, P., Brocard, E., Jeannet, P., Ruffieux, D., and Calpini, B.: Solar and thermal radiation errors on upper-air radiosonde temperature measurements, J. Atmos. Ocean. Tech., 30, 2382–2393, https://doi.org/10.1175/JTECH-D-13-00047.1, 2013.

Schmidlin, F. J., Luers, J. K., and Huffman, P.: Preliminary estimates of radiosonde thermistor errors, NASA – National Aeronautics and Space Administration of USA, https://ntrs.nasa.gov/api/citations/19870002653/downloads/19870002653.pdf (last access: 6 August 2021), 1986.

Vaisala: Vaisala Radiosonde RS41 Measurement Performance, https://www.vaisala.com/sites/default/files/documents/White paper RS41 Performance B211356EN-A.pdf, last access: 2 March 2022.

von Rohden, C., Sommer, M., Naebert, T., Motuz, V., and Dirksen, R. J.: Laboratory characterisation of the radiation temperature error of radiosondes and its application to the GRUAN data processing for the Vaisala RS41, Atmos. Meas. Tech., 15, 383–405, https://doi.org/10.5194/amt-15-383-2022, 2022.
Past Meetings

Ben Adcock
Getting more from less: compressed sensing and its applications

Many problems in science and engineering require the reconstruction of an object - an image or signal, for example - from a collection of measurements. Due to time, cost or other constraints, one is often limited by the amount of data that can be collected. Compressed sensing is a mathematical theory and set of techniques that aim to enhance reconstruction quality from a given data set by exploiting the underlying structure of the unknown object; specifically, its sparsity. In this talk I will commence with an overview of some main aspects of standard compressed sensing. Next, motivated by some key applications, I will introduce several generalizations. First, I will show that compressed sensing is possible, and can in fact have some substantial benefits, under substantially relaxed conditions compared with those found in the standard setup. Second, time permitting, I will show that compressed sensing - whilst primarily a theory concerning finite-dimensional vectors - can also be extended to the infinite-dimensional setting, thus allowing accurate recovery of functions from small and incomplete data sets.

Andrew King
Working with the D-Wave quantum computer: Modeling, minors, and mitigation

The D-Wave Two (tm) is a quantum annealing processor consisting of 512 qubits operating at a temperature of 10-20 millikelvin. Its native operation finds a low-energy spin configuration in the Ising model on a fixed non-planar graph. In this talk I will give an overview of the system from a mathematician's perspective. Burning issues I will explore include quantumness, minor-embedding, error modeling, and upcoming developments.

Greg Mori
Discriminative Latent Variable Models for Human Action Recognition

Developing algorithms to interpret scenes of human activity involves a number of related tasks including human detection, tracking, and action recognition.
These tasks are intertwined; information from one can assist in solving the others. In this talk we will describe discriminative latent variable models to address these tasks together, focusing on the latent SVM / max-margin hidden conditional random field. We will review a broad swath of work in this area. These methods can be used for jointly recognizing actions and spatio-temporally localizing them in videos. Models for human-human and human-object interactions will be presented. We will present methods for group activity recognition, with holistic analysis of entire scenes of people interacting and taking different social roles.

Chris Sinclair
Mathematics in the computer age: exploration and exposition

Mathematics is the study of provably true statements reachable using logic from an agreed upon set of assumptions. And, while the set of tools we use to prove statements has been largely static for the last few centuries, how we decide what to prove and how to share it with our students/colleagues/etc. has undergone a remarkable transformation since the invention of the electronic computer. In this talk, I'll demonstrate some phenomena/patterns which are immediately apparent given a computer, and which without one would probably have remained hidden from us. These examples give way to some very interesting (and applicable) mathematics, some of which I'll try to explain. I'll also talk a bit about how one might use new computer-based tools to share and explain new (or even old!) mathematics. So far, mathematicians have held on to linear modes of communication such as articles, books, etc., and while such things are unlikely to ever disappear, they don't accurately reflect the true nature of mathematics as a body of knowledge (which is not a single linear progression of ideas, but a complicated highly connected graph), nor do they have the dynamic capacity to demonstrate the "doing" of mathematics.
I wish to open a dialog about how modern computer-based tools can tackle the inherent non-linearity of mathematics in such a way as to open the beauty and applicability of mathematics to a wider section of humanity.

Stephanie van Willigenburg
Quasisymmetric refinements of Schur functions

Schur functions were introduced early in the last century with respect to representation theory, and since then have become important functions in other areas such as combinatorics and algebraic geometry. They have a beautiful combinatorial description in terms of diagrams, which allows many of their properties to be determined. These symmetric functions form a subalgebra of the algebra of quasisymmetric functions, which date from the 1980s. Despite this connection, the existence of a natural quasisymmetric refinement of Schur functions has been considered unlikely. However, in this talk we introduce quasisymmetric Schur functions, which refine Schur functions and many of their properties, as revealed by extensive computer-generated data. This is joint work with Christine Bessenrodt, Jim Haglund, Kurt Luoto, Sarah Mason, Ed Richmond and Vasu Tewari. The talk will require no prior knowledge of any of the above terms.
Calculator - cmpf.co.uk For an explanation of the Golden Ratio and how it can be applied to your picture framing design, visit the Golden Ratio Page. Using the calculator below, if you input the height and width of your picture, it will tell you how wide to make your mount border, so that the area of the picture is in the Golden Ratio with the area of the window mount, behind which the picture is displayed. When framing your picture note that these dimensions are the viewing size, therefore, add the width of the frame lip to calculate the glass size. For an image to look properly balanced, behind a window mount, it is often considered necessary to correct an optical illusion that makes the bottom border seem narrower than the top, when positioned at the geometric centre. A solution to this problem is Optical Centring. The Optical Centre is a position slightly higher than the geometric centre. Therefore the bottom border is given a greater width. This is known as Bottom Weighting. A standard equation to calculate the adjustment necessary is to make the bottom border wider than the top in the ratio of 11:9 (55% to 45%). The dimensions displayed below indicate the border widths that can provide Optical Centring (in the ratio of 11:9), while still being in the Golden Ratio with the image. Bottom Weighting the mount border may not be accurately represented in the Golden Ratio visualisation Graphic below.
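One way to implement the calculation described above is sketched below. This is my reading of the page (outer mount area equal to φ times the picture area, with the bottom border taking 55% of the top-plus-bottom total against 45% for the top), not the site's actual code.

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # the golden ratio, ~1.618

def border_widths(height, width):
    """Uniform border b such that (h + 2b)(w + 2b) = PHI * h * w,
    then split top/bottom in the 9:11 (45%:55%) bottom-weighted ratio."""
    h, w = height, width
    # Expanding (h + 2b)(w + 2b) = PHI*h*w gives 4b^2 + 2(h+w)b + hw(1-PHI) = 0
    a, b_coef, c = 4.0, 2.0 * (h + w), h * w * (1.0 - PHI)
    b = (-b_coef + math.sqrt(b_coef ** 2 - 4.0 * a * c)) / (2.0 * a)
    return {"side": b, "top": 0.90 * b, "bottom": 1.10 * b}

widths = border_widths(300.0, 200.0)  # e.g. a 300 mm x 200 mm picture
```

Remember to add the frame-lip width afterwards when converting these viewing sizes to a glass size, as the page notes.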
Interval of α for which (α, α²) and (0, 0) lie on the same side of 3...

Topic: Coordinate Geometry
Subject: Mathematics
Class: Class 11
Updated on: Nov 16, 2023
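The underlying test for any such question is that two points lie on the same side of a line ax + by + c = 0 exactly when the signed expressions have the same sign. The line in the page title is truncated ("3..."), so the line used below, 3x + y − 5 = 0, is only an assumed example:

```python
def same_side(p, q, a, b, c):
    """True when points p and q lie strictly on the same side of
    a*x + b*y + c = 0."""
    return (a * p[0] + b * p[1] + c) * (a * q[0] + b * q[1] + c) > 0

# Assumed line 3x + y - 5 = 0 (the actual line is truncated in the page).
# (0, 0) gives -5, so (alpha, alpha^2) is on the same side exactly when
# alpha^2 + 3*alpha - 5 < 0.
print(same_side((1.0, 1.0), (0.0, 0.0), 3.0, 1.0, -5.0))  # True
```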
Boundary Layers in context of Fluid Flow

27 Aug 2024

Understanding Boundary Layers: The Crucial Region in Fluid Flow

In the study of fluid flow, the boundary layer is a critical region that plays a significant role in determining the behavior of fluids near solid surfaces. In this article, we will delve into the concept of boundary layers, their characteristics, and the importance of understanding them.

What is a Boundary Layer?

A boundary layer is a thin layer of fluid closest to a solid surface, where the velocity gradient is large due to the no-slip condition at the wall. The no-slip condition states that the fluid velocity at the wall must be zero, which creates a significant velocity gradient near the surface.

Characteristics of Boundary Layers

Boundary layers exhibit several distinct characteristics:

1. Thickness: The thickness of the boundary layer (δ) is typically measured as the distance from the wall to the point where the velocity reaches approximately 99% of the free-stream value.
2. Velocity Profile: The velocity profile within a boundary layer shows a steep, approximately linear increase in velocity with distance from the wall near the surface, quantified by the "velocity gradient" (du/dy).
3. Laminar-Turbulent Transition: Boundary layers can be either laminar or turbulent, depending on the Reynolds number (Re) and the surface roughness.
4. Shear Stress: The shear stress (τ) at the wall is proportional to the velocity gradient and the fluid viscosity.

Mathematical Formulation

The boundary layer equations are based on the Navier-Stokes equations, which describe the motion of fluids. The simplified equations for a two-dimensional boundary layer are:

1. Momentum Equation: ∂u/∂t + u∂u/∂x + v∂u/∂y = -(1/ρ) ∂p/∂x + ν (∂²u/∂y²)
2.
Continuity Equation: ∂u/∂x + ∂v/∂y = 0

where:
• u is the fluid velocity in the x-direction
• v is the fluid velocity in the y-direction
• ρ is the fluid density
• p is the pressure
• ν is the kinematic viscosity
• t is time

Types of Boundary Layers

There are two main types of boundary layers:

1. Laminar Boundary Layer: A laminar boundary layer occurs when the Reynolds number (Re) is low, and the flow is smooth and continuous.
2. Turbulent Boundary Layer: A turbulent boundary layer occurs when the Reynolds number (Re) is high, and the flow becomes chaotic and irregular.

Importance of Boundary Layers

Understanding boundary layers is crucial in various engineering applications:

1. Heat Transfer: Boundary layers play a significant role in heat transfer processes, such as convective heat transfer.
2. Mass Transport: Boundary layers affect mass transport phenomena, like diffusion and convection.
3. Aerodynamics: Boundary layers are essential for understanding aerodynamic forces, such as lift and drag.

In conclusion, boundary layers are a fundamental aspect of fluid flow, playing a critical role in determining the behavior of fluids near solid surfaces. Understanding the characteristics, mathematical formulation, and types of boundary layers is essential for various engineering applications. By grasping the concepts of boundary layers, engineers can design more efficient systems, optimize performance, and improve safety.
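As a small worked example tying the Reynolds number to the laminar/turbulent distinction above, the sketch below uses the Blasius flat-plate result δ ≈ 5x/√Re_x and the common transition estimate Re_x ≈ 5×10⁵; neither figure appears in the article itself, so treat them as standard textbook assumptions.

```python
import math

def boundary_layer(x, U, nu):
    """Local Reynolds number, flow regime, and (laminar) Blasius
    boundary-layer thickness at distance x along a flat plate."""
    re_x = U * x / nu                  # Re_x = U x / nu
    regime = "laminar" if re_x < 5.0e5 else "turbulent"
    delta = 5.0 * x / math.sqrt(re_x)  # Blasius result, laminar regime only
    return re_x, regime, delta

# Air (nu ~ 1.5e-5 m^2/s) at 1 m/s, 10 cm from the leading edge:
re_x, regime, delta = boundary_layer(x=0.1, U=1.0, nu=1.5e-5)
```

For these inputs the layer is only a few millimetres thick, which is why boundary-layer effects are confined so close to the surface.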
Regression analysis can be used to develop a model between an output Y and one or more input Xs. The output Y can either be continuous or binary. The input Xs can be both continuous and discrete. Regression models can be used to understand which of the input variables (X) have an impact on the output variable. The regression equation can also be used to make predictions or to determine the input parameters that result in an optimal output. The following figure shows a flowchart for the various models that can currently be fit using this software.

and then selecting Regression Analysis. Click on Analysis Setup to open the menu options for this tool. A sample screenshot of the setup menu is shown below.

1. Data Type: Specify the data type you have for your dependent variable (Y). The available options are:

   - Binary: Binary data has two possible values (0/1).
   - Continuous: Continuous data can take any arbitrary value (like the room-temperature example, 34.53 degrees centigrade).

2. Model Type: Specify the type of regression model you want to build. The available options if the number of input variables is one are:

   - Linear: Build a linear regression model Y = mX + c.
   - Quadratic: Build a quadratic regression model Y = aX^2 + bX + c.
   - Cubic: Build a cubic regression model Y = aX^3 + bX^2 + cX + d.

   The following options are available if there is more than one independent variable:

   - Linear: Build a linear regression model of the type Y = c + m1*x1 + m2*x2.
   - Linear + Interaction: Build a regression model of the type Y = c + m1*x1 + m2*x2 + m3*x1*x2.
   - Linear + Quadratic: Build a regression model of the type Y = c + m1*x1 + m2*x2 + m3*x1*x2 + m4*x1^2 + m5*x2^2.
   - Manual: Specify the terms you want to include in the model by specifying the model terms.

3. Model Reduction: Specify if you want to use model reduction.
When you build a model between inputs and an output, sometimes not all inputs are statistically significant: some of the input variables may have no impact on the model output. In such cases we need not include those terms in the final model. The model reduction setting lets you pick whether you want to include all terms or only the significant terms. The following options are available:
   - None: No model reduction is performed, and by default all terms are included.
   - Auto: The software will perform backward elimination and keep terms in the final model only if they are statistically significant. The model reduction alpha value determines whether to keep or eliminate the model terms.

4. Model Reduction Alpha: Specify the model reduction alpha value. This value will be used to reduce your regression model. The software starts with the full model and drops the non-significant terms until a fully significant model remains. The default value for model reduction alpha is 10%.

5. Help Button: Click on this button to open the help file for this topic.

6. Flowchart: Click on this button to open the flowchart for regression. The flowchart shows the logic of which analysis you need to perform depending on your specific situation.

7. Cancel Button: Click on this button to cancel all changes to the settings and exit this dialog box.

8. OK Button: Click on this button to save all changes and compute the outputs for this analysis.

You will see the following dialog box if you click the button. Here, you can specify the data required for this analysis.

1. Num Factors: Specify the number of input variables. The software will fit a simple regression model if the number of input variables is 1, and a multiple regression model if the number of input variables exceeds 1.

2. Search Data: The Available Data box displays all the columns of data that are available for analysis. You can use the search bar to filter this list and speed up finding the right data for analysis.
Enter a few characters in the search field, and the software will filter and display the filtered data in the Available Data box.

3. Available Data: The Available Data box contains the list of data available for analysis. If your workbook has no data in tabular format, this box will display "No Data Found." The information displayed in this box includes the row number, whether the data is Numeric (N) or Text (T), and the name of the column variable. Note that the software displays data from all the tables in the current workbook. Even though data within the same table have unique column names, columns across different tables can have similar names. Hence, it is crucial that you specify not only the column name but also the table name.

4. Add or View Data: Click on this button to add more data to your workbook for analysis or to view more details about the data listed in the Available Data box. When you click on this button, it opens the Data Editor dialog box, where you can import more data into your workbook. You can also switch from the list view to a table view to see the individual data values for each column.

4a. View Selection: Click on this button to view the data specified for this analysis. The data can be viewed in a tabular format or a graphical summary.

5. Select Button: Click on this button to select the data for analysis. Any data you choose for the analysis is moved to the right. To select a column, click on the columns in the Available Data box to highlight them and then click on the Select button. A second method to choose the data is to double-click on the columns in the list of Available Data. Finally, you can drag and drop the columns you are interested in: hold down the left mouse key to select columns, then drag and drop them in one of the boxes on the right.

6. Response Variable: The list box header will be displayed in black if the right number of data columns is specified.
If sufficient data has not been specified, then the list box header will be displayed in red. Note that you can double-click on any of the columns in this box to remove them from the box. In this list box, specify the response or dependent variable. Note that you can only specify one column.

7. Continuous Factors: The data you specify for this analysis depends on the options in the Setup tab. Specify the columns containing the inputs or independent variables under Factor Variables. Note that you can only specify up to N columns for the factor variables, where N is the number of input variables specified in the Setup dialog box. Make sure that all the columns contain numeric values.

8. Categorical Factors: If you have any discrete factors, they need to be specified in the list of categorical factors. Note that these factors can be in text format. You need to have at least one continuous variable. The categorical factors are internally converted to continuous factors using dummy variables. For example, a categorical factor named Shift has three levels: 1st Shift, 2nd Shift, and 3rd Shift. The first shift becomes Shift.1, the second shift Shift.2, and the third shift Shift.3. You can use these variables in model terms if you want to build a regression model.

9. Required Data: The code for the required data specifies what data can be specified for that box. An example code is N: 2-4. If the code starts with an N, you must select only numeric columns. If the code begins with a T, you can select numeric and text columns. The numbers to the right of the colon specify the min-max values. For example, if the min-max values are 2-4, you must select a minimum of 2 columns of data and a maximum of 4 columns in this box. If the minimum value is 0, then no data is required to be specified for this box.

Specify the regression model to fit between the input(s) and the output variable. This model is auto-selected for you based on your setup options.
However, you can make changes to this model if required for linear regression. You will see the following dialog box if you click the button.

1. Standardize Continuous Factors: By default, the continuous factors will be used as-is to fit a model between the inputs and outputs. However, in some cases you may want to modify your input variables before fitting the model. The available options are:
   - None: No changes will be made to the input variable. The values entered in the table will be used to fit the model. This is the default option.
   - Subtract mean: The mean value will be subtracted from the input variable before fitting the model.
   - Divide stdev: Each data point will be divided by the standard deviation of its column before fitting the model.
   - Subtract mean and divide stdev: The mean will be subtracted from each data point, and the result divided by the column standard deviation, before fitting the model.

2. Reference Categorical Factors: By default, the levels of the categorical factors are sorted, and the last value in this list is assumed to be the reference or baseline value. If you want to change the reference level used for a categorical factor, pick the level you want to use from the dropdown list. Note that this will modify the equation that is fit to your data. To refresh the model terms, make sure to click on the refresh button at the top of the dialog box.

3. Model Equation: The model equation that is fit to your data, between the output variable (Y) and the input variables (X1, X2, etc.), is shown in this box. The model terms should be separated by the plus (+) sign. For example, if your input variables are Weight and Length, then the model terms could be Weight + Length + Weight * Length + Weight^2. This will build a regression model with these four terms. Ensure that your matrix is not singular when you specify manual regression terms.
For example, if you specify a term that does not exist in your input data set, all values in that input column will contain 0, your matrix will be singular, and you cannot build that model. Another scenario is a model like A + A*B + A: here the A column is repeated twice, making the model singular, so no solution can be obtained.

4. Include Constant: This checkbox specifies if you want to include the constant term in your model equation. For example, if you fit a simple linear model between Y and X, then with the constant the equation would be Y = mX + Constant, and without the constant term the equation would be Y = mX. Use the model that makes sense for your particular case. If the constant is included, then when X = 0 the model output equals the constant (intercept) value. If no constant is used, the model intercept value is 0.0, meaning Y is also zero when X is zero; that is, the best-fit line passes through the origin.

You will see the following dialog box if you click the button.

1. Confidence Level: Specify the confidence level for this analysis. This value is used to determine the prediction and confidence intervals. The default value for the confidence level is 95%.

2. Save Results: You can store some of the calculated results on the worksheet. Select the checkboxes for the variables you are interested in, and the software will store these values in the worksheet. The available variables are:
   - Fit: Store the value of the fit in one column. The fit is obtained using the estimated model equation for the inputs specified in the model. Note that the corresponding row for the fit will not be available if not all the inputs have been specified, since there are missing values.
   - Confidence: Store the lower and upper confidence intervals on the worksheet. The confidence level used for the fit can be specified in one of the options above.
   - Residuals: The residuals are the difference between the raw data points and the best-fit line. Ideally, the residuals must be independent and normally distributed with a mean value of 0.
   - Standardized Residuals: Standardized residuals are residuals that have been transformed to have a mean of zero and a standard deviation of one. Standardized residuals are useful for identifying outliers and assessing the overall fit of the regression model. If the residuals are normally distributed and have a mean of zero and a standard deviation of one, it indicates that the regression model's assumptions are met. However, if patterns or outliers exist in the standardized residuals, the model may not capture some important relationships in the data. Analyzing standardized residuals can help identify influential data points, diagnose potential problems with the regression model, and guide model improvement.
   - Leverage: Leverage refers to the influence that a data point has on the estimation of the regression coefficients. In other words, leverage measures how much an individual data point affects the shape and position of the regression line. Observations with high leverage can influence the estimated regression coefficients disproportionately: these points have a greater weight in determining the position and slope of the regression line. High-leverage points are not necessarily problematic, but they can be influential if they also have large residuals. Influential points can substantially impact the regression model, and their effects should be carefully examined.
   - Cook's Distance: Cook's distance is a statistical measure used in regression analysis to identify influential data points that may disproportionately affect the estimation of regression coefficients. It combines information about the residuals and the leverage of each data point to assess the impact of individual observations on the overall model.
Large values of Cook's distance indicate observations that substantially influence the regression model.

Fitted Line Plots: You can display a fitted line plot between the input and output if you have a single input variable, X, and a response variable, Y. The raw data points are displayed as a scatter plot, and the best fit is shown as a line. On top of the fitted line plot, we can also include the confidence and prediction intervals. Specify if you want to superimpose the confidence or prediction interval on the regression plot. These intervals are plotted based on the confidence level specified in your analysis. If we were to take many samples from the population and calculate a confidence interval for each sample, the true parameter would be expected to fall within the interval in a certain percentage of cases. The typical confidence levels are 95% or 99%. The following options are available:
   - Confidence Interval: A confidence interval is used to estimate the range within which we expect the true population parameter (such as a regression coefficient) to lie with a certain level of confidence.
   - Prediction Interval: A prediction interval, on the other hand, is used to estimate the range within which we expect an individual observation to fall with a certain level of confidence. If we were to observe a new data point, the prediction interval gives a range within which we would expect the actual value of that observation to lie with a certain probability.

You will see the following dialog box if you click the button.

0. Pick Charts: Select the charts you would like to display for this analysis.

1. Title: The system will automatically pick a title for your chart. However, if you want to override that with your own title, you can specify a title for your chart here. Note that this input is optional.

2. Sub Title: The system will automatically pick a subtitle for your chart. However, if you want to override that with your own subtitle, specify a subtitle for your chart here.
Note that this input is optional.

3. X Label: The system will automatically pick a label for the x-axis. However, if you would like to override that with your own label for the x-axis, you can specify a different label here. Note that this input is optional.

4. Y Label: The system will automatically pick a label for the y-axis. However, if you would like to override that with your own label for the y-axis, you can specify a different label here. Note that this input is optional.

5. X Axis: The system will automatically pick a scale for the x-axis. However, if you would like to override that with your own values for the x-axis, you can specify them here. The format for this input is the minimum, increment, and maximum values separated by semi-colons. For example, if you specify 10;20, the minimum x-axis scale is set at 10 and the maximum x-axis scale is set at 20. If you specify 10;2;20, then, in addition to the minimum and maximum values, the x-axis increment is set at 2. Note that this input is optional.

6. Y Axis: The system will automatically pick a scale for the y-axis. However, if you would like to override that with your own values for the y-axis, you can specify them here. The format for this input is the minimum, increment, and maximum values separated by semi-colons. For example, if you specify 10;20, the minimum y-axis scale is set at 10 and the maximum y-axis scale is set at 20. If you specify 10;2;20, then, in addition to the minimum and maximum values, the y-axis increment is set at 2. Note that this input is optional.

7. Horizontal Lines: You can specify the values here if you want to add a few extra horizontal reference lines on top of your chart. The format for this input is numeric values separated by semi-colons. For example, if you specify 12;15, two horizontal lines are plotted at Y = 12 and Y = 15, respectively. Note that this input is optional.
8. Vertical Lines: You can specify the values here if you want to add a few extra vertical reference lines on top of your chart. The format for this input is numeric values separated by semi-colons. For example, if you specify 2;5, two vertical lines are plotted at X = 2 and X = 5, respectively. Note that this input is optional.

If you click the button, the software will perform some checks on the data you entered. A sample screenshot of the dialog box is shown in the figure below.

1. Item: The left-hand side shows the major tabs and the items checked within each section.

2. Status: The right-hand side shows the status of the checks.

3. Overall Status: The overall status of all the checks for the given analysis is shown here. The overall status check shows a green thumbs-up sign if everything is okay and a red thumbs-down sign if any checks have not passed. Note that for some analyses you cannot proceed with generating analysis results if the overall status is not okay.

Click on Compute Outputs to update the output calculations. A sample screenshot of the worksheet with simple regression is shown below. A histogram of the residuals, along with plots of the residuals against the run order and the fit values, is also displayed. Make sure that the residuals are randomly distributed and that there is no pattern in the residuals with respect to the run order or the fit value. If there are any problems with the residuals, you may need to revisit your regression model.

Regression Menu Bar

For Regression worksheets, an additional menu bar is displayed on the top main menu bar, as shown in the following screenshot:

Make Predictions

If you want to make individual predictions using the developed regression model, you can use the Make Predictions menu on the top menu bar. Note that if you make many predictions and want to look at the confidence intervals, use the Show Predictions feature within the Analysis Setup to make the predictions.
These inputs are entered directly on the worksheet, and you can use Compute Outputs to make predictions. However, if you want to use the model to predict individual values, you can use the Make Predictions dialog box. A sample screenshot of the results of the Make Predictions menu is shown in the figure below.

1. Model: The model analyzed in the current worksheet is displayed in this section. Note that if no model has been analyzed yet, you cannot make predictions using this dialog box; you will first need to generate a model using the Analysis Setup and Compute Outputs buttons. You can use a model to make predictions only when a model has been developed and saved to the worksheet.

2. Date: The date shows when the model was developed and saved to the worksheet. Note that once a model has been saved, it can be used for predictions; you don't need to use Compute Outputs to update the model. You can share this worksheet with other users, and they can enter their inputs and generate the predicted model outputs using the model equation.

3. Inputs: Specify the input values you want to use to make the prediction. You will need to specify all the model inputs to make a prediction. A blank input will be taken as a value of 0.

4. Predict Button: Click on the >> button to make the prediction. This will use the model equation and the inputs you have specified to generate the model outputs.

5. Outputs: The outputs from the model are displayed in this section. Currently, the outputs are only displayed in this dialog box and not on the worksheet. You will need to manually copy the solution to your worksheet if you would like to save this value.

6. Cancel Button: Click on this button to close this dialog box.

Here are a few pointers regarding this analysis:
• You can fit up to 30 factors with this worksheet.
• The Auto feature uses a stepwise-backward regression. The Auto functionality first fits a model with all the terms included.
It then drops the terms with a P-value greater than the alpha value and re-fits the model. This process continues until only significant terms are left in the model.

The following examples are provided:
• For the data in the reference file, determine a statistically significant linear model between advertising spend and sales (Regression 1.xlsx).
• For the data in the reference file, determine a statistically significant linear model between advertising spend, discounts, incentive program, and sales (Regression 1.xlsx).
• For the data in the reference file, determine a statistically significant quadratic model between X and Y (Regression 1.xlsx).
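The backward-elimination ("Auto") procedure and the residual diagnostics described above can be sketched in a few lines of code. The sketch below is an illustration of the general method, not of this software's implementation: it uses a plain NumPy least-squares fit, and it approximates the alpha = 10% cutoff with the large-sample normal critical value 1.645 instead of exact t-distribution P-values. The variable names and the simulated data are invented for the example.

```python
import numpy as np

def fit_ols(X, y):
    """Least-squares fit; X already contains a leading column of ones."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, p = X.shape
    sigma2 = resid @ resid / (n - p)          # residual variance estimate
    XtX_inv = np.linalg.inv(X.T @ X)
    se = np.sqrt(sigma2 * np.diag(XtX_inv))   # standard errors of coefficients
    return beta, se, resid, sigma2, XtX_inv

def backward_eliminate(X, y, names, z_crit=1.645):
    """Drop the least significant term until every remaining |t| >= z_crit.

    z_crit = 1.645 is a large-sample normal approximation to a two-sided
    10% cutoff; the software described above uses exact P-values instead.
    The constant term (column 0) is never dropped.
    """
    keep = list(range(X.shape[1]))
    while len(keep) > 1:
        beta, se, *_ = fit_ols(X[:, keep], y)
        t = np.abs(beta / se)
        worst_t, worst_pos = min((t[i], i) for i in range(1, len(keep)))
        if worst_t >= z_crit:
            break
        del keep[worst_pos]
    return [names[k] for k in keep]

# Invented example data: y depends on x1 and x2; x3 is pure noise.
rng = np.random.default_rng(0)
n = 200
x1, x2, x3 = (rng.normal(size=n) for _ in range(3))
y = 3.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(scale=0.5, size=n)
X = np.column_stack([np.ones(n), x1, x2, x3])
names = ["const", "x1", "x2", "x3"]

kept = backward_eliminate(X, y, names)
print("terms kept:", kept)

# Residual diagnostics on the full model.
beta_f, se_f, resid, sigma2, XtX_inv = fit_ols(X, y)
leverage = np.diag(X @ XtX_inv @ X.T)                    # diagonal of the hat matrix
std_resid = resid / np.sqrt(sigma2 * (1.0 - leverage))   # standardized residuals
p = X.shape[1]
cooks = std_resid**2 * leverage / ((1.0 - leverage) * p) # Cook's distance
print("max Cook's distance:", cooks.max())
```

Two sanity checks follow directly from the definitions: the leverages always sum to the number of model terms (the trace of the hat matrix), and Cook's distances are never negative.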
Procedures for Determining Pelletizing Characteristics of Iron Ore Concentrates - 911Metallurgist

In most routine experimental work on products and processes, methods of operation and product evaluation are established from accurately controlled laboratory tests. With particular emphasis on the pelletization of fine iron ore concentrates, standardized procedures of laboratory production and product testing have been established at the Cleveland-Cliffs Iron Co. Research Laboratory at Ishpeming, Mich. A program of this nature was used for investigating the agglomeration characteristics of various iron ore concentrates. The factors which control the process were studied and subsequently controlled for these investigations.

The extent of the bonding phenomena and the consequent strength of fired pellets is governed by some of the following firing conditions and properties of the unfired pellet:

1. mineralogical composition of the pellet,
2. particle packing within the pellet,
3. particle size range of the minerals comprising the green pellet,
4. maximum temperature to which the minerals are exposed and the retention period at this temperature,
5. the rate of attainment of and rate of recession from the maximum temperature,
6. the gas composition of the atmosphere in which the pellet particles are fired, and
7. the high temperature chemistry of the mineral particles and the additives of the pellets.

If the atmosphere of the fired iron ore particles is not of homogeneous composition, differential agglomeration will come about within the pellet because of particle chemistry differences. Hematite, magnetite, and the lower oxides of iron definitely possess different properties of agglomeration. If some of the iron minerals are contiguous to reducing substances during firing, the agglomeration characteristics will differ from particles fired in an ambient atmosphere that is oxidizing in nature.
Shaft furnace firing of green iron ore pellets is carried out by placing wet pellets on the top of a shaft hearth containing pellets in the process of being fired. As the charge descends into the shaft by gravity, the wet pellets are fired by a countercurrent stream of combustion and heat transfer gases. Varying temperature zones within the furnace perform the following consecutive operations on the descending wet pellets: 1—drying, 2—preheating, 3—firing to a maximum temperature, and 4—cooling.

The regions of different temperature zones within the furnace shaft can be varied by operational controls, but a specific zonal region can also be a function of the high temperature chemistry of the ore pellets. If pellets contain an intrinsic fuel, magnetite, or an additive which can exothermally oxidize with the gases, a hot firing zone can literally be floated within the shaft when relatively low temperature gases are allowed to ignite the pellet burden and maintain continual combustion within the descending charge. In this manner descending pellets of magnetite can be ignited with oxidizing gases of 1000°C, and a zone of burning pellets can be established that is 1300°C in temperature. The position and maintenance of this zone gives rise to a longitudinal temperature gradient within the furnace shaft, which consequently describes the changes of pellet temperature with time.

Laboratory Procedure for Firing Pellets

Apparatus consisted of an electrical, globar combustion tube furnace equipped with provisions for passing gas through the tube at controlled flow rates. Nichrome boats which served as the pellet-charging devices were admitted through the open end of the combustion tube, passed countercurrent to the gas stream, and removed through the upstream end of the tube. The boat charge consisted of five wet pellets, each of which was about 35 g in weight and 28 mm in diam.
These were formed in a laboratory balling drum by use of procedures described by Tigerschiold and Ilmoni. The drum products were stored in sealed containers and retained for the subsequent firing tests. To minimize pre-drying effects, green pellets representing a specific balling drum product were taken from the containers and placed on the boat approximately 1 min before the actual firing operation.

During operation of the laboratory apparatus, pellet-laden boats were admitted one by one into the combustion tube and moved periodically through the tube so that pellets could approach the consecutive temperature changes of shaft furnace pellets. These discrepancies were tolerable. It would have been impractical to allow laboratory pellets to cool for 10 hr, and the slight variations of the heating rates were of a small order when contrasted to the normal variations of heating rates within a shaft furnace. The results of the laboratory tests were to be relative rather than absolute. For this reason the shaft furnace firing schedule was approached but was not attained.

Pellet Quality Determinations

The quality of fired pellets can be determined by different materials tests to denote the toughness, hardness, and compressive strength. Each of these relates a measure of pellet particle coherency. Iron ore pellets should be able to withstand the crushing forces imposed by furnace and stockpile stacks and the impact and abrasive forces imposed by conventional iron ore handling and shipping. The physical properties that make it possible for pellets to endure these forces are generally related, although not definitely. Brittle pellets can conceivably withstand great compressing loads, yet will shatter remarkably from impacts. Tough pellets, likewise, can be comprised of coarse particles which readily abrade from the surfaces. For these reasons, each specific strength property was individually tested by the use of standardized materials test procedures.
These were the crush, impact, and abrasion tests, used to denote pellet strength, toughness, and hardness, respectively.

Strength is a measure of ability to endure stress. In the case of slow diametric compression of spherical material, stress is a function of sphere size and compressing force. When like-size spherical pellets are subjected to a slow diametric force, the strength and maximum unit stress are directly proportional to the crushing load. Consequently the pellet strength designated herein was approximated from a measurement of the maximum load imposed on an individual pellet, or the extent of the compressing load which caused the pellet to crush.

The abrasion test was developed to find a means of expressing the relative resistance to abrasive action offered by the surfaces of pellets. This abrasive action was caused by inter-pellet rubbing induced by pellet contact and movement in a small rotating drum.

The impact test was developed for determining the toughness of the pellets, or the relative resistance to subdivision from impact forces. These forces were imparted by dropping the pellets successively from a predetermined height onto a solid steel plate surface. A strength index, or the percent of the original particle size, was computed from the screen analysis of the pellets before dropping and the screen analysis of the pellets and fragments after three drops.

Several tests were conducted to determine the mass effect of the drop test and to determine the reliability of results procured from a specific test. Approximately 700 lb of pellets of a uniform quality were thoroughly mixed for these tests, and varying quantities were cut out from the entire sample. These quantities consisted of two 100-lb portions, two 50-lb, two 25-lb, two 12.5-lb, two 6.25-lb, and eight 1000-g portions. The most expedient method of measuring the degree of particle coherency within an iron ore pellet is the crushing test.
A specific test carried out as a measurement of value, such as the crushing strength of fired pellets, is only as good as the degree of reproducibility of the test.

Results of Laboratory Pelletizing Tests

The laboratory tests were carried out on five different iron ore concentrates, designated M-1, M-2, M-3, H-1, and H-2. M-1 and M-3 represent two different concentrates of natural magnetite produced from magnetic concentration circuits. H-2 and M-2 represent concentrates of artificial hematite and artificial magnetite produced by roasting iron sulphides under different atmospheric conditions. H-1 represents a flotation concentrate of specular hematite.

Prior to balling, the ore concentrates were blended in the dry state with ½ pct by weight of bentonite and, in certain instances, with various quantities of other pulverized additives. These blends were further mixed with about 10 pct water, and each mixture was used as the laboratory balling drum feed.

The apparent detrimental effect of internal carbonaceous matter in iron ore pellets was evident from the decreasing strength properties of H-2 pellets containing increasing quantities of internal coal. Pellets containing 2 pct coal on the exterior, however, appeared to have greater strength properties than pellets containing lesser amounts of coal internally. This demonstration indicated that within pellets containing internal coal, gaseous diffusion lagged behind heat transfer, and carbon abstracted oxygen from the ore rather than from the hot gases.
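The drop-test strength index described earlier — the percent of the charge remaining at the original particle size after three drops — reduces to a simple ratio of the before- and after-test screen analyses. The sketch below illustrates that arithmetic only; the exact index definition used in the original study is not spelled out in the article, and the masses here are invented example numbers, not data from the tests.

```python
# Illustrative strength-index calculation for the drop (impact) test.
# Assumption: the index is the percentage of the charge still retained
# at the original pellet size after three drops.

def strength_index(mass_at_original_size_after_drops, total_mass):
    """Strength index: percent of the charge remaining at original size."""
    return 100.0 * mass_at_original_size_after_drops / total_mass

total = 1000.0     # grams charged (one of the 1000-g test portions)
retained = 934.0   # grams still on the original-size screen after 3 drops
print(f"strength index = {strength_index(retained, total):.1f} pct")
```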
Floating Point Operations | Shell Scripting

This is a multipart blog article series where I am going to explain the concepts of shell scripting and how to write a shell script in Linux, UNIX, or Mac based systems. You can also follow this tutorial blog using Windows, but for that you have to install a bash shell first.

We cannot handle floating point numbers using the simple arithmetic operations covered earlier, so in this article we will see how to work with floating point numbers. Performing arithmetic operations on decimals using the previous methods will give an error. For decimals we use bc, which stands for basic calculator.

• We can use the bc command in this way: echo "20.5+5" | bc
• But there is a problem with division: by default it will not show the values after the decimal point. To fix that we have to set the scale variable, like this: echo "scale=2;20.5/5" | bc
• Here we can see the value up to 2 decimal places.
• We can also calculate the square root of a number and solve trigonometric equations using bc.
• If you want to use variables, then the syntax becomes echo "$num1+$num2" | bc, where num1 and num2 are variables which have some value in them.
• We can calculate the square root of a number in this way: echo "scale=2;sqrt($num)" | bc -l
• To calculate it we use the sqrt function and pass it the variable whose square root we want.
• The -l option loads bc's standard math library, which provides functions such as sine, cosine, and natural logarithm.
• We can also calculate the power of a number like this: echo "scale=2;3^3" | bc -l
• You can get more information about the bc command by typing man bc in the terminal.

So this was all about performing arithmetic operations on floating point numbers. Hope you liked it and learned something new from it. If you have any doubt, question, or queries related to this topic, or just want to share something with me, then please feel free to contact me.
Simple Circuit Provides +5V Gate Bias from -48V Input

In this design idea a small circuit with six components derives 5V gate bias from the -48V rail widely used in telecom applications. The MAX6138 shunt voltage reference and the MAX1683 charge pump are featured in the design.

A similar version of this article appeared in the September 19, 2002 issue of EDN magazine.

A small and simple circuit (Figure 1) derives +5V from the -48V rail widely used in telecom applications. Useful for gate bias and other purposes, the 5V supply delivers up to 5mA. A shunt reference (U1) defines -5V as the ground reference for a charge pump (U2), and the charge pump doubles this 5V difference (between system ground and charge-pump ground) to produce +5V with respect to system ground.

Figure 1. This small circuit (six components) produces 5V at 5mA from a -48V input.

The shunt reference maintains 5V across its terminals by regulating its own current (I[S]), which in turn is determined by the value of R. Current through R (I[R]) is fairly constant, and varies only with the input voltage. I[R], the sum of the charge-pump and shunt-reference currents (I[R] = I[CP] + I[S]), has maximum and minimum values set by the shunt reference. The shunt reference sinks up to 15mA, and requires 60µA minimum to maintain regulation. Maximum I[R] is determined by the maximum input voltage. To prevent excessive current in the shunt reference with no load on the charge-pump output, use the maximum input voltage (-48V -10% = -52.8V) when calculating the minimum value of R. The maximum reference sink current (15mA) plus the charge pump's no-load operating current (230µA) equals the maximum I[R] value (15.23mA). Thus, R[MIN] = (V[IN(MAX)] - V[REF])/I[R(MAX)] = (52.8V - 5V)/0.01523A = 3.14kΩ. Choose the next-highest standard 1% value, which is 3.16kΩ. Guaranteed output current for the charge pump is calculated at the minimum line voltage: -48V + 10% = -43.2V.
The charge pump's maximum input current is: I[CP] = (V[IN(MIN)] - V[REF])/R - I[SH(MIN)] = (43.2V - 5V)/3.16kΩ - 90µA = 12mA, where 90µA is the minimum recommended operating current for the shunt reference. Assuming 90% efficiency in the charge pump, the output current is I[OUT] = (I[CP]/2) × 0.9 = (12mA/2) × 0.9 = 5.4mA. Charge-pump input current is halved, because output voltage is twice the input voltage. Power is dissipated via the shunt reference under no-load conditions, so be sure that R can handle the resulting wattage. A 1W resistor suffices in this case.
Solution Sequences

OptiStruct can use different solution sequences for aircraft analysis. Some of the solution sequences that can be used are:
• Linear Static or Nonlinear Static Gap Analysis
• Normal Modes Analysis
• Linear Buckling Analysis
• Direct Frequency Response Analysis
• Large Displacement Nonlinear Static Analysis (available as a PARAM card entry)
• Linear Steady-state or Transient Heat Transfer Analysis
• Fatigue Analysis

Static Analysis
Static analysis can be used to solve time-independent problems. Figure 1 shows a fuselage section with SPC boundary conditions on the bulkhead and a uniform pressure applied to the skin.

Figure 1. Stress and Displacement Contours for Static Analysis of a Fuselage Section

For further information, refer to Linear Static Analysis.

Nonlinear Static Analysis
An analysis is termed nonlinear when the relationship between force and displacement is nonlinear. Most structural components in an aircraft structure are subjected to large deformations, which are best analyzed through nonlinear analysis. The main reasons for nonlinearity are:
• Material nonlinearities
• Geometric nonlinearities
• Presence of nonlinear forces
• Contact nonlinearities

Inertia Relief Analysis
Inertia relief analysis is mostly performed on unsupported structures to determine the impact loads on structures or to calculate the distribution of forces. OptiStruct has two options for inertia relief analysis:
• PARAM, INREL, -1 is used when certain boundary conditions are specified.
• PARAM, INREL, -2 is used when no boundary conditions are specified.

Figure 2. Displacement and Stress Contours for Inertia Relief Analysis

Figure 2 shows the results from an inertia relief analysis performed on a fuselage model.

Normal Modes Analysis
Mode shapes provide the frequencies at which the structure will absorb all the supplied energy when no load is acting on it. To analyze the displacement of a structure at these frequencies, you can use Frequency Response Analysis.
Normal Modes Analysis of aircraft structures helps in determining:
• Under-constrained and loose components
• The rotating speed that matches the natural frequencies, in the case of the analysis of a blade or a rotor
• The areas to be constrained or loaded

Figure 3. Normal Modes Analysis of a Fuselage and Drone

Figure 3 shows the results from the Normal Modes Analysis for a fuselage and a drone. Both models have free-free boundary conditions.

Frequency Response Analysis
Each frequency is solved independently, and several frequencies can be solved in a single run. The results can be used to produce displacement-versus-frequency plots, which help in studying the displacements of the structure when it is excited at the natural frequencies calculated with a modal analysis. The frequencies can be specified using the FREQi card.

Figure 4. Frequency Response Analysis of a Fuselage Section

Model File
Refer to Access the Model Files to download the required model file(s). The model file used here includes:
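For illustration, a minimal hypothetical input-deck fragment requesting the no-boundary-condition inertia relief option described above might look like the sketch below. Only the PARAM,INREL,-2 entry is named in the text; the subcase and the remaining cards are placeholders:

```
$ Hypothetical minimal deck sketch -- illustrative only
SUBCASE 1
  LABEL = Inertia relief, flight load
  LOAD = 2
BEGIN BULK
PARAM,INREL,-2   $ inertia relief with no boundary conditions
$ ...grid, element, property and load cards go here...
ENDDATA
```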
Public debt and economic growth: panel data evidence for Asian countries

1 Introduction

The relationship between public debt and economic growth has been a subject of increasing interest amongst academic scholars and policy makers. The problem of rising public debt is nothing new to developed countries and has also become an issue of increased interest in developing countries. Reinhart and Rogoff ( ) argue that a public debt to GDP ratio above 90% is associated with a lower economic growth rate. However, recent empirical studies such as Égert ( ) question this result and conclude that no simple public debt threshold exists. Indeed, in their study Herndon et al. ( ) conclude that there are significant errors in the results of Reinhart and Rogoff and that the 90% adverse debt threshold impact on economic growth is non-existent. Also, Reinhart and Rogoff do not deal with the issue of causality, and Dafermos ( ) shows that their results are heavily impacted by periods of low economic growth, in which there is usually a noticeable increase in public debt. There are two mechanisms through which this happens: (i) low economic growth directly impacts the debt to GDP ratio, since GDP is the denominator of this ratio, and (ii) low economic growth tends to worsen fiscal deficits due to the impact of automatic stabilisers. This study analyses whether rising public debt is harmful for growth, in both the short-run and the long-run, using data from fourteen Asian countries. The fact that the Asian countries are the biggest borrowers among emerging economies makes rising public debt a particularly important issue in that region of the world. The Asian economies have also been exposed to two major crises during the period under study: the Asian financial crisis of 1998 and the global financial crisis of 2008, which boosted the public debt to GDP ratios in these countries.
We firstly use a simple bivariate model to assess the direct impact of government debt on economic growth. We then use the panel autoregressive distributed lag (ARDL) estimators of pooled mean group (PMG), mean group (MG) and dynamic fixed effects (DFE) to examine the relationship. The asymmetric panel ARDL is used to examine the different responses to a change in government debt over the short-run and the long-run; this technique is also used in an analysis of debt in the Eurozone Area by Gómez-Puig and Sosvilla-Rivero ( ). This study contributes to the literature by examining a set of countries that has not previously been explored, using both a linear and a non-linear methodology. Our research also contributes by looking at the issue of cross-sectional dependence in macro panel data. The presence of cross-sectional dependence may be caused by numerous factors: spatial spillovers and omitted and unobserved common factors, as discussed in Breitung and Pesaran ( ). Ignoring these factors may lead to the inconsistency of parametric and nonparametric estimators, as pointed out by Baltagi ( ).

The remainder of this study is organised as follows: section 2 presents a review of the literature, section 3 explains the data and sample selection, section 4 presents the empirical methodology and section 5 discusses the empirical results. Finally, section 6 concludes.

2 Literature review

Debt-related problems are nothing new to developed and developing countries. Many different empirical approaches have been used to examine the link between public debt and economic growth. The results can be heavily influenced by the time period of the study, country selection and estimation methods. As pointed out by Panizza and Presbitero ( ), at the theoretical level models yield ambiguous results concerning the relationship between public debt and economic growth, and hence the link between the two is basically an empirical issue.
They also argue that there is no empirical study that can make a compelling case for a causal relationship going from debt to economic growth. Nonetheless, a strong adverse effect of high levels of public debt in low-income countries is found in the study of Pattillo et al. ( ), who find that for highly indebted nations per capita growth will be reduced by 1% if debt is doubled. Cecchetti et al. ( ), based on an analysis of 18 OECD economies, argue that there is a threshold at 85% debt to GDP which, if exceeded, leads to a reduction in future economic growth. They find that beyond the threshold an increase of 10% in the government debt to GDP ratio will lower annual economic growth by 0.17–0.18% over the following 5-year period. Theoretically, government debt can stimulate a nation's long-run growth rate if it is productively allocated to the determinants of growth. Several studies argue that there is a non-linear link in which increased government debt may increase economic growth initially but after a certain point lead to a decrease in the growth rate. The frequently cited study of Reinhart and Rogoff ( ) finds a weak link between low public debt levels and growth but argues that a debt-to-GDP ratio exceeding 90% is harmful to growth. Based on historical data series for two decades, they analyse the link among inflation, high central government debt, and economic growth in both developed and developing nations. However, the Reinhart and Rogoff results have become controversial due to some computational errors in their calculations that were pointed out by Herndon et al. ( ); these errors undermine the existence of the 90% threshold in the debt-growth link, and the authors argue that there is no discontinuity above the cut-off level. Another replicative study, by Minea and Parent ( ), shows that the threshold point is somewhat higher than 90% of GDP and kicks in at around a 115% debt-to-GDP ratio; above that ratio public debt is found to be negatively correlated with growth.
Despite the controversy, other research provides some support for Reinhart and Rogoff's ( ) findings. Kumar and Woo ( ) find a similar result using panel data for 38 advanced and emerging countries. They analyse the impact of initial government debt on subsequent per capita GDP growth and show that there exists an inverse-U shaped relationship, with a debt-to-GDP ratio of more than 90% being detrimental to economic growth. Checherita-Westphal and Rother ( ) investigate the Euro area's economic growth using annual and cumulative five-year overlapping data on government debt and a squared debt term to analyse the non-linearity of the relationship, and conclude that beyond around a 90–100% debt to GDP ratio there is an adverse effect of public debt on economic growth. However, the quadratic relationship is very sensitive to extreme values, particularly in a small sample of observations, as pointed out by Panizza and Presbitero ( ). Presbitero ( ) uses total government debt in analysing the debt-growth link in developing countries for the period 1990–2007. He finds conditional convergence and a threshold effect for a debt-to-GDP ratio above 90%, which is similar to some of the studies of advanced nations. Apart from the non-linearity debate, the issue of reverse causality needs to be taken into consideration, that is, whether debt leads to higher growth or vice-versa. Many studies are critical of Reinhart and Rogoff ( ) because their research failed to consider the endogeneity issue. Checherita-Westphal and Rother ( ) use various instrumental variable (IV) methods with two-stage least squares or GMM estimators. Their results suggest that the two-step GMM is more favourable regarding efficiency. Kumar and Woo ( ) use a system GMM dynamic panel regression approach to address the endogeneity issue. Both studies use a dataset in which the cross-section is larger than the time span (N>T).
The GMM method is considered to be more efficient and to give more precise estimates, since this approach is applicable to large cross-country analyses (see Roodman ). Presbitero ( ) also considers the foreign currency share of government debt as an instrumental variable. However, the use of this variable is questionable in terms of its economic interpretation, and according to Woo and Kumar ( ) it cannot meet the restriction criteria of a good IV estimator; its usage as an instrument is highly questionable for high-income countries, where the foreign currency proportion of debt is low. To avoid reverse causality, Woo and Kumar ( ) use initial debt levels to analyse the effect on future growth. Due to the problem of finding suitable external instrumental variables, the standard system GMM estimator is used to address the potential endogeneity issue. They find that a high initial level of public debt is significantly associated with slower subsequent growth in a large panel of countries made up of developed and emerging market economies. Baum et al. ( ) show a positive short-run impact of public debt on growth, but the impact becomes smaller once the debt-to-GDP ratio is above 67%. They use the dynamic panel method of GMM to estimate the linear model and a modified Caner and Hansen ( ) approach to estimate the debt threshold. Sen et al. ( ) also exploit a dynamic GMM to study the effect of government debt on economic growth in Asian and Latin American countries. In the spirit of debt overhang, they examine external debt and find that borrowing severely hinders growth in Latin America and has a mildly negative effect in the case of Asia. However, the GMM only captures the short-run dynamics and ignores the long-run relationship, since the estimator is designed for a small time span. Consequently, as shown by Christopoulos and Tsionas ( ), the outcomes may show a spurious result instead of a long-run equilibrium.
Moreover, in the case of a small N and large T, the GMM estimator may suffer from an autocorrelation problem in the residuals of the first-difference estimation; see Roodman ( ). Gómez-Puig and Sosvilla-Rivero ( ) use ARDL bounds testing when estimating the debt-growth link in the Economic and Monetary Union countries. Focusing on time series estimation, the authors find that the adverse impact is persistent in the long-run, but there are positive effects for some member countries in the short-run. Conversely, Eberhardt and Presbitero ( ) use a dynamic model of common correlated effects with pooled group and mean group estimators to analyse the link between debt and growth, and they also use the traditional mean group and dynamic two-way fixed effects estimators as a means of comparison. Using data from 118 countries, the authors allow for heterogeneity in the long-run and short-run links. They find a significant positive effect of debt on average in the long-run but an insignificant result in the short-run. The use of panel autoregressive distributed lag (ARDL) models for analysing the impact of public debt on economic growth can also be found in Chudik et al. ( ): using data on a sample of 40 developed and developing countries over the period 1965–2010, they find an adverse effect of increases in the public debt to GDP ratio on economic growth. They also find no simple debt threshold for either developed or developing countries after accounting for the impact of global factors and spillover effects.

3 The data set

Panel estimation is chosen in this study to control for individual heterogeneity, to identify unobservable characteristics and to provide more information for reliable estimation; see Baltagi ( ). Our analysis uses the data of 14 countries in Asia over a period of 33 years (1980–2012), resulting in a total of 462 observations (see Table 1 for the countries in the sample). The choice of countries was determined by issues of data availability.
Japan was excluded from the analysis due to its high public debt level. Since the data consist of a panel of 14 countries over 33 years, where N=14 is much less than T=33, the GMM estimator is not appropriate for our analysis. Table 1 provides comparative data on the countries' debt-to-GDP ratios. However, when T is larger than N (as in our case) the ARDL approach is appropriate, and it is therefore the preferred method for our analysis.

Table 1 Comparative Features of Public Debt in Asian Countries (public debt-to-GDP, %)

Categories | Countries | 1995 | 2010
Lower-middle income | Bangladesh | 51.3 | 38.5
Lower-middle income | India | 69.5 | 67.0
Lower-middle income | Indonesia | 32.0 | 26.8
Lower-middle income | Nepal | 63.4 | 35.4
Lower-middle income | Pakistan | 68.0 | 61.5
Lower-middle income | Philippines | 62.7 | 43.5
Lower-middle income | Sri Lanka | 92.0 | 81.9
Upper-middle and high income | Thailand | 12.2 | 41.9
Upper-middle and high income | Turkey | 34.6 | 42.3
Upper-middle and high income | Iran, I.R. of | 35.2 | 16.7
Upper-middle and high income | China: Mainland | 6.1 | 33.5
Upper-middle and high income | Malaysia | 41.6 | 53.5
Upper-middle and high income | Republic of Korea | 7.1 | 33.4
Upper-middle and high income | Singapore | 70.1 | 101.7

Data are obtained from different sources of macroeconomic variables: the public debt-to-GDP ratio is derived from the study 'A Historical Public Debt Database' by Abbas and Christensen ( ) in conjunction with the World Development Indicators (WDI-WB) of the World Bank ( ) and official national statistics. Other control determinants are taken from the WDI-WB and the Penn World Tables 9.0 (PWT 9.0) (Feenstra et al. ). Following Sala-i-Martin et al.
( ), the explanatory variables are a set of determinants of economic growth, with the inclusion of several control variables to overcome the problem of omitted-variable bias. The variables used in our study are listed below:
• Real GDP (in log) is obtained from the PWT 9.0.
• Public debt (in log) is obtained from Abbas and Christensen ( ), the WDI-WB and official national statistics.
• Average years of schooling (in log), our proxy for human capital, following its widespread usage in the public debt-growth literature (Pattillo et al. ; Woo and Kumar ), is obtained from the PWT 9.0.
• Trade openness (in log): this study uses the sum of imports and exports as a percentage of GDP to account for international trade activity. Data are obtained from the WB-WDI.
• Investment ratio (in log) is obtained from the WB-WDI, using gross fixed capital formation as a percentage of GDP.

4 Methodology

We use several econometric methods to examine the relationship between public debt and economic growth in Asian countries and consider both the long-run and short-run relationships, along with the presence of nonlinearity. To examine the short- and long-run relationships we use the panel ARDL initiated by Pesaran and Smith ( ) and Pesaran et al. ( ). To address the contemporaneous correlation issue, the CS-ARDL estimators of Common Correlated Effects Pooled Mean Group (CCEPMG) and Common Correlated Effects Mean Group (CCEMG) are used. Since the debt levels are generally well below 90%, we do not look for threshold effects in our analysis.

4.1 Preliminary tests

We first conduct panel unit root tests before performing the main estimations; the tests are necessary to check whether the variables are non-stationary. Several tests are conducted: the Im et al. ( ) test (IPS), the Levin et al. ( ) test (LLC) and the second-generation IPS test (CIPS) of Pesaran ( ).
The LLC test is based on the assumption of homogeneity of the autoregressive parameter, while the IPS test allows for heterogeneity, and the CIPS unit root test relaxes the assumption of cross-sectional independence (no contemporaneous correlation). All of these tests use the null hypothesis of non-stationarity. The lag length is chosen using the Schwarz Bayesian information criterion.

Another test we conduct is the cross-sectional dependence (CD) test of Pesaran ( ), which accounts for the presence of cross-sectional dependence. Panel data estimation assumes that disturbances are cross-sectionally independent; however, with cross-country influences in the population, the issue of a cross-sectional link may arise. This dependence might be caused by a shared geographical area or by political or economic inducements (Gaibulloev et al. ); therefore, it is necessary to test for the presence of cross-sectional dependence. We also employ the CIPS and CD tests to check the properties of the residuals.

4.2 Panel cointegration tests

Two panel cointegration tests are employed here, based on the results of the preliminary tests of non-stationarity. If the variables are non-stationary, then an examination for cointegration is conducted, using the cointegration tests of Pedroni ( ) and Westerlund ( ). These cointegration tests are expected to reveal the existence or otherwise of a long-run relationship. Pedroni ( ) proposes seven different panel cointegration tests for the absence of cointegration. The seven tests rely on three between-dimension approaches and four within-dimension methods. A generalised least squares correction is used to correct the independent idiosyncratic error terms across individuals. The Westerlund ( ) test provides four panel cointegration statistics with the null of no cointegration; rejection of the null hypothesis can be considered evidence of cointegration in at least one individual unit.
4.3 Dynamic panel ARDL tests

The panel Autoregressive Distributed Lag (ARDL) approach is used if no cointegration is found by the previous methods. This method is superior regardless of whether the underlying regressors are I(0), I(1) or a mixture of both (Pesaran and Shin ), and with a time span of over 20 years the macro panel data method can be implemented. It was not appropriate to use the GMM estimator due to the nature of the dataset. Following the extensive literature on dynamic panel data, we implement several estimators to assess the relationship between public debt and economic growth: the Mean Group (MG), Pooled Mean Group (PMG) and Dynamic Two-Way Fixed Effects (DFE) estimators (Pesaran and Smith ; Pesaran et al. ).

The main model of the panel ARDL approach for the relationship between public debt and economic growth is:

$$ {y}_{it}={\alpha}_i+\sum \limits_{l=1}^p{\beta}_0{y}_{i,t-l}+\sum \limits_{l=0}^q{\beta}_1{d}_{i,t-l}+\sum \limits_{l=0}^q{\beta}_2{x}_{i,t-l}+{u}_{it} $$

By reparameterising eq. ( ) we obtain:

$$ {\displaystyle \begin{array}{c}{\varDelta y}_{it}={\alpha}_i+{\varPhi}_i\left({y}_{i,t-l}-{\theta}_1{d}_{i,t-l}-{\theta}_2{x}_{i,t-l}\right)\\ {}+\sum \limits_{l=1}^{p-1}{\lambda}_{il}{\varDelta y}_{i,t-l}+\sum \limits_{l=0}^{q-1}{\lambda}_{il}^{\prime}\varDelta {d}_{i,t-l}+\sum \limits_{l=0}^{q-1}{\lambda^{\prime \prime}}_{il}\varDelta {x}_{i,t-l}+{u}_{it}\end{array}} $$

where the subscripts $i$ and $t$ represent country and time respectively, $y_{it}$ is real GDP, $d_{it}$ is the public debt to GDP ratio, and $x_{it}$ is a set of control variables: openness, human capital and the investment to GDP ratio. The terms $\lambda_{il}$, $\lambda'_{il}$ and $\lambda''_{il}$ are the short-run coefficients of the lagged dependent variable, debt and the other control variables respectively. The long-run coefficients are $\theta_1$ for debt and $\theta_2$ for the other control variables. Lastly, $\varPhi_i$ is the speed of adjustment.

The PMG estimator restricts the long-run equilibrium to be homogeneous across countries, while allowing heterogeneity in the short-run relationship.
The short-run relationship focuses on country-specific heterogeneity, which might be caused by the different responses of stabilisation policies, external shocks or financial crises in each country. The MG estimator allows for heterogeneity in both the short-run and the long-run relationships. To be consistent, this estimator requires a large number of countries; for a small N, it is sensitive to model permutations and to outliers (Favara, 2003). By contrast, the DFE estimator restricts the speed of adjustment, the slope coefficients and the short-run coefficients to be homogeneous across countries. Accepting this estimator as the main analysis tool requires the strong assumption that countries' responses are the same in the short-run and the long-run, which is less compelling. Another drawback is that this approach may suffer from simultaneity bias in small samples due to the endogeneity between the error term and the lagged explanatory variables (Baltagi et al. ).

In our case the data are derived from middle-income countries, which exhibit similar behaviour in the long-run with regard to economic growth, while the short-run is expected to be non-homogeneous due to country-specific differences; as such, the PMG estimator seems to be superior to the other methods. We use the Hausman test to verify the significance of each estimator. One important point is that the ARDL estimators, especially the PMG and MG, can alleviate the problem of endogeneity with the inclusion of sufficient lags of all variables (Pesaran et al. ).

The common correlated effects approach is introduced in the panel ARDL estimation to account for contemporaneous correlation. By creating (weighted) cross-sectional averages of the regressors to control for the common factor, this study focuses on the Common Correlated Effects Pooled Mean Group (CCEPMG) method and adds the Common Correlated Effects Mean Group (CCEMG) as a comparison (Pesaran ).
It is expected that the CCEPMG will be consistent and efficient in this estimation, under the null hypothesis of no heterogeneity in the long-run.

4.4 Asymmetric panel ARDL tests

The nonlinear ARDL estimator of Shin et al. ( ) allows for asymmetric short-run and long-run relationships. Following Eberhardt and Presbitero ( ), this study looks at the asymmetric long-run and short-run responses of economic growth to public debt accumulation.

4.4.1 Asymmetric panel ARDL tests

Asymmetric long-run estimation requires a decomposition of the variable of interest into its positive and negative sub-variables, which defines $d^{+}$ and $d^{-}$ as partial sums of positive and negative changes in public debt.

$$ {y}_{it}={\alpha}_i+\sum \limits_{l=1}^p{\beta}_0{y}_{i,t-l}+\sum \limits_{l=0}^{q-1}\left({\beta}_1{d}_{i,t-l}^{+}+{\beta}_2{d}_{i,t-l}^{-}\right)+\sum \limits_{l=0}^q{\beta}_3{x}_{i,t-l}+{u}_{it} $$

where $d_{i,t}^{+}=\sum_{j=1}^{t}\varDelta d_{ij}^{+}=\sum_{j=1}^{t}\max\left(\varDelta d_{ij},0\right)$ and $d_{i,t}^{-}=\sum_{j=1}^{t}\varDelta d_{ij}^{-}=\sum_{j=1}^{t}\min\left(\varDelta d_{ij},0\right)$.

By reparameterising eq. ( ) we obtain:

$$ {\displaystyle \begin{array}{c}{\varDelta y}_{it}={\alpha}_i+{\varPhi}_i\left({y}_{i,t-l}-{\theta}_1{d}_{i,t-l}^{+}-{\theta}_2{d}_{i,t-l}^{-}-{\theta}_3{x}_{i,t-l}\right)\\ {}+\sum \limits_{l=1}^{p-1}{\lambda}_1{\varDelta y}_{i,t-l}+\sum \limits_{l=0}^{q-1}\left({\lambda}_2\varDelta {d}_{i,t-l}^{+}+{\lambda}_3\varDelta {d}_{i,t-l}^{-}\right)+\sum \limits_{l=0}^{q-1}{\lambda}_4\varDelta {x}_{i,t-l}+{u}_{it}\end{array}} $$

This study uses the PMG and the CCEPMG approaches, the latter to account for cross-sectional dependence.

5 Empirical results

We start our empirical analysis by conducting panel unit root tests for all our variables. The IPS and LLC unit root tests assume cross-sectional independence, while the CIPS accounts for cross-sectional dependence.
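The partial-sum decomposition used in the asymmetric specification can be illustrated with a short numeric example (illustrative numbers, not taken from the dataset). For a debt-ratio series $d_t = 50, 55, 52, 60$, the changes are $\varDelta d = +5, -3, +8$, so:

```latex
\begin{aligned}
d_t^{+} &= \sum_{j=1}^{t}\max(\varDelta d_j,0) = 0,\;5,\;5,\;13,\\
d_t^{-} &= \sum_{j=1}^{t}\min(\varDelta d_j,0) = 0,\;0,\;-3,\;-3,\\
d_t &= d_0 + d_t^{+} + d_t^{-}\quad\text{(e.g. } 50 + 13 - 3 = 60\text{)}.
\end{aligned}
```

The two partial sums therefore carry the cumulative debt increases and decreases separately, which is what lets the long-run coefficients on $d^{+}$ and $d^{-}$ differ.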
The unit root tests, which are summarised in Table 2, show that the variables of interest have both non-stationary and stationary characteristics. Real GDP, openness and human capital are I(1) according to all unit root tests. Investment is stationary according to the IPS and LLC tests, but considered non-stationary based on the CIPS test. Government debt is stationary according to the LLC test, but has non-stationary characteristics based on the IPS and CIPS tests. Consequently, it is necessary to perform cointegration tests between real GDP and public debt to GDP to check for the possible existence of a long-run relationship.

Table 2 Panel Unit Root Tests

Variable | IPS (constant) | IPS (constant & trend) | LLC (constant) | LLC (constant & trend) | CIPS (constant) | CIPS (constant & trend)

Tests in logarithmic levels:
Real GDP | 6.68 | 1.37 | 1.81 | -0.32 | -1.527 | -1.823
Govt debt/GDP | -2.44 | -1.05 | -2.45*** | -3.75*** | -1.560 | -2.201
Human capital | 2.51 | 0.29 | -0.55 | -2.45*** | -2.030* | -2.199*
Trade openness | 0.56 | 0.01 | -0.21 | -0.88 | -1.802 | -2.106
Investment ratio | -3.11*** | -2.86*** | -3.68*** | -3.92*** | -1.513 | -2.370

Tests in first logarithmic differences:
Real GDP | -10.49*** | -9.20*** | -10.52*** | -9.85*** | -3.892*** | -4.242***
Govt debt/GDP | -11.52*** | -10.92*** | – | – | -4.417*** | -4.625***
Human capital | -1.696** | -3.193*** | -1.96*** | – | – | -2.610***
Trade openness | -15.63*** | -14.57*** | -16.68*** | -15.31*** | -4.812*** | -5.062***
Investment ratio | – | – | – | – | -4.611*** | -4.761***

Two cointegration tests are conducted to analyse the long-run relationship between government debt and growth. The Pedroni test results (see Table 3) show that the null hypothesis of no cointegration in a heterogeneous panel cannot be rejected. To accept the alternative hypothesis, the panel variance statistic has to take a large value and the latter six tests have to show large negative values (Pedroni ).
The same result is obtained from the Westerlund ( ) test of no cointegration between the variables, with high p values indicating non-rejection. The rationale here is to test for the absence of cointegration by determining whether an Error Correction Model (ECM) exists for individual panel members or for the panel as a whole. Two different classes of tests can be used to evaluate the null hypothesis of no cointegration against the alternative: group-mean tests and panel tests. Westerlund ( ) developed four panel cointegration test statistics (G[t], G[a], P[t], P[a]) based on the ECM. The results of all these additional cointegration tests are summarised in Table 4 and in all cases show no evidence of cointegration.

Table 3 Pedroni Cointegration Test Results (Real GDP, government debt/GDP)

Test statistic | Panel, within dimension (a) | Panel, within dimension (b) | Group, between dimension (a) | Group, between dimension (b)
v | 0.2023 | -0.3559 | – | –
rho | 0.6958 | 1.2990 | 1.9790 | 2.4390
t | 0.1663 | 0.4502 | 1.1080 | 1.2580
adf | -0.7834 | 0.4393 | -0.7412 | 0.7053

Table 4 Westerlund Cointegration Test Results (Real GDP, government debt/GDP; value with p value in parentheses)

Statistic | Constant (a) | Constant (b) | Constant & trend (a) | Constant & trend (b)
G[t] | -0.074 (0.991) | -1.628 (0.992) | 0.183 (0.993) | -1.574 (0.910)
G[a] | 0.106 (0.881) | -5.005 (0.883) | 0.457 (0.885) | -3.986 (0.887)
P[t] | 0.017 (0.994) | -5.700 (0.994) | 0.494 (0.995) | -5.831 (0.997)
P[a] | 0.007 (0.966) | -4.419 (0.967) | 0.175 (0.966) | -3.627 (0.963)

As previously stated, the panel ARDL method can be utilised to account for long-run and short-run relationships, even in the case of non-stationary variables without cointegration. Three methods are used in this study: PMG, MG and DFE. Table 5, Panel A reports the estimates for all three methods and shows a significant result in the short-run: increased government debt adversely affects economic growth in the bivariate model. However, none of these estimates is significant in the long-run. The error correction term has a significant negative sign, which implies that this model converges to a long-run relationship.

Table 5 Panel ARDL Estimation Results (standard errors in parentheses; Panel A, columns (a), is the bivariate model; Panel B, columns (b), adds the control variables)

Variable (in log) | PMG (a) | MG (a) | DFE (a) | PMG (b) | MG (b) | DFE (b)

Long-run:
Government debt | 0.624 (0.355) | 1.027 (1.017) | 0.592 (0.329) | -0.008** (0.237) | -1.244 (0.855) | 0.101 (0.109)
Investment ratio | – | – | – | 1.049 (0.734) | -0.719 (0.978) | 0.429 (0.261)
Human capital | – | – | – | 0.697 (1.999) | 0.945 (2.294) | 2.142*** (0.546)
Trade openness | – | – | – | 1.025 (0.762) | 0.740 (0.698) | 0.477 (0.238)

Short-run:
Government debt | -0.136*** (0.027) | -0.125*** (0.027) | -0.079*** (0.008) | -0.099*** (0.025) | -0.086*** (0.027) | -0.059** (0.008)
Investment ratio | – | – | – | 0.079*** (0.024) | 0.059** (0.024) | 0.114*** (0.017)
Human capital | – | – | – | 0.302 (0.471) | -0.955 (0.614) | 0.021 (0.240)
Trade openness | – | – | – | 0.014 (0.016) | 0.002 (0.009) | 0.012 (0.015)

Error correction term | -0.010*** (0.003) | -0.003 (0.008) | -0.012*** (0.004) | -0.021 (0.018) | -0.186*** (0.044) | -0.036** (0.010)
CD | 2.12 (0.03) | 3.40 (0.00) | 4.68 (0.00) | 2.20 (0.02) | 3.25 (0.01) | 3.95 (0.00)
AR | 8.22** | 9.94*** | 4.18* | 4.01 | 1.93 | 1.9
Residual | I(0) | I(0) | I(0) | I(0) | I(0) | I(0)
RMSE | 0.029 | 0.028 | 0.031 | 0.026 | 0.024 | 0.029
Adjusted R^2 | 0.808 | 0.833 | 0.768 | 0.866 | 0.906 | 0.803
Number of observations | 448 | 448 | 448 | 448 | 448 | 448

The next set of estimates, presented in Table 5, Panel B, uses all the determinants of growth and shows a similar result to the bivariate model. In the short-run, all three estimators show significant negative effects of public debt on economic growth. The investment ratio has a significant positive effect on economic growth. However, although human capital and trade openness have the expected positive signs, they are largely insignificant, except in the case of the human capital proxy variable in the MG estimator. The negative long-run relationship between public debt and growth is significant only in the PMG estimation; the other two estimators show a negative but insignificant sign. Investment and openness in the long-run are signed as expected but not significant. The DFE approach exhibits a significant positive effect of human capital in the long-run. The error correction terms are again negative and significant, showing convergence in the long-run. Among all of the error correction results, the highest speed of adjustment, 18.6% (-0.186), is obtained from the MG estimator in the second model, implying a correction of 18.6% of the disequilibrium per period.
As stated before, we expect the PMG estimator to be the best approach. PMG allows the short-run responses to differ across countries, while restricting the long-run coefficients to be homogeneous. One advantage of using the PMG is that, for a relatively small cross-section of data (14 countries), it is less sensitive to the existence of outliers (Pesaran et al. ). In addition, the problem of serial autocorrelation can be corrected simultaneously. A further benefit of using panel ARDL with sufficient lags is a reduction of the problem of endogeneity (Pesaran and Smith 1999), which has been a concern in the recent debt–growth literature. This chosen estimator is valid only if the long-run homogeneity restriction is not rejected. As can be seen from Table 5, Panel B, the homogeneity restriction is efficient and significant under this hypothesis. Moreover, the Hausman test for the first and second models reveals a preference for the PMG approach. The residuals are I(0), suggesting the regressions are not spurious. Despite the significant results for the variables of interest, the ARDL method disregards contemporaneous correlation across countries caused by unobserved common factors. Ignoring these factors can lead to inconsistent parametric and non-parametric estimators (Baltagi ). This is shown by the Pesaran CD test result, which indicates a high degree of cross-sectional dependence in the error term and clearly rejects the null of weak cross-sectional dependence. The contemporaneous correlation is expected to diminish once the common correlated effects model is introduced. Table 6 reports the CCEPMG and CCEMG estimators for the bivariate and multivariate models. In the bivariate model, the CCEPMG estimator shows significant results in the short-run, the long-run and the error correction term.
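The CD test mentioned above is computed from the pairwise correlations of the estimated residuals across countries. A minimal sketch of Pesaran's CD formula (pure Python; the data below are illustrative, not from the paper):

```python
from math import sqrt

def pearson(u, v):
    """Sample Pearson correlation of two equal-length series."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sqrt(sum((a - mu) ** 2 for a in u))
    sv = sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

def pesaran_cd(residuals):
    """Pesaran CD statistic. `residuals` is a list of N residual series,
    one per cross-sectional unit, each of length T. Under the null of
    weak cross-sectional independence, CD is asymptotically N(0, 1)."""
    N, T = len(residuals), len(residuals[0])
    rho_sum = sum(pearson(residuals[i], residuals[j])
                  for i in range(N - 1) for j in range(i + 1, N))
    return sqrt(2 * T / (N * (N - 1))) * rho_sum
```

A large |CD| value, as reported for most specifications in Tables 5 and 6, rejects the null and motivates the common correlated effects (CCE) correction.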
In the multivariate model, both estimators show a significant negative debt relationship in the long-run, while neither is significant in the short-run, although the error correction terms remain negative and are much larger in absolute value. The investment ratio is positively associated with growth in both the short-run and the long-run in the CCEPMG results, showing that it is a key determinant of economic growth.

Table 6 Panel ARDL Estimation with Common Correlated Effects

| | Variable (in log) | CCEPMG (a) | CCEMG (a) | CCEPMG (b) | CCEMG (b) |
|---|---|---|---|---|---|
| Long-run | Government debt | −0.140*** (0.032) | −5.460 (5.175) | −0.105*** (0.015) | −0.206*** (0.055) |
| | Investment ratio | | | 0.079*** (0.024) | 0.035 (0.081) |
| | Human capital | | | 0.303 (0.201) | −0.789 (0.477) |
| | Trade openness | | | 0.093*** (0.028) | 0.096 (0.064) |
| Short-run | Government debt | −0.081*** (0.018) | −0.051*** (0.019) | 0.003 (0.014) | 0.002 (0.025) |
| | Investment ratio | | | 0.036** (0.018) | 0.021 (0.029) |
| | Human capital | | | −1.094 (0.780) | −0.826 (1.124) |
| | Trade openness | | | −0.054*** (0.010) | −0.059** (0.024) |
| | Error correction term | −0.164*** (0.029) | −0.259*** (0.038) | −0.704*** (0.060) | −0.746*** (0.103) |
| | CD | −2.63 (0.00) | −2.05 (0.04) | −0.99 (0.32) | −1.65 (0.09) |
| | AR | 8.55** | 12.25*** | 1.13 | 16.84*** |
| | Residual | I(0) | I(0) | I(0) | I(0) |
| | RMSE | 0.023 | 0.022 | 0.018 | 0.016 |
| | Adjusted R² | 0.408 | 0.512 | 0.700 | 0.828 |
| | Number of observations | 448 | 448 | 448 | 448 |

Note: columns (a) refer to the bivariate model; columns (b) to the multivariate model. Standard errors in parentheses (p values in parentheses for the CD test).

By contrast, the human capital coefficient is not significant in this estimation and has a negative sign in the short-run. This result is somewhat surprising given that human capital is widely regarded as an important driver of economic growth. One possible explanation, along the lines of Van Leeuwen ( ), is that average years of schooling is an imperfect measure of human capital; he argues that this variable cannot capture the increased efficiency in the economy resulting from education. Moreover, since this variable is not expressed in monetary units, it is not comparable with the capital stock formation measured in monetary units. The CCEPMG and CCEMG estimators show that trade openness is significantly negative in the short-run, which may reflect trade liberalisation undermining domestic production through import competition; see Gries and Redlin ( ). However, trade openness is positively associated with economic growth in the long-run, as shown by the CCEPMG estimator. All four estimations show a negative and significant error correction term, supporting the evidence of a long-run relationship. When a deviation from the long-run exists, the speed of adjustment to the long-run equilibrium is given by the absolute value of the error correction term. In the bivariate model, deviations can be corrected at a rate of 16.4% (−0.164) per period according to CCEPMG and 25.9% (−0.259) according to CCEMG. In the multivariate growth model, the speed of adjustment is much higher, at 70.4% (−0.704) and 74.6% (−0.746) according to CCEPMG and CCEMG respectively. The residual tests indicate I(0) for all estimations; it is worth noting that the CCE estimator remains valid even in the presence of serial correlation in the error term (Pesaran ). However, except for CCEPMG, the results still suffer from cross-sectional dependence.
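The quoted adjustment speeds can be translated into half-lives of a deviation from the long-run equilibrium. A small illustrative calculation (our own arithmetic, not from the paper), under the usual interpretation that a fraction |ECT| of the gap closes each period:

```python
from math import log

def half_life(ect):
    """Periods until half of a deviation from the long-run equilibrium
    is eliminated, given a negative error correction term `ect`."""
    return log(0.5) / log(1 + ect)

# CCEPMG, bivariate model: 16.4% of the gap closes per period
print(round(half_life(-0.164), 2))   # roughly 3.9 periods
# CCEPMG, multivariate model: 70.4% per period
print(round(half_life(-0.704), 2))   # under one period
```

So the bivariate estimate implies roughly four years to close half of a shock, while the multivariate CCE estimates imply adjustment well within a single year.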
In order to be a valid estimator, CCEMG should satisfy two requirements: (i) the number of cross-section averages should be at least equal to the number of unobserved common factors, and (ii) sufficient lags of the cross-section averages should be included; see Chudik and Pesaran ( ). However, including more lags of the averaged variables is not desirable in our case because of the relatively small sample size. The CCEPMG estimator is chosen as the preferred approach because of the econometric theory behind it and the significance of the outcomes in both models. Moreover, this estimator is correctly specified, without problems of autocorrelation or cross-sectional dependence. In general, the results point to the detrimental consequences of increased public debt for economic growth.

The results for the asymmetric panel ARDL are reported in Table 7. To find the appropriate lag length in the estimation the general-to-specific method was used, and the Wald test is employed to examine whether there is an asymmetric short-run and long-run response of economic growth to changes in government debt. The PMG estimator cannot distinguish an asymmetric link in the short-run, as the coefficients of government debt (+) and government debt (−) both exhibit a negative sign. The Wald test cannot reject the null hypothesis of a symmetric link in either the short-run or the long-run: while the sign shows a change in magnitude in the short-run, it is statistically insignificant, and there is also a clear rejection of the existence of an asymmetric relationship in the long-run. Using the CCEPMG, the null of long-run symmetry cannot be rejected, indicating that the sign of a change in government debt does not affect the long-run relationship, with the same convergence rate defining long-run growth; in the short-run, however, there is significant asymmetry in the CCEPMG estimator.
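The asymmetric specification tested here separates debt increases from debt decreases by splitting the debt series into partial sums of its positive and negative changes, in the style of the Shin–Yu–Greenwood-Nimmo NARDL approach. A minimal sketch of that decomposition (pure Python; data and variable names are illustrative):

```python
def partial_sums(x):
    """Decompose a series into cumulative positive and negative changes,
    so that pos[t] + neg[t] = x[t] - x[0] for every t."""
    pos, neg, p, n = [], [], 0.0, 0.0
    for prev, cur in zip(x, x[1:]):
        d = cur - prev
        p += max(d, 0.0)   # accumulate debt increases
        n += min(d, 0.0)   # accumulate debt decreases
        pos.append(p)
        neg.append(n)
    return pos, neg

debt = [40.0, 43.0, 42.0, 45.0]       # hypothetical debt/GDP ratios
pos, neg = partial_sums(debt)
# pos = [3.0, 3.0, 6.0]; neg = [0.0, -1.0, -1.0]
```

The two partial-sum series then enter the ARDL regression as separate regressors, government debt (+) and government debt (−), whose coefficients are compared by the Wald tests (WLR, WSR) reported in Table 7.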
The stationarity of the residuals indicates that the results of the Wald test are not spurious, and the CCEPMG estimator controls for cross-sectional dependence. In addition, the error correction term is negative. Although the CCEPMG approach rejects the null of the Wooldridge test of no autocorrelation, the CCE method remains valid in the presence of autocorrelation in the error term (Drukker ; Pesaran ).

Table 7 Panel Asymmetric ARDL Estimation

| | Variable (in log) | PMG | CCEPMG |
|---|---|---|---|
| Long-run | Government debt (+) | −0.091 (0.287) | −0.132*** (0.030) |
| | Government debt (−) | 0.525 (0.515) | −0.098*** (0.035) |
| | Investment ratio | 1.449 (1.005) | 0.046 (0.030) |
| | Human capital | 2.059 (1.470) | 0.552** (0.280) |
| | Trade openness | 1.206 (0.932) | 0.137*** (0.035) |
| Short-run | Government debt (+) | −0.125*** (0.048) | −0.012 (0.017) |
| | Government debt (−) | −0.056 (0.031) | 0.048* (0.028) |
| | Investment ratio | 0.089*** (0.024) | 0.056** (0.022) |
| | Human capital | 0.105 (0.453) | −2.335*** (1.147) |
| | Trade openness | −0.001 (0.011) | −0.077*** (0.013) |
| | Error correction term | −0.017 (0.013) | −0.711*** (0.074) |
| | CD | 1.68 (0.09) | −0.74 (0.45) |
| | AR | 2.36 | 5.24** |
| | Residual | I(0) | I(0) |
| | RMSE | 0.029 | 0.017 |
| | Adjusted R² | 0.890 | 0.752 |
| | WLR (p-value) | 1.04 (0.309) | 0.42 (0.520) |
| | WSR (p-value) | 1.04 (0.307) | 2.83* (0.094) |
| | Number of observations | 448 | 448 |

Note: standard errors in parentheses (p values in parentheses for the CD test and the Wald tests).

The investment ratio is positive and significant in the short-run analysis but does not show a significant association with economic growth in the long-run. Human capital shows a positive but insignificant result in the long-run and a mix of negative and positive results in the short-run. Again, trade openness exhibits a negligible negative impact in the short-run but a much higher, positive though insignificant, influence in the long-run. The negative impact of public debt in the short-run implies that lower debt accumulation will lead to higher economic growth, while a 1% rise in government debt will lower economic growth by 0.012 to 0.125 percentage points. In the long-run, the magnitude across the two regimes is somewhat higher, in the region of 0.091 to 0.132 percentage points, indicating that an increase in public debt will lead to a significant adverse effect on economic growth.

6 Conclusions

The issue of public debt and its impact on economic growth has been an important topic of debate amongst academics and policymakers. This research contributes to the public debt–growth literature by focusing on a selection of Asian countries that typically have public debt to GDP ratios well below those in developed countries. In general, our results suggest that public debt has a detrimental effect on economic growth, indicating that the idea that the negative effects of public debt kick in only at public debt to GDP ratios of 90% or more may not apply to the Asian economies. Our main findings can be summarised as follows: (i) there is a negative effect of the public debt ratio on economic growth, both in the short-run and the long-run; (ii) the negative relationship is more significant when we use common correlated factors to address the issue of cross-sectional dependence; (iii) an asymmetric response of a change in public debt is found to be significantly negative in the short-run.
As such, rises in public debt negatively affect economic growth in the short-run, but falls in public debt do not have a correspondingly positive effect on economic growth in the short-run.

The failure of the initial cointegration tests of Pedroni ( ) and Westerlund ( ) to detect a long-run relationship led us to resort to more advanced methods to examine the relationship. Using the panel ARDL approach, increased public debt can be shown to negatively affect economic growth in both the short and long run. This result does not change when allowing for common correlated effects in the analysis. An asymmetric response of a change in public debt is significant only in the short-run; that is, an increase in public debt will have a negative effect on growth in the short-run, while a decrease in public debt will not have a correspondingly positive short-run impact on economic growth, though it is likely to do so in the long-run. Our negative results for a set of countries that have relatively low public debt to GDP ratios complement those of Pattillo et al. ( ) and Fall et al. ( ), who show the existence of quite low debt thresholds in emerging countries.

Our results may also have some interesting policy implications. Firstly, there is a need to examine why increases in the public debt to GDP ratio have a negative effect on GDP in these economies. Is it because the increase in public debt is used to finance projects of little worth to future economic growth? Or because it crowds out productive private investment? Or is it because the increase in public debt has benefitted a few elites at the expense of increasing the debt burden on the rest of the population? The answer may be that a mixture of all three elements comes into play.
Secondly, the countries concerned should consider putting in place institutional improvements and control mechanisms to ensure that increases in public expenditure that raise public debt explicitly consider the likely impact on future economic growth. This could mean that much-needed infrastructure projects are given priority over projects with little economic value added, such as increased military and defence expenditure. Finally, there could be a greater focus in these countries on public sector expenditure evaluation; for instance, increases in public sector bureaucracy may not be as useful in promoting economic growth as greater public sector expenditure on improving health and education systems. The paybacks from these various types of government expenditure should be explicitly modelled so as to increase the probability of creating a positive link between increasing public debt and economic growth.

Acknowledgements We are extremely grateful to two anonymous referees for their extensive set of comments and suggestions that led to a notable improvement in the paper. We are also grateful to participants in the European Economics and Finance Society Seventeenth Annual Conference held at City, University of London. In particular, we are grateful to Kwan Choi, Pasquale Sgro and Leonor Modesto for their very helpful feedback.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
2 Days: debugging WACV reconstruction

[Work Log] November 18, 2013

Now that the gradient is working, let's try using fminunc to optimize indices. Getting nonsense results. Looking into curve data.

Inspecting the data, it's clear that our method of linearly sampling points along the bezier curve is resulting in very jagged curves. Example in dataset 8, curve 7 (images: seen in view 9 and view 4).

It's not totally clear how best to resolve this. Ideally, we would sample at a finer grain, but this caused big slow-downs for the longer curves. Could use coarse-grain sampling for long curves, but some curves are long in some views and short in others, and nonuniform sampling breaks some assumptions we make in the library. Furthermore, associations are unknown at the time of sampling.

It's possible the bad reconstruction we're seeing from this curve isn't due to bad correspondence, but a bad indexing estimation (a later stage of inference). We see that although the correspondence places c7v9 toward the end of the 3D curve, our re-indexing code places it more spread out, but unevenly: the first point has index 4, while the subsequent points have indices [21, 23, 27, 27]. We usually prevent large amounts of index-skipping during re-indexing, but possibly the second-pass refinement is destroying this.

After double-checking, realized view #5 is the biggest problem child, not #7. Interestingly, #5 has relatively reasonable-looking correspondence. But the reconstruction (both attached and free) is terrible: ll_means has really bad points at the beginning and end of the curve. It looks like tails might be handled poorly in corr_to_likelihood_2.m.

Attempting to run with "FIX_WORLD_T = false" in corr_to_likelihood. Early probes suggest this improves things; however, now getting a crash when calling construct_attachment_covariance.m. Getting NaN's from curve #8. I've observed that curve #8's indices start with two zeros.
Maybe this is causing our covariance algorithm to choke?

Got it: when we don't correct endpoints (i.e. FIX_WORLD_T = false), endpoints can get repeated, which makes our "initial direction" computation fail.

• Find out why duplicate endpoints occur (doesn't triangulation make this improbable?)
• Handle duplicated points gracefully when computing the start point.

Spun off ../correspondence/corr_to_likelihood_2.m. Added some plotting code to see data vs. smoothed reconstruction. Reconstruction is particularly ugly for curve #5. At first glance, it looks like end points are poorly localized, and then are preserved through the smoothing pass. But the last 12 points which are poorly localized have quite a few correspondences, according to the correspondence table. (Pre-smoothed reconstruction shown in green.)

Tried using GP smoothing (implemented in reconstruction/gp_smooth_curve.m) instead of matlab's csaps(), but I'm getting weird tails. Must be a bug in how I'm computing the covariance (which I'm doing by hand, since the necessary fields aren't constructed at that stage of the pipeline). I've been over it a few times... time to sleep and look again in the morning.

Eventually, a chicken/egg approach might be the solution: optimize points, optimize indices, repeat. It's clear that smarter smoothing doesn't improve reconstruction of curve #5.

• plot projected reconstruction vs. data
• plot camera centers

Is it all bad cameras? Why do others reconstruct okay?

Test: incrementally reconstruct, adding one curve each time. Reverse mode: started with views 9 & 8, then added 7, 6, etc. Goes to pot at curve 4, gets progressively worse through curve 1.

The purpose of corr_to_likelihood is to correct position and indexing errors made during naive triangulation. Let's test if that's happening by viewing post-correction results. Dramatic improvement over pre-correction results. Still significant change after adding curve 2, and esp. after curve 1. Last 2 points and first 4 (?)
are problematic.

Trying camera recalibration. Wrote a quick 7-point calibration: cal_cam_dlt.m. Also wrote a camera visualization routine: visualization/draw_camera.m. The newly calibrated results are clearly different, but not obviously better, at least by inspection.

Posted by Kyle Simek
Full description Wcalc is a very capable calculator. It has standard functions (sin, asin, and sinh for example, in either radians or degrees), many pre-defined constants (pi, e, c, etc.), support for using variables, "active" variables, a command history, hex/octal/binary input and output, unit conversions, embedded comments, and an expandable expression entry field. It evaluates expressions using the standard order of operations.
Deformation Theory of Algebras and related topics

- Project/Area Number: 14540032
- Research Category: Grant-in-Aid for Scientific Research (C)
- Allocation Type: Single-year Grants
- Section: General
- Research Field: Algebra
- Research Institution: Hiroshima University (2004); Kyushu Institute of Technology (2002–2003)
- Principal Investigator: KUBO Fujio, Hiroshima University, Graduate School of Engineering, Professor (80112168)
- Project Period (FY): 2002–2004
- Project Status: Completed (Fiscal Year 2004)
- Budget Amount: ¥1,500,000 (Direct Cost: ¥1,500,000)
  - Fiscal Year 2004: ¥500,000 (Direct Cost: ¥500,000)
  - Fiscal Year 2003: ¥600,000 (Direct Cost: ¥600,000)
  - Fiscal Year 2002: ¥400,000 (Direct Cost: ¥400,000)
- Keywords: algebraic deformation theory / Gerstenhaber / Yamaguchi cohomology / Lie triple system / inverse problem / cohomology / linear inverse problem

Research Abstract

Algebraic deformation theory was founded by Gerstenhaber in 1963. It has since become one of the most important theories and a core tool for investigating mathematical physics. In this project I have mainly worked on the deformation of Lie triple systems (Lts).

Achievements

(1) I have laid the foundation of the deformation theory of Lts'. I defined the basic concepts such as infinitesimal, equivalence, rigid and integrable for a deformation of a Lts. The most important discovery is that Yamaguchi cohomology theory is well suited for the investigation of the deformation theory of Lts'.

(2) Understanding the difference between perturbation and deformation has been known to be very difficult. My answer is as follows: in a perturbation any calculation is done with the original product, whereas in the deformed world it is done with the deformed product.

Accomplishment

This project has been basically carried out through discussions with Professor Gerstenhaber. I visited him at the University of Pennsylvania in September 2003 and he visited me at Hiroshima University in March 2005.
Main subjects of our discussions were as follows: 1) Yamaguchi cohomology; 2) the history of deformation theory based on the mathematicians Riemann, Kodaira–Spencer and Gerstenhaber; 3) my idea of applying deformation theory to inverse problems; and 4) deformation theory in quantum mechanics.

Problems and the way forward

In this project, I have just shown the starting point for research on the deformation theory of Lts'. I know that there are many obstructions to going forward. Even for the definition of the representation of a Lts there are at least three proposals, which seem to be different to me. That of Yamaguchi is not the ordinary one and requires more work. As for inverse problems in engineering, I believe that deformation theory will play a very important role.
Is there really no paper proposing a unified equation for quantum gravity?

2182 views

Modified from https://physics.stackexchange.com/questions/751787 (Completely rewritten.)

Obviously, there is no equation yet for quantum gravity. But there are also surprisingly few papers that claim that such an equation must exist at all. A rumor I heard recently was:

> "No paper really claims that a (yet unknown) equation for quantum gravity must exist"

Google Scholar found only three candidates. I know about a few (popular) books that make the claim that an equation exists, but I found only 3 papers that state that an equation for quantum gravity must exist. (Even string theory is unclear on the matter...) Are researchers in quantum gravity looking for an equation of motion at all? Or did researchers stop believing that such an equation is possible?

Attempt one was the WdW equation, with its story told in https://arxiv.org/abs/1506.00927. It was falsified. The second is "Time and a Physical Hamiltonian for Quantum Gravity", Viqar Husain and Tomasz Pawłowski, Phys. Rev. Lett. 108, 141301 (2012). It did not work out, but I admire their courage. None of the 185 papers that cited it continues to explore evolution equations. The third is "Proposal for a new quantum theory of gravity III: Equations for quantum gravity, and the origin of spontaneous localisation", Palemkota Maithresh and Tejinder P. Singh, https://arxiv.org/abs/1908.04309. This is rather new, and the equation they propose has yet to be explored and tested by other people. Again, I admire their courage.

Most voted comments:

In most textbooks on QFT you also don't find equations of motion for QED, the most accurate theory we have.
The reason is that these equations do not make sense without renormalization, and renormalization is at present (except in toy models in lower dimensions) never done on the level of equations of motion but only on the level of derived information such as correlation functions, S-matrix elements, etc. Thus it is unreasonable to expect that in quantum gravity the situation would be different.

In my university, we do write the equations of QED on the blackboard. They are simple and short. A quick search shows them everywhere, e.g. just before section 8.3.3. in https://

The link in your comment goes to 404. Not found.

Equations purported to describe QED in fact do not describe it. It has been known since 1928 that they are inconsistent because of the lack of renormalization, and, as Vladimir Kalitvianski remarked, because they do not take into account the infrared problems of real QED. The latter requires treating the electrons as infraparticles.

@Klaus: You don't know what you are talking about. Wikipedia is not always correct in its interpretation - it doesn't say that the equations derived are only semiclassical equations, not QED! I augmented my answer below to address your reference to Wikipedia in more detail.

@Klaus: Your revised link still only gets classical field equations! It is said explicitly that 'The first equation is the Dirac equation in the electromagnetic field and the second equation is a set of Maxwell equations' (note that the Maxwell equations are equations for the classical electromagnetic field, and the Dirac equation is an equation for a single particle or for a classical field, but not for a quantum field!). The corresponding equations for field operators are known to be inconsistent - the reason why renormalization is needed!

Most recent comments:

@Klaus: What is the use of these equations if they are inconsistent, if they do not give physical solutions without additional terms (counter-terms) and resummations of soft diagrams?
They alone are not QED equations.

Writing down equations of motion is unreasonable for interacting quantum field theories in 4-dimensional spacetimes. Perhaps you meant writing down Lagrangians rather than equations of motion? For QED, the Lagrangian (unlike valid equations of motion for the fields) can be written down, and is probably found in every textbook on quantum field theory. But this can also be done for canonical gravity; see, e.g., equation (28) on p.22 of

• Burgess, C. P. (2004). Quantum gravity in everyday life: General relativity as an effective field theory. Living Reviews in Relativity, 7, 1-56.

By formal variation, one can obtain classical equations of motion, but for the QED Lagrangian, these (given, e.g., in Wikipedia, without any caveat about their meaning) describe the dynamics of a single Dirac particle coupled to a classical electromagnetic field. To obtain QED one would have to quantize these equations, but this introduces ultraviolet and infrared divergences that show that the operator versions of the classical equations are inconsistent. The necessary renormalization destroys their validity even perturbatively, where QED is very successful. Similarly, the Lagrangian for classical gravity (or variations of it) produces equations of motion for the classical gravitational field, not for its quantum version. Their quantum interpretation (and indeed all of quantum gravity beyond its semiclassical approximation) is fraught with difficulties. See Chapter B8 ("Quantum gravity") of my Theoretical Physics FAQ.

There is also a Lagrangian of classical gravity coupled to the Dirac and the electromagnetic field. Its variation produces the Einstein–Maxwell–Dirac equations. They describe the dynamics of a single Dirac particle coupled to classical gravity and a classical electromagnetic field. To obtain quantum gravity one would have to quantize these equations. Nobody knows how to do it.
The same problems as in QED arise, but the nonrenormalizability (according to power counting) poses additional problems.
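For reference, the QED Lagrangian mentioned above can be written down explicitly. The following block is my addition for concreteness, in one common textbook sign convention:

```latex
\mathcal{L}_{\mathrm{QED}}
  = \bar\psi\left(i\gamma^\mu D_\mu - m\right)\psi
  - \frac{1}{4}\,F_{\mu\nu}F^{\mu\nu},
\qquad
D_\mu = \partial_\mu + i e A_\mu,
\qquad
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu .
```

Formal variation with respect to \(\bar\psi\) and \(A_\mu\) yields the coupled Dirac and Maxwell equations referred to in the comments; these are classical field equations, and it is their naive operator versions that are inconsistent without renormalization.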
Multiple Choice
Identify the choice that best completes the statement or answers the question.
1. What is this chart measuring?
a. How fun swimming is
b. Summer Vacation Activities
c. How many things people do
d. Who likes to hike
2. How many more people liked grilled cheese than ham & cheese sandwiches?
3. How many more people liked peanut butter and jelly than ham & cheese sandwiches?
4. Which 2 sandwiches are liked the most?
a. Peanut Butter & Jelly and Grilled Cheese
b. Peanut Butter & Jelly and Ham & Cheese
c. Grilled Cheese and Ham & Cheese
5. How many people told which sandwich they liked?
6. Which is the least favorite summer vacation activity?
a. swimming
b. fishing
c. kicking
d. biking
7. How many more people like to hike than fish?
8. What is the favorite summer activity?
a. swimming
b. fishing
c. hiking
d. biking
9. How many people answered the question of what they like to do on their summer vacation?
10. How many total people liked grilled cheese or ham & cheese sandwiches the most?
What are decoders and encoders?

The combinational circuits that compress 2^N input lines into N output lines are known as Encoders. The combinational circuits that convert N input lines into 2^N output lines are called Decoders.

How many inputs will a 3 bit input encoder have?
8 inputs. 8 : 3 Encoder (Octal to Binary) – The 8 to 3 Encoder, or octal-to-binary encoder, consists of 8 inputs, Y7 to Y0, and 3 outputs, A2, A1 & A0. Each input line corresponds to one octal digit, and the three outputs generate the corresponding binary code.

How many inputs are in an encoder?
2^n input lines. An Encoder is a combinational circuit that performs the reverse operation of a Decoder. It has a maximum of 2^n input lines and 'n' output lines. It will produce a binary code equivalent to the input, which is active High. Therefore, the encoder encodes 2^n input lines with 'n' bits.

What are encoders and decoders, with an example?
Encoders convert 2^N lines of input into a code of N bits, and Decoders decode the N bits into 2^N lines.
1. Encoders – An encoder is a combinational circuit that converts binary information in the form of 2^N input lines into N output lines, which represent an N-bit code for the input.

What is an encoder and its application?
Encoder Applications. Encoders translate rotary or linear motion into a digital signal. That signal is sent to a controller, which monitors motion parameters such as speed, rate, direction, distance, or position.

Where are encoders used?
Specifically, it is a device that is powered by a motor. For example, encoders are widely used in industrial robots used in factories such as assembly robots, welding robots, automatic guided machines, and machining centers.

What are the uses of decoders?
Decoder Applications
• Decoders are used to input data to a specified output line, as is done in addressing core memory, where input data is to be stored in a specified memory location.
• They are used in code conversions.
• In high-performance memory systems, a decoder can be used to minimize the effects of system decoding.

How many inputs and outputs does a 3 line decoder have?
This type of decoder is called the 3-line to 8-line decoder because it has 3 inputs and 8 outputs. To decode the three inputs into eight outputs, we require eight logic gates, and to design this type of decoder we have to consider that we require active-high outputs.

How do you implement a 3 to 8 decoder circuit?
From the above Boolean expressions, the implementation of the 3 to 8 decoder circuit can be done with the help of three NOT gates & 8 three-input AND gates. In the above circuit, the three inputs can be decoded into 8 outputs, where every output represents one of the minterms of the three input variables.

What are encoders?
Encoders – An encoder is a combinational circuit that converts binary information in the form of 2^N input lines into N output lines, which represent an N-bit code for the input. For simple encoders, it is assumed that only one input line is active at a time. As an example, let's consider an Octal to Binary encoder.

How many outputs does a 4 bit decoder have?
The four-bit decoder allows only four outputs, A0, A1, A2, A3, generated from its two inputs F0, F1, as shown in the below diagram. In digital electronics, the binary decoder is a combinational logic circuit that converts a binary integer to the associated pattern of output bits.
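As a concrete illustration of the 2^N-to-N relationship described above, here is a small Python sketch of a one-hot 4-to-2 encoder and a 2-to-4 decoder built directly from their truth tables (the function names are mine, not from any particular library):

```python
def encoder_4_to_2(inputs):
    """One-hot 4-to-2 encoder: exactly one of the 4 input lines is high,
    and the output is the 2-bit binary index of that line."""
    assert sum(inputs) == 1, "simple encoders assume one active input at a time"
    index = inputs.index(1)
    return (index >> 1) & 1, index & 1  # (A1, A0)

def decoder_2_to_4(a1, a0):
    """2-to-4 decoder: the 2-bit input selects which of the 4 output
    lines is driven high."""
    index = (a1 << 1) | a0
    return [1 if i == index else 0 for i in range(4)]

# Decoding a code word and then re-encoding it reproduces the original bits,
# showing the two circuits are inverses of each other.
for code in range(4):
    a1, a0 = (code >> 1) & 1, code & 1
    assert encoder_4_to_2(decoder_2_to_4(a1, a0)) == (a1, a0)
```

The same truth-table construction scales to the 8 : 3 (octal-to-binary) encoder and 3-to-8 decoder discussed above.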
What is Bayes' Theorem?
Bayes' Theorem is a fundamental principle in the field of probability and statistics that describes how to update the probability of a hypothesis based on new evidence. Named after the Reverend Thomas Bayes, this theorem provides a mathematical framework for reasoning about uncertainty and making inferences. It is particularly useful in various applications, including data analysis, machine learning, and decision-making processes. The theorem is expressed mathematically as P(A|B) = [P(B|A) * P(A)] / P(B), where P(A|B) is the posterior probability, P(B|A) is the likelihood, P(A) is the prior probability, and P(B) is the marginal probability.

The Components of Bayes' Theorem
Understanding Bayes' Theorem requires a grasp of its key components. The prior probability, P(A), represents the initial belief about the hypothesis before observing any evidence. The likelihood, P(B|A), quantifies the probability of observing the evidence given that the hypothesis is true. The marginal probability, P(B), serves as a normalization factor, ensuring that the posterior probabilities sum to one. Finally, the posterior probability, P(A|B), is the updated belief about the hypothesis after taking the evidence into account. These components work together to allow statisticians and data scientists to refine their predictions and improve their models.

Applications of Bayes' Theorem in Data Science
Bayes' Theorem has numerous applications in data science, particularly in the realm of predictive modeling and classification tasks. For instance, it forms the basis of Bayesian inference, a method that allows data scientists to update their beliefs about model parameters as new data becomes available. This is particularly advantageous in scenarios where data is scarce or noisy, as it enables practitioners to incorporate prior knowledge and improve the robustness of their models.
Additionally, Bayes’ Theorem is instrumental in developing algorithms such as Naive Bayes classifiers, which are widely used for text classification, spam detection, and sentiment analysis. Bayesian vs. Frequentist Approaches The distinction between Bayesian and Frequentist approaches to statistics is crucial for understanding the implications of Bayes’ Theorem. While Frequentist methods rely on long-run frequencies and do not incorporate prior beliefs, Bayesian methods allow for the integration of prior knowledge into the analysis. This difference leads to varying interpretations of probability: Bayesian probability is subjective and represents a degree of belief, whereas Frequentist probability is objective and based on the long-term behavior of random processes. Consequently, the choice between these approaches can significantly impact the results and interpretations of statistical analyses. Bayesian Networks and Their Significance Bayesian networks are graphical models that utilize Bayes’ Theorem to represent and reason about uncertain relationships among variables. These networks consist of nodes, which represent random variables, and directed edges, which indicate conditional dependencies. By leveraging Bayes’ Theorem, Bayesian networks enable the computation of joint probabilities and facilitate inference in complex systems. They are widely used in various fields, including bioinformatics, finance, and artificial intelligence, for tasks such as diagnosis, prediction, and decision support. Challenges in Applying Bayes’ Theorem Despite its powerful capabilities, applying Bayes’ Theorem can present challenges. One significant hurdle is the selection of appropriate prior probabilities, which can greatly influence the results of the analysis. If the prior is not well-justified, it may lead to biased conclusions. Additionally, calculating the marginal probability, P(B), can be computationally intensive, especially in high-dimensional spaces. 
This complexity often necessitates the use of approximation techniques, such as Markov Chain Monte Carlo (MCMC) methods, to obtain feasible solutions.

Bayesian Inference in Machine Learning
In the context of machine learning, Bayesian inference provides a robust framework for model selection and evaluation. By treating model parameters as random variables, practitioners can quantify uncertainty and make probabilistic predictions. This approach is particularly useful in scenarios where overfitting is a concern, as Bayesian methods inherently incorporate regularization through the prior distribution. Furthermore, Bayesian optimization techniques leverage Bayes' Theorem to efficiently search for optimal hyperparameters, enhancing the performance of machine learning models.

Real-World Examples of Bayes' Theorem
Bayes' Theorem has been successfully applied in various real-world scenarios. For instance, in medical diagnostics, it helps clinicians update the probability of a disease based on test results, leading to more accurate diagnoses. In finance, it aids in risk assessment and portfolio management by allowing analysts to revise their expectations based on new market data. Additionally, in natural language processing, Bayes' Theorem underpins algorithms that classify documents and filter spam, demonstrating its versatility across different domains.

Conclusion: The Importance of Bayes' Theorem in Statistics
Bayes' Theorem is a cornerstone of modern statistics and data analysis, providing a coherent method for updating beliefs in the face of uncertainty. Its applications span a wide range of fields, from healthcare to finance and artificial intelligence, underscoring its significance in both theoretical and practical contexts. As data continues to grow in complexity and volume, the relevance of Bayes' Theorem in guiding decision-making and enhancing predictive accuracy will only increase, making it an essential tool for statisticians and data scientists alike.
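As a concrete illustration of the update rule stated at the start of this article, here is a short Python sketch of Bayes' Theorem with the marginal P(B) expanded by the law of total probability (the function name and probabilities are invented for illustration):

```python
def posterior(prior, likelihood, likelihood_given_not):
    """Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B), where the marginal
    P(B) = P(B|A) P(A) + P(B|not A) P(not A) by total probability."""
    marginal = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / marginal

# A rare hypothesis: prior P(A) = 0.01; the evidence is likely under the
# hypothesis, P(B|A) = 0.9, and unlikely otherwise, P(B|not A) = 0.05.
p = posterior(0.01, 0.9, 0.05)
print(round(p, 3))  # prints 0.154
```

Note how the strong evidence raises the belief from 1% to only about 15%: the low prior dominates, which is exactly the diagnostic-testing effect described in the real-world examples above.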
Capacitors in Series and in Parallel | Introduction to Electricity, Magnetism, and Circuits | Textbooks

By the end of this section, you will be able to:
• Explain how to determine the equivalent capacitance of capacitors in series and in parallel combinations
• Compute the potential difference across the plates and the charge on the plates for a capacitor in a network and determine the net capacitance of a network of capacitors

Several capacitors can be connected together to be used in a variety of applications. Multiple connections of capacitors behave as a single equivalent capacitor. The total capacitance of this equivalent single capacitor depends both on the individual capacitors and how they are connected. Capacitors can be arranged in two simple and common types of connections, known as series and parallel, for which we can easily calculate the total capacitance. These two basic combinations, series and parallel, can also be used as part of more complex connections.

The Series Combination of Capacitors

Figure 4.2.1 illustrates a series combination of three capacitors, arranged in a row within the circuit. As for any capacitor, the capacitance of the combination is related to the charge and voltage by using Equation 4.1.1. When this series combination is connected to a battery with voltage V, each of the capacitors acquires an identical charge Q. To explain, first note that the charge on the plate connected to the positive terminal of the battery is +Q and the charge on the plate connected to the negative terminal is −Q. Charges are then induced on the other plates so that the sum of the charges on all plates, and the sum of charges on any pair of capacitor plates, is zero. However, the potential drop on one capacitor may be different from the potential drop on another capacitor, because, generally, the capacitors may have different capacitances.
The series combination of two or three capacitors resembles a single capacitor with a smaller capacitance. Generally, any number of capacitors connected in series is equivalent to one capacitor whose capacitance (called the equivalent capacitance) is smaller than the smallest of the capacitances in the series combination. Charge on this equivalent capacitor is the same as the charge on any capacitor in a series combination: Q1 = Q2 = Q3 = Q. That is, all capacitors of a series combination have the same charge. This occurs due to the conservation of charge in the circuit. When a charge Q in a series circuit is removed from a plate of the first capacitor (which we denote as C1), it must be placed on a plate of the second capacitor (which we denote as C2), and so on.

(Figure 4.2.1) Figure 4.2.1 (a) Three capacitors are connected in series. The magnitude of the charge on each plate is Q. (b) The network of capacitors in (a) is equivalent to one capacitor that has a smaller capacitance than any of the individual capacitances in (a), and the charge on its plates is Q.

We can find an expression for the total (equivalent) capacitance by considering the voltages across the individual capacitors. The potentials across capacitors C1, C2, and C3 are, respectively, V1 = Q/C1, V2 = Q/C2, and V3 = Q/C3. These potentials must sum up to the voltage of the battery, giving the following potential balance: V = V1 + V2 + V3. The potential V is measured across an equivalent capacitor that holds charge Q and has an equivalent capacitance CS. Entering the expressions for V1, V2, and V3, we get Q/CS = Q/C1 + Q/C2 + Q/C3. Canceling the charge Q, we obtain an expression containing the equivalent capacitance, CS, of three capacitors connected in series: 1/CS = 1/C1 + 1/C2 + 1/C3. This expression can be generalized to any number of capacitors in a series network.
For capacitors connected in a series combination, the reciprocal of the equivalent capacitance is the sum of reciprocals of individual capacitances:

1/CS = 1/C1 + 1/C2 + 1/C3 + ... (Equation 4.2.1)

EXAMPLE 4.2.1 Equivalent Capacitance of a Series Network
Find the total capacitance for three capacitors connected in series, given their individual capacitances. Because there are only three capacitors in this network, we can find the equivalent capacitance by using Equation 4.2.1 with three terms. We enter the given capacitances into Equation 4.2.1 and invert the result to obtain the equivalent capacitance. Note that in a series network of capacitors, the equivalent capacitance is always less than the smallest individual capacitance in the network.

The Parallel Combination of Capacitors

A parallel combination of three capacitors, with one plate of each capacitor connected to one side of the circuit and the other plate connected to the other side, is illustrated in Figure 4.2.2(a). Since the capacitors are connected in parallel, they all have the same voltage V across their plates. However, each capacitor in the parallel network may store a different charge. To find the equivalent capacitance CP of the parallel network, we note that the total charge Q stored by the network is the sum of all the individual charges: Q = Q1 + Q2 + Q3. On the left-hand side of this equation, we use the relation Q = CP V, which holds for the entire network. On the right-hand side of the equation, we use the relations Q1 = C1 V, Q2 = C2 V, and Q3 = C3 V for the three capacitors in the network. In this way we obtain CP V = C1 V + C2 V + C3 V. This equation, when simplified, is the expression for the equivalent capacitance of the parallel network of three capacitors: CP = C1 + C2 + C3. This expression is easily generalized to any number of capacitors connected in parallel in the network. For capacitors connected in a parallel combination, the equivalent (net) capacitance is the sum of all individual capacitances in the network:

CP = C1 + C2 + C3 + ... (Equation 4.2.2)

(Figure 4.2.2) Figure 4.2.2 (a) Three capacitors are connected in parallel. Each capacitor is connected directly to the battery.
(b) The charge on the equivalent capacitor is the sum of the charges on the individual capacitors.

EXAMPLE 4.2.2 Equivalent Capacitance of a Parallel Network
Find the net capacitance for three capacitors connected in parallel, given their individual capacitances. Because there are only three capacitors in this network, we can find the equivalent capacitance by using Equation 4.2.2 with three terms. Entering the given capacitances into Equation 4.2.2 yields the net capacitance. Note that in a parallel network of capacitors, the equivalent capacitance is always larger than any of the individual capacitances in the network.

Capacitor networks are usually some combination of series and parallel connections, as shown in Figure 4.2.3. To find the net capacitance of such combinations, we identify parts that contain only series or only parallel connections, and find their equivalent capacitances. We repeat this process until we can determine the equivalent capacitance of the entire network. The following example illustrates this process.

(Figure 4.2.3) Figure 4.2.3 (a) This circuit contains both series and parallel connections of capacitors. (b) C1 and C2 are in series; their equivalent capacitance is CS. (c) The equivalent capacitance CS is connected in parallel with C3. Thus, the equivalent capacitance of the entire network is the sum of CS and C3.

EXAMPLE 4.2.3 Equivalent Capacitance of a Network
Find the total capacitance of the combination of capacitors shown in Figure 4.2.3. Assume the capacitances are known to three decimal places. Round your answer to three decimal places. We first identify which capacitors are in series and which are in parallel. Capacitors C1 and C2 are in series.
Their combination, labeled CS, is in parallel with C3. Since C1 and C2 are in series, their equivalent capacitance CS is obtained with Equation 4.2.1. This capacitance CS is connected in parallel with the third capacitance C3, so we use Equation 4.2.2 to find the equivalent capacitance of the entire network: C = CS + C3.

EXAMPLE 4.2.4 Network of Capacitors
Determine the net capacitance C of the capacitor combination shown in Figure 4.2.4. When a potential difference V is maintained across the combination, find the charge and the voltage across each capacitor.

(Figure 4.2.4) Figure 4.2.4 (a) A capacitor combination. (b) An equivalent two-capacitor combination.

We first compute the net capacitance C23 of the parallel connection C2 and C3. Then C is the net capacitance of the series connection C1 and C23. We use the relation C = Q/V to find the charges Q1, Q2, and Q3, and the voltages V1, V2, and V3, across capacitors C1, C2, and C3, respectively. The equivalent capacitance for C2 and C3 is C23 = C2 + C3. The entire three-capacitor combination is equivalent to two capacitors in series, so 1/C = 1/C1 + 1/C23. Consider the equivalent two-capacitor combination in Figure 4.2.4(b). Since the capacitors are in series, they have the same charge, Q1 = Q23. Also, the capacitors share the potential difference V, so V = V1 + V23. Now the potential difference across capacitor C1 is V1 = Q1/C1. Because capacitors C2 and C3 are connected in parallel, they are at the same potential difference: V2 = V3 = V − V1. Hence, the charges on these two capacitors are, respectively, Q2 = C2V2 and Q3 = C3V3. As expected, the net charge on the parallel combination of C2 and C3 is Q23 = Q2 + Q3.

CHECK YOUR UNDERSTANDING 4.5
Determine the net capacitance of each network of capacitors shown below. Find the charge on each capacitor, assuming there is a given potential difference across each network.

Candela Citations
CC licensed content, Specific attribution
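The series and parallel rules (Equations 4.2.1 and 4.2.2) are easy to check numerically. Here is a short Python sketch; the helper names and sample capacitances are mine, not the textbook's:

```python
def series(*caps):
    """Equivalent capacitance of capacitors in series:
    1/C_S = 1/C_1 + 1/C_2 + ...  (Equation 4.2.1)."""
    return 1.0 / sum(1.0 / c for c in caps)

def parallel(*caps):
    """Equivalent capacitance of capacitors in parallel:
    C_P = C_1 + C_2 + ...  (Equation 4.2.2)."""
    return sum(caps)

# Illustrative values in microfarads. As the text notes, a series
# equivalent is smaller than the smallest capacitor, and a parallel
# equivalent is larger than the largest.
caps = (1.0, 5.0, 8.0)
assert series(*caps) < min(caps)
assert parallel(*caps) > max(caps)
print(round(series(*caps), 3), parallel(*caps))  # prints 0.755 14.0
```

Mixed networks like Example 4.2.3 reduce to nested calls, e.g. `parallel(series(c1, c2), c3)`.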
Polygons Sentence Examples
• The polygons adopted were of 20 or more sides approximating to a circular form.
• These have faces which are all regular polygons, but not all of the same kind, while all their solid angles are equal.
• We'll also be making sure all the polygons have four vertices (called quads).
• Hero's expressions for the areas of regular polygons of from 5 to 12 sides in terms of the squares of the sides show interesting approximations to the values of trigonometrical ratios.
• The fourth book deals with the circle in its relations to inscribed and circumscribed triangles, quadrilaterals and regular polygons.
• The latter, as we know, calculated the perimeters of successive polygons, passing from one polygon to another of double the number of sides; in a similar manner Gregory calculated the areas.
• As regards the funicular diagram, let LM be the line on which the pairs of corresponding sides of the two polygons meet, and through it draw any two planes w, w.
• The stresses produced by extraneous forces in a simple frame can be found by considering the equilibrium of the various joints in a proper succession; and if the graphical method be employed the various polygons of force can be combined into a single force-diagram.
• Partial Polygons of Resistance. In a structure in which there are pieces supported at more than two joints, let a polygon be con-.
• By constructing several partial polygons, and computing the relations between the loads and resistances which are determined by the application of that theorem to each of them, with the aid, if necessary, of Moseley's principle of the least resistance, the whole of the relations amongst the loads and resistances may be found.
• In drawing these polygons the magnitude of the vector of the type Wr is the product Wr, and the direction of the vector is from the shaft outwards towards the weight W, parallel to the radius r.
• These figures are often termed "semi-regular solids," but it is more convenient to restrict this term to solids having all their angles, edges and faces equal, the latter, however, not being regular polygons.
• They bear a relation to the Platonic solids similar to the relation of "star polygons" to ordinary regular polygons, inasmuch as the centre is multiply enclosed in the former and singly in the latter.
• Although this term is frequently given to the Archimedean solids, yet it is a convenient denotation for solids which have all their angles, faces, and edges equal, the faces not being regular polygons.
• For shaded polygons, the color keyword can specify an array that contains the color index at each vertex.
• Most hardware accelerators will directly render polygons to scenes, therefore freeing up processor time for other tasks.
• Points, lines, polygons, circles, arcs, and smooth curves can be freely intermixed with text.
• Geometrical optics are a useful tool for calculating reflections from the polygons in a 3D database.
• In the same way games were revolutionized with the advent of 3D polygons, they will be revolutionized again with VR.
• The main problem with frequency polygons is deciding what to do with the endpoints.
• These store the image as a set of graphic primitives; shapes such as lines, ellipses, rectangles and polygons.
• Taking the circumference as intermediate between the perimeters of the inscribed and the circumscribed regular n-gons, he showed that, the radius of the circle being given and the perimeter of some particular circumscribed regular polygon obtainable, the perimeter of the circumscribed regular polygon of double the number of sides could be calculated; that the like was true of the inscribed polygons; and that consequently a means was thus afforded of approximating to the circumference of the circle.
• In book v., after an interesting preface concerning regular polygons, and containing remarks upon the hexagonal form of the cells of honeycombs, Pappus addresses himself to the comparison of the areas of different plane figures which have all the same perimeter (following Zenodorus's treatise on this subject), and of the volumes of different solid figures which have all the same superficial area, and, lastly, a comparison of the five regular solids of Plato.
• It is possible to attempt to remove sliver polygons automatically.
• Drawing polygons Use the polygon tool to click points representing the vertices of the required polygon.
• Characters are realistic and not blocky sets of polygons.
• Character design is okay, the Dreamcast wasn't pushing a lot of polygons in this game and there were hardly any cut-scenes to speak of.
• She's a bit ornery in comparison and not as physically capable, but she's my puppy and I wouldn't trade her for all the polygons in the world.
• Even though these are virtual dogs composed of lines of code and polygons, they still tug at your heart and make onlookers coo and whimper at their cuteness.
• However, because the polygons are smaller, the overall look is much less blocky.
• The days of simple entertainment seem to be gone, swallowed by pixels, polygons and processing power.
• Relations between Polygons of Loads and of Resistances. In a structure in which each piece is supported at two joints only, the well-known laws of statics show that the directions of the gross load on each piece and of the two resistances by which it is supported must lie in one plane, must either be parallel or meet in one point, and must bear to each other, if not parallel, the proportions of the sides of a triangle respectively parallel to their directions, and, if parallel, such proportions that each of the three forces shall be proportional to the distance between the other two, all the three distances being measured along one direction.
• Incidentally Pappus describes the thirteen other polyhedra bounded by equilateral and equiangular but not similar polygons, discovered by Archimedes, and finds, by a method recalling that of Archimedes, the surface and volume of a sphere.
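Several of the sentences above describe Archimedes' method of approximating a circle's circumference by inscribed and circumscribed regular polygons, doubling the number of sides at each step. Here is a short Python illustration of the idea; as a modern shortcut it uses trigonometric functions (which presuppose π), so it only illustrates how the polygon bounds squeeze the circle, not how Archimedes actually computed them:

```python
import math

def polygon_bounds(n):
    """Perimeters of the inscribed and circumscribed regular n-gons of a
    unit-radius circle, giving lower/upper bounds on the circumference 2*pi."""
    inscribed = 2 * n * math.sin(math.pi / n)
    circumscribed = 2 * n * math.tan(math.pi / n)
    return inscribed, circumscribed

# Doubling the number of sides squeezes the bounds toward 2*pi.
# Archimedes famously stopped at the 96-gon.
for n in (6, 12, 24, 48, 96):
    lo, hi = polygon_bounds(n)
    assert lo < 2 * math.pi < hi
```

For the hexagon the inscribed perimeter is exactly 6 (each side equals the radius), which is the classical starting point of the doubling procedure.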
Precalculus - Online Tutor, Practice Problems & Exam Prep Welcome back, everyone. So, up to this point, we have spent a lot of time talking about trigonometric functions, the Pythagorean theorem, and how they all relate to the right triangle. Now what we're going to be learning about in this video is some of the special and common right triangles that you're going to see. Specifically, we're going to be talking about the 45-45-90 special triangle. The reason these triangles are special is because they show up relatively frequently, there are actually some shortcuts that you can use to solve these triangles very fast. So if you don't like all the brute force work we've been doing with trigonometric functions and the Pythagorean theorem, you're going to learn some shortcuts for solving these triangles in this video. So without further ado, let's get right into things. Now, when you have a triangle with 45-degree angles, like this triangle down here, for example, this is going to be a situation where you have the special 45-45-90 triangle. In these triangles, the two legs of the triangle are always going to be the same length. So if you ever see a situation where you have a right triangle and two of the legs are the same, that means you're dealing with this special triangle. What we can do with this is we can actually solve for the hypotenuse of the triangle by simply taking a multiple of the leg length. And the multiple that you're going to look for is the square root of 2. Because if you take a leg, like 5, and you multiply it by the square root of 2, this will give you the hypotenuse. And that's the answer. We just solved for the long side of this triangle. So as you can see, this shortcut right here makes solving for the sides of the triangle really straightforward and fast. Now, if you didn't remember this relationship, there is another strategy you can use, which is simply the long version of using the Pythagorean theorem. 
So let's say that we set this side to a, that side to b, and then the hypotenuse is equal to c, and we want to solve for the hypotenuse. Well, you could say that \( a^2 + b^2 = c^2 \), that's the Pythagorean theorem. And in this case, we said a and b are both 5. So we have \( 5^2 + 5^2 = c^2 \), and \( 5^2 \) is 25. So we have 25 + 25 = \( c^2 \). 25 + 25 is 50, and what we can do is take the square root on both sides of this equation to get that c is equal to the square root of 50. And the square root of 50 actually simplifies down to 5 times the square root of 2. So, notice when using the long version of this problem solving, we get to the same answer. But this is what's nice about the shortcut; it lets you get this answer without having to go through this long process. Now to ensure we know how to solve these types of triangles, let's see if we can solve some examples where we have this special case. So for each of these examples, we're asked to solve for the unknown sides of each triangle. And we'll start with example a. Now notice we have two 45-degree angles and two legs that are the same length. So that means we're dealing with a 45-45-90 triangle. Now recall that if we want to find the missing side or the hypotenuse, we just need to take one of the legs and multiply it by the square root of 2. Well, one of the legs is 11, and then we multiply this by the square root of 2. And that right there is the answer. Notice how quick it is using this method. See, it's very straightforward, and that's what's really nice about these special cases. But now let's take a look at example b. In this example, we have a 45-degree angle, and we are given the hypotenuse. So how can we go about solving this? Well, first off, we need to figure out if we are actually dealing with a special case triangle, and it turns out that we are. Because we have a 45-degree angle here and a 90-degree angle there, we know by default this has to be a 45-degree angle.
You see, all the angles in a right triangle have to add to 180, and 90 plus 45 plus 45 equals 180, so this is a special case triangle. Now to solve for the missing sides, what we can do is use this relationship. Notice in this situation, we're given the hypotenuse or the long side. So I'm going to do is take the hypotenuse, set it equal to the number we have, which is 13, and I'll say that that's equal to the leg multiplied by the square root of 2. Now to solve for the leg, what I can do is divide the square root of 2 on both sides of this equation. That'll get the square root of twos to cancel, giving us that the leg of this triangle is equal to 13 over the square root of 2. Now what I can do is rationalize the denominator here by multiplying the top and bottom by the square root of 2. That'll get these square roots to cancel, giving us that the leg of this triangle is equal to 13 times the square root of 2 over 2. So what we're going to end up with is 13 radical 2 over 2 for this side of the triangle and then 13 radical 2 over 2 for that side of the triangle. Because again, for a 45-45-90 triangle, these two sides have to have the same length. So that is how you can solve 45-45-90 triangles, and this is the shortcut that you can use. So I hope you found this video helpful. Thanks for watching, and let me know if you have any questions.
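The shortcut from the video (hypotenuse = leg × √2, and leg = hypotenuse ÷ √2) is easy to check against the Pythagorean theorem. Here is a small Python sketch using the two worked examples, leg 11 and hypotenuse 13 (the function names are mine):

```python
import math

def hyp_from_leg(leg):
    """45-45-90 shortcut: hypotenuse = leg * sqrt(2)."""
    return leg * math.sqrt(2)

def leg_from_hyp(hyp):
    """Inverse shortcut: leg = hyp / sqrt(2) = hyp * sqrt(2) / 2
    (the rationalized form from the video)."""
    return hyp / math.sqrt(2)

# Example a: both legs are 11, so the hypotenuse is 11*sqrt(2).
h = hyp_from_leg(11)
assert math.isclose(h, math.sqrt(11**2 + 11**2))  # agrees with Pythagoras

# Example b: hypotenuse 13, so each leg is 13*sqrt(2)/2.
leg = leg_from_hyp(13)
assert math.isclose(leg, 13 * math.sqrt(2) / 2)
assert math.isclose(leg**2 + leg**2, 13**2)       # agrees with Pythagoras
```

Both assertions pass, confirming the shortcut is just the Pythagorean theorem specialized to equal legs.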
{"url":"https://www.pearson.com/channels/precalculus/learn/patrick/8-trigonometric-functions-on-right-triangles/special-right-triangles?chapterId=24afea94","timestamp":"2024-11-10T15:47:05Z","content_type":"text/html","content_length":"458394","record_id":"<urn:uuid:c08eed02-bac4-4b0f-ab21-907c0e1bc2a9>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00749.warc.gz"}
Reinforcement Learning

Learning outcomes

The learning outcomes of this chapter are:
1. Identify situations in which model-free reinforcement learning is a suitable solution for an MDP.
2. Explain how model-free planning differs from model-based planning.
3. Apply temporal difference methods Q-learning and SARSA to solve small-scale MDP problems manually, and program Q-learning and SARSA algorithms to solve medium-scale MDP problems automatically.
4. Compare and contrast off-policy reinforcement learning with on-policy reinforcement learning.

Model-based vs model-free

Value iteration is part of a class of solutions known as model-based techniques. This means that we need to know the model; in particular, we have access to [latex]P_a(s' \mid s)[/latex] and [latex]r(s,a,s')[/latex]. In this section, we look at Q-learning and SARSA, which are model-free techniques. This means that we do NOT know [latex]P_a(s' \mid s)[/latex] and [latex]r(s,a,s')[/latex] of our model. How can we calculate a policy if we don't know the transitions and the rewards?! We learn through experience by trying actions and seeing what the result is, making this a machine learning problem. Importantly, in model-free reinforcement learning, we do NOT try to learn [latex]P_a(s' \mid s)[/latex] or [latex]r(s,a,s')[/latex] — we learn a value function or a policy directly. There is something in between model-based and model-free: simulation-based techniques. In these cases, we have a model as a simulator, so we can simulate [latex]P_a(s' \mid s)[/latex] and [latex]r(s,a,s')[/latex] and learn a policy with a model-free technique, but we cannot "see" [latex]P_a(s' \mid s)[/latex] and [latex]r(s,a,s')[/latex], so model-based techniques like value iteration are not applicable.

Exercise – The Mystery game

Consider the following game, implemented by Github user guiferviz and available for download at guiferviz/rl_udacity. You can play the Mystery Game. The aim of the game is to win the game.
You have six actions available, which can be executed by pressing keys 1, 2, 3, 4, 5, and 6. You need to learn what the actions do and what the rewards are. No other instructions are given, but you will know when you receive any rewards/points. Try to play the game and see if you can win. Have fun!

Mystery Game by Github user Guiferviz, used under the MIT License (MIT).

Once you have played this, ask yourself the following questions:
• What was the process you took?
• What did you learn?
• What assumptions did you use?

I would guess that you experimented by pressing the keys 1 to 6, and from the outcomes of those key presses, you learnt the effect of each key, including the state transitions and the rewards. From this, you could easily construct a winning strategy. I also would guess that you perhaps used the colours as a way to guess what you should do. This game is of interest because it is a model-free (at least initially) Markov decision process: you didn't know the transition function or the reward function; instead you had to learn them. Similarly, model-free reinforcement learning techniques don't know the transition function or the reward function of an MDP, so they just learn by trying different behaviours and observing what rewards they get. Over time, they learn which behaviours lead to positive rewards, and reinforce the policy to try those behaviours more; and which behaviours lead to negative rewards, and reinforce the policy to avoid them. Imagine how hard it is for a computer that doesn't have any assumptions or intuition for this game though! It will not match the colours, nor will it really have any prior knowledge about similar games, unless it is explicitly told about it. Model-free reinforcement learning techniques start with no or minimal initial knowledge, and will learn a policy (e.g. a value function, a Q-function, or a policy directly) just by trying behaviours and seeing what happens.
Most techniques (at least the ones covered in these notes) do not learn a model in the same way that you did — they just construct the policy directly.

Intuition of model-free reinforcement learning

There are many different techniques for model-free reinforcement learning, all with the same basis:
• We execute many different episodes of the problem we want to solve, and from that we learn a policy.
• During learning, we try to learn the value of applying particular actions in particular states.
• During each episode, we need to execute some actions. After each action, we get a reward (which may be 0) and we can see the new state.
• From this, we reinforce our estimates of applying the previous action in the previous state.
• We terminate when: (1) we run out of training time; (2) we think our policy has converged to the optimal policy (for each new episode we see no improvement); or (3) our policy is 'good enough' (for each new episode we see minimal improvement).

Monte-Carlo reinforcement learning

Monte-Carlo reinforcement learning is perhaps the simplest of reinforcement learning methods, and is based on how animals learn from their environment. The intuition is quite straightforward. Maintain a Q-function that records the value [latex]Q(s,a)[/latex] for every state-action pair. At each step: (1) choose an action using a multi-armed bandit algorithm; (2) apply that action and receive the reward; and (3) update [latex]Q(s,a)[/latex] based on that reward. Repeat over a number of episodes until …when? It is called Monte-Carlo reinforcement learning after the area within Monaco (a small principality on the French Riviera) called Monte Carlo, which is best known for its extravagant casinos. As gambling and casinos are largely associated with chance, simulations that use some randomness to explore actions are often called Monte Carlo methods.
(Monte-Carlo reinforcement learning)

[latex]\begin{array}{l} \textbf{Input:}\ \text{MDP}\ M = \langle S, s_0, A, P_a(s' \mid s), r(s,a,s')\rangle\\ \textbf{Output:}\ \text{Q-function}\ Q\\[2mm] \text{Initialise}\ Q\ \text{arbitrarily; e.g.,}\ Q(s,a) \leftarrow 0\ \text{for all}\ s\ \text{and}\ a\\ N(s, a) \leftarrow 0\ \text{for all}\ s\ \text{and}\ a\\[2mm] \textbf{repeat}\\ \quad\quad \text{Generate an episode}\ (s_0, a_0, r_1, \ldots, s_{T-1}, a_{T-1}, r_T);\\ \quad\quad\quad\quad \text{e.g. using}\ Q\ \text{and a multi-armed bandit algorithm such as}\ \epsilon-\text{greedy}\\ \quad\quad G \leftarrow 0\\ \quad\quad t \leftarrow T-1\\ \quad\quad \textbf{while}\ t \geq 0\ \textbf{do}\\ \quad\quad\quad\quad G \leftarrow r_{t+1} + \gamma \cdot G\\ \quad\quad\quad\quad \textbf{if}\ s_t, a_t\ \text{does not appear in}\ s_0, a_0,\ldots, s_{t-1}, a_{t-1}\ \textbf{then}\\ \quad\quad\quad\quad\quad\quad N(s_t, a_t) \leftarrow N(s_t, a_t) + 1\\ \quad\quad\quad\quad\quad\quad Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \frac{1}{N(s_t, a_t)}[G - Q(s_t, a_t)]\\ \quad\quad\quad\quad t \leftarrow t-1\\ \textbf{until}\ Q\ \text{converges} \end{array}[/latex]

This algorithm generates an entire episode following some policy, such as [latex]\epsilon[/latex]-greedy, observing the reward [latex]r_{t+1}[/latex] at each step [latex]t[/latex]. It then calculates the discounted future reward [latex]G[/latex] at each step. If [latex]s_t, a_t[/latex] occurs earlier in the episode, then we do not update [latex]Q(s_t, a_t)[/latex], as we will update it later in the loop. If it does not occur, we update [latex]Q(s_t, a_t)[/latex] as the cumulative average over all executions of [latex]s_t, a_t[/latex] over all episodes. In this algorithm, [latex]N(s_t,a_t)[/latex] represents the number of times that [latex]s_t, a_t[/latex] have been evaluated over all episodes; it is incremented before it is used to weight the update, so the first update is well-defined.

Q-tables are the simplest way to maintain a Q-function. They are a table with an entry for every [latex]Q(s,a)[/latex].
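As a concrete illustration of the Monte-Carlo algorithm above, here is a minimal first-visit learner for a tiny hypothetical two-state MDP. The MDP, its state names, and the helper functions are invented for this sketch; they are not part of the accompanying code base. Note that the visit count [latex]N(s, a)[/latex] is incremented before it is used to weight the update, so the first update does not divide by zero.

```python
import random
from collections import defaultdict

# A tiny hypothetical MDP for illustration: states "A" and "B", actions
# "go" and "stay". "go" moves A -> B (reward 0) and B -> terminal
# (reward 1); "stay" remains in place with reward 0.
TERMINAL = None
ACTIONS = ["go", "stay"]

def step(state, action):
    if action == "go":
        return ("B", 0.0) if state == "A" else (TERMINAL, 1.0)
    return (state, 0.0)

def generate_episode(Q, epsilon=0.2, max_steps=20):
    """Roll out one episode using epsilon-greedy action selection."""
    episode, state = [], "A"
    for _ in range(max_steps):
        if state is TERMINAL:
            break
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        episode.append((state, action, reward))
        state = next_state
    return episode

def monte_carlo(episodes=500, gamma=0.9):
    Q = defaultdict(float)   # the Q-table
    N = defaultdict(int)     # visit counts N(s, a)
    for _ in range(episodes):
        episode = generate_episode(Q)
        G = 0.0
        # Walk backwards through the episode, accumulating the return G.
        for t in range(len(episode) - 1, -1, -1):
            state, action, reward = episode[t]
            G = reward + gamma * G
            # First-visit check: only update at the earliest occurrence.
            if (state, action) not in [(s, a) for s, a, _ in episode[:t]]:
                N[(state, action)] += 1  # increment before use
                Q[(state, action)] += (G - Q[(state, action)]) / N[(state, action)]
    return Q

random.seed(0)
Q = monte_carlo()
# "go" from B is always followed immediately by the terminal reward of 1,
# so Q[("B", "go")] converges to exactly 1.0 in this toy problem.
```

In this toy problem the averages settle quickly because the returns vary little; the high-variance issue discussed below only shows up in larger problems with longer, more varied trajectories.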
Thus, like value functions in value iteration, they do not scale to large state-spaces. (More on scaling in the next lecture). Initially, we would have an arbitrary Q-table, which may look something like this if initialised with all zeros, taking the GridWorld example:

GridWorld Q-table initialised with all zeros

State     Up      Down    Right   Left
(0, 0)    0.00    0.00    0.00    0.00
(0, 1)    0.00    0.00    0.00    0.00
(0, 2)    0.00    0.00    0.00    0.00
(1, 0)    0.00    0.00    0.00    0.00
(1, 1)    0.00    0.00    0.00    0.00
(1, 2)    0.00    0.00    0.00    0.00
(2, 0)    0.00    0.00    0.00    0.00
(2, 1)    0.00    0.00    0.00    0.00
(2, 2)    0.00    0.00    0.00    0.00
(3, 0)    0.00    0.00    0.00    0.00
(3, 1)    0.00    0.00    0.00    0.00
(3, 2)    0.00    0.00    0.00    0.00

After some training, we may end up with a Q-table that looks something like this:

GridWorld Q-table after several episodes of training

State     Up      Down    Right   Left
(0, 0)    0.50    0.42    0.39    0.42
(0, 1)    0.56    0.44    0.51    0.51
(0, 2)    0.58    0.51    0.63    0.57
(1, 0)    0.09    0.18    0.06    0.43
(1, 1)    0.00    0.00    0.00    0.00
(1, 2)    0.64    0.65    0.74    0.59
(2, 0)    0.41    0.00    0.00    0.00
(2, 1)    0.69    0.09   -0.24    0.24
(2, 2)    0.79    0.61    0.90    0.65
(3, 0)   -0.02    0.00    0.00    0.00
(3, 1)    0.00    0.00    0.00    0.00
(3, 2)    0.00    0.00    0.00    0.00

The following is an implementation of a Q-table using a Python dictionary:

import json
from collections import defaultdict

from qfunction import QFunction


class QTable(QFunction):
    def __init__(self, alpha=0.1, default_q_value=0.0):
        self.qtable = defaultdict(lambda: default_q_value)
        self.alpha = alpha

    def update(self, state, action, delta):
        self.qtable[(state, action)] = self.qtable[(state, action)] + self.alpha * delta

    def get_q_value(self, state, action):
        return self.qtable[(state, action)]

    def save(self, filename):
        with open(filename, "w") as file:
            serialised = {str(key): value for key, value in self.qtable.items()}
            json.dump(serialised, file)

    def load(self, filename, default=0.0):
        with open(filename, "r") as file:
            serialised = json.load(file)
            self.qtable = defaultdict(
                lambda: default,
                {tuple(eval(key)): value for key, value in serialised.items()},
            )

Temporal difference (TD) reinforcement learning

Monte Carlo reinforcement learning is simple, but it has a number of problems. The most important is that it has high variance. Recall that we calculate the future discounted reward for an episode and use that to calculate the average reward for each state-action pair. However, the term [latex]\gamma G[/latex] is often not a good estimate of the average future reward that we would receive. If we execute action [latex]a[/latex] in state [latex]s[/latex] many times throughout different episodes, we might find that the future trajectories we execute after that vary significantly, because we are using Monte Carlo simulation. This means that it will take a long time to learn a good estimate of the true average reward for that state-action pair.

Temporal difference (TD) methods alleviate this problem using bootstrapping. Much the same way that value iteration bootstraps by using the last iteration's value function, in TD methods, instead of updating based on [latex]G[/latex] – the actual future discounted reward received in the episode – we update based on the actual immediate reward received plus an estimate of our future discounted reward.

In TD methods, our update rules always follow a pattern:

[latex]Q(s,a) \leftarrow \underbrace{Q(s,a)}_\text{old value} + \overbrace{\alpha}^{\text{learning rate}} \cdot [\underbrace{\overbrace{r}^{\text{reward}} + \overbrace{\gamma}^{\text{discount factor}} \cdot V(s')}_{\text{TD target}} - \overbrace{Q(s,a)}^{\text{do not count extra } Q(s,a)}][/latex]

[latex]V(s')[/latex] is our TD estimate of the average future reward, and is the bootstrapped value of our future discounted reward. The new information is weighted by a parameter [latex]\alpha \in [0,1][/latex] (pronounced "alpha"), which is the learning rate.
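To make the update rule concrete, here is a single TD update with illustrative numbers; the function name and the values are invented for this sketch.

```python
def td_update(q, reward, v_next, alpha=0.1, gamma=0.9):
    """One temporal-difference update of Q(s,a) towards the TD target."""
    td_target = reward + gamma * v_next
    return q + alpha * (td_target - q)

# Illustrative numbers only: Q(s,a) = 0.5, reward r = 0, and our current
# estimate of the next state's value V(s') = 0.8.
updated = td_update(0.5, 0.0, 0.8)
# TD target = 0 + 0.9 * 0.8 = 0.72, so Q(s,a) moves 10% of the way
# from 0.5 towards 0.72, giving 0.522.
```

The update moves [latex]Q(s,a)[/latex] a fraction [latex]\alpha[/latex] of the way from its old value towards the TD target.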
A higher learning rate [latex]\alpha[/latex] will weight more recent information higher than older information, so will learn more quickly, but will make it more difficult to stabilise because it is strongly influenced by outliers. The idea is that over time, the TD estimate will become more stable than the actual rewards we receive in an episode, converging to the optimal value function [latex]V(s')[/latex] defined by the Bellman equation, which leads to [latex]Q(s,a)[/latex] converging more quickly.

Why is there the expression [latex]-Q(s,a)[/latex] inside the square brackets? This is because the old value is weighted [latex](1 - \alpha) \cdot Q(s,a)[/latex]. We can expand this to [latex]Q(s,a) - \alpha \cdot Q(s,a)[/latex], and can then move the latter [latex]-Q(s,a)[/latex] inside the square brackets, where the learning rate [latex]\alpha[/latex] is applied.

In this chapter, we will look at two TD methods that differ in the way that they estimate the future reward [latex]V(s')[/latex]:
1. Q-learning, which is what we call an off-policy approach; and
2. SARSA, which is what we call an on-policy approach.

Q-Learning: Off-policy temporal-difference learning

Q-learning is a foundational method for reinforcement learning. It is a TD method that estimates the future reward [latex]V(s')[/latex] using the Q-function itself, assuming that from state [latex]s'[/latex], the best action (according to [latex]Q[/latex]) will be executed at each state. Below is the Q-learning algorithm. It uses [latex]\max_{a'} Q(s',a')[/latex] as the estimate of [latex]V(s')[/latex]; that is, it estimates this by using [latex]Q[/latex] to estimate the value of the best action from [latex]s'[/latex]. This helps to stabilise the learning.
Note that, unlike Monte Carlo reinforcement learning, because Q-learning uses the bootstrapped value, it can interleave execution and update, meaning that [latex]Q[/latex] is improved at each step of an episode, rather than having to wait until the end of the episode to calculate [latex]G[/latex]. This has the benefit that early episodes benefit from learning more than in Monte Carlo reinforcement learning.

[latex]\begin{array}{l} \textbf{Input:}\ \text{MDP}\ M = \langle S, s_0, A, P_a(s' \mid s), r(s,a,s')\rangle\\ \textbf{Output:}\ \text{Q-function}\ Q\\[2mm] \text{Initialise}\ Q\ \text{arbitrarily; e.g.,}\ Q(s,a) \leftarrow 0\ \text{for all}\ s\ \text{and}\ a\\[2mm] \textbf{repeat} \\ \quad\quad s \leftarrow\ \text{the first state in episode}\ e\\ \quad\quad \textbf{repeat}\ \text{(for each step in episode}\ e \text{)}\\ \quad\quad\quad\quad \text{Select action}\ a\ \text{to apply in}\ s;\\ \quad\quad\quad\quad\quad\quad \text{e.g. using}\ Q\ \text{and a multi-armed bandit algorithm such as}\ \epsilon-\text{greedy}\\ \quad\quad\quad\quad \text{Execute action}\ a\ \text{in state}\ s\\ \quad\quad\quad\quad \text{Observe reward}\ r\ \text{and new state}\ s'\\ \quad\quad\quad\quad \delta \leftarrow r + \gamma \cdot \max_{a'} Q(s',a') - Q(s,a)\\ \quad\quad\quad\quad Q(s,a) \leftarrow Q(s,a) + \alpha \cdot \delta\\ \quad\quad\quad\quad s \leftarrow s'\\ \quad\quad \textbf{until}\ s\ \text{is the last state of episode}\ e\ \text{(a terminal state)}\\ \textbf{until}\ Q\ \text{converges} \end{array}[/latex]

Updating the Q-function

Updating the Q-function is where the learning happens:

\[\delta \leftarrow [\underbrace{\overbrace{r}^{\text{reward}} + \overbrace{\gamma}^{\text{discount factor}} \cdot \overbrace{\max_{a'} Q(s',a')}^{V(s') \text{ estimate}}}_{\text{TD target}} \overbrace{- Q(s,a)}^{\text{do not count extra } Q(s,a)}]\]

\[Q(s,a) \leftarrow \underbrace{Q(s,a)}_\text{old value} + \overbrace{\alpha}^{\text{learning rate}} \cdot \underbrace{\delta}_{\text{delta value}}\]

We can
see that at each step, [latex]Q(s,a)[/latex] is updated by taking the old value of [latex]Q(s,a)[/latex] and adding to it the new information, weighted by the learning rate. The update is driven by [latex]\delta \leftarrow r + \gamma \cdot \max_{a'} Q(s',a') - Q(s,a)[/latex], where [latex]\delta[/latex] (pronounced "delta") is the difference between the previous estimate and the most recent observation, [latex]r[/latex] is the reward that was received by executing action [latex]a[/latex] in state [latex]s[/latex], and [latex]r + \gamma \cdot \max_{a'} Q(s',a')[/latex] is the temporal difference target. What this says is that the estimate of [latex]Q(s,a)[/latex] based on the new information is the reward [latex]r[/latex], plus the estimated discounted future reward from being in state [latex]s'[/latex]. The definition of [latex]\delta[/latex] is similar to the update in the Bellman equation. We do not know [latex]P_a(s' \mid s)[/latex], so we cannot calculate the Bellman update directly, but we can estimate the value using [latex]r[/latex] and the temporal difference target.

Note that we estimate the future value using [latex]\max_{a'} Q(s',a')[/latex], which means it ignores the actual next action that will be executed, and instead updates based on the estimated best action. This is known as off-policy learning — more on this later.

Example – Q-learning update

Using the table above, we can illustrate the inner loop of the Q-learning algorithm. Assume that we are in state [latex]s=(2,2)[/latex], and the action [latex]a=Up[/latex] is chosen and executed successfully, which would return to state [latex]s'=(2,2)[/latex] as there is no cell above (2,2).
Using the Q-table above, we would update the Q-value as follows:

[latex]\begin{split} \begin{array}{lll} Q((2,2),Up) & \leftarrow & Q((2,2),Up) + \alpha [r + \gamma \max_{a'} Q((2,2),a') - Q((2,2),Up)]\\ & \leftarrow & 0.79 + 0.1 [0 + 0.9 \cdot Q((2,2),Right) - Q((2,2),Up)]\\ & \leftarrow & 0.79 + 0.1 [0 + 0.9 \cdot 0.90 - 0.79]\\ & \leftarrow & 0.792\\ \end{array} \end{split}[/latex]

Theoretical guarantee

Using Q-tables to represent Q-functions, Q-learning will converge to the optimal policy, under the assumption that all state-action pairs are sampled infinitely often.

Policy extraction using Q-functions

Using model-free learning, we iterate over as many episodes as possible, or until each episode hardly improves our Q-values. This gives us a (close to) optimal Q-function. Once we have such a Q-function, we stop exploring and just exploit. We use policy extraction, exactly as we do for value iteration, except we extract from the Q-function instead of the value function:

[latex]\pi(s) = \text{argmax}_{a \in A(s)} Q(s,a)[/latex]

This selects the action with the maximum Q-value. Given an optimal Q-function (for the MDP), this results in optimal behaviour.
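The extraction step can be sketched in code, assuming a Q-function stored as a dictionary keyed by (state, action) pairs, as the QTable above is. The helper function is invented for illustration; the Q-values are those of cell (2,2) in the trained Q-table above.

```python
def extract_policy(qtable, states, actions):
    """Greedy policy extraction: pi(s) = argmax_a Q(s, a)."""
    return {s: max(actions, key=lambda a: qtable.get((s, a), 0.0)) for s in states}

# Q-values for one GridWorld cell, taken from the trained Q-table above.
qtable = {
    ((2, 2), "up"): 0.79,
    ((2, 2), "down"): 0.61,
    ((2, 2), "right"): 0.90,
    ((2, 2), "left"): 0.65,
}
policy = extract_policy(qtable, [(2, 2)], ["up", "down", "right", "left"])
# policy[(2, 2)] is "right", the action with the maximum Q-value.
```

Because extraction is purely greedy, it uses no exploration: the same Q-function always yields the same deterministic policy.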
To implement Q-learning, we first implement an abstract superclass ModelFreeLearner that defines the interface for any model-free learning algorithm:

class ModelFreeLearner:
    def execute(self, episodes=2000):
        ...

Next, we implement a second abstract superclass TemporalDifferenceLearner, which contains most of the code we need:

from model_free_learner import ModelFreeLearner


class TemporalDifferenceLearner(ModelFreeLearner):
    def __init__(self, mdp, bandit, qfunction):
        self.mdp = mdp
        self.bandit = bandit
        self.qfunction = qfunction

    def execute(self, episodes=2000):
        rewards = []
        for episode in range(episodes):
            state = self.mdp.get_initial_state()
            actions = self.mdp.get_actions(state)
            action = self.bandit.select(state, actions, self.qfunction)

            episode_reward = 0.0
            step = 0
            while not self.mdp.is_terminal(state):
                (next_state, reward, done) = self.mdp.execute(state, action)
                actions = self.mdp.get_actions(next_state)
                next_action = self.bandit.select(next_state, actions, self.qfunction)

                delta = self.get_delta(reward, state, action, next_state, next_action)
                self.qfunction.update(state, action, delta)

                state = next_state
                action = next_action
                episode_reward += reward * (self.mdp.discount_factor ** step)
                step += 1
            rewards.append(episode_reward)

        return rewards

    def get_delta(self, reward, state, action, next_state, next_action):
        """ Calculate the delta for the update """
        q_value = self.qfunction.get_q_value(state, action)
        next_state_value = self.state_value(next_state, next_action)
        delta = reward + self.mdp.discount_factor * next_state_value - q_value
        return delta

    def state_value(self, state, action):
        """ Get the value of a state """
        ...

We will see later that we inherit from TemporalDifferenceLearner for other algorithms that are very similar to Q-learning.
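The bandit object passed to TemporalDifferenceLearner only needs a select(state, actions, qfunction) method. A minimal [latex]\epsilon[/latex]-greedy implementation matching that interface might look like the following sketch; the multi_armed_bandit.epsilon_greedy module used elsewhere in this chapter may differ in detail.

```python
import random

class EpsilonGreedy:
    """With probability epsilon choose a random action; otherwise choose
    the action with the highest Q-value (a multi-armed bandit strategy)."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon

    def select(self, state, actions, qfunction):
        if random.random() < self.epsilon:
            return random.choice(list(actions))  # explore
        # Exploit: the action with the highest Q-value, ties broken by
        # whichever action comes first.
        return max(actions, key=lambda a: qfunction.get_q_value(state, a))

# A tiny stand-in Q-function for demonstration (a hypothetical class,
# matching the get_q_value interface used by TemporalDifferenceLearner).
class DemoQ:
    def get_q_value(self, state, action):
        return {"a": 0.1, "b": 0.9}[action]

greedy_choice = EpsilonGreedy(epsilon=0.0).select("s", ["a", "b"], DemoQ())
explore_choice = EpsilonGreedy(epsilon=1.0).select("s", ["a", "b"], DemoQ())
# greedy_choice is always "b"; explore_choice is "a" or "b" at random.
```

Setting epsilon to 0.0 recovers pure exploitation, which is exactly the policy-extraction behaviour used once training is complete.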
We inherit from this class to implement the Q-learning algorithm:

from temporal_difference_learner import TemporalDifferenceLearner


class QLearning(TemporalDifferenceLearner):
    def state_value(self, state, action):
        max_q_value = self.qfunction.get_max_q(state, self.mdp.get_actions(state))
        return max_q_value

We can see that the TemporalDifferenceLearner does most of the work. All the QLearning class has to do is define the value of [latex]V(s')[/latex] for the new state [latex]s'[/latex], given the state and the next action that will be executed. Why do we model it like this instead of just implementing all of this in a single algorithm? In the next section on SARSA, we will see why.

Using this implementation, we execute 100 episodes on the GridWorld example, resulting in the following Q-function, where each cell represents a cell from the GridWorld example and the four entries correspond to the Q-values for the respective direction:

from gridworld import GridWorld
from qtable import QTable
from qlearning import QLearning
from q_policy import QPolicy
from stochastic_q_policy import StochasticQPolicy
from multi_armed_bandit.epsilon_greedy import EpsilonGreedy

gridworld = GridWorld()
qfunction = QTable()
QLearning(gridworld, EpsilonGreedy(), qfunction).execute(episodes=100)
gridworld.visualise_q_function(qfunction, "Q-Function", grid_size=1.5)

If we compare this to the value function for the value iteration implementation, we can see that the values learnt are not very accurate. Training for more episodes would result in more accurate values, but the alpha parameter means that recent information is weighted 0.1 in this case, and any unusual samples (from exploration or noise in the simulation) can affect the values.
Despite this, if we extract a policy from this, we still see that the policy corresponds to the optimal policy, although this is by no means guaranteed:

policy = QPolicy(qfunction)

Below, we can explore how the Q-values for each state-action pair are learnt. If we play or step through the following visualisation, we can see that in early episodes, the Q-values that are learnt are not close to optimal, because the initial episodes are quite long, so the discount factor means the rewards received are low. However, after just a few episodes, even these inaccurate Q-values give the multi-armed bandit some signal, and episodes start to become shorter, while the Q-values being learnt are updated to become more accurate.

SARSA: On-policy temporal difference learning

SARSA (State-action-reward-state-action) is an on-policy reinforcement learning algorithm. It is very similar to Q-learning, except that in its update rule, instead of estimating the future discounted reward using [latex]\max_{a' \in A(s')} Q(s',a')[/latex], it actually selects the next action that it will execute, and updates using that instead. Taking this approach is known as on-policy reinforcement learning. Later in this section, we'll discuss why this matters, but for now, let's look at the SARSA algorithm and on-policy learning a bit more.

Definition – On-policy and off-policy reinforcement learning

Instead of estimating [latex]Q(s',a')[/latex] for the best estimated next action during the update, on-policy reinforcement learning uses the actual next action to update. On-policy learning estimates [latex]Q^{\pi}(s,a)[/latex] for state-action pairs under the current behaviour policy [latex]\pi[/latex], whereas off-policy learning estimates the policy independently of the current behaviour.

To illustrate how this differs, let's take a look at the SARSA algorithm.
[latex]\begin{array}{l} \textbf{Input:}\ \text{MDP}\ M = \langle S, s_0, A, P_a(s' \mid s), r(s,a,s')\rangle\\ \textbf{Output:}\ \text{Q-function}\ Q\\[2mm] \text{Initialise}\ Q\ \text{arbitrarily; e.g.,}\ Q(s,a) \leftarrow 0\ \text{for all}\ s\ \text{and}\ a\\[2mm] \textbf{repeat} \\ \quad\quad s \leftarrow\ \text{the first state in episode}\ e\\ \quad\quad \text{Select action}\ a\ \text{to apply in}\ s;\\ \quad\quad\quad\quad \text{e.g. using}\ Q\ \text{and a multi-armed bandit algorithm such as}\ \epsilon-\text{greedy}\\ \quad\quad \textbf{repeat}\ \text{(for each step in episode}\ e \text{)}\\ \quad\quad\quad\quad \text{Execute action}\ a\ \text{in state}\ s\\ \quad\quad\quad\quad \text{Observe reward}\ r\ \text{and new state}\ s'\\ \quad\quad\quad\quad \text{Select action}\ a'\ \text{to apply in}\ s';\\ \quad\quad\quad\quad\quad\quad \text{e.g. using}\ Q\ \text{and a multi-armed bandit algorithm such as}\ \epsilon-\text{greedy}\\ \quad\quad\quad\quad \delta \leftarrow r + \gamma \cdot Q(s',a') - Q(s,a)\\ \quad\quad\quad\quad Q(s,a) \leftarrow Q(s,a) + \alpha \cdot \delta\\ \quad\quad\quad\quad s \leftarrow s'\\ \quad\quad\quad\quad a \leftarrow a'\\ \quad\quad \textbf{until}\ s\ \text{is the last state of episode}\ e\ \text{(a terminal state)}\\ \textbf{until}\ Q\ \text{converges} \end{array}[/latex]

The difference between the Q-learning and SARSA algorithms is what happens in the update in the loop body.

Q-learning: (1) selects an action [latex]a[/latex]; (2) takes that action and observes the reward & next state [latex]s'[/latex]; and (3) updates optimistically by assuming the future reward is [latex]\max_{a'}Q(s',a')[/latex] – that is, it assumes that future behaviour will be optimal (according to its policy).
SARSA: (1) selects action [latex]a'[/latex] for the next loop iteration; (2) in the next iteration, takes that action and observes the reward & next state [latex]s'[/latex]; (3) only then chooses [latex]a'[/latex] for the following iteration; and (4) updates using the estimate for the actual next action chosen – which may not be the greediest one (e.g. it could be selected so that it can explore).

So what difference does this really make? There are two main differences:
• Q-learning will converge to the optimal policy irrespective of the policy followed, because it is off-policy: it uses the greedy reward estimate in its update rather than following the policy (such as [latex]\epsilon[/latex]-greedy). Using a random policy, Q-learning will still converge to the optimal policy, but SARSA will not (necessarily).
• Q-learning learns an optimal policy, but this can be 'unsafe' or risky during training.

Example – SARSA update

For this example, we will use the same Q-table as the earlier Q-learning example. Assume that in state (2,2), the action 'Up' is chosen and executed successfully, which would return to state (2,2) as there is no cell above (2,2). The next selected action is 'Left'. Note that this is not the maximum action according to the Q-table – the selection function has explored instead of exploited. Using the Q-table above, we would update the Q-value using SARSA as follows:

[latex]\begin{split} \begin{array}{lll} Q((2,2),Up) & \leftarrow & Q((2,2),Up) + \alpha [r + \gamma Q((2,2),Left) - Q((2,2),Up)]\\ & \leftarrow & 0.79 + 0.1 [0 + 0.9 \cdot Q((2,2),Left) - Q((2,2),Up)]\\ & \leftarrow & 0.79 + 0.1 [0 + 0.9 \cdot 0.65 - 0.79]\\ & \leftarrow & 0.7695\\ \end{array} \end{split}[/latex]

As with the Q-learning agent, we inherit from the TemporalDifferenceLearner class to implement SARSA.
But the value of the next state [latex]V(s')[/latex] is calculated differently in the SARSA implementation:

from temporal_difference_learner import TemporalDifferenceLearner


class SARSA(TemporalDifferenceLearner):
    def state_value(self, state, action):
        return self.qfunction.get_q_value(state, action)

So, as we can see, the value of the next state is [latex]Q(s',a')[/latex] instead of [latex]\max_{a' \in A(s')} Q(s',a')[/latex]. As with Q-learning, we execute SARSA for 100 episodes:

from gridworld import GridWorld
from qtable import QTable
from sarsa import SARSA
from multi_armed_bandit.epsilon_greedy import EpsilonGreedy

gridworld = GridWorld()
qfunction = QTable()
SARSA(gridworld, EpsilonGreedy(), qfunction).execute(episodes=100)

Again, we get an approximate Q-function. In this particular run, the policy is not optimal, because the action the policy selects from state (3,0) is down, not left, and the action from (2,0) is left, not up.

policy = QPolicy(qfunction)

This is (probably!) not because of the SARSA implementation, but because of the randomness in exploration combined with the value of alpha being quite high. A high value of alpha will learn more quickly, but will also weight later updates more, so any unlikely events occurring late in the training will result in inaccurate Q-values. By selecting a lower value of alpha and training for more episodes, we can increase the likelihood of arriving at an optimal policy, although this will require more time and resources to compute. In an example like GridWorld, this is not an issue, but for larger systems, it could be.

SARSA vs. Q-learning example: Cliff world

Consider the example below, called "Cliff World", which is taken from Chapter 6 of Sutton and Barto's Introduction to Reinforcement Learning book (see further reading below). The bottom-left cell is the starting state and the bottom-right is the goal state, which receives a reward of 0. The four middle cells represent a cliff. Falling off the cliff receives a reward of -5.
All other actions cost -0.05. Unlike the earlier GridWorld example, all actions are deterministic, which means that if the agent chooses to move to another cell, it will arrive at that cell with 100% probability. However, [latex]P_a(s' \mid s)[/latex] is unknown to the learning agent.

from gridworld import CliffWorld

cliffworld = CliffWorld()
cliffworld_image = cliffworld.visualise()

Let's try training this with Q-learning for 2000 episodes, using an epsilon-greedy strategy with epsilon = 0.2. The resulting Q-table is:

qfunction = QTable()
rewards = QLearning(cliffworld, EpsilonGreedy(epsilon=0.2), qfunction).execute(episodes=2000)

From this, we extract the following policy:

policy = QPolicy(qfunction)

We can see that the policy will initially move the agent up, and then along the cliff, going down to the terminal state at the end, receiving the reward of 5. We can see that the policy (and Q-table) for the upper cells are somewhat inaccurate: because they are low-value states, they have not been explored as much as the states along the cliff.

Now, let's try training the same problem with SARSA:

cliffworld = CliffWorld()
qfunction = QTable()
rewards = SARSA(cliffworld, EpsilonGreedy(epsilon=0.2), qfunction).execute(episodes=2000)

Extracting the policy, we get:

policy = QPolicy(qfunction)

We can see that SARSA will instead not go along the cliff, but will take a sub-optimal path that avoids it. Why is this so? The answer is that SARSA uses the reward from the actual next action that is executed. Even with a mature Q-function, with epsilon = 0.2 in the bandit used, the actual next action will be an exploratory action with probability 0.2, which means that some of the time the next action chosen will be "down", so the agent falls off the cliff.

Consider each agent moving from state [latex](1,1)[/latex] to state [latex](2,1)[/latex].
The Q-learning agent will update its Q-value for the preceding action by assuming that the agent continues along the cliff path, using [latex]\max_{a' \in A} Q(s',a')[/latex] as the temporal difference estimate. However, 20% of the time, the next action is NOT the optimal action, because the agent will explore. Some of these exploration actions will make the agent fall off the cliff, but this negative reward does not feed back into the Q-values of the actions along the path, because the Q-learning update uses the best next action rather than the action actually taken. The SARSA agent, on the other hand, selects its next action before the update, so in the cases where it chooses an action from state [latex](2,1)[/latex] that falls off the cliff, the value [latex]Q(s',a')[/latex] will include this negative reward. As a result, the SARSA agent learns that staying close to the cliff is "risky" behaviour, so will learn to instead take the safe path away from the cliff: exploring from the safe path does not result in a strong negative reward. As such, the SARSA agent will fall off the cliff less than the Q-learning agent during training, because it takes actions on the safe path more often; however, it will still fall off sometimes when it explores. If trained using SARSA, the result will be a sub-optimal policy that learns the safe path.

Consider the following, in which we run both Q-learning and SARSA for 2000 episodes using epsilon-greedy with epsilon = 0.2. Then, we take the resulting Q-function and run another 2000 episodes following the policy (which is equivalent to using an epsilon-greedy strategy with epsilon = 0.0), initialising with the trained Q-function.
If we plot the rewards for each episode for both SARSA and Q-learning, we can see that SARSA receives more rewards the more we train, but at 2000 episodes, when we start using the policy, Q-learning receives a higher reward per episode:

from gridworld import CliffWorld
from tests.compare_convergence_curves import qlearning_vs_sarsa
from tests.plot import Plot
mdp_q = CliffWorld()
mdp_s = CliffWorld()
qlearning_vs_sarsa(mdp_q, mdp_s, episodes=2000)

During training, SARSA receives a higher average reward per episode than Q-learning, because it falls off the cliff less as its policy improves. The Q-learning agent will follow the path along the cliff, but fall off when it explores, meaning that the average reward is lower. However, Q-learning learns the optimal policy, meaning that once we extract the policy, its rewards will be higher on average. How is it possible that on-policy learning has a sub-optimal policy but higher rewards during training? Consider a case of two agents training: one with Q-learning and one with SARSA, both using [latex]\epsilon[/latex]-greedy with epsilon=0.2. Then, consider training episode 100 for each agent. From the plot above, we can see that both policies are close to convergence. However, once training is complete, we extract a policy. Because the actions are deterministic, the Q-learning policy is optimal: it will follow the path next to the cliff, but will not fall off. The SARSA agent will follow the safe path, but this safety is no longer required because no exploration is done. The end result for the SARSA (on-policy) method is a sub-optimal policy, but one that achieves stronger rewards during training.

SARSA vs. Q-learning example: Contested crossing

What happens when we train on a more complicated example?
In a game such as the Contested Crossing example, the addition of even a small number of extra actions and outcomes has a significant effect on the ability of both Q-learning and SARSA to find a good policy. In the example of CliffWorld, multiple runs of the algorithm above (train for n episodes then test for n episodes) produce a stable optimal policy when n is about 500. If we go through the procedure of training and testing for both Q-learning and SARSA multiple times, the final policy gives close to the same output each time. In the case of Contested Crossing, there are seven actions available (compared to four for CliffWorld), the number of steps taken in executing the policy is somewhere between 6 and around 20 depending on the amount of damage the ship takes (compared to 7 for CliffWorld using Q-learning and 9 using SARSA), and the amount of chance involved is significantly higher, because a ship can shoot or be shot at any time within the danger zones, with medium or high probability. The result of this is to considerably increase the amount of time taken to converge towards a good policy, for either Q-learning or SARSA. For [latex]n=2000[/latex], the policy found by Q-learning is usually – but not always – superior to that found by SARSA. This is insufficient training for either algorithm to converge on an optimal policy. At [latex]n=20,000[/latex], the optimal policy for Q-learning appears to have been found. There is still variation in the reward generated from implementing it, because of the high amount of chance in the Contested Crossing task. The SARSA policies at [latex]n=20,000[/latex] do not converge on the optimal policy because of SARSA's on-policy nature. The resulting final policies, when executed after 20,000 episodes, also show that the Q-learning policy receives a higher reward per episode than SARSA.
from contested_crossing import ContestedCrossing
from tests.compare_convergence_curves import qlearning_vs_sarsa
from tests.plot import Plot
mdp_q = ContestedCrossing()
mdp_s = ContestedCrossing()
qlearning_vs_sarsa(mdp_q, mdp_s, episodes=2000)

mdp_q = ContestedCrossing()
mdp_s = ContestedCrossing()
qlearning_vs_sarsa(mdp_q, mdp_s, episodes=20000)

On-policy vs. off-policy: Why do we have both?

There are a few reasons why we have both on-policy and off-policy learning.

Learning from prior experience

The main advantage of off-policy approaches is that they can use samples from sources other than their own policy. For example, off-policy agents can be given a set of episodes of behaviour from another agent, such as a human expert, and can learn a policy by demonstration. In Q-learning, this would mean that instead of selecting action [latex]a[/latex] to apply in state [latex]s[/latex] using a multi-armed bandit algorithm on [latex]Q(s,a)[/latex], we can simply take the next action of a trajectory and then update [latex]Q[/latex] as before. The policy that we are trying to learn is independent of the samples in the episodes. However, with SARSA, while we could in theory sample the same way, the update rule explicitly uses [latex]Q(s',a')[/latex], so the policy used to generate the trajectories in episodes is the same as the policy being learnt.

Learning on the job

The main advantage of on-policy approaches is that they are useful for 'learning on the job', meaning that they are better for cases in which we want an agent to learn optimal behaviour while operating in its environment. For example, imagine a reinforcement learning agent that manages resources for a cloud-based platform and we have no prior data to inform a policy. We could program a simulator for this environment, but it is highly unlikely that the simulator would be an accurate reflection of the real world, as estimating the number of jobs, their lengths, their timing, etc., would be difficult without prior data.
So, the only way to learn a policy is to get data from our cloud platform while it operates. As such, if the average reward per episode is better using on-policy learning, this would give us better overall outcomes than off-policy learning, because the episodes are not practice – they actually influence real rewards, such as throughput, downtime, etc., and ultimately profit. If we could run our reinforcement learning algorithm in a simulated environment before deploying (and we had reason to believe that the simulated environment was accurate), off-policy learning may be better because its optimal policy could be followed.

Combining off-policy and on-policy learning

We can combine the two approaches for particular applications. For example, if we take our cloud platform from above, it would be silly to start with a random policy: we would quickly lose customers as the scheduling, etc., would be terrible. A better approach would be to hand-craft an algorithm that gave good results initially, and use it to select actions with an off-policy approach to train a good initial policy. However, this new policy will merely mimic the hand-crafted algorithm. Presumably we are using reinforcement learning because the problem is so complex that a hand-crafted approach is not good enough. So, we can then take the policy trained using off-policy learning and optimise it further using on-policy learning. Another common place for combining off-policy and on-policy learning is when we have an existing approach whose data we can use with an off-policy approach to come up with an initial policy, which can then be refined using on-policy learning.

Evaluation and termination

How do we know how many episodes we should train a model-free learning algorithm for? With value iteration, we terminate the algorithm once the improvement in the value function reaches some threshold.
However, in the model-free environment, we are not aiming to learn a complete policy – only enough to get us from the initial state to an absorbing state; or, in the case of an infinite MDP, to maximise rewards. Due to the randomness of the exploration and the fact that an episode may visit a state it has rarely visited, it is likely that for each episode, the Q-value of at least one state-action pair changes. With model-free learning, we can instead evaluate the policy directly by executing it and recording the reward we receive. Then, we terminate when the policy has reached convergence. By convergence, we mean that the average cumulative reward of the policy is no longer increasing during learning. There are a few ways we can measure this:

1. We can simply record the reward received during each episode of learning, and monitor how much this is increasing. The weakness with this is that a single episode can be noisy due to the exploration parameter used to select actions and the stochastic nature of the MDP. For example, if we use a multi-armed bandit to control exploration vs. exploitation during learning, then at each step, we choose a random action with probability [latex]\epsilon[/latex]. This means that we are not evaluating our real policy — we are evaluating our real policy plus some random exploration.

2. We can pause our learning every now and then, and run our actual policy on the MDP. The strength of this is that we avoid randomness in action selection. However, we still have randomness caused by the stochastic nature of the MDP. The weakness is that it requires us to run more episodes; or more accurately, we are executing episodes and not learning from them.

3. To avoid stochasticity in both the environment and the exploration strategy, we can average the reward over a number of simulations. The weakness is that this is more costly – it requires us to run additional episodes that do not explore and learn.

In these notes, we use the strategy in item 2 above.
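This pause-and-evaluate strategy can be sketched generically. The helper below is an illustration, not part of the notes' codebase: it takes a callable that trains for a batch of episodes and a callable that runs the current greedy policy once, and stops when the windowed average of evaluation rewards stops increasing:

```python
def train_with_evaluation(train_batch, evaluate_policy,
                          batch_size=20, max_batches=100, window=5, tol=1e-3):
    """Interleave learning with evaluation (option 2 above): every
    `batch_size` learning episodes, run the current greedy policy with no
    exploration and record its cumulative reward. Terminate once the
    average evaluation reward plateaus."""
    history = []
    for _ in range(max_batches):
        train_batch(batch_size)            # learn for a batch of episodes
        history.append(evaluate_policy())  # greedy run, no exploration
        if len(history) >= 2 * window:
            recent = sum(history[-window:]) / window
            previous = sum(history[-2 * window:-window]) / window
            if recent - previous < tol:    # reward no longer increasing
                break
    return history
```

In the setting of these notes, `train_batch` would call something like `QLearning(...).execute(episodes=batch_size)` and `evaluate_policy` would roll out `QPolicy(qfunction)` once and return its cumulative reward.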
Every 20 episodes, we run our policy and record the cumulative reward. As we don't know what the maximum reward could be, we monitor the learning and terminate when the exponential moving average of our reward is no longer increasing, similar to what we did in evaluating the value iteration policy. Here, we plot the learning curve for Q-learning on the GridWorld example:

qfunction = QTable()
mdp = GridWorld()
rewards = QLearning(mdp, EpsilonGreedy(), qfunction).execute(episodes=2000)

Once we have the rewards of the episodes, we can plot them:

Plot.plot_cumulative_rewards(["Q-learning"], [rewards])

We can see that the policy converges at around 1000 episodes. However, recalling that this is smoothed using the exponential moving average, it probably converges in fewer episodes, but the early episodes still weigh on the average. Next, we do the same for the contested crossing example:

qfunction = QTable()
mdp = ContestedCrossing()
rewards = QLearning(mdp, EpsilonGreedy(), qfunction).execute(episodes=2000)
Plot.plot_cumulative_rewards(["Q-learning"], [rewards])

We can see a convergence at about 1250-1500 episodes.

Limitations of Q-learning and SARSA

The standard versions that we see in this section have two major limitations:

1. Because we need to select the best action [latex]a[/latex] in Q-learning, we iterate over all actions. This limits Q-learning to discrete action spaces.

2. If we use a Q-table to represent our Q-function, both state spaces and action spaces must be discrete; and further, they must be modest in size, or the Q-table will become too large to fit into memory, or at least so large that it will take many episodes to sample all state-action pairs.

Applications of Reinforcement Learning

• Checkers (Samuel, 1959): first use of RL in an interesting real game
• (Inverted) Helicopter Flight (Ng et al. 2004): better than any human
• Computer Go (AlphaGo 2016): AlphaGo beats Go world champion Lee Sedol 4:1
• Atari 2600 Games (DQN & Blob-PROST 2015): human-level performance on half of 50+ games
• Robocup Soccer Teams (Stone & Veloso, Reidmiller et al.): World's best player of simulated soccer, 1999; Runner-up 2000
• Inventory Management (Van Roy, Bertsekas, Lee & Tsitsiklis): 10-15% improvement over industry standard methods
• Dynamic Channel Assignment (Singh & Bertsekas, Nie & Haykin): World's best assigner of radio channels to mobile telephone calls
• TD-Gammon and Jellyfish (Tesauro, Dahl): World's best backgammon player, grandmaster level

• On-policy reinforcement learning uses the action chosen by the policy for the update.
• Off-policy reinforcement learning assumes that the next action chosen is the action that has the maximum Q-value, but this may not be the case, because with some probability the algorithm will explore instead of exploit.
• Q-learning (off-policy) learns action values relative to the greedy policy.
• SARSA (on-policy) learns action values relative to the policy it follows.
• If we know the MDP, we can use model-based techniques.
• We can also use model-free techniques if we know the MDP model: we just sample transitions and observe rewards from the model.
• If we do not know the MDP, we need to use model-free techniques:
□ Offline: Q-learning, SARSA, and friends.
Riemann Surfaces

Definition: A Riemann Surface $R$ is a second countable, connected, and Hausdorff topological space together with a collection of pairs $\{ (U_{\alpha}, \phi_{\alpha}) : \alpha \in \Gamma \}$ called an Atlas for $R$ with the following properties:

1) For each $\alpha \in \Gamma$, $U_{\alpha}$ is an open subset of $R$.

2) For each $\alpha \in \Gamma$, $\phi_{\alpha}$ is a homeomorphism of $U_{\alpha}$ onto an open subset $V_{\alpha}$ of $\mathbb{C}$.

3) $\displaystyle{R = \bigcup_{\alpha \in \Gamma} U_{\alpha}}$.

4) For all $\alpha, \beta \in \Gamma$, the map $\phi_{\beta} \circ \phi_{\alpha}^{-1}$ is complex analytic from $\phi_{\alpha} (U_{\alpha} \cap U_{\beta})$ to $\phi_{\beta} (U_{\alpha} \cap U_{\beta})$.

Recall the following definitions:

1. A topological space is second countable if it has a countable base.

2. A topological space is Hausdorff if for all $x, y \in R$ with $x \neq y$ there exist open sets $U_x, U_y \subseteq R$ such that $x \in U_x$, $y \in U_y$, and $U_x \cap U_y = \emptyset$.

3. $\phi_{\alpha} : U_{\alpha} \to V_{\alpha}$ is a homeomorphism if $\phi_{\alpha}$ is a bijection where $\phi_{\alpha}$ is continuous on $U_{\alpha}$ and $\phi_{\alpha}^{-1}$ is continuous on $V_{\alpha}$.
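For illustration (this example is not part of the original page, but is standard), consider the Riemann sphere $\hat{\mathbb{C}} = \mathbb{C} \cup \{ \infty \}$ with the two-chart atlas:

```latex
U_1 = \mathbb{C}, \qquad \phi_1(z) = z \\
U_2 = \left( \mathbb{C} \setminus \{ 0 \} \right) \cup \{ \infty \}, \qquad \phi_2(z) = \frac{1}{z}, \quad \phi_2(\infty) = 0 \\
U_1 \cap U_2 = \mathbb{C} \setminus \{ 0 \}, \qquad (\phi_2 \circ \phi_1^{-1})(z) = \frac{1}{z}
```

The transition map $z \mapsto 1/z$ is complex analytic on $\mathbb{C} \setminus \{ 0 \}$, so condition 4) holds and $\hat{\mathbb{C}}$ is a Riemann surface.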
The Beginning to the End of the Universe: Exploring the shape of space-time

The afterglow of the Big Bang reveals the geometry of the universe.

By Avi Loeb | Published: February 2, 2021 | Last updated on May 18, 2023

This story comes from our special January 2021 issue, "The Beginning and the End of the Universe."

In ancient times, scholars such as Aristotle thought that heavy objects would fall faster than lightweight objects under the influence of gravity. About four and a half centuries ago, Galileo Galilei decided to test this assumption experimentally. He dropped objects of different masses from the Tower of Pisa and found that gravity actually causes them all to fall the same way. More than 300 years later, Albert Einstein was struck by Galileo's finding. He realized that if all objects follow the same trajectory under gravity, then gravity might not be a force but rather a property of space-time — the fabric of the universe, which all objects experience in the same way. In one of the most important advances in modern physics, Einstein recognized that when space-time is curved, objects do not follow straight lines. He reckoned that Earth, for example, orbits the Sun in a circle because the Sun curves space-time in its vicinity. This is similar to the path of a ball on the surface of a trampoline whose center is weighed down by a person. In November 1915, Einstein published the mathematical equations that established the foundation for his general theory of relativity. These equations describe the link between matter and the space-time in which it resides, showing that mass deforms space-time and influences the path of matter. In the words of physicist John Wheeler: "Space-time tells matter how to move and matter tells space-time how to curve."

Schwarzschild's solution

A few months later, while serving on the German front during World War I, Karl Schwarzschild became the first to derive a solution to Einstein's equations.
His solution describes the curved space-time around a point of mass, labeled by Wheeler half a century later as a "black hole." Schwarzschild's solution showed that the curvature of space-time diverges to infinity at the centermost point. This point is called the singularity because it is the singular point where Einstein's theory breaks down. The breakdown occurs because Einstein's theory is missing a key component: quantum mechanics. Despite many attempts to unify general relativity with quantum mechanics (such as versions of string theory or loop quantum gravity), we do not have an experimentally verified version of the theory as of yet. Thankfully, the rest of space-time is protected from the uncertain description of the singularity. Schwarzschild's solution provides a spherical event horizon that surrounds the singularity at the so-called Schwarzschild radius. The extent of this radius scales with the mass of the object within. No information can escape from inside this event horizon, which is why we cannot see down to the singularity of a black hole.

The fabric of the cosmos

But Einstein's equations don't solely apply to the space-time around a black hole. They also describe the evolution of the universe at large. We know several facts from observing the universe over the past century. First, the universe is expanding. Second, on very large scales, the expanding universe is nearly homogeneous (meaning it has the same density of matter and radiation) and isotropic (meaning it has the same expansion rate in all directions). Under these circumstances, Alexander Friedmann, Georges Lemaître, Howard Robertson, and Arthur Walker derived a spherically symmetric solution to Einstein's equations that describes our universe and its space-time. The curvature of space-time in this solution can be positive (like the surface of a ball), negative (the surface of a saddle) or zero (a flat surface).
In the spirit of Galileo, can we measure the actual cosmic geometry experimentally? The simplest experimental approach is to draw a large triangle through the universe and measure the sum of its angles. For a negative or positive curvature, the sum would be smaller or larger than 180°, respectively, whereas for a flat geometry it would be exactly 180°. The cosmos has been kind enough to embed the base of this triangle in the cosmic microwave background (CMB). Early on, the universe was hot and dense. The cosmic soup of particles cooled to a temperature below 4,000 Kelvin (about 6,700 degrees Fahrenheit or 3,700 degrees Celsius) 380,000 years after the Big Bang, at which point electrons and protons “recombined” to make hydrogen atoms and the universe became transparent to the CMB, allowing its light to travel unhindered. Therefore, observations of the CMB allow us to witness the universe at the moment of recombination. The CMB’s brightness is not perfectly uniform across the sky — it varies by roughly one part in 100,000 on a wide range of angular scales. But there is one special scale at the epoch of recombination which cosmologists can calculate: the distance that sound (acoustic) waves traversed over the course of these first 380,000 years of the universe. This acoustic scale can serve as the known base of our triangle. It signifies the spatial separation of parcels of the cosmic gas that could have been in acoustic contact with each other. By measuring this special correlation scale for CMB brightness fluctuations on the sky, we can draw an isosceles triangle with Earth at the apex. Knowing the height and base length of the triangle, as well as measuring the angle spanned by the acoustic scale on the sky, would tell us whether the sum of the angles in this triangle equals or deviates from 180° — and hence the curvature of the universe. 
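The classification rule at the heart of this measurement is simple, and can be captured in a toy sketch (added here for illustration; the tolerance handling is an assumption, not part of the article):

```python
def curvature_from_triangle(angle_sum_degrees, tolerance=0.1):
    """Classify spatial curvature from the angle sum of a cosmic triangle:
    a sum above 180 degrees implies positive curvature (sphere-like), below
    180 degrees implies negative curvature (saddle-like), and 180 degrees,
    within measurement tolerance, implies a flat geometry."""
    if angle_sum_degrees > 180.0 + tolerance:
        return "positive"
    if angle_sum_degrees < 180.0 - tolerance:
        return "negative"
    return "flat"
```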
Our flat universe

Researchers performed this experiment in 2000 and later refined the measurement to a high level of precision with the latest data from the Planck satellite. The result revealed that the geometry of the universe is the simplest one we can imagine: flat! Why is the universe so simple? Obviously, nature is under no obligation to represent the simplest solution to Einstein's equations. The theory of cosmic inflation provides one possible explanation. If the universe went through an early period during which it inflated exponentially, then all traces of its initial curvature would be flattened out. Inflation serves as the cosmic iron, erasing all pre-existing wrinkles from space-time. Quantum fluctuations of the vacuum during inflation might have led to the slight brightness fluctuations of the CMB that later seeded the formation of galaxies like the Milky Way. If our cosmic roots were formed then, we owe our existence to the quantum realm. Interestingly, our expanding universe is now entering a new phase of exponential expansion, due to dark energy. Here again, we have no idea how long this inflationary phase will last. If it continues for more than 10 times the current age of the universe, our galaxy will be left alone, surrounded by darkness with no other source of light in sight. It would be the most dramatic incarnation of social distancing from extragalactic civilizations that we can imagine following the era of COVID-19.

Avi Loeb chairs the Board on Physics and Astronomy of the National Academies and serves as the founding director of Harvard's Black Hole Initiative, and director of the Institute for Theory and Computation at the Harvard-Smithsonian Center for Astrophysics. His new book, Extraterrestrial (Houghton Mifflin Harcourt), is available from MyScienceShop.
NCERT Solutions for Class 7 Maths Chapter 12 Algebraic Expressions (EX 12.1) Exercise 12.1

The importance of Mathematics in the daily lives of humans cannot be overstated. There are numerous lucrative career opportunities and research opportunities associated with this academic field. There is no doubt that Mathematics is one of the most organised and vital scientific disciplines. In particular, India has a long history of talented and ingenious mathematicians who have inspired the world with their creativity and ingenuity. During the early stages of schooling, Mathematics is taught as a prominent scientific discipline, and students are encouraged to continually strive to improve their understanding of the subject. In terms of academic importance, Mathematics is an obvious priority. In order to prepare for the term-end or half-yearly exams of any class, students should refer to the academic materials published by the National Council of Educational Research and Training (NCERT), which is a highly acclaimed organisation for publishing scholarly content for learners. NCERT books provide students with a fundamental understanding of each concept and subject matter. Teachers frequently recommend reading NCERT books in preparation for numerous exams. In particular, schools that are affiliated with the Central Board of Secondary Education (CBSE) use NCERT books as part of their curriculum. The textbooks from NCERT are used in these schools to teach students. Educators and teachers can use the educational materials provided by NCERT, a government organisation, for their studies and for imparting lessons. Students should therefore concentrate on learning from and referring to NCERT textbooks as they get ready for exams in any discipline. The fact that NCERT provides educational resources and textbooks for all classes must be considered by students.
For answers to the exercises and problems provided in the NCERT textbooks of any topic or class, students can visit the Extramarks website. A good example of the scholarly materials available are the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1. The NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 offers precise solutions to all of the questions of the Maths Class 7 Chapter 12 Exercise 12.1. Students should think about practising from the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1. Students should only study from NCERT books because they present a concept’s entirety and details in an easy-to-understand way. Students can grasp and determine how to apply concepts by using the questions in NCERT textbooks. However, if the students need NCERT solutions, they can find them on the Extramarks website and mobile application. Students can view the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 to see the methodology for solving algebraic expressions. The NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 pertain to Chapter 12 which is just one chapter from the NCERT textbook prescribed for Mathematics in Class 7. The NCERT textbook for Class 7 has various chapters and topics in it that are important for students to understand. Some of the chapters in the Class 7 NCERT textbook are Integers, Fractions and Decimal, Data Handling, Simple Equations, Lines and Angles, The Triangle and its Properties, Congruence of Triangles, Comparing Quantities, Rational Numbers, Practical Geometry, Perimeter and Area, Algebraic Expressions, Exponents and Powers, Symmetry, and Visualising Solid Shapes. Students should carefully go through each of these chapters one at a time. It is crucial for students to study these subjects in order so that they can comprehend the concepts that follow, and then answering these questions will enable them to deal with complex problems involving the application of several concepts. 
Students must answer every question from the NCERT textbook for practising and students can get help from the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1. The NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 will be of help to students when they choose to solve exercises given in the NCERT textbook. It is crucial that students understand the value of studying Mathematics in Class 7 since it will allow them to lay the foundation for their class coursework in further classes like Class 8. The topics in the chapters will help them study similar but advanced concepts in the future. It will be easier for them to understand the course for further classes as a result. Following the completion of the NCERT textbook of Mathematics, students will be able to deal with more difficult problems. They will be better able to answer questions in advanced-level textbooks with this information. Students would benefit from the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 as they learn and comprehend the ideas in this chapter. Students who want to study mathematics and prepare for more advanced studies should consider using NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1. Class 7 students are expected to examine certain novel concepts. The introduction to these chapters is written as simply as possible to assist students to comprehend the value of studying these ideas in-depth and clarifying any notions that they might find challenging. The concepts in Class 8 and further classes will be simpler for the students to understand because they are comparable to those in Class 7 even though they are entirely new. Students will benefit from the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1. Students can get their questions about this chapter answered by using the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1. Students must keep in mind that learning every chapter of NCERT Class 7 is crucial for their future academic success. 
Their success in exams taken for admission to prestigious universities across India would be aided by the NCERT solutions for Class 7. Students must therefore constantly remind themselves that studying is crucial for their academic future and that they must put in a lot of effort to get into the college and programme of their choice. Students can greatly enhance their performance with regard to this chapter by using the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1, and they can also assess their abilities with its help. Apart from Algebraic Expressions, the new topics students study in the Class 7 NCERT textbook include Integers, Fractions and Decimals, Data Handling, Simple Equations, Lines and Angles, Triangle Properties, Congruence of Triangles, Comparison of Quantities, Rational Numbers, Practical Geometry, Area and Perimeter, Exponents and Powers, Symmetry, and Visualising Solid Shapes. All these chapters will assist students in achieving their academic objectives and in performing well on tests. Students should make it a priority to understand these chapters because doing so will help them lay the groundwork for the exercises and other chapters that will come next. One element left out could cause students to struggle in a time-constrained environment, such as an exam, making it impossible for them to solve some problems or causing them to take too long to respond. This could have a detrimental impact on their performance. The NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 will help students to improve their performance in exams and will enable them to solve such questions. For some students, Mathematics is a very tough subject. They might have trouble with Mathematics, and they might face hindrances while solving problems.
Mathematics has always piqued the curiosity of students. Algebraic Expressions is a new topic included among many other important ones; Exponents, Decimals, Data Handling and Congruency are all included in the curriculum of Mathematics for Class 6, Class 7 and Class 8. While some students struggle with it, others relish the difficulties. Students may find it more difficult to finish the exercises provided in the NCERT textbook because they may find it difficult to comprehend the themes on their first try. Even though learning something new is never simple, after they have mastered it, they will feel incredibly satisfied. Understanding and studying the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 is important, and it takes a lot of practice. Students can better understand all of the chapter's topics with the aid of the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1, which give an overview of the chapter. This chapter contains a lot of information about various important topics in Algebraic Expressions, such as an Introduction to Algebraic Expressions, How are Algebraic Expressions Formed, Terms of an Expression, Coefficients, Like and Unlike Terms, Monomials, Binomials, Trinomials, Polynomials, Addition and Subtraction of Algebraic Expressions, Adding and Subtracting Like Terms, Adding and Subtracting General Algebraic Expressions, Finding the Value of an Expression, Using Algebraic Expressions – Formulas and Rules, Perimeter Formulas, Area Formulas, Rules of Number Patterns, Some More Number Patterns and Patterns in Geometry. These topics will help students understand Algebraic Expressions better. This will help them to understand how algebra is applied in various fields and how it is used by various people. The NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 will help students understand these topics.
The NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 help students enhance their knowledge of this chapter and understand the use of algebraic expressions. Students learn how algebraic expressions are formed using variables and constants, and which operations can be performed on them, including addition, subtraction, multiplication and division. It is also possible to learn what expressions are made of, what a term is, and how terms are combined to form an expression. The chapter also gives the meanings of monomial, binomial, trinomial and polynomial, as well as the meaning of a coefficient as the numerical factor in a term, and explains the difference between like and unlike terms. Students can also learn to find the value of an algebraic expression and how operations like addition and subtraction are performed on like and unlike terms. Studying from the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 gives students a better understanding of all of this, and they are encouraged to use the solutions as a dependable learning tool. The solutions improve students' understanding of the chapter and their ability to answer problems of this nature, help with any questions they may have, and assist students in achieving their academic objectives and improving their test results.

NCERT Solutions for Class 7 Maths Chapter 12 Algebraic Expressions (EX 12.1) Exercise 12.1

A summary of this chapter's solutions is provided in the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1.
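The like-and-unlike-terms idea described above can be sketched in code. This is an illustrative sketch, not part of the NCERT material: each term is represented as a coefficient plus its variable factors, and two terms combine only when their algebraic factors match.

```python
from collections import Counter

def signature(term_vars):
    # Canonical form of a term's algebraic part: "xy" and "yx" give the same key.
    return tuple(sorted(Counter(term_vars).items()))

def combine_like_terms(terms):
    # Add a list of (coefficient, variable-factors) terms, merging like terms.
    totals = {}
    for coeff, term_vars in terms:
        key = signature(term_vars)
        totals[key] = totals.get(key, 0) + coeff
    return totals

# 14xy + 42yx are like terms, so they combine into the single term 56xy;
# 4m^2p and 4mp^2 have different algebraic factors, so they stay separate.
combined = combine_like_terms([(14, "xy"), (42, "yx")])
separate = combine_like_terms([(4, "mmp"), (4, "mpp")])
```

Here "mmp" stands for the factors of $4m^2p$ and "mpp" for those of $4mp^2$, mirroring how the chapter decides whether terms are like or unlike.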
An introduction to algebraic expressions, how algebraic expressions are formed, terms of an expression, coefficients, like and unlike terms, monomials, binomials, trinomials and polynomials are the foundations of NCERT Class 7 Maths Chapter 12 Exercise 12.1, which has solutions to 7 questions for students to practise. Through this practice, students learn how algebraic expressions are formed. The addition and subtraction of algebraic expressions, adding and subtracting like terms, and adding and subtracting general algebraic expressions are the foundation of Exercise 12.2, which has 6 questions. Exercise 12.3 involves finding the value of an expression and has 10 questions. Exercise 12.4 has 2 questions covering using algebraic expressions-formulas and rules, perimeter formulas, area formulas, rules of number patterns, and some more number patterns and patterns in geometry. These questions teach students how to analyse particular types of problems. The solutions to Class 7 Maths Ch 12 Ex 12.1, together with the examples provided in the NCERT textbook, show students how to answer the textbook questions and make understanding the ideas in Chapter 12 of Class 7 easier.
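Exercise 12.3's task of finding the value of an expression can be illustrated with a small sketch. The `evaluate` helper below is an assumption for illustration only, using Python's expression syntax (`x**2` for $x^2$):

```python
def evaluate(expression, **values):
    # Evaluate an algebraic expression written in Python syntax
    # (e.g. "x**2 + y**2" for x^2 + y^2) at the given variable values.
    # Builtins are disabled so only the supplied variables are visible.
    return eval(expression, {"__builtins__": {}}, values)

# Finding the value of an expression, as in Exercise 12.3:
evaluate("x**2 + y**2", x=3, y=4)   # 9 + 16 = 25
# The perimeter formula 2(l + b) from Exercise 12.4:
evaluate("2*(l + b)", l=7, b=5)     # 2 * 12 = 24
```

Substituting numbers for the variables, as done here, is exactly what "finding the value of an expression" means in the chapter.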
Students who struggle to comprehend Class 7 Maths Chapter 12 Exercise 12.1 will need the assistance of the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 to complete this challenging assignment. The information in the solutions is helpful as students advance to harder problems and during exams: they will be better placed to comprehend a question, choose the right formula, and therefore complete the exam more quickly. Practising with the aid of the NCERT solutions lets students answer questions correctly and avoid confusion, and choosing the proper strategy makes obtaining the correct response straightforward. The solutions give students a general grasp of various types of questions, which helps them perform well in assessments. Students should use the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 to become familiar with the chapter, the manner of answering, and the foundational understanding of the topics, and they will earn excellent grades as a result. The concepts of the introduction to algebraic expressions, how algebraic expressions are formed, terms of an expression, coefficients, like and unlike terms, monomials, binomials, trinomials and polynomials are all embedded in the exercises to teach students how to solve problems. For examples of how to answer various questions, students can refer to the NCERT Mathematics textbook as well as the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1.
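The monomial/binomial/trinomial classification mentioned above follows directly from counting unlike terms. The sketch below is illustrative only and assumes simple expressions without parentheses or negative exponents:

```python
import re

def count_terms(expr):
    # Count the terms of a simple expression such as "4*y - 7*z":
    # split on the + and - signs that separate terms (a leading sign
    # does not start a new term).
    stripped = expr.replace(" ", "").lstrip("+-")
    return len(re.split(r"[+-]", stripped))

def classify(expr):
    # Monomials, binomials and trinomials have 1, 2 and 3 unlike terms.
    names = {1: "monomial", 2: "binomial", 3: "trinomial"}
    return names.get(count_terms(expr), "polynomial")

classify("4*y - 7*z")    # two terms: binomial
classify("x + y - x*y")  # three terms: trinomial
classify("7*m*n")        # one term: monomial
```

This mirrors the rule the solutions use for question 5 of the exercise set.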
As opposed to competitive entrance examinations like the JEE, which ask students only to identify or write down the correct answers, annual exams require students to work through each question step by step. The NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 benefit students as they study for such exams. They are written in a clear, easy style so that every student can understand them and clear up their doubts. The notions, concepts and applications in the chapter, along with its many formulas, become easier to grasp; when students are familiar with the applications and guiding principles of a concept or formula, reproducing that knowledge in the exams is easier. Students will discover that the solutions aid both their academic and personal development and help them better grasp Chapter 12. The four exercises in this chapter are laid out so that students can thoroughly learn and practise each of the chapter's topics, and there is also a bonus activity. The exercises are arranged so that each subtopic receives adequate practice, taking its significance into account. In this way students can establish a strong foundation in algebraic topics. Students may find it challenging to comprehend Algebraic Expressions, but with the help of the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1, they can do so more easily. To manage their time effectively during an exam, students must be able to solve questions quickly. Time management during the exam is crucial, since it lets students review their answer sheet at the end and check for any implicit errors. Answering the questions requires thoroughly reviewing the materials.
Reviewing in this way gives students time to remedy their errors while also helping them understand how they should have answered the questions and where they went wrong. With the aid of the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1, students can identify their weak areas during practice and rectify them, and they can become aware of careless errors they might make and prevent them during the exam. The solutions are of great assistance during tests: they help students manage their time in the exam, complete their coursework, and prepare for the test.

Access NCERT Solutions for Maths Class 7 Chapter 12 – Algebraic Expressions

Students can access the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 on the Extramarks website and mobile application at any time while they are studying. Students must realise the value of practising Mathematics. Their responses may contain careless errors; even when students follow the right approach to a solution, such errors can lead to incorrect answers. Enough practice helps students avoid these errors. They must understand that making errors is a normal part of learning, but that to improve, they must recognise and fix them, and the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 can assist with this. Practice enables students to recognise their weak areas and be aware of them during the exam, and the solutions contain the correct responses.
Students should keep in mind that the purpose of the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 is to let them put into practice what they have learned in class while studying these ideas.

NCERT Solutions for Class 7 Maths Chapter 12 Algebraic Expressions Exercise 12.1

The Extramarks e-learning platform provides a plethora of online learning resources to facilitate a smooth learning experience for students. Owing to Extramarks, students can learn in a flexible environment, with the option of receiving help offline or online. As with the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1, students can obtain academic support for any subject in any language they wish to learn, depending on their needs. Students can find out more on the Extramarks website and mobile application, where they will also find the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 along with additional academic support materials. Extramarks has launched the Career Assessment Programme (CAP) and the Multiple Intelligence Test (MIT). Students can take the CAP psychometric exam to receive an in-depth assessment of their skills and abilities to guide them in selecting an appropriate career path, while the MIT test helps them better understand their interests and proclivities. The purpose is to educate students about the numerous professions and employment opportunities available to them and to guide their decisions regarding a potential career. These two initiatives were started to help students make wiser choices about their academic future, giving them the information needed to select a career path and pursue the objectives they have set for themselves.
They will gain the knowledge necessary to identify their areas of strength and weakness, cultivate a love of learning, and, most importantly, gain the confidence and drive required to put in the effort and achieve success; as a result, their morale will improve. By establishing realistic goals, students can recognise their potential and work towards it, which helps them overcome their academic challenges. The NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1, other NCERT solutions, solved board papers, sample papers, mock tests, and reference materials for entrance exams like the JEE, NEET, CUET, SAT, PSAT, and TOEFL are among the study materials that Extramarks offers. Any inquiries students may have regarding a board exam or entrance exam can be directed to Extramarks, which also provides doubt-clearing sessions on anything related to their studies. Students can therefore receive help for any exam they are studying for or intend to study for, and Extramarks can help them achieve their academic goals and the grades required for their chosen schools and programmes. Extramarks additionally provides practice tests for the NEET and JEE exams, which enable students to assess their strengths and weaknesses, and the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 also help students find out how to improve academically. Receiving a report on their performance and assessment is advantageous: with its aid, students can identify their mistakes and weak points and take steps to strengthen them in order to perform better in the actual exam.
Attempting a practice test gives students an understanding of the structure of the examination, how to manage their time while taking it, and which areas might be difficult to understand and solve. Students need to understand and control each of these aspects in order to perform better, and Extramarks' NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 can assist with this. The solutions also show students the significance of answering the questions provided in the NCERT textbooks. There are many benefits to using solutions provided by Extramarks such as the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1, some of which are stated below:
• Extramarks prepares scholarly content in collaboration with experts. As a result, these solutions are checked for any evident lacunae before being published online.
• Extramarks' NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 provide an overview of the exercise and its subjects along with the answers themselves.
• Using the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 from Extramarks, students can learn about algebraic expressions and appreciate how they are used.
• Many students will find it helpful that Extramarks' NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 are available through both the website and the Learning App.
• The NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 from Extramarks provide an exact technique of solution in addition to exact answers, an explanation of the methods used in each question, and their significance in the board examination with regard to the marks distribution.
Students can use Extramarks for additional benefits because it provides study notes for all of the chapters and exercises.
The NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 are useful for students who are just starting the chapter, as they help clarify any unanswered questions. Students who want to improve their academic performance can use them as a study tool, and practising with them enhances time management skills and improves the chances of passing both entrance exams and board exams. Extramarks provides students with a plethora of tools on its Learning App and website, such as the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1, and students should use these materials to achieve their academic goals, whether they are preparing for board exams or entrance tests like the JEE or CUET. Students gain an advantage when they get the majority of their study resources from one place: their work and study notes stay organised, their planning is coordinated, and their performance analysis is finished on time, which helps them stay on track with their academics. Extramarks provides a platform for students to interact with its experts and study material, such as the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1, to obtain better scores; as a result, students have a higher chance of passing any exam they are preparing for. The mentors of Extramarks contribute to the preparation of study materials for students, so these solutions are checked for inconsistencies before being published on the website.
Extramarks' NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1 provide students with a summary of the exercise and its contents in addition to the exact solutions. Students can use them to study Algebraic Expressions in preparation for their exams, and many find it convenient that the solutions are available via both the Extramarks website and mobile application. NCERT solutions for many chapters are available on Extramarks; the NCERT Solutions for Class 7 Maths Chapter 12 Exercise 12.1 are just one example of the type of study material Extramarks provides. In the form of NCERT Solutions Class 6, Extramarks provides study materials for Class 6 in Social Science, Science, English, Hindi, and Mathematics, so any questions students may have will be answered; these can be accessed through the Extramarks educational website and Learning App in both English and Hindi. With NCERT Solutions Class 7, Extramarks provides study materials for Class 7 in Social Science, Science, English, Hindi, and Mathematics; these solutions clarify any questions students may have about a chapter or subject and are available in both English and Hindi via the website and app. One such example is the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1. With NCERT Solutions Class 8, Extramarks provides study materials for Class 8 in Social Science, Science, English, Hindi, and Mathematics, enabling students to clarify any questions they may have regarding a chapter or subject.
Students can access the service through the Extramarks website and mobile application, with these NCERT solutions available in both English and Hindi. Study materials for Class 9 are available from Extramarks in disciplines such as Social Science, Science, English, Hindi, and Mathematics; by perusing these solutions, students can clarify any questions about a chapter or subject, in English or Hindi, via the website or mobile application. Through the NCERT Solutions Class 10 study material, Extramarks offers resources in Social Science, Science, English, Hindi, and Mathematics for Class 10, accessible on the website and app in both English and Hindi. Extramarks provides study materials for Class 11 in subjects such as Mathematics, Physics, Chemistry, Biology, Business Studies, Economics, Accountancy, Hindi, and English through NCERT Solutions Class 11, so students can clarify any questions about a chapter or subject; this resource is available via the website and mobile application in both English and Hindi. Extramarks also offers NCERT solutions for Class 12 in subjects including Mathematics, Physics, Chemistry, Biology, Business Studies, Economics, Accountancy, Hindi, and English, again accessible through the website and mobile application in both English and Hindi. NCERT solutions for all grade levels and subjects are available in Hindi and English on the Extramarks Learning App and website.
Students of all academic levels will benefit from this, as it will clarify any issues or ambiguities they may have. The solutions cover questions on any subject, including Hindi, English, Science, Social Science, and Mathematics. One example of the top-notch study materials that Extramarks offers is the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1. Students should therefore practise using the NCERT Solutions Class 7 Maths Chapter 12 Exercise 12.1.

Q1. Get the algebraic expressions in the following cases using variables, constants and arithmetic operations.
(i) Subtraction of z from y.
(ii) One-half of the sum of numbers x and y.
(iii) The number z multiplied by itself.
(iv) One-fourth of the product of numbers p and q.
(v) Numbers x and y both squared and added.
(vi) Number 5 added to three times the product of numbers m and n.
(vii) Product of numbers y and z subtracted from 10.
(viii) Sum of numbers a and b subtracted from their product.

Ans. (i) $y - z$ (ii) $\frac{1}{2}(x + y)$ (iii) $z^2$ (iv) $\frac{1}{4}pq$ (v) $x^2 + y^2$ (vi) $5 + 3mn$ (vii) $10 - yz$ (viii) $ab - (a + b)$

Q2. (i) Identify the terms and their factors in the following expressions. Show the terms and factors by tree diagrams.
(a) $x - 3$ (b) $1 + x + x^2$ (c) $y - y^3$ (d) $5xy^2 + 7x^2y$ (e) $-ab + 2b^2 - 3a^2$
(ii) Identify terms and factors in the expressions given below:
(a) $-4x + 5$ (b) $-4x + 5y$ (c) $5y + 3y^2$ (d) $xy + 2x^2y^2$ (e) $pq + q$ (f) $1.2ab - 2.4b + 3.6a$ (g) $\frac{3}{4}x + \frac{1}{4}$ (h) $0.1p^2 + 0.2q^2$

Ans. (i) is answered with tree diagrams in the textbook solutions. For (ii):
(a) Terms: $-4x$, $5$; factors: $-4, x$ and $5$
(b) Terms: $-4x$, $5y$; factors: $-4, x$ and $5, y$
(c) Terms: $5y$, $3y^2$; factors: $5, y$ and $3, y, y$
(d) Terms: $xy$, $2x^2y^2$; factors: $x, y$ and $2, x, x, y, y$
(e) Terms: $pq$, $q$; factors: $p, q$ and $q$
(f) Terms: $1.2ab$, $-2.4b$, $3.6a$; factors: $1.2, a, b$; $-2.4, b$; $3.6, a$
(g) Terms: $\frac{3}{4}x$, $\frac{1}{4}$; factors: $\frac{3}{4}, x$ and $\frac{1}{4}$
(h) Terms: $0.1p^2$, $0.2q^2$; factors: $0.1, p, p$ and $0.2, q, q$

Q3. Identify the numerical coefficients of terms (other than constants) in the following expressions:
(i) $5 - 3t^2$ (ii) $1 + t + t^2 + t^3$ (iii) $x + 2xy + 3y$ (iv) $100m + 1000n$ (v) $-p^2q^2 + 7pq$ (vi) $1.2a + 0.8b$ (vii) $3.14r^2$ (viii) $2(l + b)$ (ix) $0.1y + 0.01y^2$

Ans.
(i) Term $-3t^2$; coefficient $-3$
(ii) Terms $t$, $t^2$, $t^3$; coefficients $1$, $1$, $1$
(iii) Terms $x$, $2xy$, $3y$; coefficients $1$, $2$, $3$
(iv) Terms $100m$, $1000n$; coefficients $100$, $1000$
(v) Terms $-p^2q^2$, $7pq$; coefficients $-1$, $7$
(vi) Terms $1.2a$, $0.8b$; coefficients $1.2$, $0.8$
(vii) Term $3.14r^2$; coefficient $3.14$
(viii) Terms $2l$, $2b$; coefficients $2$, $2$
(ix) Terms $0.1y$, $0.01y^2$; coefficients $0.1$, $0.01$

Q4. (a) Identify terms which contain x and give the coefficient of x.
(i) $y^2x + y$ (ii) $13y^2 - 8yx$ (iii) $x + y + 2$ (iv) $5 + z + zx$ (v) $1 + x + xy$ (vi) $12xy^2 + 25$ (vii) $7x + xy^2$
(b) Identify terms which contain $y^2$ and give the coefficient of $y^2$.
(i) $8 - xy^2$ (ii) $5y^2 + 7x$ (iii) $2x^2y + 7y^2 - 15xy^2$

Ans. (a)
(i) Term $y^2x$; coefficient $y^2$
(ii) Term $-8yx$; coefficient $-8y$
(iii) Term $x$; coefficient $1$
(iv) Term $zx$; coefficient $z$
(v) Terms $x$, $xy$; coefficients $1$, $y$
(vi) Term $12xy^2$; coefficient $12y^2$
(vii) Terms $7x$, $xy^2$; coefficients $7$, $y^2$
(b)
(i) Term $-xy^2$; coefficient $-x$
(ii) Term $5y^2$; coefficient $5$
(iii) Terms $7y^2$, $-15xy^2$; coefficients $7$, $-15x$

Q5. Classify into monomials, binomials and trinomials.
(i) $4y - 7z$ (ii) $y^2$ (iii) $x + y - xy$ (iv) $100$ (v) $ab - a - b$ (vi) $5 - 3t$ (vii) $4p^2q - 4pq^2$ (viii) $7mn$ (ix) $z^2 - 3z + 8$ (x) $a^2 + b^2$ (xi) $z^2 + z$ (xii) $1 + x + x^2$

Ans. Monomials, binomials and trinomials have 1, 2 and 3 unlike terms respectively.
(i) binomial (ii) monomial (iii) trinomial (iv) monomial (v) trinomial (vi) binomial (vii) binomial (viii) monomial (ix) trinomial (x) binomial (xi) binomial (xii) trinomial

Q6. State whether a given pair of terms is of like or unlike terms.
(i) $1, 100$ (ii) $-7x, \frac{5}{2}x$ (iii) $-29x, -29y$ (iv) $14xy, 42yx$ (v) $4m^2p, 4mp^2$ (vi) $12xz, 12x^2z^2$

Ans. Terms with the same algebraic factors are called like terms; terms with different algebraic factors are called unlike terms.
(i) like (ii) like (iii) unlike (iv) like (v) unlike (vi) unlike

Q7. Identify like terms in the following:
(a) $-xy^2$, $-4yx^2$, $8x^2$, $2xy^2$, $7y$, $-11x^2$, $-100x$, $-11yx$, $20x^2y$, $-6x^2$, $y$, $2xy$, $3x$
(b) $10pq$, $7p$, $8q$, $-p^2q^2$, $-7qp$, $-100q$, $-23$, $12q^2p^2$, $-5p^2$, $41$, $2405p$, $78qp$, $13p^2q$, $qp^2$, $701p^2$

Ans.
(a) Like terms: $-xy^2, 2xy^2$; $-4yx^2, 20x^2y$; $8x^2, -11x^2, -6x^2$; $7y, y$; $-100x, 3x$; $-11yx, 2xy$
(b) Like terms: $10pq, -7qp, 78qp$; $7p, 2405p$; $8q, -100q$; $-p^2q^2, 12q^2p^2$; $-23, 41$; $-5p^2, 701p^2$; $13p^2q, qp^2$
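As a quick sanity check, the word-to-symbol translations in question 1 can be evaluated at sample numbers. The helper names below are illustrative only, not part of the NCERT solutions:

```python
def one_half_of_sum(x, y):
    # Q1 (ii): one-half of the sum of numbers x and y.
    return (x + y) / 2

def sum_subtracted_from_product(a, b):
    # Q1 (viii): sum of numbers a and b subtracted from their product.
    return a * b - (a + b)

one_half_of_sum(4, 6)              # (4 + 6) / 2 = 5.0
sum_subtracted_from_product(5, 3)  # 15 - 8 = 7
```

Plugging in small numbers like this is a useful habit for verifying that a worded statement has been translated into the right expression.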
{"url":"https://www.extramarks.com/studymaterials/ncert-solutions/ncert-solutions-class-7-maths-chapter-12-exercise-12-1/","timestamp":"2024-11-09T10:18:26Z","content_type":"text/html","content_length":"707973","record_id":"<urn:uuid:a9e7a7c2-596f-41fc-a95c-2e82118ae991>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00857.warc.gz"}
CSCE 580: Artificial Intelligence
TTh 1400-1515, SWGN 2A24
Prerequisites: CSCE 350 (Data Structures and Algorithms).
Instructor: Marco Valtorta
Office: Swearingen 3A55, 777-4641
E-mail: mgv@cse.sc.edu
Office Hours: 1100-1200 MWF
Any student with a documented disability should contact the Office of Student Disability Services at (803) 777-6142 to make arrangements for proper accommodations.
Grading and Program Submission Policy
Reference materials: David Poole and Alan Mackworth. Artificial Intelligence: Foundations of Computational Agents. Cambridge University Press, 2010 (ISBN 978-0-521-51900). Supplementary materials from the authors are also available.
Specific objectives of this course are that the students will be able to:
• Analyze and categorize software intelligent agents and the environments in which they operate
• Formalize computational problems in the state-space search approach and apply search algorithms (especially A*) to solve them
• Represent domain knowledge using features and constraints and solve the resulting constraint processing problems
• Represent domain knowledge about objects using propositions and solve the resulting propositional logic problems using deduction and abduction
• Represent knowledge in Horn clause form and use the AILog dialect of Prolog for reasoning
• Reason under uncertainty using Bayesian networks
• Represent domain knowledge about individuals and relations using first-order logic
• Do inference using resolution refutation theorem proving (if time allows)
• Program 1: Do either exercise 1 from Ch. 22 of Luger and Stubblefield (link under "some useful links", below) or exercise 2 in Chapter 4, due on Tuesday, October 30.
Tests: Final exam from fall 2011.
Most lectures use notes from the authors of the textbook. (See link under "reference materials," above.) Overhead transparencies for [P] are linked to the main page for [P]; the specific link is here.
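One of the course objectives above — state-space search with A* — can be illustrated with a short, self-contained sketch. This is illustrative material, not part of the course page; the grid world and its wall are invented for the example:

```python
import heapq

def astar(start, goal, neighbors, h):
    """A* search: returns the path from start to goal, or None.

    neighbors(n) yields (next_node, step_cost); h(n) must be an admissible
    heuristic (it never overestimates the remaining cost to the goal).
    """
    frontier = [(h(start), start)]
    came_from = {start: None}
    g_score = {start: 0}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt, cost in neighbors(node):
            ng = g_score[node] + cost
            if ng < g_score.get(nxt, float("inf")):
                g_score[nxt] = ng
                came_from[nxt] = node
                heapq.heappush(frontier, (ng + h(nxt), nxt))
    return None

# A 4x4 grid, 4-connected, with a wall at x = 1 for y in {0, 1, 2};
# Manhattan distance is an admissible heuristic here.
def grid_neighbors(p):
    x, y = p
    walls = {(1, 0), (1, 1), (1, 2)}
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < 4 and 0 <= ny < 4 and (nx, ny) not in walls:
            yield (nx, ny), 1

h = lambda p: abs(p[0] - 3) + abs(p[1] - 0)  # Manhattan distance to (3, 0)
path = astar((0, 0), (3, 0), grid_neighbors, h)
```

The search is forced around the wall, so the shortest path takes 9 unit steps (10 nodes).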
{"url":"https://cse.sc.edu/~mgv/csce580f12/index.html","timestamp":"2024-11-14T12:32:18Z","content_type":"text/html","content_length":"19794","record_id":"<urn:uuid:e499d049-3470-45aa-a272-68e450bd3cea>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00487.warc.gz"}
As already described here, the representation in the following categories contains the whole order and not just one element (unlike PartialOrder in catdef.spad).

Poset Implementation
The poset domain is similar to the DirectedGraph domain; the difference is that the poset has, at most, one arrow from any given source to any given destination. We enforce this by coding the representation differently in the poset. The representation is a two-dimensional array of boolean values which indicate whether or not a given source has an arrow to a given destination. Also, unlike graph, the poset has reflexivity, antisymmetry and transitivity properties. These properties are not enforced by the representation. However there are some functions provided to enforce these properties:
• completeReflexivity:(s:%) -> %
• completeTransitivity:(s:%) -> %
• isAntisymmetric?:(s:%) -> Boolean

DCPO Directed-Complete Partial Order
Complete partial orders are partial orders which are guaranteed to have meets and/or joins, depending on the type of CPO. The terminology around this is not always consistent, so I will use the following terminology. A DCPO, or directed-complete partial order, is a poset where joins (suprema, least upper bounds) are defined. That is, join is a complete or closed function, not a partial function. Often a DCPO is required to have a bottom element; then it is called a pointed DCPO or a CPO.

Finite DCPOs are pointed DCPOs: we can get the bottom element just by taking the join of all the elements. This is not true of infinite DCPOs. As an example, consider the integers: any finite set of integers has a minimum element, but the set of all integers does not (minus infinity is not an integer). Another example, more applicable to topology, is subsets of line segments. The examples below only model finite DCPOs.

TODO: I want to model infinite DCPOs; we obviously can't do this by using a list of all the elements (as below).
We need to define the elements, and their order, recursively. Here we will call the dual notion of a directed-complete poset a Co-DCPO; alternatively it is sometimes called a filtered-complete partial order. BiCPO is the join of Dcpo and CoDcpo; that is, both joins and meets are guaranteed to exist.
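A minimal Python sketch of the same boolean-matrix representation may make the three helper operations concrete. This is an illustration, not the actual SPAD domain code; the function names merely mirror the signatures listed above:

```python
def complete_reflexivity(rel):
    """Add every (i, i) arrow; rel is a square boolean matrix."""
    n = len(rel)
    return [[rel[i][j] or i == j for j in range(n)] for i in range(n)]

def complete_transitivity(rel):
    """Warshall's algorithm: close the relation under transitivity."""
    n = len(rel)
    r = [row[:] for row in rel]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

def is_antisymmetric(rel):
    """No two distinct elements may have arrows in both directions."""
    n = len(rel)
    return all(not (rel[i][j] and rel[j][i])
               for i in range(n) for j in range(n) if i != j)
```

For example, closing the chain 0 → 1 → 2 under transitivity adds the arrow 0 → 2, and the result is still antisymmetric, so it is a valid (finite) partial order.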
{"url":"http://euclideanspace.com/prog/scratchpad/mycode/discrete/logic/poset/index.htm","timestamp":"2024-11-10T11:04:19Z","content_type":"text/html","content_length":"20159","record_id":"<urn:uuid:87815f04-12be-4bcf-8b7c-52659f80d2b7>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00364.warc.gz"}
3. Mathematics
   1. Probability Distributions
      1. Discrete Probability Distributions
         1. Binomial Distribution
            1. Definition and Random Variables
            2. Parameters: n (number of trials), p (probability of success)
            3. Probability Mass Function (PMF) and its Derivation
            4. Mean and Variance
            5. Applications (e.g., success/failure scenarios)
         2. Poisson Distribution
            1. Definition and Characteristics
            2. Lambda (λ) as the Average Rate of Occurrence
            3. PMF and its Relationship to Binomial Distribution
            4. Mean and Variance
            5. Applications (e.g., rare events in a fixed interval)
         3. Geometric Distribution
            1. Definition and Properties
            2. Parameter: p (probability of success)
            3. PMF and Memoryless Property
            4. Mean and Variance
            5. Applications (e.g., trials until first success)
         4. Hypergeometric Distribution
            1. Definition and Differences from Binomial
            2. Parameters: N (population size), n (number of successes in population), k (sample size)
            3. PMF and Combinatorial Interpretation
            4. Mean and Variance
            5. Applications (e.g., sampling without replacement)
      2. Continuous Probability Distributions
         1. Normal Distribution
            1. Definition (Gaussian Distribution)
            2. Parameters: μ (mean), σ^2 (variance)
            3. Probability Density Function (PDF) and Properties
            4. Standard Normal Distribution (Z-distribution)
            5. Central Role in the Central Limit Theorem
            6. Applications (e.g., measurement errors, natural phenomena)
         2. Exponential Distribution
            1. Definition and Memoryless Property
            2. Parameter: λ (rate parameter)
            3. PDF and Cumulative Distribution Function (CDF)
            4. Mean and Variance
            5. Applications (e.g., time between events in Poisson processes)
         3. Uniform Distribution
            1. Definition and Characteristics
            2. Parameters: a (minimum), b (maximum)
            3. PDF and CDF for Continuous Uniform Distribution
            4. Mean and Variance
            5. Applications (e.g., random number generation)
         4. Gamma Distribution
            1. Definition and Extension of Exponential Distribution
            2. Parameters: α (shape), β (rate)
            3. Relationships with Exponential and Chi-Square Distributions
            4. PDF and Mean and Variance
            5. Applications (e.g., waiting time models)
         5. Beta Distribution
            1. Definition and Bounded Nature
            2. Parameters: α (shape), β (shape)
            3. PDF and the Role of the Beta Function
            4. Mean and Variance
            5. Applications (e.g., Bayesian statistics)
      3. Multivariate Distributions
         1. Joint Distributions
            1. Definition and Concepts
            2. Joint PDF and PMF
            3. Marginal Distributions
            4. Applications in Multi-dimensional Random Variables
         2. Marginal Probability Distribution
            1. Derivation from Joint Distribution
            2. Importance in Independence of Variables
            3. Techniques for Finding Marginals
         3. Conditional Distribution
            1. Definition and Derivation from Joint and Marginal Distributions
            2. Conditional PDF and CDF
            3. Relevance in Statistical Modeling
      4. Special Distributions
         1. t-Distribution
            1. Definition and Relation to Normal Distribution
            2. Parameters: degrees of freedom
            3. PDF and Role in Small Sample Statistics
            4. Applications (e.g., confidence intervals, hypothesis testing)
         2. Chi-Square Distribution
            1. Definition and Connections to Normal Distributions
            2. Degrees of Freedom as a Key Parameter
            3. PDF and Testing Goodness of Fit
            4. Applications (e.g., variance estimation)
         3. F-Distribution
            1. Definition and Ratio of Chi-Square Variables
            2. Parameters: degrees of freedom for numerator and denominator
            3. PDF and Role in ANOVA
            4. Applications (e.g., comparing variances)
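As a concrete companion to the first branch of the outline (binomial PMF, mean and variance), here is a short pure-Python sketch; the parameter values n = 10, p = 0.3 are illustrative only:

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(X = k) = C(n, k) * p^k * (1 - p)^(n - k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.3
# The PMF sums to 1, and the mean/variance match the closed forms
# n*p and n*p*(1-p) quoted in every reference on the distribution.
total = sum(binomial_pmf(k, n, p) for k in range(n + 1))
mean = sum(k * binomial_pmf(k, n, p) for k in range(n + 1))
var = sum((k - mean) ** 2 * binomial_pmf(k, n, p) for k in range(n + 1))
```

Here the numeric sums reproduce mean = n·p = 3.0 and variance = n·p·(1 − p) = 2.1 up to floating-point rounding.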
{"url":"https://www.usefullinks.org/guide/Probability-Theory-7.html","timestamp":"2024-11-10T11:25:56Z","content_type":"text/html","content_length":"117045","record_id":"<urn:uuid:bca6b2e0-eac1-490b-99a9-ca63d181192b>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00562.warc.gz"}
Strategies vs Models
Strategy: How students solve the problem. (Examples: Decomposition, Compensation, Traditional Algorithm)
Model: How students notate their thinking. (Examples: Number line, Ten Frames, Base Ten Blocks)
Important to Know: One strategy can be explained using multiple models. One model can be used to explain multiple strategies. Fluency with strategies evolves over time.

The Development of Mathematical Reasoning
Pamela Harris' Development of Mathematical Reasoning: Counting Strategies ⇒ Additive Thinking Strategies ⇒ Multiplicative Reasoning Strategies ⇒ Proportional Reasoning Strategies ⇒ Functional Reasoning Strategies

Students develop mathematical reasoning as they develop more sophisticated thinking. A student who solves 4 × 5 by using counting strategies or additive thinking strategies is not engaging in multiplicative reasoning. Although this student will correctly determine the answer to 4 × 5 using those strategies, if they do not develop multiplicative reasoning, exploring more complex questions will be extremely challenging. For example, a student may use counting or additive thinking strategies as part of their process for solving 1.2 × 2.3 or (x + 2)(x − 1), but will be unable to solve it without using multiplicative thinking. When a student develops fluency in multiple strategies, including more sophisticated ones, the student can confidently select the most appropriate strategy based on the numbers.

Naming Strategies: Mathematical strategies should be named using the math utilized in the strategy. It might be cute to name a strategy after the student who first demonstrates it in class, but when a student moves to a different class or a different school, non-conventional naming is not helpful. A grade-level team and, preferably, a school should agree upon names for the strategies that will be explored in math class. There aren't that many of them.
Decomposing, compensating, partitioning by place value, give and take, and the traditional algorithm will cover almost all possible strategies.
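For readers who want to see two of the named strategies operationally, here is an illustrative Python sketch (the function names and numbers are invented for this example, not taken from the framework). It shows compensation and partitioning by place value arriving at the same sum by different routes:

```python
def compensate_add(a, b, friendly=100):
    """Compensation: adjust one addend to a friendly number, then undo it.
    e.g. 98 + 47 -> (100 + 47) - 2."""
    shift = friendly - a
    return (friendly + b) - shift

def partition_by_place_value(a, b):
    """Partitioning by place value: split both addends into tens and ones,
    add the parts, then recombine. e.g. 98 + 47 -> (90 + 40) + (8 + 7)."""
    tens = (a // 10) * 10 + (b // 10) * 10
    ones = a % 10 + b % 10
    return tens + ones
```

Both routes give 98 + 47 = 145; the point of the framework is that these are distinct strategies even though any model (number line, base ten blocks) could notate either one.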
{"url":"https://mathframework.com/strategies/strategies-vs-models/","timestamp":"2024-11-09T10:13:32Z","content_type":"text/html","content_length":"145931","record_id":"<urn:uuid:f87cc655-6fd7-4f1d-937e-86b2c8e878db>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00373.warc.gz"}
The Importance of Arbitrary Precision
To re-run the animation, do a forced reload on this page. This is usually done by holding down SHIFT while reloading the page (may vary depending on your browser).

Two photon paths are simulated in this animation. The only difference is that the paths are initialized with different precision. The gray path starts with machine precision. The orange path starts with 120 digits of precision using Mathematica's arbitrary precision. At each step in the algorithm, precision is lost. In the case of the gray photon, all precision is lost; the resulting path can no longer be trusted and deviates significantly from the orange path. The orange path ends with 62 digits of precision still remaining.

Check out the Wolfram Technology Guide for more on this technology as well as other technology used in Mathematica. This animation was generated using code supplied by Jeff Bryant and Jeremy Davis. The underlying math was originally obtained from one of the Hundred-Dollar, Hundred-Digit Challenge Problems published in SIAM News.
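The same effect can be reproduced outside Mathematica. The sketch below is a stand-in for the photon simulation, not the animation's actual code: the bit-doubling map x → 2x mod 1 discards roughly one bit of precision per step, just as the animation's algorithm loses precision at each bounce. It uses Python's `decimal` module with 120 digits, mirroring the orange path's starting precision:

```python
from decimal import Decimal, getcontext

getcontext().prec = 120  # 120 digits, like the orange path's starting precision

def doubling_map_float(x0: float, n: int) -> float:
    """Iterate x -> (2x) mod 1 in machine precision (53-bit doubles)."""
    x = x0
    for _ in range(n):
        x = (2.0 * x) % 1.0  # each step shifts one bit of the mantissa away
    return x

def doubling_map_decimal(x0: str, n: int) -> Decimal:
    """Same iteration, but with 120 decimal digits of working precision."""
    x = Decimal(x0)
    for _ in range(n):
        x = (2 * x) % 1
    return x

lost = doubling_map_float(0.1, 60)       # machine precision: collapses to 0.0
kept = doubling_map_decimal("0.1", 60)   # arbitrary precision: exact value 0.6
```

After 60 steps the double-precision trajectory has exhausted its 53-bit mantissa and collapsed to exactly 0.0 — the analogue of the gray photon — while the high-precision trajectory still carries the true value.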
{"url":"http://members.wolfram.com/jeffb/visualization/photons.shtml","timestamp":"2024-11-01T22:48:42Z","content_type":"text/html","content_length":"6771","record_id":"<urn:uuid:c98a4ec1-1e2c-41f6-93cf-d50be31584a9>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00098.warc.gz"}
Fundamental types
From cppreference.com
(See also type for type system overview and the list of type-related utilities that are provided by the C++ library)

The following types are collectively called fundamental types:
• (possibly cv-qualified) void
• (possibly cv-qualified) std::nullptr_t (since C++11)
• integral types
• floating-point types

void — type with an empty set of values. It is an incomplete type that cannot be completed (consequently, objects of type void are disallowed). There are no arrays of void, nor references to void. However, pointers to void and functions returning type void (procedures in other languages) are permitted.

typedef decltype(nullptr) nullptr_t; (since C++11)

std::nullptr_t is the type of the null pointer literal, nullptr. It is a distinct type that is not itself a pointer type or a pointer to member type. Its prvalues are null pointer values. sizeof(std::nullptr_t) is equal to sizeof(void*).

Integral types
Standard integer types
int — basic integer type. The keyword int may be omitted if any of the modifiers listed below are used. If no length modifiers are present, it's guaranteed to have a width of at least 16 bits. However, on 32/64 bit systems it is almost exclusively guaranteed to have width of at least 32 bits (see below).

Modifiers
Modifies the basic integer type. Can be mixed in any order. Only one of each group can be present in the type name.
signed — target type will have signed representation (this is the default if omitted)
unsigned — target type will have unsigned representation
short — target type will be optimized for space and will have width of at least 16 bits.
long — target type will have width of at least 32 bits.
long long — target type will have width of at least 64 bits. (since C++11)
Note: as with all type specifiers, any order is permitted: unsigned long long int and long int unsigned long name the same type.
Properties
The following table summarizes all available standard integer types and their properties (width in bits) in various common data models:

Type specifier (signed and unsigned forms)      C++ standard   LP32   ILP32   LLP64   LP64
signed char / unsigned char                     at least 8     8      8       8       8
short int / unsigned short int                  at least 16    16     16      16      16
int / unsigned int                              at least 16    16     32      32      32
long int / unsigned long int                    at least 32    32     32      32      64
long long int / unsigned long long int (C++11)  at least 64    64     64      64      64

Note: integer arithmetic is defined differently for the signed and unsigned integer types. See arithmetic operators, in particular integer overflows.

std::size_t is the unsigned integer type of the result of the sizeof operator as well as the sizeof... operator and the alignof operator (since C++11).

Extended integer types (since C++11)
The extended integer types are implementation-defined. Note that fixed width integer types are typically aliases of the standard integer types.

Boolean type
bool — integer type, capable of holding one of the two values: true or false. The value of sizeof(bool) is implementation defined and might differ from 1.

Character types
Character types are integer types used for a character representation.
signed char — type for signed character representation.
unsigned char — type for unsigned character representation. Also used to inspect object representations (raw memory).
char — type for character representation which can be most efficiently processed on the target system (has the same representation and alignment as either signed char or unsigned char, but is always a distinct type). Multibyte character strings use this type to represent code units.
For every value of type unsigned char in range [0, 255], converting the value to char and then back to unsigned char produces the original value. (since C++11)

The signedness of char depends on the compiler and the target platform: the defaults for ARM and PowerPC are typically unsigned, the defaults for x86 and x64 are typically signed.

wchar_t — type for wide character representation (see wide strings). It has the same size, signedness, and alignment as one of the integer types, but is a distinct type. In practice, it is 32 bits and holds UTF-32 on Linux and many other non-Windows systems, but 16 bits and holds UTF-16 code units on Windows. The standard used to require wchar_t to be large enough to represent any supported character code point. However, such requirement cannot be fulfilled on Windows, and thus it is considered as a defect and removed.

char16_t — type for UTF-16 character representation, required to be large enough to represent any UTF-16 code unit (16 bits). It has the same size, signedness, and alignment as std::uint_least16_t, but is a distinct type. (since C++11)

char32_t — type for UTF-32 character representation, required to be large enough to represent any UTF-32 code unit (32 bits). It has the same size, signedness, and alignment as std::uint_least32_t, but is a distinct type. (since C++11)

char8_t — type for UTF-8 character representation, required to be large enough to represent any UTF-8 code unit (8 bits). It has the same size, signedness, and alignment as unsigned char (and therefore the same size and alignment as char and signed char), but is a distinct type. (since C++20)

Besides the minimal bit counts, the C++ Standard guarantees that 1 == sizeof(char) ≤ sizeof(short) ≤ sizeof(int) ≤ sizeof(long) ≤ sizeof(long long). Note: this allows the extreme case in which bytes are sized 64 bits, all types (including char) are 64 bits wide, and sizeof returns 1 for every type.
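The ordering guarantee above, together with the data-model widths tabulated earlier on the page, can be checked mechanically. The following sketch is illustrative and not part of the reference; the `data_model` helper is an invented name, and it assumes 8-bit bytes when mapping observed sizes to data-model names:

```cpp
#include <cassert>
#include <cstddef>

// The standard only guarantees the ordering below; exact widths come from
// the platform's data model (LP32, ILP32, LLP64, LP64, ...).
static_assert(sizeof(char) == 1, "sizeof(char) is 1 by definition");
static_assert(sizeof(char) <= sizeof(short), "ordering guarantee");
static_assert(sizeof(short) <= sizeof(int), "ordering guarantee");
static_assert(sizeof(int) <= sizeof(long), "ordering guarantee");
static_assert(sizeof(long) <= sizeof(long long), "ordering guarantee");

// Rough data-model detection from observed sizes (assumes CHAR_BIT == 8).
inline const char* data_model() {
    if (sizeof(int) == 2 && sizeof(long) == 4) return "LP32";
    if (sizeof(int) == 4 && sizeof(long) == 4 && sizeof(void*) == 4) return "ILP32";
    if (sizeof(int) == 4 && sizeof(long) == 4 && sizeof(void*) == 8) return "LLP64";
    if (sizeof(int) == 4 && sizeof(long) == 8 && sizeof(void*) == 8) return "LP64";
    return "other";
}
```

The static_asserts fail at compile time on any platform that violates the guarantee, which by the quoted wording none can.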
Floating-point types
Standard floating-point types
The following three types and their cv-qualified versions are collectively called standard floating-point types.
float — single precision floating-point type. Usually IEEE-754 binary32 format.
double — double precision floating-point type. Usually IEEE-754 binary64 format.
long double — extended precision floating-point type. Does not necessarily map to types mandated by IEEE-754.
□ IEEE-754 binary128 format is used by some HP-UX, SPARC, MIPS, ARM64, and z/OS implementations.
□ The most well known IEEE-754 binary64-extended format is the x87 80-bit extended precision format. It is used by many x86 and x86-64 implementations (a notable exception is MSVC, which implements long double in the same format as double, i.e. binary64).
□ On PowerPC double-double can be used.

Extended floating-point types (since C++23)
The extended floating-point types are implementation-defined. They may include fixed width floating-point types.

Properties
Floating-point types may support special values:
• infinity (positive and negative), see INFINITY
• the negative zero, -0.0. It compares equal to the positive zero, but is meaningful in some arithmetic operations (e.g. 1.0 / 0.0 == INFINITY, but 1.0 / -0.0 == -INFINITY) and for some mathematical functions, e.g. sqrt(std::complex)
• not-a-number (NaN), which does not compare equal with anything (including itself). Multiple bit patterns represent NaNs, see std::nan, NAN. Note that C++ takes no special notice of signalling NaNs other than detecting their support by std::numeric_limits::has_signaling_NaN, and treats all NaNs as quiet.

Floating-point numbers may be used with arithmetic operators +, -, /, and * as well as various mathematical functions from <cmath>. Both built-in operators and library functions may raise floating-point exceptions and set errno as described in math errhandling.
Floating-point expressions may have greater range and precision than indicated by their types, see FLT_EVAL_METHOD. Floating-point expressions may also be contracted, that is, calculated as if all intermediate values have infinite range and precision, see #pragma STDC FP_CONTRACT. Standard C++ does not restrict the accuracy of floating-point operations. Some operations on floating-point numbers are affected by and modify the state of the floating-point environment (most notably, the rounding direction). Implicit conversions are defined between floating types and integer types. See limits of floating-point types and std::numeric_limits for additional details, limits, and properties of the floating-point types.

Range of values
The following table provides a reference for the limits of common numeric representations. Prior to C++20, the C++ Standard allowed any signed integer representation, and the minimum guaranteed range of N-bit signed integers was from -(2^(N-1) - 1) to +2^(N-1) - 1 (e.g. −127 to 127 for a signed 8-bit type), which corresponds to the limits of ones' complement or sign-and-magnitude. However, all C++ compilers use two's complement representation, and as of C++20, it is the only representation allowed by the standard, with the guaranteed range from -2^(N-1) to +2^(N-1) - 1 (e.g. −128 to 127 for a signed 8-bit type). 8-bit ones' complement and sign-and-magnitude representations for char have been disallowed since C++11 (via the resolution of CWG issue 1759), because a UTF-8 code unit of value 0x80 used in a UTF-8 string literal must be storable in a char type object.

The range for a floating-point type T is defined as follows:
• The minimum guaranteed range is the most negative finite floating-point number representable in T through the most positive finite floating-point number representable in T.
• If negative infinity is representable in T, the range of T is extended to all negative real numbers.
• If positive infinity is representable in T, the range of T is extended to all positive real numbers.

Since negative and positive infinity are representable in ISO/IEC/IEEE 60559 formats, all real numbers lie within the range of representable values of a floating-point type adhering to ISO/IEC/IEEE 60559.

Type       Size in bits   Format     Value range (approximate / exact)
character  8              signed     −128 to 127
character  8              unsigned   0 to 255
character  16             UTF-16     0 to 65535
character  32             UTF-32     0 to 1114111 (0x10ffff)
integer    16             signed     ± 3.27 · 10^4 / −32768 to 32767
integer    16             unsigned   0 to 6.55 · 10^4 / 0 to 65535
integer    32             signed     ± 2.14 · 10^9 / −2,147,483,648 to 2,147,483,647
integer    32             unsigned   0 to 4.29 · 10^9 / 0 to 4,294,967,295
integer    64             signed     ± 9.22 · 10^18 / −9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
integer    64             unsigned   0 to 1.84 · 10^19 / 0 to 18,446,744,073,709,551,615

Binary floating-point:
32 bits, IEEE-754: min subnormal ± 1.401,298,4 · 10^−45 (±0x1p−149); min normal ± 1.175,494,3 · 10^−38 (±0x1p−126); max ± 3.402,823,4 · 10^38 (±0x1.fffffep+127)
64 bits, IEEE-754: min subnormal ± 4.940,656,458,412 · 10^−324 (±0x1p−1074); min normal ± 2.225,073,858,507,201,4 · 10^−308 (±0x1p−1022); max ± 1.797,693,134,862,315,7 · 10^308 (±0x1.fffffffffffffp+1023)
80 bits, x86 [note 1]: min subnormal ± 3.645,199,531,882,474,602,528 · 10^−4951 (±0x1p−16445); min normal ± 3.362,103,143,112,093,506,263 · 10^−4932 (±0x1p−16382); max ± 1.189,731,495,357,231,765,021 · 10^4932 (±0x1.fffffffffffffffep+16383)
128 bits, IEEE-754: min subnormal ± 6.475,175,119,438,025,110,924,438,958,227,646,552,5 · 10^−4966 (±0x1p−16494); min normal ± 3.362,103,143,112,093,506,262,677,817,321,752,602,6 · 10^−4932 (±0x1p−16382); max ± 1.189,731,495,357,231,765,085,759,326,628,007,016,2 · 10^4932 (±0x1.ffffffffffffffffffffffffffffp+16383)

[note 1] The object representation usually occupies 96/128 bits on 32/64-bit platforms respectively.
Note: actual (as opposed to guaranteed minimal) limits on the values representable by these types are available in C numeric limits interface and std::numeric_limits.

Data models
The choices made by each implementation about the sizes of the fundamental types are collectively known as data model. Four data models found wide acceptance:
32 bit systems:
□ LP32 or 2/4/4 (int is 16-bit, long and pointer are 32-bit)
□ ILP32 or 4/4/4 (int, long, and pointer are 32-bit):
  ☆ Win32 API
  ☆ Unix and Unix-like systems (Linux, macOS)
64 bit systems:
□ LLP64 or 4/4/8 (int and long are 32-bit, pointer is 64-bit)
□ LP64 or 4/8/8 (int is 32-bit, long and pointer are 64-bit):
  ☆ Unix and Unix-like systems (Linux, macOS)
Other models are very rare. For example, ILP64 (8/8/8: int, long, and pointer are 64-bit) only appeared in some early 64-bit Unix systems (e.g. UNICOS on Cray).

Keywords
void, bool, true, false, char, char8_t, char16_t, char32_t, wchar_t, int, short, long, signed, unsigned, float, double

Defect reports
The following behavior-changing defect reports were applied retroactively to previously published C++ standards.
DR        Applied to   Behavior as published                                                             Correct behavior
CWG 238   C++98        the constraints placed on a floating-point implementation were unspecified        specified as no constraint
CWG 1759  C++11        char was not guaranteed to be able to represent UTF-8 code unit 0x80              guaranteed
CWG 2689  C++11        cv-qualified std::nullptr_t was not a fundamental type                            it is
CWG 2723  C++98        the ranges of representable values for floating-point types were not specified    specified
P2460R2   C++98        wchar_t was required to be able to represent distinct codes for all members of    not required
                       the largest extended character set specified among the supported locales

References
• C++23 standard (ISO/IEC 14882:2024): 6.8.2 Fundamental types [basic.fundamental]
• C++20 standard (ISO/IEC 14882:2020): 6.8.1 Fundamental types [basic.fundamental]
• C++17 standard (ISO/IEC 14882:2017): 6.9.1 Fundamental types [basic.fundamental]
• C++14 standard (ISO/IEC 14882:2014): 3.9.1 Fundamental types [basic.fundamental]
• C++11 standard (ISO/IEC 14882:2011): 3.9.1 Fundamental types [basic.fundamental]
• C++03 standard (ISO/IEC 14882:2003): 3.9.1 Fundamental types [basic.fundamental]
• C++98 standard (ISO/IEC 14882:1998): 3.9.1 Fundamental types [basic.fundamental]

See also
{"url":"https://en.cppreference.com/w/cpp/language/types","timestamp":"2024-11-08T18:56:59Z","content_type":"text/html","content_length":"102357","record_id":"<urn:uuid:a3d60404-c767-4494-b57a-0b3c8ff98275>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00017.warc.gz"}
Expected Return
Meaning of Expected Return
The expected return of an investment is the return an investor expects to get from an investment or a portfolio of investments. This assumed return is based on the concept of probability and hence is not a certainty or an assured outcome. Returns from an investment under different scenarios are worked out. Then they are multiplied by the probability of occurrence of that scenario. The sum of all these possible returns gives the expected return of the investment or the portfolio.

A point worth mentioning is the fact that the expected returns from an investment are calculated based on prior or historical data. Future returns can vary significantly from historic returns due to the uncertainty of the future. Therefore, there is no guarantee of the expected returns calculated; it is just a measure of the positivity of the returns from a particular investment — based on previous similar instances, how much should one expect to get back on the investment. An investor can compare the expected return from different available options and choose the best alternative.

Calculation of Expected Return
Let us start with a simple example of a single investment. There is a 50% probability of the investment giving returns of 10%, a 60% probability of the investment giving an 8% return, and a 20% probability of the investment giving a loss of 5%. Thus, the expected return from the investment will be:
= 5 + 4.8 − 1
= 8.8% returns

Now let us see an example of a portfolio with investments in three different companies. An individual has invested US$ 5,000 in company A, US$ 10,000 in company B, and US$ 35,000 in company C. His expected returns are 20%, 30%, and 18%, respectively. First, we will calculate the weight of each investment in the total portfolio of US$ 50,000. The weights for the Company A, B, and C investments are 10%, 20%, and 70%, respectively.
Now the expected return from these three different investments will be calculated as:
(0.1 × 20) + (0.2 × 30) + (0.7 × 18)
Thus the total portfolio of US$ 50,000 should fetch a return of 20.6%. The simple average of the three expected returns is (20 + 30 + 18)/3 = 22.67%. But our expected return from the portfolio is a little lower, at 20.6%, because a major portion of the investment (70%) is in the company with the lowest expected return of 18%.

Importance of Expected Return
The concept of expected returns is fundamental from an investor's point of view.

Proper Alignment of Investments
It helps an investor to properly align his investments according to his risk-taking ability and investment goals. Also, it is a well-established principle that investments with higher expected returns come with higher risk, and vice-versa.

Portfolio Management
The concept is particularly important in the case of multiple investments or portfolio investments. Expected returns are a guiding factor in deciding how much to invest in which security or option. An investor can rank the various assets according to their expected returns and thus decide on their weights in the portfolio.

The concept has its limitations as well.

Ignorance of the Risk Factor
Let us take an example of two separate portfolios, A and B, each consisting of five equal investments in different assets. Based on historical data, the returns from each of the investments have been as follows:
Portfolio A: 10%, 15%, 2%, −8%, 6%
Portfolio B: 3%, 7%, 5%, 2%, 8%
The expected return of portfolio A = (10 + 15 + 2 − 8 + 6)/5 = 5%
The expected return of portfolio B = (3 + 7 + 5 + 2 + 8)/5 = 5%
We see that the expected returns from both portfolios are the same. But if we calculate the (sample) standard deviation, portfolio A has a standard deviation of about 8.72, whereas portfolio B has a standard deviation of about 2.55.
Thus, we see that the concept of expected returns ignores the risk factor and deviations from the mean returns achieved over the years.

Based on Historical Data
The calculated expected returns are based on historical data of returns achieved from assets over the last few years. Hence, there is no guarantee that the investment will yield the same returns as in the past. Also, an investor may miss out on an excellent investment opportunity solely based on the past performance index. Factors like market conditions or the management might have changed, resulting in the company substantially outperforming its past. But based on the expected returns calculation, investors may not opt for that option, and may therefore lose the opportunity to earn handsome profits.
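The article's figures can be checked in a few lines. This is an illustrative recomputation, not part of the original article; it uses Python's `statistics` module, and the risk figure quoted as 2.55 corresponds to the sample standard deviation:

```python
from statistics import stdev

# Portfolio expected return: weights times expected returns (in %)
weights = [5000 / 50000, 10000 / 50000, 35000 / 50000]  # 10%, 20%, 70%
returns = [20, 30, 18]
expected = sum(w * r for w, r in zip(weights, returns))  # 20.6

# Same expected return, very different risk:
a = [10, 15, 2, -8, 6]
b = [3, 7, 5, 2, 8]
mean_a = sum(a) / len(a)  # 5.0
mean_b = sum(b) / len(b)  # 5.0
risk_a = stdev(a)         # sample standard deviation, about 8.72
risk_b = stdev(b)         # about 2.55
```

Both portfolios have the same 5% expected return, but A's dispersion around that mean is more than three times B's, which is exactly the risk information the expected-return figure alone hides.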
{"url":"https://efinancemanagement.com/financial-analysis/expected-return","timestamp":"2024-11-03T10:01:31Z","content_type":"text/html","content_length":"245901","record_id":"<urn:uuid:01c19bd0-80e0-4edb-8d68-e56194d79938>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00846.warc.gz"}
Repeated Substrings (Problem G)

Given an input string composed solely of lowercase English letters, find the longest substring that occurs more than once in the input string. The two occurrences are allowed to partially overlap.

Input
The input is a single line containing a string of lowercase letters. The string contains more than one character, but no more than $10^5$. At least one letter will appear at least twice.

Output
Print a single line of output: the longest substring that occurs more than once in the input string. If there are multiple longest repeated substrings, print the one that would come first when the longest substrings are sorted in lexicographical (alphabetical) order.

Sample Input 1: abcefgabc
Sample Output 1: abc

Sample Input 2: abcbabcba
Sample Output 2: abcba

Sample Input 3: aaaa
Sample Output 3: aaa

Sample Input 4: bbcaadbbeaa
Sample Output 4: aa
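One way to attack this (a sketch, not the only approach): if some substring of length L occurs twice, so does its length-(L-1) prefix, so feasibility is monotone in L and we can binary-search the answer length, keeping the lexicographically smallest candidate at each feasible length. The naive hash-set check below is roughly O(n²) in the worst case, so at the 10^5 limit a suffix array or suffix automaton would be the safer tool; the logic is the same:

```python
def longest_repeated_substring(s: str) -> str:
    # Smallest substring of length L that occurs at least twice, else None.
    def check(L):
        seen = set()
        best = None
        for i in range(len(s) - L + 1):
            sub = s[i:i + L]
            if sub in seen and (best is None or sub < best):
                best = sub
            seen.add(sub)
        return best

    lo, hi, ans = 1, len(s) - 1, ""
    while lo <= hi:              # binary search on the answer length
        mid = (lo + hi) // 2
        cand = check(mid)
        if cand is not None:     # feasible: try longer
            ans, lo = cand, mid + 1
        else:                    # infeasible: try shorter
            hi = mid - 1
    return ans

print(longest_repeated_substring("abcbabcba"))  # abcba
```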
Plot in R

The most basic graphics function in R is the plot function. This function has multiple arguments to configure the final plot: add a title, change axes labels, customize colors, or change line types, among others. In this tutorial you will learn how to plot in R and how to fully customize the resulting plot.

Plot function in R

The R plot function allows you to create a plot passing two vectors (of the same length), a dataframe, matrix or even other objects, depending on their class or the input type. We are going to simulate two random normal variables called x and y and use them in almost all the plot examples.

# Generate sample data
x <- rnorm(500)
y <- x + rnorm(500)

You can create a plot of the previous data by typing:

# Plot the data
plot(x, y)

# Equivalent
M <- cbind(x, y)
plot(M)

With the plot function you can create a wide range of graphs, depending on the inputs. The following table summarizes all the available possibilities for the base R plotting function.

Function and arguments        Output
plot(x, y)                    Scatterplot of x and y numeric vectors
plot(factor)                  Barplot of the factor
plot(factor, y)               Boxplot of the numeric vector and the levels of the factor
plot(time_series)             Time series plot
plot(data_frame)              Correlation plot of all dataframe columns (more than two columns)
plot(date, y)                 Plots a date-based vector
plot(function, lower, upper)  Plot of the function between the lower and upper value specified

If you execute the following code you will obtain the different plot examples.
# Examples
par(mfrow = c(2, 3))

# Data
my_ts <- ts(matrix(rnorm(500), nrow = 500, ncol = 1),
            start = c(1950, 1), frequency = 12)
my_dates <- seq(as.Date("2005/1/1"), by = "month", length = 50)
my_factor <- factor(mtcars$cyl)
fun <- function(x) x^2

# Scatterplot
plot(x, y, main = "Scatterplot")

# Barplot
plot(my_factor, main = "Barplot")

# Boxplot
plot(my_factor, rnorm(32), main = "Boxplot")

# Time series plot
plot(my_ts, main = "Time series")

# Time-based plot
plot(my_dates, rnorm(50), main = "Time based plot")

# Plot an R function
plot(fun, 0, 10, main = "Plot a function")

# Correlation plot
plot(trees[, 1:3], main = "Correlation plot")

par(mfrow = c(1, 1))

When you create several plots in R GUI (not in RStudio), the next plot will override the previous one. However, you can create new plot windows with the windows, X11 and quartz functions, depending on your operating system, to solve this issue.

R window

When creating plots in base R they will be opened in a new window. However, you may need to customize the height and width of the window, which defaults to 7 inches (17.78 cm). For that purpose, you can use the height and width arguments of the following functions, depending on your system. It should be noted that in RStudio the graph will be displayed in the pane layout, but if you use the corresponding function the graph will open in a new window, just like in base R.

windows() # Windows
X11()     # Unix
quartz()  # Mac

In addition to opening the window and setting its size, these functions are used to avoid overriding the plots you create, as when creating a new plot you would lose the previous one. Note that in RStudio you can navigate through all the plots you created in your session in the plots pane.
# First plot will open a new window
plot(x, y)

# New window
windows()  # X11() on Unix, quartz() on Mac

# Other plot in new window
plot(x, x)

You can also clear the plot window in R programmatically: with the dev.off function, to clear the current window, and with graphics.off, to clear all the plots and restore the default graphic parameters.

# Clear the current plot
dev.off()

# Clear all the plots
while (dev.cur() > 1) dev.off()

# Equivalent
graphics.off()

Note that the dev.cur function counts the number of currently available graphics devices.

R plot type

You can also customize the plot type with the type argument. The selection of the type will depend on the data you are plotting. The following code block shows the most popular plot types in R.

j <- 1:20
k <- j

par(mfrow = c(1, 3))
plot(j, k, type = "l", main = "type = 'l'")
plot(j, k, type = "s", main = "type = 's'")
plot(j, k, type = "p", main = "type = 'p'")
par(mfrow = c(1, 1))

par(mfrow = c(1, 3))
plot(j, k, type = "o", main = "type = 'o'")
plot(j, k, type = "b", main = "type = 'b'")
plot(j, k, type = "h", main = "type = 'h'")
par(mfrow = c(1, 1))

Plot type  Description
p          Points plot (default)
l          Line plot
b          Both (points and lines)
o          Both (overplotted)
s          Stairs plot
h          Histogram-like plot
n          No plotting

R plot pch

The pch argument allows you to modify the symbol of the points in the plot. The main symbols can be selected passing numbers 1 to 25 as parameters. You can also change the symbol size with the cex argument and the line width of the symbols (except 15 to 18) with the lwd argument.

r <- c(sapply(seq(5, 25, 5), function(i) rep(i, 5)))
t <- rep(seq(25, 5, -5), 5)

plot(r, t, pch = 1:25, cex = 3, yaxt = "n", xaxt = "n",
     ann = FALSE, xlim = c(3, 27), lwd = 1:3)
text(r - 1.5, t, 1:25)

Note that symbols 21 to 25 also allow you to set the border width and the background color, with the lwd and bg arguments respectively.
plot(r, t, pch = 21:25, cex = 3, yaxt = "n", xaxt = "n",
     lwd = 3, ann = FALSE, xlim = c(3, 27),
     bg = 1:25, col = rainbow(25))

The following block of code shows a simple example of how to customize one of these symbols.

# Example
plot(x, y,
     pch = 21,
     bg = "red",   # Fill color
     col = "blue", # Border color
     cex = 3,      # Symbol size
     lwd = 3)      # Border width

It is worth mentioning that you can use any character as a symbol. In fact, some character symbols can be selected using numbers 33 to 240 as the parameter of the pch argument.

# Custom symbols
plot(1:5, 1:5, pch = c("☺", "❤", "✌", "❄", "✈"),
     col = c("orange", 2:5), cex = 3,
     xlim = c(0, 6), ylim = c(0, 6))

R plot title

The title can be added to a plot with the main argument or the title function.

plot(x, y, main = "My title")

# Equivalent
plot(x, y)
title("My title")

The main difference between using the title function and the argument is that the arguments you pass to the function only affect the title. In order to change the plot title position you can set the adj argument, with a value between 0 (left) and 1 (right), and the line argument, where values greater than 1.7 (the default) move the title up and values lower than 1.7 move it down. Negative values of line will make the title go inside the plot. It should be noted that if you set these arguments in the plot function, the changes will be applied to all texts.

plot(x, y)
title("My title",
      adj = 0.75,  # Title to the right
      line = 0.25)

LaTeX in plot title

It is very common for data scientists to need to display mathematical expressions in the titles of their plots. For that purpose, you can use the expression function. You can look up all the available options for LaTeX-like mathematical notation by calling ?plotmath.

plot(x, y, main = expression(alpha[1] ^ 2 + frac(beta, 3)))

Nevertheless, the syntax of the function is quite different from LaTeX syntax. If you prefer, you can use the TeX function of the latex2exp package.
However, note that this function translates TeX notation to expression function notation, so the symbols and notation available are the same in both functions.

# install.packages("latex2exp")
library(latex2exp)

plot(x, y, main = TeX('$\\beta^3, \\beta \\in 1 \\ldots 10$'))

LaTeX expressions can also be used in the subtitle, the axis labels, or any other text added to the plot.

Subtitle in R plot

Furthermore, you can add a subtitle to a plot in R with the sub argument; it will be displayed under the plot. It is possible to add a subtitle even if you don't specify a title.

plot(x, y, main = "My title", sub = "My subtitle")

# Equivalent
plot(x, y)
title(main = "My title", sub = "My subtitle")

Axis in R

In R plots you can modify the Y and X axis labels, add and change the axis tick labels, change the axis size and even set axis limits.

R plot x and y labels

By default, R will use the names of the vectors you plot as X and Y axis labels. However, you can change them with the xlab and ylab arguments.

plot(x, y, xlab = "My X label", ylab = "My Y label")

If you want to delete the axis labels you can set them to a blank string or set the ann argument to FALSE.

# Delete labels
plot(x, y, xlab = "", ylab = "")

# Equivalent
plot(x, y, xlab = "My X label", ylab = "My Y label", ann = FALSE)

R axis function

The axes argument of the plot function can be set to FALSE in order to avoid displaying the axes, so, if you want, you can add only one of them with the axis function and customize it. Passing 1 as the first argument will plot the X-axis, 2 the Y-axis, 3 the top axis and 4 the right axis.

plot(x, y, axes = FALSE)

# Add X-axis
axis(1)

# Add Y-axis
axis(2)

Change axis tick-marks

It is also possible to change the tick-marks of the axes. On the one hand, the at argument of the axis function allows you to indicate the points at which the labels will be drawn.
plot(x, y, axes = FALSE)
axis(1, at = -2:2)

On the other hand, the minor.tick function of the Hmisc package allows you to create smaller tick-marks between the main ticks.

# install.packages("Hmisc")
library(Hmisc)

plot(x, y)
minor.tick(nx = 3, ny = 3, tick.ratio = 0.5)

Finally, you can create interior ticks by specifying a positive number in the tck argument, as follows:

# Interior ticks
plot(x, y, tck = 0.02)

Remove axis tick labels

Setting the xaxt or yaxt argument of the plot function to "n" will avoid plotting the X or Y axis tick labels, respectively.

par(mfrow = c(1, 3))

# Remove X axis tick labels
plot(x, y, xaxt = "n", main = "xaxt = 'n'")

# Remove Y axis tick labels
plot(x, y, yaxt = "n", main = "yaxt = 'n'")

# Remove both axis tick labels
plot(x, y, yaxt = "n", xaxt = "n", main = "xaxt = 'n', yaxt = 'n'")

par(mfrow = c(1, 1))

Change axis tick labels

The axis tick labels will be numbered to follow the numeration of your data. Nevertheless, you can modify the tick labels, if needed, with the labels argument of the axis function. You will also have to specify where the tick labels will be displayed with the at argument.

par(mfrow = c(1, 2))

# Change X axis tick labels
plot(x, y, xaxt = "n")
axis(1, at = seq(round(min(x)), round(max(x)), by = 1), labels = 1:8)

# Change Y axis tick labels
plot(x, y, yaxt = "n")
axis(2, at = seq(round(min(y)), round(max(y)), by = 1), labels = 1:9)

Rotate axis labels

The las argument of the plot function in R allows you to rotate the axis labels of your plots. The following code block explains the different alternatives.

par(mfrow = c(2, 2))
plot(x, y, las = 0, main = "Parallel")      # Parallel to axis (default)
plot(x, y, las = 1, main = "Horizontal")    # Horizontal
plot(x, y, las = 2, main = "Perpendicular") # Perpendicular to axis
plot(x, y, las = 3, main = "Vertical")      # Vertical
par(mfrow = c(1, 1))

Set axis limits

You can zoom in or zoom out of the plot by changing the axis limits.
The xlim and ylim arguments set these limits and are very useful to avoid cropping lines when you add them to your plot.

plot(x, y,
     ylim = c(-8, 8), # Y-axis limits from -8 to 8
     xlim = c(-5, 5)) # X-axis limits from -5 to 5

Change axis scale in R

The log argument allows changing the scale of the axes of a plot. You can transform the X-axis, the Y-axis or both, as follows:

# New data to avoid negative numbers
s <- 1:25
u <- 1:25

par(mfrow = c(2, 2))

# Default
plot(s, u, pch = 19, main = "Untransformed")

# Log scale. X-axis
plot(s, u, pch = 19, log = "x", main = "X-axis transformed")

# Log scale. Y-axis
plot(s, u, pch = 19, log = "y", main = "Y-axis transformed")

# Log scale. X and Y axis
plot(s, u, pch = 19, log = "xy", main = "Both transformed")

log   Transformation
"x"   X-axis transformed
"y"   Y-axis transformed
"xy"  Both axes transformed

However, you may be thinking that using the log function is equivalent, but it is not. As you can see in the previous plot, the log argument doesn't modify the data, while the log function transforms it. Look at the difference between the axes of the following graph and those of the previous one.

par(mfrow = c(1, 3))

# Log-log
plot(log(s), log(u), pch = 19, main = "log-log")

# log(x)
plot(log(s), u, pch = 19, main = "log(x)")

# log(y)
plot(s, log(u), pch = 19, main = "log(y)")

par(mfrow = c(1, 1))

R plot font

Font size

You can also change the font size in an R plot with the cex.main, cex.sub, cex.lab and cex.axis arguments, to change the size of the title, subtitle, X and Y axis labels and axis tick labels, respectively. Note that greater values will display larger texts.
plot(x, y,
     main = "My title",
     sub = "Subtitle",
     cex.main = 2,   # Title size
     cex.sub = 1.5,  # Subtitle size
     cex.lab = 3,    # X-axis and Y-axis labels size
     cex.axis = 0.5) # Axis tick labels size

Argument  Description
cex.main  Sets the size of the title
cex.sub   Sets the size of the subtitle
cex.lab   Sets the X and Y axis labels size
cex.axis  Sets the tick axis labels size

Font style

Furthermore, you can change the font style of R plots with the font argument. You can set this argument to 1 for plain text (the default), 2 for bold, 3 for italic and 4 for bold italic text. This argument won't modify the title style.

par(mfrow = c(1, 3))
plot(x, y, font = 2, main = "Bold")         # Bold
plot(x, y, font = 3, main = "Italics")      # Italics
plot(x, y, font = 4, main = "Bold italics") # Bold italics
par(mfrow = c(1, 1))

You can also specify the style of each of the texts of the plot with the font.main, font.sub, font.axis and font.lab arguments.

plot(x, y,
     main = "My title",
     sub = "Subtitle",
     font.main = 1, # Title font style
     font.sub = 2,  # Subtitle font style
     font.axis = 3, # Axis tick labels font style
     font.lab = 4)  # Font style of X and Y axis labels

Note that, by default, the title of a plot is in bold.

Font style  Description
1           Plain text
2           Bold
3           Italic
4           Bold italic

Font family

The family argument allows you to change the font family of the texts of the plot. You can even add more text with other font families. Note that you can see the full list of available fonts in R with the names(pdfFonts()) command, but some of them may not be installed on your computer.

# All available fonts
names(pdfFonts())

plot(x, y, family = "mono")
text(-2, 3, "Some text", family = "sans")
text(-2, 2, "More text", family = "serif")
text(1, -4, "Other text", family = "HersheySymbol")

An alternative is to use the extrafont package.
# install.packages("extrafont")
library(extrafont)

# Auto detect the available fonts in your computer
# This can take several minutes to run
font_import()

# Font family names
fonts()

# Data frame containing the font family names
fonttable()

R plot color

In the section about pch symbols we explained how to set the col argument, which allows you to modify the color of the plot symbols. In R there is a wide variety of color palettes. With the colors function you can return all the available base R colors. Furthermore, you can use the grep function (a regular expression function) to return a vector of colors containing some string.

# Return all colors
colors()

# Return all colors that contain the word 'green'
cl <- colors()
cl[grep("green", cl)]

# Plot with blue dots
plot(x, y, col = "blue")
plot(x, y, col = 4)         # Equivalent
plot(x, y, col = "#0000FF") # Equivalent

You can specify colors by name ("red", "green", ...), by number (1 to 8) or even by HEX code ("#FF0000", "#0000FF", ...). You can also modify the text colors with the col.main, col.sub, col.lab and col.axis arguments, and even change the box color with the fg argument.

plot(x, y, main = "Title", sub = "Subtitle", pch = 16,
     col = "red",          # Symbol color
     col.main = "green",   # Title color
     col.sub = "blue",     # Subtitle color
     col.lab = "sienna2",  # X and Y-axis labels color
     col.axis = "maroon4", # Tick labels color
     fg = "orange")        # Box color

Plot color points by group

If you have numerical variables labelled by group, you can plot the data points separated by color, passing the categorical variable (as a factor) to the col argument.
The colors will depend on the order of the factor levels.

# Create dataframe with groups
group <- ifelse(x < 0, "car", ifelse(x > 1, "plane", "boat"))
df <- data.frame(x = x, y = y, group = factor(group))

# Color by group
plot(df$x, df$y, col = df$group, pch = 16)

# Change group colors
colors <- c("red", "green", "blue")
plot(df$x, df$y, col = colors[df$group], pch = 16)

# Change color order, changing the levels order
plot(df$x, df$y,
     col = colors[factor(group, levels = c("car", "boat", "plane"))],
     pch = 16)

Note that, by default, factor levels are ordered alphabetically, so in this case the order of the colors vector is not the order of the colors in the plot, as the first row of the dataframe corresponds to "car", which is the second level. Hence, if you change the levels order, you can modify the colors order. Since R 4.0.0 the stringsAsFactors argument of the data.frame function is FALSE by default, so you will need to transform the categorical variable into a factor to color the observations by group, as in the previous example.

Background color

There are two ways to change the background color of R charts: changing the entire background color, or changing only the background color of the box. To change the full background color you can use the following:

# Light gray background color
par(bg = "#f7f7f7")

# Add the plot
plot(x, y, col = "blue", pch = 16)

# Back to the original color
par(bg = "white")

However, the result will be more beautiful if only the box is colored in a certain color, although this requires more code. Note that the plot.new function allows you to create an empty plot in R and that par(new = TRUE) allows you to add one graph over another.

# Create an empty plot
plot.new()

# Color the plot box
rect(par("usr")[1], par("usr")[3], par("usr")[2], par("usr")[4],
     col = "#f7f7f7")

# Add the plot over it
par(new = TRUE)
plot(x, y, col = "blue", pch = 16)

R plot line

You can add a line to a plot in R with the lines function.
Consider, for instance, that you want to add a red line to a plot, from (-4, -4) to (4, 4). You could write:

plot(x, y)
lines(-4:4, -4:4, lwd = 3, col = "red")

R plot line width

The line width in R can be changed with the lwd argument, where bigger values plot a wider line.

M <- matrix(1:36, ncol = 6)
matplot(M, type = c("l"), lty = 1, col = "black", lwd = 1:6)

# Just to indicate the line widths in the plot
j <- 0
invisible(sapply(seq(4, 40, by = 6), function(i) {
  j <<- j + 1
  text(2, i, paste("lwd =", j))
}))

Plot line type

When plotting a plot of type "l", "o", "b", "s", or when you add a new line over a plot, you can choose between different line types, setting the lty argument to a value from 0 to 6.

matplot(M, type = c("l"), lty = 1:6, col = "black", lwd = 3)

# Just to indicate the line types in the plot
j <- 0
invisible(sapply(seq(4, 40, by = 6), function(i) {
  j <<- j + 1
  text(2, i, paste("lty =", j))
}))

Type  Description
0     Blank
1     Solid line (default)
2     Dashed line
3     Dotted line
4     Dotdash line
5     Longdash line
6     Twodash line

Add text to plot in R

On the one hand, the mtext function in R allows you to add text to all sides of the plot box. There are 12 combinations (3 on each side of the box: left, center and right aligned). You just need to change the side and adj arguments to obtain the combination you need. On the other hand, the text function allows you to add text or formulas inside the plot at some position, setting the coordinates. The following code block shows some examples for both functions.
plot(x, y, main = "Main title", cex = 2, col = "blue")

# mtext function

# Bottom-center
mtext("Bottom text", side = 1)

# Left-center
mtext("Left text", side = 2)

# Top-center
mtext("Top text", side = 3)

# Right-center
mtext("Right text", side = 4)

# Bottom-left
mtext("Bottom-left text", side = 1, adj = 0)

# Top-right
mtext("Top-right text", side = 3, adj = 1)

# Top with separation
mtext("Top higher text", side = 3, line = 2.5)

# text function

# Add text at coordinates (-2, 2)
text(-2, 2, "More text")

# Add a formula at coordinates (3, -3)
text(3, -3, expression(frac(alpha[1], 4)))

Label points in R

In this section you will learn how to label data points in R. For that purpose, you can use the text function, indicating the coordinates and the label of the data points in the labels argument. With the pos argument you can set the position of the label with respect to the point: 1 under, 2 left, 3 above and 4 right.

# Use the FAMI and INTG columns of the USJudgeRatings dataset
attach(USJudgeRatings)

# Create the plot
plot(FAMI, INTG,
     main = "Familiarity with law vs Judicial integrity",
     xlab = "Familiarity", ylab = "Integrity",
     pch = 18, col = "blue")

# Plot the labels
text(FAMI, INTG, labels = row.names(USJudgeRatings),
     cex = 0.6, pos = 4, col = "red")

You can also label individual data points if you index the elements inside the text function as follows:

plot(FAMI, INTG,
     main = "Familiarity with law vs Judicial integrity",
     xlab = "Familiarity", ylab = "Integrity",
     pch = 18, col = "blue")

# Select the index of the elements to be labelled
selected <- c(10, 15, 20)

# Index the elements with the vector
text(FAMI[selected], INTG[selected],
     labels = row.names(USJudgeRatings)[selected],
     cex = 0.6, pos = 4, col = "red")

Change box type with bty argument

The bty argument allows changing the type of box of R graphs.
The several options are summarized in the following table:

Box type  Description
"o"       Entire box (default)
"7"       Top and right
"L"       Left and bottom
"U"       Left, bottom and right
"C"       Top, left and bottom
"n"       No box

The shape of the characters "7", "L" and "U" represents the borders of the box they draw.

par(mfrow = c(2, 3))
plot(x, y, bty = "o", main = "Default")
plot(x, y, bty = "7", main = "bty = '7'")
plot(x, y, bty = "L", main = "bty = 'L'")
plot(x, y, bty = "U", main = "bty = 'U'")
plot(x, y, bty = "C", main = "bty = 'C'")
plot(x, y, bty = "n", main = "bty = 'n'")
par(mfrow = c(1, 1))

Note that in other plots, like boxplots, you will need to specify the bty argument inside the par function.

R plot legend

Finally, we will review how to add a legend to an R plot with the legend function. You can set the coordinates where you want to add the legend, or specify "top", "bottom", "topleft", "topright", "bottomleft" or "bottomright". You can also specify lots of arguments, as in the plot function. As an example, you can change the bty of the R legend, or its background color with the bg argument, among others.

plot(x, y, pch = 19)
lines(-4:4, -4:4, lwd = 3, col = "red")
lines(-4:1, 0:5, lwd = 3, col = "green")

# Adding a legend
legend("bottomright", legend = c("red", "green"),
       lwd = 3, col = c("red", "green"))

Take a look at the R legends article to learn more about how to add legends to plots.
Calculator - HP 9100

The HP 9100A and 9100B are programmable, electronic calculators which perform operations commonly encountered in scientific and engineering problems. Their log, trig and mathematical functions are each performed with a single keystroke, providing fast, convenient solutions to intricate equations. Computer-like memory enables the calculator to store instructions and constants for repetitive or iterative solutions. The easily-readable cathode ray tube instantly displays entries, answers and intermediate results.

Direct keyboard operations include:

Addition, subtraction, multiplication, division and square-root.
log x, ln x and e^x.
sin x, cos x, tan x, sin^-1 x, cos^-1 x and tan^-1 x (x in degrees or radians).
sinh x, cosh x, tanh x, sinh^-1 x, cosh^-1 x and tanh^-1 x.
Coordinate transformation: polar-to-rectangular, rectangular-to-polar, cumulative addition and subtraction of vectors.
Other single-key operations include taking the absolute value of a number, extracting the integer part of a number, and entering the value of π.

Keys are also available for positioning and storage operations.

Times for total performance of functions, including worst-case decimal-point placement and carrying:

Add, subtract              2 milliseconds
Multiply, divide           35 milliseconds
Square-root                40 milliseconds
Sin, cos, tan              354 milliseconds
Coordinate transformation  332 milliseconds
ln x                       56 milliseconds
e^x                        141 milliseconds

Numerical Format

The operator can select either FIXED point (e.g. 1234.567890) or FLOATING point (scientific notation, e.g. 1.234567890 x 10^3) for display of entries and answers. The calculator's dynamic range is 10^-98 to 10^99 with up to 10 significant digits. In FIXED point mode, the operator selects the number of decimal places desired, between 0-9, on the decimal wheel.
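The coordinate-transformation keys above convert between polar and rectangular form; a minimal sketch of the same conversions in modern code (this illustrates the mathematics only, not the calculator's internal algorithms):

```python
import math

def polar_to_rect(r, theta_deg):
    """Polar (r, theta in degrees) -> rectangular (x, y)."""
    t = math.radians(theta_deg)
    return r * math.cos(t), r * math.sin(t)

def rect_to_polar(x, y):
    """Rectangular (x, y) -> polar (r, theta in degrees)."""
    return math.hypot(x, y), math.degrees(math.atan2(y, x))

# A 3-4-5 right triangle round-trips cleanly
r, theta = rect_to_polar(3, 4)      # r = 5, theta ~ 53.13 degrees
x, y = polar_to_rect(r, theta)      # back to ~ (3, 4)
```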
Whenever more digits are placed to the left of the decimal point than the decimal wheel will allow, the calculator automatically reverts to FLOATING point notation to allow completion of the calculation, with no loss of information.

HP 9100A/B Keyboard

The calculators are programmed either by use of the keyboard or by magnetic cards. The program mode allows entry of program instructions, via the keyboard, into program memory. Programming consists of pressing keys in the proper sequence. Any key on the keyboard is available as a program step. The program capacity of the 9100A is 196 steps and the capacity of the 9100B is 392 steps. No language or code conversions are required.

The self-contained magnetic card reader/recorder can record programs from program memory onto wallet-size magnetic cards. The reader/recorder can also read the magnetic cards back into program memory for repetitive use. Two programs of 196 steps each may be recorded on each reusable card. Cards may be cascaded for longer programs.

Program Instructions

Conditional Branching: IF statements make the comparisons (less-than, equal-to, greater-than) and can be programmed to branch to any of the program addresses.

Unconditional Branching: GO TO statements can be programmed to branch to any of the program addresses. (Also used for manual addressing and correction of individual program steps.)

A true subroutine capability permits instant access to subroutines from any point in a program. By using the SUB/RETURN instruction, subroutines may be nested up to 5 deep. (9100B only)

A flag provides conditional branching dependent on manual or programmed setting of the flag. The program may be halted for data entry or display, or paused for brief display of interim results in computation.

Step Program: The operator may step through a program for visual verification of instructions. A 'dual display' feature on the 9100B greatly simplifies program editing and modification. It allows the program step and the succeeding one to be displayed simultaneously.
Magnetic core memory includes:

3 display registers (keyboard, accumulator, temporary).
9100A: 16 storage registers with capacity for 196 program steps plus 2 constants. Total of 2208 bits of core memory.
9100B: 32 storage registers with capacity for 392 program steps plus 4 constants. Total of 3840 bits of core memory.

Each register accommodates a floating point number with 12 significant digits (including 2 undisplayed guard digits) and a 2-digit exponent. Alternatively, a register accommodates 14 program steps.

Read-only memory contains over 32,000 bits of fixed information for keyboard routines.

Operating range: 0-45°C.
Weight: Net 40 lbs. (18.1 kg); shipping 65 lbs. (29.5 kg).
Power: 115 or 230 V ±10% (slide switch), 50-60 Hz, 70 watts.
Dimensions: 8¼" high by 16" wide by 19" deep (210 mm x 406 mm x 483 mm).

Accessories furnished at no charge:

For 9100A: 09100-90001 Operating and Programming Manual. For 9100B: 09100-90021. Additional copies – $2.50 each.
For 9100A: 09100-90002 Program Library binder containing sample programs. For 9100B: 09100-90022. Additional copies – $30.00.
5060-5919 Box of 10 magnetic program cards. Additional box – $10.00.
For 9100A: 09100-90003 Pad of 100 program sheets. For 9100B: 09100-90023. Additional pads – $2.50 each.
For 9100A: 09100-90004 Magnetic card with pre-recorded diagnostic program. For 9100B: 09100-90024. Additional card – $2.50.
For 9100A: 9320-1157 Pull-out instruction card mounted in calculator. For 9100B: 9320-1183. Additional copies – $5.00.
4040-0350 Plastic dust cover. Additional cover – $2.50.

Additional accessories available:

5000-5884 Single magnetic card, $2.00.
09100-90000 Box of 5 program pads, $10.00.
09100-90007 200 magnetic cards without envelopes, $80.00.

Purchase Plans

Purchase: HP 9100A, $4400. HP 9100B, $4900.
Rent: HP 9100A, $260.00 per month. HP 9100B, $285.00 per month.
Lease: HP 9100A, $115.00 per month. HP 9100B, $128.00 per month.

Service contracts available.

9100A and 9100B Comparison

Item               9100A   9100B
Storage registers  16      32
Program steps      196     392
Price              $4400   $4900

The 9100B also includes:

• An additional positioning instruction X←( ) for more convenient data recall.
• A dual program display for more convenient program editing.
• A greater subroutine capability.

Program Library

The Program Library furnished with the 9100s includes programmed solutions to practical problems in a wide range of scientific and engineering fields. It serves both as an illustration of programming techniques and as a source of ready-to-use programs. Program Library holders also receive the Hewlett-Packard KEYBOARD, a periodic publication which provides updating information and a forum for the exchange of programs by 9100 users.

Program categories include:

• Business
• Chemistry
• Electronics
• Fluid Mechanics
• Life Sciences
• Mathematics
• Mechanics
• Physics
• Statistics
• Structures
• Surveying
• Thermodynamics
Southern Bank CD Rates Calculator

Are you interested in investing in a Certificate of Deposit (CD) with Southern Bank but want to know the potential returns before making a decision? Use our Southern Bank CD Rates Calculator to estimate how much you could earn with different CD terms and interest rates.

What is a CD?

A Certificate of Deposit (CD) is a type of savings account that has a fixed interest rate and a fixed date of withdrawal, known as the maturity date. When you invest in a CD, you agree to keep your money in the account for a specified period of time, typically ranging from a few months to several years. In exchange for leaving your money untouched, you will earn a higher interest rate than a traditional savings account.

How to Use the Calculator

Using our Southern Bank CD Rates Calculator is simple. Just input the amount of money you plan to invest, the term length of the CD (in months or years), and the interest rate offered by Southern Bank. The calculator will then provide you with an estimate of how much your investment will grow by the end of the term.
Benefits of Using a CD

There are several benefits to investing in a CD with Southern Bank, including:
• Higher interest rates than traditional savings accounts
• Fixed, guaranteed returns
• Insured up to $250,000 by the FDIC
• Low risk investment option

Factors Impacting CD Returns

There are several factors that can impact the returns you will earn on your CD investment, including:
• Interest rate offered by the bank
• Term length of the CD
• Compounding frequency

Maximizing CD Returns

To maximize your CD returns, consider the following strategies:
• Choose a longer term CD to earn a higher interest rate
• Shop around for the best CD rates
• Reinvest your CD earnings to take advantage of compound interest

It’s important to note that the results provided by our Southern Bank CD Rates Calculator are estimates and may not reflect the exact returns you will earn on your CD investment. Actual returns may vary based on a variety of factors, including changes in interest rates and market conditions.

Ready to start estimating your potential CD returns with Southern Bank? Use our CD Rates Calculator now!
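Under the hood, a calculator of this kind typically applies the standard compound-interest formula A = P(1 + r/n)^(n·t). A minimal sketch in Python; the 4.5% rate, monthly compounding and dollar amounts below are illustrative assumptions, not Southern Bank's actual terms:

```python
def cd_value(principal: float, annual_rate: float, years: float,
             compounds_per_year: int = 12) -> float:
    """Future value of a CD: A = P * (1 + r/n) ** (n * t)."""
    return principal * (1 + annual_rate / compounds_per_year) ** (compounds_per_year * years)

# $10,000 in a 2-year CD at a hypothetical 4.5% rate, compounded monthly:
print(f"${cd_value(10_000, 0.045, 2):,.2f}")  # roughly $10,939.90
```

Longer terms and more frequent compounding both push the final value up, which is why term length and compounding frequency appear among the factors listed above.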
The binary / operator yields the quotient, and the binary % operator yields the remainder from the division of the first expression by the second. For integral operands the / operator yields the algebraic quotient with any fractional part discarded; if the quotient a/b is representable in the type of the result, (a/b)*b + a%b is equal to a; otherwise, the behavior of both a/b and a%b is undefined.
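A quick way to see these rules in action is to emulate C++'s truncation-toward-zero division; the sketch below is in Python (whose own // and % floor rather than truncate, so they are shown only for contrast), and cpp_divmod is a made-up helper name:

```python
import math

def cpp_divmod(a: int, b: int):
    """Emulate C++ integer / and %: the quotient is truncated toward zero."""
    q = math.trunc(a / b)  # algebraic quotient with the fractional part discarded
    r = a - q * b          # remainder chosen so that (a/b)*b + a%b == a
    return q, r

# Python's own floor-division operators differ on negative operands:
assert cpp_divmod(-7, 2) == (-3, -1)  # C++: -7/2 == -3, -7 % 2 == -1
assert divmod(-7, 2) == (-4, 1)       # Python floors instead

# The defining identity (a/b)*b + a%b == a holds for every sign combination:
for a, b in [(7, 2), (-7, 2), (7, -2), (-7, -2)]:
    q, r = cpp_divmod(a, b)
    assert q * b + r == a
```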
Fibonacci Sequence Calculation

What is the output of Fibonacci(5)? Calculate the Fibonacci sequence using the given recursive function.

The output of Fibonacci(5) is 5.

To find the Fibonacci sequence, we use the following recursive function: Fibonacci(n) = Fibonacci(n-1) + Fibonacci(n-2). Starting with the base cases Fibonacci(0) = 0 and Fibonacci(1) = 1, we can calculate the sequence as follows: 0, 1, 1, 2, 3, 5.

The Fibonacci sequence is a series of numbers in which each number is the sum of the two preceding ones. It starts with 0 and 1, and each subsequent number is obtained by adding the two previous ones. Applying the recursive formula term by term gives 0, 1, 1, 2, 3, 5, so the output of Fibonacci(5) is 5.
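The recurrence translates directly into code; a minimal recursive sketch:

```python
def fibonacci(n: int) -> int:
    """F(0) = 0, F(1) = 1, F(n) = F(n-1) + F(n-2)."""
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print([fibonacci(i) for i in range(6)])  # [0, 1, 1, 2, 3, 5]
print(fibonacci(5))                      # 5
```

Note that the plain recursive form recomputes subproblems exponentially often; for large n, a loop or memoization is the usual fix.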
Reasoning and logic analysis

The competitive-test topic Reasoning and logic analysis tests problem-solving skill: the use of precise logic as well as pattern identification.

Reasoning in competitive tests and logic analysis

Apart from Maths and English, modern leading competitive tests assess problem-solving skill through a relatively new topic, Reasoning. Though everyday reasoning also draws on result-bearing common sense, a formal academic test environment usually leaves no place for subjectivity. A number of different problem types, each with precise answers, have therefore been designed to test a student's pattern recognition skill, analytical skill and logic analysis skill, and these are clubbed together under the topic Reasoning.

Typically, Reasoning includes problems of the types Analogy, Coding-decoding, Family relations and Logic puzzles, and even English word, sentence and paragraph analysis. We will start this new branch with how to solve the relatively more difficult and confusing logic puzzles efficiently and quickly by applying the powerful collapsed column logic analysis technique, and will then go on to explore other types of Reasoning problems that require efficient problem solving. In general, the stress will be on logic analysis.

How the following posts are organized

All the posts starting with "How to solve..." belong to the top-level category Tutorial. The rest are gathered from other categories of posts because their content is predominantly rich in Reasoning or logic analysis. The tutorials are more academic, with more detailed explanations.

An example of such an other-than-Tutorial post is SBI PO type high level floor stay reasoning puzzle solved few confident steps 1, under subcategory SBI PO, which is under the top-level category Exams. This has been the first post in a series of SBI PO level reasoning puzzles solved and explained in detail under the category Exams.

A second example of such an other-than-Tutorial post is Method based solution of Einstein's logic analysis puzzle whose fish, under subcategory Brain teasers, which is under the top-level category Variety. This is in fact the detailed solution of a very popular and large-assignment logic analysis puzzle, generally known as Einstein's puzzle or Einstein's riddle, which we solved by applying the efficient collapsed column logic analysis technique. That series of posts starts with the first presentation of the collapsed column logic analysis technique in How to solve difficult SBI PO level reasoning puzzles in a few simple steps 1. The posts appear in latest-first order.
Emily Riehl Posted on February 1, 2020 by Edward Dunne Emily Riehl, a mathematician at Johns Hopkins University, won a huge prize from the university recently: the $250,000 President’s Frontier Award. Riehl works in category theory related to homotopy theory, such as $(\infty,1)$-categories. Her work has roots in earlier work of Quillen, Dwyer, Kan, Lurie, and others, but has significantly pushed the field forward. Riehl was a Ph.D. student at the University of Chicago, with J. Peter May as advisor. She was then a B.P. Instructor at Harvard, before moving to Johns Hopkins. Riehl is a host of the n-Category Café, who proudly wrote about her winning the prize in this post. You can read more about Riehl in the announcement from Johns Hopkins, from her web page, her posts on the n-Category Café, or by looking up her work in MathSciNet. A few reviews from MathSciNet are copied below. Congratulations, Emily Riehl! Some reviews of Riehl’s work. Riehl, Emily(1-CHI) Algebraic model structures. New York J. Math. 17 (2011), 173–231. 55U35 (18A32) The author defines and develops the theory of algebraic model categories. The adjective algebraic means that the two functorial weak factorization systems formed by the pairs trivial cofibrations-fibrations and cofibrations-trivial fibrations are algebraic in the sense of R. Garner [Appl. Categ. Structures 17 (2009), no. 3, 247–285; MR2506256] and that there is a morphism of algebraic weak factorization systems called a comparison map from the former algebraic weak factorization system to the latter. The algebraic weak factorization system was originally called a natural weak factorization system by M. Grandis and W. Tholen [Arch. Math. (Brno) 42 (2006), no. 4, 397–408; MR2283020]. This means that the functorial factorizations come from a comonad and a monad. 
In this setting, cofibrations and fibrations are retracts of coalgebras for comonads and algebras for monads, a result which may make it easier to prove the cofibrancy or the fibrancy of a given map. It is proved that every cofibrantly generated model category underlies a cofibrantly generated algebraic model category. Various algebraic analogues of classic results are given: transfer along adjunctions of an algebraic model structure, characterization of algebraic Quillen adjunctions, algebraic generalization of the projective model structure. Note that a non-cofibrantly generated weak factorization system may be cofibrantly generated in the algebraic sense: the generating set must be then replaced by a non-discrete small category. And a non-cofibrantly generated model category may underlie an algebraic model structure which is cofibrantly generated. Reviewed by Philippe Gaucher Riehl, Emily(1-HRV) Categorical homotopy theory. New Mathematical Monographs, 24. Cambridge University Press, Cambridge, 2014. xviii+352 pp. ISBN: 978-1-107-04845-4 18G55 (18D20 55U35) Categorical homotopy theory, like homological algebra and category theory itself, grew out of the need of algebraic topologists to generalize notions which arose in the study of topological spaces. It has since been applied to such areas as symplectic topology, algebraic geometry, and representation theory. The first complete formulation of an abstract approach to homotopy theory was provided by D. G. Quillen in [Homotopical algebra, Lecture Notes in Mathematics, No. 43, Springer, Berlin, 1967; MR0223432]: it is based on the notion of a model category $\scr M$ (such as the category $\bf{Top}$ of topological spaces, or that of chain complexes) in which one can carry out the actual constructions needed to define the coarser invariants captured by the corresponding homotopy category ${\rm ho}\,\scr M$, in which maps are replaced by their homotopy classes. 
The need to describe more refined invariants, which might depend on a choice of (higher) homotopies, led W. G. Dwyer and D. M. Kan to formulate an alternative approach, presented in terms of topologically (or simplicially) enriched categories [see Topology 19 (1980), no. 4, 427–440; MR0584566]. In particular, they showed that any model category can be endowed with such an enrichment. Variants of this approach have appeared in work of Rezk, Bergner, Joyal, Lurie, and others, all subsumed under the notion of an $(\infty,1)$-category. This is the first book to attempt a comprehensive treatment of both approaches. It differs from earlier accounts of one or the other, such as M. A. Hovey’s [Model categories, Math. Surveys Monogr., 63, Amer. Math. Soc., Providence, RI, 1999; MR1650134], P. S. Hirschhorn’s [Model categories and their localizations, Math. Surveys Monogr., 99, Amer. Math. Soc., Providence, RI, 2003; MR1944041], and J. Lurie’s [Higher topos theory, Ann. of Math. Stud., 170, Princeton Univ. Press, Princeton, NJ, 2009; MR2522659], in that it emphasizes the categorical aspects of the theory, rather than trying to address the needs of the “working algebraic topologist”. The first part of the book is devoted to the two related notions of derived functors and homotopy (co)limits—both of which can be defined in any category with a suitable notion of weak equivalences. After an exposition of Kan extensions, the author (following W. G. Dwyer et al. in [Homotopy limit functors on model categories and homotopical categories, Math. Surveys Monogr., 113, Amer. Math. Soc., Providence, RI, 2004; MR2102294]) constructs derived functors using deformations. Thus homotopy limits and colimits are provided in a simplicial model category (such as $\bf{Top}$) by the (co) bar construction. The second part is devoted to enriched homotopy theory, with an emphasis on weighted limits and colimits (in which one decorates a diagram $F\:I\to\scr C$ by a weight $W\:I\to{\bf Set}$). 
This notion is useful mainly in the enriched context: for a simplicial model category, homotopy (co)limits are weighted by the nerve functor into simplicial sets. In fact, one even has a notion of a weighted homotopy (co)limit. This allows one to enrich the homotopy category ${\rm ho}\,\scr M$ of a (simplicial) model category $\scr M$ over ${\rm ho}\,\bf{Top}$ (a result due to M. Shulman [cf. “Homotopy limits and colimits and enriched homotopy theory”, preprint, arXiv:math/0610194]), which can be thought of as a homotopy version of Dwyer-Kan localization. The third part of the book describes the classical notion of a model category, with an emphasis on the weak factorization systems of the author’s thesis [New York J. Math. 17 (2011), 173–231; MR2781914], enhanced by R. Garner’s version of the small object argument in [Appl. Categ. Structures 17 (2009), no. 3, 247–285; MR2506256]. It also recapitulates the author’s work with D. Verity on Reedy categories in [Theory Appl. Categ. 29 (2014), 256–301; MR3217884]. The last part deals with $(\infty,1)$-categories, in their quasi-category version, beginning with a survey of Joyal’s (as yet unpublished) monograph on the subject. The treatment of this vast subject is necessarily somewhat sporadic: the topics covered are the topological enrichment of quasi-categories, the treatment of (homotopy) isomorphisms, and some 2-categorical aspects. In summary, the book provides an interesting slant on the emerging subject of abstract homotopy theory, with an emphasis on categorical tools which may not be familiar to many practitioners in the field. Reviewed by David A. Blanc Riehl, Emily(1-JHOP); Verity, Dominic(5-MCQR-CT) The 2-category theory of quasi-categories. Adv. Math. 280 (2015), 549–642. 18G55 (18A05 18D20 18G30 55U10 55U35) Quasi-categories are simplicial sets satisfying the inner horn-filling condition. They were introduced by J. M. Boardman and R. M.
Vogt under the name “weak Kan complexes” [Homotopy invariant algebraic structures on topological spaces, Lecture Notes in Mathematics, Vol. 347, Springer, Berlin, 1973; MR0420609], and they provide a convenient model for $(\infty,1)$-categories (i.e. categories weakly enriched in $\infty$-groupoids). In particular, they include ordinary categories (via the nerve functor), and it is natural to try to extend the definitions and theorems of ordinary category theory into the quasi-categorical context. There has been significant work in this direction, mostly by A. Joyal [J. Pure Appl. Algebra 175 (2002), no. 1-3, 207–222; MR1935979; “The theory of quasi-categories and its applications. Vol. II”, Quadern 45, CRM Barcelona, 2008] and J. Lurie [Higher topos theory, Ann. of Math. Stud., 170, Princeton Univ. Press, Princeton, NJ, 2009; MR2522659]. This work is a new contribution in this direction. More precisely, the paper develops a formal category theory of quasi-categories using 2-category theory. The starting point is a (strict) 2-category of quasi-categories $\underline{qCat}_2$ defined as a quotient of the simplicially enriched category of quasi-categories $\underline{qCat}_\infty$. The underlying category of both enriched categories is the usual category of quasi-categories and simplicial maps. By translating simplicial universal properties into 2-categorical ones, it is shown that $\underline{qCat}_2$ is cartesian closed, and that equivalences in $\underline{qCat}_2$ are precisely the (weak) equivalences of quasi-categories introduced by Joyal, proving that this 2-category appropriately captures the homotopy theory of quasi-categories. It is also shown that $\underline{qCat}_2$ admits several weak 2-limits of a sufficiently strict variety with which to develop formal category theory.
In particular, it is shown that $\underline{qCat}_2$ admits weak cotensors by categories freely generated by a graph (including, in particular, the walking arrow), weak 2-pullbacks and weak comma objects. These are used to encode the universal properties associated to limits, colimits, adjunctions, and so forth. The work provides a self-contained account and to some outsiders hoping to understand the foundations of quasi-category theory it may be more approachable than the previous ones. Reviewed by Josep Elgueta Riehl, Emily(1-JHOP); Verity, Dominic(5-MCQR-CT) Homotopy coherent adjunctions and the formal theory of monads. Adv. Math. 286 (2016), 802–888. 18G55 (18C15 55U10 55U35 55U40) The impact of categories on the mathematics of structure reaches well beyond the wildest expectations of researchers in the 1950s. Needless to say, there are specific problems which may not be helped at all by category theory. Investigations began in the 1960s to show how 2-categories could be used to express commonalities in the study of many variants of the notion of category. In those days, we had in mind categories with specified structure, enriched categories, fibred categories, and so on. New examples are still arising. In their previous paper [Adv. Math. 280 (2015), 549–642; MR3350229], the authors showed how penetrating 2-category theory is in studying quasi-categories (simplicial sets for which all inner horns have fillers). Some new 2-category theory is invented for that purpose. For example, their notion of weak 2-limit lies between the strict notion of weighted limit for 2-categories and the bicategorical notion, involving as it does the concept of smothering functor. Also, what it means for a morphism of quasi-categories to have an adjoint is purely 2-categorical. The present paper delves more deeply into adjunctions between quasi-categories and the theory of monads. 
A cofibrant simplicial category they call the free homotopy coherent adjunction is introduced and described by means of a well-founded graphical calculus. Any adjunction of quasi-categories is shown to extend to a homotopy coherent adjunction; and these extensions are shown homotopically unique (the relevant spaces of extensions are contractible Kan complexes). The reviewer [in Category Seminar (Proc. Sem., Sydney, 1972/1973), 134–180. Lecture Notes in Math., 420, Springer, Berlin, 1974 (p. 167); MR0354813] described the weight required to obtain the Kleisli (and so, dually, the Eilenberg-Moore) construction of a monad as a colimit (limit). In the present paper, the appropriate weights are found to define the homotopy coherent monadic adjunction associated to a homotopy coherent monad. The authors show that each vertex in the quasi-category of algebras for a homotopy coherent monad is a codescent object of a canonical diagram of free algebras. The paper concludes with the quasicategorical version of the Beck monadicity theorem. Indeed, this paper makes clear that a mild variant of Beck’s argument can be expressed totally in terms of the weights themselves and is independent of the quasi-categorical context. Reviewed by R. H. Street Riehl, Emily(1-JHOP); Verity, Dominic(5-MCQR-CT) The comprehension construction. High. Struct. 2 (2018), no. 1, 116–190. 18G55 (55U35) The article under review is a continuation of the programme of “synthetic theory of $\infty$-categories” initiated by the authors in [E. Riehl and D. Verity, Adv. Math. 280 (2015), 549–642; MR3350229]. The fundamental framework is that of an $\infty$-cosmos, i.e., essentially a category of fibrant objects in the sense of Brown, enriched in quasi-categories. In this context, “$\infty$-category” refers to an object of an abstract $\infty$-cosmos and a great deal of the theory of $(\infty, 1)$-categories (or even higher structures) can be developed at this level of generality.
This includes notions such as quasi-categories, complete Segal spaces or complicial sets. In the present article, the authors define cartesian and cocartesian fibrations, develop their basic theory and establish comprehension of cocartesian fibrations, which generalizes a number of well-known constructions, such as unstraightening, of J. Lurie [Higher topos theory, Ann. of Math. Stud., 170, Princeton Univ. Press, Princeton, NJ, 2009; MR2522659]. A cocartesian fibration in an $\infty$-cosmos $\mathcal{K}$ is an isofibration $p \colon E \to B$ (here “isofibration” refers simply to a fibration in the category of fibrant objects underlying $\mathcal{K}$) such that the canonical functor $E \to p \downarrow B$ admits a left adjoint in the slice $\infty$-cosmos $\mathcal{K}_{/B}$. The notions of the comma object $p \downarrow B$ and an adjoint used here are defined in terms of the 2-categorical structure of $\mathcal{K}$ which arises from its enrichment. This definition captures the idea that the fibers of $p$ vary (covariantly) functorially over the $\infty$-category $B$. That idea is made precise in the main theorem which states that for any $\infty$-category $A$, there is a functor (the comprehension functor) $\mathsf{Fun}_{\mathcal{K}}(A, B) \to \mathsf{coCart(K)}_{/A}$. Here, $\mathsf{Fun}_{\mathcal{K}}(A, B)$ is the quasi-category of maps from $A$ to $B$ and $\mathsf{coCart(K)}_{/A}$ is the quasi-category of cocartesian fibrations over $A$. (The latter is the homotopy coherent nerve of the Kan complex enriched category whose objects are cocartesian fibrations over $A$ and whose morphisms are cocartesian functors between them.) The comprehension functor sends a morphism $a \colon A \to B$ to the pullback $E_a$ of $p$ along $a$ and its action on higher morphisms encodes the functoriality of such pullbacks.
The main technical ingredient of the proof is the theory of simplicial computads (i.e., cofibrant simplicial categories) the basics of which are described in great detail. As an application, the authors explain how the Yoneda embedding of an $\infty$-category $A$ can be constructed in terms of comprehension of the cocartesian functor $A^{\mathbf{2}} \to A \times A$ in the slice $\mathcal{K}_{/A}$ and prove a very general version of the higher categorical Yoneda lemma. Reviewed by Karol Szumiło

About Edward Dunne

I am the Executive Editor of Mathematical Reviews. Previously, I was an editor for the AMS Book Program for 17 years. Before working for the AMS, I had an academic career working at Rice University, Oxford University, and Oklahoma State University. In 1990-91, I worked for Springer-Verlag in Heidelberg. My Ph.D. is from Harvard. I received a world-class liberal arts education as an undergraduate at Santa Clara University.

This entry was posted in Mathematicians, Prizes and awards.
Shortest Common Supersequence: LeetCode Solution

Difficulty: Hard. Topics: string, dynamic-programming

The Shortest Common Supersequence problem on LeetCode is a dynamic programming problem that asks you to find the length of the shortest common supersequence of two given strings. A supersequence is defined as a string that contains both of the given strings as subsequences.

To solve this problem, you can use dynamic programming. The algorithm will make use of a matrix of size (n+1) x (m+1), where n and m are the lengths of the given strings. Matrix[i][j] denotes the length of the shortest common supersequence of the first i characters of the first string and the first j characters of the second string.

Here is the detailed solution:

1. Initialize the first row and column with Matrix[i][0] = i and Matrix[0][j] = j. This is because if either of the strings is empty, the shortest common supersequence is the other string itself, which has length equal to its own length.

2. Fill the rest of the matrix by iterating through the prefixes of both strings. The idea is that if the last characters of both prefixes match, then the length of the shortest common supersequence is one plus the length of the shortest common supersequence of the remaining prefixes. Otherwise, we consider appending the last character of one prefix or the other to form a new supersequence, and choose the option that yields the shorter length.

3. Once the matrix is filled, the length of the shortest common supersequence is stored in Matrix[n][m], where n and m are the lengths of the given strings.
Here is the Python code for the same:

def shortestCommonSupersequence(str1: str, str2: str) -> int:
    # Initializations
    n, m = len(str1), len(str2)
    Matrix = [[0] * (m + 1) for i in range(n + 1)]

    # Base cases: against an empty string, the supersequence is the other string
    for i in range(n + 1):
        Matrix[i][0] = i
    for j in range(m + 1):
        Matrix[0][j] = j

    # Fill the matrix
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if str1[i - 1] == str2[j - 1]:
                Matrix[i][j] = 1 + Matrix[i - 1][j - 1]
            else:
                Matrix[i][j] = 1 + min(Matrix[i - 1][j], Matrix[i][j - 1])

    # The length of the shortest common supersequence is stored in Matrix[n][m]
    return Matrix[n][m]

This solution has a time complexity of O(nm) and a space complexity of O(nm).
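A useful cross-check on this DP is the identity |SCS(a, b)| = |a| + |b| - |LCS(a, b)|, where LCS is the longest common subsequence. A self-contained sketch (function names are illustrative, and "abac"/"cab" is a commonly used example pair whose shortest common supersequence is "cabac"):

```python
def scs_len(a: str, b: str) -> int:
    """Length of the shortest common supersequence (the DP described above)."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i  # against an empty string, the SCS is the other string
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = 1 + dp[i - 1][j - 1]
            else:
                dp[i][j] = 1 + min(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

def lcs_len(a: str, b: str) -> int:
    """Length of the longest common subsequence, the textbook DP."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = 1 + dp[i - 1][j - 1]
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

a, b = "abac", "cab"
assert scs_len(a, b) == len(a) + len(b) - lcs_len(a, b) == 5  # e.g. "cabac"
```

Intuitively, the shortest supersequence writes out both strings but spells each common-subsequence character only once, which is exactly what the identity says.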
How do I make an amortization schedule for a car loan?

Are car loans amortized or simple interest? Auto loans charge simple interest, not compound interest. This is good: with compound interest, the interest itself earns interest over time, so the total amount paid snowballs. Auto loans are also “amortized”; as in a mortgage, the interest owed is front-loaded in the early payments.

Can I print an amortization schedule? Once you have calculated the payment, click on the “Printable Loan Schedule” button to create a printable report. You can then print out the full amortization chart.

Can I refinance a car loan? You may be able to refinance that loan to lessen your financial burden. Refinancing a car loan involves taking on a new loan to pay off the balance of your existing car loan. People generally refinance their auto loans to save money, as refinancing could score you a lower interest rate.

What is a payment schedule? It’s essentially a calendar that shows payments and their due dates, Omueti said. You can ask your lender for a payment schedule, but keep in mind that it won’t break down what part of your payment goes toward interest and what part goes toward principal.

The process of paying down this debt is known as car loan amortization. Your car loan’s amortization schedule, and the total amount of interest you pay on your loan, can be affected by factors like the length of your loan term, your interest rate and the size of your down payment.

Typically, there are no down payment requirements to refinance a vehicle. However, if you don’t have equity in your car, you may need to front some extra cash to meet refinancing requirements.
Interest on a car loan can add up quickly. It is easy to save money by paying your loan off early. The amount of interest you pay every month does decrease a little bit because your balance is going down. … Subtract this lower number from your original number and that will be your savings on interest. An auto loan amortization calculator, commonly known as an auto loan calculator, outlines your amortization schedule. Your amortization schedule tells you how much of your payment is going toward interest/fees and your principal balance. Even a single extra payment made each year can reduce the amount of interest and shorten the amortization, as long as the payment goes toward the principal and not the interest (make sure your lender processes the payment this way). With a simple interest auto loan, interest accrues on a daily basis based on the outstanding balance (principal balance). … So, each and every payment that the borrower makes will lower their principal balance, which in turn will lower the amount of interest that accrues with the next installment. This example teaches you how to create a loan amortization schedule in Excel. We use the PMT function to calculate the monthly payment on a loan with an annual interest rate of 5%, a 2-year duration and a present value (amount borrowed) of $20,000. … We use named ranges for the input cells. Stay on top of a mortgage, home improvement, student, or other loans with this Excel amortization schedule. Use it to create an amortization schedule that calculates total interest and total payments and includes the option to add extra payments. Four ways to pay off your car loan faster 1. Extra repayments. Assuming your car loan lender allows you to make extra repayments without penalty, this feature is one of the easiest ways to pay off a car loan faster. … 2. A balloon payment. … 3. Increasing payment frequency. … 4. Refinancing to a shorter term. Open Excel and click on “File” tab on the left hand side. 
Then click ‘New’ tab on the dropdown. You will see on the right all the templates available. Click on the ‘Sample Templates’, and you will see the ‘Loan Amortization Template’ there. It’s relatively easy to produce a loan amortization schedule if you know what the monthly payment on the loan is. Starting in month one, take the total amount of the loan and multiply it by the interest rate on the loan. Then for a loan with monthly repayments, divide the result by 12 to get your monthly interest. The process of paying down your loan over time is known as amortization. With an amortizing car loan, some of your monthly payment is applied to the amount you borrowed, which is known as the principal, and some goes toward interest and any fees. Amortization of Loans To arrive at the amount of monthly payments, the interest payment is calculated by multiplying the interest rate by the outstanding loan balance and dividing by 12. The amount of principal due in a given month is the total monthly payment (a flat amount) minus the interest payment for that month. When you’re calculating auto loan interest for your first payment, use this simple calculation: 1. Divide your interest rate by the number of monthly payments you will be making in this year. 2. Multiply it by the balance of your loan – for the first payment, this will be your total principal amount. A simple interest loan is one in which the interest has been calculated by multiplying the principal (P) times the rate (r) times the number of time periods (t). The formula looks like this: I (interest) = P (principal) x r (rate) x t (time periods).
More than half of all new car loans are currently financed for 84 months — seven years — or longer. Industry standard used to be to amortize car loans over 60 months — five years — but as low interest rates settled in, payment periods began to stretch longer and longer to make monthly payments as low as possible. Simple interest is calculated with the following formula: S.I. = P × R × T, where P = Principal, R = Rate of Interest in % per annum, and T = Time period. The rate of interest is in percentage r% and is to be written as r/100. Principal: The principal is the amount that is initially borrowed from the bank or invested. Paying off your car loan early frees up a good chunk of extra cash to keep in your pocket. If your car loan’s rate is low compared to other types of debt, like credit cards, consider paying off the debt with the highest interest rate first. That way you save more on total interest owed. Refinancing your car loan is fast and easy — and can put more money in your pocket. You may be able to reduce your monthly payment and boost your total savings on interest over the life of the loan. You generally need a history of six to 12 months of on-time payments to make refinancing worthwhile and possible. Paying off your loan sooner means it will eventually free up your monthly cash for other expenses when the loan is paid off. It also lowers your car insurance payments, so you can use the savings to stash away for a rainy day, pay off other debt or invest. Excel provides a variety of worksheet functions for working with amortizing loans: PMT. Calculates the payment for a loan based on constant payments and a constant interest rate. An amortization schedule, often called an amortization table, spells out exactly what you’ll be paying each month for your mortgage. The table will show your monthly payment and how much of it will go toward paying down your loan’s principal balance and how much will be used on interest.
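The simple-interest formula above, I = P x r x t, takes only a few lines to check in code; the loan figures below are hypothetical:

```python
def simple_interest(principal: float, annual_rate: float, years: float) -> float:
    """I = P * r * t: interest accrues on the principal only, never on interest."""
    return principal * annual_rate * years

# A hypothetical $15,000 loan at 6% held for 3 years:
interest = simple_interest(15_000, 0.06, 3)  # 15000 * 0.06 * 3 = 2700 dollars
total_repaid = 15_000 + interest             # A = P(1 + r*t)
```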
Since extra principal payments reduce your principal balance little by little, you end up owing less interest on the loan. … If you’re able to make $200 in extra principal payments each month, you could shorten your mortgage term by eight years and save over $43,000 in interest.

For example, auto loans, home equity loans, personal loans, and traditional fixed-rate mortgages are all amortizing loans. Interest-only loans, loans with a balloon payment, and loans that permit negative amortization are not amortizing loans.

Amortization describes the process of gradually paying off your auto loan. … A greater percentage of your monthly payment is applied to interest early in the life of the loan, and a greater percentage is applied to the principal at the end.

Simple Interest Formula

Amount (A) is the total money paid back at the end of the time period for which it was borrowed. The total amount formula in the case of simple interest can also be written as: A = P(1 + RT)

As of January 2020, U.S. News reports the following statistics for average auto loan rates:
• Excellent (750–850): 4.93 percent for new, 5.18 percent for used, 4.36 percent for refinancing.
• Good (700–749): 5.06 percent for new, 5.31 percent for used, 5.06 percent for refinancing.

FV, one of the financial functions, calculates the future value of an investment based on a constant interest rate. You can use FV with either periodic, constant payments, or a single lump-sum payment. Use the Excel Formula Coach to find the future value of a series of payments.

PMT, one of the financial functions, calculates the payment for a loan based on constant payments and a constant interest rate. Use the Excel Formula Coach to figure out a monthly loan payment. At the same time, you’ll learn how to use the PMT function in a formula.

5 Ways To Pay Off A Loan Early
1. Make bi-weekly payments. Instead of making monthly payments toward your loan, submit half-payments every two weeks. …
2. Round up your monthly payments. …
3. Make one extra payment each year. …
4. Refinance. …
5. Boost your income and put all extra money toward the loan.

=PMT(rate, nper, pv, [fv], [type])

The PMT function uses the following arguments:
• Rate (required argument) – The interest rate of the loan.
• Nper (required argument) – Total number of payments for the loan taken.

If you borrow $20,000 at 5.00% for 5 years, your monthly payment will be $377.42. Each month, a portion of your car payment goes to the principal and a portion to interest. At the beginning of the loan, a larger part of your payment goes to interest. So paying extra on the principal early in your loan will have the greatest impact on the overall amount of interest you pay.
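The payment formula and month-by-month split described above can be sketched in Python. The figures ($20,000 at 5.00% for 5 years, $377.42 per month) are the ones quoted in the text; the function names are illustrative:

```python
def monthly_payment(principal, annual_rate, years):
    """Standard amortizing-loan payment: P*r / (1 - (1+r)^-n)."""
    r = annual_rate / 12            # monthly interest rate
    n = years * 12                  # total number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

def amortization_schedule(principal, annual_rate, years):
    """Yield (month, interest, principal_paid, balance) rows."""
    payment = monthly_payment(principal, annual_rate, years)
    r = annual_rate / 12
    balance = principal
    for month in range(1, years * 12 + 1):
        interest = balance * r               # interest portion of this payment
        principal_paid = payment - interest  # the rest reduces the balance
        balance -= principal_paid
        yield month, interest, principal_paid, balance

# $20,000 at 5.00% for 5 years: about $377.42/month, as stated above
payment = monthly_payment(20000, 0.05, 5)
rows = list(amortization_schedule(20000, 0.05, 5))
```

Running the schedule shows the pattern described above: the first payment is mostly interest (about $83.33 of the $377.42), and the interest share shrinks every month as the balance falls.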
Understanding the Cuckoo Search Algorithm: A Step-by-Step Guide to Optimization

The Cuckoo Search Algorithm is a novel optimization technique inspired by the brood parasitism of some cuckoo species, where these birds lay their eggs in the nests of other bird species. This algorithm is particularly effective for solving complex optimization problems due to its unique approach to exploring potential solutions. In this article, we'll dive deep into the workings of the Cuckoo Search Algorithm, using a practical example to illustrate its key principles and calculations.

What is the Cuckoo Search Algorithm?

The Cuckoo Search Algorithm employs the concept of Lévy flights and uses a simple mechanism based on the following parameters:
• Total population of nests
• Probability of discovering a cuckoo egg (set to 0.25)
• Maximum number of iterations (set at 300)

Main Objectives: The algorithm's primary goal is to replace inferior solutions (bad solutions) in the current population with better ones, ensuring that it can efficiently converge towards optimal solutions. Since the algorithm operates without differentiating between a nest, a cuckoo egg, and a cuckoo, it has certain unique characteristics compared to traditional optimization approaches.

The Step-by-Step Process of Cuckoo Search

Initialization of Hosts and Cuckoos
1. Initialize the population of host nests: For our example, we consider five host nests (or cuckoos).
2. Representation of initial positions: Each host nest has a specific position, as illustrated in the example.
3. Assumption of uniformity: There is no differentiation among cuckoo eggs, nests, and solutions, which simplifies the calculations.

Lévy Flight Calculations

The next crucial step involves calculating the values for the Lévy flight. A Lévy flight is a random walk in which many small steps are interspersed with occasional large jumps.
A mathematical approach is used for calculating these values efficiently:
• The standard deviation of the flight can be calculated, guiding how far each cuckoo might move based on its current position.
1. Random Walk Mechanism:
□ Cuckoos choose a random nest and generate their new position via the Lévy distribution, a vital component that determines their next movement.
□ The step size is critical; it influences how far a cuckoo can travel from its current location. A small step size leads to minimal change, while a larger step size may cause extensive exploration of the search space.

Iteration and Updating Positions
1. Updating cuckoo positions: At every iteration, cuckoo positions are updated based on their most recent calculations:
□ If a newly calculated cuckoo solution is better than the existing nest solution, the nest's solution is replaced.
□ Overall, the algorithm attempts to converge towards an optimal solution by iteratively updating nests based on their values and positions in the search space.
2. Comparison check between cuckoo and nest:
□ Each cuckoo's solution is compared with a randomly selected nest. If a cuckoo finds a nest similar to its egg, that nest's solution is replaced by the cuckoo's better solution. If not, the worst-performing nest is destroyed and replaced with a new one nearby.

Convergence towards Optimal Solutions

With each iteration, cuckoos update their positions while maintaining a counter that tracks their progress. The current best solution is consistently re-evaluated as new solutions are generated.
• Ranking the solutions: After each iteration, solutions are ranked and the current best is identified, allowing the algorithm to track progress towards optimization.

The Cuckoo Search Algorithm is a powerful tool for solving optimization problems, characterized by its randomized search process and clever use of biological phenomena.
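The standard-deviation step mentioned above can be checked with a short Python computation. It uses Mantegna's formulation of the Lévy step size (an assumption about which variant the article follows, though it reproduces the 0.6966 value quoted in the worked example for the exponent β = 3/2):

```python
import math

beta = 1.5  # Lévy exponent, the "3/2" used in the worked example

# Mantegna's algorithm: standard deviation of the Gaussian numerator u
sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
         / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)

print(round(sigma, 4))  # 0.6966
```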
By logically deducing new potential solutions and iteratively updating them, it can efficiently navigate through complex search spaces. With this step-by-step guide and practical example, you'll better understand how to implement and utilize the Cuckoo Search Algorithm for your optimization challenges. If you have any questions or feedback regarding this tutorial, feel free to leave a comment below.

Video Transcript

Hello, in this video I will try to explain the cuckoo search algorithm using an example. Before this I tried to explain what cuckoo search is all about; that is covered in cuckoo search part 1, so everything is explained step by step. Topics covered in this video: how we can calculate the values for the Lévy flight, how we can calculate the value for each of the cuckoo's steps, and how we can update the cuckoo's new position. Then there are certain questions asked by users on cuckoo search part 1. Before starting this video I want to mention one thing: whatever calculation is done in this video is my own calculation, and if you find any error, please comment below.

The simplicity of this algorithm is that we use only two parameters here: the total population of nests, and the probability of discovery of a cuckoo egg, whose value is 0.25. These parameters are sufficient for most optimization problems. The third step is to set the maximum number of iterations, which is 300 here. One thing to note is that we cannot make any difference between a cuckoo egg and a nest. So what is the aim of this algorithm? We will replace the bad solutions in the current population with new and better ones. It is important to remember that we cannot make any difference between egg, nest, and cuckoo.

Initialize the population of host nests, which is five. In the research paper "Cuckoo Search via Lévy Flights" you can see the final locations of the nests and the search paths of the nests using cuckoo search. The example given in this video uses the algorithm from that paper; you can find a number of research papers on the internet, and there are slight differences in their algorithms, so this example is based on that one. We have only five host nests, and you can see the position of each host nest. As mentioned before, we cannot make any difference between host nest, egg, and solution, so equivalently you can see the position of each of the five cuckoos. The optimal point is 100.

Next we will obtain a new position for the i-th cuckoo. We select a cuckoo randomly and obtain a new position for it using a Lévy flight. Suppose first of all I choose the first cuckoo, so the value of i is one (i runs from 1 to n). We perform the Lévy flight using the update equation: here is the new solution, the current location, the step size, the entry-wise multiplication, and the Lévy exponent. To calculate the Lévy flight, which is the random walk done by the bird (the cuckoo searching for a suitable host nest in which to lay its egg), the random steps are drawn from a Lévy distribution; a Lévy distribution here means a series of smaller steps, and we can express the step size with an equation. One important thing: if the value you calculate for the step size s is too small, the change in position will be too small, so it is important to use a proper step size for the search space. Put the values in: with the Lévy exponent β = 3/2 we get the value of the standard deviation, 0.6966. Next, putting the values in, we get the step size 0.33802. The step size determines how far a random walker can go in a fixed number of iterations. In general, a random walk is a chain whose next location depends on the current location; the current location is the first term in the above equation, as we will see in the example.

In the Lévy flight we use x_best, the global best position. Right now this is the initial stage, so we don't have a best position for any cuckoo (or host), and we consider this value to be zero. We calculate the new solution as the old position plus the Lévy flight. Select a cuckoo randomly; I selected the first cuckoo, whose position is 4. Set the counter to 0 and put t = 0. The position of the first cuckoo at iteration 0 is 4. Put the values in, with the global best equal to 0 since this is the initial stage, and we get the new solution for the first cuckoo: 5.35.

The next step is to choose a nest randomly; then we compare the value of the cuckoo with the randomly selected nest. Here I selected nest number two, whose value is 6. Now check the condition: the condition is false, which means the cuckoo egg is not similar to the host egg, so we destroy the lowest-ranked egg and generate a new egg near the older one. The aim of this algorithm, again, is to replace the bad solution with a new and better one. This is the value we calculated for the first cuckoo at the first iteration; keep the best solutions and increment the counter until the condition is met.

Now we select another cuckoo, cuckoo number two, whose value is 6. Put the value into the Lévy flight and we get the new solution. Again select any nest randomly and put the value in: this time the condition is true, which means the cuckoo egg is similar to the host bird's egg. We replace the randomly selected nest with the new solution and destroy the lower-ranked nest, then calculate the new solution for the second cuckoo and update its position. Then we select another cuckoo, cuckoo number three; you can see its recently updated position. Put the values into the Lévy flight and we get the solution. Again select any nest randomly, put the value in, and check the condition: if it is true, replace the solution with the new solution and generate the new nest near the older one. We update like this for all of them, up to the fifth cuckoo.

These are the values we updated, and on that basis we get the population at iteration one. Now increment the counter. Keep the best solutions, rank the solutions, and find the current best: according to this, the current best is cuckoo number five, the one nearest to the optimum, so that is the current best. For iteration one the value of the counter was zero at the initial stage; now the counter is incremented, and we have a global best position, which is cuckoo number five.

In the next iteration, put in the value of the global best, 28.844. When you try to calculate this, the value of the counter is 2, and we calculate the new solution for the first cuckoo: put in the position of the first cuckoo at iteration one, which we recently calculated as 7.16, and the global best, 28.884. We are updating the bad solutions with new and better ones.

Now for the questions that were asked in part one. The first question is: in cuckoo search, is each egg equivalent to a nest? Yes; in this algorithm it is assumed that the cuckoo egg and the host egg are similar, and we cannot make any difference between a cuckoo egg and a host nest, so all of them are points in the space that are changing their positions. The second question is: how do we calculate the Lévy distribution? A Lévy distribution means a series of smaller steps, and you can see in the slides how we calculate these smaller steps using the Lévy flight. The next question is: does entry-wise multiplication mean element-by-element multiplication? Yes; in this example the vector form is used, so in cuckoo search we do entry-wise multiplication. You can also see the different parameters that are used; these parameters are sufficient for most optimization problems. I have provided all the important links in the description box, and if you still have any question you can comment below. Thanks for watching this video :)
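Putting the pieces together, here is a minimal Python sketch of the loop described above, with the same settings as the example (5 nests, discovery probability 0.25, 300 iterations, optimum at 100). The step scale `alpha` and the search bounds [0, 200] are illustrative choices, not values from the video:

```python
import math
import random

def levy_step(beta=1.5):
    """One Lévy-distributed step via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, lo, hi, n_nests=5, pa=0.25, iters=300, alpha=0.01):
    """Minimize f on [lo, hi]; returns the best position found."""
    nests = [random.uniform(lo, hi) for _ in range(n_nests)]
    best = min(nests, key=f)
    for _ in range(iters):
        # Get a cuckoo randomly by a Lévy flight and evaluate its new position
        i = random.randrange(n_nests)
        new = nests[i] + alpha * levy_step() * (nests[i] - best)
        new = min(max(new, lo), hi)
        # Compare with a randomly chosen nest; keep the better solution
        j = random.randrange(n_nests)
        if f(new) < f(nests[j]):
            nests[j] = new
        # A fraction pa of the worst nests is abandoned and rebuilt
        nests.sort(key=f)
        for k in range(n_nests):
            if k >= n_nests * (1 - pa):
                nests[k] = random.uniform(lo, hi)
        best = min(nests + [best], key=f)
    return best

random.seed(42)
# Toy objective with its optimum at 100, matching the example above
best = cuckoo_search(lambda x: (x - 100) ** 2, 0, 200)
```

With a fixed seed the best nest ends up close to the optimum at 100; the exact trajectory depends on the random draws.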
Evaluate how organizations can use one-sample hypothesis testing to determine if there are performance issues in the organization. Support your response with a specific example.

For example: suppose you want to check the average working hours of employees. You believe the average working duration of an employee is 7.6 hours. To check this claim, you collect a sample of 20 employees and note their working durations. Assume the data are approximately normally distributed, with sample mean x̄ = 7.765 hours and sample standard deviation s = 1.25 hours.

Null hypothesis H0: μ = 7.6
Alternative hypothesis Ha: μ ≠ 7.6

t = (7.765 − 7.6) / (1.25 / √20)
t = 0.165 / (1.25 / 4.472)
t = 0.165 / 0.2795
t ≈ 0.59

Degrees of freedom = n − 1 = 20 − 1 = 19
t critical (two-tailed, α = 0.05) = 2.09

Since |t calculated| is less than t critical, we fail to reject the null hypothesis H0.

Conclusion: We don't have sufficient evidence to conclude that the mean working duration is different from 7.6 hours.
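The arithmetic in the example can be verified with a few lines of Python (the 2.093 critical value is read from a t-table for df = 19, two-tailed α = 0.05):

```python
import math

# Sample summary from the example above
n = 20
xbar = 7.765   # sample mean working hours
s = 1.25       # sample standard deviation
mu0 = 7.6      # hypothesized mean under H0

# One-sample t statistic: (xbar - mu0) / (s / sqrt(n))
t = (xbar - mu0) / (s / math.sqrt(n))

t_crit = 2.093  # two-tailed critical value, alpha = 0.05, df = n - 1 = 19
reject_h0 = abs(t) > t_crit  # False: fail to reject H0
```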
An alternative approach in General Relativity describing Gravitational RedShift, Geons, Black Holes without Singularities and Dark Matter

Einstein approached the interaction between gravity and light by the introduction of the “Einstein Gravitational Constant” in the 4-dimensional Energy-Stress Tensor (1). In this alternative approach related to General Relativity, the interaction between gravity and light is presented by the sum of the Electromagnetic Tensor and the Gravitational Tensor (2). The new approach presents mathematical solutions for the GEONs (Gravitational Electromagnetic Interaction) introduced in 1955 by John Archibald Wheeler in a publication in Physical Review [1]. The mathematical solutions for GEONs (Black Holes without Singularities) are fundamental solutions of the relativistic quantum mechanical Dirac equation (Quantum Physics) in tensor presentation (35). Assuming a constant speed of light c and Planck’s constant ħ within the GEON, the radius R of a GEON with the energy of a proton is about 1% of the radius of the hydrogen atom (14). The New Theory has been tested in an experiment with two Galileo satellites and a ground station, by measuring the Gravitational RedShift in a stable MASER frequency emitted by the ground station [2]. The difference between the calculation of the Gravitational RedShift, within the gravitational field of the Earth, in General Relativity and in the New Theory is smaller than 10^-16 (12) and (13).

In all gravitational redshift experiments, General Relativity and the New Theory predict a Gravitational RedShift whose difference appears only beyond the 15th digit after the decimal point, which is beyond the accuracy of modern gravitational redshift observations. Both values always lie within the measured Gravitational RedShift in all observations published since the first observation of the gravitational redshift in the spectral lines of a white dwarf: the measurement of the shift for the star Sirius B, the white dwarf companion of Sirius, by W. S. Adams in 1925 at Mt. Wilson Observatory. Theories which unify Quantum Physics and General Relativity, like String Theory, predict that the natural constants are not constant. Accurate observations by the NASA MESSENGER mission [11] constrain the variation of the gravitational constant G in time to G′/G < 4 × 10^-14 per year. One of the characteristics of the New Theory is a value of the gravitational constant G that is constant in time, in unifying General Relativity and Quantum Physics.
Theoretical Physicist Job Description | Planet Possibility

What do theoretical physicists do?

There’s so much about physics and the universe that we still don’t know. Just think about the different areas of the universe that we still can’t travel to: from the bottom of the ocean to planets in outer space. This is where theoretical physicists come in: they work to help us learn more about the universe. They use complex maths to come up with different theories to better understand the fundamental properties of matter, which experimental physicists test by designing and conducting experiments.

What does a day in the life of a theoretical physicist involve?

Day-to-day responsibilities vary, but typically include: using computers and other software to learn more about physical phenomena, developing and refining different theories, working with other physicists to discuss equations and formulae, trying out different calculations to prove or disprove theories, designing equipment to make the research process smoother, and writing proposals to request research funding.

Who employs theoretical physicists?

Theoretical physicists conduct research into a variety of physics topics, such as quantum mechanics, the theory of relativity and dark matter. However, their work is very interdisciplinary, and they often work for many different industries. Theoretical physicists can be found advising governments, collaborating with space agencies, or applying their expertise to industries like healthcare.

What skills do I need to be a successful theoretical physicist?

As a theoretical physicist, you’ll be using maths every day. You’ll need solid maths skills to be able to solve complex problems precisely and accurately, create new formulae and interpret the results of your calculations. Research skills are also key. While you’ll spend a lot of time developing new theories, you’ll need to understand the research that influenced your current work.
Many theoretical physicists also work to further expand on old theories. Problem-solving skills are also important. Not only will you need to come up with solutions to complex equations, but you’ll also need to address any issues (equations that don’t work) that crop up during the process.

How can I become a theoretical physicist?

To become a theoretical physicist, it’s recommended to have a master’s and/or PhD, especially for research positions. Although this is expensive, there are ways to find financial support, such as UK government loans and grants from your university.

Who are some notable theoretical physicists?

Some famous theoretical physicists include: Albert Einstein, one of the most famous physicists in the world, who came up with the theory of relativity; Archimedes, a mathematician and physicist who came up with the physical law of buoyancy; and Shirley Ann Jackson, the first African American woman to gain a PhD in nuclear physics.
Distributed computing and the graph entropy region

Two remote senders observe X and Y, respectively, and can noiselessly send information via a common relay node to a receiver that observes Z. The receiver wants to compute a function f(X, Y, Z) of these possibly related observations, without error. We study the average number of bits that need to be conveyed to that end by each sender to the relay and by the relay to the receiver, in the limit of multiple instances. We relate these quantities to the entropy region of a probabilistic graph with respect to a Cartesian representation of its vertex set, which we define as a natural extension of graph entropy. General properties and bounds for the graph entropy region are derived, and mapped back to special cases of the distributed computing setup.

Funder: University of California

• Distributed source coding
• graph entropy
• zero-error information theory
Moving Average Crossover Trading Strategy

Date: 2023-11-27 17:25:36

The moving average crossover trading strategy generates buy and sell signals when shorter- and longer-term moving averages cross. It belongs to the family of technical-analysis-based trading strategies. The strategy is simple and capital efficient, with smaller drawdowns, and is suitable for medium- to long-term trading.

Strategy Logic

This strategy calculates the 20- and 50-period Exponential Moving Average (EMA). It opens a long position when the 20 EMA crosses over the 50 EMA, and a short position when the 20 EMA crosses under the 50 EMA. The EMA gives more weight to recent data. The formula is:

EMA_today = (Price_today × k) + EMA_yesterday × (1 − k)

where k = 2 / (number of periods + 1).

When the shorter-term EMA crosses over the longer-term EMA, it indicates a bullish price move (go LONG). When it crosses under, it indicates a bearish price reversal (go SHORT).

The pros of this strategy:
1. Simple logic, easy to understand and execute
2. Less capital required, smaller drawdowns
3. Flexible parameter tuning for different markets
4. Applicable to any instruments for scalping or trend trading

Risks and Enhancements

The risks include:
1. Frequent trading signals during price oscillation; filters can help.
2. A stop loss is needed to avoid being trapped.
3. Parameter optimization requires more historical data.

Possible enhancements:
1. Adding filters like Bollinger Bands to reduce false signals
2. Adding stop loss / take profit to avoid being trapped
3. Finding optimal parameter sets for different instruments
4. Combining with volume to confirm signals

The moving average crossover strategy is a simple yet effective technical strategy that is proven by the market. Further improvements in risk control and robustness can be achieved by parameter tuning, adding filters, etc. It serves as a fundamental building block for quantitative trading.
start: 2022-11-20 00:00:00
end: 2023-11-26 00:00:00
period: 1d
basePeriod: 1h
exchanges: [{"eid":"Futures_Binance","currency":"BTC_USDT"}]

// This source code is subject to the terms of the Mozilla Public License 2.0 at https://mozilla.org/MPL/2.0/
// © brandlabng
//study(title="Holly Grail", overlay = true)
strategy('HG|E15m', overlay=true)

src = input(close, title='Source')
price = request.security(syminfo.tickerid, timeframe.period, src)

ma1 = input(20, title='1st MA Length')
type1 = input.string('EMA', '1st MA Type', options=['EMA'])
ma2 = input(50, title='2nd MA Length')
type2 = input.string('EMA', '2nd MA Type', options=['EMA'])

price1 = if type1 == 'EMA'
    ta.ema(price, ma1)
price2 = if type2 == 'EMA'
    ta.ema(price, ma2)

//plot(series=price, style=line, title="Price", color=black, linewidth=1, transp=0)
plot(series=price1, style=plot.style_line, title='1st MA', color=color.new(#219ff3, 0), linewidth=2)
plot(series=price2, style=plot.style_line, title='2nd MA', color=color.new(color.purple, 0), linewidth=2)

longCondition = ta.crossover(price1, price2)
if longCondition
    strategy.entry('Long', strategy.long)

shortCondition = ta.crossunder(price1, price2)
if shortCondition
    strategy.entry('Short', strategy.short)
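For readers who want the same logic outside Pine Script, the EMA recursion and crossover test can be sketched in plain Python. The function defaults mirror the strategy's 20/50 periods:

```python
def ema(prices, period):
    """Exponential moving average: EMA_t = price_t * k + EMA_{t-1} * (1 - k)."""
    k = 2 / (period + 1)
    out = [prices[0]]              # seed with the first price
    for p in prices[1:]:
        out.append(p * k + out[-1] * (1 - k))
    return out

def crossover_signals(prices, fast=20, slow=50):
    """Return (index, 'long'/'short') where the fast EMA crosses the slow EMA."""
    f, s = ema(prices, fast), ema(prices, slow)
    signals = []
    for t in range(1, len(prices)):
        if f[t - 1] <= s[t - 1] and f[t] > s[t]:
            signals.append((t, 'long'))    # fast crossed above slow
        elif f[t - 1] >= s[t - 1] and f[t] < s[t]:
            signals.append((t, 'short'))   # fast crossed below slow
    return signals
```

With k = 2/(period + 1), this is the same recursion as the formula in the text; the seed value (here, the first price) is a common convention and a design choice.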
Cycle Partition of Dense Regular Digraphs and Oriented Graphs

Magnant and Martin [24] conjectured that every d-regular graph on n vertices can be covered by n/(d + 1) vertex-disjoint paths. Gruslys and Letzter [11] verified this conjecture in the dense case, even for cycles rather than paths. We prove the analogous result for directed graphs and oriented graphs, that is, for all α > 0, there exists n₀ = n₀(α) such that every d-regular digraph on n vertices with d ≥ αn can be covered by at most n/(d + 1) vertex-disjoint cycles. Moreover, if G is an oriented graph, then n/(2d + 1) cycles suffice. This also establishes Jackson's long-standing conjecture [14] for large n: that every d-regular oriented graph on n vertices with n ≤ 4d + 1 is Hamiltonian.

Publication series: European Conference on Combinatorics, Graph Theory and Applications, Masaryk University Press, Number 12, ISSN (electronic) 2788-3116
How To Calculate Basis Point: A Clear and Concise Guide - CalculatorBox

A basis point (bp) is a unit of measure used in finance to express the change in the value or rate of a financial instrument. One basis point equals 1/100th of 1%, or 0.01%. In decimal form, one basis point is written as 0.0001. Basis points are mainly used to indicate changes in interest rates, bond yields, and spreads on financial securities.

Fun Fact: One basis point is a minuscule unit, just 1/100th of a percentage point! But don't let its size fool you. In financial markets, a change of even a few basis points in interest rates or investment yields can translate into millions of dollars gained or lost. That's why understanding basis points is crucial for both individual investors and financial professionals.

When calculating basis points, the key formula to remember is:

Basis Points (bps) = Percentage (%) × 100

This means that to convert a percentage into basis points, you simply multiply the percentage by 100. For example, a 0.25% change in an interest rate would equal 25 basis points (0.25% × 100 = 25 basis points).

Conversely, to convert basis points into a percentage, you need to divide the basis points by 100. Using the previous example, 25 basis points would equal a 0.25% change in the interest rate (25 bps / 100 = 0.25%).

To help with calculating and understanding basis points, some tips include:
• Remember that one basis point equals 0.01%, or 0.0001 in decimal form.
• To convert basis points to percentages, simply divide by 100.
• When converting percentages to basis points, multiply by 100.

By using basis points in financial discussions, it becomes easier to communicate incremental changes, such as the spread on bond yields or differences in interest rates. This reduces the probability of misinterpretation and allows for more accurate comparisons between financial instruments.
Fundamental Concept of Basis Point

Basis points (BPS) are a unit of measurement used in finance, primarily for expressing interest rates and other percentages in an easy-to-understand format. One basis point is equivalent to one-hundredth of a percentage point, or 1/100 of 1.0%. It is a standard way of discussing percentages in the financial industry, especially when it comes to small figures.

To calculate basis points, you simply need to multiply the percent value by 100. For example, if you have an interest rate of 2.5%, the equivalent in basis points is 250 bps (2.5 × 100). Conversely, if you want to convert basis points back to a percentage, you just need to divide the basis points value by 100.

Using basis points makes it easier to compare interest rates and other percentages across different financial instruments, as it standardizes small changes in percentage values. Additionally, it helps you avoid misinterpretation or confusion, as a difference of 100 basis points is more transparent and less prone to error than a difference of 1 percent.

Calculating basis points is quite simple and can be done using basic arithmetic. However, there are also online calculators available, which can help you quickly and accurately convert between percent values and basis points. These tools are particularly useful when dealing with large datasets or complex financial calculations.

In summary, basis points are a fundamental concept in finance used to measure and communicate changes in interest rates or other percentages. Understanding how to calculate and convert between percent values and basis points is vital for anyone involved in financial analysis, investment decisions, or monetary policy discussions.

The Mathematics of Calculating Basis Points

Using Percentage Points

To calculate basis points (bps), you need to understand that one basis point is equal to 1/100th of 1%, or 0.01%. In other words, 100 basis points equal 1%.
Given this knowledge, you can easily convert percentages to basis points. To do this, simply multiply the percentage by 100. For example:

• 0.5% → 0.5 × 100 = 50 basis points
• 1.75% → 1.75 × 100 = 175 basis points

Using this method, you can also calculate the change between two percentages by first finding the difference of the percentages and then converting the difference to basis points. For instance, if the interest rate increases from 3% to 3.5%, the change is calculated as follows:

• Find the difference: 3.5% − 3% = 0.5%
• Convert to basis points: 0.5% × 100 = 50 bps

Utilizing Decimal System

Another way to calculate basis points is by utilizing the decimal system. Remember that 1 basis point is equal to 0.0001 in decimal form. To convert a given decimal number to basis points, simply multiply the decimal number by 10,000. For example:

• 0.0025 → 0.0025 × 10,000 = 25 basis points
• 0.015 → 0.015 × 10,000 = 150 basis points

If you need to calculate the change in basis points for two decimal numbers, follow these steps:

• Find the difference: 0.035 − 0.025 = 0.01
• Convert to basis points: 0.01 × 10,000 = 100 bps

These methods provide you with a simple way to accurately calculate basis points, helping you make informed decisions for your financial needs.

Application of Basis Points in Finance

In finance, basis points (bps) are often used to express changes in interest rates, yields on bonds, or other financial percentages. A basis point is equal to 1/100th of 1%, or 0.01%, and can be represented as 0.0001 in decimal form. To calculate the number of basis points, simply multiply the given percentage by 100. For example, if you have a 0.25% change in interest rate, you would multiply 0.25 by 100 to get 25 basis points. Conversely, to convert basis points back to a percentage, divide the number of basis points by 100. In this case, 25 basis points would translate to a 0.25% change.
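The decimal-system method above is just as easy to script; a minimal sketch in Python (function names are illustrative, and `round` is used only to tidy up floating-point noise):

```python
def decimal_to_bps(value):
    """Convert a rate in decimal form (e.g. 0.0025) to basis points."""
    return value * 10_000

def bps_change(old_rate, new_rate):
    """Basis-point change between two rates given in decimal form."""
    return (new_rate - old_rate) * 10_000

print(round(decimal_to_bps(0.0025)))    # 25
print(round(decimal_to_bps(0.015)))     # 150
print(round(bps_change(0.025, 0.035)))  # 100
```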
Basis points are particularly useful when discussing small changes in rates, such as those found in bond yield spreads or index fund fees. Since these changes can be less than 1%, representing them with basis points makes the numbers more understandable and easier to discuss. For instance, when comparing the yields of two different bonds, you could say that the spread between the two yields is 50 basis points, rather than referring to a difference of 0.5%. In addition to bond yields and interest rates, you may also encounter basis points when discussing exchange rates, mortgage rates, and loan spreads. Using basis points in these contexts allows for more precise communication and reduces the likelihood of misunderstanding or misinterpretation.

Practical Examples of Basis Point Calculation

In this section, we will provide you with a few examples of how to calculate basis points (bps) in different financial contexts.

Example 1: Interest Rate Change

Suppose a central bank increases its benchmark interest rate from 1.50% to 1.75%. To calculate the basis point change, you need to follow these steps:

• Subtract the initial rate from the new rate: 1.75% − 1.50% = 0.25%
• Convert the difference to basis points by multiplying by 100: 0.25% × 100 = 25 bps

The interest rate has increased by 25 bps.

Example 2: Bond Yield Spread

Let's say you are comparing two bonds: Bond A has a yield of 4.02%, and Bond B has a yield of 4.20%. To find the yield spread in basis points, follow these steps:

• Subtract the yield of Bond A from the yield of Bond B: 4.20% − 4.02% = 0.18%
• Convert the difference to basis points by multiplying by 100: 0.18% × 100 = 18 bps

The yield spread between Bond A and Bond B is 18 bps.

Example 3: Mutual Fund Expense Ratio

Imagine you are comparing the expense ratios of two mutual funds: Fund X has an expense ratio of 1.08%, while Fund Y has an expense ratio of 0.89%.
To calculate the difference in expense ratios in basis points, do the following:

• Subtract the expense ratio of Fund Y from the expense ratio of Fund X: 1.08% − 0.89% = 0.19%
• Convert the difference to basis points by multiplying by 100: 0.19% × 100 = 19 bps

Fund X has a 19 bps higher expense ratio than Fund Y.

These examples demonstrate how to calculate bps in various financial scenarios, highlighting their usefulness in measuring small changes and comparing different financial instruments.

Common Mistakes to Avoid While Calculating Basis Points

When calculating basis points, it is essential to avoid common errors that can lead to incorrect results. By understanding these mistakes, you can ensure the accuracy of your calculations and better comprehend the financial implications.

Understanding the Conversion Between Percentages and Basis Points

One typical mistake is not understanding the conversion between percentages and basis points. Remember that 1 basis point is equal to 0.01%, or 1/100th of a percent. To convert a percentage to basis points, you must multiply the percentage by 100. For instance, if you have an interest rate of 3%, it would translate to 300 basis points (3% × 100).

Avoiding Confusion with Other Decimal Units

Another common error is confusing basis points with other decimal values. Avoid mistaking basis points for a smaller unit, like permilles, or larger units like whole percentages. Ensure that you are using the correct values while calculating, and always maintain a consistent unit of measurement.

The Importance of Accounting for Compounding Periods

In addition, neglecting to account for compounding periods can lead to inaccurate calculations. When calculating interest rates, it's critical to understand whether the rate is compounded annually, semiannually, quarterly, or monthly. Accounting for this factor will more accurately reflect changes in interest rates or investment yields.
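To see why compounding frequency matters, compare the effective annual impact of a 25 bps nominal increase under monthly compounding; a small illustrative Python sketch (the rates are made-up examples, not real market data):

```python
def effective_annual_rate(nominal_rate, periods_per_year):
    """Effective annual rate for a nominal rate compounded n times per year."""
    return (1 + nominal_rate / periods_per_year) ** periods_per_year - 1

base = effective_annual_rate(0.0300, 12)    # 3.00% nominal, monthly compounding
bumped = effective_annual_rate(0.0325, 12)  # after a 25 bps nominal increase

# The effective change is slightly more than 25 bps because of compounding.
print(f"{(bumped - base) * 10_000:.1f} bps")
```

A 25 bps change in a nominal rate is therefore not exactly a 25 bps change in the effective annual rate once compounding is taken into account.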
The Role of Proper Rounding in Accurate Calculations

Finally, do not forget to use proper rounding while calculating. Financial calculations typically require rounding to a certain number of decimal places. Ensure that you round up or down accordingly and maintain consistency throughout your calculations.
{"url":"https://calculatorbox.com/how-to-calculate-basis-point-a-clear-and-concise-guide/","timestamp":"2024-11-10T14:31:22Z","content_type":"text/html","content_length":"204572","record_id":"<urn:uuid:5187f113-e536-4f72-98b5-25086d3d3e91>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00433.warc.gz"}
Schur-Weyl duality and the product of randomly-rotated symmetries by a unitary Brownian motion

In this paper, we introduce and study a unitary matrix-valued process which is closely related to the Hermitian matrix-Jacobi process. It is precisely defined as the product of a deterministic self-adjoint symmetry and a randomly-rotated one by a unitary Brownian motion. Using stochastic calculus and the action of the symmetric group on tensor powers, we derive an ordinary differential equation for the moments of its fixed-time marginals. Next, we derive an expression of these moments which involves a unitary bridge between our unitary process and another independent unitary Brownian motion. This bridge motivates and allows us to write a second direct proof of the obtained moment expression.

Keywords:
• Brownian motion in the unitary group
• free unitary Brownian motion
• Hermitian matrix-Jacobi process
• Schur-Weyl duality
• self-adjoint symmetries

ASJC Scopus subject areas:
• Statistical and Nonlinear Physics
• Statistics and Probability
• Mathematical Physics
• Applied Mathematics
{"url":"https://nyuscholars.nyu.edu/en/publications/schur-weyl-duality-and-the-product-of-randomly-rotated-symmetries","timestamp":"2024-11-07T19:19:26Z","content_type":"text/html","content_length":"49444","record_id":"<urn:uuid:74efbfc6-ec2c-47ef-83bd-1f015bc3a2d7>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00678.warc.gz"}
Curated Resource to Learn Linear Algebra

Linear algebra is a foundational subject in mathematics with applications across various fields such as computer science, engineering, physics, economics, and data science. Mastering linear algebra can open doors to understanding complex systems, solving large-scale linear equations, and working with vectors and matrices. Whether you're a student, professional, or a curious learner, having access to high-quality resources is essential for grasping the concepts of linear algebra. This article provides a curated list of resources to learn linear algebra, including textbooks, online courses, video lectures, and interactive tools.

Why Learn Linear Algebra?

1. Versatility in Applications
Linear algebra is integral in numerous fields. Engineers use it for circuit design and control systems, computer scientists apply it in graphics and machine learning, physicists rely on it for quantum mechanics and relativity, and economists use it to model economic systems. Its versatility makes it a must-learn for anyone involved in scientific or technical disciplines.

2. Foundation for Advanced Topics
Understanding linear algebra is a stepping stone to more advanced topics in mathematics and applied sciences. It provides the tools for learning calculus, differential equations, and more complex areas like functional analysis and tensor calculus.

3. Enhancing Problem-Solving Skills
Studying linear algebra enhances logical reasoning and problem-solving skills. It helps develop a structured way of thinking that is beneficial not only in academics but also in real-world scenarios where analytical skills are crucial.

Textbooks

Textbooks are a traditional yet invaluable resource for learning linear algebra. They provide comprehensive coverage of topics with detailed explanations, examples, and exercises.

1.
Linear Algebra and Its Applications by David C. Lay
This textbook is highly recommended for its clear explanations and practical approach. It covers all the fundamental concepts, including vector spaces, linear transformations, eigenvalues, and eigenvectors. The book includes numerous examples and exercises, making it suitable for self-study.

2. Introduction to Linear Algebra by Gilbert Strang
Gilbert Strang's textbook is a classic in the field. Known for its clarity and depth, it is widely used in university courses. Strang's book is particularly appreciated for its application-oriented approach and the way it connects linear algebra to real-world problems.

3. Linear Algebra Done Right by Sheldon Axler
This book takes a more theoretical approach to linear algebra, focusing on vector spaces and linear maps. It is ideal for those who wish to gain a deeper understanding of the subject. Axler's writing is rigorous yet accessible, making complex concepts understandable.

Online Courses

Online courses provide the flexibility to learn at your own pace and often include video lectures, quizzes, and assignments. Here are some top-rated online courses for learning linear algebra:

1. Linear Algebra by Khan Academy
Khan Academy offers a comprehensive and free course on linear algebra. The course covers all the essential topics with engaging video lectures and interactive exercises. It's a great starting point for beginners.

2. Linear Algebra by MIT OpenCourseWare
MIT OpenCourseWare provides free access to course materials from MIT's linear algebra course. This includes lecture notes, assignments, exams, and video lectures by Gilbert Strang. The course is thorough and is an excellent resource for in-depth learning.

3. Essence of Linear Algebra by 3Blue1Brown (YouTube)
This YouTube series by 3Blue1Brown is highly recommended for its visual and intuitive approach to teaching linear algebra.
The series uses animations to explain complex concepts in a way that is easy to understand. It's a fantastic resource for visual learners.

4. Linear Algebra for Machine Learning by Coursera
Coursera offers a specialized course on linear algebra with a focus on applications in machine learning. This course is part of the "Mathematics for Machine Learning" specialization and is ideal for those interested in applying linear algebra to data science and AI.

Video Lectures

Video lectures can supplement your learning with expert explanations and visual demonstrations.

1. Gilbert Strang's Linear Algebra Lectures (MIT OpenCourseWare)
Gilbert Strang's video lectures are legendary in the field of linear algebra education. These lectures cover the entire curriculum of MIT's linear algebra course and are available for free on MIT OpenCourseWare and YouTube.

2. The Great Courses: Linear Algebra by The Teaching Company
This course offers high-quality video lectures by Professor Richard Haase. It covers linear algebra concepts in a clear and engaging manner, making it suitable for learners at all levels.

Interactive Tools and Software

Interactive tools and software can make learning linear algebra more engaging and hands-on.

1. Wolfram Alpha
Wolfram Alpha is a powerful computational tool that can solve linear algebra problems, including matrix operations, eigenvalues, and eigenvectors. It's a great resource for checking your work and exploring concepts interactively.

2. GeoGebra
GeoGebra is a free interactive mathematics software that includes tools for linear algebra. It allows you to visualize vectors, matrices, and transformations, making abstract concepts more concrete.

3. MATLAB
MATLAB is a high-level programming language and environment used extensively in engineering and scientific research. It includes robust tools for linear algebra computations and is widely used in academic and professional settings.
Many universities provide access to MATLAB for their students.

Study Guides and Supplementary Resources

1. Schaum's Outline of Linear Algebra by Seymour Lipschutz
This study guide is part of the Schaum's Outline series and provides concise explanations of linear algebra concepts along with numerous solved problems and practice exercises. It's an excellent supplementary resource for exam preparation.

2. Paul's Online Math Notes
Paul's Online Math Notes is a free resource that offers detailed lecture notes, examples, and exercises on various math topics, including linear algebra. It's a great reference for additional explanations and practice problems.

3. Brilliant.org
Brilliant.org offers interactive problem-solving courses in mathematics, including linear algebra. The platform focuses on developing critical thinking and problem-solving skills through engaging and challenging exercises.

Tips for Learning Linear Algebra

1. Understand the Fundamentals
Start with the basics: understand vectors, matrices, and their operations. Grasping these foundational elements is crucial for tackling more complex topics.

2. Practice Regularly
Linear algebra requires practice. Work on problems regularly to reinforce your understanding. Use textbooks, online courses, and interactive tools to find a variety of problems.

3. Visualize Concepts
Visualization can significantly aid understanding. Use software tools like GeoGebra or visualization-focused resources like 3Blue1Brown's video series to see concepts in action.

4. Connect to Applications
Understanding how linear algebra is applied in real-world scenarios can deepen your comprehension and make the subject more interesting. Explore applications in fields like computer graphics, machine learning, and physics.

5. Study in Groups
If possible, study with peers. Explaining concepts to others and solving problems collaboratively can enhance your understanding.

6.
Seek Help When Needed
Don't hesitate to seek help if you're stuck. Online forums like Stack Exchange, Reddit's r/learnmath, and course-specific discussion boards can be invaluable resources.

7. Use Multiple Resources
Don't rely on a single resource. Different authors and instructors may explain concepts in varied ways. Combining textbooks, video lectures, and interactive tools can provide a well-rounded understanding.

Advanced Topics in Linear Algebra

Once you have a solid grasp of the basics, you can explore advanced topics in linear algebra. Here are some areas to consider:

1. Eigenvalues and Eigenvectors
These concepts are crucial in many applications, including stability analysis, quantum mechanics, and facial recognition algorithms. Understanding how to compute and interpret eigenvalues and eigenvectors is essential for advanced study.

2. Singular Value Decomposition (SVD)
SVD is a powerful technique in linear algebra with applications in signal processing, data compression, and machine learning. It decomposes a matrix into three other matrices, providing insights into the properties of the original matrix.

3. Numerical Linear Algebra
This area focuses on algorithms for performing linear algebra computations efficiently on computers. It is vital for applications that involve large datasets or require high computational precision.

4. Functional Analysis
Functional analysis extends concepts from linear algebra to infinite-dimensional spaces. It is a more abstract area of mathematics with applications in quantum mechanics, differential equations, and more.

5. Tensor Calculus
Tensors generalize vectors and matrices to higher dimensions. Tensor calculus is used in advanced physics, computer vision, and machine learning, particularly in deep learning.

Linear algebra is a powerful and versatile field of mathematics with applications across numerous disciplines.
With the right resources and a structured approach, anyone can master this subject. This curated list of textbooks, online courses, video lectures, and interactive tools provides a comprehensive starting point for your linear algebra journey. Remember to practice regularly, visualize concepts, and explore applications to deepen your understanding. Happy learning!
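As a small hands-on taste of the advanced topics above, the eigenvalues of a 2×2 matrix can be computed directly from its characteristic polynomial; a pure-Python sketch (the matrix entries are arbitrary example data):

```python
import math

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] from the characteristic polynomial
    lambda^2 - (a + d)*lambda + (a*d - b*c) = 0 (real-eigenvalue case)."""
    trace = a + d
    det = a * d - b * c
    disc = math.sqrt(trace * trace - 4 * det)  # assumes real eigenvalues
    return (trace - disc) / 2, (trace + disc) / 2

print(eigenvalues_2x2(3, 1, 1, 3))  # (2.0, 4.0)
```

For larger matrices, the numerical routines in tools like MATLAB or Wolfram Alpha mentioned above are the practical choice; this sketch only illustrates the underlying idea.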
{"url":"https://datannum.com/curated-resource-to-learn-linear-algebra/","timestamp":"2024-11-14T00:25:47Z","content_type":"text/html","content_length":"74517","record_id":"<urn:uuid:d36d038e-7c26-4459-89a3-e8bbec92d689>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00882.warc.gz"}
Multiplication Chart 19

Multiplication Chart 19 - The 19 times table, also known as the multiplication table for the number 19, is obtained by multiplying 19 by different whole numbers. Repeated addition by 19's gives the same results: for example, 19 × 3 = 19 + 19 + 19 = 57. To calculate the table of 19, we can simply multiply 19 by the number of the respective step.

The 19 times table up to 12 is:

19 × 1 = 19
19 × 2 = 38
19 × 3 = 57
19 × 4 = 76
19 × 5 = 95
19 × 6 = 114
19 × 7 = 133
19 × 8 = 152
19 × 9 = 171
19 × 10 = 190
19 × 11 = 209
19 × 12 = 228

Table of 19 has a pattern for the first ten multiples: write the first 10 odd numbers (1, 3, 5, ..., 19) in sequence in the ten's place, then, starting from the reverse side, write the digits 9 down to 0 in the one's place. This gives 19, 38, 57, 76, 95, 114, 133, 152, 171, 190.

Order does not matter when we multiply two numbers: it does not matter which is first or second, the answer is always the same. The 19 times table chart helps students, parents, and teachers learn multiplication skills, and you can also find different kinds of 19x19 multiplication grids, like blank or empty grids, to print out and practice.
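The chart above is easy to generate and check programmatically; a minimal Python sketch:

```python
# Print the 19 times table up to 12, matching the chart above.
for step in range(1, 13):
    print(f"19 x {step} = {19 * step}")

# The pattern for the first ten multiples: odd "tens parts" 1, 3, 5, ..., 19
# paired with ones digits counting down from 9 to 0.
multiples = [19 * n for n in range(1, 11)]
print(multiples)  # [19, 38, 57, 76, 95, 114, 133, 152, 171, 190]
```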
{"url":"https://chart.sistemas.edu.pe/en/multiplication-chart-19.html","timestamp":"2024-11-04T07:17:28Z","content_type":"text/html","content_length":"32773","record_id":"<urn:uuid:cd5566b2-c544-493a-b3a2-ca365f82b78b>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00108.warc.gz"}
Graph Edit Networks

While graph neural networks have made impressive progress in classification and regression, few approaches to date perform time series prediction on graphs, and those that do are mostly limited to edge changes. We suggest that graph edits are a more natural interface for graph-to-graph learning. In particular, graph edits are general enough to describe any graph-to-graph change, not only edge changes; they are sparse, making them easier to understand for humans and more efficient computationally; and they are local, avoiding the need for pooling layers in graph neural networks. In this paper, we propose a novel output layer - the graph edit network - which takes node embeddings as input and generates a sequence of graph edits that transform the input graph to the output graph. We prove that a mapping between the node sets of two graphs is sufficient to construct training data for a graph edit network and that an optimal mapping yields edit scripts that are almost as short as the graph edit distance between the graphs. We further provide a proof-of-concept empirical evaluation on several graph dynamical systems, which are difficult to learn for baselines from the literature.
{"url":"https://iclr.cc/virtual/2021/poster/3122","timestamp":"2024-11-05T12:54:09Z","content_type":"text/html","content_length":"43355","record_id":"<urn:uuid:cd6e16be-0e8c-4954-8ad3-5079fd31525b>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00127.warc.gz"}
781 Radian/Square Millisecond to Degree/Square Month

781 radian/square millisecond in degree/square second is equal to 44748003799.72
781 radian/square millisecond in degree/square millisecond is equal to 44748
781 radian/square millisecond in degree/square microsecond is equal to 0.044748003799717
781 radian/square millisecond in degree/square nanosecond is equal to 4.4748003799717e-8
781 radian/square millisecond in degree/square minute is equal to 161092813678980
781 radian/square millisecond in degree/square hour is equal to 579934129244340000
781 radian/square millisecond in degree/square day is equal to 334042058444740000000
781 radian/square millisecond in degree/square week is equal to 1.6368060863792e+22
781 radian/square millisecond in degree/square month is equal to 3.0947039437219e+23
781 radian/square millisecond in degree/square year is equal to 4.4563736789595e+25

781 radian/square millisecond in radian/square second is equal to 781000000
781 radian/square millisecond in radian/square microsecond is equal to 0.000781
781 radian/square millisecond in radian/square nanosecond is equal to 7.81e-10
781 radian/square millisecond in radian/square minute is equal to 2811600000000
781 radian/square millisecond in radian/square hour is equal to 10121760000000000
781 radian/square millisecond in radian/square day is equal to 5830133760000000000
781 radian/square millisecond in radian/square week is equal to 285676554240000000000
781 radian/square millisecond in radian/square month is equal to 5.40127731924e+21
781 radian/square millisecond in radian/square year is equal to 7.7778393397056e+23

781 radian/square millisecond in gradian/square second is equal to 49720004221.91
781 radian/square millisecond in gradian/square millisecond is equal to 49720
781 radian/square millisecond in gradian/square microsecond is equal to 0.049720004221908
781 radian/square millisecond in gradian/square nanosecond is equal to 4.9720004221908e-8
781 radian/square millisecond in gradian/square minute is equal to 178992015198870
781 radian/square millisecond in gradian/square hour is equal to 644371254715930000
781 radian/square millisecond in gradian/square day is equal to 371157842716380000000
781 radian/square millisecond in gradian/square week is equal to 1.8186734293102e+22
781 radian/square millisecond in gradian/square month is equal to 3.4385599374687e+23
781 radian/square millisecond in gradian/square year is equal to 4.951526309955e+25

781 radian/square millisecond in arcmin/square second is equal to 2684880227983
781 radian/square millisecond in arcmin/square millisecond is equal to 2684880.23
781 radian/square millisecond in arcmin/square microsecond is equal to 2.68
781 radian/square millisecond in arcmin/square nanosecond is equal to 0.000002684880227983
781 radian/square millisecond in arcmin/square minute is equal to 9665568820738900
781 radian/square millisecond in arcmin/square hour is equal to 34796047754660000000
781 radian/square millisecond in arcmin/square day is equal to 2.0042523506684e+22
781 radian/square millisecond in arcmin/square week is equal to 9.8208365182753e+23
781 radian/square millisecond in arcmin/square month is equal to 1.8568223662331e+25
781 radian/square millisecond in arcmin/square year is equal to 2.6738242073757e+27

781 radian/square millisecond in arcsec/square second is equal to 161092813678980
781 radian/square millisecond in arcsec/square millisecond is equal to 161092813.68
781 radian/square millisecond in arcsec/square microsecond is equal to 161.09
781 radian/square millisecond in arcsec/square nanosecond is equal to 0.00016109281367898
781 radian/square millisecond in arcsec/square minute is equal to 579934129244340000
781 radian/square millisecond in arcsec/square hour is equal to 2.0877628652796e+21
781 radian/square millisecond in arcsec/square day is equal to 1.2025514104011e+24
781 radian/square millisecond in arcsec/square week is equal to 5.8925019109652e+25
781 radian/square millisecond in arcsec/square month is equal to 1.1140934197399e+27
781 radian/square millisecond in arcsec/square year is equal to 1.6042945244254e+29

781 radian/square millisecond in sign/square second is equal to 1491600126.66
781 radian/square millisecond in sign/square millisecond is equal to 1491.6
781 radian/square millisecond in sign/square microsecond is equal to 0.0014916001266572
781 radian/square millisecond in sign/square nanosecond is equal to 1.4916001266572e-9
781 radian/square millisecond in sign/square minute is equal to 5369760455966.1
781 radian/square millisecond in sign/square hour is equal to 19331137641478000
781 radian/square millisecond in sign/square day is equal to 11134735281491000000
781 radian/square millisecond in sign/square week is equal to 545602028793070000000
781 radian/square millisecond in sign/square month is equal to 1.0315679812406e+22
781 radian/square millisecond in sign/square year is equal to 1.4854578929865e+24

781 radian/square millisecond in turn/square second is equal to 124300010.55
781 radian/square millisecond in turn/square millisecond is equal to 124.3
781 radian/square millisecond in turn/square microsecond is equal to 0.00012430001055477
781 radian/square millisecond in turn/square nanosecond is equal to 1.2430001055477e-10
781 radian/square millisecond in turn/square minute is equal to 447480037997.17
781 radian/square millisecond in turn/square hour is equal to 1610928136789800
781 radian/square millisecond in turn/square day is equal to 927894606790940000
781 radian/square millisecond in turn/square week is equal to 45466835732756000000
781 radian/square millisecond in turn/square month is equal to 859639984367190000000
781 radian/square millisecond in turn/square year is equal to 1.2378815774887e+23

781 radian/square millisecond in circle/square second is equal to 124300010.55
781 radian/square millisecond in circle/square millisecond is equal to 124.3
781 radian/square millisecond in circle/square microsecond is equal to 0.00012430001055477
781 radian/square millisecond in circle/square nanosecond is equal to 1.2430001055477e-10
781 radian/square millisecond in circle/square minute is equal to 447480037997.17
781 radian/square millisecond in circle/square hour is equal to 1610928136789800
781 radian/square millisecond in circle/square day is equal to 927894606790940000
781 radian/square millisecond in circle/square week is equal to 45466835732756000000
781 radian/square millisecond in circle/square month is equal to 859639984367190000000
781 radian/square millisecond in circle/square year is equal to 1.2378815774887e+23

781 radian/square millisecond in mil/square second is equal to 795520067550.53
781 radian/square millisecond in mil/square millisecond is equal to 795520.07
781 radian/square millisecond in mil/square microsecond is equal to 0.79552006755053
781 radian/square millisecond in mil/square nanosecond is equal to 7.9552006755053e-7
781 radian/square millisecond in mil/square minute is equal to 2863872243181900
781 radian/square millisecond in mil/square hour is equal to 10309940075455000000
781 radian/square millisecond in mil/square day is equal to 5.938525483462e+21
781 radian/square millisecond in mil/square week is equal to 2.9098774868964e+23
781 radian/square millisecond in mil/square month is equal to 5.50169589995e+24
781 radian/square millisecond in mil/square year is equal to 7.922442095928e+26

781 radian/square millisecond in revolution/square second is equal to 124300010.55
781 radian/square millisecond in revolution/square millisecond is equal to 124.3
781 radian/square millisecond in revolution/square microsecond is equal to 0.00012430001055477
781 radian/square millisecond in revolution/square nanosecond is equal to 1.2430001055477e-10
781 radian/square millisecond in revolution/square minute is equal to 447480037997.17
781 radian/square millisecond in revolution/square hour is equal to 1610928136789800
781 radian/square millisecond in revolution/square day is equal to 927894606790940000
781 radian/square millisecond in revolution/square week is equal to 45466835732756000000
781 radian/square millisecond in revolution/square month is equal to 859639984367190000000
781 radian/square millisecond in revolution/square year is equal to 1.2378815774887e+23
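All of these conversions follow the same pattern: scale the time unit (squared, since angular acceleration is per time squared) and scale the angle unit. A quick sketch in Python reproducing a few of the values above (variable names are my own):

```python
import math

RAD_PER_MS2 = 781  # the value being converted

# rad/ms^2 -> rad/s^2: one second is 1000 ms, so multiply by 1000^2
rad_per_s2 = RAD_PER_MS2 * 1000**2

# rad/s^2 -> deg/s^2: one radian is 180/pi degrees
deg_per_s2 = rad_per_s2 * 180 / math.pi

# rad/ms^2 -> turn/ms^2: one turn is 2*pi radians
turn_per_ms2 = RAD_PER_MS2 / (2 * math.pi)

print(rad_per_s2)              # 781000000
print(round(deg_per_s2, 2))    # 44748003799.72
print(round(turn_per_ms2, 2))  # 124.3
```

The printed values agree with the table entries for radian/square second, degree/square second, and turn/square millisecond.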
What is the slope of any line perpendicular to the line passing through $(-6,6)$ and $(-2,-13)$?

1 Answer

First, we need to determine the slope of the line through the two given points. The slope can be found by using the formula:

$m = \frac{y_2 - y_1}{x_2 - x_1}$

Where $m$ is the slope and $(x_1, y_1)$ and $(x_2, y_2)$ are the two points. Substituting the points provided gives:

$m = \frac{-13 - 6}{-2 - (-6)}$

$m = \frac{-19}{-2 + 6}$

$m = \frac{-19}{4} = -\frac{19}{4}$

The slope of a line perpendicular to a given line is the negative reciprocal of the slope of the given line. So, if the slope of a given line is $m$, the slope of a perpendicular line is $-\frac{1}{m}$.

For our problem, the slope of the given line is $-\frac{19}{4}$. Therefore, the slope of a perpendicular line is:

$-\frac{1}{-\frac{19}{4}} = -1 \times \left(-\frac{4}{19}\right) = \frac{4}{19}$
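The arithmetic above can be checked with a short script; this is a sketch using Python's `fractions` module to keep the result exact:

```python
from fractions import Fraction

# The two given points
x1, y1 = -6, 6
x2, y2 = -2, -13

# Slope of the line through the two points
m = Fraction(y2 - y1, x2 - x1)

# Slope of any perpendicular line: the negative reciprocal
m_perp = -1 / m

print(m)       # -19/4
print(m_perp)  # 4/19
```

Using `Fraction` avoids floating-point rounding, so the perpendicular slope comes out as exactly 4/19.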
Project cost estimation template

Every project manager has to take care of the cost of a project. Use this template to calculate it within Excel.

Project cost estimation

Here's a simple method to create a project cost estimation template in Microsoft Excel:

Enter data into Excel regarding project cost. In the first row, create headings for the different categories of costs you want to include in your template. Some common categories include materials, labor, equipment, and overhead.

Calculate the total of each stage by using the SUM formula, then drag it horizontally to get the sums of the other columns. Repeat this step for the stage 2 total.

Preparation of a project cost template

Add another column for the total cost per unit, which adds up each row horizontally. Then make two more rows below this table: enter the number of units in the cell corresponding to the units to be produced, and multiply the units to be produced by the total cost per unit. This is your total cost of production.

This is how to estimate project costs in Excel. Save the template and use it for future project cost estimations. Download a free project cost estimation template here.

Note: You can also use conditional formatting to highlight cells if the cost exceeds a certain threshold, or use charts and graphs to visualize the cost breakdown.

If your project cost estimation depends on multiple variables or assumptions, you can use Excel's data tables for sensitivity analysis. Data tables allow you to explore how changes in specific inputs affect project costs. To set up a data table, define your input values and create a table that displays the resulting costs for various scenarios.

Excel's Goal Seek tool can help you find the input values necessary to achieve a specific project cost target. If you have a target cost in mind and want to determine the required production quantity or cost per unit, you can use Goal Seek to perform this reverse calculation.
Access Goal Seek from the “Data” tab. For more advanced what-if analysis, consider using Excel’s Scenario Manager. This tool allows you to create and manage multiple scenarios with different sets of input values, helping you compare and understand how various factors affect project costs. By incorporating these advanced Excel features and techniques into your project cost estimation template, you can create a powerful tool for accurate, flexible, and insightful project cost analysis. It will not only streamline your cost estimation process but also provide valuable insights for decision-making and project management.
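To make the spreadsheet arithmetic concrete, here is the same calculation sketched in Python. The category names and figures are made up for illustration, not taken from the template:

```python
# Hypothetical per-unit cost categories (stand-ins for the template's columns)
costs = {"materials": 12.50, "labor": 8.00, "equipment": 3.25, "overhead": 1.25}

# Mirrors the SUM formula: total cost per unit across all categories
cost_per_unit = sum(costs.values())

# Mirrors "units to be produced" multiplied by "total cost per unit"
units_to_produce = 400
total_cost = cost_per_unit * units_to_produce

# A Goal Seek-style reverse calculation: units that fit a given budget
budget = 7500.0
units_for_budget = budget / cost_per_unit

print(cost_per_unit)     # 25.0
print(total_cost)        # 10000.0
print(units_for_budget)  # 300.0
```

The last step is the simplest case of what Goal Seek does: solving backwards from a target total to a required input.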
Probability information in risk communication: a review of the research literature

Communicating probability information about risks to the public is more difficult than might be expected. Many studies have examined this subject, so that their resulting recommendations are scattered over various publications, diverse research fields, and are about different presentation formats. An integration of empirical findings in one review would be useful therefore to describe the evidence base for communication about probability information and to present the recommendations that can be made so far. We categorized the studies in the following presentation formats: frequencies, percentages, base rates and proportions, absolute and relative risk reduction, cumulative probabilities, verbal probability information, numerical versus verbal probability information, graphs, and risk ladders. We suggest several recommendations for these formats. Based on the results of our review, we show that the effects of presentation format depend not only on the type of format, but also on the context in which the format is used. We therefore argue that the presentation format has the strongest effect when the receiver processes probability information heuristically instead of systematically. We conclude that future research and risk communication practitioners should not only concentrate on the presentation format of the probability information but also on the situation in which this message is presented, as this may predict how people process the information and how this may influence their interpretation of the risk.
Convert nanometers to nautical miles (nm to nmi)

Last Updated: 2024-11-04 00:30:47

Converting nanometers to nautical miles is a calculation that spans an enormous range of scales, transitioning from an extremely small unit used in scientific and technological contexts to a larger unit used primarily in maritime and aerial navigation.

History and Origin

Nanometers (nm): A nanometer is a metric unit of length, equal to one-billionth of a meter. It is commonly used in fields like nanotechnology, chemistry, and physics for measuring very small distances, such as the size of atoms and molecules.

Nautical Miles (nmi): A nautical mile is a unit of distance used in maritime and aviation navigation. Traditionally, it was defined as one minute of arc along a meridian of the Earth, making it closely related to the Earth's geometry. It is internationally agreed upon as exactly 1,852 meters.

Calculation Formula

To convert nanometers to nautical miles, use the formula:

\[ \text{Nautical Miles} = \text{Nanometers} \times \text{Conversion Factor} \]

Since one meter equals \(1 \times 10^9\) nanometers, and one nautical mile is 1,852 meters, the conversion factor from nanometers to nautical miles is \(1 / (1852 \times 10^9)\), or approximately \(5.39957 \times 10^{-13}\).

Example Calculation

For example, to convert 1,000,000,000 nanometers (or 1 millimeter) to nautical miles, the calculation is:

\[ \text{Nautical Miles} = 1{,}000{,}000{,}000 \times 5.39957 \times 10^{-13} \approx 0.000539957 \text{ nmi} \]

Why It's Needed and Use Cases

While this conversion is not typical in everyday life, it can be important in certain scientific and technical fields where precise measurements need to be translated from the nanoscale to larger, more practical units like nautical miles. For example, in materials science or engineering, understanding the size of nanomaterials in terms of nautical miles might be necessary for specific applications or theoretical discussions.
Common Questions (FAQ) • Why convert nanometers to nautical miles? Converting nanometers to nautical miles is typically done for scientific or theoretical purposes, to relate micro-scale measurements to larger, practical navigation units. • How precise is this conversion? The conversion is mathematically precise, based on the defined lengths of a nanometer and a nautical mile. • Is this conversion relevant outside of scientific fields? Generally, this conversion is most relevant in scientific and engineering fields, though it may occasionally be used in educational contexts for illustrative purposes. In summary, converting nanometers to nautical miles is an interesting exercise that demonstrates the ability to translate measurements across vastly different scales, from the incredibly small to the more practical and navigational. This conversion highlights the versatility and adaptability of measurement systems in various technical and scientific disciplines.
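The formula translates directly into code. A minimal sketch in Python (the function name is my own):

```python
NM_PER_METER = 1e9     # nanometers in one meter
METERS_PER_NMI = 1852  # meters in one nautical mile, by definition

def nanometers_to_nautical_miles(nm: float) -> float:
    """Convert a length in nanometers to nautical miles."""
    return nm / (NM_PER_METER * METERS_PER_NMI)

# The example above: 1,000,000,000 nm (1 millimeter)
result = nanometers_to_nautical_miles(1_000_000_000)
print(result)  # approximately 0.000539957 nmi
```

Dividing by the two defined constants is equivalent to multiplying by the conversion factor \(5.39957 \times 10^{-13}\), but avoids hard-coding a rounded value.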
Computational Complexity

Bill, in our typecast earlier this week I said you were older than me. But 68? You don't look a day over 66. Neither do you. But seriously, why do you think I'm 68? I just Google'd "How old is Bill Gasarch?" Don't believe everything you read on the Internet. I'm really 58. Prove it. Here's my driver's license. Bill, you don't drive. And it literally says "NOT A DRIVER'S LICENSE" on the back. But it is an official State of Maryland Identification card stating that you were born in 1959. Are you saying I should trust the state of Maryland over Google? Yes, because they pay my salary.

Back to Dagstuhl. Let's talk about the talks. William Hoza gave a nice talk about hitting sets for L (deterministic small space) vs RL (randomized small space), but when I asked him when we will prove L = RL he said not for fifty years. Grad students are not supposed to be that pessimistic. You mean realistic. Though I'd guess more like 10-20 years. I wouldn't even be surprised if NL (nondeterministic log space) = L. Arpita Korwar: I say 10-15 years. Can we put that in the blog? Too late.

Bill, I heard you were the stud muffin this week. Yes, I talked about the muffin problem. Got a problem with that? Needed milk. I saw this talk two years ago and now you have cool theorems. Who would've thought that if you have 24 muffins and 11 people you can allocate 24/11 muffins and the smallest piece is 19/44, and that's the best possible for maximizing the smallest piece. I can't believe you actually listened to the talk and didn't fall asleep. zzzzzz. Did you say something? Never mind.

Eric Allender talked about the minimum circuit-size problem: Given the truth table of a function f, is there a circuit for f of less than a given size w? The problem is frustrating; just consider the following theorem: if MCSP is NP-complete then EXP (exponential time) does not equal ZPP (zero-error probabilistic polynomial time). Do you think EXP = ZPP?
No, the result only tells us it will be hard to prove MCSP is NP-complete without informing us whether or not it is NP-complete. Allender did show that under projections it isn't NP-complete (Editor's Note: I should have said log-time projections; see Eric's comment. SAT and all your favorite NP-complete problems are complete under log-time projections). MCSP might be complete under poly-time reductions but not under weaker reductions. Reminds me of the Kolmogorov random strings that are hard for the halting problem under Turing reductions but not under many-one reductions. Everything reminds you of the Kolmogorov strings. As they should.

I liked Michal Koucký's talk on Gray codes. Shouldn't that be grey codes? We're not in the UK. It's the color, you moron. It's named after Frank Gray. You are smarter than you look, not bad for a 68 year old. I missed Koucký's talk due to a sports injury, but he did catch me up later. I never put Lance and sports in the same sentence before. And I never put Bill and driving together. It's a new day for everything. Koucký showed how to easily compute the next element in a Gray code by querying only a few bits, as long as the alphabet is of size 3. This contrasts with Raskin's 2017 paper, which shows that with a binary alphabet you need to query at least half the bits. Hey, you stole my line. That's not possible. You are editing this. I think this typecast has gone long enough. Take us out. In a complex world, best to keep it simple.

Lance: Welcome to our typecast directly from Dagstuhl in Southwestern Germany for the 2018 edition of the seminar on Algebraic Methods in Computation Complexity. Say hi Bill.

Bill: Hi Bill. So Lance, are you disappointed we didn't go to Heisenberg for the Michael Atiyah talk claiming a solution to the Riemann Hypothesis?

Lance: I knew how fast I was going but I got lost going to Heisenberg. I think you mean the Heidelberg Laureate Forum, 100 miles from here. From what I heard we didn't miss much.
For those who care here is the video, some twitter threads and the paper.

Bill: Too bad. When I first heard about the claim I was optimistic because (1) László Babai proved that graph isomorphism is in quasipolynomial time at the age of 65 and (2) since Atiyah was retired he had all this time to work on it. Imagine, Lance, if you were retired and didn't have to teach or do administration, could you solve P vs NP? (This gets an LOL from Nutan Limaye)

Lance: I'll be too busy writing the great American novel. Before we leave this topic, don't forget about the rest of the Laureate Forum, loads of great talks from famous mathematicians and computer scientists. Why didn't they invite you Bill?

Bill: They did but I'd rather be at Dagstuhl with you to hear about lower bounds on matrix multiplication from Josh Alman. Oh, hi Josh, I didn't see you there.

Josh: Happy to be here, it's my first Dagstuhl. I'm flying around the world from Boston via China to get here. Though my friends say it's not around the world if you stay in the Northern hemisphere. They are a lot of fun at parties. But not as much fun as matrix multiplication.

Bill: So Josh, what do you have to say about matrix multiplication? Is it quadratic time yet?

Josh: Not yet, and we show all the current techniques will fail.

Bill: Wouldn't Chris Umans disagree?

Kathryn Fenner: You shouldn't pick on Canadians [Ed note: Josh is from Toronto]. Pick on students from your own country.

Josh: (diplomatically) I think Chris Umans has a broader notion of what counts as known methods. There are some groups that aren't ruled out but we don't know how to use them.

Chris: Very well put. The distinction is between powers of a fixed group versus families of groups like symmetric groups. The latter seems like the best place to look.

Lance: Thanks Chris. Josh, what are your impressions of Dagstuhl so far?

Josh: I like the sun and grass. I wish it was easier to get here.

Lance: This is only the first day.
You haven't even found the music room yet, past the white room, past the billiard room where Mr. Green was murdered with the candlestick. Oh hi Fred Green. Luckily Dr. Green is still alive. I remember my first Dagstuhl back in February of 1992.

Josh: Two months before I was born.

Lance: Way to make me feel old.

Bill: You are old.

Lance: You are older. Believe it or not, six from that original 1992 meeting are here again this week: the two of us, Eric Allender, Vikraman Arvind, Uwe Schöning and Jacobo Torán. Amazing how accents show up as we talk.

Bill: What did I sleep through this morning before Josh's talk?

Lance: Amnon Ta-Shma talked about his STOC 2017 best paper and Noga Ron-Zewi showed some new results on constructive list-decoding.

Bill: Let's do this again later in the week. Lance, take us out.

Lance: In a complex world, best to keep it simple.

Recently I talked with Ehsan Hoque, one of the authors of the ACM Future of Computing Academy report that suggested "Peer reviewers should require that papers and proposals rigorously consider all reasonable broader impacts, both positive and negative," which I had satirized last May. Ehsan said that "if email had sender authentication built in from the beginning then we wouldn't have the phishing problems we have today". Leaving aside whether this statement is fully true, why didn't we put sender authentication and encryption in the first email systems? Email goes back to the 60's but I did get involved on the early side when I wrote an email system for Cornell in the early 80's. So let me take a crack at answering that question.

Of course there are the technical reasons. RSA was invented just a few years earlier and there were no production systems, and the digital signatures needed for authentication were just a theory back then. The amount of overhead needed for encryption in time and bandwidth would have stopped email in its tracks back then.
But it's not like we said we wish we could have added encryption to email if we had the resources. BITNET, which Cornell used, and the ARPANET gateway only connected with other universities, government agencies and maybe some industrial research labs. We generally trusted each other and didn't expect anyone to fake email for the purpose of getting passwords. It's not like these emails could have links to fake login pages. We had no web back then.

But we did all receive an email from a law firm offering green card help. My first spam message. We had a mild panic, but little did we guess that spam would nearly take down email at the turn of the century. Nor would we have guessed the solution would come from machine learning, which kills nearly all spam and much of the phishing emails today.

I don't disagree with the report that we should think about the negative broader impacts, but the true impacts, negative and positive, are nearly impossible to predict. Computer Science works best when we experiment with ideas, get things working and fix problems as they arise. We can't let the fear of the future prevent us from getting there.

Scott Aaronson recently won the Tomassoni-Chisesi Prize in Physics (yeah Scott!). In his post (here) about it he makes a passing comment: I'm of course not a physicist. I won't disagree (does that mean I agree? Darn Logic!) but it raises the question of how we identify ourselves. How do we answer the question: Is X a Y? (We will also consider why we care, if we do.) Some criteria below. Note that I may say things like `Dijkstra is obviously a computer scientist' but this is cheating since my point is that it may be hard to tell these things (though I think he is).

1) If X is in a Y-dept then X is a Y. While often true, there are some problems: MIT CS is housed in Mathematics, some people change fields. Readers- if you know someone who is in dept X but really does Y, leave a comment. (CORRECTION- I really don't know how MIT is structured.
I do know that the Math Dept has several people who I think of as Computer Scientists: Bonnie Berger, Michael Goemans, Tom Leighton, Peter Shor, Michael Sipser. There may be others as well. The point being that I would not say `Sipser is a mathematician because he is in the MIT Math Dept')

2) If X got their degree in Y then they are Y. Again, people can change fields. Also, some of the older people in our field got degrees in Physics or Math since there was no CS (I am thinking Dijkstra-Physics, Knuth-Math). Even more recently there are cases. Andrew Childs's degree is in Physics, but he did quantum computing. Readers- if you know someone who got their degree in X but is now doing Y, leave a comment.

3) Look at X's motivation. If Donald Knuth does hard math but he does it to better analyze algorithms, then he is a computer scientist. One problem -- some people don't know their own motivations, or it can be hard to tell. And people can get distracted into another field.

4) What does X call himself? Of course people can be wrong. The cranks who email me their proofs that R(5) is 40 (it's not) think they are mathematicians. They are not- or are they? See next point.

5) What X is interested in, independent of whether they are good at it or even know any of it. Not quite right- if an 8-year-old Bill Gasarch is interested in the Ketchup problem, that does not make him a mathematician.

6) What X is working on right now. Fine, but might change. And some work is hard to classify.

7) If you win an award in X, then you are an X. Some exceptions:

Scott is a computer scientist who won the Tomassoni-Chisesi Physics Prize.
Ed Witten is a physicist who won the Fields Medal (Math).
John Nash is a mathematician who won a Nobel prize in Economics.

I want to make a full circle- so if you know of other X's who won a prize in Y, leave a comment and we'll see what kind of graph we get: bipartite with people on one side and fields on the other.

8) What can they teach? Helpful in terms of hiring when you want to fill teaching needs.
Does any of this matter? We use terms like `mathematician', `physicist', `computer scientist' as shorthand for what someone is working on, so it's good to know we have it right.

Often when the question comes up of what happens if P = NP, one typically hears the response that it kills public-key cryptography. And it does. But that gives the impression that given the choice we would rather not have P = NP. Quite the opposite: P = NP would greatly benefit humanity by solving AI (by finding the smallest circuit consistent with the data) and curing cancer. I've said this before but never explained why.

Of course I don't have a mathematical proof that P = NP cures cancer. Nor would an efficient algorithm for SAT immediately give a cancer cure. But it could work as follows:

1. We need an appropriately shaped protein that would inhibit the cancer cells for a specific individual without harming the healthy cells. P = NP would help find these shapes, perhaps from just the DNA of the person and the type of cancer.

2. At this point we don't understand the process that takes an ACGT protein sequence and describes the shape that it forms. But it must be a simple process because it happens quickly. So we can use P = NP to find a small circuit that describes this process.

3. Use P = NP to find the protein sequence for which the circuit from #2 will output the shape from #1.

We'll need a truly efficient algorithm for NP problems for this to work. An n^50 algorithm for SAT won't do the job. All these steps may happen whether or not P = NP, but we'll need some new smart algorithmic ideas.

Please note this is just a thought exercise since I strongly believe that P ≠ NP. I do not want to give false hope to those with friends and loved ones with the disease. If you want to cure cancer your first step should not be "Prove P = NP".

This is an ANON guest post. Even I don't know who it is! They emailed me asking if they could post on this topic; I said I would need to see the post. I did and it was fine.
I have written many tenure/promotion letters before. But this summer, I was especially inundated with requests. Thinking about my past experiences with such letters, I started to question their usefulness.

For those unfamiliar with the process, let me explain. When someone is applying for a research job, they typically need to have recommendation letters sent on their behalf. Once someone is hired in a tenure-track position, they then need to get additional letters each time they are promoted (in the US, this will typically occur when someone is granted tenure and again when they are promoted to full professor).

Now, I know from experience that recommendation letters are scrutinized very carefully, and often contain useful nuggets of information. I am not questioning the value of such letters (though they may have other problems). I am focusing here only on tenure/promotion letters.

Let me fill in a bit more detail about the tenure/promotion process, since it was a mystery to me before I started an academic position. (I should note that everything I say here is based only on how things are done at my institution; I expect it does not differ much at other US universities, but it may be different in other countries.) First, the department makes a decision as to whether to put forward a case for promotion. If they do, then a package is prepared that includes, among other things, the external recommendation letters I am talking about. After reviewing the candidate's package, the department holds an official vote; if positive, then the package is reviewed and voted on by higher levels of administration until it is approved by the president of the university.

The external letters appear very important, and they are certainly discussed when the department votes on the candidate's case. However, I am not aware of any cases (in computer science) where someone who was put forward for tenure was denied tenure.
(In contrast, I am aware of a very small number of cases where a department declined to put someone forward for tenure. In such cases, no letters are ever requested.) Perhaps more frustrating, this seems to be the case even when there are negative letters. In fact, I have written what I consider to be "negative" letters in the past only to see the candidate still get tenure. (To be clear, by academic standards a negative letter does not mean saying anything bad, it just means not effusively praising the candidate.)

This makes me believe that these letters are simply being used as "checkboxes" rather than real sources of information to take into account during the decision-making process. Essentially, once a department has decided to put someone forward for promotion, they have effectively also decided to vote in favor of their promotion.

Letters take a long time to write, especially tenure/promotion letters, and especially when you are not intimately familiar with someone's work (even if they are in the same field). But if they are basically ignored, maybe we can all save ourselves some time and just write boilerplate letters (in favor of tenure) instead?

Glencora Borradaile wrote a blog post in June about how conferences discriminate:

Let me spell it out. In order to really succeed in most areas of computer science, you need to publish conference papers and this, for the most part, means attendance at those conferences. But because of the institutional discrimination of border control laws and the individual discrimination that individuals face and the structural discrimination that others face, computer science discriminates based on nationality, gender identity, disability, and family status, just to name a few aspects of identity.

Suresh Venkatasubramanian follows up with a tweet storm (his words) echoing Glencora's points.

Is there structural (i.e., not intentional or institutional) bias in how conferences operate?
I.e., is there a systematic and persistent disadvantage to certain groups from how conferences are structured? If we consider location, and groups = non-US people, then yes.
— Suresh Venkatasubramanian (@geomblog) July 6, 2018

Ryan Williams had a twitter thread defending conferences:

Because of where I live and work, I can collaborate with and see talks by many more people than the average person in my field. To me, conferences serve as a way of *leveling* that field, giving a venue where people from all over can benefit similarly.
— R. Ryan Williams (@rrwilliams) July 7, 2018

Not much difference these days between blog posts, tweet storms, and twitter threads, and I recommend you read through them all. Much as I think conferences should not serve as publication venues, they do and should play a major role in connecting people within the community. We should do our best to mitigate the real concerns of Glencora and Suresh, create an environment in which everyone feels comfortable, have travel support and child care to make it easier, and have meetings in different countries so those with visa issues can still attend at times. But we cannot eliminate the conference without eliminating the community. Personal interactions matter.

On Aug 16, 2018 Aretha Franklin died. A famous singer. On Aug 18, 2018 Kofi Annan died. A famous politician. On Aug 25, 2018 John McCain died. A famous politician. On Aug 26, 2018 Neil Simon died. A famous playwright. For 12 famous people who died between Aug 5 and Aug 26 see here (be careful: there are a few more on the list who died in August but in a different year). One could group those 12 into four sets of three and claim the rule of threes: that celebrities die in threes. There was an episode of 30 Rock where two celebrities had died and Tracy Jordan (a celeb) tried to kill a third one so he would not be a victim of the rule of threes. (See the short video clip here.)

How would one actually test the rule of threes?
We would need to define the rule carefully. I have below a well defined rule, with parameters you can set, and from that you could do data collection (this could be a project for a student, though you would surely prove there is no such rule).

1. Decide on a definite time frame: T days. The deaths only count if they are within T days.

2. Define celebrity. This may be the hardest part. I'll start with: they must have a Wikipedia page of length at least W, and they must have over H hits on Google. This may be hard to discern for people with common names or alternative spellings. You might also look into Friends on Facebook and Followers on Twitter. A big problem with all of this is that if you want to do a study of old data, before there was Google, Wikipedia, Facebook, and Twitter, you will need other criteria (ask your grandparents what it was like in those days).

3. Decide whether or not to have a cutoff on age. You may decide that when Katharine Hepburn, Bob Hope, and Strom Thurmond died less than a month apart, at the ages of 96, 100, and 100, this doesn't qualify. Hence you may say that the celebrities who die must be younger than Y years.

I doubt anybody will ever do the experiment. Those who believe it's true (are there really such people?) have no interest in defining it carefully or testing it. And people who don't believe would not bother, partially because so few people believe it that it's not worth debunking. But I wonder if a well thought out experiment might reveal something interesting. Also contrast the data to all deaths and see if there is a difference. For example, you might find that more celebs die in August than would be expected based on when all people die. Or that celebs live longer. Or shorter. Actually, with enough p-hacking I am sure you could find something. But would you find something meaningful?

Astrology is in the same category: people who believe (there ARE such people!) could do well defined experiments but have no interest in doing so.
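Once T is fixed in step 1, the window test itself is mechanical. The sketch below (hypothetical, in Python) counts runs of three consecutive deaths within T days, leaving the celebrity filter (W, H, Y) to whoever collects the data:

```python
from datetime import date

def count_triples(deaths, T):
    """Count runs of three consecutive deaths (in date order) that all
    fall within a window of T days -- one concrete reading of the
    'rule of threes' for a chosen time frame T."""
    ds = sorted(deaths)
    return sum(1 for i in range(len(ds) - 2)
               if (ds[i + 2] - ds[i]).days <= T)

# The August 2018 deaths mentioned above
aug = [date(2018, 8, 16), date(2018, 8, 18),
       date(2018, 8, 25), date(2018, 8, 26)]
print(count_triples(aug, T=14))  # -> 2
print(count_triples(aug, T=5))   # -> 0
```

With four deaths in eleven days, almost any generous T makes the "rule" come out true, which is exactly the point: the claim is vacuous until T is pinned down in advance.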
I doubt they would find anything of interest if they did. Here there are enough people who believe in it to make it worth debunking, but would a well designed science experiment convince them that astrology does not have predictive powers? Has such an experiment been done?

I once DID do such an experiment to disprove a wild theory. In 2003 a cab driver told me (1) there is no gold in Fort Knox, and Ian Fleming was trying to tell us this in the book Goldfinger, (2) Reagan was shot since he was going to tell, (3) a small cohort of billionaires runs the world. I challenged him: if that is the case, then how come in 1992 Bill Clinton beat George Bush, who was surely the billionaires' pick? He responded that Bill Clinton was a Rhodes Scholar and hence he is in-the-club. I challenged him: OKAY, predict who will get the Democratic Nomination in 2004. This was a well defined experiment (though only one data point). He would give me a prediction and I could test it. He smiled and said Wesley Clark was a Rhodes Scholar. Oh well.
Specifying A Set - Bitwise MNM

Some people will have you believe that definitions don't really matter and we can call anything anything, but the truth is that proper definitions are empowering. If so, can we clarify what a set is and what is meant by "specifying" it?

According to Georg Cantor, a set is a collection of definite, distinguishable and distinct objects (no duplicates). These objects are often called "elements". More informally, they can be referred to as "things". So a set is a collection, or a boundary if you will, and our task is often to decide who is in and who is out. I.e., which objects belong to the set and which don't. This decision making is also known as "specifying" or "defining" a set. Another way of thinking of it is that we're specifying the common characteristic that all elements of a given set have in common.

Another important thing to note is that when we talk about specifying a set, we're really talking about a mathematical notation that enables us to identify characteristics of the relevant elements. There are three main methods of mathematical notation that will allow us to do just that:

1. Enumeration – a simple list of values. Just the list and nothing but the list.
2. A calculational formula – anything that we can calculate. E.g. x − 1, x/y, etc.
3. A predicate – a statement that has a truth value. E.g. x < 0

These mathematical elements are combined to give us set definition notations as follows. The symbol ∈ is used to indicate that an element belongs to or is a member of a set. If x is an element of a set A, we write x ∈ A. The symbol ∉ is the opposite of ∈. It is used to indicate an element that doesn't belong to or isn't a member of a set. If z is not an element of set A, we write z ∉ A.

If A = {1, 3, 5}, then 1 ∈ A and 2 ∉ A.

There are four ways to define sets:

1. an enumerative method: simply a list of all the members of the set, usually within curly brackets.
Sometimes this is called the roster method. Example: {1, 3, 4, 6, 8}

• just like the first method, but uses a shorthand device "…". {1, 2, 3, …, 66} represents whole numbers from 1 to 66. {2, 4, 6, 8, …} represents the set of all even positive whole numbers.

2. describing the commonality of elements. This is called the set-builder notation (or the predicative method). We describe a certain property which all the elements x of a set C have in common, so that we can know whether a particular thing belongs to that set. In this notation we introduce the concept of a "left side" and a "right side". The two sides are separated by a colon (:) or a vertical bar (|):

The left side should indicate a variable and its domain (i.e., the set that it's chosen from). For example – let N be a set of integers; we can define a new set S as:

S := {x ∈ N | x > 0}

This is read as: "S is a set of elements x from the set of integers where x is greater than 0." In other words – the positive integers. Sometimes we'll see just a variable on the left side, but really a domain is always implied.

The right side is a predicate. We can also use it to specify a domain for the variable on the left side. Our previous example would look like this:

S := {x | x ∈ N ∧ x > 0}

More examples:

C := {x | x is an integer, x > −3}
C := {x : x is an integer, x > −3}

This is read as: "C is a set of elements x where x is an integer greater than −3."

D := {x | x is a street in a city}

This is read as: "D is a set of elements x where x is a street in a city."

3. substitutive method:

A := {x − 1 | x ∈ N}

This should be read as follows: "the set A consists of all elements that can be calculated by the formula x − 1, where x is an element of N". In general, the left side contains an expression. The right side after the vertical bar tells you from which set you must choose the values for x to substitute in that expression.

4. a hybrid method is a combination of the set-builder notation and substitution:

Example: A := {x + 1 | x ∈ S ∧ P(x)}

It can be thought of as a shorthand notation combining two steps. In this case it would be:

S1 := {x ∈ S | P(x)} – predicative
A := {x + 1 | x ∈ S1} – substitutive

A simplified hybrid method can look like a predicative method. Consider, for example:

A := {x | x ∈ S ∧ P(x)}

Looks exactly like one of the predicative examples, doesn't it?
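These four notations map almost one-for-one onto set comprehensions in a programming language. A small Python sketch, with N cut down to a finite range so the sets can be materialized:

```python
# Enumerative (roster): list the members outright
A_enum = {1, 3, 4, 6, 8}

# Predicative (set-builder): S := {x ∈ N | x > 0}, with N finite here
N = set(range(-5, 6))
S_pred = {x for x in N if x > 0}

# Substitutive: A := {x - 1 | x ∈ N}
A_subst = {x - 1 for x in N}

# Hybrid: A := {x + 1 | x ∈ S ∧ P(x)}, with P(x) meaning "x is even"
A_hybrid = {x + 1 for x in N if x % 2 == 0}

print(S_pred)    # -> {1, 2, 3, 4, 5}
print(A_hybrid)  # -> {1, 3, 5, -3, -1} (order of display may vary)
```

Note how the hybrid comprehension really is the two-step combination described above: a filter (the predicate) followed by a substitution (the expression before `for`).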
J-PAKE: Password-Authenticated Key Exchange by Juggling

This is an older version of an Internet-Draft that was ultimately published as RFC 8236 (Informational, Independent Submission stream; author Feng Hao; document shepherd Eliot Lear).

Network Working Group                                       F. Hao, Ed.
Internet-Draft                                Newcastle University (UK)
Intended status: Informational                           April 26, 2017
Expires: October 28, 2017

        J-PAKE: Password Authenticated Key Exchange by Juggling

Abstract

   This document specifies a Password Authenticated Key Exchange by
   Juggling (J-PAKE) protocol.  This protocol allows the establishment
   of a secure end-to-end communication channel between two remote
   parties over an insecure network solely based on a shared password,
   without requiring a Public Key Infrastructure (PKI) or any trusted
   third party.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.
   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on October 28, 2017.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Requirements Language
     1.2.  Notations
   2.  J-PAKE over Finite Field
     2.1.  Protocol Setup
     2.2.  Two-Round Key Exchange
     2.3.  Computational Cost
   3.  J-PAKE over Elliptic Curve
     3.1.  Protocol Setup
     3.2.  Two-Round Key Exchange
     3.3.  Computational Cost
   4.  Three-Pass Variant
   5.  Key Confirmation
   6.  Security Considerations
   7.  IANA Considerations
   8.  Acknowledgements
   9.  References
     9.1.  Normative References
     9.2.  Informative References
     9.3.  URIs
   Author's Address

1.  Introduction

   Password-Authenticated Key Exchange (PAKE) is a technique that aims
   to establish secure communication between two remote parties solely
   based on their shared password, without relying on a Public Key
   Infrastructure or any trusted third party [BM92].  The first PAKE
   protocol, called EKE, was proposed by Steven Bellovin and Michael
   Merritt in 1992 [BM92].  Other well-known PAKE protocols include
   SPEKE (by David Jablon in 1996) [Jab96] and SRP (by Tom Wu in 1998)
   [Wu98].  SRP has been revised several times to address reported
   security and efficiency issues.  In particular, version 6 of SRP,
   commonly known as SRP-6, is specified in [RFC5054].

   This document specifies a PAKE protocol called Password
   Authenticated Key Exchange by Juggling (J-PAKE), which was designed
   by Feng Hao and Peter Ryan in 2008 [HR08].

   There are a few factors that may be considered in favor of J-PAKE.
   First, J-PAKE has security proofs, while equivalent proofs are
   lacking in EKE, SPEKE and SRP-6.  Second, J-PAKE follows a
   completely different design approach from all other PAKE protocols,
   and is built upon a well-established Zero Knowledge Proof (ZKP)
   primitive: the Schnorr NIZK proof [SchnorrNIZK].  Third, J-PAKE
   adopts novel engineering techniques to optimize the use of ZKPs so
   that overall the protocol is sufficiently efficient for practical
   use.  Fourth, J-PAKE is designed to work generically in both the
   finite field and elliptic curve settings (i.e., DSA- and ECDSA-like
   groups, respectively).
   Unlike SPEKE, it does not require any extra primitive to hash
   passwords onto a designated elliptic curve.  Unlike SPAKE2 [AP05]
   and SESPAKE [SOAA15], it does not require a trusted setup (i.e., the
   so-called common reference model) to define a pair of generators
   whose discrete logarithm must be unknown.  Finally, J-PAKE has been
   used in real-world applications at a relatively large scale, e.g.,
   Firefox sync [1], Pale Moon sync [2] and Google Nest products
   [ABM15]; it has been included into widely distributed open source
   libraries such as OpenSSL [3], Network Security Services (NSS) [4]
   and the Bouncy Castle [5]; since 2015, it has been included into
   Thread [6] as a standard key agreement mechanism for IoT (Internet
   of Things) applications, and also included into ISO/IEC 11770-4 [7].

1.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

1.2.  Notations

   The following notations are used in this document:

   o  Alice: the assumed identity of the prover in the protocol
   o  Bob: the assumed identity of the verifier in the protocol
   o  s: a low-entropy secret shared between Alice and Bob
   o  a | b: a divides b
   o  a || b: concatenation of a and b
   o  [a, b]: the interval of integers between and including a and b
   o  H: a secure cryptographic hash function
   o  p: a large prime
   o  q: a large prime divisor of p-1, i.e., q | p-1
   o  Zp*: a multiplicative group of integers modulo p
   o  Gq: a subgroup of Zp* with prime order q
   o  g: a generator of Gq
   o  g^x: g raised to the power of x
   o  a mod b: a modulo b
   o  Fp: a finite field of p elements where p is a prime
   o  E(Fp): an elliptic curve defined over Fp
   o  G: a generator of the subgroup over E(Fp) with prime order n
   o  n: the order of G
   o  h: the cofactor of the subgroup generated by G, which is equal to
      the order of the elliptic curve divided by n
   o  P x [b]: multiplication of a point P with a scalar b over E(Fp)
   o  KDF(a): Key Derivation Function with input a
   o  MAC(MacKey, MacData): MAC function with MacKey as the key and
      MacData as the input data

2.  J-PAKE over Finite Field

2.1.  Protocol Setup

   When implemented over a finite field, J-PAKE may use the same group
   parameters as DSA [FIPS186-4].  Let p and q be two large primes such
   that q | p-1.  Let Gq denote a subgroup of Zp* with prime order q.
   Let g be a generator for Gq.  Any non-identity element in Gq can be
   a generator.  The two communicating parties, Alice and Bob, both
   agree on (p, q, g), which can be hard-wired in the software code.
   They can also use the method in NIST FIPS 186-4, Appendix A
   [FIPS186-4] to generate (p, q, g).  Here DSA group parameters are
   used only as an example.  Other multiplicative groups suitable for
   cryptography can also be used for the implementation, e.g., groups
   defined in [RFC4419].
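The setup constraints above (q | p-1, g a non-identity element generating the order-q subgroup) can be sanity-checked mechanically. The toy values below are assumptions for illustration only, far too small for real use:

```python
# Toy DSA-style parameters: q must divide p - 1, and g must generate a
# subgroup of Zp* of order exactly q.
p, q, g = 23, 11, 2

assert (p - 1) % q == 0       # q | p - 1
assert pow(g, q, p) == 1      # g^q = 1 mod p, so ord(g) divides q
assert g % p != 1             # g is not the identity
# Since q is prime, ord(g) is 1 or q; g != 1 rules out 1, so ord(g) = q.
```

The same three checks apply unchanged to parameters generated per FIPS 186-4, Appendix A; only the sizes change.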
   A group setting that provides 128-bit security or above is
   recommended.  The security proof of J-PAKE depends on the Decisional
   Diffie-Hellman (DDH) problem being intractable in the considered
   group.

   Let s be a secret value derived from a low-entropy password shared
   between Alice and Bob.  The value of s is required to fall within
   the range of [1, q-1].  (Note that s must not be 0 for any non-empty
   secret.)  This range is defined as a necessary condition in [HR08]
   for proving the "on-line dictionary attack resistance", since s,
   s+q, s+2q, ..., are all considered equivalent values as far as the
   protocol specification is concerned.  In a practical implementation,
   one may obtain s by taking a cryptographic hash of the password and
   wrapping the result with respect to modulo q.  Alternatively, one
   may simply treat the password as an octet string and convert the
   string to an integer modulo q by following the method defined in
   Section 2.3.8 of [SEC1].  In either case, one must ensure s is not
   equal to 0 modulo q.

2.2.  Two-Round Key Exchange

   Round 1: Alice selects an ephemeral private key x1 uniformly at
   random from [0, q-1] and another ephemeral private key x2 uniformly
   at random from [1, q-1].  Similarly, Bob selects an ephemeral
   private key x3 uniformly at random from [0, q-1] and another
   ephemeral private key x4 uniformly at random from [1, q-1].

   o  Alice -> Bob: g1 = g^x1 mod p, g2 = g^x2 mod p and ZKPs for x1
      and x2
   o  Bob -> Alice: g3 = g^x3 mod p, g4 = g^x4 mod p and ZKPs for x3
      and x4

   In this round, the sender must send zero-knowledge proofs to
   demonstrate the knowledge of the ephemeral private keys.  A suitable
   technique is to use the Schnorr NIZK proof [SchnorrNIZK].  As an
   example, suppose one wishes to prove the knowledge of the exponent
   for X = g^x mod p.
   The generated Schnorr NIZK proof will contain:

      {UserID, V = g^v mod p, r = v - x * c mod q}

   where UserID is the unique identifier for the prover, v is a number
   chosen uniformly at random from [0, q-1] and c = H(g || V || X ||
   UserID).  The "uniqueness" of UserID is defined from the user's
   perspective -- for example, if Alice communicates with several
   parties, she shall associate a unique identity with each party.
   Upon receiving a Schnorr NIZK proof, Alice shall check the prover's
   UserID is a valid identity and is different from her own identity.
   During the key exchange process using J-PAKE, each party shall
   ensure that the other party has been consistently using the same
   identity throughout the protocol execution.  Details about the
   Schnorr NIZK proof, including the generation and the verification
   procedures, can be found in [SchnorrNIZK].

   When this round finishes, Alice verifies the received ZKPs as
   specified in [SchnorrNIZK] and also checks that g4 != 1 mod p.
   Similarly, Bob verifies the received ZKPs and also checks that g2 !=
Nonetheless, for absolute guarantee, the receiving party shall explicitly check if these inequalities hold, and abort the session in case such a check fails. When the second round finishes, Alice and Bob verify the received ZKPs. If the verification fails, the session is aborted. Otherwise, the two parties compute the common key material as follows: o Alice computes Ka = (B/g4^(x2*s))^x2 mod p o Bob computes Kb = (A/g2^(x4*s))^x4 mod p Here Ka = Kb = g^((x1+x3)*x2*x4*s) mod p. Let K denote the same key material held by both parties. Using K as input, Alice and Bob then apply a Key Derivation Function (KDF) to derive a common session key k. If the subsequent secure communication uses a symmetric cipher in an authenticated mode (say AES-GCM), then one key is sufficient, i.e., k = KDF(K). Otherwise, the session key should comprise an encryption key (for confidentiality) and a MAC key (for integrity), i.e., k = k_enc || k_mac, where k_enc = KDF(K || "JPAKE_ENC") and k_mac = KDF(K || "JPAKE_MAC"). The exact choice of the KDF is left to specific applications to define. 2.3. Computational Cost The computational cost is estimated based on counting the number of modular exponentiations since they are the predominant cost factors. Note that it takes one exponentiation to generate a Schnorr NIZK proof and two to verify it [SchnorrNIZK]. For Alice, she needs to perform 8 exponentiations in the first round, 4 in the second round, and 2 in the final computation of the session key. Hence, that is 14 Hao Expires October 28, 2017 [Page 6] Internet-Draft J-PAKE April 2017 modular exponentiations in total. Based on the symmetry, the computational cost for Bob is exactly the same. 3. J-PAKE over Elliptic Curve 3.1. Protocol Setup The J-PAKE protocol works basically the same in the elliptic curve (EC) setting, except that the underlying multiplicative group over a finite field is replaced by an additive group over an elliptic curve. 
   Nonetheless, the EC version of J-PAKE is specified here for
   completeness.

   When implemented over an elliptic curve, J-PAKE may use the same EC
   parameters as ECDSA [FIPS186-4].  The FIPS 186-4 standard
   [FIPS186-4] defines three types of curves suitable for ECDSA:
   pseudo-random curves over prime fields, pseudo-random curves over
   binary fields, and special curves over binary fields called Koblitz
   curves or anomalous binary curves.  All these curves that are
   suitable for ECDSA can also be used to implement J-PAKE.  However,
   for the illustration purpose, only curves over prime fields are
   described in this document.  Typically, such curves include NIST
   P-256, P-384 and P-521.  When choosing a curve, a level of 128-bit
   security or above is recommended.

   Let E(Fp) be an elliptic curve defined over a finite field Fp where
   p is a large prime.  Let G be a generator for the subgroup over
   E(Fp) of prime order n.  Here the NIST curves are used only as an
   example.  Other secure curves such as Curve25519 are also suitable
   for the implementation.  The security proof of J-PAKE relies on the
   assumption that the DDH problem is intractable in the considered
   group.

   As before, let s denote the shared secret between Alice and Bob.
   The value of s falls within [1, n-1].  In particular, note that s
   must not be equal to 0 mod n.

3.2.  Two-Round Key Exchange

   Round 1: Alice selects ephemeral private keys x1 and x2 uniformly at
   random from [1, n-1].  Similarly, Bob selects ephemeral private keys
   x3 and x4 uniformly at random from [1, n-1].

   o  Alice -> Bob: G1 = G x [x1], G2 = G x [x2] and ZKPs for x1 and x2
   o  Bob -> Alice: G3 = G x [x3], G4 = G x [x4] and ZKPs for x3 and x4

   When this round finishes, Alice and Bob verify the received ZKPs as
   specified in [SchnorrNIZK].
   As an example, to prove the knowledge of the discrete logarithm of
   X = G x [x] with respect to the base point G, the ZKP contains:

      {UserID, V = G x [v], r = v - x * c mod n}

   where UserID is the unique identifier for the prover, v is a number
   chosen uniformly at random from [1, n-1] and c = H(G || V || X ||
   UserID).  The verifier shall check the prover's UserID is a valid
   identity and is different from its own identity.  If the
   verification of the ZKP fails, the session is aborted.

   Round 2:

   o  Alice -> Bob: A = (G1 + G3 + G4) x [x2*s] and a ZKP for x2*s
   o  Bob -> Alice: B = (G1 + G2 + G3) x [x4*s] and a ZKP for x4*s

   When the second round finishes, Alice and Bob verify the received
   ZKPs.  The ZKPs are computed in the same way as in the previous
   round except that the generator is different.  For Alice, the new
   generator is G1 + G3 + G4; for Bob, it is G1 + G2 + G3.  Alice and
   Bob shall check that these new generators are not points at
   infinity.  If any of these checks fails, the session is aborted.
   Otherwise, the two parties compute the common key material as
   follows:

   o  Alice computes Ka = (B - (G4 x [x2*s])) x [x2]
   o  Bob computes Kb = (A - (G2 x [x4*s])) x [x4]

   Here Ka = Kb = G x [(x1+x3)*(x2*x4*s)].  Let K denote the same key
   material held by both parties.  Using K as input, Alice and Bob then
   apply a Key Derivation Function (KDF) to derive a common session key
   k.

3.3.  Computational Cost

   In the EC setting, the computational cost of J-PAKE is estimated
   based on counting the number of scalar multiplications over the
   elliptic curve.  Note that it takes one multiplication to generate a
   Schnorr NIZK proof and one to verify it [SchnorrNIZK].  For Alice,
   she has to perform 6 multiplications in the first round, 3 in the
   second round, and 2 in the final computation of the session key.
   Hence, that is 11 multiplications in total.  Based on the symmetry,
   the computational cost for Bob is exactly the same.
4.  Three-Pass Variant

   The two-round J-PAKE protocol is completely symmetric, which
   significantly simplifies the security analysis.  In practice, one
   party normally initiates the communication and the other party
   responds.  In that case, the protocol will be completed in three
   passes instead of two rounds.  The two-round J-PAKE protocol can be
   trivially changed to three passes without losing security.  Take the
   finite field setting as an example and assume Alice initiates the
   key exchange.  The three-pass variant works as follows:

   1.  Alice -> Bob: g1 = g^x1 mod p, g2 = g^x2 mod p, ZKPs for x1 and
       x2.
   2.  Bob -> Alice: g3 = g^x3 mod p, g4 = g^x4 mod p,
       B = (g1*g2*g3)^(x4*s) mod p, ZKPs for x3, x4, and x4*s.
   3.  Alice -> Bob: A = (g1*g3*g4)^(x2*s) mod p and a ZKP for x2*s.

   Both parties compute the session keys in exactly the same way as
   before.

5.  Key Confirmation

   The two-round J-PAKE protocol (or the three-pass variant) provides
   cryptographic guarantee that only the authenticated party who used
   the same password at the other end is able to compute the same
   session key.  So far the authentication is only implicit.  The key
   confirmation is also implicit [Stinson06].  The two parties may use
   the derived key straight away to start secure communication by
   encrypting messages in an authenticated mode.  Only the party with
   the same derived session key will be able to decrypt and read those
   messages.
This allows a practical implementation of J-PAKE to effectively detect online dictionary attacks (if any), and stop them accordingly by setting a threshold for the consecutively failed connection attempts. To achieve explicit key confirmation, there are several methods available. They are generically applicable to all key exchange protocols, not just J-PAKE. In general, it is recommended to use a different key from the session key for key confirmation, say using k' = KDF(K || "JPAKE_KC"). The advantage of using a different key for key confirmation is that the session key remains indistinguishable from random after the key confirmation process (although this Hao Expires October 28, 2017 [Page 9] Internet-Draft J-PAKE April 2017 perceived advantage is actually subtle and only theoretical). Two explicit key confirmation methods are presented here. The first method is based on the one used in the SPEKE protocol [Jab96]. Suppose Alice initiates the key confirmation. Alice sends to Bob H(H(k')), which Bob will verify. If the verification is successful, Bob sends back to Alice H(k'), which Alice will verify. This key confirmation procedure needs to be completed in two rounds, as shown below. 1. Alice -> Bob: H(H(k')) 2. Bob -> Alice: H(k') The above procedure requires two rounds instead of one, because the second message depends on the first. If both parties attempt to send the first message at the same time without an agreed order, they cannot tell if the message that they receive is a genuine challenge or a replayed message, and consequently may enter a deadlock. The second method is based on the unilateral key confirmation scheme specified in NIST SP 800-56A Revision 1 [BJS07]. Alice and Bob send to each other a MAC tag, which they will verify accordingly. This key confirmation procedure can be completed in one round. In the finite field setting it works as follows. 
o  Alice -> Bob: MacTagAlice = MAC(k', "KC_1_U" || Alice || Bob || g1 || g2 || g3 || g4)

o  Bob -> Alice: MacTagBob = MAC(k', "KC_1_U" || Bob || Alice || g3 || g4 || g1 || g2)

In the EC setting, the key confirmation works basically the same.

o  Alice -> Bob: MacTagAlice = MAC(k', "KC_1_U" || Alice || Bob || G1 || G2 || G3 || G4)

o  Bob -> Alice: MacTagBob = MAC(k', "KC_1_U" || Bob || Alice || G3 || G4 || G1 || G2)

The second method assumes an additional secure MAC function (e.g., one may use HMAC) and is slightly more complex than the first method. However, it can be completed within one round and it preserves the overall symmetry of the protocol implementation. For this reason, the second method is recommended.

6. Security Considerations

A PAKE protocol is designed to provide two functions in one protocol execution. The first one is to provide zero-knowledge authentication of a password. It is called "zero knowledge" because at the end of the protocol, the two communicating parties will learn nothing more than one bit of information: whether the passwords supplied at the two ends are equal. Therefore, a PAKE protocol is naturally resistant against phishing attacks. The second function is to provide session key establishment if the two passwords are equal. The session key will be used to protect the confidentiality and integrity of the subsequent communication.

More concretely, a secure PAKE protocol shall satisfy the following security requirements [HR10]:

1. Off-line dictionary attack resistance: It does not leak any information that allows a passive/active attacker to perform off-line exhaustive search of the password.

2. Forward secrecy: It produces session keys that remain secure even when the password is later disclosed.

3. Known-key security: It prevents a disclosed session key from affecting the security of other sessions.

4. On-line dictionary attack resistance: It limits an active attacker to test only one password per protocol execution.

First, a PAKE protocol must resist off-line dictionary attacks. A password is inherently weak. Typically, it has only about 20-30 bits of entropy. This level of security is subject to exhaustive search. Therefore, in the PAKE protocol, the communication must not reveal any data that allows an attacker to learn the password through off-line exhaustive search.

Second, a PAKE protocol must provide forward secrecy. The key exchange is authenticated based on a shared password. However, there is no guarantee on the long-term secrecy of the password. A secure PAKE scheme shall protect past session keys even when the password is later disclosed. This property also implies that if an attacker knows the password but only passively observes the key exchange, he cannot learn the session key.

Third, a PAKE protocol must provide known-key security. A session key lasts throughout the session. An exposed session key must not cause any global impact on the system, affecting the security of other sessions.

Finally, a PAKE protocol must resist on-line dictionary attacks. If the attacker is directly engaging in the key exchange, there is no way to prevent such an attacker trying a random guess of the password. However, a secure PAKE scheme should mitigate the effect of the on-line attack to the minimum. In the best case, the attacker can only guess exactly one password per impersonation attempt. Consecutively failed attempts can be easily detected and the subsequent attempts shall be thwarted accordingly. It is recommended that the false authentication counter should be handled in such a way that any error (which causes the session to fail during the key exchange or key confirmation) would lead to incrementing the false authentication counter.
It has been proven in [HR10] that J-PAKE satisfies all of the four requirements based on the assumptions that the Decisional Diffie-Hellman problem is intractable and the underlying Schnorr NIZK proof is secure. An independent study that proves security of J-PAKE in a model with algebraic adversaries and random oracles can be found in [ABM15]. By comparison, it has been known that EKE has the problem of leaking partial information about the password to a passive attacker, hence not satisfying the first requirement [Jas96]. For SPEKE and SRP-6, an attacker may be able to test more than one password in one on-line dictionary attack (see [Zha04] and [Hao10]), hence they do not satisfy the fourth requirement in the strict theoretical sense. Furthermore, SPEKE is found vulnerable to an impersonation attack and a key-malleability attack [HS14]. These two attacks affect the SPEKE protocol specified in Jablon's original 1996 paper [Jab96] as well as in the D26 draft of IEEE P1363.2 and the latest published ISO/IEC 11770-4:2006 standard. As a result, the specification of SPEKE in ISO/IEC 11770-4:2006 has been revised to address the identified problems.

7. IANA Considerations

This document has no actions for IANA.

8. Acknowledgements

The editor would like to thank Dylan Clarke, Siamak Shahandashti, Robert Cragie, Stanislav Smyshlyaev and Russ Housley for many useful comments. This work is supported by EPSRC First Grant (EP/J011541/1) and ERC Starting Grant (No. 306994).

9. References

9.1. Normative References

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997.

[RFC5054] Taylor, D., Wu, T., Mavrogiannopoulos, N., and T. Perrin, "Using the Secure Remote Password (SRP) Protocol for TLS Authentication", RFC 5054, DOI 10.17487/RFC5054, November 2007, <http://www.rfc-editor.org/info/rfc5054>.
[SEC1] "Standards for Efficient Cryptography. SEC 1: Elliptic Curve Cryptography", SECG SEC1-v2, May 2004.

[ABM15] Abdalla, M., Benhamouda, F., and P. MacKenzie, "Security of the J-PAKE Password-Authenticated Key Exchange Protocol", IEEE Symposium on Security and Privacy, May 2015.

[BM92] Bellovin, S. and M. Merritt, "Encrypted Key Exchange: Password-based Protocols Secure against Dictionary Attacks", IEEE Symposium on Security and Privacy, May 1992.

[HR08] Hao, F. and P. Ryan, "Password Authenticated Key Exchange by Juggling", 16th Workshop on Security Protocols (SPW'08), May 2008.

[HR10] Hao, F. and P. Ryan, "J-PAKE: Authenticated Key Exchange Without PKI", Springer Transactions on Computational Science XI, 2010.

[HS14] Hao, F. and S. Shahandashti, "The SPEKE Protocol Revisited", Security Standardisation Research, December 2014.

[Jab96] Jablon, D., "Strong Password-Only Authenticated Key Exchange", ACM Computer Communications Review, October 1996.

[Stinson06] Stinson, D., "Cryptography: Theory and Practice (3rd Edition)", CRC, 2006.

[Wu98] Wu, T., "The Secure Remote Password Protocol", Symposium on Network and Distributed System Security, March 1998.

Hao, F., "Schnorr NIZK proof: Non-interactive Zero Knowledge Proof for Discrete Logarithm", IETF Internet Draft-06 (work in progress), 2017.

9.2. Informative References

[RFC4419] Friedl, M., Provos, N., and W. Simpson, "Diffie-Hellman Group Exchange for the Secure Shell (SSH) Transport Layer Protocol", RFC 4419, DOI 10.17487/RFC4419, March 2006.

[BJS07] Barker, E., Johnson, D., and M. Smid, "Recommendation for Pair-Wise Key Establishment Schemes Using Discrete Logarithm Cryptography (Revised)", NIST Special Publication 800-56A, March 2007.

[Jas96] Jaspan, B., "Dual-Workfactor Encrypted Key Exchange: Efficiently Preventing Password Chaining and Dictionary Attacks", USENIX Symposium on Security, July 1996.
[Zha04] Zhang, M., "Analysis of the SPEKE Password-Authenticated Key Exchange Protocol", IEEE Communications Letters, January 2004.

[Hao10] Hao, F., "On Small Subgroup Non-Confinement Attacks", IEEE Conference on Computer and Information Technology, 2010.

[AP05] Abdalla, M. and D. Pointcheval, "Simple Password-Based Encrypted Key Exchange Protocols", Topics in Cryptology - CT-RSA, 2005.

"Federal Information Processing Standards Publication 186-4: Specifications for the Digital Signature Standard (DSS)", July 2013, <http://nvlpubs.nist.gov/nistpubs/FIPS/

[SOAA15] Smyshlyaev, S., Oshkin, I., Alekseev, E., and L. Ahmetzyanova, "On the Security of One Password Authenticated Key Exchange Protocol", 2015.

9.3. URIs

[1] https://wiki.mozilla.org/Services/Sync/SyncKey/J-PAKE

[2] https://www.palemoon.org/sync/

[3] http://boinc.berkeley.edu/android-boinc/libssl/crypto/jpake/

[4] https://dxr.mozilla.org/mozilla-

[5] https://www.bouncycastle.org/docs/docs1.5on/org/bouncycastle/cryp

[6] http://threadgroup.org/Portals/0/documents/whitepapers/

[7] https://www.iso.org/standard/67933.html

Author's Address

Feng Hao (editor)
Newcastle University (UK)
Claremont Tower, School of Computing Science, Newcastle University
Newcastle Upon Tyne
United Kingdom

Phone: +44 (0)191-208-6384
EMail: feng.hao@ncl.ac.uk
Introduction to Data Mining

Data mining refers to theories and techniques for finding useful patterns in massive amounts of data. Data mining has been used in high-impact applications including web analysis, fraud detection, recommendation systems, and cyber security. This course covers important algorithms and theories for data mining. Main topics include MapReduce, finding similar items, mining frequent patterns, link analysis, data stream mining, clustering, graphs, and mining big data.

Data Structure

This undergraduate-level course covers fundamental algorithms and data structures used in computer programming. Data structures are ways of organizing data within a computer's storage so that some desired operations may be performed on that data easily or efficiently. Algorithms are sequences of operations that, usually, take some input data and produce some desired output. Together, they form the foundation of computer programming. The topics to be covered include abstract data types, trees, hashing, sorting, graphs, string matching, and algorithm design techniques.

Reinforcement Learning

Reinforcement learning is a field of machine learning that aims to automatically learn how an agent should behave in order to maximize rewards. Reinforcement learning has been used in many AI applications such as AlphaGo and backgammon. In this course, we will cover important basic concepts of reinforcement learning such as Markov decision processes, planning, prediction, policy gradients, and exploration/exploitation.

Advanced Data Mining

Data mining has attracted much interest as an essential tool for big data analysis. In particular, designing and implementing advanced data mining algorithms and analysis platforms plays a crucial role in extracting actionable knowledge from big data. This course covers advanced data mining techniques, algorithms, and core platforms for big data analysis, including techniques to effectively analyze very large and high-speed data.

Advanced Deep Learning (Topics in Big Data Analytics)

Deep learning is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data. Deep learning is a driving force of the recent advances in AI. In this course, we study advanced techniques of deep learning to analyze large amounts of data. Topics include linear factor models, autoencoders, representation learning, structured probabilistic models for deep learning, Monte Carlo methods, the partition function, approximate inference, and deep generative models.

Optimization for Machine Learning (Topics in Artificial Intelligence)

Optimization is a crucial tool for many machine learning techniques. Formulating a problem in an optimization framework and solving it are core skills for researchers in the area of machine learning. This course covers important theories and algorithms for optimization in machine learning. Topics include convex sets, convex functions, convex optimization, duality, submodular optimization, and algorithms for optimization.

Past offerings:

M1522.001400: Introduction to Data Mining (Fall 2022)
M2177.003000: Advanced Data Mining (Fall 2022)
M3309.000200: Reinforcement Learning (Spring 2021)
M1522.001400: Introduction to Data Mining (Spring 2021)
M2177.003000: Advanced Data Mining (Fall 2020)
M1522.000900: Data Structure (Fall 2020)
M1522.001400: Introduction to Data Mining (Spring 2020)
M2177.003000: Advanced Data Mining (Fall 2019)
M1522.000900: Data Structure (Fall 2019)
M1522.001400: Introduction to Data Mining (Spring 2019)
M1522.001600: Reinforcement Learning - Topics in Big Data Analytics (Spring 2019)
M1522.001600: Advanced Deep Learning (Topics in Big Data Analytics) (Fall 2018)
M1522.000900: Data Structure (Fall 2018)
4190.773: Optimization for Machine Learning (Topics in Artificial Intelligence) (Spring 2018)
M1522.001400: Introduction to Data Mining (Spring 2018)
M1522.001600: Advanced Deep Learning (Topics in Big Data Analytics) (Fall 2017)
M1522.000900: Data Structure (Fall 2017)
M1522.001600: Large Scale Data Analysis Using Deep Learning (Spring 2017)
M1522.001400: Introduction to Data Mining (Spring 2017)
M1522.000900: Data Structure (Fall 2016)
10.1: Change and Differences

Researchers are often interested in change over time. Sometimes we want to see if change occurs naturally, and other times we are hoping for change in response to some manipulation. In each of these cases, we measure a single variable at different times, and what we are looking for is whether or not we get the same score at time 2 as we did at time 1. The absolute value of our measurements does not matter – all that matters is the change. Let’s look at an example:

Table \(\PageIndex{1}\): Raw and difference scores before and after training.

Before | After | Improvement
An important factor in this is that the participants received the same assessment at both time points. To calculate improvement or any other difference score, we must measure only a single variable. When looking at change scores like the ones in Table \(\PageIndex{1}\), we calculate our difference scores by taking the time 2 score and subtracting the time 1 score. That is: \[\mathrm{X}_{\mathrm{d}}=\mathrm{X}_{\mathrm{T} 2}-\mathrm{X}_{\mathrm{T} 1} \] Where \(\mathrm{X}_{\mathrm{d}}\) is the difference score, \(\mathrm{X}_{\mathrm{T} 1}\) is the score on the variable at time 1, and \(\mathrm{X}_{\mathrm{T} 2}\) is the score on the variable at time 2. The difference score, \(\mathrm{X}_{\mathrm{d}}\), will be the data we use to test for improvement or change. We subtract time 2 minus time 1 for ease of interpretation; if scores get better, then the difference score will be positive. Similarly, if we’re measuring something like reaction time or depression symptoms that we are trying to reduce, then better outcomes (lower scores) will yield negative difference scores. We can also test to see if people who are matched or paired in some way agree on a specific topic. For example, we can see if a parent and a child agree on the quality of home life, or we can see if two romantic partners agree on how serious and committed their relationship is. In these situations, we also subtract one score from the other to get a difference score. This time, however, it doesn’t matter which score we subtract from the other because what we are concerned with is the agreement. In both of these types of data, what we have are multiple scores on a single variable. That is, a single observation or data point is comprised of two measurements that are put together into one difference score. This is what makes the analysis of change unique – our ability to link these measurements in a meaningful way. 
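The difference-score calculation can be sketched in a few lines of Python. The numbers below are made-up illustrative scores chosen to echo the pattern discussed above, not data from the textbook's table.

```python
# Hypothetical before/after quiz scores for five employees.
before = [1, 4, 5, 6, 8]
after  = [4, 7, 5, 9, 11]

# X_d = X_T2 - X_T1: positive difference scores indicate improvement.
diff = [t2 - t1 for t1, t2 in zip(before, after)]
print(diff)  # [3, 3, 0, 3, 3]
```

Note how the lowest and highest pre-training scorers improve by the same amount, and one difference score of 0 means no change, regardless of where each employee started.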
This type of analysis would not work if we had two separate samples of people that weren’t related at the individual level, such as samples of people from different states that we gathered independently. Such datasets and analyses are the subject of the following chapter.

A rose by any other name…

It is important to point out that this form of t-test has been called many different things by many different people over the years: “matched pairs”, “paired samples”, “repeated measures”, “dependent measures”, “dependent samples”, and many others. What all of these names have in common is that they describe the analysis of two scores that are related in a systematic way within people or within pairs, which is what each of the datasets usable in this analysis have in common. As such, all of these names are equally appropriate, and the choice of which one to use comes down to preference. In this text, we will refer to paired samples, though the appearance of any of the other names throughout this chapter should not be taken to refer to a different analysis: they are all the same thing.

Now that we have an understanding of what difference scores are and know how to calculate them, we can use them to test hypotheses. As we will see, this works exactly the same way as testing hypotheses about one sample mean with a t-statistic. The only difference is in the format of the null and alternative hypotheses.
See also: machine learning terms and Bias (ethics/fairness)

In a neural network, bias is an additional input value that is added to the weighted sum of the input values in each neuron, before the activation function is applied. It provides the network with the ability to adjust the output of the neuron independent of the input. So for a machine learning model, bias is a parameter, symbolized by either b or w[0].

More formally, in a neural network, each neuron receives inputs from the previous layer, which are multiplied by a weight and summed up. This weighted sum is then passed through an activation function to produce the output of the neuron. The bias term provides an additional constant input to the neuron that can shift the output of the activation function in a certain direction.

The bias term is learned during the training process, along with the weights. The bias value is adjusted to minimize the error between the predicted output and the actual output. The presence of the bias term allows the neural network to model more complex relationships between the input and output.
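A minimal sketch of a single neuron with a bias term, in plain Python. The sigmoid activation and the particular weights are chosen arbitrarily for illustration:

```python
import math

def neuron(x, w, b):
    """Weighted sum of inputs plus bias, passed through a sigmoid activation."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# With all-zero inputs the weighted sum is 0, so the output is determined
# entirely by the bias: sigmoid(0) = 0.5, and sigmoid(b) otherwise.
print(neuron([0.0, 0.0], [0.3, -0.2], 0.0))  # 0.5
print(neuron([0.0, 0.0], [0.3, -0.2], 2.0) > 0.5)  # True
```

This shows the point made above: the bias shifts the neuron's output independently of the inputs, which is why a network without bias terms could only model functions that pass through a fixed point.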
A non-scary introduction to Elliptic Curves

• Question: What is an elliptic curve?
  Answer: An elliptic curve is a relatively simple mathematical graph of the form y^2 = x^3 + ax + b.

• Question: How do the elliptic curves used in cryptography differ from continuous curves?
  Answer: The elliptic curves used in cryptography are different to continuous curves in two key respects. 1. They are defined over a finite field, meaning that the curve is only defined on discrete (integer) values on the x-axis. 2. They are defined modulo a prime number, meaning that the y-values “wrap around” some maximum (prime) value.

• Question: What is the security of elliptic curve cryptography based on?
  Answer: The security of elliptic curve cryptography is based on the fact that it is very difficult to find the discrete logarithm of a point on the curve. This is a fancy way of saying that, given a base point P and a point Q = kP, it is very difficult to recover the secret scalar k. This is known as the “elliptic curve discrete logarithm problem” (ECDLP).

• Question: Why is this problem so hard?
  Answer: Because the values “wrap around” modulo some large prime, and because the fields (ranges of values) are very, very large, the distribution of values starts to look very random. So although it is very easy to compute Q = kP given k, by repeatedly applying the curve's point addition, it is difficult to do the reverse. Furthermore, because recovering a secret key from a public key by brute force would mean testing many, many trillions of candidate values, it becomes infeasible to find the solution in a reasonable amount of time.
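The “wrap-around” behaviour over a finite field can be seen by enumerating the points of a toy curve. The parameters here (p = 17, the curve y^2 = x^3 + 7) are deliberately tiny and insecure, chosen only for illustration:

```python
# All affine points of y^2 = x^3 + 7 over the integers mod 17.
p, a, b = 17, 0, 7

points = [(x, y)
          for x in range(p)
          for y in range(p)
          if (y * y - (x ** 3 + a * x + b)) % p == 0]

print(len(points))  # 17 affine points (the full group also includes a point at infinity)
```

Note how most x-coordinates contribute a symmetric pair of solutions (y and p - y), which is the discrete analogue of a continuous curve's mirror symmetry about the x-axis, and how other x-coordinates have no solution at all.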
{"url":"https://tlu.tarilabs.com/cryptography-101/module1-elliptic-curves","timestamp":"2024-11-14T14:09:43Z","content_type":"text/html","content_length":"130037","record_id":"<urn:uuid:ddba5fc5-4de6-4a6b-afc2-9636a7d0b393>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00074.warc.gz"}
International Journal of Pure and Applied Mathematics Archives - Math Research of Victor Porton

In the draft of my book there was an error. I’ve corrected it today.

Wrong: $\forall a, b \in \mathfrak{A}: ( \mathrm{atoms}\, a \sqsubset \mathrm{atoms}\, b \Rightarrow a \subset b)$.

Right: $\forall a, b \in \mathfrak{A}: ( a \sqsubset b \Rightarrow \mathrm{atoms}\, a \subset \mathrm{atoms}\, b)$.

There is the same error in my […]

My first math article (titled “Filters on Posets and Generalizations”) was recently published in a peer-reviewed, open-access journal. Why did I publish my first research article only at the age of 31? See my short autobiography.

I previously submitted my article about filters on posets to the Armenian Journal of Mathematics. I waited for a review of my article for about 17 months and they hadn’t replied. So I withdrew my submission and am now submitting to another journal, International Journal of Pure and Applied Mathematics.
{"url":"https://math.portonvictor.org/tag/international-journal-of-pure-and-applied-mathematics/","timestamp":"2024-11-10T14:46:11Z","content_type":"text/html","content_length":"92221","record_id":"<urn:uuid:abb200b4-3196-4b9e-80ac-c7d4b8add38e>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00868.warc.gz"}
Point-Based Value Iteration

We keep track of a bunch of alpha vectors and belief samples (which we get from point selection):

$$\Gamma = \{\alpha_{1}, \dots, \alpha_{m}\}$$

$$B = \{b_1, \dots, b_{m}\}$$

To preserve the lower-boundedness of these alpha vectors, one should seed the alpha vectors via something like blind lower bound.

We can estimate our utility function at any belief by looking in the set for the most optimal alpha vector:

$$U^{\Gamma}(b) = \max_{\alpha \in \Gamma} \alpha^{\top}b$$

We now define a function named backup (see PBVI Backup), and call it on all of our beliefs to generate a new set of alpha vectors:

$$\Gamma^{t+1} = \{backup(\Gamma, b) \mid b \in B\}$$

$$\alpha \leftarrow backup(\Gamma, b)$$

therefore we call backup on each \(b\).

PBVI Backup

The backup procedure: given \(\Gamma\) and \(b\), we want to mint a single new alpha vector by selecting the highest-valued one from a set of candidate alpha vectors, one for each action:

$$\alpha = \arg\max_{\alpha_{a}} \alpha_{a}^{\top} b$$

Now, we define each \(\alpha_{a}\) as:

$$\alpha_{a}(s) = R(s,a) + \gamma \sum_{s',o} O(o \mid a,s') T(s' \mid s,a) \alpha_{a,o}(s')$$

where we obtain each \(\alpha_{a,o}\) as the vector in \(\Gamma\) which currently provides the highest value estimate at the updated belief, computed for every action-observation pair \(a,o\):

$$\alpha_{a,o} = \arg\max_{\alpha \in \Gamma} \alpha^{\top} update(b,a,o)$$

Randomized PBVI

see Perseus
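The backup above can be sketched in Python (a minimal illustration on a made-up two-state, two-action, two-observation POMDP; the arrays R, T, O and the helper names are assumptions for the sketch, not from any benchmark or library):

```python
import numpy as np

S, A, Obs = 2, 2, 2
gamma = 0.9
R = np.array([[1.0, 0.0], [0.0, 1.0]])   # R[s, a]
T = np.full((A, S, S), 0.5)              # T[a, s, s'] = T(s'|s,a)
O = np.full((A, S, Obs), 0.5)            # O[a, s', o] = O(o|a,s')

def update(b, a, o):
    # Belief update: b'(s') ∝ O(o|a,s') · Σ_s T(s'|s,a) b(s)
    bp = O[a, :, o] * (b @ T[a])
    return bp / bp.sum()

def backup(Gamma, b):
    alphas_a = []
    for a in range(A):
        alpha_a = R[:, a].astype(float)
        for o in range(Obs):
            # α_{a,o}: vector in Γ maximizing value at the updated belief
            ao = max(Gamma, key=lambda al: al @ update(b, a, o))
            # add γ Σ_{s'} O(o|a,s') T(s'|s,a) α_{a,o}(s') for every s
            alpha_a = alpha_a + gamma * (T[a] * O[a, :, o]) @ ao
        alphas_a.append(alpha_a)
    # pick the candidate α_a with the highest value at b
    return max(alphas_a, key=lambda al: al @ b)

Gamma = [np.zeros(S)]                    # seeded lower bound
B = [np.array([0.5, 0.5]), np.array([0.9, 0.1])]
Gamma = [backup(Gamma, b) for b in B]    # Γ^{t+1}, one vector per belief
```

The two `max(...)` calls correspond directly to the two argmax equations above, and the final comprehension is the set-builder definition of \(\Gamma^{t+1}\).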
{"url":"https://www.jemoka.com/posts/kbhpoint_based_value_iteration/","timestamp":"2024-11-03T13:44:52Z","content_type":"text/html","content_length":"6974","record_id":"<urn:uuid:b2a028c7-3b38-421f-a3d7-78fd31ebe86a>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00778.warc.gz"}
Study Guide - Section Exercises

1. Is [latex]\sqrt{2}[/latex] an example of a rational terminating, rational repeating, or irrational number? Tell why it fits that category.
2. What is the order of operations? What acronym is used to describe the order of operations, and what does it stand for?
3. What do the Associative Properties allow us to do when following the order of operations? Explain your answer.

For the following exercises, simplify the given expression.

4. [latex]10+2\times \left(5 - 3\right)[/latex]
5. [latex]6\div 2-\left(81\div {3}^{2}\right)[/latex]
6. [latex]18+{\left(6 - 8\right)}^{3}[/latex]
7. [latex]-2\times {\left[16\div {\left(8 - 4\right)}^{2}\right]}^{2}[/latex]
8. [latex]4 - 6+2\times 7[/latex]
9. [latex]3\left(5 - 8\right)[/latex]
10. [latex]4+6 - 10\div 2[/latex]
11. [latex]12\div \left(36\div 9\right)+6[/latex]
12. [latex]{\left(4+5\right)}^{2}\div 3[/latex]
13. [latex]3 - 12\times 2+19[/latex]
14. [latex]2+8\times 7\div 4[/latex]
15. [latex]5+\left(6+4\right)-11[/latex]
16. [latex]9 - 18\div {3}^{2}[/latex]
17. [latex]14\times 3\div 7 - 6[/latex]
18. [latex]9-\left(3+11\right)\times 2[/latex]
19. [latex]6+2\times 2 - 1[/latex]
20. [latex]64\div \left(8+4\times 2\right)[/latex]
21. [latex]9+4\left({2}^{2}\right)[/latex]
22. [latex]{\left(12\div 3\times 3\right)}^{2}[/latex]
23. [latex]25\div {5}^{2}-7[/latex]
24. [latex]\left(15 - 7\right)\times \left(3 - 7\right)[/latex]
25. [latex]2\times 4 - 9\left(-1\right)[/latex]
26. [latex]{4}^{2}-25\times \frac{1}{5}[/latex]
27. [latex]12\left(3 - 1\right)\div 6[/latex]

For the following exercises, solve for the variable.

28. [latex]8\left(x+3\right)=64[/latex]
29. [latex]4y+8=2y[/latex]
30. [latex]\left(11a+3\right)-18a=-4[/latex]
31. [latex]4z - 2z\left(1+4\right)=36[/latex]
32. [latex]4y{\left(7 - 2\right)}^{2}=-200[/latex]
33. [latex]-{\left(2x\right)}^{2}+1=-3[/latex]
34. [latex]8\left(2+4\right)-15b=b[/latex]
35. [latex]2\left(11c - 4\right)=36[/latex]
36.
[latex]4\left(3 - 1\right)x=4[/latex]
37. [latex]\frac{1}{4}\left(8w-{4}^{2}\right)=0[/latex]

For the following exercises, simplify the expression.

38. [latex]4x+x\left(13 - 7\right)[/latex]
39. [latex]2y-{\left(4\right)}^{2}y - 11[/latex]
40. [latex]\frac{a}{{2}^{3}}\left(64\right)-12a\div 6[/latex]
41. [latex]8b - 4b\left(3\right)+1[/latex]
42. [latex]5l\div 3l\times \left(9 - 6\right)[/latex]
43. [latex]7z - 3+z\times {6}^{2}[/latex]
44. [latex]4\times 3+18x\div 9 - 12[/latex]
45. [latex]9\left(y+8\right)-27[/latex]
46. [latex]\left(\frac{9}{6}t - 4\right)2[/latex]
47. [latex]6+12b - 3\times 6b[/latex]
48. [latex]18y - 2\left(1+7y\right)[/latex]
49. [latex]{\left(\frac{4}{9}\right)}^{2}\times 27x[/latex]
50. [latex]8\left(3-m\right)+1\left(-8\right)[/latex]
51. [latex]9x+4x\left(2+3\right)-4\left(2x+3x\right)[/latex]
52. [latex]{5}^{2}-4\left(3x\right)[/latex]

For the following exercises, consider this scenario: Fred earns $40 mowing lawns. He spends $10 on mp3s, puts half of what is left in a savings account, and gets another $5 for washing his neighbor’s car.

53. Write the expression that represents the number of dollars Fred keeps (and does not put in his savings account). Remember the order of operations.
54. How much money does Fred keep?

For the following exercises, solve the given problem.

55. According to the U.S. Mint, the diameter of a quarter is 0.955 inches. The circumference of the quarter would be the diameter multiplied by [latex]\pi [/latex]. Is the circumference of a quarter a whole number, a rational number, or an irrational number?
56. Jessica and her roommate, Adriana, have decided to share a change jar for joint expenses. Jessica put her loose change in the jar first, and then Adriana put her change in the jar. We know that it does not matter in which order the change was added to the jar. What property of addition describes this fact?
For the following exercises, consider this scenario: There is a mound of [latex]g[/latex] pounds of gravel in a quarry. Throughout the day, 400 pounds of gravel is added to the mound. Two orders of 600 pounds are sold and the gravel is removed from the mound. At the end of the day, the mound has 1,200 pounds of gravel.

57. Write the equation that describes the situation.
58. Solve for g.

For the following exercise, solve the given problem.

59. Ramon runs the marketing department at his company. His department gets a budget every year, and every year, he must spend the entire budget without going over. If he spends less than the budget, then his department gets a smaller budget the following year. At the beginning of this year, Ramon got $2.5 million for the annual marketing budget. He must spend the budget such that [latex]2,500,000-x=0[/latex]. What property of addition tells us what the value of x must be?

For the following exercises, use a graphing calculator to solve for x. Round the answers to the nearest hundredth.

60. [latex]0.5{\left(12.3\right)}^{2}-48x=\frac{3}{5}[/latex]
61. [latex]{\left(0.25 - 0.75\right)}^{2}x - 7.2=9.9[/latex]
62. If a whole number is not a natural number, what must the number be?
63. Determine whether the statement is true or false: The multiplicative inverse of a rational number is also rational.
64. Determine whether the statement is true or false: The product of a rational and irrational number is always irrational.
65. Determine whether the simplified expression is rational or irrational: [latex]\sqrt{-18 - 4\left(5\right)\left(-1\right)}[/latex].
66. Determine whether the simplified expression is rational or irrational: [latex]\sqrt{-16+4\left(5\right)+5}[/latex].
67. The division of two whole numbers will always result in what type of number?
68. What property of real numbers would simplify the following expression: [latex]4+7\left(x - 1\right)?[/latex]

Licenses & Attributions
CC licensed content, Specific attribution
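A few of the order-of-operations exercises above can be spot-checked in Python, which follows the same precedence rules (parentheses, then exponents, then multiplication/division, then addition/subtraction):

```python
# Exercise 4: 10 + 2 × (5 − 3)
print(10 + 2 * (5 - 3))        # 14

# Exercise 5: 6 ÷ 2 − (81 ÷ 3²)
print(6 / 2 - (81 / 3 ** 2))   # -6.0

# Exercise 6: 18 + (6 − 8)³
print(18 + (6 - 8) ** 3)       # 10
```

Checking by hand against an interpreter like this is a useful way to verify a PEMDAS computation.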
{"url":"https://www.symbolab.com/study-guides/sanjacinto-atdcoursereview-collegealgebra-1/section-exercises.html","timestamp":"2024-11-08T02:46:15Z","content_type":"text/html","content_length":"134020","record_id":"<urn:uuid:1a6155d0-880d-4df3-ad91-e6e6a7017b9c>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00770.warc.gz"}
Nonlinear Evolution Equations That Change Type [PDF] [522do0r7hbd0]

E-Book Overview

This IMA Volume in Mathematics and its Applications, NONLINEAR EVOLUTION EQUATIONS THAT CHANGE TYPE, is based on the proceedings of a workshop which was an integral part of the 1988-89 IMA program on NONLINEAR WAVES. The workshop focussed on problems of ill-posedness and change of type which arise in modeling flows in porous materials, viscoelastic fluids and solids and phase changes. We thank the Coordinating Committee: James Glimm, Daniel Joseph, Barbara Lee Keyfitz, Andrew Majda, Alan Newell, Peter Olver, David Sattinger and David Schaeffer for planning and implementing an exciting and stimulating year-long program. We especially thank the workshop organizers, Barbara Lee Keyfitz and Michael Shearer, for their efforts in bringing together many of the major figures in those research fields in which theories for nonlinear evolution equations that change type are being developed. Avner Friedman, Willard Miller, Jr.

PREFACE (excerpt): During the winter and spring quarters of the 1988/89 IMA Program on Nonlinear Waves, the issue of change of type in nonlinear partial differential equations appeared frequently. Discussion began with the January 1989 workshop on Two-Phase Waves in Fluidized Beds, Sedimentation and Granular Flow; some of the papers in the proceedings of that workshop present strategies designed to avoid the appearance of change of type in models for multiphase fluid flow.

E-Book Content

The IMA Volumes in Mathematics and Its Applications, Volume 27. Series Editors: Avner Friedman, Willard Miller, Jr. Institute for Mathematics and its Applications (IMA). The Institute for Mathematics and its Applications was established by a grant from the National Science Foundation to the University of Minnesota in 1982.
The IMA seeks to encourage the development and study of fresh mathematical concepts and questions of concern to the other sciences by bringing together mathematicians and scientists from diverse fields in an atmosphere that will stimulate discussion and collaboration. The IMA Volumes are intended to involve the broader scientific community in this process. Avner Friedman, Director; Willard Miller, Jr., Associate Director

IMA PROGRAMS
1982-1983 Statistical and Continuum Approaches to Phase Transition
1983-1984 Mathematical Models for the Economics of Decentralized Resource Allocation
1984-1985 Continuum Physics and Partial Differential Equations
1985-1986 Stochastic Differential Equations and Their Applications
1986-1987 Scientific Computation
1987-1988 Applied Combinatorics
1988-1989 Nonlinear Waves
1989-1990 Dynamical Systems and Their Applications
1990-1991 Phase Transitions and Free Boundaries

SPRINGER LECTURE NOTES FROM THE IMA:
The Mathematics and Physics of Disordered Media. Editors: Barry Hughes and Barry Ninham (Lecture Notes in Math., Volume 1035, 1983)
Orienting Polymers. Editor: J.L. Ericksen (Lecture Notes in Math., Volume 1063, 1984)
New Perspectives in Thermodynamics. Editor: James Serrin (Springer-Verlag, 1986)
Models of Economic Dynamics. Editor: Hugo Sonnenschein (Lecture Notes in Econ., Volume 264, 1986)

Barbara Lee Keyfitz, Michael Shearer: Nonlinear Evolution Equations That Change Type. With 96 Figures. Springer-Verlag: New York, Berlin, Heidelberg, London, Paris, Tokyo, Hong Kong.

Barbara Lee Keyfitz, Department of Mathematics, University of Houston, Houston, Texas 77204 USA. Michael Shearer, Department of Mathematics, North Carolina State University, Raleigh, North Carolina 27695 USA. Series Editors: Avner Friedman, Willard Miller, Jr.
Institute for Mathematics and Its Applications, University of Minnesota, Minneapolis, MN 55455 USA.

Mathematical Subject Classification Codes: Primary: 35M05, 76A10, 35L65, 35L67; Secondary: 35K65, 65N99, 73E05, 76A05, 76H05, 76S05, 82A25.

Library of Congress Cataloging-in-Publication Data: Nonlinear evolution equations that change type / [edited by] Barbara Lee Keyfitz, Michael Shearer. p. cm. (The IMA volumes in mathematics and its applications; v. 27) "Based on the proceedings of a workshop which was an integral part of the 1988-89 IMA program on nonlinear waves"--Foreword. 1. Evolution equations, Nonlinear. I. Keyfitz, Barbara Lee. II. Shearer, Michael. III. Series. QA377.N664 1990 515'.353-dc20 90-9970

Printed on acid-free paper. © 1990 Springer-Verlag New York Inc. Softcover reprint of the hardcover 1st edition 1990. All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer-Verlag New York, Inc., 175 Fifth Avenue, New York, NY 10010, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use of general descriptive names, trade names, trademarks, etc., in this publication, even if the former are not especially identified, is not to be taken as a sign that such names, as understood by the Trade Marks and Merchandise Marks Act, may accordingly be used freely by anyone. Permission to photocopy for internal or personal use, or the internal or personal use of specific clients, is granted by Springer-Verlag New York, Inc. for libraries registered with the Copyright Clearance Center (CCC), provided that the base fee of $0.00 per copy, plus $0.20 per page is paid directly to CCC, 21 Congress St., Salem, MA 01970, USA.
Special requests should be addressed directly to Springer-Verlag New York, 175 Fifth Avenue, New York, NY 10010, USA.

ISBN 0-387-97353-2 1990 $0.00 + 0.20. Camera-ready copy prepared by the IMA. 9 8 7 6 5 4 3 2 1
ISBN-13: 978-1-4613-9051-0
DOI: 10.1007/978-1-4613-9049-7
e-ISBN-13: 978-1-4613-9049-7

The IMA Volumes in Mathematics and its Applications

Current Volumes:
Volume 1: Homogenization and Effective Moduli of Materials and Media. Editors: Jerry Ericksen, David Kinderlehrer, Robert Kohn, J.-L. Lions
Volume 2: Oscillation Theory, Computation, and Methods of Compensated Compactness. Editors: Constantine Dafermos, Jerry Ericksen, David Kinderlehrer, Marshall Slemrod
Volume 3: Metastability and Incompletely Posed Problems. Editors: Stuart Antman, Jerry Ericksen, David Kinderlehrer, Ingo Muller
Volume 4: Dynamical Problems in Continuum Physics. Editors: Jerry Bona, Constantine Dafermos, Jerry Ericksen, David Kinderlehrer
Volume 5: Theory and Applications of Liquid Crystals. Editors: Jerry Ericksen and David Kinderlehrer
Volume 6: Amorphous Polymers and Non-Newtonian Fluids. Editors: Constantine Dafermos, Jerry Ericksen, David Kinderlehrer
Volume 7: Random Media. Editor: George Papanicolaou
Volume 8: Percolation Theory and Ergodic Theory of Infinite Particle Systems. Editor: Harry Kesten
Volume 9: Hydrodynamic Behavior and Interacting Particle Systems. Editor: George Papanicolaou
Volume 10: Stochastic Differential Systems, Stochastic Control Theory and Applications. Editors: Wendell Fleming and Pierre-Louis Lions
Volume 11: Numerical Simulation in Oil Recovery. Editor: Mary Fanett Wheeler
Volume 12: Computational Fluid Dynamics and Reacting Gas Flows. Editors: Bjorn Engquist, M. Luskin, Andrew Majda
Volume 13: Numerical Algorithms for Parallel Computer Architectures. Editor: Martin H. Schultz
Volume 14: Mathematical Aspects of Scientific Software. Editor: J.R. Rice
Volume 15: Mathematical Frontiers in Computational Chemical Physics. Editor: D. Truhlar
Volume 16: Mathematics in Industrial Problems, by Avner Friedman
Volume 17: Applications of Combinatorics and Graph Theory to the Biological and Social Sciences. Editor: Fred Roberts
Volume 18: q-Series and Partitions. Editor: Dennis Stanton
Volume 19: Invariant Theory and Tableaux. Editor: Dennis Stanton
Volume 20: Coding Theory and Design Theory Part I: Coding Theory. Editor: Dijen Ray-Chaudhuri
Volume 21: Coding Theory and Design Theory Part II: Design Theory. Editor: Dijen Ray-Chaudhuri
Volume 22: Signal Processing: Part I - Signal Processing Theory. Editors: L. Auslander, F.A. Grünbaum, W. Helton, T. Kailath, P. Khargonekar and S. Mitter
Volume 23: Signal Processing: Part II - Control Theory and Applications of Signal Processing. Editors: L. Auslander, F.A. Grünbaum, W. Helton, T. Kailath, P. Khargonekar and S. Mitter
Volume 24: Mathematics in Industrial Problems, Part 2, by Avner Friedman
Volume 25: Solitons in Physics, Mathematics, and Nonlinear Optics. Editors: Peter J. Olver and David H. Sattinger
Volume 26: Two Phase Flows and Waves. Editors: Daniel D. Joseph and David G. Schaeffer
Volume 27: Nonlinear Evolution Equations that Change Type. Editors: Barbara Lee Keyfitz and Michael Shearer

Forthcoming Volumes:
1988-1989: Nonlinear Waves; Computer Aided Proofs in Analysis; Multidimensional Hyperbolic Problems and Computations (2 Volumes); Microlocal Analysis and Nonlinear Waves
Summer Program 1989: Robustness, Diagnostics, Computing and Graphics in Statistics; Robustness, Diagnostics in Statistics (2 Volumes); Computing and Graphics in Statistics
1989-1990: Dynamical Systems and Their Applications; An Introduction to Dynamical Systems; Patterns and Dynamics in Reactive Media; Dynamical Issues in Combustion Theory

This IMA Volume in Mathematics and its Applications is based on the proceedings of a workshop which was an integral part of the 1988-89 IMA program on NONLINEAR WAVES.
The workshop focussed on problems of ill-posedness and change of type which arise in modeling flows in porous materials, viscoelastic fluids and solids and phase changes. We thank the Coordinating Committee: James Glimm, Daniel Joseph, Barbara Lee Keyfitz, Andrew Majda, Alan Newell, Peter Olver, David Sattinger and David Schaeffer for planning and implementing an exciting and stimulating year-long program. We especially thank the workshop organizers, Barbara Lee Keyfitz and Michael Shearer, for their efforts in bringing together many of the major figures in those research fields in which theories for nonlinear evolution equations that change type are being developed. Avner Friedman, Willard Miller, Jr.

During the winter and spring quarters of the 1988/89 IMA Program on Nonlinear Waves, the issue of change of type in nonlinear partial differential equations appeared frequently. Discussion began with the January 1989 workshop on Two-Phase Waves in Fluidized Beds, Sedimentation and Granular Flow; some of the papers in the proceedings of that workshop present strategies designed to avoid the appearance of change of type in models for multiphase fluid flow. As the papers in this volume indicate, physical processes whose simplest models may involve change of type occur also in other dynamic contexts, such as in the simulation of oil reservoirs, involving multiphase flow in a porous medium, and in granular flow. There is also considerable recent mathematical work on simple model problems involving systems of conservation laws in space and time that change type. Some of this work addresses the theoretical issues, in particular the loss of linearized well-posedness of initial value problems; but there are interesting numerical problems also. Much of the mathematical work was not previously known to applied mathematicians or fluid dynamicists looking at models for specific flows.
In addition, recent work on both steady and unsteady models of viscoelasticity has indicated the importance of composite systems in the study of steady viscoelastic flows, and has exhibited change of type in these steady models; unsteady change of type (change of type in the evolution equations) has even been conjectured to describe some instabilities in viscoelastic flows. The general theme of the March 1989 Workshop on Evolution Equations that Change Type was the relationship between the analytical and numerical issues posed by equations that change type, and the applications modelled by these equations. The papers in these proceedings by Coleman and by Cook, Schleiniger and Weinacht discuss the current status of modelling of viscoelastic fluids, including change of type for both steady and unsteady flows, while the paper of Crochet and Delvaux details how numerical computations can be performed on steady viscoelastic flows that change type. This includes adapting the concept of an upwind scheme from transonic flow calculations. Renardy's paper, and that of Malkus, Nohel and Plohr, obtain analytical results which help to compare different models of viscoelastic fluids. An explanation of how multiphase flow in porous media leads to conservation laws that change type can be found in Lars Holden's paper. There are dynamic models for phase transitions which exhibit change of type, and the propagation of phase boundaries in equations arising this way is analysed by Mischaikow and by Sprekels. Models of granular flow give rise to linearly ill-posed equations; the paper of Schaeffer and Shearer contains an analytical treatment of change of type in yield-vertex models of plasticity. Mathematical background may be found in papers of Keyfitz and of Warnecke, which include comparison of classical steady transonic with unsteady models. Mathematical properties of model equations which exhibit change of type, and construction of solutions, are discussed by Holden, Holden and Risebro, by Hsiao and by Azevedo and Marchesin. Theoretical issues of well-posedness, weak formulations, and admissibility of shock waves arise naturally if one tries to relate the linear ill-posedness of the Cauchy problem to nonlinear considerations, or to formulate correct boundary conditions for equations of mixed type. An approach to this analysis through examples which are not strictly hyperbolic is given in the papers of LeFloch, of Liu and Xin, and of Shearer and Schecter. The example considered by Kranzer and Keyfitz is strictly hyperbolic, but is evidently related to nonstrictly hyperbolic problems. Well-posedness for a nonlinear model which is linearly ill-posed is described by Slemrod. One of the conclusions which emerged from the workshop was that at least some dynamic instabilities in viscoelastic flows can be explained by a simpler mechanism than change of type, namely a bifurcation of attractors. However, change of type of the transonic kind, in steady flows, remains of interest in viscoelasticity. Among promising mathematical approaches which were displayed at the workshop, Riemann problems played a prominent role in many of the talks, with new phenomena, loss of uniqueness of solutions, and constructive solutions being discussed in detail. One classic result on equations of mixed type, Friedrichs' 1958 theory of symmetric positive systems, emerged as a potential tool to discuss well-posedness of boundary-value problems. New uses for the qualitative theory of planar dynamical systems appear in the work of Liu and Xin, of Malkus, Nohel, and Plohr, of Azevedo and Marchesin, of Shearer and Schecter and of Keyfitz; higher-dimensional vector fields appear in the papers of Kranzer and Keyfitz and of Mischaikow.
As organizers of the workshop and editors of the proceedings, we extend a special word of thanks to Dan Joseph, whose papers on loss of hyperbolicity in viscoelastic models provided an important link between specialists in viscoelasticity and participants working in other areas related to equations that change type. In addition to introducing the participants to each other and organizing a lab tour, Dan presented a summary of Fraenkel's work on change of type in steady flow. We are also pleased to thank Avner Friedman and Willard Miller, Jr. and the IMA staff for their smooth organization of the details of the workshop and the visits of the participants. Finally, we thank all the participants in this volume, who submitted their papers so promptly, and we thank the editorial staff of Patricia V. Brick, Stephan Skogerboe, Kaye Smith and Marise Ann Widmer who completed the manuscript preparation. Barbara Lee Keyfitz, Michael Shearer

CONTENTS
Foreword (p. ix)
Preface (p. xi)
Multiple viscous profile Riemann solutions in mixed elliptic-hyperbolic models for flow in porous media, by A. V. Azevedo and D. Marchesin (p. 1)
On the loss of regularity of shearing flows of viscoelastic fluids, by Bernard D. Coleman (p. 18)
Composite type, change of type, and degeneracy in first order systems with applications to viscoelastic flows, by L. Pamela Cook, G. Schleiniger and R.J. Weinacht (p. 32)
Numerical simulation of inertial viscoelastic flow with change of type, by M.J. Crochet and V. Delvaux (p. 47)
Some qualitative properties of 2 x 2 systems of conservation laws of mixed type, by H. Holden, L. Holden and N.H. Risebro (p. 67)
On the strict hyperbolicity of the Buckley-Leverett equations for three-phase flow, by Lars Holden (p. 79)
Admissibility criteria and admissible weak solutions of Riemann problems for conservation laws of mixed type: a summary, by L. Hsiao (p. 85)
Shocks near the sonic line: a comparison between steady and unsteady models for change of type, by Barbara Lee Keyfitz (p. 89)
A strictly hyperbolic system of conservation laws admitting singular shocks, by Herbert C. Kranzer and Barbara Lee Keyfitz (p. 107)
An existence and uniqueness result for two nonstrictly hyperbolic systems, by Philippe Le Floch (p. 126)
Overcompressive shock waves, by Tai-Ping Liu and Zhouping Xin (p. 139)
Quadratic dynamical systems describing shear flow of non-Newtonian fluids, by D.S. Malkus, J.A. Nohel and B.J. Plohr (p. 146)
Dynamic phase transitions: a connection matrix approach, by Konstantin Mischaikow (p. 164)
A well-posed boundary value problem for supercritical flow of viscoelastic fluids of Maxwell type, by Michael Renardy (p. 181)
Loss of hyperbolicity in yield vertex plasticity models under nonproportional loading, by David G. Schaeffer and Michael Shearer (p. 192)
Undercompressive shocks in systems of conservation laws, by Michael Shearer and Stephen Schecter (p. 218)
Measure valued solutions to a backward-forward heat equation: a conference report, by M. Slemrod (p. 232)
One-dimensional thermomechanical phase transitions with non-convex potentials of Ginzburg-Landau type, by Jurgen Sprekels (p. 243)
Admissibility of solutions to the Riemann problem for systems of mixed type (transonic small disturbance theory),
by Gerald Warnecke (p. 258)

MULTIPLE VISCOUS PROFILE RIEMANN SOLUTIONS IN MIXED ELLIPTIC-HYPERBOLIC MODELS FOR FLOW IN POROUS MEDIA*
A. V. Azevedo (1,2) and D. Marchesin (2,3)

Abstract. We consider the Riemann problem for a system of two conservation laws of mixed type. We show, by constructing two distinct solutions for a nontrivial class of Riemann problem data, that the viscous profile entropy condition is insufficient to guarantee uniqueness of the solution. This model possesses transitional shocks - or saddle to saddle connections of the associated dynamical system - of a kind not yet observed in conservation laws with quadratic polynomial flux functions.

Introduction. Weak solutions of systems of conservation laws are not uniquely determined from the initial data, unless conditions drawn from the physics - or entropy criteria - are used to supplement the partial differential equations and select physically meaningful solutions. The entropy criterion which seems to generalize most useful criteria for hyperbolic systems of conservation laws (such as Lax's [15] and Oleinik's [17]) requires the discontinuities to be the small viscosity limit of traveling wave solutions of the partial differential equations, retaining parabolic terms that describe smaller effects usually neglected in the conservation law formulation.
This is not the first example where the viscous profile condition fails to provide an adequate uniqueness criterion in lnixed problems. This is the case of gas dynamics with phase transitions ([22], [23]) or silnilar systems of two equations possessing an infinite strip where the characteristic speeds are complex. However, this elliptic strip disappears completely and the problem becomes strictly hyperbolic when the flux functions are modified by making the thennodynalnic equation of state convex. This behavior is competely different from three-phase flow which gives rise to the model we study. There is a compact elliptic region which can at best shrink to a non removable hyperbolic singularity when the permeabilities describing the flow are modified [25], [16]. At such an umbilic point the characteristic speeds coincide; everywhere else in a neighborhood of this point the problem is strictly hyperbolic. *This research was supported in Brazil by CNPq, FAPERJ and in the U.S.A. by IMA with funds provided by NSF 1 Univ. de Brasilia, Brazil. 2Pontificia Univ. Cat6lica do Rio de Janeiro, Brazil. 3Instituto de Matematica Pura e Aplicada, E.D. Castorina 110, Rio de Janeiro, 22460, Brazil. For the strictly hyperbolic gas dynamics case as well as for the case of models with an isolated umbilic point related to three-phase flow, the viscous profile entropy condition is adequate [5], [20], [14], [10], [24], [13]. Therefore the failure of the viscous profile entropy condition in the presence of elliptic region may conceivably be due to inadequate modeling giving rise to systems of conservation laws which tum out to have mixed elliptic-hyperbolic behavior. These difficulties may be related to the lack of convergence of Glimm's method for certain mixed problems [9], [19]. Nonuniqueness in the context of nonlinear parabolic equations is reported in [26]. 
We study a system of two conservation laws with quadratic polynomial flux functions such that the characteristic speeds are complex in a bounded part of the set of all possible states. We believe that this system models the local behavior of the solution of Stone's model [7], which is a description of immiscible three-phase flow commonly used in petroleum reservoir engineering. In the present work, shocks are considered admissible if they possess viscous profiles generated by a particularly simple parabolic term, where the viscosity matrix is a multiple of the identity. Nonunique solutions of this Riemann problem disregarding the viscous profile admissibility condition were obtained in [12]. We show, by constructing two distinct solutions for a non-trivial class of Riemann problem data, that in this model the viscous profile entropy condition is insufficient to guarantee uniqueness of the solution. Conclusions that can be drawn are that either better entropy conditions are required in the presence of elliptic regions, or that fundamentally more accurate models must be developed to describe phenomena such as three-phase flow. We believe that the special properties observed in this model are related to the existence of a kind of transitional shock not yet observed in conservation laws with quadratic polynomial flux functions. Transitional shocks are represented by orbits connecting two saddles in the associated dynamical system. Classes of quadratic polynomial dynamical systems possessing saddles connected by straight line orbits have been observed by Gomes [11], Shearer [24], and Isaacson, Marchesin and Plohr [13]. However, the transitional shocks found in the model considered here are pairs of saddles connected only by an orbit which is not a straight line. The behavior of solutions represented by states in the elliptic region is not yet clear. Numerical experiments reported in [3] indicate that states inside the elliptic region tend to leave it.
We obtained an analytical proof of this fact using the viscosity entropy criterion. This result was obtained simultaneously and independently by H. Holden, L. Holden and N. Risebro.

The plan of this work is the following. The model is briefly described in §1. Entropy criteria including the viscous profile condition are briefly reviewed in §2. A pair of distinct solutions for the Riemann problem is constructed in §3. The Appendix contains some technical results used in the previous sections. Results related to the instability of states in the elliptic region are also contained in the Appendix.

1. The Model. Riemann problems originating in the theory of three-phase flow in porous media are systems of conservation laws of the form

(1.1)    U_t + F(U)_x = 0,

with initial data

(1.2)    U(x, t = 0) = U_0(x) = { U_l, x < 0;  U_r, x > 0 },

where U(x,t) = (u(x,t), v(x,t)) is a solution of (1.1) for t > 0 and the flux function F is a prescribed C² function from R² into R²,

(1.3)    F(u, v) = ( f(u,v), g(u,v) ).

If the eigenvalues λ_1(U) and λ_2(U) of dF(U) are equal at (u_0, v_0) and distinct in a neighborhood of (u_0, v_0), we say that (u_0, v_0) is an umbilic point. In [20], it was shown that if F is a quadratic polynomial mapping such that (1.1) is hyperbolic except at the umbilic point U = 0, the system (1.1) has the normal form

F(U) = ( (1/2)(au² + 2buv + v²), (1/2)(bu² + 2uv) ),

which is the gradient of a cubic polynomial. General linear perturbations of (1.1),

F(U) = ( (1/2)(au² + 2buv + v²) + cu + dv, (1/2)(bu² + 2uv) + eu + fv ),

split the umbilic point into a bounded region where the eigenvalues are complex conjugate [12], [18], [21]. This region is called an elliptic region and (1.1) becomes a conservation law of mixed type. For the elliptic region E to exist, it is important that d and e be distinct, to ensure that F is not a gradient. In our example, we consider a = -1, b = 0, c = f = 0 and d = -e = p, where the inequality p > 0 is satisfied, so that E is a circle of radius p. This case is a perturbation of symmetric Case I ([20]).
Therefore, F is given by

(1.4)    F(U) = ( (1/2)(-u² + v²) + pv, uv - pu ).

We remark that adding a constant to F is irrelevant. This model was proposed and studied in detail by Holden [12]. The eigenvalues of dF(U) are

(1.5)    λ_1(U) = -√(u² + v² - p²),   λ_2(U) = +√(u² + v² - p²),

with corresponding right eigenvectors associated to λ_i (i = 1, 2):

r_i(U) = ( v + p, u + λ_i(U) ),   if v ≠ -p.

Since the solutions of (1.1) satisfy U(ax, at) = U(x, t) for a > 0, we look for solutions of the form U(x, t) = Û(ξ), ξ = x/t. Such smooth solutions satisfy

( -ξ I + dF(U) ) U' = 0,

where the prime denotes differentiation with respect to ξ. Therefore, continuous solutions of (1.1) are constructed locally using the integral curves of the differential equation

(1.6)    U' = r_i(U),   λ_i(U) = ξ = x/t,   i = 1, 2.

A rarefaction curve R_i(U_l) from U_l associated with a family i (R_i(U_l), i = 1, 2) is an integral curve of (1.6) starting at U_l along which λ_i(U) is nondecreasing. It represents an i-rarefaction fan in physical space, defined by inverting the relation λ_i(U) = ξ = x/t. The eigenvalue λ_i(U) is called the speed of the rarefaction at U, or characteristic speed.

A discontinuous solution of (1.1),

U(x, t) = { U_l, x < st;  U_r, x > st },

which propagates with speed s = s(U_l, U_r) and separates two constant states U_l and U_r, is called a shock (U_l, s, U_r). For such a discontinuous solution, s, U_l and U_r satisfy the Rankine-Hugoniot relation, which can be derived from the weak formulation of solutions of (1.1):

(1.7)    -s (U_r - U_l) + F(U_r) - F(U_l) = 0.

For a fixed U_l, the set of U_r satisfying (1.7) is a one-parameter family. A branch along which s decreases is called a shock curve; it is a parametrization in state space of physical shock waves. We obtain the Hugoniot curve eliminating s in (1.7); it is the solution of

(1.8)    H(U_l) = { U : [f(u,v) - f(u_l, v_l)](v - v_l) - [g(u,v) - g(u_l, v_l)](u - u_l) = 0 }.

The Hugoniot curve undergoes topological changes of shape at certain lines called secondary bifurcation lines [13], [18], given by B_1, B_2, B_3, where v = p, v = √3 u - 2p, v = -√3 u - 2p, respectively.
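For the reader's convenience we record the computation behind (1.5); the following verification is ours (same notation as the text).

```latex
% Jacobian of the flux (1.4), F(U) = ( (1/2)(-u^2 + v^2) + pv,\; uv - pu ):
dF(U) = \begin{pmatrix} -u & v + p \\ v - p & u \end{pmatrix},
\qquad \operatorname{tr} dF(U) = 0 .

% Characteristic polynomial:
\det\big(dF(U) - \lambda I\big) = \lambda^2 - (u^2 + v^2 - p^2),
% so the characteristic speeds are
\lambda_{1,2}(U) = \mp\sqrt{\,u^2 + v^2 - p^2\,},
% which are complex conjugate exactly on the open disk
E = \{\,(u,v) : u^2 + v^2 < p^2\,\},
% the elliptic region of radius p, as stated. Note also that for any shock
% speed s the traveling-wave field of Section 2 has linearization
% dF(U) - sI, with eigenvalues \lambda_i(U) - s; inside E their common
% real part is -s, since tr dF = 0.
```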
In many systems satisfying certain hypotheses [15], for a shock to be physically realizable, its speed as well as the characteristic speeds at the left and right of the discontinuity must satisfy certain inequalities, called Lax's entropy conditions. This gives rise to the following nomenclature for discontinuities:

1-shock:  λ_1(U_r) < s < λ_1(U_l),   s < λ_2(U_r);
2-shock:  λ_2(U_r) < s < λ_2(U_l),   s > λ_1(U_l).

Shocks for which either the left or right state is inside the elliptic region do not fall directly in the above framework. There is no shock both states of which are inside the elliptic region (Prop. A.9).

An elementary wave associated with a family i (i = 1, 2) is a rarefaction or a shock associated with a family i. If i = 1 (i = 2) the elementary wave is called slow (fast). Intermediate states in a Riemann solution represent a region with constant state in physical space. A solution of a Riemann problem (1.1) and (1.2) comprises wave groups, each containing a sequence of adjacent elementary waves of the same family. In certain cases, transitional shocks or rarefactions not associated with any particular family may also exist. Constant states separate different wave groups. The speed increases from U_l to U_r along the solution of a Riemann problem.

2. Viscosity Admissibility Criterion. Weak solutions of (1.1) and (1.2) are required for an existence theory, but they are not uniquely determined by the initial data. Other conditions, called entropy criteria, are necessary to obtain uniqueness. A typical criterion is to consider (1.1) as an approximation, in the limit as ε → 0+, to an equation of the form

(2.1)    U_t + F(U)_x = ε ( D(U) U_x )_x,   ε > 0,

where D(U) is the 2 x 2 viscosity matrix which models certain physical effects that are neglected in the conservation law (1.1).
We consider admissible the shocks (U_l, s, U_r) that are limits of traveling waves of (2.1) as ε → 0+, that is, limits of solutions of the form

(2.2)    U(x, t) = Û( (x - st)/ε )

of (2.1), which tend to U_l and U_r as ξ approaches -∞ and +∞, respectively [5]. We assume that D(U) has eigenvalues with positive real part. If dF(U) has real distinct eigenvalues, this assumption guarantees that short wavelength perturbations of constant solutions decay exponentially in time. In this paper, we consider D(U) as the identity, so that (2.1) can be written in the form

(2.3)    U_t + F(U)_x = ε U_xx.

Substituting (2.2) into (2.3) and integrating the result, we have

(2.4)    U̇ = -s (U - U_l) + F(U) - F(U_l),

where the dot denotes differentiation with respect to ξ. We remark that as ξ tends to infinity the right hand side of (2.4) tends to zero, satisfying the Rankine-Hugoniot relation (1.7). We refer to (2.4) (for fixed U_l) as the field X_s(U, U_l).

In [8] Gel'fand and in [6] Courant and Friedrichs show that studying discontinuous solutions of (1.1) and (1.2) as limits of solutions of (2.3) is equivalent to studying the existence of an orbit γ of X_s(U, U_l) such that

(2.5)    U_l = α(γ),   U_r = ω(γ),

where α(γ) and ω(γ) are the α-limit and ω-limit sets of γ, respectively. The viscosity entropy criterion consists in considering a shock (U_l, s, U_r) as admissible if there exists an orbit of X_s(U, U_l) satisfying (2.5); note that admissible Lax 1-shocks are repeller-saddle connections while Lax 2-shocks are saddle-attractor connections.

We remark that for F given by (1.4) the eigenvalues μ_i of dX_s(U) are given by

(2.6)    μ_i(U) = λ_i(U) - s,   i = 1, 2.

Note that if U_j (j = l, r) lies in the elliptic region E, then ℜμ_i = -s, and U_j is an attractor if s > 0 or a repeller if s < 0. If U_l is a repeller inside E, U_r is a saddle outside E, and there exists an orbit of (2.4) connecting U_l to U_r, we call (U_l, s, U_r) a 1-complex shock. Similarly, if U_l is a saddle outside E, U_r is an attractor inside E, and there is an orbit connecting U_l to U_r, we call (U_l, s, U_r) a 2-complex shock.
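Since the singularities of X_s(U, U_l) are precisely the states satisfying the Rankine-Hugoniot relation (1.7) with U_l, it is convenient to record here the "midpoint rule" for quadratic fluxes, which is used again in Prop. A.9; the elementary identity below is stated in our notation.

```latex
% Midpoint rule for a quadratic flux F = Q + L + c
% (Q homogeneous quadratic, L linear, c constant).
% Let U_m = (U_l + U_r)/2 and W = U_r - U_l. Then, by bilinearity,
F(U_r) - F(U_l)
  = Q\!\left(U_m + \tfrac{1}{2}W\right) - Q\!\left(U_m - \tfrac{1}{2}W\right) + L(W)
  = 2\,Q(U_m, W) + L(W)
  = dF(U_m)\,W ,
% where Q(\cdot,\cdot) denotes the symmetric bilinear form with Q(U,U) = Q(U).
% Hence the Rankine-Hugoniot relation s\,W = F(U_r) - F(U_l) says exactly
% that W is an eigenvector of dF(U_m) with real eigenvalue s.
```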
We will continue to denote a 1-complex shock (2-complex shock) by S_1 (S_2), respectively.

It is known that there exist shocks obeying Lax's entropy condition which are inadmissible because there is no orbit connecting the singularities U_l and U_r; there are also saddle-saddle connections representing shocks with viscous profile which do not obey Lax's inequalities [13]. These are called transitional shock waves. They are discontinuous solutions (U_-, s, U_+) with U_- connected to U_+ by an orbit, such that

λ_1(U_-) < s < λ_2(U_-),   λ_1(U_+) < s < λ_2(U_+).

Obviously, such waves appear only outside E. We will denote transitional shock waves by X. Such waves are essential to ensure the existence of solutions of Riemann problems [20], [24], [10]. For quadratic polynomial gradient systems, Chicone [4] showed that transitional shocks are represented by straight line orbits. When the system is not a gradient, Frommer and Bautin [27] obtained an example with a saddle-saddle connection which is not a straight line; however, the two singularities are still connected by another straight line orbit (see Fig. A.1 in the Appendix). We will show in §4 an example of two saddles connected only by one orbit which is not a straight line segment. Non-straight-line saddle-saddle connections allow for more complicated Riemann solution structures than those already known [13].

In the remaining chapters, if two states U_- and U_+ are connected by a wave W (W = S_1, S_2, R_1, R_2, X), we will use the notation U_- --W--> U_+.

3. Nonuniqueness. We will show that there exists an open region of pairs (U_l, U_r) such that the Riemann problem with data (U_l, U_r) in this set admits two distinct solutions, both of which are admissible. We construct the Riemann solutions as sequences of elementary waves. The first solution consists of a 1-rarefaction, a pair of transitional shocks and a 2-complex shock; the second solution consists of a 1-rarefaction and a 2-complex shock. We will prove this result in two steps. In the first step (Prop.
3.1), we restrict U_l to a segment of a straight line and U_r to an open region Ω_2; in the second step (Prop. 3.2), we consider U_l in a region Ω_1 and U_r in another region Ω_2.

To prove Prop. 3.1, we construct an open segment of points U_l in B_2. We define certain points, shown in Fig. 3.1, as follows (this can be done explicitly, since the Hugoniot curves are conic sections):

i) A = (a_1, a_2) ∈ B_2, B = (b_1, b_2) ∈ B_3, such that a_2 > p, B ∈ H(A) and H(B) is tangent to the axis u = 0;
ii) U_l = (u_l, v_l) ∈ B_2, with p < v_l < a_2;
iii) M_3 = (m_3, n_3) ∈ B_3, such that M_3 ∈ H(U_l).

Now we define Ω_2(U_l) as the region bounded by the axis u = 0 and the curve H(M_3). Note that Ω_2(U_l) is contained in the triangle with vertices O, α, β, where O = (0,0), α = (0,p), β = (~, p).

To construct the solution of the Riemann problem, we consider (see Fig. 3.2):

iv) U_r = (u_r, v_r) ∈ Ω_2(U_l);
v) M_2 = (m_2, n_2) = H(U_r) ∩ B_3; note that -√3 p < m_3 < m_2 < b_2 < 0;
vi) M = (m, n) = H(U_r) ∩ R_1(U_l);
vii) M_1 = (0, -2p).

Fig. 3.1: Points used to construct Ω_1 and Ω_2; also, the region Ω_2.
Fig. 3.2: Points used to construct the solution of the Riemann problem.
Fig. 3.3: The first solution of the Riemann problem with U_l ∈ B_2.
Fig. 3.4: The second solution of the Riemann problem with U_l ∈ B_2.

PROP. 3.1. Let U_l and U_r be as above. Then the Riemann problem (1.1) and (1.2), with F given by (1.4), has at least two distinct solutions (I and II) which are admissible.

Proof. I) First solution: U_l --X--> M_1 --X--> M_2 --S_2--> U_r (see Fig. 3.3):

i) U_l can be connected to M_1 by a transitional shock (for v_l close to p; otherwise, M_1 is an attractor such that the discontinuity (U_l, s, M_1) is not a transitional shock), since from (1.5)

λ_1(U_l) = -λ_2(U_l) = -(2u_l - √3 p),   λ_1(M_1) = -λ_2(M_1) = -√3 p,

and the wave speed is

s = u_l (v_l - p) / (v_l + 2p) = u_l - √3 p.

Clearly,

λ_1(U_l) < s < λ_2(U_l)   and   λ_1(M_1) < s < λ_2(M_1).

By Prop. A.2, there is an orbit connecting U_l and M_1.
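The identity s = u_l - √3 p used in step i) is pure algebra on B_2; we sketch the computation here (our verification, same notation).

```latex
% U_l \in B_2 means v_l = \sqrt{3}\,u_l - 2p; also M_1 = (0, -2p).
% Rankine-Hugoniot for the second flux component g(u,v) = u(v - p):
s = \frac{g(U_l) - g(M_1)}{v_l - (-2p)} = \frac{u_l\,(v_l - p)}{v_l + 2p}.

% Substituting v_l = \sqrt{3}\,u_l - 2p gives
s = \frac{u_l\,(\sqrt{3}\,u_l - 3p)}{\sqrt{3}\,u_l} = u_l - \sqrt{3}\,p .
```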
The eigenvectors r_1(U_l) and r_2(M_1) are parallel to B_2, and the orientation of this orbit is from U_l to M_1. Therefore, we have joined U_l to M_1 by a transitional shock with speed s = u_l - √3 p.

ii) M_1 can be connected to M_2 by a transitional shock, where the wave speed is

s = m_2 (n_2 - p) / (n_2 + 2p) = m_2 + √3 p.

Clearly,

λ_1(M_1) < s < λ_2(M_1)   and   λ_1(M_2) < s < λ_2(M_2).

So we have joined M_1 to M_2 by a transitional shock with speed s = m_2 + √3 p. The proof that this shock is admissible is analogous to that in i). Also, the speed s(M_1, M_2) is larger than s(U_l, M_1) by Prop. A.4.

iii) M_2 can be connected to U_r by a 2-complex shock with speed

s = [ u_r (v_r - p) - m_2 (n_2 - p) ] / (v_r - n_2).

By Prop. A.6, this shock is admissible, and by Prop. A.7 the speed s(M_2, U_r) is larger than s(M_1, M_2). Therefore, we have presented a sequence of admissible waves with strictly increasing speed; this is the first Riemann solution.

II) Second solution: U_l --R_1--> M --S_2--> U_r (see Fig. 3.4):

i) U_l can be connected to M by a slow rarefaction wave. Since λ_1(U) < 0 for all U outside the elliptic region, all speeds in this wave are negative. So we have U_l --R_1--> M, with negative characteristic speeds.

ii) M can be connected to U_r by a 2-complex shock with speed given by

s = [ u_r (v_r - p) - m (n - p) ] / (v_r - n),

which is greater than zero, since v_r < p, n > p, m > 0, u_r > 0 and v_r < n. By Prop. A.5, this shock is admissible.

Therefore this is another sequence of admissible waves with increasing speed. Clearly, the solutions I and II are distinct. □

Fig. 3.5: The region Ω_1.
Fig. 3.6: The first solution of the Riemann problem with U_l ∈ Ω_1.

To construct Ω_1 (the set of points U_l possessing solutions similar to I and II), we consider the point A given in i) and define Ω_1 as the region bounded by B_1, B_2 and the 1-integral curve through A (Fig. 3.5). To construct the second sequence, we consider U_l inside Ω_1 and define U_* as the intersection of R_1(U_l) with B_2. Similarly, Ω_2(U_*) is defined as in Prop. 3.1.

PROP. 3.2.
Let U_l ∈ Ω_1 and U_r ∈ Ω_2(U_*). Then the Riemann problem (1.1), (1.2) has at least two distinct solutions which are admissible.

Proof. Since the rarefaction R_1(U_l) crosses B_2 at U_* lying underneath A [18], we can join U_l to U_* by a slow rarefaction; from U_* we follow the same construction as described in the proof of Prop. 3.1 (see Fig. 3.6 and Fig. 3.7). □

Fig. 3.7: The second solution of the Riemann problem, U_l ∈ Ω_1.
Fig. 4.1: The dynamical system with α(γ_1) = ∞.

4. Saddle-Saddle Connections. In order to check that a discontinuity satisfying the Rankine-Hugoniot condition (1.8) is in fact admissible, one needs to verify the existence of orbits connecting singularities of a vector field. In this context, we must look for saddle-saddle connections to verify whether a crossing shock is admissible. An important result is Chicone's theorem [4], which states that saddle-saddle connections of quadratic polynomial gradient systems lie on straight lines. Frommer and Bautin [27] showed an example of a quadratic system on the plane which is not a gradient, where there exist two distinct orbits connecting a pair of saddles. One orbit is a straight line, but the other is not. A generalization of this fact for non-gradient systems would be a very useful result, since by Prop. A.1 the only invariant lines are the secondary bifurcations, and therefore the transitional shocks would occur on these lines. However, by performing computer experiments using the Riemann Problem Solver Package of E. Isaacson, D. Marchesin and B. Plohr, we discovered pairs of saddles connected by only one orbit which is not a straight line segment.

If we consider U_l and U_r1 as in Fig. 4.1, we obtain the configuration shown there. The configuration shown in Fig. 4.2 is obtained considering a point U_r2 on the Hugoniot locus, which is a perturbation of U_r1. We take s < 0 and obtain U_r2 by increasing s in such a way that it remains negative.
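The arguments of this section and of the Appendix rule out closed orbits with Bendixson's criterion; the underlying divergence computation is one line, recorded here in our notation.

```latex
% From X_s(U, U_l) = -s(U - U_l) + F(U) - F(U_l) and tr dF(U) = 0:
\operatorname{div} X_s(U, U_l) = \operatorname{tr}\big(dF(U) - sI\big) = -2s .
% For s \neq 0 this has a fixed sign on the whole plane, so by Bendixson's
% criterion the field X_s(\,\cdot\,, U_l) admits no closed orbit.
```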
We remark that U_l, U_r1 and U_r2 are saddles; since s < 0 and div X_s(U, U_l) = -2s, by Bendixson's criterion [1] there is no closed orbit. This is important: computer experiments displaying configurations such as those shown in these two figures would be unreliable if closed orbits were possible. In the first case (Fig. 4.1), the stable manifold γ_1 of U_r1, which crosses the v axis, has a singularity at infinity as α-limit, while in the second case (Fig. 4.2), the manifold γ_2 has a repeller U_i as α-limit. Therefore, by continuity, there is a saddle-saddle connection between U_l and a point U_r between U_r1 and U_r2. It is clear that there is no straight line orbit connecting these two saddles. We will give more information about these connections in [2].

Fig. 4.2: The dynamical system with α(γ_2) = U_i.
Fig. A.1: The dynamical system of X_0(U, (0, a_2)).

Appendix. In this Appendix, we present the propositions and proofs used in the previous sections. The proof of the next proposition can be found in [13]; a direct proof is easy.

PROP. A.1. The only invariant straight lines of X_s(U, U_l), given by (2.4), are B_1, B_2 and B_3.

PROP. A.2. Let A and B be saddles of X_s(U, U_l) lying on the same secondary bifurcation line. Then on this line there is an orbit joining A to B.

Proof. The result follows by Bezout's theorem; A and B are the only singularities on the same secondary bifurcation, which is an invariant line of X_s(U, U_l). □

PROP. A.3. If U_- lies on B_2, M_1 = (0, -2p), and U_+ ∈ B_3 is such that s(U_-, M_1) = s(M_1, U_+), then U_+ = H(U_-) ∩ B_3.

Proof. Since s_1 = s(U_-, M_1) = u_- - √3 p and s_2 = s(M_1, U_+) = u_+ + √3 p, we have u_+ = u_- - 2√3 p. We obtain the conclusion using U_+ = (u_- - 2√3 p, -√3 u_- + 4p) to verify that U_+ ∈ H(U_-). □

PROP. A.4. If we consider U_l, M_1, M_2 as in Prop. 3.1, then s(U_l, M_1) < s(M_1, M_2).

Proof. The proposition follows from Prop. A.3, because the speed decreases along the shock curve. □

PROP. A.5. Given M and U_r as in Prop.
3.1, then there exists an orbit γ with M as α-limit of γ and U_r as ω-limit of γ.

Proof. To show that there exists an orbit of X_s(U, U_l) connecting the pair M, U_r given in Prop. 3.1, we consider A = (0, a_2) for 0 < a_2 < p and B = (b_1, b_2) ∈ H(A), and recall that the shock speed s is given by s = b_1 (b_2 - p) / (b_2 - a_2). If b_2 = p, then s = 0 and X_s(U, U_l) is given by

X_0(U, A) = ( (1/2)(-u² + v² - a_2²) + p(v - a_2), u(v - p) ),

and the dynamical system is shown in Fig. A.1 [27]. Now, we take B' as a small perturbation of the point B, such that b'_2 > p and B' ∈ H(A). Thus s > 0 and X_s(U, U_l) is a perturbation of X_0(U, U_l), where the perturbation is given by

-s ( u, v - a_2 ),

which is a vector field pointing towards A. Therefore A is an attracting focus. In this case, since A is an attractor and div X_s(U, U_l) = -2s ≠ 0, there is no closed orbit, by Bendixson's criterion. Hence the unstable manifold of B tends to A. Therefore, there is a saddle-attractor connection. Since saddle-attractor connections are stable, under a small perturbation of A we still have orbits connecting A and B. Therefore the proposition is proved for A = U_r and B = M. □

PROP. A.6. Given M_2 and U_r as in Prop. 3.1, then there exists an orbit γ with M_2 as α-limit and U_r as ω-limit of γ.

The proof is omitted because its idea is the same as that in Prop. A.5.

PROP. A.7. Let M_1, M_2 and U_r be as in Prop. 3.1. Then s(M_1, M_2) < s(M_2, U_r).

Proof. We must show that

m_2 + √3 p < [ u_r (v_r - p) - m_2 (n_2 - p) ] / (v_r - n_2).

We note that u_r > m_2, n_2 < p, v_r < n_2 and -2p < -p < √3 u_r + v_r < (√3 + 1) p. So, since M_2 ∈ B_3, we have the desired inequality. □

The next propositions present results related to the instability of states inside the elliptic region E and to the invariance of waves connecting states outside E. These results were obtained simultaneously and independently by H. Holden, L. Holden and N. Risebro. Before proving them, we remark that a state inside E can be connected to another state outside E only by a (complex) shock, since rarefaction curves do not enter E.

PROP. A.8.
Let U_l and U_r be two states outside the open elliptic region E of a generic system of two conservation laws. Then there is no admissible wave from U_l to U_r with an intermediate state in the elliptic region.

Proof. For simplicity, we suppose that there exists only one intermediate state M in the Riemann solution. By contradiction, we suppose that M is inside E and there exist two admissible shocks, one from U_l to M and another from M to U_r. So, as far as the dynamical system with U_l and M as singularities is concerned, M is an attractor and, by (2.6), ℜ(μ(M)) < 0 implies s(U_l, M) > 0. On the other hand, for the dynamical system with M, U_r as singularities, M is a repeller and s(M, U_r) < 0. Therefore, the wave speed from left to right is decreasing, which is a contradiction. □

PROP. A.9. For quadratic polynomial conservation laws, with D = I in (2.1), and homogeneous parts in Cases I, ..., IV [21], if U_l is inside the elliptic region, then H(U_l) ∩ E = U_l.

Proof. By contradiction, if H(U_l) ∩ E ≠ U_l, then there is a state U_r ∈ H(U_l) such that U_r ∈ E. Since F is quadratic, using the midpoint rule [13],

s (U_r - U_l) = F(U_r) - F(U_l) = dF(U_m) (U_r - U_l),

where U_m = (U_l + U_r)/2 lies inside E (E is convex for Cases I, ..., IV [18]). This implies that U_r - U_l is an eigenvector of dF(U_m), where U_m lies inside E, with real eigenvalue s, which is a contradiction. □

PROP. A.10. Under the hypotheses of the previous proposition, if U_l and U_r, U_l ≠ U_r, are states inside the elliptic region E, then if the Riemann solution exists, it has at least one intermediate state M which is outside E.

Proof. By contradiction, we assume that there does not exist M outside E. Then M (which may be U_r) is inside E and we have a shock from U_l to M. This implies that H(U_l) ∩ E ≠ U_l, which contradicts Prop. A.9. □

REFERENCES

[1] A. A. ANDRONOV, A. A. VITT AND S. E. KHAIKIN, Theory of Oscillations, Addison-Wesley Pub. Co., Inc., Massachusetts (1966).
[2] A. V. AZEVEDO, Doctoral Thesis, PUC/RJ, Rio de Janeiro, Brazil (1990).
[3] J. BELL, J. R.
TRANGENSTEIN AND G. SHUBIN, Conservation Laws of Mixed Type Describing Three-Phase Flow in Porous Media, SIAM Jour. Appl. Math. 46 (1986), pp. 1000-1017.
[4] C. CHICONE, Quadratic Gradients on the Plane are Generically Morse-Smale, Jour. Diff. Eq. 33 (1979), pp. 159-166.
[5] C. C. CONLEY AND J. A. SMOLLER, Viscosity Matrices for Two-Dimensional Nonlinear Hyperbolic Systems, Comm. Pure Appl. Mat. XXIII (1970), pp. 867-884.
[6] R. COURANT AND K. O. FRIEDRICHS, Supersonic Flow and Shock Waves, Interscience Publishers, New York (1948).
[7] F. J. FAYERS AND J. D. MATTHEWS, Evaluation of Normalized Stone's Methods for Estimating Three-Phase Relative Permeabilities, Soc. Petrol. Engin. J. 24 (1984), pp. 225-232.
[8] I. M. GEL'FAND, Theory of Quasilinear Equations, English transl. in Amer. Mat. Soc. Transl., ser. 2, 29 (1963), pp. 295-381.
[9] H. GILQUIN, Glimm's Scheme and Conservation Laws of Mixed Type, SIAM J. Sci. Stat. Comput. 10 (1989), pp. 133-153.
[10] M. E. GOMES, Riemann Problems Requiring a Viscous Profile Entropy Condition, Adv. Appl. Math. 10 (1989), pp. 285-323.
[11] M. E. GOMES, On Saddle Connections in Planar, Quadratic Dynamical Systems with Applications to Conservation Laws, Preprint (1989).
[12] H. HOLDEN, On the Riemann Problem for a Prototype of a Mixed Type Conservation Law, Comm. Pure Appl. Mat. XL (1987), pp. 229-264.
[13] E. ISAACSON, D. MARCHESIN AND B. PLOHR, Transitional Waves for Conservation Laws, CMS Technical Report #89-20, U. Wisconsin-Madison (1988), to appear in SIAM J. Math. Anal., 1990.
[14] E. ISAACSON AND J. B. TEMPLE, The Riemann Problem Near a Hyperbolic Singularity II, SIAM J. Appl. Math. 48 (1988), pp. 1287-1301.
[15] P. LAX, Hyperbolic Systems of Conservation Laws II, Comm. Pure Appl. Math. 10 (1957), pp. 537-566.
[16] D. MARCHESIN AND H. B. MEDEIROS, A Note on the Stability of Eigenvalue Degeneracy in Nonlinear Conservation Laws of Multiphase Flow, Current Progress in Hyperbolic Systems: Riemann Problems and Computations (Bowdoin, 1988), Contemporary Mathematics 100, Amer. Math. Soc.
(1989), pp. 215-224.
[17] O. A. OLEINIK, On the Uniqueness of the Generalized Solution of the Cauchy Problem for a Non-Linear System of Equations Occurring in Mechanics, Uspekhi Mat. Nauk (Russian Math. Surveys) 12 (1957), pp. 169-176.
[18] C. F. PALMEIRA, Line Fields Defined by Eigenspaces of Derivatives of Maps from the Plane to Itself, Proceedings of the VI International Conference on Differential Geometry, Santiago de Compostela, Spain (1988).
[19] R. PEGO AND D. SERRE, Instability in Glimm's Scheme for Two Systems of Mixed Type, SIAM J. Numer. Anal. 25 (1988), pp. 965-988.
[20] M. SHEARER, D. SCHAEFFER, D. MARCHESIN AND P. PAES-LEME, Solution of the Riemann Problem for a Prototype 2x2 System of Non-Strictly Hyperbolic Conservation Laws, Arch. Rat. Mech. Anal. 97 (1987).
[21] D. SCHAEFFER AND M. SHEARER, The Classification of 2x2 Systems of Non-Strictly Hyperbolic Conservation Laws, with Application to Oil Recovery; with appendix by D. Marchesin, P. Paes-Leme, M. Shearer, D. Schaeffer, Comm. Pure Appl. Math. XL (1987), pp. 141-178.
[22] M. SHEARER, Admissibility Criteria for Shock Wave Solutions of a System of Conservation Laws of Mixed Type, Proceedings of the Royal Society of Edinburgh 93A (1983), pp. 233-244.
[23] M. SHEARER, Non-uniqueness of Admissible Solutions of Riemann Initial Value Problems for a System of Conservation Laws of Mixed Type, Arch. Rat. Mech. Anal. 93 (1986), pp. 45-59.
[24] M. SHEARER, The Riemann Problem for 2x2 Systems of Hyperbolic Conservation Laws with Case I Quadratic Nonlinearities, J. Differential Equations 80 (1989), pp. 343-363.
[25] M. SHEARER, Loss of Strict Hyperbolicity of the Buckley-Leverett Equations for Three-Phase Flow in a Porous Medium, Numerical Simulation in Oil Recovery (ed. M. F. Wheeler), IMA vol. 11, Springer-Verlag (1988).
[26] N. VVEDENSKAYA, An Example of Nonuniqueness of a Generalized Solution of a Quasilinear System of Equations, Sov. Math. Dokl. 2 (1961), pp. 89-90.
[27] YE YAN-QIAN AND OTHERS, Theory of Limit Cycles, Translations of Mathematical Monographs, AMS, Providence, Rhode Island (1984).

ON THE LOSS OF REGULARITY OF SHEARING FLOWS OF VISCOELASTIC FLUIDS

BERNARD D. COLEMAN*

Abstract. In this expository article a brief survey is given of a part of the theory of rectilinear shearing flows of simple fluids with fading memory. The topics discussed have relevance to the search for a theory of melt fracture of polymers at high rates of shear. The emphasis is laid on unbounded growth of shear-acceleration waves and breakdown of initially smooth solutions.

Key words. Viscoelasticity, non-Newtonian fluids, hyperbolic systems, melt fracture of polymers.

1. Preface. In a work published in 1968 [1], Morton Gurtin and I proposed that the phenomenon rheologists call "melt fracture" [2-10] or "elastic turbulence" may be an example of the formation of shock waves in viscoelastic fluids subjected to high rates of shear. Experimenters have not yet decided whether such is the case or whether the phenomenon is caused by failure of adherence to bounding surfaces. Moreover, the situation has been complicated by the realization that there are several phenomena that are occasionally grouped under the appellation "melt fracture", but which, like the "spurt" phenomenon studied by Vinogradov and co-workers [11],¹ are associated with an apparent loss of monotonicity of the shear-stress function. In the last two decades much progress has been made in the development of the theory of the formation and growth of singularities in viscoelastic fluids with monotone shear-stress functions. I attempt to summarize here parts of the old and new work on the subject that I believe will prove important for the attainment of an understanding of the origin of melt fracture.

2. General Theory of Simple Fluids in Shear.
A motion is called a rectilinear shearing flow if there is a Cartesian coordinate system in which the components of the velocity field have the form

(2.1)    ẋ = 0,   ẏ = v(x, t),   ż = 0.

Such a flow is isochoric. Of frequent occurrence in the theory of rectilinear shearing flows in viscoelastic fluids is the real-valued function λᵗ, defined by

(2.2)    λᵗ(s) = -∫_{t-s}^{t} ∂v(x, τ)/∂x dτ,   s ≥ 0;

λᵗ is called the history up to time t (at x) of the relative shear strain.² Shortly after Noll [13] formulated his definition of a simple fluid, Coleman and Noll [14] showed that for a general incompressible simple fluid it follows from the principle of material frame indifference that the components of the stress tensor T in a rectilinear shearing flow obey the relations

(2.3)    T^{xy}(t) = t(λᵗ),   T^{yy}(t) - T^{xx}(t) = n_1(λᵗ),   T^{yy}(t) - T^{zz}(t) = n_2(λᵗ),   T^{xz} = T^{yz} = 0,

in which t, n_1, and n_2 are real-valued functionals that depend on the material under consideration and satisfy the identities

(2.4)    t(-λᵗ) = -t(λᵗ),   n_i(-λᵗ) = n_i(λᵗ),   i = 1, 2.

If one assumes that the body force density b has a potential ψ, i.e., that b = -grad ψ, then an elementary analysis shows that it follows from (2.1)-(2.3) that the dynamical equations,

div T + ρb = ρ v̇,

in which ρ is the mass density, are equivalent to the assertions that

(2.8)    ρ ∂v(x, t)/∂t = ∂/∂x t(λᵗ) + α(t)

and

T^{xx} = α(t) y + β(t),

with α and β functions of t only. It is easily shown that α is the driving force (per unit volume) in the direction of flow. When the specific driving force α is specified, (2.8) is a functional-differential equation for the function v in (2.1) and (2.2).

Steady Rectilinear Shearing Flows. As we shall here be concerned with the dynamical stability of steady rectilinear shearing flows, a brief review of the properties of such solutions of (2.8) is needed.

*Department of Mechanics and Materials Science, Rutgers University, Piscataway, New Jersey.
¹See also Bagley, Cabott, and West [12].
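Before the review of steady flows, it may help to record the Newtonian special case (our illustration, not part of the survey): if t(λᵗ) depends only on the instantaneous rate of shear, t(λᵗ) = η ∂v/∂x with constant viscosity η > 0, the functional-differential equation (2.8) collapses to a linear parabolic equation.

```latex
% Newtonian special case of (2.8): T^{xy} = \eta\,\partial v/\partial x, so
\rho\,\frac{\partial v}{\partial t}
  = \eta\,\frac{\partial^2 v}{\partial x^2} + \alpha(t).
% This equation is smoothing: initially smooth data stay smooth and no
% shocks can form. The breakdown of regularity discussed in this survey is
% therefore an effect of the memory (elastic) part of the response, absent
% in the purely viscous case.
```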
If the flow is steady, then (2.2) reduces to

(2.10)    λᵗ(s) = -κ(x) s,

with

(2.11)    κ(x) = v'(x),³

and (2.8) becomes

(2.13)    (d/dx) τ(κ(x)) + α = 0,

where τ is the shear-stress function which is familiar in the theory of steady viscometric flows⁴ [15,16] and which is determined by the functional t through the relation τ(κ) = t(λ_κ), where λ_κ(s) = -κs. The number κ is called the rate of shear. It follows from (2.4) that the function τ, which takes real numbers into real numbers, is an odd function, i.e.,

(2.15)    τ(-κ) = -τ(κ),

and hence τ(0) = 0. Thermodynamics requires that τ(κ)κ be never negative [16], and hence that τ'(0) ≥ 0. Let us here assume

(2.16)    τ'(κ) > 0 for all κ,

which implies that τ is invertible. One writes τ⁻¹ for the inverse of τ. It follows from (2.10) and (2.13) that a steady rectilinear shearing flow can occur only when the specific driving force α is independent of t and

(2.17)    τ(κ(x)) = -αx + ν,

with ν, like α, constant (in space and time). In view of (2.16), specification of α and ν determines the rate of shear as a function of x:

(2.18)    κ(x) = τ⁻¹(-αx + ν).

If the driving force α is zero, κ is independent of x, and, to within an added constant, v has the form

(2.19)    v = κx.

A steady rectilinear flow obeying (2.19) is called a simple shearing flow, and is a motion that can take place in a fluid that lies between and adheres to two infinite

³An apostrophe indicates a derivative.
⁴For a survey of that theory see [17].
In view of (2.15) and (2.16), these boundary conditions are compatible with (2.18) only if ν = 0, and hence (cf. [15,17])

(2.22) v'(x) = κ(x) = −τ⁻¹(ax), i.e., v(x) = −∫_{−d/2}^{x} τ⁻¹(aξ) dξ.

3. Fading Memory and Instantaneous Elasticity. The postulate of fading memory introduced by Coleman and Noll [18,19] is an assumption of smoothness for constitutive functionals that relate certain variables, such as stress, to the history of others, such as strain. (A more axiomatic treatment is given in the work of Coleman and Mizel [20]; the presentation given below draws heavily on that of Coleman and Gurtin [1].) The postulate asserts that such functionals have continuous differentials with respect to a particular norm ‖·‖ on a space of histories. This norm has two important properties. (i) For a history such as λ^t, values λ^t(s) at large s, i.e., occurring in the "remote past", have less influence on ‖λ^t‖ than values at small s. (ii) A material whose constitutive functionals are smooth with respect to this norm shows instantaneous elasticity: a jump in strain results in a jump in stress that depends smoothly on the jump in strain, albeit the response function governing such dependence is determined by the material's history prior to the jump.

As the present discussion is restricted to shearing motions governed by the functional-differential equation (2.8), i.e.,

(3.1) ρ ∂t v(x,t) = ∂x t̂(λ^t) + a(t),

it is not necessary to present the theory of fading memory in full generality, and we may confine our attention to its implications for the functional t̂. The elements λ^t of the domain of t̂ are histories of the relative shear strain. By (2.2), each such history is a function on [0,∞) obeying

(3.2) λ^t(0) = 0,

and, therefore, the value t̂(λ^t) of t̂ is determined by the restriction of λ^t to (0,∞). Thus, in a discussion of the functional t̂, we need not distinguish between a relative shearing history and its restriction to (0,∞).

For each (finite) p ≥ 1 and each influence function k, i.e., each positive, monotone-decreasing, measurable function k with s^p k(s)^p integrable over (0,∞), let L_p(k), with norm ‖·‖, be the Banach space formed in the usual way from real-valued functions g on (0,∞) for which

(3.3) ‖g‖ = ( ∫_0^∞ |g(s) k(s)|^p ds )^{1/p}

is finite. In the present context, the postulate of fading memory asserts that there is a p ≥ 1 and an influence function k such that t̂ has L_p(k) for its domain and is twice continuously differentiable in the following sense: For each g in L_p(k) there is, on L_p(k), a bounded linear form δt̂(g|·) and a bounded, symmetric, bilinear form δ²t̂(g|·,·) such that

(3.4) t̂(g + l) = t̂(g) + δt̂(g|l) + (1/2) δ²t̂(g|l,l) + o(‖l‖²).

It is assumed that each of the functionals δt̂(·|·) and δ²t̂(·|·,·) is jointly continuous in all its arguments. Since δt̂(g|·) is both linear and continuous on L_p(k), there is a function K in L_q(k), with q⁻¹ + p⁻¹ = 1, such that for each l in L_p(k)

(3.5) δt̂(g|l) = ∫_0^∞ K(s) l(s) k(s) ds.

Of course, K depends on g. One puts

(3.6) G'(s; g) = K(s) k(s),

so that

(3.7) δt̂(g|l) = ∫_0^∞ G'(s; g) l(s) ds.

It is assumed that the mapping G'(s;·) is, for each s, a continuous functional on L_p(k) and that, for each g, G'(·;g) has a bounded derivative G''(·;g). It is further assumed that the mapping g ↦ G''(·;g)k(·)⁻¹ is continuous from L_p(k) into L_q(k). If we put

(3.8) G(s; g) = −∫_s^∞ G'(σ; g) dσ = −∫_s^∞ K(σ) k(σ) dσ, (0 < s < ∞),

then, for each g,

(3.9) lim_{s→∞} G(s; g) = 0.

The function G(·;g) is called the relaxation modulus (for perturbations about the history g). As we shall see below, G(0;g) is an "instantaneous modulus".

Let R be a configuration that occurs at some time, say t = 0, during the rectilinear motion of the fluid, and let γ(x,t) and u(x,t) be the shear strain and displacement at x at time t computed using R as the reference configuration:

(3.10) γ(x,t) = ∂x u(x,t), u(x,t) = ∫_0^t v(x,σ) dσ.

By (2.2), the relative shear strain at time t − s (taken relative to the configuration at time t) is

(3.11) λ^t(s) = γ(x, t−s) − γ(x, t).
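The discounting of the remote past built into the norm (3.3) can be made concrete with a small numerical sketch. The choices p = 2 and k(s) = e^{−s} below are illustrative only; the theory merely requires k to be a valid influence function.

```python
import math

# L_p(k) history norm  ||g|| = ( int_0^inf |g(s) k(s)|^p ds )^(1/p),
# sketched with p = 2 and the (assumed) influence function k(s) = exp(-s).
def history_norm(g, p=2.0, k=lambda s: math.exp(-s), smax=50.0, n=20000):
    h = smax / n
    total = 0.0
    for i in range(n + 1):
        s = i * h
        w = 0.5 if i in (0, n) else 1.0  # composite trapezoidal weights
        total += w * (abs(g(s)) * k(s))**p * h
    return total**(1.0 / p)

# A unit strain event in the recent past outweighs the same event long ago:
recent = history_norm(lambda s: 1.0 if s < 1.0 else 0.0)
remote = history_norm(lambda s: 1.0 if 9.0 < s < 10.0 else 0.0)
```

Here `recent` is roughly 0.66 while `remote` is smaller by several orders of magnitude, which is exactly property (i) of the fading-memory norm.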
Suppose now that at a particular value of x the shear strain γ suffers a jump B at time t, so that

(3.12) γ(x, t+) − γ(x, t−) = B.

Then the histories of the relative shear strain immediately before and immediately after the jump are given, for each s > 0, by

(3.13) λ^{t−}(s) = γ(x, t−s) − γ(x, t−),
(3.14) λ^{t+}(s) = λ^{t−}(s) − B·1†,

with 1† the constant function on (0,∞) with value 1. For the shear stresses immediately before and after the jump in strain, we have

(3.15a) T^{xy}(t−) = t̂(λ^{t−}),
(3.15b) T^{xy}(t+) = t̂(λ^{t−} − B·1†).

Thus the jump in stress can be expressed as a function of the jump B in strain, and this function depends on the history λ^{t−} as a parameter:

(3.16) [T^{xy}(t)] := T^{xy}(t+) − T^{xy}(t−) = f̂(B; λ^{t−}) := t̂(λ^{t−} − B·1†) − t̂(λ^{t−}).

The functions B ↦ f̂(B; λ^{t−}) and B ↦ t̂(λ^{t−} − B·1†) are called instantaneous response functions and characterize the instantaneous elasticity of the material. For each relative history λ^t, the instantaneous modulus E1 and the instantaneous second-order modulus E2 are defined by

(3.17a) E1(λ^t) = ∂_B f̂(B; λ^t)|_{B=0},
(3.17b) E2(λ^t) = ∂²_B f̂(B; λ^t)|_{B=0}.

In view of (3.4) and (3.16), we have

(3.18a) E1(λ^t) = −δt̂(λ^t | 1†),
(3.18b) E2(λ^t) = δ²t̂(λ^t | 1†, 1†),

and (3.7) and (3.8) yield

(3.19) E1(λ^t) = G(0; λ^t).

It follows from (2.4) that E1 is an even function of λ^t and E2 an odd function:

(3.20) E1(−λ^t) = E1(λ^t), E2(−λ^t) = −E2(λ^t).

When g in (3.4) has the special form g(s) = −κs, i.e., when g = −κι, one writes simply G(s;κ) for G(s;g), G'(s;κ) for G'(s;g), and Ei(κ) for Ei(g) in the above equations, e.g.,

(3.21a) E1(κ) = −δt̂(−κι | 1†) = E1(−κ),
(3.21b) E2(κ) = δ²t̂(−κι | 1†, 1†) = −E2(−κ).

As E2 is an odd function,

(3.22) E2(0) = 0,

i.e., the instantaneous second-order modulus E2 is zero for a fluid that has never been sheared. Let us here assume

(3.23) (d/dκ) E2(κ)|_{κ=0} ≠ 0.

Thermodynamics requires that the instantaneous first-order modulus E1 be not negative for a fluid that has not been sheared, i.e., E1(0) ≥ 0. For simplicity, let us assume that

(3.24) E1(κ) > 0 for all κ.

4. Rectilinear Shear-Acceleration Waves. In our present subject the word wave refers to a propagating singular surface.
A rectilinear shear wave is, at each instant, a planar surface perpendicular to the x-axis of the fixed Cartesian system in which (3.1) holds. If we write x_t for the value of x on this surface at time t, the velocity of the wave is

(4.1) c = c(t) = dx_t/dt.

It is usual to assume that v(x,t) and all its derivatives ∂x^m ∂t^n v(x,t) are continuous functions of (x,t) for (x,t) ≠ (x_t,t) and that these quantities experience, at worst, jump discontinuities, [∂x^m ∂t^n v], across the surface x = x_t. The rectilinear shear waves for which [v] = 0, but [∂t v] and [∂x v] are not both zero, are called shear-acceleration waves.

The velocity c(t) of a shear-acceleration wave propagating in a fluid obeying the postulate of smoothness laid down in the previous section obeys the formula [1,21]

(4.2) ρ c(t)² = E1(λ^t),

in which E1(·) is the functional in (3.17a), (3.18a), and (3.19), and λ^t is the history up to t (at x_t) of the relative shear strain; i.e., E1(λ^t) is the instantaneous modulus at the wave. (The L_p(k)-valued function (x,t) ↦ λ^t is continuous across such a wave; see [22], pp. 253, 254.)

The goal of the research reported in [1] was to derive an exact formula for the amplitude a = a(t) = [∂t v] of a shear-acceleration wave assuming that the wave is propagating into a region undergoing a steady rectilinear shearing flow. Thus, taking c(t) to be positive, Coleman and Gurtin supposed that for x ≥ x_t, t ≥ 0, and −∞ < σ ≤ t,

(4.4) v^y(x,σ) = v(x), v^x = v^z = 0,

and hence ahead of the wave, for all σ ≤ t, the history λ^σ is given by

(4.5) λ^σ(s) = −κs, (0 ≤ s < ∞),

with κ = v'(x), and at the wave,

(4.6) κ = κ(x_t) = v'(x_t).

In such a case, (4.2) becomes, in the notation of equation (3.21a),

(4.7) ρ c(t)² = E1(κ(x_t)).

It was shown that the quantity

(4.8) b = b(t) = a(t) c(t)^{1/2}

obeys a differential equation of Bernoulli type,

(4.9) db/dt + μ(t) b + φ(t) b² = 0,

with

(4.10) μ(t) = −G'(0; κ(x_t)) / (2 E1(κ(x_t))), φ(t) = E2(κ(x_t)) / (2 E1(κ(x_t)) c(t)^{3/2}).

Solution of this equation gave the following theorem [1, p.
178]: The amplitude a(t) of a shear-acceleration wave which since time t = 0 has been advancing into a region undergoing a steady rectilinear shearing flow (4.4) is given by the explicit formula

(4.11) a(t)c(t)^{1/2} = a(0)c(0)^{1/2} e^{−W(t)} / [ 1 + a(0)c(0)^{1/2} ∫_0^t φ(σ) exp(−W(σ)) dσ ], W(t) = ∫_0^t μ(σ) dσ,

with μ and φ as in (4.10). In the same paper it was observed that methods developed by Coleman, Greenberg, and Gurtin [23, §3] can be employed to show that if the motion of the fluid ahead of the wave is a rectilinear shearing flow that is not steady, i.e., if v^y in (4.4) depends on time, then (4.11) still holds, but μ and φ are not given by (4.10).

The implications of equation (4.11) are particularly transparent in the case in which the motion ahead of the wave is a simple shearing flow, so that (4.4) becomes

(4.12) v^y(x,σ) = v0 + κ0 x, for all x ≥ x_t, t ≥ 0, and −∞ < σ ≤ t.

Here κ0 is independent of x,

(4.13) c = c(κ0) = (E1(κ0)/ρ)^{1/2} = const.,

and (4.11) reduces to

(4.14a) a(t) = a(0) e^{−μ(κ0)t} / [ 1 − (a(0)/A(κ0))(1 − e^{−μ(κ0)t}) ],

with

(4.14b) μ(κ0) = −G'(0; κ0) / (2 E1(κ0)) = const., A(κ0) = c(κ0) G'(0; κ0) / E2(κ0) = const.

E1(κ0) is the initial value, and G'(0; κ0) the initial slope, of the stress relaxation function for shearing perturbations about κ0. By (3.24), E1(κ0) > 0; let us also assume G'(0; κ0) < 0, so that μ(κ0) > 0. When E2(κ0) = 0, as is the case when κ0 = 0, (4.14a) reduces to

(4.15) a(t) = a(0) e^{−μ(κ0)t}.

In particular, the amplitude of a shear-acceleration wave propagating into a region that has not been sheared previously decays to zero exponentially. In view of (3.23), there will be a range of values of the rate of shearing for which

(4.16) E2(κ0) ≠ 0.

When E2(κ0) ≠ 0, the number |A(κ0)| is finite and plays the role of a critical amplitude. Because we here take c to be positive and have μ(κ0) > 0, if |a(0)| < |A(κ0)| or if a(0)E2(κ0) > 0, then a(t) → 0 monotonically as t → ∞. If a(0) = A(κ0), then a(t) = a(0).
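The dichotomy controlled by the critical amplitude can be reproduced by integrating the Bernoulli equation (4.9) directly. The constants μ = 1 and φ = −0.5 below are illustrative only (they correspond to no particular fluid); with them the critical value of b is −μ/φ = 2.

```python
# RK4 integration of the Bernoulli equation db/dt = -mu*b - phi*b**2.
# mu = 1, phi = -0.5 are illustrative; the critical value is b* = -mu/phi = 2.
def simulate_amplitude(b0, mu=1.0, phi=-0.5, dt=1.0e-4, t_end=4.0, b_max=1.0e3):
    f = lambda b: -mu * b - phi * b * b
    t, b = 0.0, b0
    while t < t_end and abs(b) < b_max:
        k1 = f(b)
        k2 = f(b + 0.5 * dt * k1)
        k3 = f(b + 0.5 * dt * k2)
        k4 = f(b + dt * k3)
        b += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        t += dt
    return t, b  # time reached and amplitude there
```

Starting below the critical value (b0 = 1) the amplitude decays monotonically; starting above it (b0 = 3) the numerical amplitude passes 10³ near t = ln 3 ≈ 1.0986, the finite escape time given by the closed-form solution for these constants.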
But, if both |a(0)| > |A(κ0)| and a(0)E2(κ0) < 0, then |a(t)| → ∞ monotonically and in a finite time t∞, given by

(4.17) t∞ = −(1/μ(κ0)) ln(1 − A(κ0)/a(0)) = (2G(0; κ0)/G'(0; κ0)) ln(1 − c(κ0)G'(0; κ0)/(E2(κ0)a(0))).

Thus, for a shear-acceleration wave propagating into a region undergoing the simple shearing motion (4.4), we have the following result: Although it is impossible for such a wave to grow in amplitude when κ0 = 0, the wave can achieve infinite amplitude in finite time if κ0 ≠ 0, provided [∂t v] (or −c[∂x v]) is of proper sign and exceeds in magnitude the critical amplitude |A|. One expects that for κ0 near to zero, this critical amplitude, which is infinite for κ0 = 0, will decrease as the magnitude of κ0 is increased.

If the region ahead of the wave is undergoing a steady rectilinear shearing flow for which v'(x) does not reduce to a constant κ0 independent of x, the formula (4.11) does not reduce to (4.14) but can be analyzed (as in [23, §3]) if E1(κ), E2(κ), and G'(0;κ) are known as functions of κ and κ = v' is specified as a function of x, as it is in channel flow, for which (2.21) and (2.22) hold and κ = −τ⁻¹(ax). In channel flow, |κ| is a maximum at the bounding surfaces x = ±d/2. From this and the observations made above for simple shearing flow, one expects that a shear-acceleration wave propagating into a fluid undergoing steady channel flow is more likely to attain infinite amplitude when it is near rather than far from the bounding surfaces.

It is conjectured, but a proof is lacking, that the approach of |[∂t v]| to infinity as t → t∞ signifies the appearance of a jump discontinuity in v, i.e., the appearance of a shear-shock wave.

5. Formation of Singularities in Initially Smooth Motions.
The results on the amplitude of acceleration waves suggest that when the functional t̂ obeys the postulate of fading memory adopted here, the solutions of equation (3.1) corresponding to smooth initial data with |∂x v| or |∂t v| large will be such that these quantities become infinite at some value of x in finite time. Before discussing results of this type, let us examine some special cases of fluids with fading memory.

Among the fluids obeying the postulate of fading memory as stated in Section 3 are those for which the functional t̂ has the form

(5.1) t̂(λ^t) = Σ_{n=1}^∞ ∫_0^∞ ⋯ ∫_0^∞ K_n(s_1, …, s_n) λ^t(s_1) ⋯ λ^t(s_n) ds_1 ⋯ ds_n,

with each function K_n decaying rapidly to zero for large values of its arguments s_i, i = 1, …, n, and with these functions tending to zero in an appropriate sense as n → ∞. The postulate is obeyed also by BKZ fluids [24], for which

(5.2) t̂(λ^t) = ∫_0^∞ H(λ^t(s), s) ds = ∫_{−∞}^t H(γ(τ) − γ(t), t − τ) dτ,

provided H is sufficiently smooth and |H(a,b)| decays sufficiently rapidly with increasing b at fixed a and does not grow too rapidly with increasing |a| at fixed b. (For BKZ fluids, the existence of δ²t̂ in (3.4) requires that p in (3.3) be ≥ 2.) Also obeying the postulate are the extensions of the BKZ theory proposed recently by Coleman and Zapas [25]. It should be observed, however, that constitutive relations of the form,

(5.3) T(t) = w(γ(t)) + ∫_{−∞}^t K(t − σ) c(γ(σ)) dσ,

with γ a component of strain and T a component of stress, do not fall as special cases of the equation (2.3)₁, i.e.,

(5.4) T^{xy}(t) = t̂(λ^t),

for the shear stress in a simple fluid. The large and growing literature on the qualitative theory of the dynamical equations of viscoelastic solids obeying relations of the form (5.3) is interesting in its own right and has on occasion suggested research applicable to fluids. (For surveys see [26] and [27].)

Slemrod [28] was the first to obtain a theorem showing the non-existence of global smooth solutions to the dynamical equation (2.8) for rectilinear shearing flows of simple fluids under the postulate of fading memory. He considered the (albeit unlikely) special case of (5.1) in which

(5.5) K_n(s_1, …, s_n) = h_n β^n e^{−β(s_1 + ⋯ + s_n)}, (n = 1, 3, 5, …),

and K_n = 0 for n even, and hence (5.4) reduces to

(5.6) T^{xy}(t) = h( β ∫_0^∞ e^{−βs} λ^t(s) ds ),

with h(ξ) = Σ_{n odd} h_n ξ^n, so that h(−ξ) = −h(ξ), and he studied finite perturbations v̂ of steady simple shearing flow. Thus he considered velocity fields v of the form,

(5.8) v(x,t) = Vx/d + v̂(x,t),

for which (5.6) becomes an expression, (5.9), for the shear stress in terms of the history of v̂. It was assumed that v̂(x,t) is given as a smooth function for −∞ < t ≤ 0 and 0 ≤ x ≤ d. Slemrod's analysis was based on the clever observation that after the change of variables

(5.10) u(x,t) = v̂(x,t) − β ∫_0^∞ e^{−βs} v̂(x, t−s) ds, w(x,t) = ∂x v̂(x,t),

the functional-differential equation (3.1) with a ≡ 0 goes over into a quasilinear first-order system, (5.11), for the pair (u, w), with coefficients, (5.12), built from h and β. After subjecting (5.11) to the prescribed data,

(5.13) u(d,t) = u(0,t) = 0, u(x,0) = u_0(x), w(x,0) = w_0(x),

and studying the evolution of the Riemann invariants along the characteristic curves of (5.11), he obtained results which may be summarized as follows: If h'(0) > 0, h''(0) ≠ 0, if max_x |u_0(x)| and max_x |w_0(x)| are sufficiently small, if |∂x u_0(x)| or |∂x w_0(x)| are sufficiently large, and if appropriate sign conditions, depending on the sign of h''(0), hold for ∂x u_0 and ∂x w_0 where |∂x u_0| and |∂x w_0| attain their maxima, then (5.11), (5.13) has a classical solution with u and w in C¹([0,d]) for only a finite time; i.e.,

(5.14) max_x |∂x u(x,t)| + max_x |∂x w(x,t)| → ∞ in finite time,

which yields a number t∞ for which

(5.15) lim_{t→t∞} max_x [ |∂x v̂(x,t)| + |∂t v̂(x,t)| ] = ∞.

This result was extended to channel flows by Hattori [29] and further extended by Gripenberg [30]. (Related results for a fluid of the rate type are given in the recent monograph of Renardy, Hrusa, and Nohel [26, pp. 67-69].) Many results on "blow up" of smooth solutions were subsequently obtained using constitutive equations appropriate for solids.
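Reductions of this kind rest on the fact that an exponentially weighted memory integral of the strain obeys a local differential equation: if z(t) = β ∫_0^∞ e^{−βs} [γ(t−s) − γ(t)] ds, then dz/dt = −βz − γ'(t). This identity can be checked numerically; the strain history γ(t) = sin t below is purely illustrative.

```python
import math

# z(t) = beta * int_0^inf e^(-beta*s) [gamma(t-s) - gamma(t)] ds obeys the
# local ODE dz/dt = -beta*z - gamma'(t); illustrative history gamma = sin.
beta = 2.0
gamma, dgamma = math.sin, math.cos

def z(t, smax=40.0, n=40000):
    h = smax / n
    acc = 0.0
    for i in range(n + 1):
        s = i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoidal weights
        acc += w * beta * math.exp(-beta * s) * (gamma(t - s) - gamma(t)) * h
    return acc

t0, eps = 0.7, 1.0e-4
lhs = (z(t0 + eps) - z(t0 - eps)) / (2.0 * eps)  # dz/dt, central difference
rhs = -beta * z(t0) - dgamma(t0)
```

Trading the hereditary integral for the local variable z is what converts the functional-differential equation into a first-order system amenable to characteristic methods.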
In particular there is the numerical study of Markowich and Renardy [31], and the recent analyses of Dafermos [32], Rammaha [33], and Nohel and Renardy [34], all of which are described in Chapter II, Section 4 of the monograph of Renardy, Hrusa, and Nohel [26].

The method published by Dafermos [32] in 1986 is applicable to Cauchy problems for many types of viscoelastic fluids, including BKZ fluids, and, more generally, fluids obeying the postulate of fading memory. Without giving Dafermos' treatment the full description it deserves, this short discussion must now conclude with the observation that, when applied to rectilinear shearing flows of fluids with fading memory, the method developed by Dafermos permits one to show that if, for a perturbation v̂ of a flow v in which E1 and E2 are both positive, one has (i) an appropriate norm of a relative-strain history (constructed from v̂) everywhere small initially, (ii) sup_x(−∂x v̂) not too large initially, and (iii) sup_x(∂x v̂) large enough initially, then a classical solution does not exist for all t > 0, and hence sup_x |∂x v̂| → ∞ in finite time. Here, as in the theory of acceleration waves, one expects, but cannot as yet prove (although it is strongly suggested by numerical studies; see, in particular, the study of a viscoelastic solid, obeying a special case of (5.3), performed by Markowich and Renardy [31]), that the blow up of |∂x v̂| or |∂t v̂| in finite time implies the formation of a shear-shock.

Acknowledgments. I am grateful to William J. Hrusa and Daniel C. Newman for help and suggestions. While this article was in preparation my research was supported by the National Science Foundation through Grant DMS-88-15924 to Rutgers University.
The participants in this workshop will find of particular interest Plohr's recent numerical study [35] of channel flows of a Johnson-Segalman fluid that does not obey several of the hypotheses made here, such as monotonicity of the shear-stress function τ and positivity of the modulus E1, and gives rise to dynamical equations that change type.

REFERENCES

[1] B.D. COLEMAN and M.E. GURTIN, J. Fluid Mech., 33 (1968), pp. 165-181.
[2] H.K. NASON, J. Appl. Phys., 16 (1945), pp. 338-343.
[3] R.S. SPENCER and R.E. DILLON, J. Colloid Sci., 4 (1949), pp. 241-255.
[4] J.P. TORDELLA, J. Appl. Phys., 27 (1956), pp. 454-458.
[5] J.P. TORDELLA, Trans. Soc. Rheology, 1 (1957), pp. 203-212.
[6] J.P. TORDELLA, J. Appl. Polymer Sci., 7 (1963), pp. 215-229.
[7] J.P. TORDELLA, in Rheology, Vol. 5, F.R. Eirich, ed., pp. 57-92, Academic Press, New York, 1969.
[8] E.B. BAGLEY, J. Appl. Phys., 28 (1957), pp. 624-627.
[9] E.B. BAGLEY, J. Appl. Polymer Sci., 7 (1963), pp. S7-S8.
[10] J.J. BENBOW, R.V. CHARLEY, and P. LAMB, Nature, London, 192 (1961), pp. 223-224.
[11] G.V. VINOGRADOV, A. YA. MALKIN, YU. G. YANOVSKII, E.K. BORISENKOVA, B.V. YARLYKOV, and G.V. BEREZHNAYA, J. Poly. Sci. A2, 10 (1972), pp. 1061-1084.
[12] E.B. BAGLEY, I.M. CABOTT, and D.C. WEST, J. Appl. Phys., 29 (1958), pp. 109-110.
[13] W. NOLL, Arch. Rational Mech. Anal., 2 (1958/9), pp. 197-226.
[14] B.D. COLEMAN and W. NOLL, Ann. N.Y. Acad. Sci., 89 (1961), pp. 672-714.
[15] B.D. COLEMAN and W. NOLL, Arch. Rational Mech. Anal., 3 (1959), pp. 289-303.
[16] B.D. COLEMAN, Arch. Rational Mech. Anal., 9 (1962), pp. 273-300.
[17] B.D. COLEMAN, H. MARKOVITZ, and W. NOLL, Viscometric Flows of Non-Newtonian Fluids, Springer, Berlin, Heidelberg, New York, 1966.
[18] B.D. COLEMAN and W. NOLL, Arch. Rational Mech. Anal., 6 (1960), pp. 355-370.
[19] B.D. COLEMAN and W. NOLL, Rev. Mod. Phys., 33 (1961), pp. 239-249; Errata, ibid., 36 (1964), p. 239.
[20] B.D. COLEMAN and V.J. MIZEL, Arch. Rational Mech. Anal., 23 (1966), pp. 87-123.
[21] B.D. COLEMAN, M.E. GURTIN, and I. HERRERA R., Arch. Rational Mech. Anal., 19 (1965), pp. 1-19.
[22] B.D. COLEMAN and M.E. GURTIN, Arch. Rational Mech. Anal., 19 (1965), pp. 239-265.
[23] B.D. COLEMAN, J.M. GREENBERG, and M.E. GURTIN, Arch. Rational Mech. Anal., 22 (1966), pp. 333-354.
[24] B. BERNSTEIN, E.A. KEARSLEY, and L.J. ZAPAS, Trans. Soc. Rheology, 7 (1963), pp. 391-410.
[25] B.D. COLEMAN and L.J. ZAPAS, J. Rheology, 33 (1989), pp. 501-516.
[26] M. RENARDY, W.J. HRUSA, and J.A. NOHEL, Mathematical Problems in Viscoelasticity, Longman, Harlow, Essex, and Wiley, New York, 1987.
[27] W.J. HRUSA, J.A. NOHEL, and M. RENARDY, Appl. Mech. Rev., 41 (1988), pp. 371-378.
[28] M. SLEMROD, Arch. Rational Mech. Anal., 68 (1978), pp. 211-225.
[29] H. HATTORI, Quart. Appl. Math., 40 (1982), pp. 112-127.
[30] G. GRIPENBERG, SIAM J. Math., 13 (1982), pp. 954-961.
[31] P. MARKOWICH and M. RENARDY, SIAM J. Numer. Anal., 21 (1984), pp. 24-51; Corrigenda, ibid., 22 (1985), p. 204.
[32] C.M. DAFERMOS, Arch. Rational Mech. Anal., 91 (1986), pp. 365-377.
[33] M. RAMMAHA, Commun. in Partial Differential Equations, 12 (1987), pp. 243-262.
[34] J.A. NOHEL and M. RENARDY, in Amorphous Polymers and non-Newtonian Fluids, C. Dafermos, J.L. Ericksen, and D. Kinderlehrer, eds., IMA Volumes in Mathematics and its Applications, Vol. 6, Springer, New York, etc., 1987, pp. 139-152.
[35] B.J. PLOHR, Instabilities in Shear Flow of Viscoelastic Fluids with Fading Memory, CMS Report 89-13, U. Wisconsin, Madison (1989).

G. SCHLEINIGER*† and R.J. WEINACHT*

Abstract. In this paper we discuss some features of systems of partial differential equations of first order related to change of type, to composite type and to degeneracy. We are interested in these effects with respect to viscoelastic fluid flow and hence we focus on the properties of two particular models of differential type, the Upper Convected Maxwell model and the Bird-DeAguiar model.
Friedrichs' theory of symmetric positive operators is discussed as a means for treating these indefinite type systems and its use is illustrated for two simple systems, one of composite type and one of degenerate elliptic type.

Key words. viscoelastic flow, steady flow, composite systems, mixed-type systems, degenerate systems

AMS(MOS) subject classifications. 35M05, 76A10

1. Introduction. In this paper we discuss some features of systems of partial differential equations of first order related to change of type, to composite type and to degeneracy. We are particularly interested in these effects with respect to the system of equations describing viscoelastic fluid flow with constitutive equations of differential type [14]. We focus on two models, namely the Upper Convected Maxwell model (UCMM) and the Bird-DeAguiar model. The first is actually contained within the second.

It is well known that the equations governing the steady flow of an upper convected Maxwell fluid can be of changing type, Joseph et al [7]. Moreover, in addition to the possible complication of changing type, the equations are always of composite type, i.e. there are both real and complex characteristics. In the case of two-dimensional flow the equations of conservation of mass and momentum together with the constitutive equations lead to a six-by-six system of coupled first order partial differential equations. If one looks at the eigenvalues of that system one finds that two are real (in fact there is one repeated eigenvalue associated with the streamlines), two are pure imaginary, and the other two can be complex or real depending on the speed of the flow (Mach number). Thus the system always has two real characteristics and may have four within the flow region; hence the label that the system is of composite and changing type.
More recently two new phenomena have been observed: first, the system of equations governing the Bird-DeAguiar model can change type yet again to become fully hyperbolic and, second, the linearized UCMM is actually degenerate at boundaries where the velocity components are zero. These two phenomena will be discussed in this paper. Both are of special importance in discussions of the well-posedness of the boundary value problems describing these flows. Both are also of importance in the numerical simulation of these flows.

Although some work has been carried out on reducing the original system for the UCMM model to one of coupled but higher order equations in terms of the vorticity and the streamfunction, we prefer in this work to retain the primitive variables and the first order system form. The advantage of this is that more complicated systems, e.g. the Bird-DeAguiar system, can hopefully be treated more simply and in a straightforward manner.

First we review some recent analytical results on the Bird-DeAguiar model. These have been reported in part in Calderer, Cook and Schleiniger [2] and in Schleiniger, Calderer and Cook [16]. Then we discuss a method of attack to deal with the questions of appropriate boundary conditions and well-posedness of systems which may be degenerate, of composite type and of changing type. In particular we discuss the application of Friedrichs' theory for symmetric positive linear operators [4] and we apply it to several prototype problems.

2. The Bird-DeAguiar model. The constitutive equation obtained by Bird and DeAguiar was derived from molecular theory using a phase-space kinetic theory for concentrated solutions and melts [1],[3].

* Department of Mathematical Sciences, University of Delaware, Newark, DE 19716. This work was supported by the National Science Foundation under grant # DMS-8714152.
† Work supported by ARO grant #
It models the polymer molecules as finitely extensible nonlinear elastic dumbbells in which both the Brownian motion and the hydrodynamic forces are made anisotropic. Thus there are three parameters in the model that can be varied to cover the range from isotropic linear forces (a Maxwell model) to various degrees of nonlinearity in the spring (as measured by b), of anisotropy in the Stokes law (measured by the parameter s0) and in the Brownian motion (measured by the parameter β). Other models which show aspects of these anisotropic molecular effects are integral models, namely the Doi-Edwards model and the Curtiss-Bird model. Neither allows for smooth variation in both Brownian motion and hydrodynamic drag.

We consider the model in the absence of a solvent so that only the polymer contribution is considered, analogous to the UCMM. The constitutive equation, in dimensionless form, relates the stress T and its upper convected derivative T_(1) to the rate-of-deformation tensor γ̇ = ∇u + (∇u)ᵀ (two times the usual rate of deformation), the unit tensor δ, the Weissenberg number We, and the scalar quantities

W = (a + 2β)/3, Y = (4β − a)/3, J = trace(T),

together with a function Z of W, Y, J and the spring parameter b; its precise form, and the details of the nondimensionalization, appear in [16]. In particular the dimensionless parameters are the Weissenberg number We = λU/L and the Reynolds number Re = ρU²/(3nκT_m), where U is a typical flow velocity, L is a typical length scale, ρ is the density of the fluid, κ is Boltzmann's constant, n is the number density of dumbbells, λ is a time constant, and T_m is the absolute mean temperature. The time and stress scales are L/U and 3nκT_m respectively.

Note that we have changed notation slightly from the Bird-DeAguiar paper in that our T is their −T, so that we can write the total stress π as

π = −pδ + T,

where p is the pressure. Note that if a = β = s0 = 1 then all forces are isotropic and the Tanner [17] constitutive model is obtained.
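The two dimensionless groups are straightforward to evaluate; the numerical values below are made up for illustration and do not come from the text.

```python
# Dimensionless groups for the model: We = lambda*U/L and
# Re = rho*U^2 / (3*n*kB*Tm); all numerical values here are illustrative.
lam = 0.1          # relaxation time constant [s]
U, L = 1.0, 0.01   # typical velocity [m/s] and length [m]
rho = 1.0e3        # fluid density [kg/m^3]
n_d = 1.0e26       # number density of dumbbells [1/m^3]
kB = 1.380649e-23  # Boltzmann's constant [J/K]
Tm = 300.0         # absolute mean temperature [K]

We = lam * U / L                        # Weissenberg number
Re = rho * U**2 / (3.0 * n_d * kB * Tm) # Reynolds number (stress scale 3*n*kB*Tm)
```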
IT a = f3 = So = 1 and b - t DO (a Hookean spring) then the upper-convected Maxwell model is (2.2) The system is completed by adding the equations of conservation of mass and conservation of linear momentum: Re{Ut + U· V'u} = -V'p+ V'. T. Several base states for steady flow of the models have been analyzed corresponding to two-dimensional flows: 1) shear flow: Uo = y, Vo = 0; 2) extensional flow: Uo = x, Vo = -y; and 3) Poiseuille (or channel) flow: Uo = u(y), Vo = O. In addition axisymmetric pipe flow: Uo = u(r), Vo = 0 was analyzed. In the first two cases base stresses which were constant were considered, while in the last two cases the stresses had some spatial dependence. In all cases one and only one base state was found which was evolutionary. Other base states were found but these suffered Hadamard instabilities. Of particular interest was the result of linearization of the equations for twodimensional perturbations about the base states mentioned above. The resulting system of equations in all four cases is a seven-by-seven first order system of partial differential equations of the form + BUy + CU = where U = (U,V,p,Tll,T12,T22,T33)T. This system was analyzed for its eigenvalues a defined by det(aA + B) = 0 and its corresponding characteristics, dy/dx = a. What occurred in all cases was that not only did we find that the model in general was of composite and possibly of mixed type as the UCMM model, but also that it could be fully hyperbolic depending on the flow parameters. This is the first time that a region where the steady system is fully hyperbolic has been observed for viscoelastic flows [2],[16] (In a later paper, by Verdier and Joseph [19], similar behavior was shown to occur for the White-Metzner model.). In particular the system of equations, 7 x 7 in this case, has three real eigenvalues (associated with the streamlines) and the other four eigenvalues can switch from four complex to two complex and two real, to all real. 
Moreover there are, in the latter case, seven linearly independent eigenvectors, so that the system is in fact hyperbolic. This behavior is illustrated in the figures below. Note that this behavior occurs even for the simplified case of the Bird-DeAguiar model: a = f3 = So = 1 which is the Tanner model. Note also that in the case of shear flow the coordinates x, y can be scaled by the Reynolds number He so that the boundary of the flow, y = 1, can occur at any horizontal level in Fig. 1, thus the flow can become fully hyperbolic at the wall. 5 real, 2 complex 4.04. 1 real 2.24 5 rea com ex 3 real, 4 complex Fig. 1. Distribution of eigenvalues: shear flow, He = 1, a = 1, /3 = 2.56, b= 2, So =0.6, We= 11.7 'Y 1.0 3 real, 4 complex .329 .285 .210 5 real, 2 complex 5 real, 2 complex Fig. 2. Distribution of eigenvalues: channel flow, He = 1, a = 1, /3 = 1, b = 2, So = 1, We = 10 One feature has been glossed over in the above discussion. The characteristic polynomial has a factor au + v, and in fact this factor occurs with multiplicity three. Thus all the comments above were for regions where u is not zero. However, if v = u = 0, then the characteristic polynomial is always zero, that is any a will do. In particular for channel flow if we are to assume that the base flow obeys no slip at the walls, then the system is degenerate at the walls. This introduces new questions about the correct boundary conditions to be prescribed. An analogous situation is studied for a degenerate elliptic system in section 5. 3. Symmetric positive systems. Friedrichs [4] introduced his theory of Symmetric Positive Systems specifically to treat systems which are not of one type, most notably of mixed elliptic-hyperbolic type. It is attractive therefore as a method that might also be useful for systems which are of composite type and degenerate type and of composite-mixed type which might arise, for example, in problems of viscoelastic flows. 
In this section we outline certain aspects of the theory of Symmetric Positive Systems in a more or less self-contained way but only to the extent needed for its application in the next sections. There we show by some simple examples how correct boundary conditions for well-posed problems can be discovered through its use. The examples are chosen to exhibit certain features which are present in viscoelasticity. Note that Friedrichs' theory is for linear systems of partial differential equations (however see [6]). Thus it can apply directly only to linearized equations (of, say, viscoelasticity) but one hopes that the results will suggest correct boundary conditions for the nonlinear case.

Consider then the system of m first order equations

(3.1) Lu := A(x,y)u_x + B(x,y)u_y + C(x,y)u = f

in a bounded region Ω in R², where the unknown u is an m-vector. Although the theory is valid for n independent variables, we consider the case n = 2. The m-by-m matrices A and B are assumed to be C¹ on Ω̄, with C belonging to C⁰ on Ω̄. The Symmetry Condition is that

(3.2.a) A and B are symmetric on Ω̄.

The Positivity Condition is that the symmetric matrix

(3.2.b) H := C + Cᵀ − A_x − B_y

is positive definite on Ω̄. This condition arises in a natural way as will be seen below. Systems satisfying (3.2.a) and (3.2.b) are called Symmetric Positive Systems.

Assume that Ω has a piecewise smooth boundary so that at all but a finite number of points of the boundary there is a well defined outward unit normal n = (n_x, n_y). Application of the divergence theorem for smooth (say C¹(Ω̄)) m-vectors u and v yields the Green's Identity

(3.3) (v, Lu) = (u, L*v) + ⟨v, βu⟩,

where the L²(Ω) inner product is denoted by (·,·) with the corresponding norm ‖·‖, and ⟨·,·⟩ denotes the L²(∂Ω) inner product. The formal adjoint L* of L is given by

L*v := −(Av)_x − (Bv)_y + Cᵀv,

with Cᵀ the transpose of C.
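When A and B are constant the derivative terms A_x and B_y drop out and the Positivity Condition reduces to positive definiteness of C + Cᵀ; checking the two conditions is then purely algebraic. A minimal sketch with toy 2×2 matrices (all values assumed for illustration):

```python
# Symmetric positive system check for constant A, B: then
# H = C + C^T - A_x - B_y reduces to C + C^T, which must be positive definite,
# and A, B must be symmetric. All matrices below are toy examples.
def is_symmetric_2x2(m, tol=1.0e-12):
    return abs(m[0][1] - m[1][0]) < tol

def is_positive_definite_2x2(m):
    # Sylvester's criterion for a symmetric 2x2 matrix
    return m[0][0] > 0.0 and m[0][0] * m[1][1] - m[0][1] * m[1][0] > 0.0

A = [[2.0, 1.0], [1.0, 3.0]]
B = [[1.0, 0.0], [0.0, -1.0]]
C = [[1.0, 0.5], [-0.5, 2.0]]

H = [[2.0 * C[0][0], C[0][1] + C[1][0]],
     [C[1][0] + C[0][1], 2.0 * C[1][1]]]  # H = C + C^T since A_x = B_y = 0

symmetric_ok = is_symmetric_2x2(A) and is_symmetric_2x2(B)
positive_ok = is_positive_definite_2x2(H)
```

Note that C itself need not be symmetric (here it is not); only the symmetric part of C enters H.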
The "boundary matrix" β is defined at all smooth points of ∂Ω by

β := n_x A + n_y B.

For ease of terminology we will discuss β as if it were defined at all points of ∂Ω. Taking v = u in (3.3), and using the fact that L* = −L + H, gives the identity

(3.5) 2(u, Lu) = (u, Hu) + ⟨u, βu⟩

with H given in (3.2.b). At each (smooth) point P = (x,y) of ∂Ω let M(P) be a subspace of Rᵐ. If the function u satisfies u(P) ∈ M(P) and β is positive semi-definite on M(P), then

(3.6) 2‖u‖ ‖Lu‖ ≥ 2(u, Lu) ≥ (u, Hu) ≥ 2h₀‖u‖², h₀ > 0,

by virtue of (3.5), so that

(3.7) ‖Lu‖ ≥ h₀‖u‖,

which is Friedrichs' "basic inequality". From this follows uniqueness of smooth solutions of the boundary value problem Lu = f in Ω, u(P) ∈ M(P) for P ∈ ∂Ω, provided that β(P) is positive semi-definite on M(P). Clearly M(P) may be too restrictive a subspace for existence of solutions; for example the choice M(P) = {0} for all P on ∂Ω is too restrictive for existence of solutions for the nonhomogeneous degenerate elliptic system considered in section 5.

Suppose the subspace M(P) is given through the boundary condition M(P)u(P) = 0 by means of a matrix M = M(P). Then (3.3) can be rewritten as Friedrichs' first identity

(3.8) (v, Lu) + ⟨v, Mu⟩ = (u, L*v) + ⟨u, M*v⟩,

where M* = β + Mᵀ. Using again L* = −L + H and putting v = u, one obtains

(3.9) 2(u, Lu) + ⟨u, Mu⟩ = (u, Hu) + ⟨u, M*u⟩.

Now if Mu = 0 on ∂Ω, then again the basic inequality (3.7) results from (3.9) provided that

(3.10) (M + M*) ≥ 0 on ∂Ω.

The condition (3.10) is Friedrichs' condition of semi-admissibility for the boundary condition Mu = 0 and yields uniqueness of smooth solutions of the boundary value problem

(3.11) Lu = f in Ω, Mu = 0 on ∂Ω.

By putting u = v in (3.8) to obtain

(3.12) (v, Lv) + ⟨v, Mv⟩ = (v, L*v) + ⟨v, M*v⟩,

one likewise has, for M*v = 0 on ∂Ω,

(3.13) ‖L*v‖ ≥ h₀‖v‖,

provided that the condition of semi-admissibility (3.10) is satisfied. Thus (3.10) also yields uniqueness of smooth solutions of the adjoint boundary value problem

(3.14.a) L*v = g in Ω,
(3.14.b) M*v = 0 on ∂Ω.

It is expected from this latter result that there will be existence of an appropriately defined weak solution of (3.11). Indeed from (3.8) one is led to define a weak solution of (3.11) as an L²(Ω) function u which satisfies

(3.15) (u, L*v) = (f, v)

for all C¹(Ω̄) functions v satisfying M*v = 0 on ∂Ω. Then, in a familiar way, Friedrichs obtains the existence of such weak solutions provided the semi-admissibility criterion (3.10) is satisfied. But for the uniqueness of weak solutions and their regularity, as well as existence of strong solutions, more is required than semi-admissibility of the boundary condition. Friedrichs introduced an admissibility criterion for the case of nonsingular β, i.e. the case of non-characteristic boundary. For nonsingular β this is equivalent to the maximality condition of Lax [8], which we follow here: a subspace M(P) of Rᵐ is maximal non-negative with respect to L if for each u ∈ M(P)

(3.16) u · β(P)u ≥ 0

and, moreover, there is no subspace of Rᵐ which contains M(P) and on which (3.16) holds. Clearly M(P) determines a (linear homogeneous) boundary condition at each (smooth) point of ∂Ω. Such a boundary condition is called admissible for L if M(P) is maximal non-negative on ∂Ω.

In the examples treated below the partial differential equations are homogeneous while appropriate boundary conditions will be non-homogeneous. For the simple geometries considered there one can explicitly make a reduction to the case of homogeneous boundary conditions. For simplicity this reduction will not be carried out and only homogeneous admissible boundary conditions will be considered, as in the discussion of this section. Of course, for questions of uniqueness alone such a reduction is superfluous. It should also be emphasized that other boundary conditions than those treated below are of interest and approachable by the theory of symmetric positive operators, but the exposition would then be complicated considerably.
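The semi-admissibility criterion (3.10) is a finite-dimensional eigenvalue test at each boundary point. The following sketch is our own illustration, not from the paper; the matrices β and M are arbitrary samples chosen to show the test passing and failing.

```python
import numpy as np

def semi_admissible(beta, M, tol=1e-12):
    """Friedrichs' semi-admissibility (3.10) for the boundary
    condition Mu = 0: with M* = beta + M^T, require
    M + M* = M + M^T + beta to be positive semi-definite."""
    mu = M + M.T + beta
    return bool(np.min(np.linalg.eigvalsh((mu + mu.T) / 2)) >= -tol)

beta = np.diag([1.0, -1.0])               # an indefinite boundary matrix
M1 = np.array([[0.0, 0.0], [0.0, 1.0]])   # boundary condition u2 = 0
print(semi_admissible(beta, M1))          # M1 + M1^T + beta = I  -> True
print(semi_admissible(beta, np.zeros((2, 2))))  # no condition imposed -> False
```

The second call shows that imposing no boundary condition at all fails whenever β has a negative direction, which is the analytic reason a boundary condition must be prescribed where characteristics enter the domain.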
We should point out that some of the results of the theory of symmetric positive operators, particularly with regard to regularity of solutions ("weak = strong"), apply only to the case of noncharacteristic boundary points (i.e. where the boundary matrix β is non-singular) or when M(P) is of constant dimension on the boundary. In contrast, many problems of viscoelastic flow and the examples treated below do not satisfy these conditions. Also note that the regions that we consider below have corners. These points require special treatment for "weak = strong" results [4], [5], [8], [10], [15], [18]. We note also that Morawetz [9] treated a system of equations which includes Tricomi's equation (written as a 2-by-2 first order system) and obtained a weak solution for certain standard boundary value problems in regions including the parabolic line. She showed that for certain boundary geometries the theory of Symmetric Positive Systems does not apply directly, so that some modification is needed to fit that case in Friedrichs' framework.

4. The Euler equations for uniform flow. The steady two-dimensional incompressible Euler equations linearized about the uniform flow u₀ = 1, v₀ = 0 and p₀ = constant are given by

(4.1.a) u_x + v_y = 0,
(4.1.b) u_x + p_x = 0,
(4.1.c) v_x + p_y = 0,

where u, v give the x and y components of the velocity field and p is the pressure. The region under consideration is the channel |y| < 1 with an inlet at x = 0. This system, like its nonlinear version, is easily seen to be composite, with one family of real characteristics y = constant and a family of complex characteristics y ± ix = constant. Since the pressure is determined only up to an additive constant by this system, in order to hope for uniqueness of a solution it is necessary to include the components of the gradient of p as separate dependent variables and thus to write the system in the augmented form

(4.2.a) u_x + v_y = 0,
(4.2.b) u_x + w = 0,
(4.2.c) v_x + z = 0,
(4.2.d) w_y − z_x = 0,

where we have put w = p_x, z = p_y.
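The composite character of (4.1) can be verified numerically. Writing the system as A U_x + B U_y = 0 for U = (u, v, p)ᵀ, the characteristic determinant det(σA + B) factors into one real root and a complex-conjugate pair; the factored form below is our own check, not displayed in the paper.

```python
import numpy as np

# Linearized Euler system (4.1) written as A U_x + B U_y = 0, U = (u, v, p):
A = np.array([[1.0, 0.0, 0.0],   # u_x + v_y = 0
              [1.0, 0.0, 1.0],   # u_x + p_x = 0
              [0.0, 1.0, 0.0]])  # v_x + p_y = 0
B = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])

# det(sigma*A + B) equals -sigma*(sigma^2 + 1): one real characteristic
# root and one complex pair, i.e. a composite system.
for s in (0.5, -1.3, 2.0):
    assert abs(np.linalg.det(s * A + B) - (-s * (s**2 + 1))) < 1e-9
print("factorization verified")
```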
The system is not in symmetric form. We achieve a symmetric form by premultiplication by a symmetrizing matrix Z. The most general Z is a 4-by-4 matrix whose entries are built from arbitrary smooth functions a, b, c, d, l of (x, y), to be chosen so as to satisfy det Z = ad³ ≠ 0. The resulting symmetric system, obtained by multiplying the system on the left by Z, is

(4.4) AU_x + BU_y + CU = 0

with U = (u, v, w, z)ᵀ. We note that A, B, C are everywhere singular, so that the boundary matrix β = n_x A + n_y B is singular everywhere on ∂Ω: the boundary is everywhere characteristic. The positivity condition (3.2.b) requires that the symmetric matrix H of (4.5), whose entries involve a, b, c, d, l and their first derivatives, be positive definite. It is easy to see that this is the case if, for example, we choose a = l = 2e⁻ˣ, b = c = 0 and −d a constant which is > 1 on Ω̄ (where we assume x ≥ 0). With such a choice the quadratic form for the boundary matrix β on y = ±1 is

U · βU = ±2d v w,

and so the subspace determined by v = 0 (or w = 0) is maximal non-negative. Similarly, on the inlet x = 0 the quadratic form U · βU vanishes on the subspace determined by u = v = 0, which is maximal non-negative. Note also that downstream, on x = 1, the quadratic form reduces to U · βU = au² ≥ 0 on the subspace determined by v = 0 (or z = 0), which is maximal non-negative. Thus we arrive at the expected result that admissible boundary conditions for the channel include the specification of v alone on y = ±1 and the specification of u and v at the inlet; only v need be specified at a downstream outlet. The components of the pressure gradient p_x and p_y are then uniquely determined. For the semi-infinite channel one would expect a limit condition that v tends to zero at infinity.

5. A degenerate elliptic system: GASPT. A significant example of a degenerate elliptic equation is the equation of Weinstein's [20],[21] Generalized Axisymmetric Potential Theory (GASPT),

(5.1) φ_xx + φ_yy + (k/y) φ_y = 0,

where k is a real constant.
The equation is elliptic for y > 0, but for k not 0 the ellipticity degenerates on y = 0. For k = 1/3 equation (5.1) is the canonical form of Tricomi's equation in the elliptic (subsonic) half-plane y > 0. For k not zero, (5.1) is a prototype for elliptic partial differential equations which are singular or degenerate on a portion of the boundary of the region in question. In consonance with several of the flow problems considered above we consider regions contained in the strip 0 < y < 1. For definiteness let Ω denote the rectangular region (0,1) × (0,1) in the x, y plane. A first order system obtainable from (5.1) is, with u = φ_x and v = φ_y,

(5.2) u_y − v_x = 0, y u_x + y v_y + kv = 0,

which is of the form Au_x + Bu_y + Cu = 0 as in (3.1), but here with

A = [[0, −1], [y, 0]], B = [[1, 0], [0, y]], C = [[0, 0], [0, k]].

Since det(σA + B) = y(σ² + 1), the characteristics of (5.2) (and of course of (5.1)) are non-real for y not zero, but for y = 0 every direction is characteristic, analogous to the UCMM or Bird-DeAguiar behavior for channel flow. By use of the symmetrizer

Z = [[−ya, b], [yb, a]]

the system assumes the symmetric form (5.3) AU_x + BU_y + CU = 0 with

A = [[by, ay], [ay, −by]], B = [[−ya, by], [yb, ay]], C = [[0, kb], [0, ka]].

Note that Z is singular on y = 0. The choice of a = yˢ and b = 0 gives for H in (3.2.b)

H = yˢ [[s + 1, 0], [0, 2k − (s + 1)]],

which for −1 < s < 2k − 1 is uniformly positive definite on sets bounded away from y = 0. Such an s exists only for k > 0. More relevant here is the fact that H = yˢH₀ with the constant matrix H₀ positive definite for such a choice of s. Indeed, it then follows that

(5.5) (U, HU) ≥ h₀(yˢU, U),

so that the theory of symmetric positive operators applies for a weighted L²(Ω) space rather than the usual (unweighted) space L²(Ω). For the case of uniqueness this is obvious. For existence of weak solutions a more extensive analysis is needed but is valid at least for k > 1.
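The admissible range of the exponent s can be checked directly. The sketch below is ours; it encodes the diagonal factor H₀ = diag(s+1, 2k−(s+1)) obtained above from the choice a = yˢ, b = 0.

```python
def H0_positive(k, s):
    """H0 = diag(s+1, 2k-(s+1)) from the choice a = y^s, b = 0;
    positive definite exactly when -1 < s < 2k - 1."""
    return (s + 1 > 0) and (2 * k - (s + 1) > 0)

# An admissible s exists only for k > 0; for instance s = k - 1,
# the midpoint of the interval (-1, 2k - 1), always works then.
assert H0_positive(1.0, 0.0)        # k = 1, s = 0: H0 = diag(1, 1)
assert not H0_positive(0.0, -0.5)   # k = 0: no s satisfies both bounds
assert not H0_positive(1.0, 1.5)    # s too large: 2k - (s+1) < 0
print("interval check passed")
```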
Note that as k tends to zero the system (5.2) has the Cauchy-Riemann system as the limit, for which one obtains uniqueness of v only up to an additive constant if u is specified on the entire boundary. This is reflected in the fact that H cannot be made positive definite in the domain for k = 0. The system must be augmented to include u or v explicitly in order to obtain uniqueness. With the positivity condition (3.2.b) considered in the modified way (5.5), we now examine possible admissible boundary conditions. On y = 1,

U · βU = −u² + v²,

so that u = 0 is an admissible boundary condition, while on y = 0 no boundary condition is to be specified, provided that u and v remain bounded near y = 0 (in fact no boundary condition is to be specified as long as yˢ⁺¹u² and yˢ⁺¹v² tend to zero as y tends to zero). On x = 0 or 1,

U · βU = ±2yˢ⁺¹uv,

so that u = 0 (or v = 0) is an admissible boundary condition there. Thus, for k > 1/2 (choosing s > 0) a correctly posed problem is to give u on x = 0, 1 and on y = 1, with no data given on y = 0, but only boundedness of u and v there. A few remarks should be made for those familiar with the results of GASPT. For k ≥ 1 a well-posed Dirichlet problem for (5.1) consists in specifying φ only on those parts of the boundary for which y > 0, provided merely that φ remains bounded near y = 0 (in fact, merely that yᵏ⁻¹φ tends to zero for k > 1 [(log y)⁻¹φ tends to zero for k = 1] as y tends to zero), whereas for k < 1, φ must be specified all around. In treating the system (5.2) we require that u and v (i.e. φ_x and φ_y) remain bounded near y = 0. This explains why we don't specify any conditions on u and v at y = 0 for 1/2 < k < 1. It is interesting to also consider an augmented version of the system (5.2), similar to the example of the Euler equations above. The point here is to be able to also obtain results for the Cauchy-Riemann system, which was not possible through the corresponding two-by-two system.
Letting w = u_x and z = u_y in (5.2), we arrive at a 3-by-3 system of the form (3.1) for U = (v, w, z)ᵀ, namely

(5.6) v_x − z = 0, y(w + v_y) + kv = 0, w_y − z_x = 0.

By using the symmetrizer

Z = [[b, a, yc], [0, c, 0], [−yc, 0, 0]]

we obtain a corresponding symmetric form for U = (v, w, z)ᵀ. Here again every direction is characteristic for this system on y = 0. The choices b = 0, a = αyᵏ and c = γyᵏ⁻¹, with α and γ positive, yield

H = yᵏ [[α(k − 1), αy, 0], [αy, 2γ, 0], [0, 0, 2γ]],

which can be written as H = yᵏH₀(y) with H₀ (uniformly) positive definite on Ω̄ provided that 2γ(k − 1) > α for k > 1. Thus a modified positivity condition like (5.5) is satisfied. With such a choice of parameters we find that on y = constant

U · βU = n_y(αyᵏ⁺¹v² + 2γyᵏvw),

so that v = 0 is an admissible boundary condition on y = 1, while on y = 0 no boundary condition need be specified provided that k > 1 and v and w remain bounded there (or merely that yᵏ⁺¹v² and yᵏvw tend to zero as y tends to zero). On x = constant

U · βU = −2n_x γyᵏvz,

so that v = 0 (or z = 0) is an admissible boundary condition on x = 0, 1. Thus the situation for this augmented degenerate system is essentially the same as that for (5.2). For the non-degenerate case, k = 0, this is not true. As k tends to zero in (5.6) (dividing first by y) one obtains an augmented Cauchy-Riemann system. For its symmetric form, the choice b = 0, a = −αy and c = γ (α > 0, γ > 0) yields

H = [[α, −αy, 0], [−αy, 2γ, 0], [0, 0, 2γ]],

which is positive definite on Ω̄ under a restriction similar to that above: α < 2γ. On y = constant

U · βU = n_y(−αyv² + 2γvw),

so that the boundary condition v = 0 is admissible on y = 0, 1. On x = constant

U · βU = −2n_x γvz,

so that the boundary condition v = 0 (or z = 0) is admissible also on x = 0, 1. Thus by augmenting the Cauchy-Riemann system we obtain, via the theory of symmetric positive operators, the uniqueness of the solution (v, w, z)ᵀ, as compared to uniqueness up to an additive constant for (u, v)ᵀ in (5.2). It should be mentioned that the Cauchy-Riemann system was also treated in the original paper of Friedrichs [4], along with other examples including the Tricomi system (see also Morawetz [9]).

6.
Conclusion. It is clear that the differential models for viscoelastic fluid flow contain a wide variety of interesting features. In particular the equations form a coupled system of partial differential equations of first order which is not of standard type. The systems are composite, may be of changing type and may be degenerate at flow boundaries. This has been indicated in particular for the Bird-DeAguiar model, where the system can also become fully hyperbolic in the flow region. For such systems the coupling can be of great significance, as was discussed for a two-by-two degenerate elliptic system in section 5. As the parameter k changes (which couples the system through lower order terms), the boundary conditions for a well-posed problem also change. We have applied Friedrichs' theory of Symmetric Positive Systems to several examples to show how it may be useful for attacking the more complicated problems. In a recent paper [13] Renardy considered small perturbations of plane Poiseuille flow of Maxwell type fluids at small shear rates with inflow and outflow boundaries. Assuming no-slip (zero velocity at solid walls), he gave an existence and uniqueness proof for small enough data (in some appropriate norm). A well-posed problem in that context is obtained if, in addition to the no-slip condition at the walls, inflow and outflow velocities as well as normal stresses at the inlet are prescribed. These inflow data must satisfy certain compatibility conditions which rule out singularities at the corners where inflow boundaries meet the walls (see also [11],[12]). Thus apparently the degeneracy has no effect on the boundary conditions for this particular coupling. It will be interesting to see if similar results hold for the more complex nonlinear model, the Bird-DeAguiar model. We are presently applying Friedrichs' method to various differential constitutive models governing viscoelastic fluid flow. Acknowledgments.
Gilberto Schleiniger was on leave at the Institute for Mathematics and its Applications at the University of Minnesota (IMA) when part of this work was in progress, and he thanks the IMA and the Minnesota Supercomputer Institute for their support.

REFERENCES

[1] R. B. Bird and J. R. DeAguiar, An encapsulated dumbbell model for concentrated polymer solutions and melts I. Theoretical development and constitutive equation, J. Non-Newtonian Fluid Mech., 13 (1983), pp. 149-160.
[2] M. C. Calderer, L. Pamela Cook and G. Schleiniger, An analysis of the Bird-DeAguiar model for polymer melts, J. Non-Newtonian Fluid Mech., 31 (1989), pp. 209-225.
[3] J. R. DeAguiar, An encapsulated dumbbell model for concentrated polymer melts II. Calculation of material functions and experimental comparisons, J. Non-Newtonian Fluid Mech., 13 (1983), pp. 161-179.
[4] K. O. Friedrichs, Symmetric positive linear differential equations, Comm. Pure Appl. Math., 11 (1958), pp. 333-418.
[5] K. O. Friedrichs and P. D. Lax, Boundary value problems for first order operators, Comm. Pure Appl. Math., 18 (1965), pp. 355-388.
[6] S. Hahn-Goldberg, Generalized linear and quasilinear accretive systems of partial differential equations, Comm. in Partial Differential Equations, 2 (1977), pp. 165-191.
[7] D. D. Joseph, M. Renardy and J.-C. Saut, Hyperbolicity and change of type in the flow of viscoelastic fluids, Arch. Rational Mech. Anal., 87 (1985), pp. 213-251.
[8] P. D. Lax and R. S. Phillips, Local boundary conditions for dissipative symmetric linear differential operators, Comm. Pure Appl. Math., 13 (1960), pp. 427-455.
[9] C. S. Morawetz, A weak solution for a system of equations of elliptic-hyperbolic type, Comm. Pure Appl. Math., 11 (1958), pp. 315-331.
[10] R. S. Phillips and L. Sarason, Singular symmetric positive first order differential operators, J. Math. Mech., 15 (1966), pp. 235-271.
[11] M. Renardy, Inflow boundary conditions for steady flows of viscoelastic fluids with differential constitutive laws, Rocky Mt. J. Math., 18 (1988), pp. 445-453.
[12] ——, A well-posed boundary value problem for supercritical flow of viscoelastic fluids of Maxwell type, These proceedings.
[13] ——, Compatibility conditions at corners between walls and inflow boundaries for fluids of Maxwell type, preprint, May 1989.
[14] M. Renardy, W. J. Hrusa and J. A. Nohel, Mathematical Problems in Viscoelasticity, Longman Scientific and Technical (with John Wiley and Sons, Inc.), New York, 1987.
[15] L. Sarason, On weak and strong solutions of boundary value problems, Comm. Pure Appl. Math., 15 (1962), pp. 237-288.
[16] G. Schleiniger, M. C. Calderer and L. Pamela Cook, Embedded hyperbolic regions in a nonlinear model for viscoelastic flow, to appear in the AMS Contemporary Mathematics Series, Proceedings of the 1988 Joint Summer Research Conference on Current Progress in Hyperbolic Equations: Riemann Problems and Computations, Bowdoin, Maine.
[17] R. I. Tanner, Stresses in dilute solutions of bead-nonlinear-spring macromolecules. II. Unsteady flows and approximate constitutive relations, Trans. Soc. Rheol., 19 (1975), pp. 37-65.
[18] D. S. Tartakoff, Regularity of solutions to boundary value problems for first order systems, Indiana Univ. Math. J., 21 (1972), pp. 1113-1129.
[19] C. Verdier and D. D. Joseph, Change of type and loss of evolution of the White-Metzner model, to appear, J. Non-Newtonian Fluid Mech.
[20] A. Weinstein, Generalized axially symmetric potential theory, Bull. Amer. Math. Soc., 59 (1953), pp. 20-38.
[21] ——, Singular partial differential equations and their applications, in Fluid Dynamics and Applied Mathematics (Proc. Sympos., Univ. of Maryland, 1961), edited by Diaz and Pai, Gordon and Breach, New York, 1962, pp. 29-49.

NUMERICAL SIMULATION OF INERTIAL VISCOELASTIC FLOW WITH CHANGE OF TYPE

M.J. CROCHET and V. DELVAUX*

SUMMARY.
We examine plane inertial flows of viscoelastic fluids with an instantaneous elastic response. In such flows, the vorticity equation can change type when the velocity of the fluid exceeds the speed of shear waves. We use a finite element algorithm which has been developed for calculating highly viscoelastic flows. The algorithm is tested for supercritical flow regimes on the problem of the flow through a wavy channel. We next consider the problem of the flow of a Maxwell fluid around a circular cylinder for various flow regimes. We compare creeping flows, Newtonian flows and supercritical viscoelastic flows. We show that the flow kinematics is affected by supercritical flow conditions. In particular, a vorticity shock forms ahead of the cylindrical body.

1. Introduction. Earlier papers on the numerical simulation of viscoelastic flow have been essentially devoted to creeping flows. The major reason is that most rheological and experimental data relate to very viscous fluids in flows where the Reynolds number is much smaller than one. For such flows, inelastic calculations show that inertia effects have little influence on the creeping flow field (Re = 0). Another reason is that viscoelastic flow calculations are already quite difficult for creeping flows; the coupling between viscoelastic and inertia effects, to be discussed below, amounts to an additional numerical complexity. Still, it is now recognized that the smallness of the Reynolds number is not a sufficient reason for neglecting the inertia terms in the momentum equations. The crucial parameter is the product M², of the Reynolds times the Weissenberg number; M is the viscoelastic Mach number, or the ratio of a characteristic velocity of the fluid to the speed of shear waves. In this chapter, we study steady inertial flows of viscoelastic fluids with an instantaneous elasticity. The mathematical analysis of the partial differential equations governing such flows was first addressed by Rutkevich [1, 2].
More recently, Joseph, Renardy and Saut [3] have shown that, under some flow conditions, the vorticity equation in steady flow changes type. The flow domain contains subcritical (elliptic) and supercritical (hyperbolic) regions for the vorticity, which can become discontinuous across real characteristics in the hyperbolic region. Other mathematical features of the governing equations of viscoelastic flow, such as the loss of evolution, may be found in [4-6]. Several interesting flow phenomena show features which are related to change of type. Significant examples are the sink flow considered by Metzner et al. [7], the delayed die swell phenomenon evidenced by Joseph et al. [8] and the flow around a circular cylinder studied by James and Acosta [9]. In the latter case, it was shown that, under supercritical conditions, the Nusselt number characterizing heat transfer from the cylinder to the surrounding fluid reaches an asymptotic value for velocities exceeding the wave velocity. A simplified analysis of that flow was given in [10]. In the present paper, we wish to analyze the kinematics of the flow of a Maxwell fluid around a circular cylinder under supercritical conditions; further results on the drag and the transport properties will be examined in a later paper. Our goal is to solve the full set of governing equations in the non-linear regime, without resorting to a linearized analysis [11]. We make use of a finite element analysis of viscoelastic flow, which has seen considerable progress over the last few years. Recent reviews on the state of the art may be found in [12, 13], where many references are available.

*Unité de Mécanique Appliquée, Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium. The work of V. Delvaux is supported by the "Service de Programmation de la Politique Scientifique".
It is interesting to note that one of the difficulties affecting the numerical calculation of viscoelastic flow, even creeping flow, is the inaccuracy of the calculated stress field, which can produce artificial change of type in the discretized system of governing equations [14]; with standard Galerkin methods, numerical errors then propagate throughout the flow domain. Earlier numerical attempts at calculating viscoelastic flow in the supercritical regime [15-17] have failed to simulate the flow far beyond the onset of criticality. In this chapter, we use the mixed finite element algorithm of Marchal and Crochet [18], which is characterized by bilinear sub-elements for the stresses and the use of streamline-upwinding for discretizing the constitutive equations. A recent investigation on the mesh convergence properties of such an algorithm may be found in [19]. In section 2, we examine the basic equations and the associated non-dimensional numbers, and we explain the numerical method. In section 3, we analyze the supercritical flow of a Maxwell fluid through a wavy channel, for which an exact analysis for small amplitude has been given by Yoo and Joseph [20]. We compare the performance of several numerical methods, and find that the method developed in [18] is satisfactory. Finally, we analyze in section 4 the flow of a Maxwell fluid around a circular cylinder. We find that the conjectures made by Joseph [11] are verified by our numerical analysis of the problem.

2. Basic Equations and Numerical Method. Let σ denote the Cauchy stress tensor, which is decomposed as follows:

(2.1) σ = −pI + T,

where p is the pressure and T the extra-stress tensor. For the upper-convected Maxwell fluid, which we study below, T is given by the constitutive equation

(2.2) λT^∇ + T = 2ηd,

where λ is the relaxation time, η the shear viscosity and d the rate of deformation tensor. The triangular superscript in (2.2) denotes the upper-convected derivative,

(2.3) T^∇ = DT/Dt − (∇v)ᵀT − T∇v,

where v is the velocity vector and D/Dt stands for the material derivative. The momentum and incompressibility equations are given as follows:

(2.4) −∇p + ∇·T + f = ρa,
(2.5) ∇·v = 0,

where f is the body force, ρ the density and a the acceleration. In later sections, we assume that f vanishes but, by contrast with earlier papers, we take inertia effects into account. We limit ourselves to steady-state problems. We assume that the fluid does not slip along rigid walls. The flow in the entry sections is fully developed; we impose v and T in these sections. In exit sections where the flow is assumed to be fully developed, we impose v as a boundary condition. Let V and L denote a characteristic velocity and a characteristic length of the flow. The dimensionless quantities associated with the flow are the Weissenberg number We,

(2.6) We = λV/L,

and the Reynolds number Re,

(2.7) Re = ρVL/η.

The velocity of the shear waves for a Maxwell fluid is given by

(2.8) c = (η/ρλ)^{1/2};

the viscoelastic Mach number M is defined as

(2.9) M = V/c = (Re We)^{1/2}.

The elasticity number, defined as

(2.10) E = We/Re = λη/ρL²,

is also of interest; when the flow rate or a characteristic velocity of the flow increases in a given problem, one finds that We, Re and M increase linearly with V, while E, which depends upon the fluid and the geometry, remains a constant. It is worthwhile to introduce two additional tensorial quantities which are defined as follows:

(2.11) T_A = T + (η/λ)I, T_B = T_A − ρvv.

If one considers the integral form of the constitutive equation (2.2), it is easy to show that the tensor T_A should always be positive definite [14]. For creeping flows, T_B = T_A. However, when inertia is taken into account, T_B can lose its positiveness; it was shown by Joseph, Renardy and Saut [3] that, in regions where T_B is non-positive, the vorticity equation is hyperbolic.
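The dimensionless groups (2.6)-(2.10) and the supercritical criterion M > 1 can be collected in a few lines. The sketch below is our own illustration; the parameter values are arbitrary and chosen only to reproduce one of the flow regimes discussed later.

```python
import math

def flow_numbers(rho, eta, lam, V, L):
    """Dimensionless groups (2.6)-(2.10) for a Maxwell fluid:
    We = lam*V/L, Re = rho*V*L/eta, shear-wave speed c = sqrt(eta/(rho*lam)),
    viscoelastic Mach number M = V/c = sqrt(Re*We), elasticity E = We/Re."""
    We = lam * V / L
    Re = rho * V * L / eta
    c = math.sqrt(eta / (rho * lam))
    M = V / c
    E = We / Re
    return We, Re, M, E

# With rho = eta = L = 1, lam = 0.1 and V = 5 one gets Re = 5, We = 0.5
# and M = sqrt(Re*We) = 1.58: a supercritical (M > 1) flow.
We, Re, M, E = flow_numbers(rho=1.0, eta=1.0, lam=0.1, V=5.0, L=1.0)
print(round(We, 2), round(Re, 2), round(M, 2))  # 0.5 5.0 1.58
```

Note that both routes to M agree: V/c and (Re We)^{1/2} give the same number, which is the consistency behind definition (2.9).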
These two tensors are important in our applications, because the lack of positiveness of T_A is a sign of numerical inaccuracy, while the loss of positiveness of T_B signals the presence of a hyperbolic region for the vorticity equation. Let Ω denote the flow domain on which we wish to solve the system of eqns. (2.1-5). In view of the implicit character of (2.2), we select as primitive unknowns the extra-stress tensor T, the velocity field v and the pressure p, which belong to their respective spaces T, V and P. The flow domain is discretized by means of finite elements which cover a domain Ω_h. The elements are characterized by their shape, their nodes, the nodal values and the interpolating functions. The finite element representations of T, v and p are denoted by T^h, v^h and p^h, which belong to their respective discrete spaces T_h, V_h and P_h. A straightforward approach for calculating T^h, v^h and p^h is to substitute for (2.2), (2.4) and (2.5) their weak formulation, which may be expressed as follows [21]: find T^h ∈ T_h, v^h ∈ V_h and p^h ∈ P_h satisfying the weak counterparts (2.12)-(2.14) of the constitutive, momentum and incompressibility equations. The brackets ( ) and ⟨ ⟩ denote the L² scalar product over Ω_h and its boundary, respectively; t denotes the surface force vector on the boundary, and expressions such as A:B and a·b denote respectively tr(ABᵀ) and the usual vector scalar product. The finite element algorithm called MIX1 in [21], which has been used in many earlier publications, is based on the formulation (2.12-14) together with a P²-C⁰ interpolation for T_h and V_h and a P¹-C⁰ interpolation for P_h. Such an algorithm is able to calculate creeping flows of viscoelastic fluids for low values of We. When We increases, a first sign of numerical inaccuracy is observed when T_A loses its positiveness [14]. This early symptom is soon followed by the loss of convergence of Newton's method for solving the non-linear algebraic system of equations resulting from (2.12-14).
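Both positivity tests are pointwise eigenvalue checks on 2-by-2 symmetric tensors. The sketch below is ours (the field values are illustrative, not computed from a flow solution):

```python
import numpy as np

def tensor_checks(T, v, eta, lam, rho):
    """Evaluate T_A = T + (eta/lam) I and T_B = T_A - rho v v^T at a
    point.  T_A not positive definite -> numerical inaccuracy;
    T_B not positive definite -> hyperbolic (supercritical) region
    for the vorticity, after Joseph, Renardy & Saut."""
    I = np.eye(2)
    T_A = T + (eta / lam) * I
    T_B = T_A - rho * np.outer(v, v)
    pos = lambda S: bool(np.min(np.linalg.eigvalsh(S)) > 0.0)
    return pos(T_A), pos(T_B)

# At rest (T = 0, v = 0): both tensors equal (eta/lam) I > 0.
print(tensor_checks(np.zeros((2, 2)), np.zeros(2), 1.0, 0.1, 1.0))
# Fast flow: rho |v|^2 = 16 exceeds eta/lam = 10, so T_B loses positiveness.
print(tensor_checks(np.zeros((2, 2)), np.array([4.0, 0.0]), 1.0, 0.1, 1.0))
```

In the second call, |v| exceeds the wave speed c = (η/ρλ)^{1/2} = √10, so the loss of positiveness of T_B and the criterion M > 1 agree, as they should.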
The loss of positiveness of T_A causes in fact an artificial change of type of the discretized version of the field and constitutive equations. Much higher values of the Weissenberg number have been reached by means of a new mixed method introduced by Marchal and Crochet [18]. The method rests on two important improvements of the earlier mixed methods. First, an analysis of the structure of the mixed method for the stress-velocity-pressure formulation reveals that the stresses are the primary variables; the stress equations of motion act as a constraint on the stress components, while the incompressibility condition acts as a constraint on the velocity field. The spaces T_h, V_h, P_h must satisfy the Babuska-Brezzi conditions of stability [22-23]; it is not the case with the MIX1 element [24]. The element developed in [18] consists of a P²-C⁰ representation for the velocity field and a P¹-C⁰ representation for the pressure. For the stresses, Marchal and Crochet [18] subdivide the quadrilaterals for the velocity field into 4 × 4 sub-elements with bilinear shape functions. It has been shown in [24] that such a mixed element converges for Stokes flow. The second ingredient of the method introduced in [18] has to do with the hyperbolic character of the constitutive equation (2.2), where the streamlines are characteristics. It is well-known that the classical Galerkin (or Bubnov-Galerkin) method is unsuitable for solving advection dominated problems, unless one resorts to elements of small and sometimes unpractical size. In [18] Marchal and Crochet make use of the streamline-upwind method developed by Brooks and Hughes [25], with a special emphasis on bilinear quadrilateral elements. Let us consider in Fig. 1 an isoparametric bilinear element together with the deformed ξ and η axes of the parent element, and let v_ξ, v_η denote the scalar products of the velocity vector v with the vectors h_ξ and h_η indicated in Fig. 1.
Let k and the modified weighting function w̃ be defined as follows:

k = (v_ξ² + v_η²)^{1/2}/2, w̃ = w + k (v·∇w)/(v·v).

Fig. 1. Isoparametric bilinear element for the streamline-upwind scheme.

Instead of (2.12), we adopt the corresponding form of the weak formulation of the constitutive equation in which the weighting function w is replaced by w̃. The streamline-upwind scheme introduces an artificial stress diffusivity along the streamlines. However, it is easy to show [19] that the diffusivity vanishes when the size h of the element goes to zero. The method has been successfully applied to the calculation of several creeping flows. We show below that it can also be used for problems where inertia cannot be neglected and, in particular, when the vorticity equation changes type in some parts of the flow.

3. Viscoelastic Flow Through Channels. Yoo and Joseph [20] have examined the consequences of viscoelasticity and inertia on the flow of a Maxwell fluid through a planar channel with wavy walls. The vorticity equation changes type when the velocity in the center of the channel is larger than the velocity of propagation of shear waves. The geometry of the problem is shown in Fig. 2; in a Cartesian system of coordinates, the location of the wall is given by the equation

y = H(1 + ε sin 2πx/L),

where H is the half width of the channel, L is the wave length of the perturbation, and ε a small parameter. In what follows, we select a ratio H/L = 0.5. Let V₀ denote the unperturbed velocity along the plane of symmetry. The Reynolds and Weissenberg numbers are defined as follows:

Re = ρV₀H/η, We = λV₀/H,

while the viscoelastic Mach number is M = (Re We)^{1/2}.

Fig. 2. Geometry of the channel with wavy walls.

Yoo and Joseph [20] solve the vorticity equation by means of a domain perturbation analysis. Through a separation of variables, they reduce the problem to a set of two fourth order differential equations which are solved by a standard shooting technique employing a fourth order Runge-Kutta method.
Their results are given in terms of the first-order perturbation of the vorticity and the streamfunction. More precisely, let ω and ψ denote respectively the vorticity and the streamfunction, and let ω₀, ψ₀ denote their values in the unperturbed flow, that is the Poiseuille flow. The first-order perturbation ω₁, ψ₁ is defined by ω = ω₀ + cω₁ + ..., ψ = ψ₀ + cψ₁ + ... Several plots of ω₁ and ψ₁ are presented in [20] for various values of We and Re. We have performed a similar analysis by means of the numerical method explained in section 2. We have selected a small value of c = 0.001, and obtained numerical results ω̃ and ψ̃ for the vorticity and streamfunction. The first-order perturbation ω*, ψ* is then deduced as follows, ω* = (ω̃ − ω₀)/c, ψ* = (ψ̃ − ψ₀)/c; these functions can be readily compared with ω₁ and ψ₁. The finite element mesh covers one wavelength of the wavy channel. It consists of a uniform distribution of eight elements in the flow direction and twenty-four in the width. We impose the flow rate through the channel and apply periodic boundary conditions. No-slip conditions are imposed along the wavy wall. The primitive variables of the finite element calculation are the extra-stress and velocity components and the pressure. The streamfunction is calculated a posteriori by solving the equation (3.6), while, for the vorticity, we calculate a continuous representation by means of a least-squares projection of the discontinuous function ω = ∂v/∂x − ∂u/∂y. We have analyzed three situations with three numerical methods. The mixed method MIX1 does not use stress sub-elements and streamline upwinding. The 4 x 4 algorithm is a mixed method with stress sub-elements introduced in [18] with no streamline upwinding. Finally, the SU 4 x 4 algorithm is the method introduced by Marchal and Crochet in [18]. In Figs 3 to 5, we show contour lines of ψ* on the left and ω* on the right.
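The element-level construction behind SU 4 x 4 can be sketched in a few lines. The vectors h_ξ and h_η join the midpoints of opposite edges of the bilinear quadrilateral; the forms k = (v_ξ² + v_η²)^(1/2)/2 and w = k/(v·v) used here are reconstructions (the exact definitions are in [18]), so this is an illustrative sketch rather than the published scheme:

```python
import math

# Streamline-upwind quantities for a bilinear quadrilateral.  h_xi and h_eta
# join the midpoints of opposite element edges; the formulas for k and w are
# reconstructions of the definitions in Marchal & Crochet [18].

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1]

def su_parameters(x1, x2, x3, x4, v):
    """Corners x1..x4 in counter-clockwise order, v the local velocity."""
    h_xi  = ((x2[0]+x3[0]-x1[0]-x4[0])/2.0, (x2[1]+x3[1]-x1[1]-x4[1])/2.0)
    h_eta = ((x3[0]+x4[0]-x1[0]-x2[0])/2.0, (x3[1]+x4[1]-x1[1]-x2[1])/2.0)
    v_xi, v_eta = dot(v, h_xi), dot(v, h_eta)
    k = 0.5*math.hypot(v_xi, v_eta)
    w = k/dot(v, v)
    return k, w

def su_test_value(S, grad_S, v, w):
    """The SU scheme replaces the stress test function S by S + w*v.grad(S)."""
    return S + w*dot(v, grad_S)

# Unit square element swept by a uniform horizontal velocity:
k, w = su_parameters((0, 0), (1, 0), (1, 1), (0, 1), (1.0, 0.0))
print(k, w)   # 0.5 0.5
```

The added term w v·∇S acts only along the local flow direction, which is why the artificial diffusivity is confined to the streamlines and vanishes with the element size h.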
The wavy wall is on top of the figures; the flow goes from right to left. In Fig. 3, we show a subcritical case at Re = 1, We = .01 and M = .1. This is an easy problem where we find an excellent correspondence between Yoo and Joseph's analytical results and the numerical results with all three algorithms. In Fig. 4, we consider a supercritical case (which was not analyzed by Yoo and Joseph) where Re = 5, We = .5 and M = 1.58. Although the Weissenberg number is low, we find that MIX1 is unable to produce smooth vorticity contours. The situation improves with 4 x 4 (in the absence of upwinding), while smooth contour lines are obtained with SU 4 x 4. It is interesting to note that the streamlines are the same with all three algorithms. In Fig. 5, we analyze the situation Re = 50, We = .5 and M = 5 and compare the results with those of Yoo and Joseph. Again, we find that the streamline pattern is good with all three algorithms. The MIX1 method is unable to provide smooth vorticity contours. The situation improves with 4 x 4, while smooth lines are obtained with SU 4 x 4. The comparison with the analytical results is excellent. On the right-hand side of the figures, we note that SU 4 x 4 slightly flattens some of the contours. This is attributed to the artificial stress diffusivity which is proper to the streamline-upwind method. We conclude from these examples that, at least in these simple cases, the SU 4 x 4 algorithm is able to calculate solutions where the vorticity equation changes type.

Fig. 3. Streamlines and vorticity contours at Re = 1, We = .01 and M = .1. The corresponding values of the streamfunction are 0, ±.05, ±.1 and ±.2. The values of the vorticity are 0, ±1, ±2 and ±5.

Fig. 4. Streamlines and vorticity contours at Re = 5, We = .5 and M = 1.58. The associated values are the same as in Fig. 3.

Fig. 5. Streamlines and vorticity contours at Re = 50, We = .5 and M = 5. The corresponding values of the streamfunction are 0, ±.01, ±.05 and ±.1.
The values of the vorticity are 0, ±.5, ±1 and ±5.

4. Flow around a circular cylinder. Let us consider the combined effect of viscoelasticity and inertia on the flow around a circular cylinder. Fig. 6 shows the general expectation as explained by Joseph in his recent book [11], for a supercritical condition. "Vorticity is created at the surface of the body due to the no-slip condition, but this vorticity cannot propagate upstream into the region of uniform flow where the velocity is larger than the speed c of vorticity waves into rest. In a linear theory, the first changes in the vorticity would occur across the leading characteristic surface which, in the axisymmetric case, forms a cone like the Mach cone of gas dynamics. The undisturbed region in front of the cone is like the region of silence in gas dynamics." With our numerical method, we are not constrained by the linearized theory. We are able to consider a flow which is supercritical away from the body and subcritical near the body.

Fig. 6. Expected distribution of the vorticity in a perturbed uniform flow around a circular cylinder.

In the present chapter, we limit ourselves to the effect of change of type upon the flow kinematics and in particular upon the vorticity. In a later paper [26], we will analyze the consequences of the modified kinematics upon drag and heat transfer, which are indeed dramatic.

Fig. 7. Geometry and boundary conditions for the flow around a circular cylinder.

The geometry and boundary conditions of the problem are shown in Fig. 7. For economical purposes, we consider the flow of a Maxwell fluid around a circular cylinder located between two parallel walls. The distance between the walls is 50R, where R is the radius of the cylinder.
Preliminary calculations have shown that, for the range of Re and We that we wish to consider, we need upstream and downstream regions which are respectively 50R and 200R long. We impose a uniform velocity profile in the entry and exit sections, and a uniform velocity V along the walls. The velocity vanishes on the cylindrical wall. Additionally, we impose vanishing stresses in the entry section. For Re, We and M we use the definitions

(4.1) Re = ρVR/η, We = λV/R, M = V/(η/λρ)^(1/2).

The numerical method is SU 4 x 4. On the basis of the velocity field, we calculate the streamfunction ψ and the vorticity ω as in section 3. Another quantity of interest for the present problem is the sign of det L_B, defined by (2.11). A negative value of det L_B indicates the extent of the region where the vorticity equation is hyperbolic. A partial view of the finite element mesh used for the results of the present section is shown in Fig. 8. In our later paper [26], we will show that our results are confirmed by a coarser and a finer grid. The mesh contains 532 elements, 2233 nodes and 31214 degrees of freedom. We note that very thin elements have been selected near the cylindrical wall with a view to a good definition of the boundary layer.

Fig. 8. Partial view of the finite element mesh together with a zoom near the cylindrical wall.

Let us first analyze the results obtained for creeping flow, i.e. for the purely viscoelastic case. We have calculated solutions up to We = 10, at which value a more refined mesh is already needed near the cylindrical boundary. In Fig. 9, we show streamlines near the cylinder for respective values of We equal to 0, 5 and 10. The step between the streamlines is the same for all three values. We observe that their spacing increases when We increases; this indicates a tendency to stagnation near the cylinder when viscoelastic effects increase. We also detect a slight downstream shift of the streamlines. In Fig.
10, we analyze the axial velocity profile and the vorticity on the axis y = 0 indicated in Fig. 7. We note a hump in the velocity profile, since the flow rate is the same for all sections. As we expect from Fig. 9, we find that the velocity gradient is less steep at some distance of the cylinder for high values of We. We find however that the vorticity is steeper near the cylinder when We increases. The wiggles at We = 10 indicate that the finite element mesh is too coarse for an accurate resolution of the boundary layer. The vorticity contours are shown in Fig. 11. We find that, at We = 5, the contour lines are almost symmetric with respect to the line y = 0. The wiggly behavior at We = 10 again indicates the coarseness of the mesh.

Fig. 9. Streamlines on the zoom of Fig. 8 for creeping flow (We = 0, 5, 10; Re = 0). Streamfunction values from left to right are given by (24.95 − 0.4n)/VR, with n = 0, ..., 12.

Fig. 10. Axial velocity v/V and vorticity profile ωR/V along the axis y = 0 for creeping flow.

Fig. 11. Vorticity contours for creeping flow (Re = 0; We = 0, 5, 10). The values of ωR/V are .1, .3 and .5, with the highest value closer to the cylinder.

These relatively high values of We allow us to consider small values of Re. Indeed, at We = 10, criticality is reached at Re = 0.1. Before examining the combined effects of inertia and viscoelasticity, let us consider the purely Newtonian situation with inertia. Fig. 12 shows the streamlines at Re = 0.3 and Re = 0.6. The effect of inertia is clearly a downstream shift of the streamlines with little effect on the extent of the stagnant region. This is confirmed in Fig. 13, where we also find little change in the vorticity when Re goes from 0.3 to 0.6. However, the effect of inertia on vorticity, even at such low values of Re, is clearly seen in Fig. 14; we find that the contour lines are shifted in the downstream direction.
Let us observe in particular the spacing between the isovorticity lines in the upstream region, which will be modified once the vorticity equation changes type.

Fig. 12. Streamlines on the zoom of Fig. 8 for inelastic flow (Re = 0.3, 0.6; We = 0). The values are the same as in Fig. 9.

Fig. 13. Axial velocity v/V and vorticity profile ωR/V along the axis y = 0 for inelastic flow.

Fig. 14. Vorticity contours for inelastic flow (Re = 0.3 and Re = 0.6, We = 0). The values are the same as in Fig. 11.

Let us now analyze the coupled effect of viscoelasticity and inertia at We = 5. In Fig. 15, we show the extent of the hyperbolic region when Re goes from 0.18 to 0.6. At Re = 0.18, a hyperbolic region has developed between the cylinder and the wall, in view of the hump in the velocity profile detected in Fig. 10, although the Mach number, based on the velocity on the wall, is still below 1. We find that, at M = 1.5 and M = 1.75, the cylinder is still surrounded by an elliptic region. The effect on the streamlines is shown in Fig. 16 which, compared with Fig. 9, shows that the downstream shift is accentuated, as we might actually expect from Fig. 12. One of the most striking features is shown in Fig. 17, where we show the axial velocity and vorticity profiles along the line y = 0. The major difference between the solution at M = 0.95 on one hand and M = 1.5 and 1.75 on the other is that the velocity profile is affected by an essentially discontinuous slope. The location of the discontinuity corresponds to the shock of vorticity observed on the same figure. The isovorticity lines of Fig. 18, to be compared with those of Fig. 14, confirm the existence of such a shock. On Fig. 18, we have also indicated the slope of the Mach cone, parallel to the line x = y(M² − 1)^(1/2). Quite clearly, the calculations exhibit the features of Fig. 6.
The wiggles on the outer contour line are due to the fact that ω is quite flat in the immediate neighborhood of the shock, and that the mesh is coarse in the radial direction at such a distance from the cylinder (see Fig. 8).

Fig. 15. Region of hyperbolic vorticity indicated by the shaded area (We = 5).

Fig. 16. Streamlines on the zoom of Fig. 8 for viscoelastic flow, We = 5. The values are the same as in Fig. 9.

Fig. 17. Axial velocity v/V and vorticity profile ωR/V along the axis y = 0 for viscoelastic flow, We = 5.

Fig. 18. Vorticity contours for viscoelastic flow, We = 5 (Re = 0.18, 0.45, 0.6). The values are the same as in Fig. 11.

The problem has also been studied for the same values of M at a higher value of We. The elasticity We/Re is now four times as high. In Fig. 19, we observe that the extent of the downstream elliptic region is now definitely larger, as anticipated by Joseph [11]. Again, Fig. 20 demonstrates the abrupt change of the vorticity.

Fig. 19. Region of hyperbolic vorticity indicated by the shaded area (We = 10).

Fig. 20. Vorticity contours for viscoelastic flow, We = 10 (Re = 0.09, 0.225, 0.3). The values are the same as in Fig. 11.

Conclusions. We have shown that a mixed finite element method with a proper integration of the constitutive equations allows one to calculate flows of viscoelastic fluids with instantaneous elasticity in the supercritical regime. The validity of the algorithm has been tested on the flow through a wavy channel which has been calculated analytically in [20]. The flow of a Maxwell fluid around a cylindrical body shows that the coupling between viscoelasticity and inertia can produce dramatic effects at fairly low values of the Reynolds number. In particular, we have observed the shock transition between regions of vanishing and active vorticity. In a later paper [26], we will analyze the consequences of such flows on heat and momentum transfer properties.
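Two of the quoted parameter choices can be checked against the relation M² = Re·We (which follows from the definitions in (4.1)): criticality (M = 1) at We = 10 indeed occurs at Re = 0.1, and the We = 10 runs of Fig. 20 keep the same Mach numbers as the We = 5 runs of Fig. 18 while quadrupling the elasticity We/Re:

```python
# Sanity checks based on M**2 = Re*We (from Re = rho*V*R/eta,
# We = lambda*V/R, M = V*sqrt(lambda*rho/eta)).

# Criticality M = 1 at We = 10 is reached at Re = 1/We = 0.1:
assert abs((1.0/10.0)*10.0 - 1.0) < 1e-12

# The We = 10 runs keep Re*We (hence M) pairwise equal to the We = 5 runs,
# while the elasticity We/Re is four times as high:
re_we5  = [0.18, 0.45, 0.6]    # Re values at We = 5  (Fig. 18)
re_we10 = [0.09, 0.225, 0.3]   # Re values at We = 10 (Fig. 20)
for r5, r10 in zip(re_we5, re_we10):
    assert abs(r5*5.0 - r10*10.0) < 1e-12
    assert abs((10.0/r10)/(5.0/r5) - 4.0) < 1e-12
print("ok")
```

Halving Re while doubling We thus probes the same Mach numbers at doubled elasticity, which is what isolates the effect of We on the extent of the elliptic region.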
REFERENCES

[1] I.M. RUTKEVICH, The propagation of small perturbations in a viscoelastic fluid, J. Appl. Math. Mech., 34 (1970), 35-50.
[2] I.M. RUTKEVICH, On the thermodynamic interpretation of the evolutionary conditions of the equation of the mechanics of finitely deformable viscoelastic media of Maxwell type, J. Appl. Math. Mech., 36 (1972), 283-295.
[3] D.D. JOSEPH, M. RENARDY AND J.C. SAUT, Hyperbolicity and change of type in the flow of viscoelastic fluids, Arch. Ration. Mech. Anal., 87 (1985), 213-251.
[4] D.D. JOSEPH AND J.C. SAUT, Change of type and loss of evolution in the flow of viscoelastic fluids, J. Non-Newtonian Fluid Mech., 20 (1986), 117-141.
[5] F. DUPRET AND J.M. MARCHAL, Sur le signe des valeurs propres du tenseur d'extra-contraintes dans un ecoulement de fluide de Maxwell, J. de Mecanique theorique et appliquee, 5 (1986), 403-427.
[6] F. DUPRET AND J.M. MARCHAL, Loss of evolution in the flow of viscoelastic fluids, J. Non-Newtonian Fluid Mech., 20 (1986), 143-171.
[7] A.B. METZNER, E.A. UEBLER AND C.F.C.M. FONG, Converging flows of viscoelastic materials, Am. Inst. Ch. Eng. J., 15 (1969), 750-758.
[8] D.D. JOSEPH, J.E. MATTA AND K. CHEN, Delayed die swell, J. Non-Newtonian Fluid Mech., 24 (1987), 31-65.
[9] D.F. JAMES AND A.J. ACOSTA, The laminar flow of dilute polymer solutions around circular cylinders, J. Fluid Mech., 42 (1970), 269-288.
[10] J.S. ULTMAN AND M.M. DENN, Anomalous heat transfer and a wave phenomenon in dilute polymer solutions, Trans. Soc. Rheol., 14 (1970), 307-317.
[11] D.D. JOSEPH, Fluid dynamics of viscoelastic liquids, Springer-Verlag (1989).
[12] M.J. CROCHET, Numerical simulation of viscoelastic flow: A review, Rubber Chemistry and Technology - Rubber Reviews for 1989.
[13] R. KEUNINGS, Simulation of viscoelastic flow, in C.L. Tucker III (ed.), Fundamentals of computer modeling for polymer processing (1988).
[14] F. DUPRET, J.M. MARCHAL AND M.J. CROCHET, On the consequence of discretization errors in the numerical calculation of viscoelastic flow, J.
Non-Newtonian Fluid Mech., 18 (1985), 173-186.
[15] R.A. BROWN, R.C. ARMSTRONG, A.N. BERIS AND P.W. YEH, Galerkin finite element analysis of complex viscoelastic flows, Comp. Meth. Appl. Mech. Eng., 58 (1986), 201-226.
[16] J.H. SONG AND J.Y. YOO, Numerical simulation of viscoelastic flow through sudden contraction using type dependent method, J. Non-Newtonian Fluid Mech., 24 (1987), 221-243.
[17] R.E. GAIDOS AND R. DARBY, Numerical simulation and change in type in the developing flow of a nonlinear viscoelastic fluid, J. Non-Newtonian Fluid Mech., 29 (1988), 59-79.
[18] J.M. MARCHAL AND M.J. CROCHET, A new mixed finite element for calculating viscoelastic flow, J. Non-Newtonian Fluid Mech., 26 (1987), 77-114.
[19] M.J. CROCHET, V. DELVAUX AND J.M. MARCHAL, On the convergence of the streamline-upwind mixed finite element, J. Non-Newtonian Fluid Mech., submitted (1989).
[20] J.Y. YOO AND D.D. JOSEPH, Hyperbolicity and change of type in the flow of viscoelastic fluids through channels, J. Non-Newtonian Fluid Mech., 19 (1985), 15-41.
[21] M.J. CROCHET, A.R. DAVIES AND K. WALTERS, Numerical simulation of Non-Newtonian flow, Elsevier, Amsterdam (1984).
[22] I. BABUSKA, The finite element method with Lagrangian multipliers, Numer. Math., 20 (1973), 179-192.
[23] F. BREZZI, On the existence, uniqueness and approximation of saddle-point problems arising from Lagrange multipliers, RAIRO Num. Analysis, 8-R2 (1974), 129-151.
[24] M. FORTIN AND R. PIERRE, On the convergence of the mixed method of Crochet and Marchal for viscoelastic flows, to appear in Comp. Meth. Appl. Mech. Eng. (1989).
[25] A.N. BROOKS AND T.J.R. HUGHES, Streamline-upwind/Petrov-Galerkin formulations for convection dominated flows with particular emphasis on the incompressible Navier-Stokes equations, Comp. Meth. Appl. Mech. Eng., 32 (1982), 199-259.
[26] V. DELVAUX AND M.J. CROCHET, in preparation.

SOME QUALITATIVE PROPERTIES OF 2 x 2 SYSTEMS OF CONSERVATION LAWS OF MIXED TYPE

H. HOLDEN*, L. HOLDEN†, N.H. RISEBRO‡

Abstract.
We study qualitative features of the initial value problem z_t + F(z)_x = 0, z(x,0) = z₀(x), x ∈ R, where z(x,t) ∈ R², with Riemann initial data, viz. z₀(x) = z_l if x < 0 and z₀(x) = z_r if x > 0. In particular we are interested in the case when the system changes type, i.e. when the eigenvalues of the Jacobian dF become complex. It is proved that if z_l and z_r are in the elliptic region, and the elliptic region is convex, then part of the solution has to be outside the elliptic region. If both z_l and z_r are in the hyperbolic region, then the solution will not enter the elliptic region. We show with an explicit example that the latter property is not true for general Cauchy data. This example is investigated numerically.

Key words. conservation laws, mixed type, Riemann problems

AMS (MOS) subject classifications. 35L65, 35M05, 76T05

1. Introduction. In this note we analyze certain qualitative properties of the 2 x 2 system of partial differential equations in one dimension of the form

(1.1) ∂u/∂t + ∂f(u,v)/∂x = 0, ∂v/∂t + ∂g(u,v)/∂x = 0,

with u = u(x,t), v = v(x,t), x ∈ R. In particular we are interested in the initial value problem with Riemann initial data, i.e.

(1.2) (u(x,0), v(x,0)) = (u_l, v_l) for x < 0, (u_r, v_r) for x > 0,

where u_l, u_r, v_l, v_r are constants. The system (1.1),(1.2) arises as a model for a diverse range of physical phenomena from traffic flow [2] to three-phase flow in porous media [1]. Common for these applications is that one obtains from very general physical assumptions a system of mixed type, i.e. there is a region E ⊂ R² of phase space where the 2 x 2 matrix

(1.3) dF = ( f_u(u,v) f_v(u,v) ; g_u(u,v) g_v(u,v) )

has no real eigenvalues. The system is then called elliptic in E.

*Division of Physics, Mathematics and Astronomy, California Institute of Technology, Pasadena, CA 91125, USA. On leave from Institute of Mathematics, University of Trondheim, N-7034 Trondheim-NTH, Norway. Supported in part by Vista, NAVF and NSF grant DMS-8801918. H.H.
would like to thank Barbara Keyfitz, Michael Shearer and the Institute for Mathematics and its Applications for organizing a very stimulating workshop and for the invitation to present these results.

†Norwegian Computing Center, P.O. Box 114, Blindern, N-0314 Oslo 3, Norway. Supported in part by Vista and NTNF.

‡Department of Mathematics, University of Oslo, P.O. Box 1053, Blindern, N-0316 Oslo 3, Norway. Supported by Vista and NTNF.

Consider e.g. the case of three-phase flow in porous media, where the unknown functions u and v denote saturations, i.e. relative volume fractions, of two of the phases, e.g. oil and water respectively. A recent numerical study [1] with realistic physical data gave as a result that there in fact is a small compact region E in phase space, and quite surprisingly the Riemann problem (1.1),(1.2) turned out to be rather well-behaved numerically in this situation. Subsequent mathematical analysis [25], [9], [16], [27] showed that one in general has to expect mixed type behavior in this case. Also in applications to elastic bars and van der Waals fluids [14], [28], [22], [23], [24] there is mixed type behavior. See also [20], [10], [11], [12], [13], [15], [17], [18], [19]. Parallel to this development there has been a detailed study of certain model problems with very simple flux functions (f, g) with elliptic behavior in a compact region E, which has revealed a very complicated structure of the solution to the Riemann problem [7], [8]. In general one must expect nonuniqueness of the solution for Riemann problems, see [5]. We prove two theorems for general 2 x 2 conservation laws of mixed type. Specifically, the flux function is not assumed to be quadratic. The first theorem states that if z_l is in the elliptic region E, then z_l is the only point on the Hugoniot locus of z_l inside E, provided E is convex.
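Whether a given state lies in E is a pointwise discriminant condition on the Jacobian: a 2 x 2 matrix dF has complex eigenvalues exactly when (f_u − g_v)² + 4 f_v g_u < 0. A minimal checker, with two hypothetical Jacobians as smoke tests:

```python
# Type check for a 2x2 system z_t + F(z)_x = 0: the state is elliptic when
# the Jacobian dF = [[fu, fv], [gu, gv]] has no real eigenvalues, i.e. when
# the discriminant (fu - gv)**2 + 4*fv*gu of its characteristic polynomial
# is negative.

def is_elliptic(fu, fv, gu, gv):
    return (fu - gv)**2 + 4.0*fv*gu < 0.0

# A rotation-like Jacobian [[0, -1], [1, 0]] has eigenvalues +-i (elliptic),
assert is_elliptic(0.0, -1.0, 1.0, 0.0)
# while a symmetric Jacobian always has real eigenvalues (hyperbolic).
assert not is_elliptic(0.0, 1.0, 1.0, 0.0)
print("ok")
```

Strict hyperbolicity corresponds to a strictly positive discriminant; the degenerate case of a vanishing discriminant (coinciding eigenvalues) is the non-strictly hyperbolic situation discussed later.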
In the second theorem we show that one cannot connect a left state outside E via an intermediate state inside E to a right state outside E if we only allow shocks with viscous profiles as defined by (2.21). This latter theorem has also been proved independently by Azevedo and Marchesin (private communication). Combining these two theorems we see that if z_l, z_r ∉ E, then also the solution z(x,t) ∉ E for all x ∈ R, t > 0. Finally we explicitly show that this property is not valid for the general Cauchy problem. The consequences of this for Glimm's scheme are discussed by Pego and Serre [21] and Gilquin [3]. For the most recent results on conservation laws of mixed type we refer to the other contributions to these proceedings.

2. Qualitative properties. We write (1.1) as

(2.1) z_t + F(z)_x = 0,

where z = (u, v) and F = (f, g), with Riemann initial data

(2.2) z(x,0) = z_l for x < 0, z_r for x > 0.

We assume that f and g are real differentiable functions such that the Jacobian dF has real eigenvalues except in components of R², each of which is convex. Let

(2.3) E = {z ∈ R²; dF(z) has no real eigenvalues}.

A shock solution is a solution of the form

(2.4) z(x,t) = z_l for x < st, z_r for x > st,

where the shock speed s must satisfy the Rankine-Hugoniot relation [29]

(2.5) s(z_r − z_l) = F(z_r) − F(z_l).

The Hugoniot locus H_{z_l} of z_l is the set of points satisfying

(2.6) H_{z_l} = {z; s(z − z_l) = F(z) − F(z_l) for some s ∈ R}.

For z ∈ E we let E_z denote the convex component of E containing z. Then we have

THEOREM 2.1. Let z_l ∈ E and assume that E_{z_l} is convex; then

(2.7) H_{z_l} ∩ E_{z_l} = {z_l},

and if z_r ∈ E and E_{z_r} is convex, then

(2.8) H_{z_r} ∩ E_{z_r} = {z_r}.

Proof. We will show (2.7); (2.8) then follows by symmetry. Assume that

(2.9) z_r ∈ H_{z_l} and z_r ∈ E_{z_l}.

Then the straight line connecting z_l and z_r is contained in E_{z_l}, viz.

(2.10) α(t) = t z_r + (1 − t) z_l ∈ E_{z_l} for t ∈ [0,1],

by convexity. Let

(2.11) β(t) = F(α(t)).

Then

(2.12) β′(t) = dF(α(t))(z_r − z_l).

We want to show the existence of k ∈ R and of t̄ ∈ [0,1] such that

(2.13) β′(t̄) = k(z_r − z_l).

Assuming (2.13) for the moment, we obtain by combining (2.12) and (2.13)

(2.14) dF(α(t̄))(z_r − z_l) = k(z_r − z_l),

i.e. dF(α(t̄)) has a real eigenvalue, which contradicts (2.10). To prove (2.13) we consider the straight line passing through F(z_l) in the direction z_l − z_r.
By assumption,

(2.15) F(z_r) − F(z_l) = s(z_r − z_l).

Using this we see that this line passes through F(z_r), and that there is a t̄ ∈ [0,1] such that β′(t̄) is parallel to z_r − z_l, proving (2.13). □

This implies that if z_l ∈ E and {z_l, z_r} are the initial values of a Riemann problem, then the state immediately adjacent to z_l (z_r) in the solution will be outside of E_{z_l} (E_{z_r}). This is so since this state must either be a point on a rarefaction or a shock. Rarefaction curves do not enter E, and we have just shown that neither does the Hugoniot locus. This result cannot be extended to an arbitrary elliptic region consisting of more than one component, as the following construction shows. Let z_l ≠ z_r with z_r ∈ H_{z_l}, and consider two neighborhoods of z_r, viz. z_r ∈ Ω₁ ⊂ Ω₂, such that z_l ∉ Ω₂. Let G(z) be a smooth function such that dG is elliptic in Ω₁ and G(z_r) = F(z_r). Now we use G to modify F near z_r,

(2.16) F̃(z) = F(z) for z ∉ Ω₂, F̃(z) = G(z) for z ∈ Ω₁,

and we make F̃ smooth everywhere. If now z_l is in the elliptic region of F, then both z_l and z_r are in elliptic regions for F̃, and z_r ∈ H_{z_l} for F̃. The other basic ingredient in the solution of the Riemann problem is rarefaction waves. These are smooth solutions of the form z = z(x/t) that satisfy (2.1). The solution z(ξ) must follow an integral curve of r_j, j = 1, 2, where r_j is a right eigenvector of dF corresponding to λ_j. Here ξ is the speed of the wave, ξ = λ_j(z(x/t)); therefore λ_j has to increase with ξ as z moves from left to right in the solution of the Riemann problem. Note that no rarefaction wave can intersect E since the eigenvectors are not defined there. For a system of non-strictly hyperbolic conservation laws, the Riemann problem does not in general possess a unique solution, and by making the entropy condition sufficiently lax in order to obtain existence of a solution, one risks losing uniqueness.
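Eliminating the shock speed s from the Rankine-Hugoniot relation leaves the scalar condition that F(z) − F(z_l) be parallel to z − z_l; the zero set of the corresponding cross product is the Hugoniot locus. A minimal illustration with the hypothetical decoupled flux F(u,v) = (u²/2, v²/2) (not one of the fluxes of this paper), for which the locus of the origin is the two axes together with the diagonal u = v:

```python
# The Hugoniot locus of z_l: eliminating s from s*(z - z_l) = F(z) - F(z_l)
# leaves the condition that F(z) - F(z_l) be parallel to z - z_l.  With the
# decoupled flux F(u, v) = (u^2/2, v^2/2) and z_l = (0, 0) this reduces to
# u*v*(u - v)/2 = 0, i.e. the two axes and the diagonal u = v.

def F(u, v):
    return (0.5*u*u, 0.5*v*v)

def hugoniot_residual(z, zl):
    (u, v), (ul, vl) = z, zl
    dF0 = F(u, v)[0] - F(ul, vl)[0]
    dF1 = F(u, v)[1] - F(ul, vl)[1]
    return dF0*(v - vl) - dF1*(u - ul)   # 2D cross product

zl = (0.0, 0.0)
for z in [(0.7, 0.0), (0.0, -1.3), (0.9, 0.9)]:      # points on the locus
    assert abs(hugoniot_residual(z, zl)) < 1e-12
assert abs(hugoniot_residual((1.0, 2.0), zl)) > 0.1  # point off the locus
print("ok")
```

For a general flux the residual has no closed-form zero set, and the locus is traced numerically by following sign changes of this residual in the (u,v)-plane.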
It is believed, see however [6], that the correct entropy condition which singles out the right physical solution is that the shock should be the limit as ε → 0 of the solution of the associated parabolic equation

z_t^ε + F(z^ε)_x = ε z^ε_xx,  ε > 0.

We then say that the shock has a viscous profile. Let now z_l, z_r be two states that can be connected with a shock of speed s. We seek solutions of the form

(2.17) z^ε = z((x − st)/ε) = z(ξ).

... (Re(λ_j(z_m)) − s_r < 0), hence we obtain (2.24), which contradicts the fact that there is no z_m. □

Combining Theorem 2.1 and Theorem 2.2 we obtain

COROLLARY 2.3. Consider an admissible solution z = z(x,t) of (2.1) with Riemann initial data (2.2). Assume that E is convex. Then
(1) If z_l, z_r ∉ E, then also z(x,t) ∉ E for all x ∈ R, t > 0.
(2) If z_l ∈ E or z_r ∈ E and z(x̄, t̄) ∈ E for some x̄, t̄, then z(x̄, t̄) ∈ {z_l, z_r}.

The corollary states that if the initial values in a Riemann problem are inside the convex elliptic region, then the solution will contain values outside this region if the entropy condition is based on the "vanishing viscosity" approach. Furthermore, if the initial values are outside the convex elliptic region, then the solution will not enter this region.

3. The Cauchy problem - a counterexample. Based on the results on the Riemann problem in the previous section, it is natural to ask whether the same property is true for the general Cauchy problem: If

(3.1) z_t + F(z)_x = 0, z(x,0) = z₀(x),

and

(3.2) z₀(x) ∉ E for all x ∈ R,

is

(3.3) z(x,t) ∉ E for all x ∈ R and t > 0?

The following example shows this not always to be the case. Let

(3.4) f(u,v) = (1/2)(u²/2 + v²) + v,

(3.5) g(u,v) = uv.

Making the ansatz u(x,t) = α(x)β(t), v(x,t) = γ(t), we easily find

(3.7) u(x,t) = 2(c₁x + c₂)/(c₁t + c₃), v(x,t) = c₄/(c₁t + c₃)²,

for constants c_i ∈ R, i = 1, ..., 4. Choosing c₁ = c₃ = 1, c₂ = 0 and c₄ = −2, we find

u₀(x) = 2x, v₀(x) = −2, u(x,t) = 2x/(t + 1), v(x,t) = −2/(t + 1)².

For this choice (3.2) is valid, but (3.3) fails for some x ∈ R for t > √2 − 1; see Figures 1 and 2.
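The explicit solution can be verified directly. The check below assumes the flux as read from the partly garbled display, f(u,v) = (u²/2 + v²)/2 + v and g(u,v) = uv; since v is independent of x, the residuals reduce to u_t + f_u u_x and v_t + v u_x, and the elliptic strip −1 < v < 0 (where the discriminant u²/4 + 4v(v + 1) is negative at u = 0) is entered at t = √2 − 1:

```python
import math

# Direct check of the explicit solution u = 2x/(t+1), v = -2/(t+1)^2 for
# u_t + f(u,v)_x = 0, v_t + g(u,v)_x = 0 with the (reconstructed) flux
# f(u,v) = (u^2/2 + v^2)/2 + v, g(u,v) = uv.  Residuals are evaluated with
# central differences.

def u(x, t): return 2.0*x/(t + 1.0)
def v(x, t): return -2.0/(t + 1.0)**2
def f(x, t): return 0.5*(u(x, t)**2/2.0 + v(x, t)**2) + v(x, t)
def g(x, t): return u(x, t)*v(x, t)

h = 1e-5
for (x, t) in [(0.3, 0.2), (-1.1, 0.7), (2.0, 1.5)]:
    res_u = (u(x, t+h) - u(x, t-h))/(2*h) + (f(x+h, t) - f(x-h, t))/(2*h)
    res_v = (v(x, t+h) - v(x, t-h))/(2*h) + (g(x+h, t) - g(x-h, t))/(2*h)
    assert abs(res_u) < 1e-6 and abs(res_v) < 1e-6

# The elliptic strip -1 < v < 0 is entered (at x = 0) once (t+1)^2 > 2:
t_star = math.sqrt(2.0) - 1.0
assert v(0.0, t_star - 0.01) <= -1.0 < v(0.0, t_star + 0.01)
print("ok")
```

Note that u is linear and v constant in x, so u_xx = v_xx = 0; this is exactly why the same pair also solves the viscous equations, as remarked below.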
This and other [21] examples of solutions entering the elliptic region do however have the property that the solutions u and v are also solutions to the viscous equations, since u_xx = v_xx = 0, as well as to hyperbolic equations without any elliptic regions, since we have that v_x = 0.

Figure 1. The solution at t = 0 and t > √2 − 1 in the z-plane.

Figure 2. The solution for t = 0 and for t > 0.

Comparing the general properties of the Riemann problem and the example just presented, it is clear that Glimm's scheme [4] will be highly unstable when the system is of mixed type, since one in this scheme replaces the general Cauchy problem by a series of Riemann problems. This has recently been discussed by Pego and Serre [21], where another counterexample is provided, and by Gilquin [3]. It was found that difference schemes also exhibit instabilities in this mixed type problem. The scheme used for the numerical examples was itself a mixed scheme: if both eigenvalues had positive (negative) real part, an upwind (downwind) scheme was used; else a Lax-Friedrichs scheme was used. A pure Lax-Friedrichs scheme will have the same kind of oscillations, but they appear at a much smaller Δx. In Figures 3-5 we see the numerical solution to the initial value problem

(3.11) u₀(x) = x/5, v₀(x) = −1.1,

at times t = 0.0, t = 1.5 and t = 3.0 respectively. This is (3.7) with c₁ = 1, c₂ = 0, c₃ = 10 and c₄ = −110, and the exact solution enters the elliptic region at t = √110 − 10 ≈ 0.4881. In all the examples Δx = 0.01 and Δt = 0.002. In Figures 6-7 we see numerical solutions to initial value problems with perturbations of these initial values at t = 2.0.

Figure 3. The numerical solution at t = 0.0. Figure 4. The numerical solution at t = 1.5. Figure 5. The numerical solution at t = 3.0. Figure 6. ṽ₀ = v₀ − 0.02x, ũ₀ = u₀. Figure 7. ṽ₀ = v₀ + 0.03 sin(...)x, ũ₀ = u₀.

These examples indicate that the solutions do enter the elliptic region, but this is difficult to determine due to the oscillations.

REFERENCES

[1] J.B. BELL, J.A.
TRANGENSTEIN, G.R. SHUBIN, Conservation laws of mixed type describing three-phase flow in porous media, SIAM J. Appl. Math., 46 (1986), pp. 1000-1017.
[2] J.H. BICK, G.F. NEWELL, A continuum model for two-directional traffic flow, Quart. J. Appl. Math., 18 (1961), pp. 191-204.
[3] H. GILQUIN, Glimm's scheme and conservation laws of mixed type, SIAM J. Sci. Stat. Comp., 10 (1989), pp. 133-153.
[4] J. GLIMM, Solutions in the large for nonlinear hyperbolic systems of equations, Comm. Pure Appl. Math., 18 (1965), pp. 697-715.
[5] J. GLIMM, Nonuniqueness of solutions for Riemann problems, New York Univ. Preprint (1988).
[6] M.E.S. GOMES, The viscous profile entropy condition is incomplete for realistic flows, New York Univ. Preprint (1988).
[7] H. HOLDEN, On the Riemann problem for a prototype of a mixed type conservation law, Comm. Pure Appl. Math., 40 (1987), pp. 229-264.
[8] H. HOLDEN, L. HOLDEN, On the Riemann problem for a prototype of a mixed type conservation law II, in Contemporary Mathematics, edited by W.B. Lindquist (to appear).
[9] L. HOLDEN, On the strict hyperbolicity of the Buckley-Leverett equations for three-phase flow in a porous medium, Norwegian Computing Centre, Preprint (1988).
[10] L. HSIAO, Admissible weak solution for nonlinear system of conservation laws of mixed type, J. Part. Diff. Eqns. (to appear).
[11] L. HSIAO, Nonuniqueness and uniqueness of admissible solutions of Riemann problem for system of conservation laws of mixed type, Indiana University Preprint (1988).
[12] L. HSIAO, Admissible weak solution of the Riemann problem for nonisothermal motion in mixed type system of conservation laws, Indiana University Preprint (to appear).
[13] E.L. ISAACSON, B. MARCHESIN, B.J. PLOHR, Transitional waves for conservation laws, University of Wisconsin Preprint (1988).
[14] R.D. JAMES, The propagation of phase boundaries in elastic bars, Arch. Rat. Mech. Anal., 13 (1980), pp. 125-158.
[15] B.L.
KEYFITZ, The Riemann problem for nonmonotone stress-strain functions: A "hysteresis" approach, Lectures in Appl. Math., 23 (1986), pp. 379-395.
[16] B.L. KEYFITZ, An analytical model for change of type in three-phase flow, in Numerical Simulation in Oil Recovery, edited by M.F. Wheeler, Springer-Verlag, New York-Berlin-Heidelberg, 1988, pp. 149-160.
[17] B.L. KEYFITZ, A criterion for certain wave structures in systems that change type, in Contemporary Mathematics, edited by W.B. Lindquist (to appear).
[18] B.L. KEYFITZ, Admissibility conditions for shocks in conservation laws that change type, Proceedings of GAMM International Conference on Problems Involving Change of Type (to appear).
[19] B.L. KEYFITZ, Change of type in three-phase flow: A simple analogue, J. Diff. Eqn. (to appear).
[20] M.S. MOCK, Systems of conservation laws of mixed type, J. Diff. Eqn., 37 (1980), pp. 70-88.
[21] R.L. PEGO, D. SERRE, Instabilities in Glimm's scheme for two systems of mixed type, SIAM J. Numer. Anal., 25 (1988), pp. 965-988.
[22] M. SHEARER, The Riemann problem for a class of conservation laws of mixed type, J. Diff. Eqn., 46 (1982), pp. 426-445.
[23] M. SHEARER, Admissibility criteria for shock wave solutions of a system of conservation laws of mixed type, Proc. Roy. Soc. (Edinburgh), 93A (1983), pp. 223-224.
[24] M. SHEARER, Non-uniqueness of admissible solutions of Riemann initial value problems for a system of conservation laws of mixed type, Arch. Rat. Mech. Anal., 93 (1986), pp. 45-59.
[25] M. SHEARER, Loss of strict hyperbolicity for the Buckley-Leverett equations of three-phase flow in a porous medium, in Numerical Simulation in Oil Recovery, edited by M.F. Wheeler, Springer-Verlag, New York-Berlin-Heidelberg, 1988, pp. 263-283.
[26] M. SHEARER, D.G. SCHAEFFER, The classification of 2 x 2 systems of non-strictly hyperbolic conservation laws, with application to oil recovery, Comm. Pure Appl. Math., 40 (1987), pp. 141-178.
[27] M. SHEARER, J.
TRANG EN STEIN , Change of type in conservation laws for three-phase flow in porous media, North Carolina State Univ., Preprint (1988). M. SLEMROD, Admissibility criteria for propagating phase boundaries in a van der Waals fluid, Arch. Rat. Mech. Anal., 81 (1983), pp. 301-315. J. SMOLLER, Shock Waves and Reaction-Diffusion Equations, Springer, New York, 1983. ON THE STRICT HYPERBOLICITY OF THE BUCKLEY-LEVERETT EQUATIONS FOR THREE-PHASE FLOW LARS HOLDENt Abstract. It is proved that the standard assumptions on the Buckley-Leverett equations for three-phase flow imply that the equation system is not strictly hyperbolic. Therefore, the solution of the Buckley-Leverett equations for three-phase flow is very complicated. We also discuss four different models for the relative permeability. It is stated that Stone's model almost always gives (an) elliptic region(s). Furthermore, it is proved that Marchesin's model is hyperbolic under very weak assumptions. The triangular model is hyperbolic and the solution is well-defined and depends L1 -continuously upon the initial values in the Riemann problem. 1. Introduction. The Buckley-Leverett equations describe the flow of three phases in a porous medium neglecting capillary effects. If the system is not strictly hyperbolic, the solution is much more complicated. If the system is not hyperbolic, then the solution is not stable. Therefore it is important to know whether the system is strictly hyperbolic, hyperbolic or not hyperbolic. In the last years there have been several papers on the strict hyperbolicity of the Buckley-Leverett equations. Bell, Trangenstein and Shubin [1] showed that Stone's model, which is the most used model for three-phase relative permeabilities, may give an elliptic region. Shearer [9] proved that if two interaction conditions between the relative permeabilities are satisfied, then strict hyperbolicity fails. 
Shearer and Trangenstein [10] propose some other interaction conditions which also imply that strict hyperbolicity fails. They also discuss several alternatives to Stone's model. In a recent paper, Trangenstein [12] proves that a large class of models have elliptic regions when gravity is included. Fayers [2] examines the elliptic region(s) in Stone's model for different choices of the residual oil parameter. The properties of the solution near an elliptic region are not known; see [5] and [8].

In the following section the Buckley-Leverett equations are defined and the standard assumptions are listed. In the third section we make an additional assumption, called Stone's assumption. It is stated that with this assumption the Buckley-Leverett equations fail to be strictly hyperbolic. In the final section four different models are discussed.

2. The Model. Let u, v and w = 1 - u - v be the water, gas and oil saturations. Let f(u,v), g(u,v) and h(u,v) be the relative permeabilities divided by the viscosity for water, gas and oil respectively. Then the Buckley-Leverett equations for three-phase flow are

(1)    u_t + ( f(u,v) / (f(u,v) + g(u,v) + h(u,v)) )_x = 0,

(2)    v_t + ( g(u,v) / (f(u,v) + g(u,v) + h(u,v)) )_x = 0.

The equations are defined in Ω = {(u,v); 0 < u, 0 < v, u + v < 1}. We will assume that the functions f, g and h satisfy

(3)    f(0,v) = 0,   g(u,0) = 0,   h(u,1-u) = 0,

(4)    f_u(0,v) = 0,   g_v(u,0) = 0,   -h_u(u,1-u) - h_v(u,1-u) = 0,

(5)    f_u(u,v) > 0,   f_u(u,v) - f_v(u,v) > 0,   g_v(u,v) > 0,   g_v(u,v) - g_u(u,v) > 0,   h_u(u,v) < 0,   h_v(u,v) < 0   for (u,v) ∈ Ω,

(6)    f_uu(u,v) > 0,   f_uu(u,v) - 2f_uv(u,v) + f_vv(u,v) > 0,   g_vv(u,v) > 0,   g_vv(u,v) - 2g_vu(u,v) + g_uu(u,v) > 0,   h_uu(u,v) > 0,   h_vv(u,v) > 0   for (u,v) ∈ Ω.

(3) implies that a phase is not moving if it is not present. (4) states that the speed of a phase vanishes when its saturation vanishes. (5) and (6) state that the rate and the speed increase when the saturation increases and one of the two other phases has constant saturation. These assumptions are widely accepted.

†Norwegian Computing Centre, P.O. Box 114 Blindern, 0314 Oslo 3, Norway.
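The meaning of strict hyperbolicity in (1)-(2) can be checked numerically: the system is strictly hyperbolic at (u,v) exactly when the Jacobian of the fractional-flow map (F,G) = (f,g)/(f+g+h) has two distinct real eigenvalues there. The sketch below is not from the paper; the quadratic permeabilities f = u², g = v², h = w² are a hypothetical illustrative choice (they satisfy the structural assumptions above and depend on one saturation each, as in Marchesin's model discussed later), and the Jacobian is approximated by central differences.

```python
# Sketch: check strict hyperbolicity of the three-phase fractional-flow
# system at a point, for the illustrative quadratic model
#   f(u) = u^2,  g(v) = v^2,  h(w) = w^2,  w = 1 - u - v.
# (These permeabilities are an assumption for illustration, not data
# from the paper.)

def fractional_flows(u, v):
    """Return the fractional flows (F, G) = (f, g) / (f + g + h)."""
    f, g, h = u * u, v * v, (1.0 - u - v) ** 2
    total = f + g + h
    return f / total, g / total

def jacobian(u, v, eps=1e-6):
    """Central-difference Jacobian of (F, G) with respect to (u, v)."""
    Fu = (fractional_flows(u + eps, v)[0] - fractional_flows(u - eps, v)[0]) / (2 * eps)
    Fv = (fractional_flows(u, v + eps)[0] - fractional_flows(u, v - eps)[0]) / (2 * eps)
    Gu = (fractional_flows(u + eps, v)[1] - fractional_flows(u - eps, v)[1]) / (2 * eps)
    Gv = (fractional_flows(u, v + eps)[1] - fractional_flows(u, v - eps)[1]) / (2 * eps)
    return Fu, Fv, Gu, Gv

def discriminant(u, v):
    """(trace)^2 - 4*det of the Jacobian: > 0 means two distinct real
    eigenvalues, i.e. strict hyperbolicity at (u, v)."""
    Fu, Fv, Gu, Gv = jacobian(u, v)
    return (Fu - Gv) ** 2 + 4.0 * Fv * Gu

# Strictly hyperbolic at a generic interior point ...
print(discriminant(0.5, 0.2) > 0.0)
# ... but the eigenvalues coincide at the symmetric point u = v = w = 1/3,
# where this model's Jacobian degenerates to a multiple of the identity.
print(abs(discriminant(1.0 / 3.0, 1.0 / 3.0)) < 1e-6)
```

For this symmetric model the loss of strict hyperbolicity occurs at the single interior point u = v = 1/3; Stone-type interpolations, by contrast, can produce whole elliptic regions, as discussed below.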
It is possible to give physical arguments for these properties. With an additional assumption at the corners of Ω, it is possible to prove that the system is not strictly hyperbolic; see [7].

3. Stone's assumption. It is also usual to assume that the relative permeability of water and gas depends only on the water and gas saturation, namely

(7)    f(u,v) = f(u),   g(u,v) = g(v).

This assumption is called Stone's assumption; see [11]. It uses the fact that water is usually wetting both in contact with oil and gas, and gas is usually non-wetting both in contact with water and oil. Some experiments indicate that the isoperms of the relative permeability of oil are concave; see [11]. See Figure 1 for an illustration of concave isoperms. We may then state the following theorem.

Figure 1. Concave isoperms of the relative permeability of oil.

THEOREM 1. Assume (3), (4), (5), (6) and (7) are satisfied. Then (1) is not strictly hyperbolic in Ω. If the three curves f' + h_u = 0, g' + h_v = 0 and f' + g' = 0 intersect at a point P in Ω, then (1) is not strictly hyperbolic at P. If the three curves intersect and the isoperms of h are concave, then the system is strictly hyperbolic except at P. If the three curves do not intersect in Ω, then there is at least one elliptic region in Ω.

This theorem is proved in [7]. See Figure 2 for an illustration.

Figure 2. The three curves f' + h_u = 0, g' + h_v = 0 and f' + g' = 0: a) intersection; b) non-intersection, giving an elliptic region.

Since the gas viscosity is much smaller than the viscosity of oil and water, the intersection point between g' + h_v = 0 and f' + g' = 0 has very small gas saturation. Therefore, the elliptic region which is caused by the non-intersection of the three curves has very small gas saturation. The elliptic region reported in [1] is another elliptic region. In their example there is also another elliptic region which is much smaller, with smaller gas saturation.

4. Four different models.
Since only two-phase relative permeabilities are measured, the formulas for three-phase relative permeabilities are usually only interpolation formulas in the three-phase region. In the following we discuss four different models for the relative permeabilities.

4.1 Stone's model. This is the most commonly used expression for the three-phase relative permeabilities; see [11]. It assumes that the relative permeabilities satisfy (7) and in addition that

(8)    h(u,v) = (1 - u - v) h(u,0) h(0,v) / ((1-u)(1-v)).

f(u), g(v), h(u,0) and h(0,v) are usually found from experiments. With reasonable assumptions on the two-phase relative permeabilities, Stone's model satisfies assumptions (3), (5) and (6). (4) is satisfied for the water and gas relative permeabilities, but for the relative permeability of oil only in the corners. Therefore, it is not possible to use Theorem 1 directly. If we make the following minor modification of the model,

(9)    h(u,v) = (1 - u - v)^(1+ε) h(u,0) h(0,v) / ((1-u)^(1+ε) (1-v)^(1+ε))   for ε > 0,

then (4) is also satisfied for the relative permeability of oil. We may then use Theorem 1 and state that the modified Stone's model is not strictly hyperbolic. Since the modified Stone's model is almost identical to Stone's model, this indicates that, except for very special two-phase data, Stone's model (8) is not strictly hyperbolic.

4.2 Marchesin's model. In Marchesin's model [9] it is assumed that the relative permeability of each phase depends only on its own saturation, i.e., in addition to (7),

(10)    h(u,v) = h(1 - u - v).

Therefore, we may use Theorem 1, making the additional assumptions (3), (4), (5) and (6). It is trivial to see that the three curves in Theorem 1 intersect at a common point. We will instead prove a theorem with much weaker assumptions on the relative permeabilities. Let us first define

f^D = {y ∈ R; f'(u) = y for some 0 < u < 1},
g^D = {y ∈ R; g'(v) = y for some 0 < v < 1},
h^D = {y ∈ R; h'(w) = y for some 0 < w < 1},
H = f^D ∩ g^D ∩ h^D,

and

K = {(u,v) ∈ Ω; f'(u) ∈ H, g'(v) ∈ H and h'(1-u-v) ∈ H}.
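Before moving on, a quick numerical sanity check of Stone's interpolation (8): the formula is built precisely so that the three-phase h reduces to the measured two-phase curves on the edges of Ω and vanishes when no oil is present. The sketch below uses hypothetical two-phase data h(u,0) = (1-u)² and h(0,v) = (1-v)² (an assumption for illustration only, not data from the paper).

```python
# Sketch: Stone's interpolation (8) for the oil relative permeability,
# with hypothetical two-phase data h(u,0) = (1-u)^2, h(0,v) = (1-v)^2.

def h_two_phase_water(u):      # h(u, 0): oil-water data (assumed)
    return (1.0 - u) ** 2

def h_two_phase_gas(v):        # h(0, v): oil-gas data (assumed)
    return (1.0 - v) ** 2

def h_stone(u, v):
    """Stone's model (8): h(u,v) = (1-u-v) h(u,0) h(0,v) / ((1-u)(1-v))."""
    return ((1.0 - u - v) * h_two_phase_water(u) * h_two_phase_gas(v)
            / ((1.0 - u) * (1.0 - v)))

# Consistency on the edges of Omega:
print(abs(h_stone(0.4, 0.0) - h_two_phase_water(0.4)) < 1e-12)  # v = 0 edge
print(abs(h_stone(0.0, 0.7) - h_two_phase_gas(0.7)) < 1e-12)    # u = 0 edge
print(abs(h_stone(0.6, 0.4)) < 1e-12)                           # u + v = 1: no oil
```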
We may then formulate the following theorem.

THEOREM 2. Assume that (7) and (10) are satisfied. Then the system (1) is hyperbolic. If K is empty, then (1) is strictly hyperbolic. If K is not empty and (5) is satisfied, then (1) is strictly hyperbolic except at a unique point P.

Proof. It is straightforward to prove that (1) with the assumptions (7) and (10) is strictly hyperbolic if, and only if,

A = (B - C + E)^2 + 4BC > 0,

where B = g(f' + h_u), C = f(g' + h_v) and E = h(f' - g'). The system is hyperbolic if, and only if, A ≥ 0. If BC > 0, then obviously A > 0. Assume B ≥ 0 and C ≤ 0. Since f > 0, g > 0 and h_u = h_v in the interior of Ω, this implies that E ≥ 0. Hence

A = (B - C + E)^2 + 4BC ≥ (B - C)^2 + 4BC = (B + C)^2 ≥ 0.

The argument is similar for B ≤ 0 and C ≥ 0. Thus we have proved that A is non-negative and vanishes if, and only if, f' = g' = -h_u = -h_v. If K is empty, then obviously (1) is strictly hyperbolic. If K is not empty and (5) is satisfied, it is easy to prove that (1) is strictly hyperbolic except at a unique point. □

4.3 The triangular model. The fractional flow function of gas is

G(u,v) = g(u,v) / (f(u,v) + g(u,v) + h(u,v)).

In [6] it is proposed to let the fractional flow of gas depend only on the gas saturation, i.e.,

(11)    G(u,v) = G(v).

Since the gas viscosity is much smaller than the oil and water viscosity, this is a good approximation. This results in a model of the form

(12)    (u_i)_t + f_i(u_1, ..., u_i)_x = 0   for i = 1, ..., n.

This model was studied in [6], [3] and [4] with Riemann initial data, i.e.,

(13)    u_i(x,0) = u_{i-} for x < 0,   u_i(x,0) = u_{i+} for x > 0,   for i = 1, ..., n.

The following theorem was proved in [6].

THEOREM 3. Assume that f_i is continuous, piecewise smooth, and increases or decreases faster than linearly when u_i increases to ∞ or decreases to -∞, for i = 1, ..., n. Then there is a solution of (12) and (13). The solution is unique except for f ∈ M and, for each f, (u_{1-}, ..., u_{n-}, u_{1+}, ..., u_{n+}) ∈ M_f. M has measure zero in the supremum norm and M_f has Lebesgue measure zero for all f not in M. There is always uniqueness for n < 3.
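Returning to the proof of Theorem 2: the quantity A = (B - C + E)² + 4BC can be sampled numerically. The sketch below again uses the hypothetical quadratic Marchesin-type model f(u) = u², g(v) = v², h(w) = w² (an illustrative assumption, for which (7) and (10) hold); consistent with Theorem 2, A stays nonnegative on a grid of interior states and vanishes at the single point u = v = 1/3.

```python
# Sketch: sample A = (B - C + E)^2 + 4BC from the proof of Theorem 2
# for the illustrative Marchesin-type model f(u)=u^2, g(v)=v^2, h(w)=w^2.

def A_quantity(u, v):
    w = 1.0 - u - v
    f, g, h = u * u, v * v, w * w
    fp, gp = 2.0 * u, 2.0 * v          # f'(u), g'(v)
    hu = hv = -2.0 * w                 # h_u = h_v = -h'(w) under (10)
    B = g * (fp + hu)
    C = f * (gp + hv)
    E = h * (fp - gp)
    return (B - C + E) ** 2 + 4.0 * B * C

# Scan a grid of interior states: A >= 0 everywhere (hyperbolicity) ...
n = 60
samples = [A_quantity(i / n, j / n)
           for i in range(1, n) for j in range(1, n) if i + j < n]
print(min(samples) >= -1e-12)
# ... while A vanishes at the unique point P = (1/3, 1/3),
# where f' = g' = -h_u = -h_v.
print(abs(A_quantity(1.0 / 3.0, 1.0 / 3.0)) < 1e-12)
```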
For the Buckley-Leverett equations we may state the following theorem.

THEOREM 4. Assume (11) is satisfied. Then (1) is strictly hyperbolic except on a curve from one corner to another corner, where the system is hyperbolic. The solution of the Riemann problem exists, is unique, is well-defined in Ω and depends L^1-continuously on the initial data.

The first part of this theorem is obvious. Gimse proves the second part of the theorem in [3] and [4].

4.4 The hyperbolic model. It is possible to find a model which satisfies (3), (4), (5), (6), (7) and in addition ensures that the three curves in Theorem 1 intersect. In [7] an example is given of such a model which is always hyperbolic. The disadvantage of such models is that they become technically difficult, in order to ensure that the three curves in Theorem 1 intersect. It is proved in [12] that such models have elliptic regions when gravity is included.

Acknowledgement. The author wishes to thank Helge Holden for valuable discussions.

REFERENCES

[1] BELL, J.B., TRANGENSTEIN, J.A. AND SHUBIN, G.R., Conservation laws of mixed type describing three-phase flow in porous media, SIAM J. Appl. Math., 46 (1986), pp. 1000-1017.
[2] FAYERS, F.J., Extensions of Stone's Method I and the condition for real characteristics in three-phase flow, presented at the SPE conference in Dallas, September 1987.
[3] GIMSE, T., A numerical method for a system of equations modelling one-dimensional three-phase flow in a porous medium, in Nonlinear Hyperbolic Equations - Theory, Computation Methods, and Applications, J. Ballmann and R. Jeltsch (Eds.), Vieweg, Braunschweig, Germany, 1989.
[4] GIMSE, T., Thesis, University of Oslo, Norway, 1989.
[5] HOLDEN, H., HOLDEN, L. AND RISEBRO, N.H., Some qualitative properties of 2 x 2 systems of conservation laws of mixed type, this proceedings.
[6] HOLDEN, L. AND HØEGH-KROHN, R., A class of n nonlinear hyperbolic conservation laws, J. Diff. Eq. (to appear).
[7] HOLDEN, L., On the strict hyperbolicity of the Buckley-Leverett equations for three-phase flow in a porous medium, SIAM J. Appl. Math. (to appear).
[8] PEGO, R.L. AND SERRE, D., Instabilities in Glimm's scheme for two systems of mixed type, SIAM J. Numer. Anal., 25 (1988), pp. 965-988.
[9] SHEARER, M., Loss of strict hyperbolicity of the Buckley-Leverett equations for three-phase flow in a porous medium, in Numerical Simulation in Oil Recovery, Edited by M.F. Wheeler, Springer-Verlag, New York-Berlin-Heidelberg, 1988, pp. 263-283.
[10] SHEARER, M. AND TRANGENSTEIN, J., Change of type in conservation laws for three-phase flow in porous media, Preprint.
[11] STONE, H.L., Estimation of three-phase relative permeability and residual oil data, J. Cdn. Pet. Tech., 12 (1973), pp. 53-61.
[12] TRANGENSTEIN, J.A., Three-phase flow with gravity, J. Contemp. Math. (to appear).

L. HSIAO†

Consider the model system

(1.1)    v_t + p(u)_x = 0,
         u_t - v_x = 0,

which describes the one-dimensional isothermal motion of a compressible elastic fluid or solid in the Lagrangian coordinate system. Here v denotes the velocity, u the specific volume for a fluid or the displacement gradient for a solid, and -p is the stress, which is determined through a constitutive relation. For many materials, p(u) is a decreasing function of u, and the system (1.1) is strictly hyperbolic. However, (1.1) can be of mixed type when it is used for the dynamics of a material exhibiting change of phase, such as a van der Waals fluid [SL] (see Figure 1.1).

(Figure 1.1)

How should we generalize the admissibility criteria used for strictly hyperbolic systems to the case of mixed type, in order to define an admissible discontinuity and an admissible weak solution for Riemann problems?

*This research was supported in part by the Institute for Mathematics and its Applications with funds provided by the National Science Foundation.
†Department of Mathematics, Indiana University, Bloomington, IN 47405 and Institute of Mathematics, Academia Sinica, Beijing, P.R. China.
Is it still possible to prove the existence and uniqueness of the admissible weak solution, and to determine whether or not the solution is continuously dependent on the initial data?

Many results are now available which generalize admissibility criteria from strictly hyperbolic systems to those of mixed type. (See [J], [K1], [K2], [HA1], [HA2], [HM], [HS1], [HS2], [HSL], [P], [SE1], [SE2], [SL].) We apply the generalization of the shock (E) criterion introduced in [HS2] to obtain the results summarized here. See [HS3] for the details.

For a strictly hyperbolic system of the form (1.1), the Oleinik-Liu criterion (the so-called shock (E) criterion) can be described in two different ways, which are equivalent. A discontinuity (σ; u+, v+; u-, v-) is called admissible according to the shock (E) criterion if σ = σ_i(u-, u+) is such that

(I)  (u+, v+) ∈ H_i(u-, v-), i = 1 or 2, where H_i(u-, v-) is the Hugoniot locus determined by the Rankine-Hugoniot condition with (u-, v-) given, and σ_i(u-, u+) ≤ σ_i(u-, u) for all (u, v) ∈ H_i(u-, v-) with u between u- and u+,

or

(II)  (u-, v-) ∈ H_i(u+, v+), i = 1 or 2, where H_i(u+, v+) is the Hugoniot locus determined by the Rankine-Hugoniot condition with (u+, v+) given, and σ_i(u, u+) ≤ σ_i(u-, u+) for all (u, v) ∈ H_i(u+, v+) with u between u- and u+.

Corresponding to the two criteria (I) and (II), we have two types, (I') and (II'), of generalized shock (E) criteria:

(I')  (u+, v+) ∈ H_i(u-, v-), σ = σ_i(u+, u-), i = 1 or 2, and σ_i(u; u-) ≥ σ_i(u+; u-) for any u between u- and u+, where σ_i(u; u-) is defined as a real-valued function,

and similarly for (II').

Now a function (u(ξ), v(ξ)) (ξ = x/t) is called an admissible type I solution to the Riemann problem for (1.1) with data (1.2) if

i) it satisfies the boundary condition (u, v) → (u±, v±) as ξ → ±∞;
ii) it is either a rarefaction wave or a constant state wherever it is smooth;
iii) at any discontinuity it satisfies (I').

Similarly we define admissible type II solutions.
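Criterion (I) can be made concrete because the Hugoniot locus and shock speeds of (1.1) are computable in closed form: the Rankine-Hugoniot conditions give σ[v] = [p(u)] and σ[u] = -[v], hence σ² = -(p(u+) - p(u-))/(u+ - u-). The sketch below uses the hyperbolic-phase pressure p(u) = 1/u (an illustrative choice, decreasing and convex, not the van der Waals law of Figure 1.1) and samples intermediate states on each branch of the Hugoniot locus.

```python
import math

# Sketch: shock speeds for the p-system (1.1) with the illustrative
# hyperbolic-phase pressure p(u) = 1/u (decreasing, convex).
# Rankine-Hugoniot: sigma*[v] = [p(u)], sigma*[u] = -[v], so that
# sigma^2 = -(p(u2) - p(u1)) / (u2 - u1).

def p(u):
    return 1.0 / u

def sigma(u1, u2, branch=+1):
    """Shock speed on the +/- branch of the Hugoniot locus through u1."""
    return branch * math.sqrt(-(p(u2) - p(u1)) / (u2 - u1))

def satisfies_E(um, up, branch, nsamples=200):
    """Check criterion (I): sigma(um, up) <= sigma(um, u) for sampled
    intermediate u on the same branch of the Hugoniot locus."""
    s_shock = sigma(um, up, branch)
    for k in range(1, nsamples):
        u = um + (up - um) * k / nsamples
        if s_shock > sigma(um, u, branch) + 1e-12:
            return False
    return True

um, up = 1.0, 2.0
s = sigma(um, up, +1)
v_jump = -s * (up - um)                       # sigma*[u] = -[v]
print(abs(s * v_jump - (p(up) - p(um))) < 1e-12)   # sigma*[v] = [p]
# The positive-speed branch satisfies (E) for these data; the mirror
# branch does not.
print(satisfies_E(um, up, +1))
print(satisfies_E(um, up, -1))
```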
Finally, we define an admissible type III solution to be a function (u(ξ), v(ξ)) that satisfies i), ii) and

iii) any discontinuity of the first family satisfies (I'), while any discontinuity of the second family satisfies (II').

It was proved in [HS2] that there is a unique admissible type I solution for any given Riemann data, and the same is true for type II. Moreover, the two solutions are identical for any given Riemann data. It is proved in [HS3] that an admissible type III weak solution may not exist for some choices of Riemann data; however, there is a unique type III solution if the Riemann data belong to the same phase (that is, u- ≤ u_α and u+ ≤ u_α, or u- ≥ u_β and u+ ≥ u_β, as shown in Figure 1.1). Furthermore, the three types of admissible weak solutions are identical if the Riemann data belong to the same phase [HS3].

As far as continuous dependence is concerned, it is to be expected in general that the solution is not continuously dependent on the initial data in problems of mixed type. However, some kind of stable behavior can still be expected for our admissible weak solution; this is discussed in [HS3].

In conclusion, we see that the generalized shock (E) criterion supplies a reasonable generalization of an admissibility criterion for the mixed type system (1.1). Moreover, it can be shown that the solution constructed by Shearer in [SE1] can be obtained as our type I or type II admissible weak solution, by using the approach introduced in [HS3].

REFERENCES

[HA1] H. HATTORI, The Riemann problem for a van der Waals fluid with entropy rate admissibility criterion - Isothermal case, Arch. Rational Mech. Anal., 92 (1986), pp. 247-263.
[HA2] H. HATTORI, The Riemann problem for a van der Waals fluid with entropy rate admissibility criterion - Nonisothermal case, J. Differential Equations, 65, 2 (1986), pp. 158-174.
[HM] L. HSIAO AND P. DE MOTTONI, Existence and uniqueness of Riemann problem for a nonlinear system of conservation laws of mixed type, Trans. Amer. Math. Soc. (to appear).
[HS1] L. HSIAO, Admissible weak solution for nonlinear system of conservation laws in mixed type, J. Partial Differential Equations, 2, 1 (1989), pp. 40-58.
[HS2] L. HSIAO, The uniqueness of admissible solutions of Riemann problem for system of conservation laws of mixed type (to appear in J. Diff. Equn.).
[HS3] L. HSIAO, Admissibility criteria and admissible weak solutions of Riemann problems for conservation laws of mixed type, IMA Preprint Series (1990).
[HSL] R. HAGAN AND M. SLEMROD, The viscosity-capillarity criterion for shocks and phase transitions, Arch. Rational Mech. Anal., 83 (1984), pp. 333-361.
[J] R.D. JAMES, The propagation of phase boundaries in elastic bars, Arch. Rational Mech. Anal., 73 (1980), pp. 125-158.
[K1] B.L. KEYFITZ, Change of type in three-phase flow: A simple analogue, J. Diff. Eq. (to appear).
[K2] B.L. KEYFITZ, Admissibility conditions for shocks in conservation laws that change type (to appear).
[P] R. PEGO, Phase transitions: Stability and admissibility in one dimensional nonlinear viscoelasticity (to appear).
[SE1] M. SHEARER, The Riemann problem for a class of conservation laws of mixed type, J. Diff. E., 46 (1982), pp. 426-443.
[SE2] M. SHEARER, Admissibility criteria for shock wave solutions of a system of conservation laws of mixed type, Proc. Royal Soc. Edinburgh, 93A (1983), pp. 233-244.
[SL] M. SLEMROD, Admissibility criteria for propagating phase boundaries in a van der Waals fluid, Arch. Rational Mech. Anal., 81 (1983), pp. 301-315.

SHOCKS NEAR THE SONIC LINE: A COMPARISON BETWEEN STEADY AND UNSTEADY MODELS FOR CHANGE OF TYPE*

BARBARA LEE KEYFITZ†

Abstract. We look at the structure of shocks for states near a locus where the equations change type. Two basic models are considered: steady transonic flow, and models for unsteady change of type. Our result is that these two problems may be distinguished by the nature of the timelike directions and the forward light cone. This leads in a natural way to different candidates for admissible shocks in the two cases.
AMS (1980) Subject Classification. Primary 35L65, 35M05; Secondary 76H05, 35A30.

1. Introduction. Models for Change of Type. Systems of quasilinear equations which are of different type at different states arise in two different ways in applications. The first is typified by the pair of conservation laws which governs steady, inviscid, irrotational, isentropic flow [2]:

(1.1)    (ρu)_x + (ρv)_y = 0,
         u_y - v_x = 0.

Here w = (u, v) represents the velocity of a flow in the xy plane; ρ = ρ(|w|) is the density, given by Bernoulli's law as a function of the speed. The first equation expresses conservation of mass; the second, irrotationality of the flow. Under the assumption that the medium is an ideal gas (or some other reasonable thermodynamics), there is a speed c*, the sonic speed, such that system (1.1) is hyperbolic if |w| > c* ("supersonic flow") and nonhyperbolic (elliptic) if |w| < c* ("subsonic"). The flow changes type along the curve u² + v² = c*², which we shall call the sonic line in this paper. The classification of these equations, and their relation to other models for gas flow, were studied by Courant and Friedrichs [2]. Here we are interested in the following property of weak solutions of (1.1): for every supersonic state w_0, there is a one-parameter family of states w_1(γ), such that the function

(1.2)    w(x,y) = w_0 on one side of a line, w_1 on the other,

is a weak solution.

D > 0 on one side, ℋ, and D < 0 on the other, ℰ. Since

(2.9)    ∇_w D = 2q ∇_w q - 4 det A ∇(det B*) - 4 det B* ∇(det A),

and det(B*A) = 0 ⇒ q = 0 when (2.8) holds, we see that ∇D ≠ 0 only if at least one of det A, det B is nonzero. Hence on ℬ, (2.6) is a nontrivial form and β is well defined: tan β = -(det A / det B)^(1/2). Now let C = B*A, so D = (tr C)² - 4 det C and, with C = (c_ij),

(2.10)    D = (c_11 - c_22)² + 4 c_12 c_21.

If we suppose, without loss of generality, that det B(w_0) ≠ 0, then at the double root of (2.6),

B* P(w, ζ) = C cos β + (det B) I sin β = cos β (C - λI),

where λ = -(det B) tan β = (det C)^(1/2) = ½ tr C.
We note that det B ≠ 0 if and only if cos β ≠ 0, so that P = 0 if and only if

N = C - (½ tr C) I = ½ [ c_11 - c_22   2c_12 ;  2c_21   c_22 - c_11 ]

is zero. But if every component of N is zero, then, from (2.10), ∇D is also zero, contrary to hypothesis. This shows that Rank P ≥ 1 everywhere. Finally, still assuming det B ≠ 0, the last statement of the proposition concerns whether P(w_0, ζ) is nilpotent: clearly P² = 0 ⟺ (BN)² = 0, and since B is invertible, we have P² = 0 if and only if NBN = 0. Now N = B*A - ½ tr(B*A) I = B*A - (det B*A)^(1/2) I. If det A = 0, then N = B*A, so NBN = |B| B*A², and this vanishes precisely when A² = 0, i.e. A is also nilpotent. If det A ≠ 0, we may normalize det A = det B = 1; then N = B*A - I, and NBN = 0 implies that A and B have a common eigenvector r and det(A - B) = 0, so that, writing A and B in a basis {r, s}, A - B is nilpotent. Thus we have proved the final assertion of the proposition, and we have also shown the useful fact that P has a nonzero eigenvalue only in the case that one of A or B is singular. □

From now on we will work in the ball U given by the proof of Proposition 2.1. It is not necessary to restrict U to be a ball, but we omit details of the extension to other geometries. We also note that the matter of P being nilpotent is not invariant under reasonable notions of equivalence, such as interchanging the order of writing the equations. The appropriate definition of equivalence will be given after we discuss shocks.

Requiring that (1.2) be a weak solution of (1.4) results in the equation

(2.12)    V(w_1, w_0, β) ≡ (f(w_1) - f(w_0)) cos β + (g(w_1) - g(w_0)) sin β = 0.

We consider this as a bifurcation equation with distinguished parameter β: for a fixed w_0 in U, we are interested in the zero-set w_1 = w_1(β) of V(w_1, w_0, β).
This set (the bifurcation diagram) is unchanged if V is multiplied on the left by an invertible matrix, and equivalent bifurcation diagrams result under coordinate changes β ↦ β'(β), w ↦ w'(w, β). The mapping V has the property V(w_0, w_0, β) ≡ 0, and so do equivalent mappings if both w_0 and w_1 are transformed by the same function. This type of contact equivalence is called t-equivalence in [3, p. 129]. We use the singularity theory approach of [3] to prove the following theorem, which extends [6].

THEOREM 2.2. Under the further nondegeneracy condition (2.13) at (w_0, β_0), where w_0 ∈ ℬ, β_0 is the solution of (2.4), and l_0 P = P r_0 = 0, there is a ball U ⊂ R², w_0 ∈ U, in which the bifurcation problem

(2.14)    V(w, w_0, β) = 0

is t-equivalent to the one-state-variable problem

(2.15)    x(ε_1 x + ε_2 μ²) = 0

near x = μ = 0. Here we may take μ = β - β_0 and x = w · r_0; ε_1 has the sign of the expression in (2.13) and ε_2 is also non-zero. Problem (2.15) has t-codimension one; an unfolding is given by

(2.16)    h_a(x, μ) = x(ε_1 x + ε_2 μ² + εa) = 0,

where we may take a = w_0 · r_0. The problem and its unfolding are shown in Figure 2.1, a-c.

Proof. We summarize the calculation. Fix w_0 ∈ ℬ, and suppose det B(w_0) ≠ 0. It can be shown that (2.13) implies that r_0 is not tangent to ℬ; we choose a direction for (r_0, l_0) so that r_0 is directed towards ℋ and (2.13) is positive. We write w = w_0 + x r_0 + W(x, β) r_0^⊥ and let h(x, β) = l_0 V(w_0 + x r_0 + W(x, β) r_0^⊥, w_0, β), where W is determined from l_0^⊥ V ≡ 0. Performing the steps of the Liapunov-Schmidt reduction recursively yields h_xx(0, β_0) ≠ 0. Also h_β = 0 and W_β = 0, and, (2.18), using properties of P: P r_0 = A r_0 cos β + B r_0 sin β = 0, so, with B r_0 = y, A r_0 = -y tan β and P_β r_0 = y sec β, l_0 P_β r_0 = l_0 B r_0 sec β. Now, l_0 B is the left eigenvector of N (in (2.11)); hence it is orthogonal to the right eigenvector r_0. Hence in (2.18), h_xβ = 0. Calculating (l_0^⊥ V)_xβ = 0: neither matrix-vector product is zero, since l_0^⊥ B = a_1 l_0 B + a_2 (l_0 B)^⊥,
and a_2 ≠ 0 since B is invertible; since (l_0 B)^⊥ is orthogonal to the left eigenvector of N, it is not orthogonal to r_0. Finally, h_ββ = W_ββ = 0 as a consequence of t-equivalence, and similarly h_βββ = 0. The final calculation is, for l_0 P_ββ r_0,

(2.19)    l_0 P_ββ r_0 = -l_0 P r_0 = 0,

and thus h_xββ ≠ 0 by the calculations above. These defining and nondegeneracy conditions determine the singularity to be equivalent to (2.15); by our normalization, we may take ε_1 = 1. In the unsteady and transonic cases where we have done the calculation, the sign of ε_2 is -1; this means that the nontrivial solutions of (2.15) have x > 0 and hence w ∈ ℋ. If we unfold the singularity (2.15), we may do so by perturbing w_0 away from ℬ, say to w_0 = a r_0 + w̄_0, where the calculations above are performed at a fixed point w̄_0 ∈ ℬ. Now we obtain the normal form h_a(x, β) = x(x - μ² + εa), where ε > 0 in both the unsteady and transonic cases. Again the case a > 0 corresponds to w_0 ∈ ℋ. This concludes the proof. □

The qualitative shape of the Hugoniot locus and its representation as a bifurcation diagram are shown in Figures 2.1 and 2.2. We remark that to each triple {w_1, w_0, β} there correspond two different shock configurations: (1.2) is one and

(2.20)    w(x,y) = w_1 for x cos β + y sin β < 0,   w_0 for x cos β + y sin β > 0

(its mirror image) is the other.

2.1 The Hugoniot locus in state space: (a) w_0 ∈ ℬ; (b) w_0 ∈ ℋ; (c) w_0 ∈ ℰ.

2.2 The bifurcation diagram for the Hugoniot locus: (a) h(x, β) = 0.

Our final use of the local theory in this section will be to obtain the relation between the shock angle β, given by (2.12), and the local characteristic angles β_1(w) < β_2(w) obtained by solving (2.4) at w_0 or w_1, when these points are in ℋ. We observe the following in the case w_0 ∈ ℋ.

2.3 The characteristic angles along the loop.

PROPOSITION 2.3. At the points w_1 = w_0, corresponding to μ = ±√a, we have β = β_1(w_0) or β_2(w_0); thus β_1 = β_0 - √a and β_2 = β_0 + √a, approximately.
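Proposition 2.3 can be read off directly from the unfolded normal form in the proof of Theorem 2.2. The sketch below takes the normalization ε_1 = 1, ε_2 = -1, ε = 1 (an assumption for illustration; the paper only fixes ε_1 = 1 and the signs of ε_2 and ε) and checks where the nontrivial zero branch of h_a meets the trivial one.

```python
# Sketch: the unfolded normal form from the proof of Theorem 2.2,
# normalized so that
#   h_a(x, mu) = x * (x - mu**2 + a),   a > 0  corresponding to w_0 in H.
# The nontrivial zero branch x = mu**2 - a meets the trivial branch
# x = 0 exactly at mu = +/- sqrt(a), as stated in Proposition 2.3.

def h(x, mu, a):
    return x * (x - mu * mu + a)

a = 0.25                      # sqrt(a) = 0.5 is exact in floating point
for mu in (-0.5, 0.5):
    print(h(0.0, mu, a) == 0.0)       # trivial branch x = 0
    print(mu * mu - a == 0.0)         # nontrivial branch crosses it here
# The nontrivial branch x = mu**2 - a changes sign at the crossings:
print(0.7 ** 2 - a > 0.0 and 0.3 ** 2 - a < 0.0)
```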
The genuine nonlinearity of the system in U ∩ ℋ (a consequence of (2.13)), and the fact that the loop crosses ℬ, show that the angles β_i(w_1), calculated as functions of β along the locus, have the structure shown in Figure 2.3. For β_1(w_0) ≤ β < β_1*, w_1 ∈ ℋ and β(w_0, w_1) < β_1(w_1) < β_2(w_1); in the interval β_1* < β < β_2*, w_1 ∈ ℰ and there are no real solutions to the characteristic equation. There is a second point β_2* such that w_1(β_2*) ∈ ℬ, and for β ∈ (β_2*, β_2(w_0)), β_1(w_1) < β_2(w_1) < β(w_0, w_1).

This section has generalized the results in [6] and localized the picture in [7]: our main conclusion is that the nondegeneracy conditions of Proposition 2.1 and Theorem 2.2 are sufficient to ensure a shock polar or Hugoniot loop near the sonic line and to establish the direction of variation of the characteristic covectors along the loop. We now show how this local behavior is related to shock admissibility.

3. Spacelike curves and timelike conormals. We recall, using the notation of section 2, that the characteristic covectors ζ = (cos β, sin β) are normals (or conormals) to the characteristic surfaces φ_i(x, y) = 0, defined by det P(w, ∇φ) = 0. For an unsteady system (1.3), C+ is the forward cone {t > 0, λ_1 t < x < λ_2 t}. For the steady transonic flow or transonic small disturbance equations, C+ is the connected component of the complement of S_1 ∪ S_2 which contains the flow vector w = (u, v). This motivates the following definition:

DEFINITION 3.1. Suppose given a determination C+(w) for all w ∈ R². A shock {w_1, w_0, β} satisfies the C+ criterion if there is precisely one upstream state.

We have already commented on the following classical observations.

PROPOSITION 3.2. Let C+(w) = {t > 0, λ_1(w) t < x < λ_2(w) t} for a strictly hyperbolic genuinely nonlinear system (1.3). Then a shock satisfies the C+ criterion if and only if it satisfies the Lax geometric entropy criterion.

PROPOSITION 3.3.
Let C+(w) be the component of R² \ {S_1 ∪ S_2} containing w = (u, v) if w is a supersonic state of (1.1), and let C+(w_0), w_0 ∈ ℬ, be the corresponding open half-plane otherwise. Then the shocks that satisfy the C+ criterion are precisely the compressive shocks. (The extension of C+ to w ∈ ℰ is arbitrary: no state in ℰ is ever upstream.)

We note that for hyperbolic but nonstrictly hyperbolic systems, an overcompressive shock may be upstream with respect to both states, while a noncompressive shock never satisfies the C+ criterion. These observations, for a hyperbolic system, are not at all new. Definition 3.1 is an extremely special case of Majda's multidimensional shock stability condition [9], which he proves equivalent to Lax's geometric condition. In fact, for a system that is not genuinely nonlinear, where Lax's condition is not sufficient, the C+ criterion is not either. This definition, however, allows an extension to systems without a physical time variable, such as (1.1), and the possibility of extension (as in Proposition 3.3) to systems that change type.

Definition 3.1 may be useful in a couple of ways: it throws a light on the contrast between transonic and unsteady change of type by showing that these two systems differ already in the strictly hyperbolic region in their choice of C+. And it is closely connected with the construction of viscous profiles, as we will show in the next section. It is limited in some obvious ways: it does not admit a simple extension to systems of more than two equations except for extreme shocks. And it does not stand alone as a physical or mathematical admissibility criterion but merely suggests a method for proving a stability theorem. Finally, the definition is not complete without some words on the selection of C+(w) for w ∈ U: for smooth systems, the mapping w ↦ C+(w) is smooth in ℋ, and w ↦ C+(w) can be extended smoothly to ℬ, as indicated in Proposition 3.3.
However, the only continuous extension of C+ into ℰ is C+(w) = ∅ (the empty set) for w ∈ ℰ, and this gives Γ_2 = R² \ {0}, according to which all shocks are spacelike. There is not a clear generalization of "upstream"; however, if we replace C+ by its closure, then the choice results, for w_0 ∈ ℬ, in a cone that degenerates to a line along which the fundamental solution is more singular. For w_0 ∈ ℰ, the support of the fundamental solution is R², but one can choose a fundamental solution with certain growth properties. For the moment, fix any such specification continuously in ℰ. We have the following.

THEOREM 3.4. Consider the states {w_1, w_0, β} for (1.4), parameterized by β, in U, with w_0 ∈ ℋ. Then

a) with the specification of the conormal cone Γ_1(w), the shock is spacelike with respect to w_0 for all pairs {w_1, w_0} on the loop. In particular, there exists a smooth specification of C+(w) such that w_0 is the upstream state for all pairs.

b) for the conormal cone Γ_2(w), the shock is spacelike with respect to w_1 for β ∈ (β_1(w_0), β_1*) ∪ (β_2*, β_2(w_0)), and there is a locally smooth specification of C+(w) such that w_1 is the upstream state for all such pairs. However, there is no smooth extension of C+(w) to w ∈ ℰ such that w_1 is the upstream state on the entire loop.

Proof. Recall that S is spacelike for a state w_i and cone Γ_j if its normal ζ = (cos β, sin β) is in Γ_j(w_i). For Γ_1(w_0), S is clearly spacelike, since β_1(w_0) < β < β_2(w_0) along the entire loop. From the definition, w_0 is either an upstream or a downstream state, and there is a choice of C+ that makes w_0 the upstream state. Since S is never characteristic for w_0, this determination of C+ is smooth. Furthermore, S is not spacelike with respect to w_1 for w_1 ∈ ℋ, and under the natural choice of Γ_1 in ℰ (Γ_1 = ∅), S is not spacelike for w_1 ∈ ℰ either.
Figure 3.2. The geometry of the light cones.

(b) For Γ₂ we have, immediately, that S is never spacelike with respect to w₀. On (β₁(w₀), β₁*) and (β₂*, β₂(w₀)), S is spacelike with respect to Γ₂(w₁). To see that there is no smooth determination of C⁺(w) for w ∈ ℰ that makes w₁ an upstream state everywhere, we make a geometric/topological argument (see Figure 3.2). For w₁ ∈ 𝓗, C₂(w₁) ⊂ C₂(w₀), because of the monotonicity of characteristic speeds. Now S always lies in one or the other component of C₂(w₀)\C₂(w₁), by the first statement after (b). However, for β near β₁(w₀), S is almost characteristic and is near the S₁ characteristic; near β₂(w₀), S is near the S₂ characteristic. Thus S cannot be exterior to C₂(w₁) for all w₁ on the loop for any determination of C₂. This argument does not even depend on the choice of which component is +, but says S will actually fail to be spacelike. This has important implications, some of which we discuss in the next section.

4. The C⁺ criterion, viscous profiles and vectorfield dynamics. In this section we will outline the relation between a definition of admissibility based on the C⁺ criterion and the construction of travelling wave solutions, or viscous profiles, for an associated perturbed system. A perturbation of (1.4) by higher-derivative terms would take the form

(4.1) f(w)ₓ + g(w)_y = ε{∂ₓ(Dwₓ + Ew_y) + ∂_y(Fwₓ + Gw_y)},

where D, E, F, G are 2 × 2 matrices which, typically, might depend on w, and ε measures the strength of the perturbation. In specific applications, introduction of such terms might be motivated by considerations of viscosity or other dissipative or dispersive mechanisms. The idea is well known for unsteady systems where, if y represents time, the perturbation is (Dwₓ)ₓ and (4.1) should have a uniformly parabolic character [10]. For a discussion of the physically motivated viscosity terms in the transonic case, see [7].
We will consider the general case for (4.1) elsewhere, observing here only that a parabolic character is what is desired, in order that solutions of (4.1) will converge to admissible weak solutions: the directions of rapid decay can be related to the light cone and conormal cone for (2.1). Here we will motivate the general case and illustrate the C⁺ criterion by considering self-similar travelling wave solutions of (4.1) with similarity parameter

(4.2) χ = (x cos β + y sin β)/ε.

Solutions approaching the shock (1.2) satisfy

(4.3) w → w₀ as χ → −∞, w → w₁ as χ → +∞, w′(±∞) = 0.

Solutions whose limit is the "mirror image" shock (2.20) satisfy

(4.4) w → w₁ as χ → −∞, w → w₀ as χ → +∞.

Substituting w(χ) in (4.1), integrating once and using (4.3) or (4.4) results in

(4.5) (D cos²β + (E + F) cos β sin β + G sin²β) w′ = (f(w) − f(w₀)) cos β + (g(w) − g(w₀)) sin β.

We abbreviate this as

(4.6) M(w, β) w′ = V(w, w₀, β),

where M is a 2 × 2 matrix that measures the effect of the higher order terms and V is the mapping introduced and studied in Section 2.

Remark. We have already introduced the notion of the shock S as a spacelike surface, at least with respect to one state wᵢ. Now χ, defined in (4.2), which measures orthogonal distance from this surface, might be thought of as representing a "stream coordinate" in physical space; in that case χ → −∞ will represent "upstream" and χ → +∞ "downstream" limits, and these intuitive notions may be helpful. Since we are using the convention that w₀ is the state that is always in 𝓗, we need different boundary conditions, (4.3) or (4.4), to cover the possibilities (both of which arise) that it be the upstream or the downstream state. However, χ does not necessarily represent "time". For example, in unsteady systems, (1.3), the characteristic speed is s = −tan β and χ = (cos β/ε)(x − st); since cos β > 0 in our normalization, χ increases with increasing t only if s < 0.
This emphasizes the fact that "timelike" is properly defined in the cotangent space. We now consider separately the cases TS and US. We have noted that the state w₀ satisfies the criterion for all w₁ on its Hugoniot loop. That is, w₀ is an upstream state. Furthermore, for states w₁ ∈ 𝒪 beyond the loop, w₁ is the upstream state. That is, if there is a solution to (4.6) satisfying (4.3) or (4.4) according as w₁ is in the loop or not, then (4.6) will be consistent with a well-known physical admissibility criterion, namely, admissibility of compressive shocks. We remark that the scalar dynamical system

(4.7) ẋ = x(x − μ₂ + a)

completely describes these dynamics; here h = x(x − μ₂ + a) is the function obtained in Theorem 2.2, and both the cases a > 0 (w₀ ∈ 𝓗) and a < 0 (w₀ ∈ ℰ) are covered. See Figure 4.1.

Figure 4.1. The reduced vectorfield dynamics in the TS case.

Now the full dynamics of (4.6) will be either one- or two-dimensional. (In [7] we showed that the addition of physical viscosity results in a one-dimensional system.) We may speak of M, or of the perturbation (4.1), as consistent with TS dynamics in 𝒪 if (4.6) is vectorfield equivalent to (4.7). In [7] we showed that a system obtained by adding physical viscosity to (1.1) was consistent with TS dynamics. The general result is

THEOREM 4.1. Suppose M in (4.6) is either of rank 1 or of rank 2 uniformly in 𝒪. Then in either case the reduction in Theorem 2.2 can be extended to (4.6). If M has rank 2, this will be a center-manifold reduction of (4.6) and will reduce to (4.7) if the eigenvalues of d(M⁻¹V) at w₀* have the appropriate signs. If M has rank 1, then the dynamics of (4.6) are already one-dimensional and will coincide with (4.7) for an open set of rank 1 matrices.

A proof is sketched in [7]. By contrast, when the unsteady case, (1.3), is perturbed by addition of a viscosity type term, one does not anticipate a simple addition of one-dimensional dynamics to the bifurcation equations (2.16).
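The phase line of the reduced TS dynamics (4.7) is easy to exhibit numerically. The following sketch fixes an arbitrary positive value c = μ₂ − a (the numbers are illustrative, not from the paper) and integrates ẋ = x(x − c) forward: the rest point x = 0 is an attractor (linearization −c < 0) and x = c is a repellor, so an orbit started between them decays to 0, which is the connecting-orbit picture of Figure 4.1.

```python
# Phase-line sketch for the reduced TS dynamics (4.7): x' = x*(x - c),
# with c = mu2 - a > 0 an illustrative numerical choice.
def flow(x0, c=1.0, dt=1e-3, steps=20000):
    """Forward-Euler integration of x' = x*(x - c)."""
    x = x0
    for _ in range(steps):
        x += dt * x * (x - c)
    return x

# Rest points: x = 0 is an attractor (d/dx[x(x-c)] = -c < 0 there),
# x = c is a repellor.  An orbit started between them decays to 0.
print(flow(0.5))   # decays toward the attractor x = 0
print(flow(1.0))   # sits exactly on the repellor x = c
```

Reversing time (the US obstruction discussed below) would make x = c the attractor instead, which is the arrow-reversed picture the text rules out.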
We give some heuristic arguments before summarizing a theorem that describes a special case. First, we have noted that in the US case, where w₁ is the upstream direction in the loop near w₀, we cannot consistently choose w₁ to be upstream over the whole loop. Thus we do not expect to be able to reduce the dynamics to ẋ = −x(x − μ₂ + a) (which is Figure 4.1 with the arrows reversed) for any M which would be consistent with the parabolic nature of the perturbed system. In fact, could we do so, w₁ would be a source in the 1-dimensional dynamics and w₀ a sink; for a nonsingular M, then, w₁ would be a saddle in the 2-dimensional dynamics (before the center manifold reduction) and w₀ a sink. However, for equation (1.3), where one flux function is the identity mapping, it can be checked that the eigenvalues of dV are complex in the region and that it is impossible for M⁻¹V to be a saddle for w₁ on the entire loop; if M is singular, it is still impossible to do this smoothly. Finally, we quote a theorem from [6] which describes one possible situation in this case.

THEOREM 4.2. Consider

(4.8) u_t + f(u)ₓ = εMuₓₓ

at (w₀*, β₀) ∈ 𝓑. If (2.13) holds:

a = l₀ · d²f(r₀, r₀) ≠ 0,

and also a further condition, then (4.6) is equivalent, at w₀ and for M = I, to the codimension two Takens-Bogdanov vectorfield

(4.9) ẋ = y, ẏ = ax².

Furthermore, if w₀ ∈ 𝓗 and {w₁, w₀, β} satisfies the Hugoniot relation in 𝒪, and M is near I, a complete unfolding of (4.9), up to t-equivalence, is obtained:

(4.10) ẋ = y, ẏ = ax² − bxy + μ₁x + μ₂y.

The unfolding of the singularity is described in [8], though not in normal form; unfolding of the normal form under equivalence (not t-equivalence) is given in [4], and applied to (4.8) in [6]. Neither of the treatments emphasizes the distinguished parameter β.
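The rest points of the unfolded field (4.10) sit on the x-axis at x = 0 and x = −μ₁/a, and their linear types follow from the trace and determinant of the Jacobian there. A small sketch (parameter values are illustrative only) classifies them, and for μ₁ < 0 it recovers the focus/saddle pair that the heuristic argument above relies on:

```python
# Rest points and their linear types for the unfolded Takens-Bogdanov
# field (4.10): x' = y, y' = a x^2 - b x y + mu1 x + mu2 y.
# Parameter values below are illustrative, not taken from the paper.
def rest_point_types(a, b, mu1, mu2):
    types = []
    for x in (0.0, -mu1 / a):          # y = 0 at every rest point
        # Jacobian at (x, 0): [[0, 1], [2a x + mu1, -b x + mu2]]
        det = -(2 * a * x + mu1)
        tr = -b * x + mu2
        if det < 0:
            types.append("saddle")
        elif tr * tr - 4 * det < 0:
            types.append("focus")
        else:
            types.append("node")
    return types

print(rest_point_types(a=1.0, b=1.0, mu1=-1.0, mu2=0.2))
```

As μ₂ is varied at fixed μ₁ < 0, the trace at the non-saddle rest point changes sign, which is where the Hopf locus B_h mentioned below is crossed.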
In the unfolding, we may consider (μ₁(β), μ₂(β)) in (4.10) as a path through the unfolding space: the path begins and ends on the μ₂-axis, where the critical points coincide, corresponding to the ends of the loop, and μ₁ < 0 gives a path with w₀ ∈ 𝓗 and β ∈ (β₁(w₀), β₂(w₀)) such as we have been considering. The sketch in Figure 4.2 shows representative vectorfields in the three qualitatively different segments of the loop. Comparison of characteristic and shock speeds here shows that w₁ is the upstream state in the two intervals where connecting orbits exist (and the solution, in the similarity parameter χ of (4.2), need not approach the upstream state at −∞). This is further evidence of the impossibility of any admissibility condition that includes all states in this case. However, we note that the points where the curve (μ₁(β), μ₂(β)) meets B_h (the locus of Hopf bifurcation) and B_sc (homoclinic bifurcation) are in the interior of ℰ for M near I. Thus there are some unsteady shocks joining points in regions of different types that are admissible under the viscous profile criterion.

Figure 4.2. Vectorfield dynamics in the US case.

In this section, we have not given a complete discussion of the relation between the C⁺ criterion and viscous perturbation, but several points have emerged:
1. The question of "suitability" of viscous perturbations is related to stability of the associated linearized parabolic system, as observed by Majda and Pego [10]; since that stability question is studied by examining the forward light cone and comparing it to evolution of the parabolic system, we note that here, too, the choice of "suitable" viscosities is related to a "suitable" definition of C⁺(w). In particular, the perturbation must make the system "parabolic" and not "elliptic".
2. Travelling wave solutions of the parabolic system, because they are self-similar, reduce the study of the PDE to an ODE in the similarity variable.
Locally we are interested in rest points of the vectorfield; these are given, up to some notion of equivalence, by the development in Section 2: the steady state theory identifies the rest points (w₀ and w₁), and the C⁺ criterion specifies something about their type (repellors for upstream limits, attractors for downstream).
3. The two prototype systems we have studied, TS and US, exemplify two qualitatively different kinds of behavior: in the TS case, there is no vectorfield bifurcation along the loop, and a one-dimensional theory is adequate. In the US case, it is necessary to consider a fully two-dimensional vectorfield unfolding.
4. These observations give a framework in which it may be possible to perform rigorous stability analysis of TS and US shocks.

REFERENCES

[1] M.B. ALLEN III, G.A. BEHIE AND J.A. TRANGENSTEIN, Multiphase Flow in Porous Media, Springer, New York, 1988.
[2] R. COURANT AND K.O. FRIEDRICHS, Supersonic Flow and Shock Waves, Wiley Interscience, New York, 1948.
[3] M. GOLUBITSKY AND D.G. SCHAEFFER, Singularities and Groups in Bifurcation Theory, Springer, New York, 1985.
[4] J. GUCKENHEIMER AND P. HOLMES, Nonlinear Oscillations, Dynamical Systems, and Bifurcation of Vector Fields, Springer, New York, 1983.
[5] B.L. KEYFITZ, "A criterion for certain wave structures in systems that change type", to appear in Current Progress in Hyperbolic Systems, Contemp. Math., AMS, Providence, 1989.
[6] B.L. KEYFITZ, "Admissibility conditions for shocks in conservation laws that change type", to appear in Proceedings of GAMM International Conference on Problems Involving Change of Type, ed. K. Kirchgassner.
[7] B.L. KEYFITZ AND G.G. WARNECKE, "The existence of viscous profiles and admissibility for transonic shocks", preprint.
[8] N. KOPELL AND L.N. HOWARD, "Bifurcations and trajectories connecting critical points", Adv. Math., 18 (1975), pp. 306-358.
[9] A. MAJDA, "The stability of multi-dimensional shock fronts", AMS Memoirs, 275 (1983).
[10] A. MAJDA AND R.L. PEGO, "Stable viscosity matrices for systems of conservation laws", Jour. Diff. Eqns., 56 (1985), pp. 229-262.
[11] M. RENARDY, W.J. HRUSA AND J.A. NOHEL, Mathematical Problems in Viscoelasticity, Longman, New York, 1987.
[12] D.G. SCHAEFFER, "Instability in the evolution equations describing incompressible granular flow", Jour. Diff. Eqns., 66 (1987), pp. 19-50.
[13] H.B. STEWART AND B. WENDROFF, "Two-phase flow: models and methods", Jour. Comp. Phys., 56 (1984), pp. 363-409.

1. Introduction. The system

(1) u_t + (u² − v)ₓ = 0, v_t + (u³/3 − u)ₓ = 0

is an example of a strictly hyperbolic, genuinely nonlinear system of conservation laws. Usually the Riemann problem for such a system is well-posed: centered weak solutions consisting of combinations of simple waves and admissible jump discontinuities (shocks) exist and are unique for each set of values of the Riemann data [1-3]. The characteristic speeds λ₁ and λ₂ of system (1), however, do not conform to the usual pattern for strictly hyperbolic, genuinely nonlinear systems: although locally separated, they overlap globally (cf. Keyfitz [4] for a more general discussion of the significance of overlapping characteristic speeds). In particular, λ₁ = u − 1 and λ₂ = u + 1 are real and unequal at any particular point U = (u, v) of state space (as strict hyperbolicity requires), and λ₂ − λ₁ = 2 is even bounded away from zero globally, but λ₁ at one point U₁ may be equal to λ₂ at a different point U₂. The corresponding right eigenvectors r₁ = (1, u + 1) and r₂ = (1, u − 1) of the gradient matrix for (1) display genuine nonlinearity, since rᵢ · ∇λᵢ > 0 for i = 1, 2, but the two eigenvalues vary in the same direction: rᵢ · ∇λⱼ > 0 for i ≠ j, rather than the usual "opposite variation" rᵢ · ∇λⱼ < 0 familiar from (say) gas dynamics. As a result, classical global existence and uniqueness theorems [3, 5] no longer apply. In Section 2, we investigate the Riemann problem for system (1).
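The eigenstructure claimed for system (1) is easy to verify numerically. The following minimal check samples a few states u, confirms A r = λ r for the gradient matrix A = [[2u, −1], [u² − 1, 0]], and confirms the "same direction" variation rᵢ · ∇λⱼ = 1 > 0 for all i, j (since ∇λ₁ = ∇λ₂ = (1, 0)):

```python
# Check the characteristic structure of system (1):
# A = dF/dU = [[2u, -1], [u^2 - 1, 0]], eigenvalues u - 1 and u + 1,
# right eigenvectors r1 = (1, u+1) and r2 = (1, u-1).
def check(u):
    A = [[2*u, -1.0], [u*u - 1.0, 0.0]]
    lam = (u - 1.0, u + 1.0)
    r = ((1.0, u + 1.0), (1.0, u - 1.0))
    for L, (r0, r1) in zip(lam, r):
        # verify A r = lambda r componentwise
        assert abs(A[0][0]*r0 + A[0][1]*r1 - L*r0) < 1e-12
        assert abs(A[1][0]*r0 + A[1][1]*r1 - L*r1) < 1e-12
    # grad(lambda1) = grad(lambda2) = (1, 0), so r_i . grad(lambda_j) = 1
    # for every i, j: both eigenvalues increase along either eigenvector.
    return all(rv[0] > 0 for rv in r)

print(all(check(u) for u in (-2.0, 0.0, 0.7, 3.0)))
```

The point of the check is the last comment: both fields are genuinely nonlinear, yet there is no "opposite variation", which is what breaks the classical existence theory.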
We find that the rarefaction curves cover the u, v-plane smoothly, but that the Hugoniot loci are compact curves. As a result, for each fixed left-hand state U_L there is a large region of the plane which cannot be reached from U_L by any combination of rarefaction waves and admissible shocks, even if we were to admit as shocks jump discontinuities which violate the Lax entropy condition. In Section 3, we introduce a new type of solution to (1), called a singular shock, which might be used to connect the left state U_L to right states U_R in this inaccessible region. We discuss how singular shocks may appear as limits of solutions to the Dafermos-DiPerna viscosity approximation

(2) u_t + (u² − v)ₓ = εt uₓₓ, v_t + (u³/3 − u)ₓ = εt vₓₓ.

Solutions to (2) do not always remain uniformly bounded as ε → 0⁺, but may instead approach singular distributions similar to modified Dirac δ-functions. (Singular solutions of this type were first found by Korchinski [6] for a nonstrictly hyperbolic system.) We investigate the asymptotic behavior of these solutions for small ε and discuss how their limits may be regarded as shocks with internal structure. We derive a generalized form of the Rankine-Hugoniot condition for these singular shocks and introduce two additional admissibility conditions for them. These conditions allow us to prove our principal result (Theorem 2), which asserts that the Riemann problem for (1) becomes well-posed for all Riemann data when the category of solutions is enlarged to include admissible singular shocks.

†Department of Mathematics and Computer Science, Adelphi University, Garden City, New York 11530.
‡Department of Mathematics, University of Houston, Houston, Texas 77204-3476. Research of BLK supported by AFOSR, under Grant Number 86-0088. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.
Finally, in Section 4 we attempt to show that the admissible singular shocks which we have defined are actually limits of solutions to the Dafermos-DiPerna approximation (2). We present some analytic results and some numerical calculations as supporting evidence for this conjecture, and we describe what would be needed to convert the conjecture into a theorem.

2. The classical solution and its limitations. Since system (1) is strictly hyperbolic and genuinely nonlinear, the Riemann problem

(3) U(x, 0) = U_L for x < 0, U_R for x > 0,

for (1) has a classical solution when U_R is sufficiently close to U_L. To describe this classical solution, we rewrite (1) as

(4) U_t + Fₓ ≡ U_t + AUₓ = 0

with F = (u² − v, u³/3 − u)ᵀ and A = ∂F/∂U = [ 2u, −1 ; u² − 1, 0 ]. The eigenvalues of A are the characteristic speeds λ₁ = u − 1 and λ₂ = u + 1, with corresponding right eigenvectors r₁ = (1, u + 1) and r₂ = (1, u − 1). The rarefaction curves Rᵢ are the integral curves of the rᵢ, namely

R₁: v = u²/2 + u + C₁, R₂: v = u²/2 − u + C₂.

The Rankine-Hugoniot condition s[U] = [F] defines the Hugoniot locus H(U₀), the set of U-points which can be connected across a jump discontinuity to U₀, as

[v] = [u] (u₀ + [u]/2 ± √(1 − [u]²/12)),

where [u] = u − u₀, [v] = v − v₀; the Rankine-Hugoniot condition also determines the propagation speed s of the discontinuity as

s = u₀ + [u]/2 ∓ √(1 − [u]²/12).

We note that the Hugoniot locus is restricted to the strip |u − u₀| ≤ √12 and consists of four branches in the neighborhood of (u₀, v₀) which join to form a figure eight (see Figure 2.1). In particular, the locus is compact, and its compactness places a finite upper bound on the strength of any discontinuity which can occur as part of a weak solution of (4).

... which satisfies the same bound. Once we knew that solutions to (10), (11) existed, we could apply the asymptotic analysis described in Section 3 to obtain (20). Thus an a priori estimate on the maximum norm of U_ε for each fixed ε would be sufficient to establish the conjecture.
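The Hugoniot branches and shock speeds for system (1) can be checked directly against the Rankine-Hugoniot condition s[U] = [F]. The sketch below uses the explicit branch formulas rederived from s[U] = [F] (paired signs: speed with ∓, jump [v] with ±), defined only on the strip |u − u₀| ≤ √12:

```python
# Numerical check of s[U] = [F] for system (1) along the Hugoniot
# branches through U0 = (u0, v0):
#   s   = u0 + [u]/2 -/+ sqrt(1 - [u]^2/12),
#   [v] = [u] * (u0 + [u]/2 +/- sqrt(1 - [u]^2/12)),
# valid only for |[u]| <= sqrt(12).
import math

def F(u, v):
    return (u*u - v, u**3 / 3.0 - u)

def rh_residual(u0, v0, du, sign):
    """Max residual of both Rankine-Hugoniot relations for one branch."""
    root = math.sqrt(1.0 - du*du / 12.0)
    s = u0 + du/2.0 - sign*root
    u, v = u0 + du, v0 + du*(u0 + du/2.0 + sign*root)
    dF = tuple(f1 - f0 for f1, f0 in zip(F(u, v), F(u0, v0)))
    return max(abs(s*du - dF[0]), abs(s*(v - v0) - dF[1]))

err = max(rh_residual(0.3, -1.0, du, sg)
          for du in (-3.0, -1.0, 0.5, 2.0) for sg in (1, -1))
print(err < 1e-12)
```

The square root degenerates as |[u]| → √12, which is exactly the compactness of the locus: no branch extends beyond the strip, so no classical shock can exceed that strength.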
While we do not yet have a maximum-norm estimate, it is suggestive in this regard to note that our Theorem 1 does provide an L¹ estimate, which is even uniform in ε. The second bit of supporting evidence arises when we look at the actual maxima (and minima) of solutions of (21). The Riemann invariants π = v − u²/2 + u and ρ = v − u²/2 − u corresponding to these solutions satisfy the equations

ε π̈ = [a(u − 1) − ξ] π̇ − ε u̇², ε ρ̈ = [a(u + 1) − ξ] ρ̇ − ε u̇².

Hence, for nonconstant solutions of (21), π̇ = 0 implies π̈ < 0 and ρ̇ = 0 implies ρ̈ < 0. Thus each Riemann invariant has no (relative) minimum and at most one maximum on any solution trajectory. From this it can be shown (see [11] for details) that u has at most one maximum point ξ = ξ₁ and one minimum point ξ = ξ₂, while v has only a maximum at ξ = ξ₃, and that ξ₁ < s < ξ₃ < ξ₂. Thus the trajectory of a solution to the Dafermos-DiPerna viscosity approximation in the u, v-plane should look like Figure 4.1. Observe that Figure 4.1 is related to the asymptotic picture of Figure 3.1 in exactly the same manner as (20) is related to (12), namely by addition of the Heaviside function W(ξ).

[Figure 4.1: solution trajectory in the u, v-plane]

As our third piece of evidence, we present the results of a numerical solution of a two-point boundary-value problem similar to (21).

[Figure 4.2: u vs ξ for ε = .3 (dash), .2 (solid), .15 (dot)]
[Figure 4.3: v vs ξ for ε = .3 (dash), .2 (solid), .15 (dot)]
[Figure 4.4: u vs v for ε = .3 (dash), .2 (solid), .15 (dot)]

We chose U_L = (0, 0) and U_R = (−4.5, 10.125), with the Dafermos parameter a = 1 and boundaries symmetrically situated (at ξ = −4 and ξ = −0.5) with respect to the theoretical singular shock speed s = −2.25.
With a grid size Δξ = .0025, we approximated the derivatives of u and v by centered differences. We solved the resulting nonlinear system of difference equations by an iteration procedure. At each step we determined the solution of the linear system which results when A(U) is computed from the U found at the previous step. Approximately 80 iterations were required for convergence, and the results did not change significantly when the grid size was halved, nor when the length of the ξ-interval was increased or decreased by 50%. Computations were carried out for three values of ε, namely ε = .3, ε = .2 and ε = .15, and the numerical solutions are compared in the accompanying figures. Figure 4.2 shows the first component u(ξ), Figure 4.3 shows the second component v(ξ), and Figure 4.4 shows the solution trajectory in the u, v-plane. Note the degree of resemblance between the computed Figure 4.4 and the theoretically derived Figure 4.1. Note further that the upward and downward "bumps" in u (Figure 4.2) do seem to roughly double in height and reduce fourfold in width when ε is halved from .3 to .15, while the height of the single bump in v (Figure 4.3) does seem to more than double (though not quite quadruple). This at least approximately confirms the 1/ε and 1/ε² scalings of (20). Finally, the theoretical value of the singular shock strength parameter C₂ is 3.09375, while the value computed from the numerical solution for ε = .2, using the trapezoidal rule for the second integral in (19), was C₂ ≈ 3.11. On all these points, there seems to be sufficient agreement between numerical and theoretical solutions to support the conjecture, at least for the particular values of U_L and U_R ∈ Q_s(U_L) used in the computation. Computations involving other values of the Riemann data, including cases where U_R is in Q′(U_L) or Q″(U_L), are consistent with this specimen result. Acknowledgement.
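The quoted values s = −2.25 and C₂ = 3.09375 for U_L = (0, 0), U_R = (−4.5, 10.125) can be reproduced by elementary arithmetic. The reading used below is an assumption consistent with the text (Section 3's derivation is not reproduced in this excerpt): the first equation of (1) holds exactly across the singular shock and fixes s, while C₂ measures the deficit in the second equation.

```python
# Reproduce the quoted numbers for U_L = (0, 0), U_R = (-4.5, 10.125).
# Assumed reading: s satisfies s[u] = [u^2 - v] exactly (first equation
# of (1)); c2 is the deficit left over in the second equation.
uL, vL = 0.0, 0.0
uR, vR = -4.5, 10.125

s = ((uR**2 - vR) - (uL**2 - vL)) / (uR - uL)        # s[u] = [u^2 - v]
c2 = s*(vR - vL) - ((uR**3/3 - uR) - (uL**3/3 - uL))  # s[v] - [u^3/3 - u]

print(s, c2)
```

With these data the arithmetic gives s = −2.25 and C₂ = 3.09375, matching the theoretical values quoted above.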
Much of the research described in this article was carried out while the authors were visitors at the Institute for Mathematics and its Applications at the University of Minnesota. It was influenced generally by the stimulating scientific atmosphere at the Institute, and in particular by fruitful conversations with Mitchell Luskin, Bruce Pitman, Bradley Plohr and Michael Shearer.

REFERENCES

[1] LAX, P.D., Hyperbolic systems of conservation laws II, Comm. Pure Appl. Math., 10 (1957), pp. 537-566.
[2] SMOLLER, J.A., AND JOHNSON, J.L., Global solutions for an extended class of hyperbolic systems of conservation laws, Arch. Rat. Mech. Anal., 32 (1969), pp. 169-189.
[3] KEYFITZ, B.L., AND KRANZER, H.C., Existence and uniqueness of entropy solutions to the Riemann problem for hyperbolic systems of two nonlinear conservation laws, Jour. Diff. Eqns., 27 (1978), pp. 444-476.
[4] KEYFITZ, B.L., Some elementary connections among nonstrictly hyperbolic conservation laws, Contemporary Mathematics, 60 (1987), pp. 67-77.
[5] BOROVIKOV, V.A., On the decomposition of a discontinuity for a system of two quasilinear equations, Trans. Moscow Math. Soc., 27 (1972), pp. 53-94.
[6] KORCHINSKI, D., Solution of the Riemann problem for a 2 × 2 system of conservation laws possessing no classical weak solution, Thesis, Adelphi University, 1977.
[7] DAFERMOS, C.M., Solution of the Riemann problem for a class of hyperbolic systems of conservation laws by the viscosity method, Arch. Rat. Mech. Anal., 52 (1973), pp. 1-9.
[8] DAFERMOS, C.M., AND DIPERNA, R.J., The Riemann problem for certain classes of hyperbolic systems of conservation laws, Jour. Diff. Eqns., 20 (1976), pp. 90-114.
[9] KEYFITZ, B.L., AND KRANZER, H.C., A viscosity approximation to a system of conservation laws with no classical Riemann solution, Proceedings of International Conference on Hyperbolic Problems, Bordeaux, 1988.
[10] SHEARER, M., Riemann problems for systems of nonstrictly hyperbolic conservation laws, This volume.
[11] KEYFITZ, B.L., AND KRANZER, H.C., A system of conservation laws with no classical Riemann solution, preprint, University of Houston (1989).

PHILIPPE LE FLOCH†

Abstract. We prove a result of existence and uniqueness of entropy weak solutions for two nonstrictly hyperbolic systems, both a nonconservative system of two equations

∂_t u + ∂ₓ f(u) = 0, ∂_t w + a(u) ∂ₓ w = 0,

and a conservative system of two equations

∂_t u + ∂ₓ f(u) = 0, ∂_t v + ∂ₓ (a(u)v) = 0,

where f : R → R is a given strictly convex function and a = f′. We use the Volpert product ([19], see also Dal Maso - Le Floch - Murat [1]) and find entropy weak solutions u and w which have bounded variation, while the solutions v are Borel measures. The equations for w and v can be viewed as linear hyperbolic equations with discontinuous coefficients.

1. Introduction. The theory of nonlinear hyperbolic systems usually assumes the systems to be strictly hyperbolic with genuinely nonlinear or linearly degenerate characteristic fields and to be in conservation form. Moreover, general results of existence of entropy weak solutions to these systems are known only for initial data with small total variation (see Glimm [6] and Lax [11]). But most of the hyperbolic systems used by physicists do not have genuinely nonlinear or linearly degenerate fields, are not strictly hyperbolic, and sometimes are not even written in conservation form. This is for instance the case of the systems used in the modeling of great deformations of elastoplastic materials (Trangenstein - Colella [18]) and the modeling of two-phase flows as mixtures of liquid and vapor (Stewart - Wendroff [17]). It is recognized that most of the physical systems do not fit into the standard theory of conservation laws.
The aim of the present paper is precisely to consider two examples of nonstrictly hyperbolic nonlinear systems, both a conservative and a nonconservative one, for which new concepts of weak solutions must be introduced. Namely, for these systems there is no existence and no uniqueness of (entropy weak) solutions in the usual context, say L∞ solutions in the sense of distributions. But it is possible to extend the notion of weak solution and derive a formulation of a well-posed Cauchy problem for these systems. The main ingredient we shall use in this paper is the Volpert product [19] for functions of bounded variation (BV). This work also follows the general theory of weak solutions to nonconservative nonlinear hyperbolic systems by Dal Maso - Le Floch - Murat [1,2] and Le Floch ([12] - [15]).

*This research was supported in part by the Institute for Mathematics and its Applications with funds provided by the National Science Foundation.
†Centre de Mathematiques Appliquees, URA D0756, Ecole Polytechnique, 91128 Palaiseau Cedex (FRANCE). E-mail address: UMAP067 at FRORSI2.BITNET.

We consider the Cauchy problem for both a nonconservative system of two equations

(1.1) ∂_t u + ∂ₓ f(u) = 0, ∂_t w + a(u) ∂ₓ w = 0,

and a system of two conservation laws

(1.2) ∂_t u + ∂ₓ f(u) = 0, ∂_t v + ∂ₓ (a(u)v) = 0,

where f : R → R is a given smooth convex function and a = df/du. These systems have only one characteristic speed, a(u), so that they are not strictly hyperbolic. It is easy to show that the Riemann problem for the conservative system (1.2) does not possess a weak L∞ solution, except for some particular initial data, even if the initial data is assumed to be small (in L∞ and BV norm). This fact contrasts with the standard results of existence of weak solutions to strictly hyperbolic systems (Glimm [6] and Lax [11]). On the other hand, we know that the usual notion of weak L∞ solution in the sense of distributions does not make sense for the nonconservative system (1.1).
In this paper, we use the Volpert product (Volpert [19] and Le Floch [12]), which yields a notion of entropy weak solutions to nonconservative systems in the frame of functions of bounded variation. We prove that the Cauchy problem for the nonconservative system (1.1) is well-posed in the space BV, provided that the initial data for u is a BV function with only nonincreasing jumps (this means that u(0, ·) is entropy) and the initial data for w is a Lipschitz continuous function. The solutions w we find are thus in BV and generally speaking are indeed discontinuous. A corresponding existence and uniqueness result of entropy weak solutions to the conservative system (1.2) follows by derivation, with respect to x, of the equation satisfied by the function w (set v = ∂ₓw). We emphasize that, with our definition below, the solutions v in system (1.2) are not functions but Borel measures on R⁺ × R. The initial data for v is only assumed to be in L∞(R), so that our results apply in particular to the Riemann problem for the conservative system (1.2). We remark that, in the systems (1.1) and (1.2), the equation for u is uncoupled from the second one and can be solved first, so that the equations for w and v can be viewed as linear hyperbolic equations with discontinuous coefficients. Such equations have been recently investigated by Di Perna - Lions [4], but their results do not apply to the above equations. We mention also that the ideas of this paper could also be applied to other systems which do not possess L∞ solutions (Korchinski [8], Kranzer [9]). See also Keyfitz [7]. The plan of the paper is as follows. In Section 2, we recall the definition of weak solution to systems in nonconservative form. Section 3 contains the existence and uniqueness result for the nonconservative system. In Section 4, we deduce a result for the conservative system and we solve the Riemann problem.

2. Nonconservative products in BV space.
In this section, we recall the definition of weak BV solutions to nonlinear hyperbolic systems in nonconservative form, which is based on a notion of averaged superposition due to Volpert [19] and investigated in Le Floch ([12] - [14]). Let Ω be an open subset of R^m and p ≥ 1 be given. The space BV(Ω, R^p) of functions of bounded variation consists of all integrable functions u : Ω → R^p each of whose first order partial derivatives ∂u/∂y_i (i = 1, 2, ..., m) is represented by a finite Borel measure. The total variation TV(u) is by definition the sum of the total masses of these Borel measures. We recall that results of regularity concerning BV functions are proved by Volpert [19]. For an element u in BV(Ω, R^p), it occurs that, with the exception of a set with zero (m − 1)-dimensional Hausdorff measure, each point y of Ω is regular, i.e. either a point of approximate continuity (Lebesgue point) or a point of approximate jump. At a point y₀ of approximate jump, there exist a unit normal ν(y₀) ∈ R^m, a left value u₋(y₀) and a right value u₊(y₀). Moreover, these jump points form an at most countable set of curves. We refer to Volpert [19] for the precise definitions and results. An interesting concept proposed in [19] is the notion of averaged superposition.

DEFINITION 2.1. Let g be in C¹(R^p, R^p) and u be in L∞(Ω, R^p) ∩ BV(Ω, R^p). The averaged superposition of the function u by the function g is defined by

(2.1) ĝ(u)(y) = g(u(y)), if y is a Lebesgue point of u; ĝ(u)(y) = ∫₀¹ g((1 − α)u₋(y) + αu₊(y)) dα, if y is a jump point of u,

for H_{m−1}-almost every y in Ω (H_{m−1} denotes the (m − 1)-dimensional Hausdorff measure). We emphasize that the basic definition of BV functions considers functions which are defined almost everywhere in the sense of the Lebesgue measure, while Definition 2.1 treats functions which are defined a.e. with respect to the Hausdorff measure H_{m−1}. Using the notion of averaged superposition, Volpert proves:

THEOREM 2.1.
(Volpert [19]) Let u and v be in L∞(Ω, R^p) ∩ BV(Ω, R^p) and g be in C¹(R^p, R^p). Then, for each i = 1, 2, ..., m, the function ĝ(u) given by Definition 2.1 is measurable and integrable with respect to the Borel measure ∂v/∂y_i, so that the nonconservative product

ĝ(u) ∂v/∂y_i

makes sense as a finite Borel measure. This result is very useful even in the context of conservation laws (Di Perna [3], Di Perna - Majda [5]). It leads also to a definition of weak solution to systems in nonconservative form, i.e. hyperbolic systems

(2.2) ∂_t u + A(u) ∂ₓ u = 0, u(t, x) ∈ R^p,

where A is a given smooth function of u, not assumed to be a Jacobian matrix.

DEFINITION 2.2. A function u in L∞(R⁺ × R, R^p) ∩ BV(R⁺ × R, R^p) is said to be a weak BV-solution to the system in nonconservative form (2.2) if it satisfies

(2.3) ∂_t u + Â(u) ∂ₓ u = 0

as Borel measures on R⁺ × R. This notion of weak solutions has been investigated in [12]. In particular, if the system is strictly hyperbolic with genuinely nonlinear or linearly degenerate characteristic fields, the Riemann problem for (2.2) can be solved in the class of piecewise smooth solutions. In other words, one has a generalization of Lax's theorem for the Riemann problem. In fact, Dal Maso - Le Floch - Murat [1,2] (see also Le Floch [15]) have recently shown that in general, Definition 2.2 is not sufficient, and have proposed an extension of the Volpert product. However, we restrict ourselves in this paper to Definitions 2.1 and 2.2 since they will be sufficient for our purpose.

3. A nonstrictly hyperbolic and nonconservative 2 × 2 system. We consider the following nonconservative system of two equations:

(3.1a) ∂_t u + ∂ₓ f(u) = 0,
(3.1b) ∂_t w + a(u) ∂ₓ w = 0,

where f : R → R is a given strictly convex function and a = df/du. Equation (3.1a) can also be written in the form

(3.2) ∂_t u + a(u) ∂ₓ u = 0,

so that system (3.1) is of the general form

(3.3) ∂_t U + A(U) ∂ₓ U = 0, U = (u, w),

with here A(U) = [ a(u), 0 ; 0, a(u) ]. The matrix A(U) is diagonal, thus (3.3) is hyperbolic.
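A quick numerical check of the averaged superposition in Definition 2.1, in the case that matters for (3.1b), namely g = a with a = f′: the α-average of a across a jump coincides with the Rankine-Hugoniot speed [f]/[u], since ∫₀¹ f′((1 − α)u₋ + αu₊) dα = (f(u₊) − f(u₋))/(u₊ − u₋). The flux f(u) = u⁴/4 below is an arbitrary smooth example:

```python
# Volpert's averaged value at a jump: the alpha-average of
# g((1 - alpha) u_- + alpha u_+).  For g = a = f' this average equals
# the Rankine-Hugoniot speed [f]/[u], the fact used for eq. (3.1b).
def averaged(g, um, up, n=20001):
    # composite midpoint rule on [0, 1] (numerical sketch)
    h = 1.0 / n
    return h * sum(g((1 - (k + 0.5)*h)*um + (k + 0.5)*h*up)
                   for k in range(n))

def f(u):
    return u**4 / 4.0      # an arbitrary smooth example flux

def a(u):
    return u**3            # a = f'

um, up = -1.0, 2.0
print(abs(averaged(a, um, up) - (f(up) - f(um)) / (up - um)) < 1e-6)
```

For this jump both sides equal 5/4 exactly; the midpoint-rule error is of order n⁻².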
But it is not strictly hyperbolic, since a(u) is a double eigenvalue of A(U). Furthermore, A(U) admits a genuinely nonlinear characteristic field and a linearly degenerate one. To apply the general theory of systems in nonconservative form (see Section 2) to solve system (3.1), we set:

DEFINITION 3.1. A function (u, w) in L^∞(R_+, BV(R, R^2)) is said to be an entropy weak BV-solution to the nonconservative system (3.1) if
1) u is an entropy weak solution to the scalar conservation law (3.1a) in the usual sense of distributions (see Kruzkov [10]);
2) w is a weak solution to (3.1b) in the sense of the Volpert product (see Section 2 and [12]), i.e.

(3.5)  ∂_t w + â(u) ∂_x w = 0  as Borel measures.

With this definition, we are going to prove that, under suitable assumptions on the initial data, the Cauchy problem for system (3.1) has one and only one entropy weak BV-solution. Our first result treats the existence of solutions when the initial data for w is a Lipschitz continuous function.

THEOREM 3.1. (Existence) Let u_0 be in BV(R) and w_0 in W^{1,∞}(R). Then system (3.1) has at least one entropy weak solution (u, w) in L^∞(R_+, BV(R, R^2)) satisfying the initial condition

(3.6)  u(0, x) = u_0(x),  w(0, x) = w_0(x),  a.e. x ∈ R.

In Theorem 3.1, the initial condition at t = 0 is understood in the usual strong L^1 sense, i.e. (for instance for u)

lim_{t→0} (1/t) ∫_0^t ∫_R |u(s,x) − u_0(x)| dx ds = 0.

Proof of Theorem 3.1. Equation (3.1a) is uncoupled from equation (3.1b), so it can be solved first. Volpert has shown in [19] that it admits one (unique) entropy weak solution u in BV satisfying the initial data u_0 ∈ BV. Then, when u is known, we have to solve for w a linear hyperbolic equation with discontinuous coefficients. To find a weak solution to (3.1b), we will use an explicit formula due to Lax [11] for the entropy weak solution to (3.1a). We recall this formula briefly.
Let G : R_+ × R^2 → R be the function given by

(3.7)  G(t, x, y) = ∫_0^y u_0(z) dz + t g((x − y)/t),

where g is the Legendre transform of the (convex) function f. Denote by ξ : R_+ × R → R the function characterized by the property

(3.8)  G(t, x, ξ(t,x)) = min_{y ∈ R} G(t, x, y),  a.e. (t,x) ∈ R_+ × R.

Then, Lax shows that the solution u to (3.1a) is given explicitly by

(3.9)  u(t, x) = b((x − ξ(t,x))/t),  a.e. (t,x) ∈ R_+ × R,

where we have set b = a^{-1}. We note in passing that by (3.9) the function ξ belongs to L^∞(R_+, BV) as u does. Then, we claim that the formula

(3.10)  w(t, x) = w_0(ξ(t,x)),  a.e. (t,x) ∈ R_+ × R,

defines a weak BV-solution to equation (3.1b). Namely, it is proved by Dal Maso-Le Floch-Murat [1] that w_0(ξ) is in L^∞(R_+, BV) when ξ is in L^∞(R_+, BV) but w_0 is (only) in W^{1,∞}. On the other hand, to prove that the function w given by (3.10) is a weak solution, we can set R_+ × R = C ∪ J ∪ N, where C (respectively J) contains the points of approximate continuity (resp. jump) of u (and thus those of ξ) and N has zero one-dimensional Hausdorff measure. We know that (3.9) defines a weak solution to (3.1a), thus

(3.11)  ∂_t b((x − ξ(t,x))/t) + ∂_x f(b((x − ξ(t,x))/t)) = 0.

On the set of approximate continuity C, we deduce that

∂_t ξ + a(u) ∂_x ξ = 0,

thus

(3.12)  ∂_t w_0(ξ) + a(u) ∂_x w_0(ξ) = 0.

But ∂_t w + â(u) ∂_x w = ∂_t w + a(u) ∂_x w on C, so we find by (3.12)

(3.13)  ∂_t w + â(u) ∂_x w = 0  on C.

Then, we turn to the subset J where u and ξ are discontinuous. Let (t_0, x_0) be a point in J, and let u_-, u_+, σ denote the left value, the right value and the speed at the discontinuity point of u, respectively. We denote also by w_- and w_+ the left and right values of the function w at (t_0, x_0) (these two values may be equal). Then, the value of the Borel measure ∂_t w + â(u) ∂_x w at the point (t_0, x_0) is

(3.14)  −σ (w_+ − w_-) + (w_+ − w_-) ∫_0^1 a((1−α) u_- + α u_+) dα,

by an immediate application of the Volpert definition of averaged superposition (see Def. 2.1 and Volpert [19]). But we have

∫_0^1 a((1−α) u_- + α u_+) dα = (f(u_+) − f(u_-))/(u_+ − u_-) = σ,

since u is a weak solution to equation (3.1a). Hence (3.14) vanishes, whatever the jump of w is. We conclude that ∂_t w + â(u) ∂_x w = 0 on J.
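The Lax formula (3.7)-(3.9) used above can be illustrated numerically. The sketch below (an illustration, not part of the original text) takes the flux f(u) = u^2/2, so that a(u) = u, b = a^{-1} is the identity, the Legendre transform is g(s) = s^2/2, and Riemann data u_0 = 1 for z < 0, u_0 = -1 for z > 0 produce a stationary entropy shock; the grid minimization recovers u = ±1 on either side of x = 0:

```python
# Lax's formula (3.7)-(3.9) for f(u) = u**2/2: a(u) = u, b = identity,
# g(s) = s**2/2.  With Riemann data u0 = 1 (z < 0), u0 = -1 (z > 0),
# the primitive is U0(y) = int_0^y u0(z) dz = -|y|.
# The minimizer xi(t,x) of G(t,x,y) = U0(y) + (x - y)**2/(2*t) gives
# u(t,x) = (x - xi(t,x))/t, and (3.10) then gives w(t,x) = w0(xi(t,x)).

def xi(t, x, ys):
    """Return the grid point minimizing y -> G(t, x, y)."""
    G = [-abs(y) + (x - y) ** 2 / (2 * t) for y in ys]
    return ys[G.index(min(G))]

ys = [-5.0 + 0.001 * i for i in range(10001)]   # minimization grid on [-5, 5]
t = 1.0
u_left  = (-0.5 - xi(t, -0.5, ys)) / t          # x = -0.5, left of the shock
u_right = ( 0.5 - xi(t,  0.5, ys)) / t          # x = +0.5, right of the shock
print(u_left, u_right)                           # approximately 1.0 and -1.0
```

For x < 0 the minimum is attained at y = x - 1 (so u = 1); for x > 0 at y = x + 1 (so u = -1), up to the grid resolution.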
Finally, (3.11), (3.13) and (3.14) prove that (3.10) defines a weak BV solution to equation (3.1b). □

Remark 3.1. The regularity of the initial data w_0 (i.e. w_0 ∈ W^{1,∞}) is needed only to ensure that the function w_0(ξ) has bounded variation (see (3.10)).

Recall that, since the flux function f is assumed to be convex, the entropy condition for a solution u to the scalar conservation law (3.1a) is

(3.15)  u(x−, t) ≥ u(x+, t),  x ∈ R, a.e. t > 0.

We shall use a more precise statement obtained by Smoller [16]:

(3.16)  ∂u/∂x ≤ k/t  in the sense of distributions,

where k is a constant independent of x and t. We prove now a result of uniqueness of weak BV-solutions for system (3.1) in the case that the initial data u_0 to (3.1a) satisfies the entropy condition (3.15).

THEOREM 3.2. (Uniqueness) Let the initial data u_0 and w_0 be in BV(R), and let u_0 be an entropy initial data, i.e. there exists a constant K_0 such that

(3.17)  (d/dx) u_0 ≤ K_0  in the sense of distributions.

Then problem (3.1), (3.6) has at most one entropy weak solution (u, w) in L^∞(R_+, BV(R, R^2)).

Proof of Theorem 3.2. It is well known that the entropy weak solution u to the scalar conservation law (3.1a) is unique, so we focus on equation (3.1b) for the function w. The proof below uses standard arguments for BV functions, so we only sketch it. First of all, since u_0 satisfies (3.17) and the function f is strictly convex, the inequality (3.16) can be improved to

(3.18)  ∂u/∂x ≤ K  in the sense of distributions,

where K is a constant independent of x and t. On the other hand, if w is a solution to equation (3.1b), then for every smooth function η : R → R, the function (x,t) → η(w(x,t)) is also a weak solution to (3.1b), i.e.

(3.19)  ∂_t η(w) + â(u) ∂_x η(w) = 0.

The proof is based on the decomposition (3.11) for the function u; we omit it. By standard arguments of regularization, we also have (3.19) for the Kruzkov entropies, i.e.

(3.20)  ∂_t |w − k| + â(u) ∂_x |w − k| = 0

for every constant k. Then, let w_1 and w_2 be two weak solutions to equation (3.1b).
From equation (3.20), it is classical to deduce (Kruzkov [10], Volpert [19]) that

(3.21)  ∂_t |w_1 − w_2| + â(u) ∂_x |w_1 − w_2| = 0.

Finally, we rewrite the nonconservative equation (3.21) in a conservative form plus a measure source term

(3.22)  ∂_t |w_1 − w_2| + ∂_x (â(u) |w_1 − w_2|) = μ,

with

(3.23)  μ = |w_1 − w_2| ∂_x â(u).

On the set of approximate continuity of the function u, we have

(3.24)  μ ≤ K |w_1 − w_2|,

thanks to condition (3.18). On the other hand, at a point (x,t) of approximate jump of u, we have, with the same notation as previously,

μ({(x,t)}) = a(u_+) |w_1^+ − w_2^+| − a(u_-) |w_1^- − w_2^-|
             − (|w_1^+ − w_2^+| − |w_1^- − w_2^-|) ∫_0^1 a(u_- + α(u_+ − u_-)) dα
           = |w_1^+ − w_2^+| { a(u_+) − ∫_0^1 a(u_- + α(u_+ − u_-)) dα }
             + |w_1^- − w_2^-| { ∫_0^1 a(u_- + α(u_+ − u_-)) dα − a(u_-) }.

But, since ∫_0^1 a(u_- + α(u_+ − u_-)) dα = (f(u_+) − f(u_-))/(u_+ − u_-), we get

μ({(x,t)}) = |w_1^+ − w_2^+| { a(u_+) − (f(u_+) − f(u_-))/(u_+ − u_-) }
           + |w_1^- − w_2^-| { (f(u_+) − f(u_-))/(u_+ − u_-) − a(u_-) }.

Since u_- > u_+ and f is convex, we deduce that

(3.25)  μ({(x,t)}) ≤ 0.

Finally, (3.24)-(3.25) show that the measure μ given by (3.23) satisfies μ ≤ K |w_1 − w_2|, so that (3.22) implies the inequality

(3.26)  ∂_t |w_1 − w_2| + ∂_x (â(u) |w_1 − w_2|) ≤ K |w_1 − w_2|.

By integration in time and space and Gronwall's lemma, we conclude that

(3.27)  ∫_R |w_1 − w_2|(t,x) dx ≤ e^{Kt} ∫_R |w_1 − w_2|(0,x) dx,

which proves the uniqueness result concerning equation (3.1b). □

Remark 3.2. 1) The same arguments prove also that equation (3.19) yields the following inequality:

(3.28)  ∂_t η(w) + ∂_x (â(u) η(w)) ≤ K η(w).

2) If u_0 does not satisfy (3.17), then inequality (3.16) holds but inequality (3.18) does not. Inequality (3.16) would not be sufficient to conclude with the arguments of the proof of Theorem 3.2. In fact, in that case, problem (3.1), (3.6) admits an infinity of entropy weak solutions. For instance, the function

u(t,x) = u_L for x ≤ t a(u_L),  u(t,x) = a^{-1}(x/t) for t a(u_L) ≤ x ≤ t a(u_R),  u(t,x) = u_R for x ≥ t a(u_R),

is an entropy weak solution to the equation (3.1a) provided that u_L < u_R. And, for any w_* in R, the function

w(t,x) = w_L for x ≤ t a(u_L),  w(t,x) = w_* for t a(u_L) ≤ x ≤ t a(u_R),  w(t,x) = w_R for x ≥ t a(u_R),

is an entropy weak solution to (3.1b) corresponding to the initial data w_L for x < 0, w_R for x > 0. From Theorems 3.1 and 3.2, we conclude that:

COROLLARY 3.3.
(Existence and uniqueness) Let w_0 be in W^{1,∞}(R) and u_0 be in BV(R) and satisfy

(d/dx) u_0 ≤ K_0,

where K_0 is a constant independent of x. Then problem (3.1), (3.6) has one and only one entropy weak BV-solution in L^∞(R_+, BV(R, R^2)).

4. A nonstrictly hyperbolic system of two conservation laws. We are now interested in the following conservative system of two equations:

(4.1a)  ∂_t u + ∂_x f(u) = 0,
(4.1b)  ∂_t v + ∂_x (a(u) v) = 0,

where, as previously, f is strictly convex and a = df/du. This system can be written

(4.2)  ∂_t u + a(u) ∂_x u = 0,
       ∂_t v + v a'(u) ∂_x u + a(u) ∂_x v = 0.

So it is of the general form

(4.3)  ∂_t U + A(U) ∂_x U = 0,  U = (u, v),

by setting

(4.4)  A(U) = ( a(u)  0 ;  v a'(u)  a(u) ).

The matrix A(U) has a double real eigenvalue, a(u), and it does not admit a basis of eigenvectors (except at v = 0). Thus, system (4.1) is not strictly hyperbolic. Nevertheless, it makes up a well-posed system for small time when the initial data to (4.1) is smooth, as we can see by solving equation (4.1a) first, and then equation (4.1b). It is easy to prove that, given a discontinuous initial data, system (4.1) has in general non-existence or non-uniqueness of solutions in the class of BV solutions (for instance). Thus, a more general notion of weak solution is needed. Our aim here is to derive a theory of weak solutions (existence and uniqueness) for the conservative system (4.1) from the results of Section 3, where the nonconservative system (3.1) was studied. Our definition of weak solution below is motivated by the following remark: if (u, w) is a smooth solution to system (3.1), then the pair (u, v), where

(4.5)  v = ∂_x w,

is a solution to (4.1). We denote by M^1(R) the space of all bounded Borel measures on R.

DEFINITION 4.1. A pair (u, v), with

(4.6)  u ∈ L^∞(R_+, BV(R)),  v ∈ L^∞(R_+, M^1(R)),

is said to be an entropy weak solution to the conservative system (4.1) if the function (u, w), with w defined by

(4.7)  w(t,x) = v(t, ]−∞, x[),

is an entropy weak solution to the (nonconservative) system (3.1).

Using this definition, we can solve the Cauchy problem for system (4.1) with discontinuous initial data. The following result treats the existence of solutions.

THEOREM 4.1.
Let u_0 be in BV(R) and v_0 in L^1(R). Then system (4.1) has at least one entropy weak solution (u, v) in L^∞(R_+, BV(R)) × L^∞(R_+, M^1(R)) which satisfies the initial condition

(4.8)  u(0, ·) = u_0,  v(0, ·) = v_0.

In (4.8), the initial condition for v is understood in the following sense:

(4.9)  lim_{t→0} (1/t) ∫_0^t ∫_R |v(s, ]−∞, x[) − v_0(]−∞, x[)| dx ds = 0.

The proof of Theorem 4.1 is a corollary of Theorem 3.1; so we omit it.

Remark 4.1. By using the method of proof of Theorem 3.1, we could verify that a solution to (4.1b) is explicitly provided by the formula

(4.10)  v(t, ·) = ∂_x [ v_0(]−∞, ξ(t, ·)[) ]

in the sense of distributions. Moreover, as a corollary of Theorem 3.2, we get a result of uniqueness of weak solutions.

THEOREM 4.2. Let u_0 be in BV(R) and v_0 be in M^1(R). Suppose that the initial data u_0 is entropy, i.e. there exists a constant K_0 such that

(4.11)  (d/dx) u_0 ≤ K_0  in the sense of distributions.

Then, problem (4.1), (4.8) has at most one entropy weak solution (u, v) in L^∞(R_+, BV(R)) × L^∞(R_+, M^1(R)).

Finally, we get the following result of existence and uniqueness.

COROLLARY 4.3. Let u_0 be in BV(R) and v_0 in L^1(R). Suppose that u_0 satisfies (4.11). Then problem (4.1), (4.8) has one and only one entropy weak solution in L^∞(R_+, BV(R)) × L^∞(R_+, M^1(R)).

A simple but interesting Cauchy problem is the Riemann problem, for which we have

u_0(x) = u_L for x < 0, u_R for x > 0;  v_0(x) = v_L for x < 0, v_R for x > 0,

with u_L, u_R, v_L, v_R ∈ R. Our results above were stated for simplicity in the case of bounded Borel measures, but an extension to locally bounded Borel measures could be proved and thus would apply to the Riemann problem. We omit the details, and content ourselves here with its explicit solution. When u_L > u_R, u is a shock wave:

u(t,x) = u_L if x < σt,  u(t,x) = u_R if x > σt,  σ = (f(u_R) − f(u_L))/(u_R − u_L).

The measure v, solution to equation (4.1b), is given by

v = v_L (1 − H(x − σt)) + v_R H(x − σt) + t [σ(v_R − v_L) − (a(u_R) v_R − a(u_L) v_L)] δ_{x−σt},

where H(x − σt) and δ_{x−σt} are the Heaviside function and the Dirac mass at x = σt, respectively; the coefficient of the Dirac mass is dictated by the weak form of (4.1b), which forces the mass m(t) carried at x = σt to satisfy m'(t) = σ(v_R − v_L) − (a(u_R) v_R − a(u_L) v_L). The solution is unique in that case. When u_L < u_R, u is a rarefaction wave:

u(t,x) = u_L if x ≤ a(u_L)t,  u(t,x) = a^{-1}(x/t) if a(u_L)t ≤ x ≤ a(u_R)t,  u(t,x) = u_R if x ≥ a(u_R)t.
In that case, (4.1b) has an infinity of solutions. In particular, for every w_* in R, the measure v defined by

v = v_L (1 − H(x − t a(u_L))) + v_R H(x − t a(u_R)) + w_* (δ_{x − t a(u_L)} − δ_{x − t a(u_R)})

is a weak solution to the conservative equation (4.1b).

Acknowledgements. I have appreciated very much the support and the hospitality of the IMA during my visit at the University of Minnesota. I wish to especially thank Barbara Keyfitz for her kindness.

REFERENCES

[1] G. Dal Maso, Ph. Le Floch, F. Murat, Definition and weak stability of a nonconservative product, Internal Report, Centre de Mathematiques Appliquees, Ecole Polytechnique, Palaiseau (France) (to appear).
[2] G. Dal Maso, Ph. Le Floch, F. Murat, Definition et stabilite faible d'un produit non conservatif, Note C.R. Acad. Sc. Paris (to appear).
[3] R. DiPerna, Uniqueness of solutions of hyperbolic conservation laws, Ind. Univ. Math. J., 28 (1979), pp. 137-188.
[4] R. J. DiPerna, P. L. Lions, Ordinary differential equations, transport theory and Sobolev spaces, (to appear).
[5] R. DiPerna, A. Majda, The validity of nonlinear geometric optics for weak solutions of conservation laws, Comm. Math. Phys., 98 (1985), pp. 313-347.
[6] J. Glimm, Solutions in the large for nonlinear systems of equations, Comm. Pure Appl. Math., 18 (1965), pp. 697-715.
[7] B. Keyfitz, A viscosity approximation to a system of conservation laws with no classical Riemann solution, to appear in Proc. of Int. Conf. on Hyp. Problems, Bordeaux 1988.
[8] D. J. Korchinski, Solution of a Riemann problem for a 2 × 2 system of conservation laws possessing no classical weak solution, Ph.D. Thesis, Adelphi University (1977).
[9] H. Kranzer, (to appear).
[10] S. N. Kruzkov, First order quasilinear equations in several independent variables, Math. USSR Sb. 10 (1970), pp. 217-243.
[11] P. D.
Lax, Hyperbolic systems of conservation laws and the mathematical theory of shock waves, CBMS Monograph No. 11, SIAM (1973).
[12] Ph. Le Floch, Entropy weak solutions to nonlinear hyperbolic systems in nonconservative form, Comm. Part. Diff. Eq., 13 (6) (1988), pp. 669-727.
[13] Ph. Le Floch, Solutions faibles entropiques des systemes hyperboliques nonlineaires sous forme non conservative, Note C.R. Acad. Sc. Paris, t. 306, Serie I (1988), pp. 181-186.
[14] Ph. Le Floch, Entropy weak solutions to nonlinear hyperbolic systems in nonconservative form, in Proc. of the Second Int. Conf. on Nonlin. Hyp. Problems (Aachen, FRG) (1988), pp. 362-373.
[15] Ph. Le Floch, Shock waves for nonlinear hyperbolic systems in nonconservative form, IMA series, University of Minnesota, USA (to appear).
[16] J. Smoller, Shock waves and reaction-diffusion equations, Springer-Verlag, New York (1983).
[17] H. B. Stewart, B. Wendroff, Two-phase flow: models and methods, J. Comput. Phys., 56 (1984), pp. 363-409.
[18] J. Trangenstein, P. Colella, A higher-order Godunov method for modeling finite deformations in elastic-plastic solids.
[19] A. I. Volpert, The space BV and quasilinear equations, Math. USSR Sb. 2 (1967), pp. 225-267.

TAI-PING LIU*

Abstract. Overcompressive shock waves for nonstrictly hyperbolic conservation laws are stable in a sense different from that of the corresponding viscous traveling waves. We describe the difference and the reason for it.

Key words. Overcompressive shock, nonstrict hyperbolicity, nonlinear stability.

AMS(MOS) subject classifications. 35L65; 76L05

1. Introduction. Consider a system of conservation laws

(1.1)  u_t + f(u)_x = 0,  u ∈ R^n.

The system is hyperbolic if f'(u) has real eigenvalues λ_i(u), i = 1, 2, ..., n. It is strictly hyperbolic if the eigenvalues are distinct:

λ_1(u) < λ_2(u) < ... < λ_n(u)

for all u under consideration. For such a system, nonlinear waves (shock waves, rarefaction waves and contact discontinuities) are nonlinearly stable, Liu [2].
In recent years much interest has centered on nonstrictly hyperbolic systems where λ_i(u) − λ_{i+1}(u), 1 ≤ i < n, may change signs (see several other articles in this volume for such physical models). Two new types of shock waves, overcompressive and undercompressive (crossing) shock waves, arise for such systems. In this article we study the stability of overcompressive shock waves. The result is compared with the result of Liu-Xin [3] for the corresponding viscous conservation laws

(1.2)  u_t + f(u)_x = u_xx.

As in [3], we will illustrate the marked difference between (1.1) and (1.2) on the level of overcompressive shock waves by carrying out the analysis for the following simple models:

(1.1)'  u_t + (a u^2/2 + b v)_x = 0,
        v_t + (v^2/2)_x = 0,  (u, v) ∈ R^2, b > 0,

and its viscous counterpart

(1.2)'  u_t + (a u^2/2 + b v)_x = u_xx,
        v_t + (v^2/2)_x = v_xx.

Write (1.1)' in matrix form. The flux matrix is nondiagonalizable when its eigenvalues λ_1 = a u and λ_2 = v are equal. A shock wave (u_-, v_-; u_+, v_+) with speed σ satisfies the jump (Rankine-Hugoniot) conditions. For a classical shock wave three characteristic lines impinge on the shock and one leaves it, Lax [1]. A shock wave for two conservation laws is overcompressive if all characteristic lines impinge on the shock:

λ_i(u_+, v_+) < σ < λ_i(u_-, v_-),  i = 1, 2.

For hyperbolic conservation laws (1.1)', an overcompressive shock is a combination of two classical shock waves. In the next section we show that a perturbation of an overcompressive shock wave gives rise to two classical shock waves. The resulting waves and their locations are determined by the two conservation laws

(1.3)  ∫ u dx = const.,  ∫ v dx = const.

For viscous conservation laws (1.2)', (1.3) has different implications.

*Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York, NY 10012. Research supported in part by NSF grant DMS-847-03971, Army grant DAAL03-87-K-0063, and AFOSR-89-0203.
†Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York, NY 10012. Research supported in part by NSF grant DMS-88-06731.
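The overcompressivity condition above can be checked concretely for (1.1)'. The following sketch verifies, for a zero-speed shock, that the Rankine-Hugoniot relations hold and that both characteristic families impinge on the shock from both sides; the parameter values a = b = 1 and the states are illustrative choices, not taken from the text:

```python
# Rankine-Hugoniot and overcompressivity check for system (1.1)':
#   u_t + (a*u**2/2 + b*v)_x = 0,  v_t + (v**2/2)_x = 0,
# with characteristic speeds lam1 = a*u and lam2 = v.

a, b = 1.0, 1.0                           # illustrative parameters
v0 = 1.0                                  # v_minus = v0, v_plus = -v0
u_minus = 3.0
# choose u_plus < 0 so the first flux matches across the jump:
# a*u_plus**2/2 + b*(-v0) = a*u_minus**2/2 + b*v0
u_plus = -((a * u_minus**2 + 4 * b * v0) / a) ** 0.5

f1 = lambda u, v: a * u**2 / 2 + b * v    # first flux component
f2 = lambda u, v: v**2 / 2                # second flux component

sigma = 0.0                               # shock speed
# Rankine-Hugoniot: [flux] = sigma * [state] for each component
rh1 = f1(u_plus, -v0) - f1(u_minus, v0) - sigma * (u_plus - u_minus)
rh2 = f2(u_plus, -v0) - f2(u_minus, v0) - sigma * (-v0 - v0)

# overcompressive: lam_i(right) < sigma < lam_i(left) for i = 1, 2
overcompressive = (a * u_plus < sigma < a * u_minus) and (-v0 < sigma < v0)
print(rh1, rh2, overcompressive)          # rh1 and rh2 are ~0
```

All four characteristic lines run into x = 0 for these states, so the shock (u_-, v_-; u_+, v_+) is overcompressive.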
In Section 3 we recall briefly the result for (1.2)' in [3] and compare it to that of the previous section. Although we carry out our analysis for the simple models (1.1)', the phenomenon exhibited is universal for overcompressive shock waves. A plausible argument is presented in Section 4.

2. Inviscid Overcompressive Shock Waves. For simplicity, and without loss of generality, we only consider shock waves (u_-, v_-; u_+, v_+) of (1.1)' with zero speed. The jump condition becomes

a u_+^2/2 + b v_+ = a u_-^2/2 + b v_-,  v_+^2/2 = v_-^2/2.

There are two possibilities. When v_+ = v_- we adopt the usual entropy condition u_+ < u_- for u_t + (a u^2/2)_x = 0. Thus the above jump condition becomes

(2.1)  u_+ = −u_- < 0.

When v_+ ≠ v_- we impose the entropy condition v_+ < v_- for the second equation in (1.1)', v_t + (v^2/2)_x = 0, and obtain

(2.2)  −v_+ = v_- ≡ v_0 > 0,  a u_+^2 = a u_-^2 + 4 b v_0.

For each fixed u_- there are two values ±u_+, u_+ < 0, satisfying (2.2). When a u_- > v_0, then (u_-, v_-; u_+, v_+) is overcompressive and (u_-, v_-; −u_+, v_+) is a classical shock wave. (u_-, v_-; u_+, v_+) can be viewed either as the combination of the classical waves (u_-, v_-; −u_-, v_-) and (−u_-, v_-; u_+, v_+), or as the combination of the classical waves (u_-, v_-; −u_+, v_+) and (−u_+, v_+; u_+, v_+). (See Figure 2.1.)

Figure 2.1

Consider the overcompressive shock wave

(ū, v̄)(x, t) = (u_-, v_-) for x < 0,  (u_+, v_+) for x > 0.

Let (u, v)(x, t) be a solution of (1.1)' which is a perturbation of (ū, v̄)(x, t):

(u, v)(x, t) = (ū, v̄)(x, t) + (ũ, ṽ)(x, t),
(ũ, ṽ)(x, t) = 0 for |x| sufficiently large.

To construct the solution (u, v)(x, t), we first solve the second equation v_t + (v^2/2)_x = 0 with given initial data v(x, 0). With v(x, t) thus determined we can define u_1(x, t) so that (u_1(x, t), v(x, t)) consists of waves pertaining to the eigenvalue λ_2 = v. That is, for each rarefaction wave in v(x, t) we set u_1(x, t) so that (u_1(x, t), v(x, t)) is a rarefaction wave corresponding to λ_2; and the same for shock waves. This can be done explicitly for approximate solutions of v(x, t) by shock and rarefaction waves through the random choice method or other characteristic methods.
We set u(x, t) = u_1(x, t) + u_2(x, t), u_2(x, 0) = u(x, 0) − u_1(x, 0). Here u_2(x, t) consists of waves pertaining to λ_1 only. Thus u_2 solves u_t + (a u^2/2)_x = 0 almost always. Waves in u_2 are either those generated by the initial data u_2(x, 0) or those created through interaction of shock waves in (u_1, v). Since v solves the scalar equation, v(x, t) tends to a single shock wave (v_-, v_+) after finite time t = T_1. Since (u_1, v) consists of waves pertaining to λ_2, after time T_1 (u_1, v) is a single 2-shock. After time T_1 there is no interaction in (u_1, v)(x, t), and so u_2 solves the scalar equation u_t + (a u^2/2)_x = 0. Thus u_2 becomes a single 1-shock wave after finite time T_2. We thus conclude that after finite time the solution (u, v) consists of two shock waves connecting (u_-, v_-) on the left and (u_+, v_+) on the right. It is easy to see that the two waves are either (u_-, v_-; −u_-, v_-) followed by (−u_-, v_-; u_+, v_+), or (u_-, v_-; −u_+, v_+) followed by (−u_+, v_+; u_+, v_+).

To carry out the above analysis the initial perturbation (ũ, ṽ) needs only to be bounded. Since v satisfies a scalar equation with convex flux v^2/2, v(·, t) is of bounded total variation for any t > 0. The interactions of shock waves in (u_1, v) give rise to 1-waves in u_2, which is also of bounded variation. Thus u_2(·, t) is of bounded total variation for any t > 0.

The location of the two shock waves in the asymptotic state can be determined through the conservation laws (1.3). The 2-shock coincides with the shock (v_-, v_+) for v(x, t) and is located at x = x_0, determined by

v(x, t) = v_- for x < x_0,  v_+ for x > x_0,  t ≥ T_1,

x_0 = (v_- − v_+)^{-1} ∫_{-∞}^{∞} ṽ(y) dy.

Having determined the location of the 2-shock, the location of the 1-shock is given by the first conservation law in (1.3) as follows. When the 1-shock is located to the left of the 2-shock, x_1 < x_0, the asymptotic state is the 1-shock (u_-, v_-; −u_-, v_-) followed by the 2-shock (−u_-, v_-; u_+, v_+). The integral in x of the asymptotic state minus (ū, v̄) can easily be calculated to be 2(x_1 − x_0)u_- + x_0(u_- − u_+).
When x_1 > x_0 and the asymptotic state is the 2-shock (u_-, v_-; −u_+, v_+) followed by the 1-shock (−u_+, v_+; u_+, v_+), the integral of its difference with (ū, v̄) is −2(x_1 − x_0)u_+ + x_0(u_- − u_+). Since this integral equals the integral of ũ, we have, setting

p = ∫_{-∞}^{∞} ũ(x, 0) dx + x_0(u_+ − u_-):

Case 1. When p > 0, the asymptotic state is a 2-shock (u_-, v_-; −u_+, v_+) followed by a 1-shock (−u_+, v_+; u_+, v_+), located at x = x_0 and x = x_1, respectively; and x_1 is given by

−2 u_+ (x_1 − x_0) = ∫_{-∞}^{∞} ũ(x, 0) dx + x_0(u_+ − u_-).

Case 2. When p < 0, the asymptotic state is a 1-shock (u_-, v_-; −u_-, v_-) followed by a 2-shock (−u_-, v_-; u_+, v_+), located at x = x_1 and x = x_0, respectively; and x_1 is given by

2 u_- (x_1 − x_0) = ∫_{-∞}^{∞} ũ(x, 0) dx + x_0(u_+ − u_-).

3. Viscous Overcompressive Shock Waves. For viscous conservation laws (1.2)', shock waves are smooth traveling waves (u, v)(x, t) = (φ, ψ)(x − st). As before, for simplicity we study overcompressive shock waves (u_-, v_-; u_+, v_+) with speed s = 0. From (1.2)' we have

(3.1)  φ' = a φ^2/2 + b ψ + A,  ψ' = ψ^2/2 + B,
       A = −(a u_+^2/2 + b v_+) = −(a u_-^2/2 + b v_-),  B = −v_+^2/2 = −v_-^2/2.

For the overcompressive shock we have a u_+ < 0 < a u_- and v_+ < 0 < v_-, and so the critical point (u_-, v_-) (respectively (u_+, v_+)) is an unstable (stable) node for (3.1). Consequently there are infinitely many connecting orbits for (u_-, v_-; u_+, v_+). (Cf. Figure 3.1.)

Figure 3.1

There exists a unique connecting orbit for each classical shock wave (u_-, v_-; −u_-, v_-), (u_-, v_-; −u_+, v_+), (−u_+, v_+; u_+, v_+), (−u_-, v_-; u_+, v_+). The region between these orbits is filled with orbits for the overcompressive shock (u_-, v_-; u_+, v_+). When a traveling wave (φ_1, ψ)(x) is perturbed, the solution converges to another traveling wave (φ_2, ψ)(x + x_0) of Figure 3.1, properly translated. The new orbit φ_2 as well as the translation x_0 are determined by the conservation laws (1.3). For details see Liu-Xin [3].

4. Concluding Remarks. For two conservation laws an overcompressive shock wave absorbs all the characteristic curves around it. As a consequence a perturbation does not create diffusion waves propagating away from the shock wave, cf. Liu [4]. On the other hand, there are two time-invariants, (1.3).
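The use of the two invariants (1.3) to locate the asymptotic shocks (Section 2) can be illustrated numerically. In the sketch below the end states and the perturbation masses are illustrative choices, not from the text; the computation runs the Case 1 bookkeeping and checks the u-conservation identity:

```python
# Locating the asymptotic shocks of a perturbed zero-speed overcompressive
# shock of (1.1)' from the invariants (1.3).  The 2-shock sits at
#   x0 = (v_minus - v_plus)**(-1) * (total mass of the v-perturbation),
# and, with p = (total mass of the u-perturbation) + x0*(u_plus - u_minus),
# the 1-shock location x1 solves
#   -2*u_plus*(x1 - x0) = p  if p > 0  (1-shock to the right of the 2-shock),
#    2*u_minus*(x1 - x0) = p if p < 0  (1-shock to the left).

u_minus, u_plus = 3.0, -(13.0 ** 0.5)    # an overcompressive pair (a = b = 1)
v_minus, v_plus = 1.0, -1.0

mass_v = 0.4                             # integral of the v-perturbation
mass_u = 2.0                             # integral of the u-perturbation

x0 = mass_v / (v_minus - v_plus)
p = mass_u + x0 * (u_plus - u_minus)
x1 = x0 - p / (2.0 * u_plus) if p > 0 else x0 + p / (2.0 * u_minus)

# u-conservation check: the integral of (asymptotic state - unperturbed
# shock) must reproduce mass_u in either case.
recovered = (-2.0 * (x1 - x0) * u_plus if p > 0
             else 2.0 * (x1 - x0) * u_minus) + x0 * (u_minus - u_plus)
print(x0, x1, p, recovered)              # recovered equals mass_u
```

For these values p > 0, so the asymptotic state is the 2-shock at x_0 followed by the 1-shock at x_1 > x_0.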
Thus a perturbation cannot just create a translation of the shock wave. As we have seen in the previous two sections, for the inviscid shock it splits into two shocks, and for the viscous shock it jumps to another viscous shock. Thus the two limits, of viscosity tending to zero and of time going to infinity, do not commute. Although this phenomenon holds for diffusion waves of strictly hyperbolic systems, that it holds here for strongly nonlinear shock waves is significant. Although we carry out our analysis for the simple models (1.1)' and (1.2)', the above phenomena would hold for general systems. For a general system of n conservation laws, n > 2, an overcompressive shock wave with speed s, for some i, 1 ≤ i < n, would have the following properties: a perturbation of the inviscid shock wave would create N-waves for each genuinely nonlinear field λ_j and a linear traveling wave for each linearly degenerate field λ_j, j ≠ i, j ≠ i+1, together with the splitting of the overcompressive shock into classical i- and (i+1)-shocks. For the corresponding viscous shock wave, a perturbation would give rise to diffusion waves pertaining to the fields λ_j, j ≠ i, j ≠ i+1, and the jumping of the viscous profile to another profile. The equations of magnetohydrodynamics are such an important physical model.

Acknowledgment. The authors would like to thank the IMA for the opportunity to visit the Institute during the Spring of 1989. Part of the research done in the present paper was initiated during the visit to the Institute.

REFERENCES

[1] P. D. Lax, Hyperbolic systems of conservation laws, II, Comm. Pure Appl. Math. 10 (1957), pp. 537-566.
[2] T.-P. Liu, Admissible solutions of hyperbolic conservation laws, Memoirs Amer. Math. Soc., No. 240 (1981).
[3] T.-P. Liu and Z. Xin, Stability of viscous shock waves associated with a nonstrictly hyperbolic conservation law, (to appear).
[4] T.-P. Liu, Nonlinear stability of shock waves for viscous conservation laws, Memoirs Amer. Math. Soc., No. 328 (1985).
QUADRATIC DYNAMICAL SYSTEMS DESCRIBING SHEAR FLOW OF NON-NEWTONIAN FLUIDS*

D. S. MALKUS†1, J. A. NOHEL†2, AND B. J. PLOHR†3

Abstract. Phase-plane techniques are used to analyze a quadratic system of ordinary differential equations that approximates a single relaxation-time system of partial differential equations used to model transient behavior of highly elastic non-Newtonian liquids in shear flow through slit dies. The latter one-dimensional model is derived from three-dimensional balance laws coupled with differential constitutive relations well known to rheologists. The resulting initial-boundary-value problem is globally well-posed and possesses the key feature: the steady shear stress is a non-monotone function of the strain rate. Results of the global analysis of the quadratic system of ode's lead to the same qualitative features as those obtained recently by numerical simulation of the governing pde's for realistic data for polymer melts used in rheological experiments. The analytical results provide an explanation of the experimentally observed phenomenon called spurt; they also predict new phenomena discovered in the numerical simulation; these phenomena should also be observable in experiments.

1. Introduction. The purpose of this paper is to analyze novel phenomena in dynamic shearing flows of non-Newtonian fluids that are important in polymer processing [17]. One striking phenomenon, called "spurt," was apparently first observed by Vinogradov et al. [19] in experiments concerning quasistatic flow of monodisperse polyisoprenes through capillaries or, equivalently, through slit dies. They found that the volumetric flow rate increased dramatically at a critical stress that was independent of molecular weight. Until recently, spurt has been associated with the failure of the flowing polymer to adhere to the wall [5]. The focus of our current research is to offer an alternate explanation of spurt and related phenomena.
Understanding these phenomena has proved to be of significant physical, mathematical, and computational interest. In our recent work [12], we found that a satisfactory explanation and modeling of the spurt phenomenon requires studying the full dynamics of the equations of motion and constitutive equations. The common and key feature of constitutive models that exhibit spurt and related phenomena is a non-monotonic relation between the steady shear stress and strain rate. This allows jumps in the steady strain rate to form when the driving pressure gradient exceeds a critical value; such jumps correspond to the sudden increase in volumetric flow rate observed in the experiments of Vinogradov et al. The governing systems used to model such one-dimensional flows are analyzed in [12] by numerical techniques and simulation, and in the present work by analytical methods. The systems derive from fully three-dimensional differential constitutive relations with m relaxation times (based on work of Johnson and Segalman [8] and Oldroyd [16]). They are evolutionary, globally well posed in a sense described below, and they possess discontinuous steady states of the type mentioned above that lead to an explanation of spurt. The governing systems for shear flows through slit dies are formulated from balance laws in Sec. 2.

*Supported by the U. S. Army Research Office under Grant DAAL03-87-K-0036, the National Science Foundation under Grants DMS-8712058 and DMS-8620303, and the Air Force Office of Scientific Research under Grants AFOSR-87-0191 and AFOSR-85-0141.
†Center for the Mathematical Sciences, University of Wisconsin-Madison, Madison, WI 53705.
1Also Department of Engineering Mechanics. 2Also Department of Mathematics. 3Also Computer Sciences Department.
Specifically, we model these flows by decomposing the total shear stress into a polymer contribution, evolving in accordance with a differential constitutive relation with a single relaxation time, and a Newtonian viscosity contribution (see system (JSO) in Sec. 2). The flows can also be modelled by a system based on a differential constitutive law with two widely spaced relaxation times (see system (JSO_2) in [13]) but no Newtonian viscosity contribution. Numerical simulation [9, 12] of transient flows at high Weissenberg (Deborah) number and very low Reynolds number using the model (JSO) exhibited spurt, shape memory, and hysteresis; furthermore, it predicted other effects, such as latency, normal stress oscillations, and molecular weight dependence of hysteresis, that should be analysed further and tested in rheological experiment.

In earlier work, Hunter and Slemrod [7] used techniques of conservation laws to study the qualitative behavior of discontinuous steady states in a simple one-dimensional viscoelastic model of rate type with viscous damping. They predicted shape memory and hysteresis effects related to spurt. A salient feature of their model is linear instability and loss of evolutionarity in a certain region of state space.

The objective of the present paper is to develop analytical techniques, the results of which verify these rather dramatic implications of numerical simulation. Based on scaling introduced in [12], appropriate for the highly elastic and very viscous polyisoprenes used in the spurt experiment, we are led to study the following pair of quadratic autonomous ordinary differential equations that approximates the governing system (JSO) in the relevant range of physical parameters for each fixed position in the channel:

(1.1)  σ̇ = (Z + 1)(T̄ − σ)/ε − σ,
       Ż = −σ(T̄ − σ)/ε − Z.
Here the dot denotes the derivative d/dt, T̄ is a parameter that depends on the driving pressure gradient as well as on the position x in the channel, and ε > 0 is a ratio of viscosities. System (1.1) is obtained by setting α = 0 in the momentum equation in system (JSO); this approximation is reasonable because α is at least several orders of magnitude smaller than ε. We show that steady states of system (JSO), some of which are discontinuous for non-monotone constitutive relations, correspond to critical points of the quadratic system. We deduce the local character of the critical points, and we prove that system (1.1) has no periodic orbits or closed separatrix cycles. Moreover, this system is endowed with a natural Lyapunov-like function, with the aid of which we are able to determine the global dynamics of the approximating quadratic system completely and thus identify its globally asymptotically stable critical points (i.e. steady states) for each position x. This analysis is carried out in Sec. 3.

When α, the ratio of Reynolds to Deborah numbers, is strictly positive, the stability of discontinuous steady states of system (JSO) remains to be settled. Recently, Nohel, Pego and Tzavaras [15] established such a result for a simple model in which the polymer contribution to the shear stress satisfies a single differential constitutive relation; for a particular choice, their model and system (JSO) with α > 0 have the same behavior in steady shear. Their asymptotic stability result, combined with numerical experiments and research in progress, suggests that the same result holds for the full system (JSO), at least when α is sufficiently small.

In Sec. 4, the analysis of Sec. 3 is applied to each point x in the channel, allowing us to explain spurt, shape memory, hysteresis, and other effects originally observed in the numerical simulations in terms of a continuum of phase portraits. We discuss asymptotic expansions of solutions of systems (JSO) and (JSO_2) of Ref.
[13] in powers of ε that enable us to explain latency (a pseudo-steady state that precedes spurt). The asymptotic analysis also permits a more quantitative comparison of the dynamics of the two models when ε is sufficiently small. In Sec. 5., we discuss physical implications of the analysis, particularly those that suggest new experiments. In Sec. 6., we draw certain conclusions. Although the analysis in this paper applies only to the special constitutive models we have studied, we expect that the qualitative features of our results appear in a broad class of non-Newtonian fluids. Indeed, numerical simulation by Kolkka and Ierley [10] using another model with a single relaxation time and Newtonian viscosity exhibits very similar character. 2. A Johnson-Segalman-Oldroyd Model for Shear Flow. The motion of a fluid under incompressible and isothermal conditions is governed by the balance of mass and linear momentum. The response characteristics of the fluid are embodied in the constitutive relation for the stress. For viscoelastic fluids with fading memory, these relations specify the stress as a functional of the deformation history of the fluid. Many sophisticated constitutive models have been devised; see Ref. [2] for a survey. Of particular interest is a class of differential models with m relaxation times, derived in a three-dimensional setting in Refs. [12] and [13]; these models can be regarded as special cases of the Johnson-Segalman model [8] when the memory function is a linear combination of m decaying exponentials with positive coefficients, or of the Oldroyd differential constitutive equation [16]. Essential properties of constitutive relations are exhibited in simple planar Poiseuille shear flow. We study shear flow of a non-Newtonian fluid between parallel plates, located at x = ±h/2, with the flow aligned along the y-axis, symmetric about the center line, and driven by a constant pressure gradient f.
We restrict attention to the simplest model, a single relaxation-time differential model that possesses steady state solutions exhibiting a non-monotone relation between the total steady shear stress and strain rate, and thereby reproduces spurt and related phenomena discussed below. The total shear stress T is decomposed into a polymer contribution and a Newtonian viscosity contribution. When restricted to one space dimension, the initial-boundary value problem governing the flow can be written, in non-dimensional units with distance scaled by h, in the form (see Refs. [9, 12]):

    α v_t − ε v_xx = σ_x + f ,
    σ_t − (Z + 1) v_x = −σ ,                                  (JSO)
    Z_t + σ v_x = −Z

on the interval [−1/2, 0], with boundary conditions v(−1/2, t) = 0, v_x(0, t) = 0, and initial conditions

    v(x, 0) = v_0(x) ,  σ(x, 0) = σ_0(x) ,  Z(x, 0) = Z_0(x)  on −1/2 ≤ x ≤ 0 ;   (IC)

symmetry of the flow and compatibility with the boundary conditions require that v_0(−1/2) = 0, v_0'(0) = 0, and σ_0(0) = 0. The evolution of σ, the polymer contribution to the shear stress, and of Z, a quantity proportional to the normal stress difference, are governed by the second and third equations in system (JSO). As a result of scaling motivated by numerical simulation and introduced in Ref. [12], there are only three essential parameters: α is a ratio of Reynolds number to Deborah number, ε is a ratio of viscosities, and f is the constant pressure gradient. When ε = 0 and Z + 1 ≥ 0, system (JSO) is hyperbolic, with characteristic speeds ±[(Z + 1)/α]^{1/2} and 0. Moreover, for smooth initial data in the hyperbolic region and compatible with the boundary conditions, techniques in [18] can be used to establish global well-posedness (in terms of classical solutions) if the data are small, and finite-time blow-up of classical solutions if the data are large. If ε > 0, system (JSO) is globally well posed for any smooth or piecewise smooth data; indeed, general theory developed in [15] (see Sec.
3 and particularly Appendix A) yields global existence of classical solutions for smooth initial data of arbitrary size, and also existence of almost classical, strong solutions with discontinuities in the initial velocity gradient and in stress components; the latter result allows one to prescribe discontinuous initial data of the same type as the discontinuous steady states studied in this paper. The steady-state solutions of system (JSO) play an important role in our discussion. Such a solution, denoted by v̄, σ̄, and Z̄, can be described as follows. The stress components σ̄ and Z̄ are related to the strain rate v̄_x through the relations

    σ̄ = v̄_x / (1 + v̄_x²) ,   Z̄ = −v̄_x² / (1 + v̄_x²) .

Therefore, the steady total shear stress T̄ := σ̄ + ε v̄_x is given by

    T̄ = w(v̄_x) ,   where   w(s) := s/(1 + s²) + ε s .

The properties of w, the steady-state relation between shear stress and shear strain rate, are crucial to the behavior of the flow. By symmetry, it suffices to consider s ≥ 0. For all ε > 0, the function w has inflection points at s = 0 and s = √3. When ε > 1/8, the function w is strictly increasing, but when ε < 1/8, the function w is not monotone. Lack of monotonicity is the fundamental cause of the non-Newtonian behavior studied in this paper; hereafter we assume that ε < 1/8. The graph of w is shown in Fig. 1. Specifically, w has a maximum at s = s_M and a minimum at s = s_m, where it takes the values T_M := w(s_M) and T_m := w(s_m), respectively. As ε → 1/8, the two critical points coalesce at s = √3. Fig. 1: Total steady shear stress T̄ vs. shear strain rate v̄_x for steady flow. The case of three critical points is illustrated; other possibilities are discussed in Sec. 3. The momentum equation, together with the boundary condition at the centerline, implies that the steady total shear stress satisfies T̄ = −f x for every x ∈ [−1/2, 0].
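The shape of w described above is easy to check numerically. The following Python sketch is our own illustration (the language and the numerical value ε = 0.01 are assumptions, not part of the original analysis); it locates s_M and s_m by solving w′(s) = 0, which with u = s² reduces to a quadratic:

```python
import math

def w(s, eps):
    # Steady total shear stress w(s) = s/(1 + s^2) + eps*s (Sec. 2).
    return s / (1.0 + s * s) + eps * s

def extrema(eps):
    # w'(s) = (1 - s^2)/(1 + s^2)^2 + eps = 0; with u = s^2 this becomes
    # eps*u^2 + (2*eps - 1)*u + (1 + eps) = 0.
    disc = (2.0 * eps - 1.0) ** 2 - 4.0 * eps * (1.0 + eps)
    if disc < 0:
        return None  # w is strictly increasing (eps >= 1/8)
    r = math.sqrt(disc)
    u_max = ((1.0 - 2.0 * eps) - r) / (2.0 * eps)  # local max at s_M
    u_min = ((1.0 - 2.0 * eps) + r) / (2.0 * eps)  # local min at s_m
    return math.sqrt(u_max), math.sqrt(u_min)
```

For eps = 0.01 this gives s_M ≈ 1.02 and s_m ≈ 9.85, with T_M = w(s_M) ≈ 0.51 and T_m = w(s_m) ≈ 0.20, in agreement with the asymptotic sizes 1/2 + O(ε) and 2√ε(1 + O(ε)) used in Sec. 4; the discriminant vanishes exactly at ε = 1/8, where the two extrema coalesce at s = √3.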
Therefore, the steady velocity gradient can be determined as a function of x by solving

    w(v̄_x) = T̄(x) .                                          (2.3)

Equivalently, a steady-state solution v̄_x satisfies the cubic equation P(v̄_x) = 0, where

    P(s) := ε s³ − T̄ s² + (1 + ε) s − T̄ .                    (2.4)

The steady velocity profile in Fig. 2 is obtained by integrating v̄_x and using the boundary condition at the wall. However, because the function w is not monotone, there might be up to three distinct values of v̄_x that satisfy Eq. (2.3) for any particular x on the interval [−1/2, 0]. Consequently, v̄_x can suffer jump discontinuities, resulting in kinks in the velocity profile (as at the point x* in Fig. 2). Indeed, a steady solution must contain such a jump if the total stress T̄_wall = f/2 at the wall exceeds the total stress T_M at the local maximum M in Fig. 1. Fig. 2: Velocity profile for steady flow. Finally, we remark that the flow problem discussed here can also be modelled by a system based on a differential constitutive law with two widely spaced relaxation times but no Newtonian viscosity contribution (see system (JSO_2) in Sec. 2. of [13]); with an appropriate choice of relevant parameters, the resulting problem exhibits the same steady states and the same characteristics as (JSO). 3. Phase Plane Analysis for System (JSO) When α = 0. When α is not zero, numerical simulation developed in [9, 11, 12] discovered striking phenomena in shear flow and suggested the analysis that follows. A great deal of information about the structure of solutions of system (JSO) can be garnered by studying a quadratic system of ordinary differential equations that approximates it in a certain parameter range, the dynamics of which is determined completely. Motivation for this approximation comes from the following observation: in experiments of Vinogradov et al. [19], α is of the order 10^{-12}; thus the term α v_t in the momentum equation of system (JSO) is negligible even when v_t is moderately large.
This led us to the approximation to system (JSO) obtained when α = 0. When α = 0, the momentum equation in system (JSO) can be integrated to show that the total shear stress T := σ + ε v_x coincides with the steady value T̄(x) = −f x. Thus T = T̄(x) is a function of x only, even though σ and v_x are functions of both x and t. The remaining equations of system (JSO) yield, for each fixed x, the autonomous, quadratic, planar system of ordinary differential equations

    σ̇ = (Z + 1)(T − σ)/ε − σ ,
    Ż = −σ(T − σ)/ε − Z .                                     (3.1)

Here the dot denotes the derivative d/dt. We emphasize that a different dynamical system is obtained at each x on the interval [−1/2, 0] in the channel because T = −f x. By symmetry, we may focus attention on the case T > 0; also recall from Sec. 2 that ε < 1/8; these are assumed throughout. The dynamical system (3.1) can be analyzed completely by a phase-plane analysis outlined below; the reader is referred to Sec. 3 in [13] for further details. Here we state the main results. The critical points of system (3.1) satisfy the algebraic system

    (Z + 1)(T − σ) = ε σ ,   σ(T − σ) = −ε Z .                (3.2)

These equations define, respectively, a hyperbola and a parabola in the σ-Z plane; these curves are drawn in Fig. 3, which corresponds to the most comprehensive case of three critical points. The critical points are intersections of these curves. In particular, critical points lie in the strip 0 < σ < T. Eliminating Z in these equations shows that the σ-coordinates of the critical points satisfy the cubic equation Q(σ/T) = 0, where

    Q(ξ) := T² ξ (1 − ξ)² − ε (1 − ξ) + ε² ξ .                (3.3)

A straightforward calculation using Eq. (2.4) shows that

    P(v̄_x) = P((T − σ)/ε) = −(T/ε²) Q(σ/T) .

Thus each critical point of the system (3.1) defines a steady-state solution of system (JSO): such a solution corresponds to a point on the steady total-stress curve (see Fig. 1) at which the total stress is T̄(x). Consequently, we have: Fig. 3: The phase plane in the case of three critical points.
PROPOSITION 3.1: For each position x in the channel and for each T > 0, there are three possibilities: (1) there is a single critical point A when T < T_m; (2) there is a single critical point C when T > T_M; (3) there are three critical points A, B, and C when T_m < T < T_M. For simplicity, we ignore the degenerate cases T = T_m and T = T_M, in which two critical points coalesce. To determine the qualitative structure of the dynamical system (3.1), we first study the nature of the critical points. The behavior of orbits near a critical point depends on the linearization of system (3.1) at this point, i.e., on the eigenvalues of the Jacobian matrix J associated with Eq. (3.1), evaluated at the critical point. To avoid solving the cubic equation Q(σ/T) = 0, the character of the eigenvalues of J can be determined from the signs of the trace of J, denoted Tr J, the determinant of J, denoted Det J, and the discriminant of J, denoted Discrm J, at the critical points. We omit these tedious calculations, a result of which is a useful fact: at a critical point, ε² Det J = Q′(σ/T). This relation is important because Q′ is positive at A and C and negative at B. To assist the reader, Fig. 3 shows the hyperbola on which σ̇ = 0, the parabola on which Ż = 0 [see Eqs. (3.2)], and the hyperbola on which Discrm J vanishes. As a result of the analysis above, we draw the following conclusions: (1) Tr J < 0 at all critical points; (2) Det J > 0 at A and C, while Det J < 0 at B; and (3) Discrm J > 0 at A and B, whereas Discrm J can be of either sign at C. (For typical values of ε and T, Discrm J < 0 at C; in particular, Discrm J < 0 if C is the only critical point. But it is possible for Discrm J to be positive if T is sufficiently close to T_m.) Standard theory of nonlinear planar dynamical systems (see, e.g., Ref. [3, Chap.
15]) now establishes the local characters of the critical points A, B, C in Proposition 3.1: PROPOSITION 3.2: (1) A is an attracting node (called the classical attractor); (2) B is a saddle point; (3) C is either an attracting spiral point or an attracting node (called the spurt attractor). The next task is to determine the global structure of the orbits of system (3.1). In this direction, we modify an argument suggested by A. Coppel [4] and establish the crucial result, the proof of which involves a change in the time scale and an application of Bendixson's theorem: PROPOSITION 3.3: System (3.1) has neither periodic orbits nor separatrix cycles. To understand the global qualitative behavior of orbits, we construct suitable invariant sets. In this regard, a crucial tool is that system (3.1) is endowed with the identity

    (1/2) (d/dt) [σ² + (Z + 1)²] = −[σ² + Z(Z + 1)] .         (3.5)

Thus the function V(σ, Z) := σ² + (Z + 1)² serves as a Lyapunov function for the dynamical system. Notice that identity (3.5) is independent of T and ε. Let Γ denote the circle on which the right side of Eq. (3.5) vanishes, and let C_r denote the circle of radius r centered at σ = 0 and Z = −1, i.e. C_r := {(σ, Z) : V(σ, Z) = r², r > 0}; each C_r is a level set of V. The circles Γ and C_1 are shown in Fig. 4, which corresponds to the case of a single critical point, the spiral point C. Eq. (3.5) also implies that the critical points of system (3.1) lie on Γ. If r > 1, Γ lies strictly inside C_r. Consequently, Eq. (3.5) shows that the dynamical system (3.1) flows inward at points along C_r. Thus the interior of C_r is a positively invariant set for each r > 1. Furthermore, the closed disk bounded by C_1, which is the intersection of these sets, is also positively invariant.
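Both the identity (3.5) and the attracting character of the critical points can be checked in a short numerical sketch. The Python code below is our own illustration (the values T = 0.55 and ε = 0.01 are assumed, and the crude explicit Euler scheme is ours, not the simulations of [9, 11, 12]):

```python
def rhs(sigma, Z, T, eps):
    # Right-hand side of system (3.1) at fixed total stress T = -f*x.
    ds = (Z + 1.0) * (T - sigma) / eps - sigma
    dZ = -sigma * (T - sigma) / eps - Z
    return ds, dZ

def identity_residual(sigma, Z, T, eps):
    # Residual of (3.5): (1/2) d/dt [sigma^2 + (Z+1)^2] + [sigma^2 + Z*(Z+1)].
    ds, dZ = rhs(sigma, Z, T, eps)
    return sigma * ds + (Z + 1.0) * dZ + sigma * sigma + Z * (Z + 1.0)

def integrate(sigma, Z, T, eps, t_end, dt=1e-4):
    # Crude explicit Euler integration of (3.1).
    for _ in range(int(t_end / dt)):
        ds, dZ = rhs(sigma, Z, T, eps)
        sigma, Z = sigma + dt * ds, Z + dt * dZ
    return sigma, Z
```

The residual of (3.5) vanishes identically, independently of T and ε; and for T > T_M a trajectory started from rest approaches the spurt attractor C, whose limit satisfies the algebraic system (3.2) and lies on the circle Γ, where σ² + Z(Z + 1) = 0.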
Therefore the above argument establishes: PROPOSITION 3.4: Each closed disk bounded by the circle C_r, r ≥ 1, is a positively invariant set for the system (3.1). The above results, combined with identification of suitable invariant sets, were used to determine the global structure of the orbits of system (3.1) in the cases of one and three critical points, and to analyze the stable and unstable manifolds of the saddle point at B. These results are shown in Figs. 5 and 6 and summarized in the following result. Next, consider increasing the load from f_0 to f > f_0. A point x that is classical for f_0 remains classical for f unless there is no classical attractor for T = −f x, i.e., −f x > T_M. A spurt point for f_0, on the other hand, is always a spurt point for f. As a result, a point x in the channel can change only from a classical attractor to a spurt attractor, and then only if −f x exceeds T_M. When f is chosen to be supercritical, loading causes the position x* of the kink in Fig. 2 to move away from the wall, but only to the extent that it must: a single jump in strain rate occurs at x* = −T_M/f, where the total stress is T* = T_M. These conclusions are valid, in particular, for a quasi-static process of gradually increasing the load from f_0 to f > f_crit. Now consider unloading from f_0 > 0 to f < f_0; assume, for the moment, that f is positive. Here, the initial steady solution need not correspond to top jumping. For this type of unloading, a point x that is classical for f_0 always remains classical for f: the classical attractor for f exists because f|x| < f_0|x|. By contrast, a spurt point x for f_0 can become classical at f. This occurs if: (a) the total stress T = −f x falls below T_m; or (b) the spurt attractor C_0 for T = −f_0 x lies on the classical side of the stable manifold of the saddle point B for T = −f x (see Proposition 4.1(2b)). Combining the analysis of loading and unloading leads to the following summary of quasi-static cycles and the resulting flow hysteresis.
Kinks move away from the wall under top-jumping loading; they move toward the wall under bottom-jumping unloading; otherwise they remain fixed. The hysteresis loop opens from the point at which unloading commences; no part of the unloading path retraces the loading path until point d in Fig. 7. To explain the latency effect that occurs during loading, assume that ε is small. It is readily seen that the total stress T_M at the local maximum M is 1/2 + O(ε), while the local minimum m corresponds to a total stress T_m of 2√ε [1 + O(ε)]. Furthermore, for x such that T(x) = O(1), σ = T + O(ε) at the attracting node A, while σ = O(ε) at a spurt attractor C (which is a spiral). Consider a point along the channel for which T(x) > T_M, so that the only critical point of the system (3.1) is C, and suppose that T < 1. Then the evolution of the system exhibits three distinct phases, as indicated in Fig. 6: an initial "Newtonian" phase (0 to N); an intermediate "latency" phase (N to S); and a final "spurt" phase (S to C). The Newtonian phase occurs on a time scale of order ε, during which the system approximately follows an arc of a circle centered at σ = 0 and Z = −1. Having assumed that T < 1, Z approaches

    Z_N := −1 + √(1 − T²)                                     (4.1)

as σ rises to the value T. (If, on the other hand, T ≥ 1, the circular arc does not extend as far as T, and σ never attains the value T; rather, the system slowly spirals toward the spurt attractor. Thus the dynamical behavior does not exhibit distinct phases.) The latency phase is characterized by having σ = T + O(ε), so that σ is nearly constant and Z evolves approximately according to the differential equation

    Ż = −Z − T²/(Z + 1) .                                     (4.2)

Therefore, the shear stress and velocity profiles closely resemble those for a steady solution with no spurt, but the solution is not truly steady because the normal stress difference Z still changes. Integrating Eq. (4.2) from Z = Z_N to Z = −1 determines the latency period.
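The three phases can be reproduced with the same kind of crude time-stepping. In the Python sketch below (our own illustration under the assumed values ε = 0.01 and T = 0.55 > T_M, with quiescent initial data), the "latency window" is taken to be the interval of time during which σ stays within 0.05 of T:

```python
def phases(T=0.55, eps=0.01, dt=1e-4, t_end=25.0):
    # Integrate (3.1) from rest by explicit Euler; record the entry and
    # exit times of the latency window |sigma - T| < 0.05 and the value
    # Z_N of Z on entry (cf. Eq. (4.1)).
    sigma, Z, t = 0.0, 0.0, 0.0
    t_in = t_out = Z_N = None
    for _ in range(int(t_end / dt)):
        if abs(sigma - T) < 0.05:
            if t_in is None:
                t_in, Z_N = t, Z
            t_out = t
        ds = (Z + 1.0) * (T - sigma) / eps - sigma
        dZ = -sigma * (T - sigma) / eps - Z
        sigma, Z, t = sigma + dt * ds, Z + dt * dZ, t + dt
    return sigma, Z, t_in, t_out, Z_N
```

The run enters the window after a Newtonian transient of order ε, with Z close to the value Z_N of Eq. (4.1); it lingers there for a latency period lasting several relaxation times while Z drifts according to Eq. (4.2); and it finally leaves for the spurt attractor, where σ = O(ε).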
This period becomes indefinitely long when the forcing decreases to its critical value; thus the persistence of the near-steady solution with no spurt can be very dramatic. The solution remains longest near the point L where Z = −1 + T. This point may be regarded as the remnant of the attracting node A and the saddle point B. Eventually the solution enters the spurt phase and tends to the critical point C. Because C is an attracting spiral, the stress oscillates between the shear and normal components while it approaches the steady state. Asymptotic analysis carried out in Sec. 6 of [13] shows that when ε is sufficiently small, system (JSO_2) of [13] has the same asymptotic properties as system (JSO). Thus system (JSO) approximates (JSO_2) quantitatively as well as qualitatively. 5. Physical Implications. One of the widely accepted explanations of spurt and similar observations is that the presence of the wall affects the dynamics of the polymer system near the wall. Conceivably, there could be a variety of "wall effects"; the most obvious is the loss of chemical bonding between wall and fluid, or wall slip [5]. Perhaps the most distinguishing feature of our alternative approach is that it predicts that spurt stems from a material property of the polymer and is not related to any external interaction. The spurt layer forms at the wall in situations such as top jumping because the stresses are higher there; for the same reason, of course, chemical bonds would break at the wall first; however, our approach predicts that the layer of spurt points spreads into the interior of the channel on continued loading. Layer thickness is predicted to grow continuously in loading to a thickness that should be observable, provided secondary (two-dimensional) instabilities do not develop.
Our analysis suggests other ways in which experiments might be devised to verify the dependence of spurt on material properties: (i) produce multiple kinks, with a spurt layer separated from the wall; (ii) produce hysteresis in flow reversal (Fig. 9). Our model predicts circumstances under which a different path can be followed in sudden reversal of the flow than would be followed by a sequence of solutions in which the pressure gradient is reduced to zero and reloaded again (with the opposite sign) to a value of somewhat smaller magnitude. Such behavior does not seem likely to be explainable by a wall effect. The most important and perhaps the easiest experiment to perform to verify our theory is to produce latency. Our analysis predicts long latency times for data corresponding to realistic material data; no sophisticated timing device would be required, nor would the onset of the instability be hard to identify. The increase in throughput is predicted to be so dramatic that simple visual inspection of the exit flow would probably be sufficient. 6. Conclusions. Although our analysis applies only to the special constitutive models we have studied, we expect that the qualitative features of our results appear in a broad class of non-Newtonian fluids. Our analysis has identified certain universal mathematical features in the shear flow of viscoelastic fluids described by differential constitutive relations that give rise to spurt and related phenomena. The key feature is that there are three widely separated time scales, each associated with an important non-dimensional number (α, ε, and 1, respectively), when scaled by the dominant relaxation time, λ^{-1}. Each of these time scales can be associated with a particular equation in system (JSO) [13]. The key to understanding the dynamics of such systems is fixing the location of the discontinuity in the strain rate induced by the non-monotone character of the steady shear stress vs. strain rate curve. Acknowledgments.
We thank Professor A. Coppel for suggesting an elegant argument that rules out the existence of periodic orbits and separatrix cycles for the system (3.1). We also acknowledge helpful discussions with D. Aronson, M. Denn, G. Sell, M. Slemrod, A. Tzavaras, and M. Yao. 1. A. Andronov and C. Chaikin, Theory of Oscillations, Princeton Univ. Press, Princeton, 1949. 2. R. Bird, R. Armstrong, and O. Hassager, Dynamics of Polymeric Liquids, John Wiley and Sons, New York, 1987. 3. E. Coddington and N. Levinson, Theory of Ordinary Differential Equations, McGraw-Hill, New York, 1955. 4. A. Coppel, private communication, 1989. 5. M. Denn, "Issues in Viscoelastic Fluid Dynamics," Annual Reviews of Fluid Mechanics, 1989, to appear. 6. M. Doi and S. Edwards, "Dynamics of Concentrated Polymer Systems," J. Chem. Soc. Faraday 74 (1978), pp. 1789-1832. 7. J. Hunter and M. Slemrod, "Viscoelastic Fluid Flow Exhibiting Hysteretic Phase Changes," Phys. Fluids 26 (1983), pp. 2345-2351. 8. M. Johnson and D. Segalman, "A Model for Viscoelastic Fluid Behavior which Allows Non-Affine Deformation," J. Non-Newtonian Fluid Mech. 2 (1977), pp. 255-270. 9. R. Kolkka, D. Malkus, M. Hansen, G. Ierley, and R. Worthing, "Spurt Phenomena of the Johnson-Segalman Fluid and Related Models," J. Non-Newtonian Fluid Mech. 29 (1988), pp. 303-325. 10. R. Kolkka and G. Ierley, "Spurt Phenomena for the Giesekus Viscoelastic Liquid Model," J. Non-Newtonian Fluid Mech., 1989, to appear. 11. D. Malkus, J. Nohel, and B. Plohr, "Time-Dependent Shear Flow of a Non-Newtonian Fluid," in Conference on Current Problems in Hyperbolic Problems: Riemann Problems and Computations (Bowdoin, 1988), ed. B. Lindquist, Amer. Math. Soc., Providence, 1989, Contemporary Mathematics, to appear. 12. D. Malkus, J. Nohel, and B. Plohr, "Dynamics of Shear Flow of a Non-Newtonian Fluid," J. Comput. Phys., 1989, to appear. 13. D. Malkus, J. Nohel, and B. Plohr, "Analysis of New Phenomena in Shear Flow of Non-Newtonian Fluids," SIAM J. Appl.
Math., 1989, submitted. 14. T. McLeish and R. Ball, "A Molecular Approach to the Spurt Effect in Polymer Melt Flow," J. Polymer Sci. 24 (1986), pp. 1735-1745. 15. J. Nohel, R. Pego, and A. Tzavaras, "Stability of Discontinuous Steady States in Shearing Motions of Non-Newtonian Fluids," Proc. Roy. Soc. Edinburgh, Series A, 1989, submitted. 16. J. Oldroyd, "Non-Newtonian Effects in Steady Motion of Some Idealized Elastico-Viscous Liquids," Proc. Roy. Soc. London A 245 (1958), pp. 278-297. 17. J. Pearson, Mechanics of Polymer Processing, Elsevier Applied Science, London, 1985. 18. M. Renardy, W. Hrusa, and J. Nohel, Mathematical Problems in Viscoelasticity, Pitman Monographs and Surveys in Pure and Applied Mathematics, Vol. 35, Longman Scientific & Technical, Essex, England, 1987. 19. G. Vinogradov, A. Malkin, Yu. Yanovskii, E. Borisenkova, B. Yarlykov, and G. Berezhnaya, "Viscoelastic Properties and Flow of Narrow Distribution Polybutadienes and Polyisoprenes," J. Polymer Sci., Part A-2 10 (1972), pp. 1061-1084. 20. M. Yao and D. Malkus, "Analytical Solutions of Plane Poiseuille Flow of a Johnson-Segalman Fluid," in preparation, 1989. DYNAMIC PHASE TRANSITIONS: A CONNECTION MATRIX APPROACH* KONSTANTIN MISCHAIKOW† Abstract. Travelling wave solutions between liquid and vapor phases in a van der Waals fluid are shown to exist. The emphasis, however, is on the techniques, namely the Conley index and connection matrix. In particular, it is suggested that these methods provide a natural approach for solving problems of this type. 1. Introduction. In this paper we shall discuss the existence of travelling wave solutions between liquid and vapor phases in a van der Waals fluid. The particular equations to be studied were derived by Marshall Slemrod [Sl1], and are meant to model an elastic fluid with pressure given by the van der Waals equation of state and possessing a higher order correction term given by Korteweg's theory of capillarity.
A complete description of the analysis of shocks and phase transitions in such a fluid would have to begin with a discussion of a set of partial differential equations of mixed type. However, since we are only interested in the travelling wave solutions, we shall begin with the reduced problem, namely, a system of ordinary differential equations. The interested reader should consult [Sl1], [Sl2], [HS], [G1], [G2], and [G3] for a discussion of the full problem. It must be emphasized that most of the results which we shall describe were obtained by Slemrod [Sl1] and M. Grinfeld [G1], [G3]. The purpose for reproducing these results is to demonstrate the power of recent developments in the Conley index. Both Slemrod and Grinfeld made extensive use of this index. However, the techniques which we shall use, most prominently the connection matrix, were not available when their work was being done. It is our contention that these new techniques simplify the computations sufficiently to warrant this presentation. Furthermore, this problem is typical of a wide variety of phase transition problems, and these techniques should prove equally useful in these other applications. Before considering the specific equations let us consider four important characteristics of this problem. First, the system of ordinary differential equations is n-dimensional (in this case n = 3), and phase plane techniques are difficult if not impossible to apply when n ≥ 3. Second, the number of critical points is large (i.e. greater than or equal to 4). Third, the goal is to find heteroclinic orbits, i.e. solutions which asymptote to different critical points in forward and backward time. These three characteristics make the problem "challenging". Finally, there exists a Lyapunov function. This of course simplifies the problem. We hope to convince the reader that for any problem with the above characteristics, the Conley index theory is a natural approach to adopt.
This approach can be summarized in three steps: 1. Identify the critical points and their Conley indices. 2. Show that the set of bounded solutions is compact and compute its index. 3. Apply the connection matrix theory. This last step is, for the most part, just a matter of simple matrix computations. (*Partially supported by an AURIG from Michigan State University. †Department of Mathematics, Michigan State University, East Lansing, MI 48824. Current address: School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332.) The equations we shall study are the system P_U, where: 1. The pressure p is given by the van der Waals equation of state

    p(w, θ) = Rθ/(w − b) − a/w² ,   b < w < ∞ ,

where a, b, and R are all positive constants, w is the specific volume, and θ is the absolute temperature. 2. The specific internal energy function is given by

    e(w, θ) = −a/w + F(θ) ,

where F is any function for which F″(θ) < 0 and lim_{θ→∞} F′(θ) = −∞. The most natural choice of F is F*(θ) = −c_v θ ln θ + constant. We shall study the problem with the more general internal energy function only because it demonstrates some of the power of the index theory. In particular, for F = F* there are at most 4 critical points, two of which are of particular importance since they correspond to the liquid and vapor phases of the fluid. For a general F, there may be more critical points, though again there are only two physically relevant ones. Using the index theory we shall show that there is a natural way to ignore the irrelevant critical points. In particular, under appropriate conditions (see the end of §3) treating the general F is no more difficult than studying F = F*. 3. U is the wave speed of the travelling wave and, as indicated above, will be used as a parameter. One only expects to find travelling wave solutions for a small set of wave speeds. 4. α and μ are positive constants corresponding to the coefficient of thermal conductivity and the viscosity coefficient, respectively. 5.
w₋ and θ₋ are the values of the specific volume and absolute temperature at the critical point corresponding to the liquid phase, p₋ = p(w₋, θ₋) and e₋ = e(w₋, θ₋). Since we are studying travelling waves, we are interested in those solutions to P_U which satisfy certain boundary conditions, the explanation of which requires the following digression concerning the function p. In Figure 1.1, two θ-isoclines of p have been plotted (clearly, fixing θ defines p as a function of w only); the isoclines shift upward as θ increases. Figure 1.1. Choose θ₋ small enough that the function p(w, θ₋) has a local minimum and a local maximum. In the diagram the locations of this minimum and maximum are labelled w_α and w_β. Let w₋ ∈ (b, w_α). This of course defines p₋ and e₋. One can now easily check that (w, v, θ) = (w₋, 0, θ₋) is a critical point of P_U. We can now formally state the problem which needs to be solved. Prove the existence of wave speeds U for which there exist solutions (w(t), v(t), θ(t)) to P_U satisfying the boundary conditions

    lim_{t→−∞} (w(t), v(t), θ(t)) = (w₋, 0, θ₋) ,   lim_{t→+∞} (w(t), v(t), θ(t)) = (w₊, 0, θ₊) ,   w₊ > w_β .

Solutions of this form shall be denoted by w₋ → w₊. If one defines 𝒰 = {U > 0 | w₋ → w₊ exists}, then the problem can be restated as: show that 𝒰 ≠ ∅. The Lyapunov function for the system P_U is given by

    V(w, v, θ) = R ln(w − b) − F′(θ) + (1/θ₋) { −(e(w, θ) − e₋) − (w − w₋) p₋ + (U²/2)(w − w₋)² + v²/2 } .

It is a simple computation to check that for the system P_U, dV/dt ≤ 0. The rest of the paper is organized as follows. We begin in §2 with a brief review of the relevant aspects of the Conley index theory, with a special emphasis on the connection matrix and transition matrix techniques. In §3 we analyze the existence, and compute the index, of the critical points. (Notice that it is not clear from the above discussion that the critical point (w₊, 0, θ₊) exists.) Our analysis is quite similar to that of Grinfeld [G1]. §4 is concerned with the set of bounded solutions.
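The non-monotonicity of the isotherms underlying Figure 1.1 is easy to exhibit numerically. In the Python sketch below, the constants a = 3, b = 1/3, R = 8/3 (which normalize the critical temperature to θ_c = 1) and the temperatures used are our own illustrative choices, not values from the paper:

```python
def p(w, theta, a=3.0, b=1.0 / 3.0, R=8.0 / 3.0):
    # van der Waals pressure p(w, theta) = R*theta/(w - b) - a/w**2.
    return R * theta / (w - b) - a / (w * w)

def spinodal_points(theta, a=3.0, b=1.0 / 3.0, R=8.0 / 3.0, n=20000):
    # Scan w > b for sign changes of dp/dw; below the critical
    # temperature there are exactly two: the local min w_alpha and
    # the local max w_beta of the isotherm p(., theta).
    def dpdw(w):
        return -R * theta / (w - b) ** 2 + 2.0 * a / w ** 3
    ws = [b + 1e-3 + 3.0 * k / n for k in range(n + 1)]
    return [0.5 * (w0 + w1) for w0, w1 in zip(ws, ws[1:])
            if dpdw(w0) < 0.0 <= dpdw(w1) or dpdw(w0) >= 0.0 > dpdw(w1)]
```

For θ = 0.9 < θ_c this finds w_α ≈ 0.72 and w_β ≈ 1.53, with p(w_α, θ) < p(w_β, θ); for θ = 1.1 > θ_c the isotherm is monotone in w and no such points exist.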
In §5 we compute the necessary connection matrices. And, finally, in §6 we state and prove theorems guaranteeing the existence of the desired travelling waves. A final comment is in order. As was indicated earlier, the main goal of this paper is to show how the connection matrix theory can be applied to dynamic phase transition problems. Thus we are emphasizing the technique rather than the results. This is most apparent in §3, where certain necessary assumptions are made, and in §6, where the results are stated. In the first case we state the assumptions in the most convenient form for the connection matrix theory. We justify this approach by acknowledging that the work required to verify the assumptions can be found in [G3]. With regard to §6, we state three theorems, only two of which we prove. As the reader will see, the proofs are trivial (given the connection matrix machinery). The theorem we do not prove is due to Grinfeld and requires a more subtle approach. We state it for two reasons: (one) to acknowledge that the connection matrix cannot do everything, and (two) because we feel that it would be desirable, and perhaps even possible, to extend Conley index techniques to handle such problems directly. Acknowledgements. I would like to thank Marshall Slemrod and Michael Grinfeld for calling this problem to my attention. 2. Conley Index Theory. This section presents a brief review of the Conley index, connection matrices, and transition matrices. While it is assumed that the reader is familiar with the Conley index (see [C], [S], [Sm]), the basic properties of the connection matrices and transition matrices are described in sufficient detail to allow the reader to mechanically apply the theory. For a complete presentation of this material see [F], [Mc], [M1], [M2], and [R].
Throughout this paper S will denote an isolated invariant set and h(S) the Conley index of S. Recall that h(S) is the homotopy type of a pointed topological space, and therefore one can define a homology version of the Conley index by

CH_*(S; R) := H_*(h(S); R),

where the latter expression denotes the singular homology of the topological space h(S) relative to the special point, with coefficients in a ring R. The reason for using the homology index is that in general homology theory is more computable than homotopy theory. To simplify the computations even further we shall choose R = Z₂. Of course this implies that CH_*(S) = CH_*(S; R) is a graded vector space.

PROPOSITION 2.1. If S is a hyperbolic critical point with an unstable manifold of dimension k, then CH_n(S) ≅ Z₂ if n = k, and CH_n(S) = 0 otherwise. If S = ∅, then CH_*(S) ≅ (0, 0, 0, ...).

DEFINITION 2.2. Given two isolated invariant sets, S and S′, the set of connections from S to S′ is denoted by

C(S, S′) = {x | ω(x) ⊂ S′ and ω*(x) ⊂ S},

where ω(x) and ω*(x) denote the omega and alpha limit sets of x, respectively. Using this language, our problem can be restated as: show that C(w_−, w_+) ≠ ∅.

Before considering how one decomposes isolated invariant sets we need to introduce some notation. A partially ordered set, (P, >), consists of a finite set P along with a strict partial order relation, >, which satisfies: (i) i > i never holds for i ∈ P, and (ii) if i > j and j > k, then i > k, for all i, j, k ∈ P. An interval in (P, >) is a subset I ⊂ P such that given i, j ∈ I, if i < k < j then k ∈ I.

DEFINITION 2.3. A Morse decomposition of S, denoted by M(S) = {M(i) | i ∈ (P, >)}, is a collection of mutually disjoint isolated invariant subsets of S, indexed by (P, >), such that given x ∈ S, either x ∈ M(i) for some i, or there exist i, j ∈ P such that i > j and x ∈ C(M(i), M(j)). The invariant sets M(i) are called Morse sets. Recall that in our problem we have a Lyapunov function, V.
Hence, a natural Morse decomposition of S, the set of all bounded solutions, would consist of all the critical points of the system. Furthermore, an ordering on this Morse decomposition could be given by i > j if V(M(i)) > V(M(j)). Let I be an interval in P; then one can define a new isolated invariant set by

M(I) = (⋃_{i∈I} M(i)) ∪ (⋃_{i,j∈I, i>j} C(M(i), M(j))).

Since M(I) is an isolated invariant set, CH_*(M(I)) is defined. Turning to the definition of a connection matrix, let

Δ : ⊕_{i∈P} CH_*(M(i)) → ⊕_{j∈P} CH_*(M(j))

be a linear map; then Δ can be written as a matrix of maps Δ = (Δ(i,j)), where Δ(i,j) : CH_*(M(i)) → CH_*(M(j)). For any interval I in P, let Δ(I) = (Δ(i,j))_{i,j∈I}.

DEFINITION 2.4. Δ is called a connection matrix for M(S) if the following four properties are satisfied:
(i) If i ≯ j, then Δ(i,j) = 0. (strict upper triangularity)
(ii) Δ(i,j)(CH_k(M(i))) ⊂ CH_{k−1}(M(j)). (degree −1 map)
(iii) Δ ∘ Δ = 0. (boundary map)
(iv) CH_*(M(I)) ≅ ker Δ(I)/image Δ(I) for every interval I. (rank condition)

Some of the basic theorems concerning connection matrices are as follows.

THEOREM 2.5. (Franzosa [F]) Given a Morse decomposition there exists at least one connection matrix.

THEOREM 2.6. (Franzosa [F]) Let {i, j} be an interval in P and assume that Δ(i,j) ≠ 0; then C(M(i), M(j)) ≠ ∅.

THEOREM 2.7. (McCord [Mc]) Let M(S) = {M(1), M(0) | 1 > 0} be a Morse decomposition consisting of hyperbolic critical points. Assume, for i = 0, 1, that

CH_n(M(i)) ≅ Z₂ if n = k + i, and CH_n(M(i)) = 0 otherwise.

Furthermore, let C(M(1), M(0)) consist of exactly q heteroclinic orbits which arise as the intersection of the stable and unstable manifolds of M(0) and M(1), respectively. Then Δ(1,0) = q mod 2.

Up to this point, our discussion has been based on a given flow. However, the problem we wish to study consists of a parameterized family of flows. Thus we need to compare different connection matrices corresponding to the flows at different parameter values. With this in mind, consider the family of differential equations x′ = f(x, λ), where x ∈ Rⁿ and λ ∈ Λ = R.
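Before passing to parameterized families, it may help to see Definition 2.4 and Theorem 2.7 at work in the simplest possible setting. The following toy example is ours (it is not taken from the problem at hand): the gradient flow of a double-well potential on the line.

```latex
% Toy illustration: x' = -V'(x) with V a double-well potential having
% critical points m_1 < c < m_2; the minima m_1, m_2 are attracting
% (unstable manifold of dimension 0), the maximum c has a one-dimensional
% unstable manifold. With P = {m_1, m_2, c} ordered by c > m_1, c > m_2,
% Proposition 2.1 gives
%   CH_*(m_i) = (Z_2, 0, 0, ...),   CH_*(c) = (0, Z_2, 0, ...).
% There is exactly one heteroclinic orbit from c to each m_i, so by
% Theorem 2.7 the connection matrix (basis order m_1, m_2, c) is
\[
\Delta =
\begin{pmatrix}
0 & 0 & 1\\
0 & 0 & 1\\
0 & 0 & 0
\end{pmatrix},
\qquad \Delta\circ\Delta = 0 .
\]
% The rank condition checks out: for I = P,
%   \ker\Delta/\operatorname{im}\Delta \cong (Z_2, 0, 0, \dots),
% which is the index of the full bounded set [m_1, m_2], an attracting
% interval and hence index-equivalent to an attracting point.
\]
```

The same bookkeeping, with more Morse sets and an extra grading shift, is what drives the computations of §5.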
The first assumption that needs to be made is that there exists an isolated invariant set S which continues over Λ (see [Sa]). For a fixed λ this isolated invariant set will be denoted by S^λ. Let M(S⁰) = {M⁰(i) | i ∈ (P⁰, >⁰)} and M(S¹) = {M¹(i) | i ∈ (P¹, >¹)} denote Morse decompositions of S^λ at λ = 0 and 1. Let Δ⁰ and Δ¹ be connection matrices for these Morse decompositions.

DEFINITION 2.8. A transition system consists of an infinite sequence of systems of the form

x′ = f(x, λ),  λ′ = ε_n λ(λ − 1),

where 1 ≫ ε_n > ε_{n+1} and lim_{n→∞} ε_n = 0. For each ε_n there exists an isolated invariant set with a Morse decomposition

M = {M(k) | k ∈ P⁰ ∪ P¹ and M(k) = Mⁱ(k) for k ∈ Pⁱ}

with the partial order relation generated by: j > i for j ∈ P¹ and i ∈ P⁰; i >⁰ i′ for i, i′ ∈ P⁰; and j >¹ j′ for j, j′ ∈ P¹.

PROPOSITION 2.9. CH_n(M(i)) ≅ CH_n(M⁰(i)) for i ∈ P⁰, and CH_n(M(j)) ≅ CH_{n−1}(M¹(j)) for j ∈ P¹.

PROPOSITION 2.10. ([R], [M2]) A connection matrix for M takes the form

Δ = ( Δ⁰  T¹⁰ ; 0  Δ¹ ),

where the entries Δ⁰ and Δ¹ are the connection matrices for M(S⁰) and M(S¹).

Remark 2.11. Notice, by Proposition 2.9, that the homology indices of M(j) are not in exact agreement with those of M¹(j). However, the difference is only in the grading. Thus the Δ¹ referred to in Proposition 2.10 is obtained by shifting the grading of the connection matrix for M(S¹) up by one.

DEFINITION 2.12. T¹⁰ is called the transition matrix from λ = 1 to λ = 0.

Returning to the transition system, notice that in the limit one obtains the parameterized system x′ = f(x, λ), λ′ = 0. The following theorem indicates that entries in the transition matrix correspond to connections for the system x′ = f(x, λ) for λ ∈ (0, 1).

THEOREM 2.13. (Reineck [R]) Let T(i¹, j⁰) : CH_*(M¹(i)) → CH_*(M⁰(j)) be an entry of the transition matrix T¹⁰. If T(i¹, j⁰) ≠ 0, then there exists a finite sequence 1 ≥ λ₁ ≥ λ₂ ≥ ... ≥ λ_k ≥ 0 of parameter values and corresponding k_r ∈ P^{λ_r} such that C(M(k_r), M(k_{r+1})) ≠ ∅ under the flow defined by x′ = f(x, λ_r).
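The algebraic constraint exploited repeatedly in §5 comes from squaring the block matrix of Proposition 2.10. The short computation below is ours (it is not displayed in the paper), but it is forced by Definition 2.4(iii):

```latex
% Squaring the block connection matrix of Proposition 2.10:
\[
\Delta^2 =
\begin{pmatrix} \Delta^0 & T^{10}\\ 0 & \Delta^1 \end{pmatrix}
\begin{pmatrix} \Delta^0 & T^{10}\\ 0 & \Delta^1 \end{pmatrix}
=
\begin{pmatrix}
(\Delta^0)^2 & \Delta^0 T^{10} + T^{10}\Delta^1\\
0 & (\Delta^1)^2
\end{pmatrix}.
\]
% Since a connection matrix satisfies \Delta\circ\Delta = 0, and
% (\Delta^0)^2 = (\Delta^1)^2 = 0 already hold, the new information is the
% off-diagonal relation
\[
\Delta^0\, T^{10} + T^{10}\, \Delta^1 = 0 ,
\]
% i.e. (over Z_2) the transition matrix intertwines the two connection
% matrices.
```

Relations of exactly this form, written out entry by entry, are what produce the equations among a, b, c, d, γ, δ in Lemmas 5.3 and 5.4.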
Finally, the following theorem indicates that certain entries in the transition matrix can be determined by understanding how the individual Morse sets continue.

THEOREM 2.14. (Mischaikow [M1]) Let the Morse set M(i) continue over [0, 1] as an attractor or repeller. Then T(i¹, i⁰) is an isomorphism.

3. The Critical Points. As was indicated in the introduction, the first step in applying the connection matrix techniques is to study the critical points of the system. Obviously, one obtains the critical points of P_U by solving the following system of equations:

(3.1)
0 = v
0 = −Uv − U²(w − w_−) − p(w, θ) + p_−
0 = (μU/α){−(ε(w, θ) − ε_−) − (w − w_−)p_− + (U²/2)(w − w_−)² + (A/2)v²}.

Substituting in the definitions of p(w, θ) and ε(w, θ) one gets

(3.2a) 0 = −U²(w − w_−) − Rθ/(w − b) + a/w² + p_−
(3.2b) 0 = −(F(θ) − θF′(θ)) + a/w + ε_− − (w − w_−)p_− + (U²/2)(w − w_−)².

Clearly then (3.2) is equivalent to

(3.3a) θ = f₁(w)
(3.3b) F(θ) − θF′(θ) = f₂(w).

Notice that f₂′(w) = −(R/(w − b)) f₁(w). Thus, differentiating (3.3b) with respect to w leads to

−θF″(θ) dθ/dw = −(R/(w − b)) f₁(w).

Hence, for f₁(w) > 0, one has that dθ/dw < 0, i.e. the graph of the curve defined by (3.2b) is monotone decreasing as long as f₁(w) is positive. In particular, (3.2b) defines θ as a function of w for all θ > 0. From now on this function will be denoted by g₂, i.e. θ = g₂(w). Recall, from the introduction, that θ_− was chosen in such a manner as to ensure that p(w, θ_−) = p_− has three solutions. Thus, setting U = 0 implies that the graph of f₁(w) is as in Figure 3.1. Choosing F = F* and c_v large one obtains Figure 3.1(a). In particular, there are three values of w which correspond to critical points of P_U, and the third one, w_+, is greater than w_β. If we consider arbitrary F (though insisting that F″ < 0), then it is possible to have the situation described in Figure 3.1(b). Notice, however, that w_+ is still well defined. On the other hand, if one chooses U > 0, but small, then one obtains the diagrams of Figure 3.2.
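The U = 0 picture can be made concrete numerically: the critical specific volumes are then exactly the solutions of p(w, θ_−) = p_−. The sketch below is our illustration only; the reduced van der Waals constants R = 8/3, a = 3, b = 1/3 and the sample values θ_− = 0.9, p_− = 0.6 are assumptions chosen so that three roots exist, and are not taken from the paper.

```python
# Illustrative only: three critical specific volumes on a van der Waals
# isotherm below the critical temperature.  Constants are the reduced
# van der Waals values (an assumption, not from the paper).
R, a, b = 8.0 / 3.0, 3.0, 1.0 / 3.0
theta_minus, p_minus = 0.9, 0.6          # assumed sample values

def p(w):
    """van der Waals pressure p(w, theta_-) = R*theta/(w - b) - a/w**2."""
    return R * theta_minus / (w - b) - a / w**2

def bisect(f, lo, hi, tol=1e-12):
    """Plain bisection; assumes f(lo) and f(hi) have opposite signs."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

# Brackets chosen on the decreasing / increasing / decreasing branches
# of the isotherm (local min near w = 0.75, local max near w = 1.55):
roots = [bisect(lambda w: p(w) - p_minus, lo, hi)
         for lo, hi in [(0.4, 0.75), (0.8, 1.3), (2.0, 3.0)]]
print(roots)   # three roots; the middle one lands near w = 1 here
```

The smallest root plays the role of w_− (liquid phase) and the largest the role of w_+, lying beyond the local maximum of the isotherm.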
Again, (a) corresponds to F = F* and (b) represents a possible graph for a more general F. And again, it is worth emphasizing that the critical point (w_+, 0, θ_+) is uniquely defined in both cases.

[Figures 3.1 and 3.2: graphs of f₁ and g₂ for U = 0 and for small U > 0; the intersections, labelled w_−, w₁, w_+, are the critical points.]

Before continuing we wish to introduce some notation. As was mentioned before, we are primarily interested in the critical points (w_−, 0, θ_−) and (w_+, 0, θ_+). To simplify the notation we shall denote these critical points by w_±. However, the other critical points shall also play a role in our analysis. If F = F*, then we need only concern ourselves with the points (w_i, 0, θ_i) for i = 1, 2; however, for general F there may be more critical points. We can, however, define the Morse sets W_i. As was indicated in the introduction, U is treated as a free parameter. Thus the information of Figures 3.1 and 3.2 needs to be presented in terms of U and w. We begin with the following definitions. Let

U₀ = min{U ≥ 0 | w_+ exists}  and  U₁ = max{U ≥ 0 | w_+ exists}.

As is indicated in Figure 3.1, it is possible for U₀ = 0. To understand how the curves in Figures 3.1 and 3.2 change as a function of U, consider f₁ and f₂ as functions of U and w, i.e. (3.3) becomes

θ = f₁(w, U),  F(θ) − θF′(θ) = f₂(w, U).

Now differentiating with respect to U gives

∂f₁/∂U = −(2U/R)(w − b)(w − w_−) < 0 for w > w_−,

and, along the curve defined by (3.3b),

−θF″(θ) ∂θ/∂U = ∂f₂/∂U = U(w − w_−)², so ∂θ/∂U = −U(w − w_−)²/(θF″(θ)) > 0 for w > w_−.

Thus, as U is increased the curve corresponding to f₁ moves down pointwise and g₂ moves up pointwise.

LEMMA 3.1. There exists U₁ ∈ (0, ∞).

Proof. Consider the zero set of f₁(w, U) as a function of U. Multiplying f₁ = 0 through by an appropriate power of w, this is equivalent to solving

w²(w − b)(w − w_−) − (1/U²)(w − b)(p_− w² + a) = 0.

If we let U → ∞, then we find that the zeroes are given by 0, b, and w_−. Furthermore, for w > w_−, f₁(w, ∞) < 0. Thus the right local maximum of f₁ tends to zero as U → ∞. Therefore, there exists U* such that the right local maximum of f₁ is tangent to g₂. Since f₁ is moving down pointwise and g₂ is moving up pointwise as functions of U, for all U > U* the critical point w_+ does not exist. □
If we set F = F*, then the above arguments can be summarized by the bifurcation diagrams of Figures 3.3 and 3.4. (For general F one might have more wiggles.)

[Figures 3.3 and 3.4: bifurcation diagrams of the critical points in the (U, w)-plane.]

Our next step is to determine the Conley indices of the critical points. This is most easily done by assuming that F = F*, linearizing about the critical points, and computing the characteristic equation. Doing this simple calculation and solving for the roots explicitly, one finds that if f₁′(w) − g₂′(w) > 0, then there exists a unique positive root, and if f₁′(w) − g₂′(w) < 0, then all the roots have negative real parts. Therefore, we have the following lemma.

LEMMA 3.2. CH_*(w_±) ≅ (0, Z₂, 0, 0, ...) and CH_*(W_i) ≅ (Z₂, 0, 0, ...).

Before considering this lemma in the context of more general F's, we shall make the following three assumptions.

A1. V(w_−) > V(w_+).
A2. If w* is a critical point in W_i, then V(w_±) > V(w*).
A3. For U ∈ [U*, U**], g₂ > 0.

Since we are interested in obtaining connections of the form w_− → w_+, and since V is a Lyapunov function for (P), it is obvious that A1 is a necessary condition. For a proof that this assumption can be satisfied, at least in the case of F = F*, we refer the reader to [G3]. Since it holds for F*, it must also hold for F near F*. The second assumption is considerably stronger than what is necessary, but clearly holds for F = F* and hence for F near F*. It guarantees two things: first, that the W_i are Morse sets and, second, that CH_*(W_i) remains the same as in Lemma 3.2. Therefore, Lemma 3.2 remains true for all F for which assumption A2 holds. Finally, A3 is necessary on physical grounds, since θ represents absolute temperature and hence must be positive.

4. The Set of Bounded Solutions. The second step in applying the Conley theory is to study the set of bounded solutions. In our case this means understanding the set of bounded solutions for the parameter values U* < U < U** and for which w > b and θ > 0.
Thus in this section we define an appropriate isolating neighborhood, i.e. one which contains all the orbits of interest to us, and then compute the index of the isolated invariant set. Notice that one can think of the Lyapunov function V as a function of the parameter value. To emphasize this we shall write V = V_U. Define

N = {(w, v, θ) | w ≥ A, θ > 0, and V(w, v, θ) ≤ K},

where A > b and K are constants.

PROPOSITION 4.1. For A sufficiently close to b, N is an isolating neighborhood for the set S of bounded solutions.

Proof. That N is bounded, with w bounded away from b and θ bounded away from 0 and ∞, follows from the limits

lim_{w→b} V(w, v, θ) = ∞,  lim_{|v|→∞} V(w, v, θ) = ∞,  lim_{θ→0} V(w, v, θ) = ∞,  and  lim_{θ→∞} V(w, v, θ) = ∞.

Next one needs to show that there exists an A such that S ∩ ∂N = ∅. Let (w, v, θ) ∈ ∂N with w > A. Since dV/dt < 0 at these points, (w, v, θ) ∉ S. Thus we need only consider those points (w, v, θ) ∈ ∂N for which w = A. If v > 0, then (A, v, θ)·[−t, 0] ⊄ N, and if v < 0, then (A, v, θ)·[0, t] ⊄ N. So assume (A, v, θ) ∈ ∂N and v = 0. If v′ < 0, then one has an external tangency. Let (A, 0, θ) ∈ ∂N be a point at which v′ = 0; since lim_{A→b} V(A, 0, θ) = ∞, there exists an A such that v′ ≠ 0. One easily checks that in fact v′ < 0. □

PROPOSITION 4.2. CH_*(S) ≅ (0, 0, 0, ...).

Proof. Notice that N is a contractible region. Furthermore, L = {(A, v, θ) ∈ ∂N | v ≤ 0} is contractible and (N, L) is an index pair for S. □

In the following sections S_U will be the total isolated invariant set, and its Morse decomposition will consist of the Morse sets w_± and W_i, i = 1, 2, with the ordering induced by V.

5. The Connection Matrices. We are now in a position to begin step three in the application of the Conley index theory, namely the computation of the connection matrices. Recall that we are attempting to show that 𝒰 ≠ ∅. So choose U¹, U², U³ ∉ 𝒰 such that 0 ≤ U* < U¹ < U₀ < U² < U₁ < U³ < U** (obviously, if U₀ = 0, then U¹ cannot be defined). Let Δⁱ denote the connection matrix for the isolated invariant set S_{Uⁱ}.

LEMMA 5.1. Δ¹ is zero except for the entry Δ¹(w_−, W₂) = 1; similarly, Δ³ is zero except for the entry Δ³(w_−, W₁) = 1.

Proof. Consider Δ¹. In this case the Morse decomposition consists of {W₂, w_−}. Thus Δ¹ : CH_*(W₂) ⊕ CH_*(w_−) → CH_*(W₂) ⊕ CH_*(w_−). By Lemma 3.2, Δ¹ can be considered as a map on CH₀(W₂) ⊕ CH₁(w_−). Since a connection matrix is a degree −1 map, Δ¹ must take the form

( 0 * ; 0 0 ),

where * remains to be determined. By the rank condition, * must be an isomorphism, which in Z₂ coefficients implies that * = 1. The proof for Δ³ is similar. □

The same sorts of arguments, which rely only on the definition of the connection matrix, give the following result.

LEMMA 5.2. On CH₀(W₁) ⊕ CH₀(W₂) ⊕ CH₁(w_+) ⊕ CH₁(w_−), Δ² is zero except possibly for the entries

Δ²(w_+, W₁) = a,  Δ²(w_−, W₁) = b,  Δ²(w_+, W₂) = c,  Δ²(w_−, W₂) = d,

and ad + bc = 1.

Since Uⁱ ∉ 𝒰, we shall use the transition matrices to show that for some value of U between U¹ and U³ there exists an element of 𝒰. The transition matrices we shall use are those which relate Δ¹ to Δ² and Δ² to Δ³, and we shall denote these by T¹² and T²³, respectively.

LEMMA 5.3. Among the entries of T¹², the entry β relating W₂ to W₂ is 1, the entry α relating W₂ to W₁ is 0, the entry relating w_− to w_− is 1, and, denoting by δ the entry relating w_− to w_+,

aδ + b = 0  and  cδ + d + 1 = 0.

Proof. From the discussion of §2 it is clear that only the T¹² entries need to be determined, and most of them vanish because T¹² has degree −1. To determine the values of the remaining entries requires a more careful analysis of the flow. Notice that w_− continues as a Morse set over the interval [U*, U**] and, furthermore, is a repeller for all parameter values U; thus, by the continuation result at the end of §2, the entry relating w_− to w_− is 1. Similarly, W₂ continues as an attractor over the interval [U¹, U²], and hence β = 1 and α = 0. The equalities involving the entries of Δ² follow from the condition that connection matrices square to zero. □

A similar argument results in:

LEMMA 5.4. Denoting by γ the entry of T²³ relating w_− to w_+,

aγ + b + 1 = 0  and  cγ + d = 0.

Notice that γ = 0 or 1 and δ = 0 or 1. Thus we can solve the systems of equations to obtain additional constraints on a, b, c, and d.
Doing so, one obtains the following lemma.

LEMMA 5.5. Either (a, b, c, d) = (1, 0, 1, 1) with (γ, δ) = (1, 0), or (a, b, c, d) = (1, 1, 1, 0) with (γ, δ) = (0, 1).

Remark 5.6. The result of Lemma 5.5 is independent of the fact that U₀ > 0. This follows from the fact that we can continue the system with U₀ = 0 to a system with U₀ > 0. Thus, given a connection matrix Δ² for U₀ = 0, it must be related to a Δ² for U₀ > 0 by a transition matrix, which only allows for the possibilities stated in Lemma 5.5.

6. Results. We are now ready to present the results. We remark that Theorems 6.1 and 6.4 can be found in [G3], while 6.2 is new. Hopefully the brevity and simplicity of the proofs will convince the reader that the previous sections were worth reading.

THEOREM 6.1. Assume A1 through A3, and that 0 ≤ U* < U₀ < U₁ < U**. Then 𝒰 ≠ ∅, i.e. there exists Û ∈ [U₀, U₁] for which there exists a heteroclinic orbit from (w_−, 0, θ_−) to (w_+, 0, θ_+) solving P_Û.

Proof. Recall from Lemmas 5.3 and 5.4 that aδ + b = 0 and aγ + b + 1 = 0. If 𝒰 = ∅, then δ = γ = 0, and hence b = 0 = 1, a contradiction. Therefore 𝒰 ≠ ∅. □

THEOREM 6.2. Assume A1 through A3, that 0 = U* = U₀ < U₁ < U**, and that for some U ∈ (0, U₁) there exists w_− → W₂. Then 𝒰 ≠ ∅.

Proof. Lemma 5.5 determines the set of possible connection matrices for P_U with U ∈ (0, U₁). In particular, either there exists a w_− → W₂ or there exists a w_− → W₁. Furthermore, since W₂ is an attractor, the existence of a w_− → W_i orbit for i = 1, 2 implies that the corresponding connection matrix entry is 1. (If W_i is a single attracting critical point, then this follows from Theorem 2.7. Even if W_i is more complicated, the proof of the result still applies.) Thus if we assume that w_− → W₂ exists and we let Δ² denote the corresponding connection matrix, then by Lemma 5.4, γ = 1. Therefore, there exists Û ∈ (U, U₁) for which w_− → w_+ exists. □

Remark 6.3. Since w_− has a one-dimensional unstable manifold and since W₂ is an attractor, checking the existence of a w_− → W₂ orbit is numerically tractable.
We finish with a theorem which cannot be proved directly with our techniques. However, we feel it is important to generalize or extend the Conley index theory to the point that it can be applied to problems of this nature.

THEOREM 6.4. Assume A1 through A3, that 0 = U* = U₀ < U₁ < U**, that c_v is sufficiently large, and that μ is sufficiently small. Then 𝒰 ≠ ∅.

REFERENCES

[C] C. CONLEY, Isolated Invariant Sets and the Morse Index, Conf. Board Math. Sci. 38, AMS, Providence (1978).
[F] R. FRANZOSA, The connection matrix theory for Morse decompositions, Trans. AMS, Vol. 311, No. 2 (Feb. 1989), pp. 781-803.
[HS] R. HAGAN AND M. SLEMROD, The viscosity-capillarity admissibility criterion for shocks and phase transitions, Arch. for Rat. Mech. and Anal., 83, pp. 333-361.
[G1] M. GRINFELD, Topological Techniques in Dynamic Phase Transitions, Ph.D. Thesis, Rensselaer Polytechnic Institute (1986).
[G2] M. GRINFELD, Dynamic phase transitions: existence of "cavitation" waves, Proc. of the Royal Soc. of Edinburgh, 107A (1987), pp. 153-163.
[G3] M. GRINFELD, Nonisothermal dynamic phase transitions, Quart. of Applied Math., Vol. 47, No. 1 (March 1989), pp. 71-84.
[Mc] C. MCCORD, The connection map for attractor-repeller pairs, to appear in Trans. A.M.S.
[M1] K. MISCHAIKOW, Existence of generalized homoclinic orbits for one-parameter families of flows, Proc. A.M.S., May 1988.
[M2] K. MISCHAIKOW, Transition systems, Proceedings of the Royal Society of Edinburgh, 112A (1989), pp. 155-175.
[R] J. REINECK, Connecting orbits in one-parameter families of flows, Ergodic Theory and Dynamical Systems, Vol. 8* (1988), pp. 359-374.
[S] D. SALAMON, Connected simple systems and the Conley index of isolated invariant sets, Trans. AMS, 291(1) (1985).
[Sl1] M. SLEMROD, Admissibility criteria for propagating phase boundaries in a van der Waals fluid, Arch. Rat. Mech. Anal., Vol. 81, No. 4, pp. 301-315.
[Sl2] M. SLEMROD, Dynamic phase transitions in a van der Waals fluid, Journal of Differential Equations, Vol.
52, No. 1, pp. 1-23.
[Sm] J. SMOLLER, Shock Waves and Reaction Diffusion Equations, Springer-Verlag, New York, 1983.

This paper is dedicated to Daniel D. Joseph on the occasion of his 60th birthday

A WELL-POSED BOUNDARY VALUE PROBLEM FOR SUPERCRITICAL FLOW OF VISCOELASTIC FLUIDS OF MAXWELL TYPE

MICHAEL RENARDY*

Abstract. For a class of viscoelastic fluids with differential constitutive laws of Maxwell type, we investigate the existence and uniqueness of steady flows. We consider small perturbations of uniform flow transverse to a strip. A well-posed boundary value problem is formulated for the case when the velocity of the fluid exceeds the speed of propagation of shear waves.

Key words. viscoelastic fluids, boundary conditions, change of type

AMS(MOS) subject classifications. 35M05, 76A10

1. Introduction. While the study of existence and uniqueness results for steady flows of Newtonian fluids is well advanced, relatively little is known about viscoelastic fluids with memory. For such fluids, the nature of boundary conditions leading to well-posed problems is in general different from the Newtonian case. There are two reasons for this:

1. The memory of the fluid implies that what happens in the domain under consideration is dependent on the deformation history of the fluid before it entered the domain. Information about this deformation history must therefore be given in the form of boundary conditions at inflow boundaries. The precise nature of such inflow conditions is dependent on the constitutive relation; for example, fluids of Maxwell type [4] are different from fluids of Jeffreys type [5].

2. For fluids of Maxwell type, there is a change of type in the governing equations when the velocity of the fluid exceeds the propagation speed of shear waves (cf. [1], [2], [7], [8]). This necessitates a change in the nature of boundary conditions.
If boundary conditions which would be correct in the subcritical case are imposed in a supercritical situation, the problem becomes ill-posed in a similar fashion as the Dirichlet problem for the wave equation (see [6]).

In the following, v denotes the velocity, p the pressure, T the extra stress tensor, ρ the density and f a given body force. The equation of motion is

(1) ρ(v·∇)v = div T − ∇p + f,

and the incompressibility condition is

(2) div v = 0.

*Department of Mathematics and ICAM, Virginia Tech, Blacksburg, VA 24061-0123. This research was completed while I was visiting the Institute for Mathematics and its Applications at the University of Minnesota. Financial support from the IMA and from the National Science Foundation under Grant No. DMS-8796241 is gratefully acknowledged.

We assume a Maxwell-type constitutive relation of the following form:

(3) (v·∇)T_ij − (∂v_i/∂x_k)T_kj − (∂v_j/∂x_k)T_ik + λT_ij + P_ij(T) + g_ijkl(T)(∂v_k/∂x_l) = μ(∂v_i/∂x_j + ∂v_j/∂x_i).

Here λ and μ are positive constants, and the matrix-valued functions P and g are assumed to be smooth; moreover, P, g and the first derivatives of g vanish at T = 0. Equation (3) includes a number of popular rheological models (cf. e.g. [3]).

The domain on which we want to solve (1)-(3) is the strip bounded by the planes x₁ = 0 and x₁ = 1. In the x₂- and x₃-directions, we assume periodicity with periods L and M. The solutions we seek are small perturbations of the uniform flow v = (V, 0, 0), p = 0, T = 0. The given body force and the imposed boundary conditions are assumed to satisfy smallness conditions consistent with this.

In [4], we considered this problem under the assumption that ρV² < μ. A well-posed boundary value problem was obtained by prescribing the velocities at both boundaries plus additional stress conditions at the inflow boundary x₁ = 0. In two dimensions it is possible to prescribe the diagonal components T₁₁ and T₂₂. In three dimensions a correct choice of inflow stress conditions was obtained as follows. We expand each stress component in a Fourier series, e.g.
(4) T₁₁(0, x₂, x₃) = Σ_{k,l} t₁₁^{kl} exp(2πi(kx₂/L + lx₃/M)).

Then one can, for example, prescribe the following inflow conditions:

(5) for each mode (k, l), three of the stress coefficients t_ij^{kl} are prescribed; which three depends on whether |l| ≫ |k|, |k| ≫ |l|, or |k| and |l| are comparable (see [4] for the precise choice).

If, on the other hand, ρV² > μ, then this choice of boundary conditions does not lead to a well-posed problem [6]. We shall show that, in this case, one can prescribe the following conditions: the inflow stresses as above, the normal velocity at both boundaries, plus the vorticity and its normal derivative (in two dimensions), or, respectively, the second and third components of the vorticity and their normal derivatives (in three dimensions) at the inflow boundary. The analysis for two space dimensions will be carried out in Section 2; the modifications needed for three dimensions will be discussed in Section 3.

2. The two-dimensional case. We apply the operation (v·∇) + λ + (∇v)ᵀ to the equation of motion (1), and we use equation (3) to reexpress ((v·∇) + λ)T. After some algebra, this yields an equation of the following form (written in components):

(6) ρ((v·∇) + λ)(v·∇)v_i = μΔv_i + (T_kj − P_kj(T)) ∂²v_i/(∂x_j∂x_k) − ∂q/∂x_i + ((v·∇) + λ)f_i + h_i(v, ∇v, T, ∇T).

Here we have set q = ((v·∇) + λ)p. The term h is a complicated expression which we do not write out explicitly; it contains only quadratic and higher order terms. Next we introduce a streamfunction-vorticity formulation. We set

(7) v₁ = −∂ψ/∂x₂,  v₂ = ∂ψ/∂x₁,  ζ = ∂v₂/∂x₁ − ∂v₁/∂x₂,

so that the incompressibility condition (2) is automatically satisfied and

(8) Δψ = ζ.

We take the curl of equation (6), which results in

(9) ρ((v·∇) + λ)(v·∇)ζ = μΔζ + (T_kj − P_kj(T)) ∂²ζ/(∂x_j∂x_k) + r(v, ∇v, T, ∇T).

Here r is again a complicated expression which we do not write out explicitly. In the following, we shall solve (1)-(3) subject to the following boundary conditions:

(10) ψ = −Vx₂ + ψ₀ on x₁ = 0;  ψ = −Vx₂ + ψ₁ on x₁ = 1;  ζ = ζ₀ on x₁ = 0;  ∂ζ/∂x₁ = η₀ on x₁ = 0;  T₁₁ = t₁ on x₁ = 0;  T₂₂ = t₂ on x₁ = 0.
We note that prescribing ψ on both boundaries is equivalent to prescribing the normal velocity on these boundaries as well as the total flow rate in the x₂-direction.

We denote by H^s the space of all functions on the strip 0 ≤ x₁ ≤ 1 which are periodic with period L in the x₂-direction and have s derivatives which are square integrable over one period. Sobolev spaces of periodic functions depending only on x₂ are denoted by H_(s). The corresponding norms are denoted by ‖·‖_s and ‖·‖_(s). Moreover, ‖·‖_{k,l} denotes the norm in W^{k,∞}([0,1]; H_(l)).

The goal of this section is the following existence and uniqueness result:

THEOREM. Assume that ‖f‖₄, ‖ψ₀‖_(9/2), ‖ψ₁‖_(9/2), ‖ζ₀‖_(3), ‖η₀‖_(2), ‖t₁‖_(3) and ‖t₂‖_(3) are sufficiently small. Then there is a solution of (1)-(3) which satisfies the boundary conditions (10) and the regularity ψ + Vx₂ ∈ H⁵, ζ ∈ H³, T ∈ H³. Moreover, this solution is the only one for which ‖ψ + Vx₂‖₅, ‖ζ‖₃ and ‖T‖₃ are small.

The construction of the solution is based on an iterative scheme. As a starting value for the iteration we use the uniform flow

(11) ψ⁰ = −Vx₂,  ζ⁰ = 0,  T⁰ = 0.

Given ψⁿ, ζⁿ and Tⁿ, we define vⁿ by

(12) v₁ⁿ = −∂ψⁿ/∂x₂,  v₂ⁿ = ∂ψⁿ/∂x₁.

Next, we determine T^{n+1} from equation (13), subject to the initial conditions (14), (15) at x₁ = 0. Then we determine ζ^{n+1} from the initial-value problem (16), (17). Finally, we obtain ψ^{n+1} from the Dirichlet problem (18).

We define

(19) X(M) = {(ψ, ζ, T) | ‖ψ + Vx₂‖₅ + ‖ζ‖₃ + ‖T‖₃ ≤ M}.

The space X(M) is complete under the metric (20). We choose M small relative to 1, but sufficiently large relative to the norms of the prescribed data. In order to prove the theorem, it is sufficient to show that the mapping defined by the iteration (12)-(18) is a contraction in X(M). We begin by showing that the iteration maps X(M) into itself. Let us assume that (ψⁿ, ζⁿ, Tⁿ) lies in X(M). We first discuss the solution of (13)-(15).
A rearrangement of (15) yields

(21) ∂²/(∂x₁∂x₂) (T₂₂^{n+1} − T₁₁^{n+1}) + (∂²/∂x₁² − ∂²/∂x₂²) T₁₂^{n+1} = ρ curl((vⁿ·∇)vⁿ) − curl f.

We can use (13) to express x₁-derivatives of the stresses, i.e.

(22) ∂T_ij^{n+1}/∂x₁ = (1/v₁ⁿ)[−(v₂ⁿ ∂/∂x₂ + λ)T_ij^{n+1} + (remaining terms from (13))].

After substituting (22) in (21), we obtain an ODE from which we can determine T^{n+1} at the inflow boundary x₁ = 0. We denote this boundary value by t^{n+1}. The estimate (23) of t^{n+1} is immediate; here we have set wⁿ = vⁿ − (V, 0). After determining t^{n+1}, we have a full set of initial conditions to solve (13). Using standard energy estimates for hyperbolic equations (see [4] for some more details) we obtain a unique solution which satisfies an estimate of the form (24). From equation (13), we can see that an expression like the one on the right-hand side of (24) also provides an upper bound for ‖(vⁿ·∇)T^{n+1}‖₃.

For the solution of the initial-value problem (16), (17), one readily obtains the estimate (25). This is insufficient because we need to estimate third order derivatives of ζ^{n+1}. Results available in the literature would require the existence of a higher order derivative of rⁿ either with respect to x₁ or with respect to x₂. We cannot use such an assumption because of the dependence of r on second derivatives of T^{n+1}. However, because of the bound for ‖(vⁿ·∇)T^{n+1}‖₃, we can get bounds on ‖(vⁿ·∇)rⁿ‖₂. Instead of differentiating (16) with respect to either x₁ or x₂, which is what is usually done, we can apply the operation (vⁿ·∇) to it. By doing this and deriving energy estimates in the usual fashion, we obtain an estimate of the form (26). By taking into account the form of r, we can estimate the last three terms in (26) by a constant times

(27) ‖f‖₄ + (‖wⁿ‖₄ + ‖T^{n+1}‖_{3,0} + ‖T^{n+1}‖_{2,1} + ‖T^{n+1}‖_{1,2} + ‖T^{n+1}‖_{0,3} + ‖(vⁿ·∇)T^{n+1}‖₃)².

Finally, from (18) we immediately obtain (28). The claim that the iteration maps X(M) into itself now follows easily by combining the estimates (23)-(28).
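The self-map estimate above, combined with the contraction estimate derived next, is an instance of Banach's fixed point theorem: iterating the solution map from the uniform flow produces geometrically convergent iterates. The following toy numeric illustration of the same mechanism is ours, using an arbitrary scalar contraction; nothing in it comes from the paper.

```python
# Toy illustration of the contraction-mapping principle underlying the
# iteration (12)-(18): a map with Lipschitz constant < 1 on a complete
# metric space has a unique fixed point, reached by iteration.
import math

def solution_map(x):
    # Stand-in for one sweep of the iteration; cos is a contraction
    # on [0, 1] (|cos'| = |sin| <= sin(1) < 1 there).
    return math.cos(x)

x = 1.0                      # analogue of the uniform-flow starting value
for _ in range(100):
    x = solution_map(x)

residual = abs(x - solution_map(x))
print(x, residual)           # x approximates the unique fixed point
```

The geometric convergence rate here is governed by the Lipschitz constant of the map, just as the smallness of M governs the contraction factor for the iteration in the text.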
The derivation of estimates to show that the mapping defined by the iteration is a contraction is fairly straightforward, and we shall only demonstrate one step. From (16), (17) we obtain an equation (29), (30) for ζ^{n+1} − ζⁿ whose right-hand side is

−ρ[((vⁿ·∇) + λ)(vⁿ·∇) − ((v^{n−1}·∇) + λ)(v^{n−1}·∇)]ζⁿ + [(T_kj^{n+1} − P_kj(T^{n+1})) − (T_kj^n − P_kj(Tⁿ))] ∂²ζⁿ/(∂x_j∂x_k) + rⁿ − r^{n−1}.

Energy estimates now yield the bound

(31) ≤ C(‖ζⁿ‖₃(‖vⁿ − v^{n−1}‖₃ + ‖T^{n+1} − Tⁿ‖₂) + ‖rⁿ − r^{n−1}‖_{0,0} + ‖(vⁿ·∇)(rⁿ − r^{n−1})‖₀).

We note that a bound on ‖ζⁿ‖₃ has already been obtained. In order to deal with the last term in (31), we note (32). By taking into account the form of r and equation (13), it is easy to estimate the terms on the right-hand side of (32).

3. Modifications in three dimensions. The basic iteration scheme used to construct solutions and the function spaces chosen for the analysis will be as in the two-dimensional case, and we shall therefore confine the following discussion to those points where modifications are needed. One of these changes is that ζ = curl v is now a vector, and the equation analogous to (9), written in components, is

(33) ρ((v·∇) + λ)(v·∇)ζ_i = μΔζ_i + (T_kj − P_kj(T)) ∂²ζ_i/(∂x_j∂x_k) + r_i(v, ∇v, T, ∇T).

This is a system of PDEs for the components of ζ, and in order to make it symmetric hyperbolic, we add the assumption that the matrix P is symmetric. In the iteration, we use the following equation, which is analogous to (16):

(34) ρ((vⁿ·∇) + λ)(vⁿ·∇)ζ_i^{n+1} = μΔζ_i^{n+1} + (T_kj^{n+1} − P_kj(T^{n+1})) ∂²ζ_i^{n+1}/(∂x_j∂x_k) + r_iⁿ.

The velocity can be determined in terms of the vorticity if we prescribe the normal velocity on both boundaries and the mean flux in the y- and z-directions. Unfortunately, we shall not be able to guarantee that all the iterates satisfy div ζ^{n+1} = 0, and hence we cannot simply use the equation curl v^{n+1} = ζ^{n+1}. Let Π denote the orthogonal projection (in L²) onto the subspace of divergence-free vector fields. The set of equations determining v^{n+1} is

(36) curl v^{n+1} = Πζ^{n+1},  div v^{n+1} = 0,

together with the boundary and flux conditions (35). Here the numbers α and β as well as the functions a and b are prescribed, and |α| + |β| + ‖a‖_(7/2) + ‖b‖_(7/2) is assumed to be small.
Moreover, we have to assume the compatibility condition (37). It can easily be shown, along the same lines as in Section 2, that by combining (34), (36) with (13) and (small) inflow data for T, ζ and ∂ζ/∂x₁, we obtain a convergent iteration. However, there are two problems:

1. It is not guaranteed that the limit of the iteration satisfies ζ = curl v, or, equivalently, ζ = Πζ.
2. It is not guaranteed that the original equation of motion (1) holds. This is because in proceeding from (1) to (6) we have applied the operation (v·∇) + λ + (∇v)ᵀ. In order to reverse this step and go from (6) to (1), we have to assume that (1) holds on the inflow boundary x₁ = 0 (cf. [4]). In two dimensions we imposed this condition as equation (15).

In order to remove these two difficulties, we must restrict the inflow data; i.e., only part of these data can be prescribed, and the rest have to be determined at each step of the iteration.

We take the divergence of equation (33). If ζ were equal to curl v, we would get 0 (to see this, recall that (33) was derived by taking the curl of (6)). Hence we find (we set curl v = ω)

(38) (∂/∂x_i)[ρ((v·∇) + λ)(v·∇)(ζ_i − ω_i) − μΔ(ζ_i − ω_i) − (T_kj − P_kj(T)) ∂²(ζ_i − ω_i)/(∂x_j∂x_k)] = 0.

After some algebra, this yields

(39) [ρ((v·∇) + λ)(v·∇) − μΔ − (T_kj − P_kj(T)) ∂²/(∂x_j∂x_k)] div ζ = D₂(v, ∇v, T, ∇T)(ζ − ω).

Here D₂ is a second order differential operator with coefficients depending on the arguments indicated. Let d₁ denote the value of div ζ at x₁ = 0, and let d₂ denote the value of ∂(div ζ)/∂x₁ at x₁ = 0. From (39), we obtain the estimate (40). As before, w denotes v − (V, 0, 0). We note that ‖w‖₄ + ‖T‖₃ is small. Moreover, (36) yields the estimate (41).

Inflow conditions are now handled as follows. We prescribe arbitrary data for ζ₂, ζ₃ and their normal derivatives. The initial datum for ∂ζ₁/∂x₁ is then determined by requiring that div ζ = 0 at x₁ = 0. Finally, the initial datum for ζ₁ cannot be determined a priori, but must be computed at each step of the iteration.
We require that, at x₁ = 0,

(42)  ∂(div ζ^{n+1})/∂x₁ = s_{n+1},

where s_{n+1} is a constant to be determined. We then solve (42) for ∂²ζ₁^{n+1}/∂x₁² and substitute into the first equation of (34). This yields an elliptic problem from which we can uniquely determine the inflow datum for ζ₁^{n+1} up to an arbitrary constant, as well as the constant s_{n+1}. Finally, the arbitrary constant in ζ₁^{n+1} is fixed by the requirement (43) that the integral of ζ₁^{n+1}(0, x₂, x₃) over the inflow boundary vanish. For the limit of the iteration, this obviously ensures that d₁ = 0, d₂ = s, and ∫∫ ζ₁(0, x₂, x₃) dx₃ dx₂ = 0.

To determine the constant s, we take the first equation of (33), set x₁ = 0, and integrate over x₂ and x₃. If ζ is replaced by ω = curl v, we obtain an expression which vanishes identically (recall that (33) was derived by taking the curl of (6)). Hence we find the identity

(44)  ∫₀^L ∫₀^M [ρ((v·∇) + λ)(v·∇)(ζ₁ − ω₁) − μΔ(ζ₁ − ω₁) − (T_{kj} − p_{kj}(T)) ∂²(ζ₁ − ω₁)/∂x_j∂x_k] dx₂ dx₃ = 0.

Next we integrate by parts in all terms which involve second-order derivatives of ζ − ω such that one of the differentiations is with respect to x₂ or x₃. This leads to terms which can be estimated by a constant times (‖T‖₃ + ‖w‖₄)‖ζ − ω‖₂. The only term which remains is the integral (45). We now note the identity (46). By using this, we obtain again terms which can, after an integration by parts, be estimated by a constant times (‖T‖₃ + ‖w‖₄)‖ζ − ω‖₂, plus s times the integral of ρV₁ − μ − T₁₁ + p₁₁(T). As a result, s can be estimated by a constant times (‖T‖₃ + ‖w‖₄)‖ζ − ω‖₂. In conjunction with (40) and (41), this yields that div ζ is indeed zero.

To make sure that (1) is satisfied, we proceed as in [4]. At each step of the iteration, q^{n+1} is determined by the relation (47); the operator appearing there is the orthogonal projection of L² onto the subspace of vector fields with vanishing curl. From (47), q^{n+1} is determined up to an arbitrary constant; we may fix this constant by requiring (48). We note that q^{n+1} is not necessarily periodic in the x₂- and x₃-directions, but may contain a part which is linear in x₂ and x₃.
At x₁ = 0, we impose the condition (49). This condition, in conjunction with (13) and the equation (50), can be used to express some of the inflow data for T in terms of others. For details we refer to [4]. Specifically, we can prescribe the stress components specified in (5) and solve for the rest.

Let us summarize the iteration scheme. We prescribe the following boundary data a priori: the normal velocities on both boundaries and the total flux in the x₂- and x₃-directions according to (35); the second and third components of the vorticity and their normal derivatives at the inflow boundary, (51); and the inflow stresses according to (5). We denote this prescribed part of the stress by T_p. We start the iteration by setting T = 0, ζ = 0, v = (V, 0, 0). At each step of the iteration, we first determine q^{n+1} from (47), (48). Then we calculate the inflow boundary value of T^{n+1} from (5), (49), (50) and (13). We can now determine T^{n+1} from (13). Next we use (42), (43) and (34) to determine the inflow value of ζ₁^{n+1}. Then we determine ζ^{n+1} from (34) and v^{n+1} from (36). The existence theorem thus obtained is the following:

THEOREM. Assume that ‖f‖₄, ‖a‖_{(7/2)}, ‖b‖_{(7/2)}, the H_{(3)}-norms of the inflow data for ζ₂ and ζ₃, the H_{(2)}-norms of the inflow data for their normal derivatives, ‖T_p‖_{(3)}, |α| and |β| are sufficiently small. Assume, moreover, that the matrix function p has symmetric values. Then there is a solution of (1)-(3) which satisfies the boundary conditions given by (5), (35) and (51) and the regularity v ∈ H⁴, T ∈ H³. Moreover, this solution is the only one for which ‖v − (V, 0, 0)‖₄ and ‖T‖₃ are small.

REFERENCES

[1] D.D. Joseph, M. Renardy and J.C. Saut, Hyperbolicity and change of type in the flow of viscoelastic fluids, Arch. Rat. Mech. Anal., 87 (1985), pp. 213-251.
[2] M. Luskin, On the classification of some model equations for viscoelasticity, J. Non-Newt. Fluid Mech., 16 (1984), pp. 3-11.
[3] J.G. Oldroyd, Non-Newtonian effects in steady motion of some idealized elastico-viscous liquids, Proc. Roy. Soc. London, A 245 (1958), pp. 278-297.
[4] M. Renardy, Inflow boundary conditions for steady flow of viscoelastic fluids with differential constitutive laws, Rocky Mt. J. Math., 18 (1988), pp. 445-453.
[5] M. Renardy, Recent advances in the mathematical theory of steady flows of viscoelastic fluids, J. Non-Newt. Fluid Mech., 29 (1988), pp. 11-24.
[6] M. Renardy, Boundary conditions for steady flows of non-Newtonian fluids, Proc. Xth Int. Congr. Rheology (ed. P.H.T. Uhlherr), Vol. 2, Sydney, 1988, pp. 202-204.
[7] I.M. Rutkevich, The propagation of small perturbations in a viscoelastic fluid, J. Appl. Math. Mech., 34 (1970), pp. 35-50.
[8] J.S. Ultman and M.M. Denn, Anomalous heat transfer and a wave phenomenon in dilute polymer solutions, Trans. Soc. Rheol., 14 (1970), pp. 307-317.

LOSS OF HYPERBOLICITY IN YIELD VERTEX PLASTICITY MODELS UNDER NONPROPORTIONAL LOADING

DAVID G. SCHAEFFER* AND MICHAEL SHEARER†

§0. INTRODUCTION

Several authors [5,8,10,12] have shown that the dynamic partial differential equations arising from continuum models for granular flow may be linearly ill-posed. In a typical theory with shear-strain hardening, the equations are linearly well-posed for small deformations but become linearly ill-posed at some critical deformation which occurs before the maximum shear stress is achieved. Before this critical deformation the dynamic equations are hyperbolic, but after this point the equations are of no definite type; rather they resemble u_tt = u_xx − u_yy, the wave equation with a rotated time axis, in that the possible uncontrolled growth of a plane wave depends on its direction of propagation.

The analyses to date have been restricted to the case of proportional loading. This term means that at every step the principal axes of the applied stress increment are parallel to those of the current stress; in other words, the stress axes are fixed in the material and never rotate.
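The directional character of the ill-posedness can be read off from the model equation u_tt = u_xx − u_yy quoted above. Substituting a plane wave u = exp(i(ξx + ηy − ωt)) gives the dispersion relation ω² = ξ² − η²: frequencies are real for wave vectors with |ξ| ≥ |η| but imaginary, hence exponentially growing, when |η| > |ξ|. A short numerical check of this sign pattern:

```python
import numpy as np

# Dispersion relation of u_tt = u_xx - u_yy for u = exp(i(xi*x + eta*y - omega*t)):
# -omega^2 = -xi^2 + eta^2, i.e. omega^2 = xi^2 - eta^2.
angles = np.linspace(0.0, np.pi, 181)       # direction of the wave vector, 1 degree steps
xi, eta = np.cos(angles), np.sin(angles)
omega_sq = xi**2 - eta**2

# Growth rate of the plane wave: positive exactly when omega is imaginary,
# i.e. when the wave vector lies closer to the y-axis than to the x-axis.
growth = np.sqrt(np.maximum(-omega_sq, 0.0))
well_posed_dirs = omega_sq >= 0             # directions with real frequencies
```

The boundary between the two regimes sits at the 45° directions, mirroring the directionally selective loss of hyperbolicity studied in this paper.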
However, for yield vertex models such as in [3], the governing equations are fully nonlinear, so that their type depends on the loading path. In this paper we determine, to lowest order in the rotation rate, how the type of the equations is changed by nonproportional loading. Even for proportional loading, our analysis extends previous work by including the full Jaumann derivative. We also apply this analysis to explain the preferred orientation of fracture in shearing between parallel plates [1].

In §1 we present the equations to be analyzed, including a discussion of constitutive assumptions. Our results are formulated in §2 and proved in §3. (The application to shearing between parallel plates appears in §2.2.)

§1. CONSTITUTIVE ASSUMPTIONS

1.1. Formulation of the equations. We begin with several caveats: (i) we restrict our attention to two space dimensions; (ii) we consider the equations without reference to boundary conditions that might be imposed; and (iii) we neglect elastic deformations. (It would be only a technical complication to include elastic effects.)

The unknowns describing granular flow consist of the density ρ, the velocity v_i, and the (Cauchy) stress tensor T_ij (with compressive stresses regarded as positive). These six unknowns are subject to conservation of mass and momentum,

(1.1)  d_t ρ + ρ ∂_j v_j = 0,  ρ d_t v_i + ∂_j T_ij = 0,

where d_t = ∂_t + v_j ∂_j is the material derivative and we employ the summation convention. The unknowns are also subject to appropriate constitutive laws that we will formulate after the introduction of some notation.

*Research supported under NSF Grants DMS 86-04141 and 88-04592. The latter includes funds from AFOSR. Department of Mathematics, Duke University, Durham, NC 27706.
†Research supported under NSF Grant DMS 87-01348 and ARO Grant DAAL 03-88-K-0080. Department of Mathematics, North Carolina State University, Raleigh, NC 27695.
The deformation rate tensor is defined by

(1.2)  V_ij = −(1/2)(∂_i v_j + ∂_j v_i)

(note the minus sign), and the Jaumann stress rate is defined by

(1.3)  𝒯_ij = d_t T_ij + W_ik T_kj − T_ik W_kj,

where

(1.4)  W_ij = (1/2)(∂_i v_j − ∂_j v_i)

is the anti-symmetric part of the velocity gradient. As explained in Chapter VIII, §1 of [9], the final two terms on the right in (1.3) allow for changes in stress due to rotation of the material, yielding an objective measure of the rate of change of the stress.

Given any 2 × 2 matrix A, we can split it into its deviatoric and spherical parts,

(1.5)  A = dev A + (1/2)(tr A) I.

We identify the space of symmetric 2 × 2 matrices with R³ using the coordinates

(1.6)  ((A₁₁ − A₂₂)/2, A₁₂, (A₁₁ + A₂₂)/2),

so that (1.5) results from the decomposition

(1.7)  R³ = R² ⊕ R.

Let π : O(2) → L(R³) denote the natural action of the orthogonal group on symmetric 2 × 2 matrices, π(R)·A = R A R^T. Note that the decomposition (1.7) is invariant under this action.

Applying (1.5) to the stress tensor, we obtain the mean stress and (scalar) shear stress

(1.9)  (a) σ = (σ₁ + σ₂)/2,  (b) τ = (σ₁ − σ₂)/2,

where σ₁ ≥ σ₂ are the eigenvalues of T_ij, with σ₂ > 0, and |·| is the Euclidean norm, so that

(1.10)  τ = |dev T|.

Under conditions of continued loading, the ratio

(1.11)  μ = τ/σ

is called the coefficient of mobilized friction. In a typical constitutive test, this quantity varies with the total shear strain as sketched in Figure 1.1, but in the present context μ cannot be expressed as a function of the shear strain, since the constitutive laws that we consider are path dependent.

Figure 1.1: An illustration of shear strain hardening (μ = τ/σ plotted against total shear strain).

It follows from (1.9) that μ < 1 if, as we always assume, both principal stresses are positive.

The constitutive laws relate the stress and deformation rates. Assuming continued loading, we may write these laws as

(1.12)  (a) dev V = Ψ(T, ∇_t(σ⁻¹ dev T)),  (b) (1/2) tr V = −β |dev V|,

where β is a real parameter (|β| < 1) and Ψ : R³ × R² → R² is a smooth function with the properties stated below.
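The invariance of the decomposition (1.7) under the action π(R)·A = R A R^T can be verified directly: conjugation by a rotation preserves the trace, so it fixes the spherical part and maps deviators to deviators. A numerical sketch in the coordinates (1.6) (the particular matrix and angle are arbitrary):

```python
import numpy as np

def dev_sph(A):
    """Split a symmetric 2x2 matrix into deviatoric and spherical parts, eq. (1.5)."""
    sph = 0.5 * np.trace(A) * np.eye(2)
    return A - sph, sph

def coords(A):
    """Coordinates ((A11-A22)/2, A12, (A11+A22)/2) of eq. (1.6)."""
    return np.array([(A[0, 0] - A[1, 1]) / 2, A[0, 1], (A[0, 0] + A[1, 1]) / 2])

phi = 0.7
R = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
A = np.array([[2.0, 0.5], [0.5, -1.0]])

dev_A, sph_A = dev_sph(A)
B = R @ A @ R.T                      # pi(R) . A
dev_B, sph_B = dev_sph(B)

# The spherical part (third coordinate of (1.6)) is fixed by the action; the
# deviatoric part transforms among deviators: dev(R A R^T) = R (dev A) R^T.
err_dev = np.abs(dev_B - R @ dev_A @ R.T).max()
err_sph = np.abs(sph_B - sph_A).max()
```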
(Note that ∇_t(σ⁻¹ dev T) is trace free, so the second argument of Ψ belongs to R².) Ψ has the following two properties:

(1) Rate independence: Ψ is homogeneous of degree 1 in its second argument; and
(2) Isotropy: for any rotation matrix R ∈ O(2),

(1.13)  Ψ(π(R)·T, π(R)·A) = π(R)·Ψ(T, A).

(In our notation we suppress the dependence of β and Ψ on history parameters such as the total shear strain. Inclusion of such parameters would not change the principal part, and hence the hyperbolicity or nonhyperbolicity, of the equations under study.)

A simple example of a nonlinear function satisfying these hypotheses can be obtained by a slight modification, to make Ψ smooth, of a constitutive relation proposed in [16]; viz.,

(1.14)  Ψ(T, A) = G_p⁻¹ ( PA + a |(I − P)A|² dev T / (|dev T| |A|) ) + G_r⁻¹ (I − P)A,

where P : R² → R² is the projection operator along the direction dev T,

(1.15)  PA = |dev T|⁻² (dev T, A) dev T

(cf. Figure 1.2), and G_p, G_r, and a are constants. The subscripts p and r are mnemonic for "proportional" and "rotating", respectively, and the constants G_p and G_r are the strength moduli of the material with respect to such loading. This terminology should be clarified by the discussion in §1.2 below. The appearance of ∇_t(σ⁻¹ dev T) in the second argument of Ψ will also be addressed there. Here we focus on working out the consequences of the above two assumptions.

Figure 1.2: Projection onto the stress deviator.

Because of the isotropy condition (1.13), we may rotate coordinates so that, at one specific point in space and time, the stress tensor T is diagonal and T₁₁ > T₂₂ > 0. In such a coordinate system we compute from (1.3) that

(1.16)  σ ∇_t(σ⁻¹ dev T) = (X₁ − μX₃, X₂),

where X₁, X₂, X₃ are the three components of the Jaumann rate 𝒯 in the coordinates (1.6) and μ is given by (1.11). We conclude from homogeneity that

(1.17)  Ψ(T, ∇_t(σ⁻¹ dev T)) = σ⁻¹ Ψ(T⁽ᵈ⁾, (X₁ − μX₃, X₂));

the superscript (d) on the first argument of Ψ is a reminder that (1.17) holds at a point where the stress tensor is diagonal.
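The operator P of (1.15) is the orthogonal projection onto the direction of dev T (viewed as a vector in R² via the first two coordinates of (1.6)): P² = P, P(dev T) = dev T, and (I − P)A is orthogonal to dev T. A quick numerical check of these identities (the particular vectors are arbitrary):

```python
import numpy as np

def P(devT, A):
    """Projection along the direction devT, eq. (1.15): PA = (devT,A)/|devT|^2 devT."""
    return (devT @ A) / (devT @ devT) * devT

devT = np.array([3.0, 1.0])     # ((T11-T22)/2, T12) components of the deviator
A = np.array([0.4, -1.3])       # an arbitrary increment

PA = P(devT, A)
residual_idempotent = np.abs(P(devT, PA) - PA).max()   # P^2 = P
residual_fix = np.abs(P(devT, devT) - devT).max()      # P devT = devT
orthogonality = (A - PA) @ devT                        # (I-P)A is orthogonal to devT
```

In the example (1.14), PA is the "proportional" part of the increment (with modulus G_p) and (I − P)A the transverse, "rotating" part (with modulus G_r).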
Incidentally, in proportional loading we have X₂ = 0, expressing the condition of no rotation of the stress axes; and, at least before the maximum shearing stress in Figure 1.1, we have

(1.18)  X₁ − μX₃ > 0.

We now apply the isotropy condition (1.13) using the reflection matrix

R = ( 1  0 ; 0  −1 ).

Since T is diagonal in the chosen coordinate system, π(R)·T = T. On the other hand, π(R) reverses the sign of the second component of the second argument: (X₁ − μX₃, X₂) is mapped to (X₁ − μX₃, −X₂). Expressing (1.12a) in components,

(V₁₁ − V₂₂)/2 = σ⁻¹ Ψ₁(T, (X₁ − μX₃, X₂)),
V₁₂ = σ⁻¹ Ψ₂(T, (X₁ − μX₃, X₂)),

and applying (1.13), we conclude that Ψ₁ is even in X₂ and Ψ₂ is odd. Note that functions of the form

(1.19)  (a) Ψ₁(T, (X₁ − μX₃, X₂)) = (X₁ − μX₃) ψ₁(T, s²),  (b) Ψ₂(T, (X₁ − μX₃, X₂)) = X₂ ψ₂(T, s²),

where

(1.20)  s = X₂/(X₁ − μX₃)

is the slope of the stress path, satisfy the requirements of homogeneity and isotropy; since s is odd under the reflection R, only the square of this quantity appears in (1.19). Indeed, provided (1.18) holds, any function Ψ satisfying the above two conditions may be expressed in the form (1.19). Our analysis below will be in a small neighborhood of the proportional loading path in which (1.18) is satisfied.

(Remark: The dependence of Ψ on history, which our notation suppresses, could invalidate the derivation of (1.19). Specifically, to obtain (1.19) we must assume that either (i) the history of the sample under study is invariant under the reflection R or (ii) Ψ depends on history through quantities, such as the total shear strain γ, which are invariant under reflections.)

Later we shall need the Taylor expansion of (1.19) near X₂ = 0. Retaining only O(s²) terms in ψ₁ and O(1) terms in ψ₂, we derive the constitutive relations

(1.21)  (a) (V₁₁ − V₂₂)/2 = σ⁻¹ G_p⁻¹ (X₁ − μX₃)(1 + a s²),  (b) V₁₂ = σ⁻¹ G_r⁻¹ X₂,

where the constants G_p, G_r, and a may depend on T. (Remark: We shall see in §3 that including an O(s²) term in ψ₂ would not change condition (2.9) below.) Equations (1.21) merely express (1.14) in the chosen coordinate system.

1.2. Discussion of the constitutive assumptions.

(a) The flow rule (1.12a).
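Both structural requirements can be checked mechanically for the forms (1.19): homogeneity of degree 1 in (X₁ − μX₃, X₂), evenness of Ψ₁ in X₂, and oddness of Ψ₂. A numerical sketch (the particular ψ₁, ψ₂ below are arbitrary smooth choices, not taken from the text):

```python
import math

def psi1(s2):
    return 1.0 + 0.3 * s2          # arbitrary smooth function of s^2

def psi2(s2):
    return 2.0 / (1.0 + s2)        # arbitrary smooth function of s^2

def Psi(Y, X2):
    """Y stands for X1 - mu*X3; eq. (1.19): Psi1 = Y*psi1(s^2), Psi2 = X2*psi2(s^2)."""
    s2 = (X2 / Y) ** 2             # only the square of the slope s enters
    return Y * psi1(s2), X2 * psi2(s2)

Y, X2, t = 1.7, 0.6, 3.2
p1, p2 = Psi(Y, X2)
q1, q2 = Psi(t * Y, t * X2)        # rate independence: degree-1 homogeneity
r1, r2 = Psi(Y, -X2)               # reflection X2 -> -X2: Psi1 even, Psi2 odd

homog_err = max(abs(q1 - t * p1), abs(q2 - t * p2))
parity_err = max(abs(r1 - p1), abs(r2 + p2))
```

The slope s of (1.20) is invariant under the scaling and changes sign under the reflection, which is exactly why only s² may appear.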
The classical flow theory of plasticity for a strain hardening granular material is formulated in terms of a yield condition. If γ is the total shear strain (defined by the ordinary differential equation d_t γ = |dev V|), the yield condition typically has the form

(1.22)  τ = μ(γ) σ

for some function μ(γ) such as illustrated in Figure 1.1. (Strictly speaking, (1.22) should be an inequality; equality holds if the material is deforming plastically, which we always assume.) As sketched in Figure 1.3, the yield surface (1.22) is a cone in stress space; the figure shows both the full three-dimensional space and a representative two-dimensional cross section in a deviatoric plane {σ = const}. This conical shape reflects the fact that friction between the grains is the dominant mechanism controlling the deformation of a granular material; i.e., the shearing stress required to overcome friction is proportional to the mean stress. These ideas have been formalized [4] in the notion of "psammic" material (ψάμμος is Greek for sand); such materials are defined to have no material properties with the dimensions of stress, so that stresses in constitutive relations can appear only as ratios.

Figure 1.3: A smooth yield surface exhibiting isotropy. (a) The full three-dimensional stress space, showing the hydrostatic axis (T₁₁ = T₂₂, T₁₂ = 0). (b) Intersection with a deviatoric plane {σ = const}.

The constitutive function μ(γ) can be measured in a biaxial test. Figure 1.4 shows schematically the apparatus for such a test. Starting from a state of hydrostatic stress σ₁ = σ₂ = σ*, the lateral pressure σ₂ is held constant while the top plate is moved slowly down. The pressure σ₁ required to achieve this (as well as the lateral displacement) is continuously monitored, giving a graph such as in Figure 1.1. To extract this constitutive information one must assume that stresses and deformations are uniform across the sample.
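The conical shape of (1.22) is a direct consequence of the psammic property: both τ and σ are homogeneous of degree 1 in T, so the yield condition involves only the stress ratio and is invariant under T ↦ λT. A direct check (with an arbitrary stress tensor):

```python
import numpy as np

def mean_and_shear(T):
    """Mean stress sigma=(s1+s2)/2 and shear stress tau=(s1-s2)/2 from eigenvalues."""
    s1, s2 = sorted(np.linalg.eigvalsh(T), reverse=True)
    return (s1 + s2) / 2, (s1 - s2) / 2

T = np.array([[5.0, 1.0], [1.0, 2.0]])
lam = 3.7
sigma, tau = mean_and_shear(T)
sigma_l, tau_l = mean_and_shear(lam * T)

# tau/sigma (the mobilized friction mu of (1.11)) is unchanged under scaling,
# so the yield surface tau = mu(gamma)*sigma is a cone through the origin.
ratio_err = abs(tau_l / sigma_l - tau / sigma)
scale_err = max(abs(sigma_l - lam * sigma), abs(tau_l - lam * tau))
```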
Figure 1.4: Schematic of a biaxial test.

As shown for example in [12], the yield condition (1.22) can be combined with a flow rule to derive a constitutive relation of the form (1.12a). The constitutive function Ψ so derived is linear in its second argument; in symbols,

(1.23)  Ψ(T, A) = G_p⁻¹ P A,

the special case of (1.14) obtained by setting G_r⁻¹ = 0. The prediction that G_r = ∞ discredits this theory. To see why, consider a uniform sample subjected to a stress history such as the circular arc BC in Figure 1.5, the mean stress σ being held constant. In such a history the principal stress axes rotate relative to material axes, but the principal stresses σ_i remain constant. Equation (1.23) predicts that no deformation will occur along such a stress path, since ∇_t(σ⁻¹ dev T) is everywhere orthogonal to T. Unlike the biaxial test, tests which rotate the stress axes are difficult to perform. Nevertheless, there is general agreement [2,6,13,14] on one point: plastic deformation accompanies rotation of the stress axes when the material is at yield.

Figure 1.5: Some informative stress histories.

The deformation theory of plasticity [7] predicts a finite value for G_r that can be determined using just data from biaxial tests. For stress paths close to proportional loading one obtains the constitutive relation (1.14) with

(1.24)  G_r = γ⁻¹ τ.

However, this theory also suffers from a serious defect regarding rotation of the stress axes. Consider subjecting an isotropic material to two stress histories moving from A to C in Figure 1.5, one moving directly along AC, the other indirectly along ABC. The total work in the deformation theory is independent of path, so equal work is performed during both histories. On the other hand, equal work is performed along AC and AB, since one stress history may be transformed to the other by a change in the coordinate system. Thus, although plastic deformation occurs along BC, no work is performed during this deformation.
This is a most unsatisfactory prediction. To obtain positive work with the constitutive law (1.14), one must require that the coefficient a satisfy

(1.25)  a > 0.

This derivation of (1.25) is flawed because we used (1.14) to describe the material response along the stress path BC. Since the tangent to BC makes an angle of 90° with the current stress, BC differs greatly from proportional loading. However, (1.14) is an approximation valid for nearly proportional loading, with limited justification elsewhere. Despite these reservations, we shall assume (1.25). Our hypothesis is that, at the first possible occurrence in the Taylor expansion, there is a term which increases the rate of plastic work T_ij V_ij relative to a linear theory. Indeed, we shall augment (1.25) by assuming that the dimensionless constant a is at least of the order of unity.

Let us note in passing that (1.25) implies that Ψ cannot be derived as the gradient of a plastic potential. If Ψ is a gradient, then the first nonlinear term in (1.21a) is of higher order in X₂. Such cases will be studied in a future publication. Here we assume (1.25), noting that at least in some cases (e.g. [13]) the experimentally determined Ψ cannot be derived from a potential.

In our formulation of the equations in §1.1 we made no mention of a yield condition. The only role of a yield surface in the present theory is to separate loading from unloading. Since plastic yielding accompanies rotation of the stress axes, stress histories such as BC in Figure 1.5 must lie outside the yield surface. Thus the yield surface must have a vertex at the current stress, such as is shown in Figure 1.6. Note that, consistent with the assumption of psammic material, all cross sections with a deviatoric plane are geometrically similar to Figure 1.6(b). Thus in the full three-dimensional stress space, the "vertex" in "yield vertex models" is really a line of corners on the yield surface.
Figure 1.6: A yield surface with a corner exhibiting induced anisotropy. (a) The full three-dimensional stress space, showing the hydrostatic axis (T₁₁ = T₂₂, T₁₂ = 0). (b) Intersection with a deviatoric plane {σ = const}; along the path indicated, loading occurs.

Although we do not accept the deformation theory of plasticity, nonetheless we refer to (1.24) for representative values of G_p and G_r. In particular, the modulus G_p for proportional loading tends to zero as the maximum shear stress is approached. Since the RHS of (2.5b) is positive, this inequality fails before the maximum is reached, resulting in nonhyperbolic equations. By contrast, according to (1.22), (1.24), G_r = γ⁻¹τ. Since constitutive tests rarely proceed beyond 10% deformation, the factor γ⁻¹ is substantially larger than 2. Thus (2.5a) will not be violated, so we focus on (2.5b).

The term (μ − β)τ and the factor 1 − (2τ/G_r)² in (2.5b) result from including the full Jaumann derivative. Because of the latter, even in the case of normality (i.e., μ = β), the threshold for ill-posedness occurs before G_p goes to zero at the maximum stress in Figure 1.1.

The complexity of (2.5b) makes some approximations desirable. Since 2τ/G_r ≪ 1, we may use the binomial theorem to expand [1 − (2τ/G_r)²]^{1/2}. Thus the RHS of (2.5b) is approximately equal to

(2.6)  (μ − β)τ + ((1 − μ²)(1 − β²))^{1/2} τ²/G_r + (1/2)[1 − μβ − ((1 − μ²)(1 − β²))^{1/2}] G_r.

In fact τ²/G_r is typically much smaller than (μ − β)τ, so we will drop the middle term in (2.6) altogether. It is not difficult to show that the third term in (2.6) is nonnegative and vanishes when μ = β; thus this quantity is O((μ − β)²). Combining the above, we obtain the following approximation for (2.5b):

(2.7)  G_p > (μ − β)τ + (1/2)[1 − μβ − ((1 − μ²)(1 − β²))^{1/2}] G_r.

If we take representative values for the parameters, say μ = .7, β = .3, a = 1, γ = .06, then the first term in (2.7) equals .4τ and the second equals approximately (8/9)τ, or roughly .9τ.

The following theorem, the main analytical result of the paper, gives the first-order correction to (2.5b) if s^{(0)} ≠ 0.

THEOREM 2.2.
For small s^{(0)}, equations (2.1) are hyperbolic if and only if (2.9) holds, where (2.9) strengthens (2.5b) by adding to G* a correction term proportional to |s^{(0)}|; here G* is the RHS of (2.5b) and C is a dimensionless positive constant.

Remark. C depends on β, μ, G_p/G_r, and τ/G_r; equation (3.32) below gives an explicit formula for C.

Observe from comparison of (2.5b) and (2.9) that the effect of taking small s^{(0)} ≠ 0 is destabilizing; i.e., hyperbolicity is lost sooner if s^{(0)} is nonzero. This behavior is illustrated in Figure 2.1. We chose 1/G_p as the abscissa in the figure so that the quasistatic evolution of the parameters in a constitutive test is described by motion to the right in the figure.

2.2. An application to shearing between parallel plates. The hyperbolic/nonhyperbolic boundary in Figure 2.1 is the union of two curves, a situation which arises as follows. At the critical point for proportional loading, hyperbolicity is lost because the frequencies of plane waves in certain directions go to zero and become complex. Specifically, this happens for plane waves in two directions located symmetrically with respect to the axes, as illustrated in Figure 2.2. Plane waves in these two directions respond differently to rotations in the stress of the base state. Hence there are two curves in Figure 2.1. The dotted curves are continuations of the boundary for one direction into the region where the frequency of the other direction becomes complex first.

Figure 2.1: Regions of hyperbolicity in parameter space.

Figure 2.2: Directions (at angles θ = ±θ*) of wave vectors of plane waves whose frequencies become complex. (Note: The axis corresponding to the major principal stress is shown as vertical to facilitate comparison with Figure 2.3.)

This structure has consequences for experiments. It is felt [11] that shear bands form in the material when the equations lose hyperbolicity, the shear band being approximately normal to the direction ξ of the plane wave whose frequency becomes complex. In the biaxial test, which has proportional loading, the frequencies for both directions in Figure 2.2 become complex at the same time. Moreover, as illustrated in Figure 2.3, shear bands in such experiments are observed to have either orientation. By contrast, in shearing between parallel plates (cf. Figure 2.4(a)), the rotation s^{(0)} of the base state is nonzero, so the frequency for one direction becomes complex before that of the other. This leads to a preferred orientation for the shear band. We shall argue in §3.4 that this mechanism selects shear bands of the orientation shown in Figure 2.4(c), the orientation invariably seen in experiments.

Figure 2.3: Shear bands in the biaxial test.

§3. PROOFS

3.1. Formulation of the eigenvalue problem. Our first task is to compute the principal part of the linearization of (2.1). For (2.1a) we obtain the equation

(3.1)  d_t ρ̃ + ρ^{(0)} (∂₁ṽ₁ + ∂₂ṽ₂) = 0,

where d_t = ∂_t + v_j^{(0)} ∂_j, and for (2.1b) the linearized momentum equations

(3.2)  ρ^{(0)} d_t ṽ₁ + ∂₁((T̃₁₁ − T̃₂₂)/2 + (T̃₁₁ + T̃₂₂)/2) + ∂₂ T̃₁₂ = 0,  ρ^{(0)} d_t ṽ₂ + ∂₁ T̃₁₂ + ∂₂((T̃₁₁ + T̃₂₂)/2 − (T̃₁₁ − T̃₂₂)/2) = 0.

Stresses in the latter equation are written in terms of the coordinates (1.6) for symmetric tensors. Recalling (1.17), we compute that the principal part of the linearization of (2.1c) is

(3.3)  dev Ṽ = (σ^{(0)})⁻¹ { ∂Ψ/∂X₁ [ d_t (T̃₁₁ − T̃₂₂)/2 − μ d_t (T̃₁₁ + T̃₂₂)/2 ] + ∂Ψ/∂X₂ [ d_t T̃₁₂ − τ^{(0)}(∂₁ṽ₂ − ∂₂ṽ₁) ] }.

To vectorize the notation, define the 2 × 2 matrix

(3.4)  B_{ij} = ∂Ψ_i/∂X_j,

the derivative of Ψ with respect to its second argument, evaluated at the base state. Equation (3.3) may be rewritten in this notation as (3.5), where B_i, i = 1, 2, denotes the ith column of B. Finally, equation (2.1d) linearizes to

(3.6)  β |dev V^{(0)}|⁻¹ { ((V₁₁^{(0)} − V₂₂^{(0)})/2)(∂₁ṽ₁ − ∂₂ṽ₂) + V₁₂^{(0)}(∂₂ṽ₁ + ∂₁ṽ₂) } + ∂₁ṽ₁ + ∂₂ṽ₂ = 0.

If V^{(0)} were diagonal, then we would have V₁₂^{(0)} = 0 and (V₁₁^{(0)} − V₂₂^{(0)})/2 = |dev V^{(0)}|, so that (3.6) would simplify to

(1 + β)∂₁ṽ₁ + (1 − β)∂₂ṽ₂ = 0.

Of course, when s^{(0)} ≠ 0, the deformation rate V^{(0)} is not diagonal, even though T^{(0)} is diagonal. However, if s^{(0)} is small, then dev V^{(0)} is approximately diagonal, resulting in partial simplification of (3.6).
Specifically, recalling (2.4), we deduce from (1.21) that V₁₂^{(0)} = (G_p/G_r) s^{(0)} (V₁₁^{(0)} − V₂₂^{(0)})/2, modulo O((s^{(0)})²). Using this information in (3.6) yields (3.7), modulo an error that is O((s^{(0)})²).

Next we look for exponential solutions of the linearized equations,

Ũ = U exp(i(ξ·x − ωt)),

where U is a 6-component vector of constants. This exponential satisfies (3.1), (3.2), (3.5), (3.7) if and only if (ω, U) solves the generalized eigenvalue problem (3.9), written in blocks corresponding to the decomposition U = (ρ̃, ṽ, dev T̃, σ̃); the blocks are built from ρ^{(0)}, the matrix B, and vectors depending linearly on ξ (given in (3.10)), among them the row vector ξ^Q = (ξ₂, −ξ₁). (The symbol Q may be regarded as a contraction of the symbol ⊥ for orthogonal and T for transpose.)

Equations (2.1) are hyperbolic if and only if the frequency ω is real for all ξ. Since the term (v^{(0)}, ξ) in (3.8) is real, ω is always real if and only if the eigenvalue λ in (3.9) is always real.

3.2. Analysis of the eigenvalue problem (3.9). We claim that zero is a double eigenvalue of (3.9). Now U = (1, 0, 0, 0)^T is one eigenvector of (3.9) associated with λ = 0. Since the 2 × 3 matrix appearing in the stress rows has rank at most 2, there is another null eigenvector of (3.9) of the form (0, 0, U₃, U₄). Thus zero is a double eigenvalue of (3.9), and moreover there are two linearly independent eigenvectors. (As we will see below, under certain circumstances the multiplicity may be higher than two.) Apart from one zero eigenvalue, the eigenvalues of (3.9) coincide with those of the 5 × 5 problem (3.11). Since our interest is in the nonzero eigenvalues of (3.9), we confine our attention to (3.11).
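The plane-wave test used here is the standard one: hyperbolicity is equivalent to the reality of every frequency ω produced by the exponential ansatz. The following toy illustration (a 2 × 2 one-dimensional system, not the 6 × 6 system of the text) shows a hyperbolic system acquiring complex frequencies when a coefficient changes sign:

```python
import numpy as np

def frequencies(c2, xi=1.0):
    """Plane-wave frequencies of the system u_t = v_x, v_t = c2 * u_x.

    The ansatz (u,v) = U exp(i(xi*x - omega*t)) gives omega*U = -xi*M*U with
    M = [[0,1],[c2,0]], so omega^2 = c2 * xi^2.
    """
    M = np.array([[0.0, 1.0], [c2, 0.0]])
    return np.linalg.eigvals(-xi * M)

hyperbolic = frequencies(4.0)      # c2 > 0: the wave equation, omega = ±2*xi real
ill_posed = frequencies(-4.0)      # c2 < 0: Laplace-like, omega = ±2i*xi complex

max_imag_hyp = np.abs(np.imag(hyperbolic)).max()
max_imag_ill = np.abs(np.imag(ill_posed)).max()
```

In the ill-posed case the imaginary part of ω gives an exponential growth rate that increases with |ξ|, which is the hallmark of linear ill-posedness discussed in §0.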
First we premultiply and postmultiply (3.11) by suitable invertible matrices, where e_i, i = 1, 2, is a unit vector along the ith axis, to reduce (3.11) to a 5 × 5 generalized eigenvalue problem in standard form (3.12). For purposes of the following lemma, let us abstract the structure of (3.12) as (3.13), in which L and M are 2 × 2 matrices and ℓ and m are 2-component column vectors. In the lemma we use the notation (m, ℓ) for the inner product on R² and ℓ^⊥ for the orthogonal vector.

LEMMA 3.1. Assume (3.14), i.e., (m, ℓ) ≠ 0. Then (3.13) has precisely three eigenvalues, one of which is zero. The other two eigenvalues are real if and only if (3.15) holds.

Remark. Although (3.13) involves 5 × 5 matrices, there are only three eigenvalues. As regards the original system (3.12), this deficit arises because the speed of longitudinal sound waves is infinite and does not appear as an eigenvalue of (3.12). The two nonzero eigenvalues of (3.12), equal in magnitude and opposite in sign, are the speeds of transverse sound waves.

Proof. Solving the first component of (3.13) for w₁ and substituting the result into the latter two components reduces (3.13) to a 3 × 3 generalized eigenvalue problem (3.16). Since by assumption ℓ^T m ≠ 0, the second component of (3.16) may be solved for w₃ and substituted into the first, yielding a 2 × 2 ordinary eigenvalue problem (3.17). As noted above, one eigenvalue of (3.13), hence also of (3.17), is zero. The other eigenvalue of (3.17), let us call it c, gives rise to a pair of eigenvalues of (3.13) of the form λ = ±√c. Thus (3.13) admits three eigenvalues. These eigenvalues are real if and only if c ≥ 0, which happens if and only if the trace of (3.17) is nonnegative; in symbols,

(3.18)  tr(LM − (m, ℓ)⁻¹ L m ℓ^T M) ≥ 0.
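The structure underlying (3.18) is that the matrix being traced equals L(I − (m, ℓ)⁻¹ m ℓ^T)M; the middle factor is a projection of rank one, so the product is singular, one eigenvalue vanishes, and the remaining eigenvalue equals the trace. The pair ±√c of Lemma 3.1 is then real precisely when that trace is nonnegative. A numerical sketch with arbitrary data:

```python
import numpy as np

# Arbitrary data satisfying the lemma's hypothesis (m, ell) != 0.
L = np.array([[1.0, 2.0], [0.5, -1.0]])
M = np.array([[0.3, -0.7], [1.1, 0.4]])
m = np.array([1.0, 2.0])
ell = np.array([0.5, -0.2])
denom = m @ ell                      # (m, ell) = 0.1, nonzero

# N = L M - (m,ell)^{-1} L m ell^T M = L (I - m ell^T/(m,ell)) M.  The middle
# factor has a one-dimensional kernel (it kills m), so N is singular and its
# nonzero eigenvalue equals tr N, the quantity tested in (3.18).
N = L @ M - np.outer(L @ m, ell @ M) / denom
eigs = np.real(np.linalg.eigvals(N))

det_N = np.linalg.det(N)                      # ~ 0: one eigenvalue vanishes
nonzero_eig = eigs[np.argmax(np.abs(eigs))]   # the other equals tr N
```

For this particular data tr N is negative, so the pair ±√(tr N) would be imaginary, i.e. the nonhyperbolic alternative of the lemma.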
Using the identity tr(AB) = tr(BA) to rearrange both terms in (3.18), we rewrite this inequality as

(3.19)  tr(M L P) ≥ 0,  where  P = I − (m, ℓ)⁻¹ m ℓ^T.

Observe that P is the projection from R² onto ℓ^⊥ relative to the basis {n₁, n₂}, where

(3.20)  n₁ = m,  n₂ = ℓ^⊥;

i.e., P n₁ = 0 and P n₂ = n₂. (By (3.14), n₁ and n₂ are linearly independent, so (3.20) is a basis for R².) The dual basis to (3.20) is {n₁*, n₂*}. Now

tr(M L P) = Σ_i ⟨n_i*, M L P n_i⟩ = ⟨n₂*, M L n₂⟩.

The proof is complete.

To apply the lemma, we substitute from (3.12) and (3.10) to compute that

ρ^{(0)}(m, ℓ) = σ[(1 + μ)(1 + β)ξ₁² + 2βs^{(0)}(G_p/G_r)ξ₁ξ₂ + (1 − μ)(1 − β)ξ₂²].

Recall that 0 ≤ μ < 1 and |β| < 1. Thus, at least for small s^{(0)}, this quadratic form is positive definite. Therefore condition (3.14) of the lemma is verified. Moreover, since (m, ℓ) > 0, condition (3.15) simplifies to (3.21).

We claim that, modulo terms that are O((s^{(0)})²), the LHS of (3.21) equals (1/2)(ρ^{(0)})² times the quantity

(3.22)  (1 + μ)(1 + β)(G_r − 2τ)ξ₁⁴ + (1 − μ)(1 − β)(G_r + 2τ)ξ₂⁴ + 2[2G_p − (1 − μβ)G_r − 2(μ − β)τ]ξ₁²ξ₂² + 2s^{(0)}ξ₁ξ₂ p(ξ),

where p(ξ) is a quadratic form whose coefficients are given in (3.23). To summarize the above analysis, this claim yields the following conclusion: the system (2.1) is hyperbolic if and only if the homogeneous quartic form (3.22) is positive definite.

The calculation verifying (3.22) may be shortened somewhat by the following trick. Taking transposes to shift M to the other factor in (3.21) and substituting data from (3.12), we rewrite the LHS of (3.21) as (1/2)(ρ^{(0)})² times the quantity (3.24). For any ξ ∈ R², we have the identity involving the rotation matrix R. We simplify the vector on the left in (3.24) accordingly; here we have used the fact, which may be seen from (3.10(a)), that S is a multiple of a rotation matrix; hence (i) S^T commutes with R and (ii) SS^T = |ξ|² I.

To continue we need a formula for B⁻¹, where B was defined by (3.4). Recall from (1.17) that Ψ(T, X) equals σ times the RHS of (1.21). Differentiating (1.21), we find that, modulo terms which are O((s^{(0)})²),

B = ( G_p⁻¹  2a s^{(0)} G_p⁻¹ ; 0  G_r⁻¹ ),  B⁻¹ = ( G_p  −2a s^{(0)} G_r ; 0  G_r ).

This, together with formulas (3.10), completes the data needed to verify (3.22).
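The triangular structure makes the inversion of B elementary. Assuming B has the upper-triangular form obtained by differentiating (1.21), B = [[1/G_p, 2·a·s^{(0)}/G_p], [0, 1/G_r]] (a reconstruction from the surrounding derivation), its inverse is [[G_p, −2·a·s^{(0)}·G_r], [0, G_r]], as a quick numerical check confirms:

```python
import numpy as np

G_p, G_r, a, s0 = 2.0, 30.0, 1.5, 0.01   # illustrative values only

# Derivative matrix B of the constitutive map, in the assumed upper-triangular
# form (see the hedge in the lead-in above).
B = np.array([[1 / G_p, 2 * a * s0 / G_p],
              [0.0, 1 / G_r]])

# Claimed closed-form inverse.
B_inv_claimed = np.array([[G_p, -2 * a * s0 * G_r],
                          [0.0, G_r]])

inversion_err = np.abs(B @ B_inv_claimed - np.eye(2)).max()
```

Note that for a triangular B of this shape the claimed inverse is exact, not merely correct modulo O((s^{(0)})²).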
Incidentally, to prove a claim made in §1 following (1.21), let us consider the effect of including the O(s²) term in (1.21b). Such a term would be proportional to X₂ s² = X₂³/(X₁ − μX₃)², so it would perturb the 2,1-entry of B by O((s^{(0)})³) and the 2,2-entry by O((s^{(0)})²). Thus this term is irrelevant for the present O(s^{(0)}) analysis.

3.3. Analysis of quartic forms. (a) Case 1: s^{(0)} = 0. We now prove Theorem 2.1. We must test whether (3.22) is positive definite. Note that for the present case the final term in (3.22) vanishes. A quartic form a ξ₁⁴ + 2b ξ₁²ξ₂² + c ξ₂⁴ is positive definite if and only if

(3.25)  (a) a > 0,  (b) c > 0,  (c) b > −(ac)^{1/2}.

When this test is applied to the above form, (3.25a) yields (2.5a), (3.25b) is vacuous, and (3.25c) yields (2.5b). The proof is complete.

(b) Case 2: s^{(0)} ≠ 0. Next we prove Theorem 2.2. The framework for this proof is the following observation. In the borderline case when s^{(0)} = 0 and G_p = G*, where G* equals the RHS of (2.5b), the form (3.22) is nonnegative, but it vanishes along the two lines in the ξ-plane through the origin making angles ±θ* with the axis, where

(3.26)  tan θ* = { (1 + μ)(1 + β)(1 − 2τ/G_r) / [(1 − μ)(1 − β)(1 + 2τ/G_r)] }^{1/4}.

The perturbing term in (3.22), 2s^{(0)}ξ₁ξ₂ p(ξ), has opposite signs at these two directions. When s^{(0)} ≠ 0, hyperbolicity will be lost slightly earlier near the direction for which this term is negative, slightly later near the direction for which it is positive.

To make this quantitative, let us write Q(ξ; G_p, s^{(0)}) for the expression (3.22); we indicate explicitly the parameters that will be varied in the following analysis. We define a function

F(θ; δ, s^{(0)}) = Q((cos θ, sin θ); G* + δ, s^{(0)}).

If s^{(0)} ≠ 0, then (2.1) loses hyperbolicity at G_p = G* + δ, where δ is defined implicitly by

min_θ F(θ; δ, s^{(0)}) = 0.

The minimum will be attained at a critical point; i.e., at a solution of the system

(3.28)  F(θ; δ, s^{(0)}) = F_θ(θ; δ, s^{(0)}) = 0.

This system is to be solved for θ and δ as functions of s^{(0)}. When s^{(0)} = 0, (3.28) has the two solutions θ = ±θ*, δ = 0.
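The test (3.25) and the borderline directions can be verified numerically for a biquadratic form q(ξ) = aξ₁⁴ + 2bξ₁²ξ₂² + cξ₂⁴ of the kind to which (3.22) reduces when s^{(0)} = 0: with a, c > 0 the form is positive definite exactly when b > −√(ac), and at b = −√(ac) it vanishes along the directions with tan θ* = (a/c)^{1/4}, cf. (3.26). A sketch with arbitrary coefficients:

```python
import numpy as np

def q(a, b, c, theta):
    """Biquadratic form a*x1^4 + 2b*x1^2*x2^2 + c*x2^4 on the unit circle."""
    x1, x2 = np.cos(theta), np.sin(theta)
    return a * x1**4 + 2 * b * x1**2 * x2**2 + c * x2**4

a, c = 3.0, 1.2
b_crit = -np.sqrt(a * c)                  # threshold (3.25c)
thetas = np.linspace(0.0, np.pi, 2001)

q_above = q(a, b_crit + 0.1, c, thetas)   # b above threshold: positive definite
q_below = q(a, b_crit - 0.1, c, thetas)   # b below threshold: indefinite
theta_star = np.arctan((a / c) ** 0.25)   # null direction at the borderline
q_borderline_at_star = q(a, b_crit, c, theta_star)
```

At the borderline the form factors as (√a ξ₁² − √c ξ₂²)², which exhibits both the nonnegativity and the two null directions ±θ* used in the perturbation argument.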
When s(0) ≠ 0, we have to lowest order

(3.29)  δ ≈ [dδ/ds(0)] s(0).

To evaluate the derivative in (3.29), we invert the relation

    ( F_δ    F_θ   ) ( dδ/ds(0) )      ( F_{s(0)}  )
    ( F_{θδ} F_{θθ} ) ( dθ/ds(0) )  = − ( F_{θs(0)} ),

which is obtained by differentiating (3.28) with respect to s(0). Taking advantage of the fact that F_θ(θ*; 0, 0) = 0, we obtain

(3.30)  dδ/ds(0) = − F_{s(0)} / F_δ,

provided F_δ F_{θθ} is nonzero, which is easily checked. Referring to (3.22) to evaluate the derivatives in (3.30) and substituting into (3.29), we find

(3.31)  δ ≈ ∓ [ p(cos θ*, sin θ*) / (2 cos θ* sin θ*) ] s(0),

the minus sign holding near θ = +θ* and the plus sign near θ = −θ*. Since both signs occur, the correction to G_p in (2.5b) needed to guarantee hyperbolicity must be positive. On substituting (3.23) into (3.31) and recalling (3.26), we verify (2.9) with C equal to [(1 − μ)²(1 − β²)(1 − (2T/G_r)²)]^{−1/4} times the absolute value of (3.32).

(−u₋ − s, 0) has a negative eigenvalue associated with (1, 0); the eigenvalue for eigenvector (0, 1) is positive for s < −2u₋/3 and negative for s > −2u₋/3. A trajectory from (u₋, 0) to (−u₋ − s, 0) along the line v = 0 is given by γ_s(t) = (u_s(t), 0).

We first describe, for s in the interior of the interval (2.8), a simple use of Melnikov's method, as μ varies away from the origin, to describe saddle-to-saddle connections near the trajectory γ_s(t) associated with the undercompressive shocks (2.3)–(2.5). For s near the endpoints of the interval (2.8), the use of Melnikov's method is more complicated, so we content ourselves here with sketching the ideas and outlining the results. Full details will be given in a forthcoming paper [6].

(i) Let β > 0 be small. For s ∈ (2u₋ + β, −2u₋/3 − β), μ = 0, the heteroclinic trajectory γ_s joins hyperbolic saddle points (i.e., with nonzero real eigenvalues of opposite sign). For (s, μ) in a neighborhood N of [2u₋ + β, −2u₋/3 − β] × {0} in R × R³, there are unique hyperbolic equilibria near (u₋, 0) and (−u₋ − s, 0).
There is also a smooth function (the Melnikov function) d: N → R such that d(s, μ) = 0 if and only if the equilibria are joined by a heteroclinic trajectory near γ_s. From above, we have d(s, 0) = 0. Let

    J_s(t) = exp[ −∫₀ᵗ div G(γ_s(τ), s, 0) dτ ] = e^{2st}

(div means divergence with respect to (u, v)). Let m be any component of μ. Then ∂d/∂m(s, 0) is given by Melnikov's integral:

    ∂d/∂m(s, 0) = ∫₋∞^∞ J_s(t) G(γ_s(t), s, 0) ∧ (∂G/∂m)(γ_s(t), s, 0) dt,

where ∧ means determinant. These integrals can be evaluated explicitly, as explained in [6]. Let ζ(s) denote the common factor that results, as computed in [6]; ζ(s) ≠ 0 for s ∈ (2u₋, −2u₋/3). Then

(3.2a)  ∂d/∂v₋(s, 0) = 2ζ(s),
(3.2b)  ∂d/∂b(s, 0) = (2/3) u₋ ζ(s),
(3.2c)  ∂d/∂a(s, 0) = (1/3) u₋ (s + 3u₋) ζ(s).

By the implicit function theorem, the set d = 0 is a codimension one manifold. When a = 0, there is a degeneracy in the problem, in that d(s, v₋, b, 0) = 0 identically in s along a curve

(3.3)  v₋ = φ(b) = −(1/3) u₋ b + O(b²).

Indeed, when a = 0 and (v₋, b) satisfies (3.3), there is a line near the u-axis that is invariant for (2.7) for all s. Both hyperbolic equilibria lie on the line, as does the heteroclinic trajectory joining them. We claim that the only simultaneous solutions of

(3.4a)  d(s, v₋, b, a) = 0,
(3.4b)  ∂d/∂s(s, v₋, b, a) = 0

in N are given by (3.3), with a = 0, s arbitrary. The proof of this statement comes from (3.2):

(3.5a)  d(s, v₋, b, a) = 2ζ(s)v₋ + (2/3)u₋ζ(s)b + (1/3)u₋(s + 3u₋)ζ(s)a + O(|(v₋, b, a)|²),
(3.5b)  ∂d/∂s(s, v₋, b, a) = 2ζ′(s)v₋ + (2/3)u₋ζ′(s)b + (1/3)u₋[(s + 3u₋)ζ′(s) + ζ(s)]a + O(|(v₋, b, a)|²).

Since ζ(s) ≠ 0 for s ∈ (2u₋, −2u₋/3), the gradients of (3.5a,b) on the s-axis are linearly independent. By the implicit function theorem, the zeroes of (3.5) form a two-dimensional surface. But then the surface must coincide with the surface of known solutions a = 0, s arbitrary, v₋ = φ(b). In particular, we have proved the following result.

PROPOSITION 3.1. Let β > 0 be small, and let N be the neighborhood of zero in R³ discussed above.
If (v₋, b, a) ∈ N and a ≠ 0, then there is at most one s ∈ [2u₋ + β, −2u₋/3 − β] such that d(s, v₋, b, a) = 0.

To treat the end points of the interval (2.8), we have to deal with the complications of the bifurcation of equilibria. We use the bifurcation theory approach of Golubitsky and Schaeffer [2], in which the bifurcation parameter s is treated in a distinguished manner. Thus, a singularity corresponds to a nongeneric bifurcation diagram for equilibria. Similarly, an unfolding of a singularity, with unfolding parameters, has a bifurcation diagram for each choice of the unfolding parameters. The parameters μ = (v₋, b, a) play the role of unfolding parameters here. Since the unfolding is well understood for a = 0, our goal is to describe the situation for a ≠ 0. We describe the endpoints s = 2u₋, s = −(2/3)u₋ separately.

(ii) When μ = 0, there is a subcritical pitchfork primary bifurcation at s = 2u₋. Since the trivial solution U = U₋ is preserved for all values of the parameters, the universal unfolding of the singularity (that preserves the trivial solution) has one unfolding parameter. In particular, there is a transcritical primary bifurcation and a limit point bifurcation for each value of μ, except for μ on a two dimensional surface corresponding to subcritical primary bifurcation. As shown in [5], this occurs precisely when the slow characteristic speed loses genuine nonlinearity at U₋: (3.6).

Because the left state U₋ undergoes a bifurcation, for certain parameter values there are several equilibria that might be left hand states for saddle-to-saddle connections. We therefore define two Melnikov functions, d(s, μ) for U₋, and a separate function d₁ whose zeroes represent saddle-to-saddle connections Û₋ → U₊ with Û₋ ≠ U₋. This is done roughly as follows.
The universal unfolding of the primary bifurcation of equilibria at μ = 0, s = 2u₋ may be written

(3.7)  x(x² + λ + β) = 0,

where x = v − v₋ + h.o.t., λ = s − 2u₋ + h.o.t., and β, the unfolding parameter, is a function of μ that is zero at μ = 0. Here, h.o.t. signifies higher order terms in U, s, μ. Now solutions of (3.7) are x = 0, corresponding to the trivial solution U = U₋, and nontrivial solutions that may be parameterized smoothly by x and β, with λ = −(x² + β). The latter correspond to equilibria Û₋ ≠ U₋. Thus, the function d(s, μ) is a smooth extension of that defined in subsection (i), and the function d₁ depends smoothly upon v, v₋, b, a. As in Proposition 3.1, we have that d(s, μ) = 0 for at most one value of s for fixed μ with a ≠ 0. The corresponding result for d₁ = 0 states that for each choice of the parameters v₋, b, a there is at most one v with d₁(v, v₋, b, a) = 0. Since s is a smooth function of v, there is at most one s for each choice of the parameters. Calculating derivatives of d and d₁ with respect to their variables at the origin leads to a description of the sets d = 0, d₁ = 0 for s near 2u₋, μ near zero.

(iii) When μ = 0, there is a supercritical secondary bifurcation of equilibria at s = −(2/3)u₋. The universal unfolding of this singularity requires two parameters, and the resulting bifurcation diagrams have transcritical secondary bifurcation on a codimension one manifold M. The unfolding takes the form

(3.8)  x³ + τx² − λx + β = 0,

where x = v − v₋ + h.o.t., λ = s + (2/3)u₋ + h.o.t., and τ and β are unfolding parameters. From (3.8), we express β = λx − x³ − τx² as a smooth function of the other variables. Thus, the equilibria are parameterized smoothly by x, λ, τ. Then the Melnikov function d is a smooth function of x, λ, τ, a, and the set d = 0 is a smooth surface in the space of these variables. Therefore, β is a smooth function on this surface.
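The equilibrium counts encoded in these normal forms can be checked directly. The sketch below uses the cubic x³ + τx² − λx + β = 0 (a reconstruction of the quoted rearrangement β = λx − x³ − τx²; the exact signs are an assumption) and counts real roots as λ varies:

```python
import numpy as np

# Count real equilibria of the unfolded normal form x^3 + tau*x^2 - lam*x + beta = 0.
def n_real(lam, tau=0.0, beta=0.0, tol=1e-7):
    roots = np.roots([1.0, tau, -lam, beta])
    return int(np.sum(np.abs(roots.imag) < tol))

# Pitchfork at tau = beta = 0: one equilibrium for lam < 0, three for lam > 0.
print([n_real(l) for l in (-1.0, 1.0)])
# A small beta breaks the pitchfork symmetry but preserves the 1 -> 3
# transition through a limit point, as in the unfoldings discussed above.
print([n_real(l, beta=0.05) for l in (-1.0, 1.0)])
```

The point of the unfolding theory is precisely that these root-count transitions, and the limit points and transcritical crossings separating them, persist for every small choice of the unfolding parameters.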
Next, we calculate derivatives of d at the origin, project out x, and finally change back to the original parameters s, v₋, b, a. This process leads to a description of the set d = 0 for s near −(2/3)u₋, μ near zero.

The next step in the analysis is to put the pieces together from subsections (i)–(iii). Rather than describe this, together with the structure of the sets d = 0, d₁ = 0 in detail, we describe only the bifurcations of the vector field that are significant for shock admissibility. For example, consider bifurcation of equilibria. If none of the equilibria involved in the bifurcation is connected to U₋, then the bifurcation is not significant for shocks. Now bifurcations of the vector field G(·, s, μ) occur at certain values of (s, μ). However, s is treated as a distinguished parameter, so that we are really concerned with singular bifurcation diagrams for the family of vector fields G(·, ·, μ). The following surfaces, in addition to a = 0, describe the values of μ for which G(·, ·, μ) has a singular bifurcation diagram that affects admissible shocks.

F₁: Inflection locus for the slow family. Primary bifurcation is subcritical at s = λ₁(U₋) when μ lies on F₁.
B: Secondary bifurcation locus; d(s, μ) = 0 at a value of s for which there is a limit point for the fast family not connected to U₋.
F: d(s, μ) = 0 at a value of s for which U₋ → U₊ is a saddle-to-saddle connection with speed s, and s = λ₂(Ũ₊) for some equilibrium Ũ₊ ≠ U±.
E: d(s, μ) = 0 at a value of s for which there is a limit point for the fast family connected to U₋, and U₋ → U₊ is a saddle-to-saddle connection with speed s = λ₂(U₊).

Equations for F and E coincide with the equation for B up to linear terms in μ.

E₁: d(s, μ) = 0 at the primary bifurcation point s = λ₁(U₋).
F₂: Remote saddle-to-saddle connection at s = λ₁(U₋).

Figure 1. The (b, v₋) plane for a > 0.

In Figure 1, we show the curves schematically (i.e., not to scale) in the (b, v₋) plane for fixed a > 0.
Note that F has tangential intersection with E and B, but the curvatures are different. In Figure 2, we give the corresponding bifurcation diagrams for (b, v₋) in each region of Figure 1. Note that part of the bifurcation curve B plays no role in shock admissibility, and is therefore omitted from Figures 1 and 2. The curve F₁ similarly has a lesser role in understanding admissibility of fast shocks, so we show only that portion affecting the analysis of slow shocks in Figure 1 and omit F₁ from Figure 2. In the bifurcation diagrams of Figure 2, some transitions from one bifurcation diagram to its neighbor involve a bifurcation of equilibria that is not represented because it corresponds to inadmissible shocks. In Figure 2, an x represents a value of (s, U) for which there is a connection U₋ → U with speed s, with d = 0. A vertical arrow indicates points (s, U) and (s, U′) for which d₁ = 0 and the corresponding connection with speed s is U → U′. Figures 1 and 2 contain the crucial information for solving Riemann problems, as explained in the next section.

Figure 2. Bifurcation Diagrams (C — admissible Lax shocks).

4. The Riemann Problem. Let F(U) be given by (2.6) with U = (u, v). The Riemann problem is

(4.1)  U_t + F(U)_x = 0,
(4.2)  U(x, 0) = U_L for x < 0,  U(x, 0) = U_R for x > 0.

For a = 0, (4.1), (4.2) was solved uniquely in [8] for all b, U_L, U_R. Here, we show how the results of section 3 apply to the perturbed problem with a ≠ 0. To treat the global problem, with U_L, U_R in some neighborhood of U = 0, the umbilic point, would require unfolding a vector field with a high degree of degeneracy. In section 3, we considered a somewhat simpler degeneracy that captures the main features of the problem of admissibility of shocks for nonstrictly hyperbolic systems. Specifically, when a = 0 and b = 0, the u-axis represents the coincidence of three phenomena: (i) v = 0 is an inflection locus; (ii) v = 0 is a secondary bifurcation curve; (iii) v = 0 is an invariant straight line.
This is associated with undercompressive shocks. Varying b from zero splits items (i) and (ii), but (ii) coincides with (iii) even for b ≠ 0. For a ≠ 0, however, we find there are no straight invariant lines, and the phenomena occur more or less independently. This results in solutions of Riemann problems that are structurally stable to higher order perturbations of the nonlinearity F(U).

The solution of the Riemann problem involves three classes of admissible elementary waves, namely slow waves, fast waves and undercompressive shocks. Slow and fast waves are combinations of admissible Lax shocks and rarefaction waves. These waves may be represented by wave curves, as described in [5], for example. For instance, consider small a > 0, small b, and U_L near the u-axis. There is a slow wave curve W₁(U_L) through U_L representing points U₋ = (u₋, v₋) such that U_L may be joined, in the (x, t)-plane, to U₋ by a slow wave. The curve W₁(U_L) crosses the u-axis transversally, and we may parameterize W₁(U_L) smoothly by v₋. Since b is fixed, varying U₋ along W₁(U_L) varies (b, v₋) on a vertical line in Figure 1, thus selecting a sequence of bifurcation diagrams. These bifurcation diagrams contain information about the possible shock waves with U₋ on the left. In particular, we can use the bifurcation diagrams to locate undercompressive shocks and the transitions between undercompressive shocks and fast and slow admissible Lax shocks. It is then straightforward to add admissible fast waves in the (x, t)-plane, to get all possible wave combinations with U_L on the left, and intermediate values of U near the u-axis. Each combination of waves determines a constant value U_R of U on the right. The Riemann problem is solved if the values of U_R cover an appropriate set once for each fixed value of U_L in the domain of interest. Here, U_L lies in a neighborhood of a fixed point (−1, 0), and the set of U_R is a neighborhood of (u₋, −3u₋) × {0}.
This procedure, and the resulting covering of the U_R plane for each fixed U_L, is represented by a picture of the U_R plane, in which the plane is divided into regions. Each region is labelled by the specific combination of waves used to construct the solution of the Riemann problem for U_R in that region. For each a ≠ 0, there are lots of different cases to consider, although most cases differ only in small details. To see why there are so many cases, first note that the curves in Figure 1 represent U_L boundaries in the sense of [5]. That is, for fixed b, the solution of the Riemann problem changes as (b, v_L) crosses a boundary in Figure 1. In addition, the sequence of bifurcation diagrams depends on the location of b in Figure 1. Transitions occur at values of b at which curves in Figure 1 intersect. Given the large number of cases, each with a fairly complex solution of the Riemann problem, we content ourselves here with stating that the solution is well defined, unique, and has continuous dependence in L¹ on the initial data. Going through each of the cases, as (b, v_L) varies within Figure 1, we have proved the following result.

PROPOSITION 4.1. There is δ > 0 and a neighborhood M of (−1, 3) × {0} in R² such that for each U_L = (u_L, v_L), b, a satisfying |u_L + 1| + |v_L| + |b| + |a| < δ, and each U_R in M, there is a unique centered solution of the Riemann problem (4.1), (4.2) constructed from admissible elementary waves.

We show just one example of the U_R plane in Figure 3, choosing (b, v_L) to lie in region 1 of Figure 1, and with the sequence 1, 2, 7, 8, 9, 10 of bifurcation diagrams, as v₋ decreases from v_L. Figure 3 is a picture of the neighborhood M in U-space of Proposition 4.1. Note that since M is in reality a narrow horizontal strip, the vertical v_R scale has been exaggerated to show detail. M is divided into regions by various curves.
Each region is labelled by the corresponding sequence of waves occurring in the solution of the Riemann problem, for U_R in that region. The construction and notation are by now standard (cf. [5,7,8]) and self-explanatory, with the exception of the curves Σ representing undercompressive shocks, and the curve S′. The undercompressive shock curve Σ has end points E′, F₂′. Let E, F₂ be the points on W₁(U_L) corresponding to the boundaries E, F₂. That is, if U = (u₋, v₋) at E, F₂ (respectively), then the corresponding point (b, v₋) lies on the boundary E, F₂ of Figure 1. Now there is a slow shock from F₂ to a point F̂₂ as shown, with shock speed s that is characteristic on the left of the shock. Further, by the role of the boundary F₂ in Figure 1, there is an undercompressive shock with F̂₂ on the left that also has speed s. The endpoint F₂′ of Σ is on the right of this shock. Similarly, there is an undercompressive shock from E to the endpoint E′. For each point U₋ on W₁(U_L) between F₂ and F̂₂, there is a unique corresponding point U₋′ on Σ such that U₋ → U₋′ is an undercompressive shock. The curve S′ cuts off fast Lax shocks where they become inadmissible. The point is that when a saddle-to-node connection U₋ → U₊ in the phase plane becomes a saddle-to-saddle connection U₋ → U₋′, say at speed s, then there is also a saddle-to-node connection U₋′ → U₊ at the same speed s. This second shock is the fast shock from a point on Σ to a point on S′.

Figure 3. Solution of Riemann problems: the U_R plane. (Legend: shock curve; rarefaction curve; rarefaction-shock curve; undercompressive shock curve Σ; limit points; S′ (see text).)

REFERENCES

[1] C. C. CHICONE, Quadratic gradients on the plane are generically Morse-Smale, J. Differential Equations, 33 (1979), pp. 159-161.
[2] M. GOLUBITSKY AND D. G. SCHAEFFER, Singularities and Groups in Bifurcation Theory, Springer, New York, 1985.
[3] E. ISAACSON, D. MARCHESIN AND B. PLOHR, Transitional waves, preprint.
[4] D. G. SCHAEFFER AND M. SHEARER, The classification of 2 × 2 systems of nonstrictly hyperbolic conservation laws, with application to oil recovery, Comm. Pure Appl. Math., 40 (1987), pp. 141-178.
[5] D. G. SCHAEFFER AND M. SHEARER, Riemann problems for nonstrictly hyperbolic 2 × 2 systems of conservation laws, Trans. Amer. Math. Soc., 304 (1987), pp. 267-306.
[6] S. SCHECTER AND M. SHEARER, Undercompressive shocks for nonstrictly hyperbolic conservation laws, IMA preprint.
[7] M. SHEARER, D. G. SCHAEFFER, D. MARCHESIN AND P. J. PAES-LEME, Solution of the Riemann problem for a prototype 2 × 2 system of nonstrictly hyperbolic conservation laws, Arch. Rat. Mech. Anal., 97 (1987), pp. 299-320.
[8] M. SHEARER, The Riemann problem for 2 × 2 systems of hyperbolic conservation laws with Case I quadratic nonlinearities, J. Differential Equations, 80 (1989), pp. 343-363.

M. SLEMROD†

Abstract. We examine the asymptotic behavior of measure valued solutions to the initial value problem for the nonlinear heat conduction equation

    ∂u/∂t = ∇·q(∇u),  x ∈ Ω,

in a bounded domain Ω ⊂ R^N with boundary conditions of the form given below.*

*As major themes of this workshop are ill-posed and mixed initial value problems, it seems appropriate to report on some recent work of mine on a backward-forward heat equation which will appear in Slemrod (1989).

To be specific, consider the heat conduction equation

    ∂u/∂t = ∇·q(∇u),  x ∈ Ω,  t > 0,

in a bounded domain Ω ⊂ R^N. The boundary of Ω is assumed to be smooth. On ∂Ω we impose either homogeneous Dirichlet boundary values

    u = 0,  x ∈ ∂Ω,  t > 0,

or no-flux (insulated) boundary values

    q(∇u)·n = 0,  x ∈ ∂Ω,  t > 0,

where n(x) denotes the exterior unit normal at x ∈ ∂Ω. Also u will satisfy the initial condition

    u(x, 0) = u₀(x),  x ∈ Ω.

For convenience call the above systems P_D and P_N respectively. The main interest of this paper is that q will only be assumed to satisfy smoothness and growth conditions and Fourier's inequality λ·q(λ) ≥ 0.
*This research was supported in part by the Air Force Office of Scientific Research, Air Force Systems, USAF, under Contract/Grant No. AFOSR-87-0315. The United States Government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright herein.
†Center for Mathematical Sciences, University of Wisconsin, Madison, WI 53706

No monotonicity requirements such as (q(λ₁) − q(λ₂))·(λ₁ − λ₂) ≥ 0 for λ₁, λ₂ ∈ R^N will be imposed. Note the Fourier inequality is consistent with the classical Clausius-Duhem inequality (Truesdell, 1984, p. 116). Other assumptions on q are that (i) q is (at least) a continuous map: R^N → R^N satisfying growth conditions (ii), (iii) for all λ ∈ R^N, where c₁, c₂, c₃ are positive constants, and (iv) q is the gradient of a C¹(R^N; R) potential φ. The regularized problems P_{D,ε} and P_{N,ε} are obtained by adding a fourth-order term:

    ∂u^ε/∂t = ∇·q(∇u^ε) − εΔ²u^ε,  x ∈ Ω,  t > 0,
    u^ε(x, 0) = u₀(x),

with the corresponding boundary conditions. The regularization is not ad hoc and is in fact based on a higher order theory of heat conduction due to J. C. Maxwell (1876, eqns. (53), (54)); see also Truesdell and Noll (1965, p. 514). Problems P_{D,ε} and P_{N,ε} admit two natural "energy" estimates which motivate an attempt to pass to a weak limit as ε → 0⁺ and hence obtain a weak solution of P_D and P_N. Unfortunately the presence of the nonlinear terms q(∇u^ε) prevents the success of this venture. However, in the spirit of the work of L. Tartar (1979, 1982) and later work of R. DiPerna (1983a, b, c, 1985), M. Schonbek (1982), and R. DiPerna and A. Majda (1987a, b), the following information on the sequence {q(∇u^ε)} is known. Namely, if for 0 < T < ∞ we set Q_T = (0, T) × Ω, and ⇀ denotes weak convergence, then for q continuous satisfying growth conditions there is a subsequence {∇u^{ε′}} so that

    q(∇u^{ε′}) ⇀ ⟨q(λ), ν_{x,t}(λ)⟩  weakly in L¹(Q_T)

for a probability measure ν_{x,t}, where

    ⟨q(λ), ν_{x,t}(λ)⟩ = ∫_{R^N} q(λ) dν_{x,t}(λ),

and ν is called a Young measure. The above representation of weak limits of q(∇u^ε) permits a passage to weak limits in P_{D,ε} and P_{N,ε}.
These limits satisfy a measure theoretic version of P_D and P_N, and the associated u is called a "measure valued" solution of P_D or P_N (in the sense of DiPerna (1985)). The function u lies in L^∞((0, ∞); V), where V = H₀¹(Ω) for P_D and H¹(Ω) for P_N. Moreover it inherits a natural "energy" inequality from the regularized problems P_{D,ε} and P_{N,ε}. This inequality can be exploited to establish the trend to equilibrium of u as t → ∞ (see also Slemrod, 1989a, b, c).

Notation. We endow L²(Ω) with the usual inner product (u, v) for u, v ∈ L²(Ω), ‖u‖² = (u, u). Q_T denotes the cylinder Ω × (0, T), and for u, v ∈ L²(Q_T),

    (u, v)_{L²(Q_T)} = ∫∫_{Q_T} u(x, t) v(x, t) dx dt,  ‖u‖²_{L²(Q_T)} = (u, u)_{L²(Q_T)}.

For problem P_D we denote V = H₀¹(Ω), where H₀¹(Ω) is endowed with the inner product

    (u, v)₁ = ∫_Ω ∇u·∇v dx  for u, v ∈ H₀¹(Ω),  ‖u‖₁² = (u, u)₁.

For problem P_N we denote V = H¹(Ω), where H¹(Ω) is endowed with the inner product

    (u, v)₁ = ∫_Ω ∇u·∇v dx + (∫_Ω u dx)(∫_Ω v dx)  for u, v ∈ H¹(Ω),  ‖u‖₁² = (u, u)₁

(see Temam (1988)). For P_D we set

    (u, v)_{H¹(Q_T)} = ∫∫_{Q_T} ( (∂u/∂t)(∂v/∂t) + ∇u·∇v ) dx dt,

and for P_N we add the term ∫₀ᵀ (∫_Ω u dx)(∫_Ω v dx) dt. The subscript b will denote a uniformly bounded subset of an indicated set. For the problem P_D we denote the space of test functions W = C₀^∞(Ω). For the problem P_N we denote the space of test functions W = {w ∈ C^∞(Ω̄); ∂w/∂n = 0 on ∂Ω}. Let L_w^∞(Q_T; M(R^N)) denote the space of weak* measurable mappings μ: Q_T → M(R^N) that are essentially bounded with norm

    ‖μ‖ = ess sup_{(x,t)∈Q_T} ‖μ(x, t)‖_M.

(Recall μ is weak* measurable if ⟨μ(x, t), f⟩ is measurable with respect to (x, t) ∈ Q_T for every f ∈ C₀(R^N).) M(R^N) is the Banach space of bounded Radon measures over R^N; for ν ∈ M(R^N), ‖ν‖_M denotes its total variation. Prob(R^N) is the set of probability measures over R^N; for ν ∈ Prob(R^N) we write ⟨ν, f⟩ = ∫_{R^N} f(λ) dν(λ). C₀(R^N) denotes the Banach space of continuous functions f: R^N → R satisfying lim_{|λ|→∞} f(λ) = 0, with

    ‖f‖_{C₀(R^N)} = sup_{λ∈R^N} |f(λ)|.

The arrows →, ⇀, ⇀* denote strong, weak, and weak* convergence respectively. We now recall the concept of measure valued solution introduced by R.
DiPerna (1985). An element u ∈ H¹(Q_T) ∩ C([0, T]; L²(Ω)) ∩ L^∞((0, T); V) is a measure valued solution of P_D or P_N on Q_T if there exists a measure valued map ν: (x, t) ↦ ν_{x,t} from the physical domain Q_T to Prob(R^N), the space of probability measures over the state space domain R^N, so that

    (d/dt)(u, w) + (⟨q(λ), ν_{x,t}(λ)⟩, ∇w) = 0  for all w ∈ W a.e. in (0, T);
    ∇u = ⟨λ, ν_{x,t}(λ)⟩  a.e. in Q_T.

An element ū ∈ V is a measure valued equilibrium solution of P_D or P_N if there exists a measure valued map ν: (x, t) ↦ ν_{x,t} ∈ Prob(R^N) from Ω × [0, ∞) to Prob(R^N) so that

    (⟨q(λ), ν_{x,t}(λ)⟩, ∇w) = 0  for all w ∈ W a.e. on (0, ∞),

and ∇ū(x) = ⟨λ, ν_{x,t}(λ)⟩ a.e. in Ω × [0, ∞).

We state the fundamental theorem for Young measures as given by J. M. Ball (1988). Let S ⊂ R^n be Lebesgue measurable. Let K ⊂ R^m be closed, and let z^(j): S → R^m, j = 1, 2, ..., be a sequence of Lebesgue measurable functions satisfying z^(j)(·) → K in measure as j → ∞, i.e. given any open neighborhood U of K in R^m,

    lim_{j→∞} meas {y ∈ S; z^(j)(y) ∉ U} = 0.

Then there exists a subsequence z^(μ) of z^(j) and a family {ν_y}, y ∈ S, of positive measures on R^m, depending measurably on y, so that

    (i) ‖ν_y‖_M = ∫_{R^m} dν_y ≤ 1 a.e. in y ∈ S;
    (ii) supp ν_y ⊂ K for almost all y ∈ S;
    (iii) f(z^(μ)) ⇀* ⟨ν_y, f⟩ = ∫_{R^m} f(λ) dν_y(λ) in L^∞(S) for each continuous function f ∈ C₀(R^m).

Suppose further that {z^(μ)} satisfies the boundedness condition

    lim_{k→∞} sup_μ meas {y ∈ S ∩ B_R : |z^(μ)(y)| ≥ k} = 0  for every R > 0,

where B_R = {y ∈ R^n; |y| ≤ R}. Then ‖ν_y‖_M = 1 for a.e. y ∈ S (i.e. ν_y is a probability measure), and given any measurable subset A of S,

    f(z^(μ)) ⇀ ⟨ν_y, f⟩ in L¹(A)

for any continuous function f: R^m → R such that {f(z^(μ))} is sequentially weakly relatively compact in L¹(A).

As noted in Ball (1988) the boundedness condition is very weak and is equivalent to the following: given any R > 0 there exists a continuous nondecreasing function g_R: [0, ∞) → R with lim_{t→∞} g_R(t) = ∞, such that

    sup_μ ∫_{S∩B_R} g_R(|z^(μ)(y)|) dy < ∞.

Furthermore, Ball notes that if A is bounded, the condition that {f(z^(μ))} be sequentially weakly relatively compact in L¹(A) is satisfied if and only if

    sup_μ ∫_A ψ(|f(z^(μ)(y))|) dy < ∞

for some continuous function ψ: [0, ∞) → R with lim_{t→∞} ψ(t)/t = ∞ (de la Vallée Poussin's criterion; cf. MacShane (1947), Dellacherie & Meyer (1975), Natanson (1955)).

To construct measure valued solutions of P_D or P_N we proceed as follows. First let u^ε be a classical smooth solution of either P_{D,ε} or P_{N,ε}. Then for all t, τ ∈ R⁺:

    ‖u^ε(t + τ)‖² − ‖u^ε(t)‖² = −2 ∫_t^{t+τ} (∇u^ε(s), q(∇u^ε(s))) ds − 2ε ∫_t^{t+τ} ‖Δu^ε(s)‖² ds,

and

    ∫_0^t ‖∂u^ε/∂s (s)‖² ds + ∫_Ω φ(∇u^ε(x, t)) dx + (ε/2) ‖Δu^ε(t)‖² = ∫_Ω φ(∇u₀(x)) dx + (ε/2) ‖Δu₀‖².

Furthermore, for P_{N,ε},

    ∫_Ω u^ε(x, t) dx = ∫_Ω u₀(x) dx  for all t ∈ R⁺.

From the above estimates we observe that for any T,

    {u^ε} ⊂ L_b^∞((0, ∞); V) ∩ H¹(Q_T),
    {ε^{1/2} Δu^ε} ⊂ L_b²((0, ∞); L²(Ω)),
    {∂u^ε/∂t} ⊂ L_b²((0, ∞); L²(Ω)).

Furthermore there exists a subsequence of {u^ε}, also denoted by {u^ε}, and u with

    u ∈ L^∞((0, ∞); V) ∩ H¹(Q_T) ∩ C([0, T]; L²(Ω)),  ∂u/∂t ∈ L²((0, ∞); L²(Ω)),

so that

    (a) u^ε ⇀* u in L^∞((0, ∞); V);
    (b) ∇u^ε ⇀* ∇u in L^∞((0, ∞); L²(Ω));
    (c) ∂u^ε/∂t ⇀ ∂u/∂t in L²((0, ∞); L²(Ω));
    (d) u^ε ⇀ u in H¹(Q_T);
    (e) u^ε → u in C([0, T]; L²(Ω));
    (f) u(t) → u₀ in L²(Ω) as t → 0⁺.

We can then use (a)–(f) above and the fundamental theorem on Young measures to conclude there exists a further subsequence, again denoted by {u^ε}, and a probability measure ν_{x,t}, (x, t) ∈ Ω × R⁺, so that for every bounded subset A ⊂ Ω × R⁺ and every f(λ) satisfying |f(λ)| ≤ const.(1 + |λ|^r), r < 2, λ ∈ R^N, we have

    f(∇u^ε) ⇀ ⟨ν_{x,t}, f⟩ in L¹(A),

and ∇u = ⟨ν_{x,t}, λ⟩ a.e. in A.
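The weak-limit machinery above can be illustrated on a model oscillating sequence (illustrative only; this is not the gradient sequence ∇u^ε of the paper). For z^(j)(y) = sin(2πjy) on S = [0, 1], the Young measure is independent of y, and weak limits of f(z^(j)) are the moments ⟨ν, f⟩, the average of f(sin s) over one period:

```python
import numpy as np

# Young measure of the oscillating sequence z_j(y) = sin(2*pi*j*y) on S = [0, 1]:
# f(z_j) converges weakly (as j -> infinity) to the constant <nu, f>,
# the average of f(sin s) over one period.  For this arcsine-type measure,
# <nu, lambda^2> = 1/2 and <nu, lambda^4> = 3/8.
j, N = 40, 200000
y = (np.arange(N) + 0.5) / N          # midpoint grid on [0, 1]
z = np.sin(2 * np.pi * j * y)

m2 = np.mean(z**2)                    # weak limit of z_j^2
m4 = np.mean(z**4)                    # weak limit of z_j^4
loc = np.mean(z**2 * y)               # against the test function g(y) = y
print(m2, m4, loc)
```

The last line checks the localized statement: the average of z_j² against g(y) = y converges to ⟨ν, λ²⟩ ∫₀¹ y dy = 1/4, which is the sense in which f(z^(μ)) ⇀ ⟨ν_y, f⟩ in L¹(A).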
We then see that u is a measure valued solution of the relevant initial-boundary value problem P_D or P_N. Furthermore, if we choose a continuous function g with g(λ) ≤ λ·q(λ) and |g(λ)| ≤ const.(1 + |λ|^r), r < 2 (see the Remark below), then u satisfies the "energy" inequality

(3.5)  ‖u(t + τ)‖² − ‖u(t)‖² ≤ −2 ∫_0^τ ∫_Ω ⟨g(λ), ν_{x,s+t}⟩ dx ds.

Remark. The idea of minorizing λ·q(λ) by g(λ) was given to the author by Professor E. Zuazua. It allows the application of the fundamental theorem on Young measures to the energy inequality (3.5) without putting additional growth restrictions on q.

Having established the existence of measure valued solutions we now turn to their asymptotic behavior. In what follows we shall assume u₀ is smooth, as is necessary for P_{D,ε} and P_{N,ε} to have classical smooth solutions. O⁺(u₀) = ⋃_{t>0} u(t; u₀) defines the positive orbit in V through u₀. We already know u ∈ L^∞((0, ∞); V), and so O⁺(u₀) ⊆ B ⊂ V, where B is a bounded set endowed with a metrized weak-V topology with metric d. Define the w-distance between two sets B₁, B₂ ⊆ B by

    w-dist(B₁, B₂) = sup_{x∈B₁} inf_{y∈B₂} d(x, y).

Finally define the weak ω-limit set of O⁺(u₀) by

    ω_w(u₀) = {χ ∈ V; u(t_n; u₀) ⇀ χ in V as n → ∞ for some sequence {t_n}, t_n → ∞}.

It thus follows that ω_w(u₀) is non-empty and w-dist(u(t; u₀), ω_w(u₀)) → 0 as t → ∞.

Now we are able to state the main result of Slemrod (1989d); namely, we can give a categorization of ω_w(u₀).

Theorem. Let χ ∈ ω_w(u₀). Then χ is a measure valued equilibrium solution of the P_D (P_N) evolution equations, i.e. there is a probability measure ν_{x,t} with supp ν_{x,t} ⊂ ker q, hence satisfying ⟨q(λ), ν_{x,t}⟩ = 0, and ∇χ = ⟨λ, ν_{x,t}⟩ a.e. in Q_T. Moreover, if ker qᵢ ⊆ [aᵢ, bᵢ], i = 1, ..., N, then aᵢ ≤ (∇χ)ᵢ ≤ bᵢ a.e. in Ω, i = 1, ..., N.

From the above theorem we can establish the following corollaries.

Corollary. For problem P_D: if ker qᵢ = {0} for some i, 1 ≤ i ≤ N, then (∇χ)ᵢ = 0 a.e. in Ω.

Corollary. For problem P_D: if for each i, ker qᵢ ⊆ R⁻ or ker qᵢ ⊆ R⁺, 1 ≤ i ≤ N, then ω_w(u₀) = {0} and, for any u₀, u(t; u₀) ⇀ 0 as t → ∞ in H₀¹(Ω).
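The decay asserted in the corollaries for P_D can be observed in a crude discretization. The sketch below integrates a 1-D periodic analogue u_t = (q(u_x))_x − ε u_xxxx with the non-monotone flux q(s) = s(1 − s)², which satisfies Fourier's inequality s q(s) ≥ 0 but is not monotone; the grid, flux, data and ε are all illustrative assumptions, not taken from the paper. The conservative differencing reproduces the energy inequality discretely: the L² norm is nonincreasing even where q′ < 0, and the mean is conserved:

```python
import numpy as np

# 1-D regularized backward-forward heat equation (periodic BC, explicit Euler):
#   u_t = (q(u_x))_x - eps * u_xxxx,   q(s) = s*(1-s)^2  (s*q(s) >= 0).
N, eps, dt, steps = 64, 1e-3, 2.0e-6, 4000
dx = 1.0 / N
x = np.arange(N) * dx
u = 0.1 * np.sin(2 * np.pi * x)       # gradients enter the backward region q' < 0

def q(s):
    return s * (1 - s) ** 2

def Dp(v):                            # forward difference (periodic)
    return (np.roll(v, -1) - v) / dx

def Dm(v):                            # backward difference (periodic)
    return (v - np.roll(v, 1)) / dx

norms = [np.sqrt(dx * np.sum(u * u))]
mass0 = dx * np.sum(u)
for _ in range(steps):
    lap = Dm(Dp(u))
    u = u + dt * (Dm(q(Dp(u))) - eps * Dm(Dp(lap)))
    norms.append(np.sqrt(dx * np.sum(u * u)))
print(norms[0], norms[-1], dx * np.sum(u) - mass0)
```

Summation by parts on the periodic grid gives d/dt ½Σu²dx = −Σ (D⁺u) q(D⁺u) dx − ε Σ (D²u)² dx ≤ 0, the discrete counterpart of the inequality (3.5), so the norm decays even though the equation is locally backward where q′ < 0.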
Corollary. For problem P_N: if ker q = {0}, then ω_w(u₀) = {c} (a constant), c = (meas Ω)⁻¹ ∫_Ω u₀ dx, and for any u₀, u(t; u₀) ⇀ c as t → ∞ in H¹(Ω).

Examples. 1) Consider the case N = 1 and q possessing the graph shown in Figure 1. For problem P_D: u(t; u₀) ⇀ 0 as t → ∞ in H₀¹(Ω). For problem P_N: w-dist(u(t; u₀), ω_w(u₀)) → 0 in H¹(Ω) as t → ∞, where

    ω_w(u₀) ⊆ {χ; measure valued equilibrium solutions of P_N with 0 ≤ χ′(x) ≤ ξ₁ a.e. in Ω}.

Figure 1

2) Consider the case N = 1 and q possessing the graph shown in Figure 3. For problems P_D (P_N): w-dist(u(t; u₀), ω_w(u₀)) → 0 in V as t → ∞, where

    ω_w(u₀) ⊆ {χ; measure valued equilibrium solutions of P_D (P_N) with ξ₀ ≤ χ′(x) ≤ ξ₁ a.e. in Ω}.

Figure 2    Figure 3

REFERENCES

J. M. Ball (1988), A version of the fundamental theorem for Young measures, to appear, Proc. CNRS-NSF Workshop on Continuum Theory of Phase Transitions, Nice, France, January 1988, eds. M. Rascle and D. Serre, Springer Lecture Notes in Mathematics.
C. Dellacherie and P.-A. Meyer (1975), Probabilités et Potentiel, Hermann, Paris.
R. J. DiPerna (1983a), Convergence of approximate solutions to conservation laws, Archive for Rational Mechanics and Analysis 82, pp. 27-70.
R. J. DiPerna (1983b), Convergence of the viscosity method for isentropic gas dynamics, Comm. Math. Physics 91, pp. 1-30.
R. J. DiPerna (1983c), Generalized solutions to conservation laws, in Systems of Nonlinear Partial Differential Equations, NATO ASI Series, ed. J. M. Ball, D. Reidel.
R. J. DiPerna (1985), Measure-valued solutions to conservation laws, Archive for Rational Mechanics and Analysis 88, pp. 223-270.
R. J. DiPerna and A. J. Majda (1987a), Concentrations and regularizations in weak solutions of the incompressible fluid equations, Comm. Math. Physics 108, pp. 667-689.
R. J. DiPerna and A. J. Majda (1987b), Concentrations in regularizations for 2-D incompressible flow, Comm. Pure and Applied Math. 40, pp. 301-345.
E. J. MacShane (1947), Integration, Princeton Univ. Press.
J. C.
Maxwell (1876), On stresses in rarified gases arising from inequalities of temperature, Phil. Trans. Roy. Soc. London 170, pp. 231-256 = Papers 2, pp. 680-712.
I. P. Natanson (1955), Theory of Functions of a Real Variable, vol. 1, F. Unger Publishing Co., New York.
M. E. Schonbek (1982), Convergence of solutions to nonlinear dispersive equations, Comm. in Partial Differential Equations 7, pp. 959-1000.
M. Slemrod (1989a), Weak asymptotic decay via a "relaxed invariance principle" for a wave equation with nonlinear, nonmonotone damping, to appear, Proc. Royal Soc. Edinburgh.
M. Slemrod (1989b), Trend to equilibrium in the Becker-Döring cluster equations, to appear, Nonlinearity.
M. Slemrod (1989c), The relaxed invariance principle and weakly dissipative infinite dimensional dynamical systems, to appear, Proc. Conf. on Mixed Problems, ed. K. Kirchgässner, Springer Lecture Notes.
M. Slemrod (1989d), Dynamics of measure valued solutions to a backward-forward heat equation, submitted to Dynamics and Differential Equations.
L. Tartar (1979), Compensated compactness and applications to partial differential equations, in "Nonlinear Analysis and Mechanics", Heriot-Watt Symposium IV, Pitman Research Notes in Mathematics, pp. 136-192.
R. Temam (1988), Infinite-Dimensional Dynamical Systems in Mechanics and Physics, Springer-Verlag, New York.
C. Truesdell (1984), Rational Thermodynamics, Second Edition, Springer-Verlag, New York.
C. Truesdell and W. Noll (1965), The Non-Linear Field Theories of Mechanics, in Encyclopedia of Physics, ed. S. Flügge, Vol. III/3, Springer-Verlag, Berlin-Heidelberg-New York.

ONE-DIMENSIONAL THERMOMECHANICAL PHASE TRANSITIONS WITH NON-CONVEX POTENTIALS OF GINZBURG-LANDAU TYPE

JÜRGEN SPREKELS*

Abstract. In this paper we study the system of partial differential equations governing the nonlinear thermomechanical processes in non-viscous, heat-conducting, one-dimensional solids.
To allow for both stress- and temperature-induced solid-solid phase transitions in the material, possibly accompanied by hysteresis effects, a non-convex free energy of Ginzburg-Landau form is assumed. Results concerning the well-posedness of the problem, as well as the numerical approximation and the optimal control of the solutions, are presented in the paper, in particular in connection with the austenitic-martensitic phase transitions in the so-called "shape memory alloys".

Key words. phase transitions, non-convex potentials, Ginzburg-Landau theory, shape memory alloys, hysteresis, conservation laws.

AMS(MOS) subject classifications. 35L65, 35K60, 73U05, 73B30

1. Introduction. In this paper we consider thermomechanical processes in non-viscous, one-dimensional heat-conducting solids of constant density ρ (assumed normalized to unity) that are subjected to heating and loading. We think of metallic solids that do not only respond to a change of the strain ε = u_x (u stands for the displacement) by an elastic stress σ = σ(ε), but also react to changes of the curvature of their metallic lattices by a couple stress μ = μ(ε_x). Thus, the corresponding free energy F is assumed in Ginzburg-Landau form, i.e., F = F(ε, ε_x, θ), where θ is the absolute temperature. In the framework of the Landau theory of phase transitions, the strain ε plays the role of an "order parameter", whose actual value determines what phase is prevailing in the material (see [3]). Since we are interested in solid-solid phase transitions, driven by loading and/or heating, which are accompanied by hysteresis effects, we do not assume that F(ε, ε_x, θ) is a convex function of the order parameter ε for all values of (ε_x, θ). A particularly interesting class of materials are the metallic alloys exhibiting the so-called "shape memory effect". Among those there are alloys like CuZn, CuSn, AuCuZn₂, AgCd and, most important, TiNi (so-called Nitinol).
In these materials, the metallic lattice is deformed by shear, and the assumption of a constant density is justified. The relation between shear stress and shear strain (σ-ε-curves) of shape memory alloys exhibits a rather spectacular temperature-dependent hysteretic behavior (see [2] for an account of the properties of shape memory alloys):

[Fig. 1. Typical σ-ε-curves in shape memory alloys, with temperature θ increasing from a) to c).]

In addition, for sufficiently small shear stresses another hysteresis occurs in the ε-θ-diagrams:

[Fig. 2. ε-θ-curves in shape memory alloys for different values of σ.]

On the microscopic scale, this hysteretic behaviour is ascribed to first-order stress-induced (fig. 1a,b) or temperature-induced (fig. 2) phase transitions between different configurations of the metallic lattice, namely the symmetric high-temperature phase "austenite" (taken as reference configuration) and its two oppositely oriented sheared versions termed "martensitic twins", which prevail at low temperatures (cf. [6], [7]). The simplest form for the free energy F which matches the experimental evidence given by figs. 1, 2 quite well and takes interfacial energies into account is given by (1.2) (cf. [4], [5]), where c_v denotes the specific heat, C is some constant, θ1 and θ2 are (positive) temperatures and κ1, κ2, κ3, γ are positive constants. A complete set of data for the alloy AuCuZn₂ is given in [5]. Note that within the range of interesting temperatures, for θ → θ1, F is not convex as a function of ε.

*Fachbereich Bauwesen, Universität-GHS-Essen, D-4300 Essen 1, West Germany (visiting at IMA). This work was supported by Deutsche Forschungsgemeinschaft (DFG), SPP "Anwendungsbezogene Optimierung und Steuerung".
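For orientation: the standard Falk free energy for shape memory alloys, which is built from exactly the constants listed above, has the form (a hedged reconstruction for illustration, not a verbatim quotation of (1.2); cf. [4], [5]):

```latex
F(\varepsilon,\varepsilon_x,\theta)
   = -c_v\,\theta\log(\theta/\theta_2) + C\,\theta
     + \kappa_1(\theta-\theta_1)\,\varepsilon^2
     - \kappa_2\,\varepsilon^4
     + \kappa_3\,\varepsilon^6
     + \frac{\gamma}{2}\,\varepsilon_x^2 .
```

For θ < θ1 the coefficient of ε² is negative, so the ε-polynomial develops multiple wells, which is the non-convexity in ε noted above; the term (γ/2)ε_x² accounts for the interfacial energies.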
In the sequel, we assume F in the somewhat more general form (with positive κ1, κ2, γ)

(1.3)   F(ε, ε_x, θ) = c_v θ (1 − log θ) + κ1 θ F1(ε) + κ2 F2(ε) + (γ/2) ε_x²,

where F1 and F2 satisfy the hypothesis (H1).

The dynamics of thermomechanical processes in a solid are governed by the conservation laws of linear momentum, energy and mass. The latter may be ignored, since ρ is constant for the materials under consideration (we assume ρ ≡ 1). The two others read

(1.4a)   u_tt − σ_x + μ_xx = f,
(1.4b)   e_t + q_x − σ ε_t − μ ε_xt = g.

Here the involved quantities have their usual meanings, namely: σ - elastic stress, μ - couple stress, u - displacement, f - density of loads, e - density of internal energy, q - heat flux, g - density of heat sources or sinks. We have the constitutive relations

(1.5)   σ = ∂F/∂ε,   μ = ∂F/∂ε_x,   e = F − θ ∂F/∂θ,

and we assume the heat flux in the Fourier form

(1.6)   q = −κ θ_x,

where κ > 0 is the heat conductivity. Notice that (1.6) implies that the second principle of thermodynamics in form of the Clausius-Duhem inequality is automatically satisfied. Inserting (1.3), (1.5), (1.6) in the balance laws and assuming a one-dimensional sample of unit length, we obtain the system

(1.7a)   u_tt − (κ1 θ F1′(ε) + κ2 F2′(ε))_x + γ u_xxxx = f,
(1.7b)   c_v θ_t − κ1 θ F1′(ε) ε_t − κ θ_xx = g,
(1.7c)   ε = u_x,

to be satisfied in the space-time cylinder Ω_T, where T > 0, Ω = (0,1), and, for t > 0, Ω_t := Ω × (0, t). In addition, we prescribe the initial and boundary conditions

(1.7d)   u(x,0) = u0(x), u_t(x,0) = u1(x), θ(x,0) = θ0(x), x ∈ Ω̄,
(1.7e)   u(0,t) = u_xx(0,t) = u(1,t) = u_xx(1,t) = 0, t ∈ [0,T],
(1.7f)   θ_x(0,t) = 0, −κ θ_x(1,t) = β(θ(1,t) − θ_Γ(t)), t ∈ [0,T],

where β > 0 is a heat exchange coefficient, and θ_Γ stands for the outside temperature at x = 1. In the following sections we state some results concerning the well-posedness of the system (1.7a-f), including a convergent numerical algorithm for its approximate solution. To abbreviate the exposition, all constants in (1.7a-f) are assumed to equal unity; this will have no bearing on the mathematical analysis.

2. Well-posedness. We consider (1.7a-f).
In addition to (H1), we generally assume:

(H2) u0 ∈ H̃⁴(Ω) := {u ∈ H⁴(Ω) | u(0) = u(1) = u″(0) = u″(1) = 0}; u1 ∈ H¹(Ω) ∩ H²(Ω); θ0 ∈ H²(Ω), θ0(x) > 0 ∀ x ∈ Ω̄.

(H3) θ0′(0) = 0, θ_Γ(0) = θ0(1) + θ0′(1) (compatibility).

(H4) f, g ∈ H¹(0,T; H¹(Ω)), θ_Γ ∈ H¹(0,T), where g(x,t) ≥ 0 on Ω̄_T and θ_Γ(t) > 0 on [0,T].

We have the result:

THEOREM 2.1. Suppose (H1)-(H4) hold. Then (1.7a-f) has a unique solution (u, θ) which satisfies

(2.1a)   u ∈ W^{2,∞}(0,T; L²(Ω)) ∩ W^{1,∞}(0,T; H¹(Ω) ∩ H²(Ω)) ∩ L^∞(0,T; H̃⁴(Ω)),
(2.1b)   θ ∈ H¹(0,T; H¹(Ω)) ∩ L²(0,T; H³(Ω)),
(2.1c)   θ(x,t) > 0 on Ω̄_T.

Moreover, the operator (f, g, θ_Γ) ↦ (u, θ) maps bounded subsets of H¹(0,T; H¹(Ω)) × H¹(0,T; H¹(Ω)) × M into bounded subsets of X × Y, where M := {z ∈ H¹(0,T) | z(t) > 0 on [0,T], z(0) = θ0(1) + θ0′(1)}, X := W^{2,∞}(0,T; L²(Ω)) ∩ W^{1,∞}(0,T; H¹(Ω) ∩ H²(Ω)) ∩ L^∞(0,T; H̃⁴(Ω)) and Y := H¹(0,T; H¹(Ω)) ∩ L²(0,T; H³(Ω)).

Proof. The existence result is easily obtained by combining the Galerkin approximation employed in the proof of Theorem 2.1 in [9] with the a priori estimates derived in the proof of Theorem 2.1 in [12]; the uniqueness is a direct consequence of the subsequent Theorem 2.3. Finally, the boundedness of the mapping (f, g, θ_Γ) ↦ (u, θ) follows from the above-mentioned a priori estimates. □

A sharper existence result, with regard to the smoothness properties of the solution (u, θ), has been established in [12]:

THEOREM 2.2. Suppose that, in addition to (H1)-(H4), the following assumptions on the data of (1.7a-f) are satisfied:

(2.2)   u0 ∈ H⁵(Ω), u1 ∈ H³(Ω), θ0 ∈ H⁴(Ω), f_tt ∈ L²(Ω_T), g ∈ L²(0,T; H²(Ω)), θ_Γ ∈ H²(0,T).

Furthermore, suppose that θ0 satisfies compatibility conditions of sufficiently high order. Then (1.7a-f) has a unique classical solution (u, θ), and all the partial derivatives appearing in (1.7a-c) belong to the Hölder class C^{α,α/2}(Ω̄_T), for some α ∈ (0,1).

Proof. See Theorem 2.1 in [12].
□ We now derive a stability result with respect to the data (f, g, θ_Γ) which guarantees the uniqueness of the solution (u, θ).

THEOREM 2.3. Suppose the general hypotheses (H1), (H2) and θ_Γ(0) = θ0(1) + θ0′(1) are satisfied. We consider the variational problem satisfied by the differences u = u^(1) − u^(2), θ = θ^(1) − θ^(2) of two solutions corresponding to data (f^(1), g^(1), θ_Γ^(1)) and (f^(2), g^(2), θ_Γ^(2)); below, (f, g, θ_Γ) denotes the corresponding difference of the data. [...] There exist α1 > 0, α2 > 0 such that

‖θ(τ)‖²_{L⁴(Ω)} ≤ α1 ‖θ_x(τ)‖ ‖θ(τ)‖ ≤ δ ‖θ_x(τ)‖² + C10 ‖θ(τ)‖².

Since θ_x^(2) ∈ L^∞(0,T; L²(Ω)), this implies that

∫₀^t ∫_Ω |θ_x^(2)|² θ² dx dτ ≤ C11 δ ∫₀^t ‖θ_x(τ)‖² dτ + C12 ∫₀^t (‖θ(τ)‖² + ‖u_xx(τ)‖² + ‖u_t(τ)‖²) dτ.

c) Finally we have

| ∫₀^t ∫_Ω θ [θ^(1) u_xx^(1) u_t^(1) − θ^(2) u_xx^(2) u_t^(2)] dx dτ | ≤ C13 ∫₀^t (‖θ(τ)‖² + ‖u_xx(τ)‖² + ‖u_t(τ)‖²) dτ.

Summarizing the inequalities (2.12), (2.14), (2.18) and (2.19), we have shown that

(2.20)   (1/2)‖θ(t)‖² + ∫₀^t ‖θ_x(τ)‖² dτ ≤ C14 δ ∫₀^t ‖θ_x(τ)‖² dτ + (1/2)‖g‖²_{L²(Ω_t)} + (1/2)‖θ_Γ‖²_{L²(0,T)} + C15 ∫₀^t (‖θ(τ)‖² + ‖u_xx(τ)‖² + ‖u_t(τ)‖²) dτ.

Adding (2.11) and (2.20), adjusting δ > 0 sufficiently small, and invoking Gronwall's lemma, we have finally proved the assertion. □

3. Optimal Control. We now turn our interest to optimal control problems associated with the system (1.7a-f). It is of considerable interest in the technological application of shape memory alloys to control the evolution of the austenitic-martensitic phase transitions in the material; in this connection, a typical object is to influence the system via the natural control variables f, g, θ_Γ in such a way that a desired distribution of the phases in the material is produced. Since the phase transitions are characterized by the order parameter ε, it is natural to use ε as the main variable in the cost functional. We consider the following control problem:

(CP)   Minimize J(u, θ; f, g, θ_Γ) = ∫₀^T ∫_Ω L1(x, t, u_x(x,t), θ(x,t), f(x,t), g(x,t)) dx dt + ∫₀^T L2(t, θ_Γ(t)) dt + ∫_Ω L3(x, u_x(x,T), θ(x,T)) dx,

subject to (1.7a-f) and the side condition (f, g, θ_Γ) ∈ X, where X denotes some nonempty, bounded, closed and convex subset of H¹(0,T; H¹(Ω)) × {g ∈ H¹(0,T; H¹(Ω)) | g(x,t) ≥ 0 on Ω̄_T} × M.

For L1: R⁶ → R, L2: R² → R, L3: R³ → R, we assume:

(H5) (i) L1, L2, L3 are measurable with respect to the variables (x,t), resp. t, resp. x, and continuous with respect to the other variables.
(ii) L1 is convex with respect to f and g.
(iii) L2 is convex with respect to θ_Γ.

These assumptions are natural in the framework of optimal control; a typical form for J would be

J(u, θ; f, g, θ_Γ) = β1 ‖u_x − ū_x‖²_{L²(Ω_T)} + β2 ‖θ − θ̄‖²_{L²(Ω_T)} + β3 ‖u_x(·,T) − ũ‖² + β4 ‖θ(·,T) − θ̃‖² + β5 ‖f‖²_{L²(Ω_T)} + β6 ‖g‖²_{L²(Ω_T)} + β7 ‖θ_Γ‖²_{L²(0,T)},

with βi ≥ 0, but not all zero, and functions ū_x, θ̄ ∈ L²(Ω_T), ũ, θ̃ ∈ L²(Ω), representing the desired strain and temperature distributions during the evolution and at t = T. There holds

THEOREM 3.1. Suppose (H1)-(H5) are true; then (CP) has a solution (ū, θ̄; f̄, ḡ, θ̄_Γ).

Proof. Let {(f_n, g_n, θ_{Γ,n})} ⊂ X denote a minimizing sequence, and let (u_n, θ_n) denote the solution of (1.7a-f) associated with (f_n, g_n, θ_{Γ,n}), n ∈ N. Since X is bounded, we may assume that

(3.2)   f_n → f̄ weakly in H¹(0,T; H¹(Ω)),  g_n → ḡ weakly in H¹(0,T; H¹(Ω)),  θ_{Γ,n} → θ̄_Γ weakly in H¹(0,T).

Due to the weak closedness of the convex and closed set X, (f̄, ḡ, θ̄_Γ) ∈ X. Let (ū, θ̄) denote the associated solution of (1.7a-f). Now, owing to the boundedness of X and Theorem 2.1, {(u_n, θ_n)}_{n∈N} is a bounded subset of X × Y. Therefore, we may assume that for some (u, θ) ∈ X × Y there holds

(3.3)   u_n → u uniformly on Ω̄_T,  u_{n,xx} → u_xx,  u_{n,xt} → u_xt,  u_{n,xxxx} → u_xxxx,

as well as θ_{n,x} → θ_x, θ_{n,t} → θ_t, θ_{n,xx} → θ_xx, in a suitable weak sense. Passing to the limit as n → ∞ in the equations (1.7a-f) shows that (u, θ) solves (1.7a-f) for the data (f̄, ḡ, θ̄_Γ), i.e., u = ū, θ = θ̄. Hence, (ū, θ̄; f̄, ḡ, θ̄_Γ) is admissible and, in view of (H5),

(3.4)   J(ū, θ̄; f̄, ḡ, θ̄_Γ) ≤ lim inf_{n→∞} J(u_n, θ_n; f_n, g_n, θ_{Γ,n}) = inf (CP).

Thus (ū, θ̄; f̄, ḡ, θ̄_Γ) is a solution of (CP). □

REMARKS. 5.
The above way of arguing follows the lines of [10], where a related result was derived for a much more restricted free energy F.

6. It is natural to look for necessary conditions of optimality for the optimal controls of (CP). A corresponding result has not yet been derived.

7. The problem of the automatic self-regulation of the system via a fixed feedback control regulating the boundary temperature θ_Γ has been considered in [11].

4. Numerical Approximation. In this section we follow the lines of [8]. We assume the free energy in the special form (see (1.2))

(4.1)   F(ε, ε_x, θ) = −θ log θ + θ + (1/2) θ ε² − (1/4) ε⁴ + (1/6) ε⁶ + (1/2) ε_x².

Let

(4.2)   P̂(ε) := −(1/4) ε⁴ + (1/6) ε⁶,

(4.3)   W(ε1, ε2) := (P̂(ε1) − P̂(ε2)) / (ε1 − ε2),

where W(ε1, ε2) is a polynomial of degree 5 in ε1, ε2. We are going to construct a numerical scheme for the approximate solution of (1.7a-f). To this end, we assume that (H1)-(H4) and (2.2) hold, so that Theorem 2.2 applies. Now let K, N, M ∈ N be chosen. We put h = T/M, t_m^(M) = mh for 0 ≤ m ≤ M, and x_i^(N) = i/N for 0 ≤ i ≤ N. Define

(4.4)   Y_N = {linear splines on [0,1] corresponding to the partition {x_i^(N)}_{i=0}^N of [0,1]},

and let

(4.5)   Z_K = span{z_1, ..., z_K},

where z_j denotes the j-th eigenfunction of the eigenvalue problem

(4.6)   z″″ = λ z in (0,1),  z(0) = z″(0) = z(1) = z″(1) = 0.

We introduce the projection operators

(4.7)   P_K = H⁴(0,1)-orthogonal projection onto Z_K,  Q_K = H²(0,1)-orthogonal projection onto Z_K,  R_N = H¹(0,1)-orthogonal projection onto Y_N,

and the averages

(4.8)   f_m^M(x) = (1/h) ∫_{(m−1)h}^{mh} f(x,t) dt,  g_m^M(x) = (1/h) ∫_{(m−1)h}^{mh} g(x,t) dt,  θ_{Γ,m}^M = (1/h) ∫_{(m−1)h}^{mh} θ_Γ(t) dt.

We then consider the discrete problem (D_{M,N,K}): Find u^m = Σ_{k=1}^K α_k^m z_k ∈ Z_K and θ^m ∈ Y_N, 1 ≤ m ≤ M, such that

(4.9a)   ∫_Ω [ (u^m − 2u^{m−1} + u^{m−2})/h² ξ + (1/2) θ^{m−1} (u_x^m + u_x^{m−1}) ξ_x + W(u_x^m, u_x^{m−1}) ξ_x + u_xx^m ξ_xx − f_m^M ξ ] dx = 0,  ∀ ξ ∈ Z_K,

(4.9b)   ∫_Ω [ (θ^m − θ^{m−1})/h η − (1/2) θ^m ((u_x^m)² − (u_x^{m−1})²)/h η + θ_x^m η_x − g_m^M η ] dx + (θ^m(1) − θ_{Γ,m}^M) η(1) = 0,  ∀ η ∈ Y_N,

(4.9c)   u^0 = P_K(u0),  (u^0 − u^{−1})/h = Q_K(u1),  θ^0 = R_N(θ0).

The following result has been shown in [8]:

THEOREM 4.1.
Suppose (H1)-(H4) and (2.2) are true, and let N be sufficiently large. Then there exist constants δ1 > 0, δ2 > 0, which do not depend on M, N, K [...]

[...] with the constant states u_L, u_R ∈ Rⁿ. In the context of genuinely nonlinear strictly hyperbolic systems this problem is solved by connecting the states u_L and u_R at infinity by constant states which are connected via shock and rarefaction waves (cp. Chapter 17 in Smoller [48], Lax [29], [30], [31]).

2.5. A weak solution to (2.1) that is discontinuous along a smooth curve Γ ⊂ R² must satisfy the jump condition, called the Rankine-Hugoniot condition in gas dynamics (cp. Smoller [48] Chapter 15.3),

(2.7)   s · [u] = [f(u)]  on Γ.

Here s = s(u_L, u_R) = dx/dt is the local speed of the discontinuity (shock), i.e. the tangent to Γ with respect to the t-axis. By [g(u)] we denote the difference g(u_L) − g(u_R) for any smooth function g: Rⁿ → Rⁿ. We will restrict our investigation of the Riemann problem to states u_L, u_R that satisfy (2.7) for some s ∈ R. This means that we only consider solutions taking the constant values u_L, u_R to the left resp. right of the shock curve Γ (right being the direction of the positive x-axis). We study the solution set of the Rankine-Hugoniot equations

(2.8)   s(u_L − u) = f(u_L) − f(u)

for an arbitrarily given u_L ∈ Rⁿ. Then (2.8) gives a system of n equations in the n + 1 unknowns (s, u_1, ..., u_n). The Jacobian of (2.8) is

(2.9)   J = (u_L − u, −s Id + f′(u)).

On the line L = {(s, u_L) | s ∈ R} the Jacobian has rank n if and only if s ≠ λ1(u_L), ..., λn(u_L). Suppose the latter to be the case; then the implicit function theorem gives a unique differentiable solution curve locally around u_L. This curve must be a part of the line L, since all points of L trivially solve (2.8). These points are not interesting, because on L one has u_L = u_R, i.e. no shock occurs. The interesting solutions are obtained by the study of the bifurcation curves from the line of trivial solutions L. These bifurcations occur at s_1 = λ1(u_L), ..., s_n = λn(u_L), since rank J < n there. For the following discussion we assume that f′(u) has distinct (simple) real eigenvalues everywhere, i.e. is strictly hyperbolic, and that the system (2.1) is genuinely nonlinear everywhere. It is natural to take s as the bifurcation parameter. According to the "bifurcation from simple eigenvalues" theorem (cp. Smoller [48] Theorem 13.4, Hale [4] Theorem 5.1) our assumptions imply that there exists a C²-curve transversal to L at (s_i, u_L) for each i, 1 ≤ i ≤ n. We denote these curves by κ_i: R → R^{n+1} and assume them parametrized by ε ∈ R, i.e.

κ_i: ε ↦ (s_i(ε), u_i(ε)),  (s_i(0), u_i(0)) = (s_i, u_L),  1 ≤ i ≤ n.

We will call them the Hugoniot curves. The implicit function theorem (cp. Hale [4], Theorem 3.5) lets us determine the curve κ_i as long as for ε ≠ 0 we have s_i(ε) ≠ λ_j(u_i(ε)) for all j, 1 ≤ j ≤ n, i.e. that rank J = n.

2.6. The genuine nonlinearity of the system allows for a parametrization of the κ_i to be chosen such that u̇_i(0) = r_i(u_L), ü_i(0) = r_i′(u_L) r_i(u_L), s_i(0) = λ_i(u_L) and ṡ_i(0) = 1/2 (see Smoller [48] Corollary 17.13, Lax [30]), where r_i(u) has been normalized to give ∇λ_i(u) r_i(u) ≡ 1. This also implies that for small ε

(2.10)   λ_i(u_i(ε)) < s_i(ε) < λ_{i+1}(u_i(ε))  for ε < 0,
         λ_{i−1}(u_i(ε)) < s_i(ε) < λ_i(u_i(ε))  for ε > 0

(cp. proof of Theorem 17.14 in Smoller [48]). These latter inequalities remain valid globally for general systems only under additional assumptions (cp. Liu [34], Mock [35]). The property s_i(ε) ≠ λ_k(u_i(ε)) for all k, 1 ≤ k ≤ n, and ε ≠ 0 may be used to define a stronger notion of genuine nonlinearity (cp. Gel'fand [15], Mock [35]). This would imply the global existence of the curves κ_i, since it implies rank J = n near κ_i for ε ≠ 0. It is common to distinguish strong shocks from weak shocks by saying that ε is large for strong shocks resp. small for weak shocks.
Important for the distinction is the fact that properties like (2.10) may be lost if ε becomes too large.

2.7. The Riemann problem (2.6) for (2.1) is not uniquely solvable unless a further admissibility criterion for the discontinuities is imposed on solutions. A simple example of multiple solutions for the scalar case (n = 1) may be found in Smoller [48] Chap. 15.3. Compatibility considerations involving the characteristic speeds λ_k(u_L), λ_k(u_R), 1 ≤ k ≤ n, lead to the following inequalities (cp. Lax [29], Smoller [48])

(2.11)   λ_k(u_R) < s(u_L, u_R) < λ_k(u_L),
         λ_{k−1}(u_L) < s(u_L, u_R) < λ_{k+1}(u_R)

for some k ∈ {1, ..., n}. These are the Lax shock conditions. A discontinuity is considered admissible and called a k-shock iff (2.11) is satisfied for some index k. Comparing this to (2.10), one considers as admissible only shocks on the ε < 0 branch of the Hugoniot curves κ_k, for small ε.

2.8. Another approach to admissibility is the viscosity method. This is motivated by the fact that conservation laws arising in inviscid fluid flows may be obtained as limits when in equations for viscous flows the viscosity and heat conduction coefficients tend to zero. The paradigm is that the admissible shock solutions should be approximated by the smooth solutions of the viscous equations in a suitable sense (cp. Becker [1], Gel'fand [15], Gilbarg [16], Weyl [50]). For the Riemann problem this may be translated into the following. Consider the following parabolic regularization of the system (2.1)

(2.12)   u_t + f(u)_x = ε A u_xx,  ε > 0,  t ≥ 0,

where A ∈ R^{n×n} is a matrix to be appropriately chosen. Suppose the system (2.12) possesses for each ε > 0 a smooth solution u^ε of the form

(2.13)   u^ε(x, t) = u(τ),

with

(2.14)   lim_{τ→−∞} u(τ) = u_L,  lim_{τ→+∞} u(τ) = u_R,

where u_L, u_R ∈ Rⁿ are connected by an Hugoniot curve. Here τ = (x − st)/ε and s = s(u_L, u_R) is the shock speed given by (2.7). If the family u^ε converges to a shock solution of (2.6) in the sense of distributions, then the shock solution is admissible.
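The scalar case n = 1 makes (2.8) and the Lax condition (2.11) concrete. The following sketch (an illustration of ours with the Burgers flux f(u) = u²/2, not a system treated in the text) computes the shock speed from the Rankine-Hugoniot relation and checks admissibility:

```python
# Rankine-Hugoniot speed for a scalar conservation law u_t + f(u)_x = 0:
# s = (f(u_L) - f(u_R)) / (u_L - u_R).  Example flux: Burgers, f(u) = u^2/2.

def f(u):
    return 0.5 * u * u

def fprime(u):            # characteristic speed lambda(u) = f'(u)
    return u

def shock_speed(uL, uR):
    return (f(uL) - f(uR)) / (uL - uR)

uL, uR = 2.0, 0.0
s = shock_speed(uL, uR)
print(s)                               # 1.0 for these states
print(fprime(uR) < s < fprime(uL))     # Lax admissibility (2.11): True
```

For u_L > u_R the computed speed s = (u_L + u_R)/2 lies between the characteristic speeds of the two states, which is exactly the one-dimensional version of (2.11).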
The special form (2.13), (2.14) seems to be due to Gel'fand [15]. There is considerable freedom as to the suitable choice for the right hand side, i.e. the regularization, in (2.12) (see e.g. Conley/Smoller [7], [8], [9], Keyfitz [26], Mock [35], Slemrod [46], Smoller [48], Wendroff [49]). Conley and Smoller have investigated classes of matrices that give the same admissibility for certain classes of problems. We will say that a shock (s, u_L, u_R) has a viscous profile if the corresponding solution of the Riemann problem (2.6) can be approximated by solutions to (2.12), (2.14) of type (2.13) in the limit ε → 0. A simple form of (2.12) is given by choosing A = Id (cp. Gel'fand [15], Conley/Smoller [7], Slemrod [46], Smoller [48]), i.e.

(2.15)   u_t + f(u)_x = ε u_xx.

Substitution of (2.13) into (2.15), multiplication by ε and denoting differentiation with respect to τ by (˙), we get the second order ordinary differential system

(2.16)   −s u̇ + (f(u))˙ = ü.

This system may be integrated once to obtain the first order system

(2.17)   −s u + f(u) + C = u̇,

with C ∈ Rⁿ. The conditions (2.14) imply lim_{τ→±∞} u̇(τ) = 0, and therefore the left hand side of (2.17) must vanish at u_L and u_R. Choosing u_L we get

(2.18)   u̇ = −s(u − u_L) + f(u) − f(u_L).

Note that the jump conditions (2.7) then give C = s u_R − f(u_R). One can now study the vector field

(2.19)   X(u) = −s(u − u_L) + f(u) − f(u_L).

We recall (cp. Guckenheimer/Holmes [18]) that a curve γ = γ(t) having in every point p the vector X_p as its tangent is called an orbit of X. The ω-limit set ω(γ) of γ is the set of limit points of γ(t) as t → +∞; if t → −∞ is taken, one obtains the α-limit set α(γ). The vector field X has fixed points (X = 0) at p = u_L and p = u_R. The problem of finding a solution to (2.12), (2.14) is now equivalent to the problem of finding a heteroclinic orbit γ with

(2.20)   α(γ) = {u_L},  ω(γ) = {u_R}.

This dynamical systems approach to admissibility seems to be due to Weyl [50]. A fixed point is called hyperbolic if the eigenvalues of the linearization X′ have a nonzero real part.
In this case the stability of the vector field is determined by the linear field X′ according to the Hartman-Grobman theorem (cp. Guckenheimer/Holmes [18] Theorem 1.4.1, Hale [4] Sect. 3.7).

2.8. In (1.3) the second equation is a compatibility condition, since the system is derived from a second order equation. It seems preferable to avoid taking viscous perturbations of this compatibility condition. As we shall see this will not be necessary, i.e. it will be possible to use a degenerate matrix A. The dynamical systems approach becomes trivial, i.e. one-dimensional, in the case of a system of two equations. This works because −p(·) is convex. For non-convex p(·), as appears for instance in the theory of van der Waals fluids, this is no longer the case. Slemrod [45] showed that in that case one has to add a third order dispersion term, together with the second order diffusion term, to one equation. In Slemrod [46] this is shown to be essentially equivalent to adding second order terms to both equations. The point in that case is that a dynamical system in the plane is needed to circumvent a third fixed point.

2.9. A further criterion for the admissibility of solutions is provided by entropy inequalities. The basis of this approach is the existence of an additional scalar conservation law

(2.21)   U(u)_t + F(u)_x = 0

satisfied by all smooth solutions to (2.1). The functions U, F: Rⁿ → R are assumed to be continuously differentiable. Denoting the respective Jacobians by U′ and F′, the equation (2.21) is satisfied by smooth solutions of (2.1) iff

(2.22)   U′(u) f′(u) = F′(u)

holds. In the case n = 2, (2.22) is a system of two partial differential equations of first order for the two unknowns U, F which may be used to determine them. In case n > 2 the system (2.22) is overdetermined and generically admits no solution. But, for example, the Euler equations (n ≥ 3) admit an additional conservation law for the entropy.
This fact leads one to expect that physically relevant systems of conservation laws may generally possess such an additional conservation law. Due to the mentioned entropy law the function U is called an entropy for the system (2.1), even if it is not a physical entropy. Now (2.21) is only valid for smooth solutions. Its validity across a jump discontinuity would lead to another jump relation, i.e. add another equation in (2.7). This will generally eliminate all shock solutions by reducing the Hugoniot curves to L, leaving u_L = u_R as the only possible solutions.

2.10. A true physical entropy will not be conserved at a shock (cp. Landau/Lifschitz [28], Oswatitsch [40], [41], Zierep [51]). But, due to the 2nd law of thermodynamics it would satisfy an inequality that restricts the physically possible transitions. Using a method due to Kruzkov [27] it is possible to derive such an inequality that can be used as a selection principle. Suppose we are given a family u^ε, 0 < ε ≤ ε0, of smooth solutions to (2.12), (2.14) of type (2.13). Further suppose that the family u^ε is uniformly bounded in L^∞(R × R₊) and converges in L¹_loc(R × R₊) for ε → 0 to a solution u of the corresponding Riemann problem, while ε U′(u^ε) A u^ε_x remains bounded independently of ε in L¹_loc(R × R₊). Multiplication of (2.12) with U′(u^ε) gives

(2.23)   U(u^ε)_t + F(u^ε)_x = ε U′(u^ε) A u^ε_xx = ε (U′(u^ε) A u^ε_x)_x − ε (u^ε_x)ᵀ U″(u^ε) A u^ε_x.

If one now assumes

(2.24)   U″(u) A ≥ 0,

i.e. xᵀ U″(u^ε) A x ≥ 0 for all x ∈ Rⁿ, then (2.23) gives

(2.25)   U(u^ε)_t + F(u^ε)_x ≤ ε (U′(u^ε) A u^ε_x)_x.

Choosing an arbitrary φ ∈ C₀^∞(R × R₊), φ ≥ 0, we obtain

(2.26)   −∫∫ (U(u^ε) φ_t + F(u^ε) φ_x) dx dt ≤ −ε ∫∫ U′(u^ε) A u^ε_x φ_x dx dt.

Due to our boundedness assumptions the right hand side of (2.26) vanishes for ε → 0, giving

∫∫ (U(u) φ_t + F(u) φ_x) dx dt ≥ 0,

or the entropy inequality

(2.27)   U(u)_t + F(u)_x ≤ 0

in the sense of distributions. At a shock the left and right states must therefore satisfy the entropy inequality

(2.28)   s[U] − [F] ≤ 0,

which is obtained by the same arguments as those used in deriving the jump conditions (2.7).
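For orientation (an illustration of ours, not from the text): for the scalar Burgers equation with f(u) = u²/2, the pair U(u) = u²/2, F(u) = u³/3 satisfies the compatibility relation (2.22), and for the admissible shock u_L = 2, u_R = 0, s = 1 one checks (2.28) directly (recall [g] = g(u_L) − g(u_R)):

```latex
U'(u)\,f'(u) = u \cdot u = u^2 = F'(u), \qquad
s\,[U] - [F] = 1\cdot\Bigl(\tfrac{4}{2} - 0\Bigr) - \Bigl(\tfrac{8}{3} - 0\Bigr)
             = -\tfrac{2}{3} \le 0 .
```

For the reversed (non-physical) states u_L = 0, u_R = 2 the same quantity equals +2/3, so the inequality rejects exactly the expansion shock.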
Note that wherever the solution is locally smooth, (2.21) will be satisfied anyway.

THEOREM 2.1. Suppose that the system (2.1) is strictly hyperbolic and genuinely nonlinear. Also suppose that it possesses a strictly convex entropy function U: Rⁿ → R with associated flux function F: Rⁿ → R such that (2.24) is satisfied. If a weak solution of (2.1) contains a weak shock of speed s(u_L, u_R), then the Lax shock conditions (2.11) and the entropy inequality (2.27) are equivalent. For u_L, u_R sufficiently close, a solution to the Riemann problem (2.6) satisfying (2.11) or (2.28) admits an approximation of the type described in Section 2.7.

Proof. See Smoller [48] Theorem 20.8, Theorem 24.6.

3. The transonic small disturbance system. 3.1. In the introduction we have put the transonic small disturbance equation into the following form, with p(u) = u − u²/2:

p(u)_x + v_y = 0,
v_x − u_y = 0.

The system does not have the time variable as a canonical directed variable, which appears in the Riemann problem (2.6) or the viscosity equations (2.12). The expansion giving (1.1) is carried out under the assumption that the flow is predominantly directed in the positive x-direction. We will see later that this choice of a distinguished direction for the Riemann problem corresponds directly to the choice of admissibility criteria canonical for the transonic flow problem. It will not matter whether we formally set t = x or t = y in (3.3).

3.2. The small disturbance problem has a canonical admissibility condition to exclude non-compressive shocks, inherited from potential flow theory (cp. Landau/Lifschitz [28], Oswatitsch [40], [41], Zierep [51]). Suppose a shock solution of the Riemann problem (2.6) is given and for some value of y a shock is at x̄. We denote by u−(x̄) the limit value of u as x approaches x̄ for x < x̄, and by u+(x̄) the value for x > x̄. Then this admissibility condition is

(3.2)   u−(x̄) > u+(x̄)

(cp. Cole/Cook [6]), i.e. the flow velocity in x direction is decreased through a shock.
This corresponds to an increase in density for potential flows. We want to investigate how this condition corresponds to the admissibility conditions given in Chap. 2 for hyperbolic problems. The notation u−, u+ will always refer to the original x, y variables of the transonic flow problem. We had noted in the introduction that p(u) = u − u²/2 implies that (3.3) changes type: it is elliptic for u < 1 and strictly hyperbolic for u > 1. The characteristic equation gives the eigenvalues λ±(u) = ±√(u − 1). We divide the (u, v)-plane into the half planes E = {(u, v) ∈ R² | u < 1}, H = {(u, v) ∈ R² | u > 1}, and the line Z = {(u, v) ∈ R² | u = 1}. For convenience we will use the same notation to divide the u-axis. The eigenvalues are distinct in E and H.

3.3. We would like to again take up the discussion of Hugoniot curves. The Rankine-Hugoniot equations (in the form (2.8)) are

(3.6)   s[u_L − u] = −[v_L − v],  s[v_L − v] = p(u_L) − p(u).

These imply

(3.8)   −s² = (p(u_L) − p(u)) / (u_L − u) = p′(ζ),

with ζ between u_L and u. Since p′ ≥ 0 in E ∪ Z, we can deduce that for u_L ∈ E ∪ Z the Hugoniot curves cannot lie in E ∪ Z. This corresponds to the fact that L has no bifurcation points if u_L ∈ E, since there are no real eigenvalues. For u_L ∈ Z, s = 0 is the only possible bifurcation point of L. The Hugoniot curves cannot enter E due to the above argument. Suppose p(·) has the additional property that to each u ∈ E there corresponds a w ∈ H such that p(u) = p(w). Then for (u_L, v_L) ∈ E the point (w, v) with p(w) = p(u_L) and v = v_L lies on an Hugoniot curve, and the Jacobian (2.9) has rank 2 there, since 0 is not an eigenvalue in H. This is the case for our example. So in this case one easily finds a branch of the Hugoniot curves that is detached from L. For u_L ∈ H we have the properties observed in Section 2.5, i.e. two curves bifurcating from L at the eigenvalues (i.e. s = λ1 or s = λ2). In our example p(u) = u − u²/2 we may eliminate s in (3.6) in order to give an explicit representation of the projections of the Hugoniot curves onto the (u, v)-plane.
One has

v − v_L = ±√((u − u_L)(p(u_L) − p(u))) = ±√((u − u_L)² ((u_L + u)/2 − 1)),

i.e.

v = v_L ± (u − u_L) √((u_L + u)/2 − 1).

One obtains real-valued algebraic curves if (u_L + u)/2 ≥ 1, i.e. u ≥ 2 − u_L. Choosing again z ∈ E and w ∈ H such that p(z) = p(w), one sees that for u_L = z the Hugoniot curves reduce to one curve lying in the half plane u ≥ w. For (u_L = w, v_L) the two curves κ1, κ2 bifurcate from L (see Figure 1).

[Figure 1]

3.4. We set P(t) = ∫₀^t p(s) ds, the primitive of p(·). It is well known that (1.2) is the Euler-Lagrange equation of the functional

∫∫ L(φ_x, φ_y) dx dy = ∫∫ [P(φ_x) + φ_y²/2] dx dy.

Since the integrand L(φ_x, φ_y) is not explicitly y or x dependent, Noether's theorem gives two additional conservation laws satisfied by smooth solutions to (1.2) resp. (3.3) (cp. DiPerna [13], Mock [36]). They are

(3.10)   (φ_y L_{φ_x})_x + (φ_y L_{φ_y} − L)_y = 0

for y-independence and

(3.12)   (φ_x L_{φ_x} − L)_x + (φ_x L_{φ_y})_y = 0

for x-independence. We will write these as (U_i)_y + (F_i)_x = 0, i = 1, 2, in the variables u = φ_x, v = φ_y:

(3.11)   U1 = v²/2 − P(u),   F1 = v p(u),
(3.13)   U2 = −uv,   F2 = v²/2 + P(u) − u p(u).

These are not the only additional conservation laws for (3.3) known (cp. Mock [36] Sect. VI).

4. The p-system. 4.1. For better comparison with hyperbolic theory we put our investigations in the context of the well known p-system (cp. Oleinik [39], Smoller [48], Leibovich [32], Wendroff [49]). We formally introduce the variable t instead of y. This gives the system

(4.1)   u_t − v_x = 0,
        v_t + p(u)_x = 0,

where p: R → R is some smooth function, p″(u) < 0 everywhere and p′(u_crit) = 0 for one u_crit ∈ R.¹ Using the notation w = (u, v)ᵀ and the flux function f: R² → R², f(w) = (−v, p(u))ᵀ, the system can be written in the standard conservation law form (cp. (2.1))

(4.2)   w_t + f(w)_x = 0.

The Jacobian of the flux function is

(4.3)   f′(w) = [ 0  −1 ; p′(u)  0 ]

and has the eigenvalues

(4.4)   λ1 = −√(−p′(u)),   λ2 = √(−p′(u)).

The right eigenvectors are

r1 = (1, √(−p′(u)))ᵀ,   r2 = (1, −√(−p′(u)))ᵀ.

Also

∇λ1 · r1 = p″(u) / (2√(−p′(u))),   ∇λ2 · r2 = −p″(u) / (2√(−p′(u))),

and therefore the system is genuinely nonlinear everywhere.
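The compatibility relation (2.22) behind these pairs can be checked symbolically. The following sketch (our illustration; it assumes the SymPy library and uses the example flux p(u) = u − u²/2) verifies U′(w) f′(w) = F′(w) for both pairs in the t = y variables of Section 4:

```python
# Symbolic verification (sympy) that the Noether pairs (3.11) and (3.13)
# satisfy the compatibility relation (2.22), U'(w) f'(w) = F'(w), for the
# p-system w_t + f(w)_x = 0 with f(u, v) = (-v, p(u)) and p(u) = u - u^2/2.
import sympy as sp

u, v = sp.symbols('u v')
p = u - u**2 / 2
P = sp.integrate(p, u)                     # primitive P(u), P(0) = 0

J = sp.Matrix([-v, p]).jacobian([u, v])    # flux Jacobian f'(w)

pairs = [
    (v**2 / 2 - P, v * p),                 # (U1, F1), cf. (3.11)
    (-u * v, v**2 / 2 + P - u * p),        # (U2, F2), cf. (3.13)
]

residuals = []
for U, F in pairs:
    gradU = sp.Matrix([[sp.diff(U, u), sp.diff(U, v)]])
    gradF = sp.Matrix([[sp.diff(F, u), sp.diff(F, v)]])
    residuals.append(sp.simplify(gradU * J - gradF))

print(residuals)        # both residuals vanish identically
```

The same computation with a generic symbolic p would work as well; the concrete flux merely keeps the output short.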
For p' ( tI) V>' are parallel to the u-axis and the 7' i, i = 1, 2, approach 0 the vectors (~) Therefore genuine nonlinearity, taken as a non-orthogonality condition, holds also for p'( u) = O. 4.2. In case p' (u) < 0 the system is strictly hyperbolic since the eigenvalues are real and distinct. This case is extensively discussed in Smoller [48J Chap. 17.A. If p' > 0 the eigenvalues are purely imaginary and distinct. In this case the system is elliptic. Due to the assumption p" < 0 there is at most one Ucrit E R such that p' (Unit)= O. Since we assume the existence of such a Ucrih we may divide the (u, v)-plane as in Section 3.3 into the two half planes £ = {(u, v) E R21 u < Ucrit i. e. p'(U) > O} lWe interchange u and v w.r.t. Smoller [48] Chapter 17 in order to conform with the standard notation in transonic flows. For the same reason we take p" < o. z )( 111 Figure 2 and:J-( = {(u, v) E RZlu > Ucrit i. ep'(u) < OJ. These planes are separated by the line Z;::: {(u, v) E RZlu = ucrit i. ep'(u) = OJ. The jump conditions (2.7) for the p-system are (cp. S[UL - UR] = -[VL - VR] S[VL - VR] = [P(uL) - We had seen in Section 3.3 that if both states (UL' VL) and (UR, VR) lie in the elliptic region or one in and one in Z, they cannot be connected by a shock since p' 2:: 0 in euz. So in the following we will only consider the cases of hyperbolichyperbolic shocks (both states in :J-C) and hyperbolic-elliptic shocks. Shocks between states in 9{ and Z will not be discussed explicitly. For the same reason, namely (3.8) and pH i- 0, the Hugoniot curves X;i satisfy Si(C:) i- (Ak(V(C:)) for k = 1, 2 and c: i- 0 (cp. Section 2.5). Therefore, they exist for all c: i- O. There are no secondary bifurcations, since rank J = 2 for c: i- 0 (cp. (2.9)). 4.3. In the case p'(uL), p'(UR) < 0, i. e. strict hyperbolicity, the Lax shock conditions (2.11) allow for two types of shocks (cp. Lax [29], Smoller [48] Chap. 
17.A):

    (4.6)  1-shocks:  s < λ_1(U_L),  λ_1(U_R) < s < λ_2(U_R);
           2-shocks:  λ_1(U_L) < s < λ_2(U_L),  λ_2(U_R) < s;

or, using (4.4), one commonly makes the distinction

    (4.7)  back shocks:   -√(-p'(u_R)) < s < -√(-p'(u_L)),  i.e. s < 0;
           front shocks:   √(-p'(u_R)) < s < √(-p'(u_L)),   i.e. s > 0.

If one assumes p'(u) < 0 and p''(u) < 0 for some subinterval of the real line containing u_L and u_R, then this implies

    (4.8)  u_R > u_L and s < 0 for back shocks,
           u_R < u_L and s > 0 for front shocks.

The inequalities (4.6) resp. (4.7) obviously only make sense for hyperbolic-hyperbolic shocks.

4.4. We will now study the entropy inequalities (2.28) provided by the two additional conservation laws given in (3.11). The first one is usually associated with the p-system (cp. Smoller [48]).

LEMMA 4.1. The entropy inequality for U_1, F_1,

    (4.9)  s[v^2/2 - P(u)] - [p(u)v] ≤ 0,

is equivalent to the inequalities

    (4.10)  u_R > u_L for s < 0,   u_R < u_L for s > 0,

i.e. (4.8).

Proof. The inequality (4.9) becomes for U_L = (u_L, v_L), U_R = (u_R, v_R)

    0 ≥ s[(v_L^2 - v_R^2)/2 - P(u_L) + P(u_R)] - [p(u_L)v_L - p(u_R)v_R].

Using the jump conditions (4.5) one gets

    (4.11)  ... = s[(u_L - u_R)·(p(u_L) + p(u_R))/2 + P(u_R) - P(u_L)].

Using the mean value theorem one obtains

    ... = s(u_L - u_R)·[(p(u_L) + p(u_R))/2 - p(ξ)]

for some ξ between u_L and u_R. Since p'' < 0, (4.10) gives (4.9) and vice versa. □

For the transonic flow problem (4.10) would imply u^- > u^+ for s > 0 and u^- < u^+ for s < 0. Obviously this does not give the physical shocks (3.2).

4.5. We now proceed to look at the second possibility in (3.13):

LEMMA 4.2. The entropy inequality for U_2, F_2,

    (4.13)  s[-uv] - [v^2/2 + P(u) - u p(u)] ≤ 0,

is equivalent to the inequality

    (4.14)  u_L > u_R.

Proof. The entropy inequality (4.13) becomes

    0 ≥ s[u_R v_R - u_L v_L] + u_L p(u_L) - u_R p(u_R) + P(u_R) - P(u_L) + (v_R^2 - v_L^2)/2.

Again using the jump relations (4.5) one gets

    (4.15)  0 ≥ (u_L - u_R)·(p(u_L) + p(u_R))/2 + P(u_R) - P(u_L),

and the mean value theorem again gives the inequality

    (4.16)  0 ≥ (u_L - u_R)·[(p(u_L) + p(u_R))/2 - p(ξ)]

for some ξ between u_L and u_R.
Since p'' < 0 this is equivalent to (4.14). □

The Lax shock conditions (4.6) or (4.7) only make sense if p'(u_L), p'(u_R) < 0, i.e. only for states that lie in the strictly hyperbolic region H. For the entropy inequalities (4.9) and (4.13) this restriction is not necessary. Note that (4.11) allows U_L to be in H as well as E, whereas (4.14) requires U_L to be in H (since not both U_L and U_R may lie in E).

4.6. We had seen in our discussion of the Kruzkov argument that the entropy U and the matrix A should be compatible via the inequality (2.24), i.e. U''A ≥ 0. We want to add a diffusion only to the second equation in (4.1). The Hessians of U_1 and U_2 given in (3.11) are

    U_1'' = ( -p'(u)  0 ),     U_2'' = (  0  -1 ).
            (  0      1 )              ( -1   0 )

Note that U_2 is not a convex function, whereas U_1 is convex in H, i.e. for p'(u) < 0. Compatible matrices A are given by

    A_1 = ( 0  0 ),     A_2 = (  0  0 ).
          ( 0  1 )             ( -1  0 )

Thereby we obtain the two systems

    (4.18)  u_t - v_x = 0,   v_t + p(u)_x = ε v_xx,

and

    (4.19)  u_t - v_x = 0,   v_t + p(u)_x = -ε u_xx.

Applying the method outlined in Section 2.8 to (4.18) we obtain the traveling wave system or, after one integration, the first order equation

    (4.20)  s u' = C - s^2 u - p(u)

with C = s^2 u_L + p(u_L) = s^2 u_R + p(u_R). This one-dimensional dynamical system has only two fixed points, u_L and u_R. At u_L there should be a source and at u_R a sink. These requirements imply

    (4.21)  -p'(u_L) > s^2 > -p'(u_R)  for s > 0,
            -p'(u_L) < s^2 < -p'(u_R)  for s < 0.

Suppose u_L, u_R ∈ H. Then we may take roots and obtain with (4.4) the Lax shock conditions (4.7). We have now seen that the Lax shock conditions (4.7), the entropy inequality (4.9) and the viscosity approximation (4.18) give the same admissibility conditions for shock solutions to the Riemann problem if u_L, u_R ∈ H. This is well known (see Theorem 2.1 in Smoller [48]). The entropy inequality and the viscosity method give equivalent admissibility conditions also for mixed type shocks. These are not the conditions sought for transonic flows.

4.7. We may use (4.21) to introduce generalized Lax shock conditions for certain systems of mixed type.
Suppose we are given a mixed type 2 × 2 system with two purely imaginary eigenvalues in E, with λ_1 = conj(λ_2), and a positive and a negative eigenvalue in H, as in the case of our example. Then we can give shock inequalities that are equivalent to the Lax inequalities for hyperbolic-hyperbolic shocks and are valid even for hyperbolic-elliptic shocks. These are obtained by using (4.21). Suppose Im λ_1 < 0 < Im λ_2 for the eigenvalues in E and λ_1 < 0 < λ_2 in H. We require for a 1-shock (back shock) that s < 0 and

    (4.22)  λ_1(U_L)^2 < s^2 < λ_1(U_R)^2.

For a 2-shock (front shock) we require s > 0 and

    (4.23)  λ_2(U_R)^2 < s^2 < λ_2(U_L)^2.

And for stationary shocks, i.e. s = 0, we require either (4.22) or (4.23) to hold. The above restrictions on the eigenvalues were made to avoid an elaborate discussion of the different cases that may arise otherwise. For the complications that arise in the case of more general complex eigenvalues see Keyfitz [26]. The inequalities (4.22) and (4.23) are equivalent to (4.6) for U_L, U_R ∈ H under the above assumptions on the eigenvalues. For the p-system they are also equivalent to (4.21).

Figure 3

4.8. The entropy inequality (4.13) gave the correct admissibility criterion for transonic flows. Let us look at the system (4.19). We obtain the first order equation

    u' = C - s^2 u - p(u)

with C as before. And we get as analog to (4.21)

    (4.24)  -p'(u_L) > s^2 > -p'(u_R)  for all s.

We immediately see that (4.14) follows immediately since p'' < 0. For U_L, U_R ∈ H we may take roots to obtain

    (4.25)  λ_2(U_L) > s > λ_2(U_R) > 0   for s > 0,
            0 > λ_1(U_R) > s > λ_1(U_L)   for s < 0.

Only the first set of inequalities is equivalent to the Lax shock conditions (4.6). If one thinks of the compatibility considerations between ingoing/outgoing characteristics and the Rankine-Hugoniot conditions that are made to derive the Lax shock conditions (cp. Lax [29] or Smoller [48] Chap. 15.D) there seems to be something wrong here. For s > 0 we have a 2-shock with three ingoing and one outgoing characteristic. For s < 0 this situation is reversed (see Figure 3).
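Both the viscous-profile condition and the entropy condition for the transonic family can be illustrated numerically with assumed data (not from the text): p(u) = -u^2/2 with primitive P(u) = -u^3/6, and states u_L = 2 > u_R = 1 with v_L = 0. The Rankine-Hugoniot conditions give s^2 = 1.5, so -p'(u_L) = 2 > s^2 > 1 = -p'(u_R) and (4.24) holds; the profile equation u' = C - s^2 u - p(u) should then connect u_L to u_R, and the entropy defect s[U_2] - [F_2] should be nonpositive for this orientation only.

```python
import math

# Assumed illustrative data (not from the text): p(u) = -u**2/2 with
# primitive P(u) = -u**3/6, and states u_L = 2 > u_R = 1.
p = lambda u: -0.5 * u * u
P = lambda u: -u ** 3 / 6.0

u_L, u_R = 2.0, 1.0
s2 = (p(u_R) - p(u_L)) / (u_L - u_R)  # Rankine-Hugoniot: s^2 = 1.5
C = s2 * u_L + p(u_L)                 # also equals s2 * u_R + p(u_R)

# 1) Viscous profile: explicit Euler for u' = C - s^2*u - p(u), started
#    just off the source u_L; the orbit is attracted to the sink u_R.
u = u_L - 1e-2
for _ in range(200_000):
    u += 1e-3 * (C - s2 * u - p(u))
print(round(u, 6))  # 1.0

# 2) Entropy defect s[U_2] - [F_2] across the shock, with v_R from (4.5).
def entropy_defect(uL, uR, vL=0.0, sign=1):
    s = sign * math.sqrt((p(uR) - p(uL)) / (uL - uR))
    vR = vL + s * (uL - uR)
    U2 = lambda q, v: -q * v
    F2 = lambda q, v: 0.5 * v * v + P(q) - q * p(q)
    return s * (U2(uL, vL) - U2(uR, vR)) - (F2(uL, vL) - F2(uR, vR))

print(entropy_defect(2.0, 1.0) <= 0.0)           # True: admissible
print(entropy_defect(1.0, 2.0) <= 0.0)           # False: reversed shock
print(entropy_defect(2.0, 1.0, sign=-1) <= 0.0)  # True: sign of s irrelevant
```

Swapping u_L and u_R flips the sign of the defect, while flipping the sign of s leaves it unchanged, reflecting that the admissibility condition u_L > u_R does not involve the direction of propagation.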
But, consider the following situation. Suppose we are given the initial data not on the axis t = 0, but on the axis x = 0. Then one sees (cp. Figure 3) that one has Lax 1-shocks, resp. 2-shocks, as usual. One clearly sees that the entropy inequality (4.13) and the viscosity method using (4.19) are connected to the flow direction (positive x-direction) as distinguished direction. In the usual hyperbolic theory t is the distinguished direction and we have formally set y = t. This direction is only distinguished if a convex entropy like U_1 is used. Use of the nonconvex entropy U_2 makes the x-direction distinguished even though the formal treatment is the same as before. Note also that U_2 was due to the x-invariance of (3.8).

4.9. We again make the assumptions made in Section 4.7 concerning the eigenvalues of a mixed 2 × 2 system. Now we require for 1-shocks (back shocks) that s < 0 and

    (4.26)  λ_1(U_L)^2 > s^2 > λ_1(U_R)^2.

For 2-shocks (front shocks) we require s > 0 and

    (4.27)  λ_2(U_L)^2 > s^2 > λ_2(U_R)^2.

For stationary shocks s = 0 we require either (4.26) or (4.27) to hold. In this case either of the two implies the other. These are equivalent to (4.25) in the case of the p-system.

4.10. We may collect our results in the following theorem:

THEOREM 4.3. Consider the Riemann problem

    U(x, 0) = U_a for x > 0,   U(x, 0) = U_b for x < 0,

for the system (4.1). For p we demand p'' < 0. Assume that U_a, U_b ∈ R^2 lie on a common Hugoniot curve with corresponding shock speed s = s(U_a, U_b). Let U be the corresponding piecewise constant shock solution, i.e.

    U(x, y) = U_a for x > sy,   U(x, y) = U_b for x < sy.

Then the following admissibility conditions are equivalent:

(a) u^- > u^+.
(b) The entropy inequality (U_2)_y + (F_2)_x ≤ 0 holds in the sense of distributions.
(c) s[U_2] - [F_2] ≤ 0.
(d) U can be obtained as a limit for ε → 0 of solutions U_ε to the system (4.19). The limit is achieved in the sense of distributions.
(e) The inequalities (4.26) for s ≤ 0 or (4.27) for s > 0 are satisfied.

Proof. (b) and (c) are obviously equivalent for U (see Section 2.10).
The equivalence of (a) and (c) was shown in Lemma 4.2. For the p-system (e) is equivalent to (4.25). Since p'' < 0 this implies (a). Now suppose (a) is satisfied. This implies -p'(u_L) > -p'(u_R). Also we have seen that one state must lie in H (Section 4.2). So at least u_L ∈ H. In this case for small ε ≠ 0 (2.10) is valid. The inequalities (2.10) imply (4.25) and therefore (e) for small ε. Then they must be valid for all ε ≠ 0, since s(ε)^2 never becomes equal to -p'(u(ε)) due to (3.7) and the convexity of -p(·). The equivalence between (4.25) and the solvability of (4.19) for ε > 0 was shown in Section 4.8.

It remains to show that ∫ U_ε·ψ dx → ∫ U·ψ dx for every ψ ∈ [C_0^∞(R_+^2)]^2, R_+^2 = {(x, y) ∈ R^2 | y > 0}. Obviously the sequence U_ε is uniformly bounded, since u_ε(ξ) must lie between u_L and u_R. Let S_δ be the strip {(x, y) ∈ R_+^2 | |x - sy| < δ}. Take ψ ∈ [C_0^∞(R_+^2)]^2; then supp ψ may be divided into three parts

    A_δ = supp ψ ∩ {(x, y) ∈ R_+^2 | x - sy ≥ δ},
    B_δ = supp ψ ∩ S_δ,
    C_δ = supp ψ ∩ {(x, y) ∈ R_+^2 | x - sy ≤ -δ}.

Note that U_ε(x, y) = Û((x - sy)/ε). On A_δ we may take, for given δ > 0, ε so small that |U_ε(x, y) - U_a| is arbitrarily small. The same holds on C_δ for U_b. Since U_ε - U is bounded on B_δ for any δ, the integral

    (4.29)  ∫_{B_δ} |(U_ε - U)·ψ| dx

can be made arbitrarily small by choosing δ small. Choose δ to make (4.29) smaller than a given bound; then we choose ε in order to make the contributions on A_δ and C_δ small as well. Therefore U_ε converges to U in the sense of distributions. □

5. Further admissibility conditions.

5.1. The convexity of -p(·) can be used to obtain Oleinik's E-condition (cp. Oleinik [39], Dafermos [11]) for the p-system. The convexity of -p(·) is equivalent to

    (5.1)  p(u) - p(u_L) ≥ [(p(u_R) - p(u_L))/(u_R - u_L)]·(u - u_L)  for all u between u_L and u_R,

for any u_L, u_R ∈ R.

LEMMA 5.1. The admissibility condition

    (5.2)  s[U_2] - [F_2] ≤ 0

is equivalent to the transonic E-condition, namely, the requirement that

    (5.3)  (p(u) - p(u_L))/(u - u_L) ≤ (p(u_R) - p(u_L))/(u_R - u_L)  for all u between u_L and u_R.

Proof. By Lemma 4.2 the inequality (5.2) is equivalent to u_L > u_R. This implies u_L > u for all u between u_L and u_R.
Therefore, division of (5.1) by u - u_L for any such u reverses the inequality to give (5.3). Conversely, (5.3) and (5.1) can only be valid simultaneously iff u_L > u_R. □

Note that the other set of admissibility conditions, i.e. those connected to Lemma 4.1, are equivalent to

    (p(u) - p(u_L))/(u - u_L) ≥ (p(u_R) - p(u_L))/(u_R - u_L)  for s ≤ 0,
    (p(u) - p(u_L))/(u - u_L) ≤ (p(u_R) - p(u_L))/(u_R - u_L)  for s ≥ 0,

for all u between u_L and u_R (cp. Dafermos [11], Oleinik [39], Shearer [43]).

5.2. Another admissibility condition is Liu's extended entropy condition (cp. Liu [33], [34], Dafermos [12]). In the context of admissibility conditions compatible with Lemma 4.1 it states that for an admissible shock one must have

    (5.5)  s(U_L, U) ≥ s(U_L, U_R)

for all U that lie on the Hugoniot curve connecting U_L and U_R. Note that we still assume that -p(·) is convex. Hsiao [22] and Hsiao/de Mottoni [23] used a modified version for the case of nonconvex p(·). In order to cover the case U_L ∈ E the inequality (5.5) is required to hold only for those U with u between u_L and u_R for which s(U_L, U) is defined. For the transonic flow problem we have:

LEMMA 5.2. The admissibility condition (5.2) is equivalent to the requirement that

    (5.6)  s^2(U_L, U) ≥ s^2(U_L, U_R)

for all U on the Hugoniot curve that lie between U_L and U_R.

Proof. By (3.6) we have s^2(U_L, U) = -(p(u) - p(u_L))/(u - u_L). Then Lemma 5.1 states that u^- > u^+ is equivalent to (5.6) for all U on the Hugoniot curve between U_L and U_R (cp. Section 3.3). □

5.3. Another type of admissibility condition can be obtained by generalizing (2.28) in the following manner. We will not require (2.27) to hold for smooth solutions, but instead we replace it by

    (5.7)  F(U)_t + G(U)_x ≤ M

for some constant M ≥ 0. The idea behind this approach is to choose F, G in a convenient way in order to give (5.2) at a shock. The constant M is then chosen very large so that (5.7) does not become a restriction for smooth parts of the flow. Note that (5.7) still implies an inequality of the type (2.28), i.e.

    (5.8)  s[F] - [G] ≤ 0

at a shock. A very simple example is given by choosing F(U) ≡ 0 and G(U) = u.
Then (5.7) gives u_x ≤ M and (5.8) gives -[u_L - u_R] ≤ 0, i.e. u_L ≥ u_R. Note that with u = φ_x the inequality is φ_xx ≤ M. We could also take F(U) = v and G(U) = u. Then we have

    v_t + u_x ≤ M,   or   φ_yy + φ_xx ≤ M.

This gives s[v_L - v_R] - [u_L - u_R] ≤ 0. Using (4.5) we obtain

    -(1 + s^2)[u_L - u_R] ≤ 0,

i.e. again u_L ≥ u_R.

5.4. Inequalities of this type, i.e. φ_xx ≤ M or Δφ ≤ M, have been used in variational methods for the calculation of transonic flows (see Bristeau et al. [2], [3], [17], Feistauer/Necas [14]). For the calculations one has to ensure that the constant M is chosen so large that it does not interfere with the smooth accelerations present in the transonic flow. Necas [38] gave a whole class of such inequalities, called h-entropies, that give the same admissibility condition. Suppose h: R_+ → R, R_+ = {ξ ∈ R | ξ ≥ 0}, is a C^1-function that satisfies h(ξ) > 0 for ξ > 0 and h(ξ) + 2ξh'(ξ) > 0. Then the h-entropy inequality is

    (5.9)  div(h(|∇φ|^2) ∇φ) ≤ M

in the sense of distributions (cp. Necas [37], [38]). Denoting by n the unit normal along a shock curve, (5.9) implies

    (5.10)  h(|U_L|^2) U_L·n ≤ h(|U_R|^2) U_R·n

along this curve. This is the same as (5.8), since (1, -s) is a normal vector to the shock curve. We may therefore write

    (5.11)  [h(|U|^2) U·n] ≤ 0.

LEMMA 5.3. The h-entropy inequality (5.9), giving (5.11), where h(ξ) > 0 for ξ > 0 and h(ξ) + 2ξh'(ξ) > 0, is an admissibility condition when applied to the p-system, equivalent to

    (a)  Δφ ≤ M,   resp.  [U·n] ≤ 0,
    (b)  φ_xx ≤ M,  resp.  u^- > u^+.

Proof. Choosing τ ⊥ n, both of unit length, we may set a = U·n, b = U·τ and write h(|U|^2) U·n = h(a^2 + b^2) a. This function is monotone in a. To see this suppose for given a, b ∈ R that h'(a^2 + b^2) ≥ 0. Then

    d/da [h(a^2 + b^2) a] = h(a^2 + b^2) + 2a^2 h'(a^2 + b^2) ≥ h(a^2 + b^2) > 0.

Now suppose h'(a^2 + b^2) < 0. Then

    h(a^2 + b^2) + 2a^2 h'(a^2 + b^2) ≥ h(a^2 + b^2) + 2(a^2 + b^2) h'(a^2 + b^2) > 0.

It is strictly monotone for a^2 + b^2 ≠ 0. This gives the equivalence of (5.11) and (a). We had seen above that (a) implies (b). Since the argument works backwards they are equivalent. □
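The monotonicity used in the proof of Lemma 5.3 can be checked numerically for an assumed admissible choice of h (not from the text): h(ξ) = (1 + ξ)^(-1/2) satisfies h > 0 and h(ξ) + 2ξh'(ξ) = (1 + ξ)^(-3/2) > 0, so a ↦ h(a^2 + b^2)·a should be strictly increasing for every fixed b.

```python
# Assumed example (not from the text): h(xi) = (1 + xi)**(-1/2).
h = lambda xi: (1.0 + xi) ** -0.5

def g(a, b):
    # the quantity h(|U|^2) U.n written in the rotated coordinates (a, b)
    return h(a * a + b * b) * a

b = 3.0
vals = [g(-5.0 + k * 0.01, b) for k in range(1001)]
print(all(x < y for x, y in zip(vals, vals[1:])))  # True
```

For this h one can also verify by hand that d/da [h(a^2 + b^2) a] = (1 + a^2 + b^2)^(-3/2) (1 + b^2) > 0, so the numerical monotonicity is exact, not an artifact of the grid.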
Note that (b) is not included in the class of h-entropy inequalities. Suppose H(ξ) = ∫_0^ξ h(s) ds is the primitive of h(·). Then the assumptions on h imply that H(|U|^2) is a convex function of U. The h-entropy inequality is formally given by the Gateaux derivative of the functional ∫∫ H(|∇φ|^2) dx dy:

    ∫∫ h(|∇φ|^2) ∇φ·∇ψ dx dy ≥ -M ∫∫ ψ dx dy

for all ψ ∈ C_0^∞(R_+^2), ψ ≥ 0 (cp. Necas [38]). Choosing H(u, v) a general convex function of (u, v) includes (b) by taking H(u, v) = u^2/2. The other set of admissibility conditions given in Lemma 4.1 imply that u increases through a shock if one passes through the shock in the positive y-direction. This is accomplished by the inequality u_y = φ_xy ≥ M, M ≤ 0.

6. The inverse p-system.

6.1. For the steady flow problem (1.1) the choice of a t-variable to give (4.1) was arbitrary. Also, we have seen that the x-variable in the transonic problem (1.1) is really the distinguished variable. Therefore, we will take a look at the inverse p-system

    (6.1)  p(u)_t + v_x = 0,   v_t - u_x = 0,

where p: R → R has the same properties as in Section 4.1. The characteristics are now the reciprocals

    λ_1(U) = -1/√(-p'(u)),   λ_2(U) = 1/√(-p'(u)).

The type of the system is the same as in Section 4.1. One should not be discouraged by the fact that the eigenvalues become infinite for p'(u) = 0. This means that the characteristics become parallel to the x-axis there. In the transonic flow problem one is interested in shocks where the shock speed dx/dy = 0, which is possible for mixed type shocks. In the framework of the inverse p-system this amounts to s being infinite. Note also that for mixed type problems involving smooth transition of the solution through the value u_crit, such that p'(u_crit) = 0, the system (6.1) cannot be transformed to (4.1) by inverting the coefficient matrix diag(p'(u), 1), since this matrix is singular where p'(u) = 0.

6.2. The inverse p-system as a mixed type system was studied by Mock [36]. He avoided the above difficulties by taking reciprocal eigenvalues and reciprocal shock speeds.
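The reciprocity between the characteristic speeds of the inverse p-system and those of (4.1) can be checked numerically, again for the assumed example p'(u) = -u (hyperbolic for u > 0; this concrete p is an illustration, not from the text).

```python
import numpy as np

def speeds_p_system(u):
    # eigenvalues of f'(U) = [[0, -1], [p'(u), 0]] with p'(u) = -u
    A = np.array([[0.0, -1.0], [-u, 0.0]])
    return np.sort(np.linalg.eigvals(A).real)

def speeds_inverse_system(u):
    # quasilinear form of the inverse system:
    # diag(p'(u), 1) q_t + [[0, 1], [-1, 0]] q_x = 0, so speeds = eig(M^-1 N)
    M = np.diag([-u, 1.0])
    N = np.array([[0.0, 1.0], [-1.0, 0.0]])
    return np.sort(np.linalg.eigvals(np.linalg.solve(M, N)).real)

u = 4.0
print(speeds_p_system(u))        # [-2.  2.]
print(speeds_inverse_system(u))  # [-0.5  0.5]
```

Each speed of the inverse system is the reciprocal of the corresponding speed ±√(-p'(u)) of the p-system, and both blow up or vanish together only at p'(u) = 0.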
For our purposes, the discussion of shocks, it will suffice to disregard the states where p'(u) = 0 and the infinite shock speeds. Then one may keep the usual notations. The Rankine-Hugoniot jump conditions are

    (6.2)  s[p(u_L) - p(u_R)] = [v_L - v_R],   s[v_L - v_R] = -[u_L - u_R].

One obtains

    (6.3)  s^2 = -(u_L - u_R)/(p(u_L) - p(u_R)) = -1/p'(ζ)

for a ζ between u_L and u_R. Again this excludes shocks with both states in E ∪ Z.

6.3. We now study the entropy inequality. It is

    (U_i)_x + (F_i)_t ≤ 0,  i = 1, 2,   resp.   s[F_i] - [U_i] ≤ 0,  i = 1, 2.

First we take U_1, F_1. Then we have, using (6.2) and the mean value theorem,

    (6.4)  0 ≥ s[F_1] - [U_1] = s[v_L p(u_L) - v_R p(u_R)] - [(v_L^2 - v_R^2)/2 - P(u_L) + P(u_R)]
             = (u_R - u_L)·[(p(u_L) + p(u_R))/2 - p(ξ)]

for some ξ between u_L and u_R. For p'' < 0 this implies u_L < u_R for shocks. Now u^-, u^+ have to be taken with respect to the t-axis. Therefore one obtains

    u^+ < u^- for s > 0,   u^- < u^+ for s < 0

(see Figure 2). This is the same as obtained for the p-system (see Figure 4).

Figure 4

The other entropy inequality gives

    (6.5)  0 ≥ s[F_2] - [U_2]
             = s[(v_L^2 - v_R^2)/2 + P(u_L) - P(u_R) - u_L p(u_L) + u_R p(u_R)] + [u_L v_L - u_R v_R]
             = s(u_R - u_L)·[(p(u_L) + p(u_R))/2 - p(ξ)]

for some ξ between u_L and u_R.
Since p'' < 0 this implies u_R > u_L for s > 0 and u_R < u_L for s < 0.

E-Book Information

• Series: The IMA Volumes in Mathematics and Its Applications 27
• Year: 1990
• Edition: 1
• Pages: 284
• Language: English
• Identifier: 978-1-4613-9051-0, 978-1-4613-9049-7
• Doi: 10.1007/978-1-4613-9049-7

Table of Contents

• Front Matter
• Multiple Viscous Profile Riemann Solutions in Mixed Elliptic-Hyperbolic Models for Flow in Porous Media
• On the Loss of Regularity of Shearing Flows of Viscoelastic Fluids
• Composite Type, Change of Type, and Degeneracy in First Order Systems with Applications to Viscoelastic Flows
• Numerical Simulation of Inertial Viscoelastic Flow with Change of Type
• Some Qualitative Properties of 2 × 2 Systems of Conservation Laws of Mixed Type
• On the Strict Hyperbolicity of the Buckley-Leverett Equations for Three-Phase Flow
• Admissibility Criteria and Admissible Weak Solutions of Riemann Problems for Conservation Laws of Mixed Type: A Summary
• Shocks Near the Sonic Line: A Comparison between Steady and Unsteady Models for Change of Type
• A Strictly Hyperbolic System of Conservation Laws Admitting Singular Shocks
• An Existence and Uniqueness Result for Two Nonstrictly Hyperbolic Systems
• Overcompressive Shock Waves
• Quadratic Dynamical Systems Describing Shear Flow of Non-Newtonian Fluids
• Dynamic Phase Transitions: A Connection Matrix Approach
• A Well-Posed Boundary Value Problem for Supercritical Flow of Viscoelastic Fluids of Maxwell Type
• Loss of Hyperbolicity in Yield Vertex Plasticity Models under Nonproportional Loading
• Undercompressive Shocks in Systems of Conservation Laws
• Measure Valued Solutions to a Backward-Forward Heat Equation: A Conference Report
• One-Dimensional Thermomechanical Phase Transitions with Non-Convex Potentials of Ginzburg-Landau Type
• Admissibility of Solutions to the Riemann Problem for Systems of Mixed Type
Oil Price Shocks, Durables Consumption, and China’s Real Business Cycle

Modern Economy Vol.10 No.04 (2019), Article ID: 92012, 24 pages

Yunqing Wang^1,2*, Xinyu Sui^3, Wenjie Pan^4

^1School of Finance, Shanghai Lixin University of Accounting and Finance, Shanghai, China
^2PICC Asset Management Company Limited, Shanghai, China
^3School of Economics and Management, Shanghai Bangde College, Shanghai, China
^4School of Statistics and Information, Shanghai University of International Business and Economics, Shanghai, China

Copyright © 2019 by author(s) and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

Received: March 27, 2019; Accepted: April 21, 2019; Published: April 24, 2019

Motivated by the fact that sharp volatility in international oil prices has become one of the important external sources driving China’s economic fluctuations, and in view of the strong correlation between oil and consumer durables, we build a real business cycle (RBC) model that incorporates durable goods consumption in the context of oil price shocks. Using quarterly data on the Chinese economy to conduct an empirical test, we examine the cyclical characteristics of China’s macroeconomic volatility and the transmission mechanism of oil price shocks. The study shows: 1) Dividing consumption into durables and non-durables within the RBC model plays a crucial role in explaining Chinese economic fluctuations.
The core improvement of the model lies in its forecast of consumption volatility and of consumption’s weak pro-cyclicality, which brings it closer to the actual economy; 2) Oil price shocks mainly affect consumption volatility, but seldom influence output, investment and labor, three variables that are largely driven by technology shocks; 3) The model reveals that the transmission mechanism is determined by intra-temporal income effects and inter-temporal effects of portfolio rebalancing between durable goods and capital goods.

Keywords: Oil Price Shocks, Durables Consumption, Real Business Cycle, The Chinese Economy

1. Introduction

Since its reform started in 1978, China’s economy has sustained high growth of about 8% - 10%^1, shoring up the demand for oil. As early as 2003, China surpassed Japan to become the world’s second largest oil consumer after the United States. With a sharp rise in the scale of consumption, external dependence on crude oil is also rising. Since 2011, China has surpassed the US as the world’s largest oil importer; in 2012, China’s net oil imports accounted for 86% of the global growth increment; its dependence on foreign oil in 2014 reached 59.5% of its overall consumption. Since the beginning of this century, international oil prices have risen or fallen by more than 50% on three occasions. Take the recent market for example: from the second half of 2014, British Brent crude oil prices fell more than 60% in less than seven months, a decline second only to that during the financial crisis of 2008. The sharp volatility in international oil prices has become one of the important external sources driving business cycle fluctuations in China.

As a major energy source and raw material in modern industry, oil price volatility influences a nation’s macro-economy through a variety of channels [1]. According to the “shock-transmission mechanism” analytical framework of business cycle theory, oil price shocks belong to the supply type of real business cycle. Compared to the RBC literature based on supply-type technology shocks [2] [3] [4], China’s RBC literature based on oil shocks is still scarce; moreover, it lacks studies that subject RBC models built on China’s economic data to empirical testing and prediction.
Compared to the RBC theory with technology shocks of the supply type [2] [3] [4] , China’s RBC literatures based on the oil shocks are still in great short, and besides, it lacks studies on RBC models established on China’s economic data in empirical testing and prediction. In an attempt to shed light on the above issues, this paper accounts for the impact factors in driving the cyclical pattern of China’s economy in the sight of oil price shocks. It should be emphasized that, in recent years, the driving forces for China’s oil consumption growth not only come from industrialization and urbanization, but also from changes in the structure of consumer demand. The consumption structure of Chinese residents has gone from subsistence to well-off, and then upgraded to be the consumer. On one hand, the proportion of total food consumption is on decline, with the Engel coefficient of urban residents decreasing from 57.5% in 1978 to 35.0% in 2013, and rural residents from 67.7% to 37.7% (from China Statistical Yearbook, 2014). On the other hand, the types of consumer goods continue to be enriched and the quality continues to be improved, among which the most obvious sign is that the various durable goods of residents have continued to be on increase. Not only has the amount of color television sets, refrigerators and other traditional home appliances are on fast rise^2, other newly developing household consumptions such as personal computers, mobile phones, sports cars and other entertainment equipment are significantly expanded^3. Oil as a raw material is widely involved in the production of consumer durables sectors, but also used as input and fuel in durable goods. The consumption upgrading has led to the transformation of the industrial structure, and boosted the demand for oil. Distinctive from consumer non-durable goods (non-durables) consumption, durable goods (durables) have higher prices and long-term use for each time. 
In addition, durables consumption behavior differs markedly from other consumer behavior. On one hand, for durables that are not necessities of life, households can consume selectively according to income in different periods, so the intertemporal elasticity of substitution is much larger than that of non-durables [5]; on the other hand, adjusting durables consumption involves higher costs and exhibits “investment irreversibility”, and for individual families durables purchases are discrete events whose triggers are more diverse. Moreover, generally speaking, the volatility of durables consumption is much larger than that of non-durables. These characteristics mean that durables respond to oil price shocks differently from other consumer goods. For example, under the impact of rising oil prices, the production costs of durables partly increase, affecting the corresponding demand and investment, and households may also postpone purchases of durables, thereby reducing consumption [6].

Firstly, with regard to oil price shocks, there is a series of influential works based on the RBC framework [7] [8] [9]. Finn (2000), for example, holds that high oil prices are equivalent to a negative technology shock and that, given a reasonable relation between the capital utilization rate and oil usage, an oil price rise will reduce firms’ capital utilization, which in turn will decrease investment and output, leading to a variety of consequences such as rising interest rates and rising inflation. Moreover, Rotemberg and Woodford (1996) examine the proposition in the context of imperfectly competitive markets, and conclude that imperfect competition is very important for understanding the effects of oil shocks on the US economy [10].
To examine the mechanisms through which energy affects the business cycle, Kim and Loungani (1992) and Dhawan and Jeske (2007) introduce energy input endogenously into the production function of the RBC model, transforming the traditional “capital-labor” production function into a “capital-labor-energy” production function [11] [12]. By developing an open-economy RBC model, Backus and Crucini (2000) show that volatility in oil prices is responsible for much of the trade volatility of most countries in the world during the last twenty-five years of the 20th century [13]. Wu (2009) follows Finn (2000) in developing an RBC model in line with China’s national conditions, in order to explore fluctuations in China’s energy efficiency [14]. The numerical simulation shows that endogenous changes in the capital utilization rate play a key role in China’s energy efficiency fluctuations, which is similar to the conclusion of Finn’s study of the US economy. Moreover, focusing on the Chinese economy under the RBC framework, Sun and Jiang (2012) find that energy price shocks lead to higher inflation and have a negative but short-lived impact on economic growth [15].

Secondly, as to durables consumption, the mainstream literature divides consumption into durables and non-durables and studies their impact on the macro-economy. Durables refer to automobiles, household goods, sports equipment, jewelry and other goods that do not quickly wear out, while non-durables are the opposite: goods and services consumed over a short period or at once.
The motivation for scholars to make such a distinction is twofold: on one hand, different paces of expenditure and intertemporal elasticities of substitution between the two kinds of goods affect the growth rate of the actual economy; on the other hand, durables are much more sensitive than non-durables to economic policy, particularly monetary policy, which changes the policy transmission mechanism and the optimal economic policy. Representative studies include Ogaki and Reinhart (1998), Erceg and Levin (2006), and Monacelli (2009) [16] [17] [18]. For China’s economy, Fan et al. (2007) focus on the durables consumption of urban and rural residents using the CHNS micro-database, and find that the empirical results support the (S, s) model [19]. Also, Yin and Gan (2009) use CHNS data to study the impact of housing reform on household durables consumption, and find that the reform significantly increased durables consumption [20]. Zhao and Hsu (2012) follow the method proposed by Cooley and Prescott (1995) to estimate consumer durables for China, and find that durables consumption is much more volatile than output [21] [22].

Throughout these studies, it can be found that for the Chinese economy the discussions of oil price volatility and of durables consumption are conducted separately: studies address either the impact of oil prices on the economy or the role of durables in the economy, and little literature discusses their complementarity. Moreover, existing studies that explore macroeconomic effects by dividing consumer goods into durables and non-durables focus on quantitative empirical analysis, whereas the study of consumer durables within an RBC framework has not yet been undertaken.
Meanwhile, a model that misses the reality of rising durables consumption among China’s urban and rural residents may suffer from fitting errors and will find it difficult to accurately capture the mechanisms through which oil prices affect China’s macro-economy. In addition, when establishing an RBC theoretical framework for the oil economy, the existing literature is silent on using actual economic data to test whether the model really matches China’s cyclical properties. In view of this, based on the RBC framework, our work complements these studies by incorporating non-durables and durables to investigate the transmission mechanism of oil prices to the economy, and moreover characterizes the patterns of China’s business cycle in the context of oil price shocks.

Compared with most existing studies, the contribution of the paper is three-fold: first, unlike the abundant RBC studies of China’s business cycle [23] [24] [25], we conduct an RBC exercise using quarterly rather than annual data, so the model can better fit the weakly pro-cyclical character of Chinese consumption; second, we contribute to the existing literature by incorporating consumer durables into the RBC theoretical framework, an approach drastically different from survey-data-based empirical analyses of China’s economic fluctuations; third, we follow Dhawan and Jeske (2007) in taking into account the correlation between oil and durables by building the three elements of the household sector, “non-durables-durables-oil”, into a two-level nested CES consumption function. Our results show that the model does a fairly good job of capturing China’s real business cycle.

The remainder of this paper is organized as follows. Section 2 describes the cyclical properties of oil prices and other macro series in China; Section 3 presents the RBC model of the oil economy, including durables and non-durables consumption; Section 4 conducts the calibration of parameters.
Section 5 discusses the model results; Section 6 concludes the paper.

2. Cyclical Properties of Oil Prices and China’s Economy

This paper uses data from the CICE and WIND databases, choosing quarterly data with a time span from 1997Q1 (1997Q1 means the first quarter of 1997; the same below) to 2016Q1, a total of 77 observations. China has officially compiled quarterly data since the 1990s, and from 1997 the National Bureau of Statistics of China (NBS) began to publish monthly or quarterly data on consumer durables, covering goods such as automobiles, furniture, household appliances, and sports and entertainment products. Durables data are crucial to the modeling and analysis of this paper, so the sample starts at the beginning of 1997.
Thirdly, since no quarterly or monthly private investment data are officially published, and consistent with Wang and Zhu (2015), domestic loans, self-financing, foreign investment, and other funds are taken as the representative variable for total private investment funds [26]. Dividing this by the quarterly GDP deflator yields real private investment (i.e. capital investment). Fourthly, real total investment is defined as the sum of real private investment and durables investment. Fifthly, following Huang (2005), "unit employees in total" is used as labor. Sixthly, real GDP is nominal GDP divided by the quarterly GDP deflator. Seventhly, "retail: enterprises over the quota: petroleum and petroleum products" is taken as the representative variable for household oil consumption; dividing this series by the quarterly GDP deflator gives real household oil consumption. Eighthly, West Texas Intermediate (WTI) crude oil spot prices^4 are converted from monthly to quarterly data by the geometric mean, then converted into RMB prices and divided by the quarterly GDP deflator to obtain real oil prices. From Figure 1, it can be seen that oil prices are obviously more volatile than GDP. In particular, the standard deviation of GDP is 0.0376, whereas that of oil prices is 0.2073, 5.51 times larger. As for co-movement, oil prices and GDP show contemporaneous co-movement in an irregular pattern. For example, from early 2007 to the end of 2009, oil prices experienced their greatest volatility; during 2007Q3-2008Q2, oil prices and GDP moved together positively, whereas during 2008Q4-2009Q3 they moved together negatively; and in the two periods 1998Q1-1998Q4 and 2014Q1-2015Q3, oil prices and GDP diverged.
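The deflate-log-HP-filter pipeline described above, and the second moments reported in Table 1 below, can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the direct dense-matrix HP filter and the series names are assumptions, with λ = 1600 the standard value for quarterly data.

```python
import numpy as np

def hp_filter(y, lamb=1600.0):
    """Split a series into trend and cycle by solving (I + lamb * D'D) tau = y,
    where D is the second-difference operator; lamb = 1600 for quarterly data."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(n) + lamb * (D.T @ D), y)
    return y - trend, trend  # (cycle, trend)

def real_cycle(nominal, deflator, lamb=1600.0):
    """Nominal series -> deflate by the quarterly GDP deflator -> log -> HP cycle."""
    cycle, _ = hp_filter(np.log(np.asarray(nominal, float) / np.asarray(deflator, float)), lamb)
    return cycle

def table1_moments(x, gdp_cycle):
    """Std dev, relative std dev (vs GDP), correlation with GDP, first-order autocorrelation."""
    x, g = np.asarray(x, float), np.asarray(gdp_cycle, float)
    sd = x.std()
    return sd, sd / g.std(), np.corrcoef(x, g)[0, 1], np.corrcoef(x[1:], x[:-1])[0, 1]
```

For an exactly linear (log) series the HP cycle is zero, since the second differences of the trend vanish.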
Overall, we do not find clear evidence supporting the traditional theoretical prediction that rising (falling) oil prices lead to a fall (rise) in output.

Figure 1. China's actual GDP and actual oil price fluctuations (HP filter), 1997Q1-2016Q1.

Table 1 summarizes the statistical moments of the Chinese business cycle over 1997Q1-2016Q1, which provide some stylized facts about China's business cycles.

Table 1. The statistical moments of China's economy, 1997Q1-2016Q1. Note: See the first paragraph of this section for the data sources and data processing; the relative standard deviation is the ratio of a variable's standard deviation to that of GDP; autocorrelations refer to the first-order autocorrelation coefficient.

First, oil prices and household oil consumption are more volatile than capital investment, durables investment, and GDP, while the volatilities of labor and consumption are the lowest. On one hand, capital investment and durables investment fluctuate severely, 1.58 times and 1.40 times the amplitude of GDP respectively, and show strong pro-cyclicality. On the other hand, labor is smoother, showing that China's business cycle features differ from those of developed markets as well as other emerging markets. Second, one striking fact is that consumption is only slightly pro-cyclical, with a correlation of just 0.15, as opposed to the strong pro-cyclicality derived from annual Chinese data by Rao and Liu (2014), and also different from the strong pro-cyclicality in US data [27]. Indeed, over the past three decades China has achieved remarkable growth primarily through investment and exports, whereas consumption has always been sluggish; "high savings and low consumption" is an important economic characteristic of China that distinguishes it from the US and other developed economies [28].
Therefore, we believe that Chinese consumption is slightly pro-cyclical, as evidenced by the quarterly data. The other macro series are pro-cyclical; household oil consumption in particular is only slightly pro-cyclical (0.04), indicating a certain degree of rigidity in households' consumption of oil or its products (such as daily petrochemicals), which does not fluctuate significantly as income changes. Third, the autocorrelations of consumption and durables investment are 0.71 and 0.59 respectively, indicating strong consumption inertia in both, in line with "Catching up with the Joneses" [29] [30]. Meanwhile, oil prices and household oil consumption both display strong autocorrelation, meaning both have a degree of persistence. Fourth, the volatility of oil prices is as high as 0.2073, three times greater than that of capital investment, which suggests that oil price volatility may be an important external source of China's economic fluctuations. Meanwhile, the low correlation (0.06) between oil prices and GDP may also indicate asymmetry, alternation, and complexity in the relationship between oil prices and the economy, calling for a comprehensive and dynamic study of the inherent association between oil prices and China's macroeconomy, for which a DSGE model provides exactly such a unified short- and long-run analytical framework.

3. Modeling

Based on the canonical RBC framework developed by Hansen (1985) and Cooley and Prescott (1995), this paper follows the setup of Dhawan and Jeske (2007) and specifies the production function in a doubly nested three-factor "capital-oil-labor" form. Consumer goods in the household sector are likewise divided into durables and non-durables in the utility function, yielding a DSGE model of the oil economy containing both households and firms.

3.1.
Households

The representative household's consumption ( ${C}_{t}$ ) consists of durables ( ${D}_{t}$ ), oil and oil products ( ${O}_{h,t}$ , hereinafter referred to as oil) and non-durables ( ${N}_{t}$ ). Assume a doubly nested CES functional form composed of the three elements:

${C}_{t}={\left[{\alpha }_{c}{\left({N}_{t}\right)}^{-{\rho }_{c}}+\left(1-{\alpha }_{c}\right){\left({F}_{t}\right)}^{-{\rho }_{c}}\right]}^{-\frac{1}{{\rho }_{c}}}$ (1)

${F}_{t}\equiv {\left[{\alpha }_{F}{\left({D}_{t-1}\right)}^{-{\rho }_{F}}+\left(1-{\alpha }_{F}\right){\left({O}_{h,t}\right)}^{-{\rho }_{F}}\right]}^{-\frac{1}{{\rho }_{F}}}$ (2)

with ${\alpha }_{c}\in \left(0,1\right)$ , ${\alpha }_{F}\in \left(0,1\right)$ , ${\rho }_{c}\ge -1$ , and ${\rho }_{F}\ge -1$ , where $\frac{1}{1+{\rho }_{c}}$ is the elasticity of substitution between non-durables and the composite of oil and durables (defined as ${F}_{t}$ ), and $\frac{1}{1+{\rho }_{F}}$ is the elasticity of substitution between oil and durables. Durables consumption follows an accumulation process, operating much like capital ( ${K}_{t}$ ) in the model; both are state variables. The representative household's utility function is:

${E}_{0}\left\{{\sum }_{t=0}^{\infty }{\beta }^{t}\left[\mathrm{log}\left({C}_{t}\right)-\frac{{\left({L}_{t}\right)}^{1+\eta }}{1+\eta }\right]\right\}$ (3)

where $\beta$ denotes the discount factor, ${L}_{t}$ is labor supply, and $\eta$ is the inverse of the elasticity of labor supply. The budget constraint for households is:

${N}_{t}+{I}_{K,t}+{I}_{D,t}+{P}_{o,t}{O}_{h,t}={w}_{t}{L}_{t}+{r}_{t}^{k}{K}_{t-1}$ (4)

${I}_{K,t}$ and ${I}_{D,t}$ are investment in capital and durables respectively, ${w}_{t}$ and ${r}_{t}^{k}$ denote real wages and the return on invested capital, and ${P}_{o,t}$ is the real price of oil.
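As a numerical sketch of the doubly nested aggregator in equations (1)-(2): the values $\rho_c = 0$ and $\rho_F = 0.7$ anticipate the calibration in Section 4, while the share parameters here are placeholders, not the calibrated ones.

```python
import numpy as np

def ces(a, x1, x2, rho):
    """Two-input CES aggregator [a*x1^(-rho) + (1-a)*x2^(-rho)]^(-1/rho);
    rho -> 0 gives the Cobb-Douglas limit x1^a * x2^(1-a)."""
    if abs(rho) < 1e-12:
        return x1 ** a * x2 ** (1.0 - a)
    return (a * x1 ** (-rho) + (1.0 - a) * x2 ** (-rho)) ** (-1.0 / rho)

def consumption(N, D_lag, O_h, alpha_c=0.5, alpha_F=0.5, rho_c=0.0, rho_F=0.7):
    """C_t from (1)-(2): F_t aggregates lagged durables and oil, then
    C_t aggregates non-durables and F_t."""
    F = ces(alpha_F, D_lag, O_h, rho_F)
    return ces(alpha_c, N, F, rho_c)
```

Since both CES layers are homogeneous of degree one, doubling all three inputs doubles the consumption bundle.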
In addition, the stocks of capital and durables evolve according to:

${I}_{K,t}={K}_{t}-\left(1-{\delta }_{k}\right){K}_{t-1}+\frac{{\varnothing }_{k}}{2}{\left(\frac{{I}_{K,t}}{{K}_{t-1}}-{\delta }_{k}\right)}^{2}{K}_{t-1}$ (5)

${I}_{D,t}={D}_{t}-\left(1-{\delta }_{d}\right){D}_{t-1}+\frac{{\varnothing }_{d}}{2}{\left(\frac{{I}_{D,t}}{{D}_{t-1}}-{\delta }_{d}\right)}^{2}{D}_{t-1}$ (6)

${\delta }_{k}$ and ${\delta }_{d}$ are the depreciation rates of capital and durables respectively. In line with Atkeson and Kehoe (1999), it is assumed that investment in capital and durables incurs additional adjustment costs, so ${\varnothing }_{k}$ and ${\varnothing }_{d}$ are defined as the adjustment cost parameters [31]. The first-order conditions are obtained by solving the representative household's dynamic optimization problem:

${\lambda }_{t}={\alpha }_{c}{\left({C}_{t}\right)}^{{\rho }_{c}}{\left({N}_{t}\right)}^{-1-{\rho }_{c}}$ (7)

${\lambda }_{t}{Q}_{D,t}=\beta \left\{\left(1-{\alpha }_{c}\right){\alpha }_{F}{\left({C}_{t+1}\right)}^{1+{\rho }_{c}}{\left({F}_{t+1}\right)}^{{\rho }_{F}-{\rho }_{c}}{\left({D}_{t}\right)}^{-\left(1+{\rho }_{F}\right)}+{\lambda }_{t+1}{Q}_{D,t+1}\left[\left(1-{\delta }_{d}\right)-\frac{{\varnothing }_{d}}{2}{\left(\frac{{I}_{D,t+1}}{{D}_{t}}-{\delta }_{d}\right)}^{2}+{\varnothing }_{d}\frac{{I}_{D,t+1}}{{D}_{t}}\left(\frac{{I}_{D,t+1}}{{D}_{t}}-{\delta }_{d}\right)\right]\right\}$ (8)

${\lambda }_{t}{P}_{o,t}=\left(1-{\alpha }_{c}\right)\left(1-{\alpha }_{F}\right){\left({C}_{t}\right)}^{{\rho }_{c}}{\left({F}_{t}\right)}^{{\rho }_{F}-{\rho }_{c}}{\left({O}_{h,t}\right)}^{-\left(1+{\rho }_{F}\right)}$ (9)

${\lambda }_{t}{w}_{t}={\left({L}_{t}\right)}^{\eta }$ (10)

${Q}_{K,t}=\beta \frac{{\lambda }_{t+1}}{{\lambda }_{t}}\left\{{r}_{t}^{k}+{Q}_{K,t+1}\left[\left(1-{\delta }_{k}\right)-\frac{{\varnothing }_{k}}{2}{\left(\frac{{I}_{K,t+1}}{{K}_{t}}-{\delta }_{k}\right)}^{2}+{\varnothing }_{k}\frac{{I}_{K,t+1}}{{K}_{t}}\left(\frac{{I}_{K,t+1}}{{K}_{t}}-{\delta }_{k}\right)\right]\right\}$ (11)

${Q}_{K,t}\left[1-{\varnothing }_{k}\left(\frac{{I}_{K,t}}{{K}_{t-1}}-{\delta }_{k}\right)\right]=1$ (12)

${Q}_{D,t}\left[1-{\varnothing }_{d}\left(\frac{{I}_{D,t}}{{D}_{t-1}}-{\delta }_{d}\right)\right]=1$ (13)

where ${\lambda }_{t}$ is the Lagrange multiplier on the budget constraint, and ${Q}_{K,t}$ and ${Q}_{D,t}$ are the shadow prices of capital and durables (i.e. the Lagrange multipliers on the capital and durables accumulation equations), respectively. (7), (8) and (9) are the Euler equations for non-durables, durables and household oil consumption, describing the household's optimal consumption choices over these three goods. (10) is the labor supply equation and (11) is the Euler equation for capital. (12) and (13) characterize the optimal dynamic investment behavior for capital and durables.

3.2.
Firms

In the same fashion as the household sector, the production function of firms takes a doubly nested CES^5 form composed of three elements:

${Y}_{t}={A}_{t}{\left[{\alpha }_{y}{\left({X}_{t}\right)}^{-{\rho }_{y}}+\left(1-{\alpha }_{y}\right){\left({L}_{t}\right)}^{-{\rho }_{y}}\right]}^{-\frac{1}{{\rho }_{y}}}$ (14)

${X}_{t}\equiv {\left[{\alpha }_{x}{\left({K}_{t-1}\right)}^{-{\rho }_{x}}+\left(1-{\alpha }_{x}\right){\left({O}_{f,t}\right)}^{-{\rho }_{x}}\right]}^{-\frac{1}{{\rho }_{x}}}$ (15)

${Y}_{t}$ is output, ${O}_{f,t}$ is firms' oil consumption, ${X}_{t}$ is the composite of capital and oil (similar to ${F}_{t}$ in the household sector), and ${A}_{t}$ is a neutral technology shock, i.e. total factor productivity (TFP), whose logarithm follows the stochastic process:

$\mathrm{ln}\left({A}_{t}\right)=\left(1-{\rho }_{A}\right)\mathrm{ln}\left(A\right)+{\rho }_{A}\mathrm{ln}\left({A}_{t-1}\right)+{u}_{A,t}$ , ${u}_{A,t}\sim N\left(0,{\left({\sigma }_{A}\right)}^{2}\right)$

where ${\rho }_{A}\in \left(0,1\right)$ is the autoregressive coefficient, $A$ is the steady-state value, and ${\sigma }_{A}$ is the standard deviation of technology shocks. Also, ${\alpha }_{y}\in \left(0,1\right)$ , ${\alpha }_{x}\in \left(0,1\right)$ , ${\rho }_{y}\ge -1$ , ${\rho }_{x}\ge -1$ .
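The TFP process above can be simulated directly; $\rho_A = 0.8$ matches the calibration chosen in Section 4, while the starting value and shock sequence below are purely illustrative.

```python
import numpy as np

def tfp_path(lnA0, shocks, rho_A=0.8, lnA_bar=0.0):
    """Simulate ln(A_t) = (1 - rho_A) * ln(A) + rho_A * ln(A_{t-1}) + u_{A,t}."""
    path = [float(lnA0)]
    for u in shocks:
        path.append((1.0 - rho_A) * lnA_bar + rho_A * path[-1] + u)
    return np.array(path)
```

With no further shocks, an initial deviation from steady state decays geometrically at rate $\rho_A$.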
We derive the first-order conditions with respect to ${L}_{t}$ , ${K}_{t}$ and ${O}_{f,t}$ by solving the firms' profit maximization problem:

${w}_{t}=\left(1-{\alpha }_{y}\right){\left({A}_{t}\right)}^{-{\rho }_{y}}{\left({Y}_{t}/{L}_{t}\right)}^{1+{\rho }_{y}}$ (16)

${r}_{t}^{k}={\alpha }_{y}{\alpha }_{x}{\left({A}_{t}\right)}^{-{\rho }_{y}}{\left({X}_{t}\right)}^{{\rho }_{x}-{\rho }_{y}}{\left({Y}_{t}\right)}^{1+{\rho }_{y}}{\left({K}_{t-1}\right)}^{-\left(1+{\rho }_{x}\right)}$ (17)

${P}_{o,t}={\alpha }_{y}\left(1-{\alpha }_{x}\right){\left({A}_{t}\right)}^{-{\rho }_{y}}{\left({X}_{t}\right)}^{{\rho }_{x}-{\rho }_{y}}{\left({Y}_{t}\right)}^{1+{\rho }_{y}}{\left({O}_{f,t}\right)}^{-\left(1+{\rho }_{x}\right)}$ (18)

3.3. Equilibrium Conditions and Model Solution

So far, we have characterized the optimal constrained choices of households and firms: the maximization of households' expected utility and the maximization of firms' expected profits. The market clearing condition for the final good is:

${N}_{t}+{I}_{K,t}+{I}_{D,t}+{P}_{o,t}\left({O}_{h,t}+{O}_{f,t}\right)\le {Y}_{t}$ (19)

In recent years, oil from abroad has accounted for an increasing proportion of China's total supply. China's external dependence on petroleum and on crude oil both exceeded 55% in 2011, surpassing the US to become the highest in the world, so domestic oil price volatility is highly tied to the international crude oil market. In addition, China started late in the trading of staple commodities, and its markets suffer from few product varieties, small size, low openness and a lack of pricing power, so oil pricing in China depends on the international market to a large extent.
To focus the analysis on the impact of oil prices on China's macroeconomy, and consistent with the assumptions of Rotemberg and Woodford (1996) for US crude oil, we assume that the volatility of oil prices in China is determined by the international market; that is, the oil price is completely exogenous and follows an ARMA (1, 1) process (see the parameter calibration in the next section). In log-linearized form:

${\stackrel{^}{P}}_{o,t}={\rho }_{o}{\stackrel{^}{P}}_{o,t-1}+{\stackrel{^}{u}}_{o,t}+{\rho }_{u}{\stackrel{^}{u}}_{o,t-1}$ , ${u}_{o,t}\sim N\left(0,{\left({\sigma }_{o}\right)}^{2}\right)$

where ${\rho }_{o}$ and ${\rho }_{u}$ are the AR and MA coefficients of the oil price ARMA (1, 1) process, and ${\sigma }_{o}$ is the standard deviation of real oil price shocks. At the same time, part of output must be used to pay for imported oil: $V{A}_{t}={Y}_{t}-{P}_{o,t}\left({O}_{h,t}+{O}_{f,t}\right)$ . This difference is defined as the value added of production ( $V{A}_{t}$ ); combined with the market clearing equation for the final good, it implies $V{A}_{t}={N}_{t}+{I}_{K,t}+{I}_{D,t}$ . Finally, by solving the log-linearized system, the optimal equilibrium path of each endogenous variable can be obtained:

$\left\{{C}_{t},{N}_{t},{D}_{t},{O}_{h,t},{F}_{t},{I}_{D,t},{I}_{K,t},{w}_{t},{L}_{t},{r}_{t}^{k},{K}_{t},{Y}_{t},{X}_{t},{O}_{f,t},{\lambda }_{t},{Q}_{D,t},{Q}_{K,t},V{A}_{t}\right\}$

4. Calibration

4.1. Oil Prices and Technology Shocks

The purpose of this paper is to examine the relation between oil prices and China's economy, so determining the coefficients of the oil price shock process is particularly important. Through trial and error, we find that an ARMA (1, 1) model fits the actual fluctuation of oil prices over the sample period, as seen in Figure 2.
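The exogenous oil-price process can be simulated as below; the coefficients $\rho_o = 0.56$ and $\rho_u = 0.46$ anticipate the estimates reported in Table 2, and the shock sequence is illustrative.

```python
import numpy as np

def oil_price_path(shocks, rho_o=0.56, rho_u=0.46):
    """Log-deviation oil price following ARMA(1,1):
    p_t = rho_o * p_{t-1} + u_t + rho_u * u_{t-1}, with p_0 = 0 and u_0 = 0."""
    p, u_prev, path = 0.0, 0.0, []
    for u in shocks:
        p = rho_o * p + u + rho_u * u_prev
        path.append(p)
        u_prev = u
    return np.array(path)
```

A one-off unit shock is propagated for two periods before decaying geometrically (the response is 1.0, then 0.56 + 0.46 = 1.02, then 0.56 · 1.02 = 0.5712, ...), a feature that matters for the impulse responses discussed in Section 5.4.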
Specific estimation results are shown in Table 2: the parameter estimates are highly significant, and tests show that the residuals of the regression equation form a stationary sequence with zero mean and a standard deviation of 0.12. From the residual autocorrelation and partial autocorrelation coefficients, no serial correlation remains, so the ARMA (1, 1) model is identified. The results are thus: ${\rho }_{o}=0.56$ , ${\rho }_{u}=0.46$ and ${\sigma }_{o}=0.12$ .

Figure 2. Actual values (solid line) of oil price fluctuations and their simulated values (dotted line + triangles) (HP filter), 1997Q1-2016Q1; the horizontal axis is time, and the vertical axis is the fluctuation value.

Table 2. Estimation results for actual oil prices from the ARMA (1, 1) model.

Based on quarterly data, Wang et al. (forthcoming) find the first-order autoregressive coefficient of China's technology shock to be 0.8, slightly lower than the 0.95 of the US [33]. So we choose ${\rho }_{A}=0.8$ and ${\sigma }_{A}=0.03$ in line with Wang et al. (2019) [34].

1) The discount factor β

Over 1997Q1-2016Q1, the average quarter-on-quarter inflation rate is 1%, so the quarterly discount factor is set to 0.99.

2) Capital depreciation rate δ[k] and durables depreciation rate δ[d]

In the literature on China's economic fluctuations, the average life span of China's fixed assets is mostly set at 10 years, so the annual capital depreciation rate is 0.1 and the corresponding quarterly value is 0.025 [35]. Previous studies have not estimated the depreciation rate of durables, but Chinese scholars also include durable assets when estimating fixed assets, so we assume it equals the capital depreciation rate.

3) Substitution parameters ρ[c], ρ[y], ρ[F], ρ[x]

Using data from the US and Japan respectively, Pakos (2011) shows that the elasticity of substitution between durables and non-durables is close to 1, i.e., ${\rho }_{c}=0$ [36].
Following Kim and Loungani (1992), the elasticity of substitution in the production function between labor and the composite of energy and capital is set to 1, namely ${\rho }_{y}=0$ . Using US industrial data, Lee and Ni (2002) find that higher oil prices not only reduce the supply of energy-intensive output but also reduce the demand for durables such as cars, which means oil products and durables are complements in the actual economy; therefore ${\rho }_{F}\ge 0$ , and we set it to 0.7 [37]. Lu et al. (2009) and Yang et al. (2011) estimate China's energy-capital substitution parameters at 0.47 and 0.49 respectively, similar to the estimates of 0.52 and 0.47 by Ma et al. (2008) from US and EU data [38] [39] [40]. However, China's energy market is heavily regulated, and the resulting energy price distortion is significantly higher than in developed countries, implying a substitution relationship weaker than in developed economies. In view of this, Huang and Lin (2011) use a meta-regression model to estimate the elasticity of substitution between energy and capital; we adopt their value of 0.25, and hence ${\rho }_{x}=3$ [41].

4) Share parameter α[y]

We choose ${\alpha }_{y}=0.5$ as in He et al. (2009). Combining the steady-state values and the other parameters pins down the remaining three share parameters without further calibration.

5) Investment adjustment cost parameters ϕ[k] and ϕ[d]

The larger the values of ${\varphi }_{k}$ and ${\varphi }_{d}$ , the greater the adjustment costs of capital and durables investment, and vice versa; when the value is 0, adjustment costs do not exist.
Compared to ${\varphi }_{k}=1$ in Atkeson and Kehoe (1999) for the US economy, we set both parameters to 3, as China is a developing country with incomplete financial markets.

6) Labor supply elasticity η

There is little Chinese empirical research on the setting of η, and the existing results differ widely; in the RBC literature its value is generally set to 1, and we follow that choice.

7) Steady-state values (K/Y, N/Y, I[D]/N, O[h]/N, K/O[f])

To solve the log-linearized system of difference equations, five more steady-state ratios must be determined. From (5), in steady state K = I[K]/δ[k]; computing mean capital investment over the sample period and combining it with the previously calibrated capital depreciation rate gives the steady-state capital stock K. Using sample-period means of output, durables investment, household oil consumption, and non-durables consumption, we obtain K/Y = 22, N/Y = 0.2, I[D]/N = 0.61, and O[h]/N = 0.31. In addition, since oil consumption data for Chinese firms are unavailable, K/O[f] = 300 is set by reference to the US estimate of Kim and Loungani (1992). The gap in capital accumulation and economic development between China and the US at this stage may be close to three decades, so this value is also reasonable. In conclusion, all deep parameters of the RBC model are summarized in Table 3.

Table 3. Calibration of deep parameters.

5. Model Results

The Toolkit package of Matlab source code by Uhlig (1999) is used to obtain the second-moment cyclical characteristics of each macroeconomic variable after log-linearizing the model, and the results are shown in Tables 4-8.
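As an arithmetic cross-check on the calibration, the steady-state ratios quoted later in Section 5.4 (O[h]/D ≈ 0.013 and O[f]/K ≈ 0.003) follow from the values above, assuming the durables stock satisfies D = I[D]/δ[d] in steady state, analogously to capital.

```python
# Calibrated quarterly values from Section 4 (see Table 3).
delta_d = 0.025      # durables depreciation rate (assumed equal to capital's)
ID_over_N = 0.61     # steady-state durables investment / non-durables
Oh_over_N = 0.31     # steady-state household oil consumption / non-durables
K_over_Of = 300.0    # capital / firm oil consumption

# In steady state I_D = delta_d * D, hence D/N = (I_D/N) / delta_d.
D_over_N = ID_over_N / delta_d
Oh_over_D = Oh_over_N / D_over_N   # household oil / durables stock
Of_over_K = 1.0 / K_over_Of        # firm oil / capital stock

print(round(Oh_over_D, 3), round(Of_over_K, 3))  # 0.013 0.003
```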
For comparison purposes, the experiments are organized as follows:

1) The RBC model with durables consumption is taken as the benchmark model of this paper, with two shocks (oil price and technology), denoted DRBC.

2) The same model structure as 1), but with only oil price shocks, denoted DRBC-OIL; the purpose is to isolate the impact of oil price shocks on the business cycle, holding technology fixed.

3) The same model structure as 1), but with only technology shocks, denoted DRBC-TFP; the purpose is to isolate the impact of technology shocks, holding oil prices fixed.

4) An RBC model with a single consumption structure, i.e. a simple RBC-type model in which consumption is not split into durables and non-durables, also with two shocks, denoted SRBC.

5) The same model structure as 4), but with only oil price shocks, denoted SRBC-OIL.

6) The same model structure as 4), but with only technology shocks, denoted SRBC-TFP.

When introducing consumer durables into the RBC model of the oil economy, three questions must be answered. First, compared directly with the actual economy, does the artificial DRBC (benchmark) model predict China's business cycle well? Second, compared with the standard oil economy model SRBC, are the predictions of the DRBC benchmark obviously improved? Third, relative to the traditional RBC view of technology as the most important source of economic fluctuations, what role do oil price shocks play in the business cycle, how do their effects on core macroeconomic variables such as output and consumption differ, and what is the transmission mechanism?

5.1.
Comparison with the Actual Economy

The predicted results of DRBC, DRBC-OIL and DRBC-TFP are shown in Table 4 and Table 5; compared with the actual economy, several findings emerge.

Table 4. Business cycle properties of model DRBC. Note: Simulation results are averages over 10,000 simulations, each of length 77 quarters, the same number of periods as the China sample. The K-P ratio denotes the ratio of the standard deviation in the artificial economy to that in the actual economy, after the HP filtering method proposed by Kydland and Prescott (1982). (Similarly for Tables 5-7.)

Table 5. Business cycle properties of models DRBC-OIL and DRBC-TFP.

In terms of volatility, the standard deviations of oil prices and household oil consumption are 14.98% and 10.09% respectively, far greater than the 3.75% volatility of output (4.00 and 2.69 times as large); capital investment, total investment and durables investment are also more volatile than output, at 5.60%, 5.39% and 5.05%; output ranks sixth, and labor and consumption are low at only 1.69% and 1.61%. That consumption volatility is the lowest reflects exactly the intertemporal consumption-smoothing behavior of households advocated by classical RBC theory^6. This volatility ranking coincides exactly with the actual economy. In terms of the K-P ratio, the output volatility of DRBC is close to the Chinese data, with an output K-P ratio of 0.9973, indicating that the model accounts for 99.73% of the output volatility in the data. Table 5 shows that the output K-P ratio of DRBC-TFP reaches 113.56%, while that of DRBC-OIL is only 26.33%, indicating that the main source of output volatility is technology shocks rather than oil price shocks.
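The K-P ratio defined in the note to Table 4 can be computed as below; this is an illustrative sketch, in which the simulated cycles would come from HP-filtered artificial-economy runs of the solved model.

```python
import numpy as np

def kp_ratio(simulated_cycles, data_cycle):
    """Average standard deviation across artificial-economy simulations
    (each 77 quarters, like the China sample), divided by the standard
    deviation of the HP-filtered data series."""
    sim_sd = np.mean([np.std(np.asarray(s, float)) for s in simulated_cycles])
    return sim_sd / np.std(np.asarray(data_cycle, float))
```

A ratio of 0.9973 for output, as in Table 4, means the artificial economy reproduces 99.73% of the output volatility in the data.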
This conclusion is relatively consistent with studies in the classical RBC framework suggesting that the main source of China's output volatility since the 1978 reform and opening up is technology shocks. Consumption volatility under DRBC is slightly lower than in the Chinese data, with a K-P ratio of 0.9253, meaning the artificial economy accounts for 92.53% of the volatility of China's consumption. It is worth noting that the consumption K-P ratio of DRBC-OIL is as high as 90.80%, while that of DRBC-TFP is only 25.86%, indicating that, in contrast to output, the main source of consumption volatility is oil price shocks rather than technology shocks. The K-P ratios of the three investment variables of DRBC are close to 1, showing that the artificial economy captures all three types of investment well. Table 5 also shows that, like output, capital investment and total investment are mainly driven by technology shocks, whereas oil price shocks account for 79.92% of durables investment volatility, slightly higher than the 75.38% from technology shocks; this is because durables investment is in effect households' future durables consumption, and oil price shocks have a bigger impact than technology shocks on consumption. The K-P ratio of household oil consumption in DRBC is 1.2676, meaning the model accounts for 126.76% of its volatility and so somewhat exaggerates it; moreover, DRBC-TFP accounts for only 6.28% and DRBC-OIL for 126.26%, demonstrating that oil price shocks capture almost all of the volatility of household oil consumption, which is consistent with intuition. The labor K-P ratio of DRBC is 0.6898, with DRBC-OIL at 25.71% and DRBC-TFP at 73.88%, indicating that labor volatility is mainly driven by technology shocks.
Finally, the K-P ratio of oil prices in DRBC is 0.7226, showing that the benchmark model accounts for 72.26% of oil price volatility. In terms of correlations with output, the DRBC model shows all series as pro-cyclical except oil prices, which are weakly counter-cyclical, and it predicts this dimension closely. In particular, the correlations of labor, capital investment, and total investment with output reach 0.99, 0.98, and 0.89 respectively, higher than in the actual economy and strongly pro-cyclical. The correlation between durables investment and output is 0.63, slightly higher than the 0.52 of the actual economy. As mentioned earlier, compared with the developed economies, China's consumption is weakly pro-cyclical, and DRBC captures this feature well: the correlation between consumption and output in the artificial economy is 0.18, not far from the 0.15 of the actual economy^7. The correlation between household oil consumption and output is 0.07, close to the 0.04 of the actual economy, showing weak pro-cyclicality. The model underestimates the correlation between oil prices and output, obtaining −0.04 as opposed to 0.06 in the actual economy, though both are close to 0. The ranking of predicted correlations with output in the artificial economy is consistent with the actual economy. In terms of autocorrelations, the variables of DRBC all show positive autocorrelation, exhibiting persistence, which also matches the actual economy. In summary, the DRBC model accurately reproduces the second-moment features of the actual economy and can serve as an appropriate model for capturing the volatility of China's economy.

5.2. Comparison with the SRBC Model without Consumer Durables

Simulated results for SRBC, SRBC-OIL, and SRBC-TFP, which do not consider consumer durables, are shown in Table 6 and Table 7.
The most salient result of comparing DRBC with SRBC is that DRBC improves the consumption prediction, bringing it closer to the actual economy. Starting with consumption, the predicted volatility under DRBC (1.61%) is greater than under SRBC (0.68%) and closer to the actual economy (1.74%). In terms of the K-P ratio, the explanatory power of DRBC, 92.53%, is much higher than SRBC's 39.08%. Finally, SRBC implies strongly pro-cyclical consumption (a consumption-output correlation of 0.84), which cannot fit the weakly pro-cyclical consumption of China's actual economy, whereas DRBC fits this characteristic better.

Table 6. Business cycle properties of model SRBC. Note: NA denotes undefined (similarly hereinafter).

Table 7. Business cycle properties of models SRBC-OIL and SRBC-TFP.

Turning to output, SRBC predicts a standard deviation of output of 3.72%, almost the same as DRBC and close to the 3.76% of the actual economy, so the two models also have similar explanatory power in terms of the K-P ratio. In fact, Table 7 shows the output K-P ratio of SRBC-TFP is as high as 112.50%, while that of SRBC-OIL is only 26.33%, further confirming that output is mainly technology-driven, with a weaker role for oil prices. The output autocorrelations of both DRBC and SRBC are 0.48, and output shows significant persistence; both models mimic this well. For investment and labor, the standard deviations of these two series under SRBC are 6.03 and 1.64 respectively; compared with the actual economy, DRBC is slightly better than SRBC in volatility forecasting. Investment volatility in SRBC rises mainly because smooth durables investment is absent, so households can only rebalance their portfolios through capital investment, which increases investment volatility.
In Table 7, for capital investment and labor, the K-P ratios of SRBC-TFP are 119.39% and 71.02%, while those of SRBC-OIL are only 5.73% and 26.12% respectively, further supporting the conclusion that technology shocks are the main source of volatility in capital investment and labor.

5.3. Variance Decomposition

Table 8 shows the variance decomposition of technology and oil price shocks in DRBC in accounting for the volatility of China's macroeconomic variables. To focus on the problem studied in this paper, only two exogenous shocks^8 are introduced in the model: technology shocks, the core of RBC theory, and oil price shocks, the theme of this paper. The variance decomposition clearly shows the extent to which each exogenous shock drives the volatility of the macroeconomic variables. The impact of oil price shocks is concentrated in three variables: consumption, durables investment and household oil consumption. Oil price shocks account for 35.1% of consumption volatility, 34.8% of durables investment volatility, and 95.4% of household oil consumption volatility; they account for less than 10% of the volatility of output^9, investment, total investment and labor, meaning most of those volatilities derive from technology shocks. In particular, technology shocks account for 96.3% of output volatility, almost consistent with traditional RBC theory, in which technology shocks account for approximately 100% of output volatility. In short, the variance decomposition shows that output, investment and labor volatilities in this model are mainly dominated by technology shocks, while oil price shocks mainly affect consumption volatility, which also verifies the conclusions drawn from Tables 4-7.

5.4. Impulse Response Analysis

Figure 3 plots the responses of the main macroeconomic variables to one-standard-deviation positive oil price and technology shocks.
By the contemporaneous effect and the household's intertemporal budget constraint, rising oil prices produce a negative income effect: households reduce durables, nondurables, and household oil consumption, so consumption falls and labor supply increases. Notice that the rise in oil prices triggers a "first rise, then fall" path of capital investment, rather than the immediate decline of the traditional RBC model. Specifically, the shock raises investment in the first two periods, with the decline starting from the third period.

Figure 3. Impulse responses of main macroeconomic variables (dashed line indicates the positive technology shock, and the solid line denotes the positive oil price shock).

The economic logic is the following: the investment and accumulation process of durables is entirely decided by households, while capital goods are jointly decided by households and firms (households determine capital supply, and firms determine capital demand); therefore households need to rebalance their investment portfolio between durables and capital goods. According to the calibration, in the initial steady state the ratio of household oil to durables (O[h]/D = 0.013) is much larger than the ratio of oil to capital in production (O[f]/K = 0.003), so the decline in the marginal revenue of durables caused by oil price shocks exceeds the marginal revenue decline of capital goods.
In order to balance the marginal revenue differences, households immediately rebalance the portfolio, increasing capital goods while reducing durable goods, and this increase in capital sufficiently offsets the decrease in firms' demand for capital investment brought about by high oil prices. Meanwhile, the ARMA(1, 1) form of the oil price shocks means that their propagation lasts two periods in the time dimension; together these effects lead to a two-period increase in capital investment (i.e., greater than zero). From the third period on, capital investment switches to a negative trend, mainly because the high capital stock $K_t$ and low durables stock $D_t$ of the initial periods mean that the households' portfolio rebalancing cannot fundamentally reverse the large gap between the two, producing subsequent negative trends for both types of investment. The rise in capital and labor increases production, but value added $VA_t$ declines because the fall in durables investment and non-durables is greater than the short-term increase in capital investment. Unlike oil price shocks, technology shocks have a direct impact on the production function but do not enter the utility function and do not directly influence durables investment; therefore technology shocks do not trigger the portfolio reallocation by households, leading only to a rise in capital investment. Since our purpose is to study the impact of oil prices, and the impact mechanism of technology on the economy has been extensively studied in a large body of RBC literature, it is not discussed here for reasons of space.

6.
Conclusions

The existing literature on China's RBC focuses on the macroeconomic cycle effects of technology, finance, monetary, international credit, and sunspot shocks, but lacks discussion of energy price shocks represented by oil, and ignores the fact that international oil price volatility in recent years has been one source of the external shocks driving China's economic fluctuations. In particular, over the past two decades international crude oil prices have fluctuated violently, with repeated swings between $20/barrel and $147/barrel, and early in 2011 China's dependence on foreign oil overtook that of the United States, becoming the world's highest. In view of this, we build an oil-economy RBC model with durables consumption and carry out simulation and forecasting using China's economic data from 1997Q1 to 2016Q1, with the following findings:

First, it is important to divide the RBC model into consumer durables and non-durables when studying China's economic fluctuations. According to the simulation results of model DRBC with consumer durables, the core finding is that DRBC predicts consumption volatility and weak pro-cyclicality closer to the actual economy. On one hand, the traditional SRBC containing oil price shocks accounts for only about 40% of consumption volatility, while DRBC raises the explanatory power to more than 80%; on the other hand, SRBC shows a strong pro-cyclicality and cannot predict the weak pro-cyclicality of China's actual economy, whereas DRBC fits this feature.

Second, oil price shocks mainly affect consumption volatility but have little influence on output, investment, and labor, three variables which are largely driven by technology shocks.
Specifically, the K-P ratio of consumption in DRBC-OIL is as high as 90.80%, while that in DRBC-TFP is only 25.86%; the K-P ratios of output, investment, and labor in DRBC-TFP are 113.56%, 109.61%, and 73.88% respectively, while in DRBC-OIL the corresponding ratios are 26.33%, 11.64%, and 25.71%.

Third, the benchmark model (DRBC) reveals that the transmission mechanism of oil prices is determined by intratemporal income effects and intertemporal portfolio-rebalancing effects between durable goods and capital goods. The implication is that the impact of oil price shocks on China's output volatility may not be large; the main impact is on consumption. Expanding domestic demand and boosting consumption are the main theme of China's future economic transition and growth; therefore great importance should be attached to the impact of oil price shocks on consumption.

Acknowledgment of Funding

This work was supported by the Project of the National Social Science Fund of China [grant number 15CJY064].

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Cite this paper

Wang, Y.Q., Sui, X.Y. and Pan, W.J. (2019) Oil Price Shocks, Durables Consumption, and China's Real Business Cycle. Modern Economy, 10, 1310-1333. https://doi.org/10.4236/me.2019.104089

^1 The statistics come from the National Bureau of Statistics of China.

^2 Between 1985 and 2012, the numbers of refrigerators and color television sets owned per 100 urban residents rose from 17.2 and 6.6 to 136.1 and 98.5 units respectively, seven times and 14 times higher; growth in rural households was faster, starting from a lower base. Data source: WIND information.

^3 Take the personal computer as an example: in 2012, urban residents owned 87.0 personal computers per hundred households and rural households 21.4, nine times and 42.7 times the levels of 2000 respectively.
^4 Data source: http://www.eia.gov/dnav/pet/pet_pri_spt_s1_m.htm.

^5 In general, there are three possible nestings of capital, energy (E), and labor in the CES production function, i.e., (K/E)/L, (K/L)/E, and (L/E)/K. Based on an optimization method applied to estimates of China's total capital stock since reform and opening, Lu et al. (2009) propose that (K/E)/L (i.e., the form of (14) and (15) in this paper) is in line with the actual situation of China. In addition, energy affects other macroeconomic variables through the capital goods market, so we choose the (K/E)/L form [32].

^6 The RBC consumption-smoothing theory in fact supports the life-cycle hypothesis of Modigliani & Brumberg (1954) and the permanent income hypothesis of Friedman (1957): the resources available to an individual over the entire lifetime are important determinants of consumption, and when hit by wealth shocks, rational consumers adjust their spending to prevent greater volatility of consumption as a whole.

^7 We searched the main core economics journals of nearly a decade and found that over 80% of the analyses in the literature on China's RBC theory report a correlation between consumption and output above 0.8, i.e., China's consumption shows a strong pro-cyclicality, similar to the US economy. However, we believe this result is debatable: the US is a "low savings and high consumption" economy whose consumption is the largest engine of economic growth, having reached 70% of GDP, whereas China is a "low consumption and high savings" economy whose consumption has long been below 40% of GDP. Over the past three decades, rapid growth has mainly been driven by the "two carriages" of investment and exports; therefore, intuitively, China's consumption should be weakly pro-cyclical, different from the strong pro-cyclicality of the US economy.
The main reason for the discrepancy is probably that those studies are largely based on annual data. In fact, the foundational RBC work "Time to Build and Aggregate Fluctuations" by Kydland and Prescott (1982), as well as Hansen's (1985) "Indivisible Labor and the Business Cycle", which has an important influence on RBC theory, are based on quarterly data for parameter calibration and second-moment estimation (the former's sample period is 1950Q1-1979Q2, the latter's 1955Q3-1984Q1), which to some extent reflects the necessity and reasonableness of this paper's use of quarterly data.

^8 In another study based on a New Keynesian DSGE model, we introduce more exogenous shocks (ten, including government spending, demand preference, labor supply, and investment shocks) in the variance decomposition analysis, and the conclusions about the effect of oil price shocks on the macroeconomic variables are similar to those drawn in the body of this paper.

^9 In fact, China's output is insensitive to oil price fluctuations, very similar to the US economy since 2000. According to Li (2008), during the oil price rises of the new century the US economy showed "tolerance" of continuously rising energy prices and "sustainability" of economic growth under high energy prices [42]. The reason may be that there is a strong complementary relationship between oil and durables consumption, so oil price volatility is dampened by the stability of durables, weakening the transmission of oil shocks in product markets and hence their impact on output volatility.
Next: MINIMUM DISTINGUISHED ONES | Up: Propositional Logic | Previous: MINIMUM 3DNF SATISFIABILITY | Index

• INSTANCE: Disjoint sets X, Z of variables, collection C of disjunctive clauses of at most 3 literals, where a literal is a variable or a negated variable in X ∪ Z.

• SOLUTION: Truth assignment for X and Z that satisfies every clause in C.

• MEASURE: The number of Z variables that are set to true in the assignment.

• Bad News: NPO PB-complete [282].

• Comment: Transformation from MAXIMUM NUMBER OF SATISFIABLE FORMULAS [389]. Not approximable within … [279]. MAXIMUM ONES, the variation in which all variables are distinguished, i.e. X = ∅, is also NPO PB-complete [282], and is not approximable within … [279]. MAXIMUM WEIGHTED SATISFIABILITY, the weighted version, in which every variable is assigned a nonnegative weight, is NPO-complete [47].

Viggo Kann
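The problem definition above is easy to operationalize for tiny instances. The following brute-force Python sketch (our own illustration; the (variable, polarity) clause encoding is an assumption, and exhaustive search is only feasible for a handful of variables) maximizes the number of distinguished (Z) variables set to true over all satisfying assignments:

```python
from itertools import product

def max_distinguished_ones(x_vars, z_vars, clauses):
    """Brute-force search for a truth assignment over X and Z that
    satisfies every clause, maximizing the number of Z variables set
    to true. A clause is a list of (variable, polarity) literals:
    polarity True means the variable itself, False its negation.
    Returns (count, assignment) or None if C is unsatisfiable."""
    variables = list(x_vars) + list(z_vars)
    best = None
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        satisfied = all(
            any(assignment[v] == polarity for v, polarity in clause)
            for clause in clauses
        )
        if satisfied:
            count = sum(assignment[z] for z in z_vars)
            if best is None or count > best[0]:
                best = (count, assignment)
    return best

# (x1 or not z1) and z2 : both z1 and z2 can be true (take x1 = true)
max_distinguished_ones(["x1"], ["z1", "z2"],
                       [[("x1", True), ("z1", False)], [("z2", True)]])
```

The exponential enumeration is for illustration only; the hardness results quoted above concern what polynomial-time algorithms can approximate.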
multi-instance Alternatives - Haskell Algebra | LibHunt

Monthly Downloads: 19
Programming language: Haskell
License: Apache License 2.0

multi-instance alternatives and similar packages

Based on the "Algebra" category. Alternatively, view multi-instance alternatives based on common mentions on social networks and blogs.

Alternative versions of common typeclasses, augmented with a phantom type parameter x. The purpose of this is to deal with the case where a type has more than one candidate instance for the original, unaugmented class.

Example: Integer sum and product

The canonical example of this predicament is selecting the monoid instance for a type which forms a ring (and thus has at least two strong candidates for selection as the monoid), such as Integer. This therefore gives rise to the Sum and Product newtype wrappers, corresponding to the additive and multiplicative monoids respectively. The traditional fold-based summation of a list of integers looks like this:

>>> import Data.Foldable (fold)
>>> import Data.Monoid (Sum (..))
>>> getSum (fold [Sum 2, Sum 3, Sum 5]) :: Integer
10

By replacing fold with multi'fold, whose constraint is MultiMonoid rather than Data.Monoid.Monoid, we can write the same thing without the newtype wrapper:

>>> :set -XFlexibleContexts -XTypeApplications
>>> multi'fold @Addition [2, 3, 5] :: Integer
10
y=filter(A,B,x) how?

I want to do digital filtering of a digitized signal. The Matlab function y=filter(A, B, x) would do exactly what I need. A (denominator) and B (numerator) contain the coefficients of the digital filter H(z). I couldn't find any equivalent function in Scilab. What function in Scilab is equivalent to this filter() function from Matlab?

Best regards,
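For reference, MATLAB documents the signature as y = filter(b, a, x), with the numerator coefficients b first and the denominator coefficients a second; in Scilab, candidates worth checking in the help browser include filter and flts. Whatever the naming, the operation is the standard LTI difference equation, which is easy to reproduce directly. Below is a minimal pure-Python sketch (the helper name iir_filter is our own) that can be used to cross-check results from either environment:

```python
def iir_filter(b, a, x):
    """Direct-form difference equation, as in MATLAB's filter(b, a, x):
    a[0]*y[n] = b[0]*x[n] + ... + b[M]*x[n-M]
                - a[1]*y[n-1] - ... - a[N]*y[n-N]
    """
    y = []
    for n in range(len(x)):
        # feed-forward (numerator) part
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        # feedback (denominator) part, skipping a[0]
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(acc / a[0])
    return y

# Two-tap moving average (FIR): b = [0.5, 0.5], a = [1]
iir_filter([0.5, 0.5], [1.0], [1, 1, 1])   # -> [0.5, 1.0, 1.0]
```

With a nontrivial denominator, e.g. a = [1, -0.5], the same function produces the exponentially decaying impulse response expected of a one-pole IIR filter.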
The Stacks project

Lemma 52.10.5 (tag 0EFW). Let $I \subset \mathfrak a$ be ideals of a Noetherian ring $A$. Let $M$ be a finite $A$-module. Let $s$ and $d$ be integers. If we assume

1. $A$ has a dualizing complex,

2. $\text{cd}(A, I) \leq d$,

3. if $\mathfrak p \not\in V(I)$ and $\mathfrak q \in V(\mathfrak p) \cap V(\mathfrak a)$ then $\text{depth}_{A_\mathfrak p}(M_\mathfrak p) > s$ or $\text{depth}_{A_\mathfrak p}(M_\mathfrak p) + \dim((A/\mathfrak p)_\mathfrak q) > d + s$,

then $A, I, V(\mathfrak a), M, s, d$ are as in Situation 52.10.1.
Prepared for submission to JHEP
INR-TH-2018-014

Phenomenology of GeV-scale Heavy Neutral Leptons

Kyrylo Bondarenko,1 Alexey Boyarsky,1 Dmitry Gorbunov,2,3 Oleg Ruchayskiy4

1 Instituut-Lorentz, Leiden University, Niels Bohrweg 2, 2333 CA Leiden, The Netherlands
2 Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia
3 Moscow Institute of Physics and Technology, Dolgoprudny 141700, Russia
4 Discovery Center, Niels Bohr Institute, Copenhagen University, Blegdamsvej 17, DK-2100 Copenhagen, Denmark

E-mail: [email protected], [email protected], [email protected], [email protected]

Abstract: We review and revise the phenomenology of GeV-scale heavy neutral leptons (HNLs). We extend the previous analyses by including more channels of HNL production and decay and provide a more refined treatment, including QCD corrections for HNLs of masses O(1) GeV. We summarize the relevance of individual production and decay channels for different masses, resolving a few discrepancies in the literature. Our final results are directly suitable for sensitivity studies of particle physics experiments (ranging from proton beam-dump to the LHC) aiming at searches for heavy neutral leptons.

arXiv:1805.08567v3 [hep-ph] 9 Nov 2018
ArXiv ePrint: 1805.08567

Contents

1 Introduction: heavy neutral leptons 1
1.1 General introduction to heavy neutral leptons 2
2 HNL production in proton fixed target experiments 3
2.1 Production from hadrons 3
2.1.1 Production from light unflavored and strange mesons 5
2.1.2
Exclusive OR of two polyshape objects

polyout = xor(poly1,poly2) returns a polyshape object whose regions are the geometric exclusive OR of two polyshape objects. The geometric exclusive OR contains the regions of poly1 and poly2 that do not overlap. poly1 and poly2 must have compatible array sizes.

[polyout,shapeID,vertexID] = xor(poly1,poly2) also returns vertex mapping information from the vertices in polyout to the vertices in poly1 and poly2. The xor function only supports this syntax when poly1 and poly2 are scalar polyshape objects.

The shapeID elements identify whether the corresponding vertex in polyout originated in poly1, poly2, or was created from the exclusive OR. vertexID maps the vertices of polyout to the vertices of poly1, poly2, or the exclusive OR.

___ = xor(___,Name=Value) specifies options using one or more name-value arguments in addition to any of the input argument combinations in previous syntaxes. You can use any of the output argument combinations in previous syntaxes. For example, polyout = xor(poly1,poly2,Simplify=false) returns a polyshape object whose vertices have not been modified regardless of intersections or improper nesting.

Exclusive OR

Create and plot two polygons.

poly1 = polyshape([0 0 1 1],[1 0 0 1]);
poly2 = polyshape([0.75 1.25 1.25 0.75],[0.25 0.25 0.75 0.75]);
hold on

Compute and plot the exclusive OR of poly1 and poly2.

polyout = xor(poly1,poly2)

polyout =
  polyshape with properties:
      Vertices: [13x2 double]
    NumRegions: 2
      NumHoles: 0

Vertex Mapping

Create two polygons, and compute and plot their exclusive OR. Display the vertex coordinates of the exclusive OR and the corresponding vertex mapping information.
poly1 = polyshape([0 0 1 1],[1 0 0 1]);
poly2 = translate(poly1,[0.5 0]);
[polyout,shapeID,vertexID] = xor(poly1,poly2);
[polyout.Vertices shapeID vertexID]

ans = 9×4

         0    1.0000    1.0000    1.0000
    0.5000    1.0000    2.0000    1.0000
    0.5000         0    2.0000    4.0000
         0         0    1.0000    4.0000
       NaN       NaN       NaN       NaN
    1.0000    1.0000    1.0000    2.0000
    1.5000    1.0000    2.0000    2.0000
    1.5000         0    2.0000    3.0000
    1.0000         0    1.0000    3.0000

There are two boundaries in the exclusive OR, separated by a row of NaN values in the Vertices property. For example, consider the first boundary in the array. The first and last vertices of the exclusive OR originated in poly1, since the corresponding values in shapeID are 1. These vertices are the first and fourth vertices in the property poly1.Vertices, respectively, since the corresponding values in vertexID are 1 and 4. Similarly, the second and third vertices of the exclusive OR originated in poly2, and they are also the first and fourth vertices in the property poly2.Vertices, respectively.

Input Arguments

poly1 — First input polyshape
scalar | vector | matrix | multidimensional array

First input polyshape, specified as a scalar, vector, matrix, or multidimensional array.

poly2 — Second input polyshape
scalar | vector | matrix | multidimensional array

Second input polyshape, specified as a scalar, vector, matrix, or multidimensional array.

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Example: polyout = xor(poly1,poly2,Simplify=false)

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.
Example: polyout = xor(poly1,poly2,"Simplify",false)

KeepCollinearPoints — Keep collinear points as vertices
true or 1 | false or 0

Keep collinear points as vertices, specified as one of these numeric or logical values:

• 1 (true) — Keep all collinear points as vertices.

• 0 (false) — Remove collinear points so that the output polyshape contains the fewest vertices necessary to define the boundaries.

If you do not specify the KeepCollinearPoints name-value argument, the function assigns its value according to the values used during creation of the input polyshape objects.

• If each input polyshape kept collinear points as vertices during creation, then the function sets KeepCollinearPoints to true.

• If each input polyshape removed collinear points during creation, then the function sets KeepCollinearPoints to false.

• If the collinear points of the input polyshape objects were treated differently, then the function sets KeepCollinearPoints to false.

Simplify — Modify polygon vertices to simplify output
true or 1 (default) | false or 0

Modify polygon vertices to simplify output, specified as one of these numeric or logical values:

• 1 (true) — Modify polygon vertices to produce a well-defined polygon when the output vertices produce intersections or improper nesting.

• 0 (false) — Produce a polygon that may contain intersecting edges, improper nesting, duplicate points, or degeneracies. Computing with ill-defined polygons can lead to inaccurate or unexpected results.

Output Arguments

polyout — Output polyshape
scalar | vector | matrix | multidimensional array

Output polyshape, returned as a scalar, vector, matrix, or multidimensional array. The two input polyshape arguments must have compatible sizes. For example, if two input polyshape vectors have different lengths M and N, then they must have different orientations (one must be a row vector and one must be a column vector). polyout is then M-by-N or N-by-M depending on the orientation of each input vector.
For more information on compatible array sizes, see Compatible Array Sizes for Basic Operations.

shapeID — Shape ID
column vector

Shape ID, returned as a column vector whose elements each represent the origin of the vertex in the exclusive OR. The value of an element in shapeID is 0 when the corresponding vertex of the output polyshape was created by the exclusive OR. An element is 1 when the corresponding vertex originated from poly1, and 2 when it originated from poly2. The length of shapeID is equal to the number of rows in the Vertices property of the output polyshape. The xor function only supports this output argument if the input polyshape objects are scalar.

Data Types: double

vertexID — Vertex ID
column vector

Vertex ID, returned as a column vector whose elements map the vertices in the output polyshape to the vertices in the polyshape of origin. The elements of vertexID contain the row numbers of the corresponding vertices in the Vertices property of the input polyshape. An element is 0 when the corresponding vertex of the output polyshape was created by the exclusive OR. The length of vertexID is equal to the number of rows in the Vertices property of the output polyshape. The xor function only supports this output argument when the input polyshape objects are scalar.

Data Types: double

Extended Capabilities

C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.

Usage notes and limitations:

• Dynamic memory allocation must be enabled for code generation.

• Name-value pair must be compile time constant.

Version History

Introduced in R2017b

R2024b: Control whether to simplify output

Specify the Simplify name-value argument to control whether the xor function simplifies its output. By default, Simplify is true and xor returns a well-defined polygon by resolving boundary intersections and improper nesting and also by removing duplicate points and degeneracies.
You can specify Simplify=false to gain performance when performing a series of exclusive ORs or subtractions. In this case, you can simplify the output once at the end, either by specifying Simplify=true in the final function call or by using the simplify function on the final output polygon.
Verified Premise For a Moebius GUT Is Quarks Are Indivisible

Author: Dan (Joined: Jan 01, Posts: 448, Location: USA)
Post subject: Verified Premise For a Moebius GUT Is Quarks Are Indivisible
Posted: Sun Nov 21, 2021 11:51 am (title changed 2021-11-21, 2021-11-22, 2021-11-23)

Definitions

GUT = Grand Unification Theory; it must tie together the minimum number of forces needed to describe how energy and matter work from a Planck unit up.

Planck's Constant = the smallest time it takes light to move a fixed distance.

The basic forces are iG (instant Gravity), RF (Repulsive Force), also instant, and Newton's gravity equations, which propagate at c, the speed of light. This follows from the observation that we cannot see smaller than a Quark = they take up space. This observation is confirmed by the Pauli Exclusion Principle, which outlines the interaction of Quarks in fermions, a class of base particles that obey the Exclusion Principle. From these observations we can describe the interaction of the basic forces iG (instant Gravity), RF (Repulsive Force), also instant, and Newton's gravity equations, to show how our Moebius Geometry universe works from the Quark Premise.

h measures the time it takes light to go one h distance. Ergo: ht (time) = hd (distance). We can not see smaller than that. Both are measured by c. A GUT must maintain this identity in describing the interaction of these basic forces, c, and Quarks.

A comment on GUT theory making: first, it starts with anomalies; next you look for Always Do or Don't occur statements about our universe, which means a GUT must be premised on a base identity that holds for all interactions of matter and energy we observe. Hence the Premise above that indivisible Quarks take up space and time. Getting to this point took eliminating thousands of wrong hypotheses.
Einstein called the process recombining until nothing was left but the basic premise. In short, you get to be wrong a lot, but persistence wins the day. The proof of the pudding is the predicted measurements.

GUT Theory

When h time = h distance, then Quark one times Quark two is = or > 2h. This identity puts severe constraints on how iG, G, and RF must work to create an expanding Moebius Geometry Universe. The next identity is when an ionized proton is adjacent to another ionized proton; then iG = G. As they separate in distance, iG (the sum over all protons in a given mass) remains a constant force at all distances, while Newton's G, moving at c, begins to predominate as the sum of the masses increases. The RF goes in a straight line between both sides of the M surface and keeps M and A-M as far apart as possible, which is determined by any given Moebius Edge Width = 1/20,000 of Max Surface Area, determined by this equation: Maximum Surface Area of a Moebius = 2DDPhi, so its Edge Width = 2DDPhi/20,000 (D = diameter of a Moebius around its centerline between edges).

We must verify that iG and RF do explain what we see in our universe, e.g.:

1. Use tidal gauges to show iG arrives on Earth 5 minutes sooner than G moving at c.

2. Show iRF going from both sides of our M surface does explain why dark matter stays about 2.5 light years away from our solar system.

3. Look for stars disappearing between us and the Hyades system, which will show we are the centerline planet.

There are more measurable implications which I will add later.

"I swear to speak honestly and seek the truth when I use the No 1st Cost List public record."
Last edited by Dan on Tue Nov 23, 2021 7:20 am; edited 9 times in total
Maths Week England - teachingmathsscholars.org

Maths Week England

Maths Week England launched in November 2019, following several years of hugely popular Maths Weeks in other nations, such as Maths Week Scotland which runs in October. Maths Week England is supported by many well-known maths organisations such as ATM (Association of Teachers of Mathematics) and The MA (The Mathematical Association).

Aims Of Maths Week England

So, what is Maths Week England and what is it aiming to do? Visit the Maths Week England website for a personal story from the founder, Andrew Jeffrey.

Maths Week England Aims To:

1. Raise the profile of mathematics throughout England

2. Change the conversation about maths in the population at large to be more positive

3. Allow children and adults from all social and economic backgrounds to access and enjoy interesting mathematical experiences

4. Support teachers to plan special low-cost high-impact maths activities at their own schools during Maths Week

5. Encourage Higher-Education centres to invite schoolchildren to visit for maths events, in order to raise aspirations and encourage higher take-up of the study of maths at A-level and university

6. Make maths accessible and enjoyable for people who thought it was an elitist subject for 'clever' people: to 'love and enjoy' is a worthy goal!

It is important to notice that there is no set recipe laid out for achieving these aims. It is down to the creativity, enthusiasm and passion of individual teachers to make Maths Week England come alive.
So, What About Future Maths Weeks?

Why not tell your Head of Department about Maths Week England, and hopefully they might decide to embrace it. Here are a few suggestions of what you could do:

• The school might have an existing Maths Week which they want to relocate to coincide with Maths Week England.
• Take part in competitions - it may work best if your department chooses one competition, promotes it and gives it lesson time during the week.
• Run an external trip or invite a guest speaker.
• Work across the whole school to run cross-curricular maths activities.
• Introduce your pupils to maths which is outside the curriculum - for example, teaching a lesson on binary numbers, fractals or code breaking.
• Run a maths-focused assembly.
• Teach lessons on maths careers or applications of maths.

As a passionate teacher of mathematics, Maths Week England is an exciting chance to get creative, so make sure you share what you have done via the Maths Week England website.

Keep up-to-date with the latest Maths Scholarships news: find us on Twitter, Instagram, LinkedIn, YouTube, and Facebook. Join our mailing list or get in touch here.
Principles of Modern Physics

Structure type: Course unit
Code: IITB4001
Curriculum: IT 2021
Level: Bachelor of Engineering
Year of study: 2 (2022-2023)
Semester: Spring
Extent: 4 credits
Responsible teacher: Mäkinen, Seppo
Language of instruction: English

Implementations in academic year 2022-2023

Impl. | Group(s) | Study period | Teacher(s) | Language | Registration
3003 | IT2021-2, IT2021-2A, IT2021-2B | 9.1.2023 - 29.4.2023 | Lassi Lilleberg | English | 1.12.2022 - 9.1.2023

The student learns the physical models which are used to describe mechanical and electromagnetic oscillators. By combining individual oscillators, the student will be able to study both mechanical and electromagnetic wave motion. The course will give an understanding of the phenomena which led to the beginning of the era of quantum physics. The most important results of quantum physics, such as those related to the photon, the atom and the atomic nucleus, will be studied in the course. The student will understand the energy band structure of crystalline solids, the difference between metals and semiconductors, as well as the most important technical applications of semiconducting materials. The student will learn the basic phenomena of quantum physics and the related technical devices. In addition to the theoretical understanding, the student will learn how to apply her knowledge experimentally in a laboratory environment. The results are analysed, together with thorough error calculations, for some experiments. Each student will write 2 reports on the measurements.

Student workload

108 h, which contains 56 h of scheduled contact studies. The assessment of the student's own learning (1 h) is included in contact lessons.

Prerequisites / Recommended optional studies

Electricity and Magnetism.

Simple harmonic oscillations, damped oscillations, wave motion, electromagnetic oscillations and the associated wave motion, basics of quantum physics, the photon, Bohr's atomic model, applications of atomic physics, atomic nuclei.

Raymond A. Serway, John W.
Jewett: "Physics for Scientists and Engineers with Modern Physics", Thomson Brooks/Cole.

Teaching format / Teaching methods

The relevant theories of physics, together with associated problems and applications, are studied in a course of lectures. In addition, the student will individually solve a number of given homework exercises. Students will also take part in laboratory measurements. The measurements are done in groups of 3 students.

Grade 5: The student knows all the quantities and units discussed on the course, and she understands how they are related to each other. The student is able to independently apply the natural laws discussed on the course while solving complicated problems related to the contents of the course.

Grade 3: The student knows most of the quantities and units discussed on the course, and she understands a significant number of the relationships between them. The student is able to apply the natural laws discussed on the course while solving medium-level problems related to the contents of the course.

Grade 1: The student knows the most important quantities and units discussed on the course, and she understands the most important relationships between them. The student is able to apply the natural laws discussed on the course while solving basic problems related to the contents of the course.

The assessment is based on an examination, homework exercises and laboratory work. The student must solve at least 25 % of the given homework exercises, and she must complete all the associated experiments in the laboratory of physics, as well as write two reports on the measurements.
Smith Number in Java | How to Check Smith number in Java?

Updated April 7, 2023

Definition of Java Smith Number

In Java we can work with many special classes of numbers; the Smith number is one of them. A Smith number is a composite number (in base 10) whose digit sum equals the sum of the digits of all its prime factors, counted with multiplicity and excluding 1. Another name for a Smith number is a joke number. By definition, prime numbers are excluded, even though a prime trivially satisfies the condition (its only prime factor is itself). The Smith number is a basic notion from the number system in mathematics that turns up in programming exercises like this one.

Logic behind Smith number

The logic behind the Smith number is very simple; consider the following numbers and it becomes clear.

Example: Suppose we need to check if the given number is a Smith number or not.

The given number is: 95
First, find the prime factors of 95: 5 and 19.
The digit sum of 95 is 9 + 5 = 14.
The digit sum of the prime factors is 5 + 1 + 9 = 15.
Now compare both results: 14 is not equal to 15, so 95 is not a Smith number.

Let's consider another number.

Given number: 58
First find the prime factors of 58: 2 and 29.
The digit sum of the prime factors is 2 + 2 + 9 = 13.
The digit sum of 58 is 5 + 8 = 13.
Both results are equal, so 58 is a Smith number.

So this is the very simple logic behind the Smith number: we just need to compare the prime-factor digit sum with the number's own digit sum. If the two sums are equal, the given number is a Smith number; otherwise it is not.

How to Check Smith number in Java?
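Before the Java walk-through in the next section, the digit-sum comparison described above can be condensed into a few lines. Here is a rough language-agnostic sketch in Python (the helper names are mine, not from this tutorial):

```python
def digit_sum(n):
    # Sum of the decimal digits of n.
    return sum(int(d) for d in str(n))

def prime_factors(n):
    # Prime factors with multiplicity, e.g. 58 -> [2, 29].
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def is_smith(n):
    factors = prime_factors(n)
    if len(factors) < 2:  # primes (and 1) are excluded by definition
        return False
    return digit_sum(n) == sum(digit_sum(f) for f in factors)

print(is_smith(58))  # True:  5+8 = 13 and 2 + (2+9) = 13
print(is_smith(95))  # False: 9+5 = 14 but 5 + (1+9) = 15
```

Primes are rejected up front because a prime's only prime factor is itself, which would make the two sums trivially equal.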
Now let's see how we can check whether a given number is a Smith number or not in Java. In the points above we already discussed different examples of Smith numbers. The steps to identify a Smith number are as follows:

1. First we need to initialize or read a number from the user.
2. After that we need to find the sum of the digits of the given number.
3. Next we need to find the prime factors of the given number.
4. Now calculate the sum of the digits of the prime factors.
5. Now compare the digit sum of the given number with the digit sum of the prime factors:
   a. If both sums are equal, the given number is a Smith number.
   b. Otherwise, the given number is not a Smith number, because the sums differ.

The above-mentioned steps are what we implement in the Java programs below. Now let's see different examples of Smith numbers in Java for better understanding.

Example #1

    import java.util.*;
    public class Smith_Num {
        // sum of the digits of all prime factors of no (with multiplicity)
        static int F_Sum_P_Fact(int no) {
            int j = 2, add = 0;
            while (no > 1) {
                while (no % j == 0) { add += F_S_Digit(j); no /= j; }
                j++;
            }
            return add;
        }
        // sum of the digits of no
        static int F_S_Digit(int no) {
            int sum = 0;
            while (no > 0) { sum += no % 10; no /= 10; }
            return sum;
        }
        // true if j is prime
        static boolean isPrime(int j) {
            boolean b = j > 1;
            for (int d = 2; d * d <= j; d++) {
                if (j % d == 0) { b = false; break; }
            }
            return b;
        }
        public static void main(String args[]) {
            Scanner s_c = new Scanner(System.in);
            System.out.print("Enter a number: ");
            int no = s_c.nextInt();
            int x = F_S_Digit(no);
            int y = F_Sum_P_Fact(no);
            System.out.println("addition of digits = " + x);
            System.out.println("addition of prime factors' digits = " + y);
            if (x == y && !isPrime(no))
                System.out.print("The user entered number is a Smith number.");
            else
                System.out.print("The user entered number is not a Smith number.");
        }
    }

In the above program, we implement the Smith number check in Java. Here we first created the function to sum the digits of the prime factors; similarly, we also created the function to find the digit sum of the given number, as shown in the above program. After that, we created a Boolean function to check whether the given number is prime, since primes must be excluded.
Then we write the main function. Inside the main function we accept the number from the user, call all the functions we created, and compare both sums. If the sums are equal (and the number is not prime) we print that the given number is a Smith number; otherwise we print that it is not a Smith number.

Example #2

Let's see another example, which prints all Smith numbers below 5000.

    import java.util.*;
    public class Smith_Num_2 {
        // prime factors of no, with multiplicity
        static List<Integer> F_P_Fact(int no) {
            List<Integer> output = new ArrayList<>();
            for (int j = 2; no % j == 0; no = no / j)
                output.add(j);
            for (int j = 3; j * j <= no; j = j + 2) {
                while (no % j == 0) {
                    output.add(j);
                    no = no / j;
                }
            }
            if (no != 1)
                output.add(no);
            return output;
        }
        // digit sum of no
        static int S_Digit(int no) {
            int s = 0;
            while (no > 0) {
                s = s + (no % 10);
                no = no / 10;
            }
            return s;
        }
        public static void main(String args[]) {
            for (int no = 1; no < 5000; no++) {
                List<Integer> Fact = F_P_Fact(no);
                if (Fact.size() > 1) {  // skip primes (and 1)
                    int s = S_Digit(no);
                    for (int fa : Fact)
                        s = s - S_Digit(fa);
                    if (s == 0)
                        System.out.println(no);
                }
            }
        }
    }

In the above example, we find all Smith numbers up to 5000: for every candidate we subtract the digit sums of its prime factors from its own digit sum, and print the number when the difference is zero.

We hope that from this article you have learned about Smith numbers in Java. From the above article, we have learned the basic logic of Smith numbers and seen different examples, and we learned how and when to use Smith numbers in Java.

Recommended Articles

This is a guide to Smith Number in Java. Here we discussed the definition and how to check a Smith number in Java, with example code implementations. You may also have a look at the following articles to learn more –
1. Did NASA invent thunderstorms to cover up the sounds of space battles?
2. If evolution is true why hasn't my dog turned into a werewolf?
3. Why do crocodiles walk so gaily?
4. Why do we scrub down and wash up?
5. Can blind people see their dreams?
6. If electricity comes from electrons, does morality come from morons?
7. Why does Donald Duck wear a towel when he comes out of the shower, when he doesn't usually wear any pants?
8. How come you press harder on a remote control when you know the battery is dead?
9. If an orange is orange, why isn't a lime called a green or a lemon called a yellow?
10. If a cat always lands on its feet, and buttered bread always lands butter side down, what would happen if you tied buttered bread on top of a cat?
11. If the #2 pencil is the most popular, why's it still #2?
12. What color would a smurf turn if you choked it?
13. Where's the egg in an egg roll?
14. Why aren't blueberries blue?
15. Where is the lead in a lead pencil?

un-explored power for hovercraft right here … hmmmmm … but since both sides always have to face up, the thing would spin forever! meh … i'm sure someone will come up with an idea/solution where either the cat or the buttered toast gives first. One will need to have more "face-up-all-the-time-ness" for the thing to hover without spinning and spilling all its contents (known as "passengers") … must vodka more about this

No 8 is so true :D

I have a feeling that this could lead to a breakthrough in anti-gravity matter. …let me gin over it

The two are mutually exclusive events with tenuous correlation due to the differences in animacy and weight. A cat tied to buttered bread would alter the dynamics of its fall and vice-versa.

That's the leap-year booze talking

No. 3 … :D:D

Hey, your probability reasoning is on point :D:D i pictured it in my mind and i just cracked up!
First partial derivatives

For a function of a single variable, $y=f(x)$, changing the independent variable $x$ leads to a corresponding change in the dependent variable $y$. The rate of change of $y$ with respect to $x$ is given by the derivative, written $\frac{df}{dx}$. A similar situation occurs with functions of more than one variable. For clarity we shall concentrate on functions of just two variables.

In the relation $z=f(x,y)$ the independent variables are $x$ and $y$ and the dependent variable is $z$. We have seen in Section 18.1 that as $x$ and $y$ vary the $z$-value traces out a surface. Now both of the variables $x$ and $y$ may change simultaneously, inducing a change in $z$. However, rather than consider this general situation, to begin with we shall hold one of the independent variables fixed. This is equivalent to moving along a curve obtained by intersecting the surface by one of the coordinate planes.

Consider $f(x,y)=x^3+2x^2y+y^2+2x+1$. Suppose we keep $y$ constant and vary $x$; then what is the rate of change of the function $f$?

Suppose we hold $y$ at the value 3. Then

$f(x,3)=x^3+6x^2+9+2x+1=x^3+6x^2+2x+10$

In effect, we now have a function of $x$ only. If we differentiate it with respect to $x$ we obtain the expression

$3x^2+12x+2.$

We say that $f$ has been partially differentiated with respect to $x$. We denote the partial derivative of $f$ with respect to $x$ by $\frac{\partial f}{\partial x}$ (to be read as 'partial dee $f$ by dee $x$').
In this example, when $y=3$:

$\frac{\partial f}{\partial x}=3x^2+12x+2.$

In the same way, if $y$ is held at the value 4, then $f(x,4)=x^3+8x^2+16+2x+1=x^3+8x^2+2x+17$ and so, for this value of $y$,

$\frac{\partial f}{\partial x}=3x^2+16x+2.$

Now if we return to the original formulation and treat $y$ as a constant, then the process of partial differentiation with respect to $x$ gives

$\frac{\partial f}{\partial x}=3x^2+4xy+0+2+0=3x^2+4xy+2.$

Key Point: The Partial Derivative of $f$ with respect to $x$

For a function of two variables $z=f(x,y)$ the partial derivative of $f$ with respect to $x$ is denoted by $\frac{\partial f}{\partial x}$ and is obtained by differentiating $f(x,y)$ with respect to $x$ in the usual way but treating the $y$-variable as if it were a constant.

Alternative notations for $\frac{\partial f}{\partial x}$ are $f_x(x,y)$ or $f_x$ or $\frac{\partial z}{\partial x}$.

Task: Find $\frac{\partial f}{\partial x}$ for

1. $f(x,y)=x^3+x+y^2+y$
2. $f(x,y)=x^2y+xy^3$

Answer:

1. $\frac{\partial f}{\partial x}=3x^2+1+0+0=3x^2+1$
2. $\frac{\partial f}{\partial x}=2x\times y+1\times y^3=2xy+y^3$

For functions of two variables $f(x,y)$ the $x$ and $y$ variables are on the same footing, so what we have done for the $x$-variable we can do for the $y$-variable. We can thus imagine keeping the $x$-variable fixed and determining the rate of change of $f$ as $y$ changes. This rate of change is denoted by $\frac{\partial f}{\partial y}$.
Key Point: The Partial Derivative of $f$ with respect to $y$

For a function of two variables $z=f(x,y)$ the partial derivative of $f$ with respect to $y$ is denoted by $\frac{\partial f}{\partial y}$ and is obtained by differentiating $f(x,y)$ with respect to $y$ in the usual way but treating the $x$-variable as if it were a constant.

Alternative notations for $\frac{\partial f}{\partial y}$ are $f_y(x,y)$ or $f_y$ or $\frac{\partial z}{\partial y}$.

Returning to $f(x,y)=x^3+2x^2y+y^2+2x+1$ once again, we therefore obtain:

$\frac{\partial f}{\partial y}=0+2x^2\times 1+2y+0+0=2x^2+2y.$

Task: Find $\frac{\partial f}{\partial y}$ for

1. $f(x,y)=x^3+x+y^2+y$
2. $f(x,y)=x^2y+xy^3$

Answer:

1. $\frac{\partial f}{\partial y}=0+0+2y+1=2y+1$
2. $\frac{\partial f}{\partial y}=x^2\times 1+x\times 3y^2=x^2+3xy^2$

We can also calculate the partial derivatives of $f$ at a specific point, e.g. $x=1,\ y=-2$.

Example: Find $f_x(1,-2)$ and $f_y(-3,2)$ for $f(x,y)=x^2+y^3+2xy$. [Remember $f_x$ means $\frac{\partial f}{\partial x}$ and $f_y$ means $\frac{\partial f}{\partial y}$.]

Solution: $f_x(x,y)=2x+2y$, so $f_x(1,-2)=2-4=-2$; $f_y(x,y)=3y^2+2x$, so $f_y(-3,2)=12-6=6$.

Task: Given $f(x,y)=3x^2+2y^2+xy^3$ find $f_x(1,-2)$ and $f_y(-1,-1)$.
First find expressions for $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$:

$\frac{\partial f}{\partial x}=6x+y^3, \qquad \frac{\partial f}{\partial y}=4y+3xy^2$

Now calculate $f_x(1,-2)$ and $f_y(-1,-1)$:

$f_x(1,-2)=6\times 1+(-2)^3=-2; \qquad f_y(-1,-1)=4\times(-1)+3\times(-1)\times 1=-7$

As we have seen, a function of two variables $f(x,y)$ has two partial derivatives, $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$. In an exactly analogous way a function of three variables $f(x,y,u)$ has three partial derivatives $\frac{\partial f}{\partial x}$, $\frac{\partial f}{\partial y}$ and $\frac{\partial f}{\partial u}$, and so on for functions of more than three variables. Each partial derivative is obtained in the same way as stated in Key Point 3:
Key Point 3: The Partial Derivatives of $f(x,y,u,v,w,\dots)$

For a function of several variables $z=f(x,y,u,v,w,\dots)$ the partial derivative of $f$ with respect to $v$ (say) is denoted by $\frac{\partial f}{\partial v}$ and is obtained by differentiating $f(x,y,u,v,w,\dots)$ with respect to $v$ in the usual way but treating all the other variables as if they were constants.

Alternative notations for $\frac{\partial f}{\partial v}$ when $z=f(x,y,u,v,w,\dots)$ are $f_v(x,y,u,v,w,\dots)$ and $f_v$ and $\frac{\partial z}{\partial v}$.

Task: Find $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial u}$ for $f(x,y,u,v)=x^2+xy^2+y^2u^3-7uv^4$.

Answer: $\frac{\partial f}{\partial x}=2x+y^2+0+0=2x+y^2$; $\quad \frac{\partial f}{\partial u}=0+0+y^2\times 3u^2-7v^4=3y^2u^2-7v^4$.

Engineering example: The pressure, $P$, for one mole of an ideal gas is related to its absolute temperature, $T$, and specific volume, $v$, by the equation

$Pv=RT$

where $R$ is the gas constant. Obtain simple expressions for

1. the coefficient of thermal expansion, $\alpha$, defined by: $\alpha=\frac{1}{v}\left(\frac{\partial v}{\partial T}\right)_P$

2. the isothermal compressibility, $\kappa_T$, defined by: $\kappa_T=-\frac{1}{v}\left(\frac{\partial v}{\partial P}\right)_T$

Solution:

1. $v=\frac{RT}{P} \;\Rightarrow\; \left(\frac{\partial v}{\partial T}\right)_P=\frac{R}{P}$, so $\alpha=\frac{1}{v}\left(\frac{\partial v}{\partial T}\right)_P=\frac{R}{Pv}=\frac{1}{T}$

2. $v=\frac{RT}{P} \;\Rightarrow\; \left(\frac{\partial v}{\partial P}\right)_T=-\frac{RT}{P^2}$, so $\kappa_T=-\frac{1}{v}\left(\frac{\partial v}{\partial P}\right)_T=\frac{RT}{vP^2}=\frac{1}{P}$

Exercises

1. For the following functions find $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$:

 (a) $f(x,y)=x+2y+3$
 (b) $f(x,y)=x^2+y^2$
 (c) $f(x,y)=x^3+xy+y^3$
 (d) $f(x,y)=x^4+xy^3+2x^3y^2$
 (e) $f(x,y,z)=xy+yz$

2. For the functions of Exercise 1 (a) to (d) find $f_x(1,1)$, $f_x(-1,-1)$, $f_y(1,2)$, $f_y(2,1)$.

Answers

1.
 (a) $\frac{\partial f}{\partial x}=1,\quad \frac{\partial f}{\partial y}=2$
 (b) $\frac{\partial f}{\partial x}=2x,\quad \frac{\partial f}{\partial y}=2y$
 (c) $\frac{\partial f}{\partial x}=3x^2+y,\quad \frac{\partial f}{\partial y}=x+3y^2$
 (d) $\frac{\partial f}{\partial x}=4x^3+y^3+6x^2y^2,\quad \frac{\partial f}{\partial y}=3xy^2+4x^3y$
 (e) $\frac{\partial f}{\partial x}=y,\quad \frac{\partial f}{\partial y}=x+z$

2.

        $f_x(1,1)$   $f_x(-1,-1)$   $f_y(1,2)$   $f_y(2,1)$
 (a)        1             1             2            2
 (b)        2            -2             4            2
 (c)        4             2            13            5
 (d)       11             1            20           38
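The answers above can also be checked numerically: holding one variable fixed and differencing in the other is exactly the idea behind a partial derivative. A minimal Python sketch using central differences (the step size `h` and the function names are my choices, not part of this Workbook):

```python
def partial_x(f, x, y, h=1e-6):
    # Hold y constant and vary x: approximates df/dx at (x, y).
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def partial_y(f, x, y, h=1e-6):
    # Hold x constant and vary y: approximates df/dy at (x, y).
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

# Exercise 1(c): f(x,y) = x^3 + xy + y^3, with f_x = 3x^2 + y and f_y = x + 3y^2.
f = lambda x, y: x**3 + x * y + y**3

print(round(partial_x(f, 1, 1), 6))   # f_x(1,1) -> 4.0
print(round(partial_y(f, 2, 1), 6))   # f_y(2,1) -> 5.0
```

The rounded values reproduce the tabulated answers for row (c).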
What is meant by matrix calculus? The term "matrix calculations" refers to mathematical operations applied to matrices. Matrices are rectangular arrays of numbers, symbols, or expressions arranged in rows and columns. Matrix calculations are a fundamental part of linear algebra and are used in many fields such as engineering, physics, computer science, and economics. Typical software functions in the area of "matrix calculations": 1. Matrix Creation: Definition and creation of matrices of various sizes and dimensions. 2. Basic Operations: Performing basic operations such as addition, subtraction, and multiplication of matrices. 3. Transposition: Calculating the transpose of a matrix, where rows and columns are swapped. 4. Determinant Calculation: Computing the determinant of a matrix, an important value in linear algebra. 5. Inversion: Computing the inverse of a matrix, if it exists. 6. Eigenvalue and Eigenvector Calculation: Determining eigenvalues and eigenvectors of a matrix, which are significant in many applications like physics and statistics. 7. Diagonalization: Converting a matrix into a diagonal form to simplify calculations. 8. Solution Methods: Implementing algorithms to solve linear systems of equations represented in matrix form. 9. Visualization: Graphical representation of matrices and their transformations. 10. Saving and Loading: Storing matrices and their calculation results and loading this data from files. Examples of "matrix calculations": 1. Calculating the productivity of a company by analyzing input-output matrices. 2. Solving network systems in electronics using admittance or impedance matrices. 3. Determining transition probabilities in Markov chains. 4. Processing image data in computer vision by applying convolution operations on matrices. 5. Modeling and simulating physical systems using state equations in control engineering. 6. Analyzing financial data through covariance and correlation matrices.
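Several of the basic functions listed above (transposition, multiplication, determinant calculation, inversion) reduce to a few lines of code for small matrices. A rough pure-Python sketch for illustration only (real tools such as NumPy provide optimized versions of these operations; the function names here are mine, not from any particular product):

```python
def transpose(A):
    # Swap rows and columns.
    return [list(row) for row in zip(*A)]

def matmul(A, B):
    # Row-by-column product of two matrices.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def det2(A):
    # Determinant of a 2x2 matrix.
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv2(A):
    # Inverse of a 2x2 matrix, when the determinant is nonzero.
    d = det2(A)
    if d == 0:
        raise ValueError("matrix is singular")
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

A = [[4, 7], [2, 6]]
print(det2(A))        # 10
print(transpose(A))   # [[4, 2], [7, 6]]
P = matmul(A, inv2(A))  # P is (up to rounding) the 2x2 identity matrix
```

Eigenvalue computation, diagonalization, and general linear-system solvers build on exactly these primitives but need more machinery than a sketch like this.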
Matrix Equation -- from Wolfram MathWorld

Nonhomogeneous matrix equations of the form

$A\mathbf{x}=\mathbf{b}$

can be solved by taking the matrix inverse to obtain

$\mathbf{x}=A^{-1}\mathbf{b}.$

This equation will have a nontrivial solution iff the determinant $\det(A)\neq 0$. In general, more numerically stable techniques of solving the equation include Gaussian elimination, LU decomposition, or the square root method.

For a homogeneous $n\times n$ matrix equation

$A\mathbf{x}=\mathbf{0}$

to be solved for the $x_i$, consider the determinant $\det(A)$. Now multiply the determinant by $x_1$, which is equivalent to multiplying the first column by $x_1$. The value of the determinant is unchanged if multiples of columns are added to other columns. So add $x_2$ times column 2, ..., $x_n$ times column $n$ to the first column. But from the original matrix equation, each of the entries in the resulting first column is zero, since $\sum_j a_{ij}x_j=0$. Therefore, if there is a solution with $x_1\neq 0$, the determinant is zero. This is also true for $x_2,\dots,x_n$, so a homogeneous system has a nontrivial solution only if the determinant is 0. This approach is the basis for Cramer's rule.

Given a numerical solution to a matrix equation, the solution can be iteratively improved using the following technique. Assume that the numerically obtained solution to $A\mathbf{x}=\mathbf{b}$ is

$\mathbf{x}_1=\mathbf{x}+\delta\mathbf{x}_1,$

where $\delta\mathbf{x}_1$ is an error term. Applying the matrix to $\mathbf{x}_1$ gives

$A\mathbf{x}_1=\mathbf{b}_1, \qquad (10)$

so that

$\delta\mathbf{x}_1=A^{-1}(\mathbf{b}_1-\mathbf{b}) \qquad (11)$

and

$\mathbf{x}=\mathbf{x}_1-\delta\mathbf{x}_1. \qquad (12)$

Combining (11) and (12) then gives

$\mathbf{x}=\mathbf{x}_1-A^{-1}(\mathbf{b}_1-\mathbf{b}).$
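As a concrete illustration of the determinant criterion and of Cramer's rule in the smallest nontrivial case, here is a short Python sketch for 2×2 systems (the function name is mine; for larger systems the more stable techniques mentioned above, such as Gaussian elimination or LU decomposition, are preferred):

```python
def solve_cramer_2x2(A, b):
    # Cramer's rule: x_i = det(A_i) / det(A), where A_i has column i replaced by b.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    if det == 0:
        raise ValueError("determinant is zero: no unique solution")
    x1 = (b[0] * A[1][1] - A[0][1] * b[1]) / det
    x2 = (A[0][0] * b[1] - b[0] * A[1][0]) / det
    return [x1, x2]

# 2x + y = 5 and x + 3y = 10 have the solution x = 1, y = 3.
print(solve_cramer_2x2([[2, 1], [1, 3]], [5, 10]))  # [1.0, 3.0]
```

The zero-determinant guard is exactly the homogeneous-case argument above: when det(A) = 0 the system has no unique solution.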
CSCE 790 Section 1: Advanced Topics in Probabilistic Graphical Models

Instructor: Marco Valtorta
Office: INNOVA 2269, 777-4641
E-mail: mgv@cse.sc.edu
Office Hours: M 1500-1800. Please check by phone or email. Others by appointment.
Meeting Time and Place: TTH 1140-1255, SWGN 2A15.

Bulletin Description: Topics in Information Technology. Reading and research on selected topics in information technology. Course content varies and will be announced in the schedule of courses by suffix and title. May be repeated for credit as topics vary.

Prerequisites: Graduate standing in Computer Science, Mathematics, Statistics, or permission of the instructor. Students from these three disciplines will be asked to contribute according to their background and interests. For example, a computer science student may be asked to explain the implementation of a Bayesian network learning algorithm in an R package; a statistics student may be asked to present a paper on the parametric variant of an inference algorithm; and a mathematics student may be asked to present a paper that describes the algebraic structure of a local computation framework.

Course Learning Outcomes. The overall goal of the course is to prepare students to carry out research in probabilistic graphical models. Specifically, by the end of this course, the student will be able to:

• Use the Hugin Bayesian network and influence diagram tool to construct Bayesian networks.
• Compare and contrast key algorithms for belief propagation in Bayesian networks based on local computation and relate them to a common algebraic foundation.
• Apply and explain variable elimination and several versions of the junction tree method at a detailed algorithmic level.
• Explain and justify the interventional model of causality.
• Apply Pearl's do-calculus of intervention, prove its properties, and describe its limitations in the parametric case.
• Compare and contrast the Markov properties of advanced probabilistic graphical models, such as chain graphs and ancestral graphs, and explain the uses of such models, with special focus on causal models.
• Explain the properties and limitations of structural learning algorithms for Bayesian networks, under the assumption of faithfulness (PC algorithm and its variants) and embedded faithfulness (FCI algorithm and its variants).
• Use a few key R packages for inference, learning, and causal modeling.

Reasonable accommodations are available for students with a documented disability. If you have a disability and may need accommodations to fully participate in this class, contact the Office of Student Disability Services: 777-6142, TDD 777-6744, email sasds@mailbox.sc.edu, or stop by LeConte College Room 112A. All accommodations must be approved through the Office of Student Disability Services.

Lecture Notes

Videos from the spring 2009 version of CSCE 582, which may be useful to catch up on background notions.

Lecture Notes from the spring 2009 version of CSCE 582, which may be useful for background notions. For example:

1. The pdf slides for Ch.1 [J07] have a good presentation of pointwise table operations in the context of probability computation.
2. The pdf slides for Ch.2 [J07] define evidence as a vector of zeros and ones.
3. The transcript of notes of 2009-01-30 has a proof of the chain rule for Bayesian networks.

Notes on Non-Serial Dynamic Programming (NSDP) used on 2019-02-19. This set of slides includes a presentation of variable elimination using relational algebra (slides 38-41), as used on 2019-02-19.
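As a small taste of the course material, the "pointwise table operations" and variable elimination mentioned in the notes above can be sketched in a few lines of Python (factors represented as dictionaries; the representation and the numbers are mine, purely illustrative):

```python
from itertools import product

def factor_product(f1, vars1, f2, vars2, domains):
    # Pointwise product of two factors; a factor is a dict mapping
    # assignment tuples (ordered by its variable list) to numbers.
    out_vars = vars1 + [v for v in vars2 if v not in vars1]
    out = {}
    for assignment in product(*(domains[v] for v in out_vars)):
        a = dict(zip(out_vars, assignment))
        out[assignment] = f1[tuple(a[v] for v in vars1)] * f2[tuple(a[v] for v in vars2)]
    return out, out_vars

def sum_out(f, vars_, var):
    # Eliminate `var` from factor f by summing over its values.
    keep = [v for v in vars_ if v != var]
    out = {}
    for assignment, p in f.items():
        a = dict(zip(vars_, assignment))
        key = tuple(a[v] for v in keep)
        out[key] = out.get(key, 0.0) + p
    return out, keep

# Tiny network A -> B: P(A) and P(B|A) as tables over {0, 1}.
domains = {"A": [0, 1], "B": [0, 1]}
pA = {(0,): 0.6, (1,): 0.4}
pBgA = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8}

joint, jv = factor_product(pA, ["A"], pBgA, ["A", "B"], domains)
pB, _ = sum_out(joint, jv, "A")
print(round(pB[(0,)], 3), round(pB[(1,)], 3))  # 0.62 0.38
```

Multiplying factors and summing out variables in a good order is the whole of variable elimination; the junction tree methods studied in the course organize the same two operations around a tree of cliques.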
Write In Simplest Radical Form

A radical expression has three parts: a radical symbol, a radicand, and an index. A radical is in simplest form when the radicand contains no square factors (more generally, no factors whose power matches the index), when the radicand is not a fraction, and when no radical remains in a denominator.

To simplify a radical, factor the number inside the radical and pull out any perfect-square factors:

1. Find the factors of the number under the radical.
2. Write the factors that appear to the power 2 outside the radical.
3. Leave the remaining factors under the radical.

For example, to simplify √72, find the largest square factor you can before simplifying: 72 = 36 × 2, so √72 = 6√2. Likewise, 18 = 9 · 2, so √18 = 3√2, and 486 = 3² × 3² × 3 × 2, so √486 = 9√6. By contrast, 33 cannot be simplified: its factors are 3 · 11, neither of which is a square number. The same factoring step also lets you combine like radicals, for example simplifying √(2x²) + 4√8 + 3√(2x²) + √8 by first writing each term in simplest form.

Two properties drive the process. The product property, √a × √b = √(a × b), says that to multiply two radicals you multiply the numbers inside the radicals (the radicands) and leave the radicals unchanged. The quotient property handles fractions: simplify the fraction in the radicand if possible, use the quotient property to rewrite the radical as the quotient of two radicals, simplify the radicals in the numerator and the denominator, and rationalize the denominator so that no radical is left there. Quotients of the form (a × ⁿ√b) / (c × ᵐ√d) are treated similarly.

For higher indices, look for factors of the radicand with powers that match the index. The nth powers of 2, a, 3², and b³ are, respectively, 2ⁿ, aⁿ, 3²ⁿ, and b³ⁿ; the simple case where n = 2 reduces to pulling out square factors as above. An online simplification calculator will show the work by separating out multiples of the radicand that have integer roots.
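The factor-extraction steps above can be sketched in Python (the function name `simplify_sqrt` is my own, not from the article):

```python
def simplify_sqrt(n):
    """Write sqrt(n) in simplest radical form.

    Returns (coeff, radicand) such that sqrt(n) = coeff * sqrt(radicand)
    and radicand has no square factors left.
    """
    coeff, radicand = 1, n
    f = 2
    while f * f <= radicand:
        # Pull each square factor f*f out of the radical as a factor f
        while radicand % (f * f) == 0:
            radicand //= f * f
            coeff *= f
        f += 1
    return coeff, radicand

print(simplify_sqrt(72))   # (6, 2)  -> sqrt(72) = 6*sqrt(2)
print(simplify_sqrt(486))  # (9, 6)  -> sqrt(486) = 9*sqrt(6)
print(simplify_sqrt(33))   # (1, 33) -> already in simplest form
```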
Python Data Science Handbook Kernel: Python 3 The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!

In Depth: k-Means Clustering In the previous few sections, we have explored one category of unsupervised machine learning models: dimensionality reduction. Here we will move on to another class of unsupervised machine learning models: clustering algorithms. Clustering algorithms seek to learn, from the properties of the data, an optimal division or discrete labeling of groups of points. Many clustering algorithms are available in Scikit-Learn and elsewhere, but perhaps the simplest to understand is an algorithm known as k-means clustering, which is implemented in sklearn.cluster.KMeans. We begin with the standard imports: The k-means algorithm searches for a pre-determined number of clusters within an unlabeled multidimensional dataset. It accomplishes this using a simple conception of what the optimal clustering looks like: • The "cluster center" is the arithmetic mean of all the points belonging to the cluster. • Each point is closer to its own cluster center than to other cluster centers. Those two assumptions are the basis of the k-means model. We will soon dive into exactly how the algorithm reaches this solution, but for now let's take a look at a simple dataset and see the k-means result. First, let's generate a two-dimensional dataset containing four distinct blobs. To emphasize that this is an unsupervised algorithm, we will leave the labels out of the visualization. By eye, it is relatively easy to pick out the four clusters. The k-means algorithm does this automatically, and in Scikit-Learn uses the typical estimator API: Let's visualize the results by plotting the data colored by these labels.
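The code cells for this example did not survive the page extraction; a minimal reconstruction of them, with parameter values that are my own assumptions consistent with the description (four blobs, unsupervised fit), might look like:

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

# Generate a two-dimensional dataset containing four distinct blobs
# (plotting calls omitted here; the notebook scatter-plots X at this point)
X, y_true = make_blobs(n_samples=300, centers=4,
                       cluster_std=0.60, random_state=0)

# k-means via the typical Scikit-Learn estimator API
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
kmeans.fit(X)
y_kmeans = kmeans.predict(X)
centers = kmeans.cluster_centers_  # one (x, y) center per cluster
```

The learned labels `y_kmeans` and the array `centers` are what the notebook then plots, coloring the points by cluster and overlaying the centers.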
We will also plot the cluster centers as determined by the k-means estimator: The good news is that the k-means algorithm (at least in this simple case) assigns the points to clusters very similarly to how we might assign them by eye. But you might wonder how this algorithm finds these clusters so quickly! After all, the number of possible combinations of cluster assignments is exponential in the number of data points—an exhaustive search would be very, very costly. Fortunately for us, such an exhaustive search is not necessary: instead, the typical approach to k-means involves an intuitive iterative approach known as expectation–maximization. k-Means Algorithm: Expectation–Maximization Expectation–maximization (E–M) is a powerful algorithm that comes up in a variety of contexts within data science. k-means is a particularly simple and easy-to-understand application of the algorithm, and we will walk through it briefly here. In short, the expectation–maximization approach here consists of the following procedure: 1. Guess some cluster centers 2. Repeat until converged 1. E-Step: assign points to the nearest cluster center 2. M-Step: set the cluster centers to the mean Here the "E-step" or "Expectation step" is so-named because it involves updating our expectation of which cluster each point belongs to. The "M-step" or "Maximization step" is so-named because it involves maximizing some fitness function that defines the location of the cluster centers—in this case, that maximization is accomplished by taking a simple mean of the data in each cluster. The literature about this algorithm is vast, but can be summarized as follows: under typical circumstances, each repetition of the E-step and M-step will always result in a better estimate of the cluster characteristics. We can visualize the algorithm as shown in the following figure. For the particular initialization shown here, the clusters converge in just three iterations. 
For an interactive version of this figure, refer to the code in the Appendix. The k-Means algorithm is simple enough that we can write it in a few lines of code. The following is a very basic implementation: Most well-tested implementations will do a bit more than this under the hood, but the preceding function gives the gist of the expectation–maximization approach. Caveats of expectation–maximization There are a few issues to be aware of when using the expectation–maximization algorithm. The globally optimal result may not be achieved First, although the E–M procedure is guaranteed to improve the result in each step, there is no assurance that it will lead to the global best solution. For example, if we use a different random seed in our simple procedure, the particular starting guesses lead to poor results: Here the E–M approach has converged, but has not converged to a globally optimal configuration. For this reason, it is common for the algorithm to be run for multiple starting guesses, as indeed Scikit-Learn does by default (set by the n_init parameter, which defaults to 10). The number of clusters must be selected beforehand Another common challenge with k-means is that you must tell it how many clusters you expect: it cannot learn the number of clusters from the data. For example, if we ask the algorithm to identify six clusters, it will happily proceed and find the best six clusters: Whether the result is meaningful is a question that is difficult to answer definitively; one approach that is rather intuitive, but that we won't discuss further here, is called silhouette analysis. 
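The "very basic implementation" mentioned earlier in this section lost its code cell in extraction; following the E-step/M-step recipe, it can be reconstructed roughly as:

```python
import numpy as np
from sklearn.metrics import pairwise_distances_argmin

def find_clusters(X, n_clusters, rseed=2):
    # 1. Randomly choose initial cluster centers from the data points
    rng = np.random.RandomState(rseed)
    i = rng.permutation(X.shape[0])[:n_clusters]
    centers = X[i]

    while True:
        # 2a. E-step: assign each point to the nearest center
        labels = pairwise_distances_argmin(X, centers)
        # 2b. M-step: move each center to the mean of its assigned points
        new_centers = np.array([X[labels == k].mean(0)
                                for k in range(n_clusters)])
        # 2c. Converged once the centers stop moving
        if np.all(centers == new_centers):
            break
        centers = new_centers
    return centers, labels
```

Note that this sketch does none of the safeguards a production implementation needs (multiple restarts, empty-cluster handling), which is exactly the point of the caveats discussed above.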
Alternatively, you might use a more complicated clustering algorithm which has a better quantitative measure of the fitness per number of clusters (e.g., Gaussian mixture models; see In Depth: Gaussian Mixture Models) or which can choose a suitable number of clusters (e.g., DBSCAN, mean-shift, or affinity propagation, all available in the sklearn.cluster submodule) k-means is limited to linear cluster boundaries The fundamental model assumptions of k-means (points will be closer to their own cluster center than to others) means that the algorithm will often be ineffective if the clusters have complicated In particular, the boundaries between k-means clusters will always be linear, which means that it will fail for more complicated boundaries. Consider the following data, along with the cluster labels found by the typical k-means approach: This situation is reminiscent of the discussion in In-Depth: Support Vector Machines, where we used a kernel transformation to project the data into a higher dimension where a linear separation is possible. We might imagine using the same trick to allow k-means to discover non-linear boundaries. One version of this kernelized k-means is implemented in Scikit-Learn within the SpectralClustering estimator. It uses the graph of nearest neighbors to compute a higher-dimensional representation of the data, and then assigns labels using a k-means algorithm: We see that with this kernel transform approach, the kernelized k-means is able to find the more complicated nonlinear boundaries between clusters. k-means can be slow for large numbers of samples Because each iteration of k-means must access every point in the dataset, the algorithm can be relatively slow as the number of samples grows. You might wonder if this requirement to use all data at each iteration can be relaxed; for example, you might just use a subset of the data to update the cluster centers at each step. 
This is the idea behind batch-based k-means algorithms, one form of which is implemented in sklearn.cluster.MiniBatchKMeans. The interface for this is the same as for standard KMeans; we will see an example of its use as we continue our discussion. Being careful about these limitations of the algorithm, we can use k-means to our advantage in a wide variety of situations. We'll now take a look at a couple examples. Example 1: k-means on digits To start, let's take a look at applying k-means on the same simple digits data that we saw in In-Depth: Decision Trees and Random Forests and In Depth: Principal Component Analysis. Here we will attempt to use k-means to try to identify similar digits without using the original label information; this might be similar to a first step in extracting meaning from a new dataset about which you don't have any a priori label information. We will start by loading the digits and then finding the KMeans clusters. Recall that the digits consist of 1,797 samples with 64 features, where each of the 64 features is the brightness of one pixel in an 8×8 image: The clustering can be performed as we did before: The result is 10 clusters in 64 dimensions. Notice that the cluster centers themselves are 64-dimensional points, and can themselves be interpreted as the "typical" digit within the cluster. Let's see what these cluster centers look like: We see that even without the labels, KMeans is able to find clusters whose centers are recognizable digits, with perhaps the exception of 1 and 8. Because k-means knows nothing about the identity of the cluster, the 0–9 labels may be permuted. We can fix this by matching each learned cluster label with the true labels found in them: Now we can check how accurate our unsupervised clustering was in finding similar digits within the data: With just a simple k-means algorithm, we discovered the correct grouping for 80% of the input digits! 
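The digits workflow just described (cluster, then match each learned cluster to the most common true digit inside it) can be reconstructed as the following sketch; the ~80% figure comes from the text, while the exact estimator settings are assumptions:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score

digits = load_digits()          # 1,797 samples, 64 pixel features
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
clusters = kmeans.fit_predict(digits.data)

# Match each learned cluster with the most common true digit it contains
labels = np.zeros_like(clusters)
for i in range(10):
    mask = (clusters == i)
    labels[mask] = np.bincount(digits.target[mask]).argmax()

acc = accuracy_score(digits.target, labels)  # roughly 0.8 per the text
```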
Let's check the confusion matrix for this: As we might expect from the cluster centers we visualized before, the main point of confusion is between the eights and ones. But this still shows that using k-means, we can essentially build a digit classifier without reference to any known labels! Just for fun, let's try to push this even farther. We can use the t-distributed stochastic neighbor embedding (t-SNE) algorithm (mentioned in In-Depth: Manifold Learning) to pre-process the data before performing k-means. t-SNE is a nonlinear embedding algorithm that is particularly adept at preserving points within clusters. Let's see how it does: That's nearly 92% classification accuracy without using the labels. This is the power of unsupervised learning when used carefully: it can extract information from the dataset that it might be difficult to do by hand or by eye. Example 2: k-means for color compression One interesting application of clustering is in color compression within images. For example, imagine you have an image with millions of colors. In most images, a large number of the colors will be unused, and many of the pixels in the image will have similar or even identical colors. For example, consider the image shown in the following figure, which is from the Scikit-Learn datasets module (for this to work, you'll have to have the pillow Python package installed).
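The figure referred to here is one of Scikit-Learn's bundled sample photos; loading it and flattening the pixels into an (n_pixels, 3) RGB array, as the color-compression example requires, looks roughly like this (the choice of "china.jpg" is the standard sample image):

```python
from sklearn.datasets import load_sample_image

# Requires the pillow package, as noted above
china = load_sample_image("china.jpg")
print(china.shape)          # (427, 640, 3): height, width, RGB channels

# Rescale to [0, 1] and flatten to one row per pixel for clustering
data = china / 255.0
data = data.reshape(-1, 3)
print(data.shape)           # (273280, 3)
```

Each row of `data` is now a single pixel's color, ready to be clustered into a small palette with (mini-batch) k-means.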
Factors of 25 - BrainiacsHQ In mathematics, a factor is a number that can divide another number without leaving a remainder. Understanding factors is crucial, as it forms the basis for many concepts in algebra, geometry, and number theory. Today, let's delve into the factors of the number 25 and uncover how to find them using various methods. Whether we're multiplying, dividing, or breaking the number down into prime factors, it's all part of the fun world of numbers. What Are Factors? Factors of a number are the numbers that divide that number exactly without leaving a remainder. For example, when we talk about the factors of 25, we mean the numbers that can be multiplied together to give 25. Factors are always whole numbers, and they include 1 and the number itself. Factors Of 25 Here are the factors of 25: 1, 5, and 25. How To Calculate The Factors Of 25 Calculating the factors of 25 is simple. We look for numbers that we can multiply together to get 25 and check if they divide 25 exactly. Here's an easy way to find them: 1. Start with 1 and check if it divides 25 without leaving a remainder. 2. Move to the next number, 2, and repeat the process. 3. Continue this process until we reach the number 25, checking each number. Multiplication Method The multiplication method involves finding pairs of numbers that multiply to give the original number – in this case, 25. Here's how: 1. Start with the smallest possible pair (1 and 25). 2. Check if multiplying them gives 25. 3. Add the pair to our list and move on to the next pair of numbers. 4. Continue until we've tested all possible pairs. For 25, we have: 1. 1 x 25 = 25 2. 5 x 5 = 25 So, the pairs of numbers are (1, 25) and (5, 5), leading us to the factors: 1, 5, and 25. Division Method The division method involves dividing 25 by different whole numbers to see which ones result in a whole number quotient. Here's how: 1. Start with 1 and divide 25 by 1. The quotient is 25. 2. Move to 2 and divide 25.
The result is not a whole number. 3. Continue this process with 3 and 4. 4. When we get to 5, 25 / 5 = 5, which is a whole number. 5. Continue until we reach 25. Through the division method, we again find that the factors of 25 are 1, 5, and 25. What Is Prime Factorization? Prime Factorization is the process of breaking down a number into its prime numbers, which multiply together to give the original number. For instance, the prime factorization of 25 involves finding prime numbers that can multiply to create 25. Prime Factors Of 25 Here are the prime factors of 25: 5. The prime factorization of 25 is 5 x 5. Understanding the factors of 25, how to calculate them using multiplication and division methods, and breaking the number into its prime factors gives us a solid foundation for more advanced mathematical concepts. The skills we acquire here are not only useful in academic settings but also in real-life problem-solving. What Are Factors? Factors are numbers that can be multiplied together to get another number. How Many Factors Does 25 Have? 25 has three factors: 1, 5, and 25. What Are The Prime Factors Of 25? The prime factor of 25 is 5. What Is Prime Factorization? Prime Factorization is breaking down a number into its prime factors. Can 25 Be Divided By 3? No, 25 cannot be divided by 3 without leaving a remainder.
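The multiplication and division methods above both amount to trial division up to √n; here is a small Python sketch of the same idea (function names are my own):

```python
def factors(n):
    """All positive factors of n, found via factor pairs up to sqrt(n)."""
    result = set()
    i = 1
    while i * i <= n:
        if n % i == 0:
            result.add(i)        # smaller member of the pair
            result.add(n // i)   # its partner
        i += 1
    return sorted(result)

def prime_factorization(n):
    """Prime factors of n with multiplicity, by repeated division."""
    primes, d = [], 2
    while d * d <= n:
        while n % d == 0:
            primes.append(d)
            n //= d
        d += 1
    if n > 1:
        primes.append(n)
    return primes

print(factors(25))              # [1, 5, 25]
print(prime_factorization(25))  # [5, 5]
```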
Answers created by Max G. • Back to user's profile • There are (n+1) white and (n+1) black balls each set numbered 1 to (n+1). The number of ways in which the balls can be arranged in a row so they the adjacent balls are of different colours is ? • The first four terms of an arithmetic sequence are 21 17 13 9 Find in terms of n, an expression for the nth term of this sequence? • How do you simplify #x^6 / x^(2/3)#? • Two opposite sides of a parallelogram have lengths of #3 #. If one corner of the parallelogram has an angle of #pi/12 # and the parallelogram's area is #14 #, how long are the other two sides? • (log₃13) (log₁₃x) (logₓy) = 2 Solve for y. ? • How do you solve the polynomial inequality and state the answer in interval notation given #x^6+x^3>=6#? • why is the expression x1/2 is undefined when x is less than 0? • Let x be a binomial random variable with n=10 and p=0.2 In how many possible outcomes are there exactly 8 successes? • Differentiate the function. Y= √x(x-4) ? • How do you implicitly differentiate #9=e^(y^2-y)/e^x+y-xy#? • How do I solve secx - 2tanx = 0 over the interval (0,2pi]? • If #3x^2-4x+1# has zeros #alpha# and #beta#, then what quadratic has zeros #alpha^2/beta# and #beta^2/alpha# ? • Examine whether lim_"x→0"e^(1/x)/e^(1/x)+1 exits or not? • Use the given graph of f to find a number delta...? • How do you find the value of #cot 240# using the double angle or half angle identity? • I know that it will be of the form #2C(x)=C(x/2)#, but I am still stuck to write the transformed function. Can anybody help? • How do you find the equation of a regression line with a TI-83? • Show that any linear function #f(x) = ax + b# is a continuous function in any #x_o in ℝ# ? • Question #51024 • Calculate #sum_(n=0)^oo sqrt(n+3)+sqrtn-2sqrt(n+2)# ? • (1-(1/(2x+3)))>((x+1)/(x-1)) ? • How do you write #(4sqrt(3)-4i)^22# in the form of a+bi? • Question #b37dd • Let # f(x) = |x-1|. # 1) Verify that # f(x) # is neither even nor odd. 
2) Can # f(x) # be written as the sum of an even function and an odd function ? a) If so, exhibit a solution. Are there more solutions ? b) If not, prove that it is impossible. • A boat is sailing due east parallel to the shoreline at speed of 10 miles per hour. At a given time, the bearing to a lighthouse is S 72° E, and 15 minutes later the bearing is S 66°. How do you find the distance from the boat to the lighthouse? • If sin x = -12/13 and tan x is positive, find the values of cos x and tan x ? • Question #8e421 • Is #f(x)=1-x-e^(-3x)/x# concave or convex at #x=4#? • Question #10dc7 • How do you integrate #int (1-2x^2)/((x+9)(x+7)(x+1)) # using partial fractions? • Are there solutions to the system of inequalities described by #y<3x+5, y>=x+4#? • The remainder when x^(2011) is divided by x^2 -3x+2 is ? • If the sum of the coefficient of 1st ,2nd,3rd term of the expansion of (x2+1/x) raised to the power m is 46 then find the coefficient of the terms that does not contain x? • How do use the discriminant test to determine whether the graph #9x^2+6xy+y^2+6x-22=0# whether the graph is parabola, ellipse, or hyperbola? • Question #d3878 • Question #753ef • What is the derivative of cos^4(x) + cos (x^4)? • Question #bc494 • Question #55510 • Question #3d707 • How do you find intercepts, extrema, points of inflections, asymptotes and graph #y=x^5-5x#? • Question #47488 • Between two positive real nos. a and b there are 2 G.M's G_1 and G_2 and a single A.M A.The G.M between G_1 and G_2 is M.Can you prove that M^3=ab(A^3)? • Question #acc14 • Question #137b9 • Question #ae174 • What is the limit as x approaches infinity of #ln(x)#? • Question #b6ffe • Question #91d56 • How do you solve #abs(6b-2)+10=44#? • Question #6f857 • Why is #r=3cos2theta# not symmetric over #theta=pi/2#? • Question #771c5 • How do you find the average rate of change of y with respect to x over the interval [1, 3] given #y=3x^4#? 
• A parallelogram is determined by the vectors a = (-2,5) and b = (3,2). Determined the angles between the diagonals of the parallelogram? • Question #aa041 • How can I tell if my data is normally distributed? Do the mean, median and mode have to be identical for it to be a normal distribution? • Question #3052f • How do you find the domain and range of #M(x)= -2/14x^2-11x-15#? • How do I find the anti-derivative of #x^3sec^2(x) + 3x^2tan(x)#? • Question #40999 • Kindly solve this question based on Functions ? • Two opposite sides of a parallelogram have lengths of #8 #. If one corner of the parallelogram has an angle of #pi/8 # and the parallelogram's area is #12 #, how long are the other two sides? • Two opposite sides of a parallelogram have lengths of #8 #. If one corner of the parallelogram has an angle of #pi/8 # and the parallelogram's area is #12 #, how long are the other two sides? • Question #a6c71 • How do you find the sum of #Sigma(-2/7)^n# from n #[0,oo)#? • Question #987b8 • What is the period of #f(t) = 3(sin(t-pi/4)-4)#? • How do you simplify #(4/5)/(6 2/3)#? • Question #61f5a • What is the square root of #48569834# ? • Given #cosbeta=5#, how do you find #cosbeta#? • What is the range of the function #x^2+y^2=36#? • How do you factor #a^4+2a^2b^2+b^8#? • Question #3100a • How do you solve and graph #abs(2x+3)<=11#? • How do you rationalize the denominator of #1/(sqrt(2)+sqrt(3)+sqrt(5))#?
Explain interconnect crosstalk in digital VLSI written 5.7 years ago, modified 2.6 years ago 1 Answer Crosstalk is the term given to the situation where energy from a signal on one line is transferred to a neighboring line by electromagnetic means. In general, both capacitive and inductive coupling exist. At the chip level, however, the currents through the signal lines are usually too small to induce magnetic coupling, so parasitic inductance is ignored here. Capacitive coupling, on the other hand, depends on the line-to-line spacing S as illustrated in the general situation portrayed in Figure. Since capacitive coupling between two conducting lines is inversely proportional to the distance between the two lines, a small value of S implies a large coupling capacitance $C_c$ exists. Because of this dependence, it is not uncommon to find a minimum layout spacing design rule for critical lines that is actually larger than what could be created in the processing line. Also, the capacitive coupling increases with the length of the interaction, so it is important that the interconnects not be placed close to one another for any extended distance. Let us use the geometry in Figure to estimate the coupling capacitance. This cross-sectional view shows the spacing S between two identical interconnect lines. An empirical formula, in units of F/cm, provides a reasonable estimate for the coupling capacitance $C_c'$ per unit length and can be applied directly to the geometry. The total coupling capacitance in farads of a line that has a length d is calculated from $C_c = C_c' \cdot d$. This shows explicitly the fact that $C_c$ increases as the separation distance S decreases. The importance of $C_c$ becomes evident when we examine how two circuits can interact via electric field coupling. Consider the situation shown in Figure where two independent lines interact through a coupling field $E_c$.
Line 1 is at a voltage $V_1(t)$ at the input to inverter B, while line 2 has a voltage $V_2(t)$ which is the input of inverter D. The field is supported by the difference in voltages $V_1(t) - V_2(t)$. At the circuit level, we analyze the situation by introducing lumped-equivalent transmission line models as in Figure. The electric field interaction is included through the coupling capacitor $C_c$. The placement of $C_c$ in the circuit corresponds to the simplest type of single-capacitor coupling model; a more accurate analysis might add two capacitors, one on each side of the resistors. The current through the capacitor is calculated from the relation $i_c=C_c\frac{dV_c}{dt} =C_c\frac{d(V_1-V_2)}{dt}$ and is assumed to flow from line 1 to line 2 by the choice of voltages. If the difference $V_1(t) - V_2(t)$ changes in time, then the two lines become electrically coupled and the voltages are different from the case where they are independent.
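The total-capacitance relation and the capacitor-current relation above combine into a one-line estimate of the injected crosstalk current; the specific values of $C_c'$, line length, and voltage slew below are illustrative assumptions, not taken from the answer:

```python
def coupling_current(cc_per_cm, length_cm, dv_dt):
    """Crosstalk current through the coupling capacitor.

    i_c = C_c * d(V1 - V2)/dt, with C_c = C_c' * d.

    cc_per_cm : coupling capacitance per unit length C_c' (F/cm)
    length_cm : interaction length d of the parallel lines (cm)
    dv_dt     : rate of change of the line-to-line voltage (V/s)
    """
    cc = cc_per_cm * length_cm   # total coupling capacitance C_c
    return cc * dv_dt

# Assumed example: C_c' = 1 pF/cm, 1 mm of parallel run,
# and a 1 V swing in 100 ps on the aggressor line
i_c = coupling_current(1e-12, 0.1, 1.0 / 100e-12)
print(i_c)  # about 1e-3 A: a full milliamp of injected noise current
```

The linear dependence on `length_cm` is why long parallel runs of closely spaced interconnect are avoided in layout.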
An improved and extended parameterization of the CO₂ 15 µm cooling in the middle and upper atmosphere (CO2_cool_fort-1.0)

Articles | Volume 17, issue 10 © Author(s) 2024. This work is distributed under the Creative Commons Attribution 4.0 License.

The radiative infrared cooling of CO₂ in the middle atmosphere, where it emits under non-local thermodynamic equilibrium (non-LTE) conditions, is a crucial contribution to the energy balance of this region and hence to establishing its thermal structure. The non-LTE computation is too CPU time-consuming to be fully incorporated into climate models, and hence it is parameterized. The most used parameterization of the CO₂ 15 µm cooling for Earth's middle and upper atmosphere was developed by Fomichev et al. (1998). The valid range of this parameterization with respect to CO₂ volume mixing ratios (VMRs) is, however, exceeded by the CO₂ of several scenarios considered in the Coupled Climate Model Intercomparison Projects, in particular the abrupt-4×CO₂ experiment. Therefore, an extension, as well as an update, of that parameterization is both needed and timely. In this work, we present an update of that parameterization that now covers CO₂ volume mixing ratios in the lower atmosphere from ∼0.5 to over 10 times the CO₂ pre-industrial value of 284 ppmv (i.e. 150 to 3000 ppmv). Furthermore, it is improved by using a more contemporary CO₂ line list and the collisional rates that affect the CO₂ cooling rates. Overall, its accuracy is improved when tested for the reference temperature profiles as well as for measured temperature fields covering all expected conditions (latitude and season) of the middle atmosphere. The errors obtained for the reference temperature profiles are below 0.5 K d⁻¹ for the present-day and lower CO₂ VMRs.
Those errors increase to ∼1–2 K d⁻¹ at altitudes between 110 and 120 km for CO₂ concentrations of 2 to 3 times the pre-industrial values. For very high CO₂ concentrations (4 to 10 times the pre-industrial abundances), those errors are below ∼1 K d⁻¹ for most regions and conditions, except at 107–135 km, where the parameterization overestimates them by ∼1.2 %. These errors are comparable to the deviation of the non-LTE cooling rates with respect to LTE at about 70 km and below, but they are negligible (several times smaller) above that altitude. When applied to a large dataset of global (pole to pole and four seasons) temperature profiles measured by MIPAS (Michelson Interferometer for Passive Atmospheric Spectroscopy) (middle- and upper-atmosphere mode), the errors of the parameterization for the mean cooling rate (bias) are generally below 0.5 K d⁻¹, except between 5×10⁻³ and 3×10⁻⁴ hPa (∼85–98 km), where they can reach biases of 1–2 K d⁻¹. For single-temperature profiles, the cooling rate error (estimated by the root mean square, rms, of a statistically significant sample) is about 1–2 K d⁻¹ below 5×10⁻³ hPa (∼85 km) and above 2×10⁻⁴ hPa (∼102 km). In the intermediate region, however, it is between 2 and 7 K d⁻¹. For elevated stratopause events, the parameterization underestimates the mean cooling rates by 3–7 K d⁻¹ (∼10 %) at altitudes of 85–95 km and the individual cooling rates show a significant rms (5–15 K d⁻¹). Further, we have also tested the parameterization for the temperature obtained by a high-resolution version of the Whole Atmosphere Community Climate Model (WACCM-X), which shows a large temperature variability and wave structure in the middle atmosphere.
In this case, the mean (bias) error of the parameterization is very small, smaller than 0.5 K d⁻¹ for most atmospheric layers, reaching only maximum values of 2 K d⁻¹ near 5×10⁻⁴ hPa (∼96 km). The rms has values of 1–2 K d⁻¹ (∼20 %) below ∼2×10⁻² hPa (∼80 km) and values smaller than 4 K d⁻¹ (∼2 %) above 10⁻⁴ hPa (∼105 km). In the intermediate region between ∼5×10⁻³ and ∼2×10⁻⁴ hPa (85–102 km), the rms is in the range of 5–12 K d⁻¹. While these values are significant in percentage at ∼5×10⁻³–5×10⁻⁴ hPa, they are very small above ∼5×10⁻⁴ hPa (96 km). The routine is very fast, taking (1.5–7.5)×10⁻⁵ s, depending on the extension of the atmospheric profile, the processor and the Fortran compiler.

Received: 28 Oct 2023 – Discussion started: 06 Nov 2023 – Revised: 22 Mar 2024 – Accepted: 05 Apr 2024 – Published: 28 May 2024

Carbon dioxide is the major infrared cooler of the atmosphere from the lower stratosphere up to the lower thermosphere, where emission by nitric oxide becomes important (López-Puertas and Taylor, 2001). However, the CO₂ infrared emissions in the ν₂ bands near 15 µm that are responsible for the cooling are in non-local thermodynamic equilibrium (non-LTE) above around 70 km (see e.g. López-Puertas and Taylor, 2001). Thus, in addition to the difficulty of computing the cooling rates in LTE, which requires the solution of the radiative transfer equation (RTE), i.e. a non-local problem, we have to calculate the non-LTE populations of the emitting levels.
Thus, the calculation of the non-LTE cooling rates requires the solution of the statistical equilibrium equations for all energy levels producing a significant emission and the corresponding RTE equations for all bands originating from them (see e.g. Chap. 3 in López-Puertas and Taylor, 2001). Therefore, the exact calculation of non-LTE cooling rates in general circulation models (GCMs) or climate models that extend in height above the stratopause is virtually impractical, and hence efficient parameterizations of the CO[2] infrared cooling have been developed for implementation in such models. The most used parameterization of the CO[2] 15 µm cooling for Earth's middle and upper atmosphere was developed by Fomichev et al. (1998). That parameterization is applicable for a limited range of CO[2] abundances, up to double the pre-industrial CO[2] concentration. Nowadays, however, with the rapid increase in the CO[2] concentration in the atmosphere and its expected increase in the coming decades, climate model projections are being carried out in much higher CO[2] scenarios that even quadruple the pre-industrial CO[2] abundance (van Vuuren et al., 2011; O'Neill et al., 2016). For example, such scenarios are considered in the Coupled Climate Model Intercomparison Projects. Therefore, parameterizations coping with such large CO[2] concentrations are highly in demand, and that is precisely the purpose of our work. In addition to that of Fomichev et al. (1998), other parameterizations of the CO[2] 15 µm cooling rates have been developed in the past. In the case of Earth's atmosphere, it is worth mentioning the comprehensive review of the early works reported by Fomichev et al. (1998), the summary presented in Sect. 5.8 of López-Puertas and Taylor (2001) and the more recent work of Feofilov and Kutepov (2012).
Further, just before this work was submitted, a new parameterization was developed that uses the accelerated lambda iteration and opacity distribution function techniques (Kutepov and Feofilov, 2023). For the Mars and Venus atmospheres, where CO[2] is the most abundant species, the problem has been tackled in several studies, e.g. López-Valverde et al. (1998), Hartogh et al. (2005), López-Valverde et al. (2008) and Gilli et al. (2017, 2021). In our case, we had the option of developing a completely new parameterization, of adapting other CO[2] parameterizations (like those cited above) or of extending and improving the parameterization of Fomichev et al. (1998). Mainly for practical reasons of promptness, we opted for the latter. The paper is structured as follows. A very basic description of the parameterization is presented in Sect. 2. Section 3 describes the input atmospheric parameters used in the parameterization and required for calculating the reference cooling rates. In Sect. 4 we describe the calculations of the reference LTE and non-LTE cooling rates. A detailed description of the parameterization is presented in Sect. 5. The testing and accuracy of the parameterization against (i) the reference cooling rates, (ii) the measured temperature fields of the middle atmosphere of MIPAS (Michelson Interferometer for Passive Atmospheric Spectroscopy) and (iii) the temperature profiles obtained by a high-resolution version of the Whole Atmosphere Community Climate Model (WACCM-X) are discussed in Sects. 6, 7 and 8, respectively. The previous cooling rate parameterization was commonly used together with a parameterization of the CO[2] near-infrared heating rates (Ogibalov and Fomichev, 2003) in GCMs. As we have extended the former to higher CO[2] volume mixing ratios (VMRs) and we do not plan to extend the latter to higher CO[2] VMRs in the near future, we assess in Sect. 9 the validity of the current near-infrared heating parameterization for high CO[2] VMRs.
In Sect. 10, we summarize the main conclusions of the study.

2 Framework of the parameterization

As discussed above, this parameterization is essentially based on that of Fomichev et al. (1998). To compute the CO[2] cooling rate, the atmosphere is divided into five regions: one LTE and four different non-LTE regions. The method and approximations for computing the cooling rates in those regions are described in detail in Sect. 5. The new parameterization has a finer grid and, because it has been developed to cover a larger range of CO[2] VMRs, the boundaries of the non-LTE regions were revised and, in general, their upper boundaries were shifted to higher altitudes. The scheme consists of 83 levels in $x=\log\left[1000/p(\mathrm{hPa})\right]$, covering x=0.125 to x=20.625, spaced by 0.25. The parameterization, however, can also be used for x>20.625, where it uses the same scheme as for the NLTE4 region (see Sect. 5.4). The relationship between pressure and geometrical altitude for the reference temperature profiles is shown in Fig. S1 in the Supplement. To a first approximation, the geometric altitude z below ∼120 km is related to x by z (km)≈7x. The parameterization computes cooling rates for given inputs of temperature and concentrations of CO[2], O(^3P), O[2] and N[2] as a function of pressure. No specific grid is required, and it can be irregular. The routine interpolates the given parameters to its internal pressure grid. Possible cooling effects caused by temperature disturbances at vertical scales smaller than the internal grid of the parameterization, ≲1.75 km (see e.g. Kutepov et al., 2013), are not taken into account. That is, we assume that the cooling induced by non-resolved gravity waves propagating with a vertical wavelength of the order of or smaller than ≲1.75 km would be taken into account in the GCMs by using an appropriate GW (gravity wave) parameterization.
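As an illustration, the internal grid and the approximate x–z mapping described above can be written down directly. This is a minimal sketch; the array names and the interpolation helper are ours and not part of the published routine, whose interpolation scheme may differ.

```python
import numpy as np

# Internal vertical grid of the parameterization: 83 levels in
# x = ln(1000 / p[hPa]), from x = 0.125 to x = 20.625, spaced by 0.25.
x_grid = 0.125 + 0.25 * np.arange(83)   # dimensionless log-pressure levels
p_grid = 1000.0 * np.exp(-x_grid)       # corresponding pressures (hPa)
z_approx = 7.0 * x_grid                 # rough geometric altitude (km), valid below ~120 km

def to_internal_grid(x_user, profile):
    """Interpolate a user-supplied profile, given on any (possibly
    irregular, but increasing) x grid, onto the internal grid, as the
    routine does with its temperature and VMR inputs."""
    return np.interp(x_grid, x_user, profile)
```

Note that `np.interp` requires the user grid to be increasing; an irregular but monotonic input grid, as allowed by the routine, is handled directly.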
Further, the collisional (de)activation of CO[2](v[2]) levels by the main atmospheric molecules (N[2] and O[2]) and by O(^3P) can also be prescribed. To compute the different coefficients employed by the parameterization (see Sect. 5), reference LTE and non-LTE cooling rates are required (see Sects. 4.1 and 4.2). These are calculated for selected reference atmospheres and are described in the next section.

3 The reference atmospheres

We used the same six pressure–temperature reference atmospheres as in Fomichev et al. (1998) for altitudes below ∼120 km. Above this altitude, they were extended up to ∼200 km with the empirical US Naval Research Laboratory Mass Spectrometer Incoherent Scatter Radar version 2.0 (MSIS2) model (Emmert et al., 2021) for medium conditions of solar activity: F[10.7]=103 sfu (June 2011) for all atmospheres except for MLE, which was F[10.7]=142 sfu (September 2011). These six p–T profiles cover the envelope of the climatological zonal mean temperatures of the current middle atmosphere very well, e.g. as measured by MIPAS from 2007 to 2012 (see Sect. 7). However, they do not cover the small-scale temporal and spatial temperature variability (see Fig. B6). The performance of the parameterization for such variability is addressed in Sect. 7. Further, the range of the six temperature profiles does not cover well the episodes of stratospheric warming with an elevated stratopause. During these events, the altitude region of the typical stratopause, at about 50 km, is much colder, by as much as 50 K colder than during normal conditions, and the altitude of the typical mesopause, near 85–90 km, is warmer by a similar amount (see Fig. 1a, profile ES). For these conditions the temperature profile is nearly isothermal from the tropopause up to about 0.1 hPa and exhibits an inversion above, with a peak near the mesopause. We anticipate that, for these rare conditions, the error incurred by this parameterization can be significant (see Sect. 7.2).
We should also mention that the envelope of these reference atmospheres does not fully cover the predicted temperatures for the end of this century for projections with high CO[2] emissions. In particular, WACCM simulations for this century under the RCP6.0 scenario (Marsh, 2011; Marsh et al., 2013; Garcia et al., 2017) yield zonal mean temperatures which are colder in the middle atmosphere. In order to cover such predictions, the envelope of the six p–T profiles assumed here would have to be widened by about −30 K in the upper stratosphere and by about −20 K in the mesosphere. The parameterization accuracy for such predicted temperatures has not been fully assessed in this work, as we have considered only the projection of high CO[2] VMR profiles but not the corresponding predicted temperature fields. This will be the subject of future work.

3.2 CO[2], O(^3P), O[2] and N[2] abundances

The valid range of the parameterization of Fomichev et al. (1998) with respect to CO[2] volume mixing ratios is exceeded by the CO[2] concentration of several scenarios considered in the 6th Coupled Climate Model Intercomparison Project (CMIP6), in particular for the 4×CO[2] experiment. Several CO[2] scenarios have been proposed for the future. Thus, van Vuuren et al. (2011) proposed the scenario RCP2.6, which reaches tropospheric CO[2] values near 1000 ppmv by the end of the century. Likewise, Meinshausen et al. (2011) suggested the high CO[2] scenario of RCP8.5 (CMIP5), which has CO[2] concentrations of 2000 ppmv in the second half of the 23rd century or even higher than 2000 ppmv (e.g. SSP5-8.5 in CMIP6; see O'Neill et al., 2016). Here we used a wide range of tropospheric CO[2] values ranging from about half of the pre-industrial (1851) value of 285 ppmv to about 10 times this value (see Fig. 1b). The specific profiles were built from a WACCM run under the CMIP6 SSP5-8.5 scenario (Marsh, 2011; Marsh et al., 2013; Garcia et al., 2017).
Global annual mean profiles of CO[2] were taken from WACCM simulations for the following years: 1851 (pre-industrial), CO[2] profile no. 2; 2014, CO[2] profile no. 3; 2050, CO[2] profile no. 4 (∼2× pre-industrial); and 2099, CO[2] profile no. 6 (∼4× pre-industrial). In addition, we set up the low CO[2] profile (no. 1) by halving pre-industrial profile no. 2, the intermediate CO[2] profile no. 5 (∼3× pre-industrial) from the mean of the WACCM outputs for 2050 and 2099, the high CO[2] profile no. 7 (∼5× pre-industrial) by multiplying the WACCM output for 2099 by a factor of 1.25, and the highest CO[2] profile no. 8 (∼10× pre-industrial) by multiplying the WACCM output for 2099 by a factor of 2.7. In addition to those CO[2] VMR profiles, we also composed the intermediate profile nos. 9, 10 and 11 for testing the parameterization (see Sect. 6.2), which are shown in Fig. 1b with dashed lines. Profile nos. 9 and 11 were obtained by multiplying the WACCM outputs for the years 2050 and 2099 by factors of 0.979 and 1.8, respectively. Profile no. 10 was calculated by weighting the WACCM annual means for the years 2050 and 2099 by 0.76875 and 0.256250, respectively. WACCM provides the CO[2] VMR profiles up to about 130 km. Above that altitude, they were calculated by using a WACCM-X run for 2008, which provides a CO[2] VMR up to near 500 km, and scaling them, in pressure levels, by the CO[2] value of the corresponding CO[2] profile at a pressure of 5… As discussed above, the parameterization requires the N[2], O[2] and O(^3P) volume mixing ratio profiles for the six p–T reference atmospheres. They were taken from the MSIS2 model (Emmert et al., 2021) and are shown in Fig. S2.

4 Cooling rates for the reference atmospheres

We describe in this section the non-LTE cooling rates used as a reference.
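The construction of test profile nos. 9–11 described in Sect. 3.2 amounts to simple scalings and a weighted mean of the WACCM annual means. The following is a minimal sketch with hypothetical stand-in VMR values; the real inputs are full vertical profiles from the model output.

```python
import numpy as np

# Hypothetical stand-ins for the WACCM global annual mean CO2 VMR
# profiles (ppmv) for 2050 and 2099 on a common pressure grid.
vmr_2050 = np.array([600.0, 590.0, 550.0])
vmr_2099 = np.array([1100.0, 1080.0, 1000.0])

# Test profiles built exactly as described in the text:
profile_9 = 0.979 * vmr_2050                           # scaled 2050 output
profile_11 = 1.8 * vmr_2099                            # scaled 2099 output
profile_10 = 0.76875 * vmr_2050 + 0.256250 * vmr_2099  # weighted annual means
```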
To compute the coefficients of the parameterization and the boundaries of the different layers, the cooling rates in LTE are also required; these are also described in this section. Further, we have assessed the accuracy of the LTE cooling rates by comparing them with those calculated by an independent code, the Reference Forward Model (RFM, Dudhia, 2017).

4.1 Reference LTE cooling rates

The LTE cooling rates have been computed using a modified Curtis matrix formulation (Funke et al., 2012). In that computation we used as the basis for the radiative transfer calculations (e.g. the optical depths, the transmittances and their differences) the Karlsruhe Optimised and Precise Radiative Transfer Algorithm (KOPRA, Stiller et al., 2002). This code is a well-tested general-purpose line-by-line radiative transfer model that includes all the known relevant processes for performing accurate radiative transfer calculations in planetary atmospheres. We used the CO[2] line list of HITRAN 2016 (Gordon et al., 2017), and the line shapes were modelled with a Voigt profile including the pressure and temperature dependencies of the Doppler and Lorentz half-widths. The line mixing, although of little importance in this case because the transmittances are integrated over a wide spectral range, was also taken into account (see Stiller et al., 2002). The flux transmittances were computed using a 10-point Gaussian quadrature. The wavenumber grid was 0.0005 cm^−1. The LTE cooling rates have been computed for the CO[2] bands associated with the vibrational states of the ν[1]ν[2] mode manifold covering the spectral range from 540 to 800 cm^−1 in intervals of 10 cm^−1. All bands listed in HITRAN 2016 for the six most abundant isotopes in those spectral regions were included in the calculation. For reference, the accurate cooling rates computed assuming LTE conditions for the six p–T profiles and the eight reference CO[2] VMRs are shown in Figs. S3 and S4.
In order to ensure the accuracy of these LTE cooling rates, we have compared them with those obtained with another very well-tested and widely used radiative transfer code, RFM (Dudhia, 2017). This code has been used in many studies relevant to the MIPAS instrument (Fischer et al., 2008) and for the retrieval of MIPAS level-2 data obtained by the University of Oxford. It is worth mentioning that RFM uses a classical Curtis matrix method (double-flux transmittance differences), while we use the modified Curtis matrix method. Figure 2 shows the results of the comparison for the US standard temperature profile and the CO[2] VMR of Fomichev et al. (1998). Here we used a common fine-altitude grid of 0.5 km. We see that the agreement between both codes is very good, with differences smaller than 0.1–0.2 K d^−1 at most altitudes. Note that some of the major differences appear to be associated with small oscillations in the RFM results. The same formulation has been used to calculate the Curtis matrices of all the CO[2] ν[2] bands which are required to compute the coefficients of the parameterization in the LTE region (see Sect. 5).

4.2 Reference non-LTE cooling rates

The reference line-by-line non-LTE cooling rates have been computed by using the GRANADA non-LTE code. The details of the method for solving the system of equations for CO[2] are given in Funke et al. (2012). In addition to the solution of the statistical and radiative transfer equations described in that work for the calculation of the non-LTE populations of the CO[2] levels, here, in order to compute accurate non-LTE cooling rates and to account for the overlap between the different CO[2] ν[2] bands, we included an additional final iteration computing the radiation fields in all the bands by using the lambda iteration method. This algorithm shares the radiative transfer algorithm with KOPRA (Stiller et al., 2002). Thus, the details of the radiative transfer calculation related to KOPRA, e.g.
line shape, spectroscopic data, wavenumber grid and line mixing, given in Sect. 4.1 above for the LTE case, also apply to the non-LTE calculations described here. In this case of non-LTE cooling rates, each ro-vibrational band contributes according to the non-LTE populations of its upper and lower levels. The non-LTE cooling rates calculated here comprise 16 ν[2] vibrational bands emitting and absorbing in the 15 µm region, i.e. the fundamental ν[2] band, three first hot ν[2] bands and seven second hot ν[2] bands of the major CO[2] isotopologue and the ν[2] fundamental bands of the isotopologues ^16O^13C^16O, ^16O^12C^18O, ^16O^12C^17O, ^16O^13C^18O and ^16O^13C^17O. The contributions of other weaker ν[2] bands arising from higher v[2] levels, e.g. v[2]=4, 5 or 6, are included in the calculation but have negligible contributions for the conditions of Earth's atmosphere.

Table 1: collisional processes and rates affecting the CO[2] v[2] levels (Funke et al., 2012). Footnotes: ^a Rate coefficient for the forward sense of the process (cm^3 s^−1). ^b This rate is taken as 10^−15 cm^3 s^−1 for temperatures lower than 150 K (see Funke et al., 2012). T is temperature (K). i and j are different CO[2] isotopologues; i=1–6 except as noted. v[2] denotes equivalent 2v[1]+v[2] states: for example, v[2]=2 is the triad (10002, 02201, 10001).

For the calculations of the non-LTE cooling rates, a collisional scheme and collisional rates are required. Although the collisional rates affecting the CO[2] v[2] levels are an input parameter for the parameterization, here we have used, for the calculations of the reference cooling rates and for testing the parameterization, the collisional rates described in Funke et al. (2012). They have been recently revised and used in the non-LTE retrieval of temperature from SABER and MIPAS measurements (García-Comas et al., 2008, 2023). The most relevant collisional processes concerning the populations of the levels emitting in the different ν[2] bands described above, and their rates, are listed in Table 1 for easier reference.
We should note that these rates and their temperature dependencies are different from those used in the previous parameterization of Fomichev et al. (1998). The values are in general of very similar magnitudes, except for the k[O] rate (process 1c in Table 1), which has been considered here with its upper limit, i.e. about a factor of 2 larger than in the parameterization of Fomichev et al. (1998). This rate coefficient is not well known, with uncertainties of the order of a factor of 2 (see e.g. García-Comas et al., 2008). While laboratory measurements are in the range of 1.5 to $2\times10^{-12}$ cm^3 s^−1, the values derived from atmospheric observations are close to $6\times10^{-12}$ cm^3 s^−1. Although this rate can be chosen when using this parameterization, we have optimized it for its larger value (see Table 1), as this rate has been used in the most recent non-LTE retrievals of temperature from SABER and MIPAS measurements. The effects of using half of this value on the cooling rates are discussed in Sect. 6.2. In the comparisons shown in the next sections, we consistently used the collisional rates in Table 1 for the two parameterizations. The cooling rates near 15 µm change very little with the illumination conditions. However, those cooling rates (or, more strictly speaking, the flux divergence) of the CO[2] ν[2] bands computed by GRANADA under daytime conditions are affected by some emission from the relaxation and/or redistribution of the solar energy absorbed in the near-infrared bands (see e.g. López-Puertas et al., 1990). As this absorption or heating is already taken into account by the near-infrared (NIR) solar heating parameterization (see Sect. 9), all the non-LTE cooling rates computed here have been performed for nighttime conditions. The results for the accurate, line-by-line non-LTE cooling rates computed for the six reference p–T profiles and the eight CO[2] VMRs are shown in Fig.
3 from the stratosphere up to the lower thermosphere and in Fig. S5 for the upper part of the parameterization, i.e. from 80 to 200 km. We observe that the altitude distribution of the cooling rates depends very much on the temperature profile. This is the major difficulty in building the parameterization. A general common feature is the maximum near the stratopause, because at these altitudes the non-LTE cooling rates do not differ significantly from those in LTE, and these are mainly driven by the high temperature of this region. Above the stratopause, the total non-LTE cooling rates depend very much on the contributions of the different bands, e.g. the ν[2] fundamental band of the major isotopologue (FB), the contributions of the first and second hot bands (Hots) and those of the ν[2] fundamental bands of the five minor isotopologues. These contributions are shown in Fig. 4 for the contemporary CO[2] VMR profile (no. 3) and four p–T profiles. The non-LTE cooling rates generally decrease with altitude above the stratopause, reaching a minimum near the mesopause for several p–T profiles; see e.g. the SAS and MLE atmospheres in Figs. 3 and 4. The cooling can even be negative, i.e. net heating, for the very cold sub-arctic summer (SAS) mesopause, where the heating can reach several kelvin per day (see the bottom-left panel of Fig. 3). Exceptional cases are the winter atmospheres (mid-latitude winter, MLW (not shown), and sub-arctic winter, SAW), where the mesopause is warmer and the cooling rates are high in this region. Above the mesopause, the cooling rate rapidly increases following the enhancement of the kinetic temperature. Above about 130 km, however, the cooling rates decline (see Fig. B1). At these altitudes, cooling to space is a very good approximation to the non-LTE cooling rate, which, when expressed in kelvin per day, is proportional to the CO[2] VMR, to the atomic oxygen density [O] and to the temperature through $\exp(-E/kT)$ (see e.g. Sect.
9.2 in López-Puertas and Taylor, 2001). As altitude increases, the CO[2] VMR decreases, and so does [O]. These two effects overcome the temperature increase, leading to a net cooling rate decrease. Note the significant contribution of the hot bands in the lower thermosphere (120–150 km; see Fig. B1), essentially due to the first hot band of the major isotopologue at these altitudes, which is about 10% of the total cooling. The dependence of the non-LTE cooling rates on the CO[2] abundances is illustrated in Fig. 3. We observe that, in general, the cooling rate correlates very well with the CO[2] abundance, although that correlation is not always linear and generally depends on altitude. This is also true for the cases where we have net heating for low CO[2] VMRs, e.g. the SAS atmosphere between about 75 and 95 km. For the MLE and MLS atmospheres, the cooling rate near ∼90 km changes from net cooling to net heating for the largest CO[2] VMRs. Further, the very low cooling for the tropical (TRO) p–T profile near 70 km remains very low even when the CO[2] VMR varies by a factor of 20. In the lower thermosphere, e.g. above around ∼110 km, however, the dependency of the cooling rate on the CO[2] is very close to being linear (see Fig. S5). A comparison of the non-LTE and LTE cooling rates for the six p–T reference atmospheres for CO[2] VMR nos. 3 (current VMR) and 6 (4 times the pre-industrial value) is shown in Figs. 5 and S6. This comparison is necessary in order to establish the boundaries of the different atmospheric regions of the parameterization (see Sect. 5). We first observe (Fig. 5) that the altitude of the departure of the cooling rate from LTE (considered the altitude where the non-LTE–LTE difference is larger than 5%) depends on the temperature profile and ranges from $5.2\times10^{-2}$ hPa (∼72.5 km) for the SAS atmosphere to $1.2\times10^{-2}$ hPa (∼78.7 km) for the TRO atmosphere.
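The cooling-to-space scaling invoked at the start of this section can be illustrated numerically. The following sketch uses purely illustrative densities and temperatures (not values from this work); only the constant, hc/k_B times the ν[2] wavenumber, is physical.

```python
import math

# Cooling-to-space scaling above ~130 km (Sect. 9.2 of Lopez-Puertas and
# Taylor, 2001): Q (K/day) is proportional to vmr_CO2 * [O] * exp(-E/kT).
E_OVER_K = 1.43877 * 667.38  # K; (hc/k_B) * nu for the nu2 quantum at 667.38 cm-1

def relative_cooling(vmr_co2, o_density, temperature):
    """Unnormalized cooling-to-space factor; only ratios are meaningful."""
    return vmr_co2 * o_density * math.exp(-E_OVER_K / temperature)

# Illustrative (made-up) values: between ~130 and ~200 km the temperature
# increase is outweighed by the drop in CO2 VMR and in [O], so the net
# cooling rate decreases with altitude.
q_130 = relative_cooling(vmr_co2=100e-6, o_density=1e11, temperature=600.0)
q_200 = relative_cooling(vmr_co2=5e-6, o_density=5e9, temperature=900.0)
```

With these made-up inputs the factor at ∼200 km comes out smaller than at ∼130 km, reproducing the qualitative decline described in the text, and the factor is exactly linear in the CO[2] VMR.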
A similar plot but for the higher CO[2] profile no. 6 (∼4× pre-industrial) is shown in Fig. S6. An overview of the altitude or pressure level of the deviation of the cooling rate from LTE is shown in Fig. B2 for the six p–T profiles and the eight CO[2] VMR profiles. We see that the lower altitudes (higher pressures) occur for the SAS and SAW reference atmospheres. It is also evident that this altitude increases with the CO[2] VMR, except for the SAS and SAW cases, for which it is nearly independent of the CO[2] VMR. That is expected as, for a more abundant CO[2] atmosphere, the 15 µm bands become optically thicker and fewer collisions are sufficient for keeping the emitting levels in LTE. Figure B2 suggests that, for higher CO[2] VMRs, the LTE–non-LTE transition region could be placed at higher altitudes. However, as the parameterization is intended to cover the full range of CO[2] VMR profiles, we have to be conservative, and we placed it at the lowest altitude found for any p–T or CO[2] VMR profile. Thus, it has been taken at x=9.875 ($p=5.14\times10^{-2}$ hPa, z≈70 km), which, except for the SAW atmosphere with the lowest CO[2] profile, lies below the LTE–non-LTE transition for all the p–T and CO[2] VMR profiles. For completeness, Fig. B3 shows an example of the comparison of non-LTE and LTE cooling rates, including the thermosphere, for the six p–T profiles and the current CO[2] values. This shows the enormous difference between the LTE and non-LTE cooling rates (the non-LTE values being much smaller) in the thermosphere. The atomic oxygen concentration is an input to the parameterization and plays a crucial role in determining the CO[2] infrared cooling. As a consequence, it is very important in establishing the different non-LTE regions of the parameterization (Fomichev et al., 1998).
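The departure-from-LTE level used in this section (the altitude where the non-LTE–LTE difference first exceeds 5%) can be located programmatically. This is a minimal sketch; the function name and conventions are ours.

```python
import numpy as np

def lte_departure_index(q_nlte, q_lte, threshold=0.05):
    """Index of the lowest level (arrays ordered bottom-up) at which the
    non-LTE cooling rate first deviates from LTE by more than `threshold`
    (5% as adopted in the text); None if it never does."""
    rel = np.abs(q_nlte - q_lte) / np.maximum(np.abs(q_lte), 1e-30)
    above = np.nonzero(rel > threshold)[0]
    return int(above[0]) if above.size else None
```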
To identify the atmospheric regions where it is important, we have performed a calculation dividing the k[O] collisional rate by a factor of 2, which is almost equivalent to changing the O(^3P) concentration by the same factor. Figure 6 shows this effect for four p–T profiles and considering the current CO[2]. Generally, it is most important above around $10^{-3}$ hPa (∼95 km). However, for the SAS and SAW atmospheres, it is also important down to $5\times10^{-3}$ hPa (∼85 km). The fact that its importance starts being significant at different atmospheric levels for the different p–T profiles poses an additional difficulty in the development of the parameterization.

5 The parameterization

Essentially, here we follow the parameterization developed by Fomichev et al. (1998). A brief description of the method, including the most important features and equations, is given in this section. The atmosphere is divided into five different regions (see Fig. 7) where different approaches are used for calculating the cooling rates. These regions are qualitatively the same as those defined by Fomichev et al. (1998), but their altitude extensions (except for the LTE region) have been significantly revised, mainly as a consequence of the ample range of CO[2] abundances for which this parameterization is developed. In fact, their upper boundaries have been moved upwards (except for LTE), resulting in the following ranges: LTE, x=0–9.875 (z≈0–70 km); NLTE1, x=9.875–12.625 (z≈70–87 km); NLTE2, x=12.625–16.375 (z≈87–109 km); NLTE3, x=16.375–19.875 (z≈109–180 km); and NLTE4, x>19.875 (z≳180 km). The lowermost (LTE) and uppermost (NLTE4) regions are the most straightforward and also the regions where the errors are in general smaller. The most difficult parts are the transition regions from LTE to non-LTE, where (i) several bands contribute to the cooling with different source functions and their relative contributions depend very much on the actual temperature profiles (see Fig.
4), and (ii) the exchange of radiation between the layers is significant and different for the considered bands. Further, although most of the radiative excitation at a given layer is produced by the absorption of photons travelling from below, the absorption of photons travelling downwards can also contribute significantly. This is the case, for example, for the stronger fundamental band near the mesopause. Further, the cooling above around 90 km also depends on the collisions with atomic oxygen. This effect can be accurately taken into account in the upper non-LTE regions, where all the bands become optically thin. However, it is very difficult to represent it properly between around the mesopause and a few tens of kilometres above, where the atomic oxygen concentration varies largely and the exchange of radiation between the layers is still important.

5.1 The LTE region

The parameterization in the LTE region is based on the Curtis matrix method. The cooling rate $\epsilon_i^{t}(\nu)$ at a given pressure level $x_i$, in a spectral region $\nu$ and for a particular temperature profile $t$, is given by

$$\epsilon_i^{t}(\nu)=\sum_{j=0}^{j_{\mathrm{CM}}}\mathcal{A}_{i,j}^{t}(\nu)\,\phi_j^{t}(\nu)+v_i^{t}(\nu)\,\phi\left(T_{\mathrm{surf}}^{t},\nu\right),\qquad(1)$$

where the indices $i,j$ refer to pressure levels $x_i$ and $x_j$, and the sum is extended over pressure levels $x_j$ ranging from the lower boundary, $x_j=0$, up to $x_{j_{\mathrm{CM}}}=13.875$. This upper boundary of the Curtis matrix has been selected to minimize the error in the lowest non-LTE region, NLTE1 (see more details in Sect. 5.2 below).
$\mathcal{A}_{i,j}^{t}(\nu)$ is the modified Curtis matrix (slightly different from its usual definition; see e.g. López-Puertas and Taylor, 2001). The factor $\phi_j^{t}(\nu)$ in Eq. (1) represents the exponential part of the Planck function and is given by

$$\phi_j^{t}(\nu)=\exp\left[-h\nu/\left(k_{\mathrm{B}}\,T_j^{t}\right)\right],\qquad(2)$$

where $h$ is the Planck constant, $k_{\mathrm{B}}$ is the Boltzmann constant, and $T_j^{t}$ is the temperature of the p–T profile $t$ at level $x_j$. Similarly, $\phi\left(T_{\mathrm{surf}}^{t},\nu\right)$ corresponds to the exponential part of the Planck function for the surface temperature $T_{\mathrm{surf}}^{t}$. The $v_i^{t}(\nu)$ term accounts for the absorption at level $i$ times the transmission from the surface up to level $i$ at $\nu$, so that $v_i^{t}(\nu)\,\phi\left(T_{\mathrm{surf}}^{t},\nu\right)$ accounts for the heating rate due to the absorption of the radiation from the surface (or lower boundary). The cooling rate is calculated in the spectral range from 540 to 800 cm^−1, divided into frequency intervals, $\nu$, 10 cm^−1 wide. Those LTE cooling rate profiles have been calculated for each of the six p–T reference atmospheres and the eight CO[2] VMR profiles.
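Numerically, Eqs. (1)–(2) reduce to a matrix–vector product per spectral interval. The following is a minimal sketch: the Curtis matrix and the surface-flux vector are hypothetical precomputed inputs, and since ν is given in cm^−1, the exponent hν/(k_B T) is evaluated with hc/k_B ≈ 1.43877 cm K.

```python
import numpy as np

HC_OVER_KB = 1.43877  # cm K, so that h*nu/(k_B*T) = 1.43877 * nu[cm-1] / T[K]

def phi(T, nu):
    """Exponential part of the Planck function, Eq. (2)."""
    return np.exp(-HC_OVER_KB * nu / T)

def lte_cooling(A, v, T, T_surf, nu):
    """Evaluate Eq. (1) at all levels for one spectral interval nu:
    eps_i = sum_j A_ij * phi(T_j, nu) + v_i * phi(T_surf, nu).
    A (Curtis matrix) and v (surface-flux vector) are hypothetical
    precomputed inputs here; in the parameterization they are the
    coefficients derived for the reference atmospheres."""
    return A @ phi(T, nu) + v * phi(T_surf, nu)
```

The full LTE cooling rate then sums such terms over the 10 cm^−1 intervals between 540 and 800 cm^−1.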
In the parameterization, the Curtis matrix is expressed with an explicit temperature dependence by

$$\mathcal{A}_{i,j}^{t}(\nu)=\mathbf{a}_{i,j}^{t}(\nu)+\mathbf{b}_{i,j}^{t}(\nu)\,\phi_i^{t}(\nu),$$

where the matrix coefficients $\mathbf{a}_{i,j}^{t}(\nu)$ and $\mathbf{b}_{i,j}^{t}(\nu)$ are given by

$$\mathbf{a}_{i,j}^{t}(\nu)=\mathcal{A}_{i,j}^{t}(\nu)\,\frac{S_{0}(\nu)}{S_{0}(\nu)+\left[S_{1}(\nu)+S_{2}(\nu)\right]\phi_i^{t}(\nu)},$$
$$\mathbf{b}_{i,j}^{t}(\nu)=\mathcal{A}_{i,j}^{t}(\nu)\,\frac{S_{1}(\nu)+S_{2}(\nu)}{S_{0}(\nu)+\left[S_{1}(\nu)+S_{2}(\nu)\right]\phi_i^{t}(\nu)},$$

and $S_{0}(\nu)$, $S_{1}(\nu)$ and $S_{2}(\nu)$ are the band strengths of the fundamental, first hot and second hot bands, respectively, in the $\nu$ interval. In this way, the temperature dependence, mainly caused by the band strengths of the first and second hot bands, is carried out in $\phi_i^{t}(\nu)$. Those matrix coefficients are calculated for each spectral interval $\nu$. We obtain the coefficients for the entire spectral region of the CO[2] 15 µm bands by summing over all the $\nu$ intervals and weighting with the $\nu$ dependency of the $\phi_i^{t}(\nu)/\phi_i^{t}(\nu_{0})$ factor, e.g.
$$\mathbf{a}_{i,j}^{t}=\frac{\sum_{\nu}\mathbf{a}_{i,j}^{t}(\nu)\,\phi_{j}^{t}(\nu)}{\phi_{j}^{t}(\nu_{0})}\quad\text{and}\quad\mathbf{b}_{i,j}^{t}=\frac{\sum_{\nu}\mathbf{b}_{i,j}^{t}(\nu)\,\phi_{j}^{t}(\nu)\,\phi_{i}^{t}(\nu)}{\phi_{j}^{t}(\nu_{0})\,\phi_{i}^{t}(\nu_{0})},$$

with $\nu_{0}=667.3799$ cm$^{-1}$ being the frequency of the fundamental band of the major isotopologue. Next, we define global $\mathbf{a}_{i,j}$ and $\mathbf{b}_{i,j}$ matrix coefficients, to be used for any input temperature profile, as weighted averages of $\mathbf{a}_{i,j}^{t}$ and $\mathbf{b}_{i,j}^{t}$ for the six reference p–T profiles. We introduce a set of normalized, altitude-dependent weights $\xi_{i}^{t}$ for each temperature profile so that

$$\mathbf{a}_{i,j}=\sum_{t}\xi_{i}^{t}\,\mathbf{a}_{i,j}^{t}\quad\text{and}\quad\mathbf{b}_{i,j}=\sum_{t}\xi_{i}^{t}\,\mathbf{b}_{i,j}^{t}. \tag{3}$$

Analogously to the matrix coefficients $\mathbf{a}_{i,j}^{t}(\nu)$ and $\mathbf{b}_{i,j}^{t}(\nu)$, we define the corresponding vector coefficients for the surface flux, $a_{\mathrm{surf},i}^{t}(\nu)$ and $b_{\mathrm{surf},i}^{t}(\nu)$, so that $v_{i}^{t}(\nu)=a_{\mathrm{surf},i}^{t}(\nu)+b_{\mathrm{surf},i}^{t}(\nu)\,\phi_{i}^{t}(\nu)$, with

$$a_{\mathrm{surf},i}^{t}(\nu)=\frac{v_{i}^{t}(\nu)\,S_{0}(\nu)}{S_{0}(\nu)+\left[S_{1}(\nu)+S_{2}(\nu)\right]\phi_{i}^{t}(\nu)},\qquad b_{\mathrm{surf},i}^{t}(\nu)=\frac{v_{i}^{t}(\nu)\left[S_{1}(\nu)+S_{2}(\nu)\right]}{S_{0}(\nu)+\left[S_{1}(\nu)+S_{2}(\nu)\right]\phi_{i}^{t}(\nu)},$$

$$a_{\mathrm{surf},i}^{t}=\frac{\sum_{\nu}a_{\mathrm{surf},i}^{t}(\nu)\,\phi(T_{\mathrm{surf}}^{t},\nu)}{\phi(T_{\mathrm{surf}}^{t},\nu_{0})},\qquad b_{\mathrm{surf},i}^{t}=\frac{\sum_{\nu}b_{\mathrm{surf},i}^{t}(\nu)\,\phi(T_{\mathrm{surf}}^{t},\nu)\,\phi_{i}^{t}(\nu)}{\phi(T_{\mathrm{surf}}^{t},\nu_{0})\,\phi_{i}^{t}(\nu_{0})},$$

and the global surface vectors are obtained with the same weights:

$$a_{\mathrm{surf},i}=\sum_{t}\xi_{i}^{t}\,a_{\mathrm{surf},i}^{t}\quad\text{and}\quad b_{\mathrm{surf},i}=\sum_{t}\xi_{i}^{t}\,b_{\mathrm{surf},i}^{t}. \tag{4}$$

In this way, the cooling rate $\epsilon_{i}$ at a pressure level $x_{i}$, for a given input temperature profile $T_{\mathrm{inp}}$, is calculated in the parameterization by
$$\epsilon_{i}=\sum_{j}\left\{\left[\mathbf{a}_{i,j}+\mathbf{b}_{i,j}\,\phi_{i}^{T_{\mathrm{inp}}}(\nu_{0})\right]\phi_{j}^{T_{\mathrm{inp}}}(\nu_{0})\right\}+\left[a_{\mathrm{surf},i}+b_{\mathrm{surf},i}\,\phi_{i}^{T_{\mathrm{inp}}}(\nu_{0})\right]\phi\left(T_{\mathrm{surf}}^{\mathrm{inp}},\nu_{0}\right). \tag{5}$$

The weights $\xi_{i}^{t}$ are obtained by minimizing the cost function $\chi(x_{i})$ at each pressure level $x_{i}$, given by the sum of the squares of the differences between the reference LTE cooling rates, $\epsilon_{\mathrm{ref},i}^{t}$, and those computed by the parameterization using Eq. (5), $\epsilon_{\mathrm{par},i}^{t}$, for each p–T profile $t$, e.g.
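The double sum in Eq. (5) is a plain matrix–vector operation. A minimal illustrative sketch (function and argument names are ours; the coefficient matrices and surface vectors are assumed to be precomputed as described above):

```python
def cooling_rate_lte(a, b, a_surf, b_surf, phi_lvl, phi_surf):
    """Evaluate Eq. (5): LTE cooling rate at each level from the global
    coefficient matrices a, b (n x n, lists of lists), the surface vectors
    a_surf, b_surf (length n), the Planck factors phi_lvl[i] = phi_i(nu0)
    for the input temperature profile, and phi_surf = phi(T_surf, nu0)."""
    n = len(phi_lvl)
    eps = []
    for i in range(n):
        total = 0.0
        for j in range(n):
            # layer-exchange term: [a_ij + b_ij * phi_i] * phi_j
            total += (a[i][j] + b[i][j] * phi_lvl[i]) * phi_lvl[j]
        # surface (lower-boundary) term
        total += (a_surf[i] + b_surf[i] * phi_lvl[i]) * phi_surf
        eps.append(total)
    return eps
```

In a production code the inner loop would of course be a vectorized matrix product; the explicit loops are kept here to mirror the indices of Eq. (5).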
$$\chi(x_{i})=\sum_{t}\eta^{t}\left[\epsilon_{\mathrm{ref},i}^{t}-\epsilon_{\mathrm{par},i}^{t}\right]^{2}$$

or, in more detail, by

$$\chi(x_{i})=\sum_{t}\eta^{t}\left\{\epsilon_{\mathrm{ref},i}^{t}-\sum_{t'}\xi_{i}^{t'}\left\{\sum_{j}\left[\mathbf{a}_{i,j}^{t'}+\mathbf{b}_{i,j}^{t'}\,\phi_{i}^{t}(\nu_{0})\right]\phi_{j}^{t}(\nu_{0})+\left[a_{\mathrm{surf},i}^{t'}+b_{\mathrm{surf},i}^{t'}\,\phi_{i}^{t}(\nu_{0})\right]\phi\left(T_{\mathrm{surf}}^{t},\nu_{0}\right)\right\}\right\}^{2}.$$

The normalized coefficients $\eta^{t}$ were originally introduced to account for the different fractions of the area of Earth ascribed to each p–T reference profile. Thus, in the previous parameterization they were taken to be 0.05 for the subarctic (winter and summer) profiles, 0.1 for the mid-latitude (winter and summer) profiles, 0.4 for the tropical profile and 0.3 for the mid-latitude equinox p–T profile. In this study, we have explored different options, including the original coefficients and a uniform weighting for the six p–T profiles, and we found a smaller $\chi$ for the latter, i.e. $\eta^{t}=1/6$ for all the profiles. Hence, that option was adopted in this version. In this way, we have parameterized the cooling rates as a function of temperature.
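Because $\epsilon_{\mathrm{par}}$ is linear in the weights, the minimization of $\chi$ at each level is a small least-squares problem. A toy sketch for the case of two reference atmospheres with normalized weights $(\xi, 1-\xi)$, using a simple grid scan rather than the paper's actual solver (all names are ours):

```python
def best_weight(eps_ref, eps_set1, eps_set2, n_grid=1000):
    """At one pressure level, with two reference atmospheres and normalized
    weights (xi, 1 - xi), scan xi in [0, 1] for the value minimizing the
    chi-style least-squares cost with uniform eta weighting.
    eps_ref[t] is the reference cooling rate for atmosphere t; eps_set1[t]
    and eps_set2[t] are the rates predicted with coefficient set 1 or 2."""
    best_xi, best_chi = 0.0, float("inf")
    for k in range(n_grid + 1):
        xi = k / n_grid
        chi = sum((r - (xi * m1 + (1.0 - xi) * m2)) ** 2
                  for r, m1, m2 in zip(eps_ref, eps_set1, eps_set2))
        if chi < best_chi:
            best_xi, best_chi = xi, chi
    return best_xi
```

With six atmospheres the same idea generalizes to a constrained least-squares fit over six normalized weights; the grid scan is only meant to make the structure of the cost function concrete.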
The cooling rates also depend on the CO$_2$ VMR profiles (see Fig. 3). The parameterization incorporates the dependence on the CO$_2$ abundance by calculating $\mathbf{a}_{i,j}$ and $\mathbf{b}_{i,j}$ for a generic CO$_2$ profile by assuming a linear interpolation in $\log\left[\mathbf{a}_{i,j}/\mathrm{VMR}(x_{i})\right]$ and $\log\left[\mathbf{b}_{i,j}/\mathrm{VMR}(x_{i})\right]$ from the adjacent CO$_2$ VMR profiles. Thus, the $\mathbf{a}_{i,j}$ and $\mathbf{b}_{i,j}$ coefficients of Eq. (3) have been calculated (and are provided) for the eight CO$_2$ VMRs shown in Fig. 1b.

5.2 The NLTE1 region: the transition from LTE to non-LTE

This region is difficult to parameterize because we have several bands contributing to the cooling (see Fig. 4), and their relative contributions depend significantly on both the temperature structure and the CO$_2$ VMR profile. Note that, at certain levels, the cooling induced by the weaker hot bands is larger than that of the stronger fundamental band. We should also note that the contribution of the first hot bands at high altitudes, ∼110–150 km, is not negligible (5 %–10 %, Fig. B1). This contribution is accounted for in the parameterization by implicitly assuming that it is produced by the fundamental band of the main isotopologue (see below and Sect. 4.2). The lower boundary of this region, i.e. the LTE–NLTE1 transition, occurs, depending on the temperature profile, at altitudes from ∼70 km up to ∼85 km (0.08 to 0.004 hPa; see Fig. B2), taking place a few kilometres lower for the subarctic summer atmosphere and the lowest CO$_2$ VMR. This transition region occurs at higher altitudes for larger CO$_2$ VMRs; that is, the atmosphere becomes optically thicker and fewer collisions are enough to keep the levels in LTE. However, since we also need to represent the low CO$_2$ VMRs, we decided to conservatively set up this region at rather low altitudes, $x_{b,1}=9.875$ (∼70 km), the same level used in the previous parameterization.
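The CO$_2$-abundance interpolation of the coefficients described in Sect. 5.1 above can be sketched as follows. This is an illustration only: we assume a linear-in-VMR abscissa between the two adjacent reference profiles (the paper does not state the abscissa explicitly), positive coefficients (so the logarithm is defined), and a hypothetical function name:

```python
import math

def interp_coeff(vmr, vmr_lo, vmr_hi, coeff_lo, coeff_hi):
    """Coefficient (a_ij or b_ij, at one level x_i) for a generic CO2 VMR,
    obtained by interpolating linearly in log(coeff / VMR) between the two
    adjacent reference CO2 profiles."""
    w = (vmr - vmr_lo) / (vmr_hi - vmr_lo)  # assumed linear-in-VMR fraction
    log_ratio = ((1.0 - w) * math.log(coeff_lo / vmr_lo)
                 + w * math.log(coeff_hi / vmr_hi))
    return vmr * math.exp(log_ratio)
```

Note that if the coefficient scales exactly proportionally to the VMR, the interpolation reproduces that proportionality at any intermediate abundance.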
The upper limit of this region was set up in the previous parameterization at the pressure levels where collisions with O($^3$P) start affecting the cooling rates significantly. Again, that pressure level depends on the temperature profile (and also on the O($^3$P) concentration), being lower, at ∼0.004 hPa ($x\approx 12.4$), for the subarctic summer and winter conditions (see Fig. 6). Here, we have taken the upper boundary of $x_{b,2}=12.625$ (≈87 km), slightly higher than the 12.5 value assumed in the original parameterization. In this region we followed, as in Fomichev et al. (1998), the matrix approach discussed in Sect. 5.1 above. Thus, Eq. (5) was used but with modified $\mathbf{a}_{i,j}'^{\,t}$ and $\mathbf{b}_{i,j}'^{\,t}$ coefficients that account for the non-LTE corrections. For each p–T profile, $t$, we define

$$\mathbf{a}_{i,j}'^{\,t}=\mathbf{a}_{i,j}^{t}\left[\epsilon_{(\mathrm{ref},\mathrm{nlte}),i}^{t}/\epsilon_{(\mathrm{ref},\mathrm{lte}),i}^{t}\right],$$
$$\mathbf{b}_{i,j}'^{\,t}=\mathbf{b}_{i,j}^{t}\left[\epsilon_{(\mathrm{ref},\mathrm{nlte}),i}^{t}/\epsilon_{(\mathrm{ref},\mathrm{lte}),i}^{t}\right],$$

where $\epsilon_{(\mathrm{ref},\mathrm{lte})}^{t}$ and $\epsilon_{(\mathrm{ref},\mathrm{nlte})}^{t}$ are the reference LTE and non-LTE cooling rates, respectively. Then, the general $\mathbf{a}_{i,j}'$ and $\mathbf{b}_{i,j}'$ coefficients were calculated by following the same procedure as for the LTE region, i.e. by weighting the p–T-specific $\mathbf{a}_{i,j}'^{\,t}$ and $\mathbf{b}_{i,j}'^{\,t}$ coefficients with a set of altitude-dependent weights $\xi_{i}'^{\,t}$ and minimizing the total cost function $\chi(x_{i})$ (see Sect. 5.1).
In this way, we obtain

$$\mathbf{a}_{i,j}'=\sum_{t}\xi_{i}'^{\,t}\,\mathbf{a}_{i,j}^{t}\left[\epsilon_{(\mathrm{ref},\mathrm{nlte}),i}^{t}/\epsilon_{(\mathrm{ref},\mathrm{lte}),i}^{t}\right],$$
$$\mathbf{b}_{i,j}'=\sum_{t}\xi_{i}'^{\,t}\,\mathbf{b}_{i,j}^{t}\left[\epsilon_{(\mathrm{ref},\mathrm{nlte}),i}^{t}/\epsilon_{(\mathrm{ref},\mathrm{lte}),i}^{t}\right],$$

and the cooling rates are computed by using Eq. (5) but replacing $\mathbf{a}_{i,j}$ and $\mathbf{b}_{i,j}$ by $\mathbf{a}_{i,j}'$ and $\mathbf{b}_{i,j}'$, respectively, i.e.

$$\epsilon_{i}=\sum_{j}\left\{\left[\mathbf{a}_{i,j}'+\mathbf{b}_{i,j}'\,\phi_{i}^{T_{\mathrm{inp}}}(\nu_{0})\right]\phi_{j}^{T_{\mathrm{inp}}}(\nu_{0})\right\}+\left[a_{\mathrm{surf},i}+b_{\mathrm{surf},i}\,\phi_{i}^{T_{\mathrm{inp}}}(\nu_{0})\right]\phi\left(T_{\mathrm{surf}}^{\mathrm{inp}},\nu_{0}\right). \tag{6}$$

This procedure, while producing a perfect match for a single atmosphere by construction, generates irregularities for other atmospheres at some levels, e.g. when using $\mathbf{a}_{i,j}'^{\,t}$ for atmosphere $t'$ at points where $\epsilon_{(\mathrm{ref},\mathrm{lte}),i}^{t}$ is close to zero. We observed that the irregularities were significantly mitigated by reducing the dimensions of the Curtis matrix from 83×83 to 55×55, where $i=55$ corresponds to $x_{\mathrm{CM}}=13.875$ ($p=9.422\times10^{-4}$ hPa, $z\approx 94$ km), i.e. by placing $x_{\mathrm{CM}}$ slightly above the boundary between the NLTE1 and NLTE2 regions.
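The row-wise non-LTE scaling of the coefficients is simple to express in code. An illustrative sketch (names are ours); note the division by the LTE rate, which is exactly the ill-conditioned step that produces the irregularities discussed above when that rate approaches zero:

```python
def nlte_correct(coeff, eps_nlte, eps_lte):
    """Scale each row i of an LTE coefficient matrix (a or b) by the ratio
    of the reference non-LTE to LTE cooling rates at level i.
    coeff    : n x n matrix (list of lists) of LTE coefficients
    eps_nlte : reference non-LTE cooling rate per level
    eps_lte  : reference LTE cooling rate per level (must be nonzero)"""
    return [[coeff[i][j] * eps_nlte[i] / eps_lte[i]
             for j in range(len(coeff[i]))]
            for i in range(len(coeff))]
```

By construction, applying the scaled matrix to the same atmosphere reproduces its reference non-LTE cooling exactly; for a different atmosphere the correction is only approximate, which motivates the weighted averaging over the six p–T profiles.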
Errors induced in the LTE cooling rates by the matrix reduction are negligible (smaller than 0.05 K d$^{-1}$ at the upper levels).

5.3 The NLTE2 and NLTE3 regions: the recurrence formula with and without correction

The parameterization in the NLTE2, NLTE3 and NLTE4 regions is based on the recurrence formula proposed by Kutepov and Fomichev (1993). This approach is valid when the cooling rate is dominated by the fundamental band and the absorption of radiation coming from the layers above the layer at work can be neglected (Kutepov and Fomichev, 1993; Fomichev et al., 1998). The boundaries of these regions are then adapted to the applicability of that approach. The NLTE3 boundaries were chosen to embrace the region where those conditions are fulfilled to a large degree. In the layers below, i.e. in the NLTE2 region, that formula is not accurate and requires a correction term to account for the absorption of radiation coming from the layers above and for the cooling of bands other than the fundamental one of the main isotopologue. The recurrence formula is also the basis for the calculation of the cooling rate in the NLTE4 region (see Sect. 5.4), but it is simplified because the exchange of photons within the layers of this region can be neglected. Further, we should emphasize that the dependence of the cooling rate on the CO$_2$ VMR in these regions is mainly twofold: on the one hand, its direct dependence (see Eq. 7 below) and, on the other hand, the escape function, which depends on the CO$_2$ column above a given layer (see Figs. S7 and S8). We discuss below the boundaries of the NLTE2 and NLTE3 layers and the expressions used for the cooling rates, i.e. the recurrence formulation. The lower boundary of the NLTE2 region is set up at the layer where the cooling rate obtained by the corrected recurrence formula is more accurate than that given by the non-LTE-corrected Curtis matrix approach (used in NLTE1).
This has been set up at $x_{b,2}=12.625$ (≈87 km), which is very similar to the value of 12.5 (≈85 km) in the original parameterization. The upper boundary of the NLTE2 region is set up at the pressure level where the recurrence formula no longer needs to be corrected to yield an accurate estimation of the cooling rate. In this work, it has been set up at $x_{b,3}=16.375$ (≈109 km), which is significantly higher than the value of 14 (≈93 km) used in the previous parameterization. The main reason for this is that, for the higher CO$_2$ VMRs used here, the atmosphere becomes optically thicker; hence, the absorption of radiation from the layers above needs to be taken into account, also at lower pressures. Thus, the cooling rates in the NLTE2 and NLTE3 regions are calculated by

$$\epsilon(x_{i})=\kappa_{F}\,\frac{\mathrm{VMR}(x_{i})\,\left[1-\lambda(x_{i})\right]}{M(x_{i})}\,\tilde{\epsilon}(x_{i}), \tag{7}$$

where $\kappa_{F}=2.55520997\times10^{11}$ is a constant that depends on the Einstein coefficient of the fundamental band ($A$), on $\nu_{0}$ and on the units of $\epsilon$ (Fomichev et al., 1998)$^1$.
VMR($x$) is the CO$_2$ VMR; $M(x)$ is the mean molecular weight; $\lambda(x_{i})=A/\left[A+l_{t}(x_{i})\right]$, where $l_{t}(x_{i})=k_{\mathrm{N}_2}\,[\mathrm{N}_2]+k_{\mathrm{O}_2}\,[\mathrm{O}_2]+k_{\mathrm{O}}\,[\mathrm{O}]$; $k_{\mathrm{N}_2}$, $k_{\mathrm{O}_2}$ and $k_{\mathrm{O}}$ are the collisional rate constants with N$_2$, O$_2$ and O($^3$P) (see Table 1); and [N$_2$], [O$_2$] and [O($^3$P)] are the concentrations of the respective species. Note that the collisional rates depend on $x_{i}$ through their temperature dependencies. The $\tilde{\epsilon}$ at level $x_{i}$, $\tilde{\epsilon}(x_{i})$, is obtained by the recurrence formula

$$\left[1-\lambda(x_{i})\,(1-D_{i})\right]\tilde{\epsilon}(x_{i})=\left[1-\lambda(x_{i-1})\,(1-D_{i-1})\right]\tilde{\epsilon}(x_{i-1})+D_{i-1}\,\phi_{i-1}-D_{i}\,\phi_{i}, \tag{8}$$

starting from the lower boundary at $x_{i}=x_{b2}$, where, using Eq. (7),

$$\tilde{\epsilon}(x_{b2})=\frac{M(x_{b2})}{\kappa_{F}\,\mathrm{VMR}(x_{b2})\,\left[1-\lambda(x_{b2})\right]}\,\epsilon(x_{b2}), \tag{9}$$

and $\epsilon(x_{b2})$ is obtained by Eq. (6). The $D_{i}$ coefficients above are given by

$$D_{i}=\left(d_{i-1}+3\,d_{i}\right)/4\quad\text{and}\quad D_{i-1}=\left(3\,d_{i-1}+d_{i}\right)/4. \tag{10}$$

$L(u)$ is the escape function, which mainly depends on the CO$_2$ column, $u$, above a given level $x_{i}$. The temperature of those layers affects this function, as it influences the line shape of the CO$_2$ lines and hence the probability of photons escaping into space. This is reflected in Fig. S7a, which shows $L(u)$ as a function of the CO$_2$ column for the six p–T reference atmospheres and a single CO$_2$ VMR profile (no. 3, current VMR). The dependence of $L(u)$ on the CO$_2$ VMR profiles is shown in Fig. S7. In our calculations, we have used for $L(u)$ the average of this function for the six p–T reference atmospheres. The $\alpha(x_{i},u)$ parameter entering Eq. (11) and needed in the NLTE2 region has been computed by minimizing the following cost function at each point $x_{i}$:

$$\chi(x_{i})=\sum_{t}\eta^{t}\left[\epsilon_{\mathrm{ref}}^{t}(x_{i})-\epsilon_{\mathrm{par}}^{t}(\alpha,x_{i})\right]^{2}.$$

After performing some sensitivity tests, we used uniform weighting for the different reference atmospheres ($\eta^{t}=1/6$ for all atmospheres) rather than the area weighting used in the previous parameterization.
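The upward march of Eq. (8) from the $x_{b2}$ boundary can be sketched as follows (a minimal illustration with hypothetical names; the $\lambda$, $D$ and $\phi$ profiles are assumed to be precomputed and ordered bottom-to-top, with index 0 at $x_{b2}$):

```python
def recurrence(eps_tilde_b2, lam, d_coeff, phi_lvl):
    """March the recurrence of Eq. (8) upward from the x_b2 boundary.
    eps_tilde_b2 : boundary value from Eq. (9)
    lam[i]       : lambda(x_i) = A / (A + l_t), per level
    d_coeff[i]   : D_i coefficient, per level
    phi_lvl[i]   : Planck factor phi_i, per level"""
    eps_tilde = [eps_tilde_b2]
    for i in range(1, len(lam)):
        lhs = 1.0 - lam[i] * (1.0 - d_coeff[i])
        rhs = ((1.0 - lam[i - 1] * (1.0 - d_coeff[i - 1])) * eps_tilde[i - 1]
               + d_coeff[i - 1] * phi_lvl[i - 1] - d_coeff[i] * phi_lvl[i])
        eps_tilde.append(rhs / lhs)
    return eps_tilde
```

Each $\tilde{\epsilon}(x_i)$ would then be converted to a cooling rate via Eq. (7); in the NLTE2 region the $D$ coefficients additionally carry the $\alpha$ correction, while in NLTE3 $\alpha=1$.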
Other tests were performed to determine the optimal upper boundary for the $\alpha$ correction: extending the region upwards reduces the error in the $x=16$–19 region but results in a spurious jump at the uppermost boundary, which is avoided when using a lower $x_{b3}$ of 16.375. It is worth noting that $\alpha$ above $x=14.5$ takes values below unity, thus decreasing the escape in the region. For the fit of the optimal $\alpha$, the parameterized value of $\epsilon(x_{b2})$ is considered a starting point rather than the reference value. In the NLTE3 region, we used the same method as in region NLTE2, except that no correction for the $L(u)$ function is applied, i.e. $\alpha(x_{i},u)=1$.

5.4 The NLTE4 region

The recurrence formula described above is also valid in the uppermost NLTE4 region, but, as the CO$_2$ bands are so optically thin here, the exchange of radiation within the layers of this region can be neglected, and the recurrence formula is reduced to a simpler expression, i.e. a cooling-to-space term and an additional term that accounts for the absorption of the radiation emitted by the layers below its boundary. Thus, the cooling rate for this region is computed by using Eq. (7) but with a simple expression for $\tilde{\epsilon}(x_{i})$, $\tilde{\epsilon}(x_{i})=\Phi(x_{b3})-\phi(x_{i})$, that gives a smooth transition to the cooling-to-space approximation, e.g.
$$\epsilon(x_{i})=\kappa_{F}\,\frac{\mathrm{VMR}(x_{i})\,\left[1-\lambda(x_{i})\right]}{M(x_{i})}\,\left[\Phi(x_{b3})-\phi(x_{i})\right], \tag{12}$$

where $\Phi(x_{b3})$ is obtained from the boundary condition

$$\Phi(x_{b3})=\tilde{\epsilon}(x_{b3})+\phi(x_{b3}) \tag{13}$$

and uses the recurrence formula in Eq. (8).

6 Testing the parameterization for the reference atmospheres

The parameterization has been tested against the reference cooling rates calculated for the reference atmospheres (the six p–T profiles and the eight CO$_2$ VMR profiles) (see the next section) and for intermediate CO$_2$ VMRs and the $k_{\mathrm{O}}$ collisional rate (Sect. 6.2). Further, it has been verified for measured temperature profiles that exhibit a large variability (Sect. 7) and for the temperature profiles obtained by a high-resolution version of WACCM-X (Sect. 8).

6.1 Accuracy of the parameterization for the reference atmospheres

In this section, we discuss the accuracy of the current parameterization for the assumed reference atmospheres. The non-LTE models used in the original (Fomichev et al., 1998) and current parameterizations are different. Hence we expect some differences caused not just by the parameterization itself, but possibly also by the different non-LTE models. Figure 8 shows the cooling rates of this parameterization compared to those of the previous parameterization and the reference ones for a contemporary CO$_2$ VMR profile (no. 3) and the six p–T profiles. The comparisons for the lower CO$_2$ profiles are shown in Figs. S9 and S10 and in Figs. S11–S15 for high CO$_2$ VMRs.
We should clarify that, to make the comparison meaningful, the three sets of cooling rates shown here include the same updated collisional rates (Table 1). Note that these rates are different from those used in the previous parameterization (Fomichev et al., 1998). The new parameterization also supports the previous collisional rates, but it has been optimized for the new ones in Table 1. As expected, larger differences are obtained in the region between 10$^{-2}$ hPa (∼80 km) and 2×10$^{-5}$ hPa (∼120 km) and are more marked for the SAS and SAW atmospheres. The differences are more clearly illustrated in Figs. 9 and S16, where we show the mean and standard deviation of the differences for the four lowest and four highest CO$_2$ VMR profiles, respectively. The improvement of the new parameterization is noticeable (compare the blue and red lines). In general, the cooling rates of the current parameterization are more accurate than those of the previous one for most regions and temperature structures. We observe that the errors (i.e. the differences with respect to the reference non-LTE cooling rates) of the new parameterization (red curves) are very small overall. They are below ∼0.5 K d$^{-1}$ for the current and lower CO$_2$ abundances (see Fig. 9). For higher CO$_2$ concentrations, between about 2 and 3 times the pre-industrial values, the largest errors are ∼1–2 K d$^{-1}$ and are located near 110–120 km (see Fig. 9 and the top-left panel in Fig. S16). The quoted values refer to the mean of the differences, although they are larger for the individual p–T atmospheres. The spread of these values is larger in the region from 10$^{-2}$ hPa (∼80 km) to 10$^{-4}$ hPa (∼105 km), where the spread of the differences (mean ± rms) reaches values between −2 and +2 K d$^{-1}$ (Fig. 9).
For the very high CO$_2$ concentrations (4, 5 and 10 times the pre-industrial abundances), the errors are also very small, below ∼1 K d$^{-1}$ for most regions and conditions, except in the 107–135 km region, where we found maximum positive biases of ∼4, ∼5 and ∼16 K d$^{-1}$ for 4×, 5× and 10× the pre-industrial CO$_2$ VMR profiles (see Fig. S16). Those maximum errors are, however, comparable when expressed in relative terms, all about 1.2 %. The significant rms in the ∼80–120 km region is also notable; clearly, this region is more difficult to parameterize, particularly for such a large range of CO$_2$ abundances. The increase in the differences of the new parameterization with respect to the reference calculations for the very high CO$_2$ VMRs near 110 km seems to be related to the transition from NLTE2 to NLTE3 (see Fig. 7). It appears that, for high CO$_2$ VMRs, the cooling in the lower part of the NLTE3 region also requires correction by the $\alpha$ factor. This suggests that, for higher CO$_2$ VMRs, the parameterization would be more accurate if this transition altitude were raised. Such a rise, however, would worsen the accuracy of the cooling below this boundary. This illustrates the difficulty of obtaining very accurate cooling rates for a large range of CO$_2$ VMRs with this method.

6.2 Assessment of the cooling rates for intermediate CO$_2$ VMRs and for the $k_{\mathrm{O}}$ collisional rate

The aim of the parameterization is to be usable for any CO$_2$ VMR input profile in the range of profile nos. 1 and 8 of Fig. 1b and for any plausible value of the $k_{\mathrm{O}}$ rate discussed in Sect. 4.2. In this section we demonstrate that the parameterization is also very accurate for CO$_2$ VMRs that fall between the reference profiles used for its development and also when using different $k_{\mathrm{O}}$ values. In particular, we show results for the intermediate CO$_2$ VMR profile nos. 9, 10 and 11 (see Fig. 1b) and for the $k_{\mathrm{O}}$ collisional rate used in the reference calculations divided by a factor of 2.
Figure B4 shows the results of the calculation for the intermediate CO$_2$ VMR profile no. 9, which lies between the current CO$_2$ VMR value and that projected for 2050 (2 times the pre-industrial value). We can observe features similar to the calculations for the contemporary CO$_2$ VMR profile (no. 3) (see Fig. 8), although the differences are slightly larger because the CO$_2$ VMR is larger. The differences are more clearly seen in Fig. 10, where we show the mean and standard deviation of the differences for the six p–T profiles. The patterns in the differences, as well as their values and spreads, are very similar to those described above in Sect. 6 for the CO$_2$ reference profiles. The major differences appear between 105 and 135 km, reaching maximum values of 1, 2 and 9 K d$^{-1}$ for VMR profile nos. 9, 10 and 11, respectively. Again, we observe that the new parameterization is more accurate at practically all altitude levels. Further, the maximum values of the standard deviations of the differences for the various p–T profiles also closely resemble those discussed before, reaching maximum values of about 2 K d$^{-1}$, 3 K d$^{-1}$ and 15 K d$^{-1}$ for the respective CO$_2$ VMR profiles. Note that, although these values are larger for higher CO$_2$ VMRs, they are very similar when expressed as percentages. As the CO$_2$($v_2$)–O($^3$P) collisional rate, $k_{\mathrm{O}}$, is still uncertain by about a factor of 2 (see e.g. García-Comas et al., 2012) and we intend this parameterization to also be used for rates different from the nominal value, we have tested its accuracy for its lowest likely value. Figure B5 shows the results of decreasing the collisional rate $k_{\mathrm{O}}$ by a factor of 2 for CO$_2$ VMR profile no. 3 (current value) and the six p–T profiles. The errors incurred when using this rate are slightly larger than for the nominal rate.
We see that, for the reduced rate, the differences are generally below 1 K d$^{-1}$ but can reach values of up to 2 K d$^{-1}$ near 90 km for the mid-latitude summer and mid-latitude winter atmospheres and between ∼85 and 100 km for the SAS conditions. The improvement with respect to the previous parameterization is not as large for this case (see Fig. 11), appearing only below 70 km and near 90 km and mainly caused by the significant difference incurred by the previous parameterization for the SAW atmosphere (see the bottom-right panel in Fig. B5). The smaller differences between the two versions of the parameterization for the reduced $k_{\mathrm{O}}$ are likely explained by the fact that the previous version was optimized for this reduced rate.

6.3 Performance of the parameterization

In Table 2 we list some examples of the time taken to execute the parameterization for two processors, two compilers and three atmospheric intervals. Better performance is noticeable (up to a factor of 5 faster) when using the Intel Fortran Compiler Classic (ifort) with respect to gfortran. We did not test the ifort compiler on the second processor, but the gfortran timings suggest that the times obtained with ifort on processor no. 1 could be improved there by a factor of ∼2.7. We did not try other, more modern Fortran compilers like ifx, which could run the parameterization even faster. It is also worth mentioning that, if the lowest 50 km of the atmosphere are excluded, the execution is about 1.76 times faster. This is because, in the lower region, where the cooling occurs under LTE conditions, the calculation involves the Curtis matrix, and operations on the coefficient matrices a and b are required. Reducing this region, e.g. starting near 50 km where LTE still prevails, makes the parameterization significantly faster. Reducing the atmosphere in the upper layers, however, hardly decreases the computation time.
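For readers reproducing such measurements, a minimal Python timing harness could look like the following (hypothetical; the paper's Table 2 of course times the Fortran implementation, not Python):

```python
import time

def time_call(fn, *args, repeats=100):
    """Average wall-clock time of one call to fn(*args), in milliseconds."""
    t0 = time.perf_counter()
    for _ in range(repeats):
        fn(*args)
    return (time.perf_counter() - t0) / repeats * 1e3

# example: time a cheap stand-in for the cooling-rate routine
ms = time_call(lambda n: sum(i * i for i in range(n)), 10_000)
print(f"{ms:.3f} ms per call")
```

Averaging over many repeats, as here, smooths out scheduler jitter, which matters when individual calls are in the sub-millisecond range quoted below.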
Thus, when using a processor of type 2 with the ifort compiler for an atmosphere in the range of 50–270 km, the calculation of a cooling rate profile could take as little as 0.015 ms, with a further margin of improvement when using modern Fortran compilers like ifx.

7 Testing the parameterization for the MIPAS-measured temperatures

7.1 Solstice and equinox conditions

We have compared the cooling rates estimated by the parameterization with the reference ones for realistic, i.e. measured, temperature profiles that exhibit large variability and a highly variable vertical structure (see e.g. Fig. B6). Specifically, we compared them for the p–T profiles measured by the MIPAS instrument (García-Comas et al., 2023) for 5 full days of measurements (about 2500 profiles) with global latitude coverage, covering 2 d of solstice conditions (14 January and 13 June) and 2 d of equinox conditions (25 March and 21 September) in 2010. Further, we compare the results for the temperatures of 15 February 2009, when a strong stratospheric warming followed by an elevated stratopause event occurred in the northern polar hemisphere (see Sect. 7.2). The comparison is carried out for the MIPAS measurements taken only under nighttime conditions, as the MIPAS non-LTE cooling rates for daytime, obtained simultaneously with the temperature inversion, also include the fraction of the 15 µm cooling which is produced by the relaxation of the solar energy absorbed by the CO$_2$ NIR bands, which is not accounted for in this parameterization (see Sect. 9). The zonal means of the temperatures, CO$_2$ VMRs and O($^3$P) abundances for those conditions are shown in Figs. B7, S19 and S20, respectively. The results are presented in Fig. 12 as the zonal mean of the differences for 1 d of solstice and 1 d of equinox conditions and in Fig. 13 as the global mean difference over all latitudes for each of the 4 individual days. In general, the new parameterization is slightly more accurate.
For example, the deviations of the cooling rates from the reference calculations in the altitude range of 105–115km are larger in the old parameterization (about 2Kd^−1) than in the new one, where they are negligible in this region. Also, the differences with respect to the reference calculations are larger in the altitude range of 80–95km for solstice conditions and at altitudes of 80–100km for equinox conditions (see Fig. 13). Overall, the errors in the mean profiles of the cooling rates of the new parameterization for 1d of measurements are below 0.5Kd^−1, except in the region between $\mathrm{5}×{\mathrm{10}}^{-\mathrm{3}}$ and $\mathrm{3}×{\mathrm{10}}^{-\mathrm{4}}$hPa (∼85–95km), where they can reach values of 1–2Kd^−1. This region is the most difficult to parameterize because several bands contribute to the cooling rate, and they are very sensitive to the temperature structure of the middle atmosphere (even outside this region). Note also that this is precisely the region where the rms values of the differences of the cooling rates with respect to the reference ones are largest, reaching values of up to 6Kd^−1 (see Fig. 13).

7.2 Elevated stratopause conditions

The comparison of the cooling rates estimated by the old and new parameterizations with respect to the reference calculations for 15 February 2009, a day with a pronounced and unusual elevated stratopause event (see the zonal mean temperatures in Fig. B9), is shown in Fig. 14. Features similar to those of the other conditions shown above can be appreciated, except in the polar winter region. The mean of the differences and the standard deviations for all the profiles at latitudes north of 50°N are shown in Fig. 15. The differences are significantly larger than for other latitudes in the 80–95km altitude region. Both parameterizations underestimate the cooling in that atmospheric region.
The new parameterization has, however, better performance above about 80km, but in the elevated stratopause region (80–100km), it still underestimates the cooling by 3–7Kd^−1 (∼10%). It seems clear that part of this underestimation is caused by the fact that such atypical temperature profiles (see Sect. 3.1) were not considered in the parameterization. However, their inclusion would not solve the problem, as in the calculation of the coefficients a trade-off in the weighting of the different p–T reference atmospheres has to be chosen (see Sects. 5.1 and 5.2). Thus, it might ameliorate the inaccuracy for these elevated stratopause events but would worsen the accuracy for other, more general situations. This illustrates the difficulty, or limitation, of this method in providing accurate non-LTE cooling rates for all the temperature structures (gradients) that we might find in the real atmosphere. Nevertheless, we have to keep in mind that these situations are sporadic and limited to high polar regions. Hence they should not significantly impact the accuracy of the cooling rates of this parameterization in global multiyear GCM simulations.

8 Testing the parameterization for WACCM-X high-resolution temperatures

In addition to the tests above, we have also tested the parameterization for the temperature structure obtained with a high-resolution version of the WACCM-X model (Liu et al., 2024). This version of WACCM-X has a fine grid of 0.25°×0.25° in latitude×longitude and a vertical resolution of 0.1 scale heights (∼0.5km) in most of the middle and upper atmosphere, transitioning to 0.25 scale heights in the top three scale heights. With such a fine grid, the model itself can internally generate gravity waves, thus providing temperature profiles with a vertical structure very similar to that measured by high-vertical-resolution lidars of the mesosphere and lower thermosphere.
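As a rough cross-check of the quoted vertical spacing, a spacing of n scale heights between adjacent pressure levels corresponds to a geometric thickness of roughly Δz = H ln(p1/p2) = nH. The scale height H is not given in the text; the value below is an assumed, representative mesopause-region value.

```python
import math

# Convert a model-level spacing given in scale heights into an
# approximate geometric thickness: dz = H * ln(p1 / p2), with a
# locally constant scale height H (km).
def dz_km(p1_hPa, p2_hPa, H_km):
    return H_km * math.log(p1_hPa / p2_hPa)

# H is an assumed, representative mesopause-region value (NOT from the text).
H = 5.5  # km

# A spacing of 0.1 scale heights means p1/p2 = e**0.1 between adjacent levels:
p1 = 0.01                # hPa, near the mesopause
p2 = p1 / math.e**0.1    # next model level up
print(dz_km(p1, p2, H))  # ~0.55 km, consistent with the quoted ~0.5 km
```

With this assumed H, the 0.25-scale-height spacing of the top layers corresponds to ∼1.4km.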
Some examples of p–T profiles of the model exhibiting those vertical features and the latitudinal and longitudinal variabilities are shown in Fig. S23 of the Supplement. Further, the model spans from the surface up to nearly 600km ($\sim \mathrm{5}×{\mathrm{10}}^{-\mathrm{10}}$hPa), which is ideal for testing the parameterization. In addition to the pressure–temperature profiles of the model, their O(^3P), O[2] and N[2] VMR profiles have been used. A contemporary CO[2] VMR (profile no. 3) was included in these calculations. The parameterization has been tested for a total of 225 temperature profiles. They have been selected from the model output for January conditions at four latitudes, 20, 40, 60 and 70°N of the northern winter hemisphere, and two additional latitudes, 60 and 70°S of the southern summer hemisphere. For each latitude, 36 profiles corresponding to longitudes from 0 to 360° every 10° were selected. A few p–T profiles are shown in the left column of Fig. 16, and all the profiles for latitudes 20°N, 60°N, 70°N and 70°S are shown in Fig. S23 in the Supplement. We should note that those temperature structures very much resemble those measured by lidar instruments. The results for a few representative p–T profiles are shown in Fig. 16. A few more examples are shown in Figs. S24–S26 in the Supplement. In general, the results are very similar to those obtained for the MIPAS temperatures. The parameterization works very well below $\sim \mathrm{5}×{\mathrm{10}}^{-\mathrm{3}}$hPa (∼85km), with differences generally smaller than 1–2Kd^−1. In the upper regions, above $\sim \mathrm{2}×{\mathrm{10}}^{-\mathrm{4}}$hPa (∼105km, not fully shown in Fig. 16 because of the small scale chosen to highlight the differences in the region of larger differences), it also works very well. 
In this region, the cooling rate differences are slightly larger than near $\sim \mathrm{5}×{\mathrm{10}}^{-\mathrm{3}}$hPa (∼85km), generally below 5Kd^−1, but they are much smaller in relative terms since the cooling rates at high altitudes are very large (of the order of 100–300Kd^−1). In the intermediate region, between $\sim \mathrm{5}×{\mathrm{10}}^{-\mathrm{3}}$hPa and $\sim \mathrm{2}×{\mathrm{10}}^{-\mathrm{4}}$hPa, the algorithm still reproduces the reference calculations rather well but not as well as in the other regions. The cooling rate differences can reach up to 10Kd^−1 at a few isolated levels of some individual profiles, which can represent up to about 20%. It is noticeable, though, that, while the differences can be significant, the profile shapes of the reference and parameterized calculations are very similar (see the middle column of Fig. 16). To have a global perspective, we have plotted in Fig. 17 the mean of the differences for all the p–T profiles together with their rms. We see that the mean (bias) of the parameterization is very small, practically below 1.5Kd^−1 anywhere and below 0.5Kd^−1 for most of the atmospheric layers. The rms, a representative error of individual profiles, is also small at levels below $\sim \mathrm{2}×{\mathrm{10}}^{-\mathrm{2}}$hPa (∼80km) with values of 1–2Kd^−1 (∼20%) and above 10^−4hPa (∼105km) with values smaller than 4Kd^−1 (∼2%). In the intermediate region, between $\sim \mathrm{5}×{\mathrm{10}}^{-\mathrm{3}}$ and $\sim \mathrm{2}×{\mathrm{10}}^{-\mathrm{4}}$hPa, the rms is however significant, with most values in the range of 5–12Kd^−1. While these values are significant in percentage at $\sim \mathrm{5}×{\mathrm{10}}^{-\mathrm{3}}$–$\mathrm{5}×{\mathrm{10}}^{-\mathrm{4}}$hPa, they are very small above $\sim \mathrm{5}×{\mathrm{10}}^{-\mathrm{4}}$hPa.
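The two statistics quoted throughout these comparisons can be written down explicitly: at a given pressure level, the bias is the mean of the parameterized-minus-reference cooling rate differences over all profiles, while the rms characterizes the error of individual profiles. A minimal sketch (the sample numbers below are illustrative only, not values from the paper):

```python
import math

def bias_and_rms(diffs):
    """Bias (mean) and rms of cooling rate differences at one level,
    where diffs[i] = parameterized - reference for profile i (K/d)."""
    n = len(diffs)
    bias = sum(diffs) / n
    rms = math.sqrt(sum(d * d for d in diffs) / n)
    return bias, rms

# Illustrative only: errors that cancel on average still give a
# sizeable rms, as in the 85-95 km region discussed above.
bias, rms = bias_and_rms([2.0, -2.0, 6.0, -6.0])
print(bias, rms)  # 0.0 and ~4.47 K/d
```

This is why a near-zero bias can coexist with an rms of 5–12Kd^−1 in the intermediate region.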
9 Discussion: the use of this parameterization with a previous CO[2] solar NIR heating rate parameterization

Some GCMs use the parameterization of the CO[2] 15µm cooling together with that of the CO[2] NIR heating of Ogibalov and Fomichev (2003). Hence, as we are updating the former for larger CO[2] abundances, and as the update of the NIR heating parameterization for the large CO[2] abundances is beyond the scope of this work, we investigate whether the latter is still valid for the large CO[2] abundances. For that purpose, we computed CO[2] NIR heating rates with the parameterization of Ogibalov and Fomichev (2003) and with the GRANADA model for the large CO[2] concentrations for the six p–T reference atmospheres. We should note that the non-LTE models used in both parameterizations are different, and hence we expect some difference caused not just by the parameterization itself but also by the differences between the underlying non-LTE model in the NIR heating parameterization and GRANADA. The CO[2] NIR heating rates of GRANADA were calculated with the rate coefficients and photolysis rates described in Funke et al. (2012) but updated with those described in Jurado-Navarro et al. (2015, 2016) and also with those described below. In particular, the ${J}_{{\mathrm{O}}_{\mathrm{3}}}$ rate used in these calculations is ∼10% smaller than in Jurado-Navarro et al. (2015) below 100km and thus leads to an [O(^1D)] ∼10% smaller below 90km but very similar near 100km. Above ∼100km, the ${J}_{{\mathrm{O}}_{\mathrm{2}}}$ coefficient used in the present calculations is about 40% smaller than in Jurado-Navarro et al. (2015), leading to a similar reduction in [O(^1D)]. Further, we updated the following collisional rates. The rate coefficient of N[2] + O(^1D) → N[2](1) + O has been increased by a factor of 1.08, and the collisional deactivation of N[2](1) with atomic oxygen (which has an important role in the heating rates; see e.g.
López-Puertas et al., 1990) has been updated from $\mathrm{4.5}×{\mathrm{10}}^{-\mathrm{15}}{\left(T/\mathrm{300}\right)}^{\mathrm{1.5}}$ to $\mathrm{4.3}×{\mathrm{10}}^{-\mathrm{15}}{\left(T/\mathrm{300}\right)}^{\mathrm{2.9}}$cm^3s^−1. The results of the comparison are shown in Fig. 18 for the tropical atmosphere and an intermediate solar zenith angle (SZA) of 44.5°. The region of most importance for the CO[2] NIR heating rates is that comprised between 0.1 and 0.01hPa (Fomichev et al., 2004). In this region, the differences between the algorithm of Ogibalov and Fomichev (2003) and GRANADA are in the range of +0.2 to −0.5Kd^−1 for CO[2] VMRs up to 5 times the pre-industrial CO[2] profile, i.e. about 10% to 15%. Hence, given that they have been computed with very different non-LTE models, and given the significant effect that parameters like the CO[2] VMR above ∼90km, the collisional rate between N[2](1) and O(^3P), the O(^3P) concentration itself and the rate of exchange of CO[2] v[3] quanta with N[2](1) have on this solar heating (see e.g. López-Puertas et al., 1990), these differences are reasonable. Hence the new CO[2] cooling rate parameterization reported here can be safely used together with the CO[2] solar NIR heating parameterization of Ogibalov and Fomichev (2003) for CO[2] VMRs up to 5 times the pre-industrial CO[2] profile.

10 Summary and conclusions

An improved and extended parameterization of the CO[2] 15µm cooling rates of Earth's middle and upper atmosphere has been developed. It essentially follows the same method as the parameterization of Fomichev et al. (1998). The major novelty is its extended range of CO[2] abundances, ranging from CO[2] profiles with tropospheric values close to half of the pre-industrial value to 10 times that value.
For this extended range of CO[2] profiles, the parameterization of the CO[2] near-infrared heating of Ogibalov and Fomichev (2003), which is normally combined with this cooling rate parameterization, can still be safely applied up to at least 5 times the pre-industrial CO[2] values. Other improvements and updates are as follows. The parameterization has an extended and finer vertical grid, with the number of levels increased from 8 to 83. The CO[2] line list has been updated from HITRAN 1992 to HITRAN 2016. Although the collisional rate coefficients affecting the CO[2] v[1] and v[2] levels are input parameters of the parameterization, in this version we have used more contemporary values, i.e. those currently used in the non-LTE retrievals of temperature from CO[2] 15µm emissions of SABER and MIPAS measurements (García-Comas et al., 2008, 2023). The rate coefficients are in general of a very similar magnitude, except for the collisional deactivation of the CO[2](v[1],v[2]) levels by atomic oxygen, which is now larger by approximately a factor of 2, i.e. close to its accepted upper limit. As a consequence of the larger range of CO[2] VMR profiles, the different non-LTE layers for computing the cooling rates have been significantly revised. For example, it is worth mentioning that the lowermost altitude of the cooling-to-space approximation (the uppermost non-LTE layer) has risen from ∼110km to 160–170km.
The new parameterization has been thoroughly tested against line-by-line LTE and non-LTE cooling rates for (i) the six p–T reference atmospheres; (ii) the two most important input parameters (besides temperature), the CO[2] VMR profiles and the collisional rate of CO[2](v[1],v[2]) by atomic oxygen; (iii) realistic measured temperature fields of the middle atmosphere (about 2500 profiles), including an episode of strong stratospheric warming with a very elevated stratopause; and (iv) the temperature profiles (225 profiles) obtained by a high-resolution version of WACCM-X capable of internally generating gravity waves and hence with temperatures showing a large variability and pronounced vertical wave structures. Further, to illustrate the improvements, the comparisons of points (i) to (iii) have also been performed for the previous parameterization. For the reference temperature profiles, the errors of the new parameterization (mean of the differences in the cooling rates with respect to the reference calculations for the six p–T atmospheres) are below 0.5Kd^−1 for the current and lower CO[2] VMRs. For higher CO[2] concentrations, between about 2 and 3 times the pre-industrial values, the largest errors are ∼1–2Kd^−1 and are located near 110–120km. For the very high CO[2] concentrations (from 4 to 10 times the pre-industrial abundances) the errors are also very small, below ∼1Kd^−1, for most regions and conditions, except in the 107–135km region, where the parameterization overestimates the cooling by a few Kelvin per day (∼1.2%). For these reference atmospheres, the new parameterization has a better performance for most of the atmospheric layers and temperature structures. From the testing of the parameterization for realistic current temperature fields of the middle atmosphere as measured by MIPAS, we found that, in general, the new parameterization is slightly more accurate.
In particular, in the 105–115km range, the previous parameterization overestimates the cooling rate by 1.5Kd^−1, while the new one is very accurate. However, in the other height regions the difference is not so important. The new parameterization has a better performance in the 80–95km altitude region. Overall, the errors in the mean profiles (bias) of the cooling rates of the new parameterization, calculated for four different atmospheric conditions with about 500 profiles in each of them, are below 0.5Kd^−1, except between $\mathrm{5}×{\mathrm{10}}^{-\mathrm{3}}$ and $\mathrm{3}×{\mathrm{10}}^{-\mathrm{4}}$hPa (∼85–95km), where they can reach biases of 1–2Kd^−1. That region is the most challenging to parameterize because several CO[2] 15µm bands contribute to the cooling rate, and they depend very heavily on the temperature structure of the whole middle atmosphere (even outside this region). For individual temperature profiles, the cooling rate error (characterized by the rms of the difference between the reference and the parameterized cooling rates) is about 1–2Kd^−1 below $\mathrm{5}×{\mathrm{10}}^{-\mathrm{3}}$hPa (∼85km) and above $\mathrm{2}×{\mathrm{10}}^{-\mathrm{4}}$hPa (∼100km). In the intermediate region, however, it is significant, between 2 and 7Kd^−1. We have further tested the parameterization against very rare and demanding situations, such as the temperature structures of stratospheric warming events with an elevated stratopause. In these situations, however, the parameterization underestimates the cooling rates by 3–7Kd^−1 (∼10%) at altitudes of 80–100km, and the individual cooling rates show a significant rms (5–15Kd^−1). In addition, we have tested the parameterization for the temperature structure obtained by a high-resolution version of WACCM-X, with the temperatures showing a large variability and pronounced vertical wave structure.
The mean (bias) error of the parameterization is very small, smaller than 0.5Kd^−1 for most atmospheric layers, and below 1.5Kd^−1 for almost any altitude from the surface up to 200km. The rms of the differences in the cooling rates between the parameterization and the reference model is similar to that obtained for MIPAS temperatures, with values of 1–2Kd^−1 (∼20%) below $\sim \mathrm{2}×{\mathrm{10}}^{-\mathrm{2}}$hPa (∼80km) and smaller than 4Kd^−1 (∼2%) above 10^−4hPa (∼105km). In the intermediate region, between $\sim \mathrm{5}×{\mathrm{10}}^{-\mathrm{3}}$ and $\sim \mathrm{2}×{\mathrm{10}}^{-\mathrm{4}}$hPa, the values are slightly larger than for MIPAS, in the range of 5–12Kd^−1. While these values are significant in relative terms at $\sim \mathrm{5}×{\mathrm{10}}^{-\mathrm{3}}$–$\mathrm{5}×{\mathrm{10}}^{-\mathrm{4}}$hPa, they are very small above $\sim \mathrm{5}×{\mathrm{10}}^{-\mathrm{4}}$hPa. As has been shown, this parameterization has some limitations (see Sects. 6, 7.2 and 8). In order to apply specific approximations for the cooling rates, it has been designed for fixed atmospheric regions where specific radiative transfer regimes prevail. Thus, its extension to a very large range of CO[2] abundances inevitably causes a loss of accuracy for extreme cases in specific atmospheric layers. A possible solution for future updates could be to use different extents of the non-LTE regions (i.e. Fig. 7) for different abundances of CO[2]. Likewise, this parameterization (like the original one) was devised for use in GCMs, i.e. to produce accurate cooling rates globally, that is, when considering all the expected temperature profiles covering the different latitudinal and seasonal conditions.
Thus, the ability of the parameterization to compute accurate cooling rates for individual temperature profiles with large temperature gradients in the region between $\mathrm{5}×{\mathrm{10}}^{-\mathrm{3}}$hPa (∼85km) and $\mathrm{3}×{\mathrm{10}}^{-\mathrm{4}}$hPa (∼95km) is limited. On the other hand, it is extremely fast. The routine takes only 15µs of CPU time to calculate a profile in the range of 50 to 270km on a machine with an Intel Core i7 4.2GHz processor when compiled with ifort. This is more than 6600 times faster than the best option of the NLTE15μmCool-E v1.0 routine recently reported by Kutepov and Feofilov (2023). To conclude, the future development of parameterizations that overcome those limitations while retaining this speed is highly desirable.

Appendix A: Notes and recommendations for using the parameterization

The routine source code is written in Fortran 90 and is available at https://doi.org/10.5281/zenodo.10849970 (López-Puertas et al., 2024). It has been devised for implementation in general circulation models, although it can also be used for other purposes, e.g. to compute the CO[2] 15µm cooling rate for a given reference atmosphere. The code is organized in a library (in the directory source/modules/) that can be included in a more complex GCM model. The subroutine to be called is CO2_NLTE_COOL inside the module file co2cool.f90. The following inputs are required (in order) by CO2_NLTE_COOL.

• Atmospheric profiles as a function of pressure for temperature and the four VMRs of CO[2], O, O[2] and N[2]

• lev0: the index into the given pressures such that p(lev0) is the maximum pressure level (lower boundary) to be considered for calculating the heating rate. Heating rates will be calculated from that pressure up to the minimum pressure specified in the pressure array. For example, if p is given in the range of 10^3 to 10^−6hPa (or 10^−6 to 10^3hPa) and p(lev0)=1hPa, the heating rate will be calculated in the range of 1 to 10^−6hPa.
• surf_temp: surface temperature – if set to a negative value, the temperature of the maximum pressure level will be used.

• hr: heating rate. This is an input–output array with the same dimension as pressure. It will only be calculated at pressures in the range of p(lev0) (the maximum pressure considered) to the minimum specified pressure (minimum(pressure)). Note that throughout this paper we have used the term “cooling rates”, i.e. the hr values with changed signs.

• The units are temperature (K), pressure (hPa), VMRs (molmol^−1, not ppm) and heating rate (Kd^−1).

• Input profiles can run either from the ground to the top of the atmosphere (decreasing pressures) or in reverse (top to ground, with increasing pressures). The pressure grid can be irregular.

• Important notes: calculations in the LTE region.

1. Pressure levels should include the surface pressure (near 10^3hPa), even if the 15µm cooling is to be calculated only at lower pressure levels (higher altitudes), i.e. p(lev0) ≪ 10^3hPa.

2. If the 15µm cooling is only calculated in the non-LTE regime, it is recommended to set the lower boundary, p(lev0), close to the limit of the LTE–non-LTE transition, e.g. near 1hPa. In this way, more time-consuming calculations in the LTE region will be avoided.

The output is expressed as the heating rate (Kd^−1) on the given input grid in the range of p(lev0) to the minimum specified pressure. To compile the routine, follow these steps.

• Edit Makefile and change the Fortran compiler to your preferred choice (e.g. gfortran or ifort).

• From this folder, run make. The compilation produces a test program run_cool (see below) and a module library file lib/libco2_cool.a.

A test program, source/main.f90, is also provided to test the parameterization on individual profiles. Its input file input.dat has a fixed format. Do not change the number of commented lines.

• The first input, at line 9 of input.dat, contains n_lev, lev0 and T_surf.

• The profile data start at line 12.
• Six atmospheric profiles are read (n_lev rows are expected).

The output heating rates are written in the output.dat file. To test main.f90, two input files are provided, input_test.dat and input_test2.dat, with their corresponding output files, output_test.dat and output_test2.dat. The first computes the heating in the full pressure range provided; the second does so only at pressures smaller than ∼1hPa. To test the routine, follow these steps.

• cp input_test.dat input.dat

• ./run_cool

• Check that the results in output.dat are consistent with output_test.dat.

• The same procedure can be applied for test 2.

The routine is supplied with the collisional rates described in this paper (see Table 1). Nevertheless, they can be changed by the user. They are prescribed in the constants.f90 module.

• The rates are defined in the form $z=a\sqrt{T}+b\,\mathrm{exp}\left(-g\,{T}^{-\mathrm{1}/\mathrm{3}}\right)$. The coefficients are specified as follows:

• for CO[2]–O: a_zo, b_zo, g_zo (default: $\mathrm{3.5}×{\mathrm{10}}^{-\mathrm{13}}$, $\mathrm{2.32}×{\mathrm{10}}^{-\mathrm{9}}$, 76.75);

• for CO[2]–O[2]: a_zo2, b_zo2, g_zo2 (default: $\mathrm{7.0}×{\mathrm{10}}^{-\mathrm{17}}$, $\mathrm{1.0}×{\mathrm{10}}^{-\mathrm{9}}$, 83.8);

• for CO[2]–N[2]: a_zn2, b_zn2, g_zn2 (default: $\mathrm{7.0}×{\mathrm{10}}^{-\mathrm{17}}$, $\mathrm{6.7}×{\mathrm{10}}^{-\mathrm{10}}$, 83.8).

The rate most likely to be changed is k[O], probably by using smaller values. We tested a collisional k[O] rate 2 times smaller than that used in the development of the parameterization and found that its accuracy did not change significantly (see Sect. 6.2). Although the parameterization is specifically developed for the CO[2] 15µm non-LTE region, it also works in the LTE region, but the user should be aware that other important cooling rates in the LTE region, such as those of O[3] and H[2]O, are not included.
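To make the rate expression above concrete, the following sketch evaluates z(T) for the three default coefficient sets prescribed in constants.f90 (the coefficients are exactly the defaults listed above; units are cm^3s^−1):

```python
import math

# z = a*sqrt(T) + b*exp(-g * T**(-1/3)), in cm^3 s^-1.
# Coefficients (a, b, g) are the defaults from constants.f90 quoted above.
RATES = {
    "CO2-O":  (3.5e-13, 2.32e-9, 76.75),
    "CO2-O2": (7.0e-17, 1.0e-9,  83.8),
    "CO2-N2": (7.0e-17, 6.7e-10, 83.8),
}

def z_rate(partner, T):
    a, b, g = RATES[partner]
    return a * math.sqrt(T) + b * math.exp(-g * T ** (-1.0 / 3.0))

for partner in RATES:
    print(partner, z_rate(partner, 300.0))
```

At 300K this gives a CO[2]–O rate of the order of $\mathrm{6}×{\mathrm{10}}^{-\mathrm{12}}$cm^3s^−1, i.e. the roughly factor-of-2-larger k[O] discussed in Sect. 10, while the O[2] and N[2] rates are 2–3 orders of magnitude smaller.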
We recommend that GCM users utilize their own radiation scheme in the LTE region and this parameterization in the non-LTE region (e.g. above ∼50 or 60km).

Boundaries of the parameterization. Regarding the lower boundary (maximum pressure), see the notes above. Regarding the upper boundary (minimum pressure), there is in principle no limitation, but we recommend setting it as high as the upper lid of your model. There is a large number of GC and CC models with an upper lid at $\sim {\mathrm{10}}^{-\mathrm{2}}$hPa (or ∼80km). This parameterization can be used in such models to compute the CO[2] non-LTE cooling rates between ∼50 and ∼80km. We note that, under these circumstances, the cooling rates near the upper lid might not be accurate, as the contribution of the layers above the upper lid is not considered. This, however, is not a limitation of the parameterization itself but an intrinsic limitation of this kind of model. There is no restriction either on the upper limit of the upper boundary, provided that it is physically meaningful. That is, it can be placed at altitudes as high as 500km or higher.

Appendix B: Additional figures

Code and data availability

The code is available at https://doi.org/10.5281/zenodo.10849970 (López-Puertas et al., 2024). The parameterization is also available as a Python routine for calculating cooling rates for specific purposes at https://doi.org/10.5281/zenodo.10567258 (Fabiano et al., 2024). Note that the Python version is much slower than the Fortran version, and it is not recommended for use in GCMs.

Author contributions

MLP performed the LTE and non-LTE reference calculations, participated in the adaptation of the original parameterization, wrote the manuscript and had the final editorial responsibility for this paper. FF led (together with BF) the adaptation of the original parameterization of Fomichev et al.
(1998), wrote the code of the new parameterization, performed all the tests of the parameterization and calculated the cooling rates of the previous and new parameterizations presented here. VF made a critical contribution to the adaptation of his original algorithm and to its update. BF co-led the adaptation of the original parameterization and designed the accuracy tests. DRM provided the CO[2] abundances and advice on the development of the parameterization for its use and implementation in climate models. All the authors contributed to the discussions and provided text and comments.

Competing interests

The authors declare that they have no conflict of interest.

Disclaimer

Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors.

Acknowledgements

The IAA team acknowledges financial support from the Agencia Estatal de Investigación, MCIN/AEI/10.13039/501100011033, through grant nos. PID2019-110689RB-I00, PID2022-141216NB-I00 and CEX2021-001131-S. We thank the two anonymous referees for their very valuable suggestions leading to an improvement of this work.

Financial support

This research has been supported by the Agencia Estatal de Investigación (grant nos. PID2019-110689RB-I00, PID2022-141216NB-I00 and CEX2021-001131-S). The article processing charges for this open-access publication were covered by the CSIC Open Access Publication Support Initiative through its Unit of Information Resources for Research (URICI).

Review statement

This paper was edited by Tatiana Egorova and reviewed by two anonymous referees.

References

Dudhia, A.: The Reference Forward Model (RFM), J. Quant. Spectrosc. Ra., 186, 243–253, https://doi.org/10.1016/j.jqsrt.2016.06.018, 2017.

Emmert, J. T., Drob, D. P., Picone, J. M., Siskind, D. E., Jones, M., Mlynczak, M. G., Bernath, P.
F., Chu, X., Doornbos, E., Funke, B., Goncharenko, L. P., Hervig, M. E., Schwartz, M. J., Sheese, P. E., Vargas, F., Williams, B. P., and Yuan, T.: NRLMSIS 2.0: A Whole-Atmosphere Empirical Model of Temperature and Neutral Species Densities, Earth Space Sci., 8, e2020EA001321, https://doi.org/10.1029/2020EA001321, 2021.

Fabiano, F., López-Puertas, M., and Funke, B.: CO[2] cool – v1.1 (v1.1), Zenodo [code], https://doi.org/10.5281/zenodo.10567258, 2024.

Feofilov, A. and Kutepov, A.: Infrared Radiation in the Mesosphere and Lower Thermosphere: Energetic Effects and Remote Sensing, Surv. Geophys., 33, 1231–1280, https://doi.org/10.1007/s10712-012-9204-0, 2012.

Fischer, H., Birk, M., Blom, C., Carli, B., Carlotti, M., von Clarmann, T., Delbouille, L., Dudhia, A., Ehhalt, D., Endemann, M., Flaud, J. M., Gessner, R., Kleinert, A., Koopman, R., Langen, J., López-Puertas, M., Mosner, P., Nett, H., Oelhaf, H., Perron, G., Remedios, J., Ridolfi, M., Stiller, G., and Zander, R.: MIPAS: an instrument for atmospheric and climate research, Atmos. Chem. Phys., 8, 2151–2188, https://doi.org/10.5194/acp-8-2151-2008, 2008.

Fomichev, V. I., Ogibalov, V. P., and Beagley, S. R.: Solar Heating by the Near-IR CO[2] Bands in the Mesosphere, Geophys. Res. Lett., 31, L21102, https://doi.org/10.1029/2004GL020324, 2004.

Fomichev, V. I., Blanchet, J.-P., and Turner, D. S.: Matrix parameterization of the 15µm CO[2] band cooling in the middle and upper atmosphere for variable CO[2] concentration, J. Geophys. Res., 103, 11505–11528, 1998.

Funke, B., López-Puertas, M., García-Comas, M., Kaufmann, M., Höpfner, M., and Stiller, G. P.: GRANADA: a Generic RAdiative traNsfer AnD non-LTE population Algorithm, J. Quant. Spectrosc.
Ra., 113, 1771–1817, https://doi.org/10.1016/j.jqsrt.2012.05.001, 2012.

Garcia, R., Smith, A., Kinnison, D., de la Camara, A., and Murphy, D.: Modification of the gravity wave parameterization in the Whole Atmosphere Community Climate Model: Motivation and results, J. Atmos. Sci., 74, 275–291, https://doi.org/10.1175/JAS-D-16-0104.1, 2017.

García-Comas, M., López-Puertas, M., Marshall, B., Wintersteiner, P. P., Funke, B., Bermejo-Pantaleón, D., Mertens, C. J., Remsberg, E. E., Gordley, L. L., Mlynczak, M., and Russell, J.: Errors in SABER kinetic temperature caused by non-LTE model parameters, J. Geophys. Res., 113, D24106, https://doi.org/10.1029/2008JD010105, 2008.

García-Comas, M., Funke, B., López-Puertas, M., Bermejo-Pantaleón, D., Glatthor, N., von Clarmann, T., Stiller, G., Grabowski, U., Boone, C. D., French, W. J. R., Leblanc, T., López-González, M. J., and Schwartz, M. J.: On the quality of MIPAS kinetic temperature in the middle atmosphere, Atmos. Chem. Phys., 12, 6009–6039, https://doi.org/10.5194/acp-12-6009-2012, 2012.

García-Comas, M., Funke, B., López-Puertas, M., Glatthor, N., Grabowski, U., Kellmann, S., Kiefer, M., Linden, A., Martínez-Mondéjar, B., Stiller, G. P., and von Clarmann, T.: Version 8 IMK–IAA MIPAS temperatures from 12–15µm spectra: Middle and Upper Atmosphere modes, Atmos. Meas. Tech., 16, 5357–5386, https://doi.org/10.5194/amt-16-5357-2023, 2023.

Gilli, G., Lebonnois, S., González-Galindo, F., López-Valverde, M. A., Stolzenbach, A., Lefèvre, F., Chaufray, J. Y., and Lott, F.: Thermal Structure of the Upper Atmosphere of Venus Simulated by a Ground-to-Thermosphere GCM, Icarus, 281, 55–72, https://doi.org/10.1016/j.icarus.2016.09.016, 2017.

Gilli, G., Navarro, T., Lebonnois, S., Quirino, D., Silva, V., Stolzenbach, A., Lefèvre, F., and Schubert, G.: Venus Upper Atmosphere Revealed by a GCM: II.
Model Validation with Temperature and Density Measurements, Icarus, 366, 114432, https://doi.org/10.1016/j.icarus.2021.114432, 2021.

Gordon, I. E., Rothman, L. S., Hill, C., Kochanov, R. V., Tan, Y., Bernath, P. F., Birk, M., Boudon, V., Campargue, A., Chance, K. V., Drouin, B. J., Flaud, J. M., Gamache, R. R., Hodges, J. T., Jacquemart, D., Perevalov, V. I., Perrin, A., Shine, K. P., Smith, M. A. H., Tennyson, J., Toon, G. C., Tran, H., Tyuterev, V. G., Barbe, A., Császár, A. G., Devi, V. M., Furtenbacher, T., Harrison, J. J., Hartmann, J. M., Jolly, A., Johnson, T. J., Karman, T., Kleiner, I., Kyuberis, A. A., Loos, J., Lyulin, O. M., Massie, S. T., Mikhailenko, S. N., Moazzen-Ahmadi, N., Müller, H. S. P., Naumenko, O. V., Nikitin, A. V., Polyansky, O. L., Rey, M., Rotger, M., Sharpe, S. W., Sung, K., Starikova, E., Tashkun, S. A., Auwera, J. V., Wagner, G., Wilzewski, J., Wcisło, P., Yu, S., and Zak, E. J.: The HITRAN2016 molecular spectroscopic database, J. Quant. Spectrosc. Ra., 203, 3–69, 2017.

Hartogh, P., Medvedev, A. S., Kuroda, T., Saito, R., Villanueva, G., Feofilov, A. G., Kutepov, A. A., and Berger, U.: Description and Climatology of a New General Circulation Model of the Martian Atmosphere, J. Geophys. Res.-Planets, 110, E11008, https://doi.org/10.1029/2005JE002498, 2005.

Jurado-Navarro, Á. A., López-Puertas, M., Funke, B., García-Comas, M., Gardini, A., Stiller, G. P., and von Clarmann, T.: Vibration–vibration and vibration–thermal energy transfers of CO[2] with N[2] from MIPAS high-resolution limb spectra, J. Geophys. Res., 120, 8002–8022, https://doi.org/10.1002/2015JD023429, 2015.

Jurado-Navarro, Á. A., López-Puertas, M., Funke, B., García-Comas, M., Gardini, A., González-Galindo, F., Stiller, G. P., von Clarmann, T., Grabowski, U., and Linden, A.: Global distributions of CO[2] volume mixing ratio in the middle and upper atmosphere from daytime MIPAS high-resolution spectra, Atmos.
Meas. Tech., 9, 6081–6100, https://doi.org/10.5194/amt-9-6081-2016, 2016.a Kutepov, A. and Feofilov, A.: New Routine NLTE15µmCool-E v1.0 for Calculating the non-LTE CO[2] 15µm Cooling in GCMs of Earth's atmosphere, Geosci. Model Dev. Discuss. [preprint], https://doi.org/ 10.5194/gmd-2023-115, in review, 2023.a, b Kutepov, A. A. and Fomichev, V. I.: Application of the Second-Order Escape Probability Approximation to the Solution of the NLTE Vibration-Rotational Band Radiative Transfer Problem, J. Atmos. Terr. Phys., 55, 1–6, 1993.a, b Kutepov, A. A., Feofilov, A. G., Medvedev, A. S., Berger, U., Kaufmann, M., and Pauldrach, A. W. A.: Infra-Red Radiative Cooling/Heating of the Mesosphere and Lower Thermosphere Due to the Small-Scale Temperature Fluctuations Associated with Gravity Waves, in: Climate and Weather of the Sun-Earth System (CAWSES), edited by: Lübken, F.-J., 429–442, Springer Netherlands, Dordrecht, ISBN 978-94-007-4347-2, 978-94-007-4348-9, https://doi.org/10.1007/978-94-007-4348-9_23, 2013.a Liu, H.-L., Lauritzen, P. H., and Vitt, F.: Impacts of Gravity Waves on the Thermospheric Circulation and Composition, Geophys. Res. Lett., 51, e2023GL107453, https://doi.org/10.1029/2023GL107453, López-Puertas, M. and Taylor, F. W.: Non-LTE radiative transfer in the Atmosphere, World Scientific Pub., Singapore, 2001.a, b, c, d, e, f López-Puertas, M., López-Valverde, M. A., and Taylor, F. W.: Studies of Solar Heating by CO2 in the Upper Atmosphere Using a Non-LTE Model and Satellite Data, J. Atmos. Sci., 47, 809–822, https:// doi.org/10.1175/1520-0469(1990)047<0809:SOSHBC>2.0.CO;2, 1990.a, b, c López-Puertas, M., Fabiano, F., Fomichev, V., Funke, B., and Marsh, D. R.: CO[2] cool (fortran version), Zenodo [code], https://doi.org/10.5281/zenodo.10849970, 2024.a, b López-Valverde, M. A., Edwards, D. P., López-Puertas, M., and Roldán, C.: Non-Local Thermodynamic Equilibrium in General Circulation Models of the Martian Atmosphere 1. 
Effects of the Local Thermodynamic Equilibrium Approximation on Thermal Cooling and Solar Heating, J. Geophys. Res., 103, 16799–16812, https://doi.org/10.1029/98JE01601, 1998.a López-Valverde, M. A., López-Puertas, M., and González-Galindo, F.: New Parameterization of CO[2] Cooling Rates at 15µm for the EMGCM, ESA Rep. ESA Rep., ESA, 2008.a Marsh, D. R.: Chemical-Dynamical Coupling in the Mesosphere and Lower Thermosphere, in: Aeronomy of the Earth's Atmosphere and Ionosphere, 2, 3–17, Springer, Dordrecht, iaga special sopron book edn., 2011.a, b Marsh, D. R., Mills, M. J., Kinnison, D. E., Lamarque, J.-F., Calvo, N., and Polvani, L. M.: Climate Change from 1850 to 2005 Simulated in CESM1(WACCM), J. Climate, 26, 7372–7391, https://doi.org/ 10.1175/JCLI-D-12-00558.1, 2013.a, b Meinshausen, M., Smith, S. J., Calvin, K., Daniel, J. S., Kainuma, M. L. T., Lamarque, J.-F., Matsumoto, K., Montzka, S. A., Raper, S. C. B., Riahi, K., Thomson, A., Velders, G. J. M., and van Vuuren, D. P.: The RCP Greenhouse Gas Concentrations and Their Extensions from 1765 to 2300, Clim. Change, 109, 213, https://doi.org/10.1007/s10584-011-0156-z, 2011.a Ogibalov, V. P. and Fomichev, V. I.: Parameterization of Solar Heating by the near IR CO2 Bands in the Mesosphere, Adv. Space Res., 32, 759–764, https://doi.org/10.1016/S0273-1177(03)80069-8, 2003.a , b, c, d, e, f, g O'Neill, B. C., Tebaldi, C., van Vuuren, D. P., Eyring, V., Friedlingstein, P., Hurtt, G., Knutti, R., Kriegler, E., Lamarque, J.-F., Lowe, J., Meehl, G. A., Moss, R., Riahi, K., and Sanderson, B. M.: The Scenario Model Intercomparison Project (ScenarioMIP) for CMIP6, Geosci. Model Dev., 9, 3461–3482, https://doi.org/10.5194/gmd-9-3461-2016, 2016. a, b Stiller, G. P., von Clarmann, T., Funke, B., Glatthor, N., Hase, F., Höpfner, M., and Linden, A.: Sensitivity of trace gas abundances retrievals from infrared limb emission spectra to simplifying approximations in radiative transfer modelling, J. Quant. Spectrosc. 
Ra., 72, 249–280, 2002.a, b, c van Vuuren, D. P., Edmonds, J., Kainuma, M., Riahi, K., Thomson, A., Hibbard, K., Hurtt, G. C., Kram, T., Krey, V., Lamarque, J.-F., Masui, T., Meinshausen, M., Nakicenovic, N., Smith, S. J., and Rose, S. K.: The Representative Concentration Pathways: An Overview, Clim. Change, 109, 5–31, https://doi.org/10.1007/s10584-011-0148-z, 2011.a, b Note that this constant has been changed from its value of 2.63187×10^11 in Fomichev et al. (1998) to the value used here of 2.55520997×10^11. We recall that the Δx grid of the parameterization is 0.25.
The Gate city journal. (Nyssa, Or.) 1910-1937, May 18, 1921, Image 4

Published every Friday at Nyssa, Oregon. Entered at the Postoffice at Nyssa, Oregon, as second-class mail matter. One year, in advance, $1.50; six months, in advance, ....

Sunday after Sunday school the Young People's class repaired to the home of Mrs. M. M. Creeling, carrying lunch and beautifully made May baskets, which were filled on a tramp over the Snake river bluff. Many friends were remembered with flowers. After a class "sing," the young people disbanded.

Mrs. John Ennis of Sioux City arrived Wednesday for a visit with her mother, Mrs. Editha Scott.

Mrs. Henry Conley, Miss Catherine Conley and Mr. Chas. Conley of Apple Valley, Sioux City friends of Mrs. Ennis, called at the Scott home Sunday.

The P. T. A. will give the lower grades and their teacher, Miss Vera Neeb, a picnic at Big Bend Park Wednesday. They will also serve lunch to the High School and pupils of the 8th grades at the close of the exams Friday.

Eight pupils will take the final examinations Thursday and Friday.

Mrs. M. M. Maxwell and Miss Corinne Maxwell left Tuesday for a visit with relatives at New Plymouth. They will remain for the commencement exercises, their niece, Miss Pauline Dietrich, being among the 1921 graduates from the New Plymouth schools.

Mr. and Mrs. Oliver Zehner and family of Fruitland were week-end guests of Mrs. Zehner's sister, Mrs. Frank Shuler, and family.

A surprise dinner party was given Friday evening by Mrs. Conrad Martin and Miss Bernice Martin in honor of Mr. Martin's forty-eighth birthday. The guests were Mr. and Mrs. H. C. Reed and children and Mr. and Mrs. W. L. Schafer and sons, and they were royally entertained.

Lester and Alvin Schafer motored to Payette last Saturday.

Mr. Wellman was calling in the Colony Monday.

There was a large attendance at the lecture on co-operative marketing by Mr. Paul Mehl of the Extension Service of the O. A. C. at Big Bend Park Monday.

Owing to the short notice of the lecture given Thursday by Mr. Geo. Mansfield, state Farm Bureau president, many of the people were missing. The Kingman Colony was well represented, many of the ladies being present. No one could hear the lecture without the conviction that co-operation is the only way out for the grower.

Miss Helen Cowgill, assistant state club leader, met with the girls' sewing club and their leader, Miss Florence Kingman, at the school house at 11 a. m. Monday.

Mrs. Editha Scott, Mrs. John Ennis, Mrs. M. M. Creeling and H. R. Scott were guests of Mr. and Mrs. Herbert Hickox on a motor trip to Ontario Tuesday night.

Mr. Hickox attended, as delegate, the called meeting of poultry men of this section, interested in the plan of the co-operative marketing of poultry and eggs. This meeting was called as a result of instructions given at a Southern Idaho and Eastern Oregon poultry men's meeting at Boise, April 30. It is the plan of those back of this movement to form an organization here in Eastern Oregon and Southern Idaho, which will affiliate with the Pacific Poultry Producers' Association. Here is the long looked for movement for more profitable and satisfactory methods of handling the produce. Poultry women caring for a bunch of 50 or 60 laying hens can profitably become members of this organization. Whether or not this section secures this benefit depends upon securing enough produce. Do you want these benefits? For information apply to Mr. Herbert Hickox of Kingman Colony, or Mr. A. B. Cain of Ontario, chairman of the Malheur County Farm Bureau poultry department.

[The remainder of the page is display advertising for Nyssa-area businesses, including The Nyssa Trading Co., Peckham Furniture Co., Nyssa Grain and Seed Co., Liberty Theater, Jurries Drug Co., Nyssa Meat Market, Star Hotel, Hotel Western, Nyssa Blacksmith Shop, Nyssa Realty Co., and the Oregon Apiary Company.]
HCCS Continuity and Derivatives Questions

Math 1431 Homework Assignment 5 (Written)
Limits, Continuity and Derivatives (Chapters 1 and 2)

Full Name:

• Print out this file, fill in your name and ID above, and complete the problems. If the problem is from the text, the section number and problem number are in parentheses. (Note: if you cannot print out this document, take the time to carefully write out each problem on your own paper and complete your work there.)
• Use a blue or black pen (or a pencil that writes darkly) so that when your work is scanned it is visible in the saved document.
• Write your solutions in the spaces provided. Unless otherwise specified, you must show work in order to receive credit for a problem.
• Students should show work in the spaces provided and place answers in blanks when provided; otherwise, BOX your final answer for full credit.
• Once completed, students need to scan a copy of their work and answers and upload the saved document as a .pdf through CASA CourseWare.
• This assignment is copyright-protected; it is illegal to reproduce or share this assignment (or any question from it) without explicit permission from its author(s).
• No late assignments accepted.

1. (12 points)
(a) (8 points) Evaluate the limit lim
(b) (4 points) The limit above can be interpreted as f′(c), the derivative of a function evaluated at some point x = c. Identify a point x = c and a function f(x) that matches this interpretation.

2. (12 points) The piece-wise function g(θ) is given by

    g(θ) = A sin θ + cot θ   if θ ≤ π/2
           cos θ − 6θ        if θ > π/2

(a) (6 points) Determine the value of the constant A so that g(θ) is continuous on the interval (0, π).
(b) (6 points) Using the value for A from part (a), is g(θ) differentiable on the interval (0, π)? Explain your answer.

3. (9 points) Several expressions are written below, and each one contains a function and an interval.
Circle only those expressions that satisfy the conditions of the IVT with N equal to any value in between the given function's end point outputs (no work need be included with this problem).

(a) f(x) = −x² + sec x on [0, 3π]
(b) g(x) = .../(2 + sin x) on [−2, 2]
(c) h(x) = −x² + sec x on [0, π/4]
(d) j(x) = 2x on [−2, 3]
(e) k(x) = arctan x on [−1, 1]
(f) ℓ(x) = |x − 3| on [3, 2022]
(g) m(x) = (x² − 3x + 2)/... if x ≠ 1, and ... if x = 1, on [0, 4]
(h) p(x) = any continuous function on (−16, 8]
(i) q(x) = x^(2/3) on [−3, 3]

4. (12 points) Use the diagram below to sketch a graph of a function y = f(x) with the following properties (make sure your graph is clearly drawn so that these features are present):
a) The domain of f(x) is [−4, −2) ∪ (−2, 4]
b) The range of f(x) is (−∞, ∞)
c) f(x) has a removable discontinuity at x = 0.
d) lim f(x) as x → 2⁻ equals −1, and 3 = f(2) = lim f(x) as x → 2⁺
e) f(3) > 0
f) f(x) is continuous at x = −3 but f′(−3) DNE

5. (12 points) Several expressions are written below, and each expression represents a real number. Arrange these expressions in order from greatest to least; for example, you might think (a) > (b) > (c) > (d) > (e) > (f), but this example is wrong so don't use it as your answer. (No work needs to be included with this problem.)

(a) lim of (24 + 10)/(6 + 22) as t → ∞
(b) d/dx √(5 − x²)
(c) lim of (24 + 10)/(6 + 22) as t → 0
(d) lim of sin(20θ)/(−5×2) as θ → 0
(e) lim of x cos x

6. (10 points) One of the world's least-regarded companies, Chegg Inc.¹, runs a website that some critics and reviewers lambaste as a "waste of money" run by "con artists." This company claims that students who pay for their services are more likely to succeed in college courses, but such claims are dubious at best. Shown below is a graph y = f(x) that displays a typical Chegg user's performance on graded work in a Math course over a four-month period. Their running average is displayed along the vertical y-axis, with the horizontal x-axis displaying time.
[Graph: the user's running average (out of 100) plotted against time from 1 to 4 months, with three labeled points P, Q and R.]

(a) (3 points each) Naming the labeled points so that P = (p, f(p)), Q = (q, f(q)) and R = (r, f(r)), arrange the numbers f′(p), f′(q) and f′(r) in order from least to greatest:

____ < ____ < ____

Work for this part of the problem can be shown by sketching tangent lines at the points P, Q and R labeled above. These tangent lines need only be hand-drawn, based on the provided graph (as more detailed sketches would have required a formula for f(x) that is not provided).

(b) (1 point) Based on the given graph, what is the student's actual course grade (approximately)? That is, approximate the value of lim f(x) as x → 4⁻.

¹ see, for example, the following: Link 1, Link 2, Link 3, Link 4, Link 5, Link 6

7. (12 points) Some information about two functions, f(x) and g(x), is provided in the table below; for this question it is assumed that the compositions f ∘ g and g ∘ f are well-defined functions.

    x | f(x) | g(x) | f′(x) | g′(x)
    3 |  1   |  5   |   4   |   3
    4 |  2   |  0   |   2   |   2
    5 |  4   |  6   |  14   |   1

Based on the information provided, answer the following questions.
(a) (f ∘ g)′(3) = ____
(b) (g ∘ f)′(5) = ____
(c) Find the slope of the line tangent to h(x) = f(x² + 1) when x = 2.

8. (12 points) Several statements are presented below; mark each one as either T ("TRUE") or F ("FALSE"). No work need be included for this problem.
a) If f′(2) exists then lim f(x) as x → 2 equals f(2), which equals lim f(x) as x → 2⁺ and lim f(x) as x → 2⁻.
b) The tangent line for y = f(x) at the point (c, f(c)) has as its equation y = f(c) + f′(c)(x − c).
c) The two functions r(t) = t² − √t + sin²t + 4 and ℓ(t) = t² − t^(1/2) − cos²t + π have the same derivative.
d) lim of (1/(θ csc θ))² as θ → 0 equals 1.

9. (12 points) The set of points that satisfy the equation xy + x = 2y² trace out the curve shown in the image below. As indicated, this curve passes through the points (1, 1) and (0, 0).
(a) Use implicit differentiation to find the slope of the line tangent to this curve at the point (1, 1) or explain why the slope is undefined.
(b) Use implicit differentiation to find the slope of the line tangent to this curve at the point (0, 0) or explain why the slope is undefined.

10. (7 points) Let's do a derivative "in reverse!" That is, instead of handing you a function f(x) and asking you to compute f′(x), let's start with the derivative and ask you to figure out which function was differentiated. To this end, suppose we have f′(x) = π + 2x − sin x + 2x⁻³. Which, if any, of the following functions could equal the original f(x)?
(a) f(x) could equal 2 − cos x − 6x⁻⁴
(b) f(x) could equal πx + x² + cos x − x⁻² + 42 − √2022
(c) f(x) could equal πx + x² + cos x + x⁻² + 2
(d) f(x) could equal πx + x² + sin x − 6x⁻⁴ + √17 + 2 + 1/π
(e) None of the above
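The antiderivative relationship in problem 10 can be verified numerically (a sketch, not part of the worksheet; the exponents are read here as x⁻² and x⁻³, and the constant C is arbitrary since it vanishes under differentiation): a central-difference approximation of F′(x) for F(x) = πx + x² + cos x − x⁻² + C should match f′(x) = π + 2x − sin x + 2x⁻³ at every sampled point.

```python
import math

# Check numerically that F(x) = pi*x + x**2 + cos(x) - x**(-2) + C
# has derivative f'(x) = pi + 2x - sin(x) + 2*x**(-3).
# The constant C drops out when differentiating.

def F(x, C=42.0):          # C is an arbitrary constant
    return math.pi * x + x**2 + math.cos(x) - x**(-2) + C

def fprime(x):
    return math.pi + 2 * x - math.sin(x) + 2 * x**(-3)

def central_diff(g, x, h=1e-5):
    # Symmetric difference quotient: error is O(h**2).
    return (g(x + h) - g(x - h)) / (2 * h)

for x in (0.5, 1.0, 2.0, 3.0):
    approx = central_diff(F, x)
    exact = fprime(x)
    assert abs(approx - exact) < 1e-4, (x, approx, exact)
print("derivative matches at all sampled points")
```

Any value of C passes the same check, which is why every antiderivative differing by a constant is an acceptable answer.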
Math Teacher Mambo

I gave the following quiz today in Algebra 1, and I've had "Scribd" on my radar for some time, and then I saw it used on the " " blog, so I thought I'd test it out here to see what's what. Hmmm, not exactly perfect, but I'll work on it.

I have passionate feelings for this resource and turn to it whenever I need great ways to ask questions or great real-life examples. I got my copy at an NCTM workshop, and have been doing the happy dance ever since. For example, in precalculus today we were talking about function notation, and after we did some "math" graph and table and equation examples, I pulled 2 context examples from the book. One was a table having to do with NFL revenue for a number of recent years, and one was a velocity vs. time graph for a man going to the store. They both generated discussion about the amount of money the NFL made (billions!) and interpretations of the velocity graph and why it may be so "wiggly," going up and down and such, and how you knew when he reached the store. The questions were worded such as: find v(24) and interpret the meaning. So they had to think about what "24" meant in this problem and what the "y" value meant and put it all in a coherent sentence.

We also started a discussion on graphs such as y = abs(f(x)). We first refreshed our memory on abs(x) (much needed for some folks). Then I had them make a table for f(x) = x*x - 2 and graph it. Then I had them hold up their hands in the shape they thought y = abs(f(x)) would be .... some, of course, held up "V" hands. So we made the table values and graphed JUST the points, and then I showed them the visual of "flipping" all the negatives "up". Ooh, ahh. That was a good segue into doing the same table/graph thing with f(x) = x ..... then doing y = abs(f(x)).

On a whiny note, my 6 class sizes so far are: 31, 22, 17, 26, 39, 34. I'm just saying. .... but maybe that's the norm in other schools.
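The y = abs(f(x)) activity above can be sketched in a few lines of Python (a hypothetical classroom illustration, not from the post): tabulate f(x) = x*x - 2 alongside |f(x)| and see exactly which points get "flipped up."

```python
# Tabulate f(x) = x*x - 2 and y = |f(x)| to show which
# points have negative f(x) and therefore get "flipped up".

def f(x):
    return x * x - 2

xs = [-3, -2, -1, 0, 1, 2, 3]
table = [(x, f(x), abs(f(x))) for x in xs]
for x, y, ay in table:
    flipped = "flipped" if y < 0 else ""
    print(f"x={x:>2}  f(x)={y:>3}  |f(x)|={ay:>2}  {flipped}")
```

Only the points with x in {-1, 0, 1} flip, which is why the graph of |f(x)| is the parabola with its dip between the x-intercepts reflected above the axis rather than a "V".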
BUT, I remember my math teacher friend in another state complaining that she had LARGE classes one year .... "26".

A while back I was first introduced to Wikki Stix at a calculus workshop where the presenter showed how she taught slope fields by having large pieces of graph paper around the room; each group had a differential equation, each person in the group had a point or 2 or 3, and they were to calculate the slope and stick the piece of wikki stix on that point in the appropriate slope. Super!

I was at a teacher supply store the other day, and found 48 for $5.95, and bought them thinking I could find some use for them. Well, now I have 2 things so far. On the first day of school, I will review functions and lead to function notation and how to read f(x) and find f(3) and find x such that f(x) = 6, etc., by first handing out premade graphs/grids/coordinate planes to each student and one wikki stix (stick?) to each kid. I'll first instruct them to plop down a "graph", and maybe have them decide if they're functions or not and maybe have them walk with their graphs and separate into 2 camps of functions and non-functions. They can then self correct by glancing at others and discussing it amongst themselves. Then I'll say find your f(3), and separate yourselves into like groups (how to deal with non integers .....???). Anyway, you get the idea. I could do more.

My second idea is what I think is an improvement on an old activity I've done with precal in the past regarding learning the sine and cosine graphs. My teacher friend (who I first learned to teach with/from) in another state basically does the same activity but with Wikki Stix. She has them do the measuring of the angles and the heights, not with a string, but with the Stix. Also, once they measure the "height", they snip the Stix into that length and "stick" it down on their graph. So ultimately, later on you have a Wikki Stix sine curve graph. Way cool and something I'll try this year ....
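The math behind that sine-curve activity is easy to tabulate ahead of time (a sketch with assumed numbers: a 10 cm circle radius and 30-degree steps, neither of which comes from the post): the "height" measured at each angle, and so the length to snip each Wikki Stix piece, is r·sin(angle).

```python
import math

# Heights for the Wikki Stix sine-curve activity, assuming a circle
# of radius 10 cm and measurements every 30 degrees. Negative heights
# are pieces placed below the horizontal axis.
r = 10.0  # cm, assumed radius of the classroom circle

for deg in range(0, 361, 30):
    h = r * math.sin(math.radians(deg))
    side = "above axis" if h > 0 else ("below axis" if h < 0 else "on axis")
    print(f"{deg:>3} deg: height = {h:6.2f} cm ({side})")
```

Swapping sin for cos gives the cutting lengths for the cosine graph with no other change.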
though with 3 precal classes of (current sizes) 37, 32, 30ish, how much will that cost?

I was at a math workshop today, and from a participant I learned a great calculator skill that left me saying, "where have you been all my TI-83/4/silver edition/titanium life?". Suppose you're graphing, and suppose you simultaneously want to see the graph and the table screens. Well: mode > G-T, and poof, when you hit graph, voila, split screen. Then you can toggle between the 2 parts by hitting the "table" or "graph" button, and then you can do whatever you want on that part of the screen. Also, you can split the screen horizontally (mode > horiz ...) and then you can show the graph and either the main window or the lists or a table, etc. Ooh, aah.

Inservice Monday: Meeting, meeting, lunch, meeting. Where did summer go? Ackh!

In the process of updating my homework website further, I thought I'd add a "games" link that showed some fun thinking/math/logic games. Well, of course I had to try them out to see what I was linking. .... Too many hours later, I'm addicted to 2 I found at http://www.coolmath-games.com/ : "walls logic" and "bloxorz". The first has you building diagonal walls on each of the tiles in a grid, and you can't form loops, and you have to follow "# of walls rules" for certain intersections. The second has 33 levels, and you basically move a brick around to get it in a hole. ... Okay, must stop playing and go do other things.

On a side note, as I was trying the new games, I found myself getting blurry-eyed just reading all the instructions, and I just wanted to jump in and test the waters instead of reading about it. Hmmmm, sounds familiar. So now I'll try to think of each school lesson in a new way: how can the students get started right away doing something and then learn what they need to know about it at various stages of the class.
It looks like on 8/25 we're starting right away with 1.5 hour block classes when school starts (as opposed to meeting all classes and having an "alpha" period with about 40 minute classes). I need about 40 minutes to call roll and ask for pronunciation corrections, have them fill out a seating chart (I like them to sit where they want the first time, so I see who can/can't sit together when I soon make a seating chart), take pictures of groups (so I can memorize names), and talk about the syllabus. While all this is going on, I like them to be doing something. Here's my something:

Here's part of their first assignment (and parent homework): This is a math autobiography and asks questions about past classes and school experiences and such. It also asks their parents to indicate "I am proud of my child because" .... This will tell me many things about the student: if they do work on time, if they are neat/verbose/last-minute/thoughtful. They also get to see their parent's bragging comments (and sometimes if there are no comments, I'm sad for the kid). I also have a good opportunity to get their parents' e-mail address for future grade sending. Then I have about 50 minutes left. I'm thinking of a "box plot" activity and a "meet and greet" activity. ... still in the planning stages, though. Whew! Two tasks done.

Just last week I went to an aerobatics contest my husband was in. It's small, so EVERYONE is wrangled into volunteering somehow. I got to be a boundary judge and learn the skills of how to read a sequence card, how to call the ins and outs, and how to survive blistering heat and curious cows and carcass sightings. First my partner and I (another wrangled wife) had to drive 10 minutes to the location. This is the other wife sitting in position ready to call outs/ins. The contraption is uber cool and thought up by an engineer/pilot who used a compass and his brains.
AND there's math involved (what .... There are 2 sets of 2 strings lined up perpendicular to the other set, and you line up the strings of one set with your eye, and .... since 2 intersecting lines determine a plane, you can determine when the aerobatic planes cross the boundary or not. We were at the "southeast" boundary, so our job was to call "out south", "in south", "out east", "in east". Boy were we stressed about doing it right. We're not pilots, and we had to figure out what squiggly lines on the cards matched what things the pilots were doing in the air. Eventually, we mastered it as a team.

Well, I am glad I started my homework website last year, and I'll continue it this year. I found many benefits:

* I made one of the 1st homework assignments be for parents to look at the site and send me an e-mail. This gave me their address for sending home grades, and it also made them aware.
* It gave me a place to post extra worksheets and solutions where they could download them (I didn't do this too much, but maybe will increase it this year).
* Students had an easy way to find homework assignments for whatever reasons.
* Parents now had a place to see/check whether their children had homework or tests coming up.

This year ... (with my 4 PREPS!) I'll continue it, and I want to enhance it:

* I want to somehow incorporate quick easy-to-make-&-upload videos showing various math techniques (I'll film them).
* I potentially want to link to videos from TeacherTube (since YouTube is blocked at our campus).
* I want to post notes for students that are absent (this one I'm iffy about for a variety of reasons: difficulty, space issues, student accountability, ...)

All in all, I'm happy with weebly.com. Hopefully, it will continue to be free or even cheap, and hopefully they'll resist putting ads on their/my page since that was a big draw for me.
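The two-strings trick in the boundary-judging story boils down to a little vector geometry: the two sighting lines through your eye span a plane, and the sign of (a × b) · p tells you which side of that plane a point p is on. A hypothetical sketch (every name and coordinate here is made up for illustration):

```python
# Two sighting strings through the judge's eye (taken as the origin)
# span the boundary plane; the sign of the scalar triple product
# (string_a x string_b) . aircraft says which side the aircraft is on.

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def side_of_boundary(string_a, string_b, aircraft):
    """+1 and -1 are the two sides of the boundary plane;
    0 means the aircraft is exactly on it."""
    normal = cross(string_a, string_b)
    d = dot(normal, aircraft)
    return (d > 0) - (d < 0)

# Strings spanning a vertical boundary plane (the x-z plane here):
a = (1.0, 0.0, 0.0)   # string toward the horizon
b = (0.0, 0.0, 1.0)   # string pointing straight up
print(side_of_boundary(a, b, (5.0, -2.0, 3.0)))  # → 1
print(side_of_boundary(a, b, (5.0, 2.0, 3.0)))   # → -1
```

Which sign means "in" and which means "out" just depends on which way the judge is facing; the eye does this computation instantly when the aircraft slides past the lined-up strings.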