# LEDs in Parallel != sum of expected current draw?
#### mordhau5
Joined Sep 19, 2012
6
Hey guys, long time lurker, first-time poster!
I am attempting to create a long string of non-HP T-1 3 mm LEDs, as they would make great spotlights for an Amiibo (little plastic figures) shelf I'm building. I ordered 25 of these (3 mm, 3.3 V, 20 mA: Datasheet here) from Digi-Key just to play around with their brightness and angle, and to prototype the string.
My plan was to run them four-to-a-series, in a parallel string of probably 28 or so sets. I chose this configuration because
1. I already have a large 12 V PSU sending power to a set of RGB LED strips as a backlight for the display, and it should have more than enough capacity left over for these little guys.
2. running each LED at 3 V provides AMPLE luminosity for the display, and I may even have to include a pot to dim the whole circuit, but that's separate from this issue.
But the TL;DR is: my concern is that when I pop 25 of them into my breadboard on the parallel power rails, hooked up to my Dr. Meter benchtop PSU at a fixed 3.0 V, neither my standalone multimeter nor the built-in ammeter in the PSU reads a draw of 25 × one LED at 3 V (< 20 mA each). In fact, just two in parallel don't come out to anywhere near 2× the current. For example, I currently have 10 plugged in and I'm reading a draw of 114 mA, but try any one of them solo and it reads 17-24 mA. Granted, I'm a novice, but my understanding is that the current draws of components in parallel are summed... right? I can say for sure that the brightness of the LEDs doesn't seem to drop as more are added, even though the total current seems to increase by only a fraction of the individual LED current each time. I even tried spacing them far apart so their brightnesses don't appear averaged to my eye.
I'm also given to understand that low-quality benchtop PSUs can be unreliable for accurate current readings (and Dr. Meter is certainly no hero...), but my multimeter seemed to confirm the phenomenon. I put this question to the measurement board because I have a hunch that I may be using my multimeter incorrectly to measure amperage and that my PSU's reading is wonky (it does seem to fluctuate when I turn the output on/off). But honestly, I'm stuck! I don't want to proceed with ordering 100 more of these and wiring them up if they're actually drawing far more current than I'm currently seeing.
#### dl324
Joined Mar 30, 2015
12,258
Few of the members who have an electronics background would advise you to connect a lot of LEDs in parallel without a resistor in each string, to deal with the fact that the LEDs won't have matched forward voltages. In the worst case, the LED with the lowest forward voltage hogs current and dies; that can set up a cascading failure where the next-lowest then hogs current and dies, until all of them are dead.
The human eye response to light is logarithmic and you'd need a change in brightness of about 2X to be able to discern the difference.
#### ericgibbs
Joined Jan 29, 2010
11,637
hi 5,
Welcome to AAC.
Note on the LED datasheet the Vfwd of the LED can vary from 3.3V to 4V.
So one LED could be 3.3V and another 4V and so on, they are not matched to operate exactly at 3.3V.
Putting a number of LEDs in parallel means that some LEDs will draw more current and others less.
Do not connect them in parallel.
E
#### mordhau5
Joined Sep 19, 2012
6
Few of the members who have an electronics background would advise you to connect a lot of LEDs in parallel without a resistor in each string, to deal with the fact that the LEDs won't have matched forward voltages. In the worst case, the LED with the lowest forward voltage hogs current and dies; that can set up a cascading failure where the next-lowest then hogs current and dies, until all of them are dead.
The human eye response to light is logarithmic and you'd need a change in brightness of about 2X to be able to discern the difference.
OH ok, I see, so I've been thinking about this the wrong way. I need to treat each string as if it were its own LED with its own specs, and pick a resistor that guarantees it runs at the same voltage as every other string in the parallel circuit. I suppose I would set a fixed current on the PSU, let them draw it, and then see what voltage they require?
#### mordhau5
Joined Sep 19, 2012
6
hi 5,
Welcome to AAC.
Note on the LED datasheet the Vfwd of the LED can vary from 3.3V to 4V.
So one LED could be 3.3V and another 4V and so on, they are not matched to operate exactly at 3.3V.
Putting a number of LEDs in parallel means that some LEDs will draw more current and others less.
Do not connect them in parallel.
E
I took those values to mean that any one of them COULD run at 4 V, but that would be outside its typical range and anything higher would risk damaging the LED, while 3.3 V was what they were optimized/QC'd to run at. I see now that it's variance in manufacturing. Unfortunately, running them all in series is simply not an option, so I'll need to find a way to wire them in parallel with resistors.
#### bertus
Joined Apr 5, 2008
21,054
Hello,
It is not wise to parallel LEDs directly.
The LEDs will not have exactly the same forward voltages.
The LED with the lowest forward voltage will draw most of the current and will fail first.
Then the other LEDs will get more current, and again the LED with the lowest forward voltage will draw most of the current and fail.
etc., until all LEDs are dead.
Bertus
#### mordhau5
Joined Sep 19, 2012
6
I'm already seeing that when I set the PSU to allow a maximum draw of only 15 mA, the voltage drop across each one varies somewhat, even for the same LED in different slots on the breadboard. I think I may have what I need to proceed: fix a current draw, measure the required voltage drop across each string, and use Ohm's law to pick a resistor for each series set of LEDs that keeps them all at the same voltage drop (within some tolerance that allows a long enough lifetime for the string). I figure that if I'm already keeping the current well below the rated amperage, then the series set with the lowest drop can tolerate its slightly increased current for longer.
#### dl324
Joined Mar 30, 2015
12,258
I think I may have what I need to proceed: fix a current draw, measure the required voltage drop across each string, and use Ohm's law to pick a resistor for each series set of LEDs that keeps them all at the same voltage drop (within some tolerance that allows a long enough lifetime for the string).
It's simpler than that.
Assume a typical forward voltage of 3.3V for the LEDs. Three in series would drop 9.9V. The current limit resistor would be:
$$\small R = \frac{V}{I} = \frac{12V-9.9V}{20mA} = 105\Omega$$
Use 100 Ω (or 110 Ω).
There will be small differences in brightness, but it shouldn't be noticeable.
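For anyone following along, that arithmetic generalizes to any supply voltage and string length. A quick sketch of it (mine, not from the thread; it assumes the typical 3.3 V forward voltage and a 20 mA target):

```python
def limit_resistor(v_supply, v_forward, leds_in_series, i_target):
    """Series current-limit resistor for one string: R = (Vsupply - n*Vf) / I."""
    headroom = v_supply - leds_in_series * v_forward
    if headroom <= 0:
        raise ValueError("supply too low for this many LEDs in series")
    return headroom / i_target

r = limit_resistor(12.0, 3.3, 3, 0.020)   # 105 ohms, as computed above
```

Note that asking for 4 LEDs in series on 12 V makes the headroom negative, which is exactly the problem raised a few posts below.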
#### mordhau5
Joined Sep 19, 2012
6
It's simpler than that.
Assume a typical forward voltage of 3.3V for the LEDs. Three in series would drop 9.9V. The current limit resistor would be:
$$\small R = \frac{V}{I} = \frac{12V-9.9V}{20mA} = 105\Omega$$
Use 100 Ω (or 110 Ω).
There will be small differences in brightness, but it shouldn't be noticeable.
That makes sense, but if I'm putting 4 in series, the required voltage drop to run at the typical Vf would be 13.2 V, meaning the limiting factor would be the 12 V PSU, I'm assuming? Are you saying that you can't/shouldn't rely on your power source to provide no more than its rated voltage? My original assumption was that because I'm increasing the drop beyond what the power source should be capable of, that would limit my current on its own.
#### dl324
Joined Mar 30, 2015
12,258
Are you saying that you can't/shouldn't rely on your power source to provide no more than the rated Voltage?
A good quality power supply would provide the rated voltage from no load to full load. That's what I was assuming; 12V means 12V.
If you're using a low quality supply that gives a higher nominal voltage when lightly loaded, the voltage will drop when you get to the "rated" load.
#### Wiebenor
Joined Nov 22, 2018
7
Hmm... Digi-Key... I know that place. I've got a friend who works there in homeland-security compliance, or whatever it's called. He's the guy who makes sure any parts sold aren't going to banned countries.
That's a good place to buy stuff, I hear, although I haven't actually bought anything from them as of yet. For your specific use, though, I wonder whether a set of white or colored strip lights would work just as well?
#### dl324
Joined Mar 30, 2015
12,258
Digi-Key... [snip] That's a good place to buy stuff I hear,
DigiKey, Jameco, Newark, and Mouser are all reputable. The only issue I have with DigiKey is high prices, though I've also heard that they have reasonable shipping prices. I haven't bought from them for decades, preferring Jameco and Newark.
|
{}
|
# Calculate the Taylor expansion of a function
## Calculus taylor_series_expansion
### Function : taylor_series_expansion
#### Summary :
The Taylor series calculator allows you to calculate the Taylor expansion of a function.
Taylor_series_expansion online
#### Description :
The online taylor series calculator helps determine the Taylor expansion of a function at a point. The Taylor expansion of a function at a point is a polynomial approximation of the function near that point. The degree of the polynomial approximation used is the order of the Taylor expansion.
# Usual function Taylor expansion
The calculator can calculate Taylor expansion of common functions.
For example, to calculate the Taylor expansion at 0 of the cosine function to order 4, simply enter taylor_series_expansion(cos(x);x;0;4); after calculation, the result is returned.
To calculate the Taylor expansion at 0 of the exponential function to order 5, simply enter taylor_series_expansion(exp(x);x;0;5); after calculation, the result is returned.
# Calculation of the Taylor series expansion of any differentiable function
To calculate the Taylor expansion at 0 of f: x -> cos(x)+sin(x)/2 to order 4, simply enter taylor_series_expansion(cos(x)+sin(x)/2;x;0;4); after calculation, the result is returned.
#### Syntax :
taylor_series_expansion(function;variable;value;order),
• function: the function whose expansion is sought,
• variable: the variable used for the expansion,
• value: the point at which the expansion is computed,
• order: the order of the expansion.
#### Examples :
taylor_series_expansion(cos(x);x;0;4), returns (x^4)/24+(-x^2)/2+1
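That order-4 result for cos can be cross-checked with a short sketch in plain Python (an independent check, not the calculator's own code):

```python
import math

def cos_maclaurin(x, order):
    # Maclaurin series of cos(x): sum of (-1)^(k/2) * x^k / k! over even k <= order
    return sum((-1) ** (k // 2) * x ** k / math.factorial(k)
               for k in range(0, order + 1, 2))
```

At order 4 this is exactly the polynomial (x^4)/24 - (x^2)/2 + 1 returned above, and it stays close to math.cos for small x.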
Calculate online with taylor_series_expansion (Calculate the Taylor expansion of a function)
|
{}
|
## Disclaimer
This is the first time I've written more than a one-liner in Perl, the Swiss Army chainsaw of scripting languages. After quickly skimming the beginner's tutorial (and no longer just copying Perl one-liners from Stack Overflow and friends), I consider myself ready to write porting helpers for kde-dev-scripts.git! Hey!
If you still don't know those scripts, check them out now. They're tremendously helpful, especially those for porting KDE4-based applications to Qt5/KF5. I wouldn't want to miss them!
## Make the most out of CMake's Automoc
Now there's a new porting helper: convert-to-cmake-automoc.pl, just committed today with https://git.reviewboard.kde.org/r/121991/.
This script tries to remove includes such as #include "<basename>.moc" from cpp files to make source code suitable for CMake's Automoc feature.
There are some pitfalls, though:
• In some cases, one still needs to #include "<basename>.moc"; for example when the moc-generated code uses classes declared inside the .cpp file (hint: K_PLUGIN_FACTORY) -- in this case the moc file cannot be compiled in a separate translation unit.
• In some cases, one still needs to #include "moc_<basename>.cpp"; for example when Q_PRIVATE_SLOT is used in the header file
This script handles these cases.
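In the simple case (no K_PLUGIN_FACTORY, no Q_PRIVATE_SLOT), the core transformation boils down to deleting the include line. A rough shell sketch of just that step (not the real Perl script, which also detects the cases listed above where the include must stay or become moc_<basename>.cpp):

```shell
# Sketch only: delete `#include "<basename>.moc"` lines from a .cpp file,
# keeping a .bak backup of the original.
strip_moc_include() {
  sed -i.bak '/^#include ".*\.moc"$/d' "$1"
}
```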
## Example: attica.git
Invoking this script on your source code will fix the annoying "No output generated" warnings from moc:
\$ make
...
/home/kfunk/devel/src/kf5/frameworks/attica/tests/projecttest/projecttest.cpp:0: Note: No relevant classes found. No output generated.
...
Now run find -iname "*.cpp" | xargs convert-to-cmake-automoc.pl and you'll end up with this patch.
|
{}
|
accTOKE
Introduction
The locking of TOKE as accTOKE gives users the ability to participate in the weekly accTOKE Cycle rewards. Rewards accrued by the Tokemak POA deployments are claimable in wETH by accTOKE lockers. Demonstrations on how to lock TOKE as accTOKE can be found here.
accTOKE Locked Staking
The system allows for TOKE to be staked as accTOKE for either 2, 3 or 4 weekly Cycles. Users will be able to either:
• Migrate TOKE that was already staked in the system to direct liquidity
• Or directly access accTOKE with TOKE that is not yet staked on the platform
Please note that participation in accTOKE locked staking will not allow for directing liquidity.
The lock period chosen initially will be the minimum locking period applying to any subsequently locked TOKE; the unlock of the most recently locked TOKE then applies to the user's total accTOKE balance. Example:
• Cycle 1: Locked 50 TOKE as accTOKE for 3 cycles.
• Unlock of 50 TOKE at Cycle 4
• Cycle 2: Locking of additional 50 TOKE for 3 cycles (minimum)
• Unlock of 100 TOKE at Cycle 5
After accTOKE is successfully requested for withdrawal, a user can lock additional TOKE without affecting the requested amount. However, the subsequent deposit is still bound to the minimum lock period.
Reward Mechanics
Users who have chosen to stake their TOKE as locked accTOKE will receive rewards in wETH (wrapped ETH) – upon claiming the rewards the user has the option to unwrap to ETH. Unwrapping to ETH when claiming rewards will require an additional transaction. The rewards can be claimed at the end of each weekly Cycle, not solely at the end of their lock time.
The rewards are distributed to accTOKE stakers according to the following logic:
In a first step, the accTOKE power is calculated for the individual user:
$User\ acc\ power\ =\ accTOKE\ Qty\ \times \ number\ of\ cycles\ locked$
The accTOKE power of the user is now used to calculate their rewards:
$User_{a}\ wETH\ Rewards\ =\ \frac{POA\ rewards\ in\ wETH}{\sum users\ acc\ power} \ \times User_{a}\ acc\ power$
accTOKE rewards are variable and will fluctuate based on many factors, including quantity of TOKE locked, deployed POA, POA rewards and the users chosen lock duration.
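A minimal sketch of the two formulas above (the function names are illustrative, not Tokemak's):

```python
def acc_power(acctoke_qty, cycles_locked):
    # User acc power = accTOKE Qty x number of cycles locked
    return acctoke_qty * cycles_locked

def user_weth_rewards(user_power, total_acc_power, poa_rewards_weth):
    # Pro-rata share of the cycle's POA rewards, weighted by accTOKE power
    return poa_rewards_weth * user_power / total_acc_power
```

For example, 50 TOKE locked for 3 cycles gives an acc power of 150; if all users together hold 600 acc power and the cycle distributes 10 wETH, that user earns 2.5 wETH.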
Unlocking accTOKE
Users can request to unlock during the last Cycle of their chosen lock period and then withdraw at any time after the lock expires.
Please note that accTOKE and "regular" TOKE staking are two separate contracts. It is possible to seamlessly lock staked TOKE by using the "Migrate" button. If a user chooses to stake their TOKE as an LD after requesting withdrawal from accTOKE, that requires withdrawing back into their wallet and re-staking.
accTOKE Emissions
It’s important to remember that not all POA rewards go to accTOKE lockers, some rewards will still accrue back to Tokemak’s POA. The planned schedule for accTOKE rewards from POA deployments is as follows:
• Cycle 241: 20% of POA rewards
• Cycle 242: 25% of POA rewards
• Cycle 243: 30% of POA rewards
• Cycle 244: 35% of POA rewards
• Cycle 245: 40% of POA rewards
• Cycle 246: 45% of POA rewards
• Cycle 247: 50% of POA rewards
After the above terminal rate reaches 50%, it is possible for the rate to increase or decrease, depending on a number of factors, including eventual governance. If the rate is to be modified, we will provide as much notice as possible given the circumstances which led to the change, to ensure that accTOKE lockers can make an informed decision to either continue locking or withdraw.
accTOKE Caps Schedule
The capped amount of TOKE that can be deposited into the accTOKE mechanism will follow this schedule:
• Cycle 241: 1,000,000 TOKE
• Cycle 242: 2,000,000 TOKE
• Cycle 243: 3,000,000 TOKE
• Cycle 244: 4,000,000 TOKE
• Cycle 245: 5,000,000 TOKE
• Cycle 246: 6,000,000 TOKE
• Cycle 247: Cap is lifted
Once Cycle 246 concludes, the cap on accTOKE will be removed.
|
{}
|
# Remember torques?
A hollow shaft has an inner diameter of 0.035 m and an outer diameter of 0.06 m. Compute the torque (in N·m) if the shear stress is not to exceed 120 MPa.
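A worked solution (mine, not part of the original problem statement) using the elastic torsion formula tau = T c / J, with the polar moment J = pi (d_o^4 - d_i^4) / 32 for a hollow circular shaft:

```python
import math

d_o, d_i = 0.06, 0.035                  # outer / inner diameter, m
tau_max = 120e6                         # allowable shear stress, Pa
J = math.pi * (d_o ** 4 - d_i ** 4) / 32  # polar moment of area, m^4
c = d_o / 2                             # max shear occurs at the outer surface
T = tau_max * J / c                     # allowable torque, roughly 4.5e3 N*m
```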
|
{}
|
sin(cosx)=0
• March 21st 2010, 05:15 PM
Chinnie15
sin(cosx)=0
I need to prove sin(cosx)=0, but I'm looking through all of my identities and none of them seem to fit? Any help on this would be great!
Thank you!
Brittney
• March 21st 2010, 05:39 PM
skeeter
Quote:
Originally Posted by Chinnie15
I need to prove sin(cosx)=0, but I'm looking through all of my identities and none of them seem to fit? Any help on this would be great!
Thank you!
Brittney
this equation is not true for all x (not an identity) ... it is a conditional equation, so there is nothing to "prove".
since $\sin(something) = 0$ , then $something$ has to be 0 or an integer multiple of $\pi$.
since $-1 \le \cos{x} \le 1$ , none of the multiples of $\pi$ will work, therefore $\cos{x}$ can only equal 0.
I leave you to determine those values of x that make $\cos{x} = 0$
• March 21st 2010, 05:45 PM
Chinnie15
Oh ok, thanks! So if cos90=0.. that would get me my answer?
• March 21st 2010, 05:48 PM
skeeter
Quote:
Originally Posted by Chinnie15
Oh ok, thanks! So if cos90=0.. that would get me my answer?
that would be one possible solution.
• March 21st 2010, 05:53 PM
Chinnie15
Ok, I think I get it now. So cos of 270 is also equal to zero. And then all coterminal angles of 90 and 270?
Edit: Here is my solution for my HW: sin(cos x) = 0 when x = 90° + 360°k or x = 270° + 360°k (i.e., all coterminal angles of 90° and 270°). Then sin(cos x) -> sin(0) -> 0.
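That family of solutions can be sanity-checked numerically (degrees, as used in the thread):

```python
import math

# cos(x) = 0 exactly when x = 90 deg plus any multiple of 180 deg, and those
# are the only solutions of sin(cos x) = 0 since |cos x| <= 1 < pi
for deg in (90, 270, 450, -90):
    assert abs(math.sin(math.cos(math.radians(deg)))) < 1e-12
```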
|
{}
|
# Math Help - Periodicity of a function
1. ## Periodicity of a function
Hello. I need some help on solving the following problem
Review the periodicity of the following function:
$f(x)=x+\frac{2x}{x^2-1}$
|
{}
|
## anonymous 5 years ago Finding a power series representation for the function f(x) = 3/(1-x^4) and determining the interval of convergence. So far I've gotten to 3 Σ (-1)^n x^(4n+1)/(4n+1)
1. anonymous
$3\sum_{n=0}^{\infty} (-1)^n \frac{x^{4n+1}}{4n+1}$
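For what it's worth, since the thread is cut off: the expansion follows from the geometric series, 3/(1 - x^4) = 3 Σ x^{4n} for |x| < 1, so the interval of convergence is (-1, 1); there is no (-1)^n factor and no division by 4n+1 unless you integrate term by term. A quick numerical check of that claim (my note, not from the thread):

```python
# partial sums of 3 * sum_{n>=0} x^(4n) versus the closed form 3 / (1 - x^4)
def partial_sum(x, terms):
    return 3 * sum(x ** (4 * n) for n in range(terms))

x = 0.5
closed_form = 3 / (1 - x ** 4)   # 3.2 for x = 0.5
```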
|
{}
|
# align two piecewise functions
I have a problem with my piecewise functions.
I read other posts and tried this, but I get errors:
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
$x_{{i}{j}}$ =
&\begin{cases}
\text{1,} &\quad\text{Si se envía agua desde área i ( i $\in$ \{1,2,3,4,5,6,7,8,9,10\}) a sitio j ( j $\in$ \{1,2,3,4,5,6,7\})} \\
\end{cases}\\
$y_{j}$ =
&\begin{cases}
\text{1,} &\quad\text{ Si se construye planta en el sitio j ( j $\in$ \{1,2,3,4,5,6,7\})} \\
\end{cases}
\end{align*}
\end{document}
Runaway argument? ! Paragraph ended before \align* was complete. \par l.592
I did this (using \[ ... \]) and it works well, but they appear not aligned:
\documentclass{article}
\usepackage{amsmath}
\begin{document}
$x_{{i}{j}} = \begin{cases} \text{1,} &\quad\text{Si se envía agua desde área i ( i \in \{1,2,3,4,5,6,7,8,9,10\}) a sitio j ( j \in \{1,2,3,4,5,6,7\})} \\ \text{0,} &\quad\text{En caso contrario} \end{cases}$
$y_{j} = \begin{cases} \text{1,} &\quad\text{ Si se construye planta en el sitio j ( j \in \{1,2,3,4,5,6,7\})} \\ \text{0,} &\quad\text{En caso contrario} \end{cases}$
\end{document}
and I want them to be aligned, like this:
• remove the $ around x_{ij} and y_{j}. \begin{align*} is already a math environment. – Troy Nov 30 '18 at 17:00
• lol I did that and it didn't work before but now it works hahaha thanks man!! is there a way to align the functions using \[ .... \]? – EmiliOrtega Nov 30 '18 at 17:02
• Yes, with the \begin{aligned} environment from amsmath. E.g. of its use: Alignment in equations – Troy Nov 30 '18 at 17:04
## 3 Answers
Beware! align already typesets its contents in math mode (but \text reverts to text mode). You have to just align the equals signs with &=; for the long line, I suggest a tabular. Some vertical spacing will help, too.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
x_{ij} &=
\begin{cases}
1, &\begin{tabular}[t]{@{}l@{}}
Si se envía agua desde área $i$ ($i\in\{1,2,3,4,5,6,7,8,9,10\}$) \\
a sitio $j$ ($j\in\{1,2,3,4,5,6,7\}$)
\end{tabular} \\[4ex]
0, &\text{En caso contrario}
\end{cases}\\
y_{j} &=
\begin{cases}
1, &\text{Si se construye planta en el sitio $j$ ($j\in\{1,2,3,4,5,6,7\}$)} \\[1ex]
0, &\text{En caso contrario}
\end{cases}
\end{align*}
\end{document}
Here it is.
I simplified the text-part typing by using the cases* environment from mathtools (no need to load amsmath). I had to split the first case to make it fit the margins:
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[showframe]{geometry}
\usepackage{mathtools}
\begin{document}
\begin{align*}
x_{{i}{j}} & =
\begin{cases*}
1, &\quad Si se envía agua desde área $i$ ($i \in \{1,2,3,4,5,6,7,8,9,10\}$) \\
   &\quad a sitio $j$ ($j \in \{1,2,3,4,5,6,7\}$) \\
0, &\quad En caso contrario
\end{cases*}\\[1.5ex]
y_{j} & =
\begin{cases*}
1, &\quad Si se construye planta en el sitio $j$ ($j \in \{1,2,3,4,5,6,7\}$) \\
0, &\quad En caso contrario
\end{cases*}
\end{align*}
\end{document}
If we go back to your original attempt, a few quick changes give the result you wanted.
This: removes the $ around x_{ij} and y_{j}, and moves the & you use for alignment so that the equals signs line up. Edit: I also compressed the set notation to shorten the long line and returned i and j to math mode, as you currently have them as subscripts in math mode. If you'd prefer them to be set upright, you should change the subscripts as well.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
x_{ij} &=
\begin{cases}
\text{1,} &\quad\text{Si se envía agua desde área $i$ ($i \in \{1,2,\dots,10\}$) a sitio $j$ ($j \in \{1,2,\dots,7\}$)} \\
\end{cases}\\
y_{j} &=
\begin{cases}
\text{1,} &\quad\text{ Si se construye planta en el sitio $j$ ($j \in \{1,2,\dots,7\}$)} \\
\end{cases}
\end{align*}
\end{document}
• Since i and j are used in math mode on the LHS, it seems appropriate to use $i$ ($i \in \{1,\dots,10\}$) and $j$ ($j \in \{1,\dots,7\}$) on the RHS. – Werner Nov 30 '18 at 17:17
|
{}
|
# Towards a Dual Representation of Lattice QCD
@article{Gagliardi2018TowardsAD,
title={Towards a Dual Representation of Lattice QCD},
author={G. Gagliardi and W. Unger},
journal={arXiv: High Energy Physics - Lattice},
year={2018}
}
• Published 2018
• Physics
• arXiv: High Energy Physics - Lattice
Our knowledge about the QCD phase diagram at finite baryon chemical potential $\mu_{B}$ is limited by the well known sign problem. The path integral measure, in the standard determinantal approach, becomes complex at finite $\mu_{B}$ so that standard Monte Carlo techniques cannot be directly applied. As the sign problem is representation dependent, by a suitable choice of the fundamental degrees of freedom that parameterize the partition function, it can get mild enough so that reweighting…
3 Citations
Path optimization in $0+1$D QCD at finite density
• Physics
• Progress of Theoretical and Experimental Physics
• 2019
We investigate the sign problem in $0+1$D quantum chromodynamics at finite chemical potential by using the path optimization method. The SU(3) link variable is complexified to the…
Bag representation for composite degrees of freedom in lattice gauge theories with fermions
• Physics
• 2018
We explore new representations for lattice gauge theories with fermions, where the space-time lattice is divided into dynamically fluctuating regions, inside which different types of degrees of…
Su(N) Polynomial Integrals and Some Applications
• Physics, Mathematics
• 2018
We use the method of the Weingarten functions to evaluate SU(N) integrals of the polynomial type. As an application we calculate various one-link integrals for lattice gauge and spin SU(N) theories.
|
{}
|
# Sample Standard Deviation Calculator
Instructions: In order to use this sample standard deviation calculator (SD), please provide the sample data below and this solver will provide step-by-step calculation:
Type the sample (comma or space separated)
Name of the variable (Optional)
The sample standard deviation (usually abbreviated as SD, St. Dev., or simply $$s$$) is one of the most commonly used measures of dispersion; it summarizes the data into one numerical value that expresses how dispersed the distribution is. When we say "dispersed", we mean how far the values of the distribution are from its center.
### How do you calculate the sample standard deviation?
Let $$\{X_1, X_2, ..., X_n\}$$ be the sample data. The following formula is used to compute the sample standard deviation:
$s = \sqrt{\frac{1}{n-1}\sum_{i=1}^n (X_i-\bar X)^2}$
Observe that the formula above requires computing the sample mean first, before starting the calculation of the sample standard deviation, which could be inconvenient if you only want the standard deviation. There is an alternative formula that does not use the mean, shown below: $s = \sqrt{\frac{1}{n-1}\left( \sum_{i=1}^n X_i^2 - \frac{1}{n} \left(\sum_{i=1}^n X_i\right)^2 \right)}$
One of the advantages of this calculator is that it will calculate the standard deviation for you while showing all the work, so that you can follow the steps.
### Example of calculation of the standard deviation
Example: For example, assume that the sample data is $$\{ 1, 2, 5, 8, 10\}$$, then, the sample SD is computed as follows:
$s = \sqrt{\frac{1}{n-1}\left( \sum_{i=1}^n X_i^2 - \frac{1}{n} \left(\sum_{i=1}^n X_i\right)^2 \right)}$ $= \sqrt{\frac{1}{5-1}\left( 1^2+2^2+5^2+8^2+10^2 - \frac{1}{5} (1+2+5+8+10 )^2 \right)} = 3.8341$
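Python's standard library reproduces the same value (an independent check, not the calculator's own code):

```python
import statistics

data = [1, 2, 5, 8, 10]
s = statistics.stdev(data)   # sample SD, with the n - 1 denominator
# round(s, 4) gives 3.8341, matching the hand calculation above
```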
The sample standard deviation is typically used as a representative measure of the dispersion of the distribution. The problem with the sample standard deviation, however, is that it is sensitive to extreme values and outliers. If what you need is to compute all the basic descriptive measures, including sample mean, variance, standard deviation, median, and quartiles, please check this complete descriptive statistics calculator.
### Population versus Sample Values
Please notice that you are computing the sample standard deviation from a sample of data. In order to compute the population standard deviation, you need ALL the data from the population. Also, when computing the population st. deviation, the formula has $$n$$ in the denominator instead of $$n-1$$. The reasons for this go beyond the scope of this tutorial.
Sometimes you need to estimate the standard deviation, but you may not have the sample data, or the data are incomplete. In that case, you can use the rule of thumb to compute the standard deviation.
### Difference between the standard deviation and standard error
The standard error corresponds to the standard deviation of the sampling distribution of sample means. This standard error calculator will compute the standard error for the case where you know the population standard deviation and want to compute the standard deviation of sample means, for a given sample size $$n$$.
|
{}
|
A slightly different way to prove Francesco+auniket's result is as follows. First, given two different planes in the family, all $d$ points must lie in their intersection, which is a line $L$.
Second, every plane intersecting $C$ properly does so in $d$ points (counting multiplicities). So for every plane $H$ through $L$ intersecting $C$ properly, $H\cap C \subset L$. If there is any such plane, then general planes through $L$ intersect $C$ properly. Let $H_1, \dots, H_m$ be the (therefore finitely many) planes through $L$ which meet $C$ nonproperly. Then $C \subset \bigcup H_i$, each $H_i$ contains a component $C_i$ of $C$ of degree $d_i$ which goes through $d_i$ of the $d$ points. If there is no such plane on the other hand, then $L$ is a component of $C$.
Actually, this is projecting from $L$ rather than projecting from a point of $L$.
|
{}
|
## Elementary Technical Mathematics
$\frac{23}{6}$
We change to an improper fraction: $3\displaystyle \frac{5}{6}=\frac{3\cdot 6+5}{6}=\frac{23}{6}$
|
{}
|
# Independence of $\ell$ and local terms
Joint IAS/Princeton University Number Theory Seminar
Topic: Independence of $\ell$ and local terms
Speaker: Martin Olsson
Affiliation: University of California, Berkeley
Date: Thursday, November 14
Time/Room: 4:30pm - 5:30pm/S-101
Video Link: https://video.ias.edu/jointiasnts/2013/1114-MartinOlsson
Let $k$ be an algebraically closed field and let $c:C\rightarrow X\times X$ be a correspondence. Let $\ell$ be a prime invertible in $k$ and let $K\in D^b_c(X, \overline {\mathbb Q}_\ell )$ be a complex. An action of $c$ on $K$ is by definition a map $u:c_1^*K\rightarrow c_2^!K$. For such an action one can define for each proper component $Z$ of the fixed point scheme of $c$ a local term $\text{lt}_Z(K, u)\in \overline {\mathbb Q}_\ell$. In this talk I will discuss various techniques for studying these local terms and some independence of $\ell$ results for them. I will also discuss consequences for traces of correspondences acting on cohomology.
|
{}
|
# line integral
In mathematics, the integral of a function of several variables defined on a line or curve that has been expressed in terms of arc length (see length of a curve). An ordinary definite integral is defined over a line segment, whereas a line integral may use a more general path, such as a parabola or a circle. Line integrals are used extensively in the theory of functions of a complex variable.
In mathematics, a line integral (sometimes called a path integral or curve integral) is an integral where the function to be integrated is evaluated along a curve. Various different line integrals are in use. In the case of a closed curve in two dimensions or the complex plane it is also called a contour integral.
The function to be integrated may be a scalar field or a vector field. The value of the line integral is the sum of values of the field at all points on the curve, weighted by some scalar function on the curve (commonly arc length or, for a vector field, the scalar product of the vector field with a differential vector in the curve). This weighting distinguishes the line integral from simpler integrals defined on intervals. Many simple formulas in physics (for example, $W = \vec{F} \cdot \vec{s}$) have natural continuous analogs in terms of line integrals ($W = \int_C \vec{F} \cdot d\vec{s}$). The line integral finds the work done on an object moving through an electric or gravitational field, for example.
## Vector calculus
In qualitative terms, a line integral in vector calculus can be thought of as a measure of the total effect of a given field along a given curve.
### Line integral of a scalar field
For a scalar field $f : U \subseteq \mathbb{R}^n \to \mathbb{R}$, the line integral along a curve $C \subset U$ is defined as
$\int_C f\,ds = \int_a^b f(\mathbf{r}(t))\,|\mathbf{r}'(t)|\,dt,$
where $\mathbf{r} : [a, b] \to C$ is an arbitrary bijective parametrization of the curve $C$ such that $\mathbf{r}(a)$ and $\mathbf{r}(b)$ give the endpoints of $C$.
The function f is called the integrand, the curve C is the domain of integration, and the symbol ds may be heuristically interpreted as an elementary arc length. Line integrals of scalar fields do not depend on the chosen parametrization r.
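As a numerical illustration of the definition above, the following sketch approximates $\int_C f\,ds$ with a midpoint rule; the field $f(x,y) = x^2$ and the unit-circle curve are choices made here for the example, not taken from the text:

```python
import math

# Midpoint-rule approximation of the scalar line integral
#   ∫_C f ds = ∫_a^b f(r(t)) |r'(t)| dt
def scalar_line_integral(f, r, r_prime, a, b, n=100_000):
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        t = a + (k + 0.5) * h          # midpoint of the k-th subinterval
        x, y = r(t)
        dx, dy = r_prime(t)
        total += f(x, y) * math.hypot(dx, dy) * h
    return total

# f(x, y) = x^2 along the unit circle r(t) = (cos t, sin t):
# exactly ∫_0^{2π} cos²t dt = π
val = scalar_line_integral(lambda x, y: x * x,
                           lambda t: (math.cos(t), math.sin(t)),
                           lambda t: (-math.sin(t), math.cos(t)),
                           0.0, 2 * math.pi)
print(val)  # ≈ π
```

Reparametrizing the curve leaves the value unchanged, matching the parametrization-independence stated above.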
### Line integral of a vector field
For a vector field $\mathbf{F} : U \subseteq \mathbb{R}^n \to \mathbb{R}^n$, the line integral along a curve $C \subset U$, in the direction of $\mathbf{r}$, is defined as
$\int_C \mathbf{F}(\mathbf{r}) \cdot d\mathbf{r} = \int_a^b \mathbf{F}(\mathbf{r}(t)) \cdot \mathbf{r}'(t)\,dt,$
where $\cdot$ is the dot product and $\mathbf{r} : [a, b] \to C$ is a bijective parametrization of the curve $C$ such that $\mathbf{r}(a)$ and $\mathbf{r}(b)$ give the endpoints of $C$.
A line integral of a scalar field is thus a line integral of a vector field where the vectors are always tangential to the line.
Line integrals of vector fields are independent of the parametrization r in absolute value, but they do depend on its orientation. Specifically, a reversal in the orientation of the parametrization changes the sign of the line integral.
### Path independence
If a vector field F is the gradient of a scalar field G, that is,
$\nabla G = \mathbf{F},$
then the derivative of the composition of G and r(t) is
$\frac{dG(\mathbf{r}(t))}{dt} = \nabla G(\mathbf{r}(t)) \cdot \mathbf{r}'(t) = \mathbf{F}(\mathbf{r}(t)) \cdot \mathbf{r}'(t),$
which is exactly the integrand of the line integral of $\mathbf{F}$ along $\mathbf{r}(t)$. It follows that, for a path $C$,
$\int_C \mathbf{F}(\mathbf{r}) \cdot d\mathbf{r} = \int_a^b \mathbf{F}(\mathbf{r}(t)) \cdot \mathbf{r}'(t)\,dt = \int_a^b \frac{dG(\mathbf{r}(t))}{dt}\,dt = G(\mathbf{r}(b)) - G(\mathbf{r}(a)).$
In words, the integral of $\mathbf{F}$ over $C$ depends solely on the values of $G$ at the points $\mathbf{r}(b)$ and $\mathbf{r}(a)$, and is thus independent of the path between them.
For this reason, a line integral of a vector field which is the gradient of a scalar field is called path independent.
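The path-independence argument can be checked numerically. In the sketch below, the potential $G(x,y) = x^2 y$, its gradient field $\mathbf{F} = (2xy,\ x^2)$, and the curve $y = x^2$ from $(0,0)$ to $(1,1)$ are illustrative choices, not from the text:

```python
import math

# Midpoint-rule approximation of ∫_C F·dr = ∫_a^b F(r(t))·r'(t) dt
def vector_line_integral(F, r, r_prime, a, b, n=100_000):
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        t = a + (k + 0.5) * h
        fx, fy = F(*r(t))
        dx, dy = r_prime(t)
        total += (fx * dx + fy * dy) * h
    return total

G = lambda x, y: x * x * y           # potential
F = lambda x, y: (2 * x * y, x * x)  # F = ∇G

# Curve from (0,0) to (1,1) along y = x²
val = vector_line_integral(F, lambda t: (t, t * t),
                           lambda t: (1.0, 2 * t), 0.0, 1.0)
print(val, G(1, 1) - G(0, 0))  # both ≈ 1
```

The computed integral matches $G(\mathbf{r}(b)) - G(\mathbf{r}(a)) = 1$ regardless of which path joins the two endpoints.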
### Applications
The line integral has many uses in physics. For example, the work done on a particle traveling on a curve C inside a force field represented as a vector field F is the line integral of F on C.
## Complex line integral
The line integral is a fundamental tool in complex analysis. Suppose $U$ is an open subset of $\mathbb{C}$, $\gamma : [a, b] \to U$ is a rectifiable curve and $f : U \to \mathbb{C}$ is a function. Then the line integral
$\int_\gamma f(z)\,dz$
may be defined by subdividing the interval $[a, b]$ into $a = t_0 < t_1 < \cdots < t_n = b$ and considering the expression
$\sum_{1 \le k \le n} f(\gamma(t_k)) \left( \gamma(t_k) - \gamma(t_{k-1}) \right).$
The integral is then the limit of this sum, as the lengths of the subdivision intervals approach zero.
If $\gamma$ is a continuously differentiable curve, the line integral can be evaluated as an integral of a function of a real variable:
$\int_\gamma f(z)\,dz = \int_a^b f(\gamma(t))\,\gamma'(t)\,dt.$
When $\gamma$ is a closed curve, that is, its initial and final points coincide, the notation
$\oint_\gamma f(z)\,dz$
is often used for the line integral of $f$ along $\gamma$.
The line integrals of complex functions can be evaluated using a number of techniques: the integral may be split into real and imaginary parts, reducing the problem to evaluating two real-valued line integrals, or the Cauchy integral formula may be applied. If the contour is a closed curve in a region where the function is analytic and contains no singularities, then the value of the integral is simply zero; this is a consequence of the Cauchy integral theorem. Because of the residue theorem, one can often use contour integrals in the complex plane to find integrals of real-valued functions of a real variable (see residue theorem for an example).
### Example
Consider the function $f(z) = 1/z$, and let the contour $C$ be the unit circle about 0, which can be parametrized by $e^{it}$ with $t$ in $[0, 2\pi]$. Substituting, we find
$\oint_C f(z)\,dz = \int_0^{2\pi} \frac{1}{e^{it}}\, i e^{it}\,dt = i \int_0^{2\pi} e^{-it} e^{it}\,dt = i \int_0^{2\pi} dt = i(2\pi - 0) = 2\pi i,$
where we use the fact that any complex number $z$ can be written as $re^{it}$, where $r$ is the modulus of $z$. On the unit circle this is fixed to 1, so the only variable left is the angle, which is denoted by $t$. This answer can also be verified by the Cauchy integral formula.
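The same example can be verified numerically by discretizing the defining Riemann sum with the parametrization $z = e^{it}$ (a sketch for illustration, with the step count chosen here):

```python
import cmath
import math

# Numeric check of ∮_C dz/z over the unit circle,
# using z = e^{it}, dz = i e^{it} dt from the example above.
n = 100_000
h = 2 * math.pi / n
total = 0 + 0j
for k in range(n):
    t = (k + 0.5) * h        # midpoint of each t-subinterval
    z = cmath.exp(1j * t)    # point on the unit circle
    dz = 1j * z * h          # dz = i e^{it} dt
    total += (1 / z) * dz
print(total)  # ≈ 2πi
```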
### Relation between the line integral of a vector field and the complex line integral
Viewing complex numbers as 2-dimensional vectors, the line integral of a 2-dimensional vector field corresponds to the real part of the line integral of the conjugate of the corresponding complex function of a complex variable. More specifically, if $\mathbf{r}(t) = x(t)\mathbf{i} + y(t)\mathbf{j}$ and $f(z) = u(z) + iv(z)$, then
$\int_C \overline{f(z)}\,dz = \int_C (u - iv)(dx + i\,dy) = \int_C (u\mathbf{i} + v\mathbf{j}) \cdot d\mathbf{r} + i \int_C (u\,dy - v\,dx),$
provided that both integrals on the right-hand side exist and that the parametrization $z(t)$ of $C$ has the same orientation as $\mathbf{r}(t)$.
By the Cauchy-Riemann equations, the curl of the vector field corresponding to the conjugate of a holomorphic function is zero. Via Stokes' theorem, this relates the vanishing of both types of line integral around closed curves.
The line integral can also be evaluated using a change of variables.
## Quantum mechanics
The "path integral formulation" of quantum mechanics actually refers not to path integrals in this sense but to functional integrals, that is, integrals over a space of paths, of a function of a possible path. However, path integrals in the sense of this article are important in quantum mechanics; for example, complex contour integration is often used in evaluating probability amplitudes in quantum scattering theory.
|
{}
|
# Displaying hierarchical comments for movies
I have written a function which fetches all comments for a given movie. Since comments can be nested (e.g. someone replied to a comment) I am fetching them recursively (based on the parent comment id). Currently, my function looks like this:
<?php
$movieid = $this->movieid;

function display_comments($movieid, $comment_parent_id = 0, $level = 0) {
    if (!$comments = CommentModel::FetchAllComments($movieid, $comment_parent_id)) {
        return false;
    }
    foreach ($comments as $comment) {
        $comment_id = $comment->comment_id;
        $user_id = $comment->user_id;
        $comment_content = $comment->comment_content;
        $comment_timestamp = $comment->comment_creation_datetime;
        $comment_is_deleted = $comment->comment_is_deleted;
        $comment_parent_id = $comment->comment_parent_id;
        ?>
        <div class="comment">
            <b><a href="<?php echo Config::get('URL'); ?>profile/showProfile/<?php echo $user_id; ?>"><?php echo UserModel::getPublicProfileOfUser($user_id)->user_name; ?></a>, <?php echo $comment_timestamp; ?></b>
            <br>
            <?php echo $comment_is_deleted == 0 ? $comment_content : '<i>Deleted</i>'; ?><br>
            <form action="<?php echo Config::get('URL'); ?>comment/addComment" method="post">
                <div class="form-group">
                    <label>
                        Comment:
                        <input type="text" class="form-control" name="comment_contet" required />
                    </label>
                </div>
                <!-- set CSRF token at the end of the form -->
                <input type="hidden" name="csrf_token" value="<?= Csrf::makeToken(); ?>" />
                <input type="hidden" name="movie_id" value="<?php echo $movieid; ?>" />
                <input type="hidden" name="parent_comment" value="<?php echo $comment_id; ?>">
                <div class="form-group">
                    <input type="submit" class="btn btn-default" value="Submit" />
                </div>
            </form>
            <?php if (Session::get("user_account_type") == 7) : ?>
                <a href="<?php echo Config::get('URL') . 'comment/softDelete/' . $comment_id . '/' . $movieid; ?>"><?php echo $comment_is_deleted == 0 ? 'Delete' : 'Undelete'; ?></a>
            <?php endif;

            // Recurse into the replies of this comment
            display_comments($movieid, $comment_id, $level + 1);
            ?>
        </div> <!-- /comment -->
        <?php
    }
}

display_comments($this->movieid);
?>
It is working, but it seems very ugly!
Right now, I'm not fetching anything from the controller apart from the $movieid.

# A thought:

Is there any way to change this function, so the controller fetches the comments:

public function view($movieid)
{
    $this->View->render('comment/view', array(
        'comments' => CommentModel::FetchAllComments($movieid),
        'movieid'  => $movieid
    ));
}

And then a foreach loop can output the comments, recursively, something like (very sorry for the pseudo code):

foreach ($this->comments as $comment) {
    echo '<div class="comment">';
    if ($comment->comment_parent_id) {
        foreach ($comment->comment_parent_id as $comment) {
            echo '<div class="comment">' . $comment->comment_content . ' this is a parent comment</div>';
        }
    }
    echo $comment->comment_content . ' the "main" comment</div>';
}
Now display_comments runs itself over and over; wouldn't it be better to fetch all the comments once and then arrange them in the right order?
# My question:
How can I improve my recursive comment function?
• Just to confirm, you aren't asking for new functionality that isn't already working, right? Just how to improve it? – Insane Jan 23 '16 at 23:20
• I would like to improve my function, because it is not satisfying to look at. – Adam Jan 23 '16 at 23:28
• I have reworded your question to make it a bit clearer as to what it is doing and that it is indeed working as intended - you just like to improve it. – ChrisWue Jan 24 '16 at 6:50
|
{}
|
### HETEROGENEOUS MULTISCALE METHOD FOR OPTIMAL CONTROL PROBLEM GOVERNED BY ELLIPTIC EQUATIONS WITH HIGHLY OSCILLATORY COEFFICIENTS
Liang Ge1, Ningning Yan2, Lianhai Wang3, Wenbin Liu4, Danping Yang5
1. 1. School of Mathematical Sciences, University of Jinan, Jinan 250022, China;
2. Institute of Systems Science, Academy of Mathematics and System Science, Chinese Academy of Science, Beijing 10080, China;
3. Shandong Provincial Key Laboratory of Computer Networks, Shandong Computer Science Center, Jinan 250014, China;
4. KBS, University of Kent, CT2 7PE, UK;
5. Department of Mathematics, East China Normal University, Shanghai 200062, China
• Received:2015-10-28 Revised:2017-01-09 Online:2018-09-15 Published:2018-09-15
Liang Ge, Ningning Yan, Lianhai Wang, Wenbin Liu, Danping Yang. HETEROGENEOUS MULTISCALE METHOD FOR OPTIMAL CONTROL PROBLEM GOVERNED BY ELLIPTIC EQUATIONS WITH HIGHLY OSCILLATORY COEFFICIENTS[J]. Journal of Computational Mathematics, 2018, 36(5): 644-660.
In this paper, we investigate the heterogeneous multiscale method (HMM) for the optimal control problem with distributed control constraints governed by elliptic equations with highly oscillatory coefficients. The state and co-state variables are approximated by a multiscale discretization scheme that relies on coupled macro and micro finite elements, whereas the control variable is discretized by piecewise constants. By applying the well-known Lions' Lemma to the discretized optimal control problem, we obtain the necessary and sufficient optimality conditions. A priori error estimates in both the $L^2$ and $H^1$ norms are derived for the state, co-state and control variables with uniform bound constants. Finally, numerical examples are presented to illustrate our theoretical results.
|
{}
|
1. The problem statement, all variables and given/known data

Integrate the following: $\int_{-\infty}^{\infty} \left(\frac{\sin x}{x}\right)^4 dx$.

2. Relevant equations

The residue theorem, contour integral techniques. The answer should be $2\pi/3$.

3. The attempt at a solution

I'm not even sure where to start, honestly. I define a function $f(z) = (\sin(z)/z)^4$. I'm not quite sure what to make of the point $z = 0$, but I make a contour the shape of half a donut in the upper half plane, with a little half-circle above $z = 0$. So I have three integrals to consider: the principal value integral on the x-axis, the one on the little half-circle, and the one on the big half-circle. According to the residue theorem, the sum of these integrals should give me 0. I'm pretty sure that using Jordan's lemma we can prove that the integral on the big half-circle is 0. Also, the principal value integral on the x-axis is the original integral. What do I do with the last part now? I define $z = \varepsilon e^{i\theta}$ there and insert it into my function. The integral is between $\pi$ and 0, and I need to take the limit as $\varepsilon$ goes to 0. I'm honestly lost; is there any chance someone could help me at least start this problem? I don't know if what I've written above is correct or not. Just a little help please =(, this problem has been a great source of stress for me recently.
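Not a substitute for the contour argument, but a brute-force numeric check (the cutoff and step count below are my own choices) confirms the stated value $2\pi/3 \approx 2.0944$:

```python
import math

# Midpoint-rule check of ∫_{-∞}^{∞} (sin x / x)^4 dx on [-L, L].
# The 1/x^4 envelope of the integrand makes the truncated tails
# negligible (well below 1e-6 for L = 200).
def integrand(x):
    return 1.0 if x == 0.0 else (math.sin(x) / x) ** 4

L, n = 200.0, 400_000
h = 2 * L / n
approx = h * sum(integrand(-L + (k + 0.5) * h) for k in range(n))
print(approx, 2 * math.pi / 3)  # ≈ 2.09440 for both
```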
|
{}
|
## Let A be a non-singular square matrix of order 3×3. Then |adj A| is equal to
Question
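For a non-singular n×n matrix, the standard identity is |adj A| = |A|^(n−1), so for a 3×3 matrix |adj A| = |A|². The following sketch (the sample matrix A is my own illustrative choice) verifies this with hand-rolled 3×3 cofactors:

```python
# Verify |adj A| = |A|^(n-1) (= |A|^2 for n = 3) on a sample matrix.

def det2(a, b, c, d):
    # determinant of [[a, b], [c, d]]
    return a * d - b * c

def det3(m):
    # cofactor expansion along the first row
    return (m[0][0] * det2(m[1][1], m[1][2], m[2][1], m[2][2])
          - m[0][1] * det2(m[1][0], m[1][2], m[2][0], m[2][2])
          + m[0][2] * det2(m[1][0], m[1][1], m[2][0], m[2][1]))

def adjugate3(m):
    # adj(A) is the transpose of the cofactor matrix
    cof = [[0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            rows = [r for r in range(3) if r != i]
            cols = [c for c in range(3) if c != j]
            minor = det2(m[rows[0]][cols[0]], m[rows[0]][cols[1]],
                         m[rows[1]][cols[0]], m[rows[1]][cols[1]])
            cof[i][j] = (-1) ** (i + j) * minor
    return [[cof[j][i] for j in range(3)] for i in range(3)]

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
print(det3(A), det3(adjugate3(A)))  # 8 and 64 = 8², i.e. |adj A| = |A|²
```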
|
{}
|
# Impact of outward FDI on firms’ productivity over the food industry: evidence from China
Ting-gui Chen (School of Economics and Management, Shanghai Ocean University, Shanghai, China)
Gan Lin (School of Economics and Management, Shanghai Ocean University, Shanghai, China)
Mitsuyasu Yabe (Laboratory of Environmental Economics, Department of Agricultural and Resource Economics, Faculty of Agriculture, Fukuoka, Japan)
ISSN: 1756-137X
Publication date: 25 October 2019
## Abstract
### Purpose
The purpose of this paper is to study the impact of outward foreign direct investment (OFDI) on the productivity of parent firms over the food industry.
### Design/methodology/approach
The main data in this paper are derived from the China Industrial Enterprise Database 2005–2013 and a data set of Chinese firms’ OFDI information. Then this paper uses propensity score matching to match the treatment and control groups with firm characteristics and combines that with the differences-in-differences method to estimate the real effect of OFDI on total factor productivity.
### Findings
The food firm’s OFDI significantly improves the parent firm’s productivity (known as the OFDI own-firm effect), but this promotion only exists in the short term. The OFDI own-firm effect of food firms differs remarkably as the sub-sectors, regions and ownership of firms vary. The food firm’s OFDI in “non-tax havens” and high-income destinations has a significantly stronger effect on the parent firm’s productivity. FDI, R&D and exporting can effectively strengthen the OFDI own-firm effect of food firms.
### Originality/value
The effect of OFDI on food industry productivity has not been researched yet. This paper aims to fill this gap. This paper further divides the characteristics of food firms into different sub-sectors, regions and ownership types for a comparative analysis, with the aim of conducting a more comprehensive study at the micro-level of firms. In addition, an investigation into which factors influence the degree of the OFDI own-firm effect at the micro-level has not been found in the literature. This paper will draw its own conclusions.
## Keywords
#### Citation
Chen, T., Lin, G. and Yabe, M. (2019), "Impact of outward FDI on firms’ productivity over the food industry: evidence from China", China Agricultural Economic Review, Vol. 11 No. 4, pp. 655-671. https://doi.org/10.1108/CAER-12-2017-0246
### Publisher
Emerald Publishing Limited
## 1. Introduction
With the deepening of economic globalization and regional economic integration, China’s outward foreign direct investment (OFDI) has entered the fast lane in the context of China’s “Going out” and “the Belt and Road” initiatives. The USA, Japan and China are the largest investors worldwide, and China is responsible for 8.65 percent of the world’s OFDI flows (UNCTAD, 2016). China has become a major investor in some developed destinations, especially through cross-border mergers and acquisitions (CM&A). Outward foreign direct investment, abbreviated OFDI, is one of the main outward channels of international investment, in contrast to inward foreign direct investment (FDI). OFDI is defined separately by the International Monetary Fund and the Ministry of Commerce of the People’s Republic of China. In this paper, we define OFDI as Chinese firms investing in foreign destinations for the operation and management rights of foreign firms, with emphasis on capacity-building and the adoption of improved production standards. This study only investigates Chinese food firms that have undertaken OFDI. OFDI helps a parent firm stimulate exports (Mucchielli and Soubaya, 2002; Jiang and Jiang, 2014a, b), enhance the quality of export products (Du and Li, 2015; Jing and Li, 2016), promote industrial upgrading (Blomstrom et al., 2000; Li, 2012), ease overcapacity (Wen, 2017) and improve profit margins (Yang and Cao, 2017). OFDI also increases the parent firm’s total factor productivity (TFP), because intellectual capital and other non-technical information obtained through external channels promote a firm’s productivity (Jiang and Jiang, 2014a, b); this is known as the OFDI own-firm effect. The effect may differ among industries because firms have their own motives, abilities and methods in OFDI (Blonigen, 2005; Chawla and Rohra, 2015).
Chinese food firms have begun to actively explore overseas markets. For example, the Shuanghui Group spent US$7.10bn to acquire the largest US pork producer, Smithfield, in 2013. In 2014, the Yili Group established an oceanic production base in New Zealand. Bright Dairy acquired a 76.70 percent stake in an Israeli dairy firm, Troyes, in 2015.
China’s main motivations for this FDI are as follows. First, the productivity of the food industry is relatively low within the manufacturing sector (Jin et al., 2017), which limits the efficiency of production processes. Second, the demand for high-quality food is increasing as consumers’ incomes increase. Third, food security incidents have become more frequent and overall trust in the domestic food industry is decreasing (Li and Shi, 2014); domestic firms can produce safer food to meet domestic demand by acquiring overseas firms or introducing higher-safety production lines. Technological breakthroughs are required to improve this situation. However, the effect of OFDI on food industry productivity has not been addressed in the literature; this paper aims to study this issue. As OFDI in the food industry gradually increased against the background of China’s “Going out” and “the Belt and Road” initiatives, this paper puts forward suggestions regarding the performance of China’s OFDI that may be considered good practice in innovation promotion within the food industry. In addition, it can help create more specific policies for food firms wanting to engage beneficially in OFDI. The remainder of the paper is organized as follows. Section 2 is the literature review and hypotheses. Section 3 describes the data and the calculation of the food firm’s TFP. Section 4 presents the model and method of the empirical research. Section 5 examines the results of the empirical analysis. Section 6 concludes.
## 2. Literature review and hypotheses
### 2.1 Literature review
Two types of research are related to this paper. The first type is the study of the relationship between OFDI and TFP for different destinations. Helpman et al. (2004) posited that the most productive firms choose to serve the overseas market through OFDI, the more productive firms choose to export and the lowest productive firms only serve the domestic market. This enterprise heterogeneous trade theory has been verified by a series of empirical studies in different destinations, such as Keller and Yeaple (2004) for the USA, Damijan et al. (2008) for Slovenia and Ryuhei and Takashi (2012) for Japan. Many scholars have noticed there may be a mutual causal relationship between OFDI and productivity in developed destinations, that is, OFDI may also increase the parent firm’s TFP. The possible theoretical explanation for this phenomenon is that incomplete markets make multinationals gain monopoly advantages and utilize these advantages through OFDI to enhance their technological superiority (Hymer, 1969).
Many scholars have conducted empirical research on the OFDI own-firm effect. Using Swedish manufacturing data, Braconier et al. (2001) demonstrated that OFDI can significantly facilitate technology imports. Kimura and Kiyota (2006) used Japanese firm-level vertical panel data to demonstrate that enterprises with OFDI have higher productivity growth. Imbriani et al. (2011) used the 2003–2006 Italian firm-level data to analyze the effect of OFDI and indicated that OFDI would increase the productivity of manufacturing enterprises. Gazaniol and Peltrault (2013) used propensity score matching (PSM) to study the impact of OFDI on the microeconomic performance of French enterprises and demonstrated that part of this business group is more inclined to invest abroad and could significantly improve its business performance through investment.
Does the OFDI own-firm effect still exist in emerging economies and developing destinations? The firms in these destinations cannot improve their productivity by using monopolistic advantages. However, the parent firm may improve their productivity by obtaining advanced technologies and management skills abroad, such as building factories and through CM&A, then utilizing them to improve the product quality and production technology (Desai et al., 2005; Syverson, 2010). Jiang and Jiang (2014a, b) used the 2004–2006 Chinese Industrial Enterprises Database to study the relationship between OFDI and productivity and discovered that OFDI could significantly improve the productivity of enterprises, but the promotion gradually reduces over time. Mao and Xu (2014) used China’s 2004–2009 firm-level data to conclude that a significant causal effect exists between OFDI and corporate innovation. Huang and Zhang (2017) used China’s firm-level data from 2002–2007 to examine the effect of OFDI based on the heterogeneity of a firm’s productivity. They divided firms in terms of absorptive capacity and whether they receive national support and demonstrated the following: enterprises effectively improve productivity if they invest abroad for the first time and the degree of impact varies greatly with the characteristics of the enterprise. Yang et al. (2013) used the 1987–2000 data from Taiwan’s manufacturing industry to study the effect of OFDI on the technological efficiency of enterprises and demonstrated a positive correlation between the enterprises’ OFDI activities and technological progress.
By contrast, some scholars find that OFDI does not improve firm productivity (Hijzen et al., 2007; Bai, 2009) or even harms it (Dhyne and Guerin, 2014). The discrepancy may stem from sample selection: Bai (2009), for example, found that the reverse technology spillover effect of China’s OFDI was not statistically significant using macro data from the national statistical yearbooks of 14 destinations. It may also stem from methodology: using the same Japanese firm-level data as Kimura and Kiyota (2006), Hijzen et al. (2007) applied a differences-in-differences (DID) model and found that “going out” had no significant effect on firms’ productivity. Whether these conflicting conclusions arise from sample selection or from research methods, no consensus exists as to whether OFDI can significantly improve a firm’s productivity. Moreover, because the literature on the OFDI own-firm effect has been mostly macro-level, the question becomes: will the OFDI own-firm effect differ across firms with different micro characteristics? This paper therefore divides food firms by sub-sector, region and ownership type for a comparative analysis, aiming at a more comprehensive study at the firm level.
The second type of literature has examined the OFDI own-firm effect in specific industries. Pradhan and Singh (2008) examined the Indian auto industry and demonstrated that enterprises can enhance their productivity when they invest in developed or developing destinations. Shen and Ju (2016) demonstrated that the reverse technology spillover effect of OFDI in China’s electronic information industry is significant and that increases in the technical level strengthen this effect. Driffield and Love (2003) examined the reverse spillover effect of OFDI in UK manufacturing and demonstrated that OFDI does raise the technological level of the manufacturing industry; however, this increase is limited to research and development (R&D) intensive industries. The literature on OFDI in the food industry is scarce; thus, the impact of OFDI on firm productivity remains a valuable topic for the food industry.
In addition, the literature has not investigated which micro-level factors influence the degree of the OFDI own-firm effect. We address this question in the final part of the empirical analysis.
### 2.2 Hypotheses
As mentioned, China’s food firms may conduct OFDI due to efficiency seeking motivation, food quality motivation and food safety motivation, but what is the specific mechanism of the OFDI own-firm effect? This paper generalizes the conduction process into three phases (Figure 1).
The first phase is the acquisition of advanced technology and management experience. Food firms can acquire foreign advanced technology and management experience through four channels: technology transfer, learning and imitation, the flow of talent and platform sharing. Technology transfer refers to the transfer of technological achievements within the firm: a growing number of Chinese firms acquire firms with more advanced technology in developed destinations, internalize the external market and obtain patented technologies, supply chain management, R&D teams and so on. Learning and imitation means that the foreign subsidiaries of multinational firms can track, learn from and imitate the research methods and directions of local leading firms; because they are closer to advanced local firms and research centers, multinational subsidiaries are better placed than domestic firms to obtain the latest research and marketing methods and management models and to cooperate with scientific research institutions. The flow of talent means that multinational subsidiaries can raise their technical level by recruiting capable local technical and management talent; they can also share and exchange technologies and enhance their innovation abilities through cooperation with local firms. Platform sharing means that a multinational subsidiary can absorb advanced technologies by using local resource platforms, R&D facilities, scientific research culture and research achievements.
The second phase is absorption and transformation by the food firm. After its foreign subsidiaries acquire advanced technology and management experience, the multinational parent food firm must absorb and transform them to internalize them into its own technological advantage, which can be achieved through the flows of personnel and products between the subsidiaries and the parent firm.
The third phase is the stage in which the firm’s technology spreads to the food industry, bringing technological upgrading to domestic firms through the demonstration effect and the competitive effect. The demonstration effect means that multinational corporations, having acquired the advanced technologies of foreign leading firms, serve as a model for other firms within the food industry and encourage domestic non-multinational firms to strengthen the construction of R&D institutions. The competitive effect means that the acquisition of advanced technology by multinational corporations increases competitive pressure in the food industry and forces food firms to enhance their innovation capabilities in order to survive. Notably, this technological upgrading process can occur not only between non-multinational and multinational corporations, but also among different multinational corporations.
Based on the above analysis, we propose the first hypothesis:
H1.
OFDI could increase the productivity of China’s food industry, and the OFDI own-firm effect has hysteresis due to the time required for the absorption, transformation and diffusion of technologies.
China’s food industry consists of three sub-sectors – the agricultural food processing industry (AFPI), food manufacturing (FM) and beverage manufacturing (BM) – which differ in their levels of development. The OFDI own-firm effect may therefore spread from firms to the industry differently, so food firms affiliated with different sub-sectors may obtain different productivity gains through OFDI; the role of FDI in productivity improvement may also differ across sub-sectors. Second, economic levels and corporate cultures differ across regions, so the OFDI own-firm effect may also differ for food firms in different regions; this needs to be verified by the subsequent empirical studies. Third, Chinese firms can be divided into state-owned and non-state-owned firms by ownership type. OFDI by these two types of food firms may affect productivity differently because non-state-owned firms operate more freely and efficiently than state-owned firms (Huang and Zhang, 2017) and can therefore make better use of reverse technology spillovers from OFDI. Finally, different OFDI destinations represent different investment objectives, so the destination of OFDI may also shape the OFDI own-firm effect. A firm obtains reverse technology spillovers through investment, CM&A and other activities that yield advanced technologies and management skills, and the parent firm may become both a user and a creator of these technologies and skills. The level of development of an investment destination may affect how efficiently technology and experience are absorbed, so OFDI in high-income destinations may be more conducive to raising the firm’s productivity.
Moreover, some firms invest abroad with a “system escape or speculation” motive: the firm invests in “tax havens,” such as Hong Kong, the British Virgin Islands and the Cayman Islands, to obtain domestic preferential investment policies, and its main purpose is not to acquire advanced technologies and management skills. This phenomenon also exists in the food industry; thus, this part of OFDI may have no significant effect on enhancing TFP.
Based on the above analysis, we propose the second hypothesis:
H2.
There is firm heterogeneity in the OFDI own-firm effect. Specifically, the OFDI own-firm effect of food firms may differ across sub-sectors and regions. Non-state-owned firms can gain greater productivity enhancement through OFDI than state-owned firms. OFDI in high-income destinations may be more effective than OFDI in low-income destinations, and OFDI in “tax havens” may not achieve effective productivity gains.
In addition to objective factors such as sub-sector and region, the OFDI own-firm effect may also depend on the firm’s own characteristics, such as its level of FDI, export status and innovation ability. First, firms with FDI gain learning advantages that allow them to absorb and apply advanced technologies and management skills better than other firms. Second, firms engaged in R&D absorb advanced technologies more readily, gain a first-mover advantage and apply those technologies to production more efficiently than other firms. Finally, the advanced technologies acquired through OFDI feed into the parent firm’s own products, and exporting these products provides a further channel of communication between the parent and its overseas subsidiaries; these additional exchanges may encourage the parent firm to keep applying the acquired experience and technologies to enhance TFP.
Based on the above analysis, we propose the third hypothesis:
H3.
FDI, R&D and exporting could enhance the OFDI own-firm effect.
This paper then uses the PSM and DID methods to examine the following questions based on information from the Chinese Industrial Enterprises Database for 2005–2013: does the OFDI own-firm effect exist in the food industry, and does it persist? Given firm heterogeneity, do different types of food firms experience different effects? Does the destination of a food firm’s OFDI affect the effect? Finally, do certain firm characteristics influence the OFDI own-firm effect?
## 3. Data and TFP
### 3.1 Data sources and processing
The main data in this paper are derived from the China Industrial Enterprise Database (CIED), which is maintained by the China National Bureau of Statistics and includes all state-owned enterprises and all above-scale non-state-owned enterprises (annual sales above RMB 5m, raised to RMB 20m in 2011). The subject of this paper is food firms, which correspond to the following CIED categories: AFPI (industry code: 13), FM (industry code: 14) and BM (industry code: 15). We process the data following Xie et al. (2008) and Yang (2015): we exclude observations whose industrial output value, total assets, capital stock, product sales or other key variables are missing, zero or negative; exclude firms with fewer than 8 employees; exclude firms established before 1950; and keep only firms with paid-in capital greater than 0. The CIED does not record OFDI. We therefore match the CIED, by firm name, with a data set of Chinese firms’ OFDI records (CFOFDI) acquired from the Chinese Ministry of Commerce to obtain combined data covering firms’ OFDI activities. OFDI observations in the combined data are very few before 2004 and begin to increase significantly in 2005; the time span of this paper is therefore 2005–2013. The combined data contain 258,182 observations (337 identified as OFDI observations) and 72,981 firms (307 identified as OFDI firms).
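The exclusion rules above can be sketched as a simple filter. The records, field names and values below are illustrative stand-ins for the CIED fields (only two of the "key variables" are shown), not the actual database schema:

```python
# Toy records mimicking a few CIED fields; names and values are invented.
firms = [
    {"name": "A", "output": 120.0, "assets": 300.0, "employees": 40, "founded": 1987, "paid_in": 10.0},
    {"name": "B", "output": 0.0,   "assets": 50.0,  "employees": 12, "founded": 1996, "paid_in": 5.0},
    {"name": "C", "output": 95.0,  "assets": -5.0,  "employees": 30, "founded": 2001, "paid_in": 8.0},
    {"name": "D", "output": 310.0, "assets": 900.0, "employees": 7,  "founded": 1970, "paid_in": 20.0},
    {"name": "E", "output": 80.0,  "assets": 210.0, "employees": 25, "founded": 1930, "paid_in": 3.0},
]

def keep(f):
    """Apply the paper's filters: key variables positive, at least 8
    employees, founded in 1950 or later, positive paid-in capital."""
    return (f["output"] > 0 and f["assets"] > 0
            and f["employees"] >= 8 and f["founded"] >= 1950
            and f["paid_in"] > 0)

clean = [f["name"] for f in firms if keep(f)]
print(clean)  # only firm A passes every filter: ['A']
```

Firm B fails on zero output, C on negative assets, D on the employee threshold and E on founding year, so a single conjunction of predicates implements all four rules at once.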
### 3.2 Calculation of TFP
TFP is a key variable in the subsequent analysis of this paper. Studies have used different methods for its estimation, such as ordinary least squares (OLS), the Olley–Pakes (OP) method, the Levinsohn–Petrin method and the fixed effects (FE) method. The OP method corrects the selection and simultaneity biases that arise when TFP is estimated by OLS: it constructs a survival probability function to model firm entry and exit and uses an investment function as a proxy for the firm’s observable efficiency effect (Olley and Pakes, 1996). This paper therefore chooses the OP method as its primary method of estimating firm TFP and also uses the FE model to calculate TFP, to increase the robustness of the results. The output elasticities of capital and labor are estimated with the following OP regression model, and TFP is then calculated from the Cobb–Douglas production function:
(1) $\ln Y_{it} = \beta_0 + \beta_1 \ln K_{it} + \beta_2 \ln L_{it} + \beta_3\,\mathrm{age}_{it} + \sum_m \delta_m \mathrm{year}_m + \sum_n \theta_n \mathrm{reg}_n + \sum_k \varphi_k \mathrm{ind}_k + \varepsilon_{it}$
where lnYit is the log of output (total value) of firm i at time t, lnKit is its capital input measured by total fixed assets and lnLit is its labor input measured by the number of practitioners. Output and capital input are deflated with the industrial producer price index and the fixed-asset investment price index (base year 2005). The regression uses the OP semiparametric three-step procedure. The state variables are lnKit and the firm’s age ageit; the free variables are lnLit, a regional dummy variable (regn) and a three-digit industry dummy variable (indk); the control variable is the time trend (yearm); and the proxy variable is investment (lnIit). The exit variable equals 1 if the firm drops out of the combined data and 0 otherwise. Table I presents the description of the relevant variables.
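As a rough illustration of how TFP falls out of Equation (1) as a residual, the sketch below fits a Cobb–Douglas production function to simulated data by plain OLS. This is a deliberate simplification: the paper uses the OP three-step semiparametric procedure precisely because OLS suffers simultaneity and selection bias, and all data and elasticities here are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated firm-year data: log capital, log labor and "true" log TFP.
lnK = rng.normal(9.0, 1.0, n)
lnL = rng.normal(5.0, 0.8, n)
true_tfp = rng.normal(0.0, 0.3, n)
lnY = 1.0 + 0.35 * lnK + 0.55 * lnL + true_tfp  # invented elasticities

# OLS estimate of the output elasticities of capital and labor.
X = np.column_stack([np.ones(n), lnK, lnL])
beta, *_ = np.linalg.lstsq(X, lnY, rcond=None)

# Log TFP is the residual of the estimated production function.
ln_tfp = lnY - X @ beta
print(np.round(beta, 2))
```

With exogenous inputs, OLS recovers the elasticities; the OP correction matters because in real data capital and labor respond to the firm's (unobserved) productivity, biasing these estimates.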
The results in Table I demonstrate that the TFP estimated by the OP method is lower than that estimated by the FE method, and that in all three sub-sectors of the food industry OFDI firms have higher TFP than non-OFDI firms. This pattern may reflect the OFDI own-firm effect, but it may also arise because firms already had higher TFP before undertaking OFDI (Helpman et al., 2004) – the self-selection effect. Therefore, whether OFDI effectively improves food firms’ TFP requires verification in the empirical analysis that follows.
## 4. Empirical methodology
To verify the aforementioned problems and explore the effect of OFDI on food firms’ TFP, this paper constructs a treatment group (OFDI firms) and a control group (non-OFDI firms). Because differences in TFP between OFDI and non-OFDI firms may also be caused by other unobservable, non-time-varying factors, this paper first uses the PSM method to match the treatment and control groups on firm characteristics and then combines it with the DID method to estimate the real effect of OFDI on TFP.
### 4.1 PSM for the sample
Following Heckman et al. (1997), we first divide the sample into a treatment group (OFDI firms) and a control group (non-OFDI firms). After merging the CIED and CFOFDI, firms with OFDI records can be identified directly: if a firm has an OFDI record in the combined data, its OFDI value is set to 1, and 0 otherwise. A logit model then estimates each firm’s probability (propensity score) of undertaking OFDI, and we select control firms for the OFDI firms based on the proximity of these scores. Following Jiang and Jiang (2014a, b) and Ye and Zhao (2016), we select labor productivity, capital intensity, firm size, export status, age, FDI, ownership type and R&D as matching variables. Table II presents how the matching variables are calculated.
Table III summarizes the matching variables. Before matching, each firm characteristic of OFDI firms differs significantly from that of non-OFDI firms. We therefore use PSM to screen out non-OFDI firms whose characteristics are close to those of OFDI firms.
PSM offers several matching methods, including radius matching, caliper matching, K-nearest neighbor matching and kernel matching, of which K-nearest neighbor matching is the most commonly used. We use K-nearest neighbor matching to pair the treatment and control groups; because the number of OFDI firms in the sample is relatively small, we set k = 4, matching four non-OFDI firms with similar characteristics to each OFDI firm. To test the matching quality, Table IV lists the standardized deviations of the matching variables: all are below 5 percent, and the t-tests show no significant difference between the two groups after matching. Following Rosenbaum and Rubin (1983), these results indicate that K-nearest neighbor matching balances the combined data well.
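The two matching steps – a logit propensity score followed by k = 4 nearest-neighbor matching – can be sketched on simulated data. Only one matching covariate is used for clarity (the paper uses eight), the data are invented, and the logit is fit by a hand-rolled Newton–Raphson loop rather than a statistics package:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# One covariate; OFDI (treatment) is more likely for high-x firms.
x = rng.normal(0.0, 1.0, n)
treated = rng.random(n) < 1.0 / (1.0 + np.exp(-(x - 1.0)))

# Step 1: estimate propensity scores with a logit fit by Newton-Raphson.
X = np.column_stack([np.ones(n), x])
w = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grad = X.T @ (treated - p)                     # score vector
    hess = (X * (p * (1 - p))[:, None]).T @ X      # Fisher information
    w += np.linalg.solve(hess, grad)
pscore = 1.0 / (1.0 + np.exp(-X @ w))

# Step 2: k-nearest-neighbor matching (k = 4, as in the paper): for each
# OFDI firm, pick the 4 control firms with the closest propensity scores.
controls = np.where(~treated)[0]
matches = {}
for i in np.where(treated)[0]:
    d = np.abs(pscore[controls] - pscore[i])
    matches[i] = controls[np.argsort(d)[:4]]

print(len(matches), "treated firms matched to 4 controls each")
```

Matching on the scalar propensity score rather than on all covariates jointly is the core PSM idea (Rosenbaum and Rubin, 1983); balance would then be checked via standardized deviations, as in Table IV.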
Is the TFP of OFDI firms still higher than that of non-OFDI firms after controlling for firm characteristics? First, we apply the PSM method to the entire sample. The results (Table V) show that the TFP of the treatment and control groups is 6.185 and 5.668, respectively, before matching, a difference of 0.517. After K-nearest neighbor matching, the TFP of the control group is 6.101, reducing the difference to 0.085. However, the t-test shows that the TFP difference between the two groups is still significant after matching: the mean TFP of OFDI firms remains higher than that of non-OFDI firms after controlling for firm characteristics. As robustness checks, we also use radius and kernel matching; both methods support the finding that the TFP of OFDI firms is significantly higher than that of non-OFDI firms.
Notably, matching on the entire sample cannot identify the non-OFDI firms whose characteristics and TFP are closest to those of OFDI firms before the latter invested abroad, because the full sample also contains observations of OFDI firms after they invest. To satisfy the common trend assumption required by the DID method, this paper matches the treatment and control groups year by year on the firm characteristics observed before the OFDI firms invest abroad (following Jiang and Jiang, 2014a, b). If a firm already has an OFDI record in its first year in the combined data, we match on that year’s characteristics. Table VI shows the yearly matching results: the TFP of the treatment and control groups is very close after matching, the pre-matching gap is greatly reduced, and the results broadly satisfy the condition that the TFP of OFDI firms before investing abroad does not differ from that of non-OFDI firms.
### 4.2 DID for the sample after matching
The yearly matching yields a new control group whose firm characteristics are similar to the treatment group. Adding the observations of the treatment group forms new combined data containing 9,120 observations (315 identified as OFDI observations) and 1,444 firms (296 identified as OFDI firms). The ATT results for the whole sample in Table V show that the TFP of OFDI firms remains higher than that of non-OFDI firms after matching. This paper therefore next uses the DID method to examine whether OFDI can improve the TFP of food firms. The DID model is typically used to test whether a policy has a statistically significant effect; compared with traditional methods it mitigates endogeneity by controlling the possible interaction between the dependent and independent variables, and as a classical empirical method it supports causal inference from independent to dependent variables. Applying DID, however, requires a “natural experiment”: the policy shock – here, the firm’s OFDI decision – must be exogenous, which is one reason this paper uses PSM to control for firm characteristics and productivity.
The classical DID regression model is as follows:
(2) $\ln(\mathrm{TFP}_{it}) = \beta_0 + \beta_1 \mathrm{OFDI}_{it} \times \mathrm{TIME}_t + \beta_2 Z_{it} + \beta_3 \mathrm{OFDI}_{it} + \beta_4 \mathrm{TIME}_t + \varepsilon_{it}$
where the interaction term OFDIit × TIMEt is the product of the OFDI dummy variable (1 if the firm belongs to the treatment group, 0 otherwise) and the time dummy variable (1 for the periods in and after the firm’s first OFDI, 0 otherwise), Zit contains the control variables and εit is the error term. Theoretically, the coefficient β1 of the interaction term represents the effect of OFDI on the firm’s TFP. However, this specification suits a two-period setting, whereas in our data food firms undertake OFDI in different years and a firm’s OFDI period is not unique. Following Beck et al. (2010), this paper therefore estimates the following model:
(3) $\ln(\mathrm{TFP}_{it}) = \beta_0 + \beta_1 \mathrm{OFDI}_{it} \times \mathrm{TIME}_t + \beta_2 Z_{it} + \beta_3 \mathrm{IND}_{it} + \beta_4 \mathrm{YEAR}_t + \varepsilon_{it}$
where INDit represents industry FE and YEARt represents year FE. Following the literature, we select KLR, SIZE, EXP, AGE, FDI, OWN and RD as the control variables Zit. The coefficient β1 of OFDIit × TIMEt captures the impact of OFDI on the firm’s TFP: OFDI has a positive effect on TFP if β1>0 and a negative effect if β1<0.
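The generalized DID of Equation (3) can be sketched on simulated panel data. The design below is an invented illustration, not the paper's data: half the firms are treated from year 3 onward, the true effect of OFDI on log TFP is set to 0.10, and only year fixed effects are included (industry FE and the controls Zit are omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(2)
firms, years = 60, 6
n = firms * years
firm = np.repeat(np.arange(firms), years)
year = np.tile(np.arange(years), firms)

# Half the firms undertake OFDI starting in year 3 (treatment group).
ofdi = (firm < firms // 2).astype(float)   # OFDI_it dummy
time = (year >= 3).astype(float)           # TIME_t dummy
did = ofdi * time                          # OFDI_it x TIME_t, coefficient of interest

# Simulated log TFP: true treatment effect 0.10, plus a time trend,
# a treatment-group level difference and noise.
ln_tfp = 0.10 * did + 0.02 * year + 0.5 * ofdi + rng.normal(0, 0.05, n)

# OLS with year dummies absorbing the common time trend (year FE).
year_dummies = (year[:, None] == np.arange(1, years)).astype(float)
X = np.column_stack([np.ones(n), did, ofdi, year_dummies])
beta, *_ = np.linalg.lstsq(X, ln_tfp, rcond=None)
print(round(beta[1], 2))  # estimate of beta_1, close to the true 0.10
```

The year dummies absorb shocks common to all firms and the OFDI dummy absorbs the permanent level gap between groups, so β1 isolates the post-investment change specific to treated firms – the DID logic behind Equation (3).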
## 5. Empirical results
First, to perceive the causal relationship between OFDI and TFP, this paper estimates Equation (3) on the new combined data. Second, the dynamic trend of the OFDI own-firm effect is investigated through a hysteresis effect test. Third, we examine the effect of OFDI on TFP by sub-sector, region, ownership type and investment destination. Finally, we introduce micro-level firm characteristic variables to investigate their influence on the OFDI own-firm effect.
### 5.1 Baseline results
Table VII reports the baseline results. Column 1 shows that the coefficient of OFDIit × TIMEt is significantly positive without controlling for any variables or the year and industry FE, indicating that food firms’ OFDI significantly increases their productivity. The result remains robust when the year and industry FE are controlled (Column 2). Columns 3 and 4 report the regressions after controlling for the firm characteristic variables, where Column 4 is the regression of Equation (3): the coefficient of OFDIit × TIMEt is still significantly positive with the firm characteristic variables, year FE and industry FE all controlled. These results confirm again that food firms’ OFDI promotes TFP growth. Column 5 reports the result with TFP calculated by the FE method: the coefficient of OFDIit × TIMEt is 0.0675, significant at the 5 percent level, indicating that the result in Column 4 is robust.
Table VII also reports the coefficients of the other variables. The coefficient of the capital labor ratio (KLR) is significantly negative, indicating that capital intensity inhibits TFP growth. This may reflect inefficient re-allocation of resources in the food industry: capital does not flow to the firms that need it to increase output and TFP, so capital is wasted and TFP growth is held back. The coefficient of firm size (SIZE) is significantly positive, indicating that larger firms have higher TFP. The coefficient of export (EXP) is also positive, indicating that exporting promotes TFP growth, consistent with most studies. We can thus conclude that a food firm’s OFDI significantly enhances its TFP.
### 5.2 Hysteresis effect
After a firm undertakes OFDI, the impact may extend beyond the current period: firms generally require time to learn, absorb technological advancements and improve their management skills, so the impact of OFDI on TFP may have a hysteresis effect. Jiang and Jiang (2014a, b) confirmed such a hysteresis effect using Chinese firm-level data. Does the same pattern hold for food firms? Table VIII shows the results.
Columns 6–8 in Table VIII report the results without controlling for the year and industry FE. The coefficients of the core interaction term (OFDIit × TIMEt) from the first to the third lag are all significantly positive; the OFDI own-firm effect is strongest at the first lag, weakens at the second and rebounds at the third. After controlling for the year and industry FE (Columns 9–11), the coefficient at the first lag remains significantly positive and the largest, but the coefficients at the second and third lags are no longer significant. The results in Table VIII indicate that the OFDI own-firm effect in the food industry has a hysteresis effect: a food firm still obtains an enhancement one year after OFDI, but the enhancement weakens after two years.
This conclusion differs markedly from the findings of Jiang and Jiang (2014a, b), whose research on China’s manufacturing shows that the OFDI own-firm effect first increases and then declines, peaking at a two-year lag. The difference may arise because food firms need relatively little time to learn advanced technologies and gain experience, so they can effectively raise TFP in the short term, but the impact on TFP diminishes once the technologies are fully absorbed.
In summary, H1 has been verified.
### 5.3 Firm heterogeneity effect
To study the impact of OFDI on TFP, this paper classifies food firms by sub-sector (two-digit codes), region and ownership type. We divide the food industry into three sub-sectors: the AFPI, FM and BM. We divide the regions into the eastern region (ER), northeastern region (NER), central region (CR) and western region (WR), following the China National Bureau of Statistics. We divide ownership types into state-owned (SO) and non-state-owned (NSO). Table IX presents the results of the heterogeneity effect test.
Columns 12–14 in Table IX report the results by sub-sector. The coefficients of the core interaction term are significantly positive in both the AFPI (0.068) and BM (0.229), indicating that OFDI effectively promotes the TFP of AFPI and BM firms, with a larger effect for BM than for the AFPI. The coefficient is not significant in FM, indicating that OFDI has no obvious effect on TFP in FM. Because productivity levels also differ greatly across regions, the OFDI own-firm effect may likewise differ by region.
Columns 15–18 report the results by region. The coefficients of the core interaction term are significantly positive in Columns 15 (0.077) and 17 (0.340), demonstrating that the OFDI own-firm effect is evident in the ER and CR, with a larger effect in the CR, whereas OFDI does not significantly improve firm TFP in the NER and WR (Columns 16 and 18). Our TFP calculations show that firm productivity is lowest in the CR; the largest OFDI effect there is thus probably due to diminishing marginal utility in TFP: the impact of external shocks on productivity is faster and more pronounced in areas with lower productivity.
Columns 19 and 20 report the results by ownership type. The coefficient of the core interaction term is significantly positive in Column 20 but insignificant in Column 19, showing that OFDI cannot significantly improve the TFP of SO firms but can significantly improve the TFP of NSO firms.
### 5.4 Investment destinations
The type of destination invested in may be one of the factors affecting the OFDI own-firm effect. Table X shows the inspection results classified by investment destination.
Table X shows that the core interaction term coefficient for non-tax havens is significantly positive, whereas that for tax havens is insignificant: a food firm’s OFDI in non-tax havens promotes TFP, while OFDI in tax havens does not noticeably enhance it. Second, the core interaction term coefficient for high-income destinations is positive, whereas that for middle- and low-income destinations is insignificant: food firms achieve effective productivity gains through investing in high-income destinations. The results in Table X validate the earlier analysis; thus, H2 has been verified.
### 5.5 Influence of firm characteristics on the OFDI own-firm effect
The analysis above concludes that a food firm’s OFDI can significantly improve its TFP. What factors, then, influence the size of this effect? We investigate whether firm characteristics influence the OFDI own-firm effect by constructing interaction terms between OFDI and FDI, RD, the capital labor ratio (KLR), export (EXP), firm size (SIZE) and firm age (AGE), as shown in the following equation:
(4) $\ln(\mathrm{TFP}_{it}) = \beta_0 + \beta_1 \mathrm{OFDI}_{it} \times \mathrm{TIME}_t + \beta_2 \mathrm{OFDI}_{it} \times \mathrm{TIME}_t \times M_{it} + \beta_3 Z_{it} + \beta_4 \mathrm{IND}_{it} + \beta_5 \mathrm{YEAR}_t + \varepsilon_{it}$
where M represents FDI, RD, KLR, EXP, SIZE and AGE in turn. Table XI reports the results of the joint-effect test.
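The joint-effect test of Equation (4) amounts to comparing β1 (the OFDI effect when M = 0) with β1 + β2 (the effect when M = 1). The simulated sketch below, with an invented true interaction of 0.14 and M standing in for a dummy like FDI, illustrates the computation:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400

did = (rng.random(n) < 0.3).astype(float)  # OFDI_it x TIME_t dummy
M = (rng.random(n) < 0.5).astype(float)    # firm characteristic, e.g. FDI dummy

# True model: the OFDI effect is 0.08 for M=0 firms and 0.08 + 0.14 for M=1.
ln_tfp = 0.08 * did + 0.14 * did * M + 0.05 * M + rng.normal(0, 0.05, n)

# Regress log TFP on the interaction terms of Equation (4)
# (fixed effects and controls Z_it omitted for brevity).
X = np.column_stack([np.ones(n), did, did * M, M])
beta, *_ = np.linalg.lstsq(X, ln_tfp, rcond=None)

# Marginal OFDI effect at M=0 and M=1: beta_1 and beta_1 + beta_2.
effect_m0 = beta[1]
effect_m1 = beta[1] + beta[2]
print(round(effect_m0, 2), round(effect_m1, 2))
```

A significantly positive β2 means the characteristic M amplifies the OFDI own-firm effect, which is exactly how the FDI, RD and EXP columns of Table XI are read.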
Table XI shows that the coefficients of the interaction term of OFDI and M are significantly positive except in Columns 30 and 31. From Columns 26, 27 and 29, when FDI, RD and EXP equal 1, the coefficients of OFDIit × TIMEt are 0.222, 0.229 and 0.139, respectively – all significantly higher than the 0.111 in Column 25; when FDI, RD and EXP equal 0, the coefficients are 0.083, 0.105 and 0.077, respectively – all significantly lower than 0.111. These results demonstrate that FDI, R&D and exporting significantly enhance the OFDI own-firm effect. In Column 28, the coefficient of the interaction of OFDI and KLR is −0.005 when KLR equals 1; since KLR in fact lies between 0 and 1, the coefficient of the interaction of OFDI and KLR is in any case below that of Column 25, indicating that capital intensity inhibits the OFDI own-firm effect. In Columns 30 and 31, the coefficients of the interactions of OFDI with SIZE and AGE are insignificant, suggesting that firm size and firm age have no marked influence on the OFDI own-firm effect. In summary, H3 has been verified.
## 6. Conclusions
Related studies confirm that OFDI has a significant effect on TFP. Does this phenomenon exist in the food industry, and what are the characteristics of the OFDI own-firm effect there? This paper empirically studied the effect of OFDI on TFP using the CIED and the CFOFDI data set for 2005–2013 and draws the following conclusions. First, the OFDI of food firms significantly enhances their own TFP; the government should therefore encourage food firms to “go out” for technological improvement and, where conditions permit, learn advanced technology and management experience from firms in developed destinations. Second, the OFDI own-firm effect has a hysteresis effect in the food industry: food firms obtain a strong enhancement in the short term, but the enhancement weakens over time. Third, the OFDI own-firm effect shows obvious firm heterogeneity. At the sub-sector level, OFDI significantly improves the TFP of the AFPI and BM but not FM. At the regional level, OFDI significantly improves the TFP of food firms in the ER and CR but not in the WR and NER. At the ownership level, OFDI significantly improves the TFP of non-state-owned firms but not state-owned firms. The government should formulate different policies for different sectors and regions to encourage food firms’ OFDI and should promote the exchange of experiences and lessons among firms, to jointly explore how to strengthen innovation and promote industrial development. It should also continue to promote the reform of state-owned firms, encourage fair competition between state-owned and non-state-owned firms, promote the rational use of resources by state-owned firms and inspire their innovation potential.
Fourth, the food firm’s OFDI in high-income destinations and non-tax havens can have a significant own-firm effect, whereas productivity cannot improve if the firm invests in medium- and low-income destinations and tax havens. The government should improve the legal system, limit domestic food firms’ investments for speculative purposes and establish a strict regulatory system to pay close attention to the OFDI of food firms. Fifth, FDI, R&D and exporting can strengthen the OFDI own-firm effect of food firms, whereas the capital intensity will inhibit the effect.
## Figures
#### Figure 1
The mechanism of OFDI’s impact on the productivity of the food industry
## Table I
Descriptive statistics of combined data
Agricultural food processing industry Food manufacturing Beverage manufacturing
Variable OFDI non-OFDI OFDI non-OFDI OFDI non-OFDI
lnYit 12.179 10.847 12.182 10.675 12.255 10.747
lnKit 10.136 8.668 10.388 8.914 10.571 9.206
lnIit 8.964 7.550 9.106 7.744 9.514 8.042
lnLit 5.569 4.657 5.787 4.914 5.878 4.910
TFPit (OP) 6.565 6.093 5.493 4.978 5.596 4.987
TFPit (FE) 9.537 8.697 9.364 8.260 9.052 8.051
Note: The values in the table are arithmetic means
## Table II
Matching variables calculation method
Variable Variable name Calculation method
LP Labor productivity The log of output labor ratio
KLR Capital labor ratio The log of capital labor ratio
SIZE Firm size The log of output
EXP Export 1 if export delivery value is greater than 0, 0 otherwise
AGE Firm age The number of years since the creation of the firm
FDI Foreign direct investment 1 if the firm has FDI, 0 otherwise
OWN Ownership type 1 if state-owned capital accounts for more than 0.5 paid-up capital, 0 otherwise
RD Research and development 1 if the firm conducts R&D, 0 otherwise
Note: Output and capital are the real value after the reduction
## Table III
The summary of matching variables
OFDI firms Non-OFDI firms
Variable Mean SD Mean SD
LP 6.460 1.200 6.019 1.101
KLR 4.544 1.338 4.076 1.278
SIZE 11.590 1.785 9.877 1.453
EXP 0.543 0.475 0.319 0.466
AGE 11.407 8.171 9.147 7.825
FDI 0.216 0.500 0.415 0.493
OWN 0.021 0.477 0.336 0.473
RD 0.099 0.300 0.039 0.193
## Table IV
Standardized deviation of matching variables
Mean
Variable Treated Control % bias t-stat.
LP 6.460 6.446 1.2 0.37
KLR 4.544 4.502 3.0 0.91
SIZE 11.590 11.539 3.1 0.92
EXP 0.543 0.558 −3.4 −0.92
AGE 11.407 11.275 1.6 0.50
FDI 0.216 0.210 1.5 0.43
OWN 0.021 0.018 1.9 0.69
RD 0.099 0.097 0.9 0.23
## Table V
Matching results of the entire sample
Variable Sample Treated Controls Difference SE t-stat.
K-nearest neighbor matching TFP Unmatched 6.185 5.668 0.517 0.026 20.13***
ATT 6.185 6.101 0.085 0.029 2.90***
Radius matching TFP Unmatched 6.185 5.668 0.517 0.026 20.13***
ATT 6.173 6.084 0.089 0.026 3.42**
Kernel matching TFP Unmatched 6.185 5.668 0.517 0.026 20.13***
ATT 6.185 5.802 0.383 0.026 14.94***
Notes: All the TFP in the Table are estimated by OP method. *,**,***Significant at the 10, 5 and 1 percent levels, respectively
## Table VI
The results of yearly matching
Sample of year Variable Sample Treated Controls Difference SE t-stat.
2005 TFP Unmatched 5.744 5.248 0.496 0.082 6.02
ATT 5.744 5.583 0.161 0.084 1.93
2006 TFP Unmatched 5.072 5.406 −0.334 0.216 −1.55
ATT 5.072 4.828 0.244 0.289 0.85
2007 TFP Unmatched 5.458 5.503 −0.045 0.265 −0.17
ATT 5.458 5.468 −0.009 0.391 −0.03
2008 TFP Unmatched 5.573 5.525 0.048 0.302 0.16
ATT 5.573 5.578 −0.005 0.387 −0.01
2009 TFP Unmatched 5.871 5.795 0.076 0.370 0.20
ATT 5.871 5.811 0.060 0.466 0.13
2010 TFP Unmatched 6.342 5.666 0.676 0.596 1.13
ATT 6.342 6.209 0.133 0.554 0.24
2011 TFP Unmatched 5.698 5.931 −0.233 0.199 −1.17
ATT 5.698 5.888 −0.190 0.276 −0.69
2012 TFP Unmatched 5.951 6.024 −0.090 0.352 −0.26
ATT 5.951 5.744 0.207 0.599 0.35
2013 TFP Unmatched 6.315 5.938 0.377 0.343 1.10
ATT 6.315 6.318 −0.003 0.344 −0.01
## Table VII
Baseline results
(1) (2) (3) (4) (5)
VARIABLES TFP (OP) TFP (OP) TFP (OP) TFP (OP) TFP (FE)
OFDIit × TIMEt 0.409*** 0.111*** 0.213*** 0.0871** 0.0675**
KLR −0.108*** −0.0945*** −0.0486***
SIZE 0.182*** 0.159*** 0.309***
EXP 0.0279 0.127*** 0.128***
AGE 0.0222*** −0.00195 −0.000151
FDI −0.0350 0.0327 0.0259
OWN −0.154** −0.0464 −0.0403
RD −0.0995*** −0.0116 0.0481*
Constant 5.797*** 5.567*** 4.039*** 4.180*** 5.452***
INDit No Yes No Yes Yes
YEARt No Yes No Yes Yes
Observations 9,120 9,120 9,120 9,120 9,120
R2 0.01 0.103 0.08 0.129 0.261
Notes: The dependent variable is lnTFPit. *,**,***Significant at the 10, 5 and 1 percent levels, respectively
## Table VIII
Results of the hysteresis effect
(6) (7) (8) (9) (10) (11)
Lag one period Lag two periods Lag three periods Lag one period Lag two periods Lag three periods
OFDIit × TIMEt 0.188*** 0.107** 0.127** 0.0870** 0.0047 0.0376
Firm characteristics Yes Yes Yes Yes Yes Yes
INDit No No No Yes Yes Yes
YEARt No No No Yes Yes Yes
Constant 4.362*** 4.474*** 4.464*** 4.589*** 4.837*** 5.313***
Observations 7,219 5,986 4,842 7,219 5,986 4,842
R2 0.080 0.039 0.035 0.082 0.058 0.050
Number of firms 1,348 1,289 1,149 1,348 1,289 1,149
Notes: The dependent variable is lnTFPit estimated by the OP method. *,**,***Significant at the 10, 5 and 1 percent levels, respectively
## Table IX
The results of heterogeneity effect test
(12) (13) (14) (15) (16) (17) (18) (19) (20)
AFPI FM BM ER NER CR WR SO NSO
OFDIit × TIMEt 0.068* 0.066 0.229** 0.077** −0.055 0.340*** 0.100 −0.163 0.097***
Firm characteristics Yes Yes Yes Yes Yes Yes Yes Yes Yes
YEARt Yes Yes Yes Yes Yes Yes Yes Yes Yes
Constant 4.12*** 3.03*** 3.14*** 3.83*** 4.36*** 4.26*** 3.26*** 3.75*** 3.93***
Observations 5,385 2,279 1,456 5,471 966 1,580 1,103 191 8,929
Number of firms 870 414 222 853 159 255 177 67 1,441
Notes: The dependent variable is lnTFPit, estimated by OP method. *,**,***Significant at the 10, 5 and 1 percent levels, respectively
## Table X
Results of the investment destinations
(21) (22) (23) (24)
Non-tax havens Tax havens High-income destinations Middle- and low-income destinations
OFDIit × TIMEt 0.0795* 0.0977 0.0921** 0.0774
Firm characteristics Yes Yes Yes Yes
INDit Yes Yes Yes Yes
YEARt Yes Yes Yes Yes
Constant 4.071*** 4.400*** 4.241*** 4.078***
Observations 6,485 2,635 5,866 3,254
R2 0.137 0.121 0.123 0.143
Notes: The dependent variable is lnTFPit, estimated by the OP method. This paper divides the destinations according to the World Bank GNI ranking (2010): high-income destinations have a per capita income that exceeds $12,276, middle-income destinations have a per capita income between $1,006 and $12,275 and low-income destinations have a per capita income less than $1,005. Tax havens in this combined data set are defined as: Hong Kong (China), the British Virgin Islands and Macao (China). *,**,***Significant at the 10, 5 and 1 percent levels, respectively
## Table XI
Results of the joint effect
(25) (26) (27) (28) (29) (30) (31)
Model 1 Model 2 Model 3 Model 4 Model 5 Model 6 Model 7
OFDIit × TIMEt 0.111*** 0.0829** 0.105*** −0.0370 0.0766 −0.225 0.157**
OFDIit × TIMEt × FDI 0.139***
OFDIit × TIMEt × RD 0.124*
OFDIit × TIMEt × KLR 0.0320*
OFDIit × TIMEt × EXP 0.0622*
OFDIit × TIMEt × SIZE 0.028
OFDIit × TIMEt × AGE −0.003
INDit Yes Yes Yes Yes Yes Yes Yes
YEARt Yes Yes Yes Yes Yes Yes Yes
Constant 5.567*** 5.567*** 5.566*** 5.567*** 5.567*** 5.568*** 5.567***
Observations 9,120 9,120 9,120 9,120 9,120 9,120 9,120
R2 0.103 0.104 0.104 0.104 0.103 0.104 0.103
Number of firms 1,444 1,444 1,444 1,444 1,444 1,444 1,444
Notes: The dependent variable is lnTFPit, estimated by the OP method. *,**,***Significant at the 10, 5 and 1 percent levels, respectively
## References
Bai, J. (2009), “The effect of the reverse technology spillover of Chinese outward direct investment on TFP: an empirical analysis”, World Economy Study, Vol. 8 No. 10, pp. 65-69+89 (in Chinese).
Beck, T., Levine, R. and Levkov, A. (2010), “Big bad banks? The winners and losers from bank deregulation in the United States”, Journal of Finance, Vol. 65 No. 5, pp. 1637-1667.
Blomstrom, M., Konan, D.E. and Lipsey, R.E. (2000), “FDI in the restructuring of the Japanese economy”, NBER Working Paper No. 7693, 1050 Massachusetts Avenue, Cambridge.
Blonigen, B.A. (2005), “A review of the empirical literature on FDI determinants”, Atlantic Economic Journal, Vol. 33 No. 4, pp. 383-403.
Braconier, H., Ekholm, K. and Knarvik, K.H.M. (2001), “In search of FDI-transmitted R&D spillovers: a study based on Swedish data”, Review of World Economics, Vol. 137 No. 4, pp. 644-665.
Chawla, K. and Rohra, N. (2015), “Determinants of FDI: a literature review”, The International Journal of Business & Management, Vol. 3 No. 3, pp. 227-250.
Damijan, J.P., Kostevc, Č. and Polanec, S. (2008), “From innovation to exporting or vice versa? Causal link between innovation activity and exporting in Slovenian microdata”, LICOS Discussion Paper No. 204, Catholic University of Leuven.
Desai, M.A., Foley, C.F. and Hines, J.R. (2005), “Foreign direct investment and domestic economic activity”, NBER Working Paper No. 11717, 1050 Massachusetts Avenue, Cambridge.
Dhyne, E. and Guerin, S.S. (2014), “Outward foreign direct investment and domestic performance: in search of a causal link”, NBB Working Paper No. 272, National Bank of Belgium.
Driffield, N. and Love, J.H. (2003), “Foreign direct investment, technology sourcing and reverse spillovers”, The Manchester School, Vol. 71 No. 6, pp. 659-672.
Du, W.J. and Li, M.J. (2015), “Relationship between OFDI and quality of export products: a variable weight estimation based on propensity score matching”, Journal of International Trade, Vol. 8 No. 12, pp. 112-122 (in Chinese).
Gazaniol, A. and Peltrault, F. (2013), “Outward FDI, performance and group affiliation: evidence from French matched firms”, Economics Bulletin, Vol. 33 No. 2, pp. 891-904.
Heckman, J.J., Ichimura, H. and Todd, P.E. (1997), “Matching as an econometric evaluation estimator: evidence from evaluating a job training programme”, Review of Economic Studies, Vol. 64 No. 4, pp. 605-654.
Helpman, E., Melitz, M. and Yeaple, S. (2004), “Export versus FDI with heterogeneous firms”, American Economic Review, Vol. 94 No. 1, pp. 300-316.
Hijzen, A., Tomohiko, I. and Yasuyuki, T. (2007), “The effects of multinational production on domestic performance: evidence from Japanese firms”, Discussion Papers Series No. 07-E-006, The Research Institute of Economy, Trade and Industry, Tokyo.
Huang, Y. and Zhang, Y. (2017), “How does outward foreign direct investment enhance firm productivity? A heterogeneous empirical analysis from Chinese manufacturing”, China Economic Review, Vol. 44 No. 3, pp. 1-15.
Hymer, S. (1969), “Multinational corporation and international oligopoly: the non-American challenge”, Center Discussion Paper No. 76, Yale University, Economic Growth Center, New Haven, CT.
Imbriani, C., Pittiglio, R. and Reganati, F. (2011), “Outward foreign direct investment and domestic performance: the Italian manufacturing and services sectors”, Atlantic Economic Journal, Vol. 39 No. 4, pp. 369-381.
Jiang, G.H. and Jiang, D.C. (2014a), “Outward FDI of Chinese industrial enterprises and progress of enterprise productivity”, The Journal of World Economy, Vol. 9 No. 9, pp. 53-76 (in Chinese).
Jiang, G.H. and Jiang, D.C. (2014b), “The ‘export effect’ of foreign direct investment by Chinese enterprises”, Economic Research Journal, Vol. 49 No. 5, pp. 160-173 (in Chinese).
Jin, S., Guo, H., Delgado, M.S. and Wang, H.H. (2017), “Benefit or damage? The productivity effects of FDI in the Chinese food industry”, Food Policy, Vol. 68 No. 4, pp. 1-9.
Jing, G.Z. and Li, P. (2016), “Has OFDI promoted the quality of export product in China”, Journal of International Trade, Vol. 8 No. 12, pp. 131-142 (in Chinese).
Keller, W. and Yeaple, S. (2004), “Multinational enterprises, international trade, and productivity growth: firm-level evidence from the United States”, NBER Working Paper No. 9504, 1050 Massachusetts Avenue, Cambridge.
Kimura, F. and Kiyota, K. (2006), “Exports, FDI, and productivity: dynamic evidence from Japanese firms”, Review of World Economics, Vol. 142 No. 4, pp. 695-719.
Li, F.C. (2012), “The industrial upgrading effect of home countries’ outward foreign direct investment: an empirical study from provincial panel of China”, Journal of International Trade, Vol. 6 No. 10, pp. 124-134 (in Chinese).
Li, X. and Shi, L. (2014), “An economics explanation for the crisis of trust in food industry”, Economic Research Journal, Vol. 49 No. 1, pp. 169-181 (in Chinese).
Mao, Q.L. and Xu, J.Y. (2014), “Does foreign direct investment of Chinese enterprises promote enterprise innovation”, The Journal of World Economy, Vol. 8 No. 8, pp. 98-125 (in Chinese).
Mucchielli, J.L. and Soubaya, I. (2002), “Intra-firm trade and foreign investment: an empirical analysis of French firms”, in Lipsey, R. and Mucchielli, J.L. (Eds), Multinational Firms and Impacts on Employment, Trade and Technology, pp. 43-83.
Olley, G.S. and Pakes, A. (1996), “The dynamics of productivity in the telecommunications equipment industry”, Econometrica, Vol. 64 No. 6, pp. 1263-1297.
Pradhan, J.P. and Singh, N. (2008), “Outward FDI and knowledge flows: a study of the Indian automotive sector”, International Journal of Institutions and Economies, Vol. 1 No. 1, pp. 155-186.
Rosenbaum, P.R. and Rubin, D.B. (1983), “The central role of the propensity score in observational studies for causal effects”, Biometrika, Vol. 70 No. 1, pp. 41-55.
Ryuhei, W. and Takashi, N. (2012), “Productivity and FDI of Taiwan firms: a review from a nonparametric approach”, Discussion Papers No. 12-E-033, The Research Institute of Economy, Trade and Industry, Tokyo.
Shen, J.X. and Ju, Y. (2016), “The effects of OFDI reverse technology spillover in China’s information technology industry: a quantile regression study”, International Business, Vol. 1 No. 8, pp. 79-87+149 (in Chinese).
Syverson, C. (2010), “What determines productivity?”, NBER Working Papers, Vol. 49 No. 2, pp. 326-365.
UNCTAD (2016), “World investment report 2015”, United Nations, New York, NY.
World Bank Group, “GNI per capita, Atlas method (current US$)”, available at: https://data.worldbank.org/indicator/NY.GNP.PCAP.CD (accessed July 28, 2011).
Wen, H.W. (2017), “Does Chinese firm outward FDI relieve overcapacity-an empirical study based on Chinese industrial enterprises database”, Journal of International Trade, Vol. 4 No. 10, pp. 107-117 (in Chinese).
Xie, Q.L., Rawski, T.G. and Zhang, Y.F. (2008), “Productivity growth and convergence across China s industrial economy”, China Economic Quarterly, Vol. 7 No. 3, pp. 809-826 (in Chinese).
Yang, P.L. and Cao, Z.Y. (2017), “The effect of outward foreign direct investment on firm’ s profit rate: evidences from Chinese industrial firms”, Journal of Zhongnan University of Economics and Law, Vol. 1 No. 1, pp. 132-139+160 (in Chinese).
Yang, R.D. (2015), “Study on the total factor productivity of Chinese manufacturing enterprises”, Economic Research Journal, Vol. 50 No. 2, pp. 61-74 (in Chinese).
Yang, S.F., Chen, K.M. and Huang, T.H. (2013), “Outward foreign direct investment and technical efficiency: evidence from Taiwan’s manufacturing firms”, Journal of Asian Economics, Vol. 27 No. 27, pp. 7-17.
Ye, J. and Zhao, Y.P. (2016), “Outward foreign direct investment and reverse technology spillover: an empirical analysis based on company-level microdata”, Journal of International Trade, Vol. 1 No. 12, pp. 134-144 (in Chinese).
# Is Newton's applicable here too?
Algebra Level 4
$\large { \begin{cases} { a+b =1} \\ {ax+by=2}\\ {ax^{2}+by^{2}=-6}\\ {ax^{3}+by^{3}=8}\\ \end{cases} }$ We are given that $$a,b,x,$$ and $$y$$ are complex numbers that satisfy the system of equations above.
If $$ax^{2015}+by^{2015}$$ can be written as $$p\cdot q^{r}$$ for integers $$p,q,r$$ with $$q,r$$ prime, evaluate $$p+q+r$$.
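As the title hints, the sequence $$S_n = ax^n + by^n$$ satisfies the Newton-style linear recurrence $$S_n = (x+y)S_{n-1} - (xy)S_{n-2}$$. The sketch below (an addition, deliberately stopping short of the final answer) recovers $$x+y$$ and $$xy$$ exactly from the four given values:

```python
# S_n = a*x**n + b*y**n obeys S_n = e1*S_{n-1} - e2*S_{n-2},
# where e1 = x + y and e2 = x*y (Newton's-identities-style recurrence).
from fractions import Fraction

S = [Fraction(1), Fraction(2), Fraction(-6), Fraction(8)]  # S_0 .. S_3

# Solve  S_2 = e1*S_1 - e2*S_0  and  S_3 = e1*S_2 - e2*S_1  for (e1, e2)
# by Cramer's rule on the 2x2 system.
det = -S[1] * S[1] + S[0] * S[2]
e1 = (-S[1] * S[2] + S[0] * S[3]) / det
e2 = (S[1] * S[3] - S[2] * S[2]) / det
assert S[3] == e1 * S[2] - e2 * S[1]  # sanity check against the data
print(e1, e2)  # -> -2 2
```

From here the recurrence can be iterated with exact Python integers all the way to $$n = 2015$$, which is what makes the title's approach practical.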
# Discrete Mathematics and Combinatorics Commons™
## All Articles in Discrete Mathematics and Combinatorics
1,081 full-text articles.
Unomaha Problem Of The Week (2021-2022 Edition), 2022 University of Nebraska at Omaha
#### Unomaha Problem Of The Week (2021-2022 Edition), Brad Horner, Jordan M. Sahs
##### Student Research and Creative Activity Fair
The University of Omaha math department's Problem of the Week was taken over in Fall 2019 from faculty by the authors. The structure: each semester (Fall and Spring), three problems are given per week for twelve weeks, with each problem worth ten points - mimicking the structure of arguably the most well-regarded university math competition around, the Putnam Competition, with prizes awarded to top-scorers at semester's end. The weekly competition was halted midway through Spring 2020 due to COVID-19, but relaunched again in Fall 2021, with massive changes.
Now there are three difficulty tiers to POW problems, roughly corresponding ...
An Even 2-Factor In The Line Graph Of A Cubic Graph, 2022 Yokohama National University
#### An Even 2-Factor In The Line Graph Of A Cubic Graph, Seungjae Eom, Kenta Ozeki
##### Theory and Applications of Graphs
An even 2-factor is one such that each cycle is of even length. A 4-regular graph G is 4-edge-colorable if and only if G has two edge-disjoint even 2-factors whose union contains all edges in G. It is known that the line graph of a cubic graph without 3-edge-coloring is not 4-edge-colorable. Hence, we are interested in whether those graphs have an even 2-factor. Bonisoli and Bonvicini proved that the line graph of a connected cubic graph G with an even number of edges has an even 2-factor, if G has a perfect matching [Even cycles and even ...
On Two-Player Pebbling, 2022 Lehigh University
#### On Two-Player Pebbling, Garth Isaak, Matthew Prudente, Andrea Potylycki, William Fagley, Joseph Marcinik
##### Communications on Number Theory and Combinatorial Theory
Graph pebbling can be extended to a two-player game on a graph G, called Two-Player Graph Pebbling, with players Mover and Defender. The players each use pebbling moves, the act of removing two pebbles from one vertex and placing one of the pebbles on an adjacent vertex, to win. Mover wins if they can place a pebble on a specified vertex. Defender wins if the specified vertex is pebble-free and there are no more pebbling moves on the vertices of G. The Two-Player Pebbling Number of a graph G, η(G), is the minimum m such that for every arrangement ...
2022 California Lutheran University
#### Tiling Rectangles And 2-Deficient Rectangles With L-Pentominoes, Monica Kane
We investigate tiling rectangles and 2-deficient rectangles with L-pentominoes. First, we determine exactly when a rectangle can be tiled with L-pentominoes. We then determine locations for pairs of unit squares that can always be removed from an m × n rectangle to produce a tileable 2-deficient rectangle when m ≡ 1 (mod 5), n ≡ 2 (mod 5) and when m ≡ 3 (mod 5), n ≡ 4 (mod 5).
#### Extremal Problems In Graph Saturation And Covering, Adam Volk
##### Dissertations, Theses, and Student Research Papers in Mathematics
This dissertation considers several problems in extremal graph theory with the aim of finding the maximum or minimum number of certain subgraph counts given local conditions. The local conditions of interest to us are saturation and covering. Given graphs F and H, a graph G is said to be F-saturated if it does not contain any copy of F, but the addition of any missing edge in G creates at least one copy of F. We say that G is H-covered if every vertex of G is contained in at least one copy of H. In the former setting, we ...
2022 University of Arkansas, Fayetteville
#### 3-Uniform 4-Path Decompositions Of Complete 3-Uniform Hypergraphs, Rachel Mccann
##### Mathematical Sciences Undergraduate Honors Theses
The complete 3-uniform hypergraph of order v is denoted as Kv and consists of vertex set V with size v and edge set E, containing all 3-element subsets of V. We consider a 3-uniform hypergraph P7, a path with vertex set {v1, v2, v3, v4, v5, v6, v7} and edge set {{v1, v2, v3}, {v2, v3, v4}, {v4, v5, v6}, {v5, v6, v7}}. We provide the necessary and sufficient conditions for the existence of a decomposition of Kv ...
Modern Theory Of Copositive Matrices, 2022 William & Mary
#### Modern Theory Of Copositive Matrices, Yuqiao Li
Copositivity is a generalization of positive semidefiniteness. It has applications in theoretical economics, operations research, and statistics. An $n$-by-$n$ real, symmetric matrix $A$ is copositive (CoP) if $x^T Ax \ge 0$ for any nonnegative vector $x \ge 0.$ The set of all CoP matrices forms a convex cone. A CoP matrix is ordinary if it can be written as the sum of a positive semidefinite (PSD) matrix and a symmetric nonnegative (sN) matrix. When $n < 5,$ all CoP matrices are ordinary. However, recognizing whether a given CoP matrix is ordinary and determining an ordinary decomposition (PSD + sN) is still an unsolved problem. Here, we give an overview of the modern theory of CoP matrices, discuss our progress on the ordinary recognition and decomposition problem, and emphasize the graph theory aspect of ordinary CoP matrices.
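To make the definition concrete (an added sketch, not from the thesis): sampling nonnegative vectors can refute copositivity but never certify it, so the helper below is only a heuristic check. The example matrix is built as PSD + symmetric nonnegative, i.e. an ordinary CoP matrix; the helper name is illustrative only.

```python
import random

def looks_copositive(A, trials=20000, seed=0):
    # Heuristic: sample nonnegative x and check x^T A x >= 0.
    # Sampling can only refute copositivity, never certify it.
    rng = random.Random(seed)
    n = len(A)
    for _ in range(trials):
        x = [rng.random() for _ in range(n)]
        q = sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        if q < -1e-12:
            return False
    return True

# An "ordinary" CoP matrix: PSD part plus symmetric nonnegative part.
psd = [[1.0, -1.0], [-1.0, 2.0]]
sn = [[0.0, 1.0], [1.0, 0.0]]
A = [[psd[i][j] + sn[i][j] for j in range(2)] for i in range(2)]
print(looks_copositive(A))  # -> True
```

By contrast, a symmetric matrix with a negative quadratic form on the nonnegative orthant, such as [[0, -1], [-1, 0]], is quickly rejected by the same sampler.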
Quantum Dimension Polynomials: A Networked-Numbers Game Approach, 2022 Murray State University
#### Quantum Dimension Polynomials: A Networked-Numbers Game Approach, Nicholas Gaubatz
##### Honors College Theses
The Networked-Numbers Game--a mathematical "game'' played on a simple graph--is incredibly accessible and yet surprisingly rich in content. The Game is known to contain deep connections to the finite-dimensional simple Lie algebras over the complex numbers. On the other hand, Quantum Dimension Polynomials (QDPs)--enumerative expressions traditionally understood through root systems--corresponding to the above Lie algebras are complicated to derive and often inaccessible to undergraduates. In this thesis, the Networked-Numbers Game is defined and some known properties are presented. Next, the significance of the QDPs as a method to count combinatorially interesting structures is relayed. Ultimately, a novel closed-form expression ...
How To Guard An Art Gallery: A Simple Mathematical Problem, 2022 St. John Fisher College
#### How To Guard An Art Gallery: A Simple Mathematical Problem, Natalie Petruzelli
##### The Review: A Journal of Undergraduate Student Research
The art gallery problem is a geometry question that seeks to find the minimum number of guards necessary to guard an art gallery based on the qualities of the museum’s shape, specifically the number of walls. Solved by Václav Chvátal in 1975, the resulting Art Gallery Theorem dictates that ⌊n/3⌋ guards are always sufficient and sometimes necessary to guard an art gallery with n walls. This theorem, along with the argument that proves it, are accessible and interesting results even to one with little to no mathematical knowledge, introducing readers to common concepts in both geometry and graph ...
2022 Prospect High School, Saratoga
#### A New Method To Compute The Hadamard Product Of Two Rational Functions, Ishan Kar
The Hadamard product (denoted by ∗) of two power series A(x) = a_0 + a_1 x + a_2 x^2 + ··· and B(x) = b_0 + b_1 x + b_2 x^2 + ··· is the power series A(x) ∗ B(x) = a_0 b_0 + a_1 b_1 x + a_2 b_2 x^2 + ···. Although it is well known that the Hadamard product of two rational functions is also rational, a closed-form expression of the Hadamard product of rational functions has not been found. Since any rational power series can be expanded by partial fractions as a polynomial ...
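As a concrete illustration (an added sketch, not the paper's method): on coefficient lists the Hadamard product is just termwise multiplication, and for the rational series 1/(1−2x) and 1/(1−3x) the result matches the rational series 1/(1−6x):

```python
def hadamard(a, b):
    """Termwise product of two power series given as coefficient lists."""
    return [x * y for x, y in zip(a, b)]

# 1/(1-2x) = sum 2^n x^n   and   1/(1-3x) = sum 3^n x^n
A = [2**n for n in range(6)]
B = [3**n for n in range(6)]
# Termwise product gives 6^n, the coefficients of 1/(1-6x):
print(hadamard(A, B))  # -> [1, 6, 36, 216, 1296, 7776]
```

This small case shows why rationality is preserved: the geometric terms multiply into geometric terms, which is the partial-fractions phenomenon the abstract refers to.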
2022 Bethel University
#### Some Np-Complete Edge Packing And Partitioning Problems In Planar Graphs, Jed Yang
##### Communications on Number Theory and Combinatorial Theory
Graph packing and partitioning problems have been studied in many contexts, including from the algorithmic complexity perspective. Consider the packing problem of determining whether a graph contains a spanning tree and a cycle that do not share edges. Bernáth and Király proved that this decision problem is NP-complete and asked if the same result holds when restricting to planar graphs. Similarly, they showed that the packing problem with a spanning tree and a path between two distinguished vertices is NP-complete. They also established the NP-completeness of the partitioning problem of determining whether the edge set of a graph can be ...
2022 Louisiana State University and Agricultural and Mechanical College
#### Characterizations Of Certain Classes Of Graphs And Matroids, Jagdeep Singh
##### LSU Doctoral Dissertations
"If a theorem about graphs can be expressed in terms of edges and cycles only, it probably exemplifies a more general theorem about matroids." Most of my work draws inspiration from this assertion, made by Tutte in 1979.
In 2004, Ehrenfeucht, Harju and Rozenberg proved that all graphs can be constructed from complete graphs via a sequence of the operations of complementation, switching edges and non-edges at a vertex, and local complementation. In Chapter 2, we consider the binary matroid analogue of each of these graph operations. We prove that the analogue of the result of Ehrenfeucht et al. does ...
Unavoidable Structures In Large And Infinite Graphs, 2022 Louisiana State University and Agricultural and Mechanical College
#### Unavoidable Structures In Large And Infinite Graphs, Sarah Allred
##### LSU Doctoral Dissertations
In this work, we present results on the unavoidable structures in large connected and large 2-connected graphs. For the relation of induced subgraphs, Ramsey proved that for every positive integer r, every sufficiently large graph contains as an induced subgraph either K_r or its complement. It is well known that, for every positive integer r, every sufficiently large connected graph contains an induced subgraph isomorphic to one of K_r, K_{1,r}, and P_r. We prove an analogous result for 2-connected graphs. Similarly, for infinite graphs, every infinite connected graph contains an induced subgraph isomorphic to one ...
The Enumeration Of Minimum Path Covers Of Trees, 2022 William & Mary
#### The Enumeration Of Minimum Path Covers Of Trees, Merielyn Sher
A path cover of a tree T is a collection of induced paths of T that are vertex disjoint and cover all the vertices of T. A minimum path cover (MPC) of T is a path cover with the minimum possible number of paths, and that minimum number is called the path cover number of T. A tree can have just one or several MPC's. Prior results have established equality between the path cover number of a tree T and the largest possible multiplicity of an eigenvalue that can occur in a symmetric matrix whose graph is that tree ...
Prime Labelings On Planar Grid Graphs, 2022 University of Pittsburgh - Johnstown
#### Prime Labelings On Planar Grid Graphs, Stephen James Curran
##### Theory and Applications of Graphs
It is known that for any prime p and any integer n such that 1 ≤ n ≤ p there exists a prime labeling on the p × n planar grid graph P_p × P_n. We show that P_p × P_n has a prime labeling for any odd prime p and any integer n such that p < n ≤ p².
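For intuition (an added sketch, not from the paper): a prime labeling assigns the labels 1, …, mn bijectively to the vertices so that adjacent labels are coprime. A brute-force search on a tiny grid illustrates the definition; the helper names are illustrative only.

```python
from itertools import permutations
from math import gcd

def grid_edges(m, n):
    """Edges of the m x n planar grid graph P_m x P_n."""
    edges = []
    for i in range(m):
        for j in range(n):
            if i + 1 < m:
                edges.append(((i, j), (i + 1, j)))
            if j + 1 < n:
                edges.append(((i, j), (i, j + 1)))
    return edges

def has_prime_labeling(m, n):
    # Brute force: try every bijection V -> {1, ..., m*n} and check
    # that the labels of adjacent vertices are coprime.
    vertices = [(i, j) for i in range(m) for j in range(n)]
    edges = grid_edges(m, n)
    for perm in permutations(range(1, m * n + 1)):
        lab = dict(zip(vertices, perm))
        if all(gcd(lab[u], lab[v]) == 1 for u, v in edges):
            return True
    return False

print(has_prime_labeling(2, 3))  # -> True
```

Brute force is only feasible for tiny grids; the point of results like the one above is to replace this search with explicit constructions.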
Characterizing Edge Betweenness-Uniform Graphs, 2022 University of Economics, Tajovského
#### Characterizing Edge Betweenness-Uniform Graphs, Jana Coroničová Hurajová, Tomas Madaras, Darren A. Narayan
##### Theory and Applications of Graphs
The \emph{betweenness centrality} of an edge $e$ is, summed over all $u,v\in V(G)$, the ratio of the number of shortest $u,v$-paths in $G$ containing $e$ to the number of shortest $u,v$-paths in $G$. Graphs whose edges all have the same edge betweenness centrality are called \emph{edge betweenness-uniform}. It was recently shown by Madaras, Hurajová, Newman, Miranda, Fl\'{o}rez, and Narayan that of the over 11.7 million graphs with ten vertices or fewer, only four graphs are edge betweenness-uniform but not edge-transitive. In this paper we present new results involving ...
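To illustrate the definition (an added sketch, not from the paper), the brute-force code below enumerates all shortest paths between each vertex pair and accumulates, for every edge, the fraction of those paths using it. On the edge-transitive 5-cycle every edge receives the same score, so C_5 is edge betweenness-uniform.

```python
from collections import deque

def shortest_paths(adj, u, v):
    # BFS over simple paths, keeping every shortest u-v path.
    best, out, queue = None, [], deque([[u]])
    while queue:
        p = queue.popleft()
        if best is not None and len(p) > best:
            break  # all remaining paths are longer
        if p[-1] == v:
            best = len(p)
            out.append(p)
            continue
        for w in adj[p[-1]]:
            if w not in p:
                queue.append(p + [w])
    return out

def edge_betweenness(adj):
    """For each edge, sum over pairs {u, v} of the fraction of
    shortest u-v paths that use the edge (brute force)."""
    nodes = sorted(adj)
    edges = {frozenset((u, w)) for u in adj for w in adj[u]}
    score = {e: 0.0 for e in edges}
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            paths = shortest_paths(adj, u, v)
            for e in edges:
                a, b = tuple(e)
                used = sum(
                    1 for p in paths
                    if any({p[k], p[k + 1]} == {a, b} for k in range(len(p) - 1))
                )
                score[e] += used / len(paths)
    return score

# C5 is edge-transitive, hence edge betweenness-uniform:
C5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
vals = set(edge_betweenness(C5).values())
print(vals)  # -> {3.0}
```

Each edge of C_5 lies on one distance-1 pair's path and two distance-2 pairs' paths, giving the common score 3.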
Chromatic Polynomials Of Signed Book Graphs, 2022 Indian Institute of Technology Guwahati
#### Chromatic Polynomials Of Signed Book Graphs, Deepak Sehrawat, Bikash Bhattacharjya
##### Theory and Applications of Graphs
For $m \geq 3$ and $n \geq 1$, the $m$-cycle \textit{book graph} $B(m,n)$ consists of $n$ copies of the cycle $C_m$ with one common edge. In this paper, we prove that (a) the number of switching non-isomorphic signed $B(m,n)$ is $n+1$, and (b) the chromatic number of a signed $B(m,n)$ is either 2 or 3. We also obtain explicit formulas for the chromatic polynomials and the zero-free chromatic polynomials of switching non-isomorphic signed book graphs.
Moving Polygon Methods For Incompressible Fluid Dynamics, 2022 University of Massachusetts Amherst
#### Moving Polygon Methods For Incompressible Fluid Dynamics, Chris Chartrand
##### Doctoral Dissertations
Hybrid particle-mesh numerical approaches are proposed to solve incompressible fluid flows. The methods discussed in this work consist of a collection of particles each wrapped in their own polygon mesh cell, which then move through the domain as the flow evolves. Variables such as pressure, velocity, mass, and momentum are located either on the mesh or on the particles themselves, depending on the specific algorithm described, and each will be shown to have its own advantages and disadvantages. This work explores what is required to obtain local conservation of mass, momentum, and convergence for the velocity and pressure in a ...
Minimality Of Integer Bar Visibility Graphs, 2022 Portland State University
#### Minimality Of Integer Bar Visibility Graphs, Emily Dehoff
##### University Honors Theses
A visibility representation is an association between the set of vertices in a graph and a set of objects in the plane such that two objects have an unobstructed, positive-width line of sight between them if and only if their two associated vertices are adjacent. In this paper, we focus on integer bar visibility graphs (IBVGs), which use horizontal line segments with integer endpoints to represent the vertices of a given graph. We present results on the exact widths of IBVGs of paths, cycles, and stars, and lower bounds on trees and general graphs. In our main results, we find ...
Application Of The Combinatorial Nullstellensatz To Integer-Magic Graph Labelings, 2022 San Jose State University
#### Application Of The Combinatorial Nullstellensatz To Integer-Magic Graph Labelings, Richard M. Low, Dan Roberts
##### Theory and Applications of Graphs
Let $A$ be a nontrivial abelian group and $A^* = A \setminus \{0\}$. A graph is $A$-magic if there exists an edge labeling $f$ using elements of $A^*$ which induces a constant vertex labeling of the graph. Such a labeling $f$ is called an $A$-magic labeling and the constant value of the induced vertex labeling is called an $A$-magic value. In this paper, we use the Combinatorial Nullstellensatz to show the existence of $\mathbb{Z}_p$-magic labelings (prime $p \geq 3$ ) for various graphs, without having to construct the $\mathbb{Z}_p$-magic labelings. Through many examples ...
# Answer to Question #63265 in Macroeconomics for Aisha
Question #63265
You are given the following information about the commodity and Money markets of a closed economy without government intervention.
The commodity market
Consumption function: C = 50 + 2/5Y
Investment function: I = 790 – 21r
The Money Market
Precautionary and Transactions demand for money: MDT = 1/6 Y
Speculative demand for money: MDS = 1200 -18r
Money supply: MS = 1250
Required:
(i) Determine the equilibrium levels of income and interest rate for this economy.
(ii) Using a well labelled diagram, illustrate the equilibrium condition in part (i) above.
C = 50 + 2/5Y, I = 790 – 21r
MDT = 1/6 Y, MDS = 1200 -18r, MS = 1250
In equilibrium Y = C + I and MD = MDT + MDS = MS, so:
Y = 50 + 2/5Y + 790 – 21r
1/6 Y + 1200 - 18r = 1250,
which simplify to:
3/5Y + 21r = 840
1/6Y - 18r = 50 -> 3/5Y - 64.8r = 180 (multiplying by 3.6), so if we subtract the second equation from the first, we will get:
85.8r = 660
r = 7.69%
Y = (840 - 21*7.69)/0.6 ≈ 1131. So, r = 7.69% and Y = 1131 are the equilibrium levels of the interest rate and income for this economy.
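Since the two equilibrium conditions form a 2x2 linear system, the result can be double-checked numerically. A minimal sketch (plain Python, no external libraries; the function name is my own):

```python
# IS curve (goods market): Y = 50 + (2/5)Y + 790 - 21r  ->  0.6*Y + 21*r = 840
# LM curve (money market): (1/6)Y + 1200 - 18r = 1250   ->  (1/6)*Y - 18*r = 50
def solve_is_lm():
    # From the LM equation: Y = 6*(50 + 18r) = 300 + 108r.
    # Substitute into the IS equation:
    # 0.6*(300 + 108r) + 21r = 840  ->  85.8r = 660
    r = 660 / 85.8
    y = 300 + 108 * r
    return y, r

y, r = solve_is_lm()
print(f"Y = {y:.1f}, r = {r:.2f}%")  # Y = 1130.8, r = 7.69%
```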
Assignment Expert
25.11.16, 14:46
Dear Aisha,
please use panel for submitting new questions
Aisha
20.11.16, 10:21
Using a well labelled diagram, illustrate the equilibrium condition in part (i) above. I would to know how it will be like. Thanks.
Aisha
11.11.16, 13:34
Thank you.
# Langlands–Shahidi method
In mathematics, the Langlands–Shahidi method provides the means to define automorphic L-functions in many cases that arise with connected reductive groups over a number field. This includes Rankin–Selberg products for cuspidal automorphic representations of general linear groups. The method develops the theory of the local coefficient, which links to the global theory via Eisenstein series. The resulting L-functions satisfy a number of analytic properties, including an important functional equation.
## The local coefficient
The setting is in the generality of a connected quasi-split reductive group G, together with a Levi subgroup M, defined over a local field F. For example, if G = G_l is a classical group of rank l, its maximal Levi subgroups are of the form GL(m) × G_n, where G_n is a classical group of rank n and of the same type as G_l, with l = m + n. F. Shahidi develops the theory of the local coefficient for irreducible generic representations of M(F).[1] The local coefficient is defined by means of the uniqueness property of Whittaker models paired with the theory of intertwining operators for representations obtained by parabolic induction from generic representations.
The global intertwining operator appearing in the functional equation of Langlands' theory of Eisenstein series[2] can be decomposed as a product of local intertwining operators. When M is a maximal Levi subgroup, local coefficients arise from Fourier coefficients of appropriately chosen Eisenstein series and satisfy a crude functional equation involving a product of partial L-functions.
## Local factors and functional equation
An induction step refines the crude functional equation of a globally generic cuspidal automorphic representation ${\displaystyle \pi =\otimes '\pi _{v}}$ to individual functional equations of partial L-functions and γ-factors:[3]
${\displaystyle L^{S}(s,\pi ,r_{i})=\prod _{v\in S}\gamma _{i}(s,\pi _{v},\psi _{v})L^{S}(1-s,{\tilde {\pi }},r_{i}).}$
The details are technical: s a complex variable, S a finite set of places (of the underlying global field) with ${\displaystyle \pi _{v}}$ unramified for v outside of S, and ${\displaystyle r=\oplus r_{i}}$ is the adjoint action of M on the complex Lie algebra of a specific subgroup of the Langlands dual group of G. When G is the special linear group SL(2), and M = T is the maximal torus of diagonal matrices, then π is a Größencharakter and the corresponding γ-factors are the local factors of Tate's thesis.
The γ-factors are uniquely characterized by their role in the functional equation and a list of local properties, including multiplicativity with respect to parabolic induction. They satisfy a relationship involving Artin L-functions and Artin root numbers when v gives an archimedean local field or when v is non-archimedean and ${\displaystyle \pi _{v}}$ is a constituent of an unramified principal series representation of M(F). Local L-functions and root numbers ${\displaystyle \epsilon (s,\pi _{v},r_{i,v},\psi _{v})}$ are then defined at every place, including ${\displaystyle v\in S}$, by means of the Langlands classification for p-adic groups. The functional equation takes the form
${\displaystyle L(s,\pi ,r_{i})=\epsilon (s,\pi ,r_{i})L(1-s,{\tilde {\pi }},r_{i}),}$
where ${\displaystyle L(s,\pi ,r_{i})}$ and ${\displaystyle \epsilon (s,\pi ,r_{i})}$ are the completed global L-function and root number.
## Examples of automorphic L-functions
• ${\displaystyle L(s,\pi _{1}\times \pi _{2})}$, the Rankin–Selberg L-function of cuspidal automorphic representations ${\displaystyle \pi _{1}}$ of GL(m) and ${\displaystyle \pi _{2}}$ of GL(n).
• ${\displaystyle L(s,\tau \times \pi )}$, where τ is a cuspidal automorphic representation of GL(m) and π is a globally generic cuspidal automorphic representation of a classical group G.
• ${\displaystyle L(s,\tau ,r)}$, with τ as before and r a symmetric square, an exterior square, or an Asai representation of the dual group of GL(n).
A full list of Langlands–Shahidi L-functions[4] depends on the quasi-split group G and maximal Levi subgroup M. More specifically, the decomposition of the adjoint action ${\displaystyle r=\oplus r_{i}}$ can be classified using Dynkin diagrams. A first study of automorphic L-functions via the theory of Eisenstein Series can be found in Langlands' Euler Products,[5] under the assumption that the automorphic representations are everywhere unramified. What the Langlands–Shahidi method provides is the definition of L-functions and root numbers with no other condition on the representation of M other than requiring the existence of a Whittaker model.
## Analytic properties of L-functions
Global L-functions are said to be nice[6] if they satisfy:
1. ${\displaystyle L(s,\pi ,r),\ L(s,{\tilde {\pi }},r)}$ extend to entire functions of the complex variable s.
2. ${\displaystyle L(s,\pi ,r),\ L(s,{\tilde {\pi }},r)}$ are bounded in vertical strips.
3. (Functional Equation) ${\displaystyle L(s,\pi ,r)=\epsilon (s,\pi ,r)L(1-s,{\tilde {\pi }},r)}$.
Langlands–Shahidi L-functions satisfy the functional equation. Progress towards boundedness in vertical strips was made by S. S. Gelbart and F. Shahidi.[7] And, after incorporating twists by highly ramified characters, Langlands–Shahidi L-functions do become entire.[8]
Another result is the non-vanishing of L-functions. For Rankin–Selberg products of general linear groups it states that ${\displaystyle L(1+it,\pi _{1}\times \pi _{2})}$ is non-zero for every real number t.[9]
## Applications to functoriality and to representation theory of p-adic groups
• Functoriality for the classical groups: A cuspidal globally generic automorphic representation of a classical group admits a Langlands functorial lift to an automorphic representation of GL(N),[10] where N depends on the classical group. Then, the Ramanujan bounds of W. Luo, Z. Rudnick and P. Sarnak[11] for GL(N) over number fields yield non-trivial bounds for the generalized Ramanujan conjecture of the classical groups.
• Symmetric powers for GL(2): Proofs of functoriality for the symmetric cube and for the symmetric fourth[12] powers of cuspidal automorphic representations of GL(2) were made possible by the Langlands–Shahidi method. Progress towards higher symmetric powers leads to the best known bounds towards the Ramanujan–Petersson conjecture for automorphic cusp forms of GL(2).
• Representations of p-adic groups: Applications involving Harish-Chandra μ functions (from the Plancherel formula) and to complementary series of p-adic reductive groups are possible. For example, GL(n) appears as the Siegel Levi subgroup of a classical group G. If π is a smooth irreducible ramified supercuspidal representation of GL(n, F) over a field F of p-adic numbers, and ${\displaystyle I(\pi )=I(0,\pi )}$ is irreducible, then:
1. ${\displaystyle I(s,\pi )}$ is irreducible and in the complementary series for 0 < s < 1;
2. ${\displaystyle I(1,\pi )}$ is reducible and has a unique generic non-supercuspidal discrete series subrepresentation;
3. ${\displaystyle I(s,\pi )}$ is irreducible and never in the complementary series for s > 1.
Here, ${\displaystyle I(s,\pi )}$ is obtained by unitary parabolic induction from
• ${\displaystyle \pi \otimes |\det |^{s}}$ if G = SO(2n), Sp(2n), or U(n+1, n);
• ${\displaystyle \pi \otimes |\det |^{s/2}}$ if G = SO(2n+1) or U(n, n).
## References
1. ^ F. Shahidi, On certain L-functions, American Journal of Mathematics 103 (1981), 297–355.
2. ^ R. P. Langlands, On the Functional Equations Satisfied by Eisenstein Series, Lecture Notes in Math., Vol. 544, Springer-Verlag, Berlin-Heidelberg-New York, 1976.
3. ^ F. Shahidi, A proof of Langlands conjecture on Plancherel measures; Complementary series for p-adic groups, Annals of Mathematics 132 (1990), 273–330.
4. ^ F. Shahidi, Eisenstein Series and Automorphic L-functions, Colloquium Publications, Vol. 58, American Mathematical Society, Providence, Rhode Island, 2010. ISBN 978-0-8218-4989-7
5. ^ R. P. Langlands, Euler Products, Yale Univ. Press, New Haven, 1971
6. ^ J. W. Cogdell and I. I. Piatetski–Shapiro, Converse theorems for GL(n), Publications Mathématiques de l'IHÉS 79 (1994), 157–214.
7. ^ S. Gelbart and F. Shahidi, Boundedness of automorphic L-functions in vertical strips, Journal of the American Mathematical Society, 14 (2001), 79–107.
8. ^ H. H. Kim and F. Shahidi, Functorial products for GL(2) × GL(3) and the symmetric cube for GL(2), Annals of Mathematics 155 (2002), 837–893.
9. ^ F. Shahidi, On nonvanishing of L-functions. Bull. Amer. Math. Soc. (N.S.) 2 (1980), no. 3, 462–464.
10. ^ J. W. Cogdell, H. H. Kim, I. I. Piatetski–Shapiro, and F. Shahidi, Functoriality for the classical groups, Publications Mathématiques de l'IHÉS 99 (2004), 163–233
11. ^ W. Luo, Z. Rudnick, and P. Sarnak, On the generalized Ramanujan conjecture for GL(n), Proceedings of Symposia in Pure Mathematics 66, part 2 (1999), 301–310.
12. ^ H. H. Kim, Functoriality for the exterior square of GL(4) and the symmetric fourth of GL(2), Journal of the American Mathematical Society 16 (2002), 131–183.
# How is any kind of structural reliability achieved if the probability of annual wind speed exceedance is 0.02?
Above is the definition of fundamental basic wind velocity from Eurocode 1. According to its instructions, wind loads on building are supposed to be calculated based on this velocity. It says that this velocity needs to have an annual risk of being exceeded of 2 %. But isn't this quite a risk? If we design our building's extreme wind loads according to this value, doesn't that mean our building has 2 % chance of failing? To me that seems quite high.
Also according to Eurocode, in reliability class RC2, we should have a reliability index of at least 4.7, which approximately corresponds to probability of failure less than $$10^{-6}$$. This is much lower than 2%.
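As a sanity check on the numbers quoted above: the reliability index β relates to the failure probability through the standard normal distribution, P_f = Φ(−β). A minimal sketch (plain Python, standard library only; the function name is my own):

```python
from math import erfc, sqrt

def failure_probability(beta):
    # P_f = Phi(-beta): the standard normal CDF evaluated at -beta,
    # expressed via the complementary error function.
    return 0.5 * erfc(beta / sqrt(2))

# Reliability class RC2: beta = 4.7
print(failure_probability(4.7))  # ~1.3e-6, i.e. on the order of 10^-6
```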
• " If we design our building's extreme wind loads according to this value, doesn't that mean our building has 2 % chance of failing?" No. Suppose the design velocity is 25 m/s. Do you think a velocity of 25.01 m/s is 100% certain to cause failure? Apr 17 '21 at 13:13
• @alephzero Well, maybe not 100%, but for that wind velocity we cannot say anything about the reliability anymore. If we only have one velocity to work with, which has 2% probability to be exceeded, then we know only that our design has 2% reliability. The point is, we don't know what the probability of 25.01 m/s wind has, and so we can't say anything about the reliability anymore, right? Apr 17 '21 at 13:32
• @S.Rotos I think you are confusing the reliability index in this. There is a very good example in this set of slides that explains the concept of reliability class. In order to calculate it you need to know the mean value and the standard deviation of the resistance of a structure and the load. Apr 17 '21 at 14:23
• @NMech I have actually read those slides. So here is what I understand: the reliability index is directly related to the probability of failure, which is calculated using the probability distributions of the load and the resistance. But to get the distribution of the load (here the wind load) you would need a more complete knowledge about the distribution of wind speeds than one value that has a probability of being exceeded of 2%, which in turn is something Eurocode does not give. Do you have more knowledge as to what kind of wind distribution Eurocode assumes? Apr 17 '21 at 15:09
• @S.Rotos No I don't have any knowledge on the wind distribution that is assumed in Eurocodes (It should be a Weibull and not a gaussian distribution which is very skewed). However, even if someone knew the distribution (which is very specific to the location), you'd have to incorporate that into the deviation of the total actions on the structure. So, the point, is that you can't directly relate the 2% probability to the probability of the Reliability class. Apr 17 '21 at 15:19
Buildings are typically designed with a 50-year lifespan. So a 2% yearly chance of passing the limit makes intuitive sense.
But you're right: that'd basically mean we expect the structure to collapse sometime during those 50 years.
So that load with a 2% yearly chance of being surpassed makes sense... but only as a starting point.
The thing to remember is that you aren't getting that wind load, calculating its internal stresses and then checking to see if your members resist that load.
After all, there's a bunch of other safety factors involved in that calculation. If you're using LRFD design procedures, then there are two factors: one for the applied load and one for the structure's strength. The factors themselves depend on the code you're using.
In Brazil, the relevant code is usually NBR 8681, which gives a safety factor of 1.4 for wind loads (when they're the primary load in a combination). For the structural strength, the code depends on the material. For steel, it's 1.35.
So, using these coefficients, you start out with a wind load $$W$$ representing a 2% chance. But you then effectively design your structure for a load of $$1.4 \cdot 1.35 W = 1.89W$$, almost double the initial value.
And given that the distribution of wind speeds is roughly normal (well, more Weibull, but close enough) and that we start at the tail of the distribution (2%), getting a load twice as large as that throws us deep into the tail, with a much lower chance of happening, basically ever.
If you read the Eurocode more carefully, it says "2% annually".
That should be interpreted as follows: in any given year, there is only a 2% chance of reaching that wind level. So one simple way to think about this is that such a wind occurs on average once in 50 years (which is considered a reasonable design scenario).
The more correct mathematical way is this: if there is a 2% chance in a year of exceeding that wind speed, then the annual chance of not exceeding it is 98%. Since each year is statistically independent of the next, the chance of not exceeding the wind speed for $$n$$ years equals $$0.98^n$$. So the chance that the wind is exceeded (as a 10-min average) at least once in 10 years is $$1-0.98^{10}=18\%$$ (approx 20%), after 20 years it increases to $$33\%$$, and after 35 years there is approximately a $$50\%$$ chance of such a 10-min average wind.
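That compounding is easy to reproduce. A minimal sketch (plain Python; the function name is my own):

```python
def prob_exceeded(p_annual, years):
    # Probability that an event with annual exceedance probability
    # p_annual occurs at least once within the given number of years,
    # assuming statistically independent years.
    return 1 - (1 - p_annual) ** years

for years in (10, 20, 35, 50):
    print(years, round(prob_exceeded(0.02, years), 3))
# 10 -> 0.183, 20 -> 0.332, 35 -> 0.507, 50 -> 0.636
```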
# Gust factor
Also keep in mind that this wind speed is measured as a 10-minute average (there are approximately 50,000 10-minute intervals in a year).
When you take a 10-minute average, you know that somewhere within those 10 minutes there are higher values. Usually a gust is measured as a 3-second average, and the gust factor to convert a 10-minute value to a 3-second value (depending on a multitude of factors: surface roughness, height from ground, location, surrounding topology) is in the order of 1.5 to 2. So a 10-minute average wind of 100 kph probably contains within it a gust of about 150 to 200 kph (which is insanely high), since the load quadruples if you double the wind speed.
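Because dynamic wind load scales with the square of the speed, the effect of a gust factor compounds quickly. A small sketch (plain Python; the 1.5 and 2.0 gust factors are the illustrative range from the text):

```python
def gust_speed(v_10min, gust_factor):
    # Convert a 10-minute mean wind speed to a short-duration gust speed.
    return v_10min * gust_factor

def load_ratio(v_ref, v):
    # Wind load is proportional to v**2, so the load ratio is (v / v_ref)**2.
    return (v / v_ref) ** 2

v = 100.0  # kph, 10-minute average
for gf in (1.5, 2.0):
    g = gust_speed(v, gf)
    print(gf, g, load_ratio(v, g))
# gust factor 1.5 -> 150 kph and 2.25x the load; 2.0 -> 200 kph and 4x the load
```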
## Accidental actions
Additionally, wind (like snow and fire) is considered an accidental (chance) action, and as such it gets combined with the dead loads and the other actions.
If I remember correctly (from the top of my head - this is for illustration purposes), if determine a load of $$P_D$$ for dead load, $$P_W$$ for wind and $$P_S$$ for snow you'd need to perform checks for the following combinations (for a class of structures):
1. Only dead loads: $$150\% P_D$$
2. Only dead loads and wind: $$100\% P_D + 100\% P_W$$
3. Only dead loads and snow: $$100\% P_D + 100\% P_S$$
4. Dead loads with wind as the primary action and snow as secondary: $$100\% P_D + 90\% P_W + 20\% P_S$$
5. Dead loads with snow as the primary action and wind as secondary: $$100\% P_D + 90\% P_S + 20\% P_W$$
The above percentages are for illustration purposes only (I'll try to find the correct combinations but it will take me a while).
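For illustration only (these are the placeholder percentages from the list above, not actual Eurocode combination factors, and the load values are made up), the governing design load is simply the envelope over the combinations:

```python
# Hypothetical characteristic loads (arbitrary units), for illustration only
P_D, P_W, P_S = 100.0, 40.0, 30.0

combinations = [
    1.50 * P_D,                             # dead load only
    1.00 * P_D + 1.00 * P_W,                # dead + wind
    1.00 * P_D + 1.00 * P_S,                # dead + snow
    1.00 * P_D + 0.90 * P_W + 0.20 * P_S,   # wind primary, snow secondary
    1.00 * P_D + 0.90 * P_S + 0.20 * P_W,   # snow primary, wind secondary
]
design_load = max(combinations)  # the structure is checked against the worst case
print(design_load)  # 150.0 for these example numbers
```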
• Hmm, I still am missing something. If we consider one year, we then have the probability of 2 % of exceeding the fundamental basic wind velocity. Is that not the probability overall? Or is the total probability in 50 years something different? Maybe I'm missing something about probability in general.. Apr 17 '21 at 14:00
• @S.Rotos I was adding the "correct" mathematical way to look into probability. Hopefully that would make more sense now Apr 17 '21 at 14:11
• @SolarMike you are right on both accounts a) there are slighthly less than 100k (87600 to be exact) 10 min intervals in a year (I missed a zero and the 10min, thanks for noticing), and b) hospitals or other critical structures (eg dams) don't apply to the Eurocode rules. Also, another reason, that they didn't design to Eurocode standards in New Orleans is that US doesn't have to (Europe technically also doesn't have to because Eurocode is considered a standard and not legislation). Apr 17 '21 at 14:16
• @NMech good job, removing my comments :) Apr 17 '21 at 14:18
• @S.Rotos "Maybe I'm missing something about probability in general." You are missing the fact that you can't treat one source of load as a "reason for failure" in isolation from everything else. Apr 17 '21 at 15:51
# Re: [R] Font settings in xfig
From: Scionforbai <scionforbai_at_gmail.com>
Date: Fri, 16 May 2008 14:56:57 +0200
> Is there a reason you are going through this route to get figures
> into LaTeX instead of using postscript (or PDF for pdflatex)?
To have LaTeX-formatted text printed onto your pdf figures, to include in LaTeX documents.
R cannot output 'special' text in xfig. You need to post-process the .fig file, according to the fig format
(http://www.xfig.org/userman/fig-format.html), replacing, on lines starting with '4', the correct values for font and font_flags. Using awk (assuming you work on Linux) this is straightforward :
awk '$1==4{$6=0;$9=2}{print}' R_FILE.fig > OUT.fig

Then, in order to obtain a pdf figure with LaTeX-formatted text, you need a simple driver.tex:

driver.tex:

\documentclass{article}
\usepackage{epsfig}
\usepackage{color} %(note: you might or might not need this)
\begin{document}
\pagestyle{empty}
\input{FILE.pstex_t}
\end{document}

Now you can go through the compilation:

fig2dev -L pstex OUT.fig > OUT.pstex
fig2dev -L pstex_t -p OUT.pstex OUT.fig > OUT.pstex_t
sed s/FILE/"OUT"/ driver.tex > ./OUT.tex
latex OUT.tex
dvips -E OUT.dvi -o OUT.eps
epstopdf OUT.eps

Of course you need R to write the correct LaTeX math strings (like $\sigma^2$). Hope this helps,
scionforbai
R-help_at_r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Received on Fri 16 May 2008 - 13:01:01 GMT
# Dokl. Akad. Nauk
MATHEMATICS
Spectral decompositions corresponding to an arbitrary nonnegative selfadjoint extension of Laplace's operator (Sh. A. Alimov, V. A. Il'in), p. 9
Everywhere diverging extended Hermite–Fejér interpolation processes (D. L. Berman), p. 13
Solution of an anticanonical equation with periodic coefficients (V. I. Derguzov), p. 17
On the oscillation of solutions of vector differential equations (Yu. I. Domshlak), p. 21
On the uniqueness of a solution of the Cauchy problem for general linear equations (N. Yu. Iokhvidovich), p. 24
The asymptotic behavior of the solution of a problem with initial jump for a system of quasilinear equations, with identical principal part, containing a small parameter (K. A. Kasymov), p. 28
Theorems of Phragmén–Lindelöf type for solutions of elliptic equations of high order (E. M. Landis), p. 32
Scattering theory and the "nonphysical sheet" for a system of ordinary differential equations (B. S. Pavlov), p. 36
Asymptotic error formulae for the solutions of systems of ordinary differential equations by functional numerical methods (J. V. Rakitskii), p. 40
Ill-posed problems and geometries of Banach spaces (V. P. Tanana), p. 43
Generalization of Ljapunov's second method and the investigation of some resonance problems (M. M. Khapaev), p. 46
Generation of periodic motions from the equilibrium state (G. Ya. Shafranov), p. 50
Spaces generated by genfunctions (I. V. Shragin), p. 53
Solution of a certain algebra problem that arises in control theory (V. A. Yakubovich), p. 57
AERODYNAMICS
On a representation of the Boltzmann equation (I. A. Ender, A. I. Ender), p. 61
PHYSICS
Temperature dependence of stimulated and nonstimulated current in polycrystalline $\mathrm{CdSe}$ at $77$–$273^\circ$ K (A. G. Gol'dman, M. M. Pyshnyi), p. 65
Rhodamine 6G laser with increased spectral intensity and retunable frequency (V. I. Kravchenko, A. A. Smirnov, M. S. Soskin), p. 69
On a principle underlying classical physics (Yu. I. Kulakov), p. 72
GEOPHYSICS
Experimental aerogravimetric measurements over the Caspian Sea (A. M. Lozinskaya, I. L. Yashayaev), p. 80
CRYSTALLOGRAPHY
A new modification of the scheelite structural type – the crystal structure of $\mathrm{Nd}_2\mathrm{WO}_6$ (T. M. Polyanskaya, S. V. Borisov, N. V. Belov), p. 83
Apatite-like phases in $\mathrm{MeO}$–$\mathrm{Nd}_2\mathrm{O}_3$–$\mathrm{SiO}_2$ systems (where $\mathrm{Me}=\mathrm{Mg},\mathrm{Ca},\mathrm{Ba}$) based on infrared spectroscopy data (A. M. Shevyakov, I. F. Andreev, T. A. Tunik, V. V. Shurukhin), p. 87
# What effect does cold weather have on helicopter performance?
Recently I was flying my RC helicopter around on a 40 $^\circ$F (4 $^\circ$C) day.
At some point in the flight the rotor just stopped moving in mid-air, and I only regained control near ground level. I never had any issues before in warmer weather, so I assume it's a cold-weather-related issue with this model.
This leaves me with two questions:
• Could something like this happen in a full-size helicopter like a Robinson R22?
• What effect does temperature have on helicopter performance in general?
• The rotors stopping abruptly is NOT something regular helicopters will suffer from normally, I'm pretty sure of that... You may have a problem with battery performance at low temperature, or if you were outdoors and the controller is infra-red, the signal may have been interrupted. Or there could be any sort of random glitch in the electronics.
– Andy
Oct 23 '15 at 12:08
• @Ethan If I were you, I would ask this questions on Electronics.SE since I'm pretty sure it's a battery- or electronics-related problem. Oct 23 '15 at 12:25
• @Ethan, I cleaned up the question to make it more to the point, and aviation related. Feel free to roll back the changes if you're not satisfied with the result. Oct 23 '15 at 12:31
• First you should try to understand whether the result was the rotor being blocked, or the engine having stopped. I doubt the reasons could be compared; an RC motor is electric (I assume) and the R22 is a piston engine or a turbine. What could stop one will likely not affect the other. If it is the rotor itself, due to a lubrication or thermal-contraction cause, the two mechanisms are not at all similarly built and are not affected the same way (my assumption). Don't you have the temperature operating range in the manual, by chance?
– mins
Oct 23 '15 at 19:48
• I think (also given the two closevotes on this question), you might want to ask these questions in a different manner: instead of "I played with my quadcopter and then X happened", your question might be better received by the community when you ask "I was wondering whether X could happen; here's what I found so far in my own research". To me, it sounds like you post whatever your quadcopter does without any prior research (whether this is true or not). Oct 24 '15 at 23:31
Cold weather definitely affects the performance of helicopters (as it does all aircraft), because it affects the density of the air they move through.
I will ignore icing so let's assume dry air.
The lift equation is:
$$L = \frac{1}{2} \rho V^2 S C_L$$
• $L$ = Lift
• $\rho$ = density of the air.
• $V$ = velocity of the aerofoil (rotor)
• $S$ = the wing area of the rotor
• $C_L$ = Coefficient of lift , which is determined by the type of rotor and angle of attack.
As you can see, lift is proportional to the density of the air and since density at any given altitude is inversely proportional to the temperature, the colder the air, the more dense it is. So, for a given rotor, spinning at a given speed at a given altitude, the colder the air, the more lift is produced.
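To put a number on this, at constant pressure the ideal-gas law gives ρ = p/(RT), so lift at fixed rotor speed and pitch scales as 1/T. A small sketch (plain Python; sea-level pressure and dry air are assumed, and the example temperatures are my own):

```python
R_AIR = 287.05   # specific gas constant for dry air, J/(kg*K)
P_SL = 101325.0  # standard sea-level pressure, Pa

def air_density(temp_c):
    # Ideal-gas density of dry air at constant (sea-level) pressure.
    return P_SL / (R_AIR * (temp_c + 273.15))

# Lift at fixed rotor speed and blade pitch is proportional to density,
# so the density ratio is also the lift ratio:
ratio = air_density(-10.0) / air_density(30.0)
print(round(ratio, 3))  # 1.152: about 15% more lift at -10 C than at +30 C
```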
Engines also operate more efficiently when the air is colder, since the density is higher and the charge of air/fuel introduced into the engine is greater.
Of course, this does not apply to your RC helicopter apart from some effects on the efficiency of the batteries and motors (which may perform more poorly at lower temperatures).
So in general, the colder it is, the higher the performance. In most helicopters, the difference in performance on a cold, dry winter day is very noticeable and much better than on a humid, hot summer day.
It's also worth understanding the properties of air. The colder the air, the less moisture it can hold, and moisture in the air also lowers density and hence performance.
The problems you encountered with your RC model aren't problems with helicopters in general.
Colder weather does not make you more susceptible to vortex ring state (aside from some minor side effects): You would still only descend into the vortex if you did not pilot the helicopter correctly. (Note that you cannot enter vortex ring state with the rotors stopped since they are not producing a vortex. Also note that on a real helicopter, if the rotors ever stop, they will not start again no matter what you do.)
BTW, I'm being pedantic, but helicopters do not have propellers. Technically, they are the same thing, but asking a helicopter pilot about the propeller annoys them, even if they don't show it. It's as bad as calling a helicopter a "chopper". Only Arnie and Chuck get to call helicopters "choppers". And that's only because helicopter pilots are universally weedy and would get stomped by Arnie or Chuck if they argued.
• I tweaked this a little to line it up with the edits to the question, if I butchered any of the helicopter-specific stuff feel free to roll it back :) Oct 23 '15 at 16:37
• What is vortex ring state. Is that the wake vortex coming from the propellers. The helicopter started to shake like crazy when I descended it into its own downwash too fast Oct 23 '15 at 18:48
• @Ethan Shaking is one of the symptoms of VRS. See this Oct 23 '15 at 20:52
First, I don't think cold weather had anything to do the stopping of rotor. Probably a malfunction.
However, Helicopters do face a number of problems during flight in cold conditions
• Ice accumulation in leading edges- Very few helicopters have leading edge anti-icing systems and are certified for flying in cold conditions (like North Sea).
• The possibility of misting in windshield.
• Whiteout conditions, usually while taking off and landing.
• Ice/Snow ingestion in engines.
• Icing and blockage of fuel lines etc.
Most of these have to do with icing and not with cold conditions as such. In fact, as lift is directly proportional to density (L $\propto$ $\rho$), helicopters have better performance in cold weather (as air density is higher) and rotors produce more lift. Engine power also usually increases. It is in hot-and-high conditions that helicopter lift production is critically affected.
I'm adding this as the question has been cleaned up and changed a bit (effect of temperature in helicopter performance).
Temperature does affect the performance of the helicopter, as the density varies with temperature. The performance of the helicopter depends on three main atmospheric parameters (assuming all other helicopter parameters are equal):
• Density altitude (air density)
• Gross weight
• Wind velocity (during takeoff, hovering, and landing)
The lift produced by the Helicopter blades is given by,
$L \ = \ \frac{1}{2} C_{L} \rho V^{2} S$
Now, in helicopters the blade area $S$ and the blade velocity $V$ (set by the rotor speed) are fixed. So, as the density varies (due to altitude or weather), the lift coefficient $C_{L}$ has to be adjusted by varying the angle of attack of the blade.
As the density decreases (due to hotter temperature or higher altitude), the required angle of attack (for producing same lift) increases. However, the maximum angle is limited by the blade stall angle. The net result is that the helicopter's ability to hover (or fly) at a particular altitude (or weight) is reduced due to change in temperature.
The following image shows the hover chart for the CH-47D. It is obvious from the chart that hover performance improves significantly as the temperature falls, at least down to 0$^{\circ}$C.
Source: globalsecurity.org
• On a model, yes, cold weather might have something to do with propellers stopping: batteries don't like cold. Though +4°C is not that cold. Oct 23 '15 at 12:20
• @JanHudec I'm assuming that the batteries are LiPo or Li-ion. They should usually operate at 4°C, though I can't be sure. Also, Ethan says that he regained control in a few seconds. I'm not sure if its battery problem Oct 23 '15 at 12:30
• Indeed, 4°C does not sound like the batteries should already have problems. Oct 23 '15 at 12:39
Besides the aerodynamic aspects of cold temperatues, there is another aspect concerning the thermodynamics of the engine. Power and efficacy are better the colder the air is.
However, in your RC model, maybe the clearance between some moving components is too small so they gall as the material expands at higher temperatures.
# Mass of grain of sand
#### scc
1. Homework Statement
Fine grains of beach sand are assumed to be spheres of radius 38.1 micrometers (can't figure out how to type the micro sign). These grains are made of silicon dioxide, which has a density of 2600 kilograms/meter^3. What is the mass in kg?
2. Homework Equations
3. The Attempt at a Solution
I input 3 different answers and they're all wrong (603.2 kg and others near that - I know I'm way off). I'm not sure at all how to calculate. Please help!
#### Borek
Mentor
Show how you got your answers, it is hard to help not knowing what you are doing wrong.
#### whatsoever
Since you don't tell how you got 603.2 kg, I tried guessing where you went wrong... and I guess
that when you converted the density from kg/m^3 to kg/$$\mu$$m^3, you multiplied it by 10^-6 when you were supposed to multiply it by (10^-6)^3.
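Working entirely in SI units avoids that conversion trap. A minimal sketch (plain Python; variable names are my own):

```python
import math

radius_m = 38.1e-6   # 38.1 micrometers, converted to meters
density = 2600.0     # silicon dioxide, kg/m^3

volume = (4.0 / 3.0) * math.pi * radius_m ** 3   # sphere volume, m^3
mass = density * volume                          # kg

print(f"{mass:.3g} kg")  # ~6.02e-10 kg, i.e. well under a microgram
```

Note that multiplying the density by 10^-6 instead of (10^-6)^3 inflates the result by a factor of 10^12, which lands almost exactly on the 603.2 kg reported in the question.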
|
{}
|
# Is Latex Flammable? And Biodegradable? (Truth Revealed!)
If you choose the prescription option, you are responsible for ensuring the safety of your bedroom setting. Although many latex mattress websites say that latex does not burn quickly, “latex and urethanes both burn hot,” and “latex will self-ignite” under high-temperature settings, according to an interview with Mr Holt.
Latex is one of the most common rubber byproducts, composed of around 55% water and 40% rubber solids. Latex is the soft white substance beneath the tree's bark that is collected during rubber tapping.
People have questioned whether latex is a natural material due to its strength. Latex has a synthetic feel when utilized in products such as rubber gloves, tyres, and tennis shoes. If you look at these items, you’d think they’re made of synthetic rubber.
However, this does not rule out the existence of synthetic latexes. Most synthetic latexes are made from petroleum-based compounds that are clumped, dried, and shipped to a production factory.
The environmental impact of latex is being debated. Is latex biodegradable and flammable? Is there any negative impact on the environment? Dive in, and we’ll address these concerns in this blog article.
## Is Latex Flammable?
Everyone is concerned about their health and safety, which is why many of the goods we once used are no longer in use. We stop using materials detrimental to us and the environment as we discover better ones.
The flammability of a product is a serious concern. Knowing a product’s flammability property defines and specifies what you can use it for and where you can use it. Is latex flammable?
Because latex is made from rubber, it shares the properties of rubber, so we may determine the qualities of latex by looking at the properties of rubber. Rubber (organic and synthetic latex) has a high ignition temperature of 260 to 313 degrees Celsius (500 to 595 degrees Fahrenheit), making it non-flammable.
However, if rubber (latex) begins to burn, it might be difficult to extinguish. It emits highly poisonous smoke that contains deadly chemicals damaging to humans, creatures, and the environment.
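For reference, the temperature conversion above follows the standard formula F = C × 9/5 + 32; a quick sketch:

```python
def c_to_f(celsius):
    """Convert Celsius to Fahrenheit: F = C * 9/5 + 32."""
    return celsius * 9.0 / 5.0 + 32.0

print(c_to_f(260))            # 500.0
print(round(c_to_f(313), 1))  # 595.4
```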
## Is Latex Biodegradable?

When we say a material is biodegradable, we mean that bacteria and other natural organisms can break it down without harming the environment. When biodegradable materials come into contact with living organisms at the disposal site, they may partially or completely decompose.
Biodegradable products have numerous benefits for human health and environmental safety, ranging from lowering environmental pollutants to fertilising and increasing soil health.
So, now that we know how valuable biodegradable items are, we need to know if latex is a biodegradable product.
Organic latex is the only type that will biodegrade; latex that is natural, synthetic, or blended will not. You may be asking why natural latex does not biodegrade, and what the distinction between organic and natural latex is.
Both natural and organic latex are obtained from rubber trees; however, natural latex is filtered when harvested. Some acids are added to help it clump, making it simpler to roll into sheets with a mill.
After the latex is rolled into sheets, it is pre-vulcanized by heating and chemical addition; this procedure makes it easy to use and carry. On the other hand, organic latex is entirely gathered with no chemicals added.
Furthermore, organic latex is grown differently than natural latex, and stringent organic methods govern organic latex growing. Organic latex is sourced from isolated tree plantations that do not utilise pesticides or other chemicals on the trees.
Natural latex’s characteristics are altered by chemicals, rendering it non-biodegradable, from the insecticides used to protect the rubber trees to the chemicals utilised in latex production. These have negative consequences; therefore, the latex will not biodegrade.
Synthetic or mixed latex is entirely made of petroleum-based compounds. These are not biodegradable in any way. Because they lack organic content, typically, organisms cannot break them down.
## Is Latex Bad For the Environment?
One advantage of organic latex is that it is biodegradable and does not demand the destruction of any plant or animal to be created. Because no trees are cut down during the latex harvesting process, it is both sustainable and carbon-negative.
Many people have pondered whether latex has adverse environmental effects. We’ll get started straight away.
Latex is a wholly organic and natural product of water and several natural proteins. The tree uses the sap to protect itself from health hazards such as mould, insects, fungus, and bacterial illnesses. Because the tree replenishes itself, harvesting does not subject it to danger.
Latex is safe for the environment. First and foremost, the harvesting procedure is sustainable and environmentally friendly. The bark is cut, causing sap to seep out, and though a significant portion of the sap is harvested, the tree has plenty left.
Natural and organic latex are not harmful to the environment, but synthetic latex is. The chemicals used to manufacture synthetic latex produce toxins in quantities that can make an environment uninhabitable, because none of those chemicals is sustainable.
## How Long Does Latex Take To Biodegrade?
The time required for latex to biodegrade varies: organic, natural, and synthetic latex all biodegrade at different rates. Organic latex biodegrades in the shortest time, between one and four years.
Natural latex takes much longer; for example, a condom composed of natural latex will take fifty to one hundred years to biodegrade entirely. A synthetic latex condom of the same mass will take around 500 years to biodegrade entirely.
However, if any of the three latexes have come into touch with impurities or poisons, it will take longer to biodegrade. So, if organic latex is mixed with chemicals, biodegradation will take more than four years.
## Conclusion
Latex and rubber have long been used interchangeably, yet there are some distinctions. Rubber is the finished product, whereas latex is the liquid form obtained from rubber trees.
This blog post should have shown you how to use latex properly and dispose of it without harming the environment.
|
{}
|
Surprising behavior of Part[ ]
Look at this list
list={1,2,3,4}
Obviously
list[[1, 2]]
throws the error:
During evaluation of Part::partd: Part specification {1,2,3,4}[[1,2]] is longer than depth of object.
However:
list[[1,All]]
yields (Mathematica 12 on Windows)
Integer[]
which does not make much sense to me. Moreover, weird constructs like list[[All,All,All,All,All]] do not throw an error.
Can this (at least to me) strange behaviour of the basic command Part[] be explained? (Needless to say, this is just a simple demonstration of the effect, but it occurred during execution of a larger program, with strange consequences.)
EDIT: I just realized that this question is closely related to mine and adds some more aspects to it.
• Indeed a bit irregular, but not completely ununderstandable ;) – cf. list[[1, 0]]. – Henrik Schumacher Apr 29 '19 at 12:46
• There are similar oddities with Span: Consider e[[1 ;; -1]] and e[[2 ;; -1]], with e being 1, f[], and f[1]. – Michael E2 Apr 29 '19 at 14:24
Compare:
17[[All]]
(* Integer[] *)
The evaluation of expr[[All]] should result in h[p1, p2,...] where h is Head[expr] and p1, p2,... is a possibly empty list (not List) of all the parts of expr. An Integer has no parts (because it is atomic), but its head is Integer. Hence we get Integer with an empty argument list.
• Well, I do accept that 17[[All]] returns an Integer[ ], but I am surprised that it does not throw a message, as e.g. xxx[[All]] returns Symbol[ ] but does throw a message. 1.2[[All]] does not as well. – Michael Weyrauch Apr 29 '19 at 15:09
• @MichaelWeyrauch xxx[[All]] does not throw a message. It is Symbol[] that throws the message. It's not Part. – Szabolcs Apr 29 '19 at 17:06
Consider the following example:
We have a list of 3-tuples:
list = {{1,2,3}, {5,4,3}, {9,7,5}, {0,3,8}}
Take the first element of each sublist.
list[[All, 1]]
(* {1,5,9,0} *)
If the list had 4 elements, the result will have 4 elements too. In general, if the list has $n$ elements, the result should also have $n$ elements. But what if the list is empty, i.e. $n = 0$? It would be reasonable to get a result with 0 elements, of course. But for that to work, Part needs to be a bit flexible; after all, we can't take the 1st element of nothing ... or can we?
list = {}
list[[All, 1]]
(* {} *)
Well, it works. And it is extremely useful because I do not need to have extra code for these special cases.
This explains why it is okay to give Part more indices than the number of levels the expression has, and why you can have 10 Alls in there. But you could also have list[[All, All, 1, 1]], not just list[[All, All, All, All]].
As for why 1[[All]] is Integer[], Michael has already explained it. My answer is only for explaining why you can have many Alls in there.
• I just wrote code that relies on this yesterday. I find it very useful. – Szabolcs Apr 29 '19 at 17:05
|
{}
|
# A note on zeros of bounded holomorphic functions in weakly pseudoconvex domains in $\mathbb{C}^2$

Bull. Korean Math. Soc. 2017, Vol. 54, No. 3, 993-1002. https://doi.org/10.4134/BKMS.b160429
Published online January 9, 2017. Printed May 31, 2017.

Ly Kim Ha, Vietnam National University

Abstract: Let $\Omega$ be a bounded, uniformly totally pseudoconvex domain in $\mathbb{C}^2$ with smooth boundary $b\Omega$, and assume that $\Omega$ satisfies the negative $\bar\partial$ property. Let $M$ be a positive, finite area divisor of $\Omega$. In this paper, we prove that if $\Omega$ admits a maximal type $F$ and the Čech cohomology class of the second order vanishes in $\Omega$, then there is a bounded holomorphic function in $\Omega$ whose zero set is $M$. The proof is based on the method given by Shaw \cite{Sha89}.

Keywords: pseudoconvex domains, Poincar\'e-Lelong equation, zero set, finite area, $\bar{\partial}_b$-operator, Henkin solution

MSC numbers: 32W05, 32W10, 32W50, 32A26, 32A35, 32A60, 32F18, 32T25, 32U40
|
{}
|
# Groups and Rings course suggestion
I am a physics undergrad, looking to explore pure maths.
I apologize if this question is not appropriate for MathSE, but I couldn't resist posting it. Feel free to close it down.
I haven't taken any formal course in maths until now, but I have done some linear algebra, representation theory, topology, and differential geometry by self-study from mathematical physics textbooks. I am thinking of taking a masters-level course on groups and rings at my uni. The course structure is here.
http://mat.iitm.ac.in/msc%20course%20content/ma532.html
Could you please advise me whether it would be wise to directly take an MSc-level algebra course with no prior exposure to a formal pure-math course? This may sound immature, but I can physically put in only maybe 7-8 hours a week outside class for this course. I have an ongoing research project, 6 theory courses, and independent QFT study. I am really interested in learning this, but I have no idea if it would be too ambitious.
In particular my question is :
1. How many hours of effort would I require to put for this course (check the link)?
2. Would I be able to cope with the rigour and the problems? I am really not aware of the level of rigour in mathematics, but I really enjoy how simple definitions can lead to beautiful results and help to express things clearly.
• btw if you end up taking the course, consider checking out Herstein (as mentioned on the website) - I found it super helpful when I was learning this stuff! – uncookedfalcon Jan 7 '13 at 18:13
• What's the difference between his two books, 'Topics in Algebra' and 'Abstract Algebra'? Which of them are you recommending? – user23238 Jan 8 '13 at 3:50
• oh yeah a fair point - I was recommending 'Topics in Algebra', and haven't looked at the other – uncookedfalcon Jan 8 '13 at 6:38
I may be going against the grain here, but this is a bad idea.
If you have never taken a formal mathematics course you probably aren't going to be familiar enough with proof writing to attempt a second-year algebra course. The syllabus talks about a review of groups and rings, which means you should probably have worked with them before. It would be one thing if you could put a lot of time into it, but only $5$ to $6$ hours a week? I've had single problems take me that long in abstract algebra courses. If you're really taking $6$ other courses and doing research, there's no way you're going from zero to master's-level algebra.
• Thanks a lot for the reply. This seems to be the popular advice given by by peers and friends. I just wanted to be sure, by asking the Math SE community. I don't want to regret later not taking this course, and I am pretty sure I won't be able to take this course later in the future during undergrad as the slot is usually taken up. – user23238 Jan 7 '13 at 18:56
• @Alexander : To go against your answer, group and ring theory is not really master's level in algebra. I think the reason why they call it "review" is because they're going to skip some stuff in order to combine a year of material in one semester. Notice how there is nothing else but "reviews" in the course content. I agree he should not take the course for his sanity (as said in my answer). The rest of your answer is totally relevant. +1 – Patrick Da Silva Jan 7 '13 at 19:36
• @ramanujan_dirac Believe me, it hurts to recommend that somebody not take a course in my favorite field of mathematics- it would be a shame to miss the opportunity to take group and ring theory as they are both beautiful subjects. Maybe you could audit, or just sit in on lectures? – Alexander Gruber Jan 7 '13 at 22:59
• I have decided to take the course. :) – user23238 Jan 9 '13 at 16:45
• Would you recommend Dummit&Foote, Herstein or Artin? I foind Artin the best and most logically written. – user23238 Jan 9 '13 at 16:46
The course you are taking, by the contents described on your link, would be the two algebra courses given in my pure math program for undergrads combined together (one concerning group theory, the other one concerning ring theory). If you are thinking about doing this in one semester, I suggest a little maturity in algebra, and linear algebra is definitely not enough.
I know some physicists myself who did a math representation-theory course after doing a lot of physics (including the study of characters as it is done in physics courses), and their first reaction was that "they had never seen this before". In other words, they had studied characters the way it's done in physics, which is taught quite differently from the way it's done in math, for the simple reason that physicists use characters while mathematicians prove statements about them, which is a different point of view.
You could also ask your teacher about it; if the course is targeted at mathematicians, I would say you would spend a huge amount of time on it and wouldn't suggest you do that. If the course is targeted at physicists, then you can have a shot.
If you have never done "proof-style math" (which for mathematicians is just "math"), then don't even think about taking the course; you will spend too much time on it (you might be able to do it, but you would pay the price). You can still use the reference they give (Dummit & Foote) to read about it, as it is an excellent book, or ask the teacher if you can just sit in on the course and listen, do the exercises at home when you can, and show them to your teacher if he's friendly enough to help you learn.
Hope that helps,
• Thanks, this is a really well thought answer, as you know how differently physicists and mathematicians do math.This answer really helps. Though, the (irrational) part of me wanting to do the course, is extremely stubborn at this moment. I would be great if I get some more illuminating answers. – user23238 Jan 7 '13 at 18:42
• @ramanujan : I would correct your comment by saying that physicists don't do math, they do physics. I never saw a physicist prove a theorem in a physics course... (in the sense that they don't do proofs). Most of the time they make sense out of things by computing stuff and by giving the main idea. – Patrick Da Silva Jan 7 '13 at 19:33
• Yeah. I agree with that. – user23238 Jan 7 '13 at 19:50
I agree with Jasper's comment regarding the relative difficulty of subject material.
However since, to your credit, you recognize the benefits of becoming more skillful in proofs, perhaps I could make the following suggestion.
It might be beneficial to spend a period of time cultivating that skill. It will serve you immensely in the future. So think of it as training. Toward that end, here are two suggestions for self-study (pick one) that will be very useful:
-- This is a link to a download of a real analysis course taught by Fields Medal winner Vaughan Jones. He is a master and presents the material so as to cultivate your ability to do proofs. Real analysis is a key element in math study and is important if you want to go on to functional analysis:
|
{}
|
Definition:Isolated Point (Real Analysis)
Definition
Let $S \subseteq \R$ be a subset of the set of real numbers.
Let $\alpha \in S$.
Then $\alpha$ is an isolated point of $S$ if and only if there exists an open interval of $\R$ whose midpoint is $\alpha$ which contains no points of $S$ except $\alpha$:
$\exists \epsilon \in \R_{>0}: \openint {\alpha - \epsilon} {\alpha + \epsilon} \cap S = \set \alpha$
By definition of $\epsilon$-neighborhood in the context of the real number line under the usual (Euclidean) metric:
$\map {N_\epsilon} \alpha := \openint {\alpha - \epsilon} {\alpha + \epsilon}$
it can be seen that this definition is compatible with that for a metric space:
$\exists \epsilon \in \R_{>0}: \map {N_\epsilon} \alpha \cap S = \set \alpha$
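A worked example may help fix ideas (an illustration added here, not part of the formal definition): let $S = \set 0 \cup \set {\dfrac 1 n : n \in \N_{>0} }$. Each point $\dfrac 1 n$ is an isolated point of $S$: taking $\epsilon = \dfrac 1 {n \paren {n + 1} }$ gives $\openint {\dfrac 1 n - \epsilon} {\dfrac 1 n + \epsilon} \cap S = \set {\dfrac 1 n}$, since the nearest point of $S$ below $\dfrac 1 n$ is $\dfrac 1 {n + 1}$, at distance exactly $\epsilon$, which the open interval excludes, and any point of $S$ above $\dfrac 1 n$ is at distance at least $\dfrac 1 {n \paren {n - 1} } > \epsilon$. However, $0$ is not an isolated point of $S$: for every $\epsilon \in \R_{>0}$, the interval $\openint {-\epsilon} \epsilon$ contains $\dfrac 1 n$ for all $n > \dfrac 1 \epsilon$.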
|
{}
|
### Top 10 Arxiv Papers Today in Machine Learning
##### #1. Analysis of Irregular Spatial Data with Machine Learning: Classification of Building Patterns with a Graph Convolutional Neural Network
###### Xiongfeng Yan, Tinghua Ai
Machine learning methods such as convolutional neural networks (CNNs) are becoming an integral part of scientific research in many disciplines, but spatial vector data often fail to be analyzed with these powerful learning methods because of their irregularities. With the aid of the graph Fourier transform and the convolution theorem, it is possible to express convolution as a point-wise product in the Fourier domain and construct a CNN learning architecture on graphs for the analysis of irregular spatial data. In this study, we used the classification of building patterns as a case study to test this method; experiments showed that the method achieves outstanding results in identifying regular and irregular patterns and improves significantly on other methods.
more | pdf | html
###### Tweets
arxivml: "Analysis of Irregular Spatial Data with Machine Learning: Classification of Building Patterns with a Graph Convolu… https://t.co/XwMdLOb5RC
nmfeeds: [O] https://t.co/PhJx9JIFPv Analysis of Irregular Spatial Data with Machine Learning: Classification of Building Patterns ...
Memoirs: Analysis of Irregular Spatial Data with Machine Learning: Classification of Building Patterns with a Graph Convolutional Neural Network. https://t.co/StsSt7Bv5c
###### Other stats
Sample Sizes : None.
Authors: 2
Total Words: 2872
Unique Words: 1081
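The graph-convolution idea in paper #1 can be made concrete with a minimal spatial sketch: one layer of mean aggregation over each node and its neighbours. This is the message-passing view rather than the paper's spectral (graph Fourier) construction, and the graph, features, and function name below are all made up for illustration:

```python
# Hypothetical mini-graph: adjacency lists and one scalar feature per node.
adj = {0: [1], 1: [0, 2], 2: [1]}
feats = {0: 1.0, 1: 2.0, 2: 3.0}

def gcn_mean_layer(adj, feats):
    """One graph-convolution step: average each node's feature with its neighbours'."""
    out = {}
    for node, nbrs in adj.items():
        vals = [feats[node]] + [feats[n] for n in nbrs]  # self-loop plus neighbours
        out[node] = sum(vals) / len(vals)                # mean aggregation
    return out

print(gcn_mean_layer(adj, feats))  # {0: 1.5, 1: 2.0, 2: 2.5}
```

Stacking such layers (with learned weights in place of the plain mean) is what lets irregular spatial data be processed CNN-style.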
##### #2. Receiver Operating Characteristic Curves and Confidence Bands for Support Vector Machines
###### Daniel J. Luckett, Eric B. Laber, Samer S. El-Kamary, Cheng Fan, Ravi Jhaveri, Charles M. Perou, Fatma M. Shebl, Michael R. Kosorok
Many problems that appear in biomedical decision making, such as diagnosing disease and predicting response to treatment, can be expressed as binary classification problems. The costs of false positives and false negatives vary across application domains and receiver operating characteristic (ROC) curves provide a visual representation of this trade-off. Nonparametric estimators for the ROC curve, such as a weighted support vector machine (SVM), are desirable because they are robust to model misspecification. While weighted SVMs have great potential for estimating ROC curves, their theoretical properties were heretofore underdeveloped. We propose a method for constructing confidence bands for the SVM ROC curve and provide the theoretical justification for the SVM ROC curve by showing that the risk function of the estimated decision rule is uniformly consistent across the weight parameter. We demonstrate the proposed confidence band method and the superior sensitivity and specificity of the weighted SVM compared to commonly used...
more | pdf | html
###### Tweets
arxiv_org: Receiver Operating Characteristic Curves and Confidence Bands for Support Vector Machines. https://t.co/DrZTs3WR4R https://t.co/g61WrLS9cA
HubBucket: RT @arxiv_org: Receiver Operating Characteristic Curves and Confidence Bands for Support Vector Machines. https://t.co/DrZTs3WR4R https://t…
###### Other stats
Sample Sizes : None.
Authors: 8
Total Words: 11320
Unique Words: 2363
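The false-positive/true-positive trade-off that paper #2 studies can be illustrated with a plain empirical ROC computation. This is a pure-Python sketch over hypothetical scores and labels, not the paper's weighted-SVM estimator:

```python
def roc_points(labels, scores):
    """Empirical ROC: sweep thresholds over the scores, highest first,
    and record (false positive rate, true positive rate) after each step."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for score, label in pairs:
        if label == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

# Perfectly separated toy data: positives always score above negatives.
labels = [1, 1, 0, 0]
scores = [0.9, 0.8, 0.3, 0.1]
print(roc_points(labels, scores))
# [(0.0, 0.0), (0.0, 0.5), (0.0, 1.0), (0.5, 1.0), (1.0, 1.0)]
```

The curve hugging the top-left corner here reflects perfect separation; the confidence bands in the paper quantify the uncertainty of such a curve estimated from data.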
##### #3. Quantification under prior probability shift: the ratio estimator and its extensions
###### Afonso Fernandes Vaz, Rafael Izbicki, Rafael Bassi Stern
The quantification problem consists of determining the prevalence of a given label in a target population. However, one often has access to the labels in a sample from the training population but not in the target population. A common assumption in this situation is that of prior probability shift, that is, once the labels are known, the distribution of the features is the same in the training and target populations. In this paper, we derive a new lower bound for the risk of the quantification problem under the prior shift assumption. Complementing this lower bound, we present a new approximately minimax class of estimators, ratio estimators, which generalize several previous proposals in the literature. Using a weaker version of the prior shift assumption, which can be tested, we show that ratio estimators can be used to build confidence intervals for the quantification problem. We also extend the ratio estimator so that it can: (i) incorporate labels from the target population, when they are available and (ii) estimate how the...
more | pdf | html
###### Tweets
arxiv_org: Quantification under prior probability shift: the ratio estimator and its extensions. https://t.co/vCtSDqJ2XW https://t.co/FIF7eHto8V
HubBucket: RT @arxiv_org: Quantification under prior probability shift: the ratio estimator and its extensions. https://t.co/vCtSDqJ2XW https://t.co/F…
udmrzn: RT @arxiv_org: Quantification under prior probability shift: the ratio estimator and its extensions. https://t.co/vCtSDqJ2XW https://t.co/F…
###### Other stats
Sample Sizes : None.
Authors: 3
Total Words: 11908
Unique Words: 2193
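As context for the ratio estimators that paper #3 generalizes, the classic "adjusted count" ratio estimator under prior probability shift can be sketched as follows. The numbers and function name are made up for illustration; this is not the paper's method:

```python
def ratio_estimator(observed_pos_rate, tpr, fpr):
    """Classic adjusted-count estimate of target prevalence under prior shift.

    Under prior shift, the rate of predicted positives in the target population is
        q = p * tpr + (1 - p) * fpr,
    so solving for the prevalence p gives (q - fpr) / (tpr - fpr), clipped to [0, 1].
    """
    p = (observed_pos_rate - fpr) / (tpr - fpr)
    return min(1.0, max(0.0, p))

# Classifier with tpr = 0.9 and fpr = 0.1 measured on labeled training data;
# in the unlabeled target sample, 34% of instances are predicted positive.
print(round(ratio_estimator(0.34, tpr=0.9, fpr=0.1), 6))  # 0.3
```

The clipping step is one reason confidence intervals for such estimators need care near prevalences of 0 or 1.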
##### #4. Efficient Bayesian Inference of Sigmoidal Gaussian Cox Processes
###### Christian Donner, Manfred Opper
We present an approximate Bayesian inference approach for estimating the intensity of an inhomogeneous Poisson process, where the intensity function is modelled using a Gaussian process (GP) prior via a sigmoid link function. Augmenting the model using a latent marked Poisson process and P\'olya-Gamma random variables, we obtain a representation of the likelihood which is conjugate to the GP prior. We approximate the posterior using a free-form mean field approximation together with the framework of sparse GPs. Furthermore, as an alternative approximation we suggest a sparse Laplace approximation of the posterior, for which an efficient expectation-maximisation algorithm is derived to find the posterior's mode. Results of both algorithms compare well with exact inference obtained by a Markov chain Monte Carlo sampler and the standard variational Gauss approach, while being one order of magnitude faster.
more | pdf | html
###### Tweets
hiropon_matsu: "Efficient Bayesian Inference of Sigmoidal Gaussian Cox Processes" https://t.co/MQDtFOAtzE
###### Other stats
Sample Sizes : None.
Authors: 2
Total Words: 11455
Unique Words: 2673
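The sigmoid-link intensity model in paper #4 can be simulated with the standard thinning (Lewis-Shedler) sampler for an inhomogeneous Poisson process. This is a hedged sketch with a made-up latent function f, illustrating the generative model only, not the paper's inference method:

```python
import math
import random

def sample_sigmoidal_cox(f, lam_max, horizon, rng):
    """Thinning sampler for intensity lambda(t) = lam_max * sigmoid(f(t)) <= lam_max.

    Draw candidate points from a homogeneous Poisson process of rate lam_max,
    then keep each candidate t with probability lambda(t) / lam_max = sigmoid(f(t)).
    """
    points = []
    t = rng.expovariate(lam_max)
    while t < horizon:
        sigmoid = 1.0 / (1.0 + math.exp(-f(t)))
        if rng.random() < sigmoid:
            points.append(t)
        t += rng.expovariate(lam_max)
    return points

rng = random.Random(0)
# With f(t) = 50 the sigmoid saturates at 1, so every candidate is kept.
dense = sample_sigmoidal_cox(lambda t: 50.0, lam_max=2.0, horizon=10.0, rng=rng)
# With f(t) = -50 the intensity is essentially zero, so nothing is kept.
sparse = sample_sigmoidal_cox(lambda t: -50.0, lam_max=2.0, horizon=10.0, rng=rng)
```

In the paper the latent f is given a GP prior and inferred from observed points; here it is fixed by hand purely to show the role of the link function.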
##### #5. Dynamic Assortment Selection under the Nested Logit Models
###### Xi Chen, Yining Wang, Yuan Zhou
We study a stylized dynamic assortment planning problem during a selling season of finite length $T$, by considering a nested multinomial logit model with $M$ nests and $N$ items per nest. Our policy simultaneously learns customers' choice behavior and makes dynamic decisions on assortments based on the current knowledge. It achieves the regret at the order of $\tilde{O}(\sqrt{MNT}+MN^2)$, where $M$ is the number of nests and $N$ is the number of products in each nest. We further provide a lower bound result of $\Omega(\sqrt{MT})$, which shows the optimality of the upper bound when $T>M$ and $N$ is small. However, the $N^2$ term in the upper bound is not ideal for applications where $N$ is large as compared to $T$. To address this issue, we further generalize our first policy by introducing a discretization technique, which leads to a regret of $\tilde{O}(\sqrt{M}T^{2/3}+MNT^{1/3})$ with a specific choice of discretization granularity. It improves the previous regret bound whenever $N>T^{1/3}$. We provide numerical results to...
more | pdf | html
###### Other stats
Sample Sizes : None.
Authors: 3
Total Words: 17873
Unique Words: 3153
##### #6. Uplift Modeling from Separate Labels
###### Ikko Yamane, Florian Yger, Jamal Atif, Masashi Sugiyama
Uplift modeling is aimed at estimating the incremental impact of an action on an individual's behavior, which is useful in various application domains such as targeted marketing (advertisement campaigns) and personalized medicine (medical treatments). Conventional methods of uplift modeling require every instance to be jointly equipped with two types of labels: the taken action and its outcome. However, obtaining two labels for each instance at the same time is difficult or expensive in many real-world problems. In this paper, we propose a novel method of uplift modeling that is applicable to a more practical setting where only one type of labels is available for each instance. We show a generalization error bound for the proposed method and demonstrate its effectiveness through experiments.
more | pdf | html
###### Other stats
Sample Sizes : None.
Authors: 4
Total Words: 9308
Unique Words: 2424
##### #7. Exact information propagation through fully-connected feed forward neural networks
###### Rebekka Burkholz, Alina Dubatovka
Neural network ensembles at initialisation give rise to the trainability and training speed of neural networks and thus support parameter choices at initialisation. These insights rely so far on mean field approximations that assume infinite layer width and study average squared signals. Thus, information about the full output distribution gets lost. Therefore, we derive the output distribution exactly (without mean field assumptions), for fully-connected networks with Gaussian weights and biases. The layer-wise transition of the signal distribution is guided by a linear integral operator, whose kernel has a closed form solution in case of rectified linear units for nonlinear activations. This enables us to analyze some of its spectral properties, for instance, the shape of the stationary distribution for different parameter choices and the dynamics of signal propagation.
more | pdf | html
###### Other stats
Sample Sizes : None.
Authors: 2
Total Words: 4861
Unique Words: 1334
##### #8. Error Bounds for Piecewise Smooth and Switching Regression
###### Fabien Lauer
The paper deals with regression problems, in which the nonsmooth target is assumed to switch between different operating modes. Specifically, piecewise smooth (PWS) regression considers target functions switching deterministically via a partition of the input space, while switching regression considers arbitrary switching laws. The paper derives generalization error bounds in these two settings by following the approach based on Rademacher complexities. For PWS regression, our derivation involves a chaining argument and a decomposition of the covering numbers of PWS classes in terms of the ones of their component functions and the capacity of the classifier partitioning the input space. This yields error bounds with a radical dependency on the number of modes. For switching regression, the decomposition can be performed directly at the level of the Rademacher complexities, which yields bounds with a linear dependency on the number of modes. By using once more chaining and a decomposition at the level of covering numbers, we show...
more | pdf | html
###### Other stats
Sample Sizes : None.
Authors: 1
Total Words: 11326
Unique Words: 2382
##### #9. Dynamic Hierarchical Empirical Bayes: A Predictive Model Applied to Online Advertising
###### Yuan Yuan, Xiaojing Dong, Chen Dong, Yiwen Sun, Zhenyu Yan, Abhishek Pani
Predicting keywords performance, such as number of impressions, click-through rate (CTR), conversion rate (CVR), revenue per click (RPC), and cost per click (CPC), is critical for sponsored search in the online advertising industry. An interesting phenomenon is that, despite the size of the overall data, the data are very sparse at the individual unit level. To overcome the sparsity and leverage hierarchical information across the data structure, we propose a Dynamic Hierarchical Empirical Bayesian (DHEB) model that dynamically determines the hierarchy through a data-driven process and provides shrinkage-based estimations. Our method is also equipped with an efficient empirical approach to derive inferences through the hierarchy. We evaluate the proposed method in both simulated and real-world datasets and compare to several competitive models. The results favor the proposed method among all comparisons in terms of both accuracy and efficiency. In the end, we design a two-phase system to serve prediction in real time.
more | pdf | html
###### Tweets
arxiv_org: Dynamic Hierarchical Empirical Bayes: A Predictive Model Applied to Online Advertising. https://t.co/N3zvytNV94 https://t.co/OS9SzjzgSM
M157q_News_RSS: Dynamic Hierarchical Empirical Bayes: A Predictive Model Applied to Online Advertising. (arXiv:1809.02213v1 [https://t.co/eOmVsbWZjL https://t.co/2J3mIjH6n2
arxivml: "Dynamic Hierarchical Empirical Bayes: A Predictive Model Applied to Online Advertising", Yuan Yuan, Xiaojing Dong,… https://t.co/hTiANDWc4s
None.
None.
###### Other stats
Sample Sizes : None.
Authors: 6
Total Words: 5638
Unique Words: 1582
##### #10. HMLasso: Lasso for High Dimensional and Highly Missing Data
###### Masaaki Takada, Hironori Fujisawa, Takeichiro Nishikawa
Sparse regression such as Lasso has achieved great success in dealing with high dimensional data for several decades. However, there are few methods applicable to missing data, which often occurs in high dimensional data. Recently, CoCoLasso was proposed to deal with high dimensional missing data, but it still suffers from highly missing data. In this paper, we propose a novel Lasso-type regression technique for Highly Missing data, called `HMLasso'. We use the mean imputed covariance matrix, which is notorious in general due to its estimation bias for missing data. However, we effectively incorporate it into Lasso, by using a useful connection with the pairwise covariance matrix. The resulting optimization problem can be seen as a weighted modification of CoCoLasso with the missing ratios, and is quite effective for highly missing data. To the best of our knowledge, this is the first method that can efficiently deal with both high dimensional and highly missing data. We show that the proposed method is beneficial with regards to...
###### Tweets
arxivml: "HMLasso: Lasso for High Dimensional and Highly Missing Data", Masaaki Takada, Hironori Fujisawa, Takeichiro Nishik… https://t.co/UQRkIuStTK
FerrumA: Lasso regression for highly missing data: https://t.co/LPKqAKQr60 (How good, practical for large p and p>n?)
FerrumA: LASSO regression for highly missing data: https://t.co/LPKqAKQr60 (experiments for p << n).
ComputerPapers: HMLasso: Lasso for High Dimensional and Highly Missing Data. https://t.co/U55nlLKzrs
###### Other stats
Sample Sizes : None.
Authors: 3
Total Words: 8323
Unique Words: 1961
Assert is a website where the best academic papers on arXiv (computer science, math, physics), bioRxiv (biology), BITSS (reproducibility), EarthArXiv (earth science), engrXiv (engineering), LawArXiv (law), PsyArXiv (psychology), SocArXiv (social science), and SportRxiv (sport research) bubble to the top each day.
Papers are scored (in real-time) based on how verifiable they are (as determined by their Github repos) and how interesting they are (based on Twitter).
To see top papers, follow us on twitter @assertpub_ (arXiv), @assert_pub (bioRxiv), and @assertpub_dev (everything else).
To see beautiful figures extracted from papers, follow us on Instagram.
Tracking 72,995 papers.
# Common Questions
Why is potential flow a relevant model of fluid flow?
Potential flow is an accurate model in any flow where the effects of viscosity are negligible. This applies outside of boundary layers in aerodynamic and hydrodynamic flows. In most cases boundary layers are thin and attached to the solid surfaces, so potential flow is a widely applicable approximation.
What is the difference between potential flow and irrotational flow?
Potential flow occurs by definition when a flow is irrotational, inviscid, and incompressible. Irrotationality is a necessary but not sufficient condition for potential flow.
Is potential flow necessarily inviscid and vice versa?
Inviscid flow is a necessary condition for potential flow, but not all inviscid flows are potential flows. Inviscid flow can be rotational and therefore fail to satisfy the conditions for potential flow. Laplace’s equation is the governing equation for potential flow, but the Euler equations give the more general description of inviscid flow, which may be rotational or irrotational.
How can a vortex be irrotational?
A vortex with a velocity distribution that is inversely proportional to the radius is a free or irrotational vortex. The velocity field is curl free ($\nabla \times v = 0$), which satisfies irrotationality. This point vortex does have a central point where vorticity goes to infinity, but everywhere else in the domain it is irrotational.
How is a velocity field derived from a potential function?
In potential flow, the velocity field can be determined by taking the gradient of the scalar potential: $v = \nabla \phi$.
How to find a potential from a velocity field?
The scalar potential can be derived from the velocity field by integrating the velocity components in accordance with the Cauchy-Riemann equations.
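The free-vortex claim above is easy to check numerically. A minimal pure-Python sketch (the test point and the circulation value are arbitrary choices, not from the text):

```python
import math

GAMMA = 1.0  # circulation of the free vortex (arbitrary choice)

def velocity(x, y):
    """Free vortex, v_theta = GAMMA / (2*pi*r), in Cartesian components."""
    r2 = x * x + y * y
    return (-GAMMA * y / (2 * math.pi * r2),
            GAMMA * x / (2 * math.pi * r2))

def curl_z(x, y, h=1e-5):
    """z-component of curl via central differences: dv_y/dx - dv_x/dy."""
    dvy_dx = (velocity(x + h, y)[1] - velocity(x - h, y)[1]) / (2 * h)
    dvx_dy = (velocity(x, y + h)[0] - velocity(x, y - h)[0]) / (2 * h)
    return dvy_dx - dvx_dy

omega = curl_z(1.0, 0.5)  # any point away from the singular origin
```

Away from the origin the computed vorticity is zero to within finite-difference error, consistent with the flow being irrotational everywhere except the central point.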
# Computational Biomechanics
Dr. Kewei Li
## Two-dimensional BVP: Heat Conduction
In the lecture, we mentioned Fourier’s law of heat conduction, which describes the relationship between the rate of heat transfer and the temperature gradient. You can read more about this law here. Newton’s law of cooling is a discrete analogue of Fourier’s law.
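A minimal numerical illustration of Fourier’s law in one dimension (the material properties and temperatures are assumed example values, not from the lecture):

```python
# Steady 1-D conduction through a plane wall: q = -k * dT/dx (Fourier's law).
k = 0.8                       # thermal conductivity, W/(m*K) -- assumed value
thickness = 0.2               # wall thickness, m -- assumed value
T_hot, T_cold = 293.0, 273.0  # surface temperatures, K -- assumed values

dT_dx = (T_cold - T_hot) / thickness  # temperature gradient across the wall, K/m
q = -k * dT_dx                        # heat flux, W/m^2; positive toward the cold side
```

The flux is proportional to the temperature gradient and flows down it, which is exactly the relationship the lecture's statement of Fourier’s law describes.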
Go Back
Found 1,704 Documents (Results 1–100)
A well-balanced asymptotic preserving scheme for the two-dimensional rotating shallow water equations with nonflat bottom topography. (English)Zbl 07556265
MSC: 76M12 76U05 65M08
Selection of radial basis functions for the accuracy of meshfree Galerkin method in rotating Euler-Bernoulli beam problem. (English)Zbl 07549861
MSC: 74S99 74K10 74H45
A boundary integral method to investigate pattern formation in a rotating Hele-Shaw cell with time dependent gap. (English)Zbl 07546698
MSC: 76-XX 92-XX
Unsteady MHD free convection flow past a vertical permeable flat plate in a rotating frame of reference with constant heat source and variable thermal boundary condition in a nanofluid. (English)Zbl 07533165
MSC: 76-XX 35-XX
A new class of higher-order decoupled schemes for the incompressible Navier-Stokes equations and applications to rotating dynamics. (English)Zbl 07527723
MSC: 65Mxx 76Dxx 76Mxx
A sliding-mesh interface method for three dimensional high order spectral difference solver. (English)Zbl 07518063
MSC: 65Mxx 76Mxx 35Lxx
Double-deck structure in the fluid flow induced by a uniformly rotating disk with small symmetric irregularities on its surface. (English)Zbl 07516868
MSC: 76D10 76M45
Helicity distributions and transfer in turbulent channel flows with streamwise rotation. (English)Zbl 1486.76057
MSC: 76F40 76F55 76U05
Retraction notice to: “Competing instabilities of rotating boundary-layer flows in an axial free-stream”. (English)Zbl 1478.76038
MSC: 76F06 76D10 76U05
New optimized implicit-explicit Runge-Kutta methods with applications to the hyperbolic conservation laws. (English)Zbl 07516460
MSC: 65Mxx 76Mxx 65Lxx
Thermal versus isothermal rotating shallow water equations: comparison of dynamical processes by simulations with a novel well-balanced central-upwind scheme. (English)Zbl 1482.76087
MSC: 76M20 65M06 76U60
Nonstationary flow of a viscous incompressible electrically conductive fluid on a rotating plate. (English. Russian original)Zbl 07483351
Fluid Dyn. 56, No. 7, 943-953 (2021); translation from Prikl. Mat. Mekh. 85, No. 5, 587-600 (2021).
MSC: 76W05 76U05 76D10
Bifurcation of equilibrium forms of a gas column rotating with constant speed around its axis of symmetry. (English)Zbl 1477.76038
MSC: 76E07 76U05 76M30
On rotational waves of limit amplitude. (English. Russian original)Zbl 1477.35168
Funct. Anal. Appl. 55, No. 2, 165-169 (2021); translation from Funkts. Anal. Prilozh. 55, No. 2, 107-112 (2021).
Localized exponential time differencing method for shallow water equations: algorithms and numerical study. (English)Zbl 1473.65141
MSC: 65M08 65M55 76U05
Force balance in rapidly rotating Rayleigh-Bénard convection. (English)Zbl 1475.76083
MSC: 76R10 76U05 80A19
Numerical simulation of flow of a shear-thinning Carreau fluid over a transversely oscillating cylinder. (English)Zbl 07392426
MSC: 76A05 76M99 76D17
Mean flow generation due to longitudinal librations of sidewalls of a rotating annulus. (English)Zbl 1482.76134
MSC: 76U05 76D10
Curvature effect on asymptotic profiles of spiral curves. (English)Zbl 1484.35041
MSC: 35B36 34B15 53A04
Existence of rotating-periodic solutions for nonlinear second order vector differential equations. (English)Zbl 07460803
MSC: 34C25 34B15 47H11
MHD boundary layer flow and heat transfer of rotating dusty nanofluid over a stretching surface. (English)Zbl 1462.80007
MSC: 80A19 76W05 76U05
Influence of the Marangoni effect on the emergence of fluid rotation in a thermogravitational boundary layer. (English. Russian original)Zbl 1451.76043
J. Appl. Mech. Tech. Phys. 61, No. 3, 417-425 (2020); translation from Prikl. Mekh. Tekh. Fiz. 61, No. 3, 120-128 (2020).
Investigation of rotating detonation waves in an annular gap. (English. Russian original)Zbl 1454.80011
Proc. Steklov Inst. Math. 310, 185-201 (2020); translation from Tr. Mat. Inst. Steklova 310, 199-216 (2020).
Calculation of the mean velocity profile for strongly turbulent Taylor-Couette flow at arbitrary radius ratios. (English)Zbl 1460.76451
MSC: 76F40 76F45 76U05
The FVC scheme on unstructured meshes for the two-dimensional shallow water equations. (English)Zbl 1454.65100
Klöfkorn, Robert (ed.) et al., Finite volumes for complex applications IX – methods, theoretical aspects, examples. FVCA 9, Bergen, Norway, June 15–19, 2020. In 2 volumes. Volume I and II. Cham: Springer. Springer Proc. Math. Stat. 323, 455-465 (2020).
A cell-centered finite volume method for the Navier-Stokes/Biot model. (English)Zbl 1454.65080
Klöfkorn, Robert (ed.) et al., Finite volumes for complex applications IX – methods, theoretical aspects, examples. FVCA 9, Bergen, Norway, June 15–19, 2020. In 2 volumes. Volume I and II. Cham: Springer. Springer Proc. Math. Stat. 323, 325-333 (2020).
Reflection of oscillating internal shear layers: nonlinear corrections. (English)Zbl 1460.76260
MSC: 76D33 76U05
Heat transfer analysis of CNT-nanofluid between two rotating plates in the presence of viscous dissipation effect. (English)Zbl 1441.80002
Manna, Santanu (ed.) et al., Mathematical modelling and scientific computing with applications. Proceedings of the international conference, ICMMSC 2018, Indore, India, July 19–21, 2018. Singapore: Springer. Springer Proc. Math. Stat. 308, 279-295 (2020).
Stratification-dominant limit of the rotating stratified viscous Boussinesq equations with stress-free boundary. (English)Zbl 1440.35271
MSC: 35Q35 76U05 35D30
Mechanically driven circulation in a rotating cylinder/disk device. (English)Zbl 1473.76070
MSC: 76U05 76F40
Boundary-conforming finite element methods for twin-screw extruders using spline-based parameterization techniques. (English)Zbl 1442.74229
MSC: 74S05 65N30 74C05
Turbulent boundary-layer flow beneath a vortex. II: Power-law swirl. (English)Zbl 1460.76480
MSC: 76F40 76D17 76U05
Turbulent boundary-layer flow beneath a vortex. I: Turbulent Bödewadt flow. (English)Zbl 1460.76479
MSC: 76F40 76D17 76U05
Damping effects in boundary layers for rotating fluids with small viscosity. (English)Zbl 1434.76025
MSC: 76D03 76D05 76U05
Large-scale structures in high-Reynolds-number rotating Waleffe flow. (English)Zbl 1460.76434
MSC: 76F35 76F40 76U05
Controlling secondary flow in Taylor-Couette turbulence through spanwise-varying roughness. (English)Zbl 1430.76325
MSC: 76F70 76F40 76-05
Mathematical analysis and numerical resolution of a heat transfer problem arising in water recirculation. (English)Zbl 1435.80003
J. Comput. Appl. Math. 366, Article ID 112402, 25 p. (2020); corrigendum ibid. 375, Article ID 112859, 2 p. (2020).
Unsteady motion of a viscous conducting fluid between rotating parallel walls in a transverse magnetic field. (English. Russian original)Zbl 1457.76182
Fluid Dyn. 54, No. 8, 1043-1050 (2019); translation from Prikl. Mat. Mekh. 83, No. 5-6, 770-778 (2019).
MSC: 76U05 76W05 76D10
Phase-separated vortex-lattice in a rotating binary Bose-Einstein condensate. (English)Zbl 1464.82007
MSC: 82C10 65M06 76Y05
Heat transfer analysis in a micropolar fluid with non-linear thermal radiation and second-order velocity slip. (English)Zbl 1445.80006
Kumar, B. Rushi (ed.) et al., Applied mathematics and scientific computing. International conference on advances in mathematical sciences, ICAMS, Vellore, India, December 1–3, 2017. Volume II. Selected papers. Cham: Birkhäuser. Trends Math., 385-395 (2019).
Automatic blocking of shapes using evolutionary algorithm. (English)Zbl 1447.76024
Roca, Xevi (ed.) et al., Proceedings of the 27th International Meshing Roundtable (IMR), Albuquerque, NM, USA, October 1–5, 2018. Cham: Springer. Lect. Notes Comput. Sci. Eng. 127, 169-188 (2019).
Analytical approach to Galerkin BEMs on polyhedral surfaces. (English)Zbl 1443.76159
MSC: 76M15 76M10 76U05
# Beta hat marginal distribution in Strong OLS setting
In the OLS setting, suppose I assume that $\epsilon \mid X$ is normally distributed with mean $0$ and variance $\sigma^2 I$.
Then each estimated coefficient $\hat\beta_j$ is, conditional on $X$, normally distributed.
But if we consider $\hat\beta_j$ not conditional on $X$ in this setting, what is its distribution? Does it remain Gaussian with stochastic parameters? If not, why not?
• To answer this question, we would need to know (or make assumptions about) the distribution of $X,$ so it looks like this won't be answerable unless you can supply that additional information. – whuber Dec 15 '18 at 20:32
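One way to see whuber's point: conditional on $X$ the estimator is Gaussian, but the marginal distribution is a mixture over the (unspecified) distribution of $X$, which is Gaussian only in special cases such as fixed or degenerate $X$:

```latex
\hat\beta \mid X \sim \mathcal{N}\!\left(\beta,\; \sigma^{2}(X^{\top}X)^{-1}\right)
\quad\Longrightarrow\quad
f(\hat\beta) = \int \mathcal{N}\!\left(\hat\beta;\, \beta,\; \sigma^{2}(X^{\top}X)^{-1}\right)\, dP(X)
```

This is a (matrix) scale mixture of normals, and its shape depends entirely on how $(X^{\top}X)^{-1}$ is distributed.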
# Library
Music » Fatboy Slim »
## Don't Let the Man Get You Down
41 played tracks | Go to track page
Tracks (41)
Track Album Length Date
Don't Let the Man Get You Down 3:17 19 Sep 2012, 13:25
Don't Let the Man Get You Down 3:17 5 Jun 2012, 21:56
Don't Let the Man Get You Down 3:17 9 Jun 2010, 20:43
Don't Let the Man Get You Down 3:17 7 Mar 2010, 16:47
Don't Let the Man Get You Down 3:17 7 Mar 2010, 16:47
Don't Let the Man Get You Down 3:17 27 Jan 2010, 16:32
Don't Let the Man Get You Down 3:17 25 Jan 2010, 16:10
Don't Let the Man Get You Down 3:17 25 Jan 2010, 16:10
Don't Let the Man Get You Down 3:17 8 Oct 2009, 17:41
Don't Let the Man Get You Down 3:17 8 Jun 2009, 18:40
Don't Let the Man Get You Down 3:17 8 Jun 2009, 18:40
Don't Let the Man Get You Down 3:17 1 Jun 2009, 20:11
Don't Let the Man Get You Down 3:17 28 Apr 2009, 20:59
Don't Let the Man Get You Down 3:17 21 Apr 2009, 21:04
Don't Let the Man Get You Down 3:17 17 Apr 2009, 18:07
Don't Let the Man Get You Down 3:17 1 Mar 2009, 12:12
Don't Let the Man Get You Down 3:17 28 Feb 2009, 13:14
Don't Let the Man Get You Down 3:17 27 Feb 2009, 16:41
Don't Let the Man Get You Down 3:17 15 Feb 2009, 14:53
Don't Let the Man Get You Down 3:17 12 Jan 2009, 22:50
Don't Let the Man Get You Down 3:17 4 Jan 2009, 23:05
Don't Let the Man Get You Down 3:17 25 Dec 2008, 22:02
Don't Let the Man Get You Down 3:17 24 Dec 2008, 16:08
Don't Let the Man Get You Down 3:17 18 Dec 2008, 18:04
Don't Let the Man Get You Down 3:17 17 Dec 2008, 07:23
Don't Let the Man Get You Down 3:17 23 Nov 2008, 22:04
Don't Let the Man Get You Down 3:17 16 Nov 2008, 20:28
Don't Let the Man Get You Down 3:17 15 Nov 2008, 14:32
Don't Let the Man Get You Down 3:17 9 Nov 2008, 14:16
Don't Let the Man Get You Down 3:17 3 Nov 2008, 20:41
Don't Let the Man Get You Down 3:17 2 Nov 2008, 15:50
Don't Let the Man Get You Down 3:17 2 Nov 2008, 14:03
Don't Let the Man Get You Down 3:17 2 Nov 2008, 00:10
Don't Let the Man Get You Down 3:17 1 Nov 2008, 23:51
Don't Let the Man Get You Down 3:17 1 Nov 2008, 16:33
Don't Let the Man Get You Down 3:17 1 Nov 2008, 14:32
Don't Let the Man Get You Down 3:17 31 Oct 2008, 19:00
Don't Let the Man Get You Down 3:17 31 Oct 2008, 18:42
Don't Let the Man Get You Down 3:17 30 Oct 2008, 16:16
Don't Let the Man Get You Down 3:17 30 Oct 2008, 15:52
Don't Let the Man Get You Down 3:17 30 Oct 2008, 15:49
Update of LDM-294
Details
• Type: RFC
• Status: Implemented
• Resolution: Done
• Component/s:
• Labels:
None
Description
For the JDR/JSR I would like to reissue LDM-294.
There are updates to Org charts, glossary, 3rd party software, risk management, milestones.
Document can be found at https://ldm-294.lsst.io/v/RFC-621/index.html
Activity
Wil O'Mullane created issue -
Field Original Value New Value
Summary New issue of LDM-294 Update of LDM-294
Description For the JDR/JSR I would like to reissue LDM-294. Three are updates to Org charts, glossary, 3rd party software, risk management, milestones. For the JDR/JSR I would like to reissue LDM-294. There are updates to Org charts, glossary, 3rd party software, risk management, milestones.
Status Proposed [ 10805 ] Flagged [ 10606 ]
Description For the JDR/JSR I would like to reissue LDM-294. There are updates to Org charts, glossary, 3rd party software, risk management, milestones. For the JDR/JSR I would like to reissue LDM-294. There are updates to Org charts, glossary, 3rd party software, risk management, milestones. Document can be found at https://ldm-294.lsst.io/v/RFC-621/index.html
Link This issue is triggered by DM-20304 [ DM-20304 ]
John Swinbank added a comment - edited
There are substantial changes to the product tree (DM Product Properties.csv) in this version vs. the last release:
$ git diff --shortstat docushare-v11 DM\ Product\ Properties.csv
 1 file changed, 112 insertions(+), 104 deletions(-)
(and that's on a 112-line-long file!) Unfortunately, the diff is very hard to read. I read this RFC as effectively baselining the updated product tree, so it would be very useful to have a short summary of exactly what the changes are.
John Swinbank added a comment - edited
• I see the mention of a glossary in the changelog, and I see the aglossary.tex file. However, the PDF at https://ldm-294.lsst.io/v/RFC-621/LDM-294.pdf does not actually seem to contain a glossary.
• Figure 11, “DM System Science Team organisation”, is blank.
• Several of the glossary entries chosen are questionable. For example, there are several instances of Data \gls{Release} Production, where \gls{Data Release Production} would be more appropriate.
• Not changed in this version, but I note that the description of the SUIT group and CalTech should be substantially revised. We should ticket this work before accepting this RFC.
• Also not changed in this version, but does the DM Systems Engineering Team actually exist? Apparently I am a member, but I am not aware of it ever having met. Were I a reviewer, I might well ask to see meeting notes or other evidence that it actually does something.
• aglossary.tex is unstable. That is, if I check out the source from git, I am provided with a copy of aglossary.tex, but if I run make that file is changed into something different. Which is correct?
• The caption for figure 1 appears in the middle of figure 1 (ie, superimposed on the figure itself).
John Swinbank added a comment -
PR at https://github.com/lsst/LDM-294/pull/78 addresses some of the above issues.
Gabriele Comoretto added a comment -
This is what has changed in the product tree with respect to tag v3.7.
Services
• Prompt Services group:
• reintroduced "Prompt Proc. Ingest Service"
(from my understanding, this is required in order to distinguish between the prompt ingestion made at base enclave and the prompt processing made at NCSA. It was removed in one of the last changes.KT may be more precise)
Software Products
• LSP SW group
• LSP Web SW has been replaced by SUIT and SUIT Online Help
(after some discussion with Gregory the past months, my understanding is that these two software products will be used to implement the LSP portal)
• Science Pipelines SW group
• all Calibration Softwares have been merged into Calibration SW
(the calibration software to use in each context, daily, DPR, etc, should be the same, but with different configurations and input data. This simplifies the approach, one SW product instead of 4, but provides a better granularity in respect to what we have now, all in lsst_distrib.)
• added Science Pipeline Distribution software (lsst_distrib)
(this is the only software product we are doing releases now, therefore, it has to be added to the product tree.)
(my understanding is that some packages in the Science Pipelines are plugins. The proposal here is to have one software product that includes all those packages)
Hardware and Cots
• added Third Party Libs group
(The distinction between COTS and Third Party Libs is: a COTS is a tool used to implement the infrastructure and a Third Party lib is a software on which our SW products depends on
at build time and run time. This list is not complete.)
Added new top-level group: Ref Data
(my understanding is that data product used in LSST processing are important, therefore they deserve their own section in the product tree. This list is not complete)
John Swinbank added a comment -
Further PR at https://github.com/lsst/LDM-294/pull/79 addresses problems with the glossary (ensuring it gets built by Travis; ensuring that all entries get properly recursively expanded; fixing formatting issues).
John Swinbank added a comment -
Thank you Gabriele Comoretto — that summary is very helpful! A couple of comments:
• I believe that what you are calling “Calibration Software” is actually software for generating calibration products, not software for performing calibration. I think it would be more helpful if the name reflected that. (I notice, though, that this is also an issue in LDM-148 and apparently I didn't complain about it there, so maybe we should approve this for consistency and file a new ticket to fix it later.)
• Can we tighten up the definition of “plugin”? Do you mean this to refer to all meas_extensions_ packages, or everything which provides a meas_base-style measurement plugin, or something else? I'm not sure I understand why it's meaningful to distinguish these plugins from the rest of “Sci Pipelines Libs” — can you explain?
Thanks again!
John Swinbank added a comment -
Adopting this so we can baseline the updated document for upcoming review: it's a clear step forward compared to what was there before.
I would like to revisit the design of the product tree; I'm a little bit worried that this RFC is effectively bouncing some immature thinking there into the baseline. I'll ticket that work to be done after we have the review out of the way.
Status Flagged [ 10606 ] Adopted [ 10806 ]
Resolution Done [ 10000 ] Status Adopted [ 10806 ] Implemented [ 11105 ]
Wil O'Mullane added a comment -
v3.8 https://docushare.lsstcorp.org/docushare/dsweb/Get/Version-60624/LDM-294.pdf
Link This issue relates to DM-20838 [ DM-20838 ]
Kian-Tat Lim added a comment -
Yes, "Calibration Software" is for generating calibration products. There was a general desire to keep names short, but adding "Products" to the name would be reasonable (while something like "Products Generation" might be less desirable).
The distinction between Science Pipelines Libraries and Plugins is that the former are part of every release and distribution; it was thought that various combinations of the latter may be released or distributed, and they might be released independently of the Libraries. Plugins, in my mind, include things like meas_ and obs_ packages that fulfill more-or-less-defined interfaces to general frameworks. In theory, all Tasks could be considered plugins, but most seem so fundamental that there's no reason to split them out.
People
• Assignee:
Wil O'Mullane
Reporter:
Wil O'Mullane
Watchers:
Gabriele Comoretto, John Swinbank, Kian-Tat Lim, Wil O'Mullane
# How to digitize an AC signal?
Let's say I have a sinusoidal signal of amplitude 5V and frequency 100Hz. I need an accuracy of 0.1% in the digital signal.
How to determine the ADC parameters required?
Do I need to apply an offset first?
How is the process different than that for DC?
• By accuracy of 0.1% you mean in the converted voltage level? You may also need to specify a sample rate significantly higher than the Nyquist frequency. – Icy Oct 15 '15 at 11:16
• Yes, the voltage level – Hassaan Oct 15 '15 at 11:58
I need an accuracy of 0.1% in the digital signal.
An 8 bit ADC has a resolution of 1 in 256 = 0.39%, therefore you need to choose an ADC that is at least 10 bit (1 in 1024 = 0.098% resolution). But resolution isn't accuracy, and accuracy can't be achieved without a greater resolution, so you need to start digging into the specifications of an ADC that might suit your needs.
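The resolution arithmetic above can be sketched in a few lines (the helper name is mine, not from the answer):

```python
import math

def bits_for_accuracy(fraction):
    """Smallest bit depth whose LSB is no larger than `fraction` of full scale."""
    return math.ceil(math.log2(1.0 / fraction))

n = bits_for_accuracy(0.001)  # 0.1 % target from the question
resolution = 1.0 / (2 ** n)   # fraction of full scale per LSB
```

For the 0.1% target this gives a 10-bit minimum, matching the figure quoted above; remember this is only quantization resolution, not accuracy.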
You need to consider INL, DNL, offsets, gain errors and noise to be confident about predicting accuracy. INL stands for integral non-linearity; it is basically a curvature problem, where the converter's transfer characteristic bows away from the ideal straight line. DNL (differential non-linearity) is a step-width problem: individual code widths deviate from the ideal one-LSB step. On top of those come offset errors and gain errors. (The figures illustrating each of these errors have been omitted here.)
Do I need to apply an offset first?
If it's an AC signal then you'll likely need to apply a DC offset to position the centre-line of the AC signal to the mid-point of the ADC input range.
How is the process different than that for DC?
It's no different - a slow moving DC value should be treated in the same way as a repetitive AC signal.
• could you please show me how you determined the % resolution? – Hassaan Oct 15 '15 at 11:13
• It's just $$1 / (2^n)$$ – pjc50 Oct 15 '15 at 11:14
• @pjc50 That was a very dramatic comment ... the suspense was intolerable before my eyes reached the math! – Pål GD Oct 15 '15 at 16:58
Another method of getting the offset (so that your signal is centred in the ADC conversion range) is to use an amplifier specifically designed for the task such as the AD8132.
This device features a Vocm input, and will offset the input signal by the DC level on this pin. It is common to expose the reference voltage used by the ADC (which is often the maximum voltage that can be converted) and divide it by 2 to get an offset in the centre of the ADC conversion span.
Your 0.1% accuracy requirement requires a signal to noise ratio of 60dB, so layout will be important (it always is in ADCs, it simply becomes more and more important with higher precision requirements).
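The 60 dB figure follows directly from the accuracy requirement, and the ideal-quantizer relation SNR = 6.02·N + 1.76 dB converts it back into an effective bit count; a quick sketch:

```python
import math

def required_snr_db(accuracy_fraction):
    """SNR needed so that noise stays below the given fraction of full scale."""
    return -20.0 * math.log10(accuracy_fraction)

def required_enob(snr_db):
    """Invert SNR = 6.02*N + 1.76 dB for an ideal quantizer."""
    return (snr_db - 1.76) / 6.02

snr = required_snr_db(0.001)      # 0.1% accuracy -> 60 dB
print(snr, required_enob(snr))    # ~60 dB, just under 10 effective bits
```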
There are many factors to consider when using ADCs to get the best out of them, and indeed to get what you need out of them. I particularly recommend reading the Effective Number of Bits (ENOB) section to understand what an ADC can actually do successfully. The PCB layout section is a source of excellent advice, as well.
Hope that helps guide you a little.
[Update] Added note that other manufacturers supply these devices.
Although I linked an Analog Devices part, ADC drivers (specifically designed for the purpose) are available from TI, Linear Tech and Maxim has some nice parts.
There are ways to avoid digitizing the DC offset, called the "pedestal" in http://www.ti.com/lit/an/sbaa015/sbaa015.pdf The basic idea is (again) to use an RC network to create a low-pass filter, although the details differ from your last question:
In this circuit, the unfiltered signal is presented to the non-inverting input of the A/D converter and the DC portion of the signal is presented to the inverting input of the A/D converter. The 12-bit A/D converter will digitize the difference between its two inputs, consequently rejecting the DC portion of the signal.
I need an accuracy of 0.1% in the digital signal.
For that you need at least a 10-bit ADC. That means that you may get a quantization error of at most 1/2^n of your range, provided your signal fills the whole range of your ADC. However, this only accounts for the theoretical minimum resolution you need. For real requirements, you need to account for noise, and that depends on a lot of things. To begin, you should use a 100Hz low-pass (antialiasing) filter (the more stages, the better). That should take care of high-frequency noise. Then you should check the ADC's datasheet for non-linearity (which is usually treated as noise) and other internal error sources. Then you should check for correct impedance matching (which, when done incorrectly, may increase noise). And since you are dealing with a very slow signal, you may also try oversampling to get some additional bits from sampling at higher frequencies (software or hardware approaches). At higher rates you can also apply digital filters after sampling and before decimating.
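For the oversampling suggestion, the usual rule of thumb for white quantization noise is half a bit of extra resolution per doubling of the sample rate; a small sketch (just the textbook formula, not tied to any particular ADC):

```python
import math

def extra_bits(oversampling_ratio):
    """Resolution gained by oversampling then decimating, assuming white quantization noise."""
    return 0.5 * math.log2(oversampling_ratio)

# Each 4x increase in sample rate buys one extra bit:
for osr in (4, 16, 256):
    print(f"OSR {osr:>3} -> +{extra_bits(osr):.1f} bits")
```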
How to determine the ADC parameters required?
Your most important parameters are resolution, sample rate, noise, and range. Assuming your "amplitude" means "peak amplitude", you need an ADC with at least a 10V range to sample it. Those are not common, so you'll probably need an attenuator or some other way to scale down your signal. Remember that such devices usually add more noise to the system (as do filters), and that resistors add thermal noise too. Since you probably want to scale the signal a little more to avoid clipping and to allow for noise, you should take into consideration how this affects your accuracy (your requirement is 0.1% of the signal, not of the ADC range).
Do I need to apply an offset first?
If your ADC is single-ended, you do need an offset. Adding an analog offset to the signal also adds its noise, so you should be careful selecting an analog reference which matches your requirements. Using a voltage divider from the power supply will probably get you into trouble because of noise (specially if using a switching regulator) and feedback. You most probably want to select a differential ADC to sample AC signals.
How is the process different than that for DC?
It is not. If you add an offset to a DC signal and use a single-ended ADC, you are actually sampling a DC signal, from which you'll have to subtract the offset afterwards. If you're using a differential ADC you have almost the same internals, except that you'll have a variable reference (which is part of the signal itself and usually avoids common mode effects) and the conversion allows for negative values.
# Undo the Linux trash command
Can we undo operations done in a terminal, for example, file deletion through rm?
Solutions obtained:
1. Aliasing
2. Undelete utilities
3. Backup utilities
4. LibTrash
5. Versioning (FUSE)
-
On rm: It unlinks a file name from its inode. The question "Where do files go when the rm command is issued" -> unix.stackexchange.com/questions/10883/… might also be a good read for you. – erch Dec 8 '13 at 15:52
There is no general "undo" for every operation in terminal. If you wish to recover a file after using rm you will need to look into recovery software.
An option to prevent you from future mistakes is to make aliases for alternative commands to remove files. Add them to your ~/.bashrc and get into the habit of using them instead of rm.
1. alias rmi='rm -i'
If you use rmi, you will be prompted for confirmation of future deletes. Try to avoid developing the habit of pressing y immediately after you issue an rmi command, as that will defeat the purpose.
2. You could also move files deleted by the trsh command in terminal to your recycle bin (on KDE and Gnome):
alias trsh='mv --target-directory="$HOME/.Trash"'

If you use trsh, you will have a limited "undelete" capability. Beware that trsh dir1/file1 dir2/file1 may still cause unrecoverable data loss, since both files land in the trash under the same name.

-

• +1 for a much better answer than mine =) – The How-To Geek Aug 28 '09 at 6:27
• The second alias is very, very smart. +1. – LiraNuna Aug 28 '09 at 7:58
• trash() { mv "$@" ~/.Trash; } # bash function, not an alias. Changing the expected behavior of rm is a bad idea, IMHO. – Richard Hoskins Aug 28 '09 at 12:43
The risk with aliasing rm to rm -i is that you get used to the safety net that it gives you. Then you go onto another machine, which doesn't have that alias... – Dentrasi Aug 29 '09 at 9:35
For the reasons that Richard and Dentrasi mention, I prefer to create a custom function or to alias rmi -> rm -i. It's really a mistake to simply get in the way of an existing program by aliasing over it. – Telemachus Aug 31 '09 at 2:15
There exist undelete utilities for ext2, but most other Linux filesystems are stuck in the Stone Age and don't have any advanced usability features. Sad state of affairs considering gigantic drives with enough space to never delete a file again are commonplace.
So you are stuck with three options:
1. Make regular backups, for example with a command like:

   rsync -axvRP --backup --backup-dir=/backup/incr/$(date -I) /home/ /backup/root/

2. Use a version control tool such as git for all your work. While this will not protect against a crazy rm -r that kills the repository, it will protect against regular troubles, as you will be using git rm rather than raw rm.

3. Be extra careful, and don't trust too much in rm -i, trash-cli and friends: most data you lose on the shell is lost not to an accidental rm, but to misdirected pipes, mistyped output files, misdirected mv and the like, i.e., things that overwrite your data rather than just delete it.

Do all three for the maximum degree of safety.

-

• It's pretty hilarious to call ext2 modern. The newer filesystems use more complicated on-disk formats than ext2/3, and had to give up easy-to-find locations where you might find files to un-delete. They're designed for people who make backups of things they care about. BTW, I always use mv and cp -i, since I don't normally want to clobber anything. I normally type \rm, because I have rm aliased to rm -i, too, but I don't want to answer its question. – Peter Cordes Dec 12 '09 at 4:19

Option 1: See Undelete Linux Files from an ext2 File System. This page points to a program, written by Sebastian Hetze of the LunetIX company, that (as the title suggests) undeletes recently deleted files from an ext2 filesystem. Example usage:

# undelete -d /dev/hdc3 -a 10

Warnings:

• The original web site is gone. The link, above, is to the Internet Archive.
• The site(s) are in a mixture of English and German.
• As stated above, the tool is designed specifically for the ext2 filesystem. It is unlikely to work on any other filesystem type; especially not ones other than extN.

Option 2: I have rsnapshot (rsync) running on my machine, which makes snapshots of my selected folders. It does this incrementally every hour, two hours, or whatever you tell cron to do. After a full day it recycles these snapshots into one daily snapshot, after seven days into a weekly one, and so on. This lets me go back in time about a month, hour by hour! It is pretty good with disk space, as it creates hard links to files which never changed...

-

• Just to clarify, #1 is specific to ext2 filesystems. It wouldn't work on an ext3 filesystem. – nagul Aug 31 '09 at 9:35

TestDisk can undelete files from FAT, exFAT, NTFS and ext2 filesystems.

-

There's a larger question here that's worth addressing. Shell commands are not chatty (they don't double-check what you want), and they expect you to know what you're doing. This is fundamental to how they are designed. It's a feature, not a bug. Some people feel macho when they use such commands, which I think is pretty silly, but it is important to understand the dangers. You can do a great deal of damage in a terminal, even if you're not root. I think you probably really just cared about rm, but since you said "Can we undo the operations done in terminal", I thought this was worth saying. The general answer is no, you can't.

-

Check this, it might be helpful: http://artmees.github.io/rm/

Suppose you did rm very_important_file from the terminal. Recovering this file is a tedious and not always successful process. If instead you used the script mentioned above, you don't have to worry, because

rm very_important_file
mv very_important_file ~/.Trash/

are equivalent. The script handles more cases and doesn't alter your system rm at all; it is put into the user's local bin folder, so it shadows the system rm yet doesn't affect or disable it. This is a refined aliasing approach, but without losing any feature.

-

• It'd be good if you could add some explanation as to what this does and how it applies to the question, maybe add an example, etc. – slhck Dec 7 '13 at 12:24

Recover using grep on /dev/partition (Linux or Unix Recover deleted files – undelete files):

grep -b 'search-text' /dev/partition > file.txt

Just a try.

-

Two more technical solutions have not been named yet:

1. libtrash: a dynamic library which can be preloaded in your shell and intercepts the deleting/removing syscalls, moving the files to a trash folder instead (much like an alias, but works for any application when preloaded).
2. A versioning file system. If you delete (or edit, or copy, or ...) a file, you can just revert to an old state. This could be done with FUSE and one of its versioning filesystems.

-

You could make rm an alias for the trash command (you will need to install trash first). Add this to your ~/.bashrc:

alias rm='trash'

This is preferable to alias rm='mv --target-directory=$HOME/.Trash' since ~/.Trash is NOT the trash folder for gnome. It is better IMHO to let trash figure out where the actual trash folder is.
btw I would have posted this in a comment but I don't have enough rep.
-
+1 for "since ~/.Trash is NOT the trash folder for gnome." However, rm shouldn't be aliased to trash either. You should just use trash instead of rm if that's what you want. – nagul Aug 31 '09 at 9:35
True, but for some people old habits die hard. – Alvin Row Aug 31 '09 at 15:50
'rmtrash' is another option. – Itachi Jun 12 at 6:10
You could use trash-cli if you run KDE as your GUI. This is a command-line utility to delete/restore files using the KDE trash facilities.
-
trash-cli also works with GNOME trash, and it's designed to provide rm-compatible options (for the aliasing). – Andrea Francia Dec 2 '09 at 8:04
Don't forget to alias rm=trash so that your typical command-line screw-ups come with undo buttons. – Ryan Thompson Feb 19 '10 at 23:25
I think alias rm=trash is potentially dangerous, if there is any chance you will use someone else's system one day and forget (or ssh). Much safer to just get used to writing trash instead of rm. – Sparhawk Feb 28 '14 at 4:20
There is no recycle bin for the command line.
You could try some of the various undelete utilities, but there's no guarantee that they would work.
-
Yes, there is a trash for the command line. See trash-cli in another answer. – Sparhawk Feb 28 '14 at 4:19
## Abstract
The dominant business model in the institutional brokerage industry is under attack from discount brokers, crossing networks, and ECNs, which provide trade execution at less than half the cost of full-service commissions. In contrast, full-service brokers produce research; provide capital, time, and expertise to facilitate trade execution; and allocate initial public offerings. These valuable but costly services are bundled with trade execution and paid for with a high per-share commission. In this article, we examine the economics of the bundled commission market and find associated patterns of institutional trading that are not generally recognized. We also use this framework to describe the structure of the institutional brokerage industry.
Although commissions were deregulated on May 1, 1975, the continued reliance on high per-share commissions is puzzling. In the other industries deregulated in the 1970s, such as trucking, banking, and airlines, the bundling of services and inducement in kind disappeared quickly after the onset of competition. In addition, high per-share commissions do not appear to be the most natural way for brokers to charge for trade execution, since, as with any transaction cost, commissions should significantly reduce turnover, as argued in Constantinides (1986) and Vayanos (1998).1 Yet the practice persists over thirty years after deregulation.2
The main reason for such a prolonged survival of bundling of execution and services after deregulation is most likely the safe-harbor provision of Section 28(e) of the 1975 Amendments to the Securities Act. Section 28(e) permits institutions to pay for various investment-related services out of brokerage commissions, rather than out of the management fee. While this exception facilitates the continuance of bundling, it is not a sufficient condition, as payment for these services can take other forms. The underlying economics of per-share commissions and their impact remain largely unexplored. Similar to airlines and restaurants, brokers provide many services that they either cannot, or do not wish to, sell outright. Instead, they allocate them to their best clients as a reward for past business and an inducement for future business. We contend that brokers and their institutional clients enter into long-term agreements specifying a level of service (premium or standard) and the overall payment for it. The payment for these services is rendered through the appropriate allocation of order flow to brokers, as institutional per-share commissions are already set in the contract. These contracts may be informal, yet well understood by the parties involved. Cumulative institutional commissions therefore represent a metering device that determines the allocation of commission dollars, making it simple for a broker to keep a detailed profit-and-loss account for each client, as noted in Kelly and Hechinger (2004).
In this article, we suggest that per-share commissions constitute a convenient and legally safe-harbored way of charging a prearranged fixed payment for a broker’s premium services. If one takes this view of commissions, then the predictions of the quid pro quo theoretical models of commissions and trading, such as Brennan and Hughes (1991) and Brennan and Chordia (1993), may change, and empirical estimates of marginal trading costs need to be reexamined. Our framework can explain the continued existence of high per-share commissions despite notable competition from discount brokers and ECNs, and also severs the link between the characteristics of a trade (such as price and size) and the commission applied to it. On the other hand, it links commissions to the value of the premium services supplied by full-service brokers. Finally, this framework helps predict institutions’ allocation of trading volume across brokers.
We use a proprietary database of institutional trades in 1999–2003 from Abel/Noser, which allows us to identify over 25 million trades in NYSE-traded stocks submitted by over six hundred institutions (identified by an ID number, which we can follow) to over one thousand brokers, whose identities we know. The data identify the security, the trade size, the average execution price, and the commission. The evidence from these data supports our hypothesis. First, we show that there is relatively little variation in per-share institutional commissions across transactions, regardless of the institution or broker involved. In fact, the majority of institutional client-broker pairs use only one or two different per-share commissions for all their transactions, which indicates that the characteristics of a trade are not driving commissions. Indeed, we find that the most important determinant of the per-share commission on any trade is the prior-period commission paid by that institutional client to that same broker. These results are stable through time and are consistent with commissions being a metering device used to pay for a broker’s premium services, which implies that full-service commissions are an average and not a marginal cost of trading.
Second, if institutions pay for premium services through commissions, this should affect their order flow allocation across brokers. Gargantuan institutions, such as Fidelity Investments, can allocate small proportions of their volume and still obtain the premium status from most brokers. Most institutions, however, face a trade-off between the need to hide their trading strategy by dispersing their trades and the benefits of concentrating their order flow with a small set of brokers, for whom they become important clients and receive premium services. Consistent with this hypothesized trade-off, we find that institutions indeed concentrate their volume with a few brokers, and smaller institutions concentrate significantly more. These findings are consistent over time. Third, we find that smaller institutions also pay higher per-share commissions and tend to have higher turnover, two facts consistent with their desire to increase their total payment so as to receive premium services from at least some brokers. Fourth, we contrast our hypothesis with a simple alternative model wherein institutions allocate their order flow to minimize their total execution costs. While some results on the patterns of institutional order flow are consistent with both hypotheses (as these are not mutually exclusive), others are inconsistent with the cost-minimization model, yet corroborate our hypotheses.
Finally, we show that the proportion of volume executed at discount commission rates increases over time at the expense of full-service commissions, placing downward pressure on full-service commission rates. Even the practice of bundling services with execution is under competitive pressure. We show that a stronger emphasis on buy-side execution costs forces many full-service brokers to provide low-cost execution in-house. Recent regulatory measures appear to have reduced the profitability of premium broker services (Kadan et al. 2006). In addition, Hintz and Tang (2003) document brokers’ increasing reliance on hedge funds for commission revenue, but these clients demand liquidity rather than proprietary research. Together, these factors imply that the value of brokers’ premium services has declined for many institutional clients, while the importance of liquidity has increased. Consequently, it is reasonable to presume that the process of the unbundling of research from execution, which has already begun, will accelerate in the future.
The article is organized as follows. Section 1 puts the commissions in the perspective of the extant literature. Section 2 presents commissions in the context of a long-term contract and presents supporting evidence. Section 3 examines the market for a broker’s premium services. Section 4 generates and tests hypotheses regarding the allocation of trading volume across brokers. Section 5 concludes.
## History of Commissions and the Literature
Prior to 1975, commissions were tightly regulated by the SEC, and essentially fixed. Copeland (1979) reports that prior to 1975, institutional commissions on the NYSE were a direct function of both price and shares traded and calculated as:
(1)
$$\text{Commission per share} = A + B \times \text{Price}.$$
The coefficients $$A$$ and $$B$$ could vary with trade size, and commissions on trades above $300,000 could be negotiated. As with many other industries under price regulation, such as airlines, banking, and trucking, brokers who were prohibited from competing for clients with lower-priced commissions reverted to offering auxiliary services. Thus, prior to 1975, the bundling of services with execution was the norm.

The May 1975 deregulation abolished fixed commissions, resulting in two major impacts on securities trading. First, commissions fell rapidly, though not uniformly, across all trade sizes (Tinic and West 1980). Figure 1 presents a 1977–2004 time series of average institutional commission rates reconstructed from Greenwich Associates survey data. The figure shows how average institutional commissions fell from the mid-teens (in cents per share) in the late 1970s to just under 5 cents per share in 2004. The decline in real terms is much more dramatic.

Figure 1 Historical commissions per share. Institutional investor average cents-per-share commission, 1977–2004. Data are from Greenwich Associates and represent the unweighted average commission calculated from proprietary survey data.

The second major impact of deregulation was that discount brokers began to trade NYSE-listed stocks. For the first time, institutional investors were able to unbundle trade execution from the provision of ancillary services. Initially, discount brokers captured little institutional trading volume: the discount market share was only 6% in 1980 (Jarrell 1984). By 2003, over 40% of institutional volume in our sample was executed at discount prices.
While this is a significant change, it is still small relative to other industries that underwent deregulation. Early research modeled the post-deregulation commissions as a negotiated marginal cost of trade execution, but the evidence was mixed. While Ofer and Melnick (1978) claimed that commission rates represent the costs of executing various trades, Jarrell (1984) finds that commissions per share were relatively invariant to their estimated per-share cost, with the profits from large trades subsidizing losses from executing small trades. Starting with Edmister (1978) and Edmister and Subramanian (1982), the focus shifts to measuring commissions as a percentage of price. Reflecting this view, Figure 2 presents percentage institutional commissions on the NYSE in our 1999–2003 sample. In this graph, institutional commissions appear to be a continuously distributed transaction cost, which seems consistent with the extant literature. The largest frequencies are between 5 and 15 basis points of the stock price, and there is a long right tail, which gradually dies out (we truncate it at 33 bps for ease of presentation). However, the representation of commissions in basis points is misleading for U.S. stocks.3 In fact, the variation in commissions in Figure 2 comes primarily from price variation rather than from commission variation. To illustrate this point, Figure 3 presents commissions in cents per share in 1999 and in 2003. For clarity, we round commissions to the nearest tenth of a cent, making one hundred different price points available to institutional brokers. Ignoring most available prices, brokers in the United States price commissions primarily in exact cents per share. Commissions of 5 and 6 cents constitute the majority of observations in the 1999 sample, with the bulk of the rest executed at 2, 3, or 1 cent per share, respectively.

While Figure 3 shows the distribution of institutional commissions in 2003 to be similar to that in 1999, the increased competition from ECNs has significantly reduced average commissions. This trend is mainly reflected in the paucity of commissions per share above 5 cents in 2003; commissions of 6 cents per share have almost disappeared.

Figure 2 Institutional percentage commission costs on the NYSE. The distribution of commissions in Abel/Noser’s NYSE-listed institutional trading data 1999–2003, as a percentage of stock price. Commissions per share are divided by the reported execution price to calculate the percentage commission transaction cost. Zero cents-per-share commissions are not analyzed, and the distribution is truncated at 33 bps.

Figure 3 Per share institutional commissions for the NYSE-listed stocks in 1999 and 2003. All commissions per share are rounded to the nearest one-tenth of one cent. Zero cents-per-share commissions are not analyzed in this distribution, and the distribution is truncated above ten cents per share, where only a few observations reside. Few of the possible pricing nodes are actively used; institutions rely on whole-number pricing, primarily at 2 and 5 cents per share.

Figure 4 presents the frequency of commissions by five trade-size categories. For trades under five hundred shares, low commissions are somewhat more prevalent: 25% of all small trades are executed at 2 cents per share, while 52% are executed at 5 or 6 cents per share. For large trades of over 10,000 shares, only 14% are executed at 2 cents per share, while 67% are executed at 5 or 6 cents per share. Consistent with our point that commissions are not negotiated trade-by-trade, Sofianos (2001) contends that this variation in commission rates across trade size is likely due to the choice of trading venue by the client and not to client-broker negotiations over the commission rate on a particular trade.

Figure 4 Per share institutional commissions for the NYSE-listed stocks in 1999–2003 by trade size. All commissions per share are rounded to the nearest cent. Zero cents-per-share commissions are not included, and the distribution is truncated above ten cents per share, losing only a few observations. The overall frequency of trades at each commission price is presented for five trade-size categories.
An extensive literature treats (explicitly or implicitly) commissions as a marginal execution cost, including Copeland (1979); Loeb (1983); Roll (1984); Berkowitz, Logue, and Noser (1988); Brennan and Hughes (1991); Dermody and Prisman (1993); Chan and Lakonishok (1993, 1995); Livingston and O’Neal (1996); Keim and Madhavan (1997); Bertsimas and Lo (1998); and Conrad, Johnson, and Wahal (2001). These studies find that commission costs, while smaller than price impact costs, are still significant, and thus should have a material impact on various decisions by investors.4 Since the continuous distribution of percentage commissions is an artifact of price variability and has little to do with the determination of actual commissions, the interpretations of these findings may require revision. A significant part of what these studies consider a marginal cost of execution is not, in fact, a marginal cost at all. For example, Brennan and Hughes (1991) argue that firms can affect the level of analyst coverage they receive by splitting their stock. In their model, splits increase the potential commission revenue generated by trading the stock. However, if the total institutional commissions paid to a particular broker are predetermined, then the broker receives little or no marginal revenue benefit from the split, so the results of Brennan and Hughes (1991) may require an alternative explanation. We expand on this idea below.

## Commission in the Context of a Long-Term Contract

Why should institutional commissions on NYSE-listed stocks be charged using a few discrete cents-per-share prices? Given the downward trend in average commissions (Figure 1), the market for institutional execution appears competitive (Blake and Schack 2002). Why then is the distribution of per-share commissions in Figure 3 largely bimodal, a trend that accelerates as average per-share commissions fall?
One possibility is that the discrete distribution of commissions in Figure 3 is consistent with the extant claim that commissions depend on the cost of executing a trade, as long as trades come in two discrete categories of difficulty. We test the prediction of this hypothesis using Abel/Noser data. At the same time, we propose an alternative hypothesis that per-share commissions are determined by the broker as part of a long-term contract and are not subject to change or negotiations on a trade-by-trade basis. We conjecture that brokers use the commissions not only to charge for basic execution, but also for the premium services they provide. Each broker can provide several levels of service, each for its own total price per period. Given these prices, institutions decide which level of service fits their needs. If they choose to remain with the standard level of services, they are under no obligation to the broker. If, however, they choose to acquire premium status, then they pay for it by routing enough trading volume to this broker and paying the prespecified per-share commission.5 Brokers and their client institutions then monitor the level of services received and the volume traded, keeping detailed accounts of each.6 Under this hypothesis, per-share full-service commissions do not represent the marginal cost of trading for an institution, but rather serve as a metering device representing the average cost of services. Marginal transaction costs reduce trading volume, which makes the use of high commissions by brokers puzzling. However, if the total commission payment per period is largely predetermined and the basic execution is available at competitive prices, then the effect of commissions on volume and trade size should be minimal. As long as an institution can trade with a discount broker or an ECN, its desired trading volume is set using the ECN’s low transaction costs. 
Higher commissions, which include payment for other services, are inframarginal for the institution and thus should not affect the trading decision. This implies that bundling services with execution is not detrimental to trading: per-share commissions in excess of the 1–3 cents charged for basic execution are payments for broker services and therefore should have a minimal effect on volume. The practice of paying for investment-related services out of commissions (rather than the management fee) was explicitly permitted under the safe harbor provision of Section 28(e), added to the Securities Exchange Act by the 1975 Amendments. This significant advantage allows institutions to keep their management fees low. Paying a commission arranged in advance is also attractive to institutions relative to negotiating commissions on a trade-by-trade basis, which takes time and impairs immediacy of execution in a volatile market. Kavajecz and Keim (2005) argue that negotiations are costly because they reveal details about each particular trade. Once the details of a trade are revealed to a broker, the institution cannot withdraw the information if the commission is unacceptable. A prearranged commission charge avoids these costs. Institutional brokerage is not the only competitive industry that charges for services through something similar to the commission relationship we have described. An analogous market mechanism is found in the airlines’ frequent-flier programs. Airlines possess valuable assets that they cannot (or prefer not to) sell outright, such as empty first-class seats. These seats are often allocated to valuable customers based on the number of miles the customer has flown with the airline. The level of services is a step function of the accumulated miles. Travelers tend to concentrate their trips on their frequent-flier airline to ensure continued access to the airline’s premium services.
Both miles flown and total commissions represent easy-to-compute (for both parties) metrics that efficiently measure the importance of a client to each business.7 The cent-per-share denomination of commissions, common in the United States, is not necessary for the long-term contract to exist and is not central to our argument. Commissions in Europe and Japan are traditionally quoted in basis points of trade value. The standard full-service institutional commission in Europe today is 15 bps, which yields revenue for the broker on a stock priced around $25–$30 similar to a U.S. commission of 4–5 cents per share.8 Similarly, electronic execution in Europe is priced at 5 bps, which is comparable to 2 cents per share in the United States. Thus, the key element of the commission contract described here is not the basis on which the metering is done, be it cents or basis points, but the metering itself, which explains the continued existence of a premium commission in the presence of cheaper alternatives.9 Our conjecture also encompasses soft dollars, which represent an explicit charge for services purchased from outside vendors. They have been studied by regulators (SEC 1998), practitioners (Bennett 2002), and academics (Blume 1993; Conrad, Johnson, and Wahal 2001). While applicable to soft dollars, our conjecture also extends to all full-service commissions, whether or not they are recorded in a separate soft-dollar account. The difference in our emphasis is not merely semantic. First, according to Bennett (2002), in his report for Greenwich Associates, explicit soft-dollar commissions constitute only 27% of all full-service commissions, while the SEC (1998) reports that the seventy institutions it surveyed direct only about 8% of their total commissions to soft-dollar accounts.
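The equivalence between basis-point and cent-per-share quotes above is simple arithmetic; the following sketch (illustrative prices only, not drawn from the dataset) verifies the stated comparison:

```python
# Convert a basis-point commission quote into cents per share, to check
# the bps-vs-cents equivalence described in the text (prices illustrative).
def bps_to_cents_per_share(bps: float, price_usd: float) -> float:
    """Cents per share implied by a commission quoted in bps of trade value."""
    return bps / 10_000 * price_usd * 100

# 15 bps on a $25-$30 stock is roughly the 4-5 cents/share U.S. quote:
full_service = [bps_to_cents_per_share(15, p) for p in (25, 30)]  # ~[3.75, 4.5]
# 5 bps electronic execution is comparable to ~2 cents/share:
electronic = [bps_to_cents_per_share(5, p) for p in (25, 30)]     # ~[1.25, 1.5]
```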
However, our data indicate that in 1999 over 70% of all commissions were above discount commission levels (in 2003 this number falls to 58%), implying that the market for premium commissions is much larger than conventional definitions of soft-dollar payments. Second, our argument applies in regulatory regimes, such as the U.K., where explicit soft-dollar arrangements are ruled out but where informal contracts for premium services still exist. Finally, explicit soft-dollar payments are predominantly used to buy third-party services: according to the SEC (1998), the most common use of soft dollars is as a payment to data vendors such as Standard and Poor’s, First Call, and Bloomberg. Thus, soft dollars do not necessarily yield the same predictions regarding the allocation of order flow as the premium service hypothesis. Viewing commissions as dependent on the cost of executing a trade implies that commissions should be mostly determined by individual trade characteristics, such as difficulty, price, and size. On the contrary, if brokers and their institutional clients predetermine commissions in a long-term agreement, there is no reason to negotiate commissions on a trade-by-trade basis, and the same commission can be charged repeatedly. Therefore, the main testable prediction of our conjecture is the persistence of commissions on trades between the same institutional client-broker pair. In an environment with little or no trade-by-trade negotiation over commissions, variables normally used to proxy for the execution cost of a trade should be relatively unimportant in determining per-share commissions. Our conjecture is consistent with previous empirical studies that find no significant correlation between commission costs and execution costs (Berkowitz, Logue, and Noser 1988; Chan and Lakonishok 1993, 1995; Conrad, Johnson, and Wahal 2001). We proceed to test these alternatives directly using a large dataset of institutional trades.
### Data

Our primary data source consists of 25,643,364 trades for NYSE-listed stocks by 683 institutional investors executed between January 1, 1999, and December 31, 2003. The proprietary data are obtained from Abel/Noser Corporation, an NYSE member firm and a leading provider of transaction cost analysis to institutional investors. Abel/Noser most often receives direct feeds from institutional investors’ compliance departments; therefore, the database represents a complete record of an institution’s trading. The database includes several unique items: the executing broker; an institutional client identification number, which permits us to track trades associated with each of the 683 institutions; and a buy/sell trade indicator. In addition, the database contains the commission cost of each trade, its size, date, and the average execution price.10 We next identify the broker used for each trade. There are 1064 brokers in the database; however, many brokers appear infrequently.11 To concentrate on the most important participants, we restrict the sample to brokers who execute at least fifty trades in a calendar quarter. After imposing this restriction, only 270 active brokers remain in the Abel/Noser data, yet they account for over 98% of the original observations. We further truncate the sample by deleting observations with commissions above 10 cents per share (2.01% of the sample), as well as those with zero commissions (2.77%). The resulting sample consists of 24,093,939 trades. The size of the institutional client appears in several hypotheses; therefore, we sort the clients into five quintiles ranked by trading volume. We present the aggregate trading statistics by quintile in Table 1, which indicates that trading activity is highly skewed toward the largest clients. The highest-volume quintile dominates the other quintiles in terms of total trading volume, number of trades, and total commissions paid to brokers.
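The sample filters described above can be sketched as follows; the record layout and field names are hypothetical, since the Abel/Noser data are proprietary:

```python
from collections import Counter

def filter_sample(trades, min_trades_per_quarter=50, max_commission_cps=10.0):
    """Apply the two sample filters: active brokers only, then commission bounds."""
    # 1. Keep brokers with at least `min_trades_per_quarter` trades in a quarter.
    counts = Counter((t["broker"], t["quarter"]) for t in trades)
    active = [t for t in trades
              if counts[(t["broker"], t["quarter"])] >= min_trades_per_quarter]
    # 2. Drop zero commissions and commissions above 10 cents per share.
    return [t for t in active if 0 < t["commission_cps"] <= max_commission_cps]

# Toy records (field names are made up for illustration):
toy = [
    {"broker": "B1", "quarter": "1999Q1", "commission_cps": 5.0},
    {"broker": "B1", "quarter": "1999Q1", "commission_cps": 0.0},   # zero: dropped
    {"broker": "B1", "quarter": "1999Q1", "commission_cps": 12.0},  # >10c: dropped
]
kept = filter_sample(toy, min_trades_per_quarter=3)  # B1 has 3 trades: active
```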
As a robustness check, we verify that the average stock price per trade is roughly equal across quintiles, which indicates that differently sized institutions are not trading vastly differently priced stocks.

Table 1 Description of institutional client trading activity in the sample

| Client quintile by trading volume | 1 = low | 2 | 3 | 4 | 5 = high |
| --- | --- | --- | --- | --- | --- |
| **Aggregate trading** | | | | | |
| Total share volume (000s) | 185,638 | 879,143 | 2,441,521 | 7,959,280 | 215,753,230 |
| Total commission ($ 000s) | 9,076 | 41,486 | 115,954 | 366,122 | 9,414,552 |
| Trades | 109,430 | 377,670 | 872,102 | 2,039,570 | 20,695,167 |
| **Average per client trading** | | | | | |
| Volume per client ($ 000s) | 46,370 | 207,776 | 588,440 | 1,854,271 | 53,809,910 |
| Commissions per client ($ 000s) | 67 | 303 | 846 | 2,672 | 69,225 |
| Trades per client | 805 | 2,756 | 6,365 | 14,887 | 152,170 |
| Average commission (¢/share) | 4.83 | 4.70 | 4.77 | 4.49 | 3.90 |
| Average commission ($/trade) | 82.95 | 109.85 | 132.96 | 179.51 | 454.92 |
| Average trade size | 1,696 | 2,327 | 2,800 | 3,902 | 10,425 |
| Average price ($/share) | 40.19 | 40.03 | 40.50 | 39.59 | 42.69 |

This table presents summary information on the trading activity of 683 institutional clients in the Abel/Noser dataset for 1999–2003. Institutional clients are sorted into five quintiles by total trading volume (shares executed). Total share volume, Total commission, and the number of Trades are sum totals for each client quintile. Volume per client, Commissions per client, and Trades per client represent the average across all clients in a quintile. Average commissions per share, per trade, trade size, and price per share are averages over all trades in each quintile.

### Results

We initially demonstrate that institutional commissions behave as if they were generated in a long-term contract using two empirical tests. First, for every calendar quarter in our sample, we identify the trades of client-broker pairs (keeping only pairs with at least five trades in that quarter) and calculate the mode of the commission distribution for each client-broker pair.12 In our framework, where institutional per-share commissions are part of a long-term contract and are relatively constant over time, we expect the modal commission to dominate traditional measures of execution costs in predicting commissions in the subsequent quarter. Table 2 presents the transition matrix between the mode of the commission distribution for each client-broker pair in the prior quarter and the mode for the same pair in the posterior quarter. The number of client-broker pairs in the prior (post) quarter is presented at the extreme right (bottom); altogether there are 4,776 client-broker pairs. The post-period row at the bottom of Table 2 gives the number of post-period pairs that execute at a particular commission and therefore represents the unconditional distribution of the modal commission in the post-period.
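The modal-commission construction just described can be sketched in a few lines; the record format here is hypothetical:

```python
from collections import Counter, defaultdict

def modal_commissions(trades, min_trades=5):
    """Modal per-share commission for each client-broker pair in a quarter,
    keeping only pairs with at least `min_trades` trades (as in the text)."""
    by_pair = defaultdict(list)
    for client, broker, commission_cps in trades:
        by_pair[(client, broker)].append(commission_cps)
    return {pair: Counter(c).most_common(1)[0][0]
            for pair, c in by_pair.items() if len(c) >= min_trades}

# Toy quarter: six trades between one pair, mostly at 5 cents per share.
toy = [("C1", "B1", 5.0)] * 4 + [("C1", "B1", 2.0)] * 2
assert modal_commissions(toy) == {("C1", "B1"): 5.0}
```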
If commissions are negotiated on a trade-by-trade basis, then the distribution of modal post-period commissions should be independent of the prior-period commission. Instead, the data show that the actual transition probabilities depend heavily on the mode of the prior-period commission and are dramatically different from the unconditional distribution probabilities, as demonstrated by comparing the conditional probabilities along the main diagonal with the unconditional probabilities along the bottom row. To verify the importance of the prior-period commission on the frequency of post-period commissions, we perform a likelihood ratio test (Greene 1997). In each case, the hypothesis that the conditional probabilities are equal to the unconditional probabilities is strongly rejected. Hence, the observed frequencies of post-period commissions are significantly affected by prior-period commissions.

Table 2 Transition matrix of commission mode. Columns give the mode of cents per share in the post period.

| Mode (¢/share) in prior period | 1.0 | 2.0 | 3.0 | 4.0 | 5.0 | 6.0 | 7.0 | 8.0 | 9.0 | 10.0 | Mean no. of pairs at that commission in prior period |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1.0 | 71.7 | 11.7 | 3.8 | 2.0 | 8.9 | 2.0 | 0.0 | 0.0 | 0.0 | 0.0 | 59 |
| 2.0 | 2.6 | 82.9 | 3.6 | 1.1 | 7.8 | 1.8 | 0.1 | 0.0 | 0.0 | 0.0 | 302 |
| 3.0 | 1.0 | 5.9 | 65.5 | 2.3 | 20.2 | 5.8 | 0.1 | 0.0 | 0.0 | 0.0 | 281 |
| 4.0 | 0.9 | 3.3 | 5.3 | 61.7 | 23.3 | 5.3 | 0.2 | 0.0 | 0.1 | 0.0 | 144 |
| 5.0 | 0.2 | 0.9 | 1.9 | 1.6 | 89.4 | 5.8 | 0.1 | 0.0 | 0.0 | 0.0 | 2,850 |
| 6.0 | 0.1 | 0.7 | 1.8 | 1.1 | 23.2 | 72.6 | 0.3 | 0.1 | 0.0 | 0.0 | 1,097 |
| 7.0 | 0.0 | 1.6 | 1.4 | 1.8 | 17.6 | 19.2 | 56.6 | 1.9 | 1.4 | 0.0 | 30 |
| 8.0 | 0.0 | 0.1 | 0.1 | 0.0 | 9.7 | 14.9 | 7.1 | 66.9 | 0.0 | 0.0 | |
| 9.0 | 0.0 | 0.0 | 2.5 | 2.5 | 12.5 | 25.0 | 0.0 | 0.0 | 57.5 | 0.0 | |
| 10.0 | 0.0 | 0.0 | 1.6 | 0.0 | 0.0 | 12.7 | 0.0 | 7.9 | 6.4 | 65.0 | |
| Mean no. of pairs at that commission in post period | 63 (1.3%) | 309 (6.5%) | 279 (5.8%) | 158 (3.3%) | 2,928 (61.3%) | 1,002 (21.0%) | 24 (0.5%) | (0.2%) | (0.04%) | (0.1%) | 4,776 |

This table presents the mode of the commission distribution between a specific institutional client and a specific broker. Commission modes are calculated quarter-to-quarter beginning in the first quarter of 1999 and ending in the third quarter of 2003. Mode of cents per share in the prior period is the mode of the client-broker commission distribution in the initial quarter. Mode of cents per share in the post period is the mode of the commission distribution between the same broker-client pairs for trades executed in the following quarter. The mean number of client-broker pairs for all initial (following) quarters at each commission price is at right (bottom). Pairs with fewer than five trades executed in a quarter are omitted, as are pairs with fractional modes (7.3% of the sample) for clarity. The fact that the prior-period commission between a client-broker pair is a strong predictor of the future modal commissions between that pair is consistent with the conjecture that per-share commissions represent average costs in long-term client-broker agreements.
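The likelihood ratio test above compares observed transition counts with the counts expected under the unconditional distribution; a minimal sketch on made-up counts (a G-statistic, asymptotically chi-squared under the null):

```python
import math

def likelihood_ratio_stat(observed_rows, unconditional_probs):
    """G = 2 * sum O * ln(O / E), with expected counts E taken from the
    unconditional distribution. Large values reject the hypothesis that
    conditional and unconditional probabilities coincide."""
    g = 0.0
    for row in observed_rows:
        n = sum(row)
        for obs, p in zip(row, unconditional_probs):
            if obs > 0:
                g += 2 * obs * math.log(obs / (n * p))
    return g

# A prior-mode row concentrated on its own diagonal cell is far from a
# flat unconditional distribution, yielding a large statistic:
stat = likelihood_ratio_stat([[90, 5, 5]], [1 / 3, 1 / 3, 1 / 3])
```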
Next, we extend our tests of this hypothesis by contrasting the ability of standard measures of execution costs to predict trade-by-trade commissions against the ability of the prior-period mode. Like many other authors, Roll (1984) assumes that brokerage commissions are negotiated on the basis of execution difficulty. Table 3 presents regressions that estimate Equation (2) with and without the prior-period modal commission:

$$\begin{align} Commission\,per\,share = \alpha &+ \beta_1 Price + \beta_2 Shares + \beta_3 Mkt\% \\ &+ \beta_4 Mode + \beta_5 Cvol + \beta_6 Bvol + \eta. \tag{2} \end{align}$$

In Equation (2), the commission per share on a trade in the post-period is a function of the following: $$Price,$$ the execution price; $$Shares,$$ the trade size in log shares; $$Mkt\%,$$ the trade size as a percentage of that day’s trading volume in the stock; $$Mode,$$ the mode of the prior-period commission distribution for each client-broker pair; $$Cvol,$$ the volume-based quintile size rank (smallest (1) to largest (5)) of the institutional client; and $$Bvol,$$ the volume-based quintile size rank of the executing broker.
Table 3 Determinants of institutional commissions

| Sample | N | Intercept | Price | Shares | Mkt % | Prior Mode | Cvol | Bvol | Adj. R² (%) | Log likelihood (ordered logit) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| All | 16,495,326 | 3.32 (0.001) | 0.0000 (0.001) | 0.089 (0.001) | −0.104 (0.001) | | | | 0.98 | −1,330,695 |
| All | 16,495,326 | 0.406 (0.001) | 0.0000 (0.001) | 0.037 (0.001) | −0.026 (0.001) | 0.83 (0.001) | | | 65.80 | −930,933 |
| Low cost | 6,120,536 | 1.96 (0.001) | −0.0000 (0.001) | 0.0015 (0.001) | 0.047 (0.001) | | | | 0.01 | −400,575 |
| Low cost | 6,120,536 | 1.31 (0.001) | −0.0000 (0.003) | 0.0004 (0.001) | 0.038 (0.001) | 0.28 (0.001) | | | 26.03 | −370,531 |
| Low cost | 6,120,536 | 2.04 (0.001) | −0.0000 (0.002) | −0.0006 (0.001) | 0.037 (0.001) | 0.28 (0.001) | −0.0083 (0.001) | −0.173 (0.001) | 26.62 | −367,941 |
| High cost | 10,374,790 | 5.20 (0.001) | 0.0000 (0.262) | −0.005 (0.001) | −0.007 (0.001) | | | | 0.03 | −487,434 |
| High cost | 10,374,790 | 3.86 (0.001) | 0.0000 (0.045) | 0.0005 (0.011) | −0.003 (0.001) | 0.26 (0.001) | | | 12.84 | −436,740 |
| High cost | 10,374,790 | 4.05 (0.001) | 0.0000 (0.008) | 0.0022 (0.001) | −0.027 (0.001) | 0.26 (0.001) | −0.049 (0.001) | −0.002 (0.001) | 13.03 | −432,881 |

This table presents the results of regressions using commissions per share in each quarter from the second quarter of 1999 to the last quarter of 2003 as the dependent variable. Commissions per share are truncated at ten cents a share and rounded to the nearest 1/10 of a cent. Zero-cent commissions are not analyzed. Shares is the trade size; Price is the trade price; Mkt % is the size of the trade divided by the daily volume in the traded stock; Prior Mode is the mode of each client-broker pair’s per-share commission in the preceding quarter; Cvol is the institution’s quintile rank among all institutions in the sample; Bvol is the broker’s quintile rank among all brokers in the sample. Low-cost commissions are trades executed at commissions of at most 3 cents per share (Low cost); high-cost commissions are trades executed at commissions between 4 and 10 cents per share (High cost); All combines both. Log likelihood presents the goodness-of-fit statistic from an ordered logit specification of each regression. Coefficient p-values are reported in parentheses next to the coefficient estimates. The explanatory power of the prior-period $$Mode$$ relative to that of the execution cost variables ($$Price,$$ $$Shares,$$ and $$Mkt\%$$) is the key to interpreting Equation (2). Our hypothesis suggests that the $$Mode$$ will have strong explanatory power and a positive coefficient. Alternatively, if the execution costs of a particular trade really do affect commissions, then we expect the execution cost variables to influence the post-period commission. The effect of $$Price$$ on commissions per share should be positive, as higher-priced stocks may require higher capital commitments from facilitating brokers. Larger trades may be more difficult to execute, so $$Shares$$ should be positively related to commissions per share.
$$Mkt\%$$ is a measure of trade difficulty: the larger the trade relative to daily volume, the greater the total liquidity the trade demands. Hence, $$Mkt\%$$ should be positively related to commissions per share. $$Cvol$$ and $$Bvol$$ are included as control variables that capture potential effects on commission rates related to the size of the client or the size of the broker. The first two regressions in Table 3 present two specifications of Equation (2) for all 16.5 million trades in the client-broker pairs sample (All). Under the null hypothesis that commissions can be represented as a continuous distribution of marginal transaction costs, OLS estimation is appropriate. However, as Figure 3 demonstrates, the distribution of commissions per share is discrete, not continuous. Thus, we also present the log-likelihoods from ordered logit regressions to confirm that the OLS inferences about the economic significance of each regression are robust.13 Execution cost variables do not explain much: although trade size ($$Shares$$) has the predicted sign, trade difficulty ($$Mkt\%$$) does not, and the regression only manages an $$R^{2}$$ of 0.01. However, adding the prior $$Mode$$ as an explanatory variable increases the $$R^{2}$$ dramatically to 0.66. This striking result shows that past commissions dominate trade-specific characteristics in explaining trade-by-trade commissions. Given the largely bimodal distribution of commissions per share presented in Figure 3, it is possible that the prior mode simply proxies for differences between the commission levels at full-service brokers as opposed to ECNs and discount brokers. To check the robustness of our results, we examine three regression specifications for each of the low-cost (per-share commissions ≤ 3 cents) and high-cost (> 3 cents) markets.
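The jump in explanatory power from adding the prior mode can be illustrated with a toy simple regression (pure Python, one regressor at a time, simulated numbers rather than the paper's estimates): when commissions stick to the pair's contract price, the prior mode explains them almost mechanically, while a trade characteristic does not.

```python
def r_squared(x, y):
    """R^2 of a simple (one-regressor) OLS regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# Simulated trades: commissions equal the pair's prior modal price.
commission = [5.0, 5.0, 2.0, 2.0, 5.0, 2.0]
prior_mode = [5.0, 5.0, 2.0, 2.0, 5.0, 2.0]   # sticky contract price
log_shares = [3.1, 1.2, 2.5, 0.9, 2.2, 3.0]   # trade size, unrelated

assert abs(r_squared(prior_mode, commission) - 1.0) < 1e-12
assert r_squared(log_shares, commission) < 0.01
```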
Again, we are primarily interested in the relative explanatory power of the execution cost proxies against our long-term agreement proxy ($$Mode$$). In both subsamples, the execution cost variables do very poorly, obtaining $$R^{2}$$ of 0.01%–0.03%. In both markets, adding the $$Mode$$ to the specification significantly increases the explanatory power of the regression, to 26% and 13%, respectively. Client size ($$Cvol$$) and broker size ($$Bvol$$) are both negative and significant but do not significantly increase the explanatory power of the regressions. This evidence indicates that individual trade commission costs are not driven by the characteristics of particular trades and shows how far the market has evolved from the regulated commission market of Equation (1). Being invariant to the costs of trade execution, commissions are unlikely to represent marginal execution costs.14 Thus, the commission charge in both markets is best explained by the prior modal commission between a client-broker pair. This result is not driven by the safe-harbor provision: under Section 28(e), brokers could charge the marginal cost per trade plus a fixed markup. In that case, the fixed markup would be captured in the intercept of these regressions, the varying marginal cost of the trade would be captured by the coefficients of the independent variables, and the prior mode would matter little. However, we do not observe this pattern in Table 3. The prior-period mode explains post-period commissions well because commissions are rarely negotiated on a trade-by-trade basis. Figure 5 presents the frequency distribution of commissions between our 4,776 client-broker pairs. Overall, 43.5% of all client-broker pairs in the sample use only a single per-share commission on all the trades they transact. An additional 30% of client-broker pairs pay only two commission prices, and over 92.6% of all client-broker pairs use four or fewer commission prices.
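The statistic behind these counts, the number of distinct per-share commission prices each client-broker pair uses, can be sketched as follows (toy records; the field layout is hypothetical):

```python
from collections import Counter, defaultdict

def distinct_commission_counts(trades):
    """Distribution, across client-broker pairs, of how many distinct
    per-share commission prices each pair used."""
    prices = defaultdict(set)
    for client, broker, commission_cps in trades:
        prices[(client, broker)].add(commission_cps)
    return Counter(len(s) for s in prices.values())

# One pair always at 5 cents, another pair splitting 5 and 2 cents:
toy = [("C1", "B1", 5.0)] * 6 + [("C2", "B1", 5.0), ("C2", "B1", 2.0)]
assert distinct_commission_counts(toy) == {1: 1, 2: 1}
```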
Clearly, trade-by-trade negotiation of commissions must play a relatively minor role in the institutional market. Yet more than one per-share commission can, in some instances, be used to fulfill the terms of the long-term contract.

Figure 5 Frequency of number of different commissions per share by institutional client-broker pairs in 1999–2003. Percent represents the average over 20 quarters of the distribution of the number of commissions executed by a client-broker pair. Each institutional client executed at least five trades with each broker during a quarter. Truncation at ten commissions removes 2.2% of the data from this graph.

The relationships between four prominent full-service brokers and their larger clients (with at least fifty trades per client per quarter) in the first quarters of 1999 and 2003 illustrate their response to low-cost competition. For example, Morgan Stanley offers low-cost (not exceeding 3 cents) executions to only thirty-one of these clients in 1999, but extends the arrangement to fifty-eight clients by 2003. Bear Stearns’ low-cost commission relationships rise from thirty-one to forty-one during the same period. The change at Goldman Sachs is more dramatic: low-cost commission alternatives are charged to twenty-two of its largest clients in 1999, and to fifty-five in 2003. Finally, Merrill Lynch’s low-cost relationships almost triple, from twenty-one in 1999 to sixty-one in 2003.
These market changes are reflected in the decline of average commissions presented in Figure 1 and the shift in the distribution of commissions toward ECN prices presented in Figure 3. Overall, the data are consistent with our conjecture and provide little support for commissions being determined by the cost of execution.

## Value of Premium Services

### Equilibrium in the premium service market

Full-service brokers provide many services, the most prominent being timely information provision, the reduction of market impact in trade execution, and IPO access. We argue that even if these services were sold separately, the equilibrium would not take the form of a spot market, where these services are paid for on a quid pro quo basis, but would rather evolve into a long-term contract between brokers and institutions.15 Clients observe only very crude proxies for the quality of these services on a daily basis. Removing the noise by averaging over a large number of events makes performance measurement far more accurate and easier to evaluate. We conjecture that the equilibrium in the market for premium services takes the following form. The contract between a broker and a client is set for a specific period. Each broker provides a level of service corresponding to each client’s choice and ensures that it receives the appropriate payment. Premium clients receive top priority from the research department in the provision of timely information, the foremost attention of the trading desk to their trades, and large IPO allocations from investment banking upon demand. The absolute price for this level of service is high, as evidenced by the size of the commission market.
At the end of the period, the institution evaluates the average quality (value) of the services it received from the broker and decides at what level to continue the relationship.16 At the opposite end of the scale, there are clients who demand only basic execution without any additional services, so no long-term contract is required. Our conjectured equilibrium is a variation on the Klein and Leffler (1981) equilibrium of product quality assurance. In their model, a high-quality producer prices the product above its marginal cost. Customers are willing to pay more relative to cheaper substitutes as long as quality is maintained above some predetermined level. The producer could cheat and produce a cheaper product, but this behavior would stop the stream of future positive profits associated with producing the high-quality good and receiving the premium price. Thus, the equilibrium yields a high-quality, high-price market even in the presence of low-cost substitutes. Applied to the brokerage services market, the model suggests that institutional clients can use repeated interactions to ensure high-quality service provision from their full-service brokers even without a formal contract and in the presence of discount brokers. Below, we specify the nature of the most important premium services that fall into this category.

#### Timely information provision.

Timmons (2000) quotes an anonymous sell-side analyst as saying, “I kept my Buy rating, but I told my favorite investors to sell.” Clearly, some clients are getting better information from their analysts than others. From any single client’s perspective, the value of the information the client receives crucially depends on the timing of its transmission from the broker. As prices adjust to reflect information embedded in trades (Glosten and Milgrom 1985; Kyle 1985; Easley and O’Hara 1987), information loses its value upon revelation to additional market participants.
Thus, the scarce resource in this context is the client’s place in the information queue: those called first by the broker get the most valuable information.17 This feature of information provision implies that clients have strong incentives to purchase a place near the head of the broker’s queue. However, information quality that reflects one’s place in the queue is hard to verify in any specific instance, as it is based on realized returns in a volatile market. Idiosyncratic effects tend to cancel over many independent observations, which suggests that the quality of research services provided by brokers is best evaluated over a long period.

#### Trade execution.

Institutional clients frequently demand that their brokers minimize the price impact of their trades. The time, skill, effort, and capital allocated by the broker to provide a counterparty for a trade determine the degree of its price impact. The sheer number of variables that could potentially affect execution on a particular trade suggests that ascertaining execution quality on a trade-by-trade basis is practically impossible. However, the idiosyncratic variables affecting execution quality on a particular trade tend to cancel out over time, so the precision of estimates of a broker’s performance improves over longer horizons. Indeed, the extensive use by institutional investors of firms such as Abel/Noser, which specialize in providing comparative analysis of brokers’ execution costs over time, suggests that agreements based on execution cost measures are likely long term as well.18,19

#### IPOs.

A broker’s best institutional clients get larger allocations of “hot” IPOs, and the larger profits associated with them (Fulghieri and Spiegel 1993; Nimalendran, Ritter, and Zhang 2007).20 Brokers obviously cannot explicitly charge for this service, and so they allocate shares to those who pay for them implicitly.
Reuter (2006) finds confirming evidence in the correlation between mutual-fund commissions paid to underwriting brokers and post-IPO fund holdings. The fact that Reuter (2006) finds a significant relation despite the relatively infrequent reports from his data sources is strong evidence that the IPO allocation decision is at least partly based on long-term relationships between brokers and their clients. The difficulty of measuring the quality of these premium services on a quid pro quo basis suggests that long-term agreements, which fix the level of service and the required payment over a long period, are appropriate in the institutional market.

### Value of information provision: An illustration

Our hypothesis assumes tangible benefits from brokers’ premium services, in particular the timeliness and precision of sell-side analysts’ information. We provide an illustration of the value of such services by investigating institutional trading around changes in analysts’ recommendations. We use a sample of 7010 analysts’ recommendation changes from First Call during the 1999–2003 period for the NYSE stocks for which we also have Abel/Noser data. Panel A of Table 4 presents average event-day abnormal returns for the analysts’ recommendation changes and finds them in line with prior research (Elton, Gruber, and Grossman 1986; Womack 1996). Upgrades produce an average abnormal return of 1.93% ($$t$$-statistic = 17.8) and downgrades produce an average abnormal return of −3.73% ($$t$$-statistic = −24.0). These recommendation changes appear informative, and hence timely trading in these stocks on these days may provide profit opportunities.
Table 4
Analysts’ recommendation changes and the subsequent trading

Panel A: Event-day abnormal returns

|  | N | Mean | t-statistic |
| --- | --- | --- | --- |
| All upgrades | 3,125 | 1.93% | 17.81 |
| upgrades to strong buy | 1,805 | 1.95% | 13.96 |
| upgrades to buy | 1,180 | 1.95% | 10.49 |
| upgrades to hold | 140 | 1.67% | 10.80 |
| All downgrades | 3,885 | −3.73% | −24.04 |
| downgrades to buy | 1,321 | −2.83% | −13.87 |
| downgrades to hold | 2,313 | −4.06% | −19.02 |
| downgrades to sell | 251 | −4.54% | −6.14 |

Panel B: Trades on the day of the recommendation change

|  | Trades through other brokers | Trades through the changing broker | Difference |
| --- | --- | --- | --- |
| Number of trades | 219,320 | 3,965 |  |
| Improvement over VWAP – cents | 2.34 (0.0013) | 6.76 (0.0082) | 4.42 (5.27) |
| Improvement over Close – cents | 9.07 (0.0020) | 19.31 (0.015) | 10.23 (6.62) |
| Commissions per share – cents | 4.35 (0.00003) | 5.05 (0.0002) | 0.70 (37.79) |
| Commissions paid per trade ($) | 745.8 (8.61) | 965.6 (70.80) | 219.8 (3.08) |
| Share volume | 15,967 (173.19) | 18,709 (1,324.74) | 2,742 (2.05) |
| Mkt % | 0.49 (0.0002) | 0.70 (0.0005) | 0.21 (3.87) |

The sample consists of 7010 NYSE-listed analysts’ recommendation changes in stocks that appeared in both First Call and Abel/Noser between 1999 and 2003. The abnormal returns reported are market-adjusted returns. Improvement over VWAP is $$I$$ (a buy-sell indicator variable) times the difference between the value-weighted average price for the day and the execution price of the trade. Improvement over Close is $$I$$ times the difference between the closing price for that day and the execution price of the trade. Share volume is the number of shares in a trade. Mkt % is the size of the trade divided by the daily volume in the traded stock. Standard errors are reported in parentheses. T-statistics for the difference in means test are presented in parentheses below the mean difference in the Difference column. Significant differences (5%) are in boldface.

Panel B of Table 4 presents an analysis of institutional trades in the recommended stock on the day analysts change their recommendations. We test whether the recommending broker’s clients receive superior information by comparing the profitability of client trades in the recommended stock on the day of the release of the report against the profitability of trades by nonclients. This is a powerful and direct test of the informational value of being a client of a full-service broker. Institutions trading through the recommending broker are by definition clients of that broker.
Although not required to trade with the recommending broker, many clients apparently do, perhaps to reward the analyst whose bonus is often tied to the commission revenue generated by their recommendations.21 Institutions that trade through the recommending broker obtained prices that average 19.31 cents per share better than the closing price, while trades through nonrecommending brokers received price gains of 9.07 cents relative to the close. These profits are comparable to existing evidence of trading gains for clients receiving notification of reports before they are broadly disseminated.22 Examining commission costs, we find that trades through the recommending broker on the day of the recommendation change paid higher average commissions—5.05 cents per share—than trades in the same stock on the same day executed through any other broker—4.35 cents per share. This difference reflects the fact that research providers are primarily full-service brokers who usually charge commissions of 5 or 6 cents per share. In this case, the extra commission payment is profitable, as institutions that trade through the recommending broker gain significant price improvement in return for the higher commission. Thus, clients of the recommendation-changing broker made more profitable trades, despite the fact that these trades were, on average, significantly more difficult to execute, as measured by the size of the trade relative to that day’s trading volume. The profitability results support our assertion that brokers’ services are valuable. Since a large portion of the gain from trading on analysts’ recommendations is likely to dissipate quickly, access to early and precise information from the brokers’ research department is a valuable asset.

## Institutional Trading Patterns

How do institutions allocate their volume across various brokers? Several decisions are involved: how many brokers to use, which volumes to allocate, and how to allocate them among the chosen set.
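Under the definitions in the notes to Table 4, the two price-improvement measures can be sketched as follows; the function name and sample prices are illustrative, not taken from the study’s data:

```python
def price_improvement(side, exec_price, benchmark):
    """Improvement in cents per share relative to a benchmark price.

    side: +1 for a buy, -1 for a sell (the buy-sell indicator I).
    A buy executed below the benchmark, or a sell executed above it,
    counts as positive improvement.
    """
    return side * (benchmark - exec_price) * 100  # dollars -> cents

# Hypothetical trade: a buy executed at $20.10 in a stock whose
# VWAP was $20.17 and whose close was $20.29 that day.
improvement_vwap = price_improvement(+1, 20.10, 20.17)   # ~7 cents
improvement_close = price_improvement(+1, 20.10, 20.29)  # ~19 cents
```

Averaging these per-share improvements across all client trades on recommendation-change days gives the figures compared in Panel B.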
We present two ways to address this question. One is based solely on the cost of execution; the other focuses only on the effects implied by the payment for premium services hypothesis.

### Hypotheses based on cost minimization

We identify three types of costs that institutions must take into account when allocating their order flow to brokers (we assume that brokers’ prices are competitive).

1. Fixed cost: The cost of adding an additional broker to a client’s list of brokers to cover electronic connections, billing, clearing, other back-office services, as well as regulatory compliance costs. This cost provides incentives for institutions to limit their number of brokers. If no other costs were present, this would imply a single broker for each market for each institution.
2. Cost of frontrunning: An institution may not want to send too much trading volume to a broker, to prevent the broker from frontrunning.23 This problem is much more significant for large institutions that occasionally submit very large orders. This cost induces the client to distribute volume evenly across brokers (yet randomize on every transaction), to split orders among brokers, and to increase the number of brokers used.
3. Trading strategy recognition: Institutions do not want the market to recognize that their trades represent a significant proportion of a particular broker’s volume. They are afraid of market participants recognizing their trading patterns and increasing their price-impact costs.24 As it is easier to hide a small volume than a large one, small institutions should not be worried about this cost. Large institutions can minimize this cost by increasing the number of brokers they use and allocating volume proportionate to the broker’s size.
Thus, an institution minimizing this cost would send comparable trading volumes to two of its equally sized brokers.25 We postulate several hypotheses based on institutions minimizing these costs (the letter “c” indicates that these are derived from cost considerations):26

Hypothesis 1c: Institutions allocate a higher percentage of their volume to the larger brokers among those they use, so as to hide their order flow.

Hypothesis 2c: Smaller institutions allocate their volume more evenly than larger institutions, as they are less worried about frontrunning and strategy recognition.

Hypothesis 3c: Smaller institutions employ fewer brokers than larger institutions, since they face the same fixed costs but lower frontrunning and strategy recognition costs.

Hypothesis 4c: Minimizing trading costs should not increase the turnover of smaller institutions relative to that of larger institutions. If small institutions pay higher per-share commissions to cover the fixed setup costs, then this trading impediment should reduce the turnover of smaller institutions relative to that of the larger ones. If the commissions are the same across institutions, there should be no difference between the turnover of larger and smaller institutions.

Hypothesis 5c: Similar-sized brokers should receive similar proportions of volume from the same institution, while similar-sized institutions should send similar proportions of volume to a particular broker they work with.

In addition to these costs, we have outlined the benefits available through client-broker relationships. These benefits produce an alternative set of hypotheses.

### Hypotheses based on payment for services

We have argued that an institution willing to pay the price of a premium service package receives early access to analysts’ research, priority in difficult trade executions, more capital committed to its trades, and a disproportionately larger share of IPOs.
To obtain these services, institutions pay a fixed fee charged through a relatively constant per-share commission. The total payment is the product of the number of shares and the commission per share. In this framework, institutions allocate volume strategically so as to obtain premier status at as many service-providing brokers as possible. To do so, they must concentrate their order flow with a subset of brokers to generate sufficient revenue within this subset. Under our conjecture, institutional trading patterns must, therefore, reflect a pattern of concentration (“bunching”) of trades with particular brokers. Note that larger institutions can automatically become premier clients with more brokers due to their size. Smaller ones should use fewer brokers to generate more volume per broker, as well as bunch more extensively. Small institutions could also increase their payments to brokers by increasing their turnover or by agreeing to pay higher commissions per share.27 This is particularly relevant for smaller institutions that may want to increase their service above the level they would receive based on their size. We postulate a second set of testable hypotheses based on our conjecture (indicated by “s”):

Hypothesis 1s: Institutions disproportionately “bunch” their order flow with particular brokers to receive a premier level of service. Bunching is not necessarily related to a broker’s size.

Hypothesis 2s: Smaller institutions bunch more than larger institutions due to their desire to obtain premier status with at least some brokers.

Hypothesis 3s: Smaller institutions employ fewer brokers than larger institutions, since the sheer size of large institutions allows them to attain premier status with more brokers, while small institutions need to concentrate their trades on fewer brokers.
Hypothesis 4s: Smaller institutions may be willing to increase turnover and pay a higher per-share commission to generate higher commission revenues and receive additional services.

Hypothesis 5s: Similar-sized brokers may receive vastly different allocations from the same institution, depending on whether the institution wishes to become an important client and receive a particular broker’s premier level of service. At the same time, two similar-sized institutions may send vastly different allocations to a particular broker for the same reason.

While the intuition behind the two sets of hypotheses is completely different, some of them generate the same predictions. Under the cost-minimization alternative, institutions allocate a higher percentage of volume to the larger brokers they use to hide their order flow more effectively. It is not possible to distinguish between Hypotheses 1c and 1s with an analysis of institutional allocation of volume alone. Similarly, small institutions could employ fewer brokers (Hypotheses 3c and 3s) due to their desire to achieve premier services from some brokers, the fixed costs involved, or both. However, several predictions of the cost-minimization alternative are contrary to our hypotheses. Under the cost-minimization alternative, small institutions, because of their small size, are less concerned with frontrunning and trading strategy recognition and would allocate their volume more evenly than large institutions. This prediction is contrary to Hypothesis 2s. Increasing turnover would obviously increase costs for small institutions, contrary to Hypothesis 4s. Finally, because the institution’s total trading volume relative to the broker’s total trading volume determines the ability to hide institutional trades, similar-sized brokers should receive similar proportions of volume from the same institution, while similar-sized institutions should send similar proportions of volume to a particular broker.
This prediction is contrary to Hypothesis 5s. As our hypotheses and the cost-minimization alternative are not mutually exclusive, we cannot claim that by testing these hypotheses we can reject one of these ideas. Instead, we interpret our findings as indicating which conjecture provides stronger empirical effects.

### Results

Table 1 shows that small institutions spend significantly less in terms of total commission dollars. How do the four smallest quintiles compete for broker services? Hypothesis 4s suggests that smaller institutions may pay higher per-share commissions.28 Table 1 shows that institutions in these quintiles indeed pay 15%–20% higher commissions per share than the institutions in the largest quintile. However, the difference in per-share commissions across size quintiles is dwarfed by the large differences in average trading volume. Thus, the total commission payment to a broker is potentially driven much more by the allocation of trading volume (bunching) than by the size of the per-share commission.

#### Concentration of institutional trading.

That order flow is the primary determinant of broker revenue has consequences for an institution’s trading volume allocation decisions. Panel A of Table 5 presents institutional concentration of order flow as a function of institution size (quintile).29 We examine both versions of Hypotheses 1 and 2 by calculating broker concentration as the average market share (percentage of each client’s total commission dollars) that clients in each quintile send to their highest-revenue brokers.
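The broker-concentration statistics just described can be sketched as follows; the commission vector is hypothetical, chosen to resemble the smallest-quintile pattern in Table 5:

```python
def top_k_share(commission_dollars, ks=(1, 3, 5, 10)):
    """Percentage of a client's commission dollars sent to its k
    highest-revenue brokers, for each k in ks."""
    ranked = sorted(commission_dollars, reverse=True)
    total = sum(ranked)
    return {k: 100 * sum(ranked[:k]) / total for k in ks}

# Hypothetical client paying $100 of commissions across 15 brokers.
commissions = [40, 12, 8, 5, 5] + [3] * 10
shares = top_k_share(commissions)
# shares[1] = 40.0, shares[3] = 60.0, shares[5] = 70.0, shares[10] = 85.0
```

Averaging these per-client shares within each size quintile yields the concentration rows of Panel A.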
Table 5
Institutional concentration: bunching of order flow

Panel A: Institutional concentration of order flow

| Client quintile by trading volume | 1 = low | 2 | 3 | 4 | 5 = high | F-test |
| --- | --- | --- | --- | --- | --- | --- |
| All brokers |  |  |  |  |  |  |
| Average number of brokers per client | 30.7 | 52.1 | 61.2 | 71.6 | 79.3 |  |
| Broker concentration (% of client commissions) |  |  |  |  |  |  |
| Top broker | 40.7 | 27.9 | 24.7 | 22.2 | 20.8 | 12.4** |
| Top 1-3 | 60.2 | 47.2 | 43.4 | 40.6 | 37.8 | 15.4** |
| Top 1-5 | 69.9 | 57.8 | 54.3 | 51.6 | 49.4 | 15.6** |
| Top 10 | 83.3 | 74.1 | 71.1 | 69.1 | 68.2 | 15.3** |
| Low-cost trading |  |  |  |  |  |  |
| Average number of low-cost brokers per client | 8.8 | 7.5 | 8.9 | 9.9 | 13.3 |  |
| Percentage of client commissions paid to all low-cost brokers | 11.4 | 9.7 | 11.4 | 12.7 | 17.3 | 2.4* |

Panel B: Institutional concentration of order flow – robustness tests

|  | 1 = low | 2 | 3 | 4 | 5 = high | F-test |
| --- | --- | --- | --- | --- | --- | --- |
| Herfindahl Index | 23.6 | 21.0 | 19.7 | 17.9 | 17.8 | 4.46** |
| Zeta regression: Intercept α | −1.12 (0.050) | −1.21 (0.031) | −1.25 (0.029) | −1.32 (0.023) | −1.46 (0.025) |  |
| Zeta regression: Coefficient θ | 1.09 (0.062) | 0.96 (0.039) | 0.90 (0.033) | 0.82 (0.027) | 0.76 (0.037) |  |
| R² | 91.7 | 91.8 | 91.8 | 91.5 | 90.9 |  |
| % of institutions with θ > 1 | 41.9 | 32.0 | 25.7 | 21.3 | 14.0 |  |
| % of institutions with θ < 1 | 46.8 | 48.4 | 64.7 | 70.6 | 75.0 |  |

Panel C: Average commissions paid by institutions (cents per share)

|  | 1 = low | 2 | 3 | 4 | 5 = high | F-test |
| --- | --- | --- | --- | --- | --- | --- |
| Top broker | 5.01 | 4.94 | 4.76 | 4.53 | 4.46 | 2.95* |
| Top 1-3 | 5.04 | 5.02 | 4.88 | 4.67 | 4.45 | 10.81** |
| Top 1-5 | 5.07 | 5.04 | 4.93 | 4.72 | 4.48 | 19.45** |
| Top 1-10 | 5.11 | 5.08 | 5.01 | 4.80 | 4.55 | 40.09** |
| All brokers | 5.04 | 5.07 | 5.01 | 4.90 | 4.80 | 71.63** |

Panel D: Average broker rank for institutional clients’ top brokers (average, [median], (standard error); out of 261 active brokers)

|  | 1 = low | 2 | 3 | 4 | 5 = high |
| --- | --- | --- | --- | --- | --- |
| Top broker | 35.4 [14] (5.90) | 32.5 [15] (5.08) | 33.1 [16] (5.29) | 31.7 [15] (5.44) | 27.4 [11] (4.08) |
| Second broker | 29.8 [13] (4.51) | 30.7 [14] (4.99) | 29.7 [11] (4.86) | 29.4 [12] (4.88) | 26.6 [10] (4.43) |
| Third broker | 29.8 [12] (5.21) | 29.2 [12] (4.78) | 30.1 [10] (5.00) | 25.2 [9] (4.45) | 28.5 [9] (4.48) |
| Fourth broker | 27.0 [9] (5.39) | 27.6 [10] (5.32) | 26.0 [10] (4.31) | 27.2 [10] (4.73) | 29.3 [9] (4.68) |
| Fifth broker | 30.1 [13] (5.43) | 27.6 [11] (4.86) | 29.6 [12] (4.99) | 24.4 [10] (4.31) | 23.3 [8] (4.12) |

Panel A presents institutional client market share statistics by client quintile. The Average number of brokers per client is the average across institutions in a quintile. Broker concentration is the average of the percentage of total commission dollars each client sends to its highest-volume broker(s). Broker concentration statistics are presented separately for all brokers and for ECN trading only. F-tests examine the null hypothesis of equality along each row. Low-cost trading tracks commissions less than or equal to 3 cents per share. The Herfindahl Index is the (normalized) sum of the squared market shares of a client’s ten largest brokers. The Zeta estimation presents the average intercept, coefficient, and R-squared values in each quintile for client-by-client estimation of log(broker market share) = α − $${\theta }$$ log(broker rank) + ɛ. Larger $${\theta }$$ indicates more concentration. The percentage significantly greater or less than 1 is the percentage of each quintile’s $${\theta }$$ coefficients that are significantly greater or less than one at the 5% level. Standard errors are in parentheses below the panel B coefficient estimates. Commission cost in panel C is the average per-share commission on trades sent by clients to their highest-volume broker(s). Panel D presents quintile-averaged statistics on average broker rank, out of 270, for institutional clients’ five most important brokers; medians are presented in brackets and standard errors in parentheses below each average. The symbols * and ** indicate that the variation across quintiles is significantly different from zero at the 0.05 or 0.01 level, respectively.

Both versions of Hypothesis 1 predict order-flow bunching: a skewed allocation of client trades toward their most important brokers, which is precisely what we observe in the data.
The largest institutions send 20.8% of their commission dollars to their top broker, whereas an evenly distributed allocation, which would best disguise their trading strategies, would allocate only about 1% of their order flow to each broker. The largest institutions concentrate their order flow with a few top brokers: 37.8% of their commission dollars goes to their top three brokers, 49.4% to their top five brokers, and 68.2% to their top ten brokers.30 Hypothesis 2s predicts that small institutions concentrate their trading more than large institutions, while 2c predicts the opposite. We show that the bunching of order flow with an institution’s most important brokers increases as the size of the institution decreases. Panel A reveals that the percentage of commission dollars executed with their top broker increases monotonically with client size to a maximum of 40.7% for the smallest quintile. The null hypothesis that order flow executed with a top broker is independent of institution size is rejected at the 1% level with an F-statistic of 12.4. The top three, top five, and top ten broker categories show the same pattern of institutional bunching and similar rejections of equality across quintiles. This pattern is consistent with large institutions having the flexibility to become premier clients to many brokers, while small institutions are forced to concentrate their trading with only a few. This finding is inconsistent with the predictions of the pure cost-minimization model. Nor can the bunching result be explained by the soft-dollar arrangements, which primarily provide for data services (SEC 1998). Institutions do not care which broker provides soft-dollar credits to the data vendor, and, therefore, have no incentive to bunch. Thus, while the competition for valuable services from the broker may encourage bunching, soft-dollar arrangements are not likely to do so. Figure 6 tracks top broker’s order flow annually in 1999–2003. 
The allocation of order flow is consistent across client quintiles throughout our sample period, and similar patterns also prevail in our other top broker classifications. This figure indicates that the institutional trading patterns we document (and thus our conjectured commission contract) are consistent throughout our sample period. For specific agents, these important client-broker relationships are stable as well. In untabulated results, we find that a top broker in a quarter has an 89.7% chance of remaining in that client’s list of top ten brokers in the following quarter. For relationships that we can track over the entire sample period, a client’s initial top broker remains their top broker throughout the sample period 25.2% of the time. Initial top brokers remain in a client’s top ten brokers 72.0% of the time. These numbers show a level of competition for a client’s revenue stream, as top brokers are occasionally displaced. Yet displaced top brokers often remain important to the client and remain a competitive threat to reassert their dominant position.

Figure 6
Institutional bunching over time
Top broker bunching is the average allocation to a client’s most important broker. Cross-sectional averages for each client quintile and year are exhibited in the figure.

This competition to be a client’s most important broker may contribute to the overall declining trend in commissions. In general, when top brokers are replaced, we find that the replacement broker charges an average of 0.10–0.15 cent lower per-share commissions than the former top broker. When the former top broker is retained by the client, we find that they lower their commissions to conform to the replacement broker’s price.
Thus, despite the high retention ratios between specific clients and brokers, the threat of being replaced keeps commissions competitive.31 Institutional bunching of order flow enhances this competitive threat; if institutions dispersed their trades, a broker’s rank with a client would be much less important. Panel A of Table 5 also separates out low-cost trading commissions (not exceeding 3 cents per share). Low-cost execution does not vary much across the lower size quintiles, but the largest institutions use low-cost commissions for a greater proportion of their execution volume than do smaller institutions (F-statistic = 2.44). This is consistent with our first two hypotheses, as large institutions can easily pay for a broker’s premium services with only a fraction of their total share volume, and so are free to execute a greater percentage of their trading at low prices. Hypotheses 3c and 3s predict that large institutions will use more brokers than small institutions. As predicted, the average number of brokers used by institutions in each client quintile is increasing in the size of the institution. The smallest institutions use only 30.7 brokers on average, while the largest average 79.3 brokers. This pattern is also present in the low-cost market, where the smallest institutions use an average of 8.8 brokers, while the largest quintile uses an average of 13.3.

#### Additional tests of institutional bunching.

We perform two additional tests on the degree of bunching. It could appear that large institutions do less bunching simply because they use more brokers, rather than choosing to concentrate their trading to earn premium services.
To account for this fact, we conduct additional tests that restrict our attention to an institution’s top ten brokers, which constitute the bulk of any institution’s order flow and commission dollars.32 First, we normalize to one the proportions of each institution’s order flow and calculate each institution’s Herfindahl-Hirschmann Index (HHI), which is the sum of squared proportions of volume sent to each of the institution’s top ten brokers, times 100. The results are presented by client quintile in panel B of Table 5. It is clear that large institutions have a significantly more even distribution than the small ones, even after equalizing the number of brokers used. By comparison, a uniform distribution over ten brokers would yield an HHI of 10, compared to our findings of 17.8 for the largest institutions and 23.6 for the smallest ones. Next, we perform a parametric estimation of an institution’s order flow allocation using the Zeta distribution, a discrete probability distribution commonly used in the natural sciences to measure concentration of types within a population. Denote by $$Z_{k}$$ the proportion of volume that an institution sends to the broker ranked $$k$$ ($$k = 1$$ being the largest) out of its $$K$$ brokers. The Zeta distribution implies that the proportion of volume allocated by an institution to its $$k$$th largest broker is

(3) $$Z_k = \frac{C(K,\theta)}{k^{\theta}} \quad \forall k \le K,$$

where $$C(K, \theta)$$ is a normalizing constant that increases in θ. A higher θ implies a less even distribution of volume allocation and hence a greater degree of order flow concentration. For example, θ = 0 corresponds to a uniform distribution (HHI = 10), θ = 0.75 corresponds to an HHI of about 14, whereas θ = 1.1 corresponds to an HHI of about 20.
Taking logs on both sides of Equation (3), we obtain an equation that allows us to estimate θ:

(4) $$\log(Z_k) = \alpha - \theta \log(k) + \varepsilon,$$

where $$k$$ is the rank of the executing broker for this institution. We perform the estimation separately for every institution in the sample, and then average the results by size quintile. Panel B of Table 5 clearly indicates that the Zeta distribution provides a good fit for client order flow. The distribution is significantly more concentrated for smaller institutions than for large ones, which is evident from the estimates of θ and the intercept. Order flow concentration declines with institution size, and the effect is most pronounced for the smallest institutions. We also show the percentage of each quintile’s θ coefficients that are significantly lower or higher than 1, which is the value of θ that coincides with the HHI of the median-sized institutions. The majority of the institutions in the largest quintile have θ significantly lower than 1 (75%), and only a small minority have θ significantly higher than 1 (14%); the corresponding values for the smallest institutions are 47% and 42%, respectively. These results indicate that bunching is far more pronounced for smaller institutions, consistent with Hypothesis 2s but inconsistent with 2c.

#### Commission size and broker rank.

Hypothesis 4s suggests that, in addition to more extensive bunching, smaller institutions may also pay higher commissions to generate more profits for the broker and gain premier status. Panel C of Table 5 presents average per-share commissions for the institutions in each quintile.33 Per-share commissions decline monotonically with institution size, and the gap is largest for the clients’ most important brokers. This result supports the hypothesis that smaller institutions are willing to pay more to get premium services from at least some of their brokers.
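A sketch of the two concentration measures, the top-ten HHI and the OLS estimate of θ from Equation (4). The `zeta_shares` helper is our own construction, used only to check the θ-to-HHI correspondence quoted in the text (θ = 0 gives 10; θ = 0.75 gives roughly 14; θ = 1.1 gives roughly 20):

```python
import math

def zeta_shares(theta, K=10):
    """Order-flow shares Z_k = C(K, theta) / k**theta, as in Equation (3)."""
    raw = [k ** -theta for k in range(1, K + 1)]
    total = sum(raw)
    return [r / total for r in raw]

def herfindahl(shares):
    """Paper-style HHI: sum of squared (normalized) top-ten shares, times 100."""
    top = sorted(shares, reverse=True)[:10]
    total = sum(top)
    return 100 * sum((s / total) ** 2 for s in top)

def zeta_theta(shares):
    """OLS fit of log(Z_k) = alpha - theta*log(k); returns (alpha, theta)."""
    ys = [math.log(s) for s in sorted(shares, reverse=True)]
    xs = [math.log(k) for k in range(1, len(ys) + 1)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, -slope  # alpha, and theta = -slope

uniform_hhi = herfindahl(zeta_shares(0.0))   # 10: even allocation
mid_hhi = herfindahl(zeta_shares(0.75))      # ~14, as in the text
high_hhi = herfindahl(zeta_shares(1.1))      # ~20, as in the text
alpha, theta = zeta_theta(zeta_shares(1.0))  # recovers theta = 1
```

In the paper the regression is run client by client on each institution's actual broker shares; here the shares are generated from the Zeta form itself, so the fit is exact.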
While this commission difference is modest relative to the large differences in share volume across quintiles reported in Table 1, it may still affect small clients’ relative positions with their brokers. As we have stated earlier, the high-commission share volume sent to a broker essentially determines the importance of an institution to that broker; nevertheless, consistent with Hypothesis 4s, the smallest institutions are willing to pay a per-share commission premium, particularly to their top brokers. The cost-minimization alternative suggests that volume allocation depends on broker size. Small institutions, which have no difficulty hiding their trades, should be indifferent between large and small brokers, while large institutions should strictly prefer larger brokers. Moreover, similar-sized brokers should get similar allocations from similar-sized institutions. If we assume that large brokers also provide more premium services, then the services hypothesis also suggests that large institutions will tend to use the largest brokers, as their volume ensures they will be important clients to any sized broker. For smaller institutions, there is a trade-off between being a less important client of a large broker and a more important client of a smaller broker. We do not know a priori their optimal choice. Panel D of Table 5 examines the average and median broker size ranks (out of 270 active brokers) for an institution’s five most important brokers, averaged within institutional size quintiles.
The data reveal that every quintile’s average and median rank is below 50, which indicates that institutions of all sizes concentrate their order flow with the largest brokers, presumably because this group provides the most valuable services.34 Nevertheless, the comparison of means and medians also indicates that smaller institutions do tend to use somewhat smaller brokers as their top broker, which allows them to compete more effectively for these brokers’ services. Further, we know from panel A of Table 5 that an institution’s top broker receives a much larger allocation of order flow than its fifth-largest broker, yet the average size rank of the latter is lower than that of the top broker. These results show that similar-sized brokers receive vastly different allocations of order flow, providing direct support for Hypothesis 5s and contradicting Hypothesis 5c. Given the conclusions in Chan and Lakonishok (1993, 1995) that an institution’s identity is the paramount factor in determining execution costs, the cost considerations here are strong; thus, our results indicate that institutions must place a high value on broker services to deviate from a strategy of hiding in the order flow as effectively as possible. Overall, our evidence is consistent with all clients concentrating their trades to capture the benefits of moving up higher in the queue for a broker’s premium services. This pattern is most pronounced for small clients, for whom the benefits from bunching outweigh the potential costs. Yet another way for small institutions to pay for services is to increase their volume of trading beyond what is required by their investment strategies, as stated in Hypothesis 4s. We test this hypothesis using Thompson’s mutual fund quarterly holding data from 1997 to 2002.
To avoid outliers, we first remove all the fund-quarter observations where the NAV was smaller than $10 million at the beginning of the quarter, or grew by more than 50% during the quarter. For each fund, we calculate the change in the number of shares of every security held over the course of the quarter, and treat it as the fund’s trading volume in this security. We then multiply volume by the average quarterly price and aggregate over all securities, which yields an estimate of the total trading volume in dollars. Dividing trading volume by the NAV at the beginning of the quarter generates a turnover estimate. Each fund is then assigned to an NAV quintile and we calculate average turnover statistics by quintile. The annual averages are presented in Figure 7, which clearly shows that funds in the smallest quintile exhibit much higher turnover than funds in the two largest quintiles (the differences are significant at the 10% level).35
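The turnover estimate described above can be sketched as follows. Function and field names are assumptions for illustration, not Thompson’s actual schema, and the example figures are invented:

```python
# Minimal sketch of the fund-turnover estimate described above.

def keep_fund_quarter(nav_start, nav_end, min_nav=10e6, max_growth=0.50):
    """Screen described above: drop fund-quarters with beginning NAV below
    $10 million or NAV growth above 50% during the quarter."""
    return nav_start >= min_nav and (nav_end - nav_start) / nav_start <= max_growth

def quarterly_turnover(holdings_start, holdings_end, avg_price, nav_start):
    """Dollar trading volume (|change in shares held| times average quarterly
    price, summed over securities) divided by beginning-of-quarter NAV."""
    securities = set(holdings_start) | set(holdings_end)
    dollar_volume = sum(
        abs(holdings_end.get(s, 0) - holdings_start.get(s, 0)) * avg_price[s]
        for s in securities)
    return dollar_volume / nav_start

# Example: buy 1,000 shares of XYZ at an average price of $20 and sell 500
# shares of ABC at $40, against a $1M beginning NAV -> $40,000 of dollar
# volume, i.e. turnover of 4%.
t = quarterly_turnover({"ABC": 1500}, {"XYZ": 1000, "ABC": 1000},
                       {"XYZ": 20.0, "ABC": 40.0}, 1_000_000)
```

Averaging `quarterly_turnover` within NAV quintiles of surviving fund-quarters then reproduces the kind of statistic plotted in Figure 7.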
Figure 7
Mutual funds’ turnover by year and size
This figure uses Thompson’s mutual fund quarterly holding data from 1997 to 2002. We remove all observations where the fund was smaller than $10M at the beginning of the quarter, or grew by more than 50% during the quarter. For each fund we calculate the change in the number of shares of every security held by the fund over the course of the quarter, and treat it as the fund’s trading volume. We then multiply trading volume by the average quarterly price and aggregate over all securities, yielding the estimate of total dollar trading volume. We then divide by the NAV at the beginning of the quarter to obtain an estimate of the turnover. Each fund is then assigned a NAV quintile and the average turnover statistics for each quintile are presented in the figure (Quintile 1 is the smallest institutions).
Although small institutions may have higher turnover for other reasons, these results are consistent with our interpretation of the market for brokers’ premium services. Small institutions that cannot generate sufficient brokerage revenues may attempt to increase their ability to procure premium services by concentrating their trading with only a few brokers, paying higher per-share commissions, and increasing their turnover to provide the required revenues to the chosen brokers.
## Discussion and Conclusion
Brokers provide many valuable services that are difficult to sell explicitly. Moreover, the quality of these services is hard to evaluate over a short period, which calls for a long-term contract. Thus brokers need a simple mechanism that facilitates charging for these services over time. We conjecture that the total revenue a broker receives from a client during a period is a prearranged fixed fee for the level of services the client desires for that period. The client pays this fee through per-share commissions on trades sent to the broker. Thus, full-service commissions represent an average per-share cost of broker services. This framework sheds new light on trading volume allocation among brokers, and the possible future of the institutional brokerage industry.
First, we show that brokerage commissions are not set trade-by-trade, as assumed in the past, but rather determined in the context of a long-term contract. The distribution of institutional commissions indicates that proxies for the execution costs of a trade are relatively unimportant determinants of per-share commission charges. Instead, past commissions are the strongest predictor of future commissions. This result is inconsistent with the view of commissions as a continuous execution cost negotiated on a trade-by-trade basis, yet supports our view that per-share commissions are a convenient way for institutions and brokers to track the revenues a client sends to a broker. Both parties need only concentrate on the volume of trade directed to a broker to calculate the payment rendered and gauge the importance of a client. Volume allocations accompanied by stable per-share commissions accumulate the fixed fee for the broker’s services. A client that sends enough order flow to a particular broker expects to receive a premier level of service in return.
Viewing commissions as an average cost has important consequences for understanding the allocation of institutional order flow and the consequent payment of billions of dollars in commissions. We document that smaller institutions use fewer brokers than large institutions, at least partly due to the associated fixed costs, but also because it facilitates concentration (bunching) of their order flow with particular brokers. Institutions bunch their order flow with a small subset of brokers, from whom they receive premier services. We find that small institutions tend to concentrate their order flow significantly more than large institutions in order to become relatively important clients to a small set of brokers. These results are stable throughout our five-year sample period.
Bunching order flow is not an optimal strategy for hiding one’s identity from the market. Therefore, if bunching partially reveals an institution’s identity, it imposes significant price-impact costs on institutions. These costs must be offset by benefits to a bunching strategy. Understanding the costs and benefits of the commission contract is crucial for diagnosing the rapid changes in the full-service commission market. A soaring demand for liquidity has led to the emergence of alternative trading systems such as Liquidnet, UNX, and ITG, whose low-cost executions drive the significant decline in average commissions that we have documented. These alternative systems sometimes entirely bypass traditional brokers and many institutions implement order-routing programs that use brokers as one execution choice among many potential destinations for the order, diminishing the role of traditional full-service brokers.
At the same time, the value of the traditional premium services appears to be declining. The post-2000 IPO market offers fewer opportunities for brokers to allocate historically profitable IPOs than that of the late 1990s. Regulation Fair Disclosure, which restricts selective disclosure of management information, has reduced the precision of analyst information (Bailey et al. 2003). The 2003 adoption of the Global Settlement between the SEC and ten of the largest full-service brokers specifically restricts analysts’ involvement in investment banking departments. This restriction, coupled with declining commissions, reduces the revenues supporting research departments. Kadan et al. (2006) document that from September 2002 to December 2004, the ten brokers covered in the Global Settlement have among them discontinued coverage of 914 firms, or 12.2% of their covered firms. Smith and Linebagh (2006) contend that research cuts at Morgan Stanley are directly attributable to ECN competition for execution. Institutions have responded to the lower value of brokerage research and IPO services, as well as to the continuing pressure from their investors, by demanding lower commissions and more capital to facilitate their transactions.
Facing increased capital costs and lower commission revenues, brokers face a significant decline in the return on equity employed in the institutional trading business (Hintz and Tang 2003). As these changes continue, mid-size institutional brokers find it hard to maintain profitability in the current environment. This loss of profitability was the stated reason behind Wells Fargo’s announcement in August 2005 of its complete withdrawal from the institutional equity business. Recently, Prudential Securities also announced their withdrawal from the institutional brokerage market.
At the same time, we show that even when faced with increasing competition and reduced ability to offer non-priced valuable services, the basic two-price commission market structure is still intact, which suggests that it provides economic benefits. Large brokers are able to maintain profitability through investment banking and by dramatically increasing the allocation of capital to proprietary trading. For a large broker, capturing order flow through internal execution (even at the discount commission levels) has the ancillary benefit of providing information to the broker’s proprietary trading desk. Large brokers are thus involved in a race to provide low-cost, high-liquidity execution to their institutional clients, while at the same time extracting valuable information from the order flow they observe. Mid-sized brokers incur many of the costs of large brokers, but if they cannot invest in cutting-edge execution technology and proprietary trading, they may find the institutional equity business increasingly unprofitable and exit entirely. Consequently, the full-service segment of the institutional brokerage industry may become increasingly specialized, competitive, and concentrated in the near future.
## References
Aitken, M., G. Garvey, and P. Swan. 1995. How Brokers Facilitate Trade for Long-Term Clients in Competitive Securities Markets. 68:1–33.

Bailey, W., H. Li, C. Mao, and R. Zhong. 2003. Regulation Fair Disclosure and Earnings Information: Market, Analyst and Corporate Responses. Journal of Finance 58:2487–514.

Bennett, J. 2002. A Closer Focus on Trading Costs. Report to Institutional Investors, Greenwich Associates, Stamford, CT.

Berkowitz, S., and D. Logue. 1987. The Portfolio Turnover Explosion Explored. Journal of Portfolio Management 13:38–46.

Berkowitz, S., D. Logue, and E. Noser. 1988. The Total Cost of Transactions on the NYSE. Journal of Finance 43:97–112.

Bertsimas, D., and A. Lo. 1998. Optimal Control of Execution Costs. Journal of Financial Markets 1:1–50.

Blake, R., and J. Schack. 2002. Institutional Investor, April: 58–68.

Blume, M. 1993. Soft Dollars and the Brokerage Industry. Financial Analysts Journal 49:6–44.

Boehmer, E., R. Jennings, and L. Wei. 2007. Public Disclosure and Private Decisions: Equity Market Execution Quality and Order Routing. Review of Financial Studies 20:315–58.

Brennan, M., and T. Chordia. 1993. Brokerage Commission Schedules. Journal of Finance 48:1379–402.

Brennan, M., and P. Hughes. 1991. Stock Prices and the Supply of Information. Journal of Finance 46:1665–91.

Chan, L., and J. Lakonishok. 1993. Journal of Financial Economics 33:173–99.

Chan, L., and J. Lakonishok. 1995. The Behavior of Stock Prices Around Institutional Orders. Journal of Finance 50:713–35.

Conrad, J., K. Johnson, and S. Wahal. 2001. Journal of Finance 46:397–416.

Constantinides, G. 1986. Capital Market Equilibrium with Transaction Costs. Journal of Political Economy 94:842–62.

Corwin, S., and P. Schultz. 2005. The Role of Underwriting Syndicates: Pricing, Information Production and Underwriter Competition. Journal of Finance 60:443–86.

Copeland, T. 1979. Liquidity Changes Following Stock Splits. Journal of Finance 34:115–41.

Dermody, J., and E. Prisman. 1993. No Arbitrage and Valuation in Markets with Realistic Transaction Costs. Journal of Financial and Quantitative Analysis 28:65–80.

Easley, D., and M. O’Hara. 1987. Price, Trade Size and Information in Securities Markets. Journal of Financial Economics 19:69–90.

Edmister, R. 1978. Commission Cost Structure: Shifts and Scale Economics. Journal of Finance 33:477–86.

Edmister, R., and N. Subramanian. 1982. Determinants of Brokerage Rates for Institutional Investors: A Note. Journal of Finance 37:1087–93.

Elton, E., M. Gruber, and S. Grossman. 1986. Discrete Expectational Data and Portfolio Performance. Journal of Finance 46:699–714.

Foucault, T., and G. Desgranges. 2002. Reputation-Based Pricing and Price Improvements in Dealership Markets. Working Paper, HEC and CEPR.

Fulghieri, P., and M. Spiegel. 1993. A Theory of the Distribution of Underpriced Initial Public Offers by Investment Banks. Journal of Economics and Management Strategy 2:509–30.

Glosten, L., and P. Milgrom. 1985. Bid, Ask and Transactions Prices in a Specialist Market with Heterogeneously Informed Orders. Journal of Financial Economics 14:71–100.

Green, C. 2006. Journal of Financial and Quantitative Analysis 41:1–24.

Greene, W. 1997. Econometric Analysis, 3rd ed. Prentice Hall.

Hechinger, J. 2004. Deciphering Funds’ Hidden Costs. The Wall Street Journal, March 17, D1.

Hintz, B., and K. L. Tang. 2003. U.S. Brokerage: Institutional Equities at a Crossroads. New York, NY: Sanford Bernstein.

Horan, S., and B. Johnsen. 2004. Does Soft Dollar Brokerage Benefit Portfolio Investors: Agency Problem or Solution? Law and Economics Working Paper 04-50, George Mason University.

Irvine, P. 2004. The Accounting Review 79:125–49.

Jarrell, G. 1984. Change at the Exchange: The Causes and Effects of Deregulation. Journal of Law and Economics 27:273–312.

Jenkinson, T., and A. Ljungqvist. 2000. Going Public: The Theory and Evidence on How Companies Raise Equity Finance. Oxford, UK: Clarendon Press.

Johnsen, B. 1994. Property Rights to Investment Research: The Agency Costs of Soft Dollar Brokerage. Yale Journal of Regulation 11:75–113.

Kadan, O., L. Madureira, R. Wang, and T. Zach. 2006. Conflicts of Interest and Stock Recommendations: The Effects of the Global Settlement and Related Regulations. Review of Financial Studies. Advance Access published on January 12, 2009; doi:10.1093/rfs/hhn109.

Kavajecz, K., and D. Keim. 2005. Packaging Liquidity: Blind Auctions and Transaction Cost Efficiencies. Journal of Financial and Quantitative Analysis 40:465–92.

Keim, D., and A. Madhavan. 1997. Execution Costs and Investment Style: An Inter-exchange Analysis of Institutional Equity Orders. Journal of Financial Economics 46:265–92.

Kelly, K., and J. Hechinger. 2004. How Fidelity’s Trading Chief Pinches Pennies on Wall Street. The Wall Street Journal, October 12, D1.

Kim, S. K., J. Lin, and M. Slovin. 1997. Market Structure, Informed Trading and Analysts’ Recommendations. Journal of Financial and Quantitative Analysis 32:507–24.

Klein, B., and K. Leffler. 1981. The Role of Market Forces in Assuring Contractual Performance. Journal of Political Economy 89:615–41.

Kyle, A. 1985. Econometrica 53:1335–55.

Livingston, M., and E. O’Neal. 1996. Mutual Fund Brokerage Commissions. Journal of Financial Research 39:273–92.

Logue, D. 1991. Managing Corporate Pension Plans. New York: Harper Publishing.

Loeb, T. 1983. Financial Analysts Journal 39:39–44.

Ljungqvist, A., F. Marston, and W. Wilhelm. 2006. Competing for Securities Underwriting Mandates: Banking Relationships and Analyst Recommendations. Journal of Finance 61:301–40.

Nimalendran, M., J. Ritter, and D. Zhang. 2007. Do Today’s Trades Affect Tomorrow’s IPO Allocation? Journal of Financial Economics 84:97–109.

Ofer, A., and A. Melnick. 1978. Price Deregulation in the Securities Industry: An Empirical Analysis. The Bell Journal of Economics 9:633–41.

Pethokoukis, J. 1997. The Thorn in Fidelity’s Side. U.S. News and World Report, September 8.

Reuter, J. 2006. Are IPO Allocations for Sale? Evidence from Mutual Funds. Journal of Finance 61:2289–324.

Roll, R. 1984. A Simple Effective Measure of the Bid-Ask Spread in an Efficient Market. Journal of Finance 39:1127–39.

Schwartz, R., and B. Steil. 2002. Journal of Portfolio Management 28:39–49.

Securities and Exchange Commission. 1998. Inspection Report on the Soft Dollar Practices of Broker-dealers, Investment Advisors and Mutual Funds.

Smith, R., and K. Linebagh. 2006. Morgan Stanley Plans Reduction in Research Jobs. The Wall Street Journal, March 22, C1.

Sofianos, G. 2001. The Changing NASDAQ Market and Institutional Transactions Fees. Goldman Sachs Derivatives and Trading Research, May 31.

Timmons, H. 2000. I Kept on the Buy Rating, but I Told My Favorite Investors to Sell. March 27.

Tinic, S., and R. West. 1980. The Securities Industry under Negotiated Commissions: Changes in the Structure and Performance of NYSE Member Firms. Bell Journal of Economics 11:29–41.

Vayanos, D. 1998. Transaction Costs and Asset Prices: A Dynamic Equilibrium Model. Review of Financial Studies 11:1–58.

Womack, K. 1996. Do Brokerage Analysts’ Recommendations Have Investment Value? Journal of Finance 51:1137–67.
1. Since the level of trading volume remains one of the more puzzling problems in finance, any market feature that impedes trading makes it even more puzzling. We argue later that full-service brokerage commissions do not constitute marginal cost and thus do not significantly impede trading.

2. Total commission revenues have been steadily increasing over time: from $1.74 billion from all sources in 1974 to $13.2 billion paid by institutional investors alone in 2005. Despite the growth of electronic trading, full-service commission payments still dominate U.S. institutional execution. Sofianos (2001) notes that institutional commission rates remain considerably higher than the marginal cost of trade execution.

3. On the other hand, it is common for commissions for European or Japanese stocks to be quoted in basis points. As we demonstrate later, our results are not dependent on the type of quoting mechanism.

4. Commission costs also have a significant impact on the cost of owning mutual funds. Hechinger (2004) reports that Lipper Inc. studied two thousand funds for The Wall Street Journal and found that brokerage commissions can more than double the cost of owning fund shares.

5. For example, 3 million shares sent during a quarter at 5 cents per share generate a payment of $150,000 from the institution to its broker, as opposed to only $60,000 at 2 cents per share.

6. Our conversations with market participants suggest that this is the way commissions are set and monitored. As per-share commissions are relatively constant, each broker must only measure the total number of shares received from an institution over the contract period to ensure that it receives enough revenue to continue providing the agreed-upon level of service. Where these institutions execute the rest of their trades is immaterial, as institutions have no incentive to reduce their level of trade with a broker unless they are dissatisfied with the services they receive.

7. Another puzzling example of a linear contract based on a measure unrelated to performance is found in advertising. Advertising agencies receive revenues proportional to total media billings for their campaign. As in brokerage services, the quality of a single campaign is hard to quantify and contract upon, and thus the parties cannot base a payment on an objective performance measure. Instead, payments are based on an easily measurable variable that is under the full control of the client, who, therefore, determines the total payment. It is well known that firms frequently change their advertising agencies in search of better creativity. What is less known is that it is not uncommon for an agency to dismiss the firm if its billings are too low for the required effort.

8. When commission deregulation finally arrived in Japan in October 1999, the Japanese commission contract changed from a function of price and volume (similar to that in the pre-deregulation NYSE) to European percentage commissions.

9. Our hypothesis implies that the institutional commissions in Europe, as represented in cents per share, should be distributed continuously, whereas the distribution of commissions in basis points should be discrete. While we do not have data available to test this hypothesis directly, from the limited data that we have seen and conversations with industry practitioners, this seems to be the case.

10. Client orders can be executed in a single trade or broken into multiple trades. All results in the article are at the trade level. Robustness tests consolidating trades into orders produce similar results.

11. We account for broker mergers. We track them using Ljungqvist, Marston, and Wilhelm (2006), Corwin and Schultz (2005), and several news and information services.

12. Our results are robust to alternative definitions of client-broker pairs. Using a minimum of five trades increases the noise in the mode estimate relative to larger cutoff points and, therefore, our presented results are conservative.

13. To save space we do not report the coefficient results from the logit specification. The results are similar and are available upon request. We also ran an OLS regression using log commissions as the dependent variable. The results were similar to the specifications presented.

14. Nor is the trade-by-trade commission rate sensitive to the actual measures of execution costs that we can determine using these data. We calculate execution prices relative to the value-weighted average price (VWAP) and include this cost in unreported regression specifications. On a trade-by-trade basis, there is no significant correlation between execution cost and commissions per share (ρ = −0.008), nor do costs have a significant effect on the results in Table 3.

15. Fulghieri and Spiegel (1993) present a model of IPO underpricing wherein broker services are complements. In their model, large clients received the most profitable IPO allocations.

16. Conversations with institutional traders and research directors indicate that the quality of a broker’s services can be considered fixed over a quarter or six months. Over longer periods, a broker’s relative quality can deteriorate, in which case the institution pays a high price for inferior service, or it can improve, in which case the broker would wish to be compensated.

17. Historically, information was delivered by telephone and the broker determined the ordering of the queue, hence the name First Call for a well-known research distribution network. More recently, electronic dissemination of analysts’ research notes ensures that most clients receive some information at approximately the same time. Today’s queue revolves around a race to receive elaboration from the analyst on the brief First Call note to ascertain the value of the analyst’s information.

18. Aitken, Garvey, and Swan (1995) and Foucault and Desgranges (2002) also discuss long-term relationships for trading services.

19. Our analysis of execution costs relative to VWAP provides suggestive evidence supporting this idea. In preliminary tests, not presented here, we find that the actual execution costs had no significant effect in determining commissions. At the same time, when we examine aggregate commissions, we do find a suggestive pattern. Specifically, when we examine only low-cost discount commissions (not exceeding 3 cents), we find institutions execute trades at almost exactly the VWAP, while high-cost trades earn price improvement of about 1 cent per share on average. The average difference in commissions is 3.1 cents, indicating that, in aggregate, high-cost commissions are receiving about one-third of the benefit from their higher commissions through improved execution.

20. There is a consensus in the IPO literature that underwriters compensate institutions that consistently provide them with information about the fundamental values of the issuing firms (Jenkinson and Ljungqvist 2000). Production of this information requires institutions to invest in research capabilities, which is not economical if institutions are awarded small positions in IPOs. Consequently, there are embedded economies of scale in the IPO pre-issue market.

21. Irvine (2004) reports that brokerage-firm trading volume in the recommended stock rises significantly after Buy recommendations. Similarly, Green (2006) notes that recommendation changes for NASDAQ stocks result in aggressive quoting behavior from the affiliated market makers, as if accommodating customer order flow. Neither a cost-minimization framework nor the existence of soft dollars explains this result as directly as our conjecture that brokers reward profitable clients with premium services.

22. For example, Kim, Lin, and Slovin (1997) find that traders who execute before Dow Jones widely disseminates an initial recommendation earn 32.3 cents-per-share intraday profit. Green (2006) finds that early traders on the day’s First Call reports analysts’ recommendation changes earn 45 cents-per-share profit when buying on upgrades and 52 cents-per-share profit by shorting on downgrades.

23. A broker who knows that a client has a large buy (sell) order can start buying from (selling to) the book, driving prices up, and then selling to (buying from) the institution at higher prices. Schwartz and Steil (2002) survey twenty-seven major investment management firms and conclude that frontrunning costs are important to buy-side institutions; such costs are a primary driver of the buy-side’s demand for trading immediacy.

24. Chan and Lakonishok (1993, 1995) conclude that the most important determinant of the price impact of an institutional trade is the identity of the institution behind the trade.

25. An interesting case is Fidelity, which could easily dominate any broker’s volume, but then the market will know that this broker’s trades have a high probability of being Fidelity’s trades. Market participants actively try to determine Fidelity’s trading patterns, which it actively tries to hide (Pethokoukis 1997).

26. Predictions below can be derived formally in a context of a model with the above assumptions. Details are available upon request.

27. This agency cost argument has been made relative to soft dollars by Berkowitz and Logue (1987) and Logue (1991). On the other hand, Johnsen (1994) and Horan and Johnsen (2004) argue that soft dollars may ameliorate agency cost issues.

28. Recall that we expect large institutions to be high-revenue, high-cost customers. This contention implies that smaller institutions need not compete on total commission revenue, but rather on net profitability to the broker.

29. Using information from the Securities Industry Association, and company websites, we classified our two hundred seventy active brokers into five types: full service, discount, ECN, wholesaler, and other brokers. Full-service brokers (144) are the most frequent broker type. Discount brokers, ECNs, and wholesalers generally do not provide premium services, while other brokers usually provide a single premium service. Tests of institutional trading patterns using only full-service brokers produce similar results to those presented.

30. Table 5 reports institutional averages by commission dollars because commission dollars represent the important economic variable: brokers’ revenue. Similar conclusions are obtained from share volume, but the reader should note that using commission dollars represents the low-cost market as a relatively less important execution method.

31. In unreported results, we also use Rule 606 (Dash-6) data to examine broker competition in trade execution. We find that newly promoted top brokers use more market centers and executed greater volume in alternative venues than existing top brokers. Based on the results in Boehmer, Jennings, and Wei (2007), who use Rule 605 (Dash-5) data to examine how execution quality affects order routing, we interpret this activity as greater effort on behalf of newly promoted brokers at seeking out low-cost execution for their clients. We thank the referee for this suggestion.

32. We chose ten brokers to ensure that our tests include almost all clients. However, these results are also robust to alternative cutoffs.

33. In Table 5, the structure of our tests mandates that per-share commissions are averaged quarterly; thus, they differ somewhat from the trade-weighted averages reported in Table 1. Given the overall decline in commissions over time (Figure 1) and the general increase in trading volume over time, the Table 1 averages weight the lower cost commissions later in the sample relatively more.

34. This result is not tautological. Table 1 indicates that a broker’s size rank in the sample is primarily determined by the largest institutions. Yet, the four smallest quintile institutions, whose allocations do not significantly affect brokers’ size ranks, choose to concentrate their order flow with the same set of brokers as the largest quintile.

35. Using the same technique, we also tested this hypothesis on the CDA/Spectrum data from 1994 to 2000, which provides quarterly holdings data on all investment managers with over $100 million in assets. We found that in four of Spectrum’s five institutional type classifications, turnover significantly declines as size increases, as predicted.
## Author notes
We would like to thank Abel/Noser, Greenwich Associates, and the Institutional Broker Estimate Service for providing the data. We also thank Ekkehart Boehmer, Chitru Fernando, Terence Lim, Marc Lipson, Maureen O’Hara, Christine Parlour, Chester Spatt, George Sofianos, Daniel Weaver, seminar participants at the Hebrew University, HEC (France), Tel Aviv University, Texas A&M, the NASDAQ Economic Advisory Board, and participants at the New York Stock Exchange conference, the Yale-Nasdaq conference, and the FIRS Capri conference for their comments. We also thank Granit San for her assistance with the CDA/Spectrum data, David Hunter for his help with Thompson mutual funds data, and Michael Borns for his expert editorial assistance. Goldstein gratefully acknowledges financial support from the Babson College Board of Research; Kandel and Wiener are grateful to the Krueger Center for Financial Research at the Hebrew University for financial support. We apologize for any errors remaining in the article.
# A quick summary of the past six months
Posted on August 24th, 2012
Hi there. This post is a quick summary of a few things we have been working on over the past six months. Things have been quite busy over here!
## 1) Starfish Development
Our main project was developing a new 2D rarefied gas / plasma solver called Starfish. This effort was funded by a NASA/GRC SBIR Phase I award, for which we are extremely grateful. Starfish was designed with the goal of supporting Glenn Research Center’s high power Hall thruster program. However, instead of developing just another Hall thruster code, we decided to develop Starfish in a modular and general fashion. As such, Starfish is really a general 2D solver that includes the components necessary to tackle the Hall thruster problem, but at the same time, is also applicable to completely different disciplines. Some possible applications include plasma processing, non-equilibrium plasmas, the plasma-wall interface, rarefied gases, contamination transport, electrostatic return, atmospheric discharges, and any type of electric propulsion analysis. Some of the important code features include support for multiple meshes, surface geometry defined by linear or cubic splines, an arbitrary number of kinetic (PIC) or fluid (CFD/MHD) species, inter-material interactions including chemical reactions and collisions, and a detailed wall interface model. The code also couples with our kinetic code Lynx for self-consistent computation of electron transport. As part of this work, we were also interested in designing a logo, which you can see on the right. The logo was developed by Glenn Fletcher, and in it he captured the fact that Starfish is a 2D solver with many application areas (i.e., the multiple arms of a starfish).
You can find out more about Starfish on the code page, particleincell.com/starfish. We will be posting many articles in the near future demonstrating the various features of the code so make sure to subscribe to the newsletter. We are also currently looking for beta testers so please let us know if you are interested. We would love to hear from researchers in the industry, academia, and the government working on actual plasma problems that could benefit from a two-dimensional analysis. As part of this testing, we want to make sure the code captures the needs of you, the customer. Unfortunately due to us playing it safe with ITAR, for now the code is available only to U.S. persons residing in the United States. We do plan to release the code in some form to a wider audience in the future after consulting with experts in the export law.
## 2) New publications
We also published one journal and one conference paper. The journal paper is Brieda, L., and Keidar, M., “Plasma-wall interaction in Hall thrusters with magnetic lens configuration”, Journal of Applied Physics, Vol. 111, No. 123302, 2012, and the conference paper is Brieda, L., and Keidar, M., “Development of the Starfish Plasma Simulation Code and Update on Multiscale Modeling of Hall Thrusters”, 48th AIAA Joint Propulsion Conference, Atlanta, GA, 2012, AIAA-2012-4015. The first paper looked at the effect an inclined and converging magnetic field plays on the wall sheath. The second paper summarized our ongoing effort to model Hall thrusters with a multiscale approach, in which we combine different algorithms to investigate different temporal and spatial scales. It also discusses the development of Starfish and its coupling with our kinetic code Lynx. One of the important findings was that synergistic effects seem to play an important role in electron transport: $\mu_{\mathrm{walls+collisions}} > \mu_{\mathrm{walls}} + \mu_{\mathrm{collisions}}$.
## 3) Plasma Cell
In addition, we started supporting numerical analysis of a plasma-wall interface experiment being conducted at Georgia Tech under AFOSR funding. This experiment aims to obtain a better understanding of the plasma-wall interface, and determine how surface material properties, secondary electron emission, and magnetic fields impact the formation of the sheath. The figure below shows that surface irregularities quickly become smoothed away in the sheath. This result also demonstrates the code’s ability to load relatively complex geometries.
## 4) Java VTK GUI
Finally, we also worked with a California-based CFD company to enhance their Java-based GUI by incorporating VTK-support for data visualization. That’s all we can say about this project for now. However, in the near future we will start work on jVTE, a Java reincarnation of capVTE. This effort will lead to a light-weight, Java based VTK visualization solution that can be used either in a standalone fashion or coupled with existing solvers. Stay tuned and let us know if you are interested in this technology.
Subscribe to the newsletter and follow us on Twitter. Send us an email if you have any questions.
### 12 comments to “A quick summary of the past six months”
1. April 25, 2013 at 2:09 am
I hope you can raise more money for the Glenn Research Center. The work you are doing on starfish is so groundbreaking.
• April 25, 2013 at 6:07 am
Thanks Bill! That Phase I was great seed funding and allowed us to get most of the grunt work done. But now I feel that it is better to develop the rest of the code with internal funding, as that should give us a bit more freedom in the final direction we want to take the code.
At this point we are still developing some of the basic features (such as DSMC and various potential solvers), so the applications are not too Hall thruster specific right now. But as soon as those are done and the code is released, we will turn the focus back on Hall thrusters, and hopefully get some additional support from Glenn and other centers. And if not financial, we should at least be able to start some collaboration.
2. Mike Duan
December 9, 2017 at 3:13 am
I hope you can write a document about how to get started with Starfish in detail. I would appreciate that very much.
• December 16, 2017 at 10:07 am
Hi Mike, a Starfish User’s Guide is on my short list of things to do.
3. Shaun Andrews
January 14, 2018 at 1:23 pm
Starfish is a brilliant piece of work! I am trying to use it to simulate a simple ion thruster operating in the ionosphere. What would be your recommendations for the xml file inputs for a basic Xenon ion thruster plume, and the associated interactions?
• January 18, 2018 at 8:34 am
Hi Shaun, in my master’s thesis work I used a Maxwellian source to model an ion thruster. I think something like that should still work.
• Shaun
January 18, 2018 at 7:31 pm
Thanks Lubos, I use a Maxwellian source of Xe+ ions and have MCC and DSMC collisions turned on, but the output appears to be just that of a uniform beam with no observable CEX cloud. The electron density also stays constant over the entire domain. Any idea to what I am doing wrong?
• January 18, 2018 at 10:58 pm
Do you have neutrals? It may be easiest to email me your input files and I’ll take a look over the weekend.
• Shaun
January 19, 2018 at 11:42 am
I have experimented with adding slow moving neutrals, but that does not seem to work. That would be fantastic if you can have a look via email. What is your address?
4. Grant
February 6, 2018 at 12:46 pm
I have not yet tried Starfish, but it looks like a very useful project! One question I have is on the implementation of time varying boundaries. My quick look at the documentation suggests it’s easy to set up a Dirichlet boundary with a set voltage. Is it possible currently to prescribe a changing potential?
• February 6, 2018 at 5:25 pm
Hi Grant, that functionality does not currently exist but I agree that it could be useful. I will try to implement it in the near future.
|
{}
|
# When students work in a chemistry lab, the location of which item would be the most important for each
6 answers
###### Question:
When students work in a chemistry lab, the location of which item would be the most important for each student to know?
a. broom and dust pan, in case the student needs to sweep up broken glass
b. baking soda, in case the student needs to neutralize an acid spill
c. ice maker, in case the student needs to cool off hot glassware
d. safety shower, in case a chemical is spilled on the student’s body
## Answers
4 answers
### The owner of the rancho grande has 3044 yd of fencing with which to enclose a rectangular piece of grazing
The owner of the Rancho Grande has 3044 yd of fencing with which to enclose a rectangular piece of grazing land situated along the straight portion of a river. If fencing is not required along the river, what are the dimensions of the largest area he can enclose?...
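This is a classic single-constraint optimization, and the answer can be checked with a short Python sketch (assuming the river replaces one long side, so the fencing covers two widths and one length):

```python
# Enclose a rectangle against a river: fencing covers two widths and one
# length, so 2*w + L = 3044 and the area is A(w) = w * (3044 - 2*w).
def area(w, fencing=3044):
    return w * (fencing - 2 * w)

# A(w) = -2w^2 + 3044w is a downward parabola; its vertex is at w = 3044/4.
best_w = 3044 / 4                # 761.0 yd (each width)
best_length = 3044 - 2 * best_w  # 1522.0 yd (side parallel to the river)
best_area = area(best_w)         # 1,158,242 sq yd

print(best_w, best_length, best_area)
```

Checking the neighbors of `best_w` confirms it really is the maximum of the parabola.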
10 answers
### PLEASE HELP ASAP (will mark brainliest if answered correctly)Which of the following statements best
PLEASE HELP ASAP (will mark brainliest if answered correctly) Which of the following statements best explains the significance of the Battle of Lexington? A)The Battle of Lexington ended the American Revolution. B)The Battle of Lexington was the first battle the minutemen won. C)The Battle of Lexin...
10 answers
### Choose the graph of the function f(x) = x - 2. click on the graph until the correct graph appears
Choose the graph of the function f(x) = x - 2. click on the graph until the correct graph appears...
4 answers
### Why does the author ask the reader to 'assume for a minute that it is possible' in paragraph 3? (5 points)
Why does the author ask the reader to "assume for a minute that it is possible" in paragraph 3? (5 points)
a. She does not believe the evidence is very convincing on its own.
b. She wants to give specific evidence for why the theory is flawed.
c. She wants to provide evidence that does not s...
5 answers
### I really need some help here!Find the volume of the solid where the cone and half sphere are hollow. Use 3.14 for pi.I also need
I really need some help here! Find the volume of the solid where the cone and half sphere are hollow. Use 3.14 for pi. I also need help with this one finding the volume yellow for this second question blue for the first question whoever gets the right answer for either of these questions will be b...
4 answers
### Which of the following is true of random lead generation? Random lead generation is used when there is an existing database of
Which of the following is true of random lead generation? Random lead generation is used when there is an existing database of names and addresses of prospects. Random lead generation usually requires a low number of contacts to gain a sale. Households and businesses are insignificant sources for ra...
5 answers
### Magma that reaches earth's surface and flows from volcanoes true or false
Magma that reaches earth's surface and flows from volcanoes true or false...
4 answers
### 4. A recent study indicated that 29% of the 100 women over age 55 in the study were widowsHow large a sample must you
4. A recent study indicated that 29% of the 100 women over age 55 in the study were widows How large a sample must you take to be 90% confident that the estimate is within 5% of the true proportion of women over age 55 who are widows?...
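A sketch of the standard sample-size formula this question points to, n = z²·p̂(1−p̂)/E², assuming z ≈ 1.645 as the critical value for 90% confidence (a value read from a z-table):

```python
import math

p_hat = 0.29   # sample proportion of widows from the earlier study
E = 0.05       # desired margin of error (within 5%)
z = 1.645      # critical value for 90% confidence (assumed from a z-table)

# n = z^2 * p_hat * (1 - p_hat) / E^2
n = z**2 * p_hat * (1 - p_hat) / E**2
n_required = math.ceil(n)  # sample sizes are always rounded up

print(n, n_required)
```

With these inputs the raw value is just under 223, so the required sample is 223 women.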
10 answers
### Look at picture for question. medals and points.
Look at picture for question. medals and points....
3 answers
### How did the french and indian war affect the native americans
How did the french and indian war affect the native americans ...
3 answers
### De qué manera tanto hombres como mujeres podemos lograr transformar estas ideas ;< ayudaaa pliss para horita pli
De qué manera tanto hombres como mujeres podemos lograr transformar estas ideas ;< ayudaaa pliss para horita pli...
3 answers
### What is 35,124 to the nearest ten thousand?
What is 35,124 to the nearest ten thousand?...
10 answers
### Which phrase best completes the table ?- Important Egyptian Achievements -Developed a system of writing
Which phrase best completes the table?
Important Egyptian Achievements:
- Developed a system of writing
- Became a major exporter of grain
- ?
A. Created a unique style of art
B. Conquered the world's largest empire
C. Gave women the same rights as men
D. Developed the first irrigation system...
4 answers
### Write an expression to show the quotient of the sum of 6 and m divided by 2.
Write an expression to show the quotient of the sum of 6 and m divided by 2....
4 answers
### Have been at the forefront of every important social change in the united states from the beginning
Have been at the forefront of every important social change in the united states from the beginning of the nation....
2 answers
### Which type of burn involves destruction of the epidermis dermis and subcutaneous layers of the skin?
Which type of burn involves destruction of the epidermis dermis and subcutaneous layers of the skin?...
3 answers
### What problems did muhammad encounter while trying to spread the message of islam?
What problems did muhammad encounter while trying to spread the message of islam?...
7 answers
### Find the surface area and volume of a sphere with a radius of 7 in. Round answers to tenths as necessary.
Find the surface area and volume of a sphere with a radius of 7 in. Round answers to tenths as necessary....
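For reference, the two formulas involved, SA = 4πr² and V = (4/3)πr³, can be evaluated directly (a sketch using 3.14 for π, as many such problems instruct):

```python
PI = 3.14  # using 3.14 for pi, per the problem statement
r = 7      # radius in inches

surface_area = 4 * PI * r**2   # SA = 4*pi*r^2
volume = (4 / 3) * PI * r**3   # V = (4/3)*pi*r^3

print(round(surface_area, 1), round(volume, 1))  # rounded to tenths
```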
4 answers
### For three months after the training program, the sales director collected the sales data for the 200 sales representatives fromthe Midwest and Northeast
For three months after the training program, the sales director collected the sales data for the 200 sales representatives from the Midwest and Northeast regions. The director then broke down the number of sales orders for the representatives according to whether they received training or not. This...
4 answers
### Humans can protect forests for future generations by .
Humans can protect forests for future generations by ....
|
{}
|
# precise official definition of a cell complex and CW-complex
I would be very grateful if someone could state a precise definition (a direct one and an inductive one) of a cell complex and CW-complex, since my intuition tells me that some restriction is missing, and also the definitions from several books seem to be very different. Conciseness is much desired, since everything seems so complicated.
1) DEFINITIONS From Introduction to Topological Manifolds (J. M. Lee)
and from Algebraic Topology (T. tom Dieck):
These definitions are so complicated, that I can't really see what's going on.
Let $\mathbb{B^n}$ denote a closed n-ball. As far as I know, a cell complex is a space, obtained as $X=\cup_{i\in\mathbb{N}_0} X^{(i)}$, such that
• $X^{(0)}$ is a discrete space and
• $X^{(n)}$ is obtained from $X^{(n-1)}$ by attaching $n$-cells, i.e. $X^{(n)} = X^{(n-1)}\cup_{f_\lambda}\coprod_{\lambda\in\Lambda}\mathbb{B}^n = \Bigl(X^{(n-1)}\sqcup\coprod_{\lambda\in\Lambda}\mathbb{B}^n\Bigr)\big/\bigl(x\sim f_\lambda(x),\ x\in\mathbb{S}^{n-1},\ \lambda\in\Lambda\bigr)$, and
• $A\subseteq X$ is closed in $X$ $\Longleftrightarrow$ $\forall n\in\mathbb{N}_0$: $A\cap X^{(n)}$ is closed in $X^{(n)}$.
2) MY PROBLEM: But shouldn't there be some condition on $f_\lambda$? For example, if we have a graph ($1$-dimensional complex) consisting of a single vertex and a single edge. Then when we are attaching $\mathbb{B^2}$, we can set $f$ to map the whole $S^1$ to a single point on the edge, that isn't the vertex. Thus we get a very weird space:
Shouldn't $f$ go along each loop/edge in $X^{(1)}$ an integer number of times, rather than stopping in the middle? Also, how do we prevent $f$ from oscillating infinitely? For example, if $X^{(1)}$ contains two edges $a,b\subseteq\{0\}\times\mathbb{R}\subseteq\mathbb{R}^2$ with $a\cap b=\{(0,0)\}$, then $f(x)=(0,x^2\sin(1/x))$ can go infinitely many times into $a$ and $b$.
3) UNNECESSARY: If you have time/patience/interest, the definitions of a simplicial complex, abstract simplicial complex, Whitehead complex and any other complex are also welcome.
-
The definition of a simplicial complex or abstract simplicial complex is much easier to grasp, and you may want to look at those Wikipedia articles first. The definition of a CW-complex is, I guess, a more flexible and powerful version of these. It looks long and unwieldy because it needs to both include a large class of spaces and be possible to prove strong theorems about. – Qiaochu Yuan May 29 '11 at 19:50
I think I understand the definition of an (abstract) simplicial complex, but I wanted to see the definition in the context of cell complexes: what additional condition do we assume. – Leon May 29 '11 at 20:00
You should keep in mind that there is no such thing as an official definition of CW-complexes (or of anything, really!) and that the concept can be presented in various forms. – Mariano Suárez-Alvarez May 29 '11 at 20:05
So in my particular case, it's common for CW-complexes to mean different things in different books? – Leon May 29 '11 at 20:08
@Qiaochu, @Mariano: I'm asking this because I have problems understanding the formulation of the theorem about fundamental groups of CW-complexes. – Leon May 29 '11 at 20:14
No part of the definition of a CW complex prevents the sorts of examples that you have indicated. In particular:
1. It is perfectly fine for the entire boundary of a 2-cell to be attached to a single point in the middle of a 1-cell.
2. It is perfectly fine for the boundary of a 2-cell to be attached to a 1-cell in an oscillating fashion, e.g. locally resembling $x \sin(1/x)$.
I think you have a good understanding of the definition of a CW complex, you are just confused about how a definition this general could be useful.
The reason is that algebraic topology is mostly defined up to homotopy equivalence. This is usually defined as follows: two spaces $X$ and $Y$ are homotopy equivalent if there exist maps $f\colon X\to Y$ and $g\colon Y \to X$ so that $f\circ g$ and $g\circ f$ are homotopic to the identity map. Homotopy equivalent spaces have the same homotopy groups, the same homology and cohomology groups, and essentially all the same homotopical properties.
The reason that wildness of the attaching maps $f_\lambda$ is unimportant is that any "wild" CW complex is homotopy equivalent to a "tame" CW complex. In particular, the homotopy type of a CW complex is entirely determined by the homotopy classes of the attaching maps. That is, if you replace one of the attaching maps by a homotopic map, then the resulting CW complex is homotopy equivalent.
For example, if the entire boundary of a 2-cell is attached to the middle of a 1-cell, this is homotopic to a map that attaches the entire boundary to a nearby 0-cell, so the two resulting complexes are homotopy equivalent. Similarly, any oscillating map like $x\sin(1/x)$ is homotopic to a more reasonable map, so any CW complex with an oscillating attachment is homotopy equivalent to one with nicer attaching maps.
Indeed, assuming we are willing to work up to homotopy equivalence, we can assume that the boundary of any 2-cell maps to either
1. A single 0-cell via a constant map, or
2. To a closed loop of edges in the 1-skeleton, using a map which is locally an embedding.
This is how topologists tend to think of cell complexes for the purposes of algebraic topology.
Incidentally, the reason that it is useful to allow arbitrary attaching maps is that it glosses over the problem of how to choose a "nice" representative for each homotopy class of maps from the boundary of an $n$-cell to the $(n-1)$-skeleton. Though it's obvious how such "nice" representatives should work when $n=2$, it becomes less clear in higher dimensions.
For example, it is possible to have a CW complex whose $2$-skeleton is a 2-sphere, and then attach a $4$-cell to the $2$-skeleton using a non-trivial element of $\pi_3(S^2)$. That is, the boundary of the $4$-cell is a $3$-sphere $S^3$, which is being attached to the sphere $S^2$ via the Hopf map $S^3 \to S^2$. This is actually a very useful cell complex, because it gives a cell structure for the projective space $\mathbb{C}P^2$ with only three cells.
Simplicial complexes are much simpler: they are truly just combinatorial objects, with simplexes glued together using the simple linear identifications. The disadvantage is that a simplicial complex must usually have many more simplices than the cells of a cell complex. For example, there exists a CW complex for the torus that has only four cells, while a simplical complex homeomorphic to a torus requires at least a couple dozen simplices. In general, putting a CW structure on a space requires only homotopy information, while putting a simplicial structure on the space requires a much more careful consideration of the geometry.
Edit: By the way, the best place to learn about cell complexes and their relation to homotopy equivalence is Chapter 0 of Hatcher's algebraic topology book.
-
@Jim: "the homotopy type of a CW complex is entirely determined by the homotopy classes of the attaching maps": amazing, and very enlightening, thank you very much. I hope this is proven somewhere in Hatcher. – Leon May 29 '11 at 21:36
I don't think there's an official theorem of that form in Hatcher. Those types of theorems are usually homework problems in his textbook. – Ryan Budney May 29 '11 at 23:35
@Leon: Hatcher states the theorem in the case of a single attachment. See the italicized statement immediately before example 0.11. – Jim Belk May 30 '11 at 1:35
Also, Proposition 0.16 is important for showing that a change in one attaching map can be extended to higher-dimensional cells. – Jim Belk May 30 '11 at 1:37
There are many possible cell decompositions of an open disk. One of the simplest would be to choose a homeomorphism of the disk with $\mathbb{R}^n$, and then use the decomposition into cubes (with $\mathbb{Z}^n$ as the 0-cells, and so forth). – Jim Belk May 30 '11 at 17:25
|
{}
|
# Analytical solution for bound state energies of infinite well
Tags:
1. Oct 20, 2014
### Bob007
Hi there
I am trying to find bound state energies assuming infinite potential. I have been told it can be done by analytically solving Right Hand Side and Left Hand Side of an equation such as:
E^(1/2) tan( (2 m a^2 E / (4 hbar^2))^(1/2) ) = (V0 - E)^(1/2)
If solved properly, it should give one curve (RHS), crossed by several LHS curves. Intersection points are the answers I am looking for. Each intersection corresponds to one n. I am wondering if it can be done by Matlab or Mathematica? Sorry if it is too basic :)
Thanks
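For what it's worth, the quoted equation has the form of the textbook even-parity condition for a finite square well of depth V0 (a truly infinite well has the closed form E_n = n²π²ħ²/(2ma²) and needs no graphical solution). Here is a sketch of the graphical/numerical intersection-finding in Python, with hypothetical values for a and V0 in natural units (ħ = m = 1):

```python
import math

# Even-parity bound-state condition for a finite square well of depth V0:
#   sqrt(E) * tan( sqrt(2 m a^2 E) / (2 hbar) ) = sqrt(V0 - E)
# Natural units (hbar = m = 1) and hypothetical a, V0, for illustration only.
hbar, m, a, V0 = 1.0, 1.0, 2.0, 50.0

def g(E):
    """LHS minus RHS; the bound-state energies are the zeros of g."""
    return math.sqrt(E) * math.tan(math.sqrt(2*m*a*a*E) / (2*hbar)) - math.sqrt(V0 - E)

def bisect(f, lo, hi, tol=1e-12):
    """Simple bisection on a bracketed sign change."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (f(mid) > 0) == (flo > 0):
            lo, flo = mid, f(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Scan (0, V0), bracket each sign change, and skip the poles of tan
# (at a pole the jump in g is huge, so filter on the jump size).
N = 20000
Es = [1e-6 + (V0 - 2e-6) * i / (N - 1) for i in range(N)]
vals = [g(E) for E in Es]
roots = []
for i in range(N - 1):
    if vals[i] * vals[i+1] < 0 and abs(vals[i] - vals[i+1]) < 50:
        roots.append(bisect(g, Es[i], Es[i+1]))

print(roots)  # one energy per intersection of the LHS and RHS curves
```

Matlab's fzero or Mathematica's FindRoot work the same way: bracket each intersection of the two curves, then refine.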
2. Oct 21, 2014
### ShayanJ
That doesn't specify what the potential is! Assuming infinite potential where?
Also you can't solve RHS and LHS of an equation, you solve the equation itself!!!
|
{}
|
# Apparent frequency as function of distance
So the Doppler effect says that the frequency of sound changes due to relative motion of source and observer. My question is if there any expression that tells how the apparent frequency changes in terms of the distance between the observer and source.
I know we have $$f'=f\,\frac{v\pm v_O}{v\mp v_s},$$ but there is no explanation for the rate of change in frequency.
To a pretty good approximation, the frequency of a sound wave does not change as it travels. In a plane wave solution, all points along the wave are oscillating up and down with the same period; and so the number of cycles per second is the same at the source as at the observer, no matter where the observer is located.
There are some minor effects (due to non-linearities in the medium) which can cause frequencies to change. However, these are usually negligible, and are usually glossed over in introductory physics classes.
EDIT: You commented that there should be a relation between distance and frequency because the pitch changes when it comes closer. But that's not quite true; the pitch changes while it is moving, which is not quite the same thing.
Suppose you have a speaker that is 10 m away and at rest, and it is emitting a 440 Hz tone. While the speaker is at rest, you hear a 440 Hz tone. If the speaker then moves towards you, you will hear a higher frequency while the speaker is moving. But if it stops 5 m away from you, the frequency you hear will return to 440 Hz.
In principle, if you know what the "shifted frequency" was, you could figure out how fast the speaker was moving; and if you measured the amount of time that you heard the shifted frequency, you could multiply the velocity by the amount of time to find the distance the speaker travelled. But that only tells you the displacement of the speaker during its motion, not its initial or final distance from you.
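A small numerical illustration of this point (a sketch; the stationary-observer, moving-source form of the Doppler formula and 343 m/s for the speed of sound are assumed):

```python
def observed_frequency(f_source, v_source, v_sound=343.0):
    """Frequency heard from a source moving straight toward a stationary
    observer at speed v_source (v_source = 0 means the source is at rest)."""
    return f_source * v_sound / (v_sound - v_source)

f = 440.0  # Hz, emitted tone

at_rest_far  = observed_frequency(f, 0.0)   # speaker at rest, 10 m away
while_moving = observed_frequency(f, 10.0)  # speaker approaching at 10 m/s
at_rest_near = observed_frequency(f, 0.0)   # speaker stopped, 5 m away

print(at_rest_far, while_moving, at_rest_near)
```

The pitch is shifted only while the speaker moves; at rest the distance (10 m or 5 m) makes no difference to the frequency.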
• I meant to say that we listen the frequency changing when a source approach us; if it comes closer then apparent frequency would be higher than when it was farther. If that was right then there should be relation of apparent frequency and the distance. – Anil Paudel Jan 23 '19 at 15:59
• @AnilPaudel, there is an effect when the moving object's closest approach is some distance away vs. very close to you. – David White Jan 23 '19 at 16:53
• @AnilPaudel: See my edits. – Michael Seifert Jan 23 '19 at 22:06
|
{}
|
1. ## Cauchy <=> convergent
So I am trying to prove that a sequence is Cauchy if and only if it is convergent, using only the Bolzano Weierstrass theorem for sequences
[Any bounded sequence must have at least one convergent subsequence]
So far, all I have gotten in the forward direction (cauchy => convergent) is that a cauchy sequence is bounded, and hence by the Bolzano Weierstrass theorem, there must be at least ONE convergent subsequence. I have a theorem in the book which states:
[A sequence converges to A iff each of its subsequences converge to A.]
Is this the proper theorem to use in my problem and if so how do I show that ALL of the subsequences converge, or even that more than one subsequence exists at all?
Also in the backwards direction (convergent => cauchy), I am completely at a loss. Any help would be greatly appreciated.
Thanks!
2. Originally Posted by dannyboycurtis
Also in the backwards direction (convergent => cauchy), I am completely at a loss. Any help would be greatly appreciated.
$\displaystyle \text{If }\left( {x_n } \right) \to L\text{ then }\varepsilon > 0\, \Rightarrow \,\left( {\exists N} \right)\left[ {n \geqslant N\, \Rightarrow \,\left| {x_n - L} \right| < \frac{\varepsilon}{2} } \right]$
If $\displaystyle n,m\ge N$ then $\displaystyle \left| {x_n - x_m } \right| \leqslant \left| {x_n - L} \right| + \left| {L - x_m } \right| < \frac{\varepsilon }{2} + \frac{\varepsilon }{2} = \varepsilon$
3. Originally Posted by dannyboycurtis
So I am trying to prove that a sequence is Cauchy if and only if it is convergent using only the Bolzano Weierstrass theorem for sequences
[Any bounded sequence must have at least one convergent subsequence]
So far, all I have gotten in the forward direction (cauchy => convergent) is that a cauchy sequence is bounded, and hence by the Bolzano Weierstrass theorem, there must be at least ONE convergent subsequence. I have a theorem in the book which states:
[A sequence converges to A iff each of its subsequences converge to A.]
Is this the proper theorem to use in my problem and if so how do I show that ALL of the subsequences converge, or even that more than one subsequence exists at all?
Let $\displaystyle \{a_n\}$ be your sequence and $\displaystyle \{a_{n_k}\}$ be the subsequence that converges to A. Given any $\displaystyle \epsilon> 0$ there exists $\displaystyle N_1$ such that if $\displaystyle n_k> N_1$ then $\displaystyle |a_{n_k}- A|< \epsilon/2$. Also, because $\displaystyle \{a_n\}$ is a Cauchy sequence, there exists $\displaystyle N_2$ such that if m and n are both larger than $\displaystyle N_2$, then $\displaystyle |a_m- a_n|< \epsilon/2$.
Now, let N be the larger of $\displaystyle N_1$ and $\displaystyle N_2$ so that if n> N, both are true. Since $\displaystyle \{a_{n_k}\}$ is an infinite sequence, there certainly exist $\displaystyle n_k> N$ also. Then, for n> N, $\displaystyle |a_n- A|\le |a_n- a_{n_k}|+ |a_{n_k}- A|$ and both are less than $\displaystyle \epsilon/2$.
Also in the backwards direction (convergent => cauchy), I am completely at a loss. Any help would be greatly appreciated.
Thanks!
Plato showed this. And, by the way, the proof in this direction does NOT require "Bolzano Weierstrass" or any form of "completeness". For example, in the rational numbers Cauchy sequences are not necessarily convergent, but any convergent sequence is a Cauchy sequence.
|
{}
|
# Synopsis: Diffraction of a nonexistent beam of light
Researchers demonstrate a new type of nonlinear diffraction with no linear analog.
When a nearly monochromatic wave passes through a spatially modulated medium, a diffracted beam, whose wave vector satisfies the vector sum of the incident wave vector and an integer multiple of the medium’s reciprocal lattice vector, can be observed. This is the familiar Bragg condition that is well known in optics and has also been applied to atom-optics and matter waves.
Now, Solomon Saltiel and collaborators from Australia, Bulgaria, Israel, and Denmark have demonstrated diffraction by a virtual beam of light. Writing in Physical Review Letters, the team observed a type of nonlinear diffraction that has no linear analog. In their experiments, two copolarized but noncollinear beams are incident on a nonlinear medium in which the sign of the second-order nonlinear susceptibility ${\chi}^{(2)}$ is spatially modulated. The interaction of the two beams through this nonlinear medium results in a diffraction pattern of the second harmonic of the fundamental wave, as if a third, nonexistent beam at the frequency of the fundamental beams were also incident on the medium, with a propagation direction along the bisector of the two real beams. Saltiel et al. present a physical picture in which two beams at the fundamental frequency generate, through the nonlinearity of the medium, three waves at the second harmonic frequency.
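The vector bookkeeping behind the "virtual beam" picture is easy to sketch numerically (all numbers below are hypothetical; only the vector relations come from the text):

```python
import numpy as np

# Two copolarized fundamental beams incident at +/- theta about the z axis
# (hypothetical magnitudes and angle; |k| in arbitrary units).
k_mag = 1.0
theta = np.deg2rad(5.0)
k1 = k_mag * np.array([np.sin(theta),  np.cos(theta)])
k2 = k_mag * np.array([-np.sin(theta), np.cos(theta)])

# The "virtual" beam propagates along the bisector of the two real beams:
bisector = (k1 + k2) / np.linalg.norm(k1 + k2)

# Bragg-type condition: the diffracted second-harmonic wave vector is the
# vector sum of the incident wave vectors plus an integer multiple of the
# reciprocal lattice vector G of the chi^(2) modulation.
G = np.array([0.1, 0.0])  # hypothetical reciprocal lattice vector
m = 1
k_sh = k1 + k2 + m * G

print(bisector, k_sh)
```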
Results of this work should be readily applicable to acoustic waves in periodic structures and matter-waves in optical lattices. – Frank Narducci
### Announcements
More Announcements »
Optics
## Previous Synopsis
Superconductivity
## Next Synopsis
Atomic and Molecular Physics
## Related Articles
Optics
### Focus: Strong Light Reflection from Few Atoms
Up to 75% of light reflects from just 2000 atoms aligned along an optical fiber, an arrangement that could be useful in photonic circuits. Read More »
Optics
### Synopsis: Controlling a Laser’s Phase
A compact scheme can directly modulate the phase of a laser without a bulky external modulator. Read More »
Photonics
### Focus: Chip Changes Photon Color While Preserving Quantumness
A new device that can potentially be scaled up for quantum computing converts visible light to infrared light suitable for fiber-optic transmission without destroying the light’s quantum state. Read More »
|
{}
|
### Home > MC2 > Chapter 3 > Lesson 3.2.1 > Problem3-61
3-61.
Find the missing information from the following relationships.
1. Mark has downloaded four times as many songs on his music player as Chloe. If Mark has $440$ songs, how many songs does Chloe have?
Since Mark has $4$ times as many songs as Chloe and he has $440$ songs, what number of songs could Chloe have that would be a fourth of Mark's total songs?
$\text{Mark's songs}=4(\text{Chloe's songs})$
$440 \text{ songs} = 4(\text{Chloe's songs})$
What number times $4$ would equal $440$ songs? Use this equation to solve for Chloe's total number of songs.
2. Cici likes to collect shoes, but she only has half the number of pairs of shoes that her friend Aubree has. If Cici has $42$ pairs of shoes, how many pairs of shoes does Aubree have?
$\text{Cici's # of shoes }=\frac{1}{2}(\text{Aubree's # of shoes})$
$2(\text{Cici's # of shoes})=\text{Aubree's # of shoes}$. Since Cici has $42$ pairs of shoes, use this equation to solve for the number of pairs of shoes Aubree has.
$84$ pairs of shoes.
Can you explain how you got this answer from the equation above?
3. Tito walked three more miles than Danielle. If Danielle walked $2$ miles, how far did Tito walk?
Refer to parts (a) and (b).
$\text{Tito's miles} = \text{Danielle's miles} + 3 \text{ miles}$
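The three relationships can be checked directly (a trivial sketch, but it mirrors the equations given in each part):

```python
# (a) Mark's songs = 4 * (Chloe's songs), and Mark has 440 songs
chloe = 440 / 4        # 110.0 songs

# (b) Cici's shoes = (1/2) * (Aubree's shoes), and Cici has 42 pairs
aubree = 2 * 42        # 84 pairs

# (c) Tito's miles = Danielle's miles + 3, and Danielle walked 2 miles
tito = 2 + 3           # 5 miles

print(chloe, aubree, tito)
```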
|
{}
|
# Method of Proving Euler's Formula?
1. Dec 21, 2012
### Mandelbroth
The other day, I was thinking about Fourier series. Because $e^{ix}$ is periodic, with period $2\pi$, we can use the Fourier series...
$\displaystyle \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{ix} \ dx + \frac{1}{\pi}\sum_{n=1}^{\infty}\left[\left(\int_{-\pi}^{\pi} e^{ix} cos(nx) \ dx \right) cos(nx) + \left(\int_{-\pi}^{\pi} e^{ix} sin(nx) \ dx\right) sin(nx)\right] = \frac{0}{2\pi} + \frac{1}{\pi}(\pi cos{x} + i\pi sin{x}) = cos{x} + i sin{x}$
...to describe eix, right?
Isn't this an easier way to prove Euler's formula than using Taylor expansion? Or...am I missing something?
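As a sanity check on the claim that only the n = 1 terms survive, the coefficients can be computed numerically (a sketch in Python/NumPy; this is quadrature, of course, not a proof):

```python
import numpy as np

def trapezoid(y, x):
    # plain trapezoidal rule (avoids depending on a particular numpy version)
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

# Fourier coefficients of f(x) = e^{ix} on [-pi, pi]:
#   a_n = (1/pi) * integral of e^{ix} cos(nx) dx, b_n likewise with sin(nx).
x = np.linspace(-np.pi, np.pi, 200001)
f = np.exp(1j * x)

a0 = trapezoid(f, x) / (2 * np.pi)
coeffs = {}
for n in range(1, 4):
    a_n = trapezoid(f * np.cos(n * x), x) / np.pi
    b_n = trapezoid(f * np.sin(n * x), x) / np.pi
    coeffs[n] = (a_n, b_n)

print(a0)        # ~0
print(coeffs[1]) # ~(1, i): the cos(x) term and the i*sin(x) term
print(coeffs[2]) # ~(0, 0): all higher coefficients vanish
```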
2. Dec 21, 2012
### HallsofIvy
How do you determine that eix has period $2\pi$ with using Euler's formula?
Sorry- I meant "without using Euler's formula".
Last edited by a moderator: Dec 22, 2012
3. Dec 21, 2012
### Dickfore
Because $e^{z_1 + z_2} = e^{z_1} e^{z_2}$, and $e^{2 \pi i} = 1$.
4. Dec 21, 2012
### pasmith
So how do you know that $e^{2\pi i} = 1$ without using Euler's formula?
5. Dec 21, 2012
### Mandelbroth
With Euler's formula? Sine and cosine have a period of 2 pi. Therefore, an addition of the two also has a period of 2 pi.
Without, $\displaystyle e^{ix} = \lim_{n \rightarrow \infty}\left(1 + \frac{ix}{n}\right)^n$. Observing the graph of $f(x) = \left(1+\frac{ix}{n}\right)^n$, it can be seen that the graph looks more and more sinusoidal as n grows larger. Using a large n, it would be fairly easy to approximate the period of the function's limit as n approaches infinity. Using WolframAlpha (an essential tool for deriving Euler's formula in the 1700s :tongue:) to graph f(x) with n=100, we get this (link goes to WolframAlpha). It can be seen that the imaginary part looks like it is approaching the form of the sine function, and the real part looks like cosine. If we look at x ≈ 3.14, the imaginary part is around 0 and the real part is around -1. This can also be used to answer
$\displaystyle \forall k\in \mathbb{Z}, \ e^{2\pi i k} = \lim_{n \rightarrow \infty} \left(1+\frac{2\pi i k}{n}\right)^n = 1$.
Though, you do make a valid point. I already knew that eix was periodic based on Euler's formula. Thus, there is a minor element of circular logic to my "proof". However, assuming that we say that the function f from above is asymptotically periodic, is there any reason that this is not correct?
Last edited: Dec 21, 2012
6. Dec 21, 2012
### pasmith
Why is that obvious (without assuming Euler's formula)? I think if you try to prove that rigorously, you will end up writing something like
$$\lim_{n \to \infty} \left(1 + \frac{2\pi k \mathrm{i}}{n}\right)^n = \sum_{n=0}^{\infty} \frac{(2\pi k\mathrm{i})^n}{n!} = \sum_{n=0}^{\infty} \frac{(-1)^n(2\pi k)^{2n}}{(2n)!} + \mathrm{i} \sum_{n=0}^{\infty} \frac{(-1)^n(2\pi k)^{2n+1}}{(2n+1)!}\\ = \cos(2\pi k) + \mathrm{i}\sin(2\pi k) = 1$$
which is just the Taylor series proof of Euler's formula in the special case $x = 2\pi k$.
7. Dec 21, 2012
### Mandelbroth
Obvious? :uhh: (It's not...)
Perhaps, if someone was ridiculously intent on not using Euler's formula, they might use $\displaystyle 2\pi i k = ln(\frac{1}{2}) + \int_{\frac{1}{2}}^{e^{2\pi i k}} \frac{1}{t} \ dt$?
Last edited: Dec 21, 2012
8. Dec 22, 2012
### Mandelbroth
I'm sorry. I was wrong here. The equation should be $\displaystyle ln(e^{2\pi i k}) = \int_{1}^{e^{2\pi i k}} \frac{1}{t} \ dt = 0$. I forgot that not all properties of logarithms are consistent between real numbers and complex numbers.
9. Dec 22, 2012
### Dickfore
So, how do you know $\cos 0 = 1$?
10. Dec 26, 2012
### HassanEE
I see what HallsofIvy is getting at. Before you go ahead and find the Fourier series, you have to show periodicity. For f(x) to be periodic it must satisfy f(x+T)=f(x) for all x. If f(x)=eix then eix=ei(x+T). At this point you will not be able to proceed unless you use Euler's formula (or the Taylor series expansion, but you are trying to avoid that altogether).
You mentioned graphs of eix, but to plot these graphs you need to use Euler's formula!
If you were to find a way to show periodicity without using Euler's formula, then you can probably go ahead and use the Fourier series to prove Euler.
11. Dec 26, 2012
### rbj
the easiest way i know of to prove Euler's formula was in wikipedia for a while, but they deleted it (for no good reason).
Consider the function
$$f(x) = \frac{e^{ix}}{\cos(x) + i \sin(x)}$$
where
$$i^2 = -1$$.
so, treating $i$ as a constant with the above property, compute the first derivative of $f(x)$ and also what $f(0)$ is.
what does that tell you?
12. Jan 25, 2013
### Boorglar
I use the limit definition of the exponential:
$$e^{i\pi}=\lim_{n\to\infty}\left(1+\frac{i\pi}{n}\right)^n$$
Then I rewrite $1 + \frac{ i\pi }{ n }$
as:
$$\sqrt{1+\frac{\pi^2}{n^2}}[\cos(\arctan(\frac{\pi}{n})) + i\sin(\arctan(\frac{\pi}{n}))]$$
i.e. in the trigonometric form (the modulus is the stuff with the square root, and the argument is arctan(pi/n) ). (Not assuming Euler's formula here)
Then using de Moivre's formula on this rewritten number (which again can be proven by induction for integer n, and then for rational n, without Euler's formula),
$$(1+\frac{i\pi}{n})^n = \sqrt{1+\frac{\pi^2}{n^2}}^n[\cos(\arctan(\frac{\pi}{n})) + i\sin(\arctan(\frac{\pi}{n}))]^n = \sqrt{1+\frac{\pi^2}{n^2}}^n[\cos(n\arctan(\frac{\pi}{n})) + i\sin(n\arctan(\frac{\pi}{n}))]$$
The limit as n approaches infinity of the modulus raised to the n-th power approaches 1, and the limit for n*arctan(pi/n) will be pi (both can be proven with l'Hopital's rule or some other theorem about limits, which do not use Euler's formula).
And plugging pi into cos and sin will finally give the answer of -1.
You could argue that this is just a special case of a proof of Euler's formula, but at least I didn't use it anywhere in the proof.
Oh and a technical detail: when I took the limit, I actually did it only for rational n, but I think that the continuity of (1+i*pi/n)^n implies the limit for any real n, because Q is dense (but I haven't checked into that).
Last edited: Jan 25, 2013
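Boorglar's limit argument is easy to sanity-check numerically. A quick sketch (mine, not from the thread) evaluates $(1+i\pi/n)^n$ with plain complex arithmetic — no calls to cos or sin, so no hidden appeal to Euler's formula:

```python
# Evaluate (1 + i*pi/n)**n for growing n and watch it approach -1.
PI = 3.141592653589793

def limit_term(n):
    """(1 + i*pi/n)**n using plain complex arithmetic only."""
    return (1 + 1j * PI / n) ** n

for n in (10, 1000, 10**6):
    print(n, limit_term(n))
```

By n = 10^6 the value agrees with −1 to about five parts per million, consistent with the modulus estimate $(1+\pi^2/n^2)^{n/2} \approx e^{\pi^2/(2n)}$ discussed above.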
13. Jan 25, 2013
### Boorglar
Oops, I copy-pasted my previous post from a previous post I wrote, but the Latex apparently didn't...
EDIT: Ok, I rewrote my above post in a more readable format.
Also, I just realised this thread asks for a proof of Euler's formula in general, not the special case for pi. But I think my argument is exactly the same, by replacing pi with x.
(My original answer was to a person asking for proof of Euler's identity without using Euler's formula)
Last edited: Jan 25, 2013
|
{}
|
## Discovery Learning – Rational Expressions
This week I tried something a little different in my Elementary Algebra class for simplifying rational expressions. Typically I walk through an example of simplifying a numerical fraction to lowest terms and try to extract the key ideas and apply them to rational expressions containing variables. Based on some things I had seen/heard at AMATYC, particularly about how WolframAlpha (W|A) will impact the way we teach, I decided to try a new approach.
At the top of the board I wrote $Simplify: \frac{x^2+8x+15}{x^2-9}$
At the bottom of the board I wrote $\frac{x+5}{x-3}$
Then I said to my class “The correct result for this problem is at the bottom of the board. How do we get it?”, and I gave them a moment to think about it. I could see the wheels spinning.
• “How does $x^2+8x+15$ turn into $x+5$?”
• “How does $x^2-9$ turn into $x-3$?”
And then it happened – someone said we need to factor the numerator and denominator. So many times I have taught this and told my students “This is how you do it.” It seemed that the process was more accessible to my students because it came from one of them instead of coming top down from the instructor.
After walking through a few examples, I introduced multiplication of rational expressions and later division of rational expressions in the same way. It might have been the easiest day I have ever had teaching these topics. This will definitely not be the last time I use this approach!
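For readers who want to check an answer the way W|A would, a quick numeric spot-check (my own sketch, not part of the lesson) confirms the two expressions on the board agree wherever both are defined:

```python
def original(x):
    # the expression at the top of the board
    return (x**2 + 8*x + 15) / (x**2 - 9)

def simplified(x):
    # the expression at the bottom of the board
    return (x + 5) / (x - 3)

# the two agree for any x other than 3 and -3 (where the original is undefined)
for x in (2, 4.5, 10, -7):
    assert abs(original(x) - simplified(x)) < 1e-9
```

A spot-check is not a proof, but it catches wrong "simplifications" immediately, which makes it a nice classroom companion to the factoring discussion.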
Summary
The idea is to give your students a problem and an answer, and see if they can find the path to get there. Do you use this approach in your classroom? Have you used WolframAlpha in a similar way? Please share any experience or opinions by commenting on this blog, or you can reach me through the contact page at my web site – georgewoodbury.com.
-George
I am a math instructor at College of the Sequoias in Visalia, CA. Each Wednesday I post an article related to General Teaching on my blog. If there’s a particular topic you’d like me to address, or if you have a question or a comment, please let me know. You can reach me through the contact page on my website – http://georgewoodbury.com
|
{}
|
# Tag Info
10
If you are not worried about the speed or exact contour of hand, below is a simple solution. The method is like this : You take each contour and find distance to other contours. If distance is less than 50, they are nearby and you put them together. If not, they are put as different. So checking distance to each contour is a time consuming process. Takes a ...
8
Here's a step-by-step procedure for erosion/dilation by hand: Print out A and se on two sheets of paper Place the se paper on every pixel of the A sheet in turn At each position: Take the pixel values of A at the respective positions where se is 1. For the first top-left position, this would be 0,0,1,1 as I have tried to illustrate here: For an erosion, the ...
6
I am somewhat surprised that feature points don't work that well. I have had success registering shapes like yours using either Harris points, this is a corner detector, in combination with the RANSAC algorithm. See the wiki or Peter Kovesi's site. Using a feature detector like SURF or SIFT in combination with an edge map of the image prior to feature ...
6
If you're willing to add/subtract etc. morphologically transformed images, you can count how many signal pixels are in the vicinity of each pixel, and threshold based on that number. img = imread('http://i.stack.imgur.com/wicpc.png'); n = false(3);n(4) = 1; s = false(3);s(6) = 1; w = false(3);w(2) = 1; e = false(3);e(8) = 1; %# note that you could ...
4
You will want to use the morphological closing operation, which is the erosion of the dilation of an image. This means that you first dilate the image, removing the small 'holes' in the interior of the image, and then you erode the image, shrinking the boundary. If you want the final boundary to be 'smaller' than the original boundary, simply erode more than ...
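A minimal illustration of closing (dilation followed by erosion) on a 1-D binary signal — a toy sketch of mine with a 3-element structuring element, not from any of the answers above:

```python
def dilate(a):
    # a pixel becomes 1 if any pixel in its 3-wide neighborhood is 1
    return [max(a[max(0, i - 1):i + 2]) for i in range(len(a))]

def erode(a):
    # a pixel stays 1 only if every pixel in its 3-wide neighborhood is 1
    return [min(a[max(0, i - 1):i + 2]) for i in range(len(a))]

def close(a):
    # closing = erosion of the dilation
    return erode(dilate(a))

signal = [0, 0, 1, 1, 0, 1, 1, 0, 0]   # one-pixel hole in the middle
print(close(signal))
```

The hole is filled while the overall extent is preserved. Border handling here simply clips the window at the edges, which differs from the padding conventions used by OpenCV and MATLAB.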
4
Just a naive suggestion: do you know about component labeling? The technique is about finding chunks of "touching" pixels and assigning them a label, e.g. an integer number. You can then interrogate each chunk separately, looking for the pixels that share the same label. In MATLAB, here is the function that does it trivially: bwlabel
4
To fix the connectivity issue, you can try a close operation: cv::Mat structuringElement = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(40, 40)); cv::morphologyEx( inputImage, outputImage, cv::MORPH_CLOSE, structuringElement ); I doubt that this will produce the results that you want, but you can give it a try.
3
I guess the basic question here becomes - what difference does non-flat structuring elements make w.r.t flat structuring elements ? From the definition of dilation one can see that the structuring element can define a support (like in flat structuring elements) or another function itself (gray scale structuring elements). Look at wiki which defines the ...
3
This is quite an interesting problem to solve! Try a median filter. See the reference here and here for more details. Though I haven't put my hands to simulate your problem, this is a suggestion. My gut feeling says that it might give you great benefit because, it is known to counter salt-n-pepper type of noise. In your case, the images has extra white ...
3
You can also run a distance transform on the image, then detect local maxima (searching for pixels with highest/lowest value of all pixels in 3x3 pixel patch - can be larger depending on expected minimum distance between the original blobs). Note that for detecting features of size 1-3 pixels, you need to double your sampling frequency (either upscale the ...
3
After couple days of research, I have found another interesting solution. The Similar algorithm for segmentation of overlapping objects is described in this article (the algorithm below is an adaptation for objects of parallelogram shape). It uses properties of the boundary to perform separation. Perhaps someone may find it interesting. Overlapping ...
3
What you are essentially doing is a matched filter. However, thanks to Hough transform, your filter (line) is oriented and therefore I would call it an oriented matched filter. For generating the Bresenham line and sampling the pixels you might want the use the OpenCV line iterator. The simple usage would be similar to: cv::LineIterator it(image, pt1, pt2, ...
2
For planar elements (implied by the wording "structuring element") the containment of origin is enough to maintain the properties of anti-extensivity for erosion, and extensivity for dilation as can be found in many texts and you also pointed that out. So, yes, this is enough for the non-negativity for the arithmetic difference (this is directly shown by ...
2
I looked up in Jaehne, Gonzalez, Soille (the one you've posted as well as Mathematical Morphology and Its Applications to Image and Signal Processing) and some other specialized morphological papers and found neither any design criteria for the structuring element nor any special hints why it has to be symmetrical. Personally I think that a symmetrical ...
2
It looks like you're "oversegmenting" your image. Morphological operations, as bjnoernz has suggested, would help. In particular, a watershedding approach should get closer to what you want than just checking distance (as in python example above). See http://cmm.ensmp.fr/~beucher/wtshed.html.
2
One option would be to apply repeated morphological erosion to the image until it is fully eroded. At that point, each of the blobs shown above would be reduced down to a single pixel; you could take the locations of those pixels to be the list of points that you're looking for.
2
You are correct. Erosion of that image with that structuring element should cause all of the 1's to disappear. There are two ways to look at it. One is that only ones that are surrounded by ones will remain. There are no such ones. The other is that you will only have ones remain where you can fit the structure element into a set of ones in the image, ...
2
I am afraid that whatever way you pick to do this, it is not going to be straightforward because to assign the targets to clusters you are going to have to go through the image (at least once). I suppose that getting the points is the easier problem of the two (you are probably already applying some form of thresholding for example). To recover the ...
2
According to this paper, the grayscale dilation of image $I$ by a non-flat structuring element $S$ is defined as follows: $[I ⊕ S](x,y) = \displaystyle\max_{(s,t)∈S}\{I(x-s,y-t)+S(s,t)\}$ Since the origin of the structuring element is in the top right corner, we have that $s∈[0,1]$ and $t∈[-2,0]$ excluding the point $(s,t)=(0,-1)$. So using your example of ...
2
I guess this depends on the digital distance transform that one is approximating on the 3d grid and there are various local connectivities possible. There is an implementation in ImageJ here. It would also be good to verify if you are using a non-flat structuring element or a correct 3d structuring element. Read Matlab reference here. In the place of ...
2
Referring to MATLAB, the basic steps are Determine the connected components: CC = bwconncomp(BW, conn); Compute the area of each component: S = regionprops(CC, 'Area'); Remove small objects: L = labelmatrix(CC); BW2 = ismember(L, find([S.Area] >= P)); At the last step, after obtaining $L$, you might as well retain the component with the largest area ...
2
Looks like you are using Matlab. Try bwareaopen(I, N), where I is the original binary image and N is the estimated size of each unwanted connected region. You can try edit bwareaopen for more details. Basically the algorithm tries to find the size of connected regions. Connected-component labeling with union-find algorithm is expected to get you there.
2
You can use a line structure with 90 degree and a pre-defined thickness to erode the image. Then scan along the x-axis. In each y-axis line, it can be viewed as connection parts if there are more than 4 pixels.
1
Take a look to this paper from Santiago Velasco, it's about Conditional Toggle Mapping, and it's application to denoising.
1
Why do not use hough transform for finding lines and then finding table region? you can use hough transform to find horizontal and vertical lines. and then extract region of lines.
1
Though I did not play with the rolling ball algorithm, I guess based on your comparison results it is quite obvious that the two algorithms you are interested in do not have a big difference. This is often the result when you try to compare some algorithm which is claimed to be better with some classic algorithm: a paper method is often no better than the ...
1
You are actually drawing the skeleton of the background (brighter region). change cv::threshold(image, image, threshold, 255, cv::THRESH_BINARY); in your codes to cv::threshold(image, image, threshold, 255, cv::THRESH_BINARY_INV); You should be able to get the right skeleton.
1
If you know the median filter principle, that's exactly the same type of operation. For each pixel, you take a look to all the neighbor pixels (defined by the structuring element), and you take the max (dilation) or the min (erosion). Therefore it's like the median filter, but instead of taking the median value, you take the min/max. The mathematical ...
1
Structuring element is mostly supposed to be a binary array (though you can try 0 0 7 7 by yourself). H=[0 0 1 1] u, v are the pixel coordinate at I. In your example, the only pixel value that will change is I(1,2) (the array label starts from 0 instead of 1). I = 1 2 3 3 7 2 Consider I(1,2), from the formula, I(1,2) I(1,1), and I(1,0)...
1
The asymmetric structuring elements produce a translation dilation on the original set or image. The size of the translation is determined by the offset in the center of the structuring element. For example you could try this using matlab for the dilation operator: I = imread('circles.png'); se = strel('disk',10); %you could see it with se = strel('line',5,...
Only top voted, non community-wiki answers of a minimum length are eligible
|
{}
|
# Eigenvalues of $A$ compared to $A^H$
How are the eigenvalues of $A^H$ related to the eigenvalues of $A$?
Here $A^H$ is the conjugate transpose of $A$
-
1. Please edit your question so it is self-contained (and does not rely on reading the title to make sense). 2. 33 percent acceptance rate? You don't like the answers you are getting on this site? – Gerry Myerson May 2 '12 at 1:50 There's a bug with my account that is preventing me from upvoting answers! :( I don't know how to resolve that issue... – quantum May 2 '12 at 2:18 Thanks for the heads up. I wasn't sure what the "accept rate" meant. I've remedied that. – quantum May 2 '12 at 2:23 You can contact moderators via team+math@stackexchange.com with account problems. – Gerry Myerson May 2 '12 at 3:20 Yup I just did, thanks. – quantum May 2 '12 at 3:20
Hint: For a matrix $X$, the determinant $\det(X^H)=\overline{\det(X)}$, where the overline indicates complex conjugation.
Examine $\det((\lambda I-A)^H)$
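Following the hint, the characteristic polynomial of $A^H$ has conjugated coefficients, so the eigenvalues of $A^H$ are the complex conjugates of the eigenvalues of $A$. A small self-contained check for a 2×2 example (my sketch, using the quadratic formula on the characteristic polynomial):

```python
import cmath

def eig2(a, b, c, d):
    # eigenvalues of [[a, b], [c, d]] from the characteristic polynomial
    tr, det = a + d, a * d - b * c
    s = cmath.sqrt(tr * tr - 4 * det)
    return [(tr + s) / 2, (tr - s) / 2]

conj = lambda z: complex(z).conjugate()

a, b, c, d = 1 + 2j, 3 - 1j, 0.5j, 4
evals_A = eig2(a, b, c, d)
evals_AH = eig2(conj(a), conj(c), conj(b), conj(d))  # conjugate transpose swaps b and c

# every eigenvalue of A^H is the conjugate of some eigenvalue of A
for mu in evals_AH:
    assert min(abs(mu - conj(lam)) for lam in evals_A) < 1e-9
```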
|
{}
|
Perl Monk, Perl Meditation PerlMonks
### Re: Perl allows you to change the definition of false to make it true. (And vice versa.)
on Dec 14, 2012 at 14:26 UTC
This should please the comp.lang.c old timers:
use strict;
use warnings;
use 5.010;

BEGIN {
    &Internals::SvREADONLY(\undef, 0);
    ${\undef} = "Demons are coming out of my nose!";
};
I wonder if you can redefine other read-only values? I could not get the following to work.
BEGIN {
    &Internals::SvREADONLY(\'blue', 0);
    ${\'blue'} = 'turtle';
    &Internals::SvREADONLY(\5, 0);
    ${\5} = 5.000000000001;
};
(I have no idea why you would want to do this apart from JAPH or intentional sabotage)
Re^2: Perl allows you to change the definition of false to make it true. (And vice versa.)
by tobyink (Abbot) on Dec 14, 2012 at 15:14 UTC
AFAIK, only true, false and undef. I thought perhaps "0 but true" might work (as that has special handling - it doesn't warn if used in numeric context) but no joy.
perl -E'sub Monkey::do{say\$_,for@_,do{(\$monkey=[caller(0)]->[3])=~s{::}{ }and\$monkey}}"Monkey say"->Monkey::do'
|
{}
|
# Homework Help: Rearranging the Projectile Motion Base Quadratic Formula for Initial Velocity
1. May 23, 2010
### Pearlhammer
1. The problem statement, all variables and given/known data
What I'm supposed to do is to rearrange this formula, -16t^2+Vt+h, and solve it for V.
V= Initial Velocity t= time (throwing the ball in a parabolic arc) h= height
I know what the height is and it is 6ft. I also have the time which is 2.03 seconds.
How do I rearrange for V? I got an answer but I'm not sure whether it is correct.
2. Relevant equations
-16t^2+Vt+h
-16t^2+Vt+6 (if you plug in the height.)
3. The attempt at a solution
-16t^2+Vt+6 What I started with
-16t^2+Vt= -6 Subtracted 6 to the other side
Vt= -6+16t^2 Added 16t^2 to both sides.
V= (-6/t)+16t Divided both sides (every term) by t. This is my answer so far.
I have no idea if I'm correct or not and I have the feeling I'm not. Please help, I'm in US grade 9.
Last edited: May 23, 2010
2. May 23, 2010
### Mentallic
Is that meant to be the physics equation for motion:
$$h=Vt+1/2gt^2$$ ?
So then re-arranging you have $$-1/2gt^2-Vt+h=0$$
It doesn't seem to be consistent with what you started with. But if we assume you started with the correct equation, yes, you've done it right.
3. May 23, 2010
### Pearlhammer
Thank you. What I was talking about was just simply if I had the correct answer for:
Taking -16t^2+Vt+h
and rearranging for V to be by itself on one side of the equation.
4. May 24, 2010
### ktgster
-16t^2+Vt+h = 0
-16t^2+Vt = -h
Vt = -h + 16t^2
V = (-h + 16t^2)/t
You're right, OP, but my way is better
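With the numbers from the original post (h = 6 ft, t = 2.03 s) and assuming the ball lands at height 0, so that −16t² + Vt + h = 0, either rearranged form gives the same launch speed — a quick check (my sketch, not from the thread):

```python
h, t = 6.0, 2.03

V1 = -h / t + 16 * t           # Pearlhammer's form: V = -h/t + 16t
V2 = (-h + 16 * t**2) / t      # ktgster's form:     V = (-h + 16t^2)/t

print(V1)   # about 29.5 ft/s
```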
5. May 24, 2010
### Pearlhammer
Thank you very much guys!
6. May 24, 2010
### Mentallic
How is your way better? It's exactly the same...
|
{}
|
# Planar wave normally incident on dielectric boundary
1. Oct 15, 2012
### cfitzU2
PROBLEM: I am asked to consider a parallel-polarized planar wave with frequency ω that is normally incident on a dielectric boundary. The incident time-average power flux P_i = 100 W/m^2. The first medium is free space and the second has vacuum permeability but ε=4ε_0. We are also given that the medium is characterized by conductivity σ.
I am then asked to find the reflected time average power flux P_reflected in two cases
(i) σ/(ω*ε_0)<<1 and (ii) σ/(ω*ε_0)>>1
My problem is that, while performing the calculation I see no use for the quantity in question, namely, σ/(ω*ε_0)...
ATTEMPT: I am able to find the reflected time-average power flux using P_i = (1/2*η)(E_i)^2 = 100 W/m^2 with P_reflected = (1/2*η)(E_r)^2 and E_r = ρE_i where ρ=(η_2 - η)/(η+η_2).
This is especially simple after noting that η_2 = (1/2)η
The calculation leads to P_reflected = 100/9 W/m^2
I see no opportunity to consider how varying the conductivity changes that number... I realize that as σ gets big η_2 must go to zero... but we are given a fixed ε=4*ε_0
Am I missing something or is this a poorly posed (or trick) problem??
2. Oct 18, 2012
### note360
Note that the permittivity can be complex, but only the real part has been given. Remember that $\eta$ is described in terms of $\eta = \sqrt{\frac{\mu_0}{\epsilon}}$, where $\epsilon$ is actually the complex permittivity.
There is plenty of potential to use the $\frac{\sigma}{\omega \epsilon_{0}} \ll 1$ and $\frac{\sigma}{\omega \epsilon_{0}} \gg 1$ cases
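note360's point can be made concrete by inserting a complex permittivity ε_c = ε − jσ/ω into the impedance. A numerical sketch (my assumptions: f = 1 GHz is arbitrary, since only the ratio σ/(ωε₀) matters; e^{jωt} sign convention):

```python
import cmath
import math

eps0 = 8.854e-12           # F/m
mu0 = 4e-7 * math.pi       # H/m
omega = 2 * math.pi * 1e9  # assumed 1 GHz
eta1 = math.sqrt(mu0 / eps0)   # free-space impedance, ~377 ohm

def reflected_power(sigma, P_i=100.0):
    # complex permittivity of medium 2: eps_c = 4*eps0 - j*sigma/omega
    eps_c = 4 * eps0 - 1j * sigma / omega
    eta2 = cmath.sqrt(mu0 / eps_c)
    rho = (eta2 - eta1) / (eta2 + eta1)
    return P_i * abs(rho) ** 2

P_low = reflected_power(sigma=1e-6)   # sigma/(omega*eps0) << 1 -> ~100/9 W/m^2
P_high = reflected_power(sigma=1e6)   # sigma/(omega*eps0) >> 1 -> ~100 W/m^2
```

In the low-loss limit the fixed ε = 4ε₀ reflection of 100/9 W/m² is recovered; in the good-conductor limit |η₂| collapses toward zero, ρ → −1, and nearly all of the incident 100 W/m² is reflected.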
|
{}
|
# Find the equation of the plane passing through the point ( 1, -1, 2 ) having ( 2, 3, 2 ) as direction ratios of normal to the plane?
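A sketch of the standard computation (my addition; the source leaves the question unanswered): a plane through the point $(x_0,y_0,z_0)$ with normal direction ratios $(a,b,c)$ is $a(x-x_0)+b(y-y_0)+c(z-z_0)=0$.

```python
point = (1, -1, 2)
normal = (2, 3, 2)   # direction ratios of the normal

# a(x - x0) + b(y - y0) + c(z - z0) = 0  =>  a x + b y + c z = a x0 + b y0 + c z0
d = sum(n * p for n, p in zip(normal, point))
print(f"plane: {normal[0]}x + {normal[1]}y + {normal[2]}z = {d}")   # 2x + 3y + 2z = 3
```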
|
{}
|
# Which d-block cation has the maximum hydration enthalpy?
Among $$\ce{Mn^2+}$$, $$\ce{V^2+}$$, $$\ce{Ni^2+}$$ and $$\ce{Ti^2+}$$, which cation is having the highest hydration energy to form the aqua complex $$\ce{[M(H2O)6]^2+}$$?
I guess $$\ce{Mn^2+}$$ should be the answer as it has the most stable electronic configuration.
Here, $$\ce{Ni}$$ has the smallest atomic radius compared to all other options due to greater effective nuclear charge, so its ionic radius would also follow the same trend. Hence, $$\ce{Ni^2+}$$ would have the greater hydration enthalpy.
|
{}
|
Introduction
This paper presents several extensions of the Skip-gram model introduced by Mikolov et al. [8]. The Skip-gram model is an efficient method for learning high-quality vector representations of words from large amounts of unstructured text data. The word representations computed using this model are very interesting because the learned vectors explicitly encode many linguistic regularities and patterns. Somewhat surprisingly, many of these patterns can be represented as linear translations. For example, the result of the vector calculation vec(“Madrid”) - vec(“Spain”) + vec(“France”) is closer to vec(“Paris”) than to any other word vector. The authors of this paper show that subsampling of frequent words during training results in a significant speedup and improves the accuracy of the representations of less frequent words. In addition, a simplified variant of Noise Contrastive Estimation (NCE) [4] for training the Skip-gram model is presented that results in faster training and better vector representations for frequent words, compared to the more complex hierarchical softmax that was used in the prior work [8]. It is also shown that a non-obvious degree of language understanding can be obtained by using basic mathematical operations on the word vector representations. For example, vec(“Russia”) + vec(“river”) is close to vec(“Volga River”), and vec(“Germany”) + vec(“capital”) is close to vec(“Berlin”).
The Skip-gram Model
The training objective of the Skip-gram model is to find word representations that are useful for predicting the surrounding words in a sentence or a document. More formally, given a sequence of training words $w_1, w_2,..., w_T$ the objective of the Skip-gram model is to maximize the average log probability:
$\frac{1}{T} \sum_{t=1}^{T} \sum_{-c\leq j\leq c,\, j\neq 0} \log p(w_{t+j}|w_t)$
where $c$ is the size of the training context (which can be a function of the center word $w_t$) and $p(w_{t+j}|w_t)$ is defined using softmax function:
$p(w_O|w_I) = \frac{\exp\left({v'_{w_O}}^T v_{w_I}\right)}{\sum_{w=1}^{W} \exp\left({v'_{w}}^T v_{w_I}\right)}$
Here, $v_w$ and $v'_w$ are the “input” and “output” vector representations of $w$, and $W$ is the number of words in the vocabulary.
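For a toy vocabulary the full softmax above can be evaluated directly — a sketch of mine, not from the paper; for real vocabularies the denominator over all $W$ words is the bottleneck, which motivates the approximations described next:

```python
import math

def softmax_prob(v_in, out_vectors, o):
    """p(w_O | w_I): v_in is the 'input' vector of w_I, o indexes w_O."""
    scores = [sum(a * b for a, b in zip(v, v_in)) for v in out_vectors]
    exps = [math.exp(s) for s in scores]
    return exps[o] / sum(exps)

# toy vectors for a 3-word vocabulary (made-up values)
v_in = [0.2, -0.1, 0.4]
out_vectors = [[0.1, 0.3, -0.2], [0.5, -0.4, 0.1], [-0.3, 0.2, 0.6]]
probs = [softmax_prob(v_in, out_vectors, o) for o in range(3)]
```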
Hierarchical Softmax
Hierarchical Softmax is a computationally efficient approximation of the full softmax [12]. Hierarchical Softmax evaluates only about $\log_2(W)$ output nodes instead of evaluating $W$ nodes in the neural network to obtain the probability distribution.
The hierarchical softmax uses a binary tree representation of the output layer with the $W$ words as its leaves and, for each node, explicitly represents the relative probabilities of its child nodes. These define a random walk that assigns probabilities to words.
Let $n(w,j)$ be the $j^{th}$ node on the path from the root to $w$, and let $L(w)$ be the length of this path, so $n(w,1) = root$ and $n(w,L(w)) = w$. In addition, for any inner node $n$, let $ch(n)$ be an arbitrary fixed child of $n$ and let $[[x]]$ be 1 if $x$ is true and -1 otherwise. Then the hierarchical softmax defines $p(w_O|w_I )$ as follows:
$p(w|w_I) = \prod_{j=1}^{L(w)-1} \sigma ([[n(w,j+1)=ch(n(w,j))]]{v'_{n(w,j)}}^T v_{W_I})$
where
$\sigma (x)=\frac{1}{1+exp(-x)}$
In this paper, a binary Huffman tree is used as the structure for the hierarchical softmax because it assigns short codes to the frequent words which results in fast training. It has been observed before that grouping words together by their frequency works well as a very simple speedup technique for the neural network based language models [5,8].
Negative Sampling
Noise Contrastive Estimation (NCE) is an alternative to the hierarchical softmax. NCE posits that a good model should be able to differentiate data from noise by means of logistic regression. While NCE can be shown to approximately maximize the log probability of the softmax, the Skip-gram model is only concerned with learning high-quality vector representations, so we are free to simplify NCE as long as the vector representations retain their quality. Negative sampling (NEG) is defined by the objective:
$log \sigma ({v'_{W_O}}^T v_{W_I})+\sum_{i=1}^{k} \mathbb{E}_{w_i\sim P_n(w)}[log \sigma ({-v'_{W_i}}^T v_{W_I})]$
The main difference between Negative sampling and NCE is that NCE needs both samples and the numerical probabilities of the noise distribution, while Negative sampling uses only samples. And while NCE approximately maximizes the log probability of the softmax, this property is not important for our application.
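A literal transcription of the NEG objective for one (input, output) pair with $k$ drawn negative vectors — my sketch; the vector values are placeholders and the noise distribution $P_n(w)$ is assumed to have already produced the negative samples:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neg_objective(v_in, v_out_pos, v_out_negs):
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    obj = math.log(sigmoid(dot(v_out_pos, v_in)))      # pull the true pair together
    for v_neg in v_out_negs:                           # push the k noise pairs apart
        obj += math.log(sigmoid(-dot(v_neg, v_in)))
    return obj

v_in = [0.3, -0.2, 0.5]
good = neg_objective(v_in, [0.3, -0.2, 0.5], [[-0.3, 0.2, -0.5]])   # aligned positive
bad = neg_objective(v_in, [-0.3, 0.2, -0.5], [[0.3, -0.2, 0.5]])    # anti-aligned positive
```

The objective is a sum of log-probabilities, so it is always negative, and it is larger when the true output vector aligns with the input vector while the noise vectors do not.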
|
{}
|
# A quantity measuring the separability of Banach spaces
Let $$X$$ be a Banach space. It is natural for us to introduce a quantity measuring the separability of sets as follows: for a subset $$A$$ of $$X$$, we set
$$\textrm{sep}(A)=\inf\{\epsilon>0: A\subseteq K+\epsilon B_{X}$$ for some countable subset $$K$$ of $$X\}$$.
Clearly, $$A$$ is separable if and only if $$\textrm{sep}(A)=0$$.
It is elementary that a Banach space $$X$$ is separable if $$X^{*}$$ is separable. My question is about a quantitative version of this result.
Question. Does there exist a universal constant $$C$$ such that $$\textrm{sep}(B_{X})\leq C\cdot \textrm{sep}(B_{X^{*}}) \, ?$$
For the unit ball $$B_X$$ of the Banach space there are only two possibilities:
sep$$(B_X)= 1$$, if $$B_X$$ is not separable, and sep$$(B_X)=0$$ if $$B_X$$ is separable. Indeed, if sep$$(B_X)<1$$ there are $$\varepsilon <1$$ and a countable subset $$K\subseteq B_X$$ with $$B_X\subseteq K + \varepsilon B_X$$. But this can be iterated, i.e., $$B_X\subseteq K +\varepsilon (K+\varepsilon B_X) \subseteq K_1+\varepsilon^2 B_X$$ where $$K_1=K+\varepsilon K$$ is again countable. Inductively, this implies $$B_X\subseteq K_n +\varepsilon^n B_X$$ for a countable set $$K_n$$ and hence sep$$(B_X)=0$$.
Therefore, $$C=1$$ satisfies the desired inequality sep$$(B_X)\le$$ sep$$(B_{X^*})$$.
|
{}
|
# If 45 mm is expressed in cm then the answer is 4.5 cm. (a) True (b) False
Hint: In this question we have to tell whether the conversion of 45 mm into cm is 4.5 cm or not. Use the basic unit-system conversion rule that 1 cm = 10 mm. Then use the unitary method to convert 45 mm into cm using this conversion rule.
Complete step-by-step answer:
As we know 1mm = $\dfrac{1}{{1000}}$m
And 1 m = 100 cm, so use this property in above equation we have,
Therefore 1 mm = $\left( {\dfrac{1}{{1000}}} \right)\left( {100{\text{ cm}}} \right)$
Now cancel out two zero from numerator and denominator we have,
$\Rightarrow 1{\text{ mm = }}\dfrac{1}{{10}}{\text{ cm}}$
Now we have to find out the value of 45 mm.
So multiply the above equation by 45 then we have,
$\Rightarrow 45{\text{ mm = 45}} \times \left( {\dfrac{1}{{10}}{\text{ cm}}} \right)$
Now as we know $\dfrac{{45}}{{10}} = 4.5$ so use this property in above equation we have,
$\Rightarrow 45{\text{ mm = 4}}{\text{.5 cm}}$
Hence the given problem statement is true.
Hence option (A) is correct.
Note: Whenever we face such types of problems the key concept is to have a basic understanding of the unit system for length measurement. Using this system every unit for measurement of length can be converted into another. This concept will help to get on the right track to get the answer.
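The unitary-method conversion above amounts to a single division — a trivial sketch:

```python
MM_PER_CM = 10            # 1 cm = 10 mm

def mm_to_cm(mm):
    return mm / MM_PER_CM

result = mm_to_cm(45)     # 4.5
```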
|
{}
|
## Algebra: A Combined Approach (4th Edition)
$6$
Step 1: $(5x)^{0}+5x^{0}$. Step 2: Using the zero exponent rule, $(5x)^{0}=1$ and $x^{0}=1$. Step 3: Therefore the expression becomes $1+5(1)$. Step 4: $1+5(1)=6$.
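The zero-exponent rule is easy to spot-check numerically (my sketch; any nonzero x works, since $0^0$ is left undefined):

```python
def expression(x):
    # (5x)^0 + 5x^0
    return (5 * x) ** 0 + 5 * x ** 0

# the same value, 6, for every nonzero x
values = {expression(x) for x in (1, 2, -3, 0.5)}
```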
|
{}
|
## Strangeloop
I will be speaking at StrangeLoop this September. Check out the website for up-to-the-minute details. My talk is titled The Eight Fallacies of Distributed Computing, based on Peter Deutsch and James Gosling's earlier discussion of these fallacies.
### Abstract
The wild growth of the web and mobile has driven the need for scalability in server architectures. Scalability is generally achieved by building distributed solutions, yet we developers still make some pretty simple mistakes due to what are often common misconceptions. Many of these misconceptions are summarized by the eight fallacies of distributed computing.
I first heard about these fallacies in an off-hand remark by James Gosling at JavaOne in 2000. Gosling attributed them to Peter Deutsch. Although the eight fallacies are not widely known, as a group they summarize many of the dangers faced by distributed application developers.
The eight fallacies are listed as simple statements that express important concepts in distributed computing:
Fallacy 1: The Network Is Reliable.
Fallacy 2: Latency Is Zero.
Fallacy 3: Bandwidth Is Infinite.
Fallacy 4: The Network Is Secure.
Fallacy 5: Topology Doesn't Change.
Fallacy 6: There Is One Administrator.
Fallacy 7: Transport Cost Is Zero.
Fallacy 8: The Network Is Homogeneous.
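To make the first two fallacies concrete, here is a hedged sketch (host, port, and payload are hypothetical) of the defensive pattern they force on client code: every remote call carries a timeout and a bounded retry loop, because the network is neither reliable nor latency-free.

```python
import socket

def fetch_with_retry(host, port, payload, retries=3, timeout=2.0):
    """Send payload and return the first response chunk, retrying on failure."""
    for attempt in range(retries):
        try:
            # Fallacy 2: latency is not zero, so bound the connect time.
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.settimeout(timeout)  # reads can stall as well
                s.sendall(payload)
                return s.recv(4096)
        except OSError:
            # Fallacy 1: the network is not reliable; retry a bounded
            # number of times, then surface the failure to the caller.
            if attempt == retries - 1:
                raise
```

A real client would add backoff and idempotency checks; the point here is only that the failure path is explicit rather than assumed away.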
In this talk I will expand on each fallacy by providing context, examples, and a sprinkling of real-world experiences from my time at NeXT, PRI, TIBCO and Fog Creek. Hopefully most of the examples will be humorous, but others may seem tragic. My ultimate goal is to provide you with new ways to visualize possible problems in your distributed architectures, and to motivate you to re-evaluate your designs while maintaining a healthy fear of back-hoes.
If you are going to be in St. Louis in September, please stop by to say hello.
# Pure Rotational Motion!
The moment of inertia of a body about a given axis is $$1.2~kg~m^{2}$$. Initially the body is at rest. In order to produce a rotational kinetic energy of $$1500~J$$, an angular acceleration of $$25~rad~s^{-2}$$ must be applied about the axis of rotation for a duration of how many seconds?
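A worked check under the stated assumptions (start from rest, constant angular acceleration): from $$KE = \tfrac{1}{2}I\omega^{2}$$ we get $$\omega = \sqrt{2 \cdot 1500 / 1.2} = 50~rad~s^{-1}$$, and with $$\omega = \alpha t$$, $$t = 50/25 = 2~s$$. In code:

```python
import math

I = 1.2        # moment of inertia, kg m^2
KE = 1500.0    # target rotational kinetic energy, J
alpha = 25.0   # angular acceleration, rad s^-2

omega = math.sqrt(2 * KE / I)  # final angular speed, rad/s (50)
t = omega / alpha              # duration, s (2)
print(t)
```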
Ocean Sci., 15, 1327–1340, 2019
https://doi.org/10.5194/os-15-1327-2019
Research article 07 Oct 2019
# The Pelagic In situ Observation System (PELAGIOS) to reveal biodiversity, behavior, and ecology of elusive oceanic fauna
Henk-Jan Hoving1, Svenja Christiansen2, Eduard Fabrizius1, Helena Hauss1, Rainer Kiko1, Peter Linke1, Philipp Neitzel1, Uwe Piatkowski1, and Arne Körtzinger1,3
• 1GEOMAR, Helmholtz Centre for Ocean Research Kiel, Düsternbrooker Weg 20, 24105 Kiel, Germany
• 2University of Oslo, Blindernveien 31, 0371 Oslo, Norway
• 3Christian Albrecht University Kiel, Christian-Albrechts-Platz 4, 24118 Kiel, Germany
Correspondence: Henk-Jan Hoving (hhoving@geomar.de)
Abstract
There is a need for cost-efficient tools to explore deep-ocean ecosystems to collect baseline biological observations on pelagic fauna (zooplankton and nekton) and establish the vertical ecological zonation in the deep sea. The Pelagic In situ Observation System (PELAGIOS) is a 3000 m rated slowly (0.5 m s−1) towed camera system with LED illumination, an integrated oceanographic sensor set (CTD-O2) and telemetry allowing for online data acquisition and video inspection (low definition). The high-definition video is stored on the camera and later annotated using software and related to concomitantly recorded environmental data. The PELAGIOS is particularly suitable for open-ocean observations of gelatinous fauna, which is notoriously under-sampled by nets and/or destroyed by fixatives. In addition to counts, diversity, and distribution data as a function of depth and environmental conditions (T, S, O2), in situ observations of behavior, orientation, and species interactions are collected. Here, we present an overview of the technical setup of the PELAGIOS as well as example observations and analyses from the eastern tropical North Atlantic. Comparisons to data from the Multiple Opening/Closing Net and Environmental Sensing System (MOCNESS) net sampling and data from the Underwater Vision Profiler (UVP) are provided and discussed.
1 Introduction
The open-ocean pelagic zones include the largest, yet least explored habitats on the planet (Robison, 2004; Webb et al., 2010; Ramirez-Llodra et al., 2010). Since the first oceanographic expeditions, oceanic communities of macrozooplankton and micronekton have been sampled using nets (Wiebe and Benfield, 2003). Such sampling has revealed a community typically consisting of crustaceans, cephalopods, fishes, and some sturdy and commonly found gelatinous fauna (Benfield et al., 1996). Underwater observations in the open ocean via scuba diving (Hamner et al., 1975) and later via submersibles (Robison, 1983; Robison and Wishner, 1990) and in situ camera systems (Picheral et al., 2010) revealed that a variety of organisms are much more abundant in the open ocean than previously estimated from net sampling (Robison, 2004). This was particularly true for fragile gelatinous zooplankton, a diverse taxonomic group, including the ctenophores and cnidarians (Remsen et al., 2004; Haddock, 2004) as well as polychaetes (Christiansen et al., 2018), Rhizaria (Biard et al., 2016), and pelagic tunicates (Remsen et al., 2004; Neitzel, 2017), which often are too delicate to be quantified using nets as they are damaged beyond identification, or they are easily destroyed by the use of common fixatives.
Underwater (in situ) observations in the pelagic ocean not only revealed a previously unknown community, they also allowed the collection of fine-scale distribution patterns of plankton in relation to biotic and abiotic factors (e.g., Haslob et al., 2009; Möller et al., 2013; Hauss et al., 2016) as well as information on posture, interactions, and behavior (Hamner and Robison, 1992; Robison, 2004; Robison, 1999; Hoving et al., 2013, 2016). Submersibles have proven to be valuable instruments to study deep-sea pelagic biology (e.g., Robison, 1987; Bush et al., 2007; Hoving et al., 2017). Using video transecting methodology, pelagic remotely operated vehicle (ROV) surveys have been applied to study inter- and intra-annual variation in mesopelagic zooplankton communities (Robison et al., 1998; Hull et al., 2011) and to explore deep pelagic communities in different oceans (Youngbluth et al., 2008; Hosia et al., 2017; Robison et al., 2010). However, due to high costs as well as technological and logistical challenges, regular submersible operations are still restricted to very few institutes and geographical locations. Hence, there is a need for the development of additional more cost-effective methodologies to explore and document deep-sea communities via in situ observations.
In the last decades, a variety of optical instruments has been developed to image and quantify plankton in situ (Benfield et al., 2007). The factors that typically differentiate the available plankton imaging technologies are the size fraction of the observed organisms, illumination type, resolution of collected images or video, depth rating, deployment mode (e.g., autonomous, towed, CTD-mounted), and towing speed. Examples of instruments include the autonomous Underwater Vision Profiler (UVP5; Picheral et al., 2010), the Lightframe On-sight Key species Investigations (LOKI; Schulz et al., 2010) and towed plankton recorders (In Situ Ichthyoplankton Imaging System – ISIIS; Cowen and Guigand, 2008; for a review, see Benfield et al., 2007). These instruments can be deployed from ships of opportunity and collect detailed information on fine-scale distribution and diversity patterns of particles and plankton. The data reveal biological patterns on a global scale (Kiko et al., 2017) and of previously underappreciated plankton species (Biard et al., 2016). More recently, optical (and acoustic) instruments have been combined with autonomous gliders, rapidly increasing spatial resolution (Ohman et al., 2019).
Various towed camera platforms have been developed that can obtain video transect observations above the deep sea floor. Examples are the TowCam (WHOI), the DTIS (Deep Towed Imaging system, NIWA), the WASP vehicle (Wide Angle Seafloor Photography), OFOS (Ocean Floor Observation System, GEOMAR), and the more recent version OFOBS (Ocean Floor Observation and Bathymetry System; Purser et al., 2018). All these instruments are used for video or photo transects of the seafloor, with a downward-looking camera and typically a set of lasers for size reference. However, published descriptions of optical systems, other than ROVs and submersibles, that visualize macrozooplankton and micronekton (> 1 cm) in the water column undisturbed by a filtering device or cuvette are, to the best of our knowledge, restricted to one (Madin et al., 2006). The Large Area Plankton Imaging System (LAPIS) is the only towed system that was developed for the documentation of larger organisms in the water column (Madin et al., 2006). LAPIS visualizes organisms between 1 and 100 cm; it combines a high-resolution color digital charge-coupled device (CCD) camera using progressive scanning interline-transfer technology with flashing strobes, and it is towed at 1 knot via a fiber optic wire. LAPIS collects still images, illumination is sideways, and organisms have to enter an illuminated volume to be visualized. Deployments in the Southern Ocean enabled the reconstruction of depth distributions of the pelagic fauna (salps, medusae) but also allowed some behavior observations, e.g., the molting of krill (Madin et al., 2006). To our knowledge, no further publications of LAPIS data are available.
In addition to LAPIS, we wanted to develop a towed pelagic observation system that collects video during horizontal transects (with forward projected light), in a similar way to pelagic ROV video transects, in order to document behavior in addition to diversity, species-specific distribution, and abundance data of pelagic fauna.
The functional requirements for the instrument were (1) the ability to visualize organisms > 1 cm in waters down to 1000 m with high-definition video, (2) the possibility of deploying the instrument from ships of opportunity in an autonomous or transmitting mode, (3) for it to be lightweight and practical so it can be deployed easily and safely with two deck persons and a winch operator, (4) to enable the correlation of observations with environmental parameters (S, T, O2) and other sensor data, and (5) to make observations comparable to ROV video transects in other reference areas. We present a description of the Pelagic In situ Observation System (PELAGIOS), examples of the kind of biological information it may gather, as well as biological discoveries that have resulted from deployments on research cruises in the eastern tropical North Atlantic.
2 Pelagic In Situ Observation System
## 2.1 Technical specifications
The PELAGIOS consists of an aluminum frame (length: 2 m) that carries the oceanographic equipment (Fig. 1). White light LED arrays (four LEDs produced at GEOMAR, two LED arrays of the type LightSphere of Deep-Sea Power and Light ©), which illuminate the water in front of the system, are mounted on an aluminum ring (diameter: 1.2 m). Power is provided by two lithium batteries (24 V; 32 Ah) in a deep-sea housing. High-definition video is collected continuously by a forward-viewing deep-sea camera (1Cam Alpha, SubC Imaging ©), which is mounted in the center of the ring. We used the maximum frame rate of 50 frames s−1, but a lower frame rate is possible. A CTD (SBE 19 SeaCAT, Sea-Bird Scientific ©) with an oxygen sensor (SBE 43, Sea-Bird Scientific ©) records environmental data. A deep-sea telemetry unit (DST-6, Sea and Sun Technology ©; Linke et al., 2015) transmits video and CTD data to a deck unit on board, allowing a low-resolution preview (600×480 lines) of the high-definition video that is stored locally on the SD card (256 GB) of the camera. The power from the batteries is distributed to the LEDs via the camera. The 1Cam Alpha camera is programmable in such a way that there is a delay between providing power to the camera (by connecting to the battery) and the start of recording and switching on the LEDs. This enables the illumination to be turned on only underwater and prevents overheating of the LED arrays while out of the water. During a cruise with the German research vessel Maria S. Merian (MSM 49) we mounted a steel scale bar in front of the camera at a distance of 1 m. The distance between the centers of the white marks on the bar measured 5 cm.
Figure 1. (a) The Pelagic In Situ Observation System (PELAGIOS) with battery (1), CTD (2), telemetry (3), camera (4), LEDs (5), and depressor (6) during deployment from R/V Poseidon in February 2018 (photo: Karen Hissmann).
## 2.2 Video transects
The PELAGIOS is towed horizontally at specified depths of 20–3000 m. The standard towing speed over ground is 1 knot (0.51 m s−1), and the speed is monitored via the ship's navigational system. A video transect at a particular depth can take as long as desired and is terminated by lowering the PELAGIOS to the next depth. Maximum deployment time with full batteries is approximately 6 h. The typical transect duration is 10–30 min. The depth of the PELAGIOS can be monitored via online CTD data. Figure 2 shows the trajectories of the PELAGIOS at different depths in the water column during a video transect down to 900 m. The deployment from deck into the water and the reverse is fast and typically takes only about 5 min (see video clip in the Supplement). It is possible to deploy PELAGIOS in “blind mode”, where only the depth is monitored using an online depth sensor (e.g., Hydrobios ©) and the video (without transmitted preview) is recorded locally on the camera. The system can be operated completely blind (i.e., with no communication between deck and underwater unit) where the target depth is estimated from the length and angle of the wire put out, and the actual depth is recorded on the system by CTD or an offline pressure sensor, e.g., SBE Microcat ©.
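The completely blind operation described above relies on a simple wire-geometry estimate; a minimal sketch (function and argument names are illustrative, and it assumes a straight wire, which drag on the real system will curve):

```python
import math

def estimated_depth(wire_out_m, wire_angle_deg):
    """Rough target depth from wire paid out and the wire angle from vertical."""
    return wire_out_m * math.cos(math.radians(wire_angle_deg))

# e.g. 1000 m of wire at 60 degrees from vertical -> roughly 500 m depth
```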
Figure 2. Stairwise trajectory of the PELAGIOS through the water column to the desired depths, with concomitantly measured environmental data.
## 2.3 Video analysis and curation
After a deployment, the video (consisting of individual clips of 1 h) is downloaded from the camera. Synchronization between video and CTD data is done by setting all instruments to UTC prior to deployment, which allows the data and video to be linked during analysis. The video is annotated using the Video Annotation and Reference System (VARS) developed at the Monterey Bay Aquarium Research Institute (Schlining and Jacobsen, 2006). This annotation program allows for frame grabs from the video, including time code. A knowledge base allows for the insertion of taxonomic names and hierarchy, and a query allows searching the created database. While many kinds of annotation software are available (for a review, see Gomes-Pereira et al., 2016), we consider VARS to be the most suitable for our purposes since it combines the features of high-resolution video playback with a user-friendly annotation interface and the automatic creation of an annotation database which can easily be accessed through the various search functions and tools of the query. The taxonomic hierarchy and phylogenetic trees in the database are directly applicable to our video transects. Since this software was developed by the Monterey Bay Aquarium Research Institute (MBARI), which also maintains the most extensive databases of deep pelagic observations, it makes communication about and comparison of observations and data practical. Videos are transported on hard drives after an expedition and are transferred for long-term storage on servers maintained by the central data and computing center at GEOMAR.
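The UTC-based linking of video annotations to CTD records can be sketched as follows (the tuple layout is a hypothetical simplification of what VARS and the CTD logs actually store): given CTD records sorted by timestamp, each annotation is paired with the nearest record in time.

```python
import bisect

def nearest_ctd_record(annotation_ts, ctd_records):
    """Return the CTD record whose timestamp is closest to the annotation.

    ctd_records: list of (timestamp_seconds, record) sorted by timestamp.
    """
    times = [t for t, _ in ctd_records]
    i = bisect.bisect_left(times, annotation_ts)
    if i == 0:
        return ctd_records[0]
    if i == len(times):
        return ctd_records[-1]
    before, after = ctd_records[i - 1], ctd_records[i]
    # Pick whichever neighbor is closer in time (ties go to the earlier one).
    if annotation_ts - before[0] <= after[0] - annotation_ts:
        return before
    return after
```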
## 2.4 Sample volume
To estimate the sample volume of the PELAGIOS we compared video counts from the PELAGIOS with concomitantly obtained abundance data from an Underwater Vision Profiler (UVP5; Picheral et al., 2010). Four deployments from the R/V Maria S. Merian cruise MSM 49 (28 November–21 December 2015, Las Palmas de Gran Canaria, Spain – Mindelo, Cabo Verde) were used for the comparison where a UVP5 was mounted underneath the PELAGIOS. The UVP5 takes between 6 and 11 images per second of a defined volume (1.03 L) and thus enables a quantitative assessment of particle and zooplankton abundances. Objects with an equivalent spherical diameter (ESD) > 0.5 mm are saved as images, which can be classified into different zooplankton, phytoplankton, and particle categories. For the comparison between PELAGIOS and the UVP5, we used the pelagic polychaete Poeobius sp., as (1) this organism could be observed well on both instruments, (2) Poeobius sp. is not an active swimmer and lacks an escape response and (3) it was locally very abundant, thus providing a good basis for the direct instrument comparison.
The UVP5 images were classified as described in Christiansen et al. (2018). Poeobius sp. abundance (ind. (individuals) m−3) was calculated for 20 s time bins, and all bins of one distinct depth step (with durations of 10–11 min at depths $\le 50$ m, 19–22 min at depths $< 350$ m, and 9–11 min at depths $\ge 350$ m) were averaged. These mean abundances were compared to the PELAGIOS counts (ind. s−1) of the same depth step. A linear model of the PELAGIOS counts as a function of UVP5 abundance provided a highly significant relationship (linear regression: p < 0.001; adjusted $r^2$ = 0.69; Fig. 3). The linear regression slope b (0.116 m3 s−1; standard error 0.01 m3 s−1) between the PELAGIOS-based count ($C_{\mathrm{PELAGIOS}}$, ind. s−1) and mean UVP-based abundance ($A_{\mathrm{UVP}}$, ind. m−3),
$C_{\mathrm{PELAGIOS}} = b \cdot A_{\mathrm{UVP}} + a,\qquad\text{(1)}$
was used to estimate the volume recorded per time in cubic meters per second (b) and the field of view in square meters ($b/\text{towing speed}$) recorded by PELAGIOS.
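A quick numerical cross-check, using the slope and towing speed reported here:

```python
b = 0.116            # regression slope: volume imaged per second, m^3 s^-1
towing_speed = 0.51  # 1 knot, m s^-1

fov = b / towing_speed  # effective cross-sectional field of view, m^2
print(round(fov, 2))    # 0.23
```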
Figure 3. PELAGIOS video counts of Poeobius sp. as a function of UVP5-derived abundance on the same transects at two stations on cruise MSM 49 on R/V Maria S. Merian.
From this calculation it can be derived that PELAGIOS recorded an average volume of 0.116 m3 s−1 at a towing speed of 1 knot (=0.51 m s−1). A cross-sectional view field of approximately 0.23 m2 of PELAGIOS can be expected, compared to a theoretical field of view (FOV) of 0.45 m2 based upon the maximum image dimensions (0.80 m × 0.56 m) at 1 m distance from the lens. We can now convert the number of individuals observed by PELAGIOS per unit time into individuals per unit volume. To do so we take the number of individuals in one transect, divide it by the transect duration to obtain individuals per minute, and divide by 60 to obtain individuals per second. From the UVP–PELAGIOS comparison we derived a conversion factor of 6 to convert the number of individuals per second into the number of individuals per cubic meter. This value is then multiplied by 1000 to scale from cubic meters to 1000 m3.
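The count-to-abundance chain just described can be condensed into a small helper (the function name is ours; the factor of 6 is the UVP5-derived conversion reported above). Applied to the example transects reported below, it approximately reproduces the stated densities:

```python
def abundance_per_1000m3(n_individuals, transect_minutes, factor=6.0):
    """Convert a PELAGIOS transect count into individuals per 1000 m^3.

    factor: ind s^-1 -> ind m^-3 conversion from the UVP5 comparison.
    """
    ind_per_second = n_individuals / (transect_minutes * 60.0)
    return ind_per_second * factor * 1000.0

# 226 individuals in 21.2 min -> about 1066 ind. per 1000 m^3,
# close to the 1058 ind. 1000 m^-3 reported for the daytime 400 m transect.
```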
## 2.5 Abundance, size, and diversity
To provide an example of the type of data that can be obtained with the PELAGIOS, we report here on day and night video transects down to 950 m in the eastern tropical North Atlantic, on the northwestern slope of Senghor Seamount (17°14.2′ N, 22°00.7′ W; bottom depth of approximately 1000 m). The results from the video annotations show that faunal abundances depend on the depth of deployment and the time of day. During a transect of 21.2 min at 400 m, 226 individuals (1058 ind. 1000 m−3) were encountered during the day (the three dominant groups were fishes, euphausiids, and appendicularians) compared to 196 individuals (591 ind. 1000 m−3; transect length 33.1 min) during the night (the four dominant organism groups were fishes, chaetognaths, medusae, and ctenophores). Overall abundance of chaetognaths, decapods and mysids, and fishes was higher during the night. The peak of euphausiid abundance at 400 m shifts to the surface at night (Fig. 4). The higher abundance of decapods and mysids and chaetognaths at night may indicate lateral migration or daytime avoidance. The vertical migration that was observed for fishes and crustaceans was much less clear for the gelatinous zooplankton groups including medusae and appendicularians (Fig. 4). Ctenophores and siphonophores were abundant near the surface at night (but we did not perform transects at 20 m during the day), and the thaliaceans migrated vertically and were most abundant in shallow waters at night. The total number of annotated organisms for the daytime transects (total transect time 187 min; max depth 950 m) was 835, compared to 1865 organisms for the longer nighttime transects (total transect time 292 min; max depth 900 m). The high abundance of gelatinous zooplankton (128 annotated organisms; 899 ind. 1000 m−3) belonging to the three dominant groups of Ctenophora (53), Siphonophorae (21), and Thaliacea (44) in the topmost layer (20 m) at night is remarkable.
Below this layer, the depth profile shows a minimum in numbers of annotated individuals at 100, 200, and 300 m water depth with a smaller peak of 57 gelatinous organisms (299 ind. 1000 m−3) at 450 m. Compared to this, the depth distribution during the daytime shows a more regular, almost Gaussian shape with a maximum of 31 (254 ind. 1000 m−3) and 54 (253 ind. 1000 m−3) gelatinous organisms at 200 and 400 m water depth, respectively.
Table 1. Taxonomic groups encountered during pelagic video transects with PELAGIOS in the eastern tropical Atlantic.
The faunal observations made by PELAGIOS include a wide variety of taxa (Table 1; Figs. 5 and 6), spanning in size from radiolarians to large siphonophores (such as Praya dubia and Apolemia). Chaetognaths were the dominant faunal group. Typical examples of fragile organisms that were not present or identifiable in the Multiple Opening/Closing Net and Environmental Sensing System (MOCNESS) samples from the same cruise (Christiansen et al., 2016; Lüskow et al., 2019) but which can be efficiently observed by PELAGIOS include large larvaceans (probably Bathochordaeus and Mesochordaeus), pelagic polychaetes (Poeobius, Tomopteris) (Fig. 5), and smaller siphonophores (such as Bargmannia and Lilyopsis; the latter can be easily distinguished by their fluorescent body parts) (Fig. 5). Observed medusae belonged to the genera Periphylla, Halitrephes, Haliscera, Crossota, Colobonema, Solmissus, and Solmundella (Fig. 5). Venus girdles (Cestum spp.), Beroe, cydippids, and ctenophores such as Thalassocalyce inconstans, Leucothea, and Bathocyroe (see Harbison et al., 1978, for differences in robustness among ctenophores) were encountered (Fig. 5). Cephalopod observations were rare, but small individual cranchiid squids were observed in the upper 50 m at night. Mastigoteuthid squids were observed with their mantle in a vertical orientation and with extended tentacles in waters below 500 m. The large squid Taningia danae was observed during transects. Other pelagic molluscs include the nudibranch Phylliroe and different pteropod species. Observed fishes include snipe eels, hatchet fishes, lantern fishes, and Cyclothone. Fishes are among the dominant organisms encountered during PELAGIOS transects, but it is often impossible to identify fishes to species level from the video.
Figure 4. Day and night comparison of faunal observations obtained by PELAGIOS at the northwest flank of Senghor Seamount: (a) fishes, krill, chaetognaths, and decapods; (b) gelatinous zooplankton groups.
## 2.6 Individual behavior
In situ video from the PELAGIOS may provide direct observations of individual behavior. Decapod shrimps were observed to release a blue or green bioluminescent cloud after performing their tail flip as part of the escape response (Fig. 6d). Potential reproductive behavior was observed for two specimens of krill which were seen in what could be a mating position, and salps were observed to reproduce asexually by the release of salp oozoids (Fig. 6c). Feeding behaviors were observed for large prayid siphonophores and calycophoran siphonophores which had their tentacles extended. Poeobius worms were observed with their mucus web deployed to capture particulate matter (Christiansen et al., 2018) (Fig. 6a). Narcomedusae of the genus Solmissus were observed with their tentacles stretched up and down, which is a feeding posture. In situ observations by the PELAGIOS also showed the natural body position of pelagic organisms. Snipe eels were observed in a vertical position with their heads up, while dragonfish and some myctophids were observed in an oblique body position with their head down (Fig. 6b).
Figure 5. Examples of organisms encountered during pelagic video transects with PELAGIOS during cruise MSM49 in the eastern tropical Atlantic. (a) A medusa Halitrephes sp. (b) A siphonophore Praya dubia. (c) A tomopterid worm. (d) The ctenophore Thalassocalyce inconstans. (e) The medusa Solmissus. (f) The ctenophore Cestum. The distance between the white bands on the horizontal bar at the bottom of the images is 5 cm.
3 Discussion
PELAGIOS is a pelagic ocean exploration tool that fills a gap in the array of observation instruments that exist in biological oceanography, as transparent and fragile organisms (> 1 cm) are up to now under-sampled by both net-based and optical systems. The PELAGIOS video transects are comparable to ROV video transects and can be obtained in a cost-effective way. The resulting data can provide information on diversity, distribution, and abundance of large (> 1 cm), fragile zooplankton and also of some nekton and rare species. Due to the collection of HD color video, the behavior, color, and position of larger gelatinous planktonic organisms in the water column are documented, which may provide additional ecological information that cannot be obtained by nets or existing plankton recorders. The PELAGIOS system complements gear that is suitable for stratified observations and collections of robust mesozooplankton and micronekton (MOCNESS, Hydrobios Multinet, and others) and optical systems that are suitable for high-resolution sampling of small and abundant organisms (e.g., VPR, UVP5) (e.g., Benfield et al., 2007; Picheral et al., 2010; Biard et al., 2016). The instrument can be deployed with a small team and from vessels of opportunity, in transmission or blind mode. Due to the relatively simple design we experienced limited technical failures, which makes the PELAGIOS a reliable tool for oceanic expeditions. While thus far the system has only been deployed in the open ocean, it can be used in any pelagic environment with reasonably clear water and good visibility. The data obtained after annotation of the video can be uploaded into databases (e.g., the large database PANGAEA) after publication of the results, allowing for efficient data sharing and curation.
Figure 6. Examples of behaviors observed during pelagic video transects with the PELAGIOS. (a) Poeobius sp. in a feeding position with a mucus web (left side of the animal); (b) a dragonfish of the family Stomiidae in a vertical position; (c) a salp releasing a blastozoid chain; (d) a crustacean releasing two bioluminescent clouds while performing an escape response. The distance between the white bands on the horizontal bar at the bottom of the images is 5 cm.
The clear distribution patterns that we observed in some animal groups (fish, crustaceans, and some gelatinous fauna) after annotating the video transects confirm that established biological processes such as diurnal vertical migration (e.g., Barham, 1963) can be detected in PELAGIOS data and that the distribution data that we observe for the organisms encountered are representative of the natural situation. It has to be noted, though, that while the observed distribution patterns should be representative, care must be taken with regard to abundance estimates of especially actively and fast-swimming organisms. Some fish and crustaceans react to the presence of underwater instrumentation (e.g., Stoner et al., 2008). Gear avoidance (e.g., Kaartvedt et al., 2012) can lead to an underestimation of abundance, whereas attraction to the camera lights (e.g., Utne-Palm et al., 2018; Wiebe et al., 2003) would result in an overestimation. The large bioluminescent squid Taningia danae seemed to be attracted to the lights of the PELAGIOS, and attraction behavior of this species has been described in other publications (Kubodera et al., 2007). Compared to day transects, the high abundance of gelatinous organisms close to the surface during the night is likely to be partly an effect of the higher contrast in the videos of the night transects and better visibility of the gelatinous fauna than during day transects. Therefore, we did not perform transects shallower than 50 m during the day. Many of the observed gelatinous fauna might be present as well at shallow depths during daylight but are not detectable in “blue-water conditions”. The difference between the taxa encountered during the day and night transects may also be due to the trapping of organisms at the slopes of Senghor Seamount during the day (Isaacs and Schwartzlose, 1965; Genin, 2004) or by other causes for patchiness (Haury et al., 2000).
However, from a methodological point of view it should be noted that while the ship's towing speed is typically 1 knot, the current speeds at the survey depths may differ, also between day and night. Currents may result in a larger or smaller sampled volume of water and hence in variation in the amount of plankton visualized. Since abundance estimation relies on an accurate determination of the image volume, it is our aim to better constrain the image area technically in future developments (it is currently derived from UVP quantitative observations) and to include flowmeter measurements.
After annotation, the PELAGIOS video transects may be used to reconstruct species-specific distribution patterns, which can be related to environmental conditions (Neitzel, 2017; Hoving et al., 2019a). Such data are also valuable for overlap comparison in distribution patterns of consumers and food items (see, e.g., Haslob et al., 2009; Möller et al., 2012). The data can also be used in biological studies that aim to predict the consequences of a changing ocean with altering oceanographic features and conditions for species' distributions, as has been done for net sampling of mesozooplankton (Wishner et al., 2013). One example of changing oceanographic conditions is the global trend of oxygen loss in the world oceans (Oschlies et al., 2018). Oxygen minimum zones (OMZs) occur naturally in the mesopelagic zone (Robinson et al., 2010), and in different oceans they have been found to expand horizontally and vertically as a result of climate change (Stramma et al., 2008; Oschlies et al., 2018). The expansion of OMZs may result in a habitat reduction of the pelagic fauna (e.g., Stramma et al., 2012) or increase the habitat for species with hypoxia tolerance (Gilly et al., 2013). To predict the potential consequences of OMZ expansion for pelagic invertebrates, we investigated the abundance and distribution of distinct large gelatinous zooplankton species, including medusae, ctenophores, siphonophores, and appendicularians, in the eastern tropical North Atlantic using PELAGIOS video transects and correlated the biological patterns to the oxygen gradients (Neitzel, 2017; Hoving et al., 2019a).
During various cruises, the UVP5 was mounted underneath the PELAGIOS providing concomitant data on macrozooplankton and nekton (PELAGIOS) as well as particles and mesozooplankton (UVP5). The combination of the two instruments provides a great opportunity to assess both the mesopelagic fauna and particles during one sampling event. The joint deployment of the PELAGIOS and UVP5 also allowed an estimation of the sampled water volume of the PELAGIOS as described above. The linear relationship between counts of the nonmoving Poeobius sp. with UVP5 and the PELAGIOS indicates the comparability of the two different methods for animals in this size class and provides a correction factor to estimate organism abundance (ind. m−3) from PELAGIOS count (ind. s−1) data.
The field of view (FOV) derived from the UVP5 comparison for the PELAGIOS was estimated to be 0.23 m2, in comparison to 0.45 m2 based on measurement of the scale bar at 1 m from the camera. The angle of view of the PELAGIOS is 80°, and therefore the field of view (FOV) is much smaller than the FOV of video transects with a wide-angle lens, e.g., by ROV Tiburon (Robison et al., 2010). When comparing the FOV, it is important to take into account the object that is observed. We provided an estimate of the FOV using Poeobius sp., which is a small organism that can be detected only when it is close to the camera. Therefore, the area of the FOV for the quantification of Poeobius sp. is smaller than when quantifying larger organisms, and the initial identification distance differs between species (Reisenbichler et al., 2016).
We compared PELAGIOS video transects with MOCNESS net (opening 1 m2) abundance data by integrating the PELAGIOS counts over the respective depth strata of the MOCNESS deployments from the same cruise (Lüskow et al., 2019). The diversity of the gelatinous zooplankton in the total MOCNESS catch is much lower (eight different taxa) (Lüskow et al., 2019) than in the pooled video transects (53 different annotated taxa) at the same station. The ctenophore Beroe is an example of a gelatinous organism captured in MOCNESS hauls and also observed on PELAGIOS transects. Normalization and subsequent standardization of the encountered Beroe in MOCNESS and PELAGIOS transects show that at the same station and the same depths, PELAGIOS observes 3–5 times more Beroe at the three depths where they were encountered by both instruments. Additionally, the PELAGIOS also repeatedly observed Beroe at depths where they were not captured by MOCNESS at all (although there were also depths where PELAGIOS did not observe any Beroe). Preliminary comparisons of the data obtained with PELAGIOS and with MOCNESS indicate substantial differences in the documented fauna, a phenomenon also observed in previous comparisons between optical and net data (Remsen et al., 2004). Many more gelatinous taxa were observed during PELAGIOS video transects than were captured in MOCNESS catches at the same station (data presented here; Lüskow et al., 2019) due to the delicate nature of many ctenophores, medusae, and siphonophores, preventing their intact capture by nets. Notable exceptions are the small and robust calycophoran colonies of the families Diphyidae and Abylidae, which were also captured by MOCNESS. In contrast, avoidance behavior of strong and fast-swimming jellyfish (e.g., Atolla, Periphylla), which may escape from the relatively slowly towed PELAGIOS, may explain their increased occurrence in nets compared to video recordings.
While PELAGIOS is certainly suitable for visualizing delicate gelatinous fauna, it cannot replace net or ROV sampling, since complementary specimen collections are needed to validate the identity of organisms observed during PELAGIOS video transects. It is therefore desirable that net tows with opening and closing nets such as Multinet Maxi or MOCNESS be performed in the same areas, or that collections be made during submersible dives. An advantage of ROVs over PELAGIOS is the ROV's ability to stop at organisms for detailed close-up recording and potentially collect the observed organisms. This is not possible with PELAGIOS, as the ship tows the instrument.
While the image processing pipeline is not as streamlined as in other optical systems that use still images, such as the VPR or the UVP5, the potential of the PELAGIOS as an exploration tool is illustrated by the discovery of previously undocumented animals. An example is the ctenophore Kiyohimea usagi (Matsumoto and Robison, 1992), which was observed seven times by the PELAGIOS and once by the manned submersible JAGO during cruises in the eastern tropical North Atlantic. This large (> 40 cm wide) lobate ctenophore was previously unknown from the Atlantic Ocean and demonstrates how in situ observations in epipelagic waters can result in the discovery of relatively large fauna (Hoving et al., 2018). Since gelatinous organisms are increasingly recognized as vital players in the oceanic food web (Choy et al., 2017) and in the biological carbon pump (Robison et al., 2005), in situ observations with tools like the PELAGIOS can provide important new insights into the oceanic ecosystem and the carbon cycle. But small gelatinous organisms may also have a large biogeochemical impact on their environment. This was illustrated by the discovery of the pelagic polychaete Poeobius sp. during the PELAGIOS video transects in the eastern tropical North Atlantic (Christiansen et al., 2018). The observations of the PELAGIOS provided the first evidence for the occurrence of Poeobius sp. in the Atlantic Ocean. During the R/V Meteor cruise M119, Poeobius was found to be extremely abundant in a mesoscale eddy. Following this discovery, it was possible to reconstruct the horizontal and vertical distribution of Atlantic Poeobius in great detail using an extensive database of the UVP5 (956 vertical CTD or UVP5 profiles) in the eastern tropical North Atlantic, and to establish that the high local abundance of Poeobius was directly related to the presence of mesoscale eddies, in which they substantially intercepted the particle export flux to the deep sea (Christiansen et al., 2018).
Future effort should be focused on improving the assessment of the sample volume by integrating technology that can quantify it (e.g., current meters, a stereo-camera setup or a laser-based system). A stereo-camera setup would also allow for size measurements of the observed organisms, which could be beneficial for estimating their biomass from published size-to-weight relationships. It might also be possible to obtain similar information based on structure-from-motion approaches that have proved successful in benthic video imaging (Burns et al., 2015). The PELAGIOS system can also be a platform for other sensors. For example, the PELAGIOS was used to mount and test the TuLUMIS multispectral camera (Liu et al., 2018). Future developments include the preparation of the system for deployments down to 6000 m water depth. The integration of acoustic sensors would be valuable to measure the target strength of camera-observed organisms, to estimate gear avoidance or attraction, and to estimate biomass and the abundance of organisms outside the field of view of the camera. We strongly encourage the use of complementary instruments to assess the relative importance of a wide range of organisms in the oceanic pelagic ecosystem.
Data availability
The datasets generated and/or analyzed during the current study are available in the PANGAEA repository: https://doi.org/10.1594/PANGAEA.902247 (Hoving et al., 2019b).
Author contributions
This instrument was designed, tested and applied by HJH and EF. RK and HH developed the idea of combining the PELAGIOS with the UVP5. PN and SC analyzed the data in this paper in consultation with HJH, RK, and HH. AK, UP, and PL added valuable input to the further development of the instrument and its application and/or the data interpretation. All authors contributed to writing the paper. All authors approved the final submitted paper.
Competing interests
The authors declare that they have no conflict of interest.
Acknowledgements
Our sincere gratitude goes to Ralf Schwarz, Sven Sturm, and other colleagues of GEOMAR's Technology and Logistics Centre as well as Svend Mees for their indispensable support in design and construction during the development of PELAGIOS. We want to thank the crew of the research vessels Meteor, Maria S. Merian, and Poseidon for their excellent support during research expeditions and Bernd Christiansen (University of Hamburg) for collaboration and leading the expedition MSM49. Anneke Denda and Florian Lüskow are acknowledged for their help on the MOCNESS samples of gelatinous zooplankton collected during MSM49. Shiptime on R/V Maria S. Merian and supporting funds were provided by the German Research Foundation (DFG) (grant MSM49 to Bernd Christiansen). We also thank the DFG for providing financial support to HJH under grants HO 5569/1-2 (Emmy Noether Junior Research Group) and grant CP1218 of the Cluster of Excellence 80 “The Future Ocean”. Rainer Kiko and Svenja Christiansen were supported by grant CP1650 of the Cluster of Excellence 80 “The Future Ocean”. “The Future Ocean” is funded within the framework of the Excellence Initiative by the DFG on behalf of the German federal and state governments. Rainer Kiko and Helena Hauss were supported by the DFG as part of the Collaborative Research Centre (SFB) 754 “Climate-Biogeochemistry Interactions in the Tropical Ocean”.
Financial support
This research has been supported by the German Research Foundation (grant nos. HO 5569/1-2, CP1218, CP1650, and SFB754).
Review statement
This paper was edited by Mario Hoppema and reviewed by two anonymous referees.
References
Barham, E. G.: Siphonophores and the deep scattering layer, Science, 140, 826–828, 1963.
Benfield, M. C., Davis, C. S., Wiebe, P. H., Gallager, S. M., Lough, R. G., and Copley, N. J.: Video Plankton Recorder estimates of copepod, pteropod and larvacean distributions from a stratified region of Georges Bank with comparative measurements from a MOCNESS sampler, Deep-Sea Res. Pt. II, 43, 1925–1945, 1996.
Benfield, M. C., Grosjean, P., Culverhouse, P. F., Irigoien, X., Sieracki, M. E., Lopez-Urrutia, A., Dam, H. G., Hu, Q., Davis, C. S., Hansen, A., Pilskaln, C. H., Riseman, E. M., Schultz, H., Utgoff, P. E., and Gorsky, G.: RAPID: Research on Automated Plankton Identification, Oceanography, 20, 172–187, 2007.
Biard, T., Picheral, M., Mayot, N., Vandromme, P., Hauss, H., Gorsky, G., Guidi, L., Kiko, R., and Not, F.: In situ imaging reveals the biomass of giant protists in the global ocean, Nature, 532, 504–507, 2016.
Burns, J. H. R., Delparte, D., Gates, R. D., and Takabayashi, M.: Integrating structure-from-motion photogrammetry with geospatial software as a novel technique for quantifying 3D ecological characteristics of coral reefs, PeerJ, 3, e1077, https://doi.org/10.7717/peerj.1077, 2015.
Bush, S. L., Caldwell, R. L., and Robison, B. H.: Ink utilization by mesopelagic squid, Mar. Biol., 152, 485–494, 2007.
Choy, C. A., Haddock, S. H. D., and Robison, B. H.: Deep pelagic food web structure as revealed by in situ feeding observations, Proc. R. Soc. B. Biol. Sci., 284, 1869, https://doi.org/10.1098/rspb.2017.2116, 2017.
Christiansen, B., Buchholz, C., Buchholz, F., Chi, X., Christiansen, S., Denda, A., Fabrizius, E., Hauss, H., Hoving, H. J. T., Janßen, S., Kaufmann, M., Kronschnabel, A., Lischka, A., Lüskow, F., Martin, B., Merten, V., Silva, P., Pinheiro, N., Springer, B., Zankl, S., and Zeimet, T.: SEAMOX: The Influence of Seamounts and Oxygen Minimum Zones on Pelagic Fauna in the Eastern Tropical Atlantic, Technical Cruise Report Cruise No. MSM49 28 November–21 December 2015 Las Palmas de Gran Canaria (Spain)-Mindelo (Republic of Cape Verde), 2016.
Christiansen, S., Hoving, H. J. T., Schütte, F., Hauss, H., Karstensen, J., Körtzinger, A., Schröder, M., Stemmann, L., Christiansen, B., Picheral, M., Brandt, P., Robison, B. H., Koch, R., and Kiko, R.: Particulate matter flux interception in oceanic mesoscale eddies by the polychaete Poeobius sp., Limnol. Oceanogr., 63, 2093–2109, 2018.
Cowen, R. K. and Guigand, C. M.: In situ ichthyoplankton imaging system (ISIIS): system design and preliminary results, Limnol. Oceanogr.-Meth, 6, 126–132, 2008.
Genin, A.: Bio-physical coupling in the formation of zooplankton and fish aggregations over abrupt topographies, J. Marine Syst., 50, 3–20, 2004.
Gilly, W. F., Beman, J. M., Litvin, S. Y., and Robison, B. H.: Oceanographic and biological effects of shoaling of the oxygen minimum zone, Annu. Rev. Mar. Sci., 5, 393–420, 2013.
Gomes-Pereira, J. N., Auger, V., Beisiegel, K., Benjamin, R., Bergmann, M., Bowden, D., Buhl-Mortensen, P., De Leo, F. C., Dionísio, G., Durden, J. M., Edwards, L., Friedman, A., Greinert, J., Jacobsen-Stout, N., Lerner, S., Leslie, M., Nattkemper, T. W., Sameoto, J. A., Schoening, T., Schouten, R., Seager, J., Singh, H., Soubigou, O., Tojeira, I., van den Beld, I., Dias, F., Tempera, F., and Santos, R. S.: Current and future trends in marine image annotation software, Prog. Oceanogr., 149, 106–120, 2016.
Haddock, S. H. D.: A golden age of gelata: past and future research on planktonic ctenophores and cnidarians, Hydrobiologia, 530, 549–556, 2004.
Hamner, W. M., Madin, L. P., Alldredge, A. L., Gilmer, R. M., and Hamner, P. P.: Underwater observations of gelatinous zooplankton: sampling problems, feeding biology and behavior, Limnol. Oceanogr., 20, 907–917, 1975.
Hamner, W. M. and Robison, B. H.: In situ observations of giant appendicularians in Monterey Bay, Deep-Sea Res., 39, 1299–1313, 1992.
Harbison, G., Madin, L., and Swanberg, N.: On the natural history and distribution of oceanic ctenophores, Deep-Sea Res., 25, 233–256, 1978.
Haslob, H., Rohlf, N., and Schnack, D.: Small scale distribution patterns and vertical migration of North Sea herring larvae (Clupea harengus, Teleostei: Clupeidae) in relation to abiotic and biotic factors, Sci. Mar., 73, 13–22, 2009.
Haury, L., Fey, C., Newland, C., and Genin, A.: Zooplankton distribution around four eastern North Pacific seamounts, Prog. Oceanogr., 45, 69–105, 2000.
Hauss, H., Christiansen, S., Schütte, F., Kiko, R., Edvam Lima, M., Rodrigues, E., Karstensen, J., Löscher, C. R., Körtzinger, A., and Fiedler, B.: Dead zone or oasis in the open ocean? Zooplankton distribution and migration in low-oxygen modewater eddies, Biogeosciences, 13, 1977–1989, https://doi.org/10.5194/bg-13-1977-2016, 2016.
Hosia, A., Falkenhaug, T., Baxter, E. J., and Pagès, F.: Abundance, distribution and diversity of gelatinous predators along the Mid Atlantic Ridge: A comparison of different sampling methodologies, PLoS One, 12, e0187491, https://doi.org/10.1371/journal.pone.0187491, 2017.
Hoving, H. J. T. and Robison, B. H.: Deep-sea in situ observations of gonatid squid and their prey reveal high occurrence of cannibalism, Deep-Sea Res. Pt. I, 116, 94–98, 2016.
Hoving, H. J. T., Zeidberg, L., Benfield, M., Bush, S., Robison, B. H., and Vecchione, M.: First in situ observations of the deep-sea squid Grimalditeuthis bonplandi reveals unique use of tentacles, Proc. R. Soc. B, 280, 1769, https://doi.org/10.1098/rspb.2013.1463, 2013.
Hoving, H. J. T., Bush, S. L., Haddock, S. H. D., and Robison, B. H.: Bathyal feasting: post-spawning squid as a source of carbon for deep-sea benthic communities, Proc. R. Soc. B, 284, 20172096, https://doi.org/10.1098/rspb.2017.2096, 2017.
Hoving, H.-J., Neitzel, P., and Robison, B.: In situ observations lead to the discovery of the large ctenophore Kiyohimea usagi (Lobata: Eurhamphaeidae) in the eastern tropical Atlantic, Zootaxa, 4526, 232–238, 2018.
Hoving, H. J. T. et al.: Vertical ecology of gelatinous fauna in relation to the mesopelagic oxygen minimum zone of the eastern Atlantic, in preparation, 2019a.
Hoving, H.-J. T., Christiansen, S., Fabrizius, E., Hauss, H., Kiko, R., Linke, P., Neitzel, P., Piatkowski, U., and Körtzinger, A.: Data from video supporting the technological description of the pelagic in situ observation system PELAGIOS, PANGAEA, https://doi.org/10.1594/PANGAEA.902247, 2019b.
Hull, P. M., Osborn, K. J., Norris, R. D., and Robison, B. H.: Seasonality and depth distribution of a mesopelagic foraminifer, Hastigerinella digitata, in Monterey Bay, California, Limnol. Oceanogr., 56, 562–576, 2011.
Isaacs, J. D. and Schwartzlose, R. A.: Migrant sound scatterers: interaction with the sea floor, Science, 150, 1810–1813, 1965.
Kaartvedt, S., Staby, A., and Aksnes, D. L.: Efficient trawl avoidance by mesopelagic fishes causes large underestimation of their biomass, Mar. Ecol. Prog. Ser., 456, 1–6, 2012.
Kiko, R., Biastoch, A., Brandt, P., Cravatte, S., Hauss, H., Hummels, R., Kriest, I., Marin, F., McDonnell, A. M. P., Oschlies, A., Picheral, M., Schwarzkopf, F. U., Thurnherr, A. M., and Stemmann, L.: Biological and physical influences on marine snowfall at the equator, Nat. Geosci., 10, 852–858, 2017.
Kubodera, T., Koyama, Y., and Mori, K.: Observations of wild hunting behaviour and bioluminescence of a large deep-sea, eight-armed squid, Taningia danae, P. Roy. Soc. B, 274, 1029–1034, 2007.
Linke, P., Schmidt, M., Rohleder, M., Al-Barakati, A., and Al-Farawati, R.: Novel online digital video and high-speed data broadcasting via standard coaxial cable onboard marine operating vessels, Mar. Tech. Soc. J., 49, 7–18, 2015.
Liu, H., Sticklus, J., Köser, K., Hoving, H. J. T., Ying, C., Hong, S., Greinert, J., and Schoening, T.: TuLUMIS – A tunable LED-based underwater multispectral imaging system, Opt. Express, 26, 7811–7828, 2018.
Lüskow, F., Hoving, H. J. T., Christiansen, B., Chi, X., Silva, P., Neitzel, P., and Jaspers, C.: Distribution and biomass of gelatinous zooplankton in relation to an oxygen minimum zone and a shallow seamount in the Eastern Tropical Atlantic Ocean, in preparation, 2019.
Madin, L., Horgan, E., Gallager, S., Eaton, J., and Girard, A.: LAPIS: A new imaging tool for macrozooplankton, IEEE J. Oceanic Eng., https://doi.org/10.1109/OCEANS.2006.307106, 1-4244-0115-1/06, 2006.
Matsumoto, G. I. and Robison, B. H.: Kiyohimea usagi, a new species of lobate ctenophore from the Monterey Submarine Canyon, B. Mar. Sci., 51, 19–29, 1992.
Möller, K. O., John, M. S., Temming, A., Floeter, J., Sell, A. F., Herrmann, J.-P., and Möllmann, C.: Marine snow, zooplankton and thin layers: indications of a trophic link from small-scale sampling with the Video Plankton Recorder, Mar. Ecol. Prog. Ser., 468, 57–69, 2012.
Neitzel, P.: The impact of the oxygen minimum zone on the vertical distribution and abundance of gelatinous macrozooplankton in the Eastern Tropical Atlantic. MSc Thesis, Christian-Albrechts-Universität Kiel, Germany, 75 pp., 2017.
Ohman, M. D., Davis, R. E., Sherman, J. T., Grindley, K. R., Whitmore, B. M., Nickels, C. F., and Ellen, J. S.: Zooglider: An autonomous vehicle for optical and acoustic sensing of zooplankton, Limnol. Oceanogr.-Meth., 17, 69–86, 2019.
Oschlies, A., Brandt, P., Stramma, L., and Schmidtko, S.: Drivers and mechanisms of ocean deoxygenation, Nat. Geosci., 11, 467–473, 2018.
Picheral, M., Guidi, L., Stemmann, L., Karl, D. M., Iddaoud, G., and Gorsky, G.: The Underwater Vision Profiler 5: An advanced instrument for high spatial resolution studies of particle size spectra and zooplankton, Limnol. Oceanogr.-Meth., 8, 462–473, 2010.
Purser, A., Marcon, Y., Dreutter, S., Hoge, U., Sablotny, B., Hehemann, L., Lemburg, J., Dorschel, B., Biebow, H., and Boetius, A.: Ocean floor observation and bathymetry system (OFOBS): A new towed camera/sonar system for deep-sea habitat surveys, IEEE J. Oceanic Eng., 44, 87–99, 2018.
Ramirez-Llodra, E., Brandt, A., Danovaro, R., De Mol, B., Escobar, E., German, C. R., Levin, L. A., Martinez Arbizu, P., Menot, L., Buhl-Mortensen, P., Narayanaswamy, B. E., Smith, C. R., Tittensor, D. P., Tyler, P. A., Vanreusel, A., and Vecchione, M.: Deep, diverse and definitely different: unique attributes of the world's largest ecosystem, Biogeosciences, 7, 2851–2899, https://doi.org/10.5194/bg-7-2851-2010, 2010.
Reisenbichler, K. R., Chaffey, M. R., Cazenave, F., McEwen, R. S., Henthorn, R. G., Sherlock, R. E., and Robison, B. H.: Automating MBARI’s midwater time-series video surveys: the transition from ROV to AUV, IEEE J. Ocean Eng., OCEANS 2016 MTS/IEEE Monterey, https://doi.org/10.1109/OCEANS.2016.7761499, 2016.
Remsen, A., Hopkins, T. L., and Samson, S.: What you see is not what you catch: a comparison of concurrently collected net, Optical Plankton Counter, and Shadowed Image Particle Profiling Evaluation Recorder data from the northeast Gulf of Mexico, Deep-Sea Res. Pt. I, 51, 129–151, 2004.
Robinson, C., Steinberg, D. K., Anderson, T. R., Arístegui, J., Carlson, C. A., Frost, J. R., Ghiglione, J.-F., Hernández-León, S., Jackson, G. A., Koppelmann, R., Quéguiner, B., Ragueneau, O., Rassoulzadegan, F., Robison, B. H., Tamburini, C., Tanaka, T., Wishner, K. F., and Zhang, J.: Mesopelagic zone ecology and biogeochemistry – a synthesis, Deep-Sea Res. Pt. II, 57, 1504–1518, 2010.
Robison, B. H.: Midwater biological research with the WASP ADS, Mar. Tech. Soc. J., 17, 21–27, 1983.
Robison, B. H.: The coevolution of undersea vehicles and deep-sea research, Mar. Tech. Soc. J., 33, 65–73, 1999.
Robison, B. H.: Deep pelagic biology, J. Exp. Mar. Biol. Ecol., 300, 253–272, 2004.
Robison, B. H. and Wishner, K.: Biological research needs for submersible access to the greatest ocean depths, Mar. Tech. Soc. J., 24, 34–37, 1990.
Robison, B. H., Reisenbichler, K. R., Sherlock, R. E., Silguero, J. M. B., and Chavez, F. P.: Seasonal abundance of the siphonophore, Nanomia bijuga, in Monterey Bay, Deep-Sea Res. II, 45, 1741–1752, 1998.
Robison, B. H., Reisenbichler, K. R., and Sherlock, R. E.: Giant larvacean houses: rapid carbon transport to the deep sea floor, Science, 308, 1609–1611, 2005.
Robison, B. H., Sherlock, R. E., and Reisenbichler, K.: The bathypelagic community of Monterey Bay, Deep-Sea Res. Pt. II, 57, 1551–1556, 2010.
Schlining, B. and Jacobsen Stout, N.: MBARI's Video Annotation and Reference System. In: Proceedings of the Marine Technology Society/Institute of Electrical and Electronics Engineers Oceans Conference, Boston, Massachusetts, 1–5, 2006.
Schulz, J., Barz, K., Ayon, P., Lüdtke, A., Zielinski, O., Mengedoht, D., and Hirche, H. J.: Imaging of plankton specimens with the Lightframe On-sight Keyspecies Investigation (LOKI) system, J. Eur. Opt. Soc.-Rapid, 5S, 1–9, https://doi.org/10.2971/jeos.2010.10017s, 2010.
Stoner, A. W., Laurel, B. J., and Hurst, T. P.: Using a baited camera to assess relative abundance of juvenile Pacific cod: Field and laboratory trials, J. Exp. Mar. Biol. Ecol., 354, 202–211, 2008.
Stramma, L., Johnson, G. C., Sprintall, J., and Mohrholz, V.: Expanding Oxygen-Minimum Zones in the Tropical Oceans, Science, 320, 655–658, 2008.
Stramma, L., Prince, E. D., Schmidtko, S., Luo, J., Hoolihan, J. P., Visbeck, M., Wallace, D. W. R., Brandt, P., and Körtzinger, A.: Expansion of oxygen minimum zones may reduce available habitat for tropical pelagic fishes, Nat. Clim. Change, 2, 33–37, 2012.
Utne-Palm, A. C., Breen, M., Lokkeborg, S., and Humborstad, O. B.: Behavioural responses of krill and cod to artificial light in laboratory experiments, PloS One, 13, e0190918, 2018.
Webb, T. J., Vanden Berghe, E., and O'Dor, R.: Biodiversity's Big Wet Secret: The Global Distribution of Marine Biological Records Reveals Chronic Under-Exploration of the Deep Pelagic Ocean, PLoS ONE, 5, e10223, https://doi.org/10.1371/journal.pone.0010223, 2010.
Wiebe, P. H. and Benfield, M. C.: From the Hensen net toward four-dimensional biological oceanography, Prog. Oceanogr., 56, 7–136, 2003.
Wishner, K. F., Outram, D. M., Seibel, B. A., Daly, K. L., and Williams, R. L.: Zooplankton in the eastern tropical north Pacific: Boundary effects of oxygen minimum zone expansion, Deep-Sea Res. Pt. I, 79, 122–140, 2013.
Youngbluth, M., Sørnes, T., Hosia, A., and Stemmann, L.: Vertical distribution and relative abundance of gelatinous zooplankton, in situ observations near the Mid-Atlantic Ridge, Deep-Sea Res. Pt. II, 55, 119–125, 2008.
# At a height of 100 km, what speed do you need to be going to escape Earth's gravity?
I understand that the escape velocity of Earth is 11 km/s. However, Earth's gravitational sphere of influence is not infinite, so it is possible to go slower than that and still escape the sphere of influence (Because of the sun.) If a rocket starts accelerating from 0 on Earth's surface, what speed would it have to be going at, say, 100 km above the Earth's surface for it to escape Earth's gravity? How would you calculate this?
• Where the rocket starts from is irrelevant. All you need is the gravitational force at the altitude of interest to calculate escape velocity at that location. – Carl Witthoft Nov 19 '19 at 12:41
• Escape velocity is a function of altitude above a body and the mass of that body; it is not a single fixed value. Escape velocity at 1000 km is different to escape velocity on the surface of the earth. – PeteBlackerThe3rd Nov 20 '19 at 15:25
Earth's sphere of influence has a radius of about 924,000 km. A highly eccentric orbit with a perigee at 100 km altitude and an apogee at the SOI radius has a semimajor axis of 465,239 km. Throwing that into the vis-viva equation $$v^2 = GM\left(\frac{2}{r} - \frac{1}{a} \right)$$ where $$G$$ is the gravitational constant, $$M$$ the mass of earth, $$r$$ the orbital radius (not altitude) at perigee and $$a$$ the semimajor axis of the orbit gives a velocity at perigee of 11.05 km/s. Using the escape velocity equation $$V_e = \sqrt{2GM \over r}$$ you'd get an escape velocity at 100 km altitude of 11.09 km/s, so there's a small saving to be had, but really, the edge of the sphere of influence is quite a long way away.
Whilst this isn't quite what you asked for, I'd be startled if your direct ascent trajectory involved a speed dramatically different from this figure.
• You're kind of going at a simple problem in a difficult way :-) – Carl Witthoft Nov 19 '19 at 12:43
• @CarlWitthoft eh, it works. Does something really count as difficult if it doesn't involve solving anything, or even engaging in any integration? – Starfish Prime Nov 19 '19 at 21:28
• This doesn't actually answer the question. – Russell Borogove Nov 20 '19 at 2:27
Earth's gravitational sphere of influence is not infinite
That's your problem. The force of gravity does have an infinite range. There is no place in the universe where earth's gravity is not felt.
As a result, it does not matter where you start, in order to escape earth, you need 11 km/s relative to the earth. If you first get into orbit at 100 km, then you already need a speed of 7 km/s for that. From that orbit, you only need an additional 4 km/s to escape earth altogether - but only because you are already going at 7 km/s.
EDIT
As pointed out by @uhoh the escape velocity does vary with altitude. However, the difference at 100 km is so small that I ignored it. Like the OP I approximated the escape velocity as 11 km/s. In actual fact, at ground level it is 11.186 km/s, and at 100 km it reduces to 11.099 km/s.
The same approximation also ignores the fact that, if you are far enough from earth (924,000 km), the sun's gravity is stronger than earth's, and you leave the earth's Sphere of Influence - which is not an exact sphere. This is further complicated by the fact that it ignores the influence of the other planets and the moon. For example, when travelling to the moon, the gravitational field of the moon exceeds that of the earth (and the sun) when you're about 40,000 km from the moon.
• Earth's gravitational sphere of influence is certainly finite and has a radius of 945,000 km. I recommend that you have a look at that link and then revise your answer accordingly. It is true that there is no limit to a $1/r^2$ force (other than the speed of light), but "sphere of influence" is a well-recognized technical term. – uhoh Nov 19 '19 at 1:03
• Also, Earth's escape velocity is certainly lower at 100 km, about 0.8% lower in fact! Ignoring atmospheric drag and rotation, if it is 11.186 km/s at the surface (the ballistic speed necessary to asymptotically escape to infinity), you'd only need 11.099 km/s starting from 100 km altitude. – uhoh Nov 19 '19 at 1:10
• Escape velocity from a body does depend on starting altitude, as is clear when you remember that no matter what altitude you start from, that body's gravity will be decelerating you the whole way. – Russell Borogove Nov 19 '19 at 2:47
• @uhoh Your comment looks like a pretty decent answer to me. May I suggest you expand it a bit into one? – Diego Sánchez Nov 19 '19 at 10:37
• @DiegoSánchez my comments are meant to be coaching; hopefully the author will revise the answer and use some of this. If that happens I'll reverse my temporary down vote to an up vote. When people revise a post in response to a comment, they usually will leave a short message like @uhoh I've made an edit, how does that look? I'll get a notification then and can adjust my vote. In the mean time if someone else posts an answer that would be great too! – uhoh Nov 19 '19 at 12:13
1. ## diverging or converging?
determine whether the integral diverges or converges
if it converges then evaluate the integral
integral from 0 to infinity: xe^-(x^2) dx
integral from 0 to 1: xln(x) dx
integral from 1 to infinity: (ln t) / (t^2) dt
integral from 0 to 2: 1 / (t-1) dt
i have solved the first integral and i got the answer as 1/2 but how do i show if it is diverging or converging?
2. Originally Posted by razorfever
determine whether the integral diverges or converges
if it converges then evaluate the integral
integral from 0 to infinity: xe^-(x^2) dx
integral from 0 to 1: xln(x) dx
integral from 1 to infinity: (ln t) / (t^2) dt
integral from 0 to 2: 1 / (t-1) dt
i have solved the first integral and i got the answer as 1/2 but how do i show if it is diverging or converging?
The integral converges if its value is finite. If its value is infinite, it is divergent.
$\int_0^1 {x\ln(x) dx} = \bigg[\frac{x^2}{2}\ln(x)\bigg]_0^1 - \int_0^1{\frac{x^2}{2} \times \frac{1}{x} dx}$
$= \bigg[\frac{x^2}{2}\ln(x)\bigg]_0^1 - \int_0^1{\frac{x}{2} dx}$
$= \bigg[\frac{x^2}{2}\ln(x)\bigg]_0^1 - \bigg[{\frac{x^2}{4}}\bigg]^1_0$
This integral is divergent because when you plug in the limits in the first expression, you end up with the term $\ln(0)$. This is not defined, and hence the integral has no finite value, it does not converge.
Try the other 2 and let us know what results you obtain.
3. Originally Posted by Mush
The integral converges if its value is finite. If its value is infinite, it is divergent.
$\int_0^1 {x\ln(x) dx} = \bigg[\frac{x^2}{2}\ln(x)\bigg]_0^1 - \int_0^1{\frac{x^2}{2} \times \frac{1}{x} dx}$
$= \bigg[\frac{x^2}{2}\ln(x)\bigg]_0^1 - \int_0^1{\frac{x}{2} dx}$
$= \bigg[\frac{x^2}{2}\ln(x)\bigg]_0^1 - \bigg[{\frac{x^2}{4}}\bigg]^1_0$
This integral is divergent
Since $\underset{x\to 0}{\mathop{\lim }}\,x\ln x=0,$ the function $x\ln x$ extends continuously to $[0,1],$ and is hence integrable on $[0,1].$
4. so does that mean it's diverging or converging? now I'm totally confused??
5. Convergent.
6. hah so i was right ... thanks for clearing that up
7. Originally Posted by razorfever
determine whether the integral diverges or converges
if it converges then evaluate the integral
integral from 0 to infinity: xe^-(x^2) dx
i have solved the first integral and i got the answer as 1/2 but how do i show if it is diverging or converging?
$\int_{0}^{\infty }{x{{e}^{-{{x}^{2}}}}\,dx}=\int_{0}^{1}{x{{e}^{-{{x}^{2}}}}\,dx}+\int_{1}^{\infty }{x{{e}^{-{{x}^{2}}}}\,dx}.$ Let's just worry about the second piece since the first one is integrable on $[0,1]$ for being a continuous function on that interval. Now, for $x\ge1$ we have $\frac{x}{{{e}^{{{x}^{2}}}}}\le \frac{x}{{{e}^{x}}}=x{{e}^{-x}},$ hence by direct comparison test your integral converges because $\int_{1}^{\infty }{x{{e}^{-x}}\,dx},$ does.
Second and third question were already answered, and for your last question, you just need to split that integral into two ones. I mean, $\int_{0}^{2}{\frac{dt}{t-1}}=\int_{0}^{1}{\frac{dt}{t-1}}+\int_{1}^{2}{\frac{dt}{t-1}}.$
And there's not much here; these are clearly divergent integrals, so your integral diverges.
## Creating a C++ Hash-Table quickly, that has a QString as its Key-Type.
This posting will assume that the reader has a rough idea of what a hash table is. In case this is not so, this Wikipedia article explains it in general terms. Just by reading that article, especially if the reader is of a slightly older generation like me, he or she could get the impression that putting a hash-table into a practical C++ program is arduous and complex. In reality, the most efficient implementation possible for a hash table requires some attention to such ideas as whether the hashes will generally tend to be coprime with the modulus of the hash table, or at least have the lowest GCDs possible. Often, bucket-arrays with sizes that are powers of (2), and hashes that contain higher prime factors, help. But, when writing practical code in C++, one will find that the “Standard Template Library” (‘STL’) already provides one that has been implemented by experts. It can be put into almost any C++ program as an ‘unordered_map’. (:1)
But there is a caveat. Any data-type meant to act as a key needs to have a hashing function defined for it, as a template specialization, and many data-types do not have one. And of course, this hashing function should be as close to unique as possible for unique key-values, while never changing, if there could be more than one possible representation of the same key-value. Conveniently, C-strings, which are denoted in C or C++ by the old-fashioned ‘char *’ data-type, happen to have this template specialization. But, because C-strings are a very old type of data-structure, and because the reader may be in the process of writing code that uses the Qt GUI library, the reader may want to use ‘QString’ as his key-data-type, only to find that a template specialization has not been provided…
For such cases, C++ allows the programmer to define his own hashing function, for QStrings if so chosen. In fact, Qt even offers a quick-and-dirty way to do it, via the ‘qHash()’ function. But there is something I do not like about ‘qHash()’ and its relatives: they produce 32-bit hash-codes! While this does not directly cause an error – only the least-significant 32 bits within a modern 64-bit data-field will contain non-zero values – I consider 32-bit hash-codes to be weak anyway.
Further, I read somewhere that the latest versions of Qt – as of 5.14? – do in fact define such a specialization, so that the programmer no longer needs to worry about it. But, IF the programmer is still using an older version of Qt, like me, THEN he or she might want to define their own 64-bit hash-function. And so, the following is the result I arrived at this afternoon. I’ve tested that it compiles without warnings under Qt 5.7.1, and that a trivial ‘main()’ function which uses it only generates warnings about receiving the standard ‘argc’ and ‘argv’ parameters without making use of them. Otherwise, the resulting executable produced no messages at run-time, while having put one key-value pair into a sample hash table… (:2)
/* File 'Hash_QStr.h'
*
* Regular use of a QString in an unordered_map doesn't
* work, because in earlier versions of Qt, there was no
* std::hash<QString>() specialization.
* Thus, one could be included everywhere the unordered_map
* is to be used...
*/
#ifndef HASH_QSTR_H
#define HASH_QSTR_H
#include <unordered_map>
#include <QString>
//#include <QHash>
#include <QChar>
#define PRIME 5351
/*
namespace cust {
template<typename T>
struct hash32 {};
template<> struct hash32<QString> {
std::size_t operator()(const QString& s) const noexcept {
return (size_t) qHash(s);
}
};
}
*/
namespace cust
{
inline size_t QStr_Orther(const QChar & mychar) noexcept {
// Recover the QChar's 16-bit UTF-16 code unit:
// row() is the high byte, cell() the low byte.
return ((mychar.row() << 8) | mychar.cell());
}
template<typename T>
struct hash64 {};
template<> struct hash64<QString>
{
size_t operator()(const QString& s) const noexcept
{
size_t hash = 0;
for (int i = 0; i < s.size(); i++)
hash = (hash << 4) + hash + QStr_Orther(s.at(i));
return hash * PRIME;
}
};
}
#endif // HASH_QSTR_H
(Updated 4/12/2021, 8h00… )
# Lab 5
Deadline: EOL (End of Lab) Friday, October 18th
## Objectives
• TSWBAT practice debugging RISC-V assembly code.
• TSWBAT write RISC-V functions that use pointers.
## RISC-V Simulator
Like in previous weeks, we will be using the Venus RISC-V simulator (which can be found online here).
## Exercise 1: Debugging megalistmanips.s
In Lab 4, you completed a RISC-V procedure that applied a function to every element of a linked list. In this lab, you will be working with a similar (but slightly more complex) version of that procedure.
Now, instead of having a linked list of ints, our data structure is a linked list of int arrays. Remember that when dealing with arrays in structs, we need to explicitly store the size of the array. In C code, here’s what the data structure looks like:
struct node {
int *arr;
int size;
struct node *next;
};
Also, here’s what the new map function does: it traverses the linked list and for each element in each array of each node, it applies the passed-in function to it, and stores it back into the array.
void map(struct node *head, int (*f)(int)) {
    while (head != NULL) {
        for (int i = 0; i < head->size; i++) {
            head->arr[i] = f(head->arr[i]);
        }
        head = head->next;
    }
}
### Action Item
Record your answers to the following questions in a text file. Some of the questions will require you to run the RISC-V code using Venus’ simulator tab.
1. Find the five mistakes inside the map function in megalistmanips.s. Read all of the commented lines under the map function in megalistmanips.s (before it returns with jr ra), and make sure that the lines do what the comments say. Some hints:
• Why do we need to save stuff on the stack before we call jal?
• What’s the difference between add t0, s0, x0 and lw t0, 0(s0)?
• Pay attention to the types of attributes in a struct node.
• Note: you need only focus on map, mapLoop, and done functions but it’s worth understanding the full program.
• Note: you may not use any s registers outside of s0 and s1.
2. For this exercise, we are requiring that you don’t use any extra saved registers in your implementation. While you normally can use the saved registers to store values that you want to use after returning from a function (in this case, when we’re calling f in map), we want you to use temporary registers instead and follow their caller/callee conventions. The provided map implementation uses the s0 and s1 registers, so we’ll require that you don’t use s2-s11.
3. Make an ordered list of each of the five mistakes, and the corrections you made to fix them.
4. Save your corrected code in the megalistmanips.s file. Running on Venus, you should see the following output:
Lists before:
5 2 7 8 1
1 6 3 8 4
5 2 7 4 3
1 2 3 4 7
5 6 7 8 9
Lists after:
30 6 56 72 2
2 42 12 72 20
30 6 56 20 12
2 6 12 20 56
30 42 56 72 90
### Checkpoint
At this point, make sure that you are comfortable with the following. Note that these will not be part of the lab checkoff, but are meant to benchmark how comfortable you are with the material in the exercise.
• You should know how to debug in Venus, including stepping through code and inspecting the contents of registers.
• You should understand how RISC-V interfaces with memory.
• You should understand CALLER/CALLEE conventions in RISC-V.
## Exercise 2: Write a function without branches
Consider the discrete-valued function f defined on integers in the set {-3, -2, -1, 0, 1, 2, 3}. Here’s the function definition:
f(-3) = 6
f(-2) = 61
f(-1) = 17
f(0) = -38
f(1) = 19
f(2) = 42
f(3) = 5
### Action Item
1. Implement the function in discrete_fn.s in RISC-V, with the condition that your code may NOT use any branch and/or jump instructions!
2. Save your corrected code in a file discrete_fn.s.
Abstract
By introducing a labor market into the neoclassical asset pricing model, limited capital market participation can be an equilibrium outcome. Labor contracts are derived endogenously as part of a dynamic equilibrium in a production economy. Firms write labor contracts that insure workers, allowing agents to achieve a Pareto optimal allocation even when the span of asset markets is restricted to just stocks and bonds. Capital markets facilitate this risk sharing because it is there that firms offload the labor market risk they assumed from workers. In effect, by investing in capital markets, investors provide insurance to wage earners who then optimally choose not to participate in capital markets. (JEL G11, G12)
A commonly held view amongst financial economists is that a significant fraction of wealth consists of non-tradable assets, most notably human capital wealth. Indeed, this hypothesis is often used to explain why one of the key predictions of the Capital Asset Pricing Model (CAPM) does not hold, that all agents hold the same portfolio of risky assets. Because investors should use the capital markets to diversify as much risk as possible, and because non-tradable human capital exposure varies across individuals, investors should optimally choose to hold different portfolios of risky assets. Although this explanation certainly has the potential to explain the cross-sectional variation in portfolio holdings, it also necessarily implies wide stock market participation. However, the fact is that the majority of people do not participate in the capital markets. Not only do these individuals appear to eschew the opportunity to partially hedge their human capital exposure, the hedging of human capital risk does not appear to be a primary motivator for the minority of people who actually do participate in capital markets. Instead, the anecdotal evidence suggests that rather than a desire to hedge, what motivates most investors is a willingness to take on additional risk because they find the risk-return tradeoff attractive.1 The objective of this paper is to put forward a plausible explanation for these two characteristics of investor behavior, i.e., limited participation in capital markets, and risk-taking behavior by those who do participate.
The most commonly cited explanation for why most people do not participate in capital markets is barriers to entry, although in economies such as the United States it is difficult to accept that significant economic barriers to entry exist.2 Instead, most researchers cite educational barriers to entry because research has shown that education level is strongly correlated with participation.3 But the problem with this explanation for limited stock market participation is that it does not address the question of why the educational barriers exist at all. After all, we see wide participation in arguably more complicated financial products such as mortgages, auto leases, and insurance. In these cases, the educational barriers to entry were removed by the motivation to make profits—firms invested considerable resources in educating people so they could sell these products. Given the welfare gain to hedging non-tradable human capital, why does a similar economic motivation to educate consumers to hold stocks apparently not exist?
Market incompleteness may potentially offer an explanation for limited stock market participation. For instance, the asset span might be so “narrow” that the stock market offers little opportunity for Pareto improving trades. Although rarely cited explicitly, this explanation is implicit in the literature on non-traded wealth. But, for this explanation to be credible, one must also then account for why the asset span does not endogenously expand. In fact, the span of traded assets has changed only marginally in recent years, despite the explosion in the number of new assets. More importantly, one would not naturally expect incompleteness to result in non-participation. Indeed, the low correlation between human capital and stock market returns documented in Lustig and Van Nieuwerburgh (2008) should suggest that despite the incompleteness, the stock market offers diversification benefits that would imply wide participation. Thus, market incompleteness appears to be an unlikely explanation for limited stock market participation.
If frictions, like barriers to entry and market incompleteness, are not preventing agents from participating, then they must be choosing not to participate. One possibility is that agents’ initial endowments and productivities are naturally so close to a Pareto optimal allocation that there is little reason to engage in further trade. But considering the heterogeneity in actual endowments, this explanation seems implausible. A more plausible possibility is that some agents are able to share risk by trading in other markets and therefore trading in stock markets provides little incremental benefit.
Building on this insight, we identify the labor market as one such market and posit that the unwillingness of some individuals to use capital markets is a consequence of the fact that they are able to share enough risk through their wage contracts so that the benefit of trading in capital markets is small. A Pareto optimal allocation can therefore be achieved even with the “narrow” asset span we observe in actual stock markets, implying that limited stock market participation is an efficient equilibrium outcome.
We focus on labor markets because they are an ideal place to share risk. The structure of most firms has historically been built around long-term tailored labor contracts between the firm and its workers. To understand how these contracts can share risk, one need look no further than the high profile bankruptcies of General Motors and Chrysler in 2009. In the year preceding the bankruptcy, all three U.S. automakers burned through billions of dollars of shareholder equity by continuing to manufacture cars even when it did not appear in their economic interest to do so.4 The only plausible explanation for this behavior is the companies’ commitments to their labor force.5 These auto companies are certainly not alone. Many, if not most, companies operate at a loss during recessions, indicating that if companies had more flexibility to curtail production, unemployment rates would be substantially higher during recessions. Indeed, viewed this way, one might wonder why all risk cannot be optimally shared in labor markets. The problem is that long-term labor contracts are not necessarily efficient for all employees—some employees are better off retaining the flexibility to switch jobs. Because of this labor market mobility, to achieve efficient risk sharing, asset markets are also required.
So what determines who participates in the stock market? Several studies suggest intelligence as the key distinguishing factor between stock market participants and nonparticipants (see, e.g., Christelis, Jappelli, and Padula 2010; Grinblatt, Keloharju, and Linnainmaa 2011), so it is natural to conjecture that differences in intelligence play an important role in determining participation. Alfred Binet, the inventor of the IQ test, associated intelligence with being able to adapt oneself to different circumstances, and the developmental cognitive psychologist Reuven Feuerstein describes intelligence as “the unique propensity of human beings to […] adapt to the changing demands of a life situation” (see Feuerstein 1990). Such flexibility to adapt suggests that what separates stock market participants from nonparticipants may be the ability to hedge risk by adjusting to changing economic conditions. In a Pareto efficient outcome, more flexible participants insure less flexible nonparticipants. This is the starting point for our analysis.
Our model delivers a number of insights. First, it calls into question one of the basic assumptions in asset pricing—that because asset markets do not span labor risk, human capital is not traded. In our equilibrium, less flexible workers use the labor market to trade human capital risk. The implication is that even though risk is shared efficiently, because not all wealth is traded in equity markets, the equity risk premium is not the same as the risk premium for consumption risk. As a result, we can generate a substantial equity risk premium even while the risk premium for consumption risk is modest.
Second, we show that our approach naturally explains the weak empirical relationship between the dynamics of asset prices on the one hand and labor income and consumption on the other. Specifically, asset returns are much more volatile than, and almost uncorrelated with, aggregate labor income growth, and only moderately correlated with consumption growth. Moreover, not only is consumption volatility significantly lower than the volatility of asset returns, but the two series behave manifestly differently. For example, the average quarterly volatility of the S&P 500 Index is 68% higher during recessions (as identified by the National Bureau of Economic Research (NBER)).6 Yet, the concomitant increase in consumption volatility is much smaller if it exists at all—over the period 1947–2009, the point estimate of the volatility of (seasonally adjusted) quarterly GDP growth in NBER recessions is only 11% higher than in expansions.7
Third, we explain why equity wealth is considerably more volatile and human capital wealth less volatile than total wealth.8 Because human capital wealth is traditionally measured using wage income, that is, the income that results once risk sharing has already taken place in the labor market, traditional measures underestimate the volatility of human capital wealth.
Finally, in our model a majority of workers choose not to participate in equity markets because their labor contracts already efficiently share risk. Consequently, these workers choose to remain employed with a single employer. This result implies that such workers consume their wage income, that is, workers who do not change jobs often are more likely to have no other investment income and so for these workers, consumption and wages are identical. Because wages can be measured more accurately than consumption, this result has an important implication as it suggests that the wages of such workers can be used as a proxy for consumption in a test of the consumption CAPM.
The paper is organized as follows. In the next section we provide a brief literature review. In Section 2 we introduce the model. In Section 3, we derive the Pareto efficient outcome and show how it can be implemented as an equilibrium outcome under realistic restrictions. Section 4 discusses the asset pricing implications both in the time series and in the cross section. We discuss the robustness of the model’s predictions in Section 5. Section 6 makes some concluding remarks. All proofs are left to the Appendix.
1. Background
The idea that one role of the firm is to insure its workers’ human capital risk dates at least as far back as Knight (1921). Knight takes as a primitive that the job of worker and manager entails taking on different risks and notes that entrepreneurs bear most of the risk. Using this idea, Kihlstrom and Laffont (1979) endogenizes who, in a general equilibrium, becomes an entrepreneur and who becomes a worker. Less risk-averse agents choose to be entrepreneurs who then optimally insure workers. However, the wage contract in that paper is exogenously imposed rather than an endogenous response to the desire to optimally share risk and so the resulting equilibrium is not Pareto efficient.
The papers that first recognized the importance of endogenizing the wage contract, and therefore the ones most closely related to our paper, are Dreze (1989) and Danthine and Donaldson (2002). Like us, Dreze (1989) considers the interaction between a labor and capital market in general equilibrium and focuses on efficient risk sharing. Our point of departure is how we model production. Dreze does not consider the implications of productive heterogeneity. Consequently, there is no natural reason (beyond differences in risk aversion and wealth) for some workers to insure other workers in Dreze’s model. Hence, the model does not explain limited capital market participation or focus on the return to bearing labor risk.
Danthine and Donaldson (2002), like us, explicitly model both labor and financial markets with agent heterogeneity. Their model features investors and workers, but, importantly, Danthine and Donaldson (2002) do not allow workers to invest or investors to work and so their model does not address endogenous capital market participation. In their model, all workers are insured by investors who are endowed with wealth rather than productivity and hence have a precautionary reason to save, which they do by investing in firms. Because this motive is missing in our model, prices must adjust in our model to induce some workers to insure other workers.
Guvenen (2009) studies the effect of limited stock market participation in a model with heterogeneity in agents’ elasticity of intertemporal substitution (EIS), modeled with Epstein-Zin preferences. Like Danthine and Donaldson (2002), Guvenen (2009) does not focus on the reasons for limited market participation; the participation rate is exogenously specified in his model. The objective in his paper is to build a model with limited participation that can reproduce important asset pricing moments.
Our paper also contributes to the large literature, which started with Mayers (1972), studying the effect of non-tradable wealth in financial markets. The main results in that literature are that investors should no longer hold the same portfolio of risky assets and the single factor pricing relation must be adjusted. Although Fama and Schwert (1977) finds little evidence supporting Mayer’s model, both Campbell (1996) and Jagannathan and Wang (1997) find that adding a measure of human capital risk significantly increases the explanatory power of the CAPM. Santos and Veronesi (2006) find that the labor income to consumption ratio has predictive power for stock returns and can help explain risk premia in the cross-section. Because wage contracts provide insurance for human capital risk, our model implies that wages (the typical measure of human capital used in the literature) should have explanatory power for stock returns.
The theoretical predictions of the neoclassical asset pricing model rely on effectively complete markets, so initially researchers were tempted to attribute the failure of those models to market incompleteness. However, Telmer (1993) and Heaton and Lucas (1996) convincingly argue that market incompleteness cannot account for important puzzles, such as the apparently high risk premium of the market portfolio. As we show in this paper, quite the opposite intuition might be true. The failure of the models might stem from the fact that agents actually share risk more completely than is supposed in the literature. If labor markets effectively share risk, then because equity holders are the ultimate insurers of human capital risk, they will demand a high risk premium. As we will demonstrate, our results are consistent with the findings in Mankiw and Zeldes (1991) and Brav, Constantinides, and Geczy (2002) in that those who choose not to participate are less wealthy, less educated, and more reliant on wage income as their source of wealth. Furthermore, consistent with the anecdotal evidence, the primary motivation for investing in capital markets is the attractive risk-return tradeoff offered, not a desire to hedge human capital risk.
2. Model
Like any source of risk, human capital risk has both an idiosyncratic component and a systematic component. Although the idiosyncratic component is likely to be large, especially early in a person’s career, we will focus exclusively on the systematic component because we are interested in the implications of how agents share risk in the economy. Idiosyncratic risk, by its very nature, can be diversified away, so there is little reason for any agent to hold this risk in a complete market equilibrium. Consequently, the risk-sharing implications of sharing idiosyncratic risk are well understood.9
Given our objective to study how systematic risk is shared in the economy, our model must include heterogeneous agents. An important source of individual heterogeneity in the economy is worker flexibility: some workers only have access to a single production technology while others can choose between production technologies. Building on this insight, we model productivity as follows. Our economy consists of a continuum of workers that produce a single, perishable, consumption good, using a technology that is parameterized as follows: The total instantaneous output produced is $${A_t}\left( {b + fs} \right).$$$${A_t}$$ is a common component and $$\left( {b + fs} \right)$$ is an individual component we term an individual worker’s production technology, where $${s}$$ is the variable that captures the current state of the economy. Inflexible workers are endowed with a fixed $${b}$$ and $${f}$$ while flexible workers can choose $${b}$$ and $${f}$$ throughout their career (by switching industries).
We model the production technology set as follows. There is a closed set of production technologies (industries), $$\mathcal {P} \subset \left[ {{\underline b},1} \right] \times \left[ {0,\bar K} \right],$$ for some $$\bar K \gt 0$$ and $${\underline b} \lt 1.$$ Each inflexible agent only has access to a single production technology in this set, $$\left( {b,f} \right),$$ and produces $${A_t}(b + {fs})$$ of a consumption good, where $${A_t} \equiv {A_0}{e^{rt}}$$ is the non-stochastic10 part of production common to all agents. We assume that all inflexible agents have access to technologies with $$b \ge 0,$$ to ensure that each individual’s production is nonnegative in all states of the world. Flexible agents have access to every production technology in $${\cal P}.$$ The production technology set has the properties that $$({\underline b},{\bar K}) \in {\mathcal P},$$$$(1,0) \in {\mathcal P},$$$$({b},{\bar K}) \in {\mathcal P} \Rightarrow b = {\underline b},$$ and $$(1,f) \in {\mathcal P} \Rightarrow f = 0.$$
The dynamics of $${A_t}$$ are meant to capture overall economic growth and allows us to model recessions as a relative drop in productivity. The stochastic process $${s}$$ is a diffusion process on $${\mathbb{R}_+}$$ that summarizes the state of the world:
$$ds = \mu \left( s \right)dt + \sigma \left( s \right)d\omega .$$
We will model $${s}$$ as a mean-reverting square root process,
(1)
$$ds = \theta (\bar s - s)dt + \sigma \sqrt s\,\,\,d\omega ,$$
where the condition $$2\theta {\bar s} \gt {\sigma ^2}$$ ensures strict positivity. The mean-reversion introduces a business cycle interpretation, although, as we shall see, much of the theory goes through for general $$\mu (s)$$ and $$\sigma (s),$$ so long as $${\mu}$$ and $${\rm \sigma}$$ are smooth, $${\rm \sigma}$$ is strictly positive, and the growth conditions $$|\mu (s)| \le {c_1}(1 + s),$$$$\sigma (s) \le {c_2}(1 + s)$$ are satisfied for finite constants $${c_{1}}$$ and $${c_{2}}.$$ It is natural to define a recession as states for which $$s \lt \bar {s},$$ whereas an expansion is when $$s \gt \bar {s}.$$
Let the inflexible agents be indexed by $$i \in {\cal I} = [0,\alpha ],$$ where $$0 \lt \alpha \lt 1,$$ with agent $${i}$$ working in industry $$({b_i},{f_i}).$$ Here, we assume that $${b_{i}}$$ and $${f_{i}}$$ are measurable functions that are nondegenerate in the sense that it is neither the case that the full mass of agents work in industries with $${b=0},$$ nor in industry $$(1,0).$$ Then the total productivity of all inflexible agents in the economy is:
$${A_t}{K_I}\left( s \right) = {A_0}{e^{rt}}{K_I}\left( s \right),$$
where:
(2)
$${K_I}(s) \equiv \int_{i \in \mathcal I} {\frac{1}{\alpha }} \left( {{b_i} + {f_i}s} \right)di = \bar b + \bar fs.$$
Note that $$0 \lt \bar {b} \lt 1$$ and $$0 \lt \bar {f} \lt \overline {K}.$$
The rest of the agents in the economy are flexible agents, comprising mass $$1 - \alpha,$$$$i \in \mathcal {F} = (1 - \alpha ,1].$$ Because these agents have access to any production technology in $$\mathcal {P}$$ and are free to move between production technologies at any point in time, for a given $${s},$$ it is optimal for them to work in an industry $$(b^\ast, f^\ast),$$ which solves:
(3)
$$(b^*,f^*) = \mathop {\arg \max }\limits_{(b,f) \in \mathcal P} b + fs,$$
leading to the optimal productivity of flexible agents:
$${A_t}{K_F}\left( s \right) = {A_0}{e^{rt}}{K_F}\left( s \right),$$
where:
(4)
$${K_F}(s) \equiv {b^ * }(s) + {f^ * }(s)s.$$
Notice that, at any point in time, all flexible agents choose to work in industries that generate the same output. Lemma 2, in the Appendix, shows that $$K_F(s)$$ is bounded below by 1, is convex, and asymptotically approaches a slope of $$\overline K.$$
We next assume that flexible agents can work part time in different industries, i.e., if $$({b_1},{f_1}) \in \mathcal {P}$$ and $$({b_2},{f_2}) \in \mathcal {P},$$ then $$(\lambda {b_1} + (1 - \lambda ){b_2},\lambda {f_1} + (1 - \lambda ){f_2}) \in \mathcal {P}$$ for all $$\lambda \in [0,1].$$ This implies that for all $$b \in [\underline {b}, 1],$$ there is a $$(b,f) \in \mathcal {P}.$$ Now, flexible agents will only consider production technologies on the efficient frontier, $$(b,f (b)),$$ where $$f(b)\overset{\text{def}}= \max \{ f:(b,f) \in \mathcal {P}\},$$ and it follows immediately that $${f}$$ is a strictly decreasing, concave function defined on $$b \in [\underline {b},1],$$ such that $$f(\underline {b}) = \overline {K}$$ and $$f(1)=0.$$ Going forward, we make the additional technical assumptions that $${f}$$ is strictly concave, twice continuously differentiable, and that $$f^\prime (\underline {b}) =0,$$ and $${f^\prime }(1) = - \infty.$$ Under these assumptions, Lemma 3, in the Appendix, ensures that $${K_F}(s)$$ is a diffusion process (which is, of course, also true of $${K_I}(s)$$). The total output in the economy at time $${t}$$ is:
(5)
$${A_t}{K_{tot}}({s_t}) = {A_0}{e^{rt}}{K_{tot}}({s_t}),$$
where
(6)
$${K_{tot}}(s) \equiv \alpha {K_I}(s) + (1 - \alpha ){K_F}(s),$$
implying that $${K_{tot}}(s)$$ is also a diffusion process.
Figure 1 plots the production function for flexible workers and the average inflexible worker. Note that because $${f}$$($${b}$$) is concave, flexible workers are always more productive than the average inflexible worker. Lemma 1 adds the observation that worker mobility implies that flexible workers will move into safer jobs in bad times and riskier jobs in good times, so they have a natural advantage in providing insurance to inflexible workers.
Lemma 1.
The following results hold for the volatility of the agents’ productivity:
• For low $${s},$$ the volatility of the flexible agent’s productivity is lower than that of the inflexible agent:
$$Vol\left( {\frac{{d{K_F}}}{{{K_F}}}} \right) \lt Vol\left( {\frac{{d{K_I}}}{{{K_I}}}} \right).$$
• For high $${s},$$ the volatility of the flexible agent’s productivity is higher than that of the inflexible agent,
$$Vol\left( {\frac{{d{K_F}}}{{{K_F}}}} \right) \gt Vol\left( {\frac{{d{K_I}}}{{{K_I}}}} \right).$$
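A sketch of the mechanism behind Lemma 1, using only the assumptions on $${f}$$ stated above: on the efficient frontier, the flexible agent's problem (3) reduces to a one-dimensional choice of $${b},$$ with first-order condition

```latex
\max_{b \in [\underline{b},\,1]} \; b + f(b)\,s
\quad\Longrightarrow\quad
f'\!\left(b^{*}(s)\right) = -\frac{1}{s}.
```

Because $${f}$$ is strictly concave with $$f'(\underline {b}) = 0$$ and $${f^\prime }(1) = - \infty,$$ as $$s \to 0$$ the condition forces $$b^{*} \to 1$$ and $$f^{*} \to 0$$ (the riskless technology), while as $$s \to \infty$$ it forces $$b^{*} \to \underline {b}$$ and $$f^{*} \to \bar K.$$ Flexible workers thus migrate toward safe jobs in bad times and risky jobs in good times, which is the source of the volatility ordering in the lemma.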
Workers and firms are organized as follows. A worker can choose either to work for himself and produce the consumption good, or he can choose to “sell” his production to a firm and earn a wage instead. Workers are also owners: they are free to invest in firms through the capital markets and consume any dividend payments. In equilibrium, markets must clear; all firms must attract enough investment capital to fulfill their wage obligations. Finally, we assume that all agents are infinitely lived, with constant relative risk-aversion (CRRA), risk-aversion coefficient $$\gamma \gt 0,$$ and expected utility of consumption:
(7)
$${U_i}(t) = {E_t}\left[ {\int_t^\infty {{e^{ - \rho (s - t)}}} u({c_s})ds} \right].$$
Here,
(8)
$$u\left( c \right) = \left\{ {\begin{array}{*{20}{c}} \hfill {\log \left( c \right),} & \hfill {\gamma = 1,} \\ \hfill {\frac{{{c^{1 - \gamma }}}}{{1 - \gamma }},} & \hfill {\gamma \ne 1.} \\ \end{array}} \right.$$
3. Equilibrium
We begin by deriving the complete market Pareto optimal equilibrium and then explain how this equilibrium can be implemented. Because $${A_t}{K_{tot}}$$ maximizes total output, any Pareto optimal equilibrium must have this output.
Figure 1
Average Production of Flexible and Inflexible agents, $$K_{F}(s)$$ and $$K_{I}(s)$$: Because $${f}$$($${b}$$) is concave, flexible workers are always more productive than the average inflexible worker. Of course, in each state there is always an inflexible worker who is as productive as the flexible workers are.
3.1 Complete markets competitive outcome
Under the complete markets assumption, a representative agent with utility $${u_{r}}$$ exists, such that the solution to the representative agent problem is identical to the solution of the multi-agent problem. Moreover, all agents have constant relative risk aversion (CRRA) utility functions with the same $${\rm \gamma},$$ so $${u_{r}}$$ is also of the CRRA form, with the same $${\rm \gamma}.$$ Thus, in a complete market equilibrium, the value of any asset generating instantaneous consumption flow $$\delta ({s_t},t)dt,$$ is:
(9)
$$\begin{array}{c} P\left( {{s_t}} \right) = \frac{1}{{{u'_r}\left( {{A_t}{K_{tot}}\left( {{s_t}} \right)} \right)}}E\left[ {\int_t^\infty {{e^{ - \rho \left( {\tau - t} \right)}}{{u'}_r}} \left( {{A_\tau }{K_{tot}}\left( {{s_\tau }} \right)} \right)\delta \left( {{s_\tau },t} \right)d\tau } \right] \\ = {K_{tot}}{\left( {{s_t}} \right)^\gamma }E\left[ {\int_t^\infty {{e^{ - \hat \rho \left( {\tau - t} \right)}}} {K_{tot}}{{\left( {{s_\tau }} \right)}^{ - \gamma }}\delta \left( {{s_\tau },t} \right)d\tau } \right], \\ \end{array}$$
where $$\hat \rho = \rho + \gamma r.$$ Hence, the total value of human capital of all agents of each type (their total wealth) at time $$t=0$$ is:
(10)
$${W_I} \equiv \alpha {A_0}{K_{tot}}{\left( {{s_0}} \right)^\gamma }E\left[ {\int_0^\infty {{e^{ - \left( {\hat \rho - r} \right)\tau }}{K_{tot}}} {{\left( {{s_\tau }} \right)}^{ - \gamma }}{K_I}\left( {{s_\tau }} \right)d\tau } \right],$$
(11)
$${W_F} \equiv \left( {1 - \alpha } \right){A_0}{K_{tot}}{\left( {{s_0}} \right)^\gamma }E\left[ {\int_0^\infty {{e^{ - \left( {\hat \rho - r} \right)\tau }}{K_{tot}}} {{\left( {{s_\tau }} \right)}^{ - \gamma }}{K_F}\left( {{s_\tau }} \right)d\tau } \right].$$
Any Pareto optimal equilibrium features perfect risk sharing; all agents’ consumption across states have the same ordinal ranking. Moreover, because of the CRRA assumptions, it is well known that a stronger result applies in our equilibrium; all agents’ ratio of consumption across any two states is the same. In other words, every agent consumes the same fraction of total output in every state:
(12)
$${c_I}\left( {s,t} \right) = \eta {A_t}{K_{tot}}\left( s \right) = \eta {A_t}\left( {\alpha {K_I}\left( s \right) + \left( {1 - \alpha } \right){K_F}\left( s \right)} \right)$$
(13)
$${c_F}\left( {s,t} \right) = \left( {1 - \eta } \right){A_t}{K_{tot}}\left( s \right) = \left( {1 - \eta } \right){A_t}\left( {\alpha {K_I}\left( s \right) + \left( {1 - \alpha } \right){K_F}\left( s \right)} \right)$$
where $${c_I}$$ and $${c_F}$$ are the aggregate consumption of all the inflexible and flexible agents, respectively, and $${\rm \eta}$$ is the fraction of the total output consumed by all the inflexible agents. Given that the agents can trade their human capital, from the budget constraint at time 0 it follows that,
(14)
$$\eta = \frac{{{W_I}}}{{{W_I} + {W_F}}}.$$
We can also view $$\eta$$ as a function of the initial state, $$\eta (s_0),$$ and since the consumption of a flexible agent in no state is less than the consumption of an inflexible agent, it immediately follows that $${\rm \eta}$$ is bounded above by $${\rm \alpha}.$$ An identical argument implies that $${\rm \eta}$$ is bounded below by the inflexible agent’s consumption fraction in the state where his share of productivity is minimized, which must occur either when $${s=0}$$ or $$s = \infty.$$ Since $$\frac{{{K_I}}}{{{K_I} + {K_F}}}$$ is continuous, and decreasing for large $${s},$$ it follows that so is $${\rm \eta},$$ and that if we define $${s^*}\overset{\text{def}}=\min \{ \arg {\max_{{s_0}}}\eta ({s_0})\},$$ then $${s^*} \lt \infty.$$ The wealth share of the inflexible agent is thus maximized at $$s^\ast,$$ and we denote the wealth share at this point by $$\eta^\ast \equiv \eta(s^\ast).$$ Finally, as the state variable tends to infinity, both types of agents’ productivity converge to a linear function of the state variable so the value of insurance becomes negligible and the equilibrium converges to one with no risk sharing where each agent consumes what he produces.
To solve explicitly for the equilibrium requires computing the expectation in (11), which can be accomplished using standard techniques from dynamic programming:
Proposition 1.
The price, $$P(s,t),$$ of an asset that pays dividends $$\delta(s,t)$$ satisfies the PDE:
(15)
$$\begin{array}{*{20}{c}} {{P_t} + \left( {\mu \left( s \right) - \gamma R\left( s \right)\sigma {{\left( s \right)}^2}} \right){P_s} + \frac{{\sigma {{\left( s \right)}^2}}}{2}{P_{ss}}} \hfill \\ { - \left( {\hat \rho + \gamma \mu \left( s \right)R\left( s \right) - \frac{{\sigma {{\left( s \right)}^2}}}{2}\gamma \left( {\gamma + 1} \right)R{{\left( s \right)}^2} + \frac{{\sigma {{\left( s \right)}^2}}}{2}\gamma T\left( s \right)} \right)P + \delta \left( {s,t} \right) = 0,} \hfill \\ \end{array}$$
where:
(16)
$$\begin{array}{*{20}{c}} {R\left( s \right) = \frac{{{{K'_{tot}}}\left( s \right)}}{{{K_{tot}}\left( s \right)}},} & {and} & {T\left( s \right) = \frac{{{K''_{tot}}\left( s \right)}}{{{K_{tot}}\left( s \right)}}.} \\ \end{array}$$
An immediate implication of Proposition 1 is that the instantaneous risk-free interest rate is captured by the term in front of $${P}$$:11
(17)
$${r_s} \equiv \hat \rho + \gamma \mu \left( s \right)R\left( s \right) - \frac{{\sigma {{\left( s \right)}^2}}}{2}\gamma \left( {\gamma + 1} \right)R{\left( s \right)^2} + \frac{{\sigma {{\left( s \right)}^2}}}{2}\gamma T\left( s \right).$$
In general, we will need to solve (15) numerically, which may be nontrivial because it is defined over the whole of the positive real line, $$s \in {\mathbb {R}_ + }.$$ It is also not a priori clear what the boundary conditions are either at $${s=0},$$ or at $$s = \infty$$ where $${P_s}$$ may become unbounded. We follow Parlour, Stanton, and Walden (2012) and avoid these issues by making the transformation, $$z\overset{\text{def}}=\frac{s}{{s + 1}}$$ to get:
Proposition 2.
The price, $$P(s,t),$$ of an asset that pays dividends $$\delta (s,t),$$ where $$\delta (s,t) \le c{e^{rt}}{K_{tot}}{(s)^\gamma }$$ for some positive constant $${c},$$ and $$t \lt T$$ is:
$$P\left( {s,t} \right) = {K_{tot}}{\left( s \right)^\gamma }Q\left( {\frac{s}{{s + 1}},t} \right),$$
where $$Q:[0,1] \times [0,T] \to {\mathbb {R}_ + }$$ solves the PDE:
(18)
$$\begin{array}{l} {Q_t} + {\left( {1 - z} \right)^2}\left( {\mu \left( {\frac{z}{{1 - z}}} \right) - \sigma {{\left( {\frac{z}{{1 - z}}} \right)}^2}\left( {1 - z} \right)} \right){Q_z} \\ \quad + \frac{1}{2}{\left( {1 - z} \right)^4}\sigma {\left( {\frac{z}{{1 - z}}} \right)^2}{Q_{zz}} - \hat \rho Q + \delta \left( {\frac{z}{{1 - z}},t} \right){K_{tot}}{\left( {\frac{z}{{1 - z}}} \right)^{ - \gamma }} = 0, \\ \end{array}$$
and $$Q (z,T) = 0.$$
Without loss of generality, we assume that $$A_0 =1$$ going forward, since all variables are homogeneous of degree zero or one in $${A_0}.$$ All the numerical solutions in this paper were derived by solving (18).
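The paper does not spell out its numerical scheme, so the following is only an illustrative sketch: an explicit finite-difference step of (18) backward from the terminal condition $$Q(z,T)=0.$$ An explicit scheme also needs $$\Delta t$$ small relative to $$\Delta z^2$$ for stability; all names are ours.

```python
import numpy as np

def solve_Q(mu, sigma, delta, K_tot, gamma, rho_hat, T=50.0, nz=201, nt=20000):
    """Explicit finite-difference sketch for the PDE (18) on z in (0, 1),
    stepping backward in time from Q(z, T) = 0.  mu, sigma, K_tot are
    callables of s; delta is a callable of (s, t)."""
    z = np.linspace(0.0, 1.0, nz)
    dz = z[1] - z[0]
    dt = T / nt                         # must be small vs dz**2 / max diffusion
    zi = z[1:-1]                        # interior nodes
    s = zi / (1.0 - zi)                 # invert z = s / (s + 1)
    drift = (1.0 - zi) ** 2 * (mu(s) - sigma(s) ** 2 * (1.0 - zi))
    diff = 0.5 * (1.0 - zi) ** 4 * sigma(s) ** 2
    Q = np.zeros(nz)                    # terminal condition Q(z, T) = 0
    for n in range(nt):
        t = T - n * dt
        src = delta(s, t) * K_tot(s) ** (-gamma)
        Qz = (Q[2:] - Q[:-2]) / (2.0 * dz)
        Qzz = (Q[2:] - 2.0 * Q[1:-1] + Q[:-2]) / dz ** 2
        # Q(t - dt) = Q(t) + dt * (drift Q_z + diff Q_zz - rho_hat Q + src), from (18)
        Q[1:-1] += dt * (drift * Qz + diff * Qzz - rho_hat * Q[1:-1] + src)
        Q[0], Q[-1] = Q[1], Q[-2]       # degenerate boundaries: copy neighbors
    return z, Q
```

As a sanity check, with $$\mu = \sigma = 0,$$ constant $$\delta = 1,$$ and $$K_{tot} = 1,$$ (18) reduces to $$Q_t - \hat\rho Q + 1 = 0,$$ whose solution at $$t=0$$ is $$(1/\hat\rho)(1 - e^{-\hat\rho T}).$$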
3.2 Implementation
We now show how the complete market equilibrium can be implemented in an incomplete market economy that uses labor markets as an additional risk sharing tool. Obviously, because our objective is to provide insights on how actual markets, which are far from complete, share risk, it is important that we model both asset and labor markets realistically. Hence, we restrict agents’ and firms’ ability to write and trade contracts in the following ways:
Restriction 1.
• Binding contracts cannot be written directly between agents.
• Firms may enter into binding contracts with agents subject to the following restrictions: (1) Limited liability may not be violated. (2) Workers and equity holders cannot be required to make payments.
• Banks may enter into short-term debt contracts with agents and firms, paying an interest rate $${r_s}.$$
These restrictions reflect the practical limitations of markets. Because individualized binding contracts cannot trade in anonymous markets, a matching mechanism does not exist that would allow for widespread use of bilateral contracts as a risk-sharing device. Perhaps because there are far fewer firms than agents in the economy, making it easier to match firms and agents, we do observe binding bilateral labor contracts written between agents and firms. However, even these contracts are limited. Both equity and labor contracts are one-sided in the sense that typically firms commit to make payments to agents. Agents very rarely commit to make payments to firms and courts rarely enforce such contracts. The only condition under which agents can enter a contract that commits them to make payments is if they take a loan from a bank. Both firms and agents can either borrow or lend from a bank subject to the condition that in equilibrium the supply of loans must equal the amount of deposits. Thus, the span of traded assets consists of debt and equity. As we will see, there is no default in equilibrium so the interest rate banks pay is the risk-free rate.
We also impose the following restriction on the industries in which firms operate.
Restriction 2.
Firms are restricted to operate in only one industry. That is, all workers in a firm must have the same $${b}$$ and $${f}.$$
In reality, most firms operate in a single industry so most workers switch jobs when they change occupations.12 Although conglomerates do exist, even these firms typically operate in only a few industries. Our results would not change if we allowed firms to operate in a subset of industries. What we cannot allow is a firm that operates in every industry.
We assume that there is a (very) small cost to dynamic trading in capital markets:
Restriction 3.
Dynamic trading in equity markets imposes a utility cost of $$\epsilon = {0^ + }$$ per unit time.
This restriction captures the transaction costs of active trading, as well as the utility cost of designating time and effort to active portfolio rebalancing strategies. The condition implies that an equilibrium outcome that does not require active portfolio trading in asset markets dominates an equilibrium that is identical in real terms, but that does require active portfolio trading. For tractability, we do not impose any transaction costs of switching jobs, although it can be argued that such costs are also present, and in fact may be higher than the costs of dynamic trading in asset markets. In Section 5 we will evaluate the importance of this assumption by introducing a cost of switching jobs and argue that the main implications of our model are robust.
We now describe how the complete markets equilibrium can be implemented under these restrictions. At first glance it might appear as if asset markets are unnecessary. After all, we allow firms to write bilateral contracts with agents, so by serving as an intermediary, firms can effectively allow agents to write bilateral contracts between themselves. For example, firms could hire both types of workers, pool their production, and reallocate it by paying wages equal to a constant fraction of total production. However, such contracts alone cannot implement the Pareto optimal equilibrium. The reason is that in such an equilibrium, although risk is efficiently shared conditional on production, total production is not maximized as flexible workers must switch industries to maximize their production. But the only way for the firm to pool production and reallocate it would be to extract a commitment of lifetime employment from flexible workers. Such a commitment is suboptimal. Because of the need for worker mobility, both labor and asset markets are required to implement the complete markets equilibrium.
To achieve the complete market equilibrium, all inflexible agents sign a binding employment contract with firms in the industry of their specialty that commits both parties to lifetime employment.13 Agents give up all their productivity and in return receive a wage equal to their Pareto optimal equilibrium allocation, $$\eta ({s_0}){A_t}{K_{tot}}(s),$$ in every future state $${s}.$$ Flexible agents either choose to work for themselves, or work for firms and earn wages equal to their productivity. In some states, inflexible wages will exceed productivity. Because firms cannot force investors to make payments, firms require capital to credibly commit to the labor contract. They raise this capital by issuing limited liability equity. In states in which wages exceed productivity, the firm uses this capital to make up the shortfall and does not pay dividends. For the moment, we restrict attention to states in which the capital in the firm is positive.
Flexible agents purchase the equity by borrowing the required capital from the bank. Firms then redeposit the capital in the bank (ensuring that the supply of deposits equals the demand for loans) and pay instantaneous dividend flows equal to:
$${A_t}\max \left( {\alpha {K_I}\left( s \right) + {C_t}{r_s} - \eta {K_{tot}}\left( s \right),0} \right),$$
where $${A_t}{C_t}$$ is the amount of capital owned by the firm at time $${t}$$ and $$\eta \equiv \eta(s_0).$$ Thus, flexible agents consume:
(19)
$${A_t}\left[ {\left( {1 - \alpha } \right){K_F}\left( s \right) + \max \left( {\alpha {K_I}\left( s \right) + {C_t}{r_s} - \eta {K_{tot}}\left( s \right),0} \right) - {C_t}{r_s}} \right],$$
where we assume (and later show) that flexible agents always choose to adjust their bank loans to match the capital firms deposit in the bank. Using (6), when dividends are positive, the term in square brackets in (19) becomes:
(20)
$$\left( {1 - \alpha } \right){K_F}\left( s \right) + \alpha {K_I}\left( s \right) + {C_t}{r_s} - \eta {K_{tot}}\left( s \right) - {C_t}{r_s} = \left( {1 - \eta } \right){K_{tot}}\left( s \right),$$
so flexible agents consume their complete market allocation and $$d C_t = 0.$$ Similarly, when dividends are zero we get:
(21)
$$\left( {1 - \alpha } \right){K_F}\left( s \right) - {C_t}{r_s} + \frac{{d{C_t}}}{{dt}}.$$
Now the stochastic change in firm capital equals the shortfall, that is,
$$d{C_t} = \left( {\alpha {K_I}\left( s \right) + {C_t}{r_s} - \eta {K_{tot}}\left( s \right)} \right)dt.$$
Substituting this expression into (21) gives:
(22)
$$\left( {1 - \alpha } \right){K_F}\left( s \right) - {C_t}{r_s} + \left( {\alpha {K_I}\left( s \right) + {C_t}{r_s} - \eta {K_{tot}}\left( s \right)} \right) = \left( {1 - \eta } \right){K_{tot}}\left( s \right),$$
so the flexible agent consumes his complete markets allocation in every state in which the firm’s capital is positive.
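The algebra in (20) and (22) is easy to check numerically: in both dividend regimes, the bracketed consumption in (19) collapses to $$(1-\eta)K_{tot}(s).$$ A sketch with arbitrary illustrative numbers:

```python
def flexible_consumption(K_I, K_F, C, r_s, eta, alpha):
    """Term in square brackets of Equation (19): flexible agents' consumption
    per unit of A_t, given average productivities K_I and K_F at the
    current state."""
    K_tot = alpha * K_I + (1.0 - alpha) * K_F
    shortfall = alpha * K_I + C * r_s - eta * K_tot
    if shortfall > 0.0:
        # positive dividends, Equation (20): firm capital is unchanged
        return (1.0 - alpha) * K_F + shortfall - C * r_s
    # zero dividends, Equation (22): capital falls at rate dC/dt = shortfall,
    # and the agents reduce their bank borrowing one-for-one
    return (1.0 - alpha) * K_F - C * r_s + shortfall
```

Both branches return $$(1-\eta)K_{tot}$$ exactly, which is the point of (20) and (22).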
Finally, consider the first time that either the value of the firm drops to zero or the firm’s capital drops to zero. In such a state, the firm can raise additional capital by issuing new equity (either by repurchasing existing equity for zero and issuing new equity to raise capital, or if the equity is not worth zero, issuing new equity at the market price). Hence by always issuing new capital in this state, the firm can ensure that neither its capital, nor its value, will drop below zero and that it never pays negative dividends. Thus, in this equilibrium both agents always consume their complete markets allocation, which is Pareto optimal. This implies that flexible agents cannot be better off by following a different borrowing policy, justifying our assumption that they will always choose to borrow the amount firms deposit in the bank. Moreover, since this outcome implies a passive investment strategy for inflexible, as well as flexible, agents, this equilibrium implementation is optimal under the assumption of a small but positive cost of active rebalancing.14
Proposition 3 summarizes these results.
Proposition 3.
The following implementation leads to the complete market Pareto efficient outcome:
• Flexible workers either work for themselves or for a firm, which pays the instantaneous wage $$w_F = A_t K_F (s_t).$$
• Inflexible workers work for publicly traded firms, which pay instantaneous wages equal to a constant multiple of aggregate production. In aggregate, firms pay the inflexible wage:
$${w_I} = \eta {A_t}{K_{tot}}\left( {{s_t}} \right).$$
• In states in which inflexible productivity plus interest on bank deposits exceeds wages, firms pay dividends equal to:
(23)
$${A_t}\left[ {\alpha {K_I}\left( s \right) + {C_t}{r_s} - \eta {K_{tot}}\left( s \right)} \right],$$
and retain capital $$A_t C_t$$ with $$dC_t = 0.$$
• In states in which inflexible productivity plus interest on bank deposits does not exceed wages, firms pay no dividends and reduce capital to make wage payments:
(24)
$$d{C_t} = \left( {\alpha {K_I}\left( s \right) + {C_t}{r_s} - \eta {K_{tot}}\left( s \right)} \right)dt.$$
• The flexible workers own all the equity in the stock market. They pay for this equity by borrowing the capital from banks. Firms redeposit the capital in banks. Flexible workers optimally adjust their borrowing to ensure that at all times the supply of deposits equals the demand for loans.
• Whenever: (1) the price of the firm drops to zero, the firm raises new capital by repurchasing old equity for nothing and issuing new equity or (2) the amount of capital drops to zero, the firm raises new capital by issuing new equity at the market price.
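The fourth and fifth bullets imply simple capital dynamics that can be simulated along any given state path with an Euler step; a sketch (all names are ours, and the state path is taken as given):

```python
def simulate_capital(s_path, dt, C0, eta, alpha, K_I, K_F, r_of_s):
    """Euler sketch of firm capital C_t under Proposition 3: capital is
    flat when dividends are positive and is drawn down at the rate in
    Equation (24) when wages exceed productivity plus interest."""
    C = C0
    path = [C0]
    for s in s_path:
        K_tot = alpha * K_I(s) + (1.0 - alpha) * K_F(s)
        shortfall = alpha * K_I(s) + C * r_of_s(s) - eta * K_tot
        if shortfall < 0.0:
            C += shortfall * dt     # Equation (24)
        path.append(C)
    return path
```

In a state where wages exceed productivity plus interest income, the path declines monotonically until the state improves or new equity is issued.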
There are two important distinguishing characteristics of this solution. First, it features limited capital market participation because only flexible workers participate in capital markets. Indeed, because job mobility precludes flexible workers from sharing risk in labor markets, they must participate in capital markets for any risk sharing to take place. Without understanding the importance of the labor market, one might naïvely look at inflexible workers’ wealth and conclude that because this wealth is not traded in asset markets, they would be better off using asset markets to hedge some of this exposure. But, in equilibrium inflexible workers choose not to further hedge their human capital risk exposure because it is not beneficial. In addition, flexible workers choose to hold equity, not because of a desire to hedge (they choose to increase the riskiness of their position) but because of the compensation they receive in terms of a high equity risk premium.
The implication, that inflexible workers choose not to participate in markets, is consistent with one of the most robust findings in the literature—that wealth, education, and intelligence are positively correlated with stock market participation (see Mankiw and Zeldes 1991; Grinblatt, Keloharju, and Linnainmaa 2011). Clearly, flexible workers are wealthier in our model, but more importantly, if productive flexibility derives from education or intelligence, then they are likely to have higher IQ scores and be better educated. In fact, Christiansen, Joensen, and Rangvid (2008) show that the degree of economics education is causally (positively) related to stock market participation. They interpret this result as evidence that non-participation derives from educational barriers to entry. But their results are also consistent with flexibility. Not all education provides productive flexibility so we would expect to see variation in the type of education and stock market participation. Their study clearly documents this variation. Finally, note that non-participation in capital markets implies that inflexible workers also do not hold bonds, that is, they choose not to save. This result might help to explain the low savings rate observed in the U.S.; the reason workers choose not to save is that their labor contracts effectively do the saving for them.15
Another implication of limited stock market participation is that inflexible workers’ consumption is financed solely from their wage income. In the neoclassical model, the consumption of any optimizing agent should price assets, so this observation implies that inflexible workers’ wages should explain asset returns. Because wages are measured much more accurately than consumption, the implication of our model is that the wage income of workers who do not switch jobs should do a better job explaining asset returns than their consumption. In contrast, aggregate wages, which include wages of both flexible and inflexible workers, are considerably less informative about expected returns, in line with the findings of Fama and Schwert (1977). Moreover, since aggregate wages are $${w_F} + {w_I} = {A_t}\left( {\left( {1 - \alpha } \right){K_F} + \eta {K_{tot}}} \right),$$ whereas total instantaneous dividends are $${A_t}\left( {\alpha {K_I} + {C_t}{r_s} - \eta {K_{tot}}} \right),$$ it is clear that the relationship between aggregate wages and stock returns varies over the business cycle and can be negative in some states of the world.
The second distinguishing characteristic of our solution is that firm equity can be thought of as an option-like claim on total consumption. We therefore expect the volatility of equity returns to exceed the volatility of total consumption. Because we do not have idiosyncratic risk in our setting, this volatility imparts risk: equity is considerably more risky than total consumption.
That equity can be viewed as an option is well known. However, normally this insight is derived using financial leverage. In our case, the firm has no debt; indeed, it actually holds cash. In a standard setting, this would mean that equity would not have option characteristics; in fact, because of the cash, equity would be less risky than the firm’s assets. In our setting, it is not financial leverage that gives equity option-like characteristics, but the operating leverage resulting from wage commitments. Notice that this operating leverage is considerably more risky than the typical kind of operating leverage studied in the literature. Typically, firms have the option to shut down. When a firm is losing money it can reduce its scale or shut down altogether. However, in our case firms optimally choose to give up this option—they commit to continue to pay wages even when, ex post, the value maximizing decision would be to shut down and pay out the remaining capital to equity holders. In other words, the firm cannot recoup its cash in bad states as the capital is already “owned” by the workers through their labor contracts. Effectively, investors choose to make their capital investments completely irreversible.
Because of this operating leverage, asset returns vary with the business cycle in a highly nonlinear fashion. This means that the unconditional link between real variables and asset returns can look quite weak even though they are instantaneously perfectly correlated. In fact, as we shall see, our results are in line with the findings in Duffee (2005), that consumption and equity returns are only weakly related in bad times, but are highly correlated in good times.
4. Asset Pricing Implications
Although the primary focus of our model is capital market participation, our equilibrium features novel asset pricing implications. Because flexible agents insure inflexible agents and use equity as the means to accomplish this transfer, equity is primarily an insurance contract in our model. This insurance imparts non-linearities in the price of equity. In this section we show how these non-linearities lead to characteristics that have the potential to at least partially explain some important asset pricing puzzles.
4.1 Parameterization
To study the non-linearities in equity prices, we must pick a set of values to parameterize the model. In this subsection we explain how we chose these parameters. It is important to appreciate that we are not picking the parameter set to show that the model can match important moments in the data. To expect a model as stylized as ours to explain all the important moments in the data is naïve. To begin with, firms in our model consist exclusively of labor; they have no physical capital. Nor are these firms levered; indeed, they hold cash. Agents all have the same utility function (CRRA) and are able to risk share perfectly. Labor markets are frictionless and agents have no alternative sources of wealth. None of these assumptions are realistic and all are likely to affect the magnitudes of the key moments. Consequently, our goal for this section is much less ambitious. We simply show that the same forces that explain limited market participation give rise to non-linearities in asset prices reminiscent of some important asset pricing puzzles.
We begin by assuming that flexible workers make up one-third of the working population, implying that two-thirds of the population choose not to participate directly in capital markets, in line with the estimates reported in Haliassos and Bertaut (1995), Guiso et al. (2003), Hong, Kubik, and Stein (2004), Christiansen, Joensen, and Rangvid (2008), and Malmendier and Nagel (2011). In addition, $${f}$$ has the form:
(25)
$$f\left( b \right) = \frac{{\left( {1 - b} \right)\left( {3\overline K\sqrt {\overline K - b\overline K} - \left( {1 - b} \right)\sqrt {\overline K - b\overline K} + 2{{\overline K}^2}} \right)}} {\left(\sqrt {\overline K - b\overline K} + \left( {1 - b} \right)\right) \left( {\sqrt {\overline K - b\overline K} + \overline K} \right)},\quad b \in \left[ {1 - \overline K,1} \right],$$
implying that the total production of flexible agents is:
(26)
$${K_F}\left( s \right) = 1 + \overline K\frac{{{s^2}}}{{s + 1}}.$$
We assume that flexible workers’ limiting productive sensitivity to the state variable, $$\overline K,$$ is 2.3 and that the average inflexible worker has $$\bar {b} = 0.16,$$ and $$\bar {f} = 1.28.$$ Thus,
(27)
$${K_I}\left( s \right) = 0.16 + 1.28s.$$
We plotted these two production functions in Figure 1. Note that (27) is also the production of the average or representative firm in the economy. Consequently, we define the market portfolio as the equity claim on this firm. These choices, together with the other parameter choices described below, imply that inflexible agents have approximately 50% of the wealth in the economy (recall that they make up 66% of the economy).
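For reference, (26) and (27) can be coded directly with the parameters above; consistent with Figure 1, flexible production exceeds average inflexible production at every state we check:

```python
K_BAR = 2.3                      # flexible limiting sensitivity (Table 1)

def K_F(s):
    """Aggregate flexible production, Equation (26)."""
    return 1.0 + K_BAR * s ** 2 / (s + 1.0)

def K_I(s):
    """Average inflexible production, Equation (27)."""
    return 0.16 + 1.28 * s
```

For large $$s,$$ $$K_F(s) \approx 1 + \overline K s$$ up to a bounded term, so flexible productivity grows with slope $$\overline K = 2.3$$ against the inflexible slope of 1.28.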
The state process, $${s},$$ evolves according to (1), with parameter values $$\theta = 0.003,$$ $$\sigma = 6\%,$$ and $$\bar {s} = 0.67.$$ The economy is thus in a recession when $$s \lt 0.67$$ and in a boom when $$s \gt 0.67.$$ With these parameters, the unconditional probability that $$0 \lt s_t \lt 2.5$$ is over 99%, so we focus on this range. We start the economy at $$s_0 = 0.8.$$ The long-term growth rate of the economy is $$r = 1.2\%,$$ with volatility of about 4%, which is in line with what was used in Mehra and Prescott (1985). We pick a relative risk aversion coefficient of 8.5, within the range Mehra and Prescott (1985, 154) consider reasonable, and impatience parameter $$\rho = 2\%.$$
The initial capital the firm raises is arbitrary in our model. Because we have an effectively complete asset market, the Modigliani–Miller proposition implies that the firm’s capital structure is irrelevant. Of course, in a world with frictions the amount of capital raised will be affected by a tradeoff between the benefits (e.g., lower transaction costs) and costs (e.g., increased taxes and agency costs). We pick a level of initial capital for the representative firm that ensures that it almost never needs to return to capital markets16 and to match the price volatility of the market, which we set to 15.4%. This leads to initial capital of $$C_0 = 0.98.$$ Table 1 summarizes these parameter choices.
Table 1
Parameter values
Variable Symbol Value
Average inflexible constant production $$\bar {b}$$ 0.16
Average flexible limiting production sensitivity $$\bar {K}$$ 2.3
Impatience parameter $$\rho$$ 2%
Risk aversion $$\gamma$$ 8.5
Long-term growth rate $$r$$ 1.2%
% of population inflexible $$\alpha$$ 67%
Inflexible variable production $$\bar {f}$$ 1.28
State variable volatility $$\sigma$$ 6%
Initial $$s$$ $$s_0$$ 0.8
Mean reversion speed $$\theta$$ 0.003
Long-term mean $$\bar {s}$$ 0.67
Initial capital $$C_0$$ 0.98
4.2 Implications for aggregate market
We calculate the equilibrium by solving (18) numerically.17 To compute $${W_i},$$ we set $$\delta = {K_i}(s)$$ for each agent type $$i \in \{ I,F\}$$ in (15). Table 2 summarizes this equilibrium. We then compute $${\rm \eta},$$ the inflexible agents’ wealth share, by solving (14) and get 51.2%.
Table 2
Equilibrium moment values at $${s_0}$$
Variable Symbol Value
Risk-free rate $$r_s$$ 3.8%
Firm expected return $$r_e$$ 8.4%
Firm volatility $$\sigma _p$$ 15.4%
Consumption volatility $$\sigma _c$$ 4.5%
Wealth fraction $$\eta$$ 51.2%
Probability $$s \lt 2.5$$ 99%
Unconditional correlation $$\rho _{pc}$$ 0.45
Equity premium $$r_e - r_s$$ 4.6%
Of course, since our model is a complete market with a diffusion risk structure, instantaneous asset pricing moments are defined by a standard stochastic discount factor relationship, with all the restrictions that these imply. For example, standard bounds on the instantaneous market Sharpe ratio hold in our model. Nevertheless, because of the time-varying operational leverage associated with the representative firms’ wage contracts, and the associated nonlinearities of dividend payments, the model can give rise to much larger unconditional asset pricing moments than the standard model. In this equilibrium, the risk-free rate is 3.8% and the firm’s expected return is 8.4%, leading to an equity risk premium of 4.6%. The model also delivers larger second moments. Market volatility is 15.4% whereas consumption volatility is only 4.5%. More interesting is the unconditional correlation between equity returns and consumption. Because this is a standard neo-classical model, consumption prices assets, so the instantaneous correlation between equity returns and consumption is either 1 or −1 (as we will shortly see, equity values can be decreasing in the state variable). However, the average correlation across all states is only 0.45, that is, the unconditional correlation (what empiricists typically measure) is substantially lower than the instantaneous correlation.
The instantaneous equity premium is $${r_e} - {r_s} = {\rho _{pc}}\gamma {\sigma _c}{\sigma _p}.$$ If we use unconditional moments to evaluate this expression, we get $$0.45 \times 8.5 \times 4.5\% \times 15.4\% = 2.7\% ,$$ which is quite a bit lower than the actual unconditional equity premium of 4.6%. This disparity occurs because the unconditional estimate of the correlation is not a good proxy for the actual variable of interest, the instantaneous correlation. The correlation between consumption and returns is different in expansions and contractions. The estimated correlation in our calibration conditional on being in a contraction ($$s \lt 0.67$$) is 0.12, whereas the estimated correlation conditional on being in an expansion ($$s \gt 0.67$$) is 0.99. This disparity is in line with the results in Duffee (2005) that the correlation between stock returns and consumption growth is low (about 0) in bad times, and high (about 0.6) in good times.
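The wedge between conditional and unconditional correlations is easy to reproduce with synthetic data: when returns load heavily on consumption shocks in booms but weakly in recessions, the pooled correlation sits strictly between the two conditional ones. A sketch (the loadings and noise level below are illustrative choices, not the model's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
s = rng.uniform(0.0, 2.5, n)              # sampled states
dc = 0.045 * rng.standard_normal(n)       # consumption growth shocks
boom = s > 0.67

# Returns load strongly on consumption in booms and weakly in recessions,
# where an independent shock dominates -- mimicking the state-dependent
# exposure created by the firms' wage commitments.
ret = (np.where(boom, 3.0, 0.2) * dc
       + np.where(boom, 0.0, 0.15) * rng.standard_normal(n))

def corr(x, y):
    return float(np.corrcoef(x, y)[0, 1])

corr_recession = corr(ret[~boom], dc[~boom])   # low
corr_boom = corr(ret[boom], dc[boom])          # essentially 1
corr_all = corr(ret, dc)                       # strictly in between
```

An empiricist who estimates only `corr_all` understates the exposure that actually prices risk in booms, which is exactly the mechanism in the text.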
The value of the market (that is, the equity claim on the representative firm) is equal to the amount of cash held plus the value of inflexible worker average productivity minus the value of the wage commitment.18 As the top panel of Figure 2 demonstrates, this price function is highly nonlinear in $${s}.$$ Its option-like qualities are self-evident. The function is insensitive to the state variable for low values of $${s}$$; it is actually slightly decreasing for very low $${s}.$$ It then increases rapidly in a convex fashion as $${s}$$ moves from moderate to high values.
Figure 2
Value of equity (upper panel) and total dividends (lower panel) as a function of $${s}$$
The vertical dashed line marks $$\bar s,$$ the average value of the state variable.
To understand the dynamic behavior of the market across the business cycle, note that as the state of the economy worsens, dividend payouts decrease, reflecting the fact that wages exceed productivity. Consequently, the value of the firm drops sharply. When the economy deteriorates further, productivity continues to drop but agents’ propensity to consume does not drop by as much. The reason is that because the state variable is mean reverting, agents understand that the current state is temporary. They therefore anticipate that the economy is likely to improve and, because they want to smooth consumption, they have a propensity to borrow to consume. In equilibrium, net borrowing is zero, so interest rates have to rise to clear markets, as is evident in Figure 3. Because the firm holds cash, this increase in interest rates generates interest income for the firm. Eventually, the increase in interest rates dominates the decrease in worker productivity, so that dividends begin to increase again, as is evident in the lower panel of Figure 2. The result is that for very low values of the state variable the value of the firm is actually decreasing in $${s}.$$19 As a result, the instantaneous correlation between consumption and equity returns flips from 1 to −1, giving rise to the effects noted above. Clearly, our correlation results depend critically on the presence of this region, which in turn derives from mean reversion in the state variable. Without this mean reversion the value of equity would not be decreasing in $${s}$$ at low values. Note that $${D}$$ and $${P}$$ are both nonnegative within the range of the plots, and therefore there is no need for refinancing within this range.20
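The mean-reverting mechanism behind this dynamic can be illustrated with a toy simulation. This is only a sketch: the drift $$\theta (\bar s - s)$$ with a constant diffusion coefficient is an assumption for illustration, not the model's actual state dynamics (in particular, the model's $$\sigma(s)$$ is state dependent).

```python
# Toy Euler simulation of a mean-reverting state,
# ds = theta*(s_bar - s) dt + sigma dW.  The constant sigma and the
# parameter values are assumptions for illustration only; in the model,
# sigma(s) is state dependent.
import random

random.seed(0)
theta, s_bar, sigma, dt = 0.3, 0.67, 0.15, 1 / 252

s = 0.2  # start in a contraction (s < s_bar = 0.67)
for _ in range(252 * 10):  # ten "years" of daily steps
    s += theta * (s_bar - s) * dt + sigma * dt**0.5 * random.gauss(0, 1)
    s = max(s, 0.0)  # keep the toy state nonnegative

print(f"state after 10 years: {s:.2f} (long-run mean {s_bar})")
```

Starting from a bad state, the drift pulls the simulated path back toward $$\bar s$$; it is this anticipated improvement that generates the borrowing motive, and hence the higher interest rates, in the text.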
Figure 3
Expected return: Expected return on the firm’s stock (solid blue line) and the (short) risk free rate (dotted red line). The vertical dashed line marks $$\bar s,$$ the average value of the state variable.
The overall message is that whereas in normal times the performance of the stock market is well aligned with the state of the economy, in the bad states of the world the link between the economy and equity is weaker. Figure 3 plots the expected return of the market, together with the risk-free rate. The most striking element in the plot is the difference in behavior of the equity risk premium in expansions and contractions. When $$s \gt \bar s,$$ the equity risk premium is decreasing in the state variable, which makes intuitive sense: equity is an insurance contract, and for high values of the state variable the insurance contract is not very risky. In contractions, the equity risk premium continues to increase as the state deteriorates, reaching a maximum of about 12%. But then, as the value of equity becomes less sensitive to the state of the economy, the risk premium begins to drop and, for low enough values of $${s},$$ actually becomes negative. At the mean of the state variable, $$s = \bar s = 0.67,$$ expected stock returns are about 10%, which is close to the unconditional expected return of 8.4%. However, at $$\bar s,$$ the equity risk premium is over 9%, which is substantially higher than the unconditional equity premium of 4.6%.
It is informative to compare our results with those in Danthine and Donaldson (2002). Although Danthine and Donaldson (2002) also model the effect of labor markets on asset prices, they are unable to generate a significant market risk premium in a model without frictions.21 The reason is that in their model, investors are not workers and hence have a precautionary reason to invest. In our model, flexible workers must be induced to invest, that is, take on additional risk in equilibrium. That implies that the return on equity (the means by which flexible workers are enticed to take on this risk) has to be high enough to induce this behavior. This is a key insight in our model. Rather than a place to hedge risk and smooth consumption, asset markets are a place where investors are enticed to take on extra risk.
Because we do not have idiosyncratic risk in our model, an increase in the risk premium must be associated with an increase in volatility. Figure 4 confirms this insight. The volatility of the firm initially explodes in bad states, in line with the empirical evidence cited in the introduction. But then it actually starts decreasing, reflecting the fact that the value of equity becomes less sensitive to the state variable (see Figure 2, top panel), until it reaches zero close to $$s = 0.2.$$ For even lower $${s},$$ price is decreasing in $${s},$$ and so volatility increases. Finally, volatility decreases again as $${s}$$ approaches 0 and the volatility of $${ds}$$ becomes negligible, once again decreasing equity volatility. Thus, the volatility of the firm is a nonlinear function of $${s},$$ with regions of very high volatility in bad states of the world. More importantly, the same is not true of the volatility of consumption growth (red dashed line in Figure 4), which is virtually unaffected by the level of $${s}.$$ Our model therefore delivers an almost complete disconnect between consumption volatility and asset volatility. In line with the empirical evidence, consumption growth volatility is low and virtually unaffected by the business cycle, yet equity volatility is high and much more sensitive to the business cycle.
Figure 4
Volatility: The blue solid curve is firm volatility ($$\frac{{V'}}{V}\sigma (s)$$), the red dashed curve is consumption growth volatility ($$\frac{{K_{tot}'}}{{K_{tot}}}\sigma (s)$$), and the grey dotted line is interest rate volatility. The vertical dashed line marks $$\bar s,$$ the average value of the state variable.
Finally, it makes sense to consider how the equilibrium changes as a function of the initial state. Figure 5 plots the equilibrium share of total consumption of inflexible agents, $$\eta ({s_0}).$$ What is clear is that for most values of the state variable, the equilibrium consumption share is insensitive to the initial state. Moreover, when the inflexible share of total productivity is higher than $$\eta ({s_0}),$$ inflexible agents pay flexible agents an “insurance premium,” whereas flexible agents pay inflexible agents an “insurance payout” when the inflexible share is lower than $$\eta ({s_0}).$$ The typical case is thus one where flexible agents insure inflexible agents against low states and inflexible agents pay an insurance premium in high states. The figure also shows the initial aggregate instantaneous relative productivity, $$\frac{{{K_I}}}{{{K_{tot}}}},$$ of inflexible agents. Note that $$\frac{{{K_I}}}{{{K_{tot}}}}$$ reaches its maximum at $${s_0} \approx 1.2,$$ whereas $${\eta ^ * } \approx 0.54$$ occurs at $${s_0} = {s^ * } \approx 2.5.$$ The reason $${\rm \eta}$$ is maximized to the right of the point where the inflexible agent’s relative productivity is maximized is that the value of insurance is greater in the bad states than in the good states. Hence, inflexible agents are willing to pay more for insurance at the point where their production share is maximized than at points to the right, so $${\rm \eta}$$ continues to increase.
Figure 5
Sensitivity of the Equilibrium to the Initial State: The blue solid curve is the equilibrium share of wealth of the inflexible worker at $${t=0},$$$${\rm \eta},$$ as a function of the initial $${s_0}.$$ The red dashed curve is the inflexible agent’s aggregate share of total productivity at time 0.
4.3 Comparative statics
Since equilibrium production, and thereby the other equilibrium properties of the model, is nonlinear, it is difficult to draw global inferences about the sensitivity of the equilibrium to the parameter choices, but it is straightforward to do a local, comparative-static analysis. In Table 3, we show the elasticity of equilibrium variables with respect to the parameters of the model. Each column represents the effect of a 1% change in the parameter listed in the column head on the equilibrium variable listed in the row head. For example, the top left element is $$\frac{{\partial {r_s}/{r_s}}}{{\partial \bar b/\bar b}} = 0.472,$$ that is, a 1% increase in $$\bar b$$ (from 0.16 to 0.1616) leads to approximately a 0.47% increase in $${r_s}$$ (from 3.8% to 3.817%). We stress that the approximation is only valid for small changes of the parameters.
Table 3
Comparative statics
$$\bar b$$ $$\bar f$$ $$K$$ $$\sigma$$ $$r$$ $$\alpha$$ $$\gamma$$ $$\theta$$ $$\bar s$$
$${r_s}$$ 0.472 −1.172 −0.793 −2.505 2.684 −2.116 −1.722 −0.326 −0.160
$${\sigma _p}$$ −0.419 0.939 −0.155 1.239 −0.669 1.431 0.613 0.082 −0.023
$${\sigma _c}$$ −0.113 0.279 −0.189 0.596 0.505 0.0845 0.046
$${\rho _{pc}}$$ 0.0004 −1.849 −0.046 −3.571 1.956 −3.789 −1.849 0.079 0.820
$${r_e}$$ 0.249 −0.733 −0.274 −0.919 1.203 −1.699 −0.536 0.108 0.079
$${r_e} - {r_s}$$ 0.065 −0.370 0.155 0.391 −0.021 −1.438 0.425 0.467 0.277
Elasticity of equilibrium variables in a local neighborhood of equilibrium. Each column represents the effect of a 1% change in the parameter listed in the column head on the equilibrium variable listed in the row head. For example, the top left element is $$\frac{{\partial {r_s}/{r_s}}}{{\partial \bar b/\bar b}} = 0.472,$$ that is, a 1% increase in $$\bar b$$ (from 0.16 to 0.1616) leads to approximately a 0.47% increase in $${r_s}$$ (from 3.8% to 3.817%).
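To make the table's units concrete, an elasticity can be applied mechanically; a small sketch (the helper `apply_elasticity` is ours, not the paper's):

```python
# Reading Table 3: each entry is a local elasticity, so a small relative
# change in a parameter maps, to first order, into a relative change in
# the equilibrium quantity.
def apply_elasticity(base_value, elasticity, pct_change_in_param):
    """First-order effect of a small relative parameter change."""
    return base_value * (1 + elasticity * pct_change_in_param)

r_s = 0.038  # baseline risk-free rate of 3.8%
new_r_s = apply_elasticity(r_s, 0.472, 0.01)  # 1% increase in b_bar
print(f"{new_r_s:.5f}")  # 0.03818: r_s moves from 3.8% to about 3.818%
```

As the table note stresses, this first-order mapping is only valid for small parameter changes.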
The equilibrium is most sensitive to changes in the volatility of the state variable, $${\rm \sigma},$$ and the fraction of inflexible workers, $${\rm \alpha}.$$ Given that the equilibrium is determined by the price that risk-averse agents are willing to pay for consumption in different states of the world, it is not surprising that the sensitivity with respect to $${\rm \sigma}$$ is high. Similar sensitivity arises in the standard Lucas model, where, for example, the market price of risk is a quadratic function of $${\rm \sigma}.$$ Similarly, the sensitivity with respect to $${\rm \alpha}$$ arises because $${\rm \alpha}$$ determines the amount of insurance that is provided in bad states by the flexible agents. The equilibrium is relatively less sensitive to the technology parameters ($$\bar b,\bar f,$$ and $${K}$$) and the parameters governing the mean reversion ($${\theta}$$ and $$\bar s$$).
It is interesting to compare the sensitivity of equity volatility, $${\sigma _p},$$ and consumption volatility, $${\sigma _c},$$ to small changes in the parameters. Because our equilibrium features complete risk sharing, it is perhaps not surprising that consumption volatility is not very sensitive to changes in the underlying parameters. What is perhaps more surprising is that equity volatility is considerably more sensitive. Note also that the equity risk premium is particularly sensitive to $${\rm \alpha},$$ the fraction of inflexible agents. Obviously, the ratio of flexible to inflexible agents is a critical determinant of the cost of insurance.
5. Robustness
Although our model is stylized and many of our assumptions are restrictive, we demonstrate in this section that most of our key assumptions can be relaxed without changing the main conclusion in the paper—that limited capital market participation is a consequence of the role labor markets play in sharing risk.
5.1 Idiosyncratic risk
We restrict attention to systematic uncertainty not because we believe idiosyncratic uncertainty is unimportant (it is surely more important early in a worker’s life than systematic uncertainty), but because it is straightforward to see that introducing idiosyncratic uncertainty does not alter our conclusions. In the presence of idiosyncratic worker, and thus firm, risk, it is clear that the firm can still offer the same Pareto optimal compensation (based on aggregate production) to its workers. Because idiosyncratic risk is not priced in the market, the firm’s value-maximizing strategy stays the same in the presence of such risk. The main difference is that the firm may have to refinance at another point in time (because its value may reach zero because of idiosyncratic shocks) and there will be an additional source of cross-sectional variation.
5.2 Risk sharing and non-participation
In our model all agents optimally share risk. Because inflexible workers achieve this solely in the labor market, they do not need to participate in asset markets and hence we are able to derive the stark prediction of complete risk sharing even with significant non-participation. In practice, agents do appear to face uninsurable risks and, as a consequence, the evidence for complete risk-sharing is weak.22 In light of these facts, it is natural to question whether our model actually does explain limited market participation.
It is important to appreciate that our results do not depend on complete risk sharing, merely that for many participants asset markets provide no more opportunity to share risk than labor markets. Here, the evidence is not definitive. For example, Guvenen (2007) rejects perfect risk sharing for stockholders but cannot reject the hypothesis that non-stockholders share risk perfectly. Guvenen regards this finding as puzzling, but it clearly supports our insight that the reason non-stockholders choose not to invest is that they have already shared risk optimally in the labor market. In addition, Guiso, Pistaferri, and Schivardi (2005) demonstrate in a sample of Italian firms that wage contracts completely insure transitory idiosyncratic shocks to firm performance and partially insure permanent shocks.
Almost all the studies that reject complete risk sharing study the change in consumption that results from an idiosyncratic shock to income (e.g., Nelson 1994; Hayashi, Altonji, and Kotlikoff 1996). For the significant fraction of people who rely exclusively on their labor contracts for risk sharing, a (negative) idiosyncratic shock to income is by definition a risk that is not insured in the labor market. So the fact that the studies find evidence that individuals do not use asset markets to offset this risk is at least consistent with the idea that asset markets do not provide additional risk-sharing opportunities. Indeed, Cochrane (1991) is particularly informative on this question because that study does not use idiosyncratic shocks to income as a measure of idiosyncratic risk. Consistent with our insights, that study finds strong evidence of full insurance for one measure, temporary illness, suggesting that labor markets do in fact provide significant insurance opportunities to non-participants.
Other evidence that may appear to be inconsistent with our results comes from the studies that have found that the consumption patterns of participants and non-participants differ.23 Clearly, in our model the consumption of participants and non-participants is identical: both consume a fixed fraction of total production. But this is an artifact of the assumption of CRRA preferences. It is quite possible to have an economy with complete risk sharing without participants and non-participants having identical consumption dynamics. If we go outside of the CRRA setting, different covariances between individual consumption and asset returns arise because of wealth effects. Specifically, if flexible workers are richer, and also closer to risk neutral, their consumption covaries more with asset returns (since they take on proportionally more risk), even if there is perfect risk sharing. The same argument holds for the consumption volatility of different agents. Furthermore, if we go outside of the setting with diffusion processes, similar results are obtained for the instantaneous correlation of consumption of different agents.
Finally, note that because non-participants are, on average, less wealthy, there are also good reasons to expect their consumption to be measured considerably less accurately than participant consumption. For example, poorer individuals are more likely to pay for a larger fraction of their consumption (e.g., house cleaning services) using their own labor, implying that a greater portion of their consumption is not measured by consumption expenditures.24 Hence, the evidence in both Mankiw and Zeldes (1991) (who measure correlation between consumption growth and excess asset returns) and Vissing-Jorgensen (2002) (who measures the elasticity of intertemporal substitution) that participant consumption is better able to reconcile asset pricing anomalies, might simply be reflective of the greater degree of measurement error in non-participant consumption.
5.3 Commitment
In our model we allow agents to commit to lifetime employment with a single firm. In reality, although firms often commit to employ agents, agents rarely explicitly commit to stay with firms. However, if we introduce a cost to switching jobs, we can support our equilibrium even with a restriction on agent commitment. When there is a cost to switching jobs, workers will only choose to switch jobs when the benefits exceed the costs of switching. Because $$\eta \left( {{s_0}} \right)$$ is flat around $${s^ * }$$ (see Figure 5), only a small cost of switching jobs is required to ensure that inflexible agents will not have an incentive to switch jobs for most starting values of $${s},$$ making an explicit commitment unnecessary. For instance, in our example, if the cost of moving is greater than 5% of wealth, the representative inflexible worker will never choose to switch jobs because $$\eta = 0.512$$ and $${\eta ^ * } \approx 0.54.$$ The same argument is not true for flexible workers, as is evidenced by Figure 1. By switching jobs, flexible workers experience a substantial productivity increase, so they will continue to switch jobs, albeit less frequently.
To quantify how a small switching cost would affect our equilibrium, we can consider the problem in partial equilibrium by taking the pricing kernel of the friction free economy as exogenous, and characterizing the optimal choices of an agent who faces a cost to switch jobs. With a cost of switching of just 1% of net wealth, the expected time for the average inflexible agent to make his first job switch is 50 years. And because each switch brings the agent closer to $${\eta ^ * },$$ the second job switch takes even longer. Over a period of 500 years, we found that the inflexible agent switched an average of once every 150 years. Consequently, the cost to the firm in our equilibrium of not obtaining an explicit commitment to lifetime employment from the inflexible agent is trivial.
The introduction of this switching cost also does not affect the welfare or investment behavior of flexible agents. Figure 6 plots the effect of a switching cost of 1% on the behavior of flexible agents. The thick blue line marks the chosen $${b}$$ for a given level of the state variable $${s}.$$ Given any point on the thick blue line, the two lines labeled “Switch” show the minimum change in $${s}$$ that triggers a job switch. When a switch is made, a new $${b}$$ on the thick blue line is chosen at the new level of $${s}.$$ Because the benefits of switching are larger for flexible agents, their frequency of job switching is a factor of 10 higher than that of the average inflexible agent; the flexible agent switches about once every 15 years. Of course, this rate of job switching is much lower than in the equilibrium without switching costs. Nevertheless, this drop in the frequency of job switching hardly affects the welfare of flexible agents. The net decrease in wealth (i.e., the direct cost of switching as well as the cost of the resulting suboptimal job match) is less than 2% of the wealth of a flexible agent who faces no switching costs. More importantly, investment decisions are almost identical to decisions without frictions. The average dividend the flexible agent receives is only 0.4% lower than what he would receive if he did not face the switching cost. The reason is that switches, although rare, occur in exactly the states of the world where they are worth the most (i.e., in the bad states when insurance is very valuable and in the really good states when a flexible agent can choose an extremely productive industry). Not being able to switch in states where the value of switching is low is not worth much; hence, the effect of the switching cost on the flexible agent’s investment strategy is marginal.
Figure 6
Switching strategy of a flexible agent who faces switching costs of 1% of net wealth: The thick blue line corresponds to the chosen $${b},$$ as a function of $${s},$$ the state in which the switch occurred. For a given $${b},$$ the states within the switch-lines are states where the costs outweigh the benefits of a switch. When a switch-line is touched, the flexible agent switches to the $${b}$$ (on the thick blue line) that corresponds to the current state. The thin red line is the $${b}$$ chosen by a flexible agent who faces no switching costs.
Note that flexible agents optimally respond to the introduction of switching costs in two ways. First, they switch jobs far less often. Second, they choose different jobs than they would if there were no switching costs, as is evidenced in Figure 6 (thin red line). As the figure illustrates, flexible agents choose a safer industry (one with higher $${b}$$) when they face a switching cost. Because the agent facing frictions anticipates that she will not immediately switch again if the economy continues to deteriorate, she chooses to get extra insurance by picking a safer industry than an unrestricted agent.
Given the choice, inflexible workers would like to commit to firms. Consequently, in an economy in which explicit commitment is outlawed, inflexible workers have an incentive to increase the costs of switching and thereby implicitly commit to firms. Deferred compensation contracts (like pension funds and stock vesting periods) are a form of implicit commitment because they increase the cost of switching. In addition, many union contracts explicitly tie wages to seniority with the firm, making a job switch very costly and thereby implicitly committing workers to firms. Hence, the maintained assumption in our parameterization, that inflexible workers face the same costs of switching as flexible workers, is likely unrealistic as flexible workers likely face lower costs. In line with these observations, there is convincing empirical evidence suggesting that in fact a large fraction of workers do indeed implicitly commit to firms. For example, Hall (1982) finds that after an initial job search in which employees might work for short periods for different employers, half of all men then work the rest of their lives for a single employer.
5.4 Unemployment
One may question the relevance of a model in which firms choose to commit never to fire employees. Such a commitment is optimal in our model because workers always have positive productivity and so the model always features full employment. In practice, however, it is natural to expect that in some states of the world some inflexible workers may have negative productivity, that is, the value of their productivity is less than the effort expended. In such situations, it is optimal for the firm to fire such workers and pay them a severance package.
We also ignore the role of the government in sharing risk. Were we to include a government that supplied unemployment insurance, it would not be optimal for the firm to provide insurance in the unproductive states. Instead we would observe states in which the firm fires workers without severance and instead workers collect unemployment insurance. Clearly, the government would have to tax workers to finance this insurance, but so long as the government insurance is restricted to only a subset of states, there would still be a role for the firm to provide labor market insurance in the remaining states. Because risk is shared through a combination of government insurance and the labor contract, inflexible workers have little reason to participate in equity markets.
Finally, it is worth emphasizing that our model does not feature full insurance; all workers are worse off in bad states. Thus even with efficient risk sharing, the widespread economic suffering that is characteristic of a downturn is not inconsistent with our model. Both workers and investors suffer losses.
6. Conclusion
Our objective in this paper is to demonstrate the potential importance of explicitly modeling labor markets within the neoclassical asset pricing model. We show that under realistic assumptions that restrict the span of allowable contracts in both labor and asset markets, neither asset markets nor labor markets alone can share risk efficiently. Together, the two markets can share risk efficiently, and as a result the model has the potential to shed new light on some of the most important normative challenges faced by the neoclassical asset pricing model.
Our main contribution shows that when the two markets co-exist, a large fraction of agents will optimally choose not to participate in capital markets. These agents share risk solely in labor markets. Agents who do participate in equity markets ultimately bear the risk insured by the firm’s labor contracts, implying that equity is relatively risky because the price of market risk is a highly non-linear function of the state. As a result the model also delivers a very large disparity in the volatility of consumption and the volatility of asset prices, a time-varying dependence between consumption growth and asset returns, and a seemingly high market risk premium.
To many readers it may be surprising that our model can deliver a high equity risk premium in what is otherwise a standard neoclassical complete market model. What is important to appreciate is that there is a distinction in our model between the premium for bearing market risk and the premium for bearing consumption risk. In our model, like any neoclassical complete market model, the premium for bearing consumption risk is low. However, the premium for bearing market risk, that is, for investing in equity markets, is high and attributable to the operating leverage imparted by the insurance in the firm’s labor contracts. This operating leverage is higher than the operating leverage typically cited in the literature because the firm commits to operate even in states when the ex post value-maximizing strategy is to shut down. In effect, equity is an option on consumption, and hence the market risk premium is much larger than the risk premium for bearing consumption risk. We believe this mechanism offers one plausible explanation for the high operating leverage ratios needed to explain the high observed market risk premium.
A central feature of our paper is that equity holders insure workers. Although outside the scope of our model, one would expect workers to differ in their attitudes toward risk and consequently to use risk as one criterion for selecting where to work. If they do this, one might expect a clientele effect to result. Some firms will specialize in providing more insurance than others and hence attract more risk-averse workers. If this is the case, one would expect differences in job tenure across firms to explain firm riskiness.
A new insight in this paper is that inflexible workers just consume their wages; that is, the wages of inflexible workers are identical to their consumption and so should price assets. Because non-participants also do not switch jobs, labor mobility could be used to disentangle the wages of participants and non-participants.25 In light of the fact that wages are measured much more accurately than consumption, this insight opens up the possibility of using wage changes of workers who rarely switch jobs to test the consumption CAPM.
Appendix
Proof of Lemma 1.
We have:
$$Vol{\left( {\frac{{d{K_F}}}{{{K_F}}}} \right)^2} = {\left( {\frac{{{K'_F}}}{{{K_F}}}\sigma \left( s \right)} \right)^2}dt,$$
whereas:
$$Vol{\left( {\frac{{d{K_I}}}{{{K_I}}}} \right)^2} = {\left( {\frac{{{K'_I}}}{{{K_I}}}\sigma \left( s \right)} \right)^2}dt = {\left( {\frac{{\bar f}}{{\bar b + \bar f s}}\sigma \left( s \right)} \right)^2}dt.$$
From Lemma 2, $$\frac{{{K'_F}}}{{{K_F}}}$$ converges to 0 for small $${s},$$ whereas $$\frac{{\bar f}}{{\bar b + \bar fs}}$$ converges to $$\frac{{\bar f}}{{\bar b}} \gt 0,$$ so for small $${s},$$ the inflexible worker’s productivity indeed has higher volatility.
For large $${s},$$ we have:
$$\frac{{{K'_I}}}{{{K_I}}} = \frac{{\bar f}}{{\bar b + \bar f s}} = \frac{1}{{\frac{{\bar b}}{{\bar f}} + s}}.$$
Moreover, from Lemma 2 it follows that:
$$\frac{{{K'_F}}}{{{K_F}}} = \frac{{f\left( {{b^ * }\left( s \right)} \right)}}{{{b^ * }\left( s \right) + f\left( {{b^ * }\left( s \right)} \right)s}} = \frac{1}{{\frac{{{b^ * }\left( s \right)}}{{f\left( {{b^ * }\left( s \right)} \right)}} + s}}.$$
The inequality therefore follows if $$\frac{{{b^ * }(s)}}{{f({b^ * }(s))}} \lt \frac{{\bar b}}{{\bar f}},$$ but since $${b^ * }(s) \to {\underline b}$$ for large $${s}$$ (from the proof of Lemma 2) and therefore $$f({b^ * }(s)) \to \overline K,$$ the flexible worker’s productivity is indeed riskier for large $${s}.$$
Lemma 2.
The optimal production function of flexible agents satisfies:
• $${K_F}(0) = 1,$$
• $${\lim _{s \to \infty }}\frac{{{K_F}(s)}}{s} = \overline K,$$
• $${K_F}(s)$$ is a convex function of $${s}.$$
Proof:
• Follows since $${f}$$ is decreasing and $$f({\underline b}) = \overline K.$$
• Clearly, $$\overline K s \le {K_F}(s) \le \overline K s + 1$$ for all $${s},$$ since the lower bound can be realized by choosing $$b(s) = {\underline b},$$ and the upper bound follows from the constraint that $$b \le 1.$$ (b) therefore immediately follows.
• Follows since $$b + f(b)s$$ is (weakly) convex as a function of $${s}$$ for each $${b}$$ and the maximum of a set of convex functions is convex.
Lemma 3.
$${K_F}(s)$$ is a twice continuously differentiable, strictly convex function, such that $${K'_F}(0) = 0$$ and $${\lim _{s \to \infty }}{K'_F}(s) = \overline K.$$
Proof:
The flexible worker solves $${\max _{b\in\left[{\underline b,\,1}\right]}}b + f(b)s.$$ The first order condition is $$f'(b) = - \frac{1}{s},$$ and since $$f'$$ is a continuously differentiable, strictly decreasing mapping from $$\left[{\underline b},1\right]$$ onto $$\left( { - \infty ,0} \right],$$ the implicit function theorem implies that there is a unique, decreasing, continuously differentiable solution to the first order condition, $${b^ * }(s),$$ such that $${b^*}(0) = 1$$ and $${\lim _{s \to \infty }}{b^ * }(s) = {\underline b}.$$ Since the second order condition is $$f''(b)s \lt 0,$$ this function indeed yields the maximal strategy, $${K_F}(s) = {b^ * }(s) + f({b^ * }(s))s.$$
Now, $${K'_F}(s) = {b^{ *\prime}}(s) + {b^{ *\prime}}(s)f({b^ * }(s))s + f({b^ * }(s)) = 0 + f({b^ * }(s)),$$ so $${K'_F}(0) = f({b^ * }(0)) = f(1) = 0,$$ and $${K'_F}(\infty ) = f({b^ * }(\infty )) = f({\underline b}) = \overline K.$$ Moreover, $${K''_F}(s) = {b^{ *\prime}}(s)f'({b^ * }(s)) = - \frac{{{b^{ *\prime}}(s)}}{s},$$ which is continuous and positive, so $${K_F}$$ is indeed strictly convex and twice continuously differentiable.
Proof of Proposition 1:
From (9), the price at $${t=0}$$ of a general asset, paying an instantaneous dividend stream, $$g(s,t)dt,$$ where $${g}$$ is a continuous function, is:
(28)
$$P\left( s \right) = K_{tot}^\gamma E\left[ {\int_0^\infty {{e^{ - \rho t}}} \frac{{g\left( {s,t} \right)}}{{{K_{tot}}{{\left( t \right)}^\gamma }}}dt} \right]\overset{\text{def}}=K_{tot}^\gamma Q(s,0),$$
where:
$$Q{\left( {s,t} \right)}\overset{\text{def}}=E\left[ {\int_t^\infty {{e^{ - \rho \left( {\tau - t} \right)}}\frac{{g\left( {s,\tau } \right)}}{{{K_{tot}}{{\left( \tau \right)}^\gamma }}}} d\tau } \right].$$
From Feynman-Kac’s formula (see Karatzas and Shreve 1991) it follows that $$Q \in {C^{1,2}}({\mathbb{R}_ + } \times {\mathbb{R}_ + }),$$ and that $${Q}$$ satisfies the PDE:
(29)
$${Q_t} + \mu \left( s \right){Q_s} + \frac{{\sigma {{\left( s \right)}^2}}}{2}{Q_{ss}} - \hat \rho Q + \frac{g}{{K_{tot}^\gamma }} = 0.$$
Since $${K_{tot}}$$ is smooth, it follows that $${P}$$ is also smooth and since $$Q = \frac{P}{{K_{tot}^\gamma }},$$ it follows that $$Q' = \frac{P'}{{K_{tot}^\gamma }} - \gamma \frac{{P{K'_{tot}}}}{{K_{tot}^{\gamma + 1}}}$$ and $$Q'' = \frac{{P''}}{{K_{tot}^\gamma }} - 2\gamma \frac{{P'{K'_{tot}}}}{{K_{tot}^{\gamma + 1}}} - \gamma \frac{{P{K''_{tot}}}}{{K_{tot}^{\gamma + 1}}} + \gamma (\gamma + 1)\frac{{P{{({K'_{tot}})}^2}}}{{K_{tot}^{\gamma + 2}}}.$$ By plugging these expressions into (29), and defining $$R(s) = \frac{{{K'_{tot}}}}{{{K_{tot}}}},$$ and $$T(s) = \frac{{{K''_{tot}}}}{{{K_{tot}}}},$$ we arrive at (15).▪
Proof of Proposition 2:
Equation (18) follows directly from (29) and the transformation $$s = \frac{z}{{1 - z}}.$$
The PDE is of parabolic type, and typically both boundary conditions at $${z=0}$$ and $${z=1},$$ and a terminal condition at $${t=T},$$ are needed for such PDEs to be well-posed. However, the second order ($${Q_{zz}}$$) term vanishes at the boundaries, and using the same solution method as in Parlour, Stanton, and Walden (2012), where a similar transformation is made, it is straightforward to show that no boundary conditions are needed at $${z=1}$$ or $${z=0},$$ since the term in front of $${Q_{z}}$$ is positive at $${z=0}$$ (i.e., it is an outflow boundary) and zero at $${z=1}.$$
We appreciate comments from Kerry Back, Nicolae Garleanu, an anonymous referee, the Editor, Wayne Ferson, seminar participants at the Chinese University of Hong Kong, Columbia University, Erasmus University, Goethe University Frankfurt, INSEAD, London School of Economics, Singapore Management University, UCLA, University of Calgary, University of Chicago, University of Michigan, UT Austin, the European Summer Symposium in Financial Markets, Gerzensee 2009, the 2010 Stanford SITE conference on New Models in Financial Markets, and the 2010 WFA.
1 For example, considerable resources are devoted to advising people on how to find high return investments, whereas advice on investments with good hedging characteristics is largely non-existent.
2Andersen and Nielsen (2011) provide evidence suggesting that non-participation does not derive from financial barriers to entry.
3Mankiw and Zeldes (1991) document the relation between education and participation and Hong, Kubik, and Stein (2004) document that non-formal education, such as social interaction, is also correlated with participation. Malmendier and Nagel (2011) provide evidence that irrationality might also play a role—investors appear to misestimate the return to investing in capital markets because they put too much weight on their own experience.
4 The combined losses of GM and Ford totaled $46 billion in 2008, which exceeded (by $18 billion) the combined market value of both companies at the beginning of the year.
5 One might argue that these companies followed this strategy because they anticipated a government bailout, but this logic cannot explain Ford’s strategy.
6 It is 21.4% in recessions and 12.7% at other times. Quarterly volatility is defined to be the standard deviation of daily returns of the S&P 500 Index over the quarter over the period 1962–2009. This difference is highly statistically significant.
7 Using quarterly data published by the BEA, the volatility of GDP growth in recessions is 4.66% while it is 4.19% at other times over the 1947–2009 time period.
9 Although it’s not the focus of the paper, Harris and Holmström (1982) makes it clear how agents share idiosyncratic labor risk. That paper shows that most, but not all, of this risk can be removed by the labor contract. Under the optimal labor contract, firms insure all agents against negative realizations of idiosyncratic labor risk but agents remain exposed to some positive realizations. Of course, the owners of these firms do not have to expose themselves to this labor risk because by holding a large portfolio of firms, they can diversify the risk away.
10 It is straightforward to extend our analysis to allow stochastic growth in $${A_t}$$ as long as innovations in $${A_t}$$ are independent of innovations in $${s}.$$
11 Heuristically, a risk-free zero coupon bond with maturity at $${dt}$$ will have a price that is almost independent of $${s},$$ so $${P_s}$$ and $${P_{ss}}$$ are close to zero. Therefore, the local dynamics are $${P_t} - {r_s}P = 0,$$ i.e., $$\frac{{dP}}{P} = - {r_s}dt,$$ so the short-term discount rate is indeed $${r_s}.$$
12Moscarini and Thomsson (2007) document that 63% of workers who changed occupations also changed employers.
13 In reality, employment contracts that bind workers are not enforceable. However, about half the working population do in fact work for a single employer (see Hall 1982), suggesting that the lifetime employment contract is nevertheless common, that is, that firms use other means to commit employees to lifetime employment. We discuss this extensively in Section 5.
14 In fact, if in addition one assumes a (small) one-time cost of stock market participation, it is easy to show that this optimal implementation is unique, since it minimizes the fraction of the population that participates in the market.
15 Introducing a life-cycle dimension in our model would not change this, as long as the employer pays the Pareto optimal consumption share at all times.
16 The expected time to refinancing is over 1,000 years.
17 In our numerical calculations, we use a finite horizon set-up, where the horizon is large enough to get convergence to the solution of the infinite horizon economy (i.e., the steady state solution). An advantage of this approach is that it leads to a unique solution, allowing us to avoid the nontrivial issue of defining transversality conditions to rule out bubble solutions in an economy with nonlinear dynamics.
18 Flexible worker productivity and wages can be ignored because flexible workers always earn their productivity.
19 In steady state, the probability that the state variable will be in this region is 25%.
20 For $$s \gt 2.5,$$ there will be a point at which dividends turn negative, so that refinancing might be needed at some point.
21 They introduce two frictions in their model: (1) adjustment costs, which provides only a modest increase in the risk premium and (2) changes in the “bargaining power” of workers and investors that effectively prevents perfect risk sharing.
22 For example, although Mace (1991) does not reject full insurance, Cochrane (1991) rejects it for long illness and involuntary job loss but not for spells of unemployment, loss of work due to strikes and involuntary moves.
23 For example, Mankiw and Zeldes (1991) find that participants have a higher covariance and correlation with asset prices than that of non-participants, and that consumption volatility of participants is higher than that of nonparticipants and Vissing-Jorgensen (2002) finds that elasticities of intertemporal substitutions differ for asset holders and non-asset holders, although the difference is not statistically significant (at the 5% confidence level) in either study. Malloy, Moskowitz, and Vissing-Jorgensen (2009) find that only looking at stock-holders gives a better calibration of the moments of stock market returns.
24Aguiar and Hurst (2005) have demonstrated in a different context how ignoring this measurement problem can lead to misleading conclusions.
25 A measure of labor mobility across firms (which should be closely related to our flexibility notion) is constructed in Donangelo (2009), using survey data from the Bureau of Labor Statistics over the period 1988–2008.
References
Aguiar, M., and E. Hurst. 2005. Consumption versus expenditure. Journal of Political Economy 113:919–48.
Andersen, S., and K. M. Nielsen. 2011. Participation constraints in the stock market: Evidence from unexpected inheritance due to sudden death. Review of Financial Studies 24:1667–97.
Brav, A., G. M. Constantinides, and C. C. Geczy. 2002. Asset pricing with heterogeneous consumers and limited participation: Empirical evidence. Journal of Political Economy 110:793–824.
Campbell, J. Y. 1996. Understanding risk and return. Journal of Political Economy 104:298–345.
Christelis, D., T. Jappelli, and M. Padula. 2010. Cognitive abilities and portfolio choice. European Economic Review 54:18–38.
Christiansen, C., J. S. Joensen, and J. Rangvid. 2008. Are economists more likely to hold stocks? Review of Finance 12:465–96.
Cochrane, J. H. 1991. A simple test of consumption insurance. Journal of Political Economy 99:957–76.
Danthine, J.-P., and J. B. Donaldson. 2002. Labour relations and asset returns. Review of Economic Studies 69:41–64.
Donangelo, A. 2009. Human capital mobility and expected stock returns. Working Paper, University of California, Berkeley.
Dreze, J. H. 1989. The role of securities and labor contracts in the optimal allocation of risk-bearing. In Risk, Information and Insurance, ed. H. Louberge. Kluwer Academic Publishers.
Duffee, G. 2005. Time variation in the covariance between stock returns and consumption growth. Journal of Finance 60:1673–712.
Fama, E. F., and G. W. Schwert. 1977. Human capital and capital market equilibrium. Journal of Financial Economics 4:95–125.
Feuerstein, R. 1990. The theory of structural modifiability. In Learning and thinking styles: Classroom interaction, ed. B. Presseisen. National Education Associations.
Grinblatt, M., M. Keloharju, and J. Linnainmaa. 2011. IQ and stock market participation. Journal of Finance 66:2121–64.
Guiso, L., M. Haliassos, T. Jappelli, and S. Claessens. 2003. Household stockholding in Europe: Where do we stand and where do we go? Economic Policy 18:125–70.
Guiso, L., L. Pistaferri, and F. Schivardi. 2005. Insurance within the firm. Journal of Political Economy 113:1054–87.
Guvenen, F. 2007. Do stockholders share risk more effectively than nonstockholders? Review of Economics and Statistics 89:275–88.
Guvenen, F. 2009. A parsimonious macroeconomic model for asset pricing. Econometrica 77:1711–50.
Haliassos, M., and C. C. Bertaut. 1995. Why do so few hold stocks? Economic Journal 105:1110–29.
Hall, R. E. 1982. The importance of lifetime jobs in the U.S. economy. American Economic Review 72:716–24.
Harris, M., and B. Holmström. 1982. A theory of wage dynamics. Review of Economic Studies 49:315–33.
Hayashi, F., J. Altonji, and L. Kotlikoff. 1996. Risk-sharing between and within families. Econometrica 64:261–94.
Heaton, J., and D. J. Lucas. 1996. Evaluating the effects of incomplete markets on risk sharing and asset pricing. Journal of Political Economy 104:443–87.
Hong, H., J. D. Kubik, and J. C. Stein. 2004. Social interaction and stock-market participation. Journal of Finance 59:137–63.
Jagannathan, R., and Z. Wang. 1997. The conditional CAPM and the cross-section of expected returns. Journal of Finance 51:3–53.
Karatzas, I., and S. E. Shreve. 1991. Brownian Motion and Stochastic Calculus. New York, NY: Springer-Verlag.
Kihlstrom, R. E., and J.-J. Laffont. 1979. A general equilibrium entrepreneurial theory of firm formation based on risk aversion. Journal of Political Economy 87:719–48.
Lustig, H., and S. Van Nieuwerburgh. 2008. The returns on human capital: Good news on Wall Street is bad news on Main Street. Review of Financial Studies 21:2097–137.
Lustig, H. N., S. Van Nieuwerburgh, and A. Verdelhan. 2007. The wealth-consumption ratio: A litmus test for consumption-based asset pricing models. SSRN eLibrary.
Mace, B. J. 1991. Full insurance in the presence of aggregate uncertainty. Journal of Political Economy 99:928–56.
Malloy, C. J., T. J. Moskowitz, and A. Vissing-Jorgensen. 2009. Long-run stockholder consumption risk and asset returns. Journal of Finance 64:2427–79.
Malmendier, U., and S. Nagel. 2011. Depression babies: Do macroeconomic experiences affect risk-taking? Quarterly Journal of Economics 126:373–416.
Mankiw, N. G., and S. P. Zeldes. 1991. The consumption of stockholders and nonstockholders. Journal of Financial Economics 29:97–112.
Mayers, D. 1972. Nonmarketable assets and capital market equilibrium under uncertainty. In Studies in the Theory of Capital Markets, ed. M. C. Jensen. Praeger, 223–48.
Mehra, R., and E. C. Prescott. 1985. The equity premium: A puzzle. Journal of Monetary Economics 15:145–61.
Moscarini, G., and K. Thomsson. 2007. Occupational and job mobility in the US. Scandinavian Journal of Economics 109:807–36.
Nelson, J. 1994. On testing for full insurance using consumer expenditure survey data. Journal of Political Economy 102:384–94.
Parlour, C., R. Stanton, and J. Walden. 2012. Financial flexibility, bank capital flows, and asset prices. Journal of Finance 67:1685–1772.
Santos, T., and P. Veronesi. 2006. Labor income and predictable stock returns. Review of Financial Studies 19:1–44.
Telmer, C. I. 1993. Asset-pricing puzzles and incomplete markets. Journal of Finance 48:1803–32.
Vissing-Jorgensen, A. 2002. Limited asset market participation and the elasticity of intertemporal substitution. Journal of Political Economy 110:825–53.
##### Notes
This printable supports Common Core Mathematics Standard 4.MD.A.2.
# Measuring with Rulers - Inches (Grade 4)
1.
Mark $1 1/2$ inches on the ruler.
2.
Mark $3 1/4$ inches on the ruler.
3.
Mark $4 3/4$ inches on the ruler.
4.
Draw a vertical line through $2 3/16$ inches.
5.
The numbers on this ruler mark inches, not centimeters. If you were not given this information, explain how you would figure out if this ruler measured inches or centimeters.
6.
You measure a paper clip with the ruler shown. One end of the paper clip is at the 0 and the other end is at the 7th line after the 1. How many inches long is the paper clip?
7.
Ava needs to find the length of a leaf as part of a science experiment. The stem is at the 0 and the tip of the leaf goes to the 5th line after the 6. How many inches is the leaf?
8.
Sketch a drawing of a pencil that is $5 5/8$ inches long, on the ruler.
9.
Sketch a pencil that is $4 7/16$ inches long, on the ruler.
10.
Ethan measures the width of his book with the ruler and says it is $5 1/2$ inches wide. Lanie measures the same book with the ruler and says the book is $5 8/16$ inches wide. Who is correct? Explain.
# Cubic Function
A cubic function is a polynomial function of degree $3$ . It can be written in the form $f\left(x\right)=a{x}^{3}+b{x}^{2}+cx+d$ , where $a,b,c$ and $d$ are real numbers and $a\ne 0$ .
Some cubic functions can also be written as $f\left(x\right)=a{\left(x+b\right)}^{3}+c$ , where $a,b$ and $c$ are real numbers and $a\ne 0$ . A cubic can be put in this form exactly when shifting $x$ to its inflection point leaves no linear term, that is, when the coefficients of the first form satisfy ${b}^{2}=3ac$ .
Example 1:
Graph the function $f\left(x\right)=-2{\left(x+1\right)}^{3}-3$
Example 2:
Graph the function $f\left(x\right)={x}^{3}-6{x}^{2}+12x-3$
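Example 2 works out because ${x}^{3}-6{x}^{2}+12x-3={\left(x-2\right)}^{3}+5$ . As an illustrative sketch (the helper name is ours, not standard), the conversion can be automated: shift $x$ to the inflection point $h=-b/\left(3a\right)$ , and the cubic collapses to $a{\left(x-h\right)}^{3}+k$ exactly when ${b}^{2}=3ac$ (so the $b$ in the text’s form $a{\left(x+b\right)}^{3}+c$ is $-h$ ).

```python
from fractions import Fraction

def to_shifted_form(a, b, c, d):
    """Try to rewrite a*x^3 + b*x^2 + c*x + d as a*(x - h)^3 + k.

    This works only when no linear term survives the shift to the
    inflection point, i.e. when b**2 == 3*a*c; otherwise returns None.
    """
    a, b, c, d = (Fraction(v) for v in (a, b, c, d))
    if b * b != 3 * a * c:
        return None
    h = -b / (3 * a)                      # x-coordinate of the inflection point
    k = a * h**3 + b * h**2 + c * h + d   # value of the cubic at x = h
    return a, h, k

# Example 2 from the text: x^3 - 6x^2 + 12x - 3 equals (x - 2)^3 + 5.
print(to_shifted_form(1, -6, 12, -3))   # equals (1, 2, 5): a=1, h=2, k=5
```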
The goal of motifator is to allow users to generate spatial motifs that replicate the ones seen in real-world data, without including any potentially identifying information.
## Installation
You can install motifator from github with:
# install.packages("devtools")
devtools::install_github("jzelner/motifator")
## Example
Consider this example: You want to compare the impact of different levels of spatial clustering of vaccination coverage in a model of infectious disease transmission. So, let’s make maps where we assume that about 50% of the population is vaccinated, while varying the degree of spatial clustering.
Our metric of spatial clustering for this example is Moran’s I. Theoretical values of I run from -1 to 1. I = -1 implies anticorrelation, in which neighboring cells have values that are maximally different from each other. By contrast, values closer to +1 indicate strong spatial autocorrelation.
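For readers unfamiliar with the statistic itself, here is a minimal re-implementation of Moran’s I for a grid. This is a sketch, not motifator’s own code (motifator is an R package, and its weight matrix may differ); it assumes binary rook (4-neighbor) contiguity weights.

```python
import numpy as np

def morans_i(grid):
    """Moran's I for a 2-D array under rook (4-neighbor) contiguity.

    Illustrative only: assumes binary weights w_ij = 1 for adjacent cells,
    which may not match the weights motifator uses internally.
    """
    z = np.asarray(grid, dtype=float)
    z = z - z.mean()
    # Each unordered neighbor pair contributes twice (w_ij and w_ji).
    num = 2.0 * (z[:, :-1] * z[:, 1:]).sum()   # horizontal neighbors
    num += 2.0 * (z[:-1, :] * z[1:, :]).sum()  # vertical neighbors
    W = 2.0 * (z[:, :-1].size + z[:-1, :].size)  # total weight
    return (z.size / W) * num / (z ** 2).sum()

# A half-and-half map is strongly positively autocorrelated;
# a checkerboard is strongly negatively autocorrelated.
half = np.zeros((10, 10)); half[:, 5:] = 1.0
checker = np.indices((10, 10)).sum(axis=0) % 2
print(morans_i(half), morans_i(checker))
```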
Let’s sample a map with strong spatial correlation:
require(motifator)
m <- sampleProportion(10, i_target = 0.9, mean_target = 0.5)
#> Joining, by = c("x", "y")
#>
#> SAMPLING FOR MODEL 'gpcor' NOW (CHAIN 1).
#>
#> Gradient evaluation took 0.001615 seconds
#> 1000 transitions using 10 leapfrog steps per transition would take 16.15 seconds.
#>
#>
#> Iteration: 1 / 1000 [ 0%] (Warmup)
#> Iteration: 100 / 1000 [ 10%] (Warmup)
#> Iteration: 200 / 1000 [ 20%] (Warmup)
#> Iteration: 300 / 1000 [ 30%] (Warmup)
#> Iteration: 400 / 1000 [ 40%] (Warmup)
#> Iteration: 500 / 1000 [ 50%] (Warmup)
#> Iteration: 501 / 1000 [ 50%] (Sampling)
#> Iteration: 600 / 1000 [ 60%] (Sampling)
#> Iteration: 700 / 1000 [ 70%] (Sampling)
#> Iteration: 800 / 1000 [ 80%] (Sampling)
#> Iteration: 900 / 1000 [ 90%] (Sampling)
#> Iteration: 1000 / 1000 [100%] (Sampling)
#>
#> Elapsed Time: 25.3592 seconds (Warm-up)
#> 22.5426 seconds (Sampling)
#> 47.9018 seconds (Total)
#> Warning: There were 1 divergent transitions after warmup. Increasing adapt_delta above 0.8 may help. See
#> http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
#> Warning: There were 1 chains where the estimated Bayesian Fraction of Missing Information was low. See
#> http://mc-stan.org/misc/warnings.html#bfmi-low
#> Warning: Examine the pairs() plot to diagnose sampling problems
plotMap(m)
This results in an average I value of 0.79 and an average proportion of 0.51.
Now for comparison, let’s do the same thing but for a 10 x 10 map with no spatial structure and the same mean:
m <- sampleProportion(10, i_target = 0.0, mean_target = 0.5)
#> Joining, by = c("x", "y")
#>
#> SAMPLING FOR MODEL 'gpcor' NOW (CHAIN 1).
#>
#> Gradient evaluation took 0.001442 seconds
#> 1000 transitions using 10 leapfrog steps per transition would take 14.42 seconds.
#>
#>
#> Iteration: 1 / 1000 [ 0%] (Warmup)
#> Iteration: 100 / 1000 [ 10%] (Warmup)
#> Iteration: 200 / 1000 [ 20%] (Warmup)
#> Iteration: 300 / 1000 [ 30%] (Warmup)
#> Iteration: 400 / 1000 [ 40%] (Warmup)
#> Iteration: 500 / 1000 [ 50%] (Warmup)
#> Iteration: 501 / 1000 [ 50%] (Sampling)
#> Iteration: 600 / 1000 [ 60%] (Sampling)
#> Iteration: 700 / 1000 [ 70%] (Sampling)
#> Iteration: 800 / 1000 [ 80%] (Sampling)
#> Iteration: 900 / 1000 [ 90%] (Sampling)
#> Iteration: 1000 / 1000 [100%] (Sampling)
#>
#> Elapsed Time: 13.0166 seconds (Warm-up)
#> 9.53646 seconds (Sampling)
#> 22.5531 seconds (Total)
#> Warning: There were 1 chains where the estimated Bayesian Fraction of Missing Information was low. See
#> http://mc-stan.org/misc/warnings.html#bfmi-low
#> Warning: Examine the pairs() plot to diagnose sampling problems
plotMap(m)
This results in an average I value of 0.04 and an average proportion of 0.50.
And with anti-correlation:
m <- sampleProportion(10, i_target = -0.9, mean_target = 0.5)
#> Joining, by = c("x", "y")
#>
#> SAMPLING FOR MODEL 'gpcor' NOW (CHAIN 1).
#>
#> Gradient evaluation took 0.001501 seconds
#> 1000 transitions using 10 leapfrog steps per transition would take 15.01 seconds.
#>
#>
#> Iteration: 1 / 1000 [ 0%] (Warmup)
#> Iteration: 100 / 1000 [ 10%] (Warmup)
#> Iteration: 200 / 1000 [ 20%] (Warmup)
#> Iteration: 300 / 1000 [ 30%] (Warmup)
#> Iteration: 400 / 1000 [ 40%] (Warmup)
#> Iteration: 500 / 1000 [ 50%] (Warmup)
#> Iteration: 501 / 1000 [ 50%] (Sampling)
#> Iteration: 600 / 1000 [ 60%] (Sampling)
#> Iteration: 700 / 1000 [ 70%] (Sampling)
#> Iteration: 800 / 1000 [ 80%] (Sampling)
#> Iteration: 900 / 1000 [ 90%] (Sampling)
#> Iteration: 1000 / 1000 [100%] (Sampling)
#>
#> Elapsed Time: 12.2454 seconds (Warm-up)
#> 9.7475 seconds (Sampling)
#> 21.9929 seconds (Total)
#> Warning: There were 1 chains where the estimated Bayesian Fraction of Missing Information was low. See
#> http://mc-stan.org/misc/warnings.html#bfmi-low
#> Warning: Examine the pairs() plot to diagnose sampling problems
plotMap(m)
This results in an average I value of -0.14 and an average proportion of 0.50. This value is a bit short of our target value of I, but we can see that the variance of the cell values has increased significantly. Achieving the theoretical minimum value of -1 may be particularly infeasible for continuous outputs constrained to (0,1) because it necessitates a perfectly inverse relationship in the values of neighboring cells. Future iterations should pre-derive the minimum and maximum values of I given the output range and allow users to set the correlation parameter as a function of this range.
# Question #52ae1
Dec 31, 2014
$k = \pm \frac{5}{2}$
You have $4 {x}^{2} - 8 k x + 9 = 0$
and you know that the two solutions, ${x}_{1}$ and ${x}_{2}$, are related as:
${x}_{2} - {x}_{1} = 4$
You solve your equation with $k$ in place:
${x}_{1 , 2} = \frac{8 k \pm \sqrt{64 {k}^{2} - 4 \left(4 \cdot 9\right)}}{8}$
Giving:
${x}_{1 , 2} = \frac{8 k \pm \sqrt{64 {k}^{2} - 144}}{8}$
These are two solutions: one with the + sign and the other with the - sign. Substituting in: ${x}_{2} - {x}_{1} = 4$ you get
$\frac{8 k + \sqrt{64 {k}^{2} - 144}}{8} - \frac{8 k - \sqrt{64 {k}^{2} - 144}}{8} = 4$
combining the fractions over the common denominator $8$ and then multiplying both sides by $8$, you'll get:
$2 \cdot \left(\sqrt{64 {k}^{2} - 144}\right) = 32$
$\left(\sqrt{64 {k}^{2} - 144}\right) = 16$
$64 {k}^{2} - 144 = 256$
${k}^{2} = \frac{400}{64}$, so $k = \pm \sqrt{\frac{400}{64}} = \pm \frac{5}{2}$
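The answer can also be reached via Vieta's formulas: ${x}_{1} + {x}_{2} = 2k$ and ${x}_{1}{x}_{2} = \frac{9}{4}$, so ${\left({x}_{2} - {x}_{1}\right)}^{2} = 4{k}^{2} - 9 = 16$, giving ${k}^{2} = \frac{25}{4}$ directly. A quick numerical cross-check (a sketch, not part of the original answer):

```python
import math

def roots(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0, larger root first (for a > 0)."""
    r = math.sqrt(b * b - 4 * a * c)
    return (-b + r) / (2 * a), (-b - r) / (2 * a)

# For both candidate values of k, the roots of 4x^2 - 8kx + 9 differ by 4.
for k in (2.5, -2.5):
    x2, x1 = roots(4.0, -8.0 * k, 9.0)
    print(k, x2 - x1)
```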
## §1.1: On analysis of boolean functions
This is a book about boolean functions, \begin{equation*} f : \{0,1\}^n \to \{0,1\}. \end{equation*} Here $f$ maps each length-$n$ binary vector, or string, into a single binary value, or bit. Boolean functions arise in many areas of computer science and mathematics. Here are some examples:
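As a concrete illustration (ours, not one of the book's own examples), the majority function on an odd number of bits is one such boolean function:

```python
from itertools import product

def majority(bits):
    """Boolean function f: {0,1}^n -> {0,1} returning the majority bit.

    Assumes n is odd so that ties cannot occur.
    """
    return int(sum(bits) > len(bits) / 2)

# Tabulate f over all 2^3 length-3 binary strings.
table = {x: majority(x) for x in product((0, 1), repeat=3)}
print(table[(1, 0, 1)])   # two of the three bits are 1, so f outputs 1
```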
[...]
# Multiply defined labels with \usebox
I want to duplicate some content without re-defining its labels. For example,
\newbox{\foobox}
\sbox{\foobox}{\vbox{\section{Foo}\label{sec:foo}}}
\usebox{\foobox}
\usebox{\foobox}
will emit a warning, because sec:foo is defined more than once. Is there any way to prevent the second \usebox from writing to the aux file?
You basically can't, with this approach. When the \vbox is built, the \label command is translated into a whatsit that contains a \write instruction and it can't be disabled any more, because the vertical list is already packaged. – egreg Feb 6 '14 at 18:55
That's quite unfortunate. Is there any way to duplicate content without affecting what is to be typeset? My original intent was to typeset a bunch of labelled theorems multiple times. With a plain macro instead of a box, the theorem numbering counters (of course) were also incremented the second time, which I do not want. – Kristóf Marussy Feb 6 '14 at 19:57
There are two problems. (1) Avoid \label doing a \write in the later usages: this might be solved by absorbing the text as a macro replacement text; (2) “freezing” all counters to their current value. Could you be more specific about the problem and what counters you need to be frozen? – egreg Feb 6 '14 at 20:39
I assume you want to keep the first \label intact and to disable the others. Then the following uses a trick:
1. The box is created with two commands \BeginIgnoreAux{<id>} and \EndIgnoreAux before and after the original contents. These macros write markers into the .aux file with the result, the label stuff gets in between.
2. When the .aux file is read, the label stuff is executed as usual the first time. If \BeginIgnoreAux{<id>} is called more than once with the same <id>, then the label stuff is ignored. This avoids duplicate labels.
3. LaTeX reads the .aux file twice. At the end of the document the .toc files are written, for example. Therefore, the macro \AuxResetIgnoreStuff clears the <id>s before the second reading.
\documentclass{article}
\makeatletter
\global\let\AuxResetIgnoreStuff\@empty
\usepackage{auxhook}
\AddLineBeginAux{\string\AuxResetIgnoreStuff}
% Macros inside the .aux file
\newcommand*{\AuxBeginIgnore}[1]{%
\@ifundefined{ignore@#1}{%
\global\expandafter\let\csname ignore@#1\endcsname\@empty
\expandafter\g@addto@macro\expandafter\AuxResetIgnoreStuff
\expandafter{%
\expandafter\global\expandafter\let\csname ignore@#1\endcsname\relax
}%
}\AuxSkip
}
\def\AuxSkip#1\AuxEndIgnore{}
\let\AuxEndIgnore\relax
% User commands
\newcommand*{\BeginIgnoreAux}[1]{%
\protected@write\@auxout{}{%
\string\AuxBeginIgnore{#1}%
}%
}
\newcommand*{\EndIgnoreAux}{%
\protected@write\@auxout{}{%
\string\AuxEndIgnore
}%
}
\makeatother
\begin{document}
\tableofcontents
\newbox{\foobox}
\sbox{\foobox}{%
\BeginIgnoreAux{foo}%
\vbox{\section{Foo}\label{sec:foo}}%
\EndIgnoreAux
}
\noindent
\usebox{\foobox}
\usebox{\foobox}
\noindent
Reference to the first label: \ref{sec:foo}.
\end{document}
The .aux file contains:
\relax
\AuxResetIgnoreStuff
\AuxBeginIgnore{foo}
\@writefile{toc}{\contentsline {section}{\numberline {1}Foo}{1}}
\newlabel{sec:foo}{{1}{1}}
\AuxEndIgnore
\AuxBeginIgnore{foo}
\@writefile{toc}{\contentsline {section}{\numberline {1}Foo}{1}}
\newlabel{sec:foo}{{1}{1}}
\AuxEndIgnore
And the .toc file:
\contentsline {section}{\numberline {1}Foo}{1}
Thanks! This works like a charm. :) – Kristóf Marussy Feb 7 '14 at 10:33
Quick Forum Questions
Recommended Posts
6 hours ago, michel123456 said:
Yes it used to be somewhere near the smilies button. Now it appears as a <>"code" button. See screenshot below.
Our options are different. I've confirmed now on several browsers. The button is there for me on each of the following: Chrome, M$ Edge, and Safari.
This might be as simple as you attempting to clear cache. Or, when refreshing attempt a hard refresh instead of a quick one (ctrl+F5 on windows or Command+R on iOS).
Sorry I can't be of more help, but our views of the editor window options are definitely not the same. The button is still there when I test.
Share on other sites
On 12/7/2020 at 5:00 PM, iNow said:
Our options are different. I've confirmed now on several browsers. The button is there for me on each of the following: Chrome, M$ Edge, and Safari.
This might be as simple as you attempting to clear cache. Or, when refreshing attempt a hard refresh instead of a quick one (ctrl+F5 on windows or Command+R on iOS).
Sorry I can't be of more help, but our views of the editor window options are definitely not the same. The button is still there when I test.
I had the button once. It disappeared lately. I wonder maybe it is related to the fact that moderators have restrained my posting abilities: I cannot post in the Relativity subforum.
Share on other sites
2 minutes ago, michel123456 said:
I wonder maybe it is related to the fact that moderators have restrained my posting abilities: I cannot post in the Relativity subforum
An interesting hypothesis, but one which strikes me as extremely unlikely given the way the text editor is handled by forum software. Either way, I'm only able to speculate at this time as to root cause and have no way of validating.
Share on other sites
16 hours ago, iNow said:
An interesting hypothesis, but one which strikes me as extremely unlikely given the way the text editor is handled by forum software. Either way, I'm only able to speculate at this time as to root cause and have no way of validating.
Anyway, the browser plays no role. I logged in with IE, Chrome, and Mozilla, and it changed nothing.
Share on other sites
Did somebody answer the OP's question about a picture for the avatar?
Share on other sites
You mean 15 years ago?
Share on other sites
10 hours ago, Bartholomew Jones said:
Did somebody answer the OPs question about a picture for the avatar?
Any response would be moot since the forum software has changed. The picture can be changed in your profile (click on your user name, select profile. In the upper right there’s something that says “cover photo”. I think that’s where you can upload a picture)
Share on other sites
• 2 weeks later...
@Sensei expresses that when anyone announce/advertise his/her website here ,it would be risk for the security of that website.
so,does this forum limits shares on personal websites because of this ?
Edited by ahmet
Share on other sites
5 hours ago, ahmet said:
does this forum limits shares on personal websites because of this ?
No. Spam is limited and deleted because it’s annoying, obtrusive, and unwelcome
Share on other sites
5 hours ago, ahmet said:
@Sensei expresses that when anyone announce/advertise his/her website here ,it would be risk for the security of that website.
so,does this forum limits shares on personal websites because of this ?
It’s not a security risk, as such. It’s to keep spammers away, who would only be posting so that links to their site appeared and improve their search-engine ranking. Such posts would not tend to contribute to any discussion. It’s not our function to drive traffic to others’ sites. Annoying and obtrusive, as iNow says.
As the rules say, if it’s not commercial, it’s OK to put in your signature.
Share on other sites
• 3 weeks later...
I just noted some possible confusion in another thread regarding the quote function, I was not involved but I did spot a possible reason. I do not know if this applies to all browsers or platforms. I will borrow a post from @swansont above to illustrate; here is a screenshot from the post above:
If I quote a part of Ahmet's statement from Swansont's post it will look like Swansont said it instead of Ahmet:
On 12/19/2020 at 4:10 PM, swansont said:
risk for the security
Anyone know if this is an intended feature that serves a purpose?
Note that if I instead include some of Swansont's test the nestled quotes seems OK:
On 12/19/2020 at 4:10 PM, swansont said:
On 12/19/2020 at 10:48 AM, ahmet said:
expresses that when anyone announces/advertises his/her website here, it would be a risk for the security of that website.
so, does this forum limit shares of personal websites because of this?
It’s not a security risk, as such
Here is a screenshot of the quote, in the unlikely case that this observation of mine depends on my browser or other aspects of my local system
and
When I use the quote function on a post I only see the words of the person I’m quoting.
What are people doing to quote one person from someone else’s post? Is this a copy-paste issue? Or is this using the “quote multiple posts” function?
Posted (edited)
2 hours ago, Ghideon said:
risk for the security
I just highlighted and quoted from your quote within Ghideon's post above. It attributes the words to be from Ghideon instead of you.
Edited by zapatos
7 minutes ago, zapatos said:
risk for the security
I see.
Yeah, that’s obviously a bug in the forum software
3 hours ago, Ghideon said:
I just noted some possible confusion in another thread regarding the quote function, I was not involved but I did spot a possible reason. I do not know if this applies to all browsers or platforms. I will borrow a post from @swansont above to illustrate; here is a screenshot from the post above: [...]
I've noticed the same bug. Thanks for pointing it out.
Posted (edited)
The old software acted like that as well.
I found out quite recently that if you leave a page with stuff, perhaps but not necessarily including quotes, in the input editor and go to another page of the same thread, the editor clears but hides your stuff.
So you must first click in the editor box, before you do anything else on arrival at page 3, from say page 5, to pick up another point.
It can get quite tricky to assemble answers this way, and it is very easy to lose a substantial amount of input work.
I also note that if you change page with the quote function as the first thing in your edit box, the software sometimes chops the first bit out or reassigns the author of the quote.
I look back at a recent post which is made nonsense by just this effect, which I did not spot before I hit the reply button.
Other forums have some even more unwieldy procedures.
So on balance, frustrating as it sometimes is, the SF model is still better than most.
Edited by studiot
Keep in mind nobody at SFN has anything to do with the forum software. It comes from elsewhere. At best, admins can config a few buttons here and there, but not make changes that would fix this quote bug you found (unless perhaps a fix has been deployed within a new software version which is still pending manual download... it’s possible an admin here is needed to initiate that process).
• 6 months later...
Just a quick thank you to the mod who moved my political cartoon, which I had posted in the general Jokes thread before my site explorations had discovered a political humor thread. Kindness and assistance to this newbie much appreciated. (this post can be deleted, once read by the relevant party, if it minimizes clutter)
Cheers.
7 minutes ago, TheVat said:
Just a quick thank you to the mod who moved my political cartoon, which I had posted in the general Jokes thread before my site explorations had discovered a political humor thread. Kindness and assistance to this newbie much appreciated. (this post can be deleted, once read by the relevant party, if it minimizes clutter)
Cheers.
One of our members suggested it via the report post function. The system worked as envisioned.
• 2 weeks later...
On 12/10/2020 at 5:03 AM, swansont said:
Any response would be moot since the forum software has changed. The picture can be changed in your profile (click on your user name, select profile. In the upper right there’s something that says “cover photo”. I think that’s where you can upload a picture)
In case anyone wonders, the "cover photo" option does NOT load your avatar pic. It just provides a pic at the top of your profile page. I cannot find an option for loading an avatar pic.
26 minutes ago, TheVat said:
In case anyone wonders, the "cover photo" option does NOT load your avatar pic. It just provides a pic at the top of your profile page. I cannot find an option for loading an avatar pic.
Click on the little icon bottom left of your profile picture
Posted (edited)
Thank you SJ. That doesn't work (and it's on the top left, on my photo) . That just allows adding or removal of the profile page photo. I wonder if this is a function restricted for newer members?
Edited by TheVat
Posted (edited)
13 minutes ago, TheVat said:
Thank you SJ. That doesn't work (and it's on the top left, on my photo) . That just allows adding or removal of the profile page photo. I wonder if this is a function restricted for newer members?
Your profile pic is your avatar. As you can see here. I'm sure a mod can sort it. Have you tried putting the picture up, then logging out, then in? It might need to do that to update the threads.
Edited by StringJunky
It worked for me, editing your profile. I will remove this, but first can see if you can change it. Click on the pic, and then the photo icon in your profile
Thanks, Swanson. Burp. Now the profile page shows the editable avatar pic, with the editing button on the bottom left as String had described. And thanks again, String.
I logged on this session with a Windows desktop, and everything worked as described. All the trouble I was having was on a somewhat aged Chrome tablet, so I now wonder if the glitch was with me.
Next lesson (which I hope to master through self-guided study): cropping that stupid monkey so it comes out right and you can all enjoy his terrific hairdo.
# Infinite potential barrier Q
1. Nov 28, 2008
### ballzac
Just a quick question. Solving the Schrodinger equation (time-independent) for an infinite potential barrier, I end up with two wavefunctions.
In region I,
$$V(x)=0$$
$$\Rightarrow\psi(x)=A\cos\left(\frac{\sqrt{2mE}}{\hbar}x\right)+B\sin\left(\frac{\sqrt{2mE}}{\hbar}x\right)$$
In region II,
$$V(x)=\infty$$
$$\Rightarrow\psi(x)=Ce^\infty+De^{-\infty}=\infty+0$$
Clearly in the infinite potential region, the wavefunction must equal zero, so it is clear that the term with constant factor C must vanish. I interpret this as meaning that the D term is the wave that penetrates the barrier (in this case it does not because $$e^{-\infty}=0$$), and the C term is a wave coming from the right, that does not exist and therefore $$C=0$$. If I am right in assuming this, then how can one prove mathematically, that $$C=0$$, with out resorting to non-mathematical common sense? If I'm wrong, then what is the explanation?
Thanks in advance, and sorry if there are any errors in the above...made a lot of typos the first time around :\$
2. Nov 29, 2008
### malawi_glenn
The probability to penetrate an infinitely high potential barrier is 0, so the wavefunction vanishes everywhere in that region.
Also, try to get your whole wavefunction continuous at the boundary... you'll see that only C = 0 works.
3. Nov 29, 2008
### ballzac
MG,
The first line of your reply is basically what I was getting at, but I neglected to mention probability, which is of course the motivation for assuming there is no wave function in that region.
Secondly, are you saying that
$$Ce^{\infty}+De^{-\infty}$$
must equal zero or infinity depending on choices of C and D,
but the wave function in region I (which must equal the above at the barrier) can only be infinite if A and/or B are infinite, and they are both finite so the above must be zero?
If we can assume that the potential can be infinite, could we not also take the wave function to be infinite? I'm guessing the answer is no, but not sure why.
Also, as I have not used these forums much, I'm not sure of the etiquette. I'm thinking I should have posted this in the homework section, but as I am finished uni for the year and have just been mucking around with QM among other things, it never occurred to me to post this kind of thing in the homework section. Please forgive me if I've erred. :)
4. Nov 29, 2008
### malawi_glenn
You must know that the wavefunction should also be normalized, and that an infinite value of the wavefunction would mean that the probability of finding the particle at a particular point is infinite, which is unphysical.
No problem, you are at least showing your work and thoughts, very good! Keep it up.
5. Nov 29, 2008
### ballzac
D'uh. Of course. It's easy to overlook the obvious when learning. Thanks :)
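Pulling the thread's argument together into one sketch (this summary assumes the barrier occupies x > 0 and treats V as large but finite before taking the limit):

```latex
% Region II (inside the barrier), with V finite for the moment:
\psi_{II}(x) = C e^{\kappa x} + D e^{-\kappa x},
\qquad \kappa = \frac{\sqrt{2m(V-E)}}{\hbar}.
% Normalizability, \int_0^\infty |\psi_{II}(x)|^2 \, dx < \infty, already
% forces C = 0, since e^{\kappa x} diverges as x \to \infty.
% Taking V \to \infty gives \kappa \to \infty, so the surviving term
% D e^{-\kappa x} \to 0 for every x > 0: the wavefunction vanishes inside
% the barrier, and continuity then requires \psi_I(0) = 0.
```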
# How do you find the domain and range of sine, cosine, and tangent?
Nov 19, 2014
The domain and range of trigonometric functions are determined directly from the definition of these functions.
Let's start from the definition.
Trigonometric functions are defined using a unit circle on a coordinate plane - a circle of a radius $1$ centered at the origin of coordinates $O$.
Consider a point $A$ on this circle and an angle from the positive direction of the $X$-axis (ray $O X$) to a ray $O A$ that connects a center of a unit circle with our point $A$. This angle can be measured in degrees or, more commonly in the analysis of trigonometric functions, in radians.
The value of this angle can be positive (if we go counterclockwise from $O X$ to $O A$) or negative (going clockwise). It can be greater by absolute value than a full angle (the one of $360$ degrees or $2 \pi$ radians), in which case the position of a point $A$ is determined by circulating around the center of a unit circle more than once.
Each value of an angle from $- \infty$ to $+ \infty$ (in degrees or, more preferably, radians) corresponds to a position of a point on the unit circle. For each such angle the values of trigonometric functions are defined as follows.
1. Function $y = \sin \left(x\right)$ is defined as the ordinate ($Y$-coordinate) of a point on a unit circle that corresponds to an angle of $x$ radians. Therefore, the domain of this function is all real numbers from $- \infty$ to $+ \infty$. The range is from $- 1$ to $+ 1$ since this is an ordinate of a point on a unit circle.
2. Function $y = \cos \left(x\right)$ is defined as the abscissa ($X$-coordinate) of a point on a unit circle that corresponds to an angle of $x$ radians. Therefore, the domain of this function is all real numbers from $- \infty$ to $+ \infty$. The range is from $- 1$ to $+ 1$ since this is an abscissa of a point on a unit circle.
3. Function $y = \tan \left(x\right)$ is defined as $\frac{\sin \left(x\right)}{\cos \left(x\right)}$. The domain of this function is all real numbers except those where $\cos \left(x\right) = 0$, that is all angles except those that correspond to points $\left(0 , 1\right)$ and $\left(0 , - 1\right)$. These angles where $y = \tan \left(x\right)$ is undefined are $\frac{\pi}{2} + \pi \cdot N$ radians, where $N$ is any integer number. The range is, obviously, all real numbers from $- \infty$ to $+ \infty$.
Of special interest might be the graphs of these functions. You can refer to a series of lectures on Unizor dedicated to detailed analysis of these functions, their graphs and behavior.
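A quick numerical illustration of these domains and ranges (a sketch in Python using only the standard library; the sample points are arbitrary):

```python
import math

# sin and cos are defined for every real x and never leave [-1, 1].
xs = [k * 0.01 for k in range(-2000, 2001)]   # samples in [-20, 20]
print(all(-1.0 <= math.sin(x) <= 1.0 for x in xs))   # True
print(all(-1.0 <= math.cos(x) <= 1.0 for x in xs))   # True

# tan is undefined where cos(x) = 0, i.e. at pi/2 + pi*N; just to either
# side of such a point |tan| is huge, illustrating the range (-inf, +inf).
print(math.tan(math.pi / 2 - 1e-8) > 1e6)    # True (large positive)
print(math.tan(math.pi / 2 + 1e-8) < -1e6)   # True (large negative)
```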
# Remainder Theorem With Second Degree Divisor
What is the remainder when $x^{2013}$ is divided by $x^2-1$? This problem was taken from the 2009 MTAP-Dep-Ed Math Challenge Finals. The problem looks very easy, but if you don’t know the basic principle for dealing with this kind of problem, you can never answer it in just a minute. This article will teach you how to deal with problems like this.
Let :
$P(x)$ be a polynomial with degree n.
$D(x)$ be the divisor in second degree.
$Q(x)$ be the quotient when $P(x)$ is divided by $D(x)$
$R(x)$ be the remainder when $P(x)$ is divided by $D(x)$
If $D(x)$ is a 1st degree expression, the remainder is a constant.
If $D(x)$ is a second degree expression, the remainder is at most a 1st degree expression of the form $Ax+B$.
We can say that $P(x) = D(x)(Q(x))+R(x)$
Worked Problem 1:
What is the remainder when $x^{2013}$ is divided by $x^2-1$?
Given:
$P(x)= x^{2013}$, $D(x)= x^2-1=(x-1)(x+1)$. Required: $R(x)$
Solution:
$P(x) = D(x)(Q(x))+R(x)$ $x^{2013}= (Q(x))(x-1)(x+1)+Ax +B$
Let $x=1$ to simplify things out.
$(1)^{2013}= (Q(1))(1-1)(1+1)+A(1) +B$
$1=A+B$, let’s call this equation 1.
Let $x=-1$
$(-1)^{2013}= (Q(-1))(-1-1)(-1+1)+A(-1) +B$
$-1=-A+B$ let’s call this equation 2.
Solve equation 1 and 2 simultaneously.
$A=1 , B=0$
Therefore the remainder is $1(x)+0$ or $x$.
Worked Problems 2:
What is the remainder when $x^{2014}-2x^{2013}+1$ is divided by $x^2-2x$?
Solution:
Given:
$P(x)= x^{2014}-2x^{2013}+1$, $D(x)= x^2-2x$
$P(x) = D(x)(Q(x))+R(x)$ $x^{2014}-2x^{2013}+1= Q(x)(x)(x-2)+Ax+B$
Let x=2
$(2)^{2014}-2(2)^{2013}+1= Q(2)(2)(2-2)+A(2)+B$ $2^{2014}-2^{2014}+1= 0+2A+B$
$2A+B=1$, equation 1
Let x=0
$(0)^{2014}-2(0)^{2013}+1= Q(0)(0)(0-2)+A(0)+B$ $1=B$
From equation 1,
$2A+B=1$
$2A+1=1$,
$A=0$
The remainder is $(0)x+1$ or simply $1$
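The root-substitution technique used in both worked problems can be sketched in a few lines of Python (the helper name `remainder_mod_quadratic` is made up for this illustration; it assumes the quadratic divisor has two distinct roots):

```python
# For a quadratic divisor with distinct roots r1, r2, the remainder Ax + B
# must satisfy P(r1) = A*r1 + B and P(r2) = A*r2 + B; solve for A and B.
def remainder_mod_quadratic(P, r1, r2):
    p1, p2 = P(r1), P(r2)
    A = (p1 - p2) / (r1 - r2)
    B = p1 - A * r1
    return A, B

# Problem 1: x^2013 divided by x^2 - 1, roots 1 and -1  ->  remainder x
print(remainder_mod_quadratic(lambda x: x**2013, 1, -1))                    # (1.0, 0.0)

# Problem 2: x^2014 - 2x^2013 + 1 divided by x^2 - 2x, roots 2 and 0  ->  remainder 1
print(remainder_mod_quadratic(lambda x: x**2014 - 2 * x**2013 + 1, 2, 0))   # (0.0, 1.0)
```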
### Dan
Blogger and a Math enthusiast. Has no interest in Mathematics until MMC came. Aside from doing math, he also loves to travel and watch movies.
# NCERT Solutions for Class 9 Maths Chapter 8 Quadrilaterals
NCERT Solutions for Class 9 Maths Chapter 8 Quadrilaterals: Many shapes around us are in the form of quadrilaterals, such as the top of a table, a sheet of paper, a wall, or a roof. So studying the properties of different types of quadrilaterals is necessary. In the NCERT Solutions for Class 9 Maths Chapter 8 we will study parallelograms and their properties. A quadrilateral with parallel opposite sides is a parallelogram. After the introduction, the NCERT Solutions for Class 9 Maths Chapter 8 Quadrilaterals starts by recollecting the angle sum property of a quadrilateral: the sum of the angles of a quadrilateral is 360 degrees. This is obtained by dividing the quadrilateral into two triangles by drawing a diagonal. We know that the sum of the angles of a triangle is 180 degrees, so the sum of the angles of the two triangles will be 360 degrees.
The next topic in this chapter of NCERT Solutions for Class 9 Maths briefly covers the different types of quadrilaterals. Finally, the NCERT Solutions for this chapter introduce the properties of the parallelogram and the midpoint theorem. This chapter of NCERT Solutions for Class 9 contains two exercises. The first exercise is related to the properties of a parallelogram, and the second exercise focuses on the midpoint theorem along with the properties of a parallelogram.
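The angle sum property is easy to check numerically. Below is a small Python sketch (the quadrilateral coordinates are arbitrary, chosen only for illustration):

```python
import math

# Interior angles of a convex quadrilateral sum to 360 degrees.
quad = [(0.0, 0.0), (4.0, 0.0), (5.0, 3.0), (1.0, 4.0)]

def interior_angle(P, Q, R):
    # Angle at vertex Q between rays QP and QR, in degrees.
    a = (P[0] - Q[0], P[1] - Q[1])
    b = (R[0] - Q[0], R[1] - Q[1])
    dot = a[0] * b[0] + a[1] * b[1]
    return math.degrees(math.acos(dot / (math.hypot(*a) * math.hypot(*b))))

n = len(quad)
total = sum(interior_angle(quad[i - 1], quad[i], quad[(i + 1) % n]) for i in range(n))
print(round(total))   # 360
```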
The main topics of NCERT Solutions for Class 9 Maths Chapter 8 are given below
8.1 Introduction
8.2 Angle Sum Property of a Quadrilateral
8.4 Properties of a Parallelogram
8.5 Another Condition for a Quadrilateral to be a Parallelogram
8.6 The Mid-point Theorem
Below is the list of properties mentioned in the NCERT Solutions for Class 9 Maths Chapter 8 Quadrilaterals:
1. The diagonal of a parallelogram divides it into two congruent triangles
In figure 1 parallelogram ABCD, the diagonal AC divides it into two congruent triangles ABC and CDA
2. Opposite sides of a parallelogram are equal. That is in figure 1 AD=BC and AB=DC
3. If each pair of opposite sides of a quadrilateral are equal then it is a parallelogram. That is in figure 1 if AD=BC and AB=DC then ABCD is a parallelogram
4. In a parallelogram opposite angles are equal. That is angle A = angle C and angle D = angle B
5. If in a quadrilateral, each pair of opposite angles is equal, then it is a parallelogram.
6. Diagonals of a parallelogram bisect each other
7. If diagonals of a quadrilateral bisect each other then it is a parallelogram
Property 6 says that in a parallelogram ABCD, as in figure 2, if AC and BD are the diagonals and they intersect at O, then AO=OC and OD=OB; property 7 says that if AO=OC and OD=OB, then ABCD is a parallelogram.
8. In a quadrilateral, if a pair of sides are equal and parallel then it is a parallelogram
Another topic mentioned in the NCERT Solutions for Class 9 Maths Chapter 8 Quadrilaterals is the midpoint theorem. The midpoint theorems are listed below
9. The line segment joining the mid-points of two sides of a triangle is parallel to the third side
10. The line drawn through the mid-point of one side of a triangle, parallel to another side bisects the third side.
Theorem 9 says that if E and F are the midpoints of sides AB and AC respectively, as shown in figure 3, then EF is parallel to BC and EF = 0.5 BC; theorem 10 says that the line drawn through the midpoint E, parallel to BC, bisects the third side AC.
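The midpoint theorem can likewise be checked with coordinates. A small Python sketch (the triangle below is arbitrary, chosen only for illustration):

```python
import math

# Numerical check of the mid-point theorem with an arbitrary triangle.
A = (1.0, 5.0)
B = (-2.0, 1.0)
C = (6.0, -1.0)

def midpoint(P, Q):
    return ((P[0] + Q[0]) / 2, (P[1] + Q[1]) / 2)

E = midpoint(A, B)   # midpoint of side AB
F = midpoint(A, C)   # midpoint of side AC

# EF is parallel to BC: the two direction vectors have zero cross product.
ef = (F[0] - E[0], F[1] - E[1])
bc = (C[0] - B[0], C[1] - B[1])
cross = ef[0] * bc[1] - ef[1] * bc[0]
print(abs(cross) < 1e-12)                                     # True

# |EF| equals half of |BC| (compared with a small tolerance).
print(abs(math.hypot(*ef) - 0.5 * math.hypot(*bc)) < 1e-12)   # True
```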
## NCERT Solutions for Class 9 Maths - Chapter Wise
Chapter 1: Number Systems
Chapter 2: Polynomials
Chapter 3: Coordinate Geometry
Chapter 4: Linear Equations In Two Variables
Chapter 5: Introduction to Euclid's Geometry
Chapter 6: Lines And Angles
Chapter 7: Triangles
Chapter 9: Areas of Parallelograms and Triangles
Chapter 10: Circles
Chapter 11: Constructions
Chapter 12: Heron’s Formula
Chapter 13: Surface Areas and Volumes
Chapter 14: Statistics
Chapter 15: Probability
# American Institute of Mathematical Sciences
May 2009, 3(2): 259-274. doi: 10.3934/ipi.2009.3.259
## A time-domain probe method for three-dimensional rough surface reconstructions
1 Department of Mathematics, University of Reading, Whiteknights, PO Box 220, Berkshire, RG6 6AX, United Kingdom, United Kingdom
Received November 2008 Revised March 2009 Published May 2009
The task of this paper is to develop a Time-Domain Probe Method for the reconstruction of impenetrable scatterers. The basic idea of the method is to use pulses in the time domain and the time-dependent response of the scatterer to reconstruct its location and shape. The method is based on the basic causality principle of time-dependent scattering. The method is independent of the boundary condition and is applicable for limited aperture scattering data.
In particular, we discuss the reconstruction of the shape of a rough surface in three dimensions from time-domain measurements of the scattered field. In practice, measurement data is collected where the incident field is given by a pulse. We formulate the time-domain field reconstruction problem equivalently via frequency-domain integral equations or via a retarded boundary integral equation based on results of Bamberger, Ha-Duong, Lubich. In contrast to pure frequency domain methods here we use a time-domain characterization of the unknown shape for its reconstruction.
Our paper will describe the Time-Domain Probe Method and relate it to previous frequency-domain approaches on sampling and probe methods by Colton, Kirsch, Ikehata, Potthast, Luke, Sylvester et al. The approach significantly extends recent work of Chandler-Wilde and Lines (2005) and Luke and Potthast (2006) on the time-domain point source method. We provide a complete convergence analysis for the method for the rough surface scattering case and provide numerical simulations and examples.
Citation: Corinna Burkard, Roland Potthast. A time-domain probe method for three-dimensional rough surface reconstructions. Inverse Problems & Imaging, 2009, 3 (2) : 259-274. doi: 10.3934/ipi.2009.3.259
Find the derivative of the functions: 5sec x + 4cos x
Asked by Abhisek | 1 year ago | 56
##### Solution :-
$$f(x)= 5 \sec x + 4 \cos x$$
$$f'(x)= \dfrac{d}{dx}(5\sec x+4\cos x)$$
By further calculation, we get
$$f'(x)=5\dfrac{d}{dx}(\sec x)+4\dfrac{d}{dx}(\cos x)$$
$$= 5 \sec x \tan x + 4 \times (-\sin x)$$
$$= 5 \sec x \tan x - 4 \sin x$$
Answered by Pragya Singh | 1 year ago
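As a sanity check on the result, one can compare the formula 5 sec x tan x - 4 sin x against a central-difference estimate of the derivative (a Python sketch; the sample point x = 0.5 is arbitrary):

```python
import math

# Compare the closed-form derivative 5*sec(x)*tan(x) - 4*sin(x)
# against a numerical derivative of f(x) = 5*sec(x) + 4*cos(x).
def f(x):
    return 5 / math.cos(x) + 4 * math.cos(x)

def fprime(x):
    sec = 1 / math.cos(x)
    return 5 * sec * math.tan(x) - 4 * math.sin(x)

x, h = 0.5, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)   # central difference
print(abs(numeric - fprime(x)) < 1e-6)      # True
```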
### Related Questions
#### Differentiate $x^n \log_a x$ with respect to x

#### Differentiate $x^n \tan x$ with respect to x

#### Differentiate $x^2 e^x \log x$ with respect to x
# Illogical conclusion about a mass on a spring [closed]
If you place a weight on a spring and it is at equilibrium, then you have this equation:
$$mg = kx$$
you would solve for k and get:
$$k = mg/x$$
but, if we used conservation of energy, assuming that the unstretched spring puts the mass at x = 0, we would get:
$$\frac{1}{2}kx^2 = mgx$$ and solving for k, $$k = \frac{2mg}{x}$$
What assumption am I making that is false? How would the k value be different with two correct statements?
• You need to include the kinetic energy of the mass. Dec 12, 2022 at 5:44
• @josephh Thanks a lot! I realized right when I submitted. That makes everything a lot clearer. Thanks for your help! Dec 12, 2022 at 6:02
• No worries at all. Dec 12, 2022 at 6:02
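To spell out the resolution hinted at in the comments (a worked sketch; x is measured downward from the unstretched spring):

```latex
% Energy conservation from the release point, keeping the kinetic energy:
\frac{1}{2}kx^{2} + \frac{1}{2}mv^{2} = mgx .
% Equilibrium is where the net force vanishes (the speed is maximal there):
mg = kx_{\mathrm{eq}} \;\Rightarrow\; x_{\mathrm{eq}} = \frac{mg}{k}.
% The point where v = 0 is the bottom of the oscillation, not equilibrium:
\frac{1}{2}kx_{\max}^{2} = mgx_{\max} \;\Rightarrow\; x_{\max} = \frac{2mg}{k} = 2x_{\mathrm{eq}}.
% Both equations are therefore correct; they describe different points.
```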
# std::cyl_bessel_i, std::cyl_bessel_if, std::cyl_bessel_il
double      cyl_bessel_i( double ν, double x );
float       cyl_bessel_if( float ν, float x );
long double cyl_bessel_il( long double ν, long double x );   (1) (since C++17)
Promoted    cyl_bessel_i( Arithmetic ν, Arithmetic x );      (2) (since C++17)
1) Computes the regular modified cylindrical Bessel function of ν and x.
2) A set of overloads or a function template for all combinations of arguments of arithmetic type not covered by (1). If any argument has integral type, it is cast to double. If any argument is long double, then the return type Promoted is also long double, otherwise the return type is always double.
### Parameters
ν - the order of the function
x - the argument of the function
### Return value
If no errors occur, value of the regular modified cylindrical Bessel function of ν and x, that is I_ν(x) = Σ_{k=0}^{∞} (x/2)^{ν+2k} / (k! Γ(ν+k+1)) (for x≥0), is returned.
### Error handling
Errors may be reported as specified in math_errhandling
• If the argument is NaN, NaN is returned and domain error is not reported
• If ν>=128, the behavior is implementation-defined
### Notes
Implementations that do not support C++17, but support ISO 29124:2010, provide this function if __STDCPP_MATH_SPEC_FUNCS__ is defined by the implementation to a value at least 201003L and if the user defines __STDCPP_WANT_MATH_SPEC_FUNCS__ before including any standard library headers.
Implementations that do not support ISO 29124:2010 but support TR 19768:2007 (TR1), provide this function in the header tr1/cmath and namespace std::tr1
An implementation of this function is also available in boost.math
### Example
#include <cmath>
#include <iostream>
int main()
{
// spot check for ν == 0
double x = 1.2345;
std::cout << "I_0(" << x << ") = " << std::cyl_bessel_i(0, x) << '\n';
// series expansion for I_0
double fct = 1;
double sum = 0;
for(int k = 0; k < 5; fct*=++k) {
sum += std::pow((x/2),2*k) / std::pow(fct,2);
std::cout << "sum = " << sum << '\n';
}
}
Output:
I_0(1.2345) = 1.41886
sum = 1
sum = 1.381
sum = 1.41729
sum = 1.41882
sum = 1.41886
# 17 The ns-3 Network Simulator
In this chapter we take a somewhat cursory look at the ns-3 simulator, intended as a replacement for ns-2. The project is managed by the NS-3 Consortium, and all materials are available at nsnam.org.
Ns-3 represents a rather sharp break from ns-2. Gone is the Tcl programming interface; instead, ns-3 simulation programs are written in the C++ language, with extensive calls to the ns-3 library, although they are often still referred to as simulation “scripts”. As the simulator core itself is also written in C++, this in some cases allows improved interaction between configuration and execution. However, configuration and execution are still in most cases quite separate: at the end of the simulation script comes a call Simulator::Run() – akin to ns-2’s $ns run – at which point the user-written C++ has done its job and the library takes over.

To configure a simple simulation, an ns-2 Tcl script had to create nodes and links, create network-connection “agents” attached to nodes, and create traffic-generating applications attached to agents. Much the same applies to ns-3, but in addition each node must be configured with its network interfaces, and each network interface must be assigned an IP address.

## 17.1 Installing and Running ns-3

We here outline the steps for installing ns-3 under Linux from the “allinone” tar file, assuming that all prerequisite packages (such as gcc) are already in place. Much more general installation instructions can be found at www.nsnam.org. In particular, serious users are likely to want to download the current Mercurial repository directly. Information is also available for Windows and Macintosh installation, although perhaps the simplest option for Windows users is to run ns-3 in a Linux virtual machine.

The first step is to unzip the tar file; this should leave a directory named ns-allinone-3.nn, where nn reflects the version number (20 in the author’s installation as of this 2014 writing). This directory is the root of the ns-3 system; it contains a build.py (python) script and the primary ns-3 directory ns-3.nn. All that is necessary is to run the build.py script:

./build.py

Considerable configuration and then compiler output should ensue, hopefully terminating with a list of “Modules built” and “Modules not built”. From this point on, most ns-3 work will take place in the subdirectory ns-3.nn, that is, in ns-allinone-3.nn/ns-3.nn. This development directory contains the source directory src, the script directory scratch, and the execution script waf. The development directory also contains a directory examples containing a rich set of example scripts. The scripts in examples/tutorial are described in depth in the ns-3 tutorial in doc/tutorial.

### 17.1.1 Running a Script

Let us now run a script, for example, the file first.cc included in the examples/tutorial directory. We first copy this file into the directory “scratch”, and then, in the parent development directory, enter the command

./waf --run first

The program is compiled and, if compilation is successful, is run. In fact, every uncompiled program in the scratch directory is compiled, meaning that projects in progress that are not yet compilable must be kept elsewhere. One convenient strategy is to maintain multiple project directories, and link them symbolically to scratch as needed.

The ns-3 system includes support for command-line options; the following example illustrates the passing by command line of the value 3 for the variable nCsma:

./waf --run "second --nCsma=3"

### 17.1.2 Compilation Errors

By default, ns-3 enables the -Werror option to the compiler, meaning that all warnings are treated as errors. This is good practice for contributed or published scripts, but can be rather exasperating for beginners. To disable this, edit the file waf-tools/cflags.py (relative to the development directory). Change the line

self.warnings_flags = [['-Wall'], ['-Werror'], ['-Wextra']]

to

self.warnings_flags = [['-Wall'], ['-Wextra']]

Then, in the development directory, run

./waf configure
./waf build

## 17.2 A Single TCP Sender

We begin by translating the single-TCP-sender script of 16.2 A Single TCP Sender. The full program is in basic1.cc; we now review most of it line-by-line; some standard things such as #include directives are omitted.

/* Network topology:

   A----R----B

   A--R: 10 Mbps / 10 ms delay
   R--B: 800 kbps / 50 ms delay
   queue at R: size 7
*/

using namespace ns3;

std::string fileNameRoot = "basic1";    // base name for trace files, etc

void CwndChange (Ptr<OutputStreamWrapper> stream, uint32_t oldCwnd, uint32_t newCwnd)
{
  *stream->GetStream () << Simulator::Now ().GetSeconds () << " " << newCwnd << std::endl;
}

static void TraceCwnd ()    // Trace changes to the congestion window
{
  AsciiTraceHelper ascii;
  Ptr<OutputStreamWrapper> stream = ascii.CreateFileStream (fileNameRoot + ".cwnd");
  Config::ConnectWithoutContext ("/NodeList/0/$ns3::TcpL4Protocol/SocketList/0/CongestionWindow", MakeBoundCallback (&CwndChange,stream));
}
The function TraceCwnd() arranges for tracing of cwnd; the function CwndChange is a callback, invoked by the ns-3 system whenever cwnd changes. Such callbacks are common in ns-3.
The parameter string beginning /NodeList/0/... is an example of the configuration namespace. Each ns-3 attribute can be accessed this way. See 17.2.2 The Ascii Tracefile below.
int main (int argc, char *argv[])
{
int tcpSegmentSize = 1000;
Config::SetDefault ("ns3::TcpSocket::SegmentSize", UintegerValue (tcpSegmentSize));
Config::SetDefault ("ns3::TcpSocket::DelAckCount", UintegerValue (0));
Config::SetDefault ("ns3::TcpL4Protocol::SocketType", StringValue ("ns3::TcpReno"));
Config::SetDefault ("ns3::RttEstimator::MinRTO", TimeValue (MilliSeconds (500)));
The use of Config::SetDefault() allows us to configure objects that will not exist until some later point, perhaps not until the ns-3 simulator is running. The first parameter is an attribute string, of the form ns3::class::attribute. A partial list of attributes is at https://www.nsnam.org/docs/release/3.19/doxygen/group___attribute_list.html. Attributes of a class can also be determined by a command such as the following:
./waf --run "basic1 --PrintAttributes=ns3::TcpSocket"
The advantage of the Config::SetDefault mechanism is that often objects are created indirectly, perhaps by “helper” classes, and so direct setting of class properties can be problematic.
It is perfectly acceptable to issue some Config::SetDefault calls, then create some objects (perhaps implicitly), and then change the defaults (again with Config::SetDefault) for creation of additional objects.
We pick the TCP congestion-control algorithm by setting ns3::TcpL4Protocol::SocketType. Options are TcpRfc793 (no congestion control), TcpTahoe, TcpReno, TcpNewReno and TcpWestwood. TCP Cubic and SACK TCP are not supported natively (though they are available if the Network Simulation Cradle is installed).
Setting the DelAckCount attribute to 0 disables delayed ACKs. Setting the MinRTO value to 500 ms avoids some unexpected hard timeouts. We will return to both of these below in 17.2.3 Unexpected Timeouts and Other Phenomena.
Next come our local variables and command-line-option processing. In ns-3 the latter is handled via the CommandLine object, which also recognizes the --PrintAttributes option above. Using the --PrintHelp option gives a list of variables that can be set via command-line arguments.
unsigned int runtime = 20; // seconds
int delayAR = 10; // ms
int delayRB = 50; // ms
double bottleneckBW= 0.8; // Mbps
double fastBW = 10; // Mbps
uint32_t queuesize = 7;
uint32_t maxBytes = 0; // 0 means "unlimited"
CommandLine cmd;
// Here, we define command line options overriding some of the above.
cmd.AddValue ("runtime", "How long the applications should send data", runtime);
cmd.AddValue ("queuesize", "queue size at R", queuesize);
cmd.AddValue ("tcpSegmentSize", "TCP segment size", tcpSegmentSize);
cmd.Parse (argc, argv);
std::cout << "queuesize=" << queuesize << ", delayRB=" << delayRB << std::endl;
Next we create three nodes, illustrating the use of smart pointers and CreateObject().
Ptr<Node> A = CreateObject<Node> ();
Ptr<Node> R = CreateObject<Node> ();
Ptr<Node> B = CreateObject<Node> ();
Class Ptr is a “smart pointer” that manages memory through reference counting. The template function CreateObject acts as the ns-3 preferred alternative to operator new. Parameters for objects created this way can be supplied via Config::SetDefault, or by some later method call applied to the Ptr object. For Node objects, for example, we might call A -> AddDevice(...).
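The reference-counting idea behind Ptr can be illustrated with a stripped-down mock; this is purely illustrative (ns-3's actual Ptr and CreateObject are considerably more elaborate), but it shows the core mechanism: copies bump a shared count, destruction drops it, and the managed object is deleted when the count reaches zero.

```cpp
#include <cassert>

// Illustrative mock of a reference-counted smart pointer, NOT ns-3's Ptr.
template <typename T>
class RefPtr {
public:
    explicit RefPtr(T* raw = nullptr)
        : m_ptr(raw), m_count(raw ? new int(1) : nullptr) {}
    RefPtr(const RefPtr& other) : m_ptr(other.m_ptr), m_count(other.m_count) {
        if (m_count) ++*m_count;          // copying bumps the shared count
    }
    RefPtr& operator=(const RefPtr& other) {
        if (this != &other) {
            release();
            m_ptr = other.m_ptr; m_count = other.m_count;
            if (m_count) ++*m_count;
        }
        return *this;
    }
    ~RefPtr() { release(); }
    T* operator->() const { return m_ptr; }
    int useCount() const { return m_count ? *m_count : 0; }
private:
    void release() {                      // last owner deletes the object
        if (m_count && --*m_count == 0) { delete m_ptr; delete m_count; }
    }
    T* m_ptr;
    int* m_count;
};

struct Node { int id = 0; };

// Analog of ns-3's CreateObject<Node>(): allocate and wrap in one step.
template <typename T>
RefPtr<T> createObject() { return RefPtr<T>(new T()); }
```

The point is that script code never calls delete; the last RefPtr going out of scope reclaims the Node.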
A convenient alternative to creating nodes individually is to create a container of nodes all at once:
NodeContainer allNodes;
allNodes.Create(3);
Ptr<Node> A = allNodes.Get(0);
...
After the nodes are in place we create our point-to-point links, using the PointToPointHelper class. We also create NetDeviceContainer objects; we don’t use these here (we could simply call AR.Install(A,R)), but will need them below when assigning IPv4 addresses.
// use PointToPointChannel and PointToPointNetDevice
NetDeviceContainer devAR, devRB;
PointToPointHelper AR, RB;
// create point-to-point link from A to R
AR.SetDeviceAttribute ("DataRate", DataRateValue (DataRate (fastBW * 1000 * 1000)));
AR.SetChannelAttribute ("Delay", TimeValue (MilliSeconds (delayAR)));
devAR = AR.Install(A, R);
// create point-to-point link from R to B
RB.SetDeviceAttribute ("DataRate", DataRateValue (DataRate (bottleneckBW * 1000 * 1000)));
RB.SetChannelAttribute ("Delay", TimeValue (MilliSeconds (delayRB)));
RB.SetQueue("ns3::DropTailQueue", "MaxPackets", UintegerValue(queuesize));
devRB = RB.Install(R,B);
Next we hand out IPv4 addresses. The Ipv4AddressHelper class can help us with individual LANs (eg A–R and R–B), but it is up to us to make sure our two LANs are on different subnets. If we attempt to put A and B on the same subnet, routing will simply fail, just as it would if we were to do this with real network nodes.
InternetStackHelper internet;
internet.Install (A);
internet.Install (R);
internet.Install (B);
Ipv4AddressHelper ipv4;
ipv4.SetBase ("10.0.0.0", "255.255.255.0");
Ipv4InterfaceContainer ipv4Interfaces;
ipv4Interfaces.Add (ipv4.Assign (devAR));
ipv4.SetBase ("10.0.1.0", "255.255.255.0");
ipv4Interfaces.Add (ipv4.Assign (devRB));
Ipv4GlobalRoutingHelper::PopulateRoutingTables ();
Next we print out the addresses assigned. This gives us a peek at the GetObject template and the ns-3 object-aggregation model. The original Node objects we created earlier were quite generic; they gained their Ipv4 component in the code above. Now we retrieve that component with the GetObject<Ipv4>() calls below.
Ptr<Ipv4> A4 = A->GetObject<Ipv4>(); // gets node A's IPv4 subsystem
Ptr<Ipv4> B4 = B->GetObject<Ipv4>();
Ptr<Ipv4> R4 = R->GetObject<Ipv4>();
In general, A->GetObject<T> returns the component of type T that has been “aggregated” to Ptr<Object> A; often this aggregation is invisible to the script programmer but an understanding of how it works is sometimes useful. The aggregation is handled by the ns-3 Object class, which contains an internal list m_aggregates of aggregated companion objects. At most one object of a given type can be aggregated to another, making GetObject<T> unambiguous. Given a Ptr<Object> A, we can obtain an iterator over the aggregated companions via A->GetAggregateIterator(), of type Object::AggregateIterator. From each Ptr<const Object> B returned by this iterator, we can call B->GetInstanceTypeId().GetName() to get the class name of B.
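The at-most-one-component-per-type rule can be sketched with a simplified mock of the aggregation model, keyed by std::type_index; again this is an illustration of the idea, not the actual ns-3 Object class.

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <typeindex>

// Sketch of ns-3-style aggregation: each object keeps a per-type map of
// companions, so a GetObject<T>()-style lookup is unambiguous.
struct Component { virtual ~Component() = default; };

class AggObject {
public:
    template <typename T>
    void aggregate(std::shared_ptr<T> comp) {
        // storing under typeid(T) replaces any previous companion of type T,
        // enforcing "at most one object of a given type"
        m_aggregates[std::type_index(typeid(T))] = comp;
    }
    template <typename T>
    std::shared_ptr<T> getObject() const {        // analog of GetObject<T>()
        auto it = m_aggregates.find(std::type_index(typeid(T)));
        if (it == m_aggregates.end()) return nullptr;
        return std::static_pointer_cast<T>(it->second);
    }
private:
    std::map<std::type_index, std::shared_ptr<Component>> m_aggregates;
};

// stand-in for the Ipv4 component a Node gains from InternetStackHelper
struct Ipv4Stack : Component { int numInterfaces = 2; };
```

A bare "node" returns nullptr for getObject<Ipv4Stack>() until the component is aggregated, mirroring how a generic ns-3 Node gains its Ipv4 subsystem only after internet.Install().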
The GetAddress() calls take two parameters; the first specifies the interface (a value of 0 gives the loopback interface) and the second distinguishes between multiple addresses assigned to the same interface (which is not happening here). The call A4->GetAddress(1,0) returns an Ipv4InterfaceAddress object containing, among other things, an IP address, a broadcast address and a netmask; GetLocal() returns the first of these.
Next we create the receiver on B, using a PacketSinkHelper. A receiver is, in essence, a read-only form of an application server.
// create a sink on B
uint16_t Bport = 80;
Address sinkAaddr (InetSocketAddress (Ipv4Address::GetAny (), Bport));
PacketSinkHelper sinkA ("ns3::TcpSocketFactory", sinkAaddr);
ApplicationContainer sinkAppA = sinkA.Install (B);
sinkAppA.Start (Seconds (0.01));
// the following means the receiver will run 1 min longer than the sender app.
sinkAppA.Stop (Seconds (runtime + 60.0));
Now comes the sending application, on A. We must configure and create a BulkSendApplication, attach it to A, and arrange for a connection to be created to B. The BulkSendHelper class simplifies this.
Address sinkAddr (InetSocketAddress (B4->GetAddress (1,0).GetLocal (), Bport));
BulkSendHelper sourceAhelper ("ns3::TcpSocketFactory", sinkAddr);
sourceAhelper.SetAttribute ("MaxBytes", UintegerValue (maxBytes));
sourceAhelper.SetAttribute ("SendSize", UintegerValue (tcpSegmentSize));
ApplicationContainer sourceAppsA = sourceAhelper.Install (A);
sourceAppsA.Start (Seconds (0.0));
sourceAppsA.Stop (Seconds (runtime));
If we did not want to use the helper class here, the easiest way to create the BulkSendApplication is with an ObjectFactory. We configure the factory with the type we want to create and the relevant configuration parameters, and then call factory.Create(). (We could have used the Config::SetDefault() mechanism and CreateObject() as well.)
ObjectFactory factory;
factory.SetTypeId ("ns3::BulkSendApplication");
factory.Set ("Protocol", StringValue ("ns3::TcpSocketFactory"));
factory.Set ("MaxBytes", UintegerValue (maxBytes));
factory.Set ("SendSize", UintegerValue (tcpSegmentSize));
Ptr<Object> bulkSendAppObj = factory.Create();
Ptr<Application> bulkSendApp = bulkSendAppObj -> GetObject<Application>();
bulkSendApp->SetStartTime(Seconds(0.0));
bulkSendApp->SetStopTime(Seconds(runtime));
The above gives us no direct access to the actual TCP connection. Yet another alternative is to start by creating the TCP socket and connecting it:
Ptr<Socket> tcpsock = Socket::CreateSocket (A, TcpSocketFactory::GetTypeId ());
tcpsock->Bind();
tcpsock->Connect(sinkAddr);
However, there is then no mechanism for creating a BulkSendApplication that uses a pre-existing socket. (For a workaround, see the tutorial example fifth.cc.)
Before beginning execution, we set up tracing; we will look at the tracefile format later. We use the AR PointToPointHelper class here, but both ascii and pcap tracing apply to the entire A–R–B network.
// Set up tracing
AsciiTraceHelper ascii;
std::string tfname = fileNameRoot + ".tr";
AR.EnableAsciiAll (ascii.CreateFileStream (tfname));
// Setup tracing for cwnd
Simulator::Schedule(Seconds(0.01),&TraceCwnd); // this Time cannot be 0.0
// This tells ns-3 to generate pcap traces, including "-node#-dev#-" in filename
AR.EnablePcapAll (fileNameRoot); // ".pcap" suffix is added automatically
This last creates four .pcap files, eg
basic1-0-0.pcap
basic1-1-0.pcap
basic1-1-1.pcap
basic1-2-0.pcap
The first number refers to the node (A=0, R=1, B=2) and the second to the interface. A packet arriving at R but dropped there will appear in the second .pcap file but not the third. These files can be viewed with WireShark.
Finally we are ready to start the simulator! The BulkSendApplication will stop at time runtime, but traffic may be in progress. We allow it an additional 60 seconds to clear. We also, after the simulation has run, print out the number of bytes received by B.
Simulator::Stop (Seconds (runtime+60));
Simulator::Run ();
Ptr<PacketSink> sink1 = DynamicCast<PacketSink> (sinkAppA.Get (0));
std::cout << "Total Bytes Received from A: " << sink1->GetTotalRx () << std::endl;
return 0;
}
### 17.2.1 Running the Script
When we run the script and plot the cwnd trace data (here for about 12 seconds), we get the following:
Compare this graph to that in 16.2.1 Graph of cwnd v time produced by ns-2. The slow-start phase earlier ended at around 2.0 and now ends closer to 3.0. There are several modest differences, including the halving of cwnd just before T=1 and the peak around T=2.6; these were not apparent in the ns-2 graph.
After slow-start is over, the graphs are quite similar; cwnd ranges from 10 to 21. The period before was 1.946 seconds; here it is 2.0548; the difference is likely due to a more accurate implementation of the recovery algorithm.
One striking difference is the presence of the near-vertical line of dots just after each peak. What is happening here is that ns-3 implements the cwnd inflation/deflation algorithm outlined at the tail end of 13.4 TCP Reno and Fast Recovery: when three dupACKs are received, cwnd is set to cwnd/2 + 3, and is then allowed to increase to 1.5×cwnd.
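The bookkeeping behind those dots can be sketched as follows, in units of whole segments; this is a simplified rendering of the rules just described, not ns-3 source code.

```cpp
#include <cassert>

// Simplified TCP Reno fast-recovery state, in units of segments.
// Third dupACK: ssthresh = cwnd/2, cwnd = ssthresh + 3.
// Each further dupACK inflates cwnd by 1 (the near-vertical dots);
// the ACK of new data deflates cwnd back down to ssthresh.
struct RenoState { int cwnd; int ssthresh; };

RenoState onTripleDupAck(RenoState s) {
    s.ssthresh = s.cwnd / 2;
    s.cwnd = s.ssthresh + 3;
    return s;
}
RenoState onExtraDupAck(RenoState s) { s.cwnd += 1; return s; }      // inflation
RenoState onNewAck(RenoState s)      { s.cwnd = s.ssthresh; return s; } // deflation
```

With cwnd = 20 at the loss, cwnd drops to 13, inflates toward 1.5×20 = 30 as dupACKs arrive, then deflates to 10 when the retransmission is ACKed.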
### 17.2.2 The Ascii Tracefile
Below are four lines from the tracefile, starting with the record showing packet 271 (Seq=271001) being dropped by R.
d 4.9823 /NodeList/1/DeviceList/1/$ns3::PointToPointNetDevice/TxQueue/Drop ns3::PppHeader (Point-to-Point Protocol: IP (0x0021)) ns3::Ipv4Header (tos 0x0 DSCP Default ECN Not-ECT ttl 63 id 296 protocol 6 offset (bytes) 0 flags [none] length: 1040 10.0.0.1 > 10.0.1.2) ns3::TcpHeader (49153 > 80 [ ACK ] Seq=271001 Ack=1 Win=65535) Payload (size=1000)

r 4.98312 /NodeList/2/DeviceList/0/$ns3::PointToPointNetDevice/MacRx ns3::Ipv4Header (tos 0x0 DSCP Default ECN Not-ECT ttl 63 id 283 protocol 6 offset (bytes) 0 flags [none] length: 1040 10.0.0.1 > 10.0.1.2) ns3::TcpHeader (49153 > 80 [ ACK ] Seq=258001 Ack=1 Win=65535) Payload (size=1000)
+ 4.98312 /NodeList/2/DeviceList/0/$ns3::PointToPointNetDevice/TxQueue/Enqueue ns3::PppHeader (Point-to-Point Protocol: IP (0x0021)) ns3::Ipv4Header (tos 0x0 DSCP Default ECN Not-ECT ttl 64 id 271 protocol 6 offset (bytes) 0 flags [none] length: 40 10.0.1.2 > 10.0.0.1) ns3::TcpHeader (80 > 49153 [ ACK ] Seq=1 Ack=259001 Win=65535)

- 4.98312 /NodeList/2/DeviceList/0/$ns3::PointToPointNetDevice/TxQueue/Dequeue ns3::PppHeader (Point-to-Point Protocol: IP (0x0021)) ns3::Ipv4Header (tos 0x0 DSCP Default ECN Not-ECT ttl 64 id 271 protocol 6 offset (bytes) 0 flags [none] length: 40 10.0.1.2 > 10.0.0.1) ns3::TcpHeader (80 > 49153 [ ACK ] Seq=1 Ack=259001 Win=65535)
As with ns-2, the first letter indicates the action: r for received, d for dropped, + for enqueued, - for dequeued. For Wi-Fi tracefiles, t is for transmitted. The second field represents the time.
The third field represents the name of the event in the configuration namespace, sometimes called the configuration path name. The NodeList value represents the node (A=0, etc), the DeviceList represents the interface, and the final part of the name repeats the action: Drop, MacRx, Enqueue, Dequeue.
After that come a series of class names (eg ns3::Ipv4Header, ns3::TcpHeader), from the ns-3 attribute system, followed in each case by a parenthesized list of class-specific trace information.
In the output above, the final three records all refer to node B (/NodeList/2/). Packet 258 has just arrived (Seq=258001), and ACK 259001 is then enqueued and sent.
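When processing tracefiles programmatically, the leading fields are easy to pull out; here is a quick sketch that assumes the field layout shown above (action character, timestamp, configuration path).

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Pull the action, timestamp, and NodeList index out of an ns-3 ascii
// trace record such as "r 4.98312 /NodeList/2/DeviceList/0/...".
struct TraceRecord { char action; double time; int node; };

TraceRecord parseTraceLine(const std::string& line) {
    std::istringstream in(line);
    TraceRecord rec{};
    std::string path;
    in >> rec.action >> rec.time >> path;       // whitespace-separated fields
    const std::string key = "/NodeList/";
    std::size_t pos = path.find(key);
    rec.node = (pos == std::string::npos)
                   ? -1
                   : std::stoi(path.substr(pos + key.size()));
    return rec;
}
```

Filtering a tracefile on, say, node == 1 and action == 'd' then recovers exactly the drops at R.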
### 17.2.3 Unexpected Timeouts and Other Phenomena
In the discussion of the script above at 17.2 A Single TCP Sender we mentioned that we set ns3::TcpSocket::DelAckCount to 0, to disable delayed ACKs, and ns3::RttEstimator::MinRTO to 500 ms, to avoid unexpected timeouts.
If we comment out the line disabling delayed ACKs, little changes in our graph, except that the spacing between consecutive TCP teeth now almost doubles to 3.776. This is because with delayed ACKs the receiver sends only half as many ACKs, and the sender does not take this into account when incrementing cwnd (that is, the sender does not implement the suggestion of RFC 3465 mentioned in 13.2.1 TCP Reno Per-ACK Responses).
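The near-doubling of the tooth period follows from a back-of-the-envelope model: cwnd climbs from W/2 back to W, growing by about one segment per RTT when every segment is ACKed, and by about half a segment per RTT with delayed ACKs. This toy model is only a sketch of that argument, not a TCP simulation.

```cpp
#include <cassert>

// Toy model of one Reno sawtooth: cwnd climbs from wMax/2 back to wMax,
// gaining 'ackFraction' segments per RTT (1 normally, roughly 1/2 with
// delayed ACKs). Returns the number of RTTs per tooth.
int rttsPerTooth(double wMax, double ackFraction) {
    double cwnd = wMax / 2.0;
    int rtts = 0;
    while (cwnd < wMax) {
        cwnd += ackFraction;
        ++rtts;
    }
    return rtts;
}
```

Halving the per-RTT growth doubles the tooth period, consistent with the observed jump from roughly 2.05 to 3.776 seconds (the measured ratio is a bit under 2 because real teeth do not start exactly at W/2).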
If we leave out the MinRTO adjustment, and set tcpSegmentSize to 960, we get a more serious problem: the graph now looks something like this:
We can enable ns-3’s internal logging in the TcpReno class by entering the commands below, before running the script. (In some cases, as with WifiHelper::EnableLogComponents(), logging output can be enabled from within the script.) Once enabled, logging output is written to stderr.
NS_LOG=TcpReno=level_info
export NS_LOG
The log output shows the initial dupACK at 8.54:
8.54069 [node 0] Triple dupack. Reset cwnd to 12960, ssthresh to 10080
But then, despite Fast Recovery proceeding normally, we get a hard timeout:
8.71463 [node 0] RTO. Reset cwnd to 960, ssthresh to 14400, restart from seqnum 510721
What is happening here is that the RTO interval was just a little too short, probably due to the use of the “awkward” segment size of 960.
After the timeout, there is another triple-dupACK!
8.90344 [node 0] Triple dupack. Reset cwnd to 6240, ssthresh to 3360
Shortly thereafter, at T=8.98, cwnd is reset to 3360, in accordance with the Fast Recovery rules.
The overall effect is that cwnd is reset, not to 10, but to about 3.4 (in packets). This significantly slows down throughput.
In recovering from the hard timeout, the sequence number is reset to Seq=510721 (packet 532), as this was the last packet acknowledged. Unfortunately, several later packets had in fact made it through to B. By looking at the tracefile, we can see that at T=8.7818, B received Seq=538561, or packet 561. Thus, when A begins retransmitting packets 533, 534, etc after the timeout, B's response is to ACK the highest packet it has received, packet 561 (Ack=539521).
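These sequence numbers are easy to check: the tracefile numbers data bytes from 1, so with the 960-byte segment size the packet numbered n (counting from 0) starts at byte n×960+1, and the cumulative ACK covering through packet n is (n+1)×960+1.

```cpp
#include <cassert>
#include <cstdint>

// Byte-numbering arithmetic for the tracefile: data bytes start at 1,
// packets are numbered from 0, segment size is segsize bytes.
int64_t seqOfPacket(int64_t n, int64_t segsize) {
    return n * segsize + 1;                 // first byte of packet n
}
int64_t ackThroughPacket(int64_t n, int64_t segsize) {
    return (n + 1) * segsize + 1;           // next byte expected after packet n
}
```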
This scenario is not what the designers of Fast Recovery had in mind; it is likely triggered by a too-conservative timeout estimate. Still, exactly how to fix it is an interesting question; one approach might be to ignore, in Fast Recovery, triple dupACKs of packets now beyond what the sender is currently sending.
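To see why raising MinRTO helps, recall the standard Jacobson/Karels retransmission-timer estimate, RTO = SRTT + 4×RTTVAR, clamped below by MinRTO. The sketch below uses the textbook gains (1/8 and 1/4); ns-3's RttEstimator differs in details, so treat this as an illustration of the clamp, not the ns-3 implementation.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Jacobson/Karels RTO estimation with a MinRTO floor, all in seconds.
struct RtoEstimator {
    double srtt = 0, rttvar = 0, minRto;
    bool first = true;
    explicit RtoEstimator(double minRto_) : minRto(minRto_) {}
    double onMeasurement(double m) {
        if (first) { srtt = m; rttvar = m / 2; first = false; }
        else {
            rttvar += (std::fabs(m - srtt) - rttvar) / 4.0;
            srtt   += (m - srtt) / 8.0;
        }
        return std::max(minRto, srtt + 4.0 * rttvar);  // clamp at MinRTO
    }
};
```

With the A--R--B round-trip time of roughly 120 ms, the raw estimate can fall well under half a second; the 500 ms floor keeps a slightly-short estimate from firing a spurious hard timeout.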
## 17.3 Wireless
We next present the wireless simulation of 16.6 Wireless Simulation. The full script is at wireless.cc; the animation output for the netanim player is at wireless.xml. As before, we have one mover node moving horizontally 150 meters above a row of five fixed nodes spaced 200 meters apart. The limit of transmission is set to be 250 meters, meaning that a fixed node goes out of range of the mover node just as the latter passes over directly above the next fixed node. As before, we use Ad hoc On-demand Distance Vector (AODV) as the routing protocol. When the mover passes over fixed node N, it goes out of range of fixed node N-1, at which point AODV finds a new route to mover through fixed node N.
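The out-of-range timing works out exactly because 150, 200 and 250 form a scaled 3-4-5 right triangle: when the mover is directly above fixed node N, its distance to node N±1 is √(150² + 200²) = 250 m. A one-line check:

```cpp
#include <cassert>
#include <cmath>

// Distance from the mover (height metres above the row, directly over one
// fixed node) to the neighboring fixed node, spacing metres away.
double rangeToNeighbor(double height, double spacing) {
    return std::sqrt(height * height + spacing * spacing);
}
```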
As in ns-2, wireless simulations tend to require considerably more configuration than point-to-point simulations. We now review the source code line-by-line. We start with two callback functions and the global variables they will need to access.
using namespace ns3;
Ptr<ConstantVelocityMobilityModel> cvmm;
double position_interval = 1.0;
std::string tracebase = "scratch/wireless";
// two callbacks
void printPosition()
{
Vector thePos = cvmm->GetPosition();
Simulator::Schedule(Seconds(position_interval), &printPosition);
std::cout << "position: " << thePos << std::endl;
}
void stopMover()
{
cvmm -> SetVelocity(Vector(0,0,0));
}
Next comes the data rate:
int main (int argc, char *argv[])
{
std::string phyMode = "DsssRate1Mbps";
The phyMode string represents the Wi-Fi data rate (and modulation technique). DSSS rates are DsssRate1Mbps, DsssRate2Mbps, DsssRate5_5Mbps and DsssRate11Mbps. Also available are ErpOfdmRate constants to 54 Mbps and OfdmRate constants to 150 Mbps with a 40 MHz bandwidth (GetOfdmRate150MbpsBW40MHz). All these are defined in src/wifi/model/wifi-phy.cc.
Next are the variables that determine the layout and network behavior. The factor variable allows slowing down the speed of the mover node but correspondingly extending the runtime (though the new-route-discovery time is not scaled):
int bottomrow = 5; // number of bottom-row nodes
int spacing = 200; // between bottom-row nodes
int mheight = 150; // height of mover above bottom row
int brheight = 50; // height of bottom row
int X = (bottomrow-1)*spacing+1; // X is the horizontal dimension of the field
int packetsize = 500;
double factor = 1.0; // allows slowing down rate and extending runtime; same total # of packets
int endtime = (int)(100*factor);
double speed = (X-1.0)/endtime;
double bitrate = 80*1000.0/factor; // *average* transmission rate, in bits/sec
uint32_t interval = 1000*packetsize*8/bitrate*1000; // in microsec
uint32_t packetcount = 1000000*endtime/ interval;
std::cout << "interval = " << interval <<", rate=" << bitrate << ", packetcount=" << packetcount << std::endl;
There are some niceties in calculating the packet transmission interval above; if we do it instead as 1000000*packetsize*8/bitrate then we sometimes run into 32-bit overflow problems or integer-division-roundoff problems.
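To make the overflow concrete: the all-at-once numerator 1000000×packetsize×8 is 4,000,000,000 for a 500-byte packet, which does not fit in a 32-bit int, while the script's ordering divides by the (double) bitrate partway through and keeps every intermediate small. The values below match the script: 500-byte packets at an average of 80,000 bits/sec give a 50,000 µs send interval.

```cpp
#include <cassert>
#include <cstdint>
#include <limits>

// The script's ordering: ints stay small, the division happens in double.
uint32_t intervalMicroseconds(int packetsize, double bitrate) {
    return static_cast<uint32_t>(1000 * packetsize * 8 / bitrate * 1000);
}

// Check whether the "obvious" numerator would fit in 32-bit int arithmetic
// (computed safely in 64 bits to avoid undefined overflow behavior).
bool fitsInInt32(int packetsize) {
    int64_t numerator = 1000000LL * packetsize * 8;
    return numerator <= std::numeric_limits<int32_t>::max();
}
```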
Now we configure some Wi-Fi settings.
// disable fragmentation for frames below 2200 bytes
Config::SetDefault ("ns3::WifiRemoteStationManager::FragmentationThreshold", StringValue ("2200"));
// turn off RTS/CTS for frames below 2200 bytes
Config::SetDefault ("ns3::WifiRemoteStationManager::RtsCtsThreshold", StringValue ("2200"));
// Set non-unicast data rate to be the same as that of unicast
Config::SetDefault ("ns3::WifiRemoteStationManager::NonUnicastMode", StringValue (phyMode));
Here we create the mover node with CreateObject<Node>(), but the fixed nodes are created via a NodeContainer, as is more typical with larger simulations:
// Create nodes
NodeContainer fixedpos;
fixedpos.Create(bottomrow);
Ptr<Node> lowerleft = fixedpos.Get(0);
Ptr<Node> mover = CreateObject<Node>();
Now we put together a set of “helper” objects for more Wi-Fi configuration. We must configure both the PHY (physical) and MAC layers.
// The below set of helpers will help us to put together the desired Wi-Fi behavior
WifiHelper wifi;
wifi.SetStandard (WIFI_PHY_STANDARD_80211b);
wifi.SetRemoteStationManager ("ns3::AarfWifiManager"); // Use AARF rate control
The AARF rate changes can be viewed by setting NS_LOG=AarfWifiManager=level_debug at the shell level before running ./waf. We are not otherwise interested in rate scaling (3.7.2 Dynamic Rate Scaling) here, though.
The PHY layer helper is YansWifiPhyHelper. The YANS project (Yet Another Network Simulator) was an influential precursor to ns-3; see [LH06]. Note the AddPropagationLoss configuration, where we set the Wi-Fi range to 250 meters. The MAC layer helper is NqosWifiMacHelper; the “nqos” means “no quality-of-service”, ie no use of Wi-Fi PCF (3.7.7 Wi-Fi Polling Mode).
// The PHY layer here is "yans"
YansWifiPhyHelper wifiPhyHelper = YansWifiPhyHelper::Default ();
// for .pcap tracing
YansWifiChannelHelper wifiChannelHelper; // *not* ::Default() !
wifiChannelHelper.SetPropagationDelay ("ns3::ConstantSpeedPropagationDelayModel"); // pld: default?
// the following has an absolute cutoff at distance > 250
wifiChannelHelper.AddPropagationLoss ("ns3::RangePropagationLossModel", "MaxRange", DoubleValue(250));
Ptr<YansWifiChannel> pchan = wifiChannelHelper.Create ();
wifiPhyHelper.SetChannel (pchan);
NqosWifiMacHelper wifiMacHelper = NqosWifiMacHelper::Default ();
NetDeviceContainer devices = wifi.Install (wifiPhyHelper, wifiMacHelper, fixedpos);
devices.Add (wifi.Install (wifiPhyHelper, wifiMacHelper, mover));
At this point the basic Wi-Fi configuration is done! The next step is to work on the positions and motion. First we establish the positions of the fixed nodes.
MobilityHelper sessile; // for fixed nodes
Ptr<ListPositionAllocator> positionAlloc = CreateObject<ListPositionAllocator> ();
int Xpos = 0;
for (int i=0; i<bottomrow; i++) {
    positionAlloc->Add(Vector(Xpos, brheight, 0.0));  // place fixed node i
    Xpos += spacing;
}
sessile.SetPositionAllocator (positionAlloc);
sessile.SetMobilityModel ("ns3::ConstantPositionMobilityModel");
sessile.Install (fixedpos);
Next we set up the mover node. ConstantVelocityMobilityModel is a subclass of MobilityModel. At the end we print out a couple things just for confirmation.
Vector pos (0, mheight+brheight, 0);
Vector vel (speed, 0, 0);
MobilityHelper mobile;
mobile.SetMobilityModel("ns3::ConstantVelocityMobilityModel"); // no Attributes
mobile.Install(mover);
cvmm = mover->GetObject<ConstantVelocityMobilityModel> ();
cvmm->SetPosition(pos);
cvmm->SetVelocity(vel);
std::cout << "position: " << cvmm->GetPosition() << " velocity: " << cvmm->GetVelocity() << std::endl;
std::cout << "mover mobility model: " << mobile.GetMobilityModelType() << std::endl;
Now we configure Ad hoc On-demand Distance Vector routing.
AodvHelper aodv;
OlsrHelper olsr;
Ipv4ListRoutingHelper listrouting;
//listrouting.Add(olsr, 10); // generates less traffic
listrouting.Add(aodv, 10); // fastest to find new routes
Uncommenting the olsr line (and commenting out the last line) is all that is necessary to change to OLSR routing. OLSR is slower to find new routes, but sends less traffic.
Now we set up the IP addresses. This is straightforward as all the nodes are on a single subnet.
InternetStackHelper internet;
internet.SetRoutingHelper(listrouting);
internet.Install (fixedpos);
internet.Install (mover);
Ipv4AddressHelper ipv4;
ipv4.SetBase ("10.1.1.0", "255.255.255.0"); // there is only one subnet
Ipv4InterfaceContainer i = ipv4.Assign (devices);
Now we create a receiving application UdpServer on node mover, and a sending application UdpClient on the lower-left node. These applications generate their own sequence numbers, which show up in the ns-3 tracefiles marked with ns3::SeqTsHeader. As in 17.2 A Single TCP Sender, we use Config::SetDefault() and CreateObject<>() to construct the applications.
uint16_t port = 80;
// create a receiving application (UdpServer) on node mover
Config::SetDefault("ns3::UdpServer::Port", UintegerValue(port));
Ptr<UdpServer> UdpRecvApp = CreateObject<UdpServer>();
UdpRecvApp->SetStartTime(Seconds(0.0));
UdpRecvApp->SetStopTime(Seconds(endtime+60));
mover->AddApplication(UdpRecvApp);
Ptr<Ipv4> m4 = mover->GetObject<Ipv4>();
Here is the UdpClient sending application:
Config::SetDefault("ns3::UdpClient::MaxPackets", UintegerValue(packetcount));
Config::SetDefault("ns3::UdpClient::PacketSize", UintegerValue(packetsize));
Config::SetDefault("ns3::UdpClient::Interval", TimeValue (MicroSeconds (interval)));
Ptr<UdpClient> UdpSendApp = CreateObject<UdpClient>();
UdpSendApp -> SetRemote(m4->GetAddress(1,0).GetLocal(), port);  // send to mover
UdpSendApp -> SetStartTime(Seconds(0.0));
UdpSendApp -> SetStopTime(Seconds(endtime));
lowerleft -> AddApplication(UdpSendApp);
We now set up tracing. The first, commented-out line enables pcap-format tracing, which we do not need here. The YansWifiPhyHelper object supports tracing only of “receive” (r) and “transmit” (t) records; the PointToPointHelper of 17.2 A Single TCP Sender also traced enqueue and drop records.
//wifiPhyHelper.EnablePcap (tracebase, devices);
AsciiTraceHelper ascii;
wifiPhyHelper.EnableAsciiAll (ascii.CreateFileStream (tracebase + ".tr"));
// create animation file, to be run with 'netanim'
AnimationInterface anim (tracebase + ".xml");
anim.SetMobilityPollInterval(Seconds(0.1));
If we view the animation with netanim, the moving node’s motion is clear. The mover node, however, sometimes appears to transmit back to both the fixed-row node below left and the fixed-row node below right. These transmissions represent the Wi-Fi link-layer ACKs; they appear to be sent to two fixed-row nodes because what netanim is actually displaying with its blue links is transmission to every other node in range.
We can also “view” the motion in text format by uncommenting the first line below.
//Simulator::Schedule(Seconds(position_interval), &printPosition);
Simulator::Schedule(Seconds(endtime), &stopMover);
Finally it is time to run the simulator, and print some final output.
Simulator::Stop(Seconds (endtime+60));
Simulator::Run ();
Simulator::Destroy ();
uint64_t pktsRecd = UdpRecvApp->GetReceived();
std::cout << "packets received: " << pktsRecd << std::endl;
std::cout << "packets recorded as lost: " << (UdpRecvApp->GetLost()) << std::endl;
std::cout << "packets actually lost: " << (packetcount - pktsRecd) << std::endl;
return 0;
}
### 17.3.1 Tracefile Analysis
The tracefile provides no enqueue records, and Wi-Fi doesn’t have fixed links; how can we verify that packets are being forwarded correctly? One thing we can do with the tracefile is to look at each value of the UdpServer application sequence number, and record
• when it was received by node mover
• when it was transmitted by any fixed-row node
If we do this, we get output like the following:
packet 0 received at 0.0248642, forwarded by 0 at 0.0201597
packet 1 received at 0.0547045, forwarded by 0 at 0.05
...
packet 499 received at 24.9506, forwarded by 0 at 24.95
packet 500 NOT recd, forwarded by 0 at 25, forwarded by 0 at 25.0019, forwarded by 0 at 25.0035, forwarded by 0 at 25.0071, forwarded by 0 at 25.0097, forwarded by 0 at 25.0159, forwarded by 0 at 25.0281
packet 501 received at 25.0864, forwarded by 0 at 25.0767, forwarded by 1 at 25.0817
packet 502 received at 25.1098, forwarded by 0 at 25.1, forwarded by 1 at 25.1051
...
packet 1000 NOT recd, forwarded by 0 at 50, forwarded by 1 at 50.001, forwarded by 1 at 50.003, forwarded by 1 at 50.0059, forwarded by 1 at 50.0087, forwarded by 1 at 50.0151, forwarded by 1 at 50.0239, forwarded by 1 at 50.0341
packet 1001 received at 50.082, forwarded by 0 at 50.0683, forwarded by 1 at 50.0722, forwarded by 2 at 50.0773
packet 1002 received at 50.1107, forwarded by 0 at 50.1, forwarded by 1 at 50.101, forwarded by 2 at 50.106
...
packet 1499 received at 74.9525, forwarded by 0 at 74.95, forwarded by 1 at 74.951, forwarded by 2 at 74.9519
packet 1500 NOT recd, forwarded by 0 at 75, forwarded by 1 at 75.001, forwarded by 2 at 75.0019, forwarded by 2 at 75.0039, forwarded by 2 at 75.005, forwarded by 2 at 75.0084, forwarded by 2 at 75.0124, forwarded by 2 at 75.0277, forwarded by 2 at 75.0361
packet 1501 NOT recd, forwarded by 0 at 75.05
packet 1502 received at 75.1484, forwarded by 0 at 75.1287, forwarded by 1 at 75.1299, forwarded by 1 at 75.1314, forwarded by 1 at 75.1326, forwarded by 2 at 75.1386, forwarded by 3 at 75.1437
packet 1503 received at 75.1621, forwarded by 0 at 75.15, forwarded by 1 at 75.151, forwarded by 2 at 75.1523, forwarded by 3 at 75.1574
...
That is, packets 0-499 were transmitted only by node 0. Packet 500 was never received by mover, but there were seven transmission attempts; these seven attempts follow the rules described in 3.7.1 Wi-Fi and Collisions. Packets starting at 501 were transmitted by node 0 and then later by node 1. Similarly, packet 1000 was lost, and after that each packet arriving at mover was first transmitted by nodes 0, 1 and 2, in that order. In other words, packets are indeed being forwarded rightward along the line of fixed-row nodes until a node is reached that is in range of mover.
### 17.3.2 AODV Performance
If we change the line
listrouting.Add(aodv, 10);
to
listrouting.Add(dsdv, 10);
we find that the loss count goes from 4 packets out of 2000 to 398 out of 2000; for OLSR routing the loss count is 426. As we discussed in 16.6 Wireless Simulation, the loss of one data packet triggers the AODV implementation to look for a new route. The DSDV and OLSR implementations, on the other hand, only look for new routes at regularly spaced intervals.
# Crazy integral
1. Nov 1, 2007
### Doom of Doom
$$\int_{0}^{2\pi} \frac{dx}{1+e^{sin(x)}}$$
How would you evaluate this integral? Where do you even start?
2. Nov 1, 2007
### hotcommodity
Well, we know that $$\int \frac{dx}{1 + x^{2}} = arctan(x) + C$$.
How can we think of $$e^{sin(x)}$$ as a term that's been squared?
3. Nov 2, 2007
### Gib Z
No idea :(
I would have let u=sin x, so the integral becomes $$\int \frac{1}{1+e^u} \frac{du}{\sqrt{1-u^2}}$$ then did integration by parts.
4. Nov 2, 2007
### hotcommodity
I was completely wrong about using $$\int \frac{dx}{1 + x^{2}} = arctan(x) + C$$.
I tried evaluating the integral using my TI-89, but it wouldn't evaluate it unless I put in endpoints.
5. Nov 2, 2007
### Count Iblis
I would try the following. Put $$z=\exp\left(i x\right)$$. The integral then becomes:
$$\oint\frac{1}{1+\exp\left(\frac{z-z^{-1}}{2i}\right)}\frac{dz}{iz}$$
where the contour integral is over the unit circle in the complex plane. Next, use the residue theorem.
6. Nov 2, 2007
### Doom of Doom
Actually, I think I found a way to do this without using complex analysis.
Let $$f(x) = \frac{1}{1+e^{sin(x)}}$$ and $$g(x)=f(x-\pi)-\frac{1}{2} = \frac{1}{1+e^{sin(x-\pi)}}-\frac{1}{2}$$.
I will now show that g(x) is an odd function on the interval $$\left[-\pi,\pi\right]$$. For this to be true, I need g(x)=-g(-x), or g(x)+g(-x)=0.
Using the fact that sin(x-pi)=-sin(x) and sin(-x-pi)=sin(x),
$$g(x)+g(-x)=$$
$$=\frac{1}{1+e^{sin(x-\pi)}}-\frac{1}{2}+\frac{1}{1+e^{sin(-x-\pi)}}-\frac{1}{2}$$
$$=\frac{1}{1+e^{-sin(x)}}+\frac{1}{1+e^{sin(x)}}-1$$
$$=\frac{1+e^{sin(x)}+1+e^{-sin(x)}}{(1+e^{-sin(x)})(1+e^{sin(x)})}-1$$
$$=\frac{2+e^{sin(x)}+e^{-sin(x)}}{1+e^{sin(x)}+e^{-sin(x)}+1}-1$$
$$=1-1=0$$
Therefore, g(x) is odd. Since g is also 2π-periodic, $$\int_{0}^{2\pi}g(x+\pi) dx=\int_{-\pi}^{\pi}g(u) du=0$$.
Because $$f(x)=g(x+\pi)+\frac{1}{2}$$,
$$\int_{0}^{2\pi}f(x) dx$$ simply becomes $$\int_{0}^{2\pi}\frac{1}{2} dx$$, which is obviously pi.
7. Nov 2, 2007
### PowerIso
I made it into a power series and got the value to be accurate to .0000001 and it does look like pi. :)
8. Nov 2, 2007
### robert Ihnot
What happens here is that the integral is equal to: $$\int_{0}^{\pi} \frac{dx}{1+e^{sin(x)}}+$$ $$\int_{0}^{\pi} \frac{dx}{1+e^{-sin(x)}}$$
The second integral then becomes: $$\int_{0}^{\pi}\frac{e^{sin(x)}}{1+e^{sin(x)}}dx$$
Thus adding the integrals reduces to $$\int_{0}^{\pi}dx$$ as Doom of Doom has already figured out.
Last edited: Nov 2, 2007
9. Nov 3, 2007
### ice109
holy crap so many different ways to do one integral. i like doom of dooms the best
Are there true 360 degree cameras?
I'm looking for a true 360 degree camera, so that's both vertical and horizontal 360 degrees. I've seen some clip-ons for the iPhone, but am really not wanting to buy an iPhone 4S for this.
What I would like to see: - decent resolution: at least 1920x1080. - and 30fps
I can't settle for an alternative that's not vertically 360 degrees (or at least much more than 180 degrees).
I've seen the ball camera, which isn't out yet, but would be really well suited for my case, given it does video as well.
And here comes another 360/180 product: RICOH THETA - spherical panoramic photos, short videos, wi-fi control
I can't settle for an alternative that's not vertically 360 degrees (or at least much more than 180 degrees).
A camera that shoots in every possible direction is said to have a field of view of 360 (horizontal) x 180 (vertical) degrees. Having more than that means you will be capturing some or all of the scene twice. Consider an imaginary arc that spans 180 degrees, from the top to the bottom. Now rotate this arc 360 degrees horizontally and you have covered the whole sphere.
What I would like to see: - decent resolution: at least 1920x1080. - and 30fps
For a 360 x 180 degree panorama the width will always be twice the height. Resolutions that make sense are 1920x960 or 2160x1080. You can't have exactly 1920x1080 unless you stretch, squeeze, crop or letterbox.
I've seen the ball camera, which isn't out yet, but would be really well suited for my case, given it does video as well.
If you refer to this product, then I don't see any mention that it records video. All it seems to do is take a single panorama when it reaches the highest altitude.
Another similar product is the Tamaggo 360-imager. It isn't out yet either, and while it captures 360 degrees horizontally the vertical range is less than 180 degrees. And it doesn't do video.
Yet another panoramic camera, the DIY Streetview Camera System, is a much better (and expensive) system than the other ones. Based on the example images on their site it shoots much better quality panoramas. It cannot do video, unfortunately. The fastest rate it can shoot at is one panorama every 3 seconds.
I think with time and patience one could build a DIY 360x180 video recording system. You would mount a bunch of small video cameras (maybe video enabled point & shoots or GoPros) to cover every possible direction with some amount of overlap. You want cameras that can be set to manual mode, since you want to get a consistent look from all of them. For syncing the cameras you could build some sort of controller that signals the cameras to start recording all at the same time, or else just start them manually and once all the cameras are running clap to get an audio cue that can help you sync the videos during post-processing.
To process the multiple video streams into a panoramic movie you first need to calibrate. For this you would take one set of images and build a panorama manually using, say, Hugin or Panorama Tools. Once you have a panorama project file that contains all the stitching parameters you can take the videos, break them into sequences of individual images using ffmpeg, run each set of images through the calibrated Panorama Tools project, and finally assemble a new movie with the stitched images using ffmpeg again.
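A rough sketch of that workflow as shell commands (the file names, frame rate, frame count, and the Hugin project file `rig.pto` are all hypothetical placeholders; this is an untested outline of the approach, not a ready-made script):

```shell
# 1. Break each synced camera's footage into numbered frames (30 fps).
ffmpeg -i cam1.mp4 -r 30 cam1/frame_%05d.jpg
ffmpeg -i cam2.mp4 -r 30 cam2/frame_%05d.jpg

# 2. Stitch each frame index with the calibrated Hugin/Panorama Tools
#    project (rig.pto), substituting that frame's images as input.
for i in $(seq -f "%05g" 1 900); do
  nona -o "pano/frame_$i" -m JPEG rig.pto "cam1/frame_$i.jpg" "cam2/frame_$i.jpg"
done

# 3. Reassemble the stitched frames into a panoramic movie.
ffmpeg -r 30 -i pano/frame_%05d.jpg -c:v libx264 pano.mp4
```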
Let me know if you make one of these ;-)
• Thanks for the sound advice. I've been reconsidering given the limited availability and thought a security cam might cut it for high resolution photos (stitching while panning). Do you happen to know if any work was done with this, so I don't do double work programming it myself? Is this a crazy idea? – RobotRock Aug 11 '12 at 20:59
• Oh, sorry, I read over the part where you included the tools. Thanks. – RobotRock Aug 11 '12 at 22:20
• @Photographer1: if you can live with less than 360x180 degrees, then two, three or four point & shoots on a bracket is a better solution than a security camera, in my opinion. With the help of Hugin and ffmpeg I think you can do it. – Miguel Aug 12 '12 at 2:54
• You don't need to capture some or all of the scene twice to exceed 360°x180°. The Nikon 8mm/2.8 circular fisheye, for instance, has a 210° field of view. If you point the camera directly upwards, it will capture a 360° panorama that not only covers the 180° from horizon to horizon, but extends 15° below the horizon line as well. Greater coverage (a larger angle of view) simply reduces the blind spot. There is an upper limit for a single exposure, of course (the camera cannot see though itself, no matter what the lens looks like), but 180° is not it. – user2719 Aug 12 '12 at 15:26
Yes, there are a number of true 360 degree cameras that can do 1080 HD/30fps out on the market at the moment.
With the advent of 360x180 content on Youtube and Facebook, there's been an explosion of 360 cameras (video and still) of late. Many of these are action cameras. Here are the cameras in the $350-$500 price range that are listed in this June 2016 MacWorld article:
• Ricoh Theta S
• 360fly 4K
• Giroptic 360cam
• VSN Mobil V.360-degree Action Cam
• ALLie Camera
• Kodak PixPro SP360 4K Action Cam
• LG 360 Cam
• Nikon KeyMission 360
• Samsung Galaxy Gear 360
There are a number of fisheye lenses that capture 180 degrees. You would have to, of course, rotate the camera and shoot at least two and probably 4 shots and then stitch them together.
There are commercial panoramic heads that will rotate the head for you, or you could hack up something with an Arduino, a servo, etc.
I think, IMHO, that it's pretty crazy and I don't think the image will be as cool as you hope. But hey, this is art. Go for it.
Remember, with the really wide fisheyes, you end up with your feet or the tripod in the image.
I found a couple of alternatives by searching google with 360 camera.
http://www.360.tv/
http://www.bublcam.com/
Maybe, Ricoh Theta S matches your requirement. It has all the features you asked for which includes - HD recording (up to 25 mins), 30fps, Wi-Fi capability.
Ricoh Theta S is the upgraded version of Ricoh Theta which was mentioned by szulat.
• could you please add some of the specific features that this camera offers to enhance your answer. – damned truths Dec 2 '15 at 22:50
• ...and how this adds more information than szulat's answer. – inkista Dec 3 '15 at 4:18
If you have a professional budget, you should look at the Lytro immerge
• Could you elaborate on that link. What is good about this camera, etc.? Also this camera is not available to everyone at the moment. – damned truths Dec 6 '15 at 3:57
Just today on the hugin-ptx mailing list I saw someone promoting the http://www.geonaute360.com/, maybe that's what you intend. Not full 360x180, but very close to it.
There are setups available using multiple GoPro cameras. eg 6 cameras on the outside of a cube, to give 360° coverage.
GoPro make the Omni system. This uses 6 GoPro Hero4 Black cameras on a cube. All of the cameras are synchronised, so they will all record a picture at the same time, and they can all be controlled with a single remote. But it is very expensive: $5000 for the full system, including the rig, 6 cameras, software, and accessories, or $1500 just for the rig.
Or there are a variety of cheaper rigs for mounting multiple GoPro cameras. Some are cubes, or circles, or various other shapes, depending on what coverage you want. Some use up to 10 cameras, for even higher resolution. Though most of these cheaper rigs are not synchronised, so it would be more difficult to take a photo with all of them at once. Video footage could be synchronised with software afterwards, but it would be more work.
Tricki
## Divide and conquer
### Quick description
When a mathematical object exhibits many distinct features, each of which makes it more difficult to control that object effectively, try decomposing the object into better-behaved (and non-interacting) components, each of which is dominated by only one of the features. Then control each of the components separately, and "sum up" to control the original object.
Note that in many cases, the components may be "messier" than the original object (because of the presence of cutoffs, mollifiers, truncations, etc.). Nevertheless, these messier components may be easier to control, especially if one is only seeking upper bounds or approximations rather than exact computations, particularly if one is prepared to lose multiplicative constants in the final bound.
### Example 1
The divide-and-conquer strategy is particularly good for estimating "linear" expressions, such as an integral over a domain. One can divide up the domain into several components where the integrand is exhibiting different behavior, and bound the contribution of each component separately.
For instance, suppose one wants to compute the convolution integral for some fixed non-zero and some exponents . The integrand here exhibits three distinct modes of "bad" behavior: a singularity as , a singularity as , and a slow decay as . Rather than deal with all these modes at once, one can decompose the domain into three regions where one only sees one of these modes at a time. There are a number of ways to do this, but here is one way: decompose into
• I. The region where ;
• II. The region where ;
• III. The region where .
In region I, the dominant mode of behavior is the singularity as . The term is comparable in magnitude here to the quantity , which is independent of and can thus be pulled out of the integral (here we are using the trick "bound the integrand by something simpler"), leaving us with the (tractable) task of computing the integral .
In region II, the dominant mode of behavior is the singularity as . The term is comparable in magnitude here to the quantity , and can be pulled out of the integral also, leaving one with the tractable integral .
In region III, the dominant mode of behavior is the decay as . Here, and are comparable in magnitude to each other, so the integral can be simplified (up to multiplicative constants) as . The additional restriction is not significantly helping us here (this is intuitively obvious after drawing a picture), and we can estimate this contribution by the tractable integral .
(Should this example be worked out more fully?)
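The formulas in this example did not survive extraction, so here is a self-contained Python illustration of the same strategy on an integral of our own choosing (an assumed stand-in, not the original example): I = ∫_0^∞ dx / (√x (1+x)) has a singularity as x → 0 and slow decay as x → ∞. Splitting at x = 1 and bounding each region by its dominant behaviour (x^(-1/2) on the left, x^(-3/2) on the right) gives an upper bound of 4, a constant factor above the true value π.

```python
from math import pi

# Region I, x in (0, 1]: the singularity dominates; the integrand is
#   at most x**-0.5, whose integral over (0, 1] is 2.
# Region II, x in [1, inf): the decay dominates; the integrand is at
#   most x**-1.5, whose integral over [1, inf) is 2.
bound = 2.0 + 2.0

# Reference value by the midpoint rule: substituting x = t**2 on
# region I and x = 1/t**2 on region II turns each piece into the
# smooth integral of 2 / (1 + t**2) over (0, 1], i.e. pi / 2.
n = 100_000
h = 1.0 / n
piece = h * sum(2.0 / (1.0 + ((k + 0.5) * h) ** 2) for k in range(n))
total = 2.0 * piece

print(total, pi, bound)  # total is close to pi and below the bound of 4
```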
### Example 2
The Divide and Conquer trick has been described in the context of Estimating Integrals, but of course it can be equally useful in Estimating Sums. Let us exhibit this with a simple example. Suppose we need to estimate the series
where, say, . Remembering that and one expects the sum to be something (a bit) worse than and (a lot) better than close to the 'dangerous' endpoint . Thus a good guess would be for some . We will try to prove such an estimate by using the Divide and Conquer trick in order to 'decouple' the two terms and , in the sum.
In order to decide where to break the sum, the following observation is quite important. Whenever we have that . This means that in the region we can just use the bound without losing anything but a multiplicative constant. Thus we write
For the first term in the sum above we have that
where we have also used the Bounding the sum by an integral trick. Observe that this gives automatically a lower bound for our sum , the latter being a sum of positive terms. We expect to be the main term of the sum. In order to estimate the second term we can use a dyadic decomposition and write
For the inner sum we use the Base times height trick where the base is the number of terms in the sum, and the height is essentially . Hence we get
Of course the first term dominates when so
Observe that the same calculation can be carried out by just writing out the power series of and then using the Square and rearrange trick to reach the same conclusion, and in fact in a much more elegant way. However, this requires that one already knows the result and just wants to prove it. A second and more important advantage of the Divide and Conquer trick is that it is much more flexible. Thus one could, for example, apply the same steps in order to estimate a series of the form
for some arbitrary and .
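The series and exponents above were likewise lost in extraction, so as an assumed stand-in take S(x) = Σ_{n≥1} x^n / √n for 0 < x < 1. Splitting the sum at n ≈ 1/(1−x) as described predicts S(x) to grow like (1−x)^(−1/2) as x → 1, which a quick Python check (our sketch, not from the original page) confirms:

```python
from math import sqrt

def S(x, terms=50_000):
    # Partial sum of sum_{n>=1} x**n / sqrt(n).  For x <= 0.999 the
    # terms beyond n = 50,000 are below 1e-20 and can be ignored.
    total, power = 0.0, 1.0
    for n in range(1, terms + 1):
        power *= x
        total += power / sqrt(n)
    return total

for x in (0.9, 0.99, 0.999):
    # If S(x) grows like (1-x)**-0.5, this ratio stays bounded
    # (it approaches sqrt(pi) ~ 1.77 as x -> 1).
    print(x, S(x) * sqrt(1.0 - x))
```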
### General discussion
A particularly useful special case of the divide-and-conquer strategy is dyadic decomposition.
(Need some non-integration examples of divide-and-conquer strategies, e.g. in combinatorics. Suggestions?)
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)
$\text{slope: } 2 ,\\\\\text{$y$-intercept: } -\dfrac{3}{2}$
Using the properties of equality, the given equation, $12x-6y=9$, is equivalent to \begin{array}{l} -6y=-12x+9 \\\\ y=\dfrac{-12}{-6}x+\dfrac{9}{-6} \\\\ y=2x-\dfrac{3}{2} .\end{array} Using $y=mx+b$ where $m$ is the slope and $b$ is the $y$-intercept, then the equation above has the following characteristics: \begin{array}{l} \text{slope: } 2 ,\\\\\text{$y$-intercept: } -\dfrac{3}{2} .\end{array}
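As a quick verification of the rearrangement (a Python sketch using exact rational arithmetic; not part of the textbook solution):

```python
from fractions import Fraction

# 12x - 6y = 9  =>  y = (-12/-6) x + (9/-6) = 2x - 3/2
m = Fraction(-12, -6)
b = Fraction(9, -6)
print(m, b)  # 2 -3/2

# Spot-check: a point on y = m*x + b satisfies the original equation.
x = Fraction(5)
y = m * x + b
assert 12 * x - 6 * y == 9
```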
# How much does performance differ between people?
post by Max_Daniel, Benjamin_Todd · 2021-03-25T22:56:32.660Z · EA · GW · 74 comments
## Contents
Acknowledgments
Endnotes
None
by Max Daniel & Benjamin Todd
[ETA: See also this summary of our findings + potential lessons by Ben for the 80k blog.]
Some people seem to achieve orders of magnitudes more than others in the same job. For instance, among companies funded by Y Combinator the top 0.5% account for more than ⅔ of the total market value; and among successful bestseller authors, the top 1% stay on the New York Times bestseller list more than 25 times longer than the median author in that group.
This is a striking and often unappreciated fact, but raises many questions. How many jobs have these huge differences in achievements? More importantly, why can achievements differ so much, and can we identify future top performers in advance? Are some people much more talented? Have they spent more time practicing key skills? Did they have more supportive environments, or start with more resources? Or did the top performers just get lucky?
More precisely, when recruiting, for instance, we’d want to know the following: when predicting the future performance of different people in a given job, what does the distribution of predicted (‘ex-ante’) performance look like?
This is an important question for EA community building and hiring. For instance, if it’s possible to identify people who will be able to have a particularly large positive impact on the world ahead of time, we’d likely want to take a more targeted approach to outreach.
More concretely, we may be interested in two different ways in which we could encounter large performance differences:
1. If we look at a random person, by how much should we expect their performance to differ from the average?
2. What share of total output should we expect to come from the small fraction of people we’re most optimistic about (say, the top 1% or top 0.1%) – that is, how heavy-tailed is the distribution of ex-ante performance?
(See this appendix for how these two notions differ from each other.)
Depending on the decision we’re facing we might be more interested in one or the other. Here we mostly focused on the second question, i.e., on how heavy the tails are.
This post contains our findings from a shallow literature review and theoretical arguments. Max was the lead author, building on some initial work by Ben, who also provided several rounds of comments.
You can see a short summary of our findings below.
We expect this post to be useful for:
• (Primarily:) Junior EA researchers who want to do further research in this area. See in particular the section on Further research.
• (Secondarily:) EA decision-makers who want to get a rough sense of what we do and don’t know about predicting performance. See in particular this summary and the bolded parts in our section on Findings.
• We weren’t maximally diligent with double-checking our spreadsheets etc.; if you wanted to rely heavily on a specific number we give, you might want to do additional vetting.
To determine the distribution of predicted performance, we proceed in two steps:
1. We start with how ex-post performance is distributed. That is, how much did the performance of different people vary when we look back at completed tasks? On these questions, we’ll review empirical evidence on both typical jobs and expert performance (e.g. research).
2. Then we ask how ex-ante performance is distributed. That is, when we employ our best methods to predict future performance by different people, how will these predictions vary? On these questions, we review empirical evidence on measurable factors correlating with performance as well as the implications of theoretical considerations on which kinds of processes will generate different types of distributions.
Here we adopt a very loose conception of performance that includes both short-term (e.g. sales made on one day) and long-term achievements (e.g. citations over a whole career). We also allow for performance metrics to be influenced by things beyond the performer’s control.
Our overall bottom lines are:
• Ex-post performance appears ‘heavy-tailed’ in many relevant domains, but with very large differences in how heavy-tailed: the top 1% account for between 4% to over 80% of the total. For instance, we find ‘heavy-tailed’ distributions (e.g. log-normal, power law) of scientific citations, startup valuations, income, and media sales. By contrast, a large meta-analysis reports ‘thin-tailed’ (Gaussian) distributions for ex-post performance in less complex jobs such as cook or mail carrier [1]: the top 1% account for 3-3.7% of the total. These figures illustrate that the difference between ‘thin-tailed’ and ‘heavy-tailed’ distributions can be modest in the range that matters in practice, while differences between ‘heavy-tailed’ distributions can be massive. (More.)
• Ex-ante performance is heavy-tailed in at least one relevant domain: science. More precisely, future citations as well as awards (e.g. Nobel Prize) are predicted by past citations in a range of disciplines, and in mathematics by scores at the International Maths Olympiad. (More.)
• More broadly, there are known, measurable correlates of performance in many domains (e.g. general mental ability). Several of them appear to remain valid in the tails. (More.)
• However, these correlations by itself don’t tell us much about the shape of the ex-ante performance distribution: in particular, they would be consistent with either thin-tailed or heavy-tailed ex-ante performance. (More.)
• Uncertainty should move us toward acting as if ex-ante performance was heavy-tailed – because if you have some credence in it being heavy-tailed, it’s heavy-tailed in expectation – but not all the way, and less so the smaller our credence in heavy-tails. (More.)
• To infer the shape of the ex-ante performance distribution, it would be more useful to have a mechanistic understanding of the process generating performance, but such fine-grained causal theories of performance are rarely available. (More.)
• Nevertheless, our best guess is that moderately to extremely heavy-tailed ex-ante performance is widespread at least for ‘complex’ and ‘scaleable’ tasks. (I.e. ones where the performance metric can in practice range over many orders of magnitude and isn’t artificially truncated.) This is based on our best guess at the causal processes that generate performance combined with the empirical data we’ve seen. However, we think this is debatable rather than conclusively established by the literature we reviewed. (More.)
• There are several opportunities for valuable further research. (More.)
Overall, doing this investigation probably made us a little less confident that highly heavy-tailed distributions of ex-ante performance are widespread, and think that common arguments for it are often too quick. That said, we still think there are often large differences in performance (e.g. some software engineers have 10 times the output of others), these are somewhat predictable, and it's often reasonable to act on the assumption that the ex-ante distribution is heavy-tailed in many relevant domains (broadly, when dealing with something like 'expert' performance as opposed to 'typical' jobs).
Some advice for how to work with these concepts in practice:
• In practice, don’t treat ‘heavy-tailed’ as a binary property. Instead, ask how heavy the tails of some quantity of interest are, for instance by identifying the frequency of outliers you’re interested in (e.g. top 1%, top 0.1%, …) and comparing them to the median or looking at their share of the total. [2]
• Carefully choose the underlying population and the metric for performance, in a way that’s tailored to the purpose of your analysis. In particular, be mindful of whether you’re looking at the full distribution or some tail (e.g. wealth of all citizens vs. wealth of billionaires).
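To make the "share of the total" framing concrete, here is a small simulation (a numpy sketch; the distributions and parameters are illustrative choices of ours, not estimates from any study cited in the post):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

def top_share(x, frac=0.01):
    # Share of the total output held by the top `frac` of the sample.
    x = np.sort(x)
    k = int(len(x) * frac)
    return x[-k:].sum() / x.sum()

# Thin-tailed stand-in: performance ~ Normal(100, 15).
thin = rng.normal(100, 15, n)
# Heavy-tailed stand-in: performance ~ log-normal with sigma = 1.5.
heavy = rng.lognormal(0, 1.5, n)

print(f"top 1% share, normal:     {top_share(thin):.1%}")
print(f"top 1% share, log-normal: {top_share(heavy):.1%}")
```

With these parameters the top 1% hold only a percent or two of the total under the normal, but on the order of a fifth of the total under the log-normal, matching the thin- vs heavy-tailed contrast described above.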
In an appendix, we provide more detail on some background considerations:
• The conceptual difference between ‘high variance’ and ‘heavy tails’: Neither property implies the other. Both mean that unusually good opportunities are much better than typical ones. However, only heavy tails imply that outliers account for a large share of the total, and that naive extrapolation underestimates the size of future outliers. (More.)
• We can often distinguish heavy-tailed from light-tailed data by eyeballing (e.g. in a log-log plot), but it’s hard to empirically distinguish different heavy-tailed distributions from one another (e.g. log-normal vs. power laws). When extrapolating beyond the range of observed data, we advise to proceed with caution and to not take the specific distributions reported in papers at face value. (More.)
• There is a small number of papers in industrial-organizational psychology on the specific question whether performance in typical jobs is normally distributed or heavy-tailed. However, we don’t give much weight to these papers because their broad high-level conclusion (“it depends”) is obvious but we have doubts about the statistical methods behind their more specific claims. (More.)
• We also quote (in more detail than in the main text) the results from a meta-analysis of predictors of salary, promotions, and career satisfaction. (More.)
• We provide a technical discussion of how our metrics for heavy-tailedness are affected by the ‘cutoff’ value at which the tail starts. (More.)
Finally, we provide a glossary of the key terms we use, such as performance or heavy-tailed.
For more details, see our full write-up.
Acknowledgments
We'd like to thank Owen Cotton-Barratt and Denise Melchin for helpful comments on earlier drafts of our write-up, as well as Aaron Gertler for advice on how to best post this piece on the Forum.
Most of Max's work on this project was done while he was part of the Research Scholars Programme (RSP) at FHI, and he's grateful to the RSP management and FHI operations teams for keeping FHI/RSP running, and to Hamish Hobbs and Nora Ammann for support with productivity and accountability.
We're also grateful to Balsa Delibasic for compiling and formatting the reference list.
Endnotes
[1] For performance in “high-complexity” jobs such as attorney or physician, that meta-analysis (Hunter et al. 1990) reports a coefficient of variation that’s about 1.5x as large as for ‘medium-complexity' jobs. Unfortunately, we can’t calculate how heavy-tailed the performance distribution for high-complexity jobs is: for this we would need to stipulate a particular type of distribution (e.g. normal, log-normal), but Hunter et al. only report that the distribution does not appear to be normal (unlike for the low- and medium-complexity cases).
[2] Similarly, don’t treat ‘heavy-tailed’ as an asymptotic property – i.e. one that by definition need only hold for values above some arbitrarily large value. Instead, consider the range of values that matter in practice. For instance, a distribution that exhibits heavy tails only for values greater than 10^100 would be heavy-tailed in the asymptotic sense. But for e.g. income in USD values like 10^100 would never show up in practice – if your distribution is supposed to correspond to income in USD you’d only be interested in a much smaller range, say up to 10^10. Note that this advice is in contrast to the standard definition of ‘heavy-tailed’ in mathematical contexts, where it usually is defined as an asymptotic property. Relatedly, a distribution that only takes values in some finite range – e.g. between 0 and 10 billion – is never heavy-tailed in the mathematical-asymptotic sense, but it may well be in the “practical” sense (where you anyway cannot empirically distinguish between a distribution that can take arbitrarily large values and one that is “cut off” beyond some very large maximum).
comment by Khorton · 2021-03-26T00:18:44.316Z · EA(p) · GW(p)
"the top 1% stay on the New York Times bestseller list more than 25 times longer than the median author in that group."
FWIW my intuition is not that this author is 25x more talented, but rather that the author and their marketing team are a little bit more talented in a winner-takes-most market.
I wanted to point this out because I regularly see numbers like this used to justify claims that individuals vary significantly in talent or productivity. It's important to keep the business model in mind if you're claiming talent based on sales!
(Research citations are also a winner-takes-most market; people end up citing the same paper even if it's not much better than the next best paper.)
Replies from: Max_Daniel, Stefan_Schubert, reallyeli
comment by Max_Daniel · 2021-03-26T08:32:58.600Z · EA(p) · GW(p)
I fully agree with this, and think we essentially say as much in the post/document. This is e.g. why we've raised different explanations in the 2nd paragraph, immediately after referring to the phenomenon to be explained.
Curious if you think we could have done a better job at clarifying that we don't think differences in outcomes can only be explained by differences in talent?
Replies from: esantorella, MichaelPlant, Khorton
comment by esantorella · 2021-03-27T14:01:49.337Z · EA(p) · GW(p)
Let me try a different framing and see if that helps. Economic factors mediate how individual task performance translates into firm success. In industries with winner-takes-most effects, small differences in task performance cause huge differences in payoffs. "The Economics of Superstars" is a classic 1981 paper on this. But many industries aren't like that.
Knowing your industry tells you how important it is to hire the right people. If you're hiring someone to write an economics textbook (an example from the "Superstars" paper), you'd better hire the best textbook-writer you can find, because almost no one buys the tenth-best economics textbook. But if you're running a local landscaping company, you don't need the world's best landscaper. And if your industry has incumbent "superstar" firms protected by first-mover advantages, economies of scale, or network effects, it may not matter much who you hire.
So in what kind of "industry" are the EA organizations you want to help with hiring? Is there some factor that multiplies or negates small individual differences in task performance?
Replies from: Benjamin_Todd
comment by Benjamin_Todd · 2021-03-30T19:46:54.982Z · EA(p) · GW(p)
I think that's a good summary, but it's not only winner-takes-all effects that generate heavy-tailed outcomes.
You can get heavy tailed outcomes if performance is the product of two normally distributed factors (e.g. intelligence x effort).
It can also arise from the other factors that Max lists in another comment (e.g. scalable outputs, complex production).
Luck can also produce heavy tailed outcomes if it amplifies outcomes or is itself heavy-tailed.
Replies from: esantorella
comment by esantorella · 2021-04-02T11:41:20.448Z · EA(p) · GW(p)
My point is more "context matters," even if you're talking about a specific skill like programming, and that the contexts that generated the examples in this post may be meaningfully different from the contexts that EA organizations are working in.
I don't necessarily disagree with anything you and Max have written; it's just a difference of emphasis, especially when it comes to advising people who are making hiring decisions.
comment by MichaelPlant · 2021-03-26T12:42:06.135Z · EA(p) · GW(p)
I was going to raise a similar comment to what others have said here. I hope this adds something.
I think we need to distinguish quality and quantity of 'output' from 'success' (the outcome of their output). I am deliberately not using 'performance' as it's unclear, in common language, which one of the two it refers to. Various outputs are sometimes very reproducible - anyone can listen to a music track, or read an academic paper. There are often huge rewards to being the best vs second best - eg winning in sports. And sometimes success generates further success (the 'Matthew effect') - more people want to work with you, etc. Hence, I don't find it all weird to think that small differences in outputs, as measured on some cardinal scale, sometimes generate huge differences in outcomes.
I'm not sure exactly what follows from this. I'm a bit worried you're concentrated on the wrong metric - success - when it's outputs that are more important. Can you explain why you focus on outcomes?
Let's say you're thinking about funding research. How much does it matter to fund the best person? I mean, they will get most of the credit, but if you fund the less-than-best, that person's work is probably not much worse and ends up being used by the best person anyway. If the best person gets 1,000 more citations, should you be prepared to spend 1,000 more to fund their work? Not obviously.
I'm suspicious you can do a good job of predicting ex ante outcomes. After all, that's what VCs would want to do and they have enormous resources. Their strategy is basically to pick as many plausible winners as they can fund.
It might be interesting to investigate differences in quality and quantity of outputs separately. Intuitively, it seems the best people do produce lots more work than the good people, but it's less obvious the quality of the best people is much higher than of the good. I recognise all these terms are vague.
Replies from: Benjamin_Todd, Max_Daniel, Max_Daniel, Max_Daniel
comment by Benjamin_Todd · 2021-03-28T19:40:29.451Z · EA(p) · GW(p)
On your main point, this was the kind of thing we were trying to make clearer, so it's disappointing that hasn't come through.
Just on the particular VC example:
I'm suspicious you can do a good job of predicting ex ante outcomes. After all, that's what VCs would want to do and they have enormous resources. Their strategy is basically to pick as many plausible winners as they can fund.
Most VCs only pick from the top 1-5% of startups. E.g. YC's acceptance rate is 1%, and very few startups they reject make it to series A. More data on VC acceptance rates here: https://80000hours.org/2014/06/the-payoff-and-probability-of-obtaining-venture-capital/
So, I think that while it's mostly luck once you get down to the top 1-5%, I think there's a lot of predictors before that.
Also see more on predictors of startup performance here: https://80000hours.org/2012/02/entrepreneurship-a-game-of-poker-not-roulette/
Replies from: John_Maxwell_IV, MichaelPlant
comment by John_Maxwell (John_Maxwell_IV) · 2021-03-30T22:40:40.384Z · EA(p) · GW(p)
YC having a low acceptance rate could mean they are highly confident in their ability to predict ex ante outcomes. It could also mean that they get a lot of unserious applications. Essays such as this one by Paul Graham bemoaning the difficulty of predicting ex ante outcomes make me think it is more the latter. ("it's mostly luck once you get down to the top 1-5%" makes it sound to me like ultra-successful startups should have elite founders, but my take on Graham's essay is that ultra-successful startups tend to be unusual, often in a way that makes them look non-elite according to traditional metrics -- I tend to suspect this is true of exceptionally innovative people more generally)
comment by MichaelPlant · 2021-03-29T12:45:30.167Z · EA(p) · GW(p)
Hello Ben.
I'm not trying to be obtuse, it wasn't super clear to me on a quick-ish skim; maybe if I'd paid more attention I'd have clocked it.
Yup, I was too hasty on VCs. It seems like they are pretty confident they know what the top >5% are, but not that they can say anything more precise than that. (Although I wonder what evidence indicates they can reliably tell the top 5% from those below, rather than they just think they can).
Replies from: Ben_West, Max_Daniel
comment by Ben_West · 2021-03-31T21:30:45.561Z · EA(p) · GW(p)
(Although I wonder what evidence indicates they can reliably tell the top 5% from those below, rather than they just think they can).
The Canadian inventors assistance program provides inventors, for a nominal fee, with a rating of how good their invention is. A large fraction of the people who get a bad rating try to make a company anyway, so we can judge the accuracy of their evaluations.
55% of the inventions which they give the highest rating to achieve commercial success, compared to 0% for the lowest rating.
Replies from: MichaelPlant, Paulindrome, Linch, Max_Daniel
comment by MichaelPlant · 2021-04-01T08:23:16.030Z · EA(p) · GW(p)
Ah, this is great. Evidence that the selectors could tell the top 2% from the rest, but 2%-20% was much of a muchness. Shame that it doesn't give any more information on 'commercial success'.
comment by Paulindrome · 2021-04-01T19:54:17.227Z · EA(p) · GW(p)
This is amazing data, and not what I would have expected - I've just had my mind changed on the predictability of invention success. Thanks!
comment by Linch · 2021-04-01T15:37:38.325Z · EA(p) · GW(p)
This is really cool, thank you!
comment by Max_Daniel · 2021-04-01T07:45:22.700Z · EA(p) · GW(p)
That's very interesting, thanks for sharing!
comment by Max_Daniel · 2021-03-29T15:09:44.807Z · EA(p) · GW(p)
I'm not trying to be obtuse, it wasn't super clear to me on a quick-ish skim; maybe if I'd paid more attention I'd have clocked it.
FWIW I think it's the authors' job to anticipate how their audience is going to engage with their writing, where they're coming from, etc. You were not the only one who reacted by pushing back against our framing, as is evident e.g. from Khorton's much upvoted comment [EA(p) · GW(p)].
So no matter what we tried to convey, and what info is in the post or document if one reads closely enough, I think this primarily means that I (as main author of the wording in the post) could have done a better job, not that you or anyone else is being obtuse.
comment by Max_Daniel · 2021-03-26T13:41:30.713Z · EA(p) · GW(p)
I'm suspicious you can do a good job of predicting ex ante outcomes. After all, that's what VCs would want to do and they have enormous resources. Their strategy is basically to pick as many plausible winners as they can fund.
I agree that looking at e.g. VC practices is relevant evidence. However, it seems to me that if VCs thought they couldn't predict anything, they would allocate their capital by a uniform lottery among all applicants, or something like that. I'm not aware of a VC adopting such a strategy (though possible I just haven't heard of it); to the extent that they can distinguish "plausible" from "implausible" winners, this does suggest some amount of ex-ante predictability. Similarly, my vague impression is that VCs and other investors often specialize by domain/sector, which suggests they think they can utilize their knowledge and network when making decisions ex ante.
Sure, predictability may be "low" in some sense, but I'm not sure we're saying anything that would commit us to denying this.
Replies from: MichaelPlant
comment by MichaelPlant · 2021-03-26T14:49:41.380Z · EA(p) · GW(p)
Yeah, I'd be interested to know if VCs were better than chance. Not quite sure how you would assess this, but probably someone's tried.
But here's where it seems relevant. If you want to pick the top 1% of people, as they provide so much of the value, but you can only pick the top 10%, then your efforts to pick are much less cost-effective and you would likely want to rethink how you did it.
Replies from: Max_Daniel, Max_Daniel
comment by Max_Daniel · 2021-03-26T17:17:05.130Z · EA(p) · GW(p)
I think it's plausible that VCs aren't better than chance when choosing between a suitably restricted "population", i.e. investment opportunities that have passed some bar of "plausibility".
I don't think it's plausible that they are no better than chance simpliciter. In that case I would expect to see a lot of VCs who cut costs by investing literally zero time into assessing investment opportunities and literally fund on a first-come first-serve or lottery basis.
comment by Max_Daniel · 2021-03-26T17:21:44.825Z · EA(p) · GW(p)
And yes, I totally agree that how well we can predict (rather than just the question whether predictability is zero or nonzero) is relevant in practice.
If the ex-post distribution is heavy-tailed, there are a bunch of subtle considerations here I'd love someone to tease out. For example, if you have a prediction method that is very good for the bottom 90% but biased toward 'typical' outcomes, i.e. the median, then you might be better off in expectation allocating by a lottery over the full population (b/c this gets you the mean, which for heavy-tailed distributions will be much higher than the median).
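A quick simulation sketches this point (the log-normal here is an assumed stand-in for a heavy-tailed impact distribution; the parameters are illustrative, not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
# Assumed heavy-tailed distribution of ex-post impact (illustrative parameters)
population = rng.lognormal(mean=0.0, sigma=1.5, size=1_000_000)

# A lottery over the full population captures the population mean;
# a selector biased toward 'typical' performers captures roughly the median.
# For sigma = 1.5 the log-normal mean is exp(1.5**2 / 2) ~ 3.1, vs. a median of 1.0.
lottery_value = population.mean()
median_pick_value = np.median(population)

print(f"lottery (mean): {lottery_value:.2f}")
print(f"median-biased pick: {median_pick_value:.2f}")
```

So under these assumptions the lottery beats the median-biased selector roughly threefold in expectation; the gap grows with how heavy the tail is.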
Replies from: Ben_West
comment by Ben_West · 2021-03-31T21:35:06.359Z · EA(p) · GW(p)
Data from the IAP indicates that they can identify the top few percent of successful inventions with pretty good accuracy. (Where "success" is a binary variable – not sure how they perform if you measure financial returns.)
comment by Max_Daniel · 2021-03-26T13:34:54.757Z · EA(p) · GW(p)
I'm not sure exactly what follows from this. I'm a bit worried you're concentrating on the wrong metric - success - when it's outputs that are more important. Can you explain why you focus on outcomes?
I'm not sure I agree that outputs are more important. I think it depends a lot on the question or decision we're considering, which is why I highlighted a careful choice of metric as one of the key pieces of advice.
So e.g. if our goal is to set performance incentives (e.g. salaries), then it may be best to reward people for things that are under their control. E.g. pay people more if they work longer hours (inputs), or if there are fewer spelling mistakes in their report (cardinal output metric) or whatever. At other times, paying more attention to inputs or outputs rather than outcomes or things beyond the individual performer's control may be justified by considerations around e.g. fairness or equality.
All of these things are of course really important to get right within the EA community as well, whether or not we care about them instrumentally or intrinsically. There are a lot of tricky and messy questions here.
But if we can say anything general, then I think that especially in EA contexts we care more, or more often, about outcomes/success/impact on the world, and less about inputs and outputs, than usual. We want to maximize well-being, and from 'the point of view of the universe' it doesn't ultimately matter if someone is happy because someone else produced more outputs or because the same outputs had greater effects. Nor does it ultimately matter if impact differences are due to differences in talent, resource endowments, motivation, luck, or ...
Another way to see this is that often actors that care more about inputs or outputs do so because they don't internalize all the benefits from outcomes. But if a decision is motivated by impartial altruism, there is a sense in which there are no externalities.
Of course, we need to make all the usual caveats against 'naive consequentialism'. But I do think there is something important in this observation.
Replies from: MichaelPlant
comment by MichaelPlant · 2021-03-26T15:00:13.130Z · EA(p) · GW(p)
I was thinking the emphasis on outputs might be the important part as those are more controllable than outcomes, and so the decision-relevant bit, even though we want to maximise impartial value (outcomes).
I can imagine someone thinking the following way: "we must find and fund the best scientists because they have such outsized outcomes, in terms of citations." But that might be naive if it's really just the top scientist who gets the citations and the work of all the good scientists has a more or less equal contribution to impartial value.
FWIW, it's not clear we're disagreeing!
comment by Max_Daniel · 2021-03-26T13:13:27.966Z · EA(p) · GW(p)
Thanks for this comment!
I'm sympathetic to the point that we're lumping together quite different things under the vague label "performance", perhaps stretching it beyond its common use. That's why I said in bold that we're using a loose notion of performance. But it's possible it would have been better if I had spent more time coming up with a better terminology.
Replies from: MichaelPlant
comment by MichaelPlant · 2021-03-26T14:50:32.416Z · EA(p) · GW(p)
Okay good! Yeah, I would be curious to see how much it would change the analysis to distinguish outputs from outcomes and, further, between different types of outputs.
comment by Khorton · 2021-03-26T09:19:37.810Z · EA(p) · GW(p)
I think the language of a person who "achieves orders of magnitude more" suggests that their output (research, book, etc) is orders of magnitude better, instead of just being more popular. Sometimes more popular is better, but often in EA that's not what we're focused on.
I also believe you're talking about hiring individuals in this piece(?), but most of your examples are about successful teams, which have different qualities to successful individuals.
I thought your example of Math Olympiad scores correlating with Nobel Prize wins was a useful exception to this trend, because those are about an individual and aren't just about popularity.
Replies from: Max_Daniel
comment by Max_Daniel · 2021-03-26T09:41:20.817Z · EA(p) · GW(p)
Thanks for clarifying!
FWIW I think I see the distinction between popularity and other qualities as less clear than you seem to. For instance, I would expect that book sales and startup returns are also affected by how "good" in whatever other sense the book or startup product is. Conversely, I would guess that realistically Nobel Prizes and other scientific awards are also about popularity and not just about the quality of the scientific work by other standards. I'm happy to concede that, in some sense, book sales seem more affected by popularity than Nobel Prizes, but it seems a somewhat important insight to me that neither is "just about popularity" nor "just about achievement/talent/quality/whatever".
It's also not that clear to me whether there is an obviously more adequate standard of overall "goodness" here: how much joy the book brings readers? What literary critics would say about the book? I think the ultimate lesson here is that the choice of metric is really important, and depends a lot on what you want to know or decide, which is why "Carefully choose the underlying population and the metric for performance" is one of our key points of advice. I can see that saying something vague and general like "some people achieve more" and then giving examples of specific metrics pushes against this insight by suggesting that these are the metrics we should generally most care about. FWIW I still feel OK about our wording here since I feel like in an opening paragraph we need to balance nuance/detail and conciseness / getting the reader interested.
As an aside, my vague impression is that it's somewhat controversial to what extent successful teams have different qualities to successful individuals. In some sense this is of course true since there are team properties that don't even make sense for individuals. However, my memory is that for a while there was some more specific work in psychology that was allegedly identifying properties that predicted team success better than the individual abilities of its members, which then largely didn't replicate.
Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2021-03-26T11:09:41.497Z · EA(p) · GW(p)
However, my memory is that for a while there was some more specific work in psychology that was allegedly identifying properties that predicted team success better than the individual abilities of its members, which then largely didn't replicate.
Woolley et al (2010) was an influential paper arguing that individual intelligence doesn't predict collective intelligence well. Here's one paper criticising them. I'm sure there are plenty of other relevant papers (I seem to recall one paper providing positive evidence that individual intelligence predicted group performance fairly well, but can't find it now).
Replies from: Max_Daniel
comment by Max_Daniel · 2021-03-26T11:25:45.745Z · EA(p) · GW(p)
Great, thank you! I do believe work by Woolley was what I had in mind.
comment by Stefan_Schubert · 2021-03-26T00:37:21.350Z · EA(p) · GW(p)
Fwiw, I wrote a post [LW · GW] explaining such dynamics a few years ago.
comment by reallyeli · 2021-03-26T07:03:23.012Z · EA(p) · GW(p)
Agreed. The slight initial edge that drives the eventual enormous success in the winner-takes-most market can also be provided by something other than talent — that is, by something other than people trying to do things and succeeding at what they tried to do. For example, the success of Fifty Shades of Grey seems best explained by luck.
Replies from: Erich_Grunewald
comment by Erich_Grunewald · 2021-03-26T07:44:56.047Z · EA(p) · GW(p)
I was going to comment something to this effect, too. The authors write:
For instance, we find ‘heavy-tailed’ distributions (e.g. log-normal, power law) of scientific citations, startup valuations, income, and media sales. By contrast, a large meta-analysis reports ‘thin-tailed’ (Gaussian) distributions for ex-post performance in less complex jobs such as cook or mail carrier: the top 1% account for 3-3.7% of the total.
But there’s an important difference between these groups – the products involved in the first group are cheaply reproducible (any number of people can read the same papers, invest in the same start-up or read the same articles – I don’t know how to interpret income here) & those in the second group are not (not everyone can use the same cook or mail carrier).
So I propose that the difference there has less to do with the complexity of the jobs & more to do with how reproducible the products involved are.
Replies from: Max_Daniel
comment by Max_Daniel · 2021-03-26T08:39:33.885Z · EA(p) · GW(p)
I think you're right that complexity at the very least isn't the only cause/explanation for these differences.
E.g. Aguinis et al. (2016) find that, based on an analysis of a very large number of productivity data sets, the following properties make a heavy-tailed output distribution more likely:
• Multiplicity of productivity,
• Monopolistic productivity,
• Job autonomy,
• Job complexity,
• No productivity ceiling (I guess your point is a special case of this: if the marginal cost of increasing output becomes too high too soon, there will effectively be a ceiling; but there can also e.g. be ceilings imposed by the output metric we use, such as when a manager gives a productivity rating on a 1-10 scale)
As we explain in the paper, I have some open questions about the statistical approach in that paper. So I currently don't take their analysis to be that much evidence that this is in fact right. However, they also sound right to me just based on priors and based on theoretical considerations (such as the ones in our section on why we expect heavy-tailed ex-ante performance to be widespread).
In the part you quoted, I wrote "less complex jobs" because the data I'm reporting is from a paper that explicitly distinguishes low-, medium-, and high-complexity jobs, and finds that only the first two types of job potentially have a Gaussian output distribution (this is Hunter et al. 1990). [TBC, I understand that the reader won't know this, and I do think my current wording is a bit sloppy/bad/will predictably lead to the valid pushback you made.]
[References in the doc linked in the OP.]
Replies from: Erich_Grunewald, Benjamin_Todd
comment by Erich_Grunewald · 2021-03-26T21:27:28.636Z · EA(p) · GW(p)
Thanks for the clarification & references!
comment by Benjamin_Todd · 2021-03-30T15:30:55.123Z · EA(p) · GW(p)
This is cool.
One theoretical point in favour of complexity is that complex production often looks like an 'o-ring' process, which will create heavy-tailed outcomes.
comment by Linch · 2021-03-26T00:52:22.071Z · EA(p) · GW(p)
Thanks for this. I do think there's a bit of sloppiness in EA discussions about heavy-tailed distributions in general, and the specific question of differences in ex ante predictable job performance in particular. So it's really good to see clearer work/thinking about this.
I have two high-level operationalization concerns here:
1. Whether performance is ex ante predictable seems to be a larger function of our predictive ability than of the world. As an extreme example of what I mean, if you take our world on November 7, 2016 and run high-fidelity simulations 1,000,000 times, I expect 1 million/1 million of those simulations to end up with Donald Trump winning the 2016 US presidential election. Similarly, with perfect predictive ability, I think the correlation between ex ante predicted work performance and ex post actual performance approaches 1 (up to quantum). This may seem like a minor technical point, but I think it's important to be careful of the reasoning here when we ask whether claims are expected to generalize from domains with large and obvious track records and proxies (eg past paper citations to future paper citations), or even domains where the ex ante proxy may well have been defined ex post (Math Olympiad records to research mathematics), to domains of effective altruism where we're interested in something like counterfactual/Shapley impact [EA · GW]*.
2. There are counterfactual credit assignment issues for pretty much everything EA is concerned with, whereas if you're just interested in individual salaries or job performance in academia, a simple proxy like dollars or citations is fine. Suppose Usain Bolt is 0.2 seconds slower at running 100 meters. Does anybody actually think this will result in huge differences in the popularity of sports, or the percentage of economic output attributable to the "run really fast" fraction of the economy, never mind our probability of spreading utopia throughout the stars? But nonetheless Usain Bolt likely makes a lot more money, has a lot more prestige, etc. than the 2nd/3rd fastest runners. Similarly, academics seem to worry constantly about getting "scooped" whereas they rarely worry about scooping others, so a small edge in intelligence or connections or whatever can be leveraged into a huge difference in potential citations, while being basically irrelevant to counterfactual impact. Whereas in EA research it matters a lot whether being "first" means you're 5 years ahead of the next-best candidate or 5 days.
Griping aside, I think this is a great piece and I look forward to perusing it and giving more careful comments in the coming weeks!
*ETA: In contrast, if it's the same variable(s) that we can use to ex ante predict a variety of good outcomes of work performance across domains, then we can be relatively more confident that this will generalize to EA notions. Eg, fundamental general mental ability, integrity, etc.
Replies from: Max_Daniel
comment by Max_Daniel · 2021-03-26T08:53:44.508Z · EA(p) · GW(p)
Thanks for these points!
My super quick take is that 1. definitely sounds right and important to me, and I think it would have been good if we had discussed this more in the doc.
I think 2. points to the super important question (which I think we've mentioned somewhere under Further research) how typical performance/output metrics relate to what we ultimately care about in EA contexts, i.e. positive impact on well-being. At first glance I'd guess that sometimes these metrics 'overstate' heavy-tailedness of EA impact (for e.g. the reasons you mentioned), but sometimes they might also 'understate' them. For instance, the metrics might not 'internalize' all the effects on the world (e.g. 'field building' effects from early-stage efforts), or for some EA situations the 'market' may be even more winner-takes-most than usual (e.g. for some AI alignment efforts it only matters if you can influence DeepMind), or the 'production function' might have higher returns to talent than usual (e.g. perhaps founding a nonprofit or contributing valuable research to preparadigmatic fields is "extra hard" in a way not captured by standard metrics when compared to easier cases).
comment by AGB · 2021-04-02T18:31:40.021Z · EA(p) · GW(p)
Hi Max and Ben, a few related thoughts below. Many of these are mentioned in various places in the doc, so seem to have been understood, but nonetheless have implications for your summary and qualitative commentary, which I sometimes think misses the mark.
• Many distributions are heavy-tailed mathematically, but not in the common use of that term, which I think is closer to 'how concentrated is the thing into the top 0.1%/1%/etc.', and thus 'how important is it I find top performers' or 'how important is it to attract the top performers'. For example, you write the following:
What share of total output should we expect to come from the small fraction of people we’re most optimistic about (say, the top 1% or top 0.1%) – that is, how heavy-tailed is the distribution of ex-ante performance?
• Often, you can't derive this directly from the distribution's mathematical type. In particular, you cannot derive it from whether a distribution is heavy-tailed in the mathematical sense.
• Log-normal distributions are particularly common and are a particular offender here, because they tend to occur whenever lots of independent factors are multiplied together. But here is the approximate* fraction of value that comes from the top 1% in a few different log-normal distributions:
EXP(N(0,0.0001)) -> 1.02%
EXP(N(0,0.001)) -> 1.08%
EXP(N(0,0.01)) -> 1.28%
EXP(N(0,0.1)) -> 2.22%
EXP(N(0,1)) -> 9.5%
• For a real-world example, geometric brownian motion is the most common model of stock prices, and produces a log-normal distribution of prices, but models based on GBM actually produce pretty thin tails in the commonsense use, which are in turn much thinner than the tails in real stock markets, as (in?)famously chronicled in Taleb's Black Swan among others. Since I'm a finance person who came of age right as that book was written, I'm particularly used to thinking of the log-normal distribution as 'the stupidly-thin-tailed one', and have a brief moment of confusion every time it is referred to as 'heavy-tailed'.
• The above, in my opinion, highlights the folly of ever thinking 'well, log-normal distributions are heavy-tailed, and this should be log-normal because things got multiplied together, so the top 1% must be at least a few percent of the overall value'. Log-normal distributions with low variance are practically indistinguishable from normal distributions. In fact, as I understand it many oft-used examples of normal distributions, such as height and other biological properties, are actually believed to follow a log-normal distribution.
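For the curious, here's a sketch of how those top-1% shares can be reproduced (the sampler, seed, and sample size are my own choices, so the values will wobble slightly around the figures quoted above):

```python
import numpy as np

def top_share(samples, top_frac=0.01):
    """Fraction of the total accounted for by the top `top_frac` of samples."""
    samples = np.sort(samples)
    k = max(1, int(len(samples) * top_frac))
    return samples[-k:].sum() / samples.sum()

rng = np.random.default_rng(0)
# Variances matching the EXP(N(0, var)) list above
for var in [0.0001, 0.001, 0.01, 0.1, 1.0]:
    x = rng.lognormal(mean=0.0, sigma=np.sqrt(var), size=1_000_000)
    print(f"EXP(N(0,{var})) -> top 1% share: {top_share(x):.2%}")
```

There's also a closed form for those who prefer it: for EXP(N(0,σ²)) the top-1% share is Φ(σ − Φ⁻¹(0.99)), which recovers the same numbers without simulation.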
***
I'd guess we agree on the above, though if not I'd welcome a correction. But I'll go ahead and flag bits of your summary that look weird to me assuming we agree on the mathematical facts:
By contrast, a large meta-analysis reports ‘thin-tailed’ (Gaussian) distributions for ex-post performance in less complex jobs such as cook or mail carrier [1]: the top 1% account for 3-3.7% of the total.
I haven't read the meta-analysis, but I'd tentatively bet that much like biological properties these jobs actually follow log-normal distributions and they just couldn't tell (and weren't trying to tell) the difference.
These figures illustrate that the difference between ‘thin-tailed’ and ‘heavy-tailed’ distributions can be modest in the range that matters in practice
I agree with the direction of this statement, but it's actually worse than that: depending on the tail of interest "heavy-tailed distributions" can have thinner tails than "thin-tailed distributions"! For example, compare my numbers for the top 1% of various log-normal distributions to the right-hand-side of a standard N(0,1) normal distribution where we cut off negative values (~3.5% in top 1%).
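A quick check of that ~3.5% figure (treating "cut off negative values" as a half-normal, i.e. taking absolute values; the simulation details are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
# Standard normal with negative values folded away (half-normal)
z = np.abs(rng.standard_normal(1_000_000))
z.sort()
# Top 1% of 1,000,000 samples = largest 10,000 values
top_1pct_share = z[-10_000:].sum() / z.sum()
print(f"top 1% share: {top_1pct_share:.2%}")
```

This lands right around the quoted figure, i.e. thicker in the top 1% than several of the "heavy-tailed" log-normals above.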
It's also somewhat common to see comments like this from 80k staff (This from Ben Todd elsewhere in this thread):
You can get heavy tailed outcomes if performance is the product of two normally distributed factors (e.g. intelligence x effort).
You indeed can, but like the log-normal distribution this will tend to have pretty thin tails in the common use of the term. For example, multiplying two N(100,225) distributions together, chosen because this is roughly the distribution of IQ, gets you a distribution where the top 1% account for 1.6% of the total. Looping back to my above thought, I'd also guess that performance on jobs like cook and mail-carrier looks very close to this, and empirically was observed to have similarly thin tails (aptitude x intelligence x effort might in fact be the right framing for these jobs).
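A sketch of that product example with the numbers above (the factor labels are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two independent N(100, 15^2) factors, i.e. variance 225, IQ-like scale
a = rng.normal(100, 15, 1_000_000)  # e.g. 'intelligence' (illustrative label)
b = rng.normal(100, 15, 1_000_000)  # e.g. 'effort' (illustrative label)
perf = a * b                        # performance as a product of two normal factors

perf.sort()
# Share of total performance from the top 1% (largest 10,000 of 1,000,000)
top_1pct_share = perf[-10_000:].sum() / perf.sum()
print(f"top 1% share: {top_1pct_share:.2%}")
```

That comes out close to the 1.6% quoted, i.e. barely above the 1% a perfectly uniform distribution would give.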
***
Ultimately, the recommendation I would give is much the same as the bottom line presented, which I was very happy to see. Indeed, I'm mostly grumbling because I want to discourage anything which treats heavy-tailed as a binary property**, as parts of the summary/commentary tend to do (see above).
Some advice for how to work with these concepts in practice:
• In practice, don’t treat ‘heavy-tailed’ as a binary property. Instead, ask how heavy the tails of some quantity of interest are, for instance by identifying the frequency of outliers you’re interested in (e.g. top 1%, top 0.1%, …) and comparing them to the median or looking at their share of the total. [2]
• Carefully choose the underlying population and the metric for performance, in a way that’s tailored to the purpose of your analysis. In particular, be mindful of whether you’re looking at the full distribution or some tail (e.g. wealth of all citizens vs. wealth of billionaires).
*Approximate because I was lazy and just simulated 10000 values to get these and other quoted numbers. AFAIK the true values are not sufficiently different to affect the point I'm making.
**If it were up to me, I'd taboo [? · GW] the term 'heavy-tailed' entirely, because having an oft-used term whose mathematical and commonsense notions differ is an obvious recipe for miscommunication in a STEM-heavy community like this one.
Replies from: Max_Daniel, Max_Daniel, Max_Daniel
comment by Max_Daniel · 2021-04-03T10:51:35.402Z · EA(p) · GW(p)
As an aside, for a good and philosophically rigorous criticism of cavalier assumptions of normality or (arguably) pseudo-explanations that involve the central limit theorem, I'd recommend Lyon (2014), Why are Normal Distributions Normal?
Basically I think that whenever we are in the business of understanding how things actually work/"why" we're seeing the data distributions we're seeing, often-invoked explanations like the CLT or "multiplicative" CLT are kind of the tip of the iceberg that provides the "actual" explanation (rather than being literally correct by themselves), this iceberg having to do with the principle of maximum entropy / the tendency for entropy to increase / 'universality' and the fact that certain types of distributions are 'attractors' for a wide range of generating processes. I'm too much of an 'abstract algebra person' to have a clear sense of what's going on, but I think it's fairly clear that the folk story of "a lot of things 'are' normally distributed because of 'the' central limit theorem" is at best an 'approximation' and at worst misleading.
(One 'mathematical' way to see this is that it's fishy that there are so many different versions of the CLT rather than one clear 'canonical' or 'maximally general' one. I guess stuff like this also is why I tend to find common introductions to statistics horribly unaesthetic and have had a hard time engaging with them.)
comment by Max_Daniel · 2021-04-03T10:38:35.670Z · EA(p) · GW(p)
I haven't read the meta-analysis, but I'd tentatively bet that much like biological properties these jobs actually follow log-normal distributions and they just couldn't tell (and weren't trying to tell) the difference.
I kind of agree with this (and this is why I deliberately said that "they report a Gaussian distribution" rather than e.g. "performance is normally distributed"). In particular, yes, they just assumed a normal distribution and then ran with this in all cases in which it didn't lead to obvious problems/bad fits no matter the parameters. They did not compare Gaussian with other models.
I still think it's accurate and useful to say that they were using (and didn't reject) a normal distribution as a model for low- and medium-complexity jobs, as this does tell you something about what the data looks like. (Since there is a lot of possible data where no normal distribution is a reasonable fit.)
I also agree that probably a log-normal model is "closer to the truth" than a normal one. But on the other hand I think it's pretty clear that actually neither a normal nor a log-normal model is fully correct. Indeed, what would it mean that "jobs actually follow a certain type of distribution"? If we're just talking about fitting a distribution to data, we will never get a perfect fit, and all we can do is providing goodness-of-fit statistics for different models (which usually won't conclusively identify any single one). This kind of brute/naive empiricism just won't and can't get us to "how things actually work". On the other hand, if we try to build a model of the causal generating mechanism of job performance it seems clear that the 'truth' will be much more complex and messy - we will only have finitely many contributing things (and a log-normal distribution is something we'd get at best "in the limit"), the contributing factors won't all be independent etc. etc. Indeed, "probability distribution" to me basically seems like the wrong type to talk about when we're in the business of understanding "how things actually work" - what we want then is really a richer and more complex model (in the sense that we could have several different models that would yield the same approximate data distribution but that would paint a fairly different picture of "how things actually work"; basically I'm saying that things like 'quantum mechanics' or 'the Solow growth model' or whatever have much more structure and are not a single probability distribution).
Replies from: AGB
comment by AGB · 2021-04-03T10:55:30.498Z · EA(p) · GW(p)
Briefly on this, I think my issue becomes clearer if you look at the full section.
If we agree that log-normal is more likely than normal, and log-normal distributions are heavy-tailed, then saying 'By contrast, [performance in these jobs] is thin-tailed' is just incorrect? Assuming you meant the mathematical senses of heavy-tailed and thin-tailed here, which I guess I'm not sure if you did.
This uncertainty and resulting inability to assess whether this section is true or false obviously loops back to why I would prefer not to use the term 'heavy-tailed' at all, which I will address in more detail in my reply to your other comment.
Ex-post performance appears ‘heavy-tailed’ in many relevant domains, but with very large differences in how heavy-tailed: the top 1% account for between 4% to over 80% of the total. For instance, we find ‘heavy-tailed’ distributions (e.g. log-normal, power law) of scientific citations, startup valuations, income, and media sales. By contrast, a large meta-analysis reports ‘thin-tailed’ (Gaussian) distributions for ex-post performance in less complex jobs such as cook or mail carrier
Replies from: Max_Daniel
comment by Max_Daniel · 2021-04-03T16:20:48.336Z · EA(p) · GW(p)
I think the main takeaway here is that you find that section confusing, and that's not something one can "argue away", and does point to room for improvement in my writing. :)
With that being said, note that we in fact don't say anywhere that anything 'is thin-tailed'. We just say that some paper 'reports' a thin-tailed distribution, which seems uncontroversially true. (OTOH I can totally see that the "by contrast" is confusing on some readings. And I also agree that it basically doesn't matter what we say literally - if people read what we say as claiming that something is thin-tailed, then that's a problem.)
FWIW, from my perspective the key observations (which I apparently failed to convey in a clear way at least for you) here are:
• The top 1% share of ex-post "performance" [though see elsewhere that maybe that's not the ideal term] data reported in the literature varies a lot, at least between 3% and 80%. So usually you'll want to know roughly where on the spectrum you are for the job/task/situation relevant to you rather than just whether or not some binary property holds.
• The range of top 1% shares is almost as large for data for which the sources used a mathematically 'heavy-tailed' type of distribution as a model. In particular, there are some cases where some source reports a mathematically 'heavy-tailed' distribution but where the top 1% share is barely larger than for other data based on a mathematically 'thin-tailed' distribution.
• (As discussed elsewhere, it's of course mathematically possible to have a mathematically 'thin-tailed' distribution with a larger top 1% share than a mathematically 'heavy-tailed' distribution. But the above observation is about what we in fact find in the literature rather than about what's mathematically possible. I think the key point here is not so much that we haven't found a 'thin-tailed' distribution with larger top 1% share than some 'heavy-tailed' distribution, but that the mathematical 'heavy-tailed' property doesn't cleanly distinguish data/distributions by their top 1% share even in practice.)
• So don't look at whether the type of distribution used is 'thin-tailed' or 'heavy-tailed' in the mathematical sense, ask how heavy-tailed in the everyday sense (as operationalized by top 1% share or whatever you care about) your data/distribution is.
So basically what I tried to do is mentioning that we find both mathematically thin-tailed and mathematically heavy-tailed distributions reported in the literature in order to point out that this arguably isn't the key thing to pay attention to. (But yeah I can totally see that this is not coming across in the summary as currently worded.)
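To make the 'top 1% share' operationalization concrete, here's a quick simulation (a sketch; the parameter choices below are arbitrary illustrations, not fitted to any of the data discussed in the doc):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

def top_share(samples, top_frac=0.01):
    """Share of the total held by the top `top_frac` of the sample."""
    samples = np.sort(samples)
    k = max(1, int(len(samples) * top_frac))
    return samples[-k:].sum() / samples.sum()

# Mathematically thin-tailed: a normal distribution (clipped at 0 so that
# 'share of the total' stays meaningful).
normal = np.clip(rng.normal(100, 15, n), 0, None)
# Mathematically heavy-tailed, but with a small sigma: barely different.
lognormal_mild = rng.lognormal(mean=np.log(100), sigma=0.15, size=n)
# Mathematically heavy-tailed with a large sigma: heavy in the everyday sense.
lognormal_wild = rng.lognormal(mean=np.log(100), sigma=1.5, size=n)

for name, s in [("normal(100, 15)", normal),
                ("log-normal, sigma=0.15", lognormal_mild),
                ("log-normal, sigma=1.5", lognormal_wild)]:
    print(f"{name}: top 1% share = {top_share(s):.1%}")
```

With these (assumed) parameters, the mathematically heavy-tailed log-normal with sigma = 0.15 has essentially the same top 1% share as the thin-tailed normal, while the sigma = 1.5 log-normal's top 1% share is an order of magnitude larger - so the binary 'heavy-tailed' label doesn't track the quantity we care about.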
As I tried to explain in my previous comment [EA(p) · GW(p)], I think the question whether performance in some domain is actually 'thin-tailed' or 'heavy-tailed' in the mathematical sense is closer to ill-posed or meaningless than true or false. Hence why I set aside the issue of whether a normal distribution or similar-looking log-normal distribution is the better model.
comment by Max_Daniel · 2021-04-03T10:22:01.174Z · EA(p) · GW(p)
Yeah, I think we agree on the maths, and I'm quite sympathetic to your recommendations regarding framing based on this. In fact, emphasizing "top x% share" as metric and avoiding any suggestion that it's practically useful to treat "heavy-tailed" as a binary property were my key goals for the last round of revisions I made to the summary - but it seems like I didn't fully succeed.
FWIW, I maybe wouldn't go quite as far as you suggest in some places. I think the issue of "mathematically 'heavy-tailed' distributions may not be heavy-tailed in practice in the everyday sense" is an instance of a broader issue that crops up whenever one uses mathematical concepts that are defined in asymptotic terms in applied contexts.
To give just one example, consider that we often talk of "linear growth", "exponential growth", etc. I think this is quite useful, and that it would overall be bad to 'taboo' these terms and always replace them with some 'model-agnostic' metric that can be calculated for finitely many data points. But there we have the analogous issue that, depending on the parameters, e.g. an exponential function can for practical purposes look very much like a linear function over the relevant finite range of data.
Another example would be computational complexity, e.g. when we talk about algorithms being "polynomial" or "exponential" regarding how many steps they require as function of the size of their inputs.
Yet another example would be attractors in dynamical systems.
In these and many other cases we encounter the same phenomenon that we often talk in terms of mathematical concepts that by definition only tell us that some property holds "eventually", i.e. in the limit of arbitrarily long amounts of time, arbitrarily much data, or similar.
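The growth example can be made concrete in a couple of lines (a sketch with an arbitrarily chosen small rate):

```python
import numpy as np

# Over a short range, a slow exponential is numerically almost a straight line...
x_short = np.linspace(0, 1, 101)
gap_short = np.max(np.abs(np.exp(0.1 * x_short) - (1 + 0.1 * x_short)))

# ...but over a long enough range it escapes any line.
x_long = np.linspace(0, 100, 101)
gap_long = np.max(np.abs(np.exp(0.1 * x_long) - (1 + 0.1 * x_long)))

print(gap_short, gap_long)
```

Over the short range the maximum gap is below 1%, while over the long range the exponential exceeds its tangent line by orders of magnitude - the asymptotic label only bites "eventually".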
Of course, being aware of this really is important. In practice it often is crucial to have an intuition or more precise quantitative bounds on e.g. whether we have enough data points to be able to use some computational method that's only guaranteed to work in the limit of infinite data. And sometimes we are better off using some algorithm that for sufficiently large inputs would be slower than some alternative, etc.
But on the other hand, talk in terms of 'asymptotic' concepts often is useful as well. I think one reason for why is that in practice when e.g. we say that something "looks like a heavy-tailed distribution" or that something "looks like exponential growth" we tend to mean "the top 1% share is relatively large / it would be hard to fit e.g. a normal distribution" or "it would be hard to fit a straight line to this data" etc., as opposed to just e.g. "there is a mathematically heavy-tailed distribution that with the right parameters provides a reasonable fit" or "there is an exponential function that with the right parameters provides a reasonable fit". That is, the conventions for the use of these terms are significantly influenced by "practical" considerations (and things like Grice's communication maxims) rather than just their mathematical definition.
So e.g. concretely when in practice we say that something is "log-normally distributed" we often do mean that it looks more heavy-tailed in the everyday sense than a normal distribution (even though it is a mathematical fact that there are log-normal distributions that are relatively thin-tailed in the everyday sense - indeed we can make most types of distributions arbitrarily thin-tailed or heavy-tailed in this sense!).
Replies from: AGB
comment by AGB · 2021-04-03T11:27:05.690Z · EA(p) · GW(p)
So taking a step back for a second, I think the primary point of collaborative written or spoken communication is to take the picture or conceptual map in my head and put it in your head, as accurately as possible. Use of any terms should, in my view, be assessed against whether those terms are likely to create the right picture in a reader's or listener's head. I appreciate this is a somewhat extreme position.
If every time you use the term heavy-tailed (and it's used a lot - a quick CTRL + F tells me it's in the OP 25 times) I have to guess from context whether you mean the mathematical or the commonsense definition, it's more difficult to parse what you actually mean in any given sentence. If someone is reading and doesn't even know that those definitions substantially differ, they'll probably come away with bad conclusions.
This isn't a hypothetical corner case - I keep seeing people come to bad (or at least unsupported) conclusions in exactly this way, while thinking that their reasoning is mathematically sound and thus nigh-incontrovertible. To quote myself above:
The above, in my opinion, highlights the folly of ever thinking 'well, log-normal distributions are heavy-tailed, and this should be log-normal because things got multiplied together, so the top 1% must be at least a few percent of the overall value'.
If I noticed that use of terms like 'linear growth' or 'exponential growth' were similarly leading to bad conclusions, e.g. by being extrapolated too far beyond the range of data in the sample, I would be similarly opposed to their use. But I don't, so I'm not.
If I noticed that engineers at firms I have worked for were obsessed with replacing exponential algorithms with polynomial algorithms because they are better in some limit case, but worse in the actual use cases, I would point this out and suggest they stop thinking in those terms. But this hasn't happened, so I haven't ever done so.
I do notice that use of the term heavy-tailed (as a binary) in EA, especially with reference to the log-normal distribution, is causing people to make claims about how we should expect this to be 'a heavy-tailed distribution' and how important it therefore is to attract the top 1%, and so...you get the idea.
Still, a full taboo is unrealistic and was intended as an aside; closer to 'in my ideal world' or 'this is what I aim for in my own writing [EA(p) · GW(p)]', rather than a practical suggestion to others. As I said, I think the actual suggestions made in this summary are good - replacing the question 'is this heavy-tailed or not' with 'how heavy-tailed is this' should do the trick - and I hope to see them become more widely adopted.
Replies from: Max_Daniel
comment by Max_Daniel · 2021-04-03T16:31:22.668Z · EA(p) · GW(p)
I'm not sure how extreme your general take on communication is, and I think at least I have a fairly similar view.
I agree that the kind of practical experiences you mention can be a good reason to be more careful with the use of some mathematical concepts but not others. I think I've seen fewer instances of people making fallacious inferences based on something being log-normal, but if I had I think I might have arrived at similar aspirations as you regarding how to frame things.
(An invalid type of argument I have seen frequently is actually the "things multiply, so we get a log-normal" part. But as you have pointed out in your top-level comment, if we multiply a small number of thin-tailed and low-variance factors we'll get something that's not exactly a 'paradigmatic example' of a log-normal distribution even though we could reasonably approximate it with one. On the other hand, if the conditions of the 'multiplicative CLT' aren't fulfilled we can easily get something with heavier tails than a log-normal. See also fn26 in our doc:
We’ve sometimes encountered the misconception that products of light-tailed factors always converge to a log-normal distribution. However, in fact, depending on the details the limit can also be another type of heavy-tailed distribution, such as a power law (see, e.g., Mitzenmacher 2004, sc. 5-7 for an accessible discussion and examples). Relevant details include whether there is a strictly positive minimum value beyond which products can’t fall (ibid., sc. 5.1), random variation in the number of factors (ibid., sc. 7), and correlations between factors.
)
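The 'multiplicative CLT' point can be illustrated with a quick simulation (my own sketch with an arbitrary factor distribution; it shows convergence toward log-normality for a fixed, large number of independent factors - the power-law cases in the footnote need extra ingredients, such as a random number of factors or a floor on the product):

```python
import numpy as np

rng = np.random.default_rng(1)

def skewness(x):
    """Sample skewness; approximately 0 for a normal distribution."""
    x = np.asarray(x, dtype=float)
    return ((x - x.mean()) ** 3).mean() / x.std() ** 3

def products(n_factors, n_samples=200_000):
    """Products of i.i.d. uniform(0.5, 1.5) factors - each factor is light-tailed."""
    return rng.uniform(0.5, 1.5, size=(n_samples, n_factors)).prod(axis=1)

few = products(3)    # few factors: the log of the product is still visibly skewed
many = products(50)  # many factors: the log of the product is close to normal (CLT on logs)

print(skewness(np.log(few)), skewness(np.log(many)))
```

The log of the 3-factor product retains clear skew, while the log of the 50-factor product is much closer to Gaussian - i.e. the product only looks like a 'paradigmatic' log-normal in the many-factor limit.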
comment by Benjamin_Todd · 2021-03-30T19:48:01.029Z · EA(p) · GW(p)
I tried to sum up the key messages in plain language in a Twitter thread, in case that helps clarify.
Replies from: Benjamin_Todd
comment by Benjamin_Todd · 2021-05-09T13:39:58.606Z · EA(p) · GW(p)
And now I've created a more accurate summary here: https://80000hours.org/2021/05/how-much-do-people-differ-in-productivity/
comment by eca · 2021-03-26T15:08:02.208Z · EA(p) · GW(p)
Great post! Seems like the predictability question is impt given how much power laws surface in discussion of EA stuff.
More precisely, future citations as well as awards (e.g. Nobel Prize) are predicted by past citations in a range of disciplines
I want to argue that things which look like predicting future citations from past citations are at least partially "uninteresting" in their predictability, in a certain important sense.
(I think this is related to other comments, and have not read your google doc, so apologies if I'm restating. But I think its worth drawing out this distinction)
In many cases I can think of wanting good ex-ante prediction of heavy-tailed outcomes, I want to make these predictions about a collection which is in an "early stage". For example, I might want to predict which EAs will be successful academics, or which of 10 startups' seed rounds I should invest in.
Having better predictive performance at earlier stages gives you a massive multiplier in heavy-tailed domains: investing in a Series C is dramatically more expensive than a seed investment.
Given this, I would really love to have a function which takes in the intrinsic characteristics of an object, and outputs a good prediction of performance.
Citations are not intrinsic characteristics.
When someone is choosing who to cite, they look at - among other things - how many citations they have. All else equal, a paper/author with more citations will get cited more than a paper with fewer citations. Given the limited attention span of academics (myself as case in point), the more highly cited paper will tend to get cited even if the alternative paper is objectively better.
(Ed Boyden at MIT has this idea of "hidden gems" in the literature, which are extremely undercited papers with great ideas: I believe the original idea for PCR, a molecular bio technique, had been languishing for at least 5 years with very little attention before its later rediscovery. This is evidence for the failure of citations to track quality.)
Domains in which "the rich get richer" holds are known to follow heavy-tailed distributions (with an extra condition or two), via this story of preferential attachment.
In domains dominated by this effect we can predict ex-ante that the earliest settlers in a given "niche" are most likely to end up dominating the upper tail of the power law. But if the niche is empty, and we are asked to predict which of a set would be able to set up shop in the niche--based on intrinsic characteristics--we should be more skeptical of our predictive ability, it seems to me.
Besides citations, I'd argue that many/most other prestige-driven enterprises have at least a non-negligible component of their variance explained by preferential attachment. I don't think it's a coincidence that the oldest Universities in a geography also seem to be more prestigious, for example. This dynamic is also present in links on the interwebs and lots of other interesting places.
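A minimal "rich get richer" simulation along these lines (a sketch, not calibrated to citation data; each new paper cites exactly one predecessor, with an attachment weight I've chosen arbitrarily) shows both the heavy tail and the early-settler advantage:

```python
import numpy as np

rng = np.random.default_rng(2)

n_papers = 5000
citations = np.zeros(n_papers)

# Papers arrive one at a time; each new paper cites one existing paper,
# chosen with probability proportional to (its citations + 1).
for t in range(1, n_papers):
    weights = citations[:t] + 1
    cited = rng.choice(t, p=weights / weights.sum())
    citations[cited] += 1

# Indices double as arrival order: paper 0 is the earliest settler.
top = np.argsort(citations)[::-1][: n_papers // 100]
share = citations[top].sum() / citations.sum()
print(f"top 1% of papers hold {share:.0%} of all citations")
print(f"median arrival rank of those papers: {np.median(top):.0f} out of {n_papers}")
```

Even though every paper here is intrinsically identical, the top 1% capture a large share of all citations, and they are overwhelmingly early arrivals - predictability from "past citations" without any intrinsic characteristics at all.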
I'm currently most interested in how predictable heavy-tailed outcomes are before you have seen the citation-count analogue, because it seems like a lot of potentially valuable EA work is in niches which don't exist yet.
That doesn't mean the other type of predictability is useless, though. It seems like maybe on the margin we should actually be happier defaulting to making a bet on whichever option has accumulated the most "citations" to date instead of trusting our judgement of the intrinsic characteristics.
Anyhoo- thanks again for looking into this!
Replies from: Max_Daniel, Max_Daniel, Max_Daniel
comment by Max_Daniel · 2021-03-26T17:44:22.966Z · EA(p) · GW(p)
Thanks! I agree with a lot of this.
I think the case of citations / scientific success is a bit subtle:
• My guess is that the preferential attachment story applies most straightforwardly at the level of papers rather than scientists. E.g. I would expect that scientists who want to cite something on topic X will cite the most-cited paper on X rather than first looking for papers on X and then looking up the total citations of their authors.
• I think the Sinatra et al. (2016) findings which we discuss in our relevant section push at least slightly against a story that says it's all just about "who was first in some niche". In particular, if preferential attachment at the level of scientists was a key driver, then I would expect authors who get lucky early in their career - i.e. publish a much-cited paper early - to get more total citations. In particular, citations to future papers by a fixed scientist should depend on citations to past papers by the same scientist. But that is not what Sinatra et al. find - they instead find that within the career of a fixed scientist the per-paper citations seem entirely random.
• Instead their model uses citations to estimate an 'intrinsic characteristic' that differs between scientists - what they call Q.
• (I don't think this is very strong evidence that such an intrinsic quality 'exists' because this is just how they choose the class of models they fit. Their model fits the data reasonably well, but we don't know if a different model with different bells and whistles wouldn't fit the data just as well or better. But note that, at least in my view, the idea that there are ability differences between scientists that correlate with citations looks likely on priors anyway, e.g. because of what we know about GMA/the 'positive manifold' of cognitive tasks or garden-variety impressions that some scientists just seem smarter than others.)
• The International Maths Olympiad (IMO) paper seems like a clear example of our ability to measure an 'intrinsic characteristic' before we've seen the analog of a citation counts. IMO participants are high school students, and the paper finds that even among people who participated in the IMO in the same year and got their PhD from the same department IMO scores correlate with citations, awards, etc. Now, we might think that maybe maths is extreme in that success there depends unusually much on fluid intelligence or something like that, and I'm somewhat sympathetic to that point / think it's partly correct. But on priors I would find it very surprising if this phenomenon was completely idiosyncratic to maths. Like, I'd be willing to bet that scores at the International Physics Olympiad, International Biology Olympiad, etc., as well as simply GMA or high school grades or whatever, correlate with future citations in the respective fields.
• The IMO example is particularly remarkable because it's in the extreme tail of performance. If we're not particularly interested in the tail, then I think some of studies on more garden-variety predictors such as GMA or personality we cite in the relevant section give similar examples.
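A toy version of this kind of model (my own sketch, not the actual Sinatra et al. specification; all parameter values are made up) shows how an early hit can predict later success across scientists purely via a stable Q, even though the per-paper luck terms are serially uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(4)

n_scientists, n_papers = 2000, 40
# Each scientist has a fixed 'quality' factor Q; each paper's citations are
# Q times an i.i.d. luck term p.
Q = rng.lognormal(mean=0.0, sigma=0.5, size=(n_scientists, 1))
p = rng.lognormal(mean=0.0, sigma=1.0, size=(n_scientists, n_papers))
citations = Q * p

first = citations[:, 0]                 # each scientist's first paper
later = citations[:, 1:].mean(axis=1)   # mean citations of all later papers

# Across scientists, an early hit does predict later success...
raw = np.corrcoef(np.log(first), np.log(later))[0, 1]
# ...but only via Q: the luck components alone are uncorrelated.
luck_only = np.corrcoef(np.log(p[:, 0]), np.log(p[:, 1:].mean(axis=1)))[0, 1]
print(raw, luck_only)
```

The raw early-vs-later correlation is substantial while the luck-only correlation is near zero, so in a model like this the predictive signal lives entirely in the intrinsic characteristic, not in preferential attachment.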
Replies from: eca
comment by eca · 2021-03-28T01:13:31.904Z · EA(p) · GW(p)
Interesting! Many great threads here. I definitely agree that some component of scientific achievement is predictable, and the IMO example is excellent evidence for this. Didn't mean to imply any sort of disagreement with the premise that talent matters; I was instead pointing at a component of the variance in outcomes which follows different rules.
Fwiw, my actual bet is that to become a top-of-field academic you need both talent AND to get very lucky with early career buzz. The latter is an instantiation of preferential attachment. I'd guess for each top-of-field academic there are at least 10 similarly talented people who got unlucky in the paper lottery and didn't have enough prestige to make it to the next stage in the process.
It sounds like I should probably just read Sinatra, but it's quite surprising to me that publishing a highly cited paper early in one's career isn't correlated with a larger total number of citations, at the high-performing tail (did I understand that right? Were they considering the right tail?). Anecdotally I notice that the top profs I know tend to have had a big paper/discovery early. I.e. Ed Boyden, who I have been thinking of because he has interesting takes on metascience, ~invented optogenetics in his PhD in 2005 (at least I think this was the story?) and it remains his most cited paper to this day by a factor of ~3.
On the scientist vs paper preferential attachment story, I could buy that. I was pondering while writing my comment how much is person-prestige driven vs. paper driven. I think for the most part you're right that it's paper driven, but I decided this cashes out as effectively the same thing. My reasoning was: if the number of citations per paper is power law-ish, then because citations per scientist is just the sum of these, it will be dominated by the top few papers. Therefore preferential attachment on the level of papers will produce "rich get richer" on the level of scientists, and this is still an example of the phenomenon because it's not an intrinsic characteristic.
That said, my highly anecdotal experience is that there is actually a per-person effect at the very top. I've been lucky to work with George Church, one of the top profs in synthetic biology. Folks in the lab literally talk about "the George Effect" when submitting papers to top journals: the paper is more attractive simply because George's name is on it.
But my sense is that I should look into some of the refs you provided! (thanks :)
Replies from: Max_Daniel, eca
comment by Max_Daniel · 2021-03-29T11:51:28.399Z · EA(p) · GW(p)
it's quite surprising to me that publishing a highly cited paper early in one's career isn't correlated with a larger total number of citations, at the high-performing tail (did I understand that right? Were they considering the right tail?)
No, they considered the full distribution of scientists with long careers and sustained publication activity (which themselves form the tail of the larger population of everyone with a PhD).
That is, their analysis includes the right tail but wasn't exclusively focused on it. Since by its very nature there will only be few data points in the right tail, it won't have a lot of weight when fitting their model. So it could in principle be the case that if we looked only at the right tail specifically this would suggest a different model.
It is certainly possible that early successes may play a larger causal role in the extreme right tail - we often find distributions that are mostly log-normal, but with a power-law tail, suggesting that the extreme tail may follow different dynamics.
comment by eca · 2021-03-28T01:32:12.130Z · EA(p) · GW(p)
Sorry meant to write "component of scientific achievement is predictable from intrinsic characteristics" in that first line
comment by Max_Daniel · 2021-03-26T17:49:10.326Z · EA(p) · GW(p)
Relatedly, you might be interested in these two footnotes discussing how impressive it is that Sinatra et al. (2016) - the main paper we discuss in the doc - can predict the evolution of the Hirsch index (a citation measure) over a full career based on the Hirsch index after the first 20 or 50 papers:
Note that the evolution of the Hirsch index depends on two things: (i) citations to future papers and (ii) the evolution of citations to past papers. It seems easier to predict (ii) than (i), but we care more about (i). This raises the worry that predictions of the Hirsch index are a poor proxy of what we care about – predicting citations to future work – because successful predictions of the Hirsch index may work largely by predicting (ii) but not (i). This does make Sinatra and colleagues’ ability to predict the Hirsch index less impressive and useful, but the worry is attenuated by two observations: first, the internal validity of their model for predicting successful scientific careers is independently supported by its ability to predict Nobel prizes and other awards; second, they can predict the Hirsch index over a very long period, when it is increasingly dominated by future work rather than accumulating citations to past work.
Acuna, Allesina, & Kording (2012) had previously proposed a simple linear model for predicting scientists’ Hirsch index. However, the validity of their model for the purpose of predicting the quality of future work is undermined more strongly by the worry explained in the previous footnote; in addition, the reported validity of their model is inflated by their heterogeneous sample that, unlike the sample analyzed by Sinatra et al. (2016), contains both early- and late-career scientists. (Both points were observed by Penner et al. 2013.)
Replies from: eca
comment by eca · 2021-03-28T01:28:54.713Z · EA(p) · GW(p)
Neat. I'd be curious if anyone has tried blinding the predictive algorithm to prestige: ie no past citation information or journal impact factors. And instead strictly use paper content (sounds like a project for GPT-6).
It might be interesting also to think about how talent vs. prestige-based models explain the cases of scientists whose work was groundbreaking but did not garner attention at the time. I'm thinking, e.g. of someone like Kjell Keppe who basically described PCR, the foundational molbio method, a decade early.
If you look at natural experiments in which two groups publish the ~same thing, but only one makes the news, the fully talent-based model (I think?) predicts that there should not be a significant difference between citations and other markers of academic success (unless your model of talent is including something about marketing which seems like a stretch to me).
comment by Max_Daniel · 2021-03-29T11:44:19.539Z · EA(p) · GW(p)
Ed Boyden at MIT has this idea of "hidden gems" in the literature, which are extremely undercited papers with great ideas: I believe the original idea for PCR, a molecular bio technique, had been languishing for at least 5 years with very little attention before its later rediscovery.
A related phenomenon has been studied in the scientometrics literature under the label 'sleeping beauties'.
Here is what Clauset et al. (2017, pp. 478f.) say in their review of the scientometrics/'science of science' field:
However, some discoveries do not follow these rules, and the exceptions demonstrate that there can be more to scientific impact than visibility, luck, and positive feedback. For instance, some papers far exceed the predictions made by simple preferential attachment (5, 6). And then there are the “sleeping beauties” in science: discoveries that lay dormant and largely unnoticed for long periods of time before suddenly attracting great attention (7–9). A systematic analysis of nearly 25 million publications in the natural and social sciences over the past 100 years found that sleeping beauties occur in all fields of study (9).
Examples include a now famous 1935 paper by Einstein, Podolsky, and Rosen on quantum mechanics; a 1936 paper by Wenzel on waterproofing materials; and a 1958 paper by Rosenblatt on artificial neural networks. The awakening of slumbering papers may be fundamentally unpredictable in part because science itself must advance before the implications of the discovery can unfold.
[See doc linked in the OP for full reference.]
Replies from: Lukas_Gloor, eca
comment by Lukas_Gloor · 2021-03-29T13:03:41.427Z · EA(p) · GW(p)
The awakening of slumbering papers may be fundamentally unpredictable in part because science itself must advance before the implications of the discovery can unfold.
Except to the authors themselves, who may often have an inkling that their paper is important. E.g., I think Rosenblatt was incredibly excited/convinced about the insights in that sleeping beauty paper. (Small chance my memory is wrong about this, or that he changed his mind at some point.)
I don't think this is just a nitpicky comment on the passage you quoted. I find it plausible that there's some hard-to-study quantity around 'research taste' that predicts impact quite well. It'd be hard to study because the hypothesis is that only very few people have it. To tell who has it, you kind of need to have it a bit yourself. But one decent way to measure it is asking people who are universally regarded as 'having it' to comment on who else they think also has it. (I know this process would lead to unfair network effects and may result in false negatives and so on, but I'm advancing a descriptive observation here; I'm not advocating for a specific system on how to evaluate individuals.)
Related: I remember a comment (can't find it anymore) somewhere by Liv Boeree or some other poker player familiar with EA. The commenter explained that monetary results aren't the greatest metric for assessing the skill of top poker players. Instead, it's best to go with assessments by expert peers. (I think this holds mostly for large-field tournaments, not online cash games.)
Replies from: Max_Daniel, Lukas_Gloor
comment by Max_Daniel · 2021-03-29T15:11:15.415Z · EA(p) · GW(p)
Related: I remember a comment (can't find it anymore) somewhere by Liv Boeree or some other poker player familiar with EA. The commenter explained that monetary results aren't the greatest metric for assessing the skill of top poker players. Instead, it's best to go with assessments by expert peers. (I think this holds mostly for large-field tournaments, not online cash games.)
If I remember correctly, Linchuan Zhang made or referred to that comment somewhere on the Forum when saying that it was similar for assessing forecaster skill. (Or maybe it was you? :P)
Replies from: Linch
comment by Linch · 2021-03-29T20:35:51.604Z · EA(p) · GW(p)
I have indeed made that comment somewhere. It was one of the more insightful/memorable comments she made when I interviewed her, but tragically I didn't end up writing down that question in the final document (maybe due to my own lack of researcher taste? :P)
That said, human memory is fallible etc so maybe it'd be worthwhile to circle back to Liv and ask if she still endorses this, and/or ask other poker players how much they agree with it.
Replies from: alexrjl
comment by alexrjl · 2021-03-29T21:02:01.865Z · EA(p) · GW(p)
I've been much less successful than LivB but would endorse it, though I'd note that there are substantially better objective metrics than cash prizes for many kinds of online play, and I'd have a harder time arguing that those were less reliable than subjective judgements of other good players. It somewhat depends on the sample though; at the highest stakes, the combination of a very small player pool and fairly small samples makes this quite believable.
comment by Lukas_Gloor · 2021-03-29T13:14:40.459Z · EA(p) · GW(p)
To give an example of what would go into research taste, consider the issue of reference class tennis (rationalist jargon for arguments on whether a given analogy has merit, or two people throwing widely different analogies at each other in an argument). That issue comes up a lot especially in preparadigmatic branches of science. Some people may have good intuitions about this sort of thing, while others may be hopelessly bad at it. Since arguments of that form feel notoriously intractable to outsiders, it would make sense if "being good at reference class tennis" were a skill that's hard to evaluate.
comment by eca · 2021-03-30T16:09:09.978Z · EA(p) · GW(p)
Yeah this is great; I think Ed probably called them sleeping beauties and I was just misremembering :)
Thanks for the references!
comment by Ben_West · 2021-04-17T03:59:42.112Z · EA(p) · GW(p)
Basic statistics question: the GMA predictors research seems to mostly be using the Pearson correlation coefficient, which I understand to measure linear correlation between variables.
But a linear correlation would imply that billionaires have an IQ of 10,000 or something, which is clearly implausible. Are these correlations actually measuring something which could plausibly be linearly related (e.g. Z-scores for both IQ and income)?
I read through a few of the papers cited and didn't see any mention of this. I expect this to be especially significant at the tails, which is what you are looking at here.
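For illustration, here is a toy model (assumed numbers, not real data) in which IQ relates linearly to income only on the standardized log scale; the Pearson correlation computed against raw, heavy-tailed income then comes out noticeably smaller than the Z-score correlation:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 100_000
iq_z = rng.normal(size=n)  # IQ expressed as a Z-score
# Toy assumption: the Z-score of log-income is linearly related to the IQ
# Z-score with correlation 0.4.
log_income_z = 0.4 * iq_z + np.sqrt(1 - 0.4**2) * rng.normal(size=n)
income = np.exp(10 + 1.5 * log_income_z)  # raw income: heavy-tailed

corr_z = np.corrcoef(iq_z, log_income_z)[0, 1]  # correlation on the Z scale
corr_raw = np.corrcoef(iq_z, income)[0, 1]      # correlation with raw income
print(corr_z, corr_raw)
```

Even when the Z-score relationship is perfectly linear by construction, the correlation against the raw dollar amounts is attenuated, which is one reason the choice of scale matters so much in the tails.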
Replies from: Max_Daniel
comment by Max_Daniel · 2021-04-17T10:04:23.921Z · EA(p) · GW(p)
Are these correlations actually measuring something which could plausibly be linearly related (e.g. Z score for both IQ and income)?
I haven't looked at the papers to check and don't remember but my guess would be that it's something like this. Plus maybe some papers looking at things other than correlation.
comment by Jan_Kulveit · 2021-04-10T16:09:39.771Z · EA(p) · GW(p)
1.
For different take on very similar topic check this discussion [LW · GW] between me and Ben Pace (my reasoning was based on the same Sinatra paper).
For practical purposes, in case of scientists, one of my conclusions was
Translating into the language of digging for gold, the prospectors differ in their speed and ability to extract gold from the deposits (Q). The gold in the deposits actually is randomly distributed. To extract exceptional value, you have to have both high Q and be very lucky. What is encouraging in selecting the talent is the Q seems relatively stable in the career and can be usefully estimated after ~20 publications. I would guess you can predict even with less data, but the correct "formula" would be trying to disentangle interestingness of the problems the person is working on from the interestingness of the results.
2.
For practical purposes, my impression is some EA recruitment efforts could be more often at risk of over-filtering by ex-ante proxies and being bitten by tails coming apart [LW · GW], rather than at risk of not being selective enough.
Also, often the practical optimization question is how much effort you should spend on how extreme a tail of the ex-ante distribution.
3.
A meta-observation: someone should really recommend that more EAs join the complex systems / complex networks community.
Most of the findings from this research project seem to be based on research originating in complex networks community, including research directions such as "science of success", and there is more which can be readily used, "translated" or distilled.
comment by meerpirat · 2021-04-02T12:20:26.409Z · EA(p) · GW(p)
Nice, I think developing a deeper understanding here seems pretty useful, especially as I don't think the EA community can just copy the best hiring practices of existing institutions due to a lack of shared goals (e.g. most big tech firms) or suboptimal hiring practices (e.g. non-profits & most? places in academia).
I'm really interested in the relation between the increasing number of AI researchers and the associated rate of new ideas in AI. I'm not really sure how to think about this yet and would be interested in your (or anybody's) thoughts. Some initial thoughts:
If the distribution of rates of ideas over all people that could do AI research is really heavy-tailed, and the people with the highest rates of ideas would've worked on AI even before the funding started to increase, maybe one would expect less of an increase in the rate of ideas (ignoring that more funding will make those researchers also more productive).
• my vague intuition here is that the distribution is not extremely heavy-tailed (e.g. the top 1% researchers with the most ideas contribute maybe 10% of all ideas?) and that more funding will capture many AI researchers that will end up landing in the top 10% quantile (e.g. every doubling of AI researchers will replace 2% of the top 10%?)
• I'm not sure to which if any distribution in your report I could relate the distribution of rates of ideas over all people who can do AI research. Number of papers written over the whole career might fit best, right? (see table extracted from your report)
Replies from: Max_Daniel
comment by Max_Daniel · 2021-04-02T13:00:20.608Z · EA(p) · GW(p)
I'm really interested in the relation between the increasing number of AI researchers and the associated rate of new ideas in AI.
Yeah, that's an interesting question.
One type of relevant data that's different from looking at the output distribution across scientists is just looking at the evolution of total researcher hours on one hand and measures of total output on the other hand. Bloom and colleagues' Are ideas getting harder to find? collects such data and finds that research productivity, i.e. roughly output per researcher hour, has been falling everywhere they look:
The number of researchers required today to achieve the famous doubling of computer chip density is more than 18 times larger than the number required in the early 1970s. More generally, everywhere we look we find that ideas, and the exponential growth they imply, are getting harder to find.
Replies from: meerpirat
comment by meerpirat · 2021-04-02T15:37:53.716Z · EA(p) · GW(p)
Thanks, yes, that seems much more relevant. The cases in that paper feel slightly different in that I expect AI and ML to currently be much more "open" fields where I expect orders of magnitude more paths of ideas that can lead towards transformative AI than
• paths of ideas leading to higher transistor counts on a CPU (hmm, because it's a relatively narrow technology confronting physical extremes?)
• paths of ideas leading to higher crop yields (because evolution already invested a lot of work in optimizing energy conversion?)
• paths of ideas leading to decreased mortality of specific diseases (because this is about interventions in extremely complex biochemical pathways that are still not well understood?)
Maybe I could empirically ground my impression of "openness" by looking at the breadth of cited papers at top ML conferences, indicating how highly branched the paths of ideas currently are compared to other fields? And maybe I could look at the diversity of PIs/institutions of the papers that report new state-of-the-art results in prominent benchmarks, which indicates how easy it is to come into the field and have very good new ideas?
comment by Jsevillamol · 2021-04-05T22:21:50.623Z · EA(p) · GW(p)
Thank you for writing this!
While this is not the high note of the paper, I read with quite some interest your notes about heavy tailed distributions.
I think that the concept of heavy tailed distributions underpins a lot of considerations in EA [? · GW], yet as you remark many people (including me) are still quite confused about how to formalize the concept [LW · GW] effectively, and how often it applies in real life.
Glad to see more thinking going into this!
comment by Charles He · 2021-03-27T18:40:49.199Z · EA(p) · GW(p)
Heyo Heyo!
C-dawg in the house!
I have concerns about how this post and research is framed and motivated.
This is because its methods imply a certain worldview and it is trying to inform hiring or recruiting decisions in EA orgs, so we should be cautious.
Star systems
Like, loosely speaking, I think “star systems” is a useful concept / counterexample to this post.
In this view of the world, someone’s in a “star system” if a small number of people get all the rewards, but not from what we would comfortably call productivity or performance.
So, like, for intuition, most Olympic athletes train near poverty but a small number manage to “get on a cereal box” and become a millionaire. They have higher ability, but we wouldn’t say that Gold medal winners are 1000x more productive than someone they beat by 0.05 seconds.
You might view “star systems” negatively because they are unfair—Yes, and in addition to inequality, they may have very negative effects: they promote echo chambers in R1 research, and also support abuse like that committed by Harvey Weinstein.
However, “star systems” might be natural and optimal given how organizations and projects need to be executed. For intuition, there can be only one architect of a building or one CEO of an org.
It’s probably not difficult to build a model where people of very similar ability work together and end up with a CEO model with very unequal incomes. It’s not clear this isn’t optimal or even “unfair”.
So what?
Your paper is a study or measure of performance.
But as suggested almost immediately above, it seems hard (frankly, maybe even harmful) to measure performance if we don't take into account structures like "star systems", and probably many other complex factors.
Your intro, well written, is very clear and suggests we care about productivity because 1) it seems like a small number of people are very valuable and 2) suggests this in the most direct and useful sense of how EA orgs should hire.
Honestly, I took a quick scan (It’s 51 pages long! I’m willing to do more if there's specific need in the reply). But I know someone who is experienced in empirical economic research, including econometrics, history of thought, causality, and how various studies, methodologies and world-views end up being adopted by organizations.
It’s hard not to pattern match this to something reductive like “Cross-country regressions”, which basically is inadequate (might say it’s an also-ran or reductive dead end).
Overall, you are measuring things like finance, number of papers, and equity, and I don’t see you making a comment or nod to the “Star systems” issue, which may be one of several structural concepts that are relevant.
To me, getting into performance/productivity/production functions seems to be a deceptively strong statement.
It would influence cultures and worldviews, and greatly worsen things, if for example, this was an echo-chamber.
Alternative / being constructive?
It's nice to try to end with something constructive.
I think this is an incredibly important area.
I know someone who built multiple startups and teams. Choosing the right people, from a cofounder to the first 50 hires is absolutely key. Honestly, it’s something akin to dating, for many of the same reasons.
So, well, like my 15 second response is that I would consider approaching this in a different way:
I think if the goal is help EA orgs, you should study successful and not successful EA orgs and figure out what works. Their individual experience is powerful and starting from interviews of successful CEOs and working upwards from what lessons are important and effective in 2021 and beyond in the specific area.
If you want to study exotic, super-star beyond-elite people and figure out how to find/foster/create them, you should study exotic, super-star beyond-elite people. Again, this probably involves huge amounts of domain knowledge, getting into the weeds and understanding multiple world-views and theories of change.
Well, I would write more but it's not clear there are more than 5 people who will read to this point, so I'll end now.
Also, here's a picture of a cat:
Replies from: vaidehi_agarwalla
comment by vaidehi_agarwalla · 2021-03-27T22:49:20.178Z · EA(p) · GW(p)
On a meta-level and unrelated to the post, I very much appreciated the intro and the picture of the cat :)
comment by velutvulpes (james_aung) · 2021-03-26T11:30:28.932Z · EA(p) · GW(p)
Minor typo: "it’s often to reasonable to act on the assumption" probably should be "it’s often reasonable to act on the assumption"
Replies from: Max_Daniel
comment by Max_Daniel · 2021-03-26T11:54:19.603Z · EA(p) · GW(p)
Thanks! Fixed in post and doc.
comment by Max_Daniel · 2021-05-05T21:00:04.912Z · EA(p) · GW(p)
[The following is a lightly edited response I gave in an email conversation.]
What is your overall intuition about positive feedback loops and the importance of personal fit? Do they make it (i) more important, since a small difference in ability will compound over time or (ii) less important since they effectively amplify luck / make the whole process more noisy?
My overall intuition is that the full picture we paint suggests personal fit, and especially being in the tail of personal fit, is more important than one might naively think (at least in domains where ex-post output really is very heavy-tailed). But also how to "allocate"/combine different resources is very important. This is less b/c of feedback loops specifically but more b/c of an implication of multiplicative models:
[So one big caveat is that the following only applies to situations that are in fact well described by a multiplicative model. It's somewhat unclear which these are.]
If ex-post output is very heavy-tailed, total ex-post output will be disproportionately determined by outliers. If ex-post output is multiplicative, then these outliers are precisely those cases where all of the inputs/factors have very high values.
So this could mean: total impact will be disproportionately due to people who are highly talented, have had lots of practice at a number of relevant skills, are highly motivated, work in a great & supportive environment, can focus on their job rather than having to worry about their personal or financial security, etc., and got lucky.
If this is right, then I think it adds interesting nuance to discussions around general mental ability (GMA). Yes, there is substantial evidence indicating that we can measure a 'general ability factor' that is a valid predictor for ~all more specific cognitive abilities. And it's useful to be aware of that, e.g. b/c almost all extreme outlier performers in jobs that rely crucially on cognitive abilities will be people with high GMA. (This is consistent with many/most people having the potential to be "pretty good" at these jobs.) However, conversely, it does NOT follow that GMA is the only thing worth paying attention to. Yes, a high-GMA person will likely be "good at everything" (because everything is at least moderately correlated). But there are differences in how good, and these really matter. To get to the global optimum, you really need to allocate the high-GMA person to a job that relies on whatever specific cognitive ability they're best at, supply all other 'factors of production' at maximal quality, etc. You have to optimize everything.
I.e. this is the toy model: Say we have ten different jobs, J_1 to J_10. Output in all of them is multiplicative. They all rely on different specific cognitive abilities a_1 to a_10 (i.e. each a_i is a factor in the 'production function' for J_i but not the others). We know that there is a "general ability factor" g that correlates decently well with all of the a_i. So if you want to hire for any J_i it can make sense to select on g, especially if you can't measure the relevant a_i directly. However, after you selected on g you might end up with, say, 10 candidates who are all high on g. It does NOT follow that you should allocate them at random between the jobs because "g is the only thing that matters". Instead you should try hard to identify for any person which a_i they're best at, and then allocate them to J_i. For any given person, the difference between their a_i and a_j might look small b/c they're all correlated, but because output is multiplicative this "small" difference will get amplified.
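A minimal simulation of this toy model (all numbers and the functional form are made up for illustration; the other 'factors of production' are collapsed into a fourth-power amplification of the matched ability):

```python
import random

random.seed(0)
n = 10  # ten candidates, ten jobs

# Ten candidates, all selected to be high on g, so general ability is similar.
g = [random.uniform(0.95, 1.05) for _ in range(n)]

# Job-specific abilities a_1..a_10: correlated with g, with small person-by-job variation.
a = [[g[p] * random.uniform(0.9, 1.1) for _ in range(n)] for p in range(n)]

# Multiplicative production: several factors scale with the matched ability,
# so a "small" ability edge compounds (modeled here as a fourth power).
output = [[a[p][j] ** 4 for j in range(n)] for p in range(n)]

# Allocate each person to the job they're best at vs. to a random job.
best_total = sum(max(output[p]) for p in range(n))
random_total = sum(output[p][random.randrange(n)] for p in range(n))

assert best_total >= random_total
```

Even though all candidates look nearly interchangeable on g, the ratio of `best_total` to `random_total` grows with the number of multiplicative factors, which is the amplification effect described above.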
(If true, this might rationalize the common practice/advice of: "Hire great people, and then find the niche they do best in.")
I think Kremer discusses related observations quite explicitly in his o-ring model. In particular, he makes the basic observation that if output is multiplicative you maximize total output by "assortative matching". This is basically just the observation that if output is Y = x·x' and you have four inputs with x_hi > x_lo etc., then x_hi·x_hi + x_lo·x_lo > x_hi·x_lo + x_lo·x_hi - i.e. you maximize total output by matching inputs by quality/value rather than by "mixing" high- and low-quality inputs. It's his explanation for why we're seeing "elite firms", why some countries are doing better than others, etc. In a multiplicative world, the global optimum is a mix of 'high-ability' people working in high-stakes environments with other 'high-ability' people on one hand, and on the other hand 'low-ability' people working in low-stakes environments with other 'low-ability' people. Rather than a uniform setup with mixed-ability teams everywhere, "balancing out" worse environments with better people, etc.
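As a quick sanity check of the matching claim (a minimal sketch with made-up input values; the inequality is just (x_hi - x_lo)^2 > 0 in disguise):

```python
# Assortative vs. mixed matching under multiplicative output Y = x * x'.
x_hi, x_lo = 3.0, 1.0

matched = x_hi * x_hi + x_lo * x_lo  # pair like with like: 9 + 1 = 10
mixed = x_hi * x_lo + x_lo * x_hi    # mix qualities: 3 + 3 = 6

# Matching by quality beats mixing whenever x_hi != x_lo.
assert matched > mixed
```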
(Ofc as with all simple models, even if we think the multiplicative story does a good job at roughly capturing reality, there will be a bunch of other considerations that are relevant in practice. E.g. if unchecked this dynamic might lead to undesirable inequality, and the model might ignore dynamic effects such as people improving their skills by working with higher-skilled people, etc.)
comment by John_Maxwell (John_Maxwell_IV) · 2021-03-30T23:29:51.791Z · EA(p) · GW(p)
It might be worth discussing the larger question which is being asked. For example, your IMO paper seems to be work by researchers who advocate looser immigration policies for talented youth who want to move to developed countries. The larger question is "What is the expected scientific impact of letting a marginal IMO medalist type person from Honduras immigrate to the US?"
These quotes [LW · GW] from great mathematicians all downplay the importance of math competitions. I think this is partially because the larger question they're interested in is different, something like: "How many people need go into math for us to reap most of the mathematical breakthroughs that this generation is capable of?"
comment by FlorianH · 2021-04-01T04:12:22.543Z · EA(p) · GW(p)
Surprised to see nothing (did I overlook it?) about: The People vs. The Project/Job. The title, and the lead sentence,
Some people seem to achieve orders of magnitudes more than others in the same job.
suggest the work focuses essentially on people's performance, but already in the motivational examples
For instance, among companies funded by Y Combinator the top 0.5% account for more than ⅔ of the total market value; and among successful bestseller authors [wait, it's their books, no?], the top 1% stay on the New York Times bestseller list more than 25 times longer than the median author in that group.
(emphasis and [] added by me)
I think I have not explicitly seen discussed whether at all it is the people, or more the exact project (the startup, the book(s)) they work on, that is the successful element, although the outcome is a sort of product of the two. Theoretically, in one (obviously wrong) extreme case: Maybe all Y-Combinator CEOs were similarly performing persons, but some of the startups simply are the right projects!
My gut feeling is that making this fundamental distinction explicit would make the discussion/analysis of performance more tractable.
# Trigonometry: Trying to resolve a trig identity
#### dwsmith
##### Well-known member
I am trying to resolve a trig identity for some notes I am typing up. On paper, I wrote recall $e(\sin(E_1) - \sin(E_2)) = 2\cos(\zeta)\sin(E_m)$. I have no idea now what I was recalling this from.
Identities I have set up are:
\begin{align}
E_p &= \frac{1}{2}(E_1 + E_2)\\
E_m &= \frac{1}{2}(E_1 - E_2)\\
x &= a\cos(E)\\
y &= a\sqrt{1 - e^2}\sin(E)\\
\cos(\zeta) &= e\cos(E_p)\\
\alpha &= \zeta + E_m\\
\beta &= \zeta - E_m
\end{align}
Lambert Section
this may be easier to understand if you look at it.
#### MarkFL
$$\displaystyle e\left(\sin\left(E_1 \right)-\sin\left(E_2 \right) \right)$$
Applying the sum-to-product identity $\sin(A)-\sin(B)=2\sin\left(\frac{A-B}{2}\right)\cos\left(\frac{A+B}{2}\right)$ gives:
$$\displaystyle 2e\sin\left(\frac{E_1-E_2}{2} \right)\cos\left(\frac{E_1+E_2}{2} \right)$$
Then, from your definitions $E_m=\frac{1}{2}(E_1-E_2)$, $E_p=\frac{1}{2}(E_1+E_2)$ and $\cos(\zeta)=e\cos(E_p)$, this is:
$$\displaystyle 2\cos(\zeta)\sin\left(E_m \right)$$
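The identity (with $E_2$ rather than $E_0$) can also be checked numerically; a quick sketch, assuming $e \in [0,1]$ as an eccentricity and using the definitions $E_p=\frac{1}{2}(E_1+E_2)$, $E_m=\frac{1}{2}(E_1-E_2)$, $\cos(\zeta)=e\cos(E_p)$:

```python
import math
import random

random.seed(1)
for _ in range(1000):
    E1 = random.uniform(0, 2 * math.pi)
    E2 = random.uniform(0, 2 * math.pi)
    e = random.uniform(0, 1)              # eccentricity (assumed range)
    Ep = 0.5 * (E1 + E2)
    Em = 0.5 * (E1 - E2)
    lhs = e * (math.sin(E1) - math.sin(E2))
    rhs = 2 * (e * math.cos(Ep)) * math.sin(Em)   # cos(zeta) = e*cos(Ep)
    assert math.isclose(lhs, rhs, abs_tol=1e-12)
```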
# Dealing with more complex entities in an ECS architecture
Preamble: I'm working on a level editor for my engine, which uses an ECS architecture. I have around a dozen component types so far, mostly dealing with graphics, like lighting components, model, and other mesh related things like skeleton and collider components.
My engine only supports 1 of each component type per entity, and systems can deal with optional components as well.
Today I realized I was uncertain about how I should tie together the building block components used for basic entities into a more complex one.
For example, let's say I want to make a torch / flaming stick, using the components found in a model entity and a lighting entity.
If the model's origin is in the center of the shaft, then the light and fire would appear at the center of the model.
Further, if I wanted a weird torch with fire on one end of the stick and smoke on the other, they would both need different offsets before rendering.
Do I need to create a system just for torches, adjusting all the components to fit the model? This doesn't sound like it scales well to different models needing 'decorations'.
Should I instead be creating an attachment component, with its own offset variables? What dictates which components of an entity use the offset? Should I be making attachment types for every component type that could need an offset?
Or rather should most components just have a spatial offset as another parameter the user can control?
Anyone know any commonly used solutions?
• This sounds like a job for a transformation hierarchy: one entity and its components represents the torch, and a different entity with its components represents the flame. The transform of the flame is parented to the torch, to move with it at an offset dictated by its local position. All your components rely only on their entity's transform's world position, no additional offsets/special cases required. Same with wheels on a car, etc. Presumably you've looked into this? Aug 3 '19 at 23:31
• I don't think this would work: what if one part of this original entity needs to influence a part of the other? Since they're split into 2 entities, no system can act on the entire set of components defining a torch. Also, a system having to look up a component's entity's parent entity's components to find a transformation per-component would totally trash the cache, eliminating one huge benefit of ECS in general. Aug 4 '19 at 0:24
Games typically approach this type of issue using a transformation hierarchy.
In this model, each entity can be treated as a "child" of a "parent" entity. The entity's local position and orientation components are interpreted to be relative to the position and orientation of the parent.
Parent entities may in turn have their own parents, grandparents, etc., all the way up to "root" objects that are positioned absolutely in world space.
This lets us build complex articulated structures - like a car with independently spinning/suspended wheels and passengers/cargo that follow it along for the ride, or a humanoid character with a hand that pivots at the wrist of a forearm that pivots at the elbow of an upper arm that pivots at the shoulder... all using one simple transformation composition rule, rather than adding special-case offsets and variants throughout all our components.
Contrary to the concern raised in the comments, this does not need to be unduly unfriendly to the cache. In a simple model, we can associate the following data with each entity:
• a local position
• a local orientation
• a local uniform scale
• a parent entity index (-1 for "this is a root entity")
• a world position
• a world orientation
• a world uniform scale
Most systems make updates only to the local properties (say, a skeletal animation system rotating each bone's local orientation about its pivot) so they don't need to peek into the hierarchy at all and can do their work strictly on one entity at a time (friendly for parallelization). We can defer updates to the world properties until the next physics or rendering step where we need final positions & such.
If we store our entities in non-decreasing order of parent entity ID (this is not too onerous to maintain, since re-parenting is very rare compared to routine transformation updates), then we can update the whole hierarchy's world properties in one linear scan:
• First, we update all the root entities by copying their local parameters to their world parameters
• Next, we walk these lists of components with two indices: the current entity we're updating, and the parent index.
Both these indices move strictly forward through the arrays, so we're not jumping back and forth randomly thrashing the cache. In fact we'll often update several entities in a row with the same or adjacent parents, getting excellent cache utilization despite the indirection.
• For each of these entities, we update the global properties like so:

      parent = parentIndex[current];
      worldPosition[current] = worldPosition[parent] + worldOrientation[parent] * localPosition[current];
      worldOrientation[current] = worldOrientation[parent] * localOrientation[current];
      worldScale[current] = worldScale[parent] * localScale[current];
You can even parallelize this work if needed, by dividing your root objects between separate arrays, and placing child objects in the same array as the root. Then each array can have its hierarchy updated independently of the others.
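The linear scan above can be sketched in a few lines (Python for brevity; positions and uniform scale only, since orientation composes the same way, and all entity data here is made up):

```python
# Entities stored in non-decreasing order of parent index: parents precede children.
parent      = [-1,  0,  0,  1]                              # -1 marks a root entity
local_pos   = [(5.0, 0.0), (0.0, 2.0), (0.0, -2.0), (1.0, 0.0)]
local_scale = [1.0, 1.0, 1.0, 0.5]

world_pos   = [None] * len(parent)
world_scale = [None] * len(parent)

# One forward pass; both the current index and the parent index only move
# forward through the arrays, so access stays cache-friendly despite indirection.
for current in range(len(parent)):
    p = parent[current]
    if p == -1:
        # Root entities: world transform is just the local transform.
        world_pos[current] = local_pos[current]
        world_scale[current] = local_scale[current]
    else:
        # Children: compose with the parent's already-computed world transform.
        px, py = world_pos[p]
        lx, ly = local_pos[current]
        s = world_scale[p]
        world_pos[current] = (px + s * lx, py + s * ly)
        world_scale[current] = s * local_scale[current]

# Entity 3 is a child of entity 1, which is a child of root entity 0.
assert world_pos[3] == (6.0, 2.0)
assert world_scale[3] == 0.5
```

In an engine this would run over contiguous component arrays rather than Python lists, but the control flow (roots first, then a single ordered pass over children) is the same.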
Most interaction between entities in the hierarchy can be accomplished with message systems. So the Torch system doesn't need to directly manipulate the Light Source component on the torch's child entity. It can just leave it a "Turn on" / "Turn off" message when it needs to change state, and the Light system can process that message when it iterates its Light Source components later in the frame.
Now, there will be occasional scripts that do need to reach across to different entities make their decisions and updates. Like an AI awareness system that needs to gather information about nearby entities to update the current entity's state. That's OK though - and largely unavoidable anyway, even with a flat hierarchy. The goal of a data-oriented ECS is not to outright forbid reference-following, it's to keep costly reference-chasing out of the hottest code paths - the things we need to iterate thousands of times per frame.
You can have thousands of animated characters swinging tens of thousands of individual bone transforms around with minimal cache misses, so the less predictable parts of your game scripts - like the player character control logic that only needs to run for a handful of local players each frame - has the breathing room to do its work.
Use the data orientation where it helps you do lots of stuff quickly, but don't let it be a wall that stops you from getting the gameplay behaviour you want.
• So if I'm understanding this correctly, a transformation component stores 2 transforms, a local, and a final world-transform, then all I would need to do is to switch systems over to using the world-transform, and then a new system/mechanism to update the world-transforms every frame. Perhaps I can figure out a way to get that to work, though it will require significant work updating everything, including level serialization and entity inspection w/ tree traversal. Not to mention a messaging system to get these entities to work together Aug 4 '19 at 22:58
• It's up to you how granular you want to divide your components. You could think of it as monolithic as one "Transform Component" including all these fields, or as granular as a "Local Position Component" and a "Local Orientation Component" and a "Transform Parent Component" and a "World Position Component"... Other optimizations you can consider are things like a dirty flag to iterate over only entities whose world transforms might have changed this frame and skip any stationary ones. Trickier to get contiguous/predictable access this way, but may be profitable if many objects are static. Aug 4 '19 at 23:18
# Francesco Maurolico
### Quick Info
Born
16 September 1494
Messina, Kingdom of Sicily (now Italy)
Died
22 July 1575
Messina, Kingdom of Sicily (now Italy)
Summary
Francesco Maurolico was an Italian Benedictine who wrote important books on Greek mathematics. He also worked on geometry, the theory of numbers, optics, conics and mechanics.
### Biography
Francesco Maurolico's name is Greek and is transcribed in a variety of different ways in addition to 'Francesco Maurolico' which is the most common. His first name is sometimes given as Francisco while other forms of his family name are Maurolyco, Maruli or Marulli. He used a Latin form of his name in his publications, giving again different forms: Maurolycus, Maurolicus or Maurolycius. His family were originally Greek but had fled to Messina, in Sicily, to escape from Turkish invasions of their homeland, so their first language was Greek. Francesco's father, Antonio Maurolico, had fled from Constantinople to Messina where he was tutored by the Greek scholar and grammarian Constantine Lascaris who had also fled from Constantinople, settling in Messina in 1466. Antonio became a physician and then Master of the Messina Mint (Sicily minted its own currency at this time). He married Penuccia and they had seven sons: Girolamo, Gio-Saluo, Silvestro, Matteo, Francesco, Gio-Pietro, and Giacomo, and one daughter Laurea. The family were well off, owning two homes in and around Messina, a town house and a villa outside the city. Much of Francesco's education came from his mother, described as a wise and noble woman, and his father, who taught him Greek, mathematics and astronomy. However, he also learned much from Francesco Faraone, a priest in Messina, who taught him grammar and rhetoric. His education gave him the critical approach of the humanist Renaissance scholars. Thus Maurolico would believe in approaching problems using his intellect, and using his own practical experience.
Influenced by his upbringing he entered the Church and was ordained a priest in 1521. However, disease spread through Messina and many died during the next few years, including Maurolico's father, two of his brothers and his sister. Maurolico left the city for a while to escape from the deadly illness. Around 1525 he made a visit to Rome and spent some time there. After his father's death, Maurolico inherited sufficient wealth to allow him to live without working for several years. Although he was not the eldest of his parent's children, his older brothers died before his father so he inherited. Maurolico was able to concentrate on his scholarly pursuits during these years and he produced significant contributions in a broad range of different topics although his best work was done on mathematics. Certainly he was not a wealthy man and required patrons to be able to publish his work and several leading men of Messina appear to have put up funds since their names appear in dedications in Maurolico's work. For example, Grammatica rudimenta, which he published in 1528, was dedicated to Ettore Pignatelli. In the same year he was requested by the governor of Messina, Giovanni Marullo, to lecture on The Sphere of Sacrobosco and Euclid's Elements. In fact the governor attended Maurolico's lectures on these topics. Perhaps as a result of these lectures, Maurolico wrote up his rearrangement and translation of part of Euclid's Elements and completed the task on 9 July 1532. It was on that day that he signed the dedication but it remained unpublished for 43 years, only being printed and published in 1575 as part of Opuscula Mathematica which we say more about below. J-P Sutto [50] discusses Maurolico's work on Euclid:-
In his compendium of Euclid's 'Elements', Francesco Maurolico modifies the theory of proportions. The Sicilian concentrates on equalities of ratios of Book 5 and tries to avoid handling of equimultiples. He concentrates on isolating 'named' ratios - of a number to a number - and he constantly compares ratios and named ratios.
Another of Maurolico's patrons was Giovanni Ventimiglia, 6th Marchese di Geraci, Prince of Castelbuono (a town in the province of Palermo, Sicily), and Governor of Messina. He had married Dona Isabella de Montcada dei Conti di Aitona in 1527 and their son Simone Ventimiglia (1528-60) was also a patron of Maurolico who lived at their estate for long periods during 1547-50. He was able to use the tower of their Pollina Castle from which to make astronomical observations. In 1550, he became a Benedictine and Simone Ventimiglia conferred on him the Abbey of Santa Maria del Parto (today called the Santuario di San Guglielmo) in Castelbuono. At this time the Benedictine Order was one of the two main monastic orders which formed the basis of Christian life in Sicily, the other being the Franciscans. As a Benedictine, Maurolico would have a life of simplicity, a feature that was much loved by the people. However in a 1543 publication he describes events which took place in 1540 (see for example [29]):-
The insolence of the Spanish soldiers returning from the capture of Castelnuovo gave us a winter harsher than usual, so that the barbarians would not incur such consideration from us; among which tumults even I (who would not laugh?), having put down my ruler and compass, was driven to take up arms for a time. For the example of my Archimedes warned me that in such danger I should not be devoting myself to drawing lines and circles.
After several years in the Abbey in Castelbuono, Maurolico returned to Messina where he was appointed as an abbot in the cathedral. In fact he lived his whole life in Sicily except for short periods in Rome and Naples. In Messina, St Ignatius had founded the first ever Jesuit college in 1548. This later became the Studium Generale and is now the University of Messina. Maurolico was involved in the mathematical curriculum that was set up in the College, being appointed as professor of mathematics in 1569. It is interesting to note that his contract specified that he was required to teach the theory of music as a branch of mathematics. He also served as head of the mint in Messina and he was in charge of the fortifications of Messina (in collaboration with Antonio Ferramolino, who worked for the Spanish crown in Sicily during the years 1530-50), largely dominated by the impressive San Salvatore fort, a task he was given by Charles V's viceroy Giovanni de Vega who was President of the Sicilian Parliament in 1546. Maurolico was also mathematics tutor to one of Giovanni de Vega's two sons. He was involved in designing churches in the city, and the building of two famous fountains, the Fontana di Orione in front of the cathedral and the Fontana del Nettuno designed by G Montorsoli. In addition he was appointed to write a history of Sicily which was published under the title Sicanicarum rerum compendium in 1562. Ten years earlier he had received a salary for two years from the Senate of Messina to allow him to work on this history as well as his mathematics texts.
Maurolico wrote important books on Greek mathematics, restored many ancient works from scant information and translated many ancient texts such as those by Theodosius, Menelaus, Autolycus, Euclid, Apollonius and Archimedes. Some appeared in Theodosii Sphaericorum Elementorum Libri iii which he published in 1558. The work contains nine separate items by Maurolico, some translations, some commentaries, and a work of his own De Sphaera Sermo. The Theodosii also contains a table of secants and, although Delambre credited him with the first use of this function, it had appeared earlier in the work of Copernicus. However, Maurolico does prove secant = radius / cosine and secant = (radius × tangent) / sine and gives a table of secants from 0° to 45°. Maurolico also worked on geometry, the theory of numbers (L E Dickson notes some of his results), optics, conics and mechanics, writing important books on these topics which we will discuss in more detail below. Maurolico completed a restoration of books V and VI of Apollonius's Conics in 1547 working from the scant details that Apollonius gives in the Preface to the work. These were not published until 1654, about 80 years after his death, as Emendatio et restitutio conicorum Apollonii Pergaei. Both Guglielmo Libri and Gino Loria claimed that this achievement alone showed that Maurolico was a genius.
In 1535 he wrote Cosmographia in the form of three dialogues, which was published in 1543. In it he states that he completed the book:-
... on Thursday 21 October 1535, the day that the Emperor Charles V came to Messina on his return from the African campaign.
Maurolico gave methods for measuring the Earth in Cosmographia which were later used by Jean Picard in measuring the meridian in 1670. However, he believed that the Earth is the centre of the universe and he dismisses Copernicus's sun centred universe without mentioning him by name (see [14]):-
And it would not be necessary for astronomers to refute any other principles as regards the earth, if diversity of opinion and human fickleness had not so grown that it is doubted whether one may perhaps believe and say the earth turns on its axis whilst the heavens stay at rest.
Later, however, he made a much more personal attack on Copernicus which we quote below.
Maurolico made astronomical observations, in particular he observed the supernova which appeared in Cassiopeia in 1572 now known as 'Tycho's supernova'. Tycho Brahe published details of his observations in 1574. Some details of Maurolico's observations were published by Christopher Clavius but full details of Maurolico's observations were never published and only rediscovered in 1960 by C Doris Hellman. The manuscript that she discovered was dated 6 November 1572, five days before Brahe made his observations. Perhaps there is an argument for renaming 'Tycho's supernova' as 'Maurolico's supernova'. This would, if nothing else, give this important mathematician some wider recognition which he so clearly deserves.
By 1569 Maurolico was making considerable efforts to have a collection of his unpublished work printed. He wrote to Francisco Borgia, general of the Jesuit Order, on 16 April 1569 asking him to assist in publishing:-
... certain compendia, in which I have treated all the essentials compactly and inserted most of the topics omitted, ignored or overlooked by others.
The work, Opuscula Mathematica, was dedicated to the governor of Messina who had agreed to help with the expenses. Maurolico also wrote to Christopher Clavius:-
... requesting his aid in editing or correcting my essays.
The general of the Jesuit Order replied in a letter of 8 July 1569 saying that:-
... steps will be taken to see that the book is sent to Venice and recommended to the rector of our college, so that he may turn it over to a suitable printer for publication.
In fact the book went to the bookseller Giovanni Comisino in Venice but over five years later the work had still not been published and the general of the Jesuit Order (the successor to the one Maurolico wrote to in 1569) wrote to the Venetian governor:-
... please find out from the bookseller Giovanni Comisino what he has done about the printing of Abbot Maurolico's books, because Sicily asks us about them.
Christopher Clavius visited Maurolico in 1574 and Maurolico gave him various manuscripts of his optical works under the title Photismi de lumine, et umbra. These treatises were mostly written in 1554 and Clavius promised to have them published in Rome. One treatise discusses the rainbow about which Maurolico writes [10]:-
But how does it happen, you ask, that the altitude of the rainbow is not exactly 45°, but a little less as ascertained by observation? I do not know how to answer this or what reason I may offer, unless it be that the falling drops are somewhat elongated or somewhat flattened, and thus, varying from the spherical form, change the angle of reflection and hence also the straightness of the ray which in the case of a perfect sphere comes back at an angle of forty-five degrees.
Another treatise in Photismi de lumine, et umbra, called De conspiciliis, discusses lenses [23]:-
Having recognized that the humour crystallinus, assumed to be the seat of the visual power of the eye, is in fact a bi-convex lens, Maurolico included in this treatise a discussion of the working of the eye.
By November 1574 the Opuscula Mathematica was in press but there must have been a further delay since when the work was published it did not contain Maurolico's dedication to the governor of Messina but rather a dedication written by the publisher dated 26 July 1575, four days after Maurolico's death. The Opuscula Mathematica contains seven of Maurolico's treatises including De instrumentis astronomicis on the theory and use of the principal astronomical instruments. Another of the treatises is De Sphaera Liber Unus and, in a postscript to this work, Maurolico addresses the reader (see [14]):-
I wrote the foregoing work, gentle reader, not so that you would peruse only my treatment and ignore all the others, but so that you would understand the others better because of my discussion, and from it learn what was omitted by the others. I have no doubt that on the basis of my elementary exposition you will read more circumspectly, and judge more perspicaciously, what you see in Sacrobosco, Robert Grosseteste or Campanus. Grosseteste did not put an end to the reading of Sacrobosco, nor did Campanus put an end to the reading of Grosseteste, as perhaps he thought he did. In like manner Peurbach's 'Theoricae', although extremely accurate and worked out in accordance with the Ptolemaic system, could not completely eliminate the teachings of Al-Bitruji (Alpetragius) and the ravings of Gerard of Cremona. Georg Peurbach and Regiomontanus contented themselves with warning their readers to learn to the best of their ability what to reject and what to accept. But not even Atlas, who supports the heavens, despite all his vigour would have the strength to correct every mistake that has been made and to lead everyone's mind to the path of truth. There is toleration even for Nicholas Copernicus, who maintained that the sun is still and the earth has a circular motion; and yet he deserves a whip or a scourge rather than a refutation. Let us therefore go on to the remaining topics, lest we waste our time for nothing.
Domenico Scinà writes in 1808 (see the new edition of his Elogio di Francesco Maurolico [5]):-
Nobody can reasonably find fault with Maurolico and chide him for not having sided with Copernicus. I do not thereby intend to excuse our Francesco. On the contrary I readily admit that he did not know how, and was unable, to save himself from the common contagion of his times.
A companion volume Arithmeticorum libri duo, written in 1557, was published at the same time as the Opuscula Mathematica. This work on number theory contains a proof by induction that is claimed to be the first genuine such proof [15]:-
Maurolycus in Book I of his arithmetic begins with the definitions of different kinds of numbers, namely, even, odd, triangular, square, numeri parte altera longiores, etc. By definition the $n$th triangular number is the sum of the integers from 1 to $n$ inclusive and the $n$th numerus parte altera longior is $n(n - 1)$.
Defining the $n$th square number as $n^{2}$ and the $n$th odd number as $2n - 1$, Maurolico goes on to prove results such as:
Proposition VI. The $n$th integer plus the preceding integer equals the $n$th odd number.
Proposition VIII. The $n$th triangular number doubled equals the following numerus parte altera longior.
Proposition X. The $n$th numerus parte altera longior plus $n$ equals the $n$th square number.
Proposition XI. The $n$th triangular number plus the preceding triangular number equals the $n$th square number.
Proposition XIII. The $n$th square number plus the following odd number equals the following square number.
Proposition XV. The sum of the first $n$ odd integers is equal to the $n$th square number.
In the proof of this last Proposition, Maurolico makes clear use of induction. After proving the first few cases he uses Proposition XIII to prove the inductive step from $n$ to $n + 1$.
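The propositions above are elementary identities and can be checked mechanically; a small sketch using the definitions just given (the $n$th triangular number $n(n + 1)/2$, the $n$th odd number $2n - 1$, and the $n$th numerus parte altera longior $n(n - 1)$):

```python
def triangular(n): return n*(n + 1)//2   # sum of the integers 1..n
def odd(n):        return 2*n - 1        # nth odd number
def longior(n):    return n*(n - 1)      # nth numerus parte altera longior

for n in range(1, 200):
    assert n + (n - 1) == odd(n)                          # Prop. VI
    assert 2*triangular(n) == longior(n + 1)              # Prop. VIII
    assert longior(n) + n == n*n                          # Prop. X
    assert triangular(n) + triangular(n - 1) == n*n       # Prop. XI
    assert n*n + odd(n + 1) == (n + 1)*(n + 1)            # Prop. XIII
    assert sum(odd(k) for k in range(1, n + 1)) == n*n    # Prop. XV
print("Propositions VI-XV verified for n up to 199")
```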
We have concentrated on Maurolico's mathematical work but Caterina Pirina writes [37]:-
... Maurolico's interests were not restricted to science. A list of his numerous writings appears in the "Index Lucubrationum" at the end of the volume of his 'Opuscula Mathematica' of 1575. It includes humanistic works, two libelli of carmina and epigrams, and his Latin verse translation from the Greek, Poemata Phocylidis et Pythagorae Moralia, as well as six books of Diodorus Siculus and six books of the elements of grammar.
In 1613 further mathematical works by Maurolico were published by his nephew Francesco, Barone della Foresta, and his brother Silvestro. They had inherited several of Maurolico's unpublished works which they published together with the biography Vita dell'Abbate del Parto D Francesco Maurolico written by Francesco. Perhaps the most significant of Maurolico's works to be published at this time was Problemata mechanica originally completed in 1569. W R Laird writes about this work in [30]:-
The first thinker to attempt to reconcile Archimedean statics and the 'Mechanical Problems' was the mathematician and abbot Francesco Maurolico ... Although Maurolico preferred to ground his mechanics in Archimedean statics, he nevertheless recognized the application of mechanics beyond conditions of equilibrium, to the motion of heavy bodies - the moving of weights with machines and the harnessing of impetus. His notion of the primacy of Archimedean statics in mechanics came to be shared by all the mathematical practitioners who followed.
In 1685 Maurolico's De momentis aequalibus, completed in 1548, was published; in it he investigated the centre of gravity of a paraboloid (see [36] for more details). In 1876 Federico Napoli published further Maurolico manuscripts including:
Demonstratio algebrae, which is an elementary text looking at quadratic equations and problems whose solution reduces to solving a quadratic;
Geometricarum questionum (Books 1 and 2), which is a work on trigonometry and solid geometry but also discusses the method for measuring the Earth that Maurolico proposed earlier in Cosmographia;
Brevis demonstratio centri in parabola, a text dated 1565, which examines mechanics problems of the type considered in his commentary on Archimedes. In this work he determines the centre of gravity of a segment of a paraboloid of revolution bounded by a plane perpendicular to the axis.
We should not think that all these posthumous publications complete the printing of Maurolico's treatises. For example, in 1995 in [54], Roberta Tassora gives the previously unpublished text of Maurolico's reconstruction of the treatise De sectione cylindri originally written by Serenus of Antinoeia. Maurolico completed this work in 1534.
### References (show)
1. A Masotti, Biography in Dictionary of Scientific Biography (New York 1970-1990).
2. R Bellè, L'ottica di Francesco Maurolico (Università di Pisa, Tesi di laurea, 2000-01).
3. Commemorazione del IV centenario di Francesco Maurolico MDCCCXCIV (Messina, 1896).
4. H Crew (trs.), The Photismi de lumine of Maurolycus. A chapter in late medieval optics (Macmillan, New York, 1940).
5. D Scinà, Elogio di Francesco Maurolico (Salvatore Sciascia Editore, Rome, 1994).
6. J-P Sutto, Francesco Maurolico, mathématicien italien de la Renaissance (1494-1575) (Thèse de doctorat, Université Paris VII-Denis Diderot, 1998).
7. F Amodeo, Il trattato delle coniche di Francesco Maurolico, Bibliotheca Mathematica IX (1908), 123-138.
8. R Bellè, The Jesuits and the publication of Francesco Maurolico's works on optics (Italian), Boll. Stor. Sci. Mat. 26 (2) (2006), 211-243.
9. D Bessot, Ellipse conique et cylindrique chez Francesco Maurolico, in Histoire et épistemologie dans l'éducation mathématique 2 (Université Catholique de Louvain, 2001), 147.
10. C B Boyer, Descartes and the Radius of the Rainbow, Isis 43 (2) (1952), 95-98.
11. A Brigaglia, La ricostruzione dei libri V e VI delle Coniche da parte di F Maurolico, Boll. Storia Sci. Mat. 17 (2) (1997), 267-307.
12. A Brigaglia, Maurolico's reconstruction of the fifth and sixth book of Appolonius's Conics, in Medieval and classical traditions and the renaissance of physico-mathematical sciences in the 16th century (Brepols, Turnhout, 2001), 47-57.
13. A Brigaglia, Maurolico e le matematiche del secolo XVI, in C Dollo (ed.), Filosofia e scienze nella Sicilia dei secoli XVI e XVII 1 (Catania 1996), 15-27.
14. W Burke-Gaffney, Celestial Mechanics in the Sixteenth Century, The Scientific Monthly 44 (2) (1937), 150-156.
15. W H Bussey, The Origin of Mathematical Induction, Amer. Math. Monthly 24 (5) (1917), 199-207.
16. J Cassinet, The first arithmetic book of Francisco Maurolico, written in 1557 and printed in 1575: a step towards a theory of numbers, in C Hay (ed.), Mathematics from manuscript to print 1300-1600 (Oxford, 1988), 162-179.
17. M Clagett, The works of Francesco Maurolico, Physis - Riv. Internaz. Storia Sci. 16 (2) (1974), 149-198.
18. M Clagett, Francesco Maurolico's use of Medieval archimedean texts: the 'De sphaera et cylindro', in Science and History: Studies in honour of Edward Rosen (Ossolineum, 1978), 37-52.
19. I B Cohen, Review: The Photismi de lumine of Maurolycus translated by H Crew, Isis 33 (2) (1941), 251-253.
20. P d'Alessandro and P D Napolitani, I primi contatti fra Maurolico et Clavio: una nuova edizione della lettera di Francesco Maurolico a Francisco Borgia, Nuncius Ann. Storia Sci. 16 (2) (2001), 520-522.
21. L De Marchi, Una lettera inedita del Maurolico a proposito della battaglia di Lepanto, Rendiconti dell'Istituto Lombardo di scienze, lettere ed arti (2) 16 (1883), 466-467.
22. C Dollo, Astrologia e astronomia in Sicilia: da Francesco Maurolico a G B Hodierna, 1535-1660, Giornale Critico della Filosofia Italiana 6 (3) (1986), 366-398.
23. T Frangenberg, Perspectivist Aristotelianism: Three Case-Studies of Cinquecento Visual Theory, Journal of the Warburg and Courtauld Institutes 54 (1991), 137-158.
24. A C Garibaldi, La doctrine des sections du cône appliquée à la gnomonique chez F Maurolico, in Medieval and classical traditions and the renaissance of physico-mathematical sciences in the 16th century (Brepols, Turnhout, 2001), 93-98.
25. R Gatto, Some aspects of Maurolico's optics, in Medieval and classical traditions and the renaissance of physico-mathematical sciences in the 16th century (Brepols, Turnhout, 2001), 83-92.
26. V Gavagna and R Moscheo, Francesco Maurolico's Theonis datorum libelli duo (Italian), Boll. Stor. Sci. Mat. 22 (2) (2002), 267-348.
27. E Giusti, Maurolico et Archimède: sources et datation du premier livre du De momentis aequalibus, in Medieval and classical traditions and the renaissance of physico-mathematical sciences in the 16th century (Brepols, Turnhout, 2001), 33-40.
28. C D Hellman, Maurolyco's Lost Essay on the New Star of 1572, Isis 51 (1960), 322-336.
29. W R Laird, Archimedes among the Humanists, Isis 82 (4) (1991), 628-638.
30. W R Laird, The Scope of Renaissance Mechanics, Osiris (2) 2 (1986), 43-68.
31. G Micheli, I 'Problemata Mechanica' di Francesco Maurolico, in C Dollo (ed.), Filosofia e scienze nella Sicilia dei secoli XVI e XVII 1 (Catania 1996), 29-37.
32. R Moscheo, Greek heritage and the scientific work of Francesco Maurolico, in Medieval and classical traditions and the renaissance of physico-mathematical sciences in the 16th century (Brepols, Turnhout, 2001), 15-22.
33. F Napoli, Intorno alla vita ed ai lavori di Francesco Maurolico, Bullettino di bibliografia e di storia delle scienze matematiche e fisiche 9 (1876), 1-121.
34. P D Napolitani, Le edizioni dei Classici: Commandino e Maurolico, in W Moretti and L Pepe (eds.), Torquato Tasso and the University, Ferrara, 1995 (Olschki, Florence, 1997), 119-141.
35. P D Napolitani, A l'aube de la révolution scientifique: de Galilée à Maurolico, in Medieval and classical traditions and the renaissance of physico-mathematical sciences in the 16th century (Brepols, Turnhout, 2001), 9-13.
36. P D Napolitani and J-P Sutto, Francesco Maurolico et le centre de gravité du parabolide, SCIAMVS 2 (2001), 187-250.
37. C Pirina, Michelangelo and the Music and Mathematics of His Time, The Art Bulletin 67 (3) (1985), 368-382.
38. S Pugliatti, Le Musicae Traditiones di Francesco Maurolico, Atti della Accademia Peloritana dei Pericolanti, classe di lettere, filosofia e belle arti 48 (1951-1967), 313-398.
39. V Ronchi, Il Keplero conosceva l'ottica del Maurolico?, Atti della Fondazione Giorgio Ronchi 37 (1982), 153-197.
40. V Ronchi, Ancora a proposito dei 'Photismi de lumine et umbra' dell'Abate Maurolico, Atti della Fondazione Giorgio Ronchi 37 (1982), 581-585.
41. P L Rose, The Works of Francesco Maurolico, Physis 16 (1974), 149-198.
42. E Rosen, The Date of Maurolico's Death, Scripta Mathematica 22 (1956), 285-286.
43. E Rosen, Maurolico's Attitude toward Copernicus, Proc. Amer. Philos. Soc. 101 (2) (1957), 177-194.
44. E Rosen, The Editions of Maurolyco's Mathematical Works, Scripta Mathematica 24 (1957), 59-76.
45. E Rosen, The Title of Maurolico's 'Photismi', American Journal of Physics 25 (1957), 226-228.
46. E Rosen, Maurolico was an Abbott, Archives internationales d'histoire des sciences 9 (1956), 349-350.
47. E Rosen, Was Maurolico's essay on the nova of 1572 printed, Isis 48 (2) (1957), 171-175.
48. K Saito, Quelques observations sur l'édition des 'Coniques' d'Apollonius de Francesco Maurolico, Boll. Storia Sci. Mat. 14 (2) (1994), 239-258.
49. K Saito, Francesco Maurolico's edition of the Conics, in Medieval and classical traditions and the renaissance of physico-mathematical sciences in the 16th century (Brepols, Turnhout, 2001), 41-46.
50. J-P Sutto, Le compendium du 5e livre des Éléments d'Euclide de Francesco Maurolico, Rev. Histoire Math. 6 (1) (2000), 59-94.
51. J-P Sutto, Les arithmétiques de Francesco Maurolico, in Medieval and classical traditions and the renaissance of physico-mathematical sciences in the 16th century (Brepols, Turnhout, 2001), 73-81.
52. A-K Taha and P Pinel, Sur les sources de la version de Francesco Maurolico des Sphériques de Ménélaos, Boll. Storia Sci. Mat. 17 (2) (1997), 149-198.
53. A-K Taha and P Pinel, La version de Maurolico des Sphériques de Ménelaos et ses sources, in Medieval and classical traditions and the renaissance of physico-mathematical sciences in the 16th century (Brepols, Turnhout, 2001), 59-72.
54. R Tassora, The 'Sereni cylindricorum libelli duo' of Francesco Maurolico and an unknown treatise on conic sections (Italian), Boll. Storia Sci. Mat. 15 (2) (1995), 135-264.
55. R Tassora, La formation du jeune Maurolico et les auteurs classiques, in Medieval and classical traditions and the renaissance of physico-mathematical sciences in the 16th century (Brepols, Turnhout, 2001), 23-32.
56. T M Tonietti, The mathematical contributions of Francesco Maurolico to the theory of music of the 16th century (the problems of a manuscript), Centaurus 48 (3) (2006), 149-200.
57. A Tripodi, Francesco Maurolico (Italian), Giorn. Mat. Battaglini (6) 2 (92) (1964), 126-131.
58. G Vacca, Maurolycus, the first discoverer of the principle of mathematical induction, Bull. Amer. Math. Soc. 16 (2) (1909), 70-73.
|
{}
|
# Oscillations about equilibrium for coupled differential equations
I have the following system of equations:
\begin{align} \frac{dX}{dt} &= 2Y-2\\ \frac{dY}{dt} &= 9X-X^3 \end{align}
I would like to study the properties of solutions to this system about the point $(3,1)$. Namely, I'd like to find a linear approximation for small oscillations about the equilibrium point, and I'd like to estimate the period of oscillation.
I don't exactly know what is meant by "find a linear approximation" for a coupled system, though. I've taken the Jacobian of the system, yielding eigenvalues $6i,-6i$ and corresponding eigenvectors $[1,3i]$ and $[1,-3i]$.
But from here, I don't exactly know where to go.
First write: $$X=x+3,\ \ \ Y=y+1$$ so that the equilibrium point moves to $(0,0)$. The equations become: $$\begin{array}{l}\dot{x}=2y\\\dot{y}=9x+27-(x+3)^3=-x^3-9x^2-18x\end{array}$$ Now linearize in both $x$ and $y$ (i.e.: take the Taylor expansion up to first order). You get: $$\begin{array}{l}\dot{x}=2y\\\dot{y}=-18x\end{array}$$ This equation determines the dominant behavior of your system near the equilibrium point. I think you can take it from here. Good luck.
Also, notice that you could directly take the Taylor expansion in $X$ and $Y$, but you should do it around $X=3$ and $Y=1$. – Daniel Robert-Nicoud Jul 21 '14 at 8:05
AH! Since $x'=2y$ we have $x''=2y'=-36x$ which has solution $x = A \cos (6t + \phi)$. Similarly we can find $y = B \cos (6t+\phi )$. So the period is $\frac{2\pi}{6} = \frac{\pi}{3}$. Am I correct? – user2899162 Jul 21 '14 at 8:08
@user2899162 Also, you should be able to find some relations between the phase shifts and amplitudes: your system has only two degrees of freedom (corresponding to the choice of initial values for $x$ and $y$) and, as it is right now, your solution admits three (namely $\phi$, $A$ and $B$). – Daniel Robert-Nicoud Jul 21 '14 at 8:13
Kind-of. I'm trying to do the Taylor expansion directly as a sanity and comprehension check. I get $\left. \frac{dX}{dt} \right|_{(3,1)} \approx 2(Y-1)$ and $\left. \frac{dY}{dt} \right|_{(3,1)} \approx -18(X-3)$. Which seems to be what you got after correcting for the coordinate shift! So I think I didn't mess up and can waddle my way through. – user2899162 Jul 21 '14 at 8:18
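As a sanity check on the period, one can integrate the original nonlinear system numerically: starting close to $(3,1)$, the time between successive upward crossings of $X = 3$ should be near $\pi/3 \approx 1.047$. A sketch with a hand-rolled RK4 stepper:

```python
import math

def deriv(X, Y):
    # the original nonlinear system
    return 2.0*Y - 2.0, 9.0*X - X**3

def rk4_step(X, Y, h):
    k1x, k1y = deriv(X, Y)
    k2x, k2y = deriv(X + 0.5*h*k1x, Y + 0.5*h*k1y)
    k3x, k3y = deriv(X + 0.5*h*k2x, Y + 0.5*h*k2y)
    k4x, k4y = deriv(X + h*k3x, Y + h*k3y)
    return (X + h/6.0*(k1x + 2*k2x + 2*k3x + k4x),
            Y + h/6.0*(k1y + 2*k2y + 2*k3y + k4y))

X, Y, h, t = 3.001, 1.0, 1e-4, 0.0   # start slightly off the equilibrium (3, 1)
crossings = []                        # times where X crosses 3 going upward
prev = X
while len(crossings) < 3:
    X, Y = rk4_step(X, Y, h)
    t += h
    if prev < 3.0 <= X:
        crossings.append(t)
    prev = X

period = crossings[-1] - crossings[-2]
print(period, math.pi/3)   # the two values should agree to ~3 decimal places
```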
|
{}
|
14.11.2017 | Original Paper | Issue 3/2018 Open Access
# Improved nine-node shell element MITC9i with reduced distortion sensitivity
Journal:
Computational Mechanics > Issue 3/2018
Authors:
K. Wisniewski, E. Turska
## 1 Introduction
It has been clear since the earliest implementations of the basic 9-node element that it has several limitations: it is excessively stiff (locks) and its accuracy is diminished by shape distortions.
To alleviate the problem of an excessive stiffness, five types of modifications to the standard formulation of the 9-node element have been proposed:
1. Uniform Reduced Integration (URI) in [ 53], with the $$2 \times 2$$ instead of the $$3 \times 3$$ Gauss integration scheme. This yields a rank-deficient stiffness matrix and requires a stabilization, which was developed e.g., in [ 4, 5, 31] and for shells in [ 50].
2. Selective Reduced Integration (SRI) in [ 32], with different integration schemes for various parts of the strain energy. This method yields the correct rank of the stiffness matrix but strain components are computed at different integration points, which limits the range of application of the element.
3. Two-level approximations of strains, which are developed as either the Assumed Strain (AS) method or the Mixed Interpolation of Tensorial Components (MITC) method, see, e.g., two books [ 11, 17]. These were invented to overcome the above mentioned limitation of the SRI. (Note that in the 4-node element, this technique is applied to transverse shear strains and designated the Assumed Natural Strain (ANS) method.) We will show in the current paper that the MITC method still has the potential for improvement.
4. The enhanced element based on the Enhanced Assumed Strain (EAS) technique, which uses a set of 11 modes for membrane behavior [ 7]. They are incorporated using an identical formula to that used for 4-node elements, see [ 39, 40, 42, 51]. We use these modes in our version of the 9-EAS11 shell element.
5. Partly hybrid formulations based on the Hellinger–Reissner functional, e.g., [ 34, 35]. Hybrid stress, hybrid strain and enhanced strain 9-node shell elements are developed and assessed in [ 36]. We are not aware of any 9-node shell element based on the Hu–Washizu functional, which was a basis of several 4-node shell elements of very good accuracy and exceptional robustness in non-linear applications, see e.g. [ 16, 41, 45, 47, 48].
The technique of two-level approximations of strains is applied either to Cartesian strain components (the AS method) or to covariant strain components (the MITC method); in the current paper we focus on the latter one. The strain components are sampled at the points where they are of good accuracy, and, next, extrapolated over the element. In effect, all strain components are available at each of the $$3 \times 3$$ Gauss integration points.
For sampling of the shell strain components (11, 13) and (22, 23), the $$2 \times 3$$ and $$3 \times 2$$-point schemes are used in the literature respectively. Several sets of sampling points and interpolation functions were proposed for the (12) strain components, i.e. the in-plane shear strain $$\varepsilon _{12}$$ and the twisting strain $$\kappa _{12}$$,
1. The $$2 \times 3$$ and the $$3 \times 2$$-point schemes in [ 19, 21, 31]. Note that the last paper uses different approximation functions from the first two.
2. The $$2 \times 2$$-point scheme in [ 8, 28]. This scheme uses exactly the same points for the shear component as the SRI of [ 32] (Table 1, p. 580).
In our tests, the $$2 \times 2$$-point scheme for the (12) strain components provided the correct rank of the tangent matrix and was the most accurate.
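For reference, the sampling points of such a $$2 \times 2$$ scheme are commonly the abscissae of the 2-point Gauss rule, $$\pm 1/\sqrt{3}$$, in each parent coordinate; a minimal sketch (assuming the standard $$[-1, 1] \times [-1, 1]$$ parent domain):

```python
import math
import itertools

a = 1.0 / math.sqrt(3.0)   # 2-point Gauss abscissa on [-1, 1]
points = list(itertools.product((-a, a), (-a, a)))   # 2x2 grid, all weights 1
print(points)

# sanity check: the 2-point rule integrates x^2 exactly over [-1, 1] (= 2/3)
assert abs(((-a)**2 + a**2) - 2.0/3.0) < 1e-14
```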
Another problem with the 9-node elements is the sensitivity of solutions to shifts of the midside nodes and the central node from the middle positions, which causes loss of accuracy for all the element formulations listed above. Some earlier studies, e.g., [ 26] suggest that only regular meshes, i.e. with straight element boundaries and evenly spaced nodes, give satisfactory results, while the accuracy is poor for curved boundaries or unevenly spaced nodes. Since then, however, research has been conducted to reduce this sensitivity.
The best remedy found so far, are the Corrected Shape Functions (CSF) of [ 9] used instead of the standard shape functions based on isoparametric transformations of [ 15]. In [ 9], the CSF are tested for an 8-node (serendipity) element for the Laplace equation (heat conduction) and the $$4\times 4$$ integration rule. In [ 30], we examine several 9-node 2D elements for plane stress elasticity and the $$3\times 3$$ Gauss integration: QUAD9** [ 19], MITC9 [ 2] and ours: 9-AS [ 28] and MITC9i [ 43]. Summarizing the performed tests, because of the CSF, the midside nodes can be shifted along straight element sides without rendering errors while for perpendicular shifts of these nodes errors are smaller and negative values of the Jacobian determinant are often avoided. Also the central node can be shifted in an arbitrary direction without causing errors. All of the tested 2D elements benefited from using the CSF, hence, in the current paper, we further develop this technique to make it applicable to shells.
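For context, the standard shape functions of [ 15] that the CSF replace are the bi-quadratic Lagrange functions, i.e. tensor products of 1D quadratic Lagrange polynomials on the parent domain; a brief sketch (node ordering here is illustrative only):

```python
def lagrange_quadratic(xi):
    # 1D quadratic Lagrange polynomials for nodes at xi = -1, 0, +1
    return (0.5*xi*(xi - 1.0), (1.0 - xi)*(1.0 + xi), 0.5*xi*(xi + 1.0))

def shape9(xi, eta):
    # the 9 bi-quadratic shape functions as tensor products
    Nxi, Neta = lagrange_quadratic(xi), lagrange_quadratic(eta)
    return [a*b for b in Neta for a in Nxi]

N = shape9(0.3, -0.7)
print(sum(N))            # partition of unity: the sum is 1 at any (xi, eta)
```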
Objectives and scope of the paper The objective of the current paper is to develop an improved 9-node MITC9i shell element with drilling rotations.
1. The original 9-node MITC9 element has good accuracy but does not pass the patch test. In [ 43], we have scrutinized its formulation to find the source of this problem. By an alternative but equivalent formalism, the formulation of the MITC9 element was seen from a different perspective, and the modified transformations were proposed. As a result of this change, the membrane MITC9i element passed the patch test for a regular mesh, i.e. with straight element sides and middle positions of the midside nodes and the central node.
In the current paper this technique is extended to the bending and transverse shear strains of a shell element MITC9i, for which proper transformations between Cartesian and covariant components are devised. In particular, we verify whether this change suffices to pass the patch tests for middle positions of the midside nodes and the central node.
2. The problem of sensitivity to node shifts is particularly important for 9-node elements as it affects the parametrization of the element domain and the element accuracy. In the current paper, we improve the shell element's robustness to node shifts by extending the corrected shape functions of [ 9]. In [ 30], we tested several membrane 9-node elements integrated by the $$3\times 3$$ Gauss rule, and these functions proved beneficial to all of them. In the current paper, we further develop this technique to make it applicable to shells. For a 9-node shell element located in 3D space, we have to extend the method of computing the node shift parameters due to the presence of the third coordinate. Additionally, a new criterion is needed to determine the shift parameters associated with the central node for non-flat elements. Next, we have to verify whether the shell element MITC9i using the corrected shape functions passes patch tests for the midside nodes shifted along straight element sides and for the central node in an arbitrary position. Additionally, we noticed that the original method of [ 9] for determining the shift parameters yields a non-symmetric side curve when the midside node is shifted. Therefore, we propose an alternative method, which does not rely on the proportion of arc-lengths, but uses a parametric equation of a parabola to construct a symmetric curve also when the midside node is shifted. We describe this method and evaluate its accuracy for an example with curved and symmetric shell edges.
3.
The drilling rotation is incorporated into the formulation of the 9-node shell element via the drilling Rotation Constraint (RC) and the penalty method. We implemented it consistently with the additive/multiplicative rotation update scheme for large rotations, which combines a 3-parameter canonical rotation vector and a quaternion. Regarding the drilling RC, we analyze its form for equal-order bi-quadratic interpolations of displacements and the drilling rotation to check whether it contains a faulty term, analogous to the $$\xi \eta$$-term in 4-node elements, see [ 48], and to evaluate how it affects the solution. Besides, we verify how the corrected shape functions change the sensitivity of a solution to the regularization parameter $$\gamma$$ of the penalty method, in particular in the case of curved or distorted elements.
Finally, the 9-node MITC9i shell element with drilling rotations is tested on a range of linear and non-linear numerical examples, which are performed to check passing of the patch tests, absence of locking, accuracy, insensitivity to node shifts, and correctness of the implementation of the drilling rotation; a selection of these tests is presented in Sect. 6.
Performance of the MITC9i element is mostly compared to other 9-node elements, some of them with similar improvements implemented as in the tested element. Besides, reference results obtained by our 4-node shell elements are provided; the corresponding ones for triangular elements can be found e.g., in [ 10].
## 2 Shell element characteristic
Reissner–Mindlin shell kinematics The position vector of an arbitrary point of a shell in the initial configuration is expressed as
\begin{aligned} \mathbf{X}(\zeta ) = \mathbf{X}_0 + \zeta \, \mathbf{t}_3, \end{aligned}
(1)
where $$\mathbf{X}_0$$ is a position of the reference surface and $$\mathbf{t}_3$$ is the shell director, a unit vector normal to the reference surface. Besides, $$\zeta \in [-h/2, +h/2]$$ is the coordinate in the direction normal to the reference surface, where h denotes the initial shell thickness.
In a deformed configuration, the position vector is expressed by the Reissner–Mindlin kinematical assumption,
\begin{aligned} \mathbf{x}(\zeta ) = \mathbf{x}_0 + \zeta \ \mathbf{Q}_0 \mathbf{t}_3, \end{aligned}
(2)
where $$\mathbf{x}_0$$ is a position of the reference surface and $$\mathbf{Q}_0 \in SO(3)$$ is a rotation tensor.
Green strain for shell The deformation function $$\varvec{\chi }: \ \mathbf{x}= \varvec{\chi }(\mathbf{X})$$ maps the initial (non-deformed) configuration of a shell onto the current (deformed) one. Let us write the deformation gradient as follows:
\begin{aligned} \mathbf{F}\doteq \frac{\partial \mathbf{x}}{\partial \mathbf{X}} =\frac{\partial \mathbf{x}}{\partial \varvec{\xi }} \ \mathbf{J}^{-1}, \end{aligned}
(3)
where $$\varvec{\xi }\doteq \{\xi , \eta , \zeta \}$$, $$\xi , \eta \in [-1,+1]$$, and the Jacobian matrix $$\mathbf{J}\doteq {\partial \mathbf{X}}/{\partial \varvec{\xi }}$$. The right Cauchy–Green deformation tensor is
\begin{aligned} \mathbf{C}\doteq \mathbf{F}^T \mathbf{F}= \mathbf{J}^{-T} \left( \frac{\partial \mathbf{x}}{\partial \varvec{\xi }}\right) ^T \frac{\partial \mathbf{x}}{\partial \varvec{\xi }} \ \mathbf{J}^{-1}, \end{aligned}
(4)
and the Green strain is
\begin{aligned} \mathbf{E}\doteq \frac{1}{2} \left( \mathbf{C}- \mathbf{C}_0 \right) , \end{aligned}
(5)
where $$\mathbf{C}_0 \doteq \left. \mathbf{C}\right| _\mathbf{x =\mathbf X } = \mathbf{I}$$. The Green strain can be expressed as a series of the $$\zeta$$ coordinate,
\begin{aligned} \mathbf{E}(\zeta ) \approx \mathbf{E}_0 + \zeta \mathbf{E}_1 + \cdots , \end{aligned}
(6)
where the higher order terms are neglected. The 0th order strain $$\mathbf{E}_0$$ includes the membrane components $$\varvec{\varepsilon }$$ and the transverse shear components $$\varvec{\gamma }/2$$ while the 1st order strain $$\mathbf{E}_1$$ includes the bending/twisting components $$\varvec{\kappa }$$. We assume that the transverse shear part of $$\mathbf{E}_1$$ is negligible, i.e. $$\kappa _{\alpha 3} \approx 0$$ ( $$\alpha =1,2$$). Note that the normal strains $$\varepsilon _{33}$$ and $$\kappa _{33}$$ are equal to zero because of Eq. ( 2) and must be either recovered from an auxiliary condition, such as the plane stress condition or the incompressibility condition, or be introduced by the EAS method. In the current paper, we use the plane stress condition for this purpose.
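As a minimal numerical illustration (plain Python, not the element code), the Green strain of Eq. ( 5) can be checked for a homogeneous uniaxial stretch $$\lambda$$, for which the only nonzero component is $$E_{11} = (\lambda ^2 - 1)/2$$:

```python
# Numerical check of the Green strain E = (C - C_0)/2 with C = F^T F and
# C_0 = I (Eqs. (4)-(5)), for a homogeneous stretch F = diag(lam, 1, 1).
# Pure-Python 3x3 linear algebra; illustration only, not the element code.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

lam = 1.2                                    # stretch in the first direction
F = [[lam, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
C = matmul(transpose(F), F)                  # right Cauchy-Green tensor, Eq. (4)
I = [[float(i == j) for j in range(3)] for i in range(3)]
E = [[0.5 * (C[i][j] - I[i][j]) for j in range(3)] for i in range(3)]

# For this F, the only nonzero Green strain component is E_11 = (lam^2 - 1)/2.
assert abs(E[0][0] - 0.5 * (lam**2 - 1.0)) < 1e-14
assert all(abs(E[i][j]) < 1e-14 for i in range(3) for j in range(3)
           if (i, j) != (0, 0))
```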
Incremental form of rotation $$\mathbf{Q}_0$$ To enable computation of large rotations of a shell, we use an incremental approach and an update scheme involving a rotation vector $$\Delta \varvec{\psi }$$ and a quaternion $$\mathbf{q}^n$$, where n is the increment (step) index. Consistently with this scheme, the rotation $$\mathbf{Q}_0 \in SO(3)$$ of Eq. ( 2) is assumed in the following form
\begin{aligned} \mathbf{Q}_0 = \Delta \mathbf{Q}_0(\Delta \varvec{\psi }) \, \, \mathbf{Q}_0^n(\mathbf{q}^n), \end{aligned}
(7)
where
\begin{aligned} \Delta \mathbf{Q}_0(\Delta \varvec{\psi }) \doteq \mathbf{I}+ \frac{\sin \Vert \Delta \varvec{\psi }\Vert }{\Vert \Delta \varvec{\psi }\Vert } \widetilde{\Delta \varvec{\psi }} + \frac{1 - \cos \Vert \Delta \varvec{\psi }\Vert }{\Vert \Delta \varvec{\psi }\Vert ^2} \widetilde{\Delta \varvec{\psi }}{}^2, \end{aligned}
(8)
which is the Rodrigues formula for the canonical parametrization of a rotation tensor. Besides, $$\Vert \Delta \varvec{\psi }\Vert \doteq \sqrt{\Delta \varvec{\psi }\cdot \Delta \varvec{\psi }} \ge 0$$, $$\widetilde{\Delta \varvec{\psi }} \doteq \Delta \varvec{\psi }\times \mathbf{I}$$ is the skew-symmetric tensor and $$\mathbf{I}$$ is the second-rank identity tensor. Within an increment, we use the Newton method and additively update the rotation vector after each iteration, $$\Delta \varvec{\psi }^{i+1} = \Delta \varvec{\psi }^{i} + \delta \varvec{\psi }$$, where $$\delta \varvec{\psi }$$ is the rotation vector increment for an iteration and i is the index of iterations. When the Newton method has converged, $$\Delta \varvec{\psi }^{i+1}$$ is converted to a quaternion for an increment $$\Delta \mathbf{q}$$, and the total quaternion is multiplicatively updated, $$\mathbf{q}^{n+1} = \Delta \mathbf{q}\, \circ \, \mathbf{q}^n$$. This scheme works very well for static computations in the range of large rotations.
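The additive/multiplicative update scheme can be sketched in a few lines; the code below is a simplified illustration (plain lists, no finite-element context), not the element implementation. It checks that two multiplicative quaternion updates by $$\pi /4$$ about the z-axis reproduce the Rodrigues rotation of Eq. ( 8) by $$\pi /2$$:

```python
# Sketch of the rotation machinery: Rodrigues formula for Delta Q_0 (Eq. (8))
# and a multiplicative quaternion update q^{n+1} = Delta q o q^n.
import math

def skew(p):
    x, y, z = p
    return [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]

def rodrigues(p):
    """Rotation matrix from a canonical rotation vector p (Eq. (8))."""
    th = math.sqrt(sum(c * c for c in p))
    I = [[float(i == j) for j in range(3)] for i in range(3)]
    if th < 1e-12:
        return I
    K = skew(p)
    K2 = [[sum(K[i][k] * K[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]
    a, b = math.sin(th) / th, (1.0 - math.cos(th)) / th**2
    return [[I[i][j] + a * K[i][j] + b * K2[i][j] for j in range(3)]
            for i in range(3)]

def quat_from_vec(p):
    """Unit quaternion (w, x, y, z) for the rotation vector p."""
    th = math.sqrt(sum(c * c for c in p))
    if th < 1e-12:
        return (1.0, 0.0, 0.0, 0.0)
    s = math.sin(th / 2.0) / th
    return (math.cos(th / 2.0), s * p[0], s * p[1], s * p[2])

def quat_mul(a, b):
    """Hamilton product a o b (apply b first, then a)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def quat_to_mat(q):
    w, x, y, z = q
    return [[1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
            [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
            [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)]]

# Two increments of pi/4 about the z-axis, accumulated multiplicatively,
# must equal a single Rodrigues rotation by pi/2.
dq = quat_from_vec((0.0, 0.0, math.pi / 4.0))
q = quat_mul(dq, quat_mul(dq, (1.0, 0.0, 0.0, 0.0)))
R_acc = quat_to_mat(q)
R_ref = rodrigues((0.0, 0.0, math.pi / 2.0))
assert all(abs(R_acc[i][j] - R_ref[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```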
Rotations in mechanics
In mechanics, the rotations can be treated in two ways: either as independent variables, e.g. in Cosserat continuum, or as dependent variables, e.g. in Cauchy continuum; the latter case is of interest in the current paper. In the Cosserat shells, see e.g. [ 12, 34] and [ 35], the drilling rotation is naturally present but the constitutive equations are complicated.
For Cauchy continuum and the known deformation gradient $$\mathbf{F}$$, the rotations can be obtained by the polar decomposition $$\mathbf{F}=\mathbf{R}\mathbf{U}= \mathbf{V}\mathbf{R}$$, where the rotation $$\mathbf{R}\in SO(3)$$ and $$\mathbf{U}, \ \mathbf{V}$$ is the right and left stretching tensor, respectively. But this is merely a post-processing which is not a subject of the current paper; our goal is to include the rotations into the formulation.
To include rotations as additional variables into 3D equations, first, we define the extended configuration space
\begin{aligned} \mathcal{C}_{\mathrm {ext}}\doteq \{ (\varvec{\chi }, \mathbf{Q}): \ B \rightarrow R^3 \times SO(3) \quad | \quad \varvec{\chi }\in \mathcal{C } \}, \end{aligned}
(9)
where $$\mathcal{C} \doteq \{ \varvec{\chi }\!: \ B \rightarrow R^3 \}$$ is the classical configuration space, and, next, we impose the Rotation Constraint (RC)
\begin{aligned} {\mathrm{skew}}(\mathbf{Q}^T \mathbf{F}) = \mathbf 0, \end{aligned}
(10)
to obtain the Cauchy continuum, see [ 37]. Here $$\mathbf{Q}\in SO(3)$$ is the rotation, which at a solution of the extended system of equations (equilibrium equations plus the RC) is equal to $$\mathbf{R}$$ yielded by the polar decomposition of $$\mathbf{F}$$; this issue is considered in detail in [ 44].
For shells, we can use an analogous approach to that for the 3D Cauchy continuum, but with the following alterations: (a) the rotation tensor is constant in $$\zeta$$, i.e. $$\mathbf{Q}\approx \mathbf{Q}_0$$, where $$\mathbf{Q}_0$$ is defined in Eq. ( 7), and (b) only the drilling component of $$\Delta \varvec{\psi }$$ is considered because the tangent ones are introduced by the term $$\mathbf{Q}_0 \mathbf{t}_3$$ in Eq. ( 2).
Drilling rotation constraint The drilling rotation for an increment is defined as an elementary rotation about the forward-rotated director $$\mathbf{a}_3^n$$, i.e. $$\Delta \omega \doteq \Delta \varvec{\psi }\cdot \mathbf{a}_3^n$$, where $$\mathbf{a}_3^n = \mathbf{Q}_0^n \mathbf{t}_3$$, see Fig. 1. Note that $$\mathbf{Q}_0 \mathbf{t}_3 = \Delta \mathbf{Q}_0 \mathbf{a}_3^n$$ in Eq. ( 2), so the tangent components of $$\Delta \varvec{\psi }$$ in the $$\{ \mathbf{a}_k^n \}$$ basis are present in shell equations while the normal one, i.e. the drilling rotation, must be included in a different way.
Let us define the drilling Rotation Constraint (drilling RC) as the (1,2) component of the RC of Eq. ( 10) in the local basis $$\{ \mathbf{t}_k\}$$ $$(k=1,2,3)$$,
\begin{aligned} c \doteq \mathbf{t}_1 \cdot \left[ {\mathrm{skew}}(\mathbf{Q}_0^T \mathbf{F}) \, \mathbf{t}_2 \right] =0. \end{aligned}
(11)
It can be written in the incremental form consistent with the form of $$\mathbf{Q}_0$$ of Eq. ( 7) and the multiplicative decompositions $$\mathbf{F}= \Delta \mathbf{F}\mathbf{F}^n$$,
\begin{aligned} c \doteq \frac{1}{2} \left[ \mathbf{a}_1^n \cdot \left( \Delta \mathbf{A}\mathbf{t}_2^n \right) - \mathbf{t}_1^n \cdot \left( \Delta \mathbf{A}\!{}^T \mathbf{a}_2^n \right) \right] =0, \end{aligned}
(12)
where $$\Delta \mathbf{A}\doteq \Delta \mathbf{Q}_0^T \Delta \mathbf{F}$$, the convected vectors are $$\mathbf{t}_{\alpha }^n \doteq \mathbf{F}^{n} \mathbf{t}_{\alpha }$$ and the forward-rotated vectors are $$\mathbf{a}_{\alpha }^n \doteq \mathbf{Q}_0^{n} \mathbf{t}_{\alpha }$$ ( $$\alpha =1,2$$); a derivation of the above formula is given in Sect. 5. To obtain the shell equations with drilling rotations, we construct an extended potential energy functional including the above scalar equation.
Physical interpretation of drilling rotation For simplicity, consider the planar (2D) deformation, so only the drilling rotation $$\omega$$ matters in the definition of $$\mathbf{Q}_0$$ while the tangent rotation components are equal to zero. To obtain an interpretation of $$\omega$$, we consider a pair of orthogonal unit vectors $$\mathbf{t}_1$$ and $$\mathbf{t}_2$$, associated with the initial configuration, which are rotated and stretched by $$\mathbf{F}$$, so we can write
\begin{aligned} \mathbf{F}\mathbf{t}_1 = \lambda _1 \, \mathbf{Q}(\beta _{1} ) \, \mathbf{t}_1 ,\qquad \mathbf{F}\mathbf{t}_2 = \lambda _2 \, \mathbf{Q}(\beta _{2} ) \, \mathbf{t}_2, \end{aligned}
(13)
where $$\lambda _1, \, \lambda _2 >0$$ are stretches and $$\beta _1, \, \beta _2$$ are rotation angles. Using these formulas in the drilling RC in the standard (non-incremental) form (see the derivation in Eq. ( 59)),
\begin{aligned} c \doteq \frac{1}{2} \left[ (\mathbf{F}\mathbf{t}_2) \cdot \mathbf{a}_1 - (\mathbf{F}\mathbf{t}_1) \cdot \mathbf{a}_2 \right] = 0, \end{aligned}
(14)
we obtain
\begin{aligned} \omega \approx \frac{1}{2}(\beta _1 + \beta _2) + k \pi , \qquad k=0, \ldots , K, \end{aligned}
(15)
for $$\cos \omega \ne 0$$, $$\lambda _1 c_1 + \lambda _2 c_2 \ne 0$$ and $$\lambda _{\alpha } \approx 1$$, where $$c_{1} \doteq \cos \beta _{1}$$ and $$c_{2} \doteq \cos \beta _{2}$$. Hence, the drilling angle $$\omega$$ is an average of the rotations of vectors $$\mathbf{t}_1$$ and $$\mathbf{t}_2$$; for details, see the “Appendix” of [ 46].
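This interpretation can be verified numerically. The sketch below (illustrative values of $$\beta _{\alpha }$$ and $$\lambda _{\alpha }$$, and a simple bisection solver used here instead of Newton) solves the 2D drilling RC of Eq. ( 14) and confirms that its root is close to the average of the two rotation angles:

```python
# 2D numerical check of Eqs. (14)-(15): for F with F t1 = lam1 Q(b1) t1 and
# F t2 = lam2 Q(b2) t2, the root of the drilling RC c(omega) = 0 lies close
# to (b1 + b2)/2 when lam1, lam2 are close to 1. The chosen numbers and the
# bisection solver are illustrative only.
import math

b1, b2 = 0.3, 0.5            # rotation angles of t1 and t2 [rad]
lam1, lam2 = 1.05, 0.95      # stretches, close to 1

Ft1 = (lam1 * math.cos(b1), lam1 * math.sin(b1))   # F t1, Eq. (13)
Ft2 = (-lam2 * math.sin(b2), lam2 * math.cos(b2))  # F t2, Eq. (13)

def c(omega):
    """Drilling RC of Eq. (14) with a_1 = Q(omega) t1, a_2 = Q(omega) t2."""
    a1 = (math.cos(omega), math.sin(omega))
    a2 = (-math.sin(omega), math.cos(omega))
    return 0.5 * ((Ft2[0] * a1[0] + Ft2[1] * a1[1])
                  - (Ft1[0] * a2[0] + Ft1[1] * a2[1]))

# Bisection on [0, 1], which brackets the root for these data.
lo, hi = 0.0, 1.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if c(lo) * c(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
omega = 0.5 * (lo + hi)

# The drilling angle is close to the average of the two rotation angles.
assert abs(omega - 0.5 * (b1 + b2)) < 0.01
assert abs(c(omega)) < 1e-10
```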
Extended potential energy functional for shells with drilling rotation To obtain the shell equations with drilling rotations, we define the extended shell potential energy,
\begin{aligned} F_{PE}^\mathrm{sh} \doteq \mathcal {W}^\mathrm{sh}(\mathbf{E}) - F_{ext}^\mathrm{sh} + F^\mathrm{sh}_{drill}. \end{aligned}
(16)
This functional includes the shell strain energy,
\begin{aligned} \mathcal {W}^\mathrm{sh}(\mathbf{E}) \doteq \int _A \int ^{+\frac{h}{2}}_{-\frac{h}{2}}\mathcal {W}(\mathbf{E}) \ \mu \, \mathrm {d}\zeta \, dA, \end{aligned}
(17)
the work of external forces $$F_{ext}^\mathrm{sh}$$, and an additional term for the drilling RC, $$F^\mathrm{sh}_{drill}$$. Besides, $$\mu \doteq \det \mathbf{Z}$$, where $$\mathbf{Z}$$ is the shifter tensor, and A is the shell reference surface. The first and third terms are discussed below.
(A) The strain energy $$\mathcal {W}^\mathrm{sh}(\mathbf{E}) = \mathcal {W}^\mathrm{sh}(\mathbf{E}_0,\mathbf{E}_1)$$ is obtained by using the shell form of the Green strain, $$\mathbf{E}(\zeta ) \approx \mathbf{E}_0 + \zeta \mathbf{E}_1$$, see Eq. ( 6). For instance, for a linear material whose properties are symmetric w.r.t. the reference surface, upon integration over the thickness, we obtain the well-known result: $$\mathcal {W}^\mathrm{sh}(\mathbf{E}_0,\mathbf{E}_1) = h \ \mathcal {W}(\mathbf{E}_0) \ + \ ({h^3}/{12}) \ \mathcal {W}(\mathbf{E}_1)$$.
(B) The drilling rotation term $$F^\mathrm{sh}_{drill}$$ is assumed in the penalty form,
\begin{aligned} F^\mathrm{sh}_{drill} \doteq {\frac{\gamma h}{2} \int _A c^2} \ dA, \end{aligned}
(18)
where c is defined by Eq. ( 11) and $$\gamma \in (0, \infty )$$ is the regularization parameter. The upper bound on $$\gamma$$, equal to the shear modulus G, was found for an isotropic elastic material in [ 20], but a reduced value can be beneficial for non-planar shell elements; in the current paper we also test $$\gamma = G/1000$$. Note the thickness h in this formula.
Another question is how to select $$\gamma$$ for non-isotropic and/or inelastic materials for which theoretical results are not available. For an orthotropic elastic material the shear modulus for the in-plane shear $$G_{12}$$ should be used instead of G. For multilayer shells composed of orthotropic elastic layers of different orientation, the effective material is anisotropic. We treat this problem approximately, expressing $$G_{12}$$ of a substitute orthotropic material in terms of the effective in-plane stiffness matrix $$\mathbf{D}_0$$. The elasto-plastic tangent matrix $$\mathbf{C}^\mathrm{ep}$$ can be used similarly.
Remark on PL method We have also tested the Perturbed Lagrange (PL) form of the drilling rotation term,
\begin{aligned} F^\mathrm{sh}_{drill} \doteq h \int _A \left[ T^* c - {{\frac{1}{2\gamma } (T^*)^2}} \right] \ dA, \end{aligned}
(19)
where $$T^*$$ is the Lagrange multiplier. The second term under the integral provides a regularization in $$T^*$$, and a small perturbation of the tangent matrix which is needed when it is singular, i.e. for the drilling rotation $$\omega =0$$. Note that for 9-node elements, we have to use 9 parameters for $$T^*$$ to avoid singularity of the tangent matrix. They can be either values at nodes or coefficients of interpolation functions, and can be eliminated (condensed out) on the element level. For 4-node mixed/enhanced shell elements, the PL form implies a reduced sensitivity to distortions and a larger radius of convergence in non-linear problems, see [ 48, 49]. For 9-node elements, the PL form also works well but is slightly less beneficial, hence the penalty form of Eq. ( 18), for which the element is faster, is tested in Sect. 6.
Taking a variation of the governing functional of Eq. ( 16) and performing a consistent linearization of it, we obtain the tangent stiffness matrix $$\mathbf{K}$$ for the Newton method; this procedure is standard and, therefore, not described here. The standard procedure is also used to compute the stress and couple resultants.
Basic definitions for 9-node shell element We consider a 9-node isoparametric element with the nodes numbered and named as in Fig. 2, where 1, 2, 3, 4 are the corner nodes, 5, 6, 7, 8 are the midside nodes, and 9 is the central node. For a shell element, we use a Cartesian basis associated with it as a reference basis.
Consider the following vectors associated with the reference surface of a shell: the initial position $$\mathbf{X}$$, the current position $$\mathbf{x}$$, the displacement $$\mathbf{u}$$ and the rotation vector $$\Delta \varvec{\psi }$$. The above vectors are interpolated as follows:
\begin{aligned} \begin{aligned} \mathbf{X}(\xi ,\eta ) = \sum _{I=1}^{9} N_I(\xi ,\eta ) \ \mathbf{X}_I,&\quad \mathbf{x}(\xi ,\eta ) = \sum _{I=1}^{9} N_I(\xi ,\eta ) \ \mathbf{x}_I, \\ \mathbf{u}(\xi ,\eta ) = \sum _{I=1}^{9} N_I(\xi ,\eta ) \ \mathbf{u}_I,&\quad \Delta \varvec{\psi }(\xi ,\eta ) = \sum _{I=1}^{9} N_I(\xi ,\eta ) \ \Delta \varvec{\psi }_I, \end{aligned} \end{aligned}
(20)
where the natural coordinates $$\xi , \eta \in [-1,+1]$$ and I is the node number. Note that the rotation vectors $$\Delta \varvec{\psi }_I$$ are used in a step, and converted to quaternions at the end of the step. The shape functions $$N_I(\xi ,\eta )$$ are presented in Sect. 4.
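When all shift parameters vanish, the corrected shape functions of Sect. 4 reduce to the standard bi-quadratic Lagrange functions, which can be sketched as follows. The node ordering below follows the grouping of Fig. 2 (corners 1-4, midsides 5-8, central node 9), but the exact corner/midside order is our assumption for illustration:

```python
# Standard bi-quadratic shape functions N_I(xi, eta) of the 9-node Lagrange
# element, checked for the Kronecker-delta property at the nodes and the
# partition of unity. Node ordering is assumed for illustration.

def lag1d(s):
    """1D quadratic Lagrange functions at nodes s = -1, 0, +1."""
    return (0.5 * s * (s - 1.0), 1.0 - s * s, 0.5 * s * (s + 1.0))

# (xi, eta) node coordinates: corners 1-4, midsides 5-8, central node 9.
NODES = [(-1, -1), (1, -1), (1, 1), (-1, 1),
         (0, -1), (1, 0), (0, 1), (-1, 0), (0, 0)]

def shape(xi, eta):
    """All nine N_I(xi, eta) as tensor products of the 1D functions."""
    Lx, Ly = lag1d(xi), lag1d(eta)
    idx = {-1: 0, 0: 1, 1: 2}
    return [Lx[idx[a]] * Ly[idx[b]] for (a, b) in NODES]

# Kronecker-delta property at the nodes ...
for J, (a, b) in enumerate(NODES):
    N = shape(float(a), float(b))
    assert all(abs(N[I] - (1.0 if I == J else 0.0)) < 1e-14 for I in range(9))
# ... and partition of unity at an interior point.
assert abs(sum(shape(0.3, -0.7)) - 1.0) < 1e-14
```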
The tangent vectors of the natural basis on the reference surface are defined as
\begin{aligned} \mathbf{g}_{1}(\xi , \eta ) \doteq \frac{\partial \mathbf{X}(\xi , \eta )}{\partial \xi } ,\qquad \mathbf{g}_{2}(\xi , \eta ) \doteq \frac{\partial \mathbf{X}(\xi , \eta )}{\partial \eta }. \end{aligned}
(21)
The vectors $$\mathbf{g}^{k}$$ of the co-basis $$\{ \mathbf{g}^1, \mathbf{g}^2 \}$$ are defined in the standard way by $$\mathbf{g}^{\alpha } \cdot \mathbf{g}_{\beta } = \delta ^{\alpha }_{\beta }$$, $$\alpha ,\beta =1,2$$. In general, $$\mathbf{g}_{1}$$ and $$\mathbf{g}_{2}$$ as well as $$\mathbf{g}^{1}$$ and $$\mathbf{g}^{2}$$ are neither unit nor mutually orthogonal. The director $$\mathbf{t}_3$$ is defined as a unit vector perpendicular to vectors $$\mathbf{g}_1$$ and $$\mathbf{g}_2$$,
\begin{aligned} \mathbf{t}_3 \doteq \frac{\mathbf{g}_1 \times \mathbf{g}_2}{\Vert \mathbf{g}_1 \times \mathbf{g}_2 \Vert }. \end{aligned}
(22)
Tangent vectors of the Cartesian basis $$\{\mathbf{i}_k\}$$ can be defined as
\begin{aligned} \mathbf{i}_1 = \frac{1}{\sqrt{2}} (\tilde{\mathbf{i}}_1 - \tilde{\mathbf{i}}_2), \qquad \mathbf{i}_2 = \frac{1}{\sqrt{2}} (\tilde{\mathbf{i}}_1 + \tilde{\mathbf{i}}_2), \end{aligned}
(23)
where the auxiliary vectors are
\begin{aligned} \tilde{\mathbf{i}}_1 = \frac{\tilde{\mathbf{g}}_1 + \tilde{\mathbf{g}}_2}{\Vert \tilde{\mathbf{g}}_1 + \tilde{\mathbf{g}}_2 \Vert } ,\qquad \tilde{\mathbf{i}}_2 = \mathbf{t}_3 \times \tilde{\mathbf{i}}_1. \end{aligned}
(24)
The normalized natural vectors are denoted by a tilde, i.e. $$\tilde{\mathbf{g}}_k = {\mathbf{g}_k}/{\Vert \mathbf{g}_k \Vert }$$. Note that $$\mathbf{i}_1$$ and $$\mathbf{i}_2$$ are equally distant from $$\mathbf{g}_1$$ and $$\mathbf{g}_2$$, see Fig. 2 in [ 48]. The above vectors for the element center are designated by the letter “c”.
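The construction of Eqs. ( 22)-( 24) can be illustrated numerically; the tangent vectors $$\mathbf{g}_1$$, $$\mathbf{g}_2$$ below are sample (non-orthogonal) values chosen for illustration, and the final assertion checks the "equally distant" property:

```python
# Construction of the element Cartesian basis {i_1, i_2, t_3} of
# Eqs. (22)-(24) for sample tangent vectors; pure-Python 3D vector algebra.
import math

def dot(u, v):   return sum(a * b for a, b in zip(u, v))
def norm(u):     return math.sqrt(dot(u, u))
def unit(u):     n = norm(u); return tuple(a / n for a in u)
def cross(u, v): return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2],
                         u[0]*v[1] - u[1]*v[0])

g1 = (1.0, 0.2, 0.1)       # sample tangent vectors (assumed, non-orthogonal)
g2 = (0.3, 1.0, 0.0)

t3 = unit(cross(g1, g2))                             # director, Eq. (22)
gt1, gt2 = unit(g1), unit(g2)                        # normalized g_k
it1 = unit(tuple(a + b for a, b in zip(gt1, gt2)))   # auxiliary i~_1, Eq. (24)
it2 = cross(t3, it1)                                 # auxiliary i~_2
s = 1.0 / math.sqrt(2.0)
i1 = tuple(s * (a - b) for a, b in zip(it1, it2))    # Eq. (23)
i2 = tuple(s * (a + b) for a, b in zip(it1, it2))

# {i_1, i_2, t_3} is orthonormal ...
assert abs(dot(i1, i2)) < 1e-12 and abs(norm(i1) - 1.0) < 1e-12
assert abs(dot(i1, t3)) < 1e-12 and abs(dot(i2, t3)) < 1e-12
# ... and i_1, i_2 are equally distant from the normalized g_1, g_2.
assert abs(dot(i1, gt1) - dot(i2, gt2)) < 1e-12
```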
## 3 MITCi method for bending/twisting and transverse shear strains
Standard and improved 2D MITC9 element A lot of research has been devoted to various aspects of the Mixed Interpolation of Tensorial Components (MITC) method; for a comprehensive review see [ 11, 8]. Here we focus on the transformations used in this method.
The MITC9 is a 9-node element formulated using the MITC method and fully integrated ( $$3 \times 3$$ Gauss rule). This element uses the covariant components of the Green strain, which e.g. for 2D are as follows
\begin{aligned} 2 \varepsilon _{\xi \xi } \doteq \mathbf{x}_{,\xi } \cdot \mathbf{x}_{,\xi } - \mathbf{g}_1 \cdot \mathbf{g}_1, \quad 2 \varepsilon _{\eta \eta } \doteq \mathbf{x}_{,\eta } \cdot \mathbf{x}_{,\eta } - \mathbf{g}_2 \cdot \mathbf{g}_2, \quad 2 \varepsilon _{\xi \eta } \doteq \mathbf{x}_{,\xi } \cdot \mathbf{x}_{,\eta } - \mathbf{g}_1 \cdot \mathbf{g}_2. \end{aligned}
(25)
For $$\mathbf{X}$$ and $$\mathbf{x}$$ interpolated as in Eq. ( 20) and $$\mathbf{g}_{\alpha }$$ of Eq. ( 21), the above components can be directly computed. Then, in the MITC method, these strains are sampled at selected points, interpolated, and transformed to the local Cartesian basis at each Gauss point.
Generally, the MITC9 element performs quite well but it does not pass the patch test even for the regular mesh shown in Fig. 7, i.e. with straight element sides and middle positions of the midside nodes and the central node. This motivated our work on the formulation of this element in [ 43] and [ 30]; below we briefly describe the improvement which we proposed.
First, we note that the matrix $$\varvec{\varepsilon }_\xi$$ of covariant components of Eq. ( 25) can be obtained from the Cartesian components $$\varvec{\varepsilon }^{ref}$$ by the transformation
\begin{aligned} \varvec{\varepsilon }_\xi = \mathbf{j}^{T} \, \varvec{\varepsilon }^{ref} \mathbf{j}, \end{aligned}
(26)
where $$\mathbf{j}\doteq [ J_{\alpha \beta } ]$$ ( $$\alpha ,\beta =1,2$$) is a $$2 \times 2$$ sub-matrix of $$\mathbf{J}$$ defined below Eq. ( 3). We noticed that $$\mathbf{j}$$ varies over the element and, in consequence, the covariant $$\varvec{\varepsilon }_\xi$$ is associated with a different co-basis at each sampling point. We put forward the hypothesis that this is why the classical MITC9 element fails the patch test, and proposed the improvement described below.
Because the Jacobian $$\mathbf{j}$$ varies over the element, it can be decomposed $$\mathbf{j}\doteq \mathbf{j}_c \Delta \mathbf{j}$$, where $$\Delta \mathbf{j}$$ designates the relative Jacobian between the element center and a Gauss point. Then,
\begin{aligned} \varvec{\varepsilon }_\xi \ = \ \mathbf{j}^{T} \, \varvec{\varepsilon }^{ref} \ \mathbf{j}\ = \ \Delta \mathbf{j}^T \underbrace{( \mathbf{j}_c^{T} \, \varvec{\varepsilon }^{ref} \ \mathbf{j}_c)}_{\doteq \ \varvec{\varepsilon }_\xi ^c} \ \Delta \mathbf{j}. \end{aligned}
(27)
Hence, we proposed to neglect $$\Delta \mathbf{j}$$ and use the Jacobian at the element center $$\mathbf{j}_c$$ instead of $$\mathbf{j}$$. Note that the components $$\varvec{\varepsilon }_\xi ^c$$ yielded by such a transformation are not exactly the covariant components of Eq. ( 25); for clarity, we designate them by “COVc”.
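The decomposition in Eq. ( 27) is easily verified numerically; the $$2\times 2$$ matrices below are sample values chosen for illustration only:

```python
# 2x2 numerical check of Eq. (27): with j = j_c * dj, the covariant components
# j^T eps j equal dj^T (j_c^T eps j_c) dj, so neglecting dj amounts to using
# the centre Jacobian j_c throughout (the "COVc" components).

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def inv(A):
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

jc  = [[1.0, 0.2], [0.1, 0.9]]         # Jacobian at the element centre (sample)
jg  = [[1.3, 0.4], [0.2, 1.1]]         # Jacobian at a Gauss point (sample)
eps = [[0.01, 0.003], [0.003, -0.02]]  # Cartesian strain components (sample)

dj = mm(inv(jc), jg)                   # relative Jacobian: jg = jc * dj

cov_full = mm(tr(jg), mm(eps, jg))     # j^T eps j, Eq. (26)
cov_c    = mm(tr(jc), mm(eps, jc))     # COVc components
cov_rec  = mm(tr(dj), mm(cov_c, dj))   # dj^T (COVc) dj, Eq. (27)

assert all(abs(cov_full[i][j] - cov_rec[i][j]) < 1e-14
           for i in range(2) for j in range(2))
```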
Remark
The Jacobian at the center was used in [ 40] to improve the method of incompatible modes for a 4-node element. However, no such improvement was proposed for the MITC method until our paper [ 43], where the MITC9i element was developed. The main difficulty was to notice that the covariant components of Eq. ( 25) can be obtained from the Cartesian ones by Eq. ( 26), and to apply this formula in the context of sampling/interpolation.
MITCi method for bending/twisting and transverse shear strains For the membrane strains $$\varvec{\varepsilon }$$ of the shell element MITC9i, we apply the transformations which we developed in [ 43]. In the present paper, we propose and test analogous transformations for the bending/twisting strains $$\varvec{\kappa }$$ and the transverse shear strains $$\varvec{\gamma }$$. Note that the transformation formulas between Cartesian and covariant components are different for the in-plane strains, i.e. $$\varvec{\varepsilon }$$ and $$\varvec{\kappa }$$, and for the transverse shear strain $$\varvec{\gamma }$$, see “Appendix”.
Steps defining the improved shell element MITC9i are as follows:
1.
The representations in the reference Cartesian basis are transformed to the co-basis at the element center, to obtain the COVc components,
\begin{aligned} \varvec{\kappa }_\xi = \mathbf{j}_c^{T} \, \varvec{\kappa }^{ref} \ \mathbf{j}_c,\qquad \varvec{\gamma }_\xi = \mathbf{j}_c^{T} \, \varvec{\gamma }^{ref}, \end{aligned}
(28)
2.
The two-level approximations of the COVc components are performed,
\begin{aligned} \varvec{\kappa }_\xi \ {\mathop {\longrightarrow }\limits ^{\mathrm{MITC}}}\ \widetilde{\varvec{\kappa }}{}_{\xi },\qquad \varvec{\gamma }_\xi \ {\mathop {\longrightarrow }\limits ^{\mathrm{MITC}}}\ \widetilde{\varvec{\gamma }}{}_{\xi }. \end{aligned}
(29)
This involves sampling and interpolation, which are described in detail in the next paragraph. Results are designated by a tilde, $$\widetilde{(\cdot )}$$.
3.
The approximated COVc components in the co-basis at the element center are transformed back from the co-basis at the element center to the reference Cartesian basis,
\begin{aligned} \widetilde{\varvec{\kappa }}{}^{ref} = \mathbf{j}_c^{-T} \, \widetilde{\varvec{\kappa }}{}_{\xi } \ \mathbf{j}_c^{-1},\qquad \widetilde{\varvec{\gamma }}{}^{ref} = \mathbf{j}_c^{-T} \, \widetilde{\varvec{\gamma }}{}_{\xi }. \end{aligned}
(30)
The transformations of the first and third steps are reciprocal and without the second step we would simply obtain $$\widetilde{\varvec{\kappa }}^{ref} = {\varvec{\kappa }}^{ref}$$ and $$\widetilde{\varvec{\gamma }}^{ref} = {\varvec{\gamma }}^{ref}$$.
Sampling points and interpolation functions The two-level approximations are applied to the COVc components of shell strains, as symbolically given by Eq. ( 29). The COVc strain components are sampled at points shown in Fig. 3, where $$a=\sqrt{1/3}$$ and $$b=\sqrt{3/5}$$. Let us group the sampled shell strain components,
\begin{aligned} E_{\xi \xi } \in \{ \varepsilon _{11}, \kappa _{11}, \gamma _{31} \}, \quad E_{\eta \eta } \in \{ \varepsilon _{22}, \kappa _{22}, \gamma _{32} \}, \quad E_{\xi \eta } \in \{ \varepsilon _{12}, \kappa _{12} \}. \end{aligned}
(31)
Then the particular strain groups are interpolated, using the sampled values, as follows:
\begin{aligned} \widetilde{E}_{\xi \xi }(\xi ,\eta ) = \sum _{l} R_{l}(\xi ,\eta ) \ (E_{\xi \xi })_l, \quad \widetilde{E}_{\eta \eta }(\xi ,\eta ) = \sum _{l} R_{l}(\xi ,\eta ) \ (E_{\eta \eta })_l, \quad \widetilde{E}_{\xi \eta }(\xi ,\eta ) = \sum _{l} R_{l}(\xi ,\eta ) \ (E_{\xi \eta })_l, \end{aligned}
(32)
where $$R_l$$ are the interpolation functions defined below, and l is the index of sampling points: $$l=A,B,C,D,E,F$$ for the 6-point schemes and $$l=A,B,C,D$$ for the 4-point scheme. The sets of interpolation functions are defined as follows:
1.
for $$\varepsilon _{11}, \kappa _{11}, \gamma _{31}$$, the points of Fig. 3a are used (2 points in the $$\xi$$ direction),
(33)
2.
for $$\varepsilon _{22}, \kappa _{22}, \gamma _{32}$$, the points of Fig. 3b are used (2 points in the $$\eta$$ direction),
(34)
3.
For $$\varepsilon _{12}, \kappa _{12}$$, the points of Fig. 3c are used (2 points in $$\xi$$ and $$\eta$$ directions),
(35)
Remark
The sampling points of Fig. 3 correspond to the integration points proposed for the Selective Reduced Integration (SRI) of strain energy terms in [ 32]. Several attempts were made to apply the 6-point schemes of Fig. 3a and b to $$\varepsilon _{12}$$ and $$\kappa _{12}$$ but without a significant improvement, see [ 19, 21, 31].
Improved sampling strategy Note that the schemes of Fig. 3a and b involve 6 sampling points each, which implies a considerable number of evaluations. To reduce it, we use the so-called sampling lines instead of points, which yields a more efficient implementation. This method is not fully equivalent to the 6-point scheme due to the transformations of Eqs. ( 28) and ( 30) but works very well.
To explain the method, we consider only one strain component $$E^c_{\xi \xi }$$, for which sampling points and integration points are shown in Fig. 4a. Let us, for a moment, neglect the effect of transformations of Eqs. ( 28) and ( 30). Then we see that both these types of points are located at the same $$\eta \in \{-b, 0, +b \}$$, where $$b =\sqrt{{3}/{5}}$$. Hence, the $$3 \times 3$$ integration does evaluate $$E^c_{\xi \xi }$$ at correct $$\eta$$-coordinates, and, therefore, no separate sampling and interpolation in the $$\eta$$-direction is needed. In consequence, we sample and interpolate $$E^c_{\xi \xi }$$ only in the $$\xi$$-direction,
\begin{aligned} \widetilde{E}_{\xi \xi }(\xi ,\eta ) = R_{L1}(\xi ) \, E^c_{\xi \xi }(-a,\eta ) \ + \ R_{L2}(\xi ) \, E^c_{\xi \xi }(+a,\eta ), \end{aligned}
(36)
where
\begin{aligned} R_{L1}(\xi ) = \frac{1}{2}\left( 1-\frac{\xi }{a}\right) , \qquad R_{L2}(\xi )=\frac{1}{2}\left( 1+\frac{\xi }{a}\right) , \qquad a=\sqrt{{1}/{3}}. \end{aligned}
(37)
This formula involves two sampling lines shown in Fig. 4b, where L1 is located at $$\xi =-a$$ and L2 at $$\xi =a$$. Besides, note that the interpolation functions $$R_{L1}$$ and $$R_{L2}$$ of Eq. ( 37) replace six much more complicated functions $$R_l$$ of Eq. ( 33). The sampling lines are transformed back by Eq. ( 30).
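The effect of the sampling-line interpolation of Eqs. ( 36)-( 37) can be seen on a sample polynomial field, chosen here for illustration: the interpolation reproduces the field exactly on the lines $$\xi = \pm a$$ and replaces the quadratic $$\xi$$-term by a constant, so the result is linear in $$\xi$$:

```python
# Sketch of the sampling-line approximation of Eqs. (36)-(37) for a single
# strain component. The sample field E below is an arbitrary polynomial.
import math

a = math.sqrt(1.0 / 3.0)

def E(xi, eta):
    # sample "strain" field: quadratic in xi, arbitrary in eta
    return 1.0 + 2.0 * xi + 3.0 * eta + 4.0 * xi * eta + 5.0 * xi * xi

def E_tilde(xi, eta):
    RL1 = 0.5 * (1.0 - xi / a)                   # Eq. (37)
    RL2 = 0.5 * (1.0 + xi / a)
    return RL1 * E(-a, eta) + RL2 * E(+a, eta)   # Eq. (36)

# The interpolation reproduces E exactly on the sampling lines ...
assert abs(E_tilde(-a, 0.4) - E(-a, 0.4)) < 1e-14
assert abs(E_tilde(+a, -0.9) - E(+a, -0.9)) < 1e-14
# ... and replaces the quadratic term 5*xi^2 by its value at xi = +/-a,
# i.e. the constant 5*a^2 = 5/3, so E_tilde is linear in xi:
assert abs(E_tilde(0.0, 0.0) - (1.0 + 5.0 / 3.0)) < 1e-13
```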
## 4 Corrected shape functions for shell element
In this section, we present several modifications to the method of calculating the shift parameters for the corrected shape functions, which: (a) are necessary for 9-node shell elements located in 3D space, (b) improve the accuracy of a solution for non-flat shell elements, and (c) enable the construction of a symmetric side curve for a shifted midside node. The last modification is also applicable to 2D problems.
The standard isoparametric shape functions are obtained by the assumption that the midside nodes (5,6,7,8) are located at the middle positions between the respective corner nodes and the central node 9 is located at the element center. When these nodes are shifted from the middle positions then the physical space parameterized by the standard shape functions is distorted, see e.g. Figs. 13a and 20 in [ 30].
To alleviate this problem, the corrected shape functions (CSF) were proposed in [ 9], with six additional parameters introduced in the local coordinates space, $$\alpha , \beta , \gamma , \epsilon , \theta , \kappa \in [-1,+1]$$, see Fig. 5. The distance of these nodes from the middle positions in the local coordinates space is computed as proportional to their distance in the physical space. To determine these 6 parameters, nonlinear equations must be solved: 4 equations with 1 unknown each and 2 equations with 2 unknowns. This is done only once, so the time overhead is insignificant.
The corrected shape functions for the 9-node element are defined in two steps. First, the shape functions of the 8-node (serendipity) element are defined,
(38)
and they account for shifts of the midside nodes from the middle positions. Next, the basis function for the central node 9 is added hierarchically to the shape functions for the 8-node element. The obtained shape functions for the 9-node element are
(39)
where $$\bar{N}_i (\theta ,\kappa ) \doteq \bar{N}_i (\xi =\theta ,\eta =\kappa )$$, see [ 9], Eq. (20). These functions account for shifts of the midside nodes from the middle positions and the central node from the central position. When the shift parameters are equal to zero, then the CSF of Eq. ( 39) are reduced to the standard shape functions.
### 4.1 Computation of shift parameters in 3D for midside nodes
For the midside nodes (5,6,7,8), the procedure to determine the shift parameters $$\alpha , \beta , \gamma , \epsilon$$ can be extended from 2D to 3D space as follows. For instance, let us determine the shift parameter $$\alpha$$ of node 5 in local coordinates. Consider the parametric form of the side 1-5-2, which, in general, can be curved. For 2D problems, it is
\begin{aligned} X(\xi ) = \mathbf{N}_{3n}(\xi ) \cdot [X_1,X_5,X_2] \quad \text{ and } \quad Y(\xi ) = \mathbf{N}_{3n}(\xi ) \cdot [Y_1,Y_5,Y_2], \end{aligned}
(40)
where the vector of corrected shape functions for a 3-node element is
\begin{aligned} \mathbf{N}_{3n}(\xi ) \doteq \left[ \frac{1}{2}\frac{(\xi -1)(\xi -\alpha )}{1+\alpha }, \ \frac{1-\xi ^2}{1-\alpha ^2}, \ \frac{1}{2}\frac{(\xi +1)(\xi -\alpha )}{1-\alpha } \right] . \end{aligned}
(41)
For shells in 3D space, we have to write additionally a similar expression for the third coordinate,
\begin{aligned} Z(\xi ) =\mathbf{N}_{3n}(\xi ) \cdot [Z_1,Z_5,Z_2]. \end{aligned}
(42)
In the method proposed in [ 9], the shift parameter $$\alpha$$ is obtained from the proportion of arc-lengths, i.e. the fractional distance of node 5 along the curved side relative to nodes 1 and 2 is required to be identical in the physical space and in the local space,
\begin{aligned} \frac{L_{-1}^{\alpha }}{L_{-1}^{+1}}=\frac{\int _{-1}^{\alpha }d\xi }{\int _{-1}^{1} d\xi }, \end{aligned}
(43)
where the arc-length of the side in a physical space from $$\xi = -1$$ to $$\xi =\alpha$$ is
\begin{aligned} L_{-1}^{\alpha } \doteq \int _{-1}^{\alpha } \sqrt{\left( {dX}/{d\xi }\right) ^2+\left( {dY}/{d\xi }\right) ^2 +\left( {dZ}/{d\xi }\right) ^2} \ d\xi . \end{aligned}
(44)
In the above form, we included the term $$\left( {dZ}/{d\xi }\right) ^2$$, i.e. the contribution of the third coordinate, which is needed for shells. The definition of $$L_{-1}^{+1}$$ is analogous; we only have to replace $$\alpha$$ by $$+1$$. Moreover, for the right-hand side, we obtain $$\int _{-1}^{\alpha }d\xi = 1+\alpha$$ and $$\int _{-1}^{1}d\xi =2$$, so Eq. ( 43) can be transformed into a single non-linear equation in $$\alpha$$,
\begin{aligned} F(\alpha ) \doteq \frac{\int _{-1}^{\alpha } \sqrt{a \xi ^2 + b \xi + c} \ d \xi }{\int _{-1}^{1} \sqrt{a \xi ^2 + b \xi + c} \ d \xi } - \frac{\alpha +1}{2} =0, \end{aligned}
(45)
where the coefficients $$a$$, $$b$$ and $$c$$ depend on differences in node positions and on $$\alpha$$. We solve this equation using the Newton method; the term added for shells, $$\left( {dZ}/{d\xi }\right) ^2$$, contributes to the tangent matrix and the residual vector.
An initial value of $$\alpha$$, denoted as $$\alpha _0$$, is obtained by a proportion and a change of variables as follows:
\begin{aligned} a \doteq \frac{L_{15}}{L_{15}+L_{52}} \in [0,1], \, \alpha _0 \doteq 2 a -1 \in [-1,+1], \end{aligned}
(46)
where $$L_{ij}$$ is the distance between nodes i and j in 3D space, computed as the length of the vector connecting these points. When the side 1-5-2 is straight, then $$L_{15}+L_{52} = L_{12}$$ and the so-defined $$\alpha _0$$ is a solution of the nonlinear Eq. ( 43). The parameters for the other midside nodes, i.e. $$\beta , \gamma$$ and $$\epsilon$$, are obtained in an analogous way.
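The solution of Eq. ( 45) can be sketched in a few lines of Python (our own minimal sketch, not the authors' implementation: Simpson quadrature replaces the exact integrals of Eq. ( 44), and a secant iteration replaces the Newton method with analytical derivatives; all function names are ours):

```python
import math

def csf_derivs(xi, alpha):
    """Derivatives w.r.t. xi of the corrected 3-node shape functions, Eq. (41)."""
    return ((2.0*xi - 1.0 - alpha) / (2.0*(1.0 + alpha)),
            -2.0*xi / (1.0 - alpha*alpha),
            (2.0*xi + 1.0 - alpha) / (2.0*(1.0 - alpha)))

def arc_length(p1, p5, p2, alpha, xi_end, n=200):
    """Simpson quadrature of Eq. (44) from xi = -1 to xi_end (n must be even)."""
    def speed(xi):
        d1, d5, d2 = csf_derivs(xi, alpha)
        return math.sqrt(sum((d1*c1 + d5*c5 + d2*c2)**2
                             for c1, c5, c2 in zip(p1, p5, p2)))
    h = (xi_end + 1.0) / n
    s = speed(-1.0) + speed(xi_end)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * speed(-1.0 + i*h)
    return s * h / 3.0

def shift_parameter(p1, p5, p2, tol=1e-9):
    """Solve F(alpha) = 0 of Eq. (45) by a secant iteration started from Eq. (46)."""
    def F(alpha):
        return (arc_length(p1, p5, p2, alpha, alpha)
                / arc_length(p1, p5, p2, alpha, 1.0) - 0.5*(alpha + 1.0))
    L15, L52 = math.dist(p1, p5), math.dist(p5, p2)
    a0 = 2.0 * L15 / (L15 + L52) - 1.0          # initial value, Eq. (46)
    a1 = min(max(a0 + 1e-4, -0.999), 0.999)
    f0, f1 = F(a0), F(a1)
    for _ in range(60):
        if abs(f1) < tol or f0 == f1:
            break
        # secant step, kept inside the open interval (-1, +1)
        a0, a1, f0 = a1, min(max(a1 - f1*(a1 - a0)/(f1 - f0), -0.999), 0.999), f1
        f1 = F(a1)
    return a1
```

For a straight side with node 5 at one third of the length, the routine reproduces the closed-form value $$\alpha _0$$ of Eq. ( 46), in agreement with the remark above; for a symmetrically placed node 5 it returns $$\alpha \approx 0$$.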
### 4.2 Computation of shift parameters in 3D for central node
For the central node 9 and the $$(\theta , \kappa )$$ parameters, the generalization to 3D space is more complicated. For 2D space, we use an assumption that the Jacobian of transformation from physical to local coordinates should not be affected by the central node 9. For shells, we have additionally the third coordinate Z, and we can consider two methods of treating it:
M1.
Neglect a contribution of the Z-coordinate, and solve a system of 2 equations identical to that used for 2D space,
\begin{aligned} \left[ \begin{array}{c} X_9 - \sum _{i=1}^8 \bar{N}_i(\theta ,\kappa ) \ X_i =0 \\ Y_9 - \sum _{i=1}^8 \bar{N}_i(\theta ,\kappa ) \ Y_i =0 \end{array}\right] . \end{aligned}
(47)
This method is an option when the elemental reference basis is used, e.g. the tangent basis attached at node 9, as then Z describes elevation or warping of an element.
M2.
Account for a contribution of the Z-coordinate. We propose to incorporate it using a definition of the square error
\begin{aligned} e \doteq {\mathbf{r}^T \mathbf{r}}, \end{aligned}
(48)
where
\begin{aligned} \mathbf{r}(\theta ,\kappa ) \doteq \left[ \begin{array}{c} X_9 - \sum _{i=1}^8 \bar{N}_i(\theta ,\kappa ) \ X_i \\ Y_9 - \sum _{i=1}^8 \bar{N}_i(\theta ,\kappa ) \ Y_i \\ Z_9 - \sum _{i=1}^8 \bar{N}_i(\theta ,\kappa ) \ Z_i \end{array}\right] . \end{aligned}
(49)
The error e is a scalar, and minimization of it w.r.t. $$\theta$$ and $$\kappa$$ yields 2 equations
\begin{aligned} \left[ \begin{array}{c} 2 \mathbf{r}^T (d \mathbf{r}/ d \theta ) =0 \\ 2 \mathbf{r}^T (d \mathbf{r}/ d \kappa ) =0 \end{array}\right] . \end{aligned}
(50)
This method can be applied when either the elemental or the global reference basis is used for an element. For a flat shell element, M2 yields exactly the same results as M1, which confirms that the proposed generalization is consistent.
In both methods, the set of nonlinear equations is solved for $$\theta$$ and $$\kappa$$ using the Newton method. Initial values can be calculated, e.g., as follows:
\begin{aligned} \theta _0 \doteq \frac{2 L_{89}}{L_{89}+L_{96}}-1,\qquad \kappa _0 \doteq \frac{2 L_{79}}{L_{79}+L_{95}}-1, \end{aligned}
(51)
where $$L_{ij}$$ is the distance between nodes i and j in 3D space. When nodes 8, 9 and 6 lie on one straight line then $$\theta =\theta _0$$. Similarly, when nodes 5, 9 and 7 lie on one straight line then $$\kappa =\kappa _0$$. The proposed method M2 is tested in Sect. 6.4.
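The minimization of Eq. ( 48) can be sketched as follows (a minimal Python sketch under two assumptions of ours: the conventional serendipity node numbering, with nodes 8-9-6 running along $$\xi$$ and 5-9-7 along $$\eta$$, and zero midside shifts, so that the standard 8-node shape functions stand in for the corrected $$\bar{N}_i$$ of Eq. ( 39); a Gauss-Newton iteration with a numerical Jacobian replaces the Newton method, which is equivalent to solving Eq. ( 50) at the minimum):

```python
# corners 1-4 and midside nodes 5-8 of the parent bi-unit square
CORNERS = ((-1, -1), (1, -1), (1, 1), (-1, 1))
MIDSIDES = ((0, -1), (1, 0), (0, 1), (-1, 0))

def shape8(t, k):
    """Standard 8-node serendipity shape functions (zero midside shifts);
    in general the corrected functions of Eq. (39) would be used instead."""
    N = [0.25*(1 + t*ti)*(1 + k*ki)*(t*ti + k*ki - 1) for ti, ki in CORNERS]
    for ti, ki in MIDSIDES:
        N.append(0.5*(1 - t*t)*(1 + k*ki) if ti == 0 else
                 0.5*(1 + t*ti)*(1 - k*k))
    return N

def residual(t, k, nodes, p9):
    """The 3-component residual r of Eq. (49); nodes and p9 are 3-tuples."""
    N = shape8(t, k)
    return [p9[c] - sum(N[i]*nodes[i][c] for i in range(8)) for c in range(3)]

def central_shift(nodes, p9, eps=1e-7):
    """Minimize e = r.r of Eq. (48) by Gauss-Newton with a numerical Jacobian."""
    t = k = 0.0
    for _ in range(30):
        r = residual(t, k, nodes, p9)
        rt = residual(t + eps, k, nodes, p9)   # finite-difference columns
        rk = residual(t, k + eps, nodes, p9)
        J = [((rt[c] - r[c])/eps, (rk[c] - r[c])/eps) for c in range(3)]
        a = sum(J[c][0]**2 for c in range(3))
        b = sum(J[c][0]*J[c][1] for c in range(3))
        d = sum(J[c][1]**2 for c in range(3))
        g0 = sum(J[c][0]*r[c] for c in range(3))
        g1 = sum(J[c][1]*r[c] for c in range(3))
        det = a*d - b*b
        dt, dk = (b*g1 - d*g0)/det, (b*g0 - a*g1)/det   # solve (J^T J) d = -J^T r
        t, k = t + dt, k + dk
        if abs(dt) + abs(dk) < 1e-12:
            break
    return t, k
```

For a flat bi-unit element with regular midside nodes and the central node moved to $$(X_9, Y_9)$$, the iteration returns $$\theta = X_9$$ and $$\kappa = Y_9$$, as expected for this degenerate case.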
### 4.3 Alternative computation of shift parameters for curved sides
When the element side is straight, then the shift $$\alpha = \alpha _0$$, where $$\alpha _0$$ is given by Eq. ( 46). Computations are more complicated when the side is curved; e.g. in the method of [ 9], the proportion of arc-lengths yields a nonlinear Eq. ( 45) from which $$\alpha$$ is calculated. We have noticed two features of that procedure:
1.
when the midside node is shifted from the middle position then the so-computed $$\alpha$$ yields a non-symmetrical side curve,
2.
no particular shape of the side curve is assumed in this method, so we cannot control it. This is a drawback, e.g., for domain boundaries, whose shape we try to represent as exactly as possible.
We propose an alternative method by which a parabola is constructed in such a way that a symmetric shape of the side is obtained even for a shifted midside node. The steps of the proposed method for the given nodes $$P_1, \, P_2$$ and $$P_5$$ are as follows:
1.
Check the position of $$P_5$$ relative to $$P_1$$ and $$P_2$$
\begin{aligned} d \doteq \left| \, \Vert \mathbf{p}_5 - \mathbf{p}_1 \Vert - \Vert \mathbf{p}_5 - \mathbf{p}_2 \Vert \, \right| < tol, \end{aligned}
(52)
where $$\mathbf{p}_1, \ \mathbf{p}_2$$ and $$\mathbf{p}_5$$ are position vectors of $$P_1$$, $$P_2$$ and $$P_5$$, respectively, and tol is a prescribed tolerance. If $$d < tol$$ then $$P_5$$ is on the axis of symmetry so $$\alpha = 0$$ and this ends the calculations; otherwise we proceed further.
2.
Construct vectors of a local Cartesian basis in 3D using $$P_1, \, P_2$$ and $$P_5$$
\begin{aligned} \mathbf{s}_t = (\mathbf{p}_2- \mathbf{p}_1)/\Vert (\mathbf{p}_2- \mathbf{p}_1) \Vert , \end{aligned}
\begin{aligned} \mathbf{v}= (\mathbf{p}_1 - \mathbf{p}_5) \times (\mathbf{p}_2 - \mathbf{p}_5),\qquad \mathbf{s}_r =\mathbf{v}/\Vert \mathbf{v}\Vert , \end{aligned}
(53)
\begin{aligned} \mathbf{s}_n = \mathbf{s}_r \times \mathbf{s}_t, \end{aligned}
Note that $$\mathbf{s}_t$$ and $$\mathbf{s}_n$$ belong to the plane defined by $$P_1$$, $$P_2$$ and $$P_5$$.
3.
Calculate coordinates of $$P_1$$, $$P_2$$ and $$P_5$$ in the local basis $$\{ \mathbf{s}_t, \, \mathbf{s}_n, \, \mathbf{s}_r \}$$,
\begin{aligned} \mathbf{r}_1= & {} \mathbf{O}^T (\mathbf{p}_1 - \mathbf{p}_0),\qquad \mathbf{r}_2 = \mathbf{O}^T (\mathbf{p}_2 - \mathbf{p}_0),\nonumber \\&\mathbf{r}_5 = \mathbf{O}^T (\mathbf{p}_5 - \mathbf{p}_0), \end{aligned}
(54)
where $$\mathbf{p}_0 = (\mathbf{p}_1 + \mathbf{p}_2)/2$$ and $$\mathbf{O}\doteq [ \mathbf{s}_t \, | \, \mathbf{s}_n \, | \, \mathbf{s}_r ] \in SO(3)$$. The applied transformation involves a shift by $$\mathbf{p}_0$$ and a rotation by $$\mathbf{O}$$. Associate the local coordinates xy with the vectors $$\mathbf{s}_t$$ and $$\mathbf{s}_n$$, respectively, so let them have zero at $$\mathbf{p}_0$$. The local coordinates of nodes are designated: $$\mathbf{r}_1 = [x_1, y_1]$$, $$\mathbf{r}_2 = [x_2,y_2]$$ and $$\mathbf{r}_5 = [x_5,y_5 ]$$, where $$y_1 = y_2 = 0$$.
4.
Calculate coefficients of a parabola in the local basis, assuming that it passes through $$P_1$$ and $$P_2$$, and is symmetrical w.r.t. $$x =0$$. Writing $$y = a x^2 + c$$ for $$P_1$$ and $$P_5$$, we obtain a system of two equations, from which we can calculate
\begin{aligned} a = \frac{y_5-y_1}{x_5^2 - x_1^2}, \qquad c = y_5 - a x_5^2. \end{aligned}
(55)
5.
The parametric equation of parabola $$y = a x^2 + c$$ is
\begin{aligned} x = -\frac{2}{4 a} t,\qquad y = \frac{1}{4 a} t^2 + c, \end{aligned}
(56)
where t is a parameter. The value of t for $$P_1$$ can be obtained using the condition $$x=x_1$$, and the value of t for $$P_5$$ using the condition $$x=x_5$$,
\begin{aligned} t_{P_1} = -\, 2a x_1,\qquad t_{P_5} = -\, 2a x_5. \end{aligned}
(57)
Then, the shift parameter $$\alpha \in [-1,1]$$ can be defined as a ratio
\begin{aligned} \alpha \doteq \frac{t_{P_5}}{| t_{P_1} | } = \frac{x_5}{ | x_1 | }. \end{aligned}
(58)
The new method is simpler than that of [ 9], and its accuracy is better when the shell edge is indeed curved and symmetric, see Sect. 6.5. Note that the midside shift parameters, such as $$\alpha$$, also affect the $$(\theta , \kappa )$$ parameters for the central node, see Eq. ( 50). Finally, we note that, e.g., a semi-circle can be implemented in a similar manner, but a parabola is simpler.
Example
Consider three points $$P_1$$(0,0), $$P_2$$(4,0) and $$P_5$$(1,2) shown in Fig. 6, where the side curves are obtained for the standard shape functions (“standard”) and the corrected shape functions. For the latter, $$\alpha$$ is obtained in two ways: the method of [ 9] yields $$\alpha =-\,0.264405$$ and the non-symmetric curve “CSF,CG”, while the proposed method yields $$\alpha =-\,0.5$$ and the symmetric curve “CSF,new”.
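The five steps above translate almost directly into code (a minimal Python sketch with function names of ours; the parabola coefficients of Eq. ( 55) need not be formed, because for the symmetric parabola $$\alpha$$ follows directly from the local abscissae via Eq. ( 58)):

```python
import math

def side_shift_parameter(p1, p2, p5, tol=1e-12):
    """Shift parameter alpha of a curved side 1-5-2 from the symmetric-parabola
    construction of Sect. 4.3 (points are 3-tuples)."""
    # step 1: check the position of P5 relative to P1 and P2, Eq. (52)
    if abs(math.dist(p5, p1) - math.dist(p5, p2)) < tol:
        return 0.0                  # P5 lies on the axis of symmetry
    # step 2: only the tangent vector s_t of the local basis, Eq. (53),
    # is needed here, because Eq. (58) uses the local abscissae alone
    L12 = math.dist(p1, p2)
    st = tuple((b - a)/L12 for a, b in zip(p1, p2))
    # step 3: local abscissae of P1 and P5 relative to the midpoint p0, Eq. (54)
    p0 = tuple((a + b)/2.0 for a, b in zip(p1, p2))
    x1 = sum((a - c)*t for a, c, t in zip(p1, p0, st))
    x5 = sum((a - c)*t for a, c, t in zip(p5, p0, st))
    # steps 4-5: for the symmetric parabola, alpha = x5/|x1|, Eq. (58)
    return x5 / abs(x1)
```

For the three points of the example above, the routine returns $$\alpha = -\,0.5$$.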
## 5 Treatment of drilling RC
The purpose of using the drilling Rotation Constraint (drilling RC) is to provide an additional equation for the drilling rotation $$\Delta \omega \doteq \Delta \varvec{\psi }\cdot \mathbf{a}_3^n$$, which is not present in the shell kinematical assumption ( 2) and in strains.
We define the drilling RC as the (1,2) component of the RC equation of Eq. ( 10), which is taken with $$\mathbf{Q}\approx \mathbf{Q}_0$$ in the local basis $$\{ \mathbf{t}_k\}$$ $$(k=1,2,3)$$. Applying the $$\mathbf{t}_1 \cdot \left[ (\,\cdot \,) \, \mathbf{t}_2 \right]$$ operation to the l.h.s. of Eq. ( 10), we obtain the scalar
\begin{aligned} c\doteq & {} \mathbf{t}_1 \cdot \left[ {\mathrm{skew}}(\mathbf{Q}_0^T \mathbf{F}) \, \mathbf{t}_2 \right] = \frac{1}{2} \left[ \mathbf{t}_1 \cdot \left( \mathbf{Q}_0^T \mathbf{F}\mathbf{t}_2 \right) - \mathbf{t}_1 \cdot \left( \mathbf{F}^T \mathbf{Q}_0 \mathbf{t}_2 \right) \right] \nonumber \\= & {} \frac{1}{2} \left[ (\mathbf{Q}_0 \mathbf{t}_1) \cdot \left( \mathbf{F}\mathbf{t}_2 \right) - (\mathbf{F}\mathbf{t}_1) \cdot \left( \mathbf{Q}_0 \mathbf{t}_2 \right) \right] \nonumber \\= & {} \frac{1}{2} \left[ \mathbf{a}_1 \cdot ( \mathbf{F}\mathbf{t}_2 )- (\mathbf{F}\mathbf{t}_1) \cdot \mathbf{a}_2 \right] , \end{aligned}
(59)
where $$\mathbf{a}_1 \doteq \mathbf{Q}_0 \, \mathbf{t}_1$$ and $$\mathbf{a}_2 \doteq \mathbf{Q}_0 \, \mathbf{t}_2$$ are the forward-rotated tangent vectors $$\mathbf{t}_1$$ and $$\mathbf{t}_2$$.
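A small numerical check illustrates what the scalar c measures (a toy plane example of ours, using only in-plane rotations about $$\mathbf{t}_3 = \mathbf{e}_z$$, built with the Python standard library):

```python
import math

def rot_z(phi):
    """In-plane rotation matrix about e_z."""
    c, s = math.cos(phi), math.sin(phi)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matvec(M, v):
    return [sum(M[i][j]*v[j] for j in range(3)) for i in range(3)]

def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

t1, t2 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
Q0 = rot_z(0.3)                       # constraining rotation
a1 = matvec(Q0, t1)                   # forward-rotated tangent vectors
a2 = matvec(Q0, t2)

def drill_c(F):
    """Last form of Eq. (59)."""
    return 0.5*(dot(a1, matvec(F, t2)) - dot(matvec(F, t1), a2))

c_rigid = drill_c(rot_z(0.3))         # F = Q0: rigid rotation, c = 0
c_mismatch = drill_c(rot_z(0.5))      # F rotates 0.2 rad more: c = -sin(0.2)
```

Here c vanishes when $$\mathbf{F}$$ coincides with $$\mathbf{Q}_0$$ and equals $$-\sin$$ of the in-plane rotation mismatch otherwise, so penalizing c drives $$\mathbf{Q}_0$$ towards the rotation contained in $$\mathbf{F}$$.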
The drilling RC can also be obtained in the forward-rotated basis $$\{ \mathbf{a}_k \}$$, where $$\mathbf{a}_k \doteq \mathbf{Q}_0 \, \mathbf{t}_k$$, which is associated with the current (deformed) configuration. Let us denote $$\mathbf{A}\doteq \mathbf{Q}_0^T \mathbf{F}$$ and its forward-rotated form $$\mathbf{A}\!{}^* \doteq \mathbf{Q}_0 \mathbf{A}\mathbf{Q}_0^T =\mathbf{F}\mathbf{Q}_0^T$$. We see that
\begin{aligned} \mathbf{t}_1 \cdot ({\mathrm{skew}}\mathbf{A}\, \mathbf{t}_2) = \mathbf{t}_1 \cdot (\mathbf{Q}_0^T {\mathrm{skew}}\mathbf{A}\!{}^* \mathbf{Q}_0 \mathbf{t}_2) =\mathbf{a}_1 \cdot ({\mathrm{skew}}\mathbf{A}\!{}^* \, \mathbf{a}_2), \end{aligned}
(60)
where the last form uses the forward-rotated basis vectors. When we approach the solution, i.e. $$\mathbf{Q}_0 \rightarrow \mathbf{R}$$, then $$\mathbf{A}\doteq \mathbf{Q}_0^T \mathbf{F}\rightarrow \mathbf{R}^T \mathbf{F}= \mathbf{U}$$ and $$\mathbf{A}\!{}^* \doteq \mathbf{F}\mathbf{Q}_0^T \rightarrow \mathbf{F}\mathbf{R}^T = \mathbf{V}$$, where the rotation $$\mathbf{R}\in SO(3)$$ and the right and left stretching tensors, $$\mathbf{U}$$ and $$\mathbf{V}$$, respectively, are obtained by the polar decomposition of $$\mathbf{F}$$.
Incremental form of the drilling RC An incremental form of the drilling RC can be obtained from the last form of Eq. ( 59) using the multiplicative decompositions $$\mathbf{F}= \Delta \mathbf{F}\mathbf{F}^n$$ and $$\mathbf{Q}_0 = \Delta \mathbf{Q}_0 \mathbf{Q}_0^n$$, where the superscript “ n” designates the last converged configuration. Introducing the convected vectors $$\mathbf{t}_{\alpha }^n \doteq \mathbf{F}^{n} \mathbf{t}_{\alpha }$$ and the forward-rotated vectors $$\mathbf{a}_{\alpha }^n \doteq \mathbf{Q}_0^{n} \mathbf{t}_{\alpha }$$ ( $$\alpha =1,2$$), after straightforward transformations, we obtain
\begin{aligned} c = \frac{1}{2} \left[ \mathbf{a}_1^n \cdot \left( \Delta \mathbf{A}\mathbf{t}_2^n \right) - \mathbf{t}_1^n \cdot \left( \Delta \mathbf{A}\!{}^T \mathbf{a}_2^n \right) \right] , \end{aligned}
(61)
where $$\Delta \mathbf{A}\doteq \Delta \mathbf{Q}_0^T \Delta \mathbf{F}$$. This form of c complies with the additive/multiplicative rotation update scheme, in which we use $$\Delta \mathbf{Q}_0$$ parametrized by $$\Delta \varvec{\psi }$$ for the increment, see Eq. ( 8). This scheme works correctly for $$| \Delta \omega | < \pi /2$$ and, when combined with a quaternion, allows for arbitrarily large drilling rotations. For the initial configuration, $$\mathbf{F}^{n} = \mathbf{I}$$ and $$\mathbf{Q}_0^{n} = \mathbf{I}$$, so we have $$\mathbf{t}_{\alpha }^n =\mathbf{t}_{\alpha }$$, $$\mathbf{a}_{\alpha }^n = \mathbf{t}_{\alpha }$$, and the above formula reduces to
\begin{aligned} c = \frac{1}{2} \left[ \Delta \mathbf{a}_1 \cdot \left( \Delta \mathbf{F}\mathbf{t}_2 \right) - \left( \Delta \mathbf{F}\mathbf{t}_1 \right) \cdot \Delta \mathbf{a}_2 \right] , \end{aligned}
(62)
where $$\Delta \mathbf{a}_{\alpha } \doteq \Delta \mathbf{Q}_0 \mathbf{t}_{\alpha }$$, of an apparent similarity to the last form of Eq. ( 59).
Discussion of approximation of drilling RC Below we show that for the equal-order bi-quadratic interpolations of displacements and the drilling rotation, the drilling RC is incorrectly approximated.
For simplicity, consider a planar bi-unit square element with drilling rotations and a linearized form of the drilling RC: $$2 c \doteq 2 \omega + (u_{,\eta }-v_{,\xi })= 0$$, where $$u_{,\eta } \doteq u_{1,2}$$ and $$v_{,\xi } \doteq u_{2,1}$$. Let us write the interpolation formulas as follows:
\begin{aligned} u = \sum _{i=1}^9 \phi _i \, U_i, \qquad v = \sum _{i=1}^9 \phi _i \, V_i, \qquad \omega = \sum _{i=1}^9 \phi _i \, \Omega _i, \end{aligned}
(63)
where $$[\phi _i ] \doteq \left[ \, 1, \, \xi , \, \eta , \, \xi \eta , \, \xi ^2, \, \eta ^2, \, \xi \eta ^2, \, \xi ^2 \eta , \, \xi ^2 \eta ^2 \, \right]$$ and $$\xi ,\eta \in [-1,+1]$$. Besides, $$U_i, \, V_i, \, \Omega _i$$ are coefficients for displacement components and the drilling rotation depending on the nodal values $$u_k, \, v_k, \, \omega _k$$ $$(k=1, \ldots , 9)$$. For the above interpolations, the linearized drilling RC yields
\begin{aligned} 2 c \doteq {}& \left( 2 \Omega _1 + U_3 - V_2 \right) + \left( 2 \Omega _2 + U_4 - 2 V_5 \right) \xi + \left( 2 \Omega _3 + 2 U_6 - V_4 \right) \eta \\ & + 2 \left( \Omega _4 + U_8 - V_7 \right) \xi \eta + \left( 2 \Omega _5 + U_7 \right) \xi ^2 + \left( 2 \Omega _6 - V_8 \right) \eta ^2 \\ & + 2 \left( \Omega _8 - V_9 \right) \xi \eta ^2 + 2 \left( \Omega _7 - U_9 \right) \xi ^2 \eta + \underline{\underline{2 \Omega _9 \, \xi ^2 \eta ^2}}. \end{aligned}
(64)
Note that the last $$\xi ^2 \eta ^2$$-term contains only the rotational coefficient $$\Omega _9$$ but no displacement coefficients $$U_i, \, V_i$$, which is incorrect, because both these types of coefficients should be linked in each term of the drilling RC. This problem is caused by equal-order approximations of displacements and the drilling rotation; a similar one appears for 4-node (bi-linear) elements, but then the $$\xi \eta$$-term is faulty, see [ 48].
Let us designate the faulty term as $$c_9 \doteq \Omega _9 \ \xi ^2 \eta ^2$$. For a bi-unit square element and the linearized drilling RC, we can express $$c_9$$ in terms of nodal drilling rotations $$\omega _k$$ $$(k=1, \ldots , 9)$$ as follows:
\begin{aligned} c_9 = \left[ \frac{1}{4}\left( \omega _1 + \omega _2 + \omega _3 + \omega _4 \right) - \frac{1}{2}\left( \omega _5 + \omega _6 + \omega _7 + \omega _8 \right) + \omega _9 \right] \xi ^2 \eta ^2. \end{aligned}
(65)
Note that for the rigid element rotation $$\omega _{rr}$$, all nodal $$\omega _k = \omega _{rr}$$ and the term in brackets vanishes, so the faulty term does not cause any problem then. However, in general, $$\int _{-1}^{1} \int _{-1}^{1} c_9^2 \ d\xi d\eta = \frac{4 }{25} \Omega _9^2 \ne 0$$, which means that the penalty form involving $$c_9$$ is non-zero and affects the solution.
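The stated value of the integral is easy to verify numerically; since the integrand is of degree 4 in each variable, the element's own $$3 \times 3$$ Gauss rule integrates it exactly (a stand-alone Python check of ours):

```python
import math

# 3-point Gauss-Legendre rule, exact for polynomials up to degree 5
PTS = (-math.sqrt(0.6), 0.0, math.sqrt(0.6))
WTS = (5.0/9.0, 8.0/9.0, 5.0/9.0)

def integrate_biunit(f):
    """Integrate f(xi, eta) over the bi-unit square [-1, 1]^2."""
    return sum(wi*wj*f(x, y) for x, wi in zip(PTS, WTS)
                             for y, wj in zip(PTS, WTS))

Omega9 = 1.7    # an arbitrary value of the rotational coefficient
val = integrate_biunit(lambda xi, eta: (Omega9 * xi**2 * eta**2)**2)
# val equals (4/25) * Omega9**2, so the penalty term involving c9 is non-zero
```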
To establish the magnitude of the effect caused by $$c_9$$, we: (1) removed $$c_9$$ from c, which implied one additional zero eigenvalue of the tangent matrix, and (2) stabilized this matrix by the scaled-down $$c_9$$. Then Eq. ( 18), which is valid for an arbitrary element (not only for the bi-unit one), has the following modified form
\begin{aligned} F^\mathrm{sh}_{drill} = \frac{\gamma h}{2}\int _{A} \left[ \ (c- c_9)^2 \ + \ \frac{1}{10^3} \, c_9^2 \ \right] dA, \end{aligned}
(66)
where $$1/10^3$$ is the scaling factor. This form was tested on the Cook’s membrane example of Sect. 6.3, and the effect of the $$c_9$$ term was not strong. Hence, in further tests, we use the form of $$F^\mathrm{sh}_{drill}$$ given by Eq. ( 18), which is simpler.
## 6 Numerical tests
In this section, we present numerical tests of the developed 9-node shell element MITC9i. The element uses the modified transformations of Sect. 3 and the corrected shape functions (CSF) in the extended version for shells of Sect. 4, which enables calculation of shift parameters for non-flat element geometry and generation of a symmetric side curve for a non-symmetric position of a midside node. An implementation of the drilling rotation is described in Sects. 2 and 5. The element is integrated using the $$3 \times 3$$ Gauss rule.
The tested and reference shell elements are listed in Table 1. All shell elements labeled “ours” are of the Reissner–Mindlin type and have drilling rotations (6 dofs/node).
Table 1
Tested and reference 9-node shell elements (element: characteristics)

Tested:
MITC9i: two-level approximations of covariant strains in co-basis at element center (COVc), CSF

Reference, ours:
9: standard (not modified)
9-AS: [ 28], Assumed Strain, two-level approximations of strains in Cartesian basis at element center
9-SRI0: [ 32], Selective Reduced Integration scheme of Table 1 therein, CSF
9-SRI: [ 28], Selective Reduced Integration scheme of Table 2 therein, CSF
9-EAS11: Enhanced Assumed Strain for membrane strains of [ 7] Eq. (30), bending strain unmodified, transverse shear strain treated as in MITC9i, CSF

Reference, of [ 29]:
MITC9: ADINA [ 2], 5 dofs/node, two-level approximations of covariant strains in local co-basis
S9R5: ABAQUS [ 1], 5 dofs/node (small strain/finite rotation, Kirchhoff), $$2 \times 2$$ URI + stabilization (1 spurious eigenvalue, not suppressed)

4-node shells, ours [ 48]:
HW47, HW29: mixed/enhanced elements based on the Hu–Washizu functional with 47 or 29 parameters; drilling RC by the Perturbed Lagrange method
Enhanced Assumed Displacement Gradient element with 5 parameters; drilling RC by the Perturbed Lagrange method
The elements were derived using the automatic differentiation program AceGen described in [ 23, 24], and were tested within the finite element program FEAP developed by R.L. Taylor [ 52]. The use of these programs is gratefully acknowledged. Our parallel multithreaded (OMP) version of FEAP is described in [ 22].
We tacitly assume that any consistent set of units is used for the data defined in numerical examples.
### 6.1 Eigenvalues of single element
The eigenvalues of a tangent matrix are computed for a single unsupported element. First, a single bi-unit square element with regularly placed midside and central nodes is checked. Then, several other flat and non-flat (h-p, one- and two-curvature) element shapes with shifted midside and central nodes are examined. This test is performed for both the standard and the corrected shape functions, which implies different element shapes. For all these cases, the tested shell element MITC9i has the correct number of zero eigenvalues (6).
### 6.2 Patch tests
The patch tests are crucial to demonstrate the benefits of using the new MITC9i shell element. The five-element patch proposed in [ 33], shown in Fig. 7, and its distorted form of Fig. 8 are used. The data is: $$E=10^{6}$$, $$\nu =0.25$$ and thickness $$h=0.001$$.
The stretching/shearing (membrane) and bending/twisting patch tests are performed as described in [ 27]; displacements and rotations are prescribed at external nodes and the computed values are checked at internal nodes. The transverse shear test is performed for the load case defined for a 9-node plate in [ 18], see “Shearing case” in Fig. 2b therein. The particular types of constant strains are verified as follows:
1.
For the stretching/shearing (membrane) patch test, the displacement and rotation fields are
(67)
This test verifies that $$\varepsilon _{xx}=\varepsilon _{yy}=0.001$$ and $$\varepsilon _{xy}=0.0005$$ while the bending/twisting strains and the transverse shear strains are equal to zero.
The drilling rotation $$\psi _z$$ is unconstrained, and we should obtain the solution $$\psi _z =0$$, where zero results from the linearized drilling RC, $$\psi _z=\omega \doteq {\frac{1}{2}}(u_{x,y} - u_{y,x})$$.
2.
For the bending/twisting patch test, the displacement and rotation fields are
(68)
This test verifies that the bending strains $$\kappa _{xx}=\kappa _{yy}=0.001$$ and the twisting strain $$\kappa _{xy} = -0.0005$$ while the membrane strains and the transverse shear strains are equal to zero. Note that passing this test is not required for consistency; it is merely a test of ability to model constant bending/twisting strains.
3.
For the transverse shear patch test, the uniform transverse force $$P_z =1$$ is applied at the right boundary, while the left boundary is clamped. Rotations $$\psi _x$$ and $$\psi _y$$ are set to zero at all nodes of the mesh to prevent bending moments from developing. The analytical solution is
\begin{aligned} u_z = 0.025 \ x, \end{aligned}
(69)
and at the right boundary $$u_z = 0.006$$. This test verifies that the transverse shear strains are: $$\varepsilon _{xz} = 0.0125$$ and $$\varepsilon _{yz} = 0$$.
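The quoted numbers are mutually consistent, which can be checked with elementary shell formulas (a hedged check: the $$0.24 \times 0.12$$ patch geometry of [ 33] and a shear correction factor of 5 / 6 are our assumptions, as neither is stated above):

```python
# Consistency check of the transverse shear patch test numbers; assumes the
# 0.24 x 0.12 patch geometry of [33] and a shear correction factor k = 5/6
# (both are assumptions here, not stated in the text).
E, nu, h = 1.0e6, 0.25, 0.001
G = E / (2.0*(1.0 + nu))            # shear modulus
k, width, length = 5.0/6.0, 0.12, 0.24
P_z = 1.0                           # total transverse force on the right boundary
gamma = P_z / (k * G * h * width)   # engineering shear strain du_z/dx
u_z_right = gamma * length          # tip deflection of the sheared strip
eps_xz = 0.5 * gamma                # tensorial transverse shear strain
```

Under these assumptions, the slope $$du_z/dx = 0.025$$, the boundary value $$u_z = 0.006$$ and $$\varepsilon _{xz} = 0.0125$$ of the text are all recovered.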
Table 2
Patch tests of MITC9i shell element: $$\log _{10} e_u$$, the logarithm of the relative error of Eq. ( 70)

Cases of node shifts | Stretch/shear | Bend/twist | Transverse shear
(A) Zero shifts (regular mesh) | − 15.7 | − 15.7 | − 15.8
(B) Arbitrary shifts of n. 25 | − 5.4 | − 5.2 | − 5.4
(C) Parallel shifts of n. 21-24 | − 5.4 | − 5.2 | − 5.3
(D) Perpendicular shifts of n. 21-24 | − 3.0 | − 1.1 | − 4.4
The standard patch tests are performed using the regular mesh of Fig. 7, which has straight element sides, the side nodes are placed at the middle positions and the central node of each element is located at the intersection of lines connecting opposite midside nodes.
We additionally perform the patch tests using the distorted meshes obtained from the regular mesh by shifts of the selected five nodes indicated in Fig. 8. The shifts are kept within the radius $$r=0.00673145$$, which is half of the minimal distance between nodes in the entire mesh. Pseudo-random real numbers are generated 100 times in the range $$[-r, r]$$, and the relative error is computed over all nodes,
\begin{aligned} e_u = \max \frac{\left| u_{ana} - u \right| }{u_{ana}}, \end{aligned}
(70)
where u is a selected component and the analytical solution $$u_{ana}$$ is computed using either Eq. ( 67), ( 68) or ( 69). The rounded logarithms of these errors are shown in Table 2, and we see that for the regular mesh (Case A), the errors are of order $$10^{-16}$$, which means that the MITC9i element passes all the standard patch tests. For the distorted meshes, the corrected shape functions are additionally activated, and the results can be summarized as follows:
1.
For the shifted central node 25 (Case B) and for parallel shifts of nodes 21, 22, 23, 24 (Case C), the errors are of order $$10^{-5}$$, so the element passes all the patch tests for these cases. Note that to obtain such a performance the standard shape functions are not sufficient and the corrected shape functions are required.
2.
For perpendicular shifts of nodes 21, 22, 23 and 24 (Case D), we observe a drop of accuracy in all three patch tests.
The biggest error occurs for bending/twisting, and this patch test is not passed. We checked bending and twisting separately, and they both contribute to the error. (To shed additional light on this case, we replaced the transverse shear part of the 9-node element by an assembly of the transverse shear parts of four 4-node elements, in which the ANS method of [ 3] was applied to eliminate locking. Then the error dropped to $$\log _{10} e_u = -3.1$$.) Because the bending/twisting patch test is not necessary for consistency, the convergence of the FE solution is possible despite failing this test.
Finally, note that the classical MITC9 element does not pass these patch tests even for the regular mesh (Case A), so the MITC9i element provides an improvement.
### 6.3 Cook’s membrane
In this test, we demonstrate that the use of the corrected shape functions reduces the sensitivity of the solution to the value of the regularization parameter $$\gamma$$ in the drilling RC term. The implementation of $$F^\mathrm{sh}_{drill}$$ of Eq. ( 18) is tested.
The membrane is clamped at one end (all degrees of freedom are restrained, including drilling rotations), while at the other end, the uniformly distributed shear load $$P=1$$ is applied, see Fig. 9. The data is as follows: $$E=1, \ \nu ={1}/{3}$$ and a thickness $$h=1$$.
Two meshes are used, the skew one of Fig. 9a, and the irregular one, where the central node 9 and three midside nodes (5, 7, 8) are shifted by d, see Fig. 9b. We use two mesh densities: $$1 \times 1$$ and $$16 \times 16$$ elements.
Table 3
Cook’s membrane: vertical displacement at node A

Element (regularization parameter $$\gamma$$) | skew $$1\times 1$$ | skew $$16\times 16$$ | irregular $$1\times 1$$ | irregular $$16\times 16$$

MITC9i, standard shape f.:
2D, no drill RC | 22.265 | 23.955 | 19.347 | 23.961
with drill RC, $$\gamma = G$$ | 22.256 | 23.946 | 19.025 | 23.955
with drill RC, $$\gamma = G/1000$$ | 22.256 | 23.955 | 19.346 | 23.961

MITC9i, corrected shape f.:
2D, no drill RC | 22.265 | 23.955 | 22.265 | 23.960
with drill RC, $$\gamma = G$$ | 22.256 | 23.946 | 22.256 | 23.955
with drill RC, $$\gamma = G/1000$$ | 22.265 | 23.955 | 22.265 | 23.960

9, $$\gamma = G/1000$$ | 19.644 | 23.949 | 12.736 | 23.949
9-AS, $$\gamma = G/1000$$ | 21.799 | 23.953 | 19.462 | 23.958
MITC9 | 22.209 | 23.955 | 16.570 | 23.958
S9R5 | 26.540 | 23.96 | 19.46

4-node elements:
4n HW14-S [ 47], $$\gamma = G$$ | 21.353 | 23.940
Ref. [ 14] | 23.810
We test the MITC9i element using: (a) two values of the regularization parameter: $$\gamma = G$$ and $$\gamma = G/1000$$, and (b) two types of shape functions, standard and corrected. The results are given in Table 3, and we summarize them as follows:
1.
Solutions for the coarse $$1 \times 1$$-element mesh are more sensitive to the value of $$\gamma$$ than those for the dense $$16 \times 16$$-element mesh. For both tested types of shape functions, the reduced value $$\gamma = G/1000$$ improves the accuracy.
2.
For the irregular mesh of Fig. 9b and both mesh densities, the use of the corrected shape functions is beneficial and the solution is almost insensitive to the node shifts. The MITC9i element is more accurate than the other elements.
We see that for the corrected shape functions, the solution is less sensitive to the value of regularization parameter $$\gamma$$, which is another positive effect of using these functions.
### 6.4 Shift parameters for parabolic cylindrical element
In this test, we evaluate the method M2 of calculating the shift parameters $$\theta$$ and $$\kappa$$ of the corrected shape functions for shells, which was proposed in Sect. 4.
The parabolic cylindrical element shown in Fig. 10a is obtained from a bi-unit square $$X,Y \in [-1,+1]$$ by shifting nodes 5, 7 and 9 by the vector $$[d_x,0,d_z]$$. (In the notation of Sect. 4, $$Z_5 =Z_7=Z_9=d_z$$.) The remaining nodes are at $$Z=0$$ and at the standard XY positions.
For this element, we first calculate the shift parameters $$\alpha , \beta , \gamma , \epsilon$$ and then $$\theta$$ and $$\kappa$$ using methods M1 and M2 of Sect. 4. For the initial values given by Eq. ( 51), the Newton method always converged in fewer than 5 iterations. The values of $$\theta$$ obtained for the horizontal shift $$d_x \in [0, \, 0.5]$$ and the elevation $$d_z \in [0, \, 0.7]$$ are plotted in Fig. 10b, and we see that:
1.
M1 always yields the value $$\theta = d_x$$, which is correct for flat elements and acceptable for shallow elements, when $$0 \le d_z \le 0.1$$, but is incorrect beyond this range.
2.
For M2, the value of $$\theta$$ varies with $$d_z$$: the bigger $$d_z$$, the more $$\theta$$ differs from $$d_x$$, which is correct because the arc-length changes. Note that, e.g., for $$d_z=0.7$$ and $$d_x =0.5$$, the error of M1 relative to M2 is about $$17.7 \%$$.
Concluding, the method M2 is appropriate for flat, shallow and non-flat elements, and can be used as a general method of calculating $$\theta$$ and $$\kappa$$ for 9-node shell elements.
### 6.5 Single curved element
In this test, we evaluate the method of computation of the shift parameters for curved sides proposed in Sect. 4, which constructs a symmetrical side curve when the midside node is shifted from the middle position.
The curved cantilever is clamped at one end and loaded at the other end by either “Load 1”, causing in-plane bending by a pair of in-plane forces, or “Load 2”, causing twisting by a pair of transverse forces, see Fig. 11. The material data and geometry are: $$E=1.0\times 10^{5}, \nu =0.0, h=0.025$$, inner radius = 0.095, outer radius = 0.105, force $$P=0.1$$, as in [ 25]. One 9-node element is used, and its nodes are shifted as follows: (1) the midside nodes 6 and 8 by $$d=n*0.001$$ along the straight boundaries, (2) the midside nodes 5 and 7 by $$\phi =n*1^{\circ }$$ in the circumferential direction along the curved boundaries, and (3) the central node 9 in the radial direction by $$d=n*0.001$$, see Fig. 11.
The element shape for $$n=3$$ is shown in Fig. 12, and we see that the standard shape functions approximate the geometry very poorly, while the corrected ones do so much better.
The displacements at node 1 obtained for the increasing distortion parameter n are shown in Figs. 13 and 14. The curves are designated as follows: “standard” for the standard shape functions, while for the corrected shape functions: “CSF,CG” for the method of calculation of the shift parameter of [ 9], and “CSF,new” for the method of calculation of the shift parameter proposed in Sect. 4.
Next, we calculate the errors of the displacement at node 1 for the distortion parameter $$n=4$$ relative to the displacement for $$n=0$$, see Table 4. We see that the corrected shape functions are beneficial for both tested elements and significantly reduce the errors. The proposed method of calculating the shift parameter, “CSF,new”, is more accurate than the original method of [ 9], “CSF,CG”. It renders the MITC9i element practically insensitive to node shifts for both load cases.
### 6.6 Curved cantilever
In this test, we assess the effects of elements’ curvature, skew shape and a varying shell thickness h on the accuracy of a solution.
The curved cantilever is fixed at one end and loaded by a moment $$M_z$$ at the other, see Fig. 15. The data is as follows: $$E=2\times 10^5$$, $$\nu =0$$, width $$b= 0.025$$ and radius of curvature $$R=0.1$$. The FE mesh consists of 6 nine-node elements, which have either rectangular or skew shape, see [ 25]. The analytical solution for a curved beam is
\begin{aligned} u_y=\frac{M_z R^2}{EI} = 0.024, \qquad \psi _z=\frac{\pi M_z R}{2EI} = 0.377, \end{aligned}
(71)
where I is the moment of inertia.
The shell thickness is varied in $$h \in [ 10^{-2}, 10 ^{-5}]$$, and the external moment is assumed as $$M_z=\left( {R}/{h}\right) ^{-3}$$, so a solution of the linear problem should remain constant. For this range of h, we obtain the range of $$R/h \in [ 10, 10^4 ]$$ and $$L/h \in 2.6 \times [ 1, 10^4]$$, where L is a circumferential length of a single rectangular element.
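That the scaled moment indeed makes the linear solution independent of h can be verified directly from Eq. ( 71) (a stand-alone Python check of the arithmetic):

```python
import math

# Thickness-scaling check for Eq. (71): with M_z = (R/h)^(-3), the linear
# solution is independent of h.
E, nu, b, R = 2.0e5, 0.0, 0.025, 0.1

for h in (1e-2, 1e-3, 1e-4, 1e-5):
    I = b * h**3 / 12.0           # second moment of area of the cross-section
    M_z = (R / h)**(-3)           # thickness-scaled external moment
    u_y = M_z * R**2 / (E * I)    # reduces to 12/(E b R) for every h
    psi_z = math.pi * M_z * R / (2.0 * E * I)   # reduces to 6 pi/(E b R^2)
```

For every h in the tested range, $$u_y = 12/(E b R) = 0.024$$ and $$\psi _z = 6 \pi /(E b R^2) \approx 0.377$$, in agreement with Eq. ( 71).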
Note that for the rectangular mesh, the coefficients of the corrected shape functions are equal to zero, while for the skew mesh non-zero values are obtained; the values for the ‘middle’ elements ( $$\theta$$ and $$\kappa$$ are obtained by M2 of Sect. 4) are given in Table 5.
Table 4 Single curved element. Errors of displacement at node 1 (in %); the two column pairs correspond to the two load cases

| Shape functions | MITC9i | 9-AS | MITC9i | 9-AS |
|---|---|---|---|---|
| CSF,new | 0.04 | 0.04 | 0.06 | 4.4 |
| CSF,CG | 8.3 | 12.5 | 4.8 | 15.0 |
| Standard | 98.4 | 93.8 | 251.0 | 275.0 |
Table 5 Coefficients of corrected shape functions for skew mesh ( $$\times 10^{4}$$)

| Parameter | $$\alpha$$ | $$\beta$$ | $$\gamma$$ | $$\epsilon$$ | $$\theta$$ | $$\kappa$$ |
|---|---|---|---|---|---|---|
| Value | 0.35484 | 0.0 | − 0.24975 | 0.10912 | 0.15873 | 0.05455 |
The displacement $$u_y$$ at point A obtained by the linear analysis is shown in Fig. 16. The plots of the rotation $$\psi _z$$ are similar and are therefore not shown. We can conclude this test as follows:
1.
For the rectangular mesh, the solutions for all elements are close to the analytical value, and we provide for reference the curve designated “MITC9i,rectang”. The only exception is the element ‘9’, which locks severely; its curve is similar to the one shown for this element and the skew mesh.
2.
For the skew mesh,
a.
all the tested elements, except ‘9’, yield accurate solutions for $$R/h \le 100$$. Beyond this range, the accuracy of all elements worsens. The most accurate is S9R5, followed by MITC9i, but the difference between it and the next elements, 9-EAS11, 9-AS and 9-SRI, is small. The solution for MITC9 was obtained only for $$\log ({R}/{h})=1,2$$, see [ 29] p. 93.
b.
MITC9i behaves slightly better than the elements 9-AS and 9-SRI of [ 29]; this can be attributed to the corrected shape functions implemented in MITC9i. MITC9i also performs slightly better than 9-EAS11 when both use the corrected shape functions.
### 6.7 Pinched hemispherical shell with hole
In this test, the shell undergoes an almost in-extensional deformation and membrane locking can strongly manifest itself, see [ 4]. The elements are curved and we test two values of the regularization parameter $$\gamma$$ for the drilling rotation term.
A hemispherical shell with an $$18^\circ$$ hole is loaded by two pairs of equal but opposite external forces, applied in the plane $$z=0$$, along the 0X and 0Y axes, see Fig. 17. The data is as follows: $$E=6.825\times 10^{7}$$, $$\nu =0.3$$, $$R=10$$ and thickness $$h=0.04$$ or $$h=0.01$$. Note that for the smaller thickness, $$h/R=0.001$$. Because of the double symmetry, only a quarter of the hemisphere is modeled.
Linear analyses Two values of the regularization parameter $$\gamma$$ for the drilling RC term are tested, $$\gamma =G$$ and $$\gamma =G/1000$$, and the results for the $$8 \times 8$$ element mesh are given in Tables 6 and 7. A displacement at the point where a force is applied and in a direction of this force is reported. The conclusions are as follows:
1.
For both thicknesses, the reduced value $$\gamma =G/1000$$ yields more accurate results for all tested elements.
2.
For $$h=0.04$$ and both tested values of $$\gamma$$, MITC9i performs similarly to 9-EAS11.
We see that the solution for the basic element 9 locks severely, and techniques such as EAS, MITC, AS or SRI improve accuracy about threefold for $$h=0.04$$ and over 30 times for $$h=0.01$$.
Table 6 Pinched hemispherical shell with a hole. $$h=0.04$$. Linear analysis. $$-u_y \times 10^2$$

| Element | $$\gamma =G$$ | $$\gamma =G/1000$$ |
|---|---|---|
| MITC9i | 9.3582 | 9.3733 |
| 9 | 3.1370 | 3.1554 |
| 9-AS, 9-SRI | 9.3473 | |
| 9-SRI0 | 9.3352 | 9.3503 |
| 9-EAS11 | 9.3505 | 9.3776 |
| MITC9 | 8.5687 | |
| S9R5 | 9.3513 | |
| 4n HW47 $$64\times 64$$ | 9.3714 | |
| Ref. [ 27] | 9.4000 | |
Table 7 Pinched hemispherical shell with a hole. $$h=0.01$$. Linear analysis. $$-u_y$$

| Element | $$\gamma =G$$ | $$\gamma =G/1000$$ |
|---|---|---|
| MITC9i | 5.7834 | 5.8875 |
| 9 | 0.1677 | 0.1727 |
| 9-SRI0 | 5.7687 | 5.8722 |
| 9-EAS11 | 5.6916 | 5.8899 |
| 4n HW47 $$64\times 64$$ | 5.8924 | |
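The "threefold" and "over 30 times" improvement factors quoted above follow directly from the tabulated values (the $$\gamma =G$$ column, using 9-AS/9-SRI for $$h=0.04$$ and 9-SRI0 for $$h=0.01$$); a quick arithmetic check:

```python
# Ratio of displacements as a simple locking measure: improved element
# versus the basic element 9, values taken from Tables 6 and 7.
ratio_h004 = 9.3473 / 3.1370   # h = 0.04: about threefold improvement
ratio_h001 = 5.7687 / 0.1677   # h = 0.01: over 30-fold improvement
print(round(ratio_h004, 2), round(ratio_h001, 1))
```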
Non-linear analyses The non-linear analyses are performed using the Newton method and $$\Delta P =0.2$$. We use the mesh of $$8 \times 8$$ elements, unless indicated otherwise. The solution curves for the inward displacement under the force are shown in Fig. 18.
1.
The curves MITC9i, 9-SRI0 and 9-EAS11 are obtained for $$\gamma =G/1000$$. We see that they almost coincide with the solution for the 4-node element HW47.
2.
For MITC9i, we additionally provide the solution for $$\gamma =G$$ (“MITC9i,G”), which locks relative to the one for $$\gamma =G/1000$$. The solution for a $$16 \times 16$$-element mesh (“MITC9i, 16x16”) is softer than that for the $$8 \times 8$$-element mesh.
Concluding the linear and non-linear analyses, we see that the accuracy of the MITC9i shell element is very good compared to the 9-SRI0 element and that the reduced value $$\gamma =G/1000$$ for the drilling rotation term is beneficial in the case of curved elements.
### 6.8 Short C-beam
Drilling rotations are mainly introduced to allow for analyses of intersecting shells and in this example we have two 90-degree intersections.
A short C-beam is fully clamped at one end and loaded by a vertical force P at the other, see Fig. 19a. At the clamped end, displacements and rotations are constrained to zero, as proposed in [ 12]. The data is as follows: $$E=10^{7}$$, $$\nu =0.333$$, the thickness $$h=0.05$$. The web is modeled by $$18 \times 3$$ elements and each flange by $$18 \times 1$$ elements, so the total number of elements is 90. The vertical displacement $$u_z$$ at the point where the force P is applied is monitored.
The mesh is regular so coefficients of the corrected shape functions (CSF) are equal to zero in this example.
The linear solutions for $$P=1$$ and $$\gamma =G/1000$$ are given in Table 8. The reference value is computed in [ 13], using the $$(2+3+2)\times 9$$-element mesh of the 16-node CAM elements and $$\alpha _t =0.01$$. For comparison, we also provide a solution by 4-node HW47 and $$(2+6+2)\times 36$$-element mesh obtained in [ 48].
We see that the solutions for MITC9i, 9-AS, 9-SRI and 9-EAS11 are similar and slightly above the reference value. Besides, MITC9i is insensitive to the value of $$\gamma$$ and yields almost the same results for $$\gamma =G/1000$$ and $$\gamma =G$$. The solution by MITC9 is slightly too large. Surprisingly, the solution for the basic element 9 does not show much locking.
Table 8 Short C-beam. Linear solution. Vertical displacement

| Element | $$-u_z \times 10^3$$ |
|---|---|
| MITC9i | 1.1551 |
| 9 | 1.1902 |
| 9-AS, 9-SRI | 1.1556 |
| 9-SRI0 | 1.1557 |
| 9-EAS11 | 1.1559 |
| MITC9i, G | 1.1550 |
| MITC9 | 1.2839 |
| S9R5 | 1.1628 |
| 4n HW47 | 1.1537 |
| Ref. [ 13] | 1.1544 |
The non-linear solution computed using the arc-length method with the initial $$\Delta P=20$$ is shown in Fig. 20. All elements are tested for $$\gamma =G/1000$$. The curves for 9-AS and 9-SRI coincide with that for MITC9i and the solution for MITC9 is quite close to them. The 4-node HW47 element is slightly softer than MITC9i. Additionally, MITC9i is tested for $$\gamma =G$$, and the solution curves for both values of $$\gamma$$ almost coincide.
Note that the reference elements 9 and S9R5 behave erroneously: the solution for 9 is too stiff, while that for S9R5 is too flexible, compared also to the solutions from, e.g., [ 6, 12]. Moreover, S9R5 lost convergence at about $$u_z \approx - 2.7$$.
### 6.9 L-shaped plate
In this test we compare the convergence properties of the MITC9i element to the 9-node element 9-EAS11 and two 4-node elements, including the mixed/enhanced HW29 element of [ 48]. This example was also analyzed using quadrilateral and triangular shell elements in [ 38] and [ 10].
The L-shaped plate is clamped at one end and the in-plane force P is applied at the other end, see Fig. 21. The data is as follows: $$E=71{,}240$$, $$\nu =0.31$$, $$w=30$$, $$L=240$$, and thickness $$h=0.6$$. The value of the regularization parameter for the drilling RC term $$\gamma =G$$. A mesh of 17 nine-node elements (102 nodes) is used.
The solution of this problem has a bifurcation point at which an out-of-plane deformation occurs. We add a small out-of-plane load and solve the equilibrium problem using the arc-length method. The in-plane force P and the out-of-plane load $$P_3 = P/1000$$ are applied at point B, at which the out-of-plane displacement $$u_3$$ is monitored.
In this test, 10 steps are performed by the arc-length method for the initial load increment $$\Delta P = 0.5$$ and the requested number of iterations per step $$I_{req} = 20$$. The non-linear solutions are presented in Fig. 22. The reference curve was obtained in [ 48] using 64 4-node elements and $$\Delta P = 0.1$$.
We see in Fig. 22, that the longest steps were made by the 4-node HW29 element, next by the 9-node MITC9i and 9-EAS11, and the shortest ones by the 4-node EADG5A. The total number of Newton iterations used in 10 steps was as follows:
(a) 4-node elements: HW29 - 67, EADG5A - 90,
(b) 9-node elements: MITC9i - 91, 9-EAS11 - 91.
We see that the 4-node HW29 outperforms the other 4- and 9-node elements in this test. We recall that a treatment of the drilling RC was pivotal to improve convergence properties of this element, see [ 48]. In this element, we applied the Perturbed Lagrange method and the 3-parameter Lagrange multiplier, which yielded one spurious mode, for which we developed the $$\gamma$$-stabilization. Finally, we note that this analysis can be continued further without any problem, and, e.g., for MITC9i and 20 steps, we end up at the load level of about 130.
## 7 Final remarks
The improved nine-node shell element MITC9i with drilling rotations was developed and tested in this paper. The element is based on the Reissner–Mindlin kinematics, Green strain and the extended potential energy functional for the plane stress condition. Its formulation includes several improvements, which we summarize as follows:
1.
We propose to formulate the MITC method for the bending/twisting strains $$\varvec{\kappa }$$ and the transverse shear strains $$\varvec{\gamma }$$ using the ‘COVc’ components of Green strain, see Sect. 3, which differs from the classical MITC9 element. This modification enables the MITC9i element to pass the patch tests for the regular mesh of Fig. 7.
2.
The corrected shape functions are applied instead of the standard shape functions, see Sect. 4. They eliminate distortions of a local coordinate space caused by shifted nodes, which results in a better interpolation accuracy. These functions enable the MITC9i element to pass the patch tests for meshes distorted by parallel shifts of midside nodes and arbitrary shifts of the central node, see Fig. 8. The method of computation of the shift parameters of [ 9] was generalized from 2D to shells located in 3D space in Sect. 4. To compute the shift parameters of a central node, we propose the method based on the minimization of a square error, Eq. ( 50), which is suitable for flat, shallow and non-flat elements, see Sect. 6.4. Additionally, we propose an alternative method to calculate the shift parameters for midside nodes, Eq. ( 58), different from the one of [ 9]. It does not rely on the proportion of arc-lengths but uses a parametric equation of a parabola. In effect, a symmetric curved side is generated even for a shifted midside node, which improves accuracy when the shell boundary is curved and symmetric, see Sect. 6.5.
3.
The drilling rotation is incorporated using the drilling part of the Rotation Constraint equation, Eqs. ( 10) and ( 11). We found that the corrected shape functions reduce sensitivity of solutions to the value of regularization parameter $$\gamma$$ of the penalty method. Besides, we confirmed that the reduced value $$\gamma =G/1000$$ improves accuracy for curved or distorted element shapes, see Sects. 6.3 and 6.7.
Finally, the shell element MITC9i with drilling rotations not only passes the patch tests described above, but also attains very good accuracy in linear tests for coarse distorted meshes. In non-linear tests, see Sects. 6.7, 6.8 and 6.9 (we have also run several other tests, not reported in this paper), it is very robust and its convergence properties are good. Nonetheless, its radius of convergence is smaller than that of our best 4-node element HW29, see Sect. 6.9. Therefore, further studies are needed to fully compare its performance to other existing very good quadrilateral and triangular elements.
## Appendix
### Transformations between Cartesian and covariant components
For a shell, we usually assume that the director $$\mathbf{t}_3$$ is perpendicular to the reference surface and define the so-called normal basis, such that $$\mathbf{t}_3$$ is one of its vectors and the two other vectors are tangent to the reference surface. Various tangent vectors can be used: Cartesian $$\mathbf{t}_\alpha$$, natural $$\mathbf{g}_\alpha$$ and the co-vectors to the natural ones, $$\mathbf{g}^\alpha$$ ( $$\alpha =1,2$$), which implies various transformation rules for components of vectors and tensors. Below we derive the transformation rules for 2nd-rank tensors and their covariant and Cartesian components.
Relation between tangent vectors $$\mathbf{g}_\alpha$$ and $$\mathbf{t}_\alpha$$. The natural basis vectors $$\mathbf{g}_\alpha$$ can be decomposed in the Cartesian basis $$\{ \mathbf{t}_{\alpha } \}$$ as follows:
\begin{aligned} \mathbf{g}_{\alpha } = (\mathbf{g}_{\alpha } \cdot \mathbf{t}_1) \, \mathbf{t}_1 + (\mathbf{g}_{\alpha } \cdot \mathbf{t}_2) \, \mathbf{t}_2, \end{aligned}
(72)
in which $$(\mathbf{g}_{\alpha } \cdot \mathbf{t}_\beta )$$ are components of the Jacobian matrix
\begin{aligned} \mathbf{j}\doteq \left[ \frac{\partial S^{\alpha }}{\partial \xi ^{\beta }} \right] = \left[ \begin{array}{lr} \mathbf{g}_{1} \cdot \mathbf{t}_{1} \quad &{} \mathbf{g}_{2} \cdot \mathbf{t}_{1} \\ \mathbf{g}_{1} \cdot \mathbf{t}_{2} \quad &{} \mathbf{g}_{2} \cdot \mathbf{t}_{2} \end{array} \right] . \end{aligned}
(73)
Hence, we can rewrite Eq. ( 72) as
\begin{aligned} \left[ \begin{array}{c} \mathbf{g}_{1} \\ \mathbf{g}_{2} \\ \end{array} \right] = \mathbf{j}^T \ \left[ \begin{array}{c} \mathbf{t}_{1} \\ \mathbf{t}_{2} \\ \end{array} \right] \qquad \hbox {and obtain} \qquad \left[ \begin{array}{c} \mathbf{t}_{1} \\ \mathbf{t}_{2} \\ \end{array} \right] = \mathbf{j}^{-T} \ \left[ \begin{array}{c} \mathbf{g}_{1} \\ \mathbf{g}_{2} \\ \end{array} \right] . \nonumber \\ \end{aligned}
(74)
The second relation is used below in the form
\begin{aligned} \mathbf{t}_{1} = J^{-1}_{11} \ \mathbf{g}_{1} + J^{-1}_{21} \ \mathbf{g}_{2} ,\qquad \mathbf{t}_{2} = J^{-1}_{12} \ \mathbf{g}_{1} + J^{-1}_{22} \ \mathbf{g}_{2}, \end{aligned}
(75)
where $$J^{-1}_{\alpha \beta }$$ designates components of the inverse Jacobian matrix
\begin{aligned} \mathbf{j}^{-1} = \left[ \begin{array}{lr} \mathbf{g}^{1}\cdot \mathbf{t}_{1}\quad &{} \mathbf{g}^{1}\cdot \mathbf{t}_{2}\\ \mathbf{g}^{2}\cdot \mathbf{t}_{1} \quad &{} \mathbf{g}^{2}\cdot \mathbf{t}_{2} \end{array} \right] , \end{aligned}
(76)
and $$\mathbf{g}^\alpha$$ are the co-basis vectors.
Transformation of in-plane and transverse components. Transformation formulas for covariant and Cartesian components of the second-rank tensor $$\mathbf{A}$$ can be obtained in the following way.
In-plane ( $$\alpha \beta$$) components First, we calculate the Cartesian components $$A^C_{\alpha \beta } = \mathbf{t}_{\alpha } \cdot (\mathbf{A}\mathbf{t}_{\beta })$$ and, next, transform them using Eq. ( 75). Finally, we identify the covariant components $$A_{\alpha \beta } \doteq \mathbf{g}_\alpha \cdot (\mathbf{A}\ \mathbf{g}_\beta )$$ of the tensor representation $$\mathbf{A}= A_{\alpha \beta } \ \mathbf{g}^\alpha \otimes \mathbf{g}^\beta$$. The transformations are performed as follows:
\begin{aligned} A_{11}^C&\doteq \mathbf{t}_1 \cdot (\mathbf{A}\,\mathbf{t}_1) = (J^{-1}_{11})^2 \, A_{11} + (J^{-1}_{21})^2 \, A_{22} + J^{-1}_{11} J^{-1}_{21} \left( A_{12} + A_{21} \right) , \\ A_{22}^C&\doteq \mathbf{t}_2 \cdot (\mathbf{A}\,\mathbf{t}_2) = (J^{-1}_{12})^2 \, A_{11} + (J^{-1}_{22})^2 \, A_{22} + J^{-1}_{12} J^{-1}_{22} \left( A_{12} + A_{21} \right) , \\ A_{12}^C&\doteq \mathbf{t}_1 \cdot (\mathbf{A}\,\mathbf{t}_2) = J^{-1}_{11} J^{-1}_{12} \, A_{11} + J^{-1}_{21} J^{-1}_{22} \, A_{22} + J^{-1}_{11} J^{-1}_{22} \, A_{12} + J^{-1}_{12} J^{-1}_{21} \, A_{21} , \\ A_{21}^C&\doteq \mathbf{t}_2 \cdot (\mathbf{A}\,\mathbf{t}_1) = J^{-1}_{11} J^{-1}_{12} \, A_{11} + J^{-1}_{21} J^{-1}_{22} \, A_{22} + J^{-1}_{12} J^{-1}_{21} \, A_{12} + J^{-1}_{11} J^{-1}_{22} \, A_{21} , \end{aligned}
where each $$\mathbf{t}_\alpha$$ was expanded in the natural basis using Eq. ( 75).
We can rewrite the above formulas in the vector-matrix form
\begin{aligned} \mathbf{A}^{C}_v = \mathbf{T}^* \ \mathbf{A}_{\xi v}, \end{aligned}
(77)
where
\begin{aligned} \mathbf{T}^*\doteq & {} \left[ \begin{array}{cccc} (J^{-1}_{11})^2 \quad &{} (J^{-1}_{21})^2 \quad &{} J^{-1}_{11} J^{-1}_{21} \quad &{} J^{-1}_{11} J^{-1}_{21} \\ (J^{-1}_{12})^2 \quad &{} (J^{-1}_{22})^2 \quad &{} J^{-1}_{12} J^{-1}_{22} \quad &{} J^{-1}_{12} J^{-1}_{22} \\ J^{-1}_{11} J^{-1}_{12} \quad &{} J^{-1}_{21}J^{-1}_{22} \quad &{} J^{-1}_{11} J^{-1}_{22} \quad &{} J^{-1}_{12} J^{-1}_{21} \\ J^{-1}_{11} J^{-1}_{12} \quad &{} J^{-1}_{21}J^{-1}_{22} \quad &{} J^{-1}_{12} J^{-1}_{21} \quad &{} J^{-1}_{11} J^{-1}_{22} \\ \end{array} \right] , \\&\mathbf{A}^{C}_v \doteq \left[ \begin{array}{c} A_{11}^C \\ A_{22}^C \\ A_{12}^C \\ A_{21}^C \\ \end{array} \right] ,\quad \mathbf{A}_{\xi v} \doteq \left[ \begin{array}{c} A_{11} \\ A_{22} \\ A_{12} \\ A_{21} \\ \end{array} \right] . \end{aligned}
Alternatively, in terms of $$2 \times 2$$ matrices of components $$\mathbf{A}^{C}$$ and $$\mathbf{A}_{\xi }$$, we have
\begin{aligned} \mathbf{A}^{C} = (\mathbf{j}^{-1})^T \ \mathbf{A}_{\xi } \ \mathbf{j}^{-1}, \end{aligned}
(78)
and the covariant components in terms of Cartesian ones,
\begin{aligned} \mathbf{A}_{\xi } = \mathbf{j}^T \mathbf{A}^{C} \mathbf{j}. \end{aligned}
(79)
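Eqs. ( 77)-( 79) can be verified numerically; a sketch with NumPy (the Jacobian and tensor values are arbitrary, chosen only so that $$\mathbf{j}$$ is invertible):

```python
# Check that the vector-matrix form A^C_v = T* A_xi_v of Eq. (77) agrees
# with the matrix form A^C = j^{-T} A_xi j^{-1} of Eq. (78), and that
# Eq. (79) recovers the covariant components.
import numpy as np

j = np.array([[1.3, 0.4],
              [0.2, 0.9]])          # an arbitrary invertible Jacobian, Eq. (73)
Ji = np.linalg.inv(j)               # components J^{-1}_{ab}, Eq. (76)

A_xi = np.array([[0.7, 0.3],
                 [0.5, 1.1]])       # covariant components A_{ab} (not symmetric)

# Matrix form, Eq. (78)
A_C = Ji.T @ A_xi @ Ji

# Vector-matrix form, Eq. (77), with T* assembled from J^{-1} components
T = np.array([
    [Ji[0, 0]**2,        Ji[1, 0]**2,        Ji[0, 0]*Ji[1, 0], Ji[0, 0]*Ji[1, 0]],
    [Ji[0, 1]**2,        Ji[1, 1]**2,        Ji[0, 1]*Ji[1, 1], Ji[0, 1]*Ji[1, 1]],
    [Ji[0, 0]*Ji[0, 1],  Ji[1, 0]*Ji[1, 1],  Ji[0, 0]*Ji[1, 1], Ji[0, 1]*Ji[1, 0]],
    [Ji[0, 0]*Ji[0, 1],  Ji[1, 0]*Ji[1, 1],  Ji[0, 1]*Ji[1, 0], Ji[0, 0]*Ji[1, 1]],
])
A_xi_v = np.array([A_xi[0, 0], A_xi[1, 1], A_xi[0, 1], A_xi[1, 0]])
A_C_v = T @ A_xi_v

# Both forms must agree, and Eq. (79) must invert Eq. (78)
assert np.allclose(A_C_v, [A_C[0, 0], A_C[1, 1], A_C[0, 1], A_C[1, 0]])
assert np.allclose(j.T @ A_C @ j, A_xi)
```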
Transverse ( $$\alpha 3$$ and $$3 \alpha$$) components The derivation is analogous to that for the in-plane components. First, we calculate the Cartesian components $$A^C_{\alpha 3} = \mathbf{t}_{\alpha } \cdot (\mathbf{A}\mathbf{t}_{3})$$ and, next, transform them using Eq. ( 75). Finally, we identify the covariant components $$A_{\alpha 3} \doteq \mathbf{g}_\alpha \cdot (\mathbf{A}\ \mathbf{g}_3)$$ of the tensor representation $$\mathbf{A}= A_{\alpha 3} \ \mathbf{g}^\alpha \otimes \mathbf{g}^3$$. (Note that for the normal basis, $$\mathbf{g}_3=\mathbf{g}^3 = \mathbf{t}_3$$.) The transformations are performed as follows:
\begin{aligned} A_{13}^C&\doteq \mathbf{t}_1 \cdot (\mathbf{A}\,\mathbf{t}_3) = (J^{-1}_{11} \, \mathbf{g}_{1} + J^{-1}_{21} \, \mathbf{g}_{2}) \cdot (\mathbf{A}\,\mathbf{g}_{3}) = J^{-1}_{11} \, A_{13} + J^{-1}_{21} \, A_{23}, \\ A_{23}^C&\doteq \mathbf{t}_2 \cdot (\mathbf{A}\,\mathbf{t}_3) = (J^{-1}_{12} \, \mathbf{g}_{1} + J^{-1}_{22} \, \mathbf{g}_{2}) \cdot (\mathbf{A}\,\mathbf{g}_{3}) = J^{-1}_{12} \, A_{13} + J^{-1}_{22} \, A_{23}. \end{aligned}
We can rewrite the above formula in the vector-matrix form
\begin{aligned} \left[ \begin{array}{c} A_{13}^C \\ A_{23}^C \\ \end{array} \right] =\mathbf{j}^{-T} \left[ \begin{array}{c} A_{13} \\ A_{23} \\ \end{array} \right] . \end{aligned}
(80)
The formulas for $$A^C_{3\alpha }$$ components are analogous.
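The transverse transformation, Eq. ( 80), admits the same kind of numerical check (again with an arbitrary invertible Jacobian):

```python
# Check of the transverse-component transformation, Eq. (80):
# [A^C_13, A^C_23]^T = j^{-T} [A_13, A_23]^T.
import numpy as np

j = np.array([[1.3, 0.4],
              [0.2, 0.9]])      # arbitrary invertible Jacobian, Eq. (73)
Ji = np.linalg.inv(j)           # components J^{-1}_{ab}

A_t = np.array([0.6, -0.2])     # covariant components A_13, A_23
A_t_C = Ji.T @ A_t              # Cartesian components, Eq. (80)

# Component-wise, as in the derivation above
assert np.isclose(A_t_C[0], Ji[0, 0]*A_t[0] + Ji[1, 0]*A_t[1])
assert np.isclose(A_t_C[1], Ji[0, 1]*A_t[0] + Ji[1, 1]*A_t[1])
```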
1. ## Factoring Question
I am asked to factor the equation
$4x^2 - 4x - 15 = 0$
To my knowledge, you're supposed to find a pair of factors that multiply to give -15 and add to give -4, correct?
I can't find a pair that gives both those answers.
I'm stuck.
2. Originally Posted by cantstop01
I am asked to factor the equation
$4x^2 - 4x - 15 = 0$
To my knowledge, you're supposed to find a pair of factors that multiply to give -15 and add to give -4, correct?
I can't find a pair that gives both those answers.
I'm stuck.
$4x^2 - 4x - 15 = (2x-5)(2x+3)$
3. Thanks, looks like the answers are $x = 5/2$ and $x = -3/2$
I would like to ask one more thing.
What is the reasoning behind the factors not actually adding up to -4, yet still giving the correct answer?
4. Originally Posted by cantstop01
Thanks, looks like the answer is $x = 5/2$ and $x = -3/2$
I would like to ask one more thing.
What is the reasoning behind the factors not actually adding up to -4, yet still giving the correct answer?
that "technique" only works if the coefficient of $x^2$ is 1. When the leading coefficient $a \neq 1$, you instead look for two numbers that multiply to $a \cdot c$ and add to $b$.
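To spell that out for this problem, here is a sketch of the usual "ac method" (factoring by grouping), one standard way to handle $a \neq 1$:

```latex
% ac method for 4x^2 - 4x - 15: find two numbers that multiply to
% ac = 4(-15) = -60 and add to b = -4. These are -10 and 6; split
% the middle term and factor by grouping:
\begin{aligned}
4x^2 - 4x - 15 &= 4x^2 - 10x + 6x - 15 \\
               &= 2x(2x - 5) + 3(2x - 5) \\
               &= (2x - 5)(2x + 3).
\end{aligned}
```

Setting each factor to zero then gives $x = 5/2$ and $x = -3/2$.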