Adding $\ell_1 = 4$ to $\ell_2 = 2$. As an example, we count the states for each value of total $m$ (the $z$ component quantum number) if we add $\ell_1 = 4$ to $\ell_2 = 2$. The pairs below are $(m_1, m_2)$ values:

Total m = 6: (4,2)
Total m = 5: (3,2) (4,1)
Total m = 4: (2,2) (3,1) (4,0)
Total m = 3: (1,2) (2,1) (3,0) (4,-1)
Total m = 2: (0,2) (1,1) (2,0) (3,-1) (4,-2)
Total m = 1: (-1,2) (0,1) (1,0) (2,-1) (3,-2)
Total m = 0: (-2,2) (-1,1) (0,0) (1,-1) (2,-2)
Total m = -1: (1,-2) (0,-1) (-1,0) (-2,1) (-3,2)
Total m = -2: (0,-2) (-1,-1) (-2,0) (-3,1) (-4,2)
Total m = -3: (-1,-2) (-2,-1) (-3,0) (-4,1)
Total m = -4: (-2,-2) (-3,-1) (-4,0)
Total m = -5: (-3,-2) (-4,-1)
Total m = -6: (-4,-2)

Since the highest $m$ value is 6, we expect to have a $j = 6$ state, which uses up one state for each $m$ value from -6 to +6. Now the highest $m$ value left is 5, so a $j = 5$ state uses up a state at each $m$ value between -5 and +5. Similarly we find $j = 4$, $j = 3$, and $j = 2$ states; the $j = 2$ state uses up the last remaining state at each value of $m$ between -2 and +2, and this uses up all the states. So we find $j = 6, 5, 4, 3, 2$ in this case: $|\ell_1 - \ell_2| \le j \le \ell_1 + \ell_2$, and $j$ takes on every integer value between the limits. This makes sense in the vector model. Jim Branson 2013-04-22
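The counting argument is easy to mechanize; as a check, here is a small sketch (not part of the notes) that tallies the $(m_1, m_2)$ pairs and peels off one multiplet at a time:

```python
from collections import Counter

l1, l2 = 4, 2
counts = Counter(m1 + m2
                 for m1 in range(-l1, l1 + 1)
                 for m2 in range(-l2, l2 + 1))
js = []
while counts:
    j = max(counts)                # highest remaining m gives the next j
    js.append(j)
    for m in range(-j, j + 1):     # that multiplet uses one state per m
        counts[m] -= 1
    counts = +counts               # drop entries that reached zero
print(js)                          # [6, 5, 4, 3, 2]
```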
# The sum of power-law-distributed random variables. Let $X_i$ be power-law-distributed random variables with density $f(x)=C_0x^{-k}$ where $1<k\le3$. What is the exponent $k$ of the variable $$X=\sum_{i=1}^N X_i \ ?$$ My doubt comes from the fact that $X$, as a sum of i.i.d. variables, has to tend to an $\alpha$-stable distribution. The exponent $\alpha$ of a generic $\alpha$-stable distribution can lie only in the range $(0,2]$, which implies $1<k\le 3$. But if we try to use the rule of the Fourier transform for the sum of i.i.d. random variables (namely, that the Fourier transform of the convolution is the product of the Fourier transforms) on a power law, we can get an arbitrarily big exponent $k$ (can't we?). So at some point my reasoning is wrong. I guess that the mistake is in the convolution of the power-law distributions.
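A quick Monte Carlo sketch (my own check, not part of the question) suggests where the reasoning breaks: the tail exponent of the sum stays at $k$, i.e. the log-log slope of the survival function stays near $-(k-1)$ rather than steepening. Here with $k = 2.5$ (so $\alpha = k - 1 = 1.5$):

```python
import numpy as np

k, N, samples = 2.5, 10, 200_000
rng = np.random.default_rng(1)
# inverse-CDF sampling: F(x) = 1 - x^{-(k-1)} on [1, inf) gives
# X = (1 - U)^{1/(1-k)} with density proportional to x^{-k}
U = rng.random((samples, N))
S = ((1 - U) ** (1 / (1 - k))).sum(axis=1)
# slope of log P(S > s) against log s in the tail
s = np.logspace(np.log10(np.quantile(S, 0.9)), np.log10(S.max() / 2), 20)
surv = np.array([(S > t).mean() for t in s])
print(np.polyfit(np.log(s), np.log(surv), 1)[0])   # close to -(k-1) = -1.5
```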
Introduction

If you'd like to set up Apache Spark to experiment with, but you don't want to use a premade ISO or set up your own, then I'm going to show you how. This configuration will be a minimal one using a Linux operating system; I'm going to use Ubuntu, so change the install commands based on your package manager. I'm going to assume that you've set up the hosts, their networking, and have some way to configure and deploy them. There are options like Puppet or Salt but I'll be avoiding those and leave them up to you.

Installation

I have a script that does all of this, but we're going to go over it so you understand each part, and then I'll attach the bash script at the end. To start with, we're going to need Java since Spark is dependent on it. Login to each host - Master and Slaves - and you'll want to run:

```
apt update && apt install openjdk-8-jre openjdk-8-jdk -y
```

Remember to adjust this for your own Linux distribution. Figure out which directory you want to install Apache Spark into; normally you'd use /opt, so that's what we're going to be using. Download the archive of the files from the website:

```
wget -O /opt/spark-2.4.6-bin-hadoop2.7.tgz https://downloads.apache.org/spark/spark-2.4.6/spark-2.4.6-bin-hadoop2.7.tgz
```

If they obsolete the download link then you can find the versions online here. I'm using the 2.4.6 version pre-built for Apache Hadoop 2.7 for this, but feel free to experiment. Once the file is downloaded, you'll want to unpack the archive:

```
tar -xzvf /opt/spark-2.4.6-bin-hadoop2.7.tgz -C /opt
```

Now you'll have the files ready for usage. Next you'll want to add the environment variables so that Linux will know where to look for the binary when you call it:

```
echo -e "\nexport SPARK_HOME=/opt/spark-2.4.6-bin-hadoop2.7\nexport PATH=/opt/spark-2.4.6-bin-hadoop2.7/bin:$PATH" | tee -a ~/.bashrc
export SPARK_HOME=/opt/spark-2.4.6-bin-hadoop2.7
export PATH=/opt/spark-2.4.6-bin-hadoop2.7/bin:$PATH
```

Do this for all the hosts that you'll need to run Spark on.

Configuration Files

It is best to create a master copy of these next few configuration files to copy to each host in turn. This way you only need to edit each file once and then copy them all to the appropriate hosts. For your master server, you'll need to update two files: spark-defaults.conf and spark-env.sh. Both of these should be found inside the conf directory of your Spark home. Make sure you make a backup of them before you do anything else:

```
cp spark-defaults.conf spark-defaults.conf.bkp
cp spark-env.sh spark-env.sh.bkp
```

Next you'll want to open them and add the master information. For the defaults file, simply uncomment and modify the line to look like:

```
spark.master spark://<host or IP Addr>:7077
```

... and in the env file you'll want to look for the line:

```
SPARK_MASTER_HOST='<host or IP Addr>'
```

You shouldn't need to change anything else in this file so long as you have full control of the systems you're using. In my case, I happen not to, and changed where shuffle files and worker logs are written to avoid the operating system filling up and crashing the host. If you need to worry about those, then update the lines SPARK_LOCAL_DIRS, SPARK_WORKER_DIR and SPARK_PID_DIR to point to somewhere else on the system which won't fill up the partition. Next you'll want to collect the names or IP addresses of all the hosts in your cluster and add them to the slaves file in the conf directory, just like where the others are. Make sure to test the connectivity of your hosts using ping or something else to confirm they can actually talk to one another!
Datastore

Now we're going to work around not having a Hadoop cluster. How this works is that we're going to create a shared folder on all of the hosts which references the Master as the source of truth. First, create a folder in your Spark home to hold the data:

```
mkdir $SPARK_HOME/Data
```

Go ahead and create a file in here for future usage:

```
touch turtles
```

Next you'll go ahead and install a package called sshfs which is used to remotely mount a folder from one host to another:

```
sudo apt install sshfs
```

Repeat this for all the hosts in your cluster. Once that is done, you'll connect the slaves to the master using:

```
sshfs <username>@<master-address>:/opt/spark-2.4.6-bin-hadoop2.7/Data /opt/spark-2.4.6-bin-hadoop2.7/Data
```

Now you should be able to see the turtles file we created earlier if you list the files in the Data directory:

```
ls Data
```

If you see the file then feel free to move on! If not, then double back and troubleshoot the connection between those two computers. Could also be permissions or something like that as well!

Connect the Dots, Start the Services

Now that we've got it all connected together, go ahead and run the appropriate commands on the master and slaves to start them all up:

```
# master:
$SPARK_HOME/sbin/start-master.sh
# slaves:
$SPARK_HOME/sbin/start-slave.sh spark://<master-Addr>:7077
```

Success! Now try and run it on the master:

```
username@HOST:~# $SPARK_HOME/bin/pyspark
Python 2.7.12 (default, Apr 15 2020, 17:07:12)
[GCC 5.4.0 20160609] on linux2
20/10/13 05:19:50 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.4.6
      /_/

Using Python version 2.7.12 (default, Apr 15 2020 17:07:12)
SparkSession available as 'spark'.
>>>
```

That should give you the above. Now you can transfer data into that directory and read from it using whichever spark.read.* function you need. Note that copying Big Data into that directory is not a good idea. If you're looking at terabytes or petabytes worth of data then you'll definitely need a real cluster. But I've already made some interesting observations in this limited environment.
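As a quick smoke test (my own sketch; example.csv is a placeholder for any file you'd drop into the shared Data directory), run this inside the pyspark shell started above, where `spark` is already defined:

```python
df = spark.read.csv("/opt/spark-2.4.6-bin-hadoop2.7/Data/example.csv",
                    header=True, inferSchema=True)
df.printSchema()
print(df.count())
```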
## Introduction to real analysis. (English) Zbl 0856.26001 Englewood Cliffs, NJ: Prentice Hall. xiii, 368 p. (1996). This is an introductory-level real analysis text for students who have completed a calculus course. (In the U.S. the latter is typically manipulative and proof-free.) It is meant to be the student’s entrée into mathematics as a logical enterprise, so the construction, analysis and dissection of proofs is a continuing and central theme. Naturally, considerable effort is devoted to securing the real numbers; they are not simply axiomatically summoned. The idea of a complete, archimedean-ordered field evolves, and all the major highways and feeder roads are carefully constructed. The myriad equivalent manifestations of completeness (compactness, connectedness, ...) are collected in what the author calls The Big Theorem, its order of battle in a useful diagram called The Big Picture. This enterprise is motivated (on p. 1) by The Big Question – how are the real numbers different from the rational numbers? These highway signs reappear regularly to keep the reader on course. The existence question itself is settled in a brief final chapter via cuts ($\mathbb{Q}$ considered as given), but the (better) alternative via equivalence classes of Cauchy sequences in $\mathbb{Q}$ is discussed in an exercise, as are the constructions of $\mathbb{Z}$ from $\mathbb{N}$ and of $\mathbb{Q}$ from $\mathbb{Z}$. (But Peano and the genesis of $\mathbb{N}$ are not mentioned.) Chapter 1 is on logic, connectives, quantifiers, sentence-writing, set-theoretic operations and proofs. There’s lots of good advice and examples here. (But why is the possibility of a third truth-value, “matter-of-opinion”, illustrated with a question: “It’s a nice day, isn’t it?”?) There is valuable discussion on placement of quantifiers; but this important theme could have been profitably reprised at a higher level following T. Whaley and J. Williford [Am. Math. Mon. 87, 745-788 (1980; Zbl 0475.26004)]. This chapter is a miniature of D. Solow’s successful book “How to read and do proofs: an introduction to mathematical thought processes” (2nd ed. 1990; Zbl 0711.00001), which it references. Chapter 2 deals with equivalence relations and cardinality ($\mathbb{N}$ and $\mathbb{Q}$ considered given). One of its exercises is an interesting variant of Richard’s Paradox. Chapter 3 is on the algebraic structure of fields ($\mathbb{R}$ used freely as an example). Chapter 4: orderings, proof by induction, ordered fields, intervals and neighborhoods. As The Big Picture is filled out in Part II (Chapters 5-12), topological issues, convergent sequences, and continuous functions are introduced and analysis begun. Actually the general concept of a topology on a set is defined, and this enables the author to later streamline the treatment of continuity on subsets of $\mathbb{R}$ that are not open, by employing the relative topology (for which, however, his notation is a bit unorthodox). Construction and properties of the Cantor set appear as a multi-part exercise, as do the one-point and two-point compactifications of $\mathbb{R}$. Part III is called “Topics in Calculus” and its six chapters deal with numerical series, uniform continuity, power series and the topology of $C[0,1]$, differentiation and integration (Riemann integration based on refinement, with no mention made of the alternative mesh theory).
As in earlier chapters, some facts known from calculus but not yet rigorously grounded enter into exercises and examples. Other topics, some treated as exercises with hints, include Raabe’s test, Riemann’s rearrangement theorem, Mertens’ theorem, the Gregory-Leibniz series, infinite products and bilateral series, the Arzelà-Ascoli theorem, and functions of bounded variation. The central problem of analysis, interchanging two limit processes, is the theme of Chapter 18; included are double series, differentiation and integration of sequential limits and of integrals involving a parameter. Here we also find a useful, not so well-known theorem of E. H. Moore. Its intricate proof would have been the quintessential vehicle for the author’s considerable pedagogical skills, but it is withheld. Part IV is selected short subjects: discontinuities of monotone functions, the Cantor-Lebesgue singular function and mention of Baire category and measure-zero sets. Baire’s theorem is not proved. Lebesgue’s criterion for Riemann integrability is stated (but is missing the essential boundedness hypothesis). Other unusual features of the book: Numerous proofs are built from both ends, like the chunnel, with questions asked and methods evaluated and rejected en route to closure. The author is admirably punctilious about quantifiers, and avoids the unfortunate adjective “non-decreasing” (as Dieudonné pointed out, $\sin x$ is nondecreasing – it’s not a decreasing function). Jensen’s integral inequality, Newton’s method, the Antipodensatz for the circle, continued fractions and the contraction mapping principle (with an application to the iterates of the cosine function) appear as exercises. So, without warning labels, do the tower of powers problem ($x_{n + 1} := x^{x_n}$) and the equality $\varlimsup \cos(n) = 1$. Being hintless, these seem beyond the anticipated reader of this book. (One needs to know that infinite subsemigroups of the circle are dense and that $\{e^{in} : n \in \mathbb{N}\}$ is such a semigroup, due to the fact – not mentioned in the text – that $\pi$ is irrational.) Altogether this is an excellent, “user-friendly” book, which the reviewer would happily prescribe as a course text. The writing style is unhurried, lively, informal (as Landau put it, the reader is “geduzt”), at times colloquial, with some good word plays (e.g., “a new slant on the derivative”). This said, balance requires the reviewer to offer some (mild) criticisms. There are occasional linguistic lapses: “Alternate” is used where “alternative” is meant. (“Real and complex analysis are offered in alternate semesters, but alternative courses are available.”) A cautious and uncertain “conditional imperative” [“If $P$, then prove that...”] sometimes displaces the unqualified injunction, more proper in mathematics, to prove a conditional [“Prove that, if $P$, then...”]. Due either to oversight or misguided “political correctness” (an American dementia), the author favors plural pronouns with singular subjects like “the student”, perhaps to the confusion of international readers. Theorem 19.6 affirms that “Any (sic) function can have only countably many jump discontinuities,” and its proof contains the expression “$|f(x) - \lim_{x \to a^+} f(x)| < \varepsilon/2$ when $x \in (a, a + \delta)$”. Although formally correct, the dual role of $x$ here will needlessly waylay the reader’s train of thought.
There is no symbol index (the reader is challenged to find the author’s somewhat unconventional definition of the symbol $\hookrightarrow$), and page references in the general index are red-shifted – by an unknown but strongly monotone function – rendering it almost useless. (The reviewer has learned that this problem was generated by the editors and will be remedied in a second printing.) The greatest resource of mathematical pedagogy, the American Mathematical Monthly, is cited only twice, the Mathematics Magazine not at all. Relevant articles there would greatly help the reader with the aforementioned tower of powers, as well as with the author’s exercise on the Kuratowski closure-and-complementation problem [see J. Berman and S. L. Jordan, Am. Math. Mon. 82, 841-842 (1975; Zbl 0312.54003) and J. H. Fife, Math. Mag. 64, No. 3, 180-182 (1991; Zbl 0735.54001)]. The author offers some good examples of erroneous “proofs,” but many more could profitably have been culled from the “Flim-Flam” section of The College Mathematics Journal. B. R. Gelbaum and J. M. H. Olmsted’s indispensable classic “Counterexamples in analysis” (1964; Zbl 0121.28902), which every beginning student of analysis needs to make the acquaintance of, is not mentioned by the author. In discussing $x^n$ ($0 \leq x \leq 1$) and uniform convergence, the author rightly reminds the reader that despite the prevalent phrase “uniformly Cauchy”, the name of this “brilliant” mathematician is not an adjective. The irony of Cauchy’s having missed the uniformity concept and having erroneously claimed that pointwise convergence preserves continuity might also have been noted here. And the “bland traditional terminology Weierstrass $M$-test” could have been demystified by reference to “Majorant”. The transcendental functions are not developed (a significant lacuna, the reviewer feels), nor is l’Hospital’s rule mentioned. ### MSC: 26-01 Introductory exposition (textbooks, tutorial papers, etc.) pertaining to real functions ### Keywords: textbook; real analysis
# 2-EXPTIME

In computational complexity theory, the complexity class 2-EXPTIME (sometimes called 2-EXP) is the set of all decision problems solvable by a deterministic Turing machine in $O(2^{2^{p(n)}})$ time, where $p(n)$ is a polynomial function of $n$. In terms of DTIME, $$\mbox{2-EXPTIME} = \bigcup_{k\in \mathbb{N}} \mbox{DTIME}\left(2^{2^{n^{k}}}\right).$$ We know P ⊆ NP ⊆ PSPACE ⊆ EXPTIME ⊆ NEXPTIME ⊆ EXPSPACE ⊆ 2-EXPTIME ⊆ ELEMENTARY. 2-EXPTIME can also be reformulated as the space class AEXPSPACE, the problems that can be solved by an alternating Turing machine in exponential space. This is one way to see that EXPSPACE ⊆ 2-EXPTIME, since an alternating Turing machine is at least as powerful as a deterministic Turing machine.[1] 2-EXPTIME is one class in a hierarchy of complexity classes with increasingly higher time bounds. The class 3-EXPTIME is defined similarly to 2-EXPTIME but with a triply exponential time bound $2^{2^{2^{n^{k}}}}$. This can be generalized to higher and higher time bounds.

## 2-EXPTIME-complete problems

Generalizations of many fully observable games are EXPTIME-complete. These games can be viewed as particular instances of a class of transition systems defined in terms of a set of state variables and actions/events that change the values of the state variables, together with the question of whether a winning strategy exists. A generalization of this class of fully observable problems to partially observable problems lifts the complexity from EXPTIME-complete to 2-EXPTIME-complete.[2]
# Double Check Probability for Permutation

I have to find the sample space and a few probabilities here, and I am wondering if I am going down the right track. If I am incorrect, then please point me in the right direction, but do not give me the full process. There are 5 students that must finish 6 problems. Each problem is only worked on by one student. A teacher assigns a student to each problem. Find the sample space: a total of $5^6$ ways to assign problems. Any of the 5 students can do each of the problems, hence $5^6$ ways.
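A brute-force check of the count (a sketch, not part of the exercise):

```python
from itertools import product

# each outcome assigns one of the 5 students to each of the 6 problems
assignments = list(product(range(5), repeat=6))
print(len(assignments), 5 ** 6)   # both print 15625
```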
# Automated chilli watering system part 3 - Temperature

This post is part of a series

## Temperature

This part is entirely optional; if it looks too complicated, feel free to skip it and do without temperature monitoring. For this part of the project I used the following 3-wire sensors: https://www.amazon.de/gp/product/B00CHEZ250/ref=oh_aui_detailpage_o08_s00?ie=UTF8 In one way this was kind of complicated, as it requires a kernel module to be loaded in order to use, but rest assured, it's quite simple. Just add the following line to /boot/config.txt and reboot:

```
dtoverlay=w1-gpio
```

In actual fact, as I wanted to use a different GPIO pin than the default, I went with this:

```
dtoverlay=w1-gpio,gpiopin=21
```

Once this is in place (and you have rebooted) you will see that /sys/bus/w1/devices/ has been populated; you will need to use your own device IDs that you find in the directory. Below is a simple script I created for checking two sensors.

/usr/local/bin/get-probe-temps.py

```python
#!/usr/bin/python
#
# This depends on a kernel module and the gpio port
# defaults to port 4. It seems as if it can be changed
# by altering the /boot/config.txt to add this
# dtoverlay=w1-gpio,gpiopin=21
#
import sys
import time

# commented out as it's now done at boot time
# import os
# os.system('modprobe w1-gpio')
# os.system('modprobe w1-therm')

temp_sensor1 = '/sys/bus/w1/devices/28-0416c2345eff/w1_slave'
temp_sensor2 = '/sys/bus/w1/devices/28-0516c00e7dff/w1_slave'

def temp_raw(val):
    # read the raw two-line report from the sensor file
    f = open(val, 'r')
    lines = f.readlines()
    f.close()
    return lines

def read_temp(sensor):
    lines = temp_raw(sensor)
    # the first line ends in YES once the CRC check passes
    while lines[0].strip()[-3:] != 'YES':
        time.sleep(0.2)
        lines = temp_raw(sensor)
    # the reading follows 't=' on the second line, in millidegrees C
    temp_output = lines[1].find('t=')
    if temp_output != -1:
        temp_string = lines[1].strip()[temp_output+2:]
        temp_c = float(temp_string) / 1000
        return temp_c

one = read_temp(temp_sensor1)
two = read_temp(temp_sensor2)

if sys.argv[1] == 'one':
    print one
if sys.argv[1] == 'two':
    print two
```

```
python /usr/local/bin/get-probe-temps.py "one"
python /usr/local/bin/get-probe-temps.py "two"
```
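For reference, a successful read of a w1_slave file looks like the following (byte values illustrative, not from my sensors); the script waits for the YES CRC flag on the first line and parses the reading after t= on the second:

```
4b 46 7f ff 0c 10 1c : crc=1c YES
4b 46 7f ff 0c 10 1c t=23125
```

Here t=23125 means 23.125 °C.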
# Prony's method

Prony analysis (Prony's method) was developed by Gaspard Riche de Prony in 1795. However, practical use of the method awaited the digital computer.[1] Similar to the Fourier transform, Prony's method extracts valuable information from a uniformly sampled signal and builds a series of damped complex exponentials or sinusoids. This allows for the estimation of frequency, amplitude, phase and damping components of a signal.

## The method

Let $f(t)$ be a signal consisting of $N$ evenly spaced samples. Prony's method fits a function $\hat{f}(t) = \sum_{i=1}^{N} A_i e^{\sigma_i t} \cos(2\pi f_i t + \phi_i)$ to the observed $f(t)$. After some manipulation utilizing Euler's formula, the following result is obtained, which allows more direct computation of terms: \begin{align} \hat{f}(t) &= \sum_{i=1}^{N} A_i e^{\sigma_i t} \cos(2\pi f_i t + \phi_i) \\ &= \sum_{i=1}^{N} \frac{1}{2} A_i e^{\pm j\phi_i}e^{\lambda_i t} \end{align} where:

• $\lambda_i = \sigma_i \pm j \omega_i$ are the eigenvalues of the system,
• $\sigma_i$ are the damping components,
• $\phi_i$ are the phase components,
• $f_i$ are the frequency components,
• $A_i$ are the amplitude components of the series, and
• $j$ is the imaginary unit ($j^2 = -1$).

## Representations

Prony's method is essentially a decomposition of a signal with $M$ complex exponentials via the following process: Regularly sample $\hat{f}(t)$ so that the $n$-th of $N$ samples may be written as $$F_n = \hat{f}(\Delta_t n) = \sum_{m=1}^{M} B_m e^{\lambda_m \Delta_t n}.$$ If $\hat{f}(t)$ happens to consist of damped sinusoids, then there will be pairs of complex exponentials such that \begin{align} B_a &= \frac{1}{2} A_i e^{ \phi_i j}, \\ B_b &= \frac{1}{2} A_i e^{-\phi_i j}, \\ \lambda_a &= \sigma_i + j \omega_i, \\ \lambda_b &= \sigma_i - j \omega_i, \end{align} where \begin{align} B_a e^{\lambda_a t} + B_b e^{\lambda_b t} &= \frac{1}{2} A_i e^{\phi_i j} e^{(\sigma_i + j \omega_i) t} + \frac{1}{2}A_i e^{-\phi_i j} e^{(\sigma_i - j \omega_i) t} \\ &= A_i e^{\sigma_i t} \cos(\omega_i t + \phi_i). \end{align} Because the summation of complex exponentials is the homogeneous solution to a linear difference equation, the following difference equation will exist: $$\hat{f}(\Delta_t n) = -\sum_{m=1}^{M} \hat{f}[\Delta_t (n - m)] P_m.$$ The key to Prony's method is that the coefficients in the difference equation are related to the following polynomial: $$x^M + \sum_{m = 1}^{M} P_m x^{M - m} = \prod_{m=1}^{M} \left(x - e^{\lambda_m}\right).$$ These facts lead to the following three steps of Prony's method:

1) Construct and solve the matrix equation for the $P_m$ values: $$\begin{bmatrix} F_{M} \\ \vdots \\ F_{N-1} \end{bmatrix} = -\begin{bmatrix} F_{M-1} & \dots & F_{0} \\ \vdots & \ddots & \vdots \\ F_{N-2} & \dots & F_{N-M-1} \end{bmatrix} \begin{bmatrix} P_1 \\ \vdots \\ P_M \end{bmatrix}.$$ Note that if $N \ne 2M$, a generalized matrix inverse may be needed to find the values $P_m$.

2) After finding the $P_m$ values, find the roots (numerically if necessary) of the polynomial $$x^M + \sum_{m = 1}^{M} P_m x^{M - m}.$$ The $m$-th root of this polynomial will be equal to $e^{\lambda_m}$.
3) With the $e^{\lambda_m}$ values, the $F_n$ values are part of a system of linear equations that may be used to solve for the $B_m$ values: $$\begin{bmatrix} F_{k_1} \\ \vdots \\ F_{k_M} \end{bmatrix} = \begin{bmatrix} (e^{\lambda_1})^{k_1} & \dots & (e^{\lambda_M})^{k_1} \\ \vdots & \ddots & \vdots \\ (e^{\lambda_1})^{k_M} & \dots & (e^{\lambda_M})^{k_M} \end{bmatrix} \begin{bmatrix} B_1 \\ \vdots \\ B_M \end{bmatrix},$$ where $M$ unique values $k_i$ are used. It is possible to use a generalized matrix inverse if more than $M$ samples are used. Note that solving for $\lambda_m$ will yield ambiguities, since only $e^{\lambda_m}$ was solved for, and $e^{\lambda_m} = e^{\lambda_m \,+\, q 2 \pi j}$ for an integer $q$. This leads to the same Nyquist sampling criteria that discrete Fourier transforms are subject to: $$\left|\operatorname{Im}(\lambda_m)\right| = \left|\omega_m\right| < \frac{1}{2 \Delta_t}.$$ The three steps are easy to carry out numerically; see the sketch after the references.

## Notes

1. ^ Hauer, J.F. et al. (1990). "Initial Results in Prony Analysis of Power System Response Signals". IEEE Transactions on Power Systems, 5, 1, 80-89.

## References

• Rob Carriere and Randolph L. Moses, "High Resolution Radar Target Modeling Using a Modified Prony Estimator", IEEE Trans. Antennas Propagat., vol. 40, pp. 13–18, January 1992.
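For concreteness, here is a minimal NumPy sketch of the three steps above (not from the article; it assumes the model order $M$ is known, $N \ge 2M$, and uses least squares in place of a generalized inverse):

```python
import numpy as np

def prony(F, M, dt):
    """Fit F[n] ~ sum_m B_m * exp(lam_m * dt * n) with M complex modes."""
    F = np.asarray(F, dtype=complex)
    N = len(F)
    # Step 1: solve the linear-prediction system for P_1..P_M
    A = np.array([[F[n - m] for m in range(1, M + 1)] for n in range(M, N)])
    P = np.linalg.lstsq(A, -F[M:], rcond=None)[0]
    # Step 2: roots of x^M + P_1 x^{M-1} + ... + P_M are z_m = e^{lam_m dt}
    z = np.roots(np.concatenate(([1.0], P)))
    lam = np.log(z) / dt
    # Step 3: Vandermonde least squares for the weights B_m
    V = z[np.newaxis, :] ** np.arange(N)[:, np.newaxis]
    B = np.linalg.lstsq(V, F, rcond=None)[0]
    return lam, B

# quick check: one damped sinusoid = two conjugate modes (M = 2)
dt = 0.01
n = np.arange(100)
f = 1.5 * np.exp(-0.8 * dt * n) * np.cos(2 * np.pi * 5 * dt * n + 0.3)
lam, B = prony(f, 2, dt)
# expect Re(lam) ~ -0.8, Im(lam) ~ +/- 2*pi*5, 2*|B| ~ 1.5, angle(B) ~ +/- 0.3
```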
# Topology: How to formally write proofs?

• Dec 3rd 2012, 06:53 PM
Topology: How to formally write proofs?
Hey there! In doing practice exercises for topology, I find that I can reason through them easily enough to find the correct answer. The problem is that I rely predominantly upon verbal logic in order to make the proofs work. How does one go about proofs using only the math notation? A simple example: Prove that $A\in 2^A$ Another simple example: Prove that $A \subset B \iff A \cup\B = B$

• Dec 5th 2012, 06:45 AM
ModusPonens
Re: Topology: How to formally write proofs?
You have to use english, or else you would have to resort to symbolic derivations in some logic. See the examples of proofs in a topology book and you'll get the style.

• Dec 5th 2012, 07:39 AM
HallsofIvy
Re: Topology: How to formally write proofs?
Quote: Hey there! In doing practice exercises for topology, I find that I can reason through them easily enough to find the correct answer. The problem is that I rely predominantly upon verbal logic in order to make the proofs work. How does one go about proofs using only the math notation? A simple example: Prove that $A\in 2^A$
The standard way to prove that $x\in U$ is to show that x meets the conditions given in the definition of U. What is the definition of $2^A$?
Quote: Another simple example: Prove that $A \subset B \iff A \cup B = B$
The standard way to prove "$X= Y$" is to prove both $X\subset Y$ and $Y\subset X$. And the standard way to prove "$X\subset Y$" is to start "if $x\in X$" and use the definitions of both X and Y to prove that "$x\in Y$". For example, to prove "if $A\cup B= B$ then $A\subset B$" start by saying "if $x \in A$" and argue that, then, $x\in A\cup B$ and so, because $A\cup B= B$, $x\in B$. Since every member of A is a member of B, $A\subset B$. I'll leave the other way (which is a little harder!) to you.
Quote: (Your B in $A\cup\B$ did not show up because you had "\" in front of the B and Latex does not recognize "\B".)

• Dec 5th 2012, 08:20 AM
Deveno
Re: Topology: How to formally write proofs?
A ⊆ B implies AUB = B. the way we show two sets are equal is to demonstrate they contain the same elements. so if we start with A ⊆ B, we need to show that everything in AUB is in B, and that everything in B is in AUB. now everything in B is always in AUB (by the definition of union). so the "hard part" will be showing that everything in AUB is in B. you'd start like so. suppose x is in AUB = {x in T : x is in A or x is in B} (here, T is "some set" that A and B both belong to, our "universe of discourse"). so either x is in A, or x is in B. if x is in B, then certainly x is in B. on the other hand, if x is in A, then.....(you should use something about the relationship between A and B here, now)
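Written out in the standard style, the direction outlined above reads: suppose $A \subseteq B$. If $x \in B$, then $x \in A\cup B$ by definition of union, so $B \subseteq A\cup B$. Conversely, if $x \in A\cup B$, then $x \in A$ or $x \in B$; in the first case $x \in B$ since $A \subseteq B$, and in the second case $x \in B$ trivially. Hence $A\cup B \subseteq B$, and the two inclusions give $A\cup B = B$. $\blacksquare$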
Free Version, Difficult

Area, Triangle, Law of Cosines, Three Known Sides

TRIG-WV7OAE

If you are given a triangular piece of wood with the following dimensions, $11\;cm,\;18\;cm\;$ and $\;20\;cm$, what is the area of that piece of wood to the nearest square cm?

A $98.4\; sq.\; cm$
B $150.8\; sq.\; cm$
C $117.4\; sq.\; cm$
D $178.3\; sq.\; cm$
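One way to check (a sketch using Heron's formula, which is equivalent to the law-of-cosines route when all three sides are known): $s = \tfrac{1}{2}(11+18+20) = 24.5$, so $\text{Area} = \sqrt{s(s-11)(s-18)(s-20)} = \sqrt{24.5\cdot 13.5\cdot 6.5\cdot 4.5} \approx 98.4\;sq.\;cm$, matching choice A.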
# If calculations after MH take forever

After MH iterations, the computations can get stuck on marginal_density.m because of a log(0). If you have this problem, you can use this fix: using determinant properties, replace log(detSIGMA) with logdetSIGMA, defined as

logdetSIGMA = 2*log(det(chol(SIGMA)));

See attached file, it's a modification of the 4.2 version of Dynare. Hope this helps! Gilles

A better idea is:

logdetSIGMA = 2*sum(log(diag(chol(SIGMA))));

That way, the estimated model can grow even more before the loss of precision becomes a problem. I updated the marginal_density.m file. Gilles

marginal_density.m (4.63 KB)

This is so useful. The estimation that used to take about 60 mins takes less than 12 mins!!! Many thanks for sharing this file. But what if there is no log(0) problem? Is the same modified file still useful?

Yes. The formula is exact (you can check for fun). In fact, it's more precise for all computations. Gilles
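For anyone who wants to check the determinant identity numerically, here is a quick sketch (NumPy standing in for MATLAB; numpy's Cholesky factor is lower-triangular rather than upper, but the diagonal is the same):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
S = A @ A.T + 5 * np.eye(5)          # a symmetric positive definite SIGMA
L = np.linalg.cholesky(S)            # S = L @ L.T, so det(S) = det(L)**2
lhs = np.log(np.linalg.det(S))
rhs = 2 * np.sum(np.log(np.diag(L)))
assert np.isclose(lhs, rhs)
```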
# Merge and synonymize [speed] into [velocity]

This seems like a cut and dry case of two tags that should be combined. Am I missing something?

• scalar vs vector? Aug 28 '20 at 18:30
• @OrganicMarble Is that really enough to merit separate tags? – called2voyage Mod Aug 28 '20 at 18:31
• I've no idea. But there is a difference. Aug 28 '20 at 18:31
• A difference between the terms, sure, but not sure there's a meaningful difference between the topics – called2voyage Mod Aug 28 '20 at 18:32

While there may be a small distinction in that "speed" only applies to the absolute value of the velocity vector while the word "velocity" is commonly used both for the speed and the velocity vector, they certainly don't need separate tags. One generally uses the letter "v" as in $$v=\sqrt{GM/a}$$ for speed as well as $$\mathbf{v}$$ as in $$\mathbf{L} = m \mathbf{r} \times \mathbf{v}$$, and in both contexts it's common to use the word "velocity" to refer to both the vector and its scalar length. Because of this, users may be satisfied finding either one first and then not look for the other, so if one is searching using one tag one may not find questions tagged with the other. Since "velocity" is regularly used for both quantities, the velocity tag should survive and appear as an option when "speed" is typed in the tag box.

• Merge complete. – called2voyage Mod Sep 1 '20 at 12:23
• Does SE give out "merge" badges? ;-) – uhoh Sep 1 '20 at 12:24
# Thread: a couple of hilbert spaces problems

1. ## a couple of hilbert spaces problems

(1) Let $\{e_n, n \in \mathbb{N}\}$ be an orthonormal base of the Hilbert space X. We define $Y_1$ and $Y_2$: $Y_1=\overline{L\{e_{2n},\ n \in \mathbb{N}\}}$, $Y_2=\overline{L\{e'_n=e_{2n}\cos(1/n) + e_{2n+1}\sin(1/n),\ n \in \mathbb{N} \}}$ Prove that $Y_1 + Y_2$ isn't closed. I've been trying to find a convergent sequence in $Y_1 + Y_2$, with a limit outside $Y_1 + Y_2$, but no success. (Trying to prove that its complement is open would be much harder, I think.)

(2) Let $\{e_n, n \in \mathbb{N}\}$ be an orthonormal base of the Hilbert space X with the inner product $\langle\,\cdot\,|\,\cdot\,\rangle$. We define the operator $T_n$ as follows: $T_n x= \langle x|e_n\rangle e_1 + \langle x|e_1\rangle e_2 + \langle x|e_2\rangle e_3 + \ldots + \langle x|e_{n-1}\rangle e_n + \langle x|e_{n+1}\rangle e_{n+1} + \langle x|e_{n+2}\rangle e_{n+2} + \ldots, \quad n \in \mathbb{N}$. Is $T_n$ bounded, normal? Is it true that $\forall x \in X$ $T_nx \rightarrow Sx$, where S is the unilateral shift? This one is just too messy. I can post what I've been trying to do, but didn't get anywhere and I think it wouldn't do any good. Thank you once again for all your help.

2. Originally Posted by marianne
(1) Let $\{e_n, n \in \mathbb{N}\}$ be an orthonormal base of the Hilbert space X. We define $Y_1$ and $Y_2$: $Y_1=\overline{L\{e_{2n},\ n \in \mathbb{N}\}}$, $Y_2=\overline{L\{e'_n=e_{2n}\cos(1/n) + e_{2n+1}\sin(1/n),\ n \in \mathbb{N} \}}$ Prove that $Y_1 + Y_2$ isn't closed.
Hint: Let $x = \sum_{n=1}^\infty e_{2n+1}\sin(1/n)$. Show that (i) this sum converges (because the sequence $(\sin(1/n))$ is square-summable); (ii) $x\notin Y_1 + Y_2$; (iii) each finite partial sum $\sum_{n=1}^N e_{2n+1}\sin(1/n)$ is in $Y_1+Y_2$.
Originally Posted by marianne
(2) Let $\{e_n, n \in \mathbb{N}\}$ be an orthonormal base of the Hilbert space X with the inner product $\langle\,\cdot\,|\,\cdot\,\rangle$. We define the operator $T_n$ as above. Is $T_n$ bounded, normal? Is it true that $\forall x \in X$ $T_nx \rightarrow Sx$, where S is the unilateral shift?
Let $Y_n$ be the subspace of X spanned by the first n basis vectors. Then $T_n$ permutes these basis vectors cyclically, and it follows that the restriction of $T_n$ to $Y_n$ has norm 1 and is normal. On the orthogonal complement of $Y_n$, $T_n$ acts as the identity. Hence $T_n$ is bounded (with norm 1) and normal. Clearly $T_ne_m\to Se_m$ for each basis vector $e_m$. Since linear combinations of basis vectors are dense in X, and the operators $T_n$ are uniformly bounded, it follows that $T_nx \to Sx$ for all x. [This shows that the set of normal operators is not closed, since S is not normal.]

3. Thank you very, very much! I understood the solution to the second problem, but I'm not so sure about the first, since I don't know a lot about series. It is clear to me why every finite sum is in fact in $Y_1 + Y_2$, but I don't know how to show that the sum converges, or that $x \notin Y_1+Y_2$. Could you please explain a bit, or recommend an online book or something like that, that would clear it for me? (I'm afraid I can't go to the library because I broke my leg, nor can I see my teaching assistant for this. I have this exam next week and online help is all I can get.)

4. Originally Posted by marianne
It is clear to me why every finite sum is in fact in $Y_1 + Y_2$, but I don't know how to show that the sum converges, or that $x \notin Y_1+Y_2$. Could you please explain a bit, or recommend an online book or something like that, that would clear it for me?
Okay, here's what you need to know (I don't know of an online reference for proofs, though). If $\{e_n\}_{n\in\mathbb{N}}$ is an orthonormal basis for a Hilbert space H, then every element of H can be expressed as an "infinite linear combination" of basis vectors. In other words, if $x\in H$ then there exist scalars $\alpha_n\;\;(n\in\mathbb{N})$ such that $\textstyle x=\sum\alpha_ne_n$, where the series converges in the sense that the partial sums converge in norm to x. The coefficients are given by the formula $\alpha_n = \langle x,e_n\rangle$. What's more, the coefficients are square-summable, with $\textstyle\sum|\alpha_n|^2 = \|x\|^2$ (Parseval's identity). Coming back to the problem about the two subspaces, if $x = \sum_{n=1}^\infty e_{2n+1}\sin(1/n) \in Y_1+Y_2$ then $x=y+z$ with $y\in Y_1,\;z\in Y_2$. But $\{e_{2n}, n \in \mathbb{N}\}$ is an orthonormal basis for $Y_1$, and $\{e_{2n}\cos(1/n) + e_{2n+1}\sin(1/n), n \in \mathbb{N} \}$ is an orthonormal basis for $Y_2$. So we can write $\textstyle y = \sum\beta_ne_{2n}$, $z=\sum\gamma_n(e_{2n}\cos(1/n) + e_{2n+1}\sin(1/n))$. Also, $\textstyle x=\sum\alpha_ne_n$. You can now compare the coefficients of the basis vectors $e_n$ when you write the equation x=y+z using these series. You will find that this leads to the conclusion that the coefficients $\beta_n$ will not be square-summable. This is a contradiction, and therefore $x\notin Y_1+Y_2.$

5. Everything makes sense now! Thank you soooooooooooooo much!
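For the record, the coefficient comparison in step 4 goes as follows (with the sums starting at $n=1$): matching the coefficients of $e_{2n+1}$ in $x = y + z$ gives $\sin(1/n) = \gamma_n\sin(1/n)$, so $\gamma_n = 1$ for all $n$, and matching the coefficients of $e_{2n}$ then gives $$0 = \beta_n + \gamma_n\cos(1/n) \quad\Longrightarrow\quad \beta_n = -\cos(1/n) \to -1,$$ so $\sum|\beta_n|^2$ diverges (indeed $(\gamma_n)$ already fails to be square-summable), giving the contradiction.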
### Proxy Re-Signatures: New Definitions, Algorithms, and Applications

Giuseppe Ateniese and Susan Hohenberger

##### Abstract

In 1998, Blaze, Bleumer, and Strauss (BBS) proposed proxy re-signatures, in which a semi-trusted proxy acts as a translator between Alice and Bob. To translate, the proxy converts a signature from Alice into a signature from Bob on the same message. The proxy, however, does not learn any signing key and cannot sign arbitrary messages on behalf of either Alice or Bob. Since the BBS proposal, the proxy re-signature primitive has been largely ignored, but we show that it is a very useful tool for sharing web certificates, forming weak group signatures, and authenticating a network path. We begin our results by formalizing the definition of security for a proxy re-signature. We next substantiate the need for improved schemes by pointing out certain weaknesses of the original BBS proxy re-signature scheme which make it unfit for most practical applications. We then present two secure proxy re-signature schemes based on bilinear maps. Our first scheme relies on the Computational Diffie-Hellman (CDH) assumption; here the proxy can translate from Alice to Bob and vice-versa. Our second scheme relies on the CDH and 2-Discrete Logarithm (2-DL) assumptions and achieves a stronger security guarantee -- the proxy is only able to translate in one direction. Constructing such a scheme has been an open problem since proposed by BBS in 1998. Furthermore in this second scheme, even if the delegator and the proxy collude, they cannot sign on behalf of the delegatee. Both schemes are efficient and secure in the random oracle model.

Category: Public-key cryptography
Publication info: Published elsewhere. This is the full version of the paper in ACM CCS 2005.
Contact author(s): srhohen @ mit edu
Short URL: https://ia.cr/2005/433
License: CC BY

BibTeX
@misc{cryptoeprint:2005/433, author = {Giuseppe Ateniese and Susan Hohenberger}, title = {Proxy Re-Signatures: New Definitions, Algorithms, and Applications}, howpublished = {Cryptology ePrint Archive, Paper 2005/433}, year = {2005}, note = {\url{https://eprint.iacr.org/2005/433}}, url = {https://eprint.iacr.org/2005/433} }
# Calculation of an Orbital Integral

Posted by Jason Polak on 25. August 2015 · Categories: algebraic-geometry, number-theory

In the Arthur-Selberg trace formula and other formulas, one encounters so-called 'orbital integrals'. These integrals might appear forbidding and abstract at first, but actually they are quite concrete objects. In this post we'll look at an example that should make orbital integrals seem more friendly and approachable. Let $k = \mathbb{F}_q$ be a finite field and let $F = k((t))$ be the Laurent series field over $k$. We will denote the ring of integers of $F$ by $\mathfrak{o} := k[[t]]$ and the valuation $v:F^\times\to \mathbb{Z}$ is normalised so that $v(t) = 1$. Let $G$ be a reductive algebraic group over $\mathfrak{o}$. Orbital integrals are defined with respect to some $\gamma\in G(F)$. Often, $\gamma$ is semisimple, and regular in the sense that the orbit $G\cdot\gamma$ has maximal dimension. One then defines for a compactly supported smooth function $f:G(F)\to \mathbb{C}$ the orbital integral $$\mathcal{O}_\gamma(f) = \int_{I_\gamma(F)\backslash G(F)} f(g^{-1}\gamma g) \frac{dg}{dg_\gamma}.$$

# Graphing the Mandelbrot Set

Posted by Jason Polak on 13. June 2013 · Categories: analysis, elementary

A class of fractals known as Mandelbrot sets, named after Benoit Mandelbrot, have pervaded popular culture and are now controlling us. Well, perhaps not quite, but have you ever wondered how they are drawn? Here is an approximation of one: [Figure: a rendering of the Mandelbrot set.] From now on, Mandelbrot set will refer to the following set: for any complex number $c$, consider the function $f:\mathbb{C}\to\mathbb{C}$ defined by $f_c(z) = z^2 + c$. We define the Mandelbrot set to be the set of complex numbers $c\in\mathbb{C}$ such that the sequence of numbers $f_c(0), f_c(f_c(0)),f_c(f_c(f_c(0))),\dots$ is bounded.
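To see concretely how such pictures are drawn, here is a minimal escape-time sketch (not from the post; it uses the standard fact that the orbit is unbounded once $|z| > 2$):

```python
import numpy as np

def mandelbrot(width=400, height=300, max_iter=50):
    # grid of candidate c values in the complex plane
    xs = np.linspace(-2.0, 0.75, width)
    ys = np.linspace(-1.2, 1.2, height)
    c = xs[np.newaxis, :] + 1j * ys[:, np.newaxis]
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=int)    # iterations before escape
    for _ in range(max_iter):
        mask = np.abs(z) <= 2.0              # orbits that still look bounded
        z[mask] = z[mask] ** 2 + c[mask]     # iterate f_c(z) = z^2 + c
        counts += mask
    # pixels with counts == max_iter approximate the Mandelbrot set;
    # plotting counts as an image gives the familiar picture
    return counts

img = mandelbrot()
```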
• Corpus ID: 251371705

# Improved Rates of Bootstrap Approximation for the Operator Norm: A Coordinate-Free Approach

@inproceedings{Lopes2022ImprovedRO, title={Improved Rates of Bootstrap Approximation for the Operator Norm: A Coordinate-Free Approach}, author={Miles E. Lopes}, year={2022} }

Let $\hat{\Sigma} = \frac{1}{n}\sum_{i=1}^{n} X_i \otimes X_i$ denote the sample covariance operator of centered i.i.d. observations $X_1, \ldots, X_n$ in a real separable Hilbert space, and let $\Sigma = E(X_1 \otimes X_1)$. The focus of this paper is to understand how well the bootstrap can approximate the distribution of the operator norm error $\sqrt{n}\,\|\hat{\Sigma} - \Sigma\|_{\text{op}}$, in settings where the eigenvalues of $\Sigma$ decay as $\lambda_j(\Sigma) \asymp j^{-2\beta}$ for some fixed parameter $\beta > 1/2$. Our main result shows that the bootstrap can approximate the distribution of $\sqrt{n}\,\|\hat{\Sigma} - \Sigma\|_{\text{op}}$ at a rate of…

## References

SHOWING 1-10 OF 36 REFERENCES

### Rates of Bootstrap Approximation for Eigenvalues in High-Dimensional PCA
• Mathematics, Computer Science • Statistica Sinica • 2021
Under certain assumptions, the bootstrap can achieve the dimension-free rate of $r(\Sigma)/\sqrt{n}$ up to logarithmic factors, and it is shown that applying a transformation to the eigenvalues of $\hat{\Sigma}$ before bootstrapping is an important consideration in high-dimensional settings.

### A Sharp Lower-tail Bound for Gaussian Maxima with Application to Bootstrap Methods in High Dimensions
• Mathematics, Computer Science • 2018
The aim of this paper is to develop a bound on the maxima of Gaussian processes, while also allowing for many types of dependence, and makes use of recent refinements of Bourgain and Tzafriri’s “restricted invertibility principle”.

### Regularized estimation of large covariance matrices
• Mathematics, Computer Science • 2008
If the population covariance is embeddable in that model and well-conditioned then the banded approximations produce consistent estimates of the eigenvalues and associated eigenvectors of the covariance matrix.

### Bootstrapping the Operator Norm in High Dimensions: Error Estimation for Covariance Matrices and Sketching
• Mathematics • Bernoulli • 2023
The main result shows that the bootstrap can approximate the distribution of $T_n$ at the dimension-free rate of $n^{-\frac{\beta-1/2}{6\beta+4}}$, with respect to the Kolmogorov metric.

### Optimal Approximate Matrix Product in Terms of Stable Rank
• Mathematics, Computer Science • ICALP • 2016
We prove, using the subspace embedding guarantee in a black box way, that one can achieve the spectral norm guarantee for approximate matrix multiplication with a dimensionality-reducing map having

### On Gaussian comparison inequality and its application to spectral analysis of large random matrices
• Mathematics, Computer Science • Bernoulli • 2018
It is shown that two long-standing problems in random matrix theory can be solved: simple bootstrap inference on sample eigenvalues when true eigenvalues are tied, and conducting two-sample Roy's covariance test in high dimensions.

### Optimal rates of convergence for covariance matrix estimation
• Mathematics • 2010
Covariance matrix plays a central role in multivariate statistical analysis. Significant advances have been made recently on developing both theory and methodology for estimating large covariance

### Comparison and anti-concentration bounds for maxima of Gaussian random vectors
• Mathematics • 2013
Slepian and Sudakov–Fernique type inequalities, which compare expectations of maxima of Gaussian random vectors under certain restrictions on the covariance matrices, play an important role in
Building a Model

To modify a simulation, you create a Keras tf.keras.Model that will be executed at each step (or some multiple of steps) during the simulation. See Using a Model in a Simulation to see how to train your model instead; these instructions still apply. To begin, subclass the SimModel class:

    import hoomd.htf as htf

    class MyModel(htf.SimModel):
        def compute(self, nlist, positions, box):
            ...
            return forces, other, important, quantities

    model = MyModel(NN, output_forces=True)

where NN is the maximum number of nearest neighbors to consider (can be 0). This is an upper bound, so choose a large number. If you are unsure, you can guess and add check_nlist=True to your constructor. This will cause the program to halt if you choose too low. output_forces indicates if the model will output forces to use in the simulation. In the SimModel.compute(nlist, positions, box) function you will have three tensors that can be used:

• nlist is an N x NN x 4 tensor containing the nearest neighbors. An entry of all zeros indicates that fewer than NN nearest neighbors were present for a particular particle. The 4 right-most dimensions are x, y, z and w, which is the particle type. Particle type is an integer starting at 0. Note that the x, y, z values are a vector originating at the particle and ending at its neighbor.
• positions is an N x 4 tensor of particle positions (x, y, z) and type.
• box is a 3 x 3 tensor containing the low box coordinate (row 0), high box coordinate (row 1), and then tilt factors (row 2).

Your function can use fewer tensors, like compute(self, nlist), if desired.

Keras Model

Your models are Keras tf.keras.Model objects, so the usual process of building layers, saving, and computing metrics applies. For example, here is a two-hidden-layer neural network force-field that uses the 8 nearest neighbors to compute forces.

    class NlistNN(htf.SimModel):
        def setup(self, dim, top_neighs):
            self.dense1 = tf.keras.layers.Dense(dim)
            self.dense2 = tf.keras.layers.Dense(dim)
            self.last = tf.keras.layers.Dense(1)
            self.top_neighs = top_neighs

        def compute(self, nlist):
            rinv = htf.nlist_rinv(nlist)
            # closest neighbors have largest value in 1/r, take top
            top_n = tf.sort(rinv, axis=1, direction='DESCENDING')[
                :, :self.top_neighs]
            # run through NN
            x = self.dense1(top_n)
            x = self.dense2(x)
            energy = self.last(x)
            forces = htf.compute_nlist_forces(nlist, energy)
            return forces

    model = NlistNN(64, dim=16, top_neighs=8)

The 64 is the nlist size, dim is the hidden layer dimension, and top_neighs is how many neighbors to consider. Refer to the Keras documentation to learn more about models.

Molecule Batching

It may be simpler to have positions or neighbor lists or forces arranged by molecule. For example, you may want to look at only a particular bond or subset of atoms in a molecule. To do this, you can instead subclass MolSimModel:

    import hoomd.htf as htf

    class MyModel(htf.MolSimModel):
        def mol_compute(self, nlist, positions, mol_nlist, mol_pos, box):
            ...
            return forces, other, important, quantities

    model = MyModel(MN, NN, mol_indices)

whose argument MN is the maximum number of atoms in a molecule and mol_indices describes the molecules in your system as a list of atom indices. This can be created directly from a hoomd system via find_molecules(). The mol_indices are a, possibly ragged, 2D python list where each element in the list is a list of atom indices for a molecule.
For example, [[0,1], [1]] means that there are two molecules, the first containing atoms 0 and 1 and the second containing atom 1. Note that the molecules can be different sizes and atoms can exist in multiple molecules. MolSimModel.mol_compute(self, nlist, positions, mol_nlist, mol_pos) has the following additional arguments: mol_nlist and mol_pos. These new attributes are of dimension M x MN x ..., where M is the number of molecules and MN is the atom index within the molecule. If your molecule has fewer than MN atoms, extra entries will be zeros. You can define a molecule to be whatever you want, and atoms need not be only in one molecule. Here's an example to compute a water angle, where we assume that the oxygens are the middle atom:

    import hoomd.htf as htf

    class MyModel(htf.MolSimModel):
        def mol_compute(self, nlist, positions, mol_nlist, mol_pos):
            # want slice for all molecules (:)
            # want h1 (0), o (1), h2 (2)
            # positions are x,y,z,w. We only want x,y,z (:3)
            v1 = mol_pos[:, 2, :3] - mol_pos[:, 1, :3]
            v2 = mol_pos[:, 0, :3] - mol_pos[:, 1, :3]
            # compute per-molecule dot product and divide by per-molecule norm
            c = tf.einsum('ij,ij->i', v1, v2) / \
                tf.norm(v1, axis=1) / tf.norm(v2, axis=1)
            angles = tf.math.acos(c)
            return angles

    # ...set-up hoomd...
    mol_indices = htf.find_molecules(hoomd_system)
    model = MyModel(3, 128, mol_indices, output_forces=False)

Computing Forces

If your model is outputting forces, they must be the first return value from your compute method. It is easiest to compute forces using the automatic differentiation of a potential energy. Call compute_nlist_forces() with the arguments nlist and energy. energy can be either a scalar or a tensor which depends on nlist. A tensor of forces will be returned as $$\sum_i(\frac{-\partial E}{\partial n_i})$$, where the sum is over the neighbor list. For example, to compute a $$1 / r$$ potential:

    import hoomd.htf as htf

    class MyModel(htf.SimModel):
        def compute(self, nlist, positions):
            # remove w since we don't care about types
            r = tf.norm(nlist[:, :, :3], axis=2)
            pairwise_energy = 0.5 * tf.math.divide_no_nan(1, r)
            # sum over neighbors
            energy = tf.reduce_sum(pairwise_energy, axis=1)
            forces = htf.compute_nlist_forces(nlist, energy)
            return forces

Notice that in the above example we have used the tf.math.divide_no_nan method, which allows us to safely treat a $$1 / 0$$, which can arise because nlist contains 0s whenever fewer than NN nearest neighbors are found. There is also a method compute_positions_forces(positions, energy) which can be used to compute position-dependent forces. Note: because nlist is a full neighbor list, you should divide by 2 if your energy is a sum of pairwise energies.

Neighbor lists

The nlist is an N x NN x 4 neighbor list tensor. You can compute a masked version of this with masked_nlist(nlist, type_tensor, type_i, type_j), where type_i and type_j are optional integers that specify the type of the origin (type_i) or neighbor (type_j). type_tensor is positions[:, 3], or your own types can be chosen. You can also use nlist_rinv(nlist), which gives a pre-computed 1 / r (dimension N x NN).

Virial

A virial term can be added by doing both of the following extra steps:

1. Compute the virial with your forces via compute_nlist_forces(nlist, energy, virial=True), i.e. by adding the virial=True arg.
2.
Add the modify_virial=True argument to your model constructor.

Mapped quantities

If mapped quantities are desired for coarse-graining while running a simulation, you can call tfcompute.enable_mapped_nlist(system, mapping_fxn) to utilize hoomd to compute fast neighbor lists. The model code can then use SimModel.mapped_nlist() and SimModel.mapped_positions() to access the mapped nlist and positions. An example:

    import hoomd.htf as htf

    def mapping_fxn(AA):
        return M @ AA

    class MyModel(htf.SimModel):
        def compute(self, nlist, positions, forces):
            aa_nlist, mapped_nlist = self.mapped_nlist(nlist)
            aa_pos, mapped_pos = self.mapped_positions(positions)
            ...

    tfcompute.enable_mapped_nlist(system, mapping_fxn)

Call tfcompute.enable_mapped_nlist() prior to running the simulation.

Saving a model: because these models do not use standard Keras objects, to reload a model you must first use your python code to build the model and then load the weights into it from a file, like so:

    tmp_loaded_model = tf.keras.models.load_model('/path/to/model')
    model = MyModel(16, output_forces=True)
    model.set_weights(tmp_loaded_model.get_weights())

Complete Examples

The file htf/test-py/build_examples.py contains example models.

Lennard-Jones with 1 Particle Type

    class LJModel(htf.SimModel):
        def compute(self, nlist):
            # get r
            rinv = htf.nlist_rinv(nlist)
            inv_r6 = rinv**6
            # pairwise energy. Double count -> divide by 2
            p_energy = 4.0 / 2.0 * (inv_r6 * inv_r6 - inv_r6)
            # sum over pairwise energy
            energy = tf.reduce_sum(input_tensor=p_energy, axis=1)
            forces = htf.compute_nlist_forces(nlist, energy)
            return forces
# Resources tagged with: Visualising

### Like a Circle in a Spiral
##### Age 7 to 16 Challenge Level:
A cheap and simple toy with lots of mathematics. Can you interpret the images that are produced? Can you predict the pattern that will be produced using different wheels?

### Rolling Around
##### Age 11 to 14 Challenge Level:
A circle rolls around the outside edge of a square so that its circumference always touches the edge of the square. Can you describe the locus of the centre of the circle?

### Tessellating Hexagons
##### Age 11 to 14 Challenge Level:
Which hexagons tessellate?

### Weighty Problem
##### Age 11 to 14 Challenge Level:
The diagram shows a very heavy kitchen cabinet. It cannot be lifted but it can be pivoted around a corner. The task is to move it, without sliding, in a series of turns about the corners so that it. . . .

### Trice
##### Age 11 to 14 Challenge Level:
ABCDEFGH is a 3 by 3 by 3 cube. Point P is 1/3 along AB (that is AP : PB = 1 : 2), point Q is 1/3 along GH and point R is 1/3 along ED. What is the area of the triangle PQR?

### An Unusual Shape
##### Age 11 to 14 Challenge Level:
Can you maximise the area available to a grazing goat?

### LOGO Challenge - Circles as Animals
##### Age 11 to 16 Challenge Level:
See if you can anticipate successive 'generations' of the two animals shown here.

### Polygon Rings
##### Age 11 to 14 Challenge Level:
Join pentagons together edge to edge. Will they form a ring?

### Cube Paths
##### Age 11 to 14 Challenge Level:
Given a 2 by 2 by 2 skeletal cube with one route `down' the cube. How many routes are there from A to B?

### The Old Goats
##### Age 11 to 14 Challenge Level:
A rectangular field has two posts with a ring on top of each post. There are two quarrelsome goats and plenty of ropes which you can tie to their collars. How can you secure them so they can't. . . .

##### Age 11 to 14 Challenge Level:
Four rods, two of length a and two of length b, are linked to form a kite. The linkage is moveable so that the angles change. What is the maximum area of the kite?

### Convex Polygons
##### Age 11 to 14 Challenge Level:
Show that among the interior angles of a convex polygon there cannot be more than three acute angles.

### Cutting a Cube
##### Age 11 to 14 Challenge Level:
A half-cube is cut into two pieces by a plane through the long diagonal and at right angles to it. Can you draw a net of these pieces? Are they identical?

### Playground Snapshot
##### Age 7 to 14 Challenge Level:
The image in this problem is part of a piece of equipment found in the playground of a school. How would you describe it to someone over the phone?

### Semi-regular Tessellations
##### Age 11 to 16 Challenge Level:
Semi-regular tessellations combine two or more different regular polygons to fill the plane. Can you find all the semi-regular tessellations?

### Coloured Edges
##### Age 11 to 14 Challenge Level:
The whole set of tiles is used to make a square. This has a green and blue border. There are no green or blue tiles anywhere in the square except on this border. How many tiles are there in the set?

### Triangular Tantaliser
##### Age 11 to 14 Challenge Level:
Draw all the possible distinct triangles on a 4 x 4 dotty grid. Convince me that you have all possible triangles.

### Getting an Angle
##### Age 11 to 14 Challenge Level:
How can you make an angle of 60 degrees by folding a sheet of paper twice?
### Clocking Off
##### Age 7 to 16 Challenge Level:
I found these clocks in the Arts Centre at the University of Warwick intriguing - do they really need four clocks and what times would be ambiguous with only two or three of them?

### Counting Triangles
##### Age 11 to 14 Challenge Level:
Triangles are formed by joining the vertices of a skeletal cube. How many different types of triangle are there? How many triangles altogether?

### Christmas Chocolates
##### Age 11 to 14 Challenge Level:
How could Penny, Tom and Matthew work out how many chocolates there are in different sized boxes?

### Triangles in the Middle
##### Age 11 to 18 Challenge Level:
This task depends on groups working collaboratively, discussing and reasoning to agree a final product.

### Tetra Square
##### Age 11 to 14 Challenge Level:
ABCD is a regular tetrahedron and the points P, Q, R and S are the midpoints of the edges AB, BD, CD and CA. Prove that PQRS is a square.

### Take Ten
##### Age 11 to 14 Challenge Level:
Is it possible to remove ten unit cubes from a 3 by 3 by 3 cube so that the surface area of the remaining solid is the same as the surface area of the original?

### Eight Hidden Squares
##### Age 7 to 14 Challenge Level:
On the graph there are 28 marked points. These points all mark the vertices (corners) of eight hidden squares. Can you find the eight hidden squares?

### Baravelle
##### Age 7 to 16 Challenge Level:
What can you see? What do you notice? What questions can you ask?

### Christmas Boxes
##### Age 11 to 14 Challenge Level:
Find all the ways to cut out a 'net' of six squares that can be folded into a cube.

### Dotty Triangles
##### Age 11 to 14 Challenge Level:
Imagine an infinitely large sheet of square dotty paper on which you can draw triangles of any size you wish (providing each vertex is on a dot). What areas is it/is it not possible to draw?

### Zooming in on the Squares
##### Age 7 to 14
Start with a large square, join the midpoints of its sides, you'll see four right angled triangles. Remove these triangles, a second square is left. Repeat the operation. What happens?

### Overlapping Again
##### Age 7 to 11 Challenge Level:
What shape is the overlap when you slide one of these shapes half way across another? Can you picture it in your head? Use the interactivity to check your visualisation.

### Auditorium Steps
##### Age 7 to 14 Challenge Level:
What is the shape of wrapping paper that you would need to completely wrap this model?

### All in the Mind
##### Age 11 to 14 Challenge Level:
Imagine you are suspending a cube from one vertex and allowing it to hang freely. What shape does the surface of the water make around the cube?

### Isosceles Triangles
##### Age 11 to 14 Challenge Level:
Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw?

### Concrete Wheel
##### Age 11 to 14 Challenge Level:
A huge wheel is rolling past your window. What do you see?

### Framed
##### Age 11 to 14 Challenge Level:
Seven small rectangular pictures have one inch wide frames. The frames are removed and the pictures are fitted together like a jigsaw to make a rectangle of length 12 inches. Find the dimensions of. . . .

### Cubes Within Cubes Revisited
##### Age 11 to 14 Challenge Level:
Imagine starting with one yellow cube and covering it all over with a single layer of red cubes, and then covering that cube with a layer of blue cubes. How many red and blue cubes would you need?
### Rati-o
##### Age 11 to 14 Challenge Level:
Points P, Q, R and S each divide the sides AB, BC, CD and DA respectively in the ratio of 2 : 1. Join the points. What is the area of the parallelogram PQRS in relation to the original rectangle?

### Part the Polygons
##### Age 7 to 11 Short Challenge Level:
Draw three straight lines to separate these shapes into four groups - each group must contain one of each shape.

### On the Edge
##### Age 11 to 14 Challenge Level:
If you move the tiles around, can you make squares with different coloured edges?

### Tic Tac Toe
##### Age 11 to 14 Challenge Level:
In the game of Noughts and Crosses there are 8 distinct winning lines. How many distinct winning lines are there in a game played on a 3 by 3 by 3 board, with 27 cells?

### Regular Rings 2
##### Age 7 to 11 Challenge Level:
What shape is made when you fold using this crease pattern? Can you make a ring design?

### Dissect
##### Age 11 to 14 Challenge Level:
What is the minimum number of squares a 13 by 13 square can be dissected into?

### Hexagon Transformations
##### Age 7 to 11 Challenge Level:
Can you cut a regular hexagon into two pieces to make a parallelogram? Try cutting it into three pieces to make a rhombus!

### Efficient Cutting
##### Age 11 to 14 Challenge Level:
Use a single sheet of A4 paper and make a cylinder having the greatest possible volume. The cylinder must be closed off by a circle at each end.

### Soma - So Good
##### Age 11 to 14 Challenge Level:
Can you mentally fit the 7 SOMA pieces together to make a cube? Can you do it in more than one way?

### Drilling Many Cubes
##### Age 7 to 14 Challenge Level:
A useful visualising exercise which offers opportunities for discussion and generalising, and which could be used for thinking about the formulae needed for generating the results on a spreadsheet.

### Seven Squares - Group-worthy Task
##### Age 11 to 14 Challenge Level:
Choose a couple of the sequences. Try to picture how to make the next, and the next, and the next... Can you describe your reasoning?

### Painting Cubes
##### Age 11 to 14 Challenge Level:
Imagine you have six different colours of paint. You paint a cube using a different colour for each of the six faces. How many different cubes can be painted using the same set of six colours?

### Khun Phaen Escapes to Freedom
##### Age 11 to 14 Challenge Level:
Slide the pieces to move Khun Phaen past all the guards into the position on the right from which he can escape to freedom.

### Cubes Within Cubes
##### Age 7 to 14 Challenge Level:
We start with one yellow cube and build around it to make a 3x3x3 cube with red cubes. Then we build around that red cube with blue cubes and so on. How many cubes of each colour have we used?
## correlation matrices on copulas

Posted in R, Statistics, University life on July 4, 2016 by xi'an

Following my post of yesterday about the missing condition in Lynch's R code, Gérard Letac sent me a paper he recently wrote with Luc Devroye on correlation matrices and copulas, written for the memorial volume in honour of Marc Yor. It considers the neat problem of the existence of a copula (on [0,1]x…x[0,1]) associated with a given correlation matrix R, and establishes this existence up to dimension n=9. The proof is based on the consideration of the extreme points of the set of correlation matrices. The authors conjecture the existence of 10x10 correlation matrices that cannot be the correlation matrix of any copula. The paper also contains a result that answers my (idle) puzzling of many years, namely how to set the correlation matrix of a Gaussian copula to achieve a given correlation matrix R for the copula. More precisely, the paper links the [correlation] matrix R of X~N(0,R) with the [correlation] matrix R⁰ of Φ(X) by

$r^0_{ij}=\frac{6}{\pi}\arcsin\{r_{ij}/2\}$

A side consequence of this result is that there exist correlation matrices of copulas that cannot be associated with Gaussian copulas. Like

$R=\left[\begin{matrix} 1 &-1/2 &-1/2\\ -1/2 &1 &-1/2\\ -1/2 &-1/2 &1 \end{matrix}\right]$
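A quick Monte Carlo check of this arcsine identity is easy to run; the following sketch assumes numpy and scipy are available and uses variable names of my own.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
r = 0.7
R = np.array([[1.0, r], [r, 1.0]])

# Draw from N(0, R) and push through the normal cdf to get the copula sample
X = rng.multivariate_normal([0.0, 0.0], R, size=500_000)
U = norm.cdf(X)

empirical = np.corrcoef(U, rowvar=False)[0, 1]
theory = 6 / np.pi * np.arcsin(r / 2)
print(empirical, theory)  # both close to 0.683
```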
II.2 Derandomization and Pseudo-Randomness

The P=BPP question and related questions about the power of randomness in computation have given rise to the notion of a pseudo-random generator, a deterministic process that in some sense looks random to the computational model at hand. The fundamental insight here is that a hard function for that computational model can (sometimes) be efficiently converted into a pseudo-random generator for the same model. This insight, that hardness can be turned into randomness, has led to some surprising and deep connections between the complexity of randomness, cryptography, circuit complexity and combinatorics. And once we have such a generator at hand, it yields a derandomization procedure for all probabilistic algorithms in that model - simply try out all possible seeds of the generator (which are much fewer than all possible random strings the algorithm uses). The techniques for implementing this paradigm have improved tremendously in the two decades of its existence.

For BPP (which is the class of algorithmic problems solvable by probabilistic polynomial time algorithms), such non-trivial derandomization is possible already if BPP is different from the class of problems solvable by deterministic exponential time algorithms, EXP. Moreover, if EXP contains functions requiring exponential circuit size, then BPP=P (namely, randomness can always be eliminated from polynomial time computations). This brings up the problem of unconditionally separating BPP from EXP as a natural "next step", which may be feasible even with the current technology. The extensive technical progress of the last couple of years on different ways of constructing pseudo-random generators still has to be fully understood, simplified and generalized to realize its potential impact on this and related problems.

The analogous techniques developed for fooling probabilistic logarithmic space computations (again, some only very recently) seem to put us quite close to deciding unconditionally that randomness is useless in that context as well, e.g. that RL=L (L stands for "logarithmic space", and R stands for "randomized"). But "en route" there are many seemingly simpler problems that still remain challenges, like derandomizing constant-width probabilistic computations or even combinatorial rectangles. Interestingly, the connection between pseudo-randomness and lower bounds is not as explicit in space-bounded models as in time-bounded models, and has yet to be clarified. Finally, it seems that we have precious few ways of generating randomness from hardness. It will be extremely interesting to find drastically different pseudo-random generators (even if these will not improve current results), or to find that current constructions (like the Nisan-Wigderson generators mentioned in the previous section) are somehow universal.

Another line of research in the area is more combinatorial in nature and is aimed at improving sources of imperfect randomness so that after this improvement the outcome can be used for derandomizing various classes of algorithms. Combinatorial constructions like expanders, extractors and condensers were identified for this purpose, and the ultimate goal here is to be able to build explicit constructions of such objects that match the performance of randomly chosen objects. There has been constant progress in this direction over the last several years, but much still remains to be done.
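As a toy illustration of the seed-enumeration paradigm described above, here is a minimal sketch; the interfaces (a decision procedure taking a random string, and a generator mapping seeds to pseudo-random strings) are assumed for illustration and are not from the text.

```python
from itertools import product

def derandomize(algorithm, generator, seed_len):
    """Turn a probabilistic decision procedure into a deterministic one
    by a majority vote over every seed of the generator: 2**seed_len runs,
    far fewer than the runs over all possible random strings."""
    def deterministic(x):
        votes = [algorithm(x, generator(seed))          # run on G(seed) instead of true randomness
                 for seed in product((0, 1), repeat=seed_len)]
        return sum(votes) > len(votes) / 2              # majority answer
    return deterministic
```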
III. Contributions

As we already mentioned in the Introduction, this section can be viewed as annotations to some of the papers linked or referenced at our paper page, and for a more complete picture the latter should be consulted. We confine ourselves here only to the two topics somewhat elucidated in the previous section. But many good results were proved in other branches of Computational Complexity, as well as in Combinatorics, Algebra and Topology, and more information about them can be gained from the abstracts of the respective papers.

III.1 Proof Complexity

One of the long-standing open problems in propositional proof complexity was to decide whether the weak Pigeonhole principle is hard for Resolution or not (here "weak" refers to the fact that the number of pigeons is much larger than the number of holes, potentially infinite). This problem was completely solved in the papers Resolution Lower Bounds for the Weak Pigeonhole Principle by R. Raz and Improved Resolution Lower Bounds for the Weak Pigeonhole Principle by A. A. Razborov (very recently this result was extended to the functional version of the Pigeonhole principle; see Resolution Lower Bounds for the Weak Functional Pigeonhole Principle).

In the paper Resolution is Not Automatizable Unless W[P] is Tractable, M. V. Alekhnovich and A. A. Razborov studied the question whether Resolution is automatizable or not. Under a rather strong hardness assumption from the theory of parameterized complexity, they were able to completely answer this question in the negative. Another important contribution to understanding the power of Resolution was made by Eli Ben-Sasson and Nicola Galesi in Space Complexity of Random Formulae in Resolution. The behaviour of a proof system on random tautologies is traditionally considered a good indicator of its strength. Ben-Sasson and Galesi were able to show that Resolution performs rather badly on such formulas in terms of the space consumed by the proof (a similar result for the ordinary bit size measure was known for a long time).

Among the papers devoted to stronger proof systems, we can mention Monotone Simulations of Nonmonotone Proofs by Albert Atserias, Nicola Galesi and Pavel Pudlak, which contains the following (rather surprising) result. An interesting fragment of the Frege proof system called the monotone sequent calculus was introduced several years ago and was long believed to be substantially weaker than (non-monotone) Frege. This paper showed that, contrary to this belief, the monotone sequent calculus can in fact efficiently simulate every Frege proof.

III.2 Derandomization and Pseudo-Randomness

The paper In Search of an Easy Witness: Exponential Time vs. Probabilistic Polynomial Time by R. Impagliazzo, V. Kabanets and A. Wigderson (awarded the Best Paper Award at the 16th IEEE Conference on Computational Complexity) makes very important contributions to the goal of extracting randomness from hardness described above. It proves (in a sense, and in an intermediate form) the universality of this approach: no derandomization of the complexity class MA (which stands for "Arthur-Merlin games") is possible unless there are hard functions in NEXPTIME. As another application of this technique, the authors showed a number of so-called downward closure results (which are very rare in Complexity Theory).

The paper Entropy Waves, The Zig-Zag Graph Product, and New Constant-Degree Expanders and Extractors by O. Reingold, S. Vadhan and A.
Wigderson introduces an elegant combinatorial construction, the zig-zag product of graphs. By iterating this construction, one gets, in particular, simple explicit expander graphs of every size, starting from one constant-size expander. The subsequent paper Semi-direct Products in Groups and Zig-Zag Products in Graphs: Connections and Applications by N. Alon, A. Lubotzky and A. Wigderson reveals deep connections between this zig-zag product and basic group-theoretical primitives. As an important application, they give an example showing that the expansion property of the Cayley graph of a group may not be invariant under the choice of generators. Extracting Randomness via Repeated Condensing by O. Reingold, R. Shaltiel and A. Wigderson constructs efficient explicit condensers and extractors that give significant qualitative improvements over previously known constructions for sources of arbitrary min-entropy.
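To make the zig-zag product concrete, here is a minimal Python sketch in the rotation-map formalism of Reingold, Vadhan and Wigderson; the encoding conventions are assumptions of this sketch, not taken from the papers themselves.

```python
# A D-regular graph on N vertices is represented by its rotation map:
# rot(v, i) = (w, j) means the i-th edge at v leads to w and is w's j-th edge.

def zigzag(rot_G, rot_H):
    """Rotation map of the zig-zag product of G (D-regular on N vertices)
    with H (d-regular on D vertices): a d^2-regular graph on the N*D
    vertices (v, k), with edge labels (i, j)."""
    def rot(vertex, label):
        (v, k), (i, j) = vertex, label
        k1, i1 = rot_H(k, i)      # "zig": a small step inside the cloud of v
        w, k2 = rot_G(v, k1)      # the big step, along an edge of G
        k3, j1 = rot_H(k2, j)     # "zag": a small step inside the cloud of w
        return (w, k3), (j1, i1)
    return rot

# Tiny sanity check with two cycles: rotation maps must be involutions.
def cycle_rot(n):
    return lambda v, i: ((v + 1) % n, 1) if i == 0 else ((v - 1) % n, 0)

rot = zigzag(cycle_rot(8), cycle_rot(2))   # C_8 is 2-regular, so H lives on 2 vertices
x = ((3, 1), (0, 1))
assert rot(*rot(*x)) == x                  # applying the rotation twice returns the start
```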
Energy Policy

# Ethanol Biofuels in the United States

October 7, 2012, 10:48 pm

Ethanol plant, West Burlington, Iowa. Source: Steven Vaughn

Biofuels are a major source of renewable energy in the United States. Ethanol produced from corn starch accounts for 90% of the biofuels consumed, but only 5% of all light-duty motor transportation fuel consumption. Ethanol is blended with gasoline to increase octane and reduce emissions, and used as a substitute for gasoline to reduce consumption of petroleum-based fuels.

Ethanol has the potential to provide many benefits. As an alternative to gasoline refined from imported oil, its use can improve U.S. national energy security, albeit marginally. Although the exact magnitude is subject to debate, ethanol is thought by many to produce lower greenhouse gas (GHG) emissions compared with gasoline. For this reason, its increased use is seen by many as playing a potentially key role in reducing the contribution of the transportation sector to global climate change. U.S.-produced ethanol can also boost demand for U.S.-produced farm products, stimulate rural economies, and provide employment in rural areas.

An ethanol-centric policy does have its critics. For example, ethanol has been implicated as a factor in rising commodity and food prices. As ethanol production increases, corn is diverted from feed and export markets and acreage is diverted from other crops, such as soybeans. The extent to which ethanol is responsible for these impacts has been the subject of debate and wide-ranging estimates. Also, the potential to displace gasoline and increase national energy security is limited by the land available to grow corn.

Since the 1970s, ethanol has received price support from the U.S. government. Presently, federal support is provided in the form of mandated levels of consumption, financial incentives such as grants and loan guarantees, tax credits, tariffs on ethanol imports, and federally funded research and development efforts. Tax credits made available to blenders of ethanol are expected to total nearly $6 billion in 2009. Incentives were initially provided to get the ethanol industry off the ground; many now argue that the ethanol industry has matured and these resources should be used elsewhere. Federal support for biofuels, and ethanol in particular, is likely to be an issue facing the 111th Congress.

Ethanol has received more federal support than other types of renewable energy. Some argue that the market, rather than the government, should direct investment, whether it be for ethanol, wind, solar, geothermal, or other alternatives. In addition, ethanol is used in internal combustion engines that mostly use fossil fuels, unlike alternatives such as battery or plug-in electric vehicles, which do not consume fossil fuels directly. Other issues of congressional interest may include financial support for ethanol during the recession and the extension of the blender's tax credit and the import tariff, both of which expire after 2010. The renewable fuel standard (RFS), which mandates increasing volumes of renewable fuel use through 2022, may become an issue if biofuels production shortfalls occur and the mandate cannot be met. The U.S. Environmental Protection Agency (EPA) is drafting rules on the calculation of lifecycle greenhouse gas emissions that will determine which fuels qualify for the RFS. These rules will likely attract congressional scrutiny if they exclude major stakeholders in the ethanol industry.
In addition, continuation of the RFS itself may be the subject of debate.

## Introduction

The United States consumes about 186 billion gallons of light-duty road motor transportation fuel annually, most in the form of petroleum-based fuel (i.e., gasoline and diesel). However, biofuels are a small, yet growing, component of U.S. fuel consumption, accounting for an estimated 10 billion gallons in 2008, or 5% of total light-duty road motor transportation use by volume. Ethanol and biodiesel are the most common agriculture-based biofuels. Ethanol accounted for about 92% of agriculture-based biofuels consumption in 2008, and biodiesel for 8%, on an energy-equivalent basis.1 Together with imports, U.S. ethanol consumption was 6.7 billion gallons in 2007 and 8.9 billion gallons in 2008.2 Although a small volume compared with total liquid fuel consumption, it nevertheless displaced roughly 88 million barrels of oil in 2007 and 125 million barrels in 2008, compared with oil imports of about 3.7 billion barrels.

This report focuses on "first generation" biofuels—that is, those currently in commercial production (corn-starch ethanol3 and foreign-produced sugar cane ethanol).4 "Second generation" biofuels, primarily cellulosic biofuels, are not yet produced on a commercial scale in the United States.5

Historically, fossil-fuel-based energy has generally been less expensive to produce and use than energy from renewable sources. However, since the late 1970s, U.S. policymakers at both the federal and state levels have enacted a variety of incentives, regulations, and programs to encourage the production and use of agriculture-based energy. These programs have proven critical to the economic success of rural renewable energy production. The benefits to rural economies and to the environment are not always clear and come with costs, leading to a lively debate between proponents and critics of government subsidies that underwrite agriculture-based renewable energy production.

Proponents of government support for agriculture-based biofuels have cited national energy security, environmental benefits (such as reductions in greenhouse gas (GHG) emissions to moderate climate change rates), and higher domestic demand for U.S.-produced farm products as viable justifications.6 In addition, proponents argue that rural, agriculture-based energy production can enhance rural incomes and employment opportunities, while expanding the value added to U.S. agricultural commodities. In contrast, petroleum industry critics of biofuels subsidies argue that technological advances in seismography, drilling, and extraction continue to expand the fossil-fuel resource base, which has traditionally been cheaper and more accessible than biofuels supplies.7 Other critics argue that current biofuels production strategies can only be economically competitive with existing fossil fuels in the absence of subsidies if significant improvements in existing technologies are made or new technologies are developed.8 Until such technological breakthroughs are achieved, critics contend that the subsidies distort energy market incentives and divert research funds from the development of other potential renewable energy sources, such as wind, solar, or geothermal, that offer potentially cleaner, more bountiful alternatives. Still others question the rationale behind policies that promote biofuels for energy security.
These critics question whether the United States could ever produce sufficient feedstock of either starches, sugars, or vegetable oils to permit biofuels production to meaningfully offset petroleum imports.9 Finally, there are those who argue that the focus on development of alternative energy sources undermines efforts to conserve and reduce the nation's energy dependence.

The Renewable Fuel Standard (RFS) is the most significant government intervention in the ethanol industry. The RFS mandates that increasing volumes of renewable fuels be blended with conventional fuels through 2022. In 2009, 11.1 billion gallons of biofuels must be used, of which 10.5 billion gallons may be corn ethanol. The RFS is discussed in detail below.

## Ethanol Trade Issues

Most ethanol imported into the United States is subject to a tariff of $0.54 per gallon. Several factors are generating discussion:

1. beginning in 2009, the tariff is $0.09 per gallon higher than the blender's tax credit it was intended to offset,
2. as the RFS increases and becomes more difficult to fulfill, imports may play a greater role in reaching mandated volumes, and
3. if the price of imported ethanol were lower (without the tariff), blenders would be likely to blend more ethanol into gasoline, achieving one of the benefits of ethanol—reduced emissions.

### Potential Market Effects

The use of food crops to produce energy has altered the dynamics of agricultural markets. The U.S. Department of Agriculture (USDA) estimates that nearly a third of the 2008/2009 corn crop will be refined into ethanol. Corn production has increased in recent years to accommodate higher demand, resulting in higher prices and shifts in acreage to corn from soybeans and other crops. High corn prices have boosted costs for the livestock industry. Congress may continue its debate and oversight in this area, possibly focusing on two areas: first, the role of speculation in increasing the magnitude and volatility of agricultural and food prices, and second, the response to higher food prices by domestic and international providers of food aid.

### Greenhouse Gas Emissions

The Energy Independence and Security Act of 2007 (EISA, P.L. 110-140) requires that biofuels eligible under the RFS reduce greenhouse gas (GHG) emissions by certain levels compared with fossil fuels. EPA is charged with formulating rules for calculating GHG emissions using lifecycle analysis that includes both direct and significant indirect effects (see the section below on GHG emissions for more detail). The methodology selected by EPA could potentially eliminate certain biofuels from the RFS—with major economic implications for segments of the renewable fuels industry. If EPA rules on GHG emissions are perceived as overly restrictive, some in Congress could introduce legislation to relax the rules.

### Ethanol Blend Rates

Ethanol industry proponents are concerned that, even as production of corn ethanol increases, limitations in distribution infrastructure and vehicle absorption capacity will create a bottleneck, known as the "blend wall," holding down potential consumption. The blend wall occurs when the maximum allowable percentage of ethanol in conventional gasoline (e.g., gasoline meant for all vehicles) cannot absorb the volume of ethanol mandated by the RFS. For instance, current annual gasoline consumption of 140 billion gallons allows for theoretical ethanol consumption of 14 billion gallons at the current maximum blend of 10% (E10). Thus 14 billion gallons is the blend wall.
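The blend-wall arithmetic is simple enough to sketch in a few lines; the 140-billion-gallon gasoline figure comes from the text, while the helper function is ours.

```python
GASOLINE_GALLONS = 140e9   # approximate annual U.S. gasoline consumption (from the text)

def blend_wall(blend_rate, gasoline=GASOLINE_GALLONS):
    """Maximum ethanol volume (gallons) absorbable at a given blend rate."""
    return gasoline * blend_rate

for pct in (10, 12, 15):
    print(f"E{pct}: {blend_wall(pct / 100) / 1e9:.1f} billion gallons")
# E10: 14.0 (the blend wall above), E12: 16.8 (the 'roughly 17' cited below), E15: 21.0
```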
However, it is not practical to blend every gallon of fuel consumed in the United States at the 10% level, so the actual amount of ethanol consumed is slightly less—closer to 12.5 billion gallons. The blend wall is reached when the volume of ethanol mandated under the RFS is greater than the volume that can be consumed as E10 plus the very small amount consumed as E85. Currently, the ability to consume E85 is very limited due to the lack of infrastructure and the small number of flexible fuel vehicles (FFVs)—annual E85 consumption is about 10 million gallons per year,11 accounting for less than 1% of total ethanol consumption.

One solution to the blend wall is to increase the proportion of ethanol in gasoline consumed by conventional vehicles. Increasing the allowable blend to E12 could raise potential consumption to 17 billion gallons without any additional investment in infrastructure or vehicle modifications. This solution is very popular with corn and ethanol producers, who claim an increase in green jobs, benefits to rural economies, and the displacement of foreign-produced petroleum. U.S. Secretary of Agriculture Tom Vilsack has supported a shift to E15. Those against increasing the blend rate, such as livestock producers and retail food interests, claim that higher food and feed prices will result from higher corn demand. EPA has been assessing the feasibility of increasing the ethanol blend rate. In addition to market impacts, concerns include the effects of higher blends on motorcycles, small engines, and emission control and fuel systems, especially in older vehicles.

### Federal Support for the Ethanol Industry12

The ethanol industry has received substantial support from the federal government. However, some ethanol industry supporters argue that the current economic environment justifies additional government support. Recent industry proposals include guaranteed operating loans targeted to ethanol refiners and tax credits for "green" job creation or preservation. The blender's tax credit (or volumetric ethanol excise tax credit) is an income tax credit of $0.45 per gallon on each gallon of ethanol blended into gasoline for sale or consumption. It is scheduled to expire during the 111th Congress—at the end of 2010—and ethanol proponents are expected to argue for its extension. While the cellulosic biofuels production tax credit and the small producer's tax credit do not expire during the 111th Congress, either could be modified as the debate progresses.

Proponents of other types of renewable energy contend that available resources could be better used supporting wind, solar, or other types of renewable energy, and they will likely argue for a shift of government support away from ethanol. Some critics of the ethanol industry maintain that government expenditures in the form of tax credits and other subsidies for the ethanol industry are excessive. They question whether the industry will ever be viable without government assistance. Others question the balance between support for biofuels and other forms of renewable energy. A recent Environmental Working Group report based on U.S.
Department of Energy (DOE) analysis shows that biofuels accounted for three-quarters of the tax benefits and two-thirds of all federal subsidies allotted for renewable energy in 2007.13 According to data compiled by DOE's Energy Information Administration, the corn-based ethanol industry received $3 billion in tax credits in 2007, more than four times the $690 million in credits to other forms of renewable energy, including solar, wind, and geothermal power. Proponents of the ethanol industry urged policymakers to direct economic stimulus package resources authorized by the 111th Congress toward the ethanol industry. Among the support requested was a $1 billion short-term credit facility to finance current operations, additional loan guarantees for new production capacity and infrastructure, job creation tax credits for new jobs created by production operations, and expanded federal support for research and development.14 However, the final stimulus plan (the American Recovery and Reinvestment Act of 2009, P.L. 111-5) does not contain specific additional support for ethanol, although it expands the tax credit for E85 fuel pumps and storage facilities.

## U.S. Ethanol Supply and Use

U.S. ethanol production in 2008 exceeded 9.2 billion gallons per year (bgpy), 42% above 2007, following rapid increases during the past decade. Production in 2007 reached 6.5 bgpy, a 33% advance from 2006 (see Figure 1). Production in 1998 was only 1.4 bgpy. The United States also imports ethanol, increasing the supply by about 400 to 700 million gallons per year (mgpy). Total supply was 6.9 bgpy in 2007 and 9.8 bgpy in 2008. Since 2005, the United States has surpassed Brazil as the world's leading producer of ethanol.15

Several events contributed to the historical growth of U.S. ethanol production: the energy crises of the early and late 1970s; a partial exemption from the motor fuels excise tax (legislated as part of the Energy Tax Act of 1978); ethanol's emergence as a gasoline oxygenate; and provisions of the Clean Air Act Amendments of 1990 that favored oxygenate blending with gasoline.16 Ethanol production is projected to continue growing rapidly through at least 2015 on the strength of both the extension of existing government incentives and the possible addition of new ones. These include the per-gallon blender's tax credit of $0.45, the conventional biofuels RFS of 10.5 bgpy rising to 15 bgpy by 2015, and a $0.54 per gallon tariff on most imported ethanol.17

Figure 1. Ethanol Supply: Production and Imports, 1980-2008. Source: Renewable Fuels Association 1980-2007, CRS estimate for 2008.

### U.S. Ethanol Production

As of November 2008, ethanol was produced in 27 states by 172 refineries with 10.3 billion gallons per year of capacity (see Table 1). Most refineries are in the Corn Belt, but some are located on the West Coast and in the Southeast. Ethanol is generally produced in rural areas where corn is grown, to limit transportation costs for feedstocks. Ethanol plants range in size from 20 mgpy to over 100 mgpy. Corn is the principal feedstock for ethanol produced in the United States, accounting for about 97% of total output. Sorghum and a very small quantity of wheat are also used. These feedstocks, along with sugar, produce what are known as "first generation" biofuels. Biofuels produced from cellulosic feedstocks such as corn stover, prairie grasses, or woody biomass are known as "second generation" biofuels.18 In early 2009, not all ethanol plants were producing at full capacity.
Some plants owned by financially troubled companies have closed and others are on standby or operating at reduced levels until more profitable circumstances exist. In 2008 an additional 23 refineries, accounting for 3.3 bgpy of capacity, were under construction, although many of these projects are now on standby or have been cancelled.

Table 1. Ethanol Production Capacity by State (as of December 2008)

| Rank | State | Operating (mil. gal./yr) | Operating (%) | Expansion (mil. gal./yr) | Expansion (%) | Under Construction (mil. gal./yr) | Under Construction (%) |
|------|-------|--------------------------|---------------|--------------------------|---------------|-----------------------------------|------------------------|
| 1 | IA | 2,439 | 24% | 425 | 40% | 880 | 24% |
| 2 | NE | 1,213 | 12% | 360 | 34% | 511 | 14% |
| 3 | IL | 925 | 9% | 55 | 5% | 355 | 10% |
| 4 | IN | 912 | 9% | 0 | 0% | 198 | 5% |
| 5 | SD | 835 | 8% | 35 | 3% | 160 | 4% |
| 6 | MN | 769 | 7% | 15 | 1% | 275 | 8% |
| 7 | OH | 516 | 5% | 0 | 0% | 74 | 2% |
| 8 | WI | 504 | 5% | 30 | 3% | 0 | 0% |
| 9 | KS | 363 | 4% | 0 | 0% | 115 | 3% |
| 10 | ND | 283 | 3% | 0 | 0% | 100 | 3% |
| 11 | MO | 215 | 2% | 0 | 0% | 0 | 0% |
| 12 | MI | 214 | 2% | 50 | 5% | 50 | 1% |
| 13 | TN | 167 | 2% | 38 | 4% | 0 | 0% |
| 14 | NY | 150 | 1% | 0 | 0% | 5 | 0% |
| 15 | CA | 144 | 1% | 19 | 2% | 55 | 2% |
| 16 | TX | 140 | 1% | 0 | 0% | 355 | 10% |
| 17 | CO | 139 | 1% | 10 | 1% | 14 | 0% |
| 18 | GA | 100 | 1% | 35 | 3% | 10 | 0% |
|  | Other | 294 | 3% | 0 | 0% | 467 | 13% |
|  | US Total | 10,321 | 100% | 1,072 | 100% | 3,624 | 100% |

Source: "Ethanol Plant List," The Ethanol Monitor, published by Oil Intelligence Link, Inc., Editor & Publisher: Tom Waterman; The Ethanol Monitor © 2008, December 2, 2008. Accessed March 23, 2009. Notes: Expansion capacity includes plants that are permitted, under construction, and "likely" to be completed.

## Renewable Fuel Standard (RFS)

The expanded renewable fuel standard (RFS) in the Energy Independence and Security Act of 2007 (EISA, P.L. 110-140) mandates renewable fuels blending requirements for fuel suppliers. It expands the earlier renewable fuel standard in the Energy Policy Act of 2005 (EPAct 2005, P.L. 109-58) by increasing mandated volumes and creating carve-outs for different types of biofuels. The expanded RFS consists of two main categories. The first is an unspecified category that may be filled with any type of biofuel, including corn ethanol, which predominates. The second category is "advanced biofuels," and can be fulfilled with biofuels other than corn ethanol. Within the advanced biofuels category are carve-outs for cellulosic, biodiesel, and other advanced biofuels.

The RFS requires that 11.1 billion gallons of renewable fuels be blended into gasoline in 2009. The total blending requirement grows annually to 36 billion gallons in 2022 (see Figure 2).19 The unspecified portion of the RFS is capped at 10.5 billion gallons in 2009 and increases annually until it is capped at 15 billion gallons from 2015 through 2022. This component of the mandate is likely to be filled by corn-starch ethanol, although any renewable biofuel may be used as long as it meets the lifecycle greenhouse gas emissions requirement. Although advanced biofuels may be used to fulfill the non-advanced renewable fuels portion of the mandate, corn ethanol cannot be used to meet the advanced biofuels mandate.

Figure 2. Renewable Fuel Standard Under EISA. Source: Energy Independence and Security Act (EISA, P.L. 110-140, Section 202). Notes: The "Any Renewable Fuel" portion is a cap, whereas other categories are floors—the unspecified portion may be filled by corn ethanol or an advanced biofuel.

As previously discussed, eligibility under the RFS also requires that biofuels achieve GHG emissions reductions. For corn ethanol from new refineries,20 a reduction of 20% compared with gasoline's emissions is required.
Advanced biofuels have a more stringent GHG reduction requirement of 50% compared with gasoline, and eligible cellulosic biofuels must have a 60% reduction. The rules for calculating lifecycle greenhouse gas emissions are currently being formulated by EPA and are due to be announced in 2009. These regulations will determine which fuels are eligible for the RFS and will therefore have a significant impact on the future of the biofuels industry. EISA requires consideration of both direct and indirect lifecycle emissions. Indirect GHG emissions caused by land use changes are particularly difficult to calculate (see the section on GHG emissions below).

### The Ethanol Production Process

Ethanol, or ethyl alcohol, is an alcohol made by fermenting and distilling simple sugars. It can be produced from any biological feedstock that contains appreciable amounts of sugar, or materials that can be converted into sugar such as starch or cellulose. Sugar beets and sugar cane are examples of feedstocks that contain sugar. Corn and sorghum contain starch that can relatively easily be converted into sugar. In the United States, corn is the principal ingredient used in the production of ethanol; in Brazil, sugar cane is the primary feedstock.

Corn-starch ethanol can be produced using either of two processes: wet milling or dry milling. These processes differ in the initial processing of the corn prior to fermentation. During the early stages of the ethanol industry, the wet milling process was predominant. Most new plants have used the dry mill process. The shift over time from the wet mill process to the dry mill process has resulted in improved efficiencies; the cost of inputs, especially energy, per gallon of ethanol produced has been reduced.21

Feedstocks, water, energy, labor, and capital are the major inputs for ethanol production. Ethanol yields in 2008 ranged from 2.5 to 2.9 gallons per bushel of corn, with a weighted average of 2.75 gallons per bushel.22 Most ethanol plants operate using natural gas or coal, although some plants use biomass or manure. Electrical energy is used to operate plant machinery, and steam or hot air is used in liquefaction, fermentation, distillation, and drying of by-products. Distillers grains for livestock feed are an important by-product of ethanol production but must be dried before shipping long distances to reduce weight. Since drying distillers grains is a major use of energy for ethanol producers, refineries often locate near users of animal feed, such as large cattle operations, and ship distillers grains wet to cut processing costs. Water is a major input into the distillation process and an important environmental consideration. Improved recycling processes have reduced water use in newer ethanol plants.23

### U.S. Ethanol Imports

In addition to domestic production, the U.S. ethanol supply includes imports of sugar-cane ethanol from Brazil and the Caribbean Basin Initiative (CBI) nations of El Salvador, Costa Rica, Jamaica, and Trinidad and Tobago. Ethanol imports reached 557 million gallons in 2008, or 6% of U.S. supply. Brazil, which ranks second behind the United States in ethanol production, traditionally accounts for about half of U.S. ethanol imports, with the remainder shipped from CBI countries. (Much of this is originally produced in Brazil and transshipped to CBI countries, where it is dehydrated to qualify for tariff-free status when shipped to the United States.)
Under the CBI, an unlimited amount of ethanol may be shipped to the United States duty-free if indigenous feedstocks are used in its production. Ethanol refined in CBI countries from foreign feedstocks, or foreign ethanol that is substantially altered prior to shipment, can be shipped duty-free up to a volume no greater than 7% of U.S. use. This rule has encouraged countries, for instance Jamaica, to import hydrous ethanol from Brazil, dehydrate it to remove moisture, and ship the anhydrous ethanol to the United States duty-free.

U.S. imports of ethanol are subject to a $0.54 per gallon duty. Originally, the duty was intended to deny the benefit of tax credits available to ethanol blended in the United States to imported ethanol. These credits are $0.45 per gallon beginning in 2009, $0.09 per gallon less than the tariff, increasing its discriminatory impact. In addition, a much smaller ad valorem tariff of 2.5% is levied on imported ethanol.24 Many argue that a tariff on ethanol increases costs to consumers.

Ethanol imports benefitted from a duty drawback25 provision through September 2008. Imported ethanol received a duty drawback if a "like commodity" to ethanol, or its final product, a gasoline-ethanol mixture, was exported. Jet fuel was considered a like commodity to the gasoline-ethanol mixture and was frequently exported to trigger the duty drawback. However, a provision in the Food, Conservation, and Energy Act of 2008 (the 2008 farm bill, P.L. 110-246) eliminated the duty drawback for fuels that do not contain ethanol (such as jet fuel).

The ethanol tariff will likely be of interest for the 111th Congress. During the 110th Congress, several bills were introduced to eliminate, reduce, or extend the tariff on ethanol. Proponents of the tariff cite the need to support the ethanol industry against lower-priced imports until it reaches maturity. They contend that it prevents imported ethanol from benefitting from the blender's tax credit, which is intended, among other things, to promote U.S. energy independence. Opponents of the tariff claim that the industry is generally profitable and has matured to the point where such incentives are unnecessary. Opponents also point out that imports of Brazilian ethanol may be essential to fulfill the RFS mandate in coming years and should therefore be encouraged. Legislation (S. 622) has been introduced in the 111th Congress to address the lack of parity between the blender's tax credit and the tariff on ethanol. The bill would periodically reduce the tariff on ethanol by the same amount as any reduction in the income or excise tax credit applicable to ethanol, so that the tariff is equal to, or less than, the applicable income or excise tax credit.

## Economics of Ethanol

The economics underlying ethanol production include decisions concerning capital investment, plant location (relative to feedstock supplies, population centers, and by-product markets), production technology, and product marketing and distribution, as well as federal and state production incentives and usage mandates.26 Demand for ethanol is dependent on regulatory mandates, its price relative to gasoline, and, until 2006, its use as an oxygenate.27 Profitability for an ethanol refiner depends primarily on the cost of the main input, corn, relative to the value of ethanol (adjusted for any applicable tax credits), and the value of co-products produced. Co-products are an important economic consideration for ethanol producers.
For each gallon of ethanol produced using the dry mill process, an average of 6.7 pounds of dried distillers grains (DDG) (at 10% moisture) is produced. For every gallon of ethanol produced in a dry mill plant, about $0.25 of distillers dried grains and $0.006 of CO2 can be sold.28

### The Ethanol Industry During the Recession of 2008-2009

During 2005 and much of 2006, the ethanol industry enjoyed a period of significant profitability. However, the fundamentals for ethanol production began to shift in 2008. In late 2008, ethanol prices exceeded gasoline prices and remained higher through early 2009. Discretionary blending above the RFS mandate stopped and demand for ethanol slipped. Simultaneously, the overall economic climate worsened—demand for fuel declined, further reducing demand, and credit tightened. Ethanol refineries cut back production, and many with heavy debt loads were forced into bankruptcy. At the same time, corn prices reached record levels before falling in early 2009. At that time, ethanol prices of $1.66 per gallon combined with corn prices of $4.10 per bushel (nearby month on the futures market) and gasoline prices around $1.68 per gallon resulted in reduced ethanol demand and losses by refiners. When ethanol is priced below gasoline (on an energy-equivalent basis), as it was during the 2006-2008 period, ethanol reduces the price consumers pay at the pump. However, beginning in the last half of 2008 and early 2009, ethanol prices were higher than gasoline, and blending actually increased the pump price.29

Figure 3. Corn Versus Ethanol Prices, 2000-2008. Source: Corn, No. 2 yellow, Central Illinois, USDA Agricultural Marketing Service; ethanol rack prices, f.o.b. Omaha, Nebraska Ethanol Board, Lincoln, NE; Nebraska Energy Office, Lincoln, NE. Notes: Prices are monthly averages.

A radically different picture emerged in mid- to late 2008 as the economy began to slow and credit markets tightened. The recession has provided numerous challenges for the ethanol industry. Volatility in the corn and petroleum markets has made it difficult to maintain profitability. Tightening credit markets stopped most plant construction. Ethanol production was reduced to 80% to 90% of capacity as crush margins tightened, low-priced gasoline became more competitive, and overall demand for transportation fuel fell. Illustrative of the industry's recent problems, VeraSun, a major ethanol producer, filed for bankruptcy on October 31, 2008, and is selling its refineries.30 Other plants have suspended operations or are operating at reduced capacity. At the end of 2008, some estimates placed total industry output at 84% of its potential.31

Figure 4. Ethanol and Gasoline Prices, 2000-2008. Source: Ethanol and unleaded gasoline rack prices per gallon, f.o.b. Omaha; Nebraska Ethanol Board, Lincoln, NE; Nebraska Energy Office, Lincoln, NE. Notes: By volume.

Some analysts have predicted substantial consolidation as the next step for the maturing ethanol industry.32 However, consolidation lately has been slowed by tight credit markets. Nevertheless, some of the larger ethanol producers, including Poet and Archer Daniels Midland (ADM), have expressed interest in buying up smaller, struggling plants.33 Many of these smaller, cooperative-owned, older plants buy local corn and have a local market for ethanol. They have more favorable balance sheets than recently constructed 100 mgpy plants with heavy debt loads and are under little pressure to sell.
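A rough per-gallon margin sketch using the prices quoted earlier in this section illustrates why refiners were squeezed; the operating-cost figure is a placeholder assumption, not a number from this report.

```python
ETHANOL_PRICE = 1.66        # $/gallon (early-2009 figure from the text)
CORN_PRICE = 4.10           # $/bushel (nearby futures, from the text)
YIELD_GAL_PER_BU = 2.75     # gallons of ethanol per bushel (weighted average, from the text)
COPRODUCTS = 0.25 + 0.006   # $/gallon of DDG and CO2 credits (from the text)
OTHER_COSTS = 0.50          # $/gallon for energy, labor, capital -- hypothetical placeholder

corn_cost = CORN_PRICE / YIELD_GAL_PER_BU   # roughly $1.49 of corn per gallon
margin = ETHANOL_PRICE + COPRODUCTS - corn_cost - OTHER_COSTS
print(f"corn cost/gal: {corn_cost:.2f}, margin/gal: {margin:.2f}")
# With these inputs the margin is slightly negative, consistent with the
# refiner losses described above.
```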
The sale at auction of VeraSun's 16 refineries may contribute to further consolidation. Despite the difficult economic times, five ethanol plants, with a total production capacity of 485 mgpy, came online during October and November 2008.34

### Impact on Commodity Markets

USDA estimates that 3.7 billion bushels of corn (about one-third of total U.S. corn production) from the 2008 corn crop will be used to produce ethanol during the 2008/2009 (September-August) corn marketing year. Ethanol's share of corn production was 20% (2.119 billion bushels) in 2006/2007 and expanded to 23% (3.026 billion bushels) in 2007/2008.35 In its annual baseline projections (February 2009), USDA projects that U.S. ethanol production will use 35% (5.1 billion bushels) of the corn crop by 2018. In March 2009, the Food and Agricultural Policy Research Institute (FAPRI) projected that 2018 U.S. ethanol production will reach 17.7 billion gallons and use 44% (5.4 billion bushels) of the U.S. corn crop.36

As corn prices rise, so too does the incentive to expand corn production, either by planting on more marginal land or by altering the traditional corn-soybean rotation that dominates Corn Belt agriculture. This shift could displace other field crops, primarily soybeans, and other agricultural activities. Further, corn production is among the most energy-intensive of the major field crops. An expansion of corn area could have important and unwanted environmental consequences due to increases in fertilizer and chemical use and soil erosion. The National Corn Growers Association claims "there is still room to significantly grow the ethanol market without limiting the availability of corn."37 However, other evidence suggests that effects are already being felt from the current expansion in corn production. The increasing share of the U.S. corn crop utilized by ethanol blenders, and other market conditions, has resulted in declining U.S. exports. Tight global corn supplies contributed to high commodity prices, impacting consumers, especially in low-income countries where grains form a large share of diets and food is a major expenditure.

Supporters of corn ethanol claim that biofuels production and use will have enormous agricultural and rural economic benefits by increasing farm and rural incomes and generating substantial rural employment opportunities.38 Opponents maintain that continued expansion of corn-based ethanol production could have significant negative consequences for traditional U.S. agricultural crop production and rural economies. Large-scale shifts in agricultural production activities would likely also have important regional economic consequences that have yet to be fully explored or understood. For more information on the impact of ethanol on food and feed prices, see CRS Report RL34265, Selected Issues Related to an Expansion of the Renewable Fuel Standard (RFS), by Brent D. Yacobucci and Tom Capehart. For more information on commodity price impacts, see CRS Report RL34474, High Agricultural Commodity Prices: What Are the Issues?, by Randy Schnepf.

### Impact on Domestic Food Markets

Critics of first generation ethanol claim it was responsible for a large proportion of the food price increases that occurred in early 2008. As evidence they cite USDA's estimate that the U.S.
Consumer Price Index (CPI) for all food increased 5.5% in 2008 and 4.0% in 2007, compared with an average rate of increase of 2.5% for 1997 to 2006.39 In analyzing this criticism, however, it is important to distinguish between prices of farm-level commodities and retail-level food products, because most consumer food prices are largely determined by marketing costs that occur after the commodities leave the farm.40 The price of a particular retail food item varies with a change in the price of an underlying input in direct relation to the relative importance (in value terms) of that input. For example, if the value of wheat in a $1.00 loaf of bread is about 10¢, then a 20% rise in the price of wheat translates into a 2¢ rise in the price of a loaf of bread. Considering corn's relatively small value-share in most retail food product prices, some contend that it is unlikely that the ethanol-driven corn price surge is a major factor in current food price inflation estimates.41 Furthermore, many economists agree that the majority of retail food price increases were not mainly ethanol-driven, but rather were the result of various other factors, including a sharp increase in energy prices that rippled through all phases of marketing and processing channels, and the strong increase in demand for agricultural products in the international marketplace from China and India (a product of their large populations and rapid economic growth).42

## Energy Efficiency

An examination of energy efficiency can help determine whether ethanol provides an improvement over gasoline or other fuels. Does it take more fossil fuel to produce a gallon of ethanol than the energy available when that gallon of ethanol is consumed? The net energy balance (NEB) of a fuel is a useful means of comparing different fuels for public policy purposes. The NEB is expressed as a ratio of the energy produced from a production process relative to the energy used in that production process. An output/input ratio of 1.0 implies that energy output equals energy input.

The critical factors underlying ethanol's energy efficiency include (1) corn yields per acre (higher yields for a given level of inputs improve ethanol's energy efficiency); (2) the energy efficiency of corn production, including the energy embodied in inputs such as fuels, fertilizers, pesticides, seed corn, and cultivation practices; (3) the energy efficiency of the corn-to-ethanol production process: clean-burning natural gas is the primary processing fuel for most ethanol plants, but several plants (including an increasing number of new plants) use coal; and (4) the energy value of corn by-products, which act as an offset by substituting for the energy needed to produce market counterparts.

Over the past decade, technical improvements in the production of agricultural inputs (particularly nitrogen fertilizer) and ethanol, coupled with higher corn yields per acre and stable or lower input needs, appear to have raised ethanol's NEB. About 82% of the corn used for ethanol is processed by more efficient dry milling (a grinding process) and about 18% is processed by wet milling plants. All new plants under construction or coming online are expected to dry mill corn into ethanol; thus the dry milling share will continue to rise for the foreseeable future. A 2007 report by the National Renewable Energy Laboratory (NREL) summarized recent reports on the NEB for corn ethanol. Results varied widely, but most reports using similar assumptions found the NEB for corn ethanol to be positive.
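To make the ratio concrete, the NEB can be sketched as follows; the Btu figures are rough illustrative assumptions chosen to reproduce the headline ratios discussed below, not values from the studies cited.

```python
ETHANOL_ENERGY_BTU = 76_000   # approximate energy content of a gallon of ethanol (assumed)

def neb(fossil_input_btu_per_gal):
    """NEB = energy delivered per gallon / fossil energy used to produce it."""
    return ETHANOL_ENERGY_BTU / fossil_input_btu_per_gal

print(f"{neb(45_500):.2f}")   # ~1.67, matching the optimistic estimate below
print(f"{neb(60_800):.2f}")   # ~1.25, matching the more conservative estimate
print(f"{neb(80_000):.2f}")   # <1.0: more fossil energy in than ethanol energy out
```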
In 2004, USDA reported that, assuming best production practices and state-of-the-art processing technology, the NEB for corn ethanol (based on 2001 data) was a positive 1.67—that is, 67% more energy was returned from a gallon of ethanol than was used in its production. Other researchers have found much lower NEB values under less optimistic assumptions, leading to some dispute over corn-to-ethanol's representative NEB. A 2006 review of several major corn-to-ethanol NEB analyses found that, when co-products are properly accounted for, the corn-to-ethanol process has a positive NEB that is improving with changing technology.43 This result was confirmed by another comprehensive study that found an NEB of 1.25 for corn ethanol.44 However, these studies clearly imply that inefficient processes for producing corn (e.g., excessive reliance on chemicals and fertilizer or bad tillage practices) or for processing ethanol (e.g., coal-based processing), or extensive trucking of either the feedstock or the finished ethanol long distances to plant or consumer, can result in an NEB significantly less than 1.0. In other words, not all ethanol production processes have a positive energy balance. A few studies have concluded that corn ethanol does not have a positive NEB (that is, that it takes more fossil energy to produce a gallon of ethanol than it contains).45 However, these studies were distinguished by much higher energy inputs in the agriculture, transport, refining, and distribution components of the ethanol manufacturing process than other studies.46

## Lifecycle Greenhouse Gas Emissions

Lifecycle greenhouse gas (GHG) emissions are the aggregate quantity of GHG emissions (including direct emissions and significant indirect emissions such as emissions from land use changes) accounting for all stages of fuel and feedstock production and distribution, from feedstock generation or extraction through the distribution, delivery, and use of the finished fuel to the ultimate consumer.47 Many link GHG emissions to global climate change,48 so the relative emissions from different types of fuels are of great interest. Although the use of ethanol has been touted by proponents as reducing GHG emissions compared with conventional fuels, some contend that the benefits are nonexistent or minimal.

Under the Energy Independence and Security Act of 2007 (EISA, P.L. 110-140, Section 202), GHG emissions reductions must be calculated by the U.S. Environmental Protection Agency (EPA) using a methodology yet to be determined. Estimates for GHG reductions from ethanol vary widely depending on the methodology used. As noted above, provisions in EISA require the reduction of lifecycle emissions including "direct emissions and significant indirect emissions such as those from land use changes." For example, some studies have concluded that, if ethanol production displaces another crop that is then grown on newly cleared forest land (such as a rainforest in Brazil), the resulting GHG emissions could be substantial and, if high enough, could render the fuel ineligible under the RFS.49 Different methodologies will allot varying weights to these impacts and hence benefit different stakeholders. EPA is required to establish rules defining the methodology for measuring lifecycle GHG emissions under the RFS. Section 202 of EISA required EPA to develop revised RFS regulations no later than one year after enactment (December 19, 2008).
This deadline has passed, and a proposed rule is expected to be issued soon, followed by a comment period. These rules will likely be the subject of intense debate because they will determine whether a fuel is eligible for the RFS. Congress granted wide latitude to EPA in drafting the rules for calculating lifecycle GHG emissions. Depending on the outcome of EPA's rulemaking, Congress might revisit this issue. Most studies show a 10% to 20% reduction in GHG emissions for corn ethanol compared with gasoline.50 Estimates vary based on the system boundaries used, the cultivation practices (e.g., minimum as opposed to normal tillage) used to grow the corn, and the fuel used to process the corn into ethanol (e.g., natural gas versus coal). These studies do not take into account indirect GHG emissions due to land use changes.51 One controversial study (based on direct and indirect lifecycle GHG emissions) comparing vehicles powered by various sources claimed more health and environmental harm from E85 ethanol-powered vehicles than from battery-electric-powered vehicles (from all alternative sources of electricity generation, including coal with carbon sequestration).52

EISA requires that corn ethanol produced in facilities that commence construction after enactment (December 2007) must achieve at least a 20% reduction in lifecycle GHG emissions compared with gasoline. This provision applies to roughly 4 billion gallons of capacity out of 13.7 billion gallons of current and under-construction plants. Enough grandfathered capacity currently exists to nearly fulfill the 15 billion gallon maximum ethanol mandate that becomes effective in 2015 under the RFS. EISA also enables EPA to reduce the GHG reduction requirements if it is determined that "generally such reduction is not commercially feasible for fuels made using a variety of feedstocks, technologies, and processes to meet the applicable reduction."53 Ethanol industry proponents are calling for GHG emissions to be calculated using only significant indirect factors and to exclude international land-use effects until EPA develops an "objective and peer reviewed methodology" for their calculation.54

## Distribution and Consumption Issues

Distribution and absorption constraints may hinder the utilization of ethanol. As the RFS progresses, greater volumes of advanced biofuels (i.e., cellulosic or non-corn-starch ethanol, biodiesel, or imported sugar ethanol) would need to be used to fulfill the rising advanced biofuels mandate. Currently, the infrastructure required to ship this volume of ethanol and the vehicles to consume it do not exist.

### Distribution Bottlenecks

Distribution issues may hinder the efficient delivery of ethanol to retail outlets. Ethanol, mostly produced in the Midwest, must be transported to more populated areas for sale. The current ethanol distribution system is dependent on rail cars, tanker trucks, and barges. Ethanol cannot be shipped in pipelines designed for gasoline because ethanol tends to separate and attract water in gasoline pipelines, causing corrosion. As a result, ethanol would need its own dedicated pipeline. This would be enormously expensive; however, some Members of Congress have introduced legislation calling for such a pipeline.55 Preliminary assessments of a 1,700-mile ethanol pipeline from Minnesota to New York are being conducted by a major ethanol producer and petroleum pipeline operator.56 Because of competition, options (especially for rail cars) are often limited.
As non-corn biofuels play a larger role, some infrastructure concerns may be alleviated as production is more widely dispersed across the nation. Also, if biomass-based diesel substitutes are produced in much larger quantities, some of these infrastructure issues may be mitigated. However, ethanol would still need to be stored in unique storage tanks and blended immediately before pumping, requiring further infrastructure investments. See CRS Report R40155, Selected Issues Related to an Expansion of the Renewable Fuel Standard (RFS), by Brent D. Yacobucci and Tom Capehart.

### Alternative Blend Levels and the “Blend Wall”

The “blend wall” is the maximum possible volume of ethanol that can be blended into conventional U.S. motor gasoline at a given blend level. At a 10% ethanol blend (E10) this is roughly 14 billion gallons of ethanol. This limit becomes problematic as the volume under the RFS exceeds this level—which is expected to occur in 2012 when the RFS reaches 15 bgpy. Once the potential volume utilized by conventional vehicles has been reached, additional increases in volume will have no market except for the very limited number of flex fuel vehicles (FFVs) that can use higher blends. Although greater use of E85 could absorb additional volume, it is limited by the lack of E85 infrastructure (constrained by the considerable expense of installing or upgrading tanks and pumps) and the size of the FFV fleet.

Proposed legislation in the 111th Congress, the E-85 Investment Act of 2009 (H.R. 1112), would increase the credit against income tax for E85 refueling property (filling station pumps, tanks, and other related equipment) to 75% from 30% for property placed in service prior to 2012. The credit maximum is $30,000 for depreciated property and $1,000 for other property. The credit maximum is gradually reduced for property placed in service after December 2012 through 2016. An increase in the tax credit for E85 infrastructure was included in the enacted 2009 economic stimulus package (the American Recovery and Reinvestment Act of 2009, P.L. 111-5) and is available for the cost of installing alternative fueling equipment. P.L. 111-5 provides a temporary increase in the credit to 50% of the cost for equipment placed into service on or after December 31, 2008, and before January 1, 2011, not to exceed $50,000. The credit is also increased for residential fueling equipment.

To increase potential ethanol use without the infrastructure and vehicle changes required for E85, some have proposed raising the ethanol blend level for conventional vehicles from E10 to E15 or E20. For such an increase to take place, EPA must issue a waiver under Section 211(f) of the Clean Air Act,57 thereby allowing a higher ethanol blend. In addition, automobile and motor equipment manufacturers would have to extend warranties to include higher blends, and infrastructure such as pumps and storage tanks would have to be certified for the higher level. Most automotive manufacturer warranties are currently valid for E10 only. Recently, Underwriters Laboratories (UL) certified gasoline dispensing equipment for blends up to 15% ethanol.58 However, given that the actual ethanol content of E10 ranges from 7% to 13%, it is likely that E15 blends would contain up to 18% ethanol and would not be covered by the UL certification.
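As a rough consistency check of these blend-wall figures (assuming, as the numbers imply, a conventional U.S. gasoline pool of roughly 140 billion gallons per year; that pool size is an inference from the figures in this report, not a number it states): 0.10 × 140 billion gallons ≈ 14 billion gallons for the E10 wall, while 0.15 × 140 billion gallons ≈ 21 billion gallons, consistent with the roughly 22-billion-gallon E15 figure discussed below.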
On March 6, 2009, Growth Energy, a major organization promoting ethanol, applied to EPA (on behalf of 52 ethanol producers) for a waiver of Section 211(f)(4) of the Clean Air Act to allow an immediate increase in the maximum ethanol blend level from E10 to E12 or E13, later allowing blends up to E15 to be used by conventional vehicles. EPA must grant or deny the waiver request within 270 days of receipt (December 1, 2009). This is significant because, even without any increase in the consumption of E85, raising the blend rate for conventional vehicles would enable an additional 7-8 billion gallons of ethanol in gasoline. This would raise the “blend wall” to roughly 22 billion gallons. The waiver request is supported by corn and ethanol interests and opposed by livestock and environmental groups.

Legislation addressing supply and distribution issues has been introduced in the 111th Congress. The Open Fuel Standard Act of 2009 (H.R. 1476) would require 50% of the automobiles powered by internal combustion engines that are manufactured in the United States to be capable of operating on either 85% ethanol, 85% methanol, or biodiesel beginning in 2012, and 80% to be capable of doing so beginning in 2015.

## Federal Intervention in the Ethanol Industry

The federal government provides incentives and support for the ethanol industry through tax credits, research and development, grants and loan guarantees for plant construction, import tariffs, and, perhaps most important, the RFS usage mandate, which was discussed above. Historically, federal subsidies have played an important role in encouraging investment in the U.S. ethanol industry. The Energy Tax Act of 1978 first established a partial exemption for ethanol fuel from federal fuel excise taxes. The Highway Trust Fund, funded by gasoline excise tax receipts, was reduced by the amount of the exemption, so increased ethanol use resulted in reduced funding for state transportation programs and highway projects. In addition, dealers sometimes purchased exempted gasoline and then failed to blend it with ethanol, even though they paid the reduced excise tax. In 2005, a volumetric ethanol excise tax credit, paid out of the general fund, replaced the partial tax exemption and eliminated these problems. The credit has no impact on the Highway Trust Fund and is based on the volume of ethanol in the blended fuel, reducing the opportunities for fraud. A discussion of this credit and other subsidies follows. For more information on biofuels incentives, see CRS Report RL33572, Biofuels Incentives: A Summary of Federal Programs, by Brent D. Yacobucci.

### Blender’s Tax Credit

The blender’s tax credit, or volumetric ethanol excise tax credit, is an income tax credit based on the volume of ethanol blended with gasoline for sale or use. For each gallon of ethanol blended, an income tax credit of $0.45 per gallon is available. The credit was established by Section 301 of the American Jobs Creation Act of 2004 (P.L. 108-357). The 2008 farm bill (P.L. 110-246) extended the credit through 2010 and reduced it from $0.51 per gallon to $0.45 per gallon beginning the first calendar year following calendar-year production exceeding 7.5 billion gallons. Since 2008 production exceeded this threshold, the tax credit reduction became effective in January 2009. Credits under this program are estimated at $5 billion in 2008.59 The Energy Improvement and Extension Act of 2008 (P.L.
110-343, Division B, Section 203) limits the blender’s credit to fuels that are to be consumed in the United States. The credit is administered by the Internal Revenue Service.

### Small Producer Credit

A small producer income tax credit (26 U.S.C. 40) of $0.10 per gallon for the first 15 million gallons of production is available to ethanol producers whose total output does not exceed 60 million gallons of ethanol per year. The credit applies to the first 15 million gallons of a refiner’s output. Based on the number of refiners with less than 60 million gallons of output in 2008, credits under this program applied to approximately 1.6 billion gallons in 2008.60 The small producer credit terminates on December 31, 2010. This credit was established by the Omnibus Budget Reconciliation Act of 1990 (P.L. 101-508) and is administered by the Internal Revenue Service.

### Alternative Fuel Infrastructure Tax Credit

The alternative fuel infrastructure tax credit is available for the cost of installing alternative fueling equipment placed into service after December 31, 2005. Although not a credit for biofuels per se, it applies to retail pumps and other equipment used for E85 ethanol. A maximum credit of 30% of the cost, not to exceed $30,000, is available for equipment placed into service before January 1, 2009. The economic stimulus package (the American Recovery and Reinvestment Act of 2009, P.L. 111-5) provides a temporary increase in the credit to 50% of the cost for equipment placed into service on or after December 31, 2008, and before January 1, 2011, not to exceed $50,000. Fueling station owners who install qualified equipment at multiple sites are allowed to use the credit toward each location. Consumers who purchase residential fueling equipment may receive a tax credit of up to $1,000, which increases to $2,000 for equipment placed into service after January 1, 2009, and before January 1, 2011. The alternative fuel infrastructure tax credit is administered by the Internal Revenue Service.

### Ethanol Import Tariff

A $0.54 per gallon most-favored-nation tariff on most imported ethanol was extended through December 31, 2010, by a provision in the 2008 farm bill. Caribbean Basin Initiative countries are exempt from the ethanol duty up to a volume equal to 7% of total U.S. consumption. Imports of ethanol during recent years have been approximately 500 million gallons per year. The tariff is administered by U.S. Customs and Border Protection.

### Grant and Loan Programs

#### The Business and Industry Guaranteed Loan Program

The Business and Industry (B&I) Guaranteed Loan Program is a long-standing program authorized by Section 310B of the Consolidated Farm and Rural Development Act of 1972 (P.L. 92-385) and administered by USDA Rural Development. The program is intended to improve, develop, or finance business, industry, and employment in rural areas. Biofuel projects, such as ethanol refineries, have frequently utilized the B&I Program. The percentage of guarantee, up to the maximum allowed, is to be negotiated between the lender and USDA. The guaranteed principal is limited to 80% for loans of $5 million or less, 70% for loans between $5 million and $10 million, and 60% for loans exceeding $10 million. A loan is limited to a maximum guarantee of $10 million. An exception to this limit may be granted for loans of up to $25 million under certain circumstances.
FY2009 appropriations for the Business and Industry Guaranteed Loan Program are $43 million, to support $993.0 million in loan authorizations—unchanged from FY2008.

#### Repowering Assistance Program

The Repowering Assistance Program provides grants to biorefineries that use or convert to renewable biomass to reduce or eliminate fossil fuel use. The program is authorized by the 2008 farm bill (P.L. 110-246) and is available to all refineries in existence at the date of enactment. The program provides mandatory funding of $35 million for FY2009 that will remain available until the funds are exhausted. The farm bill also authorizes additional funding of $15 million per year, from FY2009 through FY2012, subject to appropriations. No appropriations were made for FY2009. Rules for implementation of the Repowering Assistance Program are currently being developed by USDA.

## References

1. On an energy-equivalent basis to gasoline. The energy in a gallon of ethanol is equal to that in 0.67 gallon of gasoline.
2. CRS estimate based on data from the U.S. Department of Energy’s Energy Information Administration.
3. Unless otherwise specified, this report covers corn-starch ethanol (corn ethanol) produced from starch in the corn kernel and does not include biofuels produced from other parts of the corn plant.
4. First generation biofuels also include ethanol produced from sorghum, a small amount of which is produced in the United States.
5. For information on cellulosic biofuels, see CRS Report RL34738, Cellulosic Biofuels: Analysis of Policy Issues for Congress, by Tom Capehart.
6. For examples of proponent policy positions, see the Renewable Fuels Association (RFA), the National Corn Growers Association (NCGA), and the American Soybean Association (ASA).
7. For example, see Elizabeth Ames Jones, “Energy Security 101,” Washington Post, October 9, 2007.
8. Advocates of this position include free-market proponents such as the Cato Institute, and federal budget watchdog groups such as Citizens Against Government Waste and Taxpayers for Common Sense.
9. For example, see James and Stephen Eaves, “Is Ethanol the ‘Energy Security’ Solution?” Washingtonpost.com, October 3, 2007; or R. Wisner and P. Baumel, “Ethanol, Exports, and Livestock: Will There be Enough Corn to Supply Future Needs?” Feedstuffs, no. 30, vol. 76, July 26, 2004.
10. EPA, “Notice of Decision Regarding the State of Texas Request for a Waiver of a Portion of the Renewable Fuel Standard,” Federal Register, vol. 73, no. 157, August 13, 2008.
11. DOE, Energy Information Administration (EIA), Report #: DOE/EIA-0383 (2009).
12. For detailed information on incentives for ethanol, see CRS Report R40110, Biofuels Incentives: A Summary of Federal Programs, by Brent D. Yacobucci.
13. Ethanol’s Federal Subsidy Grab Leaves Little For Solar, Wind And Geothermal Energy, Environmental Working Group, January 9, 2009.
14. AgWeb.com, “Ethanol Industry’s ‘Wish List’; Food Before Fuel Coalition Response,” press release, December 16, 2008.
15. Renewable Fuels Association, World Ethanol Production by Country.
16. USDA, Office of Energy Policy and New Uses, The Energy Balance of Corn Ethanol: An Update, AER-813, by Hosein Shapouri, James A. Duffield, and Michael Wang, July 2002.
17. For more information, see CRS Report RL33572, Biofuels Incentives: A Summary of Federal Programs, by Brent D. Yacobucci.
18.
First and second generation biofuels are not analogous to conventional and advanced biofuels as defined in the RFS. Under the RFS, advanced biofuels include those produced from sorghum, wheat, and sugar feedstocks (as long as they meet applicable greenhouse gas reduction requirements), although they are considered first generation biofuels.
19. EPA, Renewable Fuel Standard: Notice of 2009 Requirement.
20. Plants that commenced construction after passage of EISA in December 2007.
21. Adam J. Liska et al., “Improvements in Life Cycle Energy Efficiency and Greenhouse Gas Emissions of Corn-Ethanol,” Journal of Industrial Ecology (December 2008).
22. Lihong Lu McPhail and Bruce A. Babcock, Short-Run Price and Welfare Impacts of Federal Ethanol Policies, Center for Agricultural and Rural Development, Working Paper 08-WP 468, Ames, IA, June 2008.
23. Dennis Keeney, Ph.D., Water Use by Ethanol Plants: Potential Challenges, The Institute for Agriculture and Trade Policy, October 2006.
24. An ad valorem tariff is based on a percentage of the declared value of an imported good.
25. A duty drawback is a refund of duty paid on imports that have been re-exported or, in their place, a like commodity has been exported.
26. For more information on the economics underlying the capital investment decision, see D. Tiffany and V. Eidman, “Factors Associated with Success of Fuel Ethanol Producers,” Dept. of Appl. Econ., Univ. of Minnesota, Staff Paper P03-7, August 2003; hereafter referred to as Tiffany and Eidman (2003). For a discussion of ethanol plant location economics, see B. Babcock and C. Hart, “Do Ethanol/Livestock Synergies Presage Increased Iowa Cattle Numbers?” Iowa Ag Review, vol. 12, no. 2 (Spring 2006).
27. Oxygenates were added to gasoline to reduce carbon monoxide emissions created during the burning of the fuel. In May 2006, the oxygenate requirement in the federal reformulated gasoline program was eliminated.
28. Hosein Shapouri and Paul Gallagher, USDA’s 2002 Ethanol Cost of Production Survey, U.S. Department of Agriculture, Office of the Chief Economist, Office of Energy Policy and New Uses, Agricultural Economic Report Number 841, July 2005, p. 2.
29. Dave Juday, “Ethanol at the Blend Wall,” World Perspectives Inc., January 20, 2009, p. 6 (by subscription).
30. “Ethanol Producer VeraSun Expects to Report 2008 Losses,” DTN Ethanol Center, March 16, 2009.
31. “US Ethanol Output Edges Up in December 2008, Stocks Down,” DTN Ethanol Center, February 27, 2009.
32. Bryan Sims, “Surviving the Economic Storm,” Ethanol Producer Magazine, January 1, 2009.
33. “Ethanol Industry Faces Consolidation,” Feedstuffs, December 1, 2008, p. 1.
34. Bryan Sims, “Plants Come On Line During Challenging Economic Times,” Ethanol Producer Magazine, January 2009.
35. USDA, World Agricultural Supply and Demand Estimates, WASDE-468, March 11, 2009.
36. Food and Agricultural Policy Research Institute, US Baseline Briefing Book, FAPRI-MU Report #01-09, Columbia, MO, March 2009.
37. National Corn Growers Association, Killing Myths on Ethanol, Washington, D.C., 2008, accessed March 23, 2009.
38. For example, see John M. Urbanchuk (Director, LECG LLC), Contribution of the Ethanol Industry to the Economy of the United States, white paper prepared for the National Corn Growers Assoc., February 21, 2006.
39. USDA Economic Research Service, Food CPI, Prices, and Expenditures Briefing Room.
40. Helen H. Jensen and Bruce A.
Babcock, “Do Biofuels Mean Inexpensive Food is a Thing of the Past?” Iowa Ag Review, vol. 13, no. 2 (Spring 2007), pp. 1-3.
41. For examples, see Food & Water Watch, “Retail Realities: Corn Prices Do Not Drive Grocery Inflation,” Sept. 2007; and John M. Urbanchuk (Director, LECG LLC), “The Relative Impact of Corn and Energy Prices in the Grocery Aisle,” white paper prepared for the National Corn Growers Assoc., June 14, 2007.
42. For examples, see Jacques Diouf, Director General of the U.N. Food and Agriculture Organization, “Why Are Food Prices Rising?” in Financial Times Online, Nov. 26, 2007. See also Keith Collins, Chief Economist, USDA, testimony before the House Committee on Agriculture, October 18, 2007.
43. Alexander E. Farrell et al., “Ethanol Can Contribute to Energy and Environmental Goals,” Science, vol. 311, no. 5760 (January 2006), pp. 506-508.
44. Hill et al., “Environmental, Economic, And Energetic Costs And Benefits Of Biodiesel And Ethanol Biofuels,” Proceedings of the National Academy of Sciences, 2006.
45. David Pimentel and Tad W. Patzek, “Ethanol Production Using Corn, Switchgrass, and Wood; Biodiesel Production Using Soybean and Sunflower,” Natural Resources Research, vol. 14, no. 1 (March 1, 2005), pp. 65-76.
46. Natural Resources Defense Council and Climate Solutions, Ethanol: Energy Well Spent, A Survey of Studies Published Since 1990, February 6, 2006.
47. 42 U.S.C. § 7545(o)(1).
48. See CRS Report RL34513, Climate Change: Current Issues and Policy Tools, by Jane A. Leggett.
49. Timothy Searchinger, Ralph Heimlich, R. A. Houghton, et al., “Use of U.S. Croplands for Biofuels Increases Greenhouse Gases Through Emissions from Land Use Change,” Science, February 29, 2008, p. 1238.
50. EPA, Greenhouse Gas Impact of Expanded Renewable and Alternative Fuels Use, April 2007; Farrell et al. (see note 43).
51. Michael Wang, Ph.D., Ethanol, the Complete Energy Lifecycle Picture, U.S. Department of Energy, Energy Efficiency and Renewable Energy, March 2007.
52. “Wind, Water And Sun Beat Biofuels, Nuclear And Coal for Clean Energy,” Science Daily, December 11, 2008.
53. EISA, P.L. 110-140, Title II, Subtitle A, Section 202(c)(4)(A).
54. Comments from the Renewable Fuels Association (RFA) to EPA on the Advance Notice of Proposed Rulemaking (ANPR) regarding Regulating Greenhouse Gas (GHG) Emissions under the Clean Air Act (CAA), 73 Fed. Reg. 44,354 (July 30, 2008).
55. H.R. 864, Renewable Fuel Pipelines Act of 2009, Rep. Leonard Boswell.
56. POET, “POET Joins Magellan Midstream Partners to Assess Dedicated Ethanol Pipeline,” press release, March 16, 2009.
57. 42 U.S.C. 85.
58. Underwriters Laboratories, Underwriters Laboratories Announces Support for Authorities Having Jurisdiction Who Decide to Permit the Use of Existing UL Listed Gasoline Dispensers with Automotive Fuel Containing up to a Maximum of 15% Ethanol, Northbrook, IL, February 19, 2009.
59. CRS estimate based on production and import data from DOE, Energy Information Administration.
60. CRS estimate based on refinery data from the Renewable Fuels Association.

### Citation

Service, C. (2012). Ethanol Biofuels in the United States. Retrieved from http://www.eoearth.org/view/article/51cbf1657896bb431f6a55cc
Acta Phys.-Chim. Sin. 2022, Vol. 38, Issue (1): 2012047. Special Issue: Graphene: Functions and Applications

• REVIEW •

### Synthesis of Superclean Graphene

Xiaoting Liu 1,2,3, Jincan Zhang 1,2,3, Heng Chen 1,3, Zhongfan Liu 1,3,*

1 Center for Nanochemistry, Beijing National Laboratory for Molecular Sciences, College of Chemistry and Molecular Engineering, Peking University, Beijing 100871, China
3 Beijing Graphene Institute (BGI), Beijing 100095, China

• Received: 2020-12-17; Accepted: 2021-01-05; Published: 2021-01-12
• Contact: Zhongfan Liu, E-mail: zfliu@pku.edu.cn
• Supported by: the National Key Basic Research Program of China (2016YFA0200103, 2018YFA0703502); the National Natural Science Foundation of China (51520105003, 52072042); Beijing National Laboratory for Molecular Sciences (BNLMS-CXTD-202001); Beijing Municipal Science and Technology Planning Project (Z18110300480001, Z18110300480002)

Abstract: Graphene has attracted enormous interest in both academic and industrial fields, owing to its unique, extraordinary properties and significant potential applications. Various methods have been developed to synthesize high-quality graphene, among which chemical vapor deposition (CVD) has emerged as the most encouraging for scalable graphene film production with promising quality, controllability, and uniformity. However, a gap still exists between ideal graphene, having remarkable properties, and the currently available CVD-derived graphene films. To close this gap, numerous studies in the past decade have been devoted to decreasing defect density, grain boundaries, and wrinkles, and increasing the controllability of layer thickness and doping of graphene. Significant recent advances in this regard were the discovery of the inevitable contamination of the graphene surface during high-temperature CVD growth and the synthesis of superclean graphene, representing a new growth frontier in CVD graphene research. Surface contamination of graphene is a major hurdle in probing its intrinsic properties, and strongly hinders its applications, for instance, in electrical and photonic devices. In this review, we aim to provide comprehensive knowledge on the inevitable contamination of CVD graphene and current synthesis strategies for preparing superclean graphene films, and an outlook for the future mass production of high-quality superclean graphene films. First, we focus on surface contamination formation, e.g. amorphous carbon, during the high-temperature CVD growth process of graphene. After introducing evidence to confirm the origin of surface contamination, the formation mechanism of the amorphous carbon is thoroughly discussed. Meanwhile, the influence of the intrinsic cleanness of graphene on the peeling and transfer quality is also revealed. Second, we summarize the state-of-the-art superclean growth strategies and classify them into direct-growth approaches and post-growth treatment approaches. For the former, modification of the CVD gas-phase reactions, for example, using metal-vapor-assisted methods or cold-wall CVD, is effective in inhibiting the formation of amorphous carbon. For the latter, both chemical and physical cleaning methods are employed to eliminate amorphous carbon without damaging the graphene, e.g.
selective etching of as-formed amorphous carbon using CO2, and removal of amorphous carbon from the graphene surface using a lint roller based on interfacial force control. Third, we summarize the outstanding electrical, optical, and thermal properties of superclean graphene. Superclean graphene exhibits high carrier mobility, low contact resistance, high transparency, and high thermal conductivity, further highlighting the significance of superclean graphene growth. Finally, future opportunities and challenges for the industrial production of high-quality superclean graphene are discussed.

MSC2000: O647
# User:Mjoppich

## Latest revision as of 03:14, 18 October 2014

**Markus Joppich - Computational Division**

*Computer Science (M.Sc. student)*

I am mainly working in the computational division of our iGEM team, which is mainly influenced by the fact that I am studying computer science at RWTH Aachen University and am currently finishing my Master's degree.

Regarding iGEM I really wish to learn new things in biology, as this is my subject of application and also the area of my master's thesis. Additionally I would like to explain the world of computers to bio.* students, as I think that both worlds can benefit from each other. Regarding iGEM, my main focus will be the Software and Hardware part of the project, but occasionally people might see me in the lab doing some strange things with biology :-)

**Mainly involved in...**

• Administrative Work (Traveling, Ordering, Infrastructure, ...)
• OD/F device (hardware, software, measurements and early prototypes)
• WatsOn & Measurarty (*Image Analysis*)
• impress.js presentation expert together with [http://2014.igem.org/User:Aschechtel Anna] for [http://webcache.googleusercontent.com/search?q=cache:tDhWlEw2eB8J:www.fz-juelich.de/SharedDocs/Termine/IBG/IBG-1/EN/Colloquium_iGEM_Aachen.html+&cd=1&hl=de&ct=clnk&gl=de FZJ IBG-1 Colloquium]
# Key Exchange Implementation

Spring 2019

The questions below are due on Sunday April 07, 2019; 11:59:00 PM.

## 1) Overview

This Design Exercise involves the actual implementation of an asymmetric cryptography system based on Diffie-Hellman, building on the math/work you did on the key exchange problem in this week's Exercises. The order of processes will need to be as follows:

1. Microcontroller initiates unencrypted contact with server to get p and m values (generated/hardcoded by server code...your choice)
2. Server responds with p and m values as well as with its t_{server} value (of the same type as the t_1 and t_2 values)
3. Microcontroller uses the response from server to generate a key as well as generate its own t_{mcu} value.
4. Microcontroller encrypts its message/request to the server using the key and then sends back to the server its encrypted request along with its t_{mcu} value.
5. Server code receives both the encrypted query and the t_{mcu} value. It uses t_{mcu} to generate the shared key and uses that key to decrypt the query. It does what the query asks for, generates a response, encrypts it, and sends it back to the microcontroller.
6. The microcontroller decrypts the response and displays it.
7. The system should then repeat as needed.

(A minimal sketch of this round trip appears at the end of this section.)

What exactly you request/encrypt is up to you in this assignment. Some suggestions might be number facts (maybe you don't want anyone knowing what numbers you're interested in), a Wikipedia query, or something else. You do not need to go crazy with that part, and you're free to build on an already existing service/application we've developed. However, you must use Diffie-Hellman for your key exchange, and a Vigenère cipher of length 6 or greater for your encryption scheme for your query/response transfer. In your submission, make sure you clearly explain to the grading staff what is going on in terms of encryption scheme, what is being requested/responded with, etc. These specifications are for a "toy" example...if you'd like to go more in-depth with a more complex example, by all means please feel free.

This is going to possibly involve a state machine on both sides of the communication. If we remember, our server code is run in a stateless form (no persistent connections), so you may need a way to store information in a database (depending on how you approach this problem) for it to remember what state it currently is in. If you need a database, we'd recommend the following format:

• time - auto-generated timestamp of when that record is submitted
• user_id - Text
• value1 - A field you can store numbers in
• value2 - A field you can store numbers in

This isn't how asymmetric key exchange is actually implemented in real life, but it fits within the confines of our 6.08 sandbox for right now.

Any late factors are based on the timestamp for the last time that the URL is entered on the page or when the code is uploaded or when a comment is added/changed. If you change your URL after the deadline you will incur a late penalty. If you upload code after the deadline, you will incur a late penalty. If you comment after the deadline, you will incur a late penalty. One-Week Extensions DO NOT APPLY to Design Exercises.
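Below is a minimal sketch of the round trip described in steps 1-7, written as a single Python script rather than as actual server/microcontroller code. The names p and m follow the assignment text; everything else (the function names, the toy parameter sizes, and the way the shared integer is stretched into a six-character Vigenère key) is an illustrative assumption, not the required implementation:

```python
# Toy Diffie-Hellman + Vigenere round trip (illustrative sketch only).
# p and m follow the assignment text; all other names are made up.
import random

def dh_public(m, secret, p):
    # t = m^secret mod p, safe to send in the clear
    return pow(m, secret, p)

def dh_shared(t_other, secret, p):
    # shared integer = t_other^secret mod p; identical on both sides
    return pow(t_other, secret, p)

def derive_vigenere_key(shared, length=6):
    # Stretch the shared integer into a length-6 uppercase key
    # (a real system would hash; this is just for the sketch).
    return "".join(chr(65 + (shared >> (5 * i)) % 26) for i in range(length))

def vigenere(text, key, decrypt=False):
    # Shift each character by the matching key character, mod 128.
    sign = -1 if decrypt else 1
    return "".join(
        chr((ord(c) + sign * ord(key[i % len(key)])) % 128)
        for i, c in enumerate(text)
    )

if __name__ == "__main__":
    p, m = 997, 5                        # steps 1-2: server-chosen prime and base (toy sizes)
    s_server = random.randrange(2, p - 1)
    s_mcu = random.randrange(2, p - 1)
    t_server = dh_public(m, s_server, p)  # step 2: sent to microcontroller
    t_mcu = dh_public(m, s_mcu, p)        # step 3: sent to server
    key_mcu = derive_vigenere_key(dh_shared(t_server, s_mcu, p))
    key_srv = derive_vigenere_key(dh_shared(t_mcu, s_server, p))
    assert key_mcu == key_srv             # both sides hold the same key
    query = vigenere("fact about 42, please", key_mcu)   # step 4
    print(vigenere(query, key_srv, decrypt=True))        # step 5: server decrypts
```

In a real submission the two halves of this script would live on the microcontroller and the server respectively, with t_mcu and the encrypted query carried in HTTP requests, and with p chosen large enough that brute-forcing the secrets is not trivial.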
Enter the URL for the video.

SUBMIT ALL YOUR CODE AS A ZIP FILE BELOW (DOUBLE CHECK THAT YOU HAVE ZIPPED IT CORRECTLY). YOU WILL LOSE POINTS IF ALL FILES ARE NOT PRESENT!

Enter any comments you may want us to know about. For example, if you started this exercise but don't want it graded, make a note here. Please hit submit on this question even if you don't have any comments.

Back to Exercise 08

This page was last updated on Sunday April 14, 2019 at 09:55:32 AM (revision 3932c33).
Documents Indexed: 72 Publications since 1957, including 9 Books
Co-Authors: 11 Co-Authors with 24 Joint Publications; 152 Co-Co-Authors

### Co-Authors

48 single-authored 4 Alexiewicz, Andrzej 4 Wiweger, Antoni 3 Ciesielski, Zbigniew 3 Gęba, Kazimierz 2 Pełczyński, Aleksander 2 Zbijewski, P. 2 Zidenberg, H. 1 Isbell, John Rolfe 1 Musielak, Julian 1 Swirszcz, Tadeusz 1 Zidenberg-Spirydonow, H.

### Serials

13 Studia Mathematica 11 Bulletin de l’Académie Polonaise des Sciences, Série des Sciences Mathématiques, Astronomiques et Physiques 8 Annales Societatis Mathematicae Polonae 5 Colloquium Mathematicum 3 Roczniki Polskiego Towarzystwa Matematycznego. Seria II. Wiadomości Matematyczne 3 Lecture Notes Series. Aarhus University 2 Dissertationes Mathematicae 2 Functiones et Approximatio. Commentarii Mathematici 2 Biblioteka Matematyczna. Panstwowe Wydawnictwo Naukowe, Warszawa 1 American Mathematical Monthly 1 Advances in Mathematics 1 Fundamenta Mathematicae 1 Transactions of the American Mathematical Society 1 Bulletin of the American Mathematical Society 1 Bulletin de la Société Mathématique de France. Supplément. Mémoires 1 Lecture Notes in Mathematics 1 Monografie Matematyczne 1 Queen’s Papers in Pure and Applied Mathematics 1 Teubner-Texte zur Mathematik 1 Studia Mathematica, Seria Specjalna

### Fields

21 Functional analysis (46-XX) 15 Category theory; homological algebra (18-XX) 5 General topology (54-XX) 2 General and overarching topics; collections (00-XX) 2 History and biography (01-XX) 2 Mathematical logic and foundations (03-XX) 2 Harmonic analysis on Euclidean spaces (42-XX) 2 Abstract harmonic analysis (43-XX) 1 Order, lattices, ordered algebraic structures (06-XX) 1 Topological groups, Lie groups (22-XX) 1 Measure and integration (28-XX) 1 Approximations and expansions (41-XX) 1 Operator theory (47-XX) 1 Computer science (68-XX)

### Citations contained in zbMATH Open

45 Publications have been cited 653 times in 597 Documents. Cited, by Year:

Banach spaces of continuous functions. Vol. 1. Zbl 0225.46030 1971 Spaces of continuous functions. III: Spaces $$C(\varOmega)$$ for $$\varOmega$$ without perfect subsets. Zbl 0091.27803 1959 Isomorphic properties of Banach spaces of continuous functions. Zbl 0117.33002 1963 Schauder bases in Banach spaces of continuous functions. Zbl 0478.46014 1982 Banach spaces non-isomorphic to their cartesian squares. II. Zbl 0091.27802 1960 Projection constants and spaces of continuous functions. Zbl 0116.08304 1963 Product Schauder bases and approximation with nodes in spaces of continuous functions. Zbl 0124.31703 1963 Projectivity, injectivity and duality. Zbl 0121.02401 1963 Monads and their Eilenberg-Moore algebras in functional analysis. Zbl 0272.46049 1973 Free compact convex sets. Zbl 0135.16104 1965 Spaces of continuous functions on compact sets. Zbl 0135.34802 1965 Linear functionals on two-norm spaces. Zbl 0086.09303 1958 Periods of measurable functions and the Stone-Cech compactification. Zbl 0129.03701 1964 Sur les ensembles clairsemés. Zbl 0137.16002 1959 Functions with sets of points of discontinuity belonging to a fixed ideal. Zbl 0146.12302 1963 The two-norm spaces and their conjugate spaces. Zbl 0090.32503 1959 Some properties of two-norm spaces and a characterization of reflexivity of Banach spaces. Zbl 0094.08902 1960 Simultaneous extensions and projections in spaces of continuous functions. Zbl 0239.46048 1965 Some classes of Banach spaces depending on a parameter.
Zbl 0099.09401 1961 Einführung in die Theorie der Kategorien und Funktoren. Übers. nach der 2., erw. Aufl. aus dem Polnischen von E. Buchsteiner-Kießling. Zbl 0429.18001 1979 Spaces of continuous functions. V: On linear isotonical embedding of $$C(\Omega_1)$$ into $$C(\Omega_2)$$. Zbl 0094.30401 1960 On weak convergence of measures and $$\sigma$$-complete Boolean algebras. Zbl 0128.10502 1964 Categorical methods in convexity. Zbl 0146.36205 1967 Inverse limits of compact spaces and direct limits of spaces of continuous functions. Zbl 0169.15602 1968 The Banach-Mazur functor and related functors. Zbl 0237.46074 1970 Introduction to the theory of categories and functors. 2nd enl. ed. (Wstep do teorii kategorii i funktorow). Zbl 0445.18001 1978 An introduction to the theory of categories and functors. (Wstep do teorii kategorii i funktorow.). Zbl 0253.18001 1972 Free and direct objects. Zbl 0116.01502 1963 Inductive and inverse limits in the category of Banach spaces. Zbl 0144.16704 1965 Selected topics on functional analysis and categories. Zbl 0225.46069 1965 A simple topological proof that the underlying set functor for compact spaces is monadic. Zbl 0286.18003 1974 A generalization of two norm spaces. Linear functionals. Zbl 0082.10902 1958 Limit properties of ordered families of linear metric spaces. Zbl 0099.09302 1961 Embedding of two-norm spaces into the space of bounded continuous functions on a half-straight line. Zbl 0096.31201 1960 Extension of linear functionals in two-norm spaces. Zbl 0096.31202 1960 Generalisations of Helly’s theorems concerning the solvability of moment problems in Banach spaces. Zbl 0106.09703 1961 Spaces of continuous functions. VI: Localization of multiplicative linear functionals. Zbl 0119.10604 1963 On preordered topological spaces and increasing semicontinuous functions. Zbl 0161.42002 1968 Generalizations of Bohr’s theorem on Fourier series with independent characters. Zbl 0197.40102 1963 A theorem of Eilenberg-Watts type for tensor products of Banach spaces. Zbl 0204.44404 1970 Some categorical characterizations of algebras of continuous functions. Zbl 0327.46080 1976 Reflective and coreflective subcategories of categories of Banach spaces and Abelian groups. Zbl 0373.46078 1978 Functors on categories of ordered topological spaces. Zbl 0418.46057 1979 Spaces of continuous functions. I. Zbl 0080.31602 1957 Spaces of continuous functions. II. On multiplicative linear functionals over some Hausdorff classes. Zbl 0080.31603 1957

### Cited by 617 Authors

14 Zakharov, Valeriĭ Konstantinovich 13 Berenguer, María Isabel 13 Galego, Elói Medina 13 Koszmider, Piotr B. 11 Ruiz Galán, Manuel 8 Mundici, Daniele 7 Bezhanishvili, Guram 7 Kąkol, Jerzy 7 Plebanek, Grzegorz 7 Pumplün, Dieter 6 Drewnowski, Lech 6 Ghenciu, Ioana 6 Leiderman, Arkady G. 6 Rodionov, Timofeĭ Viktorovich 6 Skrzypek, Lesław 5 Candido, Leandro 5 Gabriyelyan, Saak S.
5 Lindenstrauss, Joram 5 Lopez-Abad, Jordi 5 Mikhalëv, Aleksandr Vasil’evich 5 Semadeni, Zbigniew 5 Tkachuk, Vladimir Vladimirovich 5 van Mill, Jan 4 Arkhangel’skiĭ, Aleksandr Vladimirovich 4 Aviles Lopez, Antonio 4 Benyamini, Yoav 4 Bezhanishvili, Nick 4 Cabello Sánchez, Félix 4 Gasparis, Ioannis 4 Hager, Anthony W. 4 Haydon, Richard G. 4 Johnson, William Buhmann 4 Kislyakov, Sergeĭ Vital’evich 4 Lacey, H. Elton 4 Lewicki, Grzegorz 4 López Linares, A. J. 4 Marciszewski, Witold 4 Rajagopalan, Minakshisundaram 4 Rohrl, Helmut 4 Sokolova, Ana 4 Tkachenko, Mikhail Gelievich 4 Todorcevic, Stevo B. 4 Wójtowicz, Marek 3 Alspach, Dale E. 3 Araujo, Jesús 3 Baars, Jan 3 Banakh, Taras Onufrievich 3 Bonchi, Filippo 3 Brech, Christina 3 Brzdęk, Janusz 3 Caliò, Franca 3 Castillo, Jesús M. F. 3 Causey, Ryan Michael 3 Cembranos, Pilar 3 Cheney, Elliott Ward jun. 3 Chentsov, Aleksandr Georgievich 3 Diestel, Joseph 3 Franchetti, Carlo 3 Friz, Peter Karl 3 Globevnik, Josip 3 González Ortiz, Manuel 3 Gutiérrez García, Javier 3 Kalton, Nigel John 3 Kusraev, Anatoly Georgievich 3 Lucero-Bryan, Joel Gregory 3 Ludkovsky, Sergey Victor 3 Macheras, Nikolaos Demetrios 3 Magill, Kenneth D. jun. 3 Marchetti, Elena 3 Marra, Vincenzo 3 Martínez-Cervantes, Gonzalo 3 Moran, Gadi 3 Morris, Sidney A. 3 Pérez, M. C. Serrano 3 Piasecki, Łukasz 3 Račkauskas, Alfredas Yurgevich 3 Rincón-Villamizar, Michael A. 3 Rodríguez Ruiz, José 3 Viswanathan, Puthan Veedu 3 Wolfe, John C. 2 Argyros, Spiros A. 2 Armstrong, Thomas E. 2 Ball, Richard N. 2 Barr, Michael 2 Bartoszewicz, Artur 2 Becerra Guerrero, Julio 2 Blasco, Jose L. 2 Boche, Holger 2 Bonnet, Robert 2 Borodulin-Nadzieja, Piotr 2 Bugajski, Sławomir 2 Casini, Emanuele 2 Cohen, Henry Bruce 2 de Groot, Johannes Antonius Marie 2 de Prada Vicente, María Angeles 2 Domański, Paweł 2 Effros, Edward George 2 Emmanuele, Giovanni 2 Ferrando, Juan Carlos ...and 517 more Authors

### Cited in 168 Serials

45 Proceedings of the American Mathematical Society 40 Topology and its Applications 32 Journal of Mathematical Analysis and Applications 26 Israel Journal of Mathematics 22 Transactions of the American Mathematical Society 19 Journal of Functional Analysis 17 Journal of Approximation Theory 16 Mathematische Zeitschrift 13 Advances in Mathematics 12 Siberian Mathematical Journal 11 Journal of Mathematical Sciences (New York) 9 Bulletin of the Australian Mathematical Society 9 Mathematical Notes 9 Mathematische Annalen 8 Czechoslovak Mathematical Journal 8 Bulletin of the American Mathematical Society 7 Cahiers de Topologie et Géométrie Différentielle Catégoriques 7 Journal of Soviet Mathematics 7 Quaestiones Mathematicae 6 Rocky Mountain Journal of Mathematics 6 Studia Mathematica 6 Journal of Pure and Applied Algebra 6 Acta Mathematica Hungarica 6 Applied Categorical Structures 6 Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A: Matemáticas. RACSAM 5 Communications in Algebra 5 Annali di Matematica Pura ed Applicata. Serie Quarta 5 Applied Mathematics and Computation 5 Journal of Computational and Applied Mathematics 5 Manuscripta Mathematica 5 Monatshefte für Mathematik 5 Annals of Pure and Applied Logic 5 Indagationes Mathematicae. New Series 5 Mediterranean Journal of Mathematics 4 Archiv der Mathematik 4 Commentationes Mathematicae Universitatis Carolinae 4 Functional Analysis and its Applications 4 Mathematika 4 Rendiconti del Circolo Matemàtico di Palermo.
Serie II 4 Results in Mathematics 4 Semigroup Forum 4 Archive for Mathematical Logic 4 Abstract and Applied Analysis 4 Positivity 3 Mathematica Slovaca 3 Logical Methods in Computer Science 2 Acta Mathematica Academiae Scientiarum Hungaricae 2 Communications in Mathematical Physics 2 International Journal of Theoretical Physics 2 Journal d’Analyse Mathématique 2 Mathematical Proceedings of the Cambridge Philosophical Society 2 Periodica Mathematica Hungarica 2 Reports on Mathematical Physics 2 Arkiv för Matematik 2 Compositio Mathematica 2 Glasgow Mathematical Journal 2 Journal of Mathematical Economics 2 Mathematische Nachrichten 2 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 2 Numerical Functional Analysis and Optimization 2 Order 2 Probability Theory and Related Fields 2 Information and Computation 2 Journal of Theoretical Probability 2 Topology Proceedings 2 Doklady Mathematics 2 Communications in Contemporary Mathematics 2 Bulletin de la Société Mathématique de France. Supplément. Mémoires 2 Formalized Mathematics 2 Mathematics 1 Analysis Mathematica 1 Discrete Mathematics 1 Houston Journal of Mathematics 1 Journal of Mathematical Physics 1 Lithuanian Mathematical Journal 1 Mathematical Methods in the Applied Sciences 1 Chaos, Solitons and Fractals 1 Journal of Geometry and Physics 1 Beiträge zur Algebra und Geometrie 1 Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg 1 Acta Mathematica 1 Algebra and Logic 1 Algebra Universalis 1 Annales de l’Institut Fourier 1 Annals of the Institute of Statistical Mathematics 1 Canadian Journal of Mathematics 1 Functiones et Approximatio. Commentarii Mathematici 1 Fundamenta Mathematicae 1 Illinois Journal of Mathematics 1 Integral Equations and Operator Theory 1 International Journal of Game Theory 1 Inventiones Mathematicae 1 Journal of Algebra 1 Journal of Differential Equations 1 Journal of Geometry 1 Journal of Statistical Planning and Inference 1 The Journal of Symbolic Logic 1 Kybernetika 1 Matematički Vesnik 1 Michigan Mathematical Journal ...and 68 more Serials

### Cited in 50 Fields

349 Functional analysis (46-XX) 155 General topology (54-XX) 76 Operator theory (47-XX) 61 Mathematical logic and foundations (03-XX) 53 Measure and integration (28-XX) 46 Category theory; homological algebra (18-XX) 38 Order, lattices, ordered algebraic structures (06-XX) 35 Approximations and expansions (41-XX) 23 Numerical analysis (65-XX) 20 Probability theory and stochastic processes (60-XX) 17 Convex and discrete geometry (52-XX) 16 Integral equations (45-XX) 14 Topological groups, Lie groups (22-XX) 14 Real functions (26-XX) 11 Group theory and generalizations (20-XX) 10 Computer science (68-XX) 9 General and overarching topics; collections (00-XX) 9 General algebraic systems (08-XX) 8 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 7 Number theory (11-XX) 7 Abstract harmonic analysis (43-XX) 7 Calculus of variations and optimal control; optimization (49-XX) 6 Associative rings and algebras (16-XX) 6 Difference and functional equations (39-XX) 5 Functions of a complex variable (30-XX) 5 Sequences, series, summability (40-XX) 5 Harmonic analysis on Euclidean spaces (42-XX) 5 Manifolds and cell complexes (57-XX) 5 Statistics (62-XX) 5 Quantum theory (81-XX) 4 Several complex variables and analytic spaces (32-XX) 4 Ordinary differential equations (34-XX) 4 Systems theory; control (93-XX) 3 Combinatorics (05-XX) 3 Dynamical systems and ergodic theory (37-XX) 3
Integral transforms, operational calculus (44-XX) 2 Algebraic geometry (14-XX) 2 Linear and multilinear algebra; matrix theory (15-XX) 2 Nonassociative rings and algebras (17-XX) 2 $$K$$-theory (19-XX) 2 Partial differential equations (35-XX) 2 Geometry (51-XX) 2 Operations research, mathematical programming (90-XX) 1 Commutative algebra (13-XX) 1 Potential theory (31-XX) 1 Special functions (33-XX) 1 Algebraic topology (55-XX) 1 Mechanics of particles and systems (70-XX) 1 Biology and other natural sciences (92-XX) 1 Information and communication theory, circuits (94-XX)
# We adopted equal weight for each variable in the three components

We adopted equal weight for each variable in the three components in this study as the first step. This equal weighting is applied in the ESI framework as well. For example, the environment component consisted of nine variables; thus, the weight used for the aggregation was 1/9. A few provinces, such as Chongqing, lacked data on specific variables. In such cases, the value of a component was calculated as the average of the available variables, with the weights being equal. Thus, if eight variables were available, the weight for the aggregation would be 1/8.

Step 4: calculation of sustainability index scores

The final sustainability index score for province i is the mean (again, the equally weighted average) of the three components. That is:

$$SI_i^t = \frac{1}{3}\sum_{m=1}^{3} S_{m,i}^t$$

where $S_{m,i}^t$ denotes the score of component m for province i in year t. Under the method used in this study, municipalities such as Beijing, Shanghai, and Tianjin, most of which are considered economically developed regions and, therefore, relatively affluent, are ranked high. This is mainly attributed to the fact that the scores of the socio-economic component appeared to be much higher in these municipalities in comparison with other provinces. In the present method, the weight of the three components is equal (1/3), and high scores of the socio-economic component, therefore, have considerable influence on the final sustainability index scores.

Table 2 Sustainability index: scores in 2000 and 2005

| Province | 2000 | 2005 |
|---|---|---|
| Beijing | 0.79 | 0.85 |
| Tianjin | 0.73 | 0.76 |
| Hebei | 0.40 | 0.50 |
| Shanxi | 0.29 | 0.39 |
| Inner Mongolia | 0.39 | 0.37 |
| Liaoning | 0.43 | 0.52 |
| Jilin | 0.47 | 0.52 |
| Heilongjiang | 0.48 | 0.60 |
| Shanghai | 0.68 | 0.74 |
| Jiangsu | 0.48 | 0.57 |
| Zhejiang | 0.63 | 0. |
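The aggregation just described is simple enough to state in a few lines of code. The sketch below uses made-up variable values, and "institutional" is a placeholder name for the third component, which this excerpt does not name:

```python
# Sketch of the equal-weight aggregation described above.
# Variable values are made up; "institutional" is a placeholder
# name for the third component, which the text does not name.

def component_score(values):
    """Equal-weight mean over the variables that are actually available."""
    present = [v for v in values if v is not None]
    return sum(present) / len(present)

def sustainability_index(components):
    # Each of the three components gets weight 1/3.
    scores = [component_score(c) for c in components]
    return sum(scores) / len(scores)

# Nine environment variables with one missing: the remaining eight
# are averaged with weight 1/8 each, as in the Chongqing example.
environment = [0.4, 0.6, None, 0.5, 0.7, 0.3, 0.5, 0.6, 0.4]
socio_economic = [0.8, 0.9, 0.7]
institutional = [0.5, 0.6, 0.7]
print(sustainability_index([environment, socio_economic, institutional]))
```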
## help with an irregular integral

I am looking for help with doing the following integral: $$\frac{1}{2\pi i}\int_{1}^{\infty}\ln\left(\frac{1-e^{-2\pi i x}}{1-e^{2\pi i x}} \right )\frac{dx}{x\left(\ln x+z\right)}\;\;\;\;z\in \mathbb{C}$$ I tried to transform it into a complex integral along a 'keyhole' contour, with a branch cut along the positive real line $\left[1,\infty\right)$. But then $\ln x$ would be transformed into $\ln x+2\pi i$ when doing the integral along $\left(\infty,1\right]$, which doesn't add up nicely to the portion along $\left[1,\infty\right)$. Any insights are appreciated.

Doesn't this simplify fairly easily?

Quote by mmzaj: $$\frac{1-e^{-2\pi i x}}{1-e^{2\pi i x}}$$

You would think!!! But no, it doesn't ...

The integral above is equivalent to: $$\int_{1}^{\infty}\left(\frac{1}{2}-x+\left \lfloor x \right \rfloor \right )\left(\frac{1}{x\left(\ln x+z\right)}\right)dx$$ and $$\int_{1}^{\infty}\sum_{n=1}^{\infty}\frac{\sin(2 \pi n x)}{n\pi}\left(\frac{1}{x\left(\ln x+z\right)}\right)dx$$

$\frac{1-e^{-2\pi i x}}{1-e^{2\pi i x}} = \frac{e^{-2\pi i x}(e^{2\pi i x}-1)}{1-e^{2\pi i x}} = -e^{-2\pi i x}$, so I end up with $\int_0^\infty\frac{1-2e^u}{2(u+z)}du$, which surely doesn't converge?

Quote by haruspex: $\frac{1-e^{-2\pi i x}}{1-e^{2\pi i x}} = -e^{-2\pi i x}$ ... I end up with $\int_0^\infty\frac{1-2e^u}{2(u+z)}du$, which surely doesn't converge?

You missed the fact that the inverse of the complex exponential (the complex $\log$ function) is multivalued. Namely: $$\frac{1-e^{-2\pi i x}}{1-e^{2\pi i x}}=-e^{-2\pi i x}=e^{-2\pi i \left(x-1/2\right)}=e^{-2\pi i \left(\left \{ x \right \}-1/2\right)}$$ where $\left \{ x \right \}$ is the fractional part of $x$. Thus: $$\frac{1}{2\pi i}\ln\left(\frac{1-e^{-2\pi i x}}{1-e^{2\pi i x}}\right)=\frac{1}{2}-\left \{ x \right \}$$ Another way to think of it is to take the Taylor expansion of the $\log$: $$\frac{1}{2\pi i}\left(\ln\left(1-e^{-2\pi i x}\right)-\ln(1-e^{2\pi i x})\right)=\frac{1}{2\pi i}\sum_{k=1}^{\infty}\frac{e^{2\pi i k x}-e^{-2\pi i kx}}{k}=\sum_{k=1}^{\infty}\frac{\sin(2\pi k x)}{k\pi}$$ which in turn is the Fourier expansion of $\frac{1}{2}-\left \{ x \right \}$.

Using these facts, we can prove that the integral in question is equal to the limit: $$e^{-z}\operatorname{Ei}(z)+\lim_{N\rightarrow \infty}\left[\sum_{n=1}^{N}\left(n+\frac{1}{2} \right )\ln\left(\frac{\ln(n+1)+z}{\ln(n)+z} \right )-e^{-z}\operatorname{Ei}(z+\ln N)\right]$$ where $\operatorname{Ei}(z)$ is the exponential integral function. But I'm stuck with this cumbersome limit.
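Since the whole thread hinges on the identity $\frac{1}{2\pi i}\ln\left(\frac{1-e^{-2\pi ix}}{1-e^{2\pi ix}}\right)=\frac{1}{2}-\{x\}$, here is a quick numerical sanity check (a sketch using the principal branch of the complex log; it is a check, not a proof):

```python
# Numerical sanity check of the identity used above:
#   (1/(2*pi*i)) * log((1 - e^{-2*pi*i*x}) / (1 - e^{2*pi*i*x})) = 1/2 - {x}
# with the principal branch of the complex log.
import cmath
import math

for x in (0.25, 1.3, 2.75, 5.9):  # non-integer test points (integers give 0/0)
    ratio = (1 - cmath.exp(-2j * math.pi * x)) / (1 - cmath.exp(2j * math.pi * x))
    lhs = cmath.log(ratio) / (2j * math.pi)
    rhs = 0.5 - (x - math.floor(x))  # 1/2 minus the fractional part of x
    print(f"x={x}: lhs={lhs.real:+.12f}, rhs={rhs:+.12f}")
```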
# Variational problems with percolation: dilute spin systems at zero temperature

created by braidesa on 20 Oct 2012, modified on 06 Dec 2012

Published Paper
Inserted: 20 Oct 2012
Last Updated: 6 Dec 2012
Journal: J. Stat. Phys.
Volume: 149
Pages: 846-864
Year: 2012
Doi: 10.1007/s10955-012-0628-1
Links: paper page at J. Stat. Phys.

Abstract: We study the asymptotic behaviour of dilute spin lattice energies by exhibiting a continuous interfacial limit energy, computed using the notion of $\Gamma$-convergence and techniques mixing Geometric Measure Theory and Percolation while scaling the lattice spacing to zero. The limit is not trivial above a percolation threshold. Since the lattice energies are not equi-coercive, a suitable notion of limit magnetization must be defined, which can be characterized by two phases separated by an interface. The macroscopic surface tension at this interface is characterized through a first-passage percolation formula, which highlights interesting connections between variational problems and percolation issues. A companion result on the asymptotic description of energies defined on paths in a dilute environment is also given.
Reports tagged with point location problems:

TR18-074 | 23rd April 2018
Daniel Kane, Shachar Lovett, Shay Moran

#### Generalized comparison trees for point-location problems

Let $H$ be an arbitrary family of hyperplanes in $d$ dimensions. We show that the point-location problem for $H$ can be solved by a linear decision tree that only uses a special type of queries called *generalized comparison queries*. These queries correspond to hyperplanes that can be written as a linear ...
Question

# The rope shown at an instant is carrying a wave travelling towards the right, created by a source vibrating at a frequency n. Consider the following statements:

(i) The speed of the wave is $$4n \times ab$$
(ii) The medium at a will be in the same phase as d after $$\displaystyle \frac{4}{3n}$$ s
(iii) The phase difference between b and e is $$\displaystyle \dfrac{3 \pi}{2}$$

Which of these statement(s) is/are correct?

A (iii) only
B (i) and (iii) only
C (ii) only
D (i), (ii), (iii)

Solution

## The correct option is B: (i) and (iii) only

Statements (i) and (iii) are correct. Consecutive marked points on the rope are a quarter of a wavelength apart, so $ab = \lambda/4$ and the wave speed is $v = n\lambda = 4n \times ab$, confirming (i). The phase differences between points b and c, c and d, and d and e are $\displaystyle \frac{\pi}{2}$ each, $\therefore$ the phase difference between b and e is $\displaystyle \frac{\pi}{2} + \frac{\pi}{2} + \frac{\pi}{2} = \frac{3 \pi}{2}$, confirming (iii).
### Overview of Sentiment de Monsieur Leibnitz (1705)

In July 1700, at the Académie royale des sciences, the algebraist Michel Rolle launched the first criticisms against differential calculus, attacking both its foundations and a use of it which he thought fraudulent. This intervention marks the beginning of the well-known querelle des infiniment petits. The Sentiment de Monsieur Leibnitz manuscript bears witness to one of the last episodes of this quarrel before its appeasement in 1706.

In 1705 the debate was at its peak. In his essay Remarques de M. Rolle de l'Académie des Sciences touchant le problesme général des tangentes [Rolle, 1703], Rolle set out to discredit both the foundations and the exactness of differential calculus. Two years later, on 23 April 1705, Saurin published a reply [Saurin 1705], which he ended by imploring the Académie to pass judgment on the discrepancies between him and Rolle, which he listed in seven points. This extremely tense climate led Varignon to write to Leibniz on 10 May 1705 [AIII 9, 549]. Varignon wanted Leibniz to intervene with members of the Académie. He also asked him to solicit, from members of the République des Lettres who understood his calculus, an endorsement of the points formulated in Saurin's article and of those in a mémoire he attached to this letter.

Leibniz reacted immediately. He wrote two letters - one to Gallois and the other to Bignon - and an endorsement in defense of his friends. Two versions exist: a Latin version [Gotha FB A 448-449, 41-42] and an abridged version in French, written by a copyist [LH 35 VII 9, 1-2]. Only the Latin version of this text was published, under the title "Sentiment de Monsieur Leibnitz" [Leibniz 1706]. This publication is attached to a text by Saurin (which follows his article of 23 April 1705) and two other endorsements: that of Jacob Hermann and those of the Bernoulli brothers. All the publications were, however, confiscated by Jean-Paul Bignon.

L'Analyse des infiniment petits pour l'intelligence des lignes courbes contains an article (§ 163) [L'Hospital 1696] which explains that when a fractional expression has both its numerator and denominator equal to zero at a certain point, its value can be obtained by differentiating the numerator and the denominator. This result is very useful for the determination of tangents at a crunode. Indeed, the formula of the tangent applied to this type of point leads to the expression $\frac{0}{0}$. In an article in the Journal des Sçavans published on 3rd August 1702, Joseph Saurin applied article 163 to determine the two tangents at the crunode point ($x=2$, $y=2$) of the quartic curve
$$y^4 - 8y^3 - 16y^2 - 12xy^2 + 48xy + 4x^2 - 64x = 0.$$
However, this ingenious use was harshly criticized by Rolle. He accused Saurin of adding "supplements" to the rules whenever it pleased him, and also of abusing the rules of algebra.

The Sentiment de Monsieur Leibnitz manuscript was the response to a request from Pierre Varignon, who had asked Leibniz to support differential calculus within the République des lettres, and in particular among members of the Académie royale des sciences. The manuscript has two very distinct parts. In the first, Leibniz elucidates some technical aspects of Rolle's critique. The second part is what makes the text more interesting.
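In modern terms, article 163 is essentially the rule now known as l'Hôpital's rule. A minimal illustration of how it resolves $\frac{0}{0}$ (a schematic example of ours, not Saurin's quartic):

$$\text{if } f(a)=g(a)=0,\quad \frac{f(x)}{g(x)}\bigg|_{x=a}=\frac{f'(a)}{g'(a)};\qquad \text{e.g. } \frac{x^2-1}{x-1}\bigg|_{x=1}\;\longrightarrow\;\frac{2x}{1}\bigg|_{x=1}=2.$$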
By clarifying article 163, Leibniz shows that differential calculus is the only calculus which provides an interpretation of the problematic expression $\frac{0}{0}$ (which appeared here in the geometric configuration of the crunode point, although it can also be found in the characteristic triangle). The association of the ratio sign and the zero "0", which constitutes $\frac{0}{0}$, refers in algebra to an impossibility and indicates a limitation. Using vanishing quantities, Leibniz takes up the challenge of making sense of the expression $\frac{0}{0}$ and goes beyond this limitation. This is an additional argument addressed to the habiles de la spécieuse ordinaire.

There is a French version of this text by Leibniz himself (available here).
# Implicit Differentiation

## Graphs of equations

An equation in $y$ and $x$ is an algebraic expression involving an equality with two (or more) variables. An example might be $x^2 + y^2 = 1$. The solutions to an equation in the variables $x$ and $y$ are all points $(x,y)$ which satisfy the equation. The graph of an equation is just the set of solutions to the equation represented in the Cartesian plane.

With this definition, the graph of a function $f(x)$ is just the graph of the equation $y = f(x)$. In general, graphing an equation is more complicated than graphing a function. For a function, we know for a given value of $x$ what the corresponding value of $f(x)$ is through evaluation of the function. For equations, we may have 0, 1 or more $y$ values for a given $x$, and even more problematic is that we may have no rule to find these values.

To plot such an equation in Julia, we can use the ImplicitEquations package, which is loaded along with CalculusWithJulia:

```julia
using CalculusWithJulia
using Plots
gr()  # better graphics than plotly() here
```

To plot the circle of radius $2$, we would first define a function of two variables:

```julia
f(x,y) = x^2 + y^2
```

Then we use one of the logical operations - `Lt`, `Le`, `Eq`, `Ge`, or `Gt` - to construct a predicate to plot. This one describes $x^2 + y^2 = 2^2$:

```julia
r = Eq(f, 2^2)
```

These "predicate" objects can be passed to `plot` for visualization:

```julia
plot(r)
```

Of course, more complicated equations are possible and the steps are similar - only the function definition is more involved. For example, the Devil's curve has the form

$$y^4 - x^4 + ay^2 + bx^2 = 0$$

Here we draw the curve for a particular choice of $a$ and $b$. For illustration purposes, a narrower viewing window than the default of $[-5,5] \times [-5,5]$ is specified below using `xlims` and `ylims`:

```julia
a,b = -1,2
f(x,y) = y^4 - x^4 + a*y^2 + b*x^2
plot(Eq(f, 0), xlims=(-3,3), ylims=(-3,3))
```
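The other predicates listed above work the same way. As a quick sketch (same `f` as above; this example is our addition, not part of the original page), `Lt` shades the region where the inequality holds - here the interior of the circle:

```julia
using CalculusWithJulia, Plots  # ImplicitEquations predicates come along for the ride

f(x, y) = x^2 + y^2

# Lt(f, 4) is the predicate x^2 + y^2 < 4; plotting it shades the open disk
plot(Lt(f, 2^2), xlims=(-3, 3), ylims=(-3, 3))
```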
# Molecular Inversion Probes for targeted resequencing in non-model organisms

## Abstract

Applications that require resequencing of hundreds or thousands of predefined genomic regions in numerous samples are common in studies of non-model organisms. However, few approaches at the scale intermediate between multiplex PCR and sequence capture methods are available. Here we explored the utility of Molecular Inversion Probes (MIPs) for medium-scale targeted resequencing in a non-model system. Markers targeting 112 bp of exonic sequence were designed from the transcriptome of Lissotriton newts. We assessed the performance of 248 MIP markers in a sample of 85 individuals. Among the 234 (94.4%) successfully amplified markers, 80% had median coverage within one order of magnitude, indicating relatively uniform performance; coverage uniformity across individuals was also high. In the analysis of polymorphism and segregation within a family, 77% of the 248 tested MIPs were confirmed as single-copy Mendelian markers. Genotyping concordance assessed using replicate samples exceeded 99%. MIP markers for targeted resequencing have a number of advantages: high specificity, high multiplexing level, low sample requirement, a straightforward laboratory protocol, no need for preparation of genomic libraries and no ascertainment bias. We conclude that MIP markers provide an effective solution for resequencing targets of tens or hundreds of kb in any organism and in a large number of samples.

## Introduction

High-throughput sequencing has become an indispensable research tool in ecology and evolutionary biology1,2. Whole genome de novo sequencing, assembly and resequencing at a population scale are currently feasible for many non-model species3. However, whole genome resequencing (WGR) is a costly and challenging endeavor, in no small part due to data storage, curation and analysis issues. Also, WGR still remains beyond reach for organisms with particularly large or complex genomes, such as many insects, amphibians or plants. More importantly, WGR is not necessary for addressing many questions which do require information about genomic patterns of variation. Examples include species delimitation, phylogeographic inferences, assessment of genetic structure and gene flow, estimation of genetic variation within populations, as well as studies focusing on predefined gene sets or involving construction of linkage maps. For such applications it is sufficient to sample many loci that collectively represent a fraction of the genome, at a fraction of the WGR cost. Hence there has been wide interest in reduced representation (genome-partitioning) techniques which provide such markers4,5.

Reduced representation approaches may be broadly divided into two classes which complement each other and have been used effectively to address consequential evolutionary and ecological questions6,7,8. The first class comprises techniques that sample the genome approximately at random; the researcher may control the size but not the identity of the target. Various genotyping-by-sequencing approaches relying on restriction enzymes, for example restriction site associated DNA sequencing (RADseq), fall into this class9,10.
The second class encompasses a diverse array of methods which give the researcher some control over the identity or the functional class of the assayed portion of the genome, be it transcribed sequences (RNAseq), transcription factor binding sites (ChIP-Seq) or transcriptionally active chromatin (DNaseSeq). The highest degree of control over the identity of the interrogated regions is provided by targeted resequencing methods11. Sequence capture approaches, which rely on hybridization of genomic DNA to numerous probes of known sequence, have proved especially popular12,13,14.

Hybridization-based targeted resequencing methods, although immensely powerful, have limitations which make them a less-than-ideal choice in some situations. These methods are technically demanding, require construction of genomic libraries prior to hybridization, are time consuming and do not scale well with the number of samples. They are not very efficient when the target is small, on the order of tens of kilobases (kb), or when the genome size is very large, because the enrichment rates typically obtained are only hundreds-fold15. Yet applications in which tens, hundreds or thousands of defined genomic regions need to be interrogated in a large number of samples are common. They include construction of linkage maps or incorporation of new genes into existing maps16, studies of natural hybridization and introgression in hybrid zones17, and genotyping of candidate gene sets in ecological genomics studies18,19. Such applications fall in between PCR-based methods, characterized by high specificity but low multiplexing capabilities, and sequence capture methods with their megabase-size targets. Considerable interest in this intermediate scale of targeted resequencing in biomedicine has led to the development of various commercial solutions, which are, however, not available for non-model organisms20.

Molecular Inversion Probes (MIP)21 appear particularly well suited for targeted resequencing of tens, hundreds or thousands of short genomic regions. They can be used in any organism with partial genomic information available. MIPs are single-stranded DNA molecules containing on their ends sequences complementary to two regions flanking a target of up to several hundred bp. Following hybridization of MIPs to the target, gap-filling and ligation result in circularized DNA molecules containing the sequence of the target together with adaptors and barcodes ready for downstream analyses (Fig. 1). The MIP technique was a popular solution at early stages of large-scale human SNP genotyping22. More recently, MIPs have been used for resequencing large sets of human exons23 and medically relevant gene panels24. Specialized applications of MIPs include detection of low frequency variants25, copy-number variation (CNV)24,26, accurate genotyping of highly similar paralogs27 and quantification of alternative splicing28. So far MIP markers have not been widely used in research on non-model organisms. Yet they offer a number of potential advantages for ecological and evolutionary research. Therefore we explored their utility in a non-model system by assessing the performance of MIP markers designed from transcriptome sequences of Lissotriton newts.

## Results

### MIP performance and rebalancing

The workflow for MIP design and analysis is summarized in Fig. 2. The markers were designed to include positions identified through transcriptome resequencing as diagnostic for Lissotriton montandoni and
L. vulgaris (see Methods) and thus useful for constructing a linkage map. In experiment 1 we tested the performance of 248 MIPs in 24 individuals under equal concentration of all probes. We obtained 15.4 million paired-end Illumina reads; on average 77.3% (SD 4.7%) of reads were on target and the mean coverage was 2080×. No reads were obtained for 14 MIPs (5.6%) and these were excluded from further analyses. Performance of individual MIPs was expressed as the Fraction of Mapped Reads (FMR) within a sample. Ideally, if capture efficiency were uniform, all MIPs should have similar FMR both within and among individuals. The distribution of median FMR as well as variation among individuals for the 234 MIPs with reads on target are shown in Fig. 3. The medians of 187 MIPs (80%) were within one order of magnitude (FMR 0.0010–0.0094). Performance of individual MIPs across samples was more uniform: for 218 MIPs (93%) the difference between the 10th and 90th FMR percentile was less than 5-fold; such uniformity was obtained for 97% of the 187 MIPs mentioned above. Targets of 65 MIPs were longer than the standard 112 bp. No significant correlation between the median FMR and target length was detected (Spearman's ρ = −0.093, P = 0.16).

Experiment 2 was performed using the rebalanced MIP pool, with the probe-to-target ratio increased for the 24 worst performing (median FMR <0.001) markers and decreased for the 1 best performing (FMR = 0.04) MIP; 23.3 million reads were obtained. Rebalancing indeed improved performance of the rebalanced MIPs (Mann-Whitney paired test, V = 23, P = 1.5 × 10⁻⁴; Fig. 4a). However, rebalancing did not significantly reduce FMR variance among MIPs (Fig. 4b, Levene's test on medians, P = 0.84). A surprising consequence of rebalancing was a reduction of specificity, as only 37.3% (SD 12.7%) of reads were on target; the mean coverage in experiment 2 was 516×. The comparison of gel pictures of amplified pools before and after rebalancing clearly shows an increase of nonspecific amplification products (Fig. S1). Even though target bands were excised from gel and purified, Bioanalyzer traces (Fig. S2) show that the peak centered on the expected length of 270 bp (target + arms + Illumina adaptors, Fig. 1) was broader than before rebalancing, which may indicate an increased fraction of nonspecific products of length similar to the target.

### Genotyping and validation of MIPs as single locus Mendelian markers

Among the 234 MIPs with reads on target, genotypes could not be called for 3 markers due to low coverage and an excessive number of mismatches. There was a significant negative correlation between the coverage per individual and the number of missing genotypes (experiment 1: Spearman's ρ = −0.504, P = 0.012; experiment 2: ρ = −0.719, P = 1.7 × 10⁻¹³). Tests of Mendelian inheritance were performed for 216 out of 231 genotyped MIPs (93.5%) with no missing data and containing at least one polymorphic site within the family (parents + 21 offspring, 1622 sites in total). Markers in which at least one polymorphic site had P < 0.015 (one of the genotypes expected under Mendelian segregation completely missing) were marked as potential paralogs. This procedure flagged 25 MIP markers (11.6% of all tested) as potential paralogs. Thus 191 (77% of the initial 248) markers were confirmed as polymorphic and single copy.
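To make the FMR metric used in these results concrete, here is a minimal R sketch of how such per-MIP summaries can be computed (the `counts` matrix and its dimensions are our hypothetical stand-ins, not the study's data):

```r
# Hypothetical on-target read counts: 234 MIPs (rows) x 24 individuals (columns)
set.seed(1)
counts <- matrix(rpois(234 * 24, lambda = 2000), nrow = 234)

# FMR: each MIP's fraction of the mapped reads within an individual
fmr <- sweep(counts, 2, colSums(counts), "/")

# Per-MIP median FMR across individuals
med <- apply(fmr, 1, median)

# Per-MIP spread across individuals: ratio of the 90th to the 10th percentile,
# analogous to the "less than 5-fold" uniformity check reported above
spread <- apply(fmr, 1, function(x) quantile(x, 0.9) / quantile(x, 0.1))
```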
Tests of the excess and deficit of heterozygotes were performed in three natural Lissotriton vulgaris graecus (Lvg) populations to further check for the presence of potential paralogs and to identify loci with null alleles. 164 MIPs were polymorphic in Lvg (in total 601 polymorphic sites). We detected 91 sites in 20 MIPs (12.2% of all tested markers) showing an excess of heterozygotes at the false discovery rate (FDR) 0.05. These markers were also flagged as potential paralogs and removed from further analyses; 12 (36% of all) potential paralogs were marked as such in both family-based and population-based analyses. Deficit of heterozygotes, suggesting the presence of null alleles, was detected at FDR 0.05 for 59 sites in 8 MIPs (4.8%). The non-reference discrepancy rate (NRD) estimated for 16 Lvg individuals genotyped in replicates was 0.008 (SD 0.0076), indicating >99% genotyping concordance. The three Lvg populations differed greatly in the level of genetic variation (Table 1). Both the number of segregating sites and nucleotide diversity were the lowest in the Milia population in the Peloponnese (S = 36, π = 0.0003), and the highest in the Gracen population in Albania (S = 224, π = 0.0024).

## Discussion

In this study we tested the performance of Molecular Inversion Probes (MIP) as molecular markers in non-model species without a sequenced genome. The markers were designed from transcriptome sequences but were genotyped from genomic DNA. Hence identification of exon boundaries in transcripts was essential for successful genotyping29,30. We applied a homology-based approach which relies on the observation that most exons, especially constitutive ones, are conserved across vertebrates31,32. Indeed, the exon boundaries were correctly identified in most newt protein-coding transcripts using gene models of Xenopus, which diverged from newts ca. 300 mya33. The ca. 5% of MIPs without mapped reads are likely cases of inaccurate prediction of exon boundaries. These failures could be due to incorrect identification of orthologs, lack of conservation of exon-intron boundaries between Xenopus and newts, or erroneous identification of exon boundaries by our blastn-based scripts.

Considering only markers with mapped reads, important measures of their performance are specificity and coverage uniformity. We targeted ca. 28 kb of genomic sequence, which is less than 10⁻⁶ of the 30 Gb Lissotriton genome34. With 77% of reads on target the enrichment rate was almost million-fold, approaching that of PCR. Specificity was lower than the 98–99% reported in humans11,23, but when the fraction of reads mapped to reference is taken into account, the difference between results obtained for humans (89.5%) and newts (77.3%) is not large. Coverage uniformity is lower for MIP markers than for other targeted resequencing methods, a major limitation of the technique11. In our study uniformity (80% of MIPs within the 10-fold range) compares favorably with that reported in a human study utilizing a large set of array-synthesized MIPs (58% of MIPs within the 10-fold range)23 and is similar to that for column-synthesized MIPs after rebalancing (90% of MIPs within the 15-fold range)24. In our hands, rebalancing by increasing the concentration of poor performers 100-fold25 slightly improved performance of the rebalanced probes but decreased the fraction of reads on target; such an effect has not been reported in human studies. It is possible that increasing the concentration of poor performers 10- or 50-fold24 would produce better results.
However, it is worth noting that uniformity was acceptable even without rebalancing. Therefore, although further tests of rebalancing may be desirable, we suppose that in many situations discarding poorly performing markers may be a satisfactory solution. In accordance with earlier reports23, capture efficiencies of individual MIPs were highly reproducible. Thus, if samples are sequenced to similar coverage, the fraction of missing data should be low for most MIPs. This is an important advantage in applications sensitive to a high incidence of missing data, such as construction of linkage maps35.

Another crucial aspect of MIP performance is their utility as molecular markers. Useful markers may be broadly defined as those reproducibly genotyped and easily interpreted, i.e. they are single-locus, polymorphic, codominant, and have a low incidence of null alleles36,37. In the SNP literature the fraction of designed markers which can be assayed is termed the conversion rate, while the fraction of markers which are both confirmed as single-locus and polymorphic is referred to as the validation rate. For SNPs discovered from transcriptomes, both figures are typically lower than those obtained in the current study. For example, in a set of fish studies the conversion rate ranged between 43 and 92% and the validation rate between 12 and 83%; for species without extensive genomic resources these values were in the lower part of the range, unless exon boundaries were identified in transcripts and taken into account during marker design38. In species poorly characterized at the genomic level, the conversion rate will inevitably be variable, depending on factors such as intraspecific polymorphism and the genomic rate of duplication. The latter determines the frequency of young paralogs and the extent of copy number variation. In the present study almost all markers were polymorphic in the hybrid family, indicating successful identification of diagnostic positions in transcriptome resequencing data. Although we attempted to filter out paralogs prior to MIP design, the departures from segregation ratios expected under single-locus Mendelian inheritance suggest that ca. 12% of MIP markers may still be paralogs. Additional paralogs were implied by an excess of heterozygotes in Lvg populations. Fewer than a third of the paralogs were common to the two datasets. This may result from a high genomic duplication rate and extensive copy number variation within and between newt populations, consistent with the reported ca. 10% differences in genome size between closely related lineages of L. vulgaris and L. montandoni34.

The extent to which MIP markers are transferable between related species is an interesting question. In the present study we assessed performance of MIPs in three populations of Lvg, an evolutionary lineage which diverged from L. vulgaris vulgaris (Lvv) at least 2 mya39. Although no genomic information from Lvg was used during MIP design, almost all MIPs working in the hybrid family worked also in Lvg and 70% were polymorphic. MIPs may thus be more easily transferable between related species than microsatellite loci, especially in species with large genomes40,41.

The combination of information on the performance of MIPs obtained in the present study with previously published data for humans allows assessment of the strengths and limitations of MIP markers in research on non-model organisms. MIPs have a number of advantages:

• They can be reproducibly genotyped from a low amount of input DNA.
We used 300–500 ng, corresponding to 1–1.6 × 10⁴ template copies, and achieved >99% genotype concordance. In humans, 50–120 ng of input DNA, i.e. 1.6–4 × 10⁴ copies, were used24. Thus in species with genomes 1 Gb or smaller, 20–50 ng of genomic DNA should be sufficient. In principle the method should work with an even lower amount of input DNA, but then the frequency of PCR duplicates increases42. If the amount of starting material is limiting, molecular tags uniquely marking reads derived from distinct template molecules may be incorporated into MIP probes to filter out PCR duplicates in downstream bioinformatics analyses25,42.

• MIP probes are hybridized directly to the extracted genomic DNA, eliminating the need for constructing genomic libraries. Although simplified library construction protocols have been described43, preparation of numerous genomic libraries is still laborious, costly and requires microgram DNA quantities, especially if PCR-free protocols are preferred.

• High specificity and the extremely high enrichment rate of MIP markers allow genotyping of a relatively small number of targets even in very large and complex genomes. There is much flexibility in this respect: tens, hundreds or thousands of markers can be easily assayed in a single reaction. This is an advantage compared to multiplex PCR assays44,45,46.

• Only standard laboratory equipment is required.

• The workflow is straightforward; the entire procedure can be completed within two working days for hundreds of samples and is amenable to automation using liquid handling systems.

• Thousands of samples can be sequenced simultaneously using dual indexing.

• Design and analysis of MIP markers are relatively simple. Software is available for MIP design47, and standard or dedicated48 tools may be used for mapping reads to the reference and calling polymorphisms.

Molecular Inversion Probes can be either column- or array-synthesized. The former are individually synthesized unmodified oligonucleotides which do not require purification other than standard desalting. Synthesis at a small scale, for example 5 nmol as in our study, is sufficient for a virtually unlimited number of samples. Column-synthesized MIPs can be combined into various panels and rebalanced as needed, making the approach extremely versatile. Although the initial cost of probes is considerable if thousands of MIPs are required, it remains constant regardless of the number of samples processed. Handling hundreds or thousands of oligos may be challenging, but is greatly simplified if they are delivered and stored in 96-well plates. When thousands or tens of thousands of probes are needed, they can be synthesized on arrays at an extremely low per-probe cost. Oligos are delivered as a pool, which eliminates the need for handling individual probes. However, this method suffers from serious limitations: i) array-synthesized MIPs are difficult to produce at a scale that would support their use in thousands of samples, ii) the non-uniform synthesis and amplification of array-synthesized MIPs negatively impact the performance of targeted capture, iii) high-quality array-synthesized oligonucleotide libraries are not yet broadly accessible24. Array-synthesized probes need to be PCR amplified and converted into single-stranded probes in a complex, multistep procedure49. Recently a simplified procedure utilizing array-synthesized double-stranded probes was proposed42.
Thus it appears that currently both column- and array-synthesized MIP probes are useful, and the choice of either option depends mainly on the target size and the number of samples to process. MIP markers also have limitations which restrict the range of their applications; the two most important are target size and cost. If multimegabase regions are targeted, sequence capture techniques would be more efficient due to better uniformity11. If probes are column synthesized their cost is substantial, translating into a relatively high per-sample cost, especially if few samples are processed. Potentially, the need for prior genomic information required to design markers could be considered a limitation. However, sequencing, assembly and detecting polymorphism using transcriptome data are currently straightforward in any system50,51.

The cost of column-synthesized MIP markers is constant regardless of the number of processed samples. Therefore the per-sample and per-genotype costs of MIP analysis decrease with the increasing number of samples (Table 2). This effect can be quite dramatic, as increasing the number of samples from 100 to 10 000 reduces the per-sample cost 20× for 1000 MIPs and 50× for 5000 MIPs. The per-genotype cost is $0.017 for 1000 MIPs and 1000 samples and merely $0.002 for 5000 MIPs and 10 000 samples. The calculations in Table 2 exclude sequencing, because many options are available depending on the size of the experiment. Sequencing would, however, represent a minor fraction of the total cost. For example, assuming that a mean per-MIP per-sample coverage of 500× is required, sequencing of 1000 MIPs on a HiSeq 2500 would add ca. $1.5 per sample, or ca. $0.0015 per genotype, to the total cost.

In conclusion, we demonstrated satisfactory performance of MIP markers designed from transcriptome sequences in a non-model system possessing a very large and complex genome. As few methods for medium-scale targeted resequencing of numerous samples are available for non-model organisms, MIPs fill an important methodological gap. We would thus like to bring MIP markers to the attention of researchers as a useful extension of the molecular toolkit and an effective solution for large-scale resequencing of tens or hundreds of kb in ecological and evolutionary studies.

## Methods

### Design of Molecular Inversion Probes

The design of Molecular Inversion Probes (MIPs) used in the current study follows that of O'Roak et al.24 (Fig. 2). Individual MIPs are column-synthesized, unmodified, salt-free oligonucleotides 70 bp long. Each MIP contains a common 30 bp linker sequence in the middle, and two fragments complementary to the genomic sequence of interest: an extension arm of 16–20 bp at the 3′ end and a ligation arm of 20–24 bp at the 5′ end, which together flank a 112 bp target (Fig. 1a). Following hybridization, gap filling and ligation (Fig. 1b), circularized DNA molecules are used as template in PCR with universal primers complementary to the linker sequence (Fig. 1c–e). Sample-specific index sequences and Illumina adaptors are introduced during the PCR step; we used double indexing with 8-bp Nextera indexes. Amplicons are then purified and sequenced using Illumina technology.

In our study two issues had to be taken into account during MIP design. First, because no Lissotriton genome is available, MIP markers were designed using transcriptome sequences (available at http://newtbase.eko.uj.edu.pl/).
MIP genotyping was, however, performed from genomic DNA, so we had to ensure that individual MIPs are contained within a single exon. To satisfy this requirement we identified exon boundaries in Lissotriton transcripts using gene models of Xenopus tropicalis, an amphibian species with a sequenced and annotated genome. Sequences of X. tropicalis exons were downloaded from Ensembl version 79 and used to prepare gene models. Lissotriton transcripts were blastn-ed to X. tropicalis gene models and exon boundaries were identified in pairwise alignments using scripts available at https://github.com/molecol/targeted-resequencing-with-mips. Only Lissotriton contigs mapping unambiguously to single X. tropicalis genes were used, to minimize the incidence of paralogs. We note that such a procedure may introduce bias towards less variable genes and may not be appropriate for some types of studies.

Second, MIP markers were developed with the aim of constructing a linkage map of Lissotriton newts. Therefore MIPs were selected using available polymorphism data to identify diagnostic markers most informative for mapping purposes. The linkage map is being constructed using F2 hybrids between the smooth (Lissotriton vulgaris vulgaris, Lvv) and Carpathian (Lissotriton montandoni, Lm) newts. Although no transcriptome sequences of the parents of the hybrid family (generation P) were available, transcriptomes of two Lvv and two Lm individuals from the same geographic regions as the P generation newts are available (http://newtbase.eko.uj.edu.pl/). A total of 4,135 contigs representing putative single-copy genes were available for MIP design. This resequencing dataset allowed identification of diagnostic SNPs, i.e. positions homozygous in all four individuals but with a different allele in each species. Although the sample size of two individuals (four gene copies) per species is small, these "diagnostic" SNPs probably do have allele frequencies highly differentiated between species and will be useful for mapping, because animals of the P generation are likely to be alternative homozygotes. Regions of interest were defined in BED files and MIP markers were designed using the scripts of O'Roak et al.24 available at http://krishna.gs.washington.edu/mip_pipeline/; recently, these scripts have been replaced by the MIPgen software47, which further optimizes MIP design. To test the impact of target length on MIP performance, before designing MIPs we randomly deleted codons from 65 contigs; the actual targets for MIPs designed in these contigs were longer than the standard 112 bp, varying from 115 to 154 bp and in one case 235 bp. MIPs were ranked according to their predicted performance based on the melting temperature of arms24 and only high-scoring MIPs encompassing diagnostic SNPs were considered. If multiple MIPs within a gene passed filters, one was randomly selected. In total 248 MIP markers were synthesized (Biosearch Technologies) and tested in the laboratory (Table S1).

### Samples

In total 85 individuals were analyzed. To check Mendelian inheritance we used a hybrid Lm × Lvv family: 2 individuals from the F1 generation and 21 of their offspring (F2). To test the performance of MIPs in a closely related but distinct evolutionary lineage we used samples from three L. v. graecus (Lvg) populations: Gracen (Albania, 41.16 N, 19.95 E), Milia (Greece, 37.60 N, 22.41 E), and Sagaiika (Greece, 38.10 N, 21.47 E); 18 individuals from each population were analyzed.
The remaining 8 individuals were Lvv and Lm from Poland and additional F2 hybrids from another family. In order to estimate genotyping concordance, 16 Lvg individuals from the Gracen population were analyzed in replicates.

### Laboratory procedures and sequencing

All experimental protocols were carried out in accordance with the institutional animal ethics permit (number 28/2011) and were approved by the First Local Ethical Committee on Animal Testing at the Jagiellonian University in Krakow. DNA was extracted from tail tips of adults and from whole F2 larvae stored in 96% ethanol using the Wizard Genomic DNA Purification Kit (Promega). DNA was dissolved in 100 µl of TE buffer. Target capture and library construction were performed using the protocol described in Hiatt et al.25 with modifications during library amplification. Probes were pooled equimolarly and 5′-phosphorylation was performed using 85 µl of the pool, 50 units of T4 Polynucleotide Kinase (NEB) and 10 µl of 10× T4 DNA ligase buffer in a total volume of 100 µl. The reaction was incubated for 45 min at 37 °C, followed by inactivation of the kinase at 80 °C for 20 min. Captures were performed using 300–500 ng of genomic DNA, the phosphorylated probe pool at a 1000-fold probe-to-target molar excess (adjusted for the rebalanced pool, see below), and 1 µl of 10× Ampligase DNA ligase buffer (Epicentre) in a total volume of 10 µl. The hybridization mixture was incubated at 98 °C for 3 min, 85 °C for 30 min, 60 °C for 60 min, and 56 °C for 120 min. Gap filling and ligation reactions contained 10 µl of hybridization mixture, 300 pmol of each dNTP (NEB), 20 nmol NAD+ (NEB), 7.5 µmol betaine (Sigma), 1 µl of 10× Ampligase DNA ligase buffer, 5 units of Ampligase DNA ligase (Epicentre) and 3.2 units of Phusion DNA polymerase (NEB) in a total volume of 20 µl, and were carried out at 56 °C for 60 min and 72 °C for 20 min. Reactions were then cooled to 37 °C and 20 units of Exonuclease I (NEB) and 100 units of Exonuclease III (NEB) were added to degrade non-circularized probes and genomic DNA. Reactions were incubated at 37 °C for 45 min and at 80 °C for 20 min. For each sample, PCR amplification of captured targets was performed using 25 µl of Multiplex PCR Kit (Qiagen), 0.5 µM of each indexed primer, 5 µl of capture reaction and nuclease-free water to 50 µl. The following PCR conditions were used: 95 °C/15 min, 28× (94 °C/30 s, 65 °C/90 s, 72 °C/90 s), 72 °C/10 min. PCR products from multiple samples were pooled equimolarly, run on a 1.5% agarose gel at 6.5 V/cm for 60 min, and the band at ca. 270 bp was excised and purified using the MinElute Gel Extraction Kit (Qiagen). The purified PCR product was quantified via Qubit and run on a Bioanalyzer (Agilent) to check the quality of the library. The library was then diluted to 12 pM and sequenced using custom primers (Table S1) on the MiSeq platform, producing 2 × 150 bp paired-end reads.

### Mapping and genotyping

Reads were mapped to the reference with Bowtie 252 using --end-to-end --sensitive settings. Multisample SNP calling was performed using GenomeAnalysisTK UnifiedGenotyper53: the PCR error rate was set to 0.005 (--pcr_error_rate 5.0E-3); we excluded all reads with the mate unmapped or mapped to a different marker (--read_filter UnmappedRead, --read_filter BadMate); the minimum base quality considered for variant calling was set to 20 (-mbq 20). Genotypes in positions with coverage <16 or genotype quality <30 phred were considered missing.
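A condensed sketch of this mapping and genotyping step as shell commands (file names are placeholders; the flags are those quoted above, in GATK 3-era syntax; SAM-to-sorted-BAM conversion with samtools is omitted for brevity):

```bash
# Map paired-end MiSeq reads to the MIP reference sequences
bowtie2-build mip_targets.fa mip_targets
bowtie2 --end-to-end --sensitive -x mip_targets \
    -1 sample_R1.fastq.gz -2 sample_R2.fastq.gz -S sample.sam

# Multisample SNP calling with UnifiedGenotyper, settings as in the text
java -jar GenomeAnalysisTK.jar -T UnifiedGenotyper \
    -R mip_targets.fa -I sample1.bam -I sample2.bam \
    --pcr_error_rate 5.0E-3 \
    --read_filter UnmappedRead --read_filter BadMate \
    -mbq 20 -o variants.vcf
```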
The Fraction of Mapped Reads (FMR) represented by a given MIP within an individual was used as the measure of MIP capture performance.

### Pool rebalancing and statistical analyses

Because MIPs can differ in capture efficiency, rebalancing is recommended to improve the uniformity of capture, as it reduces the required mean coverage and thus minimizes the cost of sequencing24. Rebalancing is an adjustment of the concentration of individual probes, in particular increasing the probe-to-target ratio for poorly performing probes. To test the effect of rebalancing, MIP capture, amplification and sequencing were performed in two experiments:

1. 24 individuals were analyzed using the MIP pool containing all markers in equal concentration.
2. 77 individuals were analyzed using the rebalanced MIP pool.

Based on the results of experiment 1, 25 MIPs were selected for rebalancing: the 24 MIPs with the lowest FMR and 1 MIP with the highest FMR; concentrations of the former were increased 100-fold and the concentration of the latter was reduced 10-fold. Sixteen Lvg individuals included in both experiments were used to estimate the genotyping error expressed as the non-reference discrepancy rate (NRD)53 in the GATK module GenotypeConcordance. Mendelian inheritance was tested using the hybrid family. The exact multinomial test in the R package EMT54 was applied to test whether genotype counts in progeny followed the expectations of single-locus Mendelian inheritance. The effect of rebalancing was assessed with Levene's test in the R package car55. To identify potential paralogous markers and detect loci with null alleles in the Lvg populations, genotype frequencies were tested for agreement with Hardy-Weinberg expectations in Genepop56. Markers with an excess of heterozygotes in at least one population were flagged as potential paralogs, whereas those with a deficit of heterozygotes were flagged as loci with null alleles. The number of segregating sites and the nucleotide diversity for the three Lvg populations were estimated in Arlequin57. The remaining statistical tests were performed in R58.

Accession codes: MIP and primer sequences: online Supplementary Information. Transcriptome sequences: http://newtbase.eko.uj.edu.pl/. Scripts for identification of exon boundaries: https://github.com/molecol/targeted-resequencing-with-mips. Scripts for MIP design: http://krishna.gs.washington.edu/mip_pipeline/. Reference sequences, scripts and .vcf files: DRYAD entry doi:10.5061/dryad.q72b7.

How to cite this article: Niedzicka, M. et al. Molecular Inversion Probes for targeted resequencing in non-model organisms. Sci. Rep. 6, 24051; doi: 10.1038/srep24051 (2016).

## References

1. Ekblom, R. & Galindo, J. Applications of next generation sequencing in molecular ecology of non-model organisms. Heredity 107, 1–15 (2011).
2. McCormack, J. E., Hird, S. M., Zellmer, A. J., Carstens, B. C. & Brumfield, R. T. Applications of next-generation sequencing to phylogeography and phylogenetics. Mol Phyl Evol 66, 526–538 (2013).
3. Ellegren, H. Genome sequencing and population genomics in non-model organisms. Trends Ecol Evol 29, 51–63 (2014).
4. Turner, E. H., Ng, S. B., Nickerson, D. A. & Shendure, J. Methods for genomic partitioning. Annu Rev Genomics Hum Genet 10, 263–284 (2009).
5. Good, J. M. Reduced representation methods for subgenomic enrichment and next-generation sequencing. Mol Methods Evol Genet 5, 85–103 (2011).
6. Hohenlohe, P. A. et al. Population genomics of parallel adaptation in threespine stickleback using sequenced RAD tags. PLoS Genet 6, e1000862 (2010).
7. McCormack, J. E. et al. Ultraconserved elements are novel phylogenomic markers that resolve placental mammal phylogeny when combined with species-tree analysis. Genome Res 22, 746–754 (2012).
8. Hebert, F. O., Renaut, S. & Bernatchez, L. Targeted sequence capture and resequencing implies a predominant role of regulatory regions in the divergence of a sympatric lake whitefish species pair (Coregonus clupeaformis). Mol Ecol 22, 4896–4914 (2013).
9. Davey, J. W. et al. Genome-wide genetic marker discovery and genotyping using next-generation sequencing. Nat Rev Genet 12, 499–510 (2011).
10. Narum, S. R., Buerkle, C. A., Davey, J. W., Miller, M. R. & Hohenlohe, P. A. Genotyping-by-sequencing in ecological and conservation genomics. Mol Ecol 22, 2841–2847 (2013).
11. Mamanova, L. et al. Target-enrichment strategies for next-generation sequencing. Nat Methods 7, 111–118 (2010).
12. Grover, C. E., Salmon, A. & Wendel, J. F. Targeted sequence capture as a powerful tool for evolutionary analysis. Am J Bot 99, 312–319 (2012).
13. Faircloth, B. C. et al. Ultraconserved elements anchor thousands of genetic markers spanning multiple evolutionary timescales. Syst Biol 61, 717–726 (2012).
14. Lemmon, A. R., Emme, S. A. & Lemmon, E. M. Anchored hybrid enrichment for massively high-throughput phylogenomics. Syst Biol 61, 727–744 (2012).
15. Tewhey, R. et al. Enrichment of sequencing targets from the human genome by solution hybridization. Genome Biol 10, R116 (2009).
16. Hale, M. C., Thrower, F. P., Berntson, E. A., Miller, M. R. & Nichols, K. M. Evaluating adaptive divergence between migratory and nonmigratory ecotypes of a salmonid fish, Oncorhynchus mykiss. G3 3, 1273–1285 (2013).
17. Twyford, A. & Ennos, R. Next-generation hybridization and introgression. Heredity 108, 179–189 (2012).
18. Ehrenreich, I. M. et al. Candidate gene association mapping of Arabidopsis flowering time. Genetics 183, 325–335 (2009).
19. Cogni, R. et al. Variation in Drosophila melanogaster central metabolic genes appears driven by natural selection both within and between populations. Proc Roy Soc Lond B: Biol Sci 282, 20142688 (2015).
20. Altmüller, J., Budde, B. S. & Nürnberg, P. Enrichment of target sequences for next-generation sequencing applications in research and diagnostics. Biol Chem 395, 231–237 (2014).
21. Hardenbol, P. et al. Multiplexed genotyping with sequence-tagged molecular inversion probes. Nat Biotech 21, 673–678 (2003).
22. Hardenbol, P. et al. Highly multiplexed molecular inversion probe genotyping: over 10,000 targeted SNPs genotyped in a single tube assay. Genome Res 15, 269–275 (2005).
23. Turner, E. H., Lee, C., Ng, S. B., Nickerson, D. A. & Shendure, J. Massively parallel exon capture and library-free resequencing across 16 genomes. Nat Methods 6, 315–316 (2009).
24. O'Roak, B. J. et al. Multiplex targeted sequencing identifies recurrently mutated genes in autism spectrum disorders. Science 338, 1619–1622 (2012).
25. Hiatt, J. B., Pritchard, C. C., Salipante, S. J., O'Roak, B. J. & Shendure, J. Single molecule molecular inversion probes for targeted, high-accuracy detection of low-frequency variation. Genome Res 23, 843–854 (2013).
26. Wang, Y. et al. Analysis of molecular inversion probe performance for allele copy number determination. Genome Biol 8, R246 (2007).
27. Nuttle, X. et al. Rapid and accurate large-scale genotyping of duplicated genes and discovery of interlocus gene conversions. Nat Methods 10, 903–909 (2013).
28. Lin, S., Wang, W., Palm, C., Davis, R. W. & Juneau, K. A molecular inversion probe assay for detecting alternative splicing. BMC Genomics 11, 712 (2010).
29. Studer, B. et al. A transcriptome map of perennial ryegrass (Lolium perenne L.). BMC Genomics 13, 140 (2012).
30. Zieliński, P., Dudek, K., Stuglik, M. T., Liana, M. & Babik, W. Single nucleotide polymorphisms reveal genetic structuring of the Carpathian newt and provide evidence of interspecific gene flow in the nuclear genome. PLoS One 9, e97431 (2014).
31. Alekseyenko, A. V., Kim, N. & Lee, C. J. Global analysis of exon creation versus loss and the role of alternative splicing in 17 vertebrate genomes. RNA 13, 661–670 (2007).
32. Gelfman, S. et al. Changes in exon–intron structure during vertebrate evolution affect the splicing pattern of exons. Genome Res 22, 35–50 (2012).
33. Roelants, K. et al. Global patterns of diversification in the history of modern amphibians. Proc Natl Acad Sci USA 104, 887–892 (2007).
34. Litvinchuk, S. N., Rosanov, J. M. & Borkin, L. J. Correlations of geographic distribution and temperature of embryonic development with the nuclear DNA content in the Salamandridae (Urodela, Amphibia). Genome 50, 333–342 (2007).
35. Van Ooijen, J. W. & Jansen, J. Genetic mapping in experimental populations. (Cambridge University Press, 2013).
36. Schlötterer, C. The evolution of molecular markers—just a matter of fashion? Nat Rev Genet 5, 63–69 (2004).
37. Seeb, J. et al. Single-nucleotide polymorphism (SNP) discovery and applications of SNP genotyping in nonmodel organisms. Mol Ecol Res 11, 1–8 (2011).
38. Montes, I. et al. SNP discovery in European anchovy (Engraulis encrasicolus, L) by high-throughput transcriptome and genome sequencing. PLoS One 8, e70051 (2013).
39. Pabijan, M. et al. The dissection of a Pleistocene refugium: phylogeography of the smooth newt, Lissotriton vulgaris, in the Balkans. J Biogeogr 42, 671–683 (2015).
40. Hendrix, R., Susanne Hauswaldt, J., Veith, M. & Steinfartz, S. Strong correlation between cross-amplification success and genetic distance across all members of 'True Salamanders' (Amphibia: Salamandridae) revealed by Salamandra salamandra-specific microsatellite loci. Mol Ecol Res 10, 1038–1047 (2010).
41. Garner, T. W. J. Genome size and microsatellites: the effect of nuclear size on amplification potential. Genome 45, 212–215 (2002).
42. Yoon, J.-K. et al. microDuMIP: target-enrichment technique for microarray-based duplex molecular inversion probes. Nucl Acid Res 43, e28 (2015).
43. Rohland, N. & Reich, D. Cost-effective, high-throughput DNA sequencing libraries for multiplexed target capture. Genome Res 22, 939–946 (2012).
44. Wielstra, B. et al. Parallel tagged amplicon sequencing of transcriptome-based genetic markers for Triturus newts with the Ion Torrent next-generation sequencing platform. Mol Ecol Res 14, 1080–1089 (2014).
45. Zieliński, P., Stuglik, M. T., Dudek, K., Konczal, M. & Babik, W. Development, validation and high-throughput analysis of sequence markers in nonmodel species. Mol Ecol Res 14, 352–360 (2014).
46. Campbell, N. R., Harmon, S. A. & Narum, S. R. Genotyping-in-Thousands by sequencing (GT-seq): A cost effective SNP genotyping method based on custom amplicon sequencing. Mol Ecol Res 15, 855–867 (2014).
47. Boyle, E. A., O'Roak, B. J., Martin, B. K., Kumar, A. & Shendure, J. MIPgen: optimized modeling and design of molecular inversion probes for targeted resequencing. Bioinformatics 30, 2670–2672 (2014).
48. Pedersen, B. S. Aligning sequence from molecular inversion probes. bioRxiv, 007260 (2014).
49. Murgha, Y. E., Rouillard, J.-M. & Gulari, E. Methods for the preparation of large quantities of complex single-stranded oligonucleotide libraries. PloS One 9, e94752 (2014).
50. Wolf, J. B. Principles of transcriptome analysis and gene expression quantification: an RNA-seq tutorial. Mol Ecol Res 13, 559–572 (2013).
51. De Wit, P., Pespeni, M. H. & Palumbi, S. R. SNP genotyping and population genomics from expressed sequences–current advances and future possibilities. Mol Ecol 24, 2310–2323 (2015).
52. Langmead, B. & Salzberg, S. L. Fast gapped-read alignment with Bowtie 2. Nat Meth 9, 357–359 (2012).
53. DePristo, M. A. et al. A framework for variation discovery and genotyping using next-generation DNA sequencing data. Nat Genet 43, 491–498 (2011).
54. Menzel, U. EMT: Exact Multinomial Test: Goodness-of-Fit Test for Discrete Multivariate data. URL: https://cran.r-project.org/web/packages/EMT/ (2013).
55. Fox, J. & Weisberg, S. An (R) Companion to Applied Regression. (Sage, 2011).
56. Rousset, F. GENEPOP'007: a complete re-implementation of the GENEPOP software for Windows and Linux. Mol Ecol Res 8, 103–106 (2008).
57. Excoffier, L. & Lischer, H. E. L. Arlequin suite ver 3.5: A new series of programs to perform population genetics analyses under Linux and Windows. Mol Ecol Res 10, 564–567 (2010).
58. R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL http://www.R-project.org/ (2013).

## Acknowledgements

This work was supported by the Polish National Science Center (UMO-2012/04/A/NZ8/00662 to W.B.) and by Jagiellonian University (DS/WBiNoZ/INoS/762/14).

## Author information

### Contributions

W.B., M.N., A.F. and K.D. designed research; M.N., K.D. and A.F. performed research; A.F. and M.S. contributed new analytical tools; M.N. and W.B. analyzed data; W.B. and M.N. wrote the paper; all authors contributed to and approved the final version of the manuscript.

### Corresponding author

Correspondence to W. Babik.

## Ethics declarations

### Competing interests

The authors declare no competing financial interests.

## Rights and permissions

Reprints and Permissions

Niedzicka, M., Fijarczyk, A., Dudek, K. et al. Molecular Inversion Probes for targeted resequencing in non-model organisms. Sci Rep 6, 24051 (2016). https://doi.org/10.1038/srep24051
# 4x4 Sudoku solver performance

The code is for a 4x4 Sudoku solver. It works fine when there are a small number of unfilled spaces (0's), but when I give the whole matrix input as 0's the solver takes a very long time. I need it to give only the first valid output; there is no need to calculate the rest of the outputs.

I input a matrix with values from 0 to 4. If they are from 1 to 4 then they are prefilled and they cannot be changed. But if the value is 0, then we can fill in any value from 1 to 4, so that once the Sudoku is filled validly we get the output; else the program prints "No". Matrix A contains the inputs. Matrix B contains either 1's or 0's. If the value is 0 at location x and y in matrix B, then the value at that location is not prefilled.

```c
#include<stdio.h>
#include<stdlib.h>

void printd(int A[4][4])
{
    int i,j;
    for(i=0;i<4;i++)
    {
        for(j=0;j<4;j++)
        {
            printf("%d", A[i][j]);
        }
        printf("\n");
    }
    printf("\n");
    return;
}

int check(int A[4][4])
{
    int i,j,k,a,b,c,state=1,temp=1;
    for(i=0;i<4;i++)
    {
        for(j=0;j<4;j++)
        {
            if(A[i][j]==0)
                temp=2;
        }
    }
    if(temp==1)
    {
        for(i=0;i<4;i++)
        {
            for(j=0;j<4;j++)
            {
                for(a=0;a<4;a++)
                {
                    if(a!=i && A[i][j]==A[a][j])
                        state=0;
                    if(a!=j && A[i][j]==A[i][a])
                        state=0;
                    if(i<2 && j<2)
                    {
                        for(b=0;b<2;b++)
                            for(c=0;c<2;c++)
                                if((b!=i || c!=j) && A[i][j]==A[b][c])
                                    state=0;
                    }
                    else if(i<2 && j<4)
                    {
                        for(b=0;b<2;b++)
                            for(c=2;c<4;c++)
                                if((b!=i || c!=j) && A[i][j]==A[b][c])
                                    state=0;
                    }
                    else if(i<4 && j<2)
                    {
                        for(b=2;b<4;b++)
                            for(c=0;c<2;c++)
                                if((b!=i || c!=j) && A[i][j]==A[b][c])
                                    state=0;
                    }
                    else if(i<4 && j<4)
                    {
                        for(b=2;b<4;b++)
                            for(c=2;c<4;c++)
                                if((b!=i || c!=j) && A[i][j]==A[b][c])
                                    state=0;
                    }
                }
            }
        }
        return state;
    }
    return 0;
}

int fill(int A[4][4],int B[4][4],int x,int y)
{
    int val,i,j,a,b,p;
    int C[4][4];
    for(i=0;i<4;i++)
        for(j=0;j<4;j++)
            C[i][j]=A[i][j];
    val=check(A);
    if(val==1)
    {
        printf("Yes");
        printd(A);
        exit(0);
    }
    /*else if(val==0 && x==3 && y==3)
    {
        printf("N6o");
        return;
    }*/
    else if(x<4)
    {
        if(B[x][y]==0)
        {
            for(p=1;p<5;p++)
            {
                C[x][y]=p;
                //printd(C);
                //printf("%d %d %d\n", C[0][0], C[0][1], C[0][2]);
                if(y<3)
                    fill(C,B,x,y+1);
                else if(x<4)
                    fill(C,B,x+1,0);
            }
        }
        else
        {
            if(y<3)
                fill(C,B,x,y+1);
            else if(x<4)
                fill(C,B,x+1,0);
        }
    }
}

int main()
{
    int n,i,j,k,a,b,c,d;
    int A[4][4];
    int B[4][4]={0};
    //printd(B);
    scanf("%d", &n);
    for(i=0;i<n;i++)
    {
        for(j=0;j<n;j++)
        {
            scanf("%d", &A[i][j]);
            if(A[i][j]!=0)
            {
                B[i][j]=1;
            }
        }
    }
    fill(A,B,0,0);
    printf("No");
    return 0;
}
```

• Try this meta: How can I prepare my code so that I can paste it formatted? if you have problems with copy-pasting the code (especially tabs). – user52292 Nov 8 '14 at 8:34
• 1. You don't need arrays B or C, if you replace guesses with 0 when you backtrack. 2. You don't need to check the whole matrix on every guess. You should only check whether your last guess was illegal. 3. When you encounter a filled in square, you should scan ahead to find the next unfilled square and recurse on that. There's no need to recurse on filled squares, because you don't do anything with them. – JS1 Nov 15 '14 at 11:46

The formatting is messy:

• Many unexpectedly indented blocks and lines. Use indenting well to improve readability, by clearly identifying nested blocks of code.
• Put a space after commas in variable lists and parameter lists, for example:
  • Instead of `int n,i,j,k,a,b,c,d` prefer `int n, i, j, k, a, b, c, d`
  • Instead of `fill(A,B,0,0)` prefer `fill(A, B, 0, 0)`
• Put a space around operators, for example:
  • Instead of `for(i=0;i<n;i++)` prefer `for(i = 0; i < n; i++)`
  • Instead of `if(A[i][j]!=0)` prefer `if (A[i][j] != 0)`

The number 4 is magic in this code. It's everywhere. If you ever extend your implementation to a 5x5 matrix or other, you will have to replace it everywhere. It would be better to define a macro for this, for example:

```c
#define DIM 4
```

Of course, some of the functions in the current implementation can't work with any other matrix. In those functions leave the number 4 as a reminder that you should improve the implementation to support arbitrary dimensions. In other functions that can work with any matrix (with some adjustments), such as `main` and `fill`, replace the number 4 with `DIM`.

• PUT SPACES AROUND OPERATORS very, very good tip. This is very important for legibility. I have yet to work somewhere that this was not part of the code standard. – Almo Nov 8 '14 at 19:36

> ...but when I give the whole matrix input as 0's the solver takes a very long time.

That is because your code is pure brute force: you simply try every possibility, always check all the slots, and continue the loops even when you already know the solution is bad:

```c
for(i=0;i<4;i++)
{
    for(j=0;j<4;j++)
    {
        if(A[i][j]==0)
            temp=2;
    }
}
```

The above loop can clearly be terminated inside the if:

```c
for(i=0;i<4;i++)
{
    for(j=0;j<4;j++)
    {
        if(A[i][j]==0)
        {
            temp=2;
            goto not_filled;
        }
    }
}
not_filled:
```

Similar opportunity for optimization with `state`:

```c
for(i=0;i<4;i++)
{
    for(j=0;j<4;j++)
    {
        for(a=0;a<4;a++)
        {
            if(a!=i && A[i][j]==A[a][j])
                state=0;
```

Looks like you can terminate the loop whenever you set `state=0`. goto is not bad, use it ;) (break cannot terminate all nested loops, goto can.) But that is only the beginning; think about the algorithm, can it be improved? What about checking only the newly filled slot (along with its row, column and sub-square)? Try finding all the unnecessary computation you do and eliminate it (to speed it up).

• If you dislike goto sufficiently to want to avoid it here, you can add in a `int knownbad = 0;` and use it as an exit condition on each loop. The first would become `for(i=0;knownbad == 0 && i<4; i++)` for example. `goto not_filled` would become `knownbad = 1;`. It's a matter of taste and preference. – Martijn Nov 8 '14 at 13:33
• I would rather use a few (inline) functions with `return` if you don't like goto. – user52292 Nov 8 '14 at 14:20
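Pulling the three suggestions from JS1's comment together, here is a minimal sketch of the leaner backtracker (our illustrative rewrite, not the OP's code): guesses are written into `A` directly and reset to 0 on backtrack, only the cell just filled is validated, and prefilled cells are skipped by scanning ahead:

```c
#include <stdio.h>

/* Is it legal to place v at (r,c)? Checks only the affected row,
   column and 2x2 box -- not the whole grid. */
static int legal(int A[4][4], int r, int c, int v)
{
    int br = (r / 2) * 2, bc = (c / 2) * 2, i, j;
    for (i = 0; i < 4; i++)
        if (A[r][i] == v || A[i][c] == v)
            return 0;
    for (i = br; i < br + 2; i++)
        for (j = bc; j < bc + 2; j++)
            if (A[i][j] == v)
                return 0;
    return 1;
}

/* Solve by backtracking; prefilled cells are nonzero and skipped. */
static int solve(int A[4][4], int pos)
{
    int r, c, v;
    while (pos < 16 && A[pos / 4][pos % 4] != 0)
        pos++;                      /* scan ahead to the next empty cell */
    if (pos == 16)
        return 1;                   /* grid full and consistent */
    r = pos / 4;
    c = pos % 4;
    for (v = 1; v <= 4; v++) {
        if (legal(A, r, c, v)) {
            A[r][c] = v;
            if (solve(A, pos + 1))
                return 1;           /* stop at the first solution */
            A[r][c] = 0;            /* undo the guess on backtrack */
        }
    }
    return 0;
}

int main(void)
{
    int A[4][4], i, j;
    for (i = 0; i < 4; i++)
        for (j = 0; j < 4; j++)
            scanf("%d", &A[i][j]);
    if (solve(A, 0)) {
        printf("Yes\n");
        for (i = 0; i < 4; i++, printf("\n"))
            for (j = 0; j < 4; j++)
                printf("%d", A[i][j]);
    } else {
        printf("No\n");
    }
    return 0;
}
```

With this structure the `B` and `C` matrices and the full-grid `check` disappear, and an all-zero input solves essentially instantly.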
# Random walk on $\mathbb{Z}$ towards the origin

Consider a random walk on $\mathbb{Z}$ with rate $a>0$ (not necessarily started at the origin). The random walk jumps one step towards the origin with probability $p$ and one step away from the origin with probability $1-p$. If the walk is at the origin, it jumps left or right with probability $1/2$ each.

Question. Is it true that if $p > 1/2$, then $P(\{\omega: \exists N(\omega)\in [1,\infty),\ |S_t|\leq N(\omega)\ \forall t>0\})=1$?

I think it is. I thought about constructing a coupling.

• There obviously is a chance of $(1-p)^{N+1} \gt 0$ that the walk will exit the interval in the first $N+1$ steps, and it's equally obvious the chance of exiting can only get larger as the number of steps increases. This would seem to answer your question in the negative, depending on what you mean by "a.s. there exists $N$" and by "all times." Could you explain your meaning? – whuber May 15 '19 at 16:14
• a.s. there exists $N$ means that $P(\{\omega\in\Omega :\limsup_{n\rightarrow\infty}S_n<\infty \})=1$ and $P(\{\omega\in\Omega :\liminf_{n\rightarrow\infty}S_n>-\infty \})=1$. May 16 '19 at 0:18
• There are paths $\omega$ for which there is an $N_p(\omega)$ such that $|S_n|<N_p(\omega)$ for all $n$. "All times" means that along such a trajectory the random walk is confined to $[-N,N]$ for all $t\geq0$. May 16 '19 at 0:51
• I'm not sure that the proof you are providing actually addresses the question. You seem to be proving that on average the sum of steps approaches 0. At best it seems like this shows that the average grows sub-linearly. May 16 '19 at 2:22
• The statements you quote make no sense. In the first one, for instance, you assert the existence of an "$N$" according to a predicate in which the symbol "$N$" never even appears! – whuber May 16 '19 at 12:21

I'm not sure that this is true for $p<1$. Give me an $N$. I suggest that if you wait $N+1$ steps, then there is probability $(1-p)^{N+1} > 0$ that the random walker has exceeded $N$. It is therefore not a.s.; perhaps you meant a.a.s., or am I misunderstanding a.s. as almost surely?

• What does a.a.s. stand for? May 15 '19 at 5:11
• Asymptotically almost surely May 15 '19 at 11:52
• I'm not really sure if this will hold a.a.s., but it at least punts the question outside of what I know how to handle May 15 '19 at 14:22
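One standard way to make whuber's escape estimate precise (a sketch under the assumption that the walk is the irreducible birth–death chain described in the question): for $p > 1/2$ the drift toward the origin makes the chain positive recurrent, with stationary distribution essentially geometric, $\pi(n) \propto ((1-p)/p)^{|n|}$ up to a boundary factor at the origin. But an irreducible recurrent chain visits every state infinitely often almost surely, so

$$\sup_{t\ge 0}|S_t| = \infty \quad \text{a.s.}, \qquad P\bigl(\exists N<\infty:\ |S_t|\le N\ \ \forall t\bigr) = 0,$$

consistent with the escape probability $(1-p)^{N+1}>0$ for every fixed $N$.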
Documentation

# ndBasis

Basis functions for tunable gain surface

## Description

You use basis function expansions to parameterize gain surfaces for tuning gain-scheduled controllers with the tunableSurface command. The complexity of such expansions grows quickly when you have multiple scheduling variables. Use ndBasis to build N-dimensional expansions from low-dimensional expansions. ndBasis is analogous to ndgrid in the way it spatially replicates the expansions along each dimension.

shapefcn = ndBasis(F1,F2) forms the outer (tensor) product of two basis function expansions. Each basis function expansion is a function that returns a vector of expansion terms, such as returned by polyBasis. If $F_1(x_1) = [F_{1,1}(x_1), F_{1,2}(x_1), \dots, F_{1,i}(x_1)]$ and $F_2(x_2) = [F_{2,1}(x_2), F_{2,2}(x_2), \dots, F_{2,j}(x_2)]$, then shapefcn is a vector of terms of the form

$F_{ij} = F_{1,i}(x_1)\,F_{2,j}(x_2).$

The terms are listed in a column-oriented fashion, with i varying first, then j.

shapefcn = ndBasis(F1,F2,...,FN) forms the outer product of three or more basis function expansions. The terms in the vector returned by shapefcn are of the form

$F_{i_1 \dots i_N} = F_{1,i_1}(x_1)\,F_{2,i_2}(x_2)\cdots F_{N,i_N}(x_N).$

These terms are listed in the sort order of an N-dimensional array, with i1 varying first, then i2, and so on. Each Fj can itself be a multi-dimensional basis function expansion.

## Examples

Create a two-dimensional basis of polynomial functions up to second order in both variables. Define a one-dimensional set of basis functions.

F = @(x)[x,x^2];

Equivalently, you can use polyBasis to create F.

F = polyBasis('canonical',2);

Generate a two-dimensional expansion from F.

F2D = ndBasis(F,F);

F2D is a function of two variables. The function returns a vector containing the evaluated basis functions of those two variables:

$F2D(x,y) = [x,\ x^2,\ y,\ yx,\ yx^2,\ y^2,\ xy^2,\ x^2y^2].$

To confirm this, evaluate F2D for x = 0.2, y = -0.3.

F2D(0.2,-0.3)
ans = 1×8
0.2000 0.0400 -0.3000 -0.0600 -0.0120 0.0900 0.0180 0.0036

The expansions you combine with ndBasis need not have the same order. For instance, combine F with a first-order expansion in one variable.

G = @(y)[y];
F2D2 = ndBasis(F,G);

The array returned by F2D2 is similar to that returned by F2D, without the terms that are quadratic in the second variable.

$F2D2(x,y) = [x,\ x^2,\ y,\ yx,\ yx^2].$

Evaluate F2D2 for x = 0.2, y = -0.3 to confirm the order of terms.

F2D2(0.2,-0.3)
ans = 1×5
0.2000 0.0400 -0.3000 -0.0600 -0.0120

Create a set of two-dimensional basis functions where the expansion is quadratic in one variable and periodic in the other variable. First generate the one-dimensional expansions. Name the variables for improved readability.

F1 = polyBasis('canonical',2,'x');
F2 = fourierBasis(1,1,'y');

For simplicity, this example takes only the first harmonic of the periodic variation. These expansions have basis functions given by:

$F1(x) = [x,\ x^2], \qquad F2(y) = [\cos(\pi y),\ \sin(\pi y)].$

Create the two-dimensional basis function expansion.
Note that ndBasis preserves the variable names you assigned to the one-dimensional expansions.

F = ndBasis(F1,F2)
F = function_handle with value:
@(x,y)utFcnBasisOuterProduct(FDATA_,x,y)

The array returned by F includes all multiplicative combinations of the basis functions:

$F(x,y) = [x,\ x^2,\ \cos(\pi y),\ \cos(\pi y)x,\ \cos(\pi y)x^2,\ \sin(\pi y),\ x\sin(\pi y),\ x^2\sin(\pi y)].$

To confirm this, evaluate F for x = 0.2, y = -0.3.

F(0.2,-0.3)
ans = 1×8
0.2000 0.0400 0.5878 0.1176 0.0235 -0.8090 -0.1618 -0.0324

## Input Arguments

F1,F2,...,FN — Basis function expansion, specified as a function handle. The function must return a vector of basis functions of one or more scheduling variables. You can define these basis functions explicitly, or using polyBasis or fourierBasis.

Example: F = @(x)[x,x^2,x^3]
Example: F = polyBasis(3,2)

## Output Arguments

shapefcn — Basis function expansion, returned as a function handle. shapefcn takes as input arguments all the variables of F1,F2,...,FN combined. It returns a vector of functions of those variables, defined on the interval [–1,1] for each input variable. When you use shapefcn to create a gain surface, tunableSurface automatically generates tunable coefficients for each term in the vector.

## Tips

• The ndBasis operation is associative:
ndBasis(F1,ndBasis(F2,F3)) = ndBasis(ndBasis(F1,F2),F3) = ndBasis(F1,F2,F3)
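The term ordering is easy to mimic in any language. Below is a minimal C sketch (illustrative only, not MathWorks code; nd_terms is a hypothetical helper) of the behavior the examples above document: each one-dimensional expansion is implicitly augmented with the constant term 1, all pairwise products are formed with the first index varying fastest, and the leading constant is dropped.

#include <stdio.h>

/* Outer-product term ordering: g_k[0] = 1 implicitly, first index fastest,
   constant term dropped.  Writes (n1+1)*(n2+1) - 1 values into out. */
static int nd_terms(const double *f1, int n1, const double *f2, int n2,
                    double *out)
{
    int k = 0;
    for (int j = 0; j <= n2; j++) {
        double g2 = (j == 0) ? 1.0 : f2[j - 1];   /* augmented second basis */
        for (int i = 0; i <= n1; i++) {
            double g1 = (i == 0) ? 1.0 : f1[i - 1]; /* augmented first basis */
            if (i == 0 && j == 0)
                continue;                           /* drop the constant 1*1 */
            out[k++] = g1 * g2;
        }
    }
    return k;
}

int main(void)
{
    double x = 0.2, y = -0.3;
    double f1[] = { x, x * x }, f2[] = { y, y * y }, out[8];
    int n = nd_terms(f1, 2, f2, 2, out);
    for (int k = 0; k < n; k++)
        printf("%8.4f", out[k]);   /* same eight values as F2D(0.2,-0.3) */
    printf("\n");
    return 0;
}

Compiled and run, this prints the same eight values, in the same order, as the F2D(0.2,-0.3) evaluation shown earlier.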
On the stability of the 4th-order Runge-Kutta method

Any intro to numerical ODEs will cover that, e.g. Ascher, Iserles, or Deuflhard. There is also a New Zealand mathematician called Butcher; I think some of his works/books cover it, as does Lambert's book, among others already mentioned.
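For reference, the standard result those texts derive: applied to the test equation $y' = \lambda y$ with step size $h$, classical RK4 gives $y_{n+1} = R(h\lambda)\,y_n$ with the stability function

$$R(z) = 1 + z + \frac{z^2}{2} + \frac{z^3}{6} + \frac{z^4}{24},$$

and the method is absolutely stable where $|R(z)| \le 1$; on the negative real axis this interval is approximately $-2.785 \le h\lambda \le 0$.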
# Can we get MathML support?

On Math SE and Crypto SE, they have MathML support. This would be great for Security SE too, since we deal with quite a bit of crypto. For those of you that don't know, MathML turns this:

$$m \oplus k = c$$

Into this: (rendered formula)

• Do we get many like this? I kind of thought our threshold was that when they got to the maths stage we sent them over to Crypto :-) – Rory Alsop Mod Aug 14, 2012 at 9:05
• @RoryAlsop It's not just for the math, it's for notation. When I'm saying "compute k from KDF(p,s)", I'd much rather have MathML notation, rather than the code-style one. It looks cleaner, and makes identification of symbols much easier. This is also useful for notation of security mechanisms, which are outside the scope of crypto. Aug 14, 2012 at 9:11
• Sure – I already upvoted. Was just needing clarity. – Rory Alsop Mod Aug 14, 2012 at 9:31
• S'all good. Figured you knew, just wanted to respond so everyone knew my reasoning :) Aug 14, 2012 at 9:43
• @Polynomial: My name's Scott Pack and I approve of TeX. Aug 14, 2012 at 12:10
High-energy astronomy requires specialized telescopes to make observations, since gamma-ray photons pass through ordinary lenses and mirrors, and detecting them is a daunting task both owing to the overwhelming presence of cosmic rays and to the opacity of our atmosphere. Contrary to visible light, gamma rays are non-thermal, meaning that they are not produced in hot celestial bodies like the sun. Satellites and high-altitude research rockets can register gamma rays with energies of up to around ten billion electron volts. These instruments are, however, not suitable to detect radiation from the cores of active galaxies or from the remnants of supernovae, which has even higher energies of up to a trillion electron volts.

High-energy cosmic gamma rays produce $e^{\pm}$ pairs in the atmosphere over $\frac{9}{7}X_0$ (with the radiation length $X_0 \approx 36.5~\mathrm{g/cm^2}$ in air). During its journey through the air, this pair comes across more atomic nuclei and a gamma quantum is generated, which then once again hits atomic nuclei. Thus, a single cosmic gamma particle creates roughly a thousand secondary particles in a cascade-like process or sub-atomic shower; this happens about ten kilometers (6.3 miles) above the earth's surface. Because the particles move faster than the speed of light in air, there is the optical analogue of a sonic boom or shock wave, which sends out a flash of blue light in the direction of the primary gamma quantum and lasts a few billionths of a second.

Unlike a conventional telescope, a Cherenkov telescope does not produce images of celestial objects; it detects these atmospheric flashes of Cherenkov light. Searching for gamma quanta is tedious because they hit the earth with much less frequency than optical photons.

The concept for the HESS project crystallized in 1996 at the Max Planck Institute for Nuclear Physics in Heidelberg. Construction began in August 2000, and the first of the four telescopes of the HESS experiment was inaugurated at its location on Goellschau farm, which is at an altitude of 1800 meters; the year 2002 marked the completion of the first of the four telescopes officially inaugurated. Each telescope has a diameter of twelve meters (~40 feet) and 380 individual round mirrors that make up a light-collecting surface area of 108 square meters (~1000 sq. ft.). The mirrors, which are arranged in the shape of a honeycomb, detect the weak light flashes that develop when cosmic gamma quanta penetrate the earth's atmosphere. In the focus of each telescope is a French-built electronic camera with 960 photo-tube detectors, mounted on an area of about 1.4 meters (~4 ft) in diameter. The telescopes, which stand roughly 250 meters (1/10th of a mile) apart, are not weatherproof and are not protected by any sort of roofing or building, and they operate during nights when the moon is not visible. All of them can be directed at the same location in the sky; if several of the four telescopes detect a flash simultaneously, a stereoscopic observation can be made. In order to obtain the image of the gamma's origin, a computer on the HESS site combines up to four images and determines the position as well as the energy of the air shower. The result is plotted as a point on a map of the sky. The system is ten times more sensitive than earlier Cherenkov telescopes. In two years' time, this "High Energy Stereoscopic System" should be completed and able to explore high-energy radiation from galaxies or the remnants of supernovae; the astronomers reckon that they will collect up to one thousand observation hours (~40 days) per year. The Federal Republic of Germany is represented by the Max Planck Institute for Nuclear Physics in Heidelberg, the Humboldt University in Berlin, the Ruhr University in Bochum, the University of Hamburg, the University of Kiel, and the Heidelberg Observatory: an excellent example of efficient international scientific collaboration. A planned successor observatory will have about 20 times more sensitivity, cover a much broader energy range, have considerably better energy resolution, and provide a significantly improved angular resolution; if the decision to realize the project is positive, the telescope is to follow by 2031, with the decision expected in 2021.

MAGIC detected [5] very high energy cosmic rays from the quasar 3C 279, which is 5 billion light years from Earth.

Higher-energy X-ray and gamma-ray telescopes refrain from focusing completely and use coded aperture masks: the patterns of the shadow the mask creates can be reconstructed to form an image. The Energetic X-ray Imaging Survey Telescope (EXIST) is a proposed next-generation multi-wavelength survey mission to survey astrophysical sources in the hard X-ray (20–100 keV) band. HEFT, the High-Energy Focusing Telescope, made its maiden flight in May 2005 from Fort Sumner, New Mexico, USA.

The IMPACT investigation for the STEREO Mission includes a complement of Solar Energetic Particle instruments on each of the two STEREO spacecraft. Of these instruments, the High Energy Telescopes (HETs) provide the highest-energy measurements. The HETs are designed to measure the abundances and energy spectra of electrons, protons, He, and heavier nuclei up to Fe in interplanetary space. Stopping electrons are measured in the energy range ∼0.7–6 MeV, and stopping He from ∼13 to 40 MeV/n.

The Telescope Array (TA) experiment, located in midwest Utah, USA (39.3N, 112.9W, Alt 1382 m), consists of two types of detector. Both methods observe the high-energy phenomenon known as an "air shower", which is generated by an ultra-high-energy cosmic ray.

In order to study in more detail the Pyramid and its Big Void, the team then built in 2018 two new telescopes, which have been installed inside the Grand Gallery of the pyramid, where they faced additional challenges …
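A back-of-the-envelope way to see the "roughly a thousand secondary particles" figure, using the standard Heitler toy model (my gloss; the original material does not spell this out): the number of shower particles doubles about once per splitting length $d = X_0 \ln 2$, so

$$N(X) \approx 2^{X/d},$$

and after about ten doublings $N \approx 2^{10} \approx 10^3$; the cascade stops multiplying once the energy per particle falls below the critical energy $E_c$, giving $N_{\max} \approx E_0/E_c$.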
ros::init(argc, argv, "my_node_name", ros::init_options::AnonymousName | ros::init_options::NoSigintHandler);

init_options are bit flags, so you can just add them up; combining them with bitwise OR (|), as above, is the more idiomatic choice and gives the same result for distinct flags.
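To see why addition happens to work here, a generic C illustration (made-up flag values for demonstration, not the actual ROS definitions):

#include <stdio.h>
#include <stdint.h>

/* Hypothetical flags; each occupies its own bit, like init_options. */
enum {
    ANONYMOUS_NAME    = 1u << 0,   /* 0b01 */
    NO_SIGINT_HANDLER = 1u << 1    /* 0b10 */
};

int main(void)
{
    uint32_t a = ANONYMOUS_NAME | NO_SIGINT_HANDLER;  /* idiomatic */
    uint32_t b = ANONYMOUS_NAME + NO_SIGINT_HANDLER;  /* same result: bits don't overlap */
    uint32_t twice = ANONYMOUS_NAME + ANONYMOUS_NAME; /* addition breaks if a flag repeats */
    printf("%u %u %u\n", (unsigned)a, (unsigned)b, (unsigned)twice); /* prints "3 3 2" */
    return 0;
}

Note that twice is 2, which no longer has bit 0 set, which is why OR is the safer habit.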
Bosonic Construction of Vertex Operator Para-Algebras from Symplectic Affine Kac-Moody Algebras
Michael David Weiner, Pennsylvania State University, Altoona, PA

Memoirs of the American Mathematical Society 1998; 106 pp; softcover
Volume: 135
ISBN-10: 0-8218-0866-4
ISBN-13: 978-0-8218-0866-5
List Price: US$46
Individual Members: US$27.60
Institutional Members: US$36.80
Order Code: MEMO/135/644

Inspired by mathematical structures found by theoretical physicists and by the desire to understand the "monstrous moonshine" of the Monster group, Borcherds, Frenkel, Lepowsky, and Meurman introduced the definition of vertex operator algebra (VOA). An important part of the theory of VOAs concerns their modules and intertwining operators between modules. Feingold, Frenkel, and Ries defined a structure, called a vertex operator para-algebra (VOPA), where a VOA, its modules and their intertwining operators are unified. In this work, for each $n \geq 1$, the author uses the bosonic construction (from a Weyl algebra) of four level $-1/2$ irreducible representations of the symplectic affine Kac-Moody Lie algebra $C_n^{(1)}$. They define intertwining operators so that the direct sum of the four modules forms a VOPA. This work includes the bosonic analog of the fermionic construction of a vertex operator superalgebra from the four level 1 irreducible modules of type $D_n^{(1)}$ given by Feingold, Frenkel, and Ries. While they get only a VOPA when $n = 4$ using classical triality, the techniques in this work apply to any $n \geq 1$.

Readership: Graduate students, research mathematicians, and physicists working in representation theory and conformal field theory.

Table of Contents
• Introduction
• Bosonic construction of symplectic affine Kac-Moody algebras
• Bosonic construction of symplectic vertex operator algebras and modules
• Bosonic construction of vertex operator para-algebras
• Appendix
• Bibliography
# First law of Thermodynamics – FAQ in JEE

The first law of thermodynamics is based on the conservation of energy.

Statement: The change in internal energy of a system is the heat added to the system minus the work done by the system.

Heat is a form of energy. There are two general ways of giving energy to a system:

1. By supplying heat to it
2. By doing mechanical work on it

Let us consider an ideal gas in a cylindrical container fitted with a piston. The piston is fixed in its position and the walls of the cylinder are at a higher temperature than the gas. Therefore the average kinetic energy of a wall molecule will be higher than the average kinetic energy of a gas molecule. So when the gas molecules strike the wall, they receive some energy from the wall molecules. Thus, the total internal energy of the gas increases.

Again, consider the same situation, but now with the walls at the same temperature as the gas. Suppose the piston is pushed inside very slowly to compress the gas. As the gas molecules collide with the piston, their speed increases. Hence the total internal energy of the gas increases.

Thus, the total internal energy of the gas increases when there is a heat transfer or when there is work done on the gas. If in a process an amount ΔQ of heat is given to the gas and an amount ΔW of work is done by it, the total internal energy of the gas must then increase by ΔQ – ΔW:

ΔU = ΔQ – ΔW

This is the first law of thermodynamics.

Conventions: When work is done by the system, ΔW is positive; when work is done on the system, ΔW is negative. Similarly, for heat given to the system, ΔQ is positive, and for heat given out by the system, ΔQ is negative.

Note: These conventions are valid only in Physics. In Chemistry we have different conventions.

Work done by a gas: $W = \int_{V_1}^{V_2} P \, dV$

where P is the pressure of the gas, V1 is the initial volume of the gas, and V2 is the final volume of the gas.

Work done in an isobaric process: W = P(V2 – V1)

Work done in an isochoric process: since the volume remains constant, there is no work done by the system.

Thermodynamics Example: A thermodynamic system is taken through the cycle a-b-c-d-a as shown in the figure (figure omitted: the cycle is a rectangle in the P–V plane). Find the total heat rejected by the gas during the process.

ab and cd are isobaric processes, so the work done can be calculated as:

For ab: W = 100 × (300 – 100) × 10⁻³ J = 20 J
For cd: W = 200 × (100 – 300) × 10⁻³ J = –40 J

bc and da are isochoric processes, so the work done is zero. Therefore the total work done during the cycle is

ΔW = (20 – 40) J = –20 J

The change in internal energy is zero, as the initial state is the same as the final state: ΔU = 0.

Using the first law of thermodynamics:
ΔU = ΔQ – ΔW
0 = ΔQ – (–20 J)
ΔQ = –20 J

Therefore the system rejects 20 J of heat during the cycle.
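To make the work integral concrete, here is a minimal C sketch (my own illustration, not from the JEE material; work is a hypothetical helper) that evaluates W = ∫P dV numerically with the trapezoidal rule and reproduces the 20 J result for leg ab:

#include <stdio.h>

/* Trapezoidal estimate of W = integral of P dV over samples (V[i], P[i]). */
static double work(const double *V, const double *P, int n)
{
    double W = 0.0;
    for (int i = 1; i < n; i++)
        W += 0.5 * (P[i] + P[i - 1]) * (V[i] - V[i - 1]);
    return W;
}

int main(void)
{
    /* Leg ab of the cycle: constant P = 100 Pa, V from 100e-3 to 300e-3 m^3. */
    double V[] = { 0.100, 0.200, 0.300 };   /* m^3 */
    double P[] = { 100.0, 100.0, 100.0 };   /* Pa  */
    printf("W_ab = %.1f J\n", work(V, P, 3));  /* prints W_ab = 20.0 J */
    return 0;
}

This matches the hand calculation above; for a non-constant P(V) the same function gives an approximation that improves with more samples.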
# Understanding the relation between a vector-valued function and the objective function in a multi-objective optimization problem

I am trying to understand the relation between "vector-valued function" and "objective function" as used in optimization problems. I understand that the objective in a multi-objective problem can be defined as $\min(f_1(x), f_2(x), \dots, f_k(x))$ with $x$ a candidate solution taken from the problem space $\mathbb{X}$.

For example, in the multi-objective test function of Schaffer, $f_1(x)$ and $f_2(x)$ are two objective functions and the optimization algorithm searches for $\min(f_1(x), f_2(x))$:

$\text{Minimize} = \begin{cases} f_1(x) = x^2 \\ f_2(x) = (x-2)^2 \end{cases}$

With $\mathbb{X}$ defining the problem space, I suppose each of these functions has the mapping $f: \mathbb{X} \to \mathbb{R}$, so my space is $\mathbb{X} \to \mathbb{R}^2$, isn't it?

The literature suggests that we can also write this set of objective functions as a "vector-valued function" like this: $\mathbf{f}(x) = (f_1(x), f_2(x))^T$ with $\mathbf{f} : \mathbb{X} \to \mathbb{R}^n$.

I also found several other notations:

a) $r(t) = x^2 i + (x-2)^2 j$
b) $\overrightarrow{r}(t) = x^2 i + (x-2)^2 j$
c) $\overrightarrow{f}(x) = (f_1(x), f_2(x))^T$
d) $\overrightarrow{f}(\overrightarrow{x}) = (f_1(\overrightarrow{x}), f_2(\overrightarrow{x}))^T$
e) $\mathbf{f}(x) = [f_1(x), f_2(x)]^T$

I also found this notation to define a vector of parameters: $x^* = \{x_1^*, x_2^*, \dots, x_n^*\}$.

My questions are:

• Why use a "vector-valued function" to redefine the "objective function"? I don't understand the interest. :/
• What really is a "vector-valued function" in this use case, and what is the best notation? Can you explain this concept with an example or a graphic?
• What are $^T$ and $x^*$ in this use case?

• Just looking at the name, a "vector-valued function" is a function with values in a vector space. It is usually specified to point out that the function output is a multi-dimensional vector, which is the same as saying it has multiple numerical outputs (in your example, those are $x^2$ and $(x-2)^2$ for input $x$). Therefore, your function is still an objective function, but with multiple outputs; that's why it is called "vector valued". The best way to represent it would be as a parametrized curve: in your example, consider the variable $x$ as time. At each instant, the function gives a point in the plane $\mathbb R^2$, so we can represent all of them as a curve. Here is the curve for your particular example, when $-2\leq x\leq 4$: (figure: the parametrized curve $x \mapsto (x^2, (x-2)^2)$)
• You are free to use the notation you find easiest to work with, as long as it is explicit that you have one input $x$ and two outputs $f_1(x)$ and $f_2(x)$. The notations with $i$ and $j$ are rather ambiguous, unless you have defined those as a basis of the vector space beforehand. The arrow notation is nice if you want to remember that the function is vector valued, but might be cumbersome to carry everywhere.
• The $(\cdot)^T$ notation stands for transpose, which means changing rows to columns and columns to rows in a matrix. This is because vectors in $\mathbb{R}^n$ are usually represented as column vectors, while we usually write in rows. Specifically, $$(f_1(x), f_2(x))^T =\left(\begin{array}{c}f_1(x)\\ f_2(x)\end{array}\right).$$ The $*$ notation is more ambiguous, and could mean many different things according to context. You can find here an explanation which seems to suit your case (although I cannot be sure of it), as the vector of solutions to the optimization problems of each coordinate.
Following this rule, in the example, $$x^*=(x_1^*,x_2^*) = (0,2)$$ (because $0$ is the optimal value of $x$ for $f_1$ and $2$ is the optimal value of $x$ for $f_2$).
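A hedged sketch in C (mine, not from the thread; objective is a hypothetical name) showing the vector-valued objective in code, one scalar input and a vector of outputs, and sampling the parametrized curve the answer describes:

#include <stdio.h>

/* Vector-valued objective for the Schaffer problem: one input x,
   two outputs written into f[0], f[1]. */
static void objective(double x, double f[2])
{
    f[0] = x * x;                 /* f1(x) = x^2       */
    f[1] = (x - 2) * (x - 2);     /* f2(x) = (x - 2)^2 */
}

int main(void)
{
    /* Sample the parametrized curve x -> (f1(x), f2(x)) on [-2, 4]. */
    double f[2];
    for (double x = -2.0; x <= 4.0; x += 0.5) {
        objective(x, f);
        printf("x=%5.2f  f1=%6.2f  f2=%6.2f\n", x, f[0], f[1]);
    }
    return 0;
}

The points with $0 \le x \le 2$ trace the Pareto front of this problem: there, neither objective can improve without worsening the other.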
## Natural pans as an important surface water resource in the Cuvelai Basin — Metrics for storage volume calculations and identification of potential augmentation sites

Please always quote using this URN: urn:nbn:de:bvb:20-opus-223019

• Numerous ephemeral rivers and thousands of natural pans characterize the transboundary Iishana-System of the Cuvelai Basin between Namibia and Angola. After the rainy season, surface water stored in pans is often the only affordable water source for many people in rural areas. High inter- and intra-annual rainfall variations in this semiarid environment provoke years of extreme flood events and long periods of droughts. Thus, the issue of water availability is playing an increasingly important role in one of the most densely populated and fastest growing regions in southwestern Africa. Currently, there is no transnational approach to quantifying the potential storage and supply functions of the Iishana-System. To bridge these knowledge gaps and to increase the resilience of the local people's livelihood, suitable pans for expansion as intermediate storage were identified and their metrics determined. Therefore, a modified Blue Spot Analysis was performed, based on the high-resolution TanDEM-X digital elevation model. Further, surface area–volume ratio calculations were accomplished for finding suitable augmentation sites in a first step. The potential water storage volume of more than 190,000 pans was calculated at 1.9 km³. Over 2200 pans were identified for potential expansion to facilitate increased water supply and flood protection in the future.
# Red Plenty or socialism without doctrines

by John Quiggin on June 9, 2012

Among the many reasons I enjoyed Francis Spufford’s Red Plenty, one of the most important is that the story it tells is part of my own intellectual development, on one of the relatively few issues where my ideas have undergone an almost complete reversal over the years.

I was once, like most of the characters in the book, a believer in central planning. I saw the mixed economy and social democracy as half-hearted compromises between capitalism and socialism, with history inevitably moving in the direction of the latter. While I was always hostile to the dictatorial policies of Marxism-Leninism, I thought, in the crisis years of the early 1970s, that the Soviet Union had the better economic model, and that the advent of powerful computers and new mathematical techniques would help to fix any remaining problems. At the same time, I was critical of the kinds of old-style methods of government intervention (tariffs, subsidies and so on) that are now called ‘business welfare’.

Over time, and with experience of actual attempts at planning on a smaller scale, I became steadily more disillusioned with the idea. On the whole, I concluded Hayek and Mises had the better of the famous socialist calculation debate of the 1920s and 1930s, and that their arguments about the price mechanism had a lot of merit. This didn’t, however, lead me to share their free-market views, particularly in the dogmatic form in which I encountered them studying economics at the Australian National University. Although I hadn’t read him at the time (and I wonder what Corey Robin would have to say on the subject), I agree pretty much with Oakeshott when he says ‘This is, perhaps, the main significance of Hayek’s Road to Serfdom — not the cogency of his doctrine, but the fact that it is a doctrine. A plan to resist all planning may be better than its opposite, but it belongs to the same style of politics’. This aspect of Hayek is even more pronounced in Mises, for whom free-market economics is a matter of logical deduction, and it is taken to a ludicrous extreme by their propertarian followers today.

The same kind of thinking was evident in much of the financial ‘rocket science’ that gave us the global financial crisis. The belief was that sufficiently sophisticated financial ‘engineering’ could overcome the realities of risk and uncertainty, producing untold wealth for its practitioners while making society as a whole more prosperous – only the first part of the promise was delivered.

So, rather than switching from central planning to free-market capitalism, I’m now, in Albert Métin’s description of Australia in the early C20, a believer in ‘socialism without doctrines’, starting from the historical premise that Keynesian social democracy has delivered better outcomes than either free-market dogmatism or central planning, and looking for ways to develop a new social democratic vision relevant to our current circumstances.

As Red Plenty shows, my enthusiasm for and disillusionment with central planning was about fifteen years behind the same developments in the Soviet Union itself. Spufford gives us a sympathetic picture of their hopes, and of the promise generated by new mathematical techniques like linear programming and optimal control (although entirely free of actual math, the book does a better job than any I’ve read of conveying the feel of these techniques).
In 1956, Khrushchev makes his famous promise of overtaking the US, and it seems quite credible, but a decade later, all belief in the promise of plenty has been lost. As the book ends, the mathematical programmers charged with making the plan work are pushing the benefits of prices – some at least, like János Kornai, would complete the journey to the free-market right, and advocacy of the ‘shock therapy’ approach to post-Communist transition.

Red Plenty is a great book. It would be fascinating to see Spufford tackle the post-Soviet transition, and particularly the way in which liberal reformers like Chubais and Berezovsky transformed themselves into oligarchs, with the aid of Western academic economists like Andrei Shleifer. The pattern of naïve faith and disillusionment with free-market economics would make a perfect counterpoint to the story of central planning presented here.

{ 144 comments }

1 BenK 06.09.12 at 2:13 am
The problem is that the free-market system hasn’t been tried in recent memory to a degree of rigor so as to allow such disillusionment. There is no such historical novel to write, at this point.

2 Boconnor 06.09.12 at 2:26 am
Charles Dickens’ novels will give you an insight into free market systems with no safety net or government social security system, if you’re after a free market system of sufficient “rigour”.

3 Matt 06.09.12 at 2:36 am
This is good, though I’m skeptical that Berezovsky was ever properly thought of as a reformer. He may have tried to put on that role at times, but he seems to have been a crook and villain from the start. This is largely true for the majority of the oligarchs, I think, even those who were or are favorites of the Western media.

4 Colin Danby 06.09.12 at 2:51 am
I’m reminded of the neoliberal technocrats I met in Mexico and India in the mid-1990s, many of them originally trained as engineers and mathematicians. They were enthusiastic, very smart, and oddly guileless, convinced that the application of mathematical rationality would straighten it all out. They may still believe that.

5 gordon 06.09.12 at 2:56 am
If planned economies and totally “free” markets are the ideological extremes, and if the best system lies somewhere in between, the next question is obviously “where?” Though I’m strongly inclined to agree with Prof. Quiggin that some intermediate system with both some planning and some market allocation of resources is the best, identifying the best compromise position is difficult. And there are closely associated issues, like the degree of regulation of markets and products, the optimal extent of Govt. enterprise (including ownership of resources), and sustainability, which need to be addressed at the same time.

6 John Quiggin 06.09.12 at 3:12 am
@BenK – I take it this is some sort of ironic play on “Communism was never properly tried out”. You should know that irony never works on the Internetz.

7 Plume 06.09.12 at 3:40 am
Interesting, Mr. Quiggin. My own trajectory is the opposite of yours. I’ve moved from social democrat to a supporter of a radical egalitarian economy, because, to me, social democracy “settles”. It’s nowhere near as ambitious as we need. It does not put social justice at the forefront, or democracy (which must include the economy), and does nothing about the problems of sustainability.
It basically admits defeat and tries to mitigate the worst excesses of capitalism, without taking the logical next step: overturning the system that needs so much mitigation in the first place, and replacing it with one that does not.

Beyond that, I think you narrow the choices too much. It’s not just between the two you mention. We can, in fact we must, do “planning”, if we’re going to achieve social justice and prevent ecological catastrophe. But it doesn’t have to be centrally planned. We do this locally, primarily. Local control, with integration into larger bounded areas. States, regions and the nation as a whole. As we get further away from the local, the “planning” becomes more and more generalized, with specifics left up to local economies. National guidelines create the umbrella, the boundaries, the general goals and pathways, and all localities are represented in all other bounded areas. Localities are then free to implement the specifics according to what works for them, as long as these also fit in holistically with the rest of the nation. One family pulling together. Synergy. And that one family owns the means of production. As in, all of us, together.

8 Plume 06.09.12 at 3:45 am
Also, JQ, @6,
It’s true. It has never been tried on any national scale. Communism being what comes after true Socialism successfully builds and sustains a surplus. Communism being the withering away of the state and an end to all classes. Socialism being the workers owning the means of production. Or, as some updates had it, the entire citizenry owning it. Can anyone truly say Russia tried either one? Or China? Or Cuba? The closest thing in recent times to true communism was Native American village life. No national government has even remotely attempted such a system/no-system. True communism is horizontal, in the sense that Occupy talks about. Obviously, the Soviet, North Korean, Chinese and Cuban systems were vertical.

9 Sebastian H 06.09.12 at 5:28 am
“Communism being what comes after true Socialism *successfully builds and sustains a surplus*. Communism being the withering away of the state and an end to all classes.”
And if the bolded part can’t be done with anything like modern technology, can we finally move on?

10 Nine 06.09.12 at 6:44 am
Somewhat OT – is Chubais really an oligarch now? His Wikipedia profile seems to suggest an excellently poised apparatchik in the new, new, new Russian regime, not a Berezovsky-scale oligarch. Though he probably does have a terrific Swiss bank account.

11 Hidari 06.09.12 at 8:36 am
John
Do you not think that some of Adam Curtis’s documentaries (too many to mention, but this blog post gives the general idea) explore the idea of the essential continuities between the Russia of (‘communist’) central control/planning, and the Russia of (neo-liberal) central control? After all, like neo-classical economics, (Soviet-style) ‘communism’ presupposes that social life is ultimately ‘rational’, and that with the right mathematics it can be modelled and therefore controlled. And Adam Curtis’s stuff (like Havel’s plays) is all about the (ultimate) irrationality of rationality, or, as John Cage put it: ‘you can’t be rational in an irrational world. It’s not rational’.
Incidentally (and I know I have raised this issue on this blog before, but no one seems to have mentioned it here) is anyone going to mention the Inca Empire?
I’ve read quite a few of the comments and posts on Spufford’s book, and without exception (unless I’ve missed it) everyone assumes that the Soviet Union was the only planned economy in history. But of course that’s not true. The Inca Empire was a planned economy, and seemed relatively stable, unless one infers that its quick collapse after the Spanish invasion was a sign of internal instability. Do any anthropologists/economists have anything to say about what the Inca economy can tell us about the ultimate prospects for a ‘planned economy’?

12 Tim Worstall 06.09.12 at 8:48 am
“If planned economies and totally “free” markets are the ideological extremes, and if the best system lies somewhere in between, the next question is obviously “where?””
Clearly. Even on the wilder shores of the laissez faire neoliberal right (extend phrase as you wish) I can’t think of anyone who argues against a mixed economy. Well, maybe Lew Rockwell, and there are plenty of other problems with his views. All I can see people arguing about is what is the correct mix?

13 Hidari 06.09.12 at 9:00 am
One other thing I noticed was that few people seemed to link to Spufford’s website. But this is a goldmine. Look, for example, here at Soviet advertising, promising an unrealisable* ‘good life’, in much the same way that ‘our’ advertising does now.
*for all.

14 John Quiggin 06.09.12 at 9:51 am
As I tried to argue, I don’t think social democracy is just a mixture of central planning and free markets.

15 Down and Out of Sài Gòn 06.09.12 at 11:36 am
Do any anthropologists/economists have anything to say about what the Inca economy can tell us about the ultimate prospects for a ‘planned economy’?
Theocracy, aristocracy and illiteracy may be positives if central planning is your thing. That’s not really helpful to most Communists; they generally think these things are negatives.

16 Ken MacLeod 06.09.12 at 11:45 am
In the 1970s I thought that central planning combined with democratic control along the lines argued for by (e.g.) Ernest Mandel was possible and desirable. Towards the end of the decade I stumbled upon the economic calculation argument, as briefly stated by David Ramsay Steele in a readable pamphlet. I didn’t understand it fully but I kept worrying at the problem it posed. In the 1980s I read Geoffrey Hodgson’s The Democratic Economy, and Nove’s The Economics of Feasible Socialism, which made some socialist sense of the same argument. More recently I’ve been interested in the more radical market socialism proposed by David Schweickart. The only serious socialist arguments against market socialism are those of Paul Cockshott et al for a democratic, centrally planned economy – which I don’t have the mathematics to follow in detail, but which I keep dragging to the attention of anyone who does. Meanwhile, in my own neck of the woods, the Scottish Socialist Party offers a 12-point plan for a ‘Scottish socialist republic’, one of whose 12 points is: ‘Supermarket prices will be frozen.’ Sometimes I wonder why I bother.

17 Data Tutashkhia 06.09.12 at 12:11 pm
Ending exploitation was the (ostensible) goal; a centrally planned economy was the means. If private ownership/hired labor are high on the list of means of exploitation, then a centrally planned economy could, plausibly, be a solution. And if that is not a good solution, then maybe something else is.
18 Matt 06.09.12 at 12:16 pm
Nine, I don’t think Chubias was ever an oligarch in the full sense – he was always involved in the government in a way that the oligarchs were not, didn’t “own” big industries, etc. I’m sure he’s wildly wealthy from ill-gotten gains (and not just things like his infamous “book advance” he got caught with during the ’96 presidential campaign), but he’s had a quite different role from people like Potanin or Berezovsky. (Also, Chubias is the name of my cat, so whenever I hear people mention him, I feel slightly confused for just a minute.)

19 Data Tutashkhia 06.09.12 at 12:26 pm
If I am not mistaken, Chubais (not the cat) is the main reason why the word ‘democrat’ (small d) is now a swear word in Russian.

20 Antoni Jaume 06.09.12 at 2:01 pm
Sebastian H 06.09.12 at 5:28 am «[...] And if the bolded part can’t be done with anything like modern technology can we finally move on?»
No, not if you have nothing really better for people who are starving while someone else is raking in the profits.

21 Slocum 06.09.12 at 2:13 pm
‘Supermarket prices will be frozen.’
They’ll be kept next to the ice cream.

22 bob mcmanus 06.09.12 at 2:21 pm
20: Whatever you want to call it, I will accept an economics that is designed to increase real wages, real social wages, and the size of the commons; designed to decrease the real prices/relative values of private assets and private wealth; stabilizes the business cycle; and builds in the expectations that will be our future.

23 geo 06.09.12 at 3:47 pm
Tim W @12: what is the correct mix?
Check out David Schweickart’s After Capitalism and Alec Nove’s The Economics of Feasible Socialism, both discussed in the earlier Red Plenty seminar post, “To market, to market … or not?” Short answer: free market in goods and services, democratic control of workplaces but otherwise a free labor market, and public, democratic control of investment. The latter is crucial: private financial markets must go.

24 Antoni Jaume 06.09.12 at 3:58 pm
geo 06.09.12 at 3:47 pm «Tim W @12: what is the correct mix? [...] The latter is crucial: private financial markets must go.»
The empirical evidence in Spain makes me agree. The public debt was well within the limits of the eurozone, until the government tried to reduce the catastrophe that the private debts were leading to. Now, in retrospect, it might have been better to let the private interests go bankrupt. As it is now, the whole population, including those who did not benefit from the previous state of things, has to bear the payments.
It’s not one market. It’s not a thing. It’s merely a random, arbitrary collection of individuals, rowing their oars in their own best interests, regardless of how that affects anyone else, much less society as a whole, much less the planet. It’s not logical or rational to assume that a million individuals or fewer, all by doing their own personal thing, all by choosing what is best for no one but themselves, could benefit the rest of us. It’s beyond baffling. Except for the part about brainwashing for centuries, etc. IE, being told that what is good for the rich is good for the rest of us. In whatever form that ruling class appears. The social democratic way doesn’t get rid of that con. It just smooths over some of its roughest edges. We need to end the con, period. We won’t have true democracy until the entire economy is included. Not just a bit here and there. Not just a bit of “representation” here and there. But full inclusion of the economy.

26 Bruce Wilder 06.09.12 at 4:22 pm
I do not understand why people ever think that “planning” is going to resolve conflict, let alone scarcity or shortcoming, into a “system”, and that will be an end to it: conflict, shortcoming, . . . envy, exploitation, unfairness, resentment . . . no more. Just doesn’t make sense to me. Here on the windward side of peak oil and climate change, I understand there are utopians speculating on post-scarcity economics. Somebody has been missing a lot of important memos, I guess. If I think of examples of necessary and successful “central planning” in action, it is hard for me to reconcile that activity with GOSPLAN or linear programming optimization. I would think fisheries regulation to manage sustainable fish stocks might qualify. Or the ongoing arguments over standards of medical care, such as whether it makes sense to routinely test men for prostate cancer. Basically, I’m with bob mcmanus @21. An economics that put its thumb on the scale in favor of fairness, efficiency and genuine progress would be enough for me. It is this twisting around to rationalize all manner of stupidity and looting which gets me down.

27 Plume 06.09.12 at 4:26 pm
And, again, financing ceases to be a problem once we shift to virtual money for work done. Take a family of four. The parents hand out tokens to the kids for doing chores. Extend this to a local commune. Pretty much everything the people need to live is there in that commune. Everyone pitches in, does their share. They receive virtual tokens in exchange. They use those virtual tokens to purchase what they need from their fellow communalists. And round and round it goes. The virtual digits are endless. There is no worry about “scarce” financing for anything. The commune is bounded only by what the people are able to labor for. If the entire nation is set on that model, and every outlet is included, “money” takes a backseat to work attempted and work done. The sky’s the limit, because everyone is on the same system of exchange — the exchange of virtual digits, which are limitless and cost no one to make. This also means there is no need for taxation or debt. We are the government, and we hold that unlimited set of digits in common. Everything else is simply redistribution of said digits, with no need to replenish the total stock. Individual workers replenish their own through another day’s work. But the total never goes down. The people — as in everyone — decide upon prices and wages and lock them down to avoid inflation, bubbles and crashes.
Set in stone in a new constitution, with rights and responsibilities for all. Work hours are dramatically shortened, because we no longer work our day to create profit for ownership and shareholders. We work to provide for ourselves and society alone. No profit, no money, full-on Egalitaria.

28 tomslee 06.09.12 at 4:44 pm
what is the correct mix?
The only political principle I’ve found that approaches universality is “power corrupts”, and the only conclusion I’ve been able to draw is that a stable, permanent “correct mix” can be found only at the end of a rainbow.

29 mattski 06.09.12 at 4:49 pm
Better to focus on the process rather than the outcome. How can we increase transparency, access to information, our general level of intelligence? How do we counter the tendency of privately owned media to reflect the interests of the wealthy? I think it’s necessary to lobby for a more robust public media.

30 Plume 06.09.12 at 5:54 pm
Bruce,
But why is “planning” seen as such a bad thing? Businesses couldn’t survive without it, obviously. Nor could families. What could be more logical than to plan holistically, together, especially when individual planning, per business, generally conflicts with the greater good, the planet, etc.?
As in: even with functioning liberal or social democratic societies, the “free market” overwhelms the regulators. Even if they manage to stay above the fray of “capture”, they’re overwhelmed. Why? Because our “free market” society means millions of people flood the markets with often stupid, environmentally dangerous ideas for goods and services, cuz they have the incentive to, to make a buck. Actually, to make a killing rather than a living. They have incentive to do whatever it takes to get rich, regardless of the impact on workers, consumers or the planet.
In essence, our system has a thin wall around the city, but all the doors and gates are open, so the barbarians (business owners and their marketing campaigns) pour in. The regulators fight worse than a “rearguard action”, trying to handle the onslaught. It’s generally too late. And, because of the “free market”, not only are we saturated with dangerous and potentially dangerous products and services, we’re saturated by glut and overkill. Which is why more than 44% of all businesses go under in their first four years, thus adding even more waste to our landfills, along with draining limited financial resources.
Our system of government is reactive, instead of proactive. Our system, ironically, actually produces the need for more government, because capitalists can do virtually whatever they want prior to getting to the overwhelmed regulator, courts, etc.
Logically, if we extended the city to the entire edge of the nation, controlled all the gates, and made sure nothing was ever even attempted if it did not meet strict social-good criteria, we would need far less “government” on the back-end, fighting that rearguard (re)action. Make the social good the sole criterion for anything new coming on the market, and we have far less reason to “regulate” after the fact. As in, plan, and plan ahead.
Does the product serve the social good? Is the product environmentally safe? Is it safe for individuals, kids, the elderly? Is it sustainable? Does it work and play well with others, with other locales, regions, the planet? Do we actually need it? Yes? Then bring it on. No? Then it’s a no-go right off the bat.
Contrast that with our current system, in which we have 200 different kinds of pretty much everything, from the seemingly benign (sneakers, deodorant, toothpaste) to the likely catastrophic (toxic chemicals, factory farms, non-renewable energy, etc.), and heaven knows what all. Too much for our regulatory system to handle more than a few percent of the total.
If you plan on the front end, and with the entire citizenry and nature in mind from the start, with truly equal input from everyone, then you reduce, if not eliminate, the need for corrective, reactive, regulatory action on the back-end. You also create a healthier, happier, better-educated and more cohesive society in the bargain, and this you can sustain.

31 peter 06.09.12 at 6:34 pm
“particularly in the dogmatic form in which I encountered them studying economics at the Australian National University”
Ah, Economics at the ANU in the 1970s! Those were the days:
When the department (all of it, I now wonder?) made a submission to a public enquiry by the Australian Capital Territory Egg Marketing Board into the price of eggs in the ACT by suggesting that the Egg Marketing Board should dissolve itself, as any government intervention in any economy was always bad.
When the lecturer for first-year micro told the class that if a conjecture dependent on the natural numbers was found to be true for n = 1, and also for n = 2, and also for n = 3, then you could rest assured it would also be true for all n. This rule was known as The Principle of Mathematical Induction, apparently.
When the lecturer for first-year micro announced to the class, “Theorem: Any government intervention in an economy always leads to a fall in national income.” and followed this with the words, “Proof: Consider a two-person economy . . .”
I guess such statements are one way to discourage people with any mathematical competence from entering your profession and competing with you.

32 Plume 06.09.12 at 7:55 pm
Interesting review (by Chris Lehmann) of two books about morality and markets: The Moral Limits of Markets

33 Sebastian H 06.09.12 at 8:08 pm
Plume, you don’t appear to understand what prices are for. The thumbnail sketch of how they work is that they signal scarcity and encourage alternatives to be used for the scarcer items when possible. So your critique of money ends up being entirely beside the point. You can’t “lock down” prices the way you describe without causing a host of other problems – including shortages of necessities. And I suspect you haven’t thought about your new constitution very well. What do you do with people who don’t want to work in the area that the “democracy” votes for them? Traditionally in communist societies this involves very evil-looking pressures on them. Capitalist societies paradoxically allow for more non-traditional or non-“approved” options than most other societies. The one you’re describing doesn’t sound remotely friendly to the person not fully on board.

34 Sebastian H 06.09.12 at 8:12 pm
“Logically, if we extended the city to the entire edge of the nation, controlled all the gates, and made sure nothing was ever even attempted if it did not meet strict social-good criteria, we would need far less “government” on the back-end, fighting that rearguard (re)action. Make the social good the sole criterion for anything new coming on the market, and we have far less reason to “regulate” after the fact.”
This is especially the part I find troubling. It sounds like nanny state times ten million.
Non-traditional or outside-of-mainstream people are going to end up in trouble here. 35 Plume 06.09.12 at 8:31 pm Sebastian H, It’s not the “nanny state” at all. Unless you think that every citizen is a nanny. Cuz the social good would be decided via democratic debate and choice, decided by all before the fact, and all power would reside in that dispersed form. One person, one vote. No one with more power than anyone else. Power resides in our collectivity, which is the cumulative desire of 310 million people. IOW, it’s not “representative” governance. It’s actual, direct democracy. No political parties — they would be outlawed. No corporations. Again, all of us own the means of production, so power over the economy can never be concentrated. It remains dispersed and obviously diverse. Zero private ownership of the means of production. But you own your own home. It is not on the grid of the Commons. But all “business” is. There would be no “ruling class” or “technocratic class” or “management class.” We’re all the managers, technocrats and rulers of our own nation, employee and employer at the same exact moment, with no one having any more weight than anyone else. No room for “nannies” unless everyone is a nanny. And they aren’t. We aren’t. We plan together for the betterment of the collective, which is us. What could be more logical? Currently, we’re collectivized as well, but solely on behalf of ownership and for the capitalist system. Ownership takes the lion’s share for itself of what that collective produces. That is incredibly irrational and illogical, as well as immoral and unethical. The collective should benefit from the work of the collective, and improve quality of life for the collective, in harmony with finite Nature. That’s really all this is. 36 Hidari 06.09.12 at 8:43 pm ‘The thumbnail sketch of how they work is that they signal scarcity and encourage alternatives to be used for the scarcer items when possible.’ Yes, but prices self-evidently do not work that way, and the only way you can pretend they do is to invoke mythical ‘laws’ of supply and demand. The classic example is the passenger pigeon, which was hunted to death in the 19th century. The salient point is that the price of the good did not go up (in a statistically significant fashion), even as it neared extermination. So whatever prices are, they don’t indicate scarcity. 37 Sebastian H 06.09.12 at 8:46 pm “It’s not the “nanny state” at all. Unless you think that every citizen is a nanny. Cuz the social good would be decided via democratic debate and choice, decided by all before the fact, and all power would reside in that dispersed form. One person, one vote. No one with more power than anyone else. Power resides in our collectivity, which is the cumulative desire of 310 million people. IOW, it’s not “representative” governance. It’s actual, direct democracy.” At this point I don’t know what to say. If you can’t see how this could be problematic for non-majority points of view I don’t know how to help. “tyranny of the majority” is googleable though… 38 Sebastian H 06.09.12 at 8:55 pm Hidari, the study you cite is the opposite of helpful to the point on pricing. Both the buffalo and passenger pigeon cases are tragedy-of-the-commons cases. The price did not go up as the herds approached extinction because no one owned them, so the cost of extinction was divorced from the harvester. When owned, the herder/harvester has price incentives which typically avoid that outcome.
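Sebastian’s claim in comment 38 — that open access divorces the cost of extinction from the harvester, while a sole owner’s price incentives typically avoid collapse — can be made concrete with a toy simulation, and Watson Ladd formalizes the same idea just below. The following Python sketch is purely an editorial illustration: the logistic growth rule, the carrying capacity, the 5% discount rate and the 90% open-access harvest rate are all assumed numbers, not anything taken from the thread or from the paper Hidari links.

# Editorial sketch (not from the thread): compare open-access harvesting
# with a sole owner who maximizes discounted profit. Every parameter here
# is a hypothetical assumption.

CAPACITY = 1000.0   # carrying capacity of the stock
GROWTH = 0.3        # intrinsic per-period growth rate
DISCOUNT = 0.05     # the owner's interest rate r
PERIODS = 50

def discounted_harvest(stock, harvest_rate):
    """Harvest a fixed fraction each period; return (discounted total, final stock)."""
    total = 0.0
    for t in range(PERIODS):
        catch = harvest_rate * stock
        total += catch / (1 + DISCOUNT) ** t
        stock -= catch
        stock += GROWTH * stock * (1 - stock / CAPACITY)  # logistic regrowth
    return total, stock

# Open access: no one owns the stock, each harvester grabs what it can,
# so the effective harvest rate is very high and the stock collapses.
open_total, open_stock = discounted_harvest(CAPACITY, 0.9)

# Sole owner: grid-search for the harvest rate that maximizes discounted profit.
owner_total, owner_stock, owner_rate = max(
    (discounted_harvest(CAPACITY, d / 100) + (d / 100,) for d in range(1, 100)),
    key=lambda result: result[0],
)

print(f"open access: profit {open_total:7.1f}, final stock {open_stock:7.1f}")
print(f"sole owner : profit {owner_total:7.1f}, final stock {owner_stock:7.1f}, "
      f"harvest rate {owner_rate:.2f}")

On these made-up numbers the owner settles on a moderate harvest rate and keeps the stock alive, while the open-access run strips it. Hidari’s objection further down is, in effect, that real multi-species fisheries need not behave like this single-stock toy.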
39 Hidari 06.09.12 at 9:31 pm ‘When owned, the herder/harvester has price incentives which typically avoid that outcome.’ The word ‘typically’ is doing a lot of work there. 40 Watson Ladd 06.09.12 at 9:44 pm Hidari, let’s do the math. Call the rate of population growth $p$ and the rate of harvesting $d$. The population grows by $p-d$ each year, and the owner of the resource gets profit $vdA$, where $A$ is the population and $v$ the price. The owner is faced with deciding how much $d$ will be each time period. Let the interest rate be $r$. I’ve made the assumptions that the owner has exclusive control of a small population, and that he is keeping it in a fairly exponential growth regime, relevant for a very damaged population. Now it should be obvious that the owner has a unique profit-maximizing $d$ in each period, the same over all periods, as he is faced with the same problem each time. The profit in period $n$ is $(1-r)^{n-1}(p-d)^{n}dA$. This is a geometric series with ratio $(1-r)(p-d)$, and has sum $\frac{dA(p-d)}{1-(1-r)(p-d)}$. Maximizing this is a trivial exercise. Unless interest rates are very high, it is clear that the prospect of larger rewards next year encourages sustainable consumption. 41 Plume 06.09.12 at 9:46 pm Sebastian H, @38, At this point I don’t know what to say. If you can’t see how this could be problematic for non-majority points of view I don’t know how to help. “tyranny of the majority” is googleable though… Hmm. Yes, it’s terribly ‘tyrannical’ to outlaw the introduction of toxic destruction into society. Ooooh the oppression that would lead to!! Why, capitalists would no longer be able to get stinking rich off of stinking up our air, land and oceans. They would no longer be able to get rich by saturating the market with useless waste, with no redeeming value, other than its ability to make a tiny minority rich. Beyond that, someone decides what gets into the market, right? There is no way to get around that, if there is a market. Someone always must decide. Why is it better in your eyes to let a very tiny fraction of the population be the decision makers, rather than the entire population? I should think you or anyone would prefer inclusiveness over exclusivity, especially when that inclusiveness does away with the problem of the luck of the birth lottery and the fact that most people don’t want to own or run businesses. How logical is it to let “the market decide” when all that means is letting the very small class of business owners decide? And not even just that small class. The real decision makers are a subset of the class of business owners. They’re the class of rich and powerful business owners. So, they should get to decide for the rest of us? That’s not “tyranny” to you? You’d rather not step on their “freedom” to pollute at will, toss workers to the curb at will, bring dangerous and wasteful products and services into society at will, all so we supposedly avoid the “tyranny of the majority”? Really? Again, I hardly think it’s “tyrannical” to make sure our environment is safe and sustainable, and that the populace is free from, and liberated from, noxious, toxic products and services whose sole reason for existence is that they help a tiny few make a ton of money. We don’t need to trade our freedom and liberty for the “freedom and liberty” of capitalists to do as they please. It’s obviously a very bad trade. 42 Plume 06.09.12 at 10:14 pm Hidari, True. If prices actually worked as Sebastian suggests, then we would have stopped eating most fish decades ago.
Instead, we are rapidly approaching extinction for many forms of sea life, and total fish stocks are down 90% since 1950. There are thousands of other species we’ve hunted to death, and hundreds of ecosystems raped, plundered and pillaged by capitalists. Prices did not act as Early Warning devices. At least not in result. Also, by constantly looking for cheaper and cheaper labor, capitalists can “distort” the pricing effect suggested by Sebastian and keep prices low for consumers. This not only hides the true costs of cheap imported goods, but also prevents any kind of warning system that might appear if things worked according to some market theories. The problem with those theories, of course, is that many variables in the market act in conflict with one another. In short, if we can create a closed system, and the nation itself controls that closed system, it can indeed lock in prices and wages. The prices then become a tracking device for circulation within that closed system. An accounting, rather than an allocation of riches. 43 Bruce Wilder 06.09.12 at 10:53 pm Plume, I don’t deprecate planning. Why do you deprecate conflict? [I previously posted this comment to the wrong thread; sorry for my error -- it was probably due to poor planning on my part.] 44 Plume 06.09.12 at 11:07 pm Bruce, okay. I must have misread. Can you flesh out the part about conflict? I know you’re being at least partially facetious. But am wondering about that kernel of truth embedded within the tomfoolery. :>) 45 John Quiggin 06.09.12 at 11:30 pm “No political parties—they would be outlawed.” One of the lessons of the Soviet experience, with a one-party state, and of many previous attempts to ban political parties, is that this can’t be done, except if a single person becomes an absolute monarch. Otherwise, the parties reappear as factions, tendencies and so on, with as much formal organization as the law permits. This was the situation in the Soviet Union (roughly) from Lenin’s death to Stalin’s consolidation of power, for example. The US Founders, most of whom didn’t like the idea of parties, nevertheless had some sensible things to say about the inevitable spirit of party and faction. For example, 46 John Quiggin 06.09.12 at 11:33 pm Of course, following Madison’s argument to its logical conclusion, an end to economic scarcity, and therefore to inequality, would remove the biggest single cause of political division. 47 Plume 06.09.12 at 11:59 pm Mr. Quiggin, @45, One of the lessons of the Soviet experience, with a one-party state, and of many previous attempts to ban political parties, is that this can’t be done, except if a single person becomes an absolute monarch. Otherwise, the parties reappear as factions, tendencies and so on, with as much formal organization as the law permits. This was the situation in the Soviet Union (roughly) from Lenin’s death to Stalin’s consolidation of power, for example. Thanks for the link to Madison. I have read him with pleasure in the past, and will look closely at the link cited. That said, I think there is a major problem with using the Soviet Union as a warning post against any alternative system — even the idea of alternatives — and I think its use has reached the point of autopilot for all too many. I do troubleshooting for a living. Internet, networks, computers, etc. And I’ve learned from experience that context is everything. Find the variables as they exist now, right now, do a process of elimination, and determine as best we can cause and effect.
Pretty much all math and science depend upon this, and most mathematicians, logicians and scientists realize context is everything. Historians do too. Different context will yield different results. Invariably. To make a long story short, ultimately, it doesn’t matter what happened in the Soviet Union, when it comes to subsequent attempts to change our system(s). It really doesn’t. Yes, we can learn a few things around the edges (mostly general things), but given the fact that Russia in 1917 and on bore almost no relationship to the world we live in now, we simply can’t draw pertinent, relevant or useful conclusions regarding what can or can’t work in our totally different world. We’re not like Russia. Not then. Not now. We’re also not at all like the America of the founders. The variables are different. Radically different. We won’t yield the same results. It’s physically, logically, mathematically impossible. Bottom line: we don’t know what will happen in our own, unique context, if we try alternatives. Yes, we can make educated guesses. But those guesses aren’t going to provide sensible launching pads if we don’t take into account our radical difference from other times and places, etc. etc. IMO, it’s a huge mistake to avoid attempting radical change, simply because other nations failed at their own, unique projects. Totally different context, variables, resources, people, methods, dreams, etc. etc. 48 gordon 06.10.12 at 12:11 am Mattski (at 29): “Better to focus on the process rather than the outcome”. To me, this raises the issue of how to measure whether we’ve arrived at (or are at least travelling towards) the best outcome. What do we measure? A lot of work has been done over the last few years on non-traditional measures of economic/social success. Should we, for example, measure our progress with the Genuine Progress Indicator? Or the Human Development Index? Or should we simply stick with GDP/capita? What about inequality? 49 Jim Harrison 06.10.12 at 12:38 am gordon— The first thing is to insist that measures of economic success are merely figures of merit, like the indices that engineers use to rate refrigerators. Their utility is provisional; and we’re never going to come up with a number that can withstand dimensional analysis because, as one must endlessly insist, economics is not the physics of money. My suggestion for such a figure of merit is GNP per capita, corrected for inflation, and multiplied by (1 minus the Gini coefficient). 50 Stephen Frug 06.10.12 at 12:53 am John Quiggin @ 46: The biggest *so far*. But Madison also says that “so strong is this propensity of mankind to fall into mutual animosities, that where no substantial occasion presents itself, the most frivolous or fanciful distinctions have been sufficient…” for factions. (Which I’ve explained in class by citing Dr Seuss’s Sneetches.) SF 51 Plume 06.10.12 at 1:01 am Is it better to use GNP or GDP? The former sticks with production/services owned by residents of that nation, as opposed to the latter, production within its boundaries, regardless of ownership. Production and services, of course, tell us nothing about health, education, the environment, cost of living, freedom to pursue individual projects, etc. etc. As in, they tell us nothing about quality of life. 52 Sebastian H 06.10.12 at 1:26 am “Hmm. Yes, it’s terribly ‘tyrannical’ to outlaw the introduction of toxic destruction into society. Ooooh the oppression that would lead to!!
Why, capitalists would no longer be able to get stinking rich off of stinking up our air, land and oceans. They would no longer be able to get rich by saturating the market with useless waste, with no redeeming value, other than its ability to make a tiny minority rich.” Gosh, I can’t think of any instance where minority cultures get oppressed by majorities who have different ideas about ethical or moral issues. But maybe gay, Jewish, or black people might have insight. Your theory works so long as you’re confident that you’ll like the rules the majority is going to come up with – so confident that you’re ok with outlawing the rest. That is a very privileged position to be talking from. I have some pretty noticeable non-majority things in my life, and there are lots of people with even more. Your vision of outlawing the minority choices doesn’t sound safe to us. Your paragraphs on avoiding the lessons of the USSR are some of the most chilling things I’ve seen recently. Yes, context is everything. And you know what context we still have? Humans. There are lots of humans who like to elevate their personal preferences and stuff it down the throats of other people. Enough of them that I suspect your ‘paradise’ would be at least as much hell for minority opinions as now, only with less outlet and potential for tolerance. Your vision might look good to certain majorities, but it should be terrifying to anyone else. 53 Plume 06.10.12 at 1:51 am Sebastian H, Right now, in our system, a tiny minority “stuffs it down our throats”. A tiny fraction’s “preferences” dominate everything else. The majority does not rule. The 1% does. Rather, the .1%. What they say goes. Are you okay with that? I’m not. I’m not okay with the fact that that tiny fraction, in order to make mega billions for themselves, crams their ideology down our throats, buys off our politicians, writes our legislation, foments wars to protect and expand their markets, pushes for a security state to protect its property, and pollutes at will. That tiny fraction of capitalists rapes the land and the oceans, and pollutes land, air and sea alike. They couldn’t care less what the majority wants. They couldn’t care less about our health, our safety, our welfare, or the sustainability of anything they’re doing. They dominate our society. As for your talk about how minorities might suffer under the egalitarian system I’m suggesting: I suffer under this one, Sebastian, as do pretty much all minorities and the majority collectively. The only minority that doesn’t suffer under the thumb of our present system is that minority class of business owners and capitalists. They don’t suffer, but they cause immense suffering for pretty much everyone else in THIS system. Wanna talk about real minorities? I’m an anti-capitalist, Buddhist, poet, artist and radical egalitarian democrat. Small “d”. Do you really think people like me have any power in this system? Is our system even slightly, remotely open to any part of the egalitarian vision? No. It shuts it down. Often in our history, violently. But in the one I dream of, EVERYONE will have equal say and equal power, and that constitution I spoke of will protect minority rights. Just as long as that minority doesn’t want to own businesses, produce crap or pollution, or despoil nature, their rights are protected. Ethnic, racial, religious, gender and sexual minorities will all have equal protection under the law and under the Constitution.
The only people who will be “discriminated” against are those who want to be predators and accumulate riches unto themselves. In my dream, they are never persecuted. They just can’t set up shop in our Egalitaria. They are always welcome otherwise, if they can abide by our system and our way of living. If not, they are free to go to a country that welcomes that kind of thing. 54 Plume 06.10.12 at 1:58 am And the only thing those capitalists will be barred from doing is capitalism. Their human rights are still protected and are paramount. And they get to enjoy the fruits of a society that protects the environment, offers free education, free health care to all, with open access to the Commons which stretches virtually everywhere. They get to enjoy our parks, schools, libraries, museums and cultural venues, at no charge. Clean water, clean air, verdant land as far as the eye can see, safe, organic food supplies, safe, renewable energy created by society for society — even capitalists can enjoy all these things. The only thing they need to give up is their capitalism. 55 Gareth Wilson 06.10.12 at 2:10 am “But in the one I dream of, EVERYONE will have equal say and equal power, and that constitution I spoke of will protect minority rights. Just as long as that minority doesn’t want to own businesses, produce crap or pollution, or despoil nature, their rights are protected. Ethnic, racial, religious, gender and sexual minorities will all have equal protection under the law and under the Constitution.” What if the majority of the population wants to discriminate against an ethnic, racial, religious, gender, or sexual minority? If someone prevents them from doing that, the members of the majority don’t have equal power to whoever’s preventing them, right? 56 Plume 06.10.12 at 2:22 am It’s strictly in the Constitution. Which was put to a vote by the total populace and approved. Unlike ours, which restricted that to a few white guys who had property. The Constitution of Egalitaria forbids discrimination based upon race, gender, ethnicity, religion, sexuality, etc. Human rights are paramount. Everyone is equal under the law, as I mentioned. The law says you can’t discriminate. It’s highly doubtful that the majority would ever want to discriminate against the minorities mentioned. But if that were the case, they couldn’t. It would be against the law. In a sense, in a paradoxical way, the majority would have already spoken against the majority. Or something to that effect. It would have done the right thing and encoded it in stone. 57 mattski 06.10.12 at 3:25 am Plume, your dream is your dream. There is a reason that boats have captains. This is pretty elementary stuff. How much experience do you have with collective decision making? … Empirically minded people might appreciate the history of communal experiments. Small groups of like-minded people have consistently tried and failed to live together for any extended period. There is a reason for that. You, otoh, are talking about VERY large groups of not at all like-minded people… 58 Plume 06.10.12 at 4:00 am Mattski, It’s the dream of a great number of leftists, going back millennia. Not mine alone, not by a long shot, though I add my own flavors to it. It follows in the footsteps and incorporates the insights of Fourier, Marx, Emerson, Thoreau and the American Transcendentalists, Emma Goldman, Mother Jones, Helen Keller, Norman Thomas and Michael Harrington — to name a few. To name some more: G. A. Cohen, David Harvey, David Graeber and Howard Zinn.
Add in Pramoedya Ananta Toer, Jose Saramago, Pablo Neruda, Naomi Klein. The work of the Situationists influenced this. The Frankfurt School influenced this. Adorno, Horkheimer, Marcuse, etc. George Scialabba’s essays provoked much thought that led in this direction. As did Dwight MacDonald and C. Wright Mills. Throw in Russell Jacoby and Christopher Lasch. Orwell and Camus were essential. In short, we are legion. The dream goes back thousands of years. The dream of equality. The dream of living in a safe, beautiful, unpolluted environment. The dream of a society without a ruling class, with no classes, with no profit, money or inequality. You are welcome to bash the idea if you desire, mock it, make fun of it. But, please don’t try to claim that this is my idea alone, or that it doesn’t have a hell of a long list of precursors. I know that makes it easier to attack (isolate and destroy), but it wouldn’t be honest. 59 Hidari 06.10.12 at 7:25 am ‘Hidari, let’s do the math.’ No, let’s not. Let’s look at the empirical evidence. Your initial approach demonstrates, in a mathematical nutshell, everything that is wrong with neo-classical economics. Incidentally, before anyone starts to reach for their politically correct cliches about ‘the tragedy of the commons’, did anyone actually bother to read the essay I linked to? To quote from the abstract: ‘The historical studies suggest that the theoretical possibility of a non-extinction equilibrium is unlikely to hold in practice. Similarly, while privatization in a single species context may appear feasible, in a multi-species context the apparent profitability of privatization may be superseded and the species driven to extinction.’ (In the real world, of course, we are very much facing a multi-species extinction event). The paper linked to is science, not religion, because, yes, he looked at the maths and did the equations, but then he looked at the empirical evidence to see whether the equations did in fact have predictive power. That’s what science is. Without this second stage, you are doing maths, or metaphysics: not science. 60 Sebastian H 06.10.12 at 8:07 am Plume, your dream is firmly at odds with how humans actually work. It might be better if that statement weren’t true, but ignoring reality isn’t going to help us to a better world with your dream any more than the Christians’ dreams are getting us anywhere. A society organized as you outlined would be a worse tyranny than what you’re living in now. Hidari, in the paper you linked, private ownership saved one of the species studied and was attempted too late for the other. That is what the facts showed. That is the science part. The “may be” part is very much not in evidence. Nearly all the extinction events you are referring to are a) tragedy of the commons – think overfishing in common waters, b) species not currently deemed suitable for economic purposes, or c) subject to insecure property rights (poaching). This is a tragedy, but has almost nothing to do with Plume’s misunderstanding about prices. The fact that the tragedy of the commons doesn’t perfectly cover all such situations is not a good argument against the fact that it excellently describes many of those situations. But even if it turns out that our understanding of economics is so crappy that we can’t figure out when such a simple concept applies, doesn’t that say bad things about Plume’s prices-on-everything-fixed-by-committee approach?
Plume fails to realize that removing all economic choices (at least at levels of technology below that of matter conversion) necessitates an attack on civil rights unless we all join a hive mind. People have different priorities. A capitalist system can allow vegans, vegetarians, fish eaters, and pork eaters all to follow their own ideas, mitigated by prices. We can argue that governments prop up certain choices, but at least they can all exist simultaneously. In Plume’s world, the majority could just decide: vegans are too much of a pain to cater to, no special resources will be used to cultivate harvests for their protein needs, we are focusing on chicken (not because it is more ecologically efficient, but purely because we like chicken better and don’t want to waste resources on alternatives). Majority rules on economic choices. Get over it. Right? 61 Hidari 06.10.12 at 8:49 am Incidentally, as the Wikipedia article I linked to showed, there is no such thing as a ‘tragedy of the commons’ in a real world situation. Even Hardin admitted that it should have been the ‘Tragedy of the Unmanaged Commons’, but since commons are, in actual fact, hardly ever unmanaged in a real world situation, in reality, it hardly ever happens. Also, to quote the article: ‘In the context of current issues, even if it is profitable to privatize the common-property native resources of forest lands or coastal regions, if larger profits are obtainable from export or other sales, then extinction is likely to occur although a single species frame-work might suggest that the marketplace would lead to preservation.’ In any case it’s all completely irrelevant because the European invaders didn’t have the legal right to privatise anything, as it was all stolen land from the indigenous people, so the point is moot. 62 mattski 06.10.12 at 9:57 am Plume, I’m not bashing the spirit of leftism. My sympathies lean unapologetically to the left. I am critical of what appears to me sloppy, amateurish and thus counterproductive thinking. Even your list of fellow utopians is highly suspect. I’m not aware that Howard Zinn or George Orwell or Naomi Klein would come close to endorsing your vision. You can put me firmly in I.F. Stone’s camp. And Paul Krugman’s. I think we’re better off trying to keep our feet on the ground. I sincerely wish you peace of mind. 63 mattski 06.10.12 at 10:27 am (If I was King I would require my subjects to read I.F. Stone’s The Trial of Socrates. Plume’s utopianism has more in common with Plato – the original Republican! – than, for example, FDR, LBJ, or most real world champions of the Left you can name.) 64 Tim Worstall 06.10.12 at 10:28 am @ 24 “The latter is crucial: private financial markets must go. The empirical evidence in Spain makes me to agree.” I’d not entirely agree with that. The part of the Spanish financial system which is entirely and totally crocked is the cajas. These were (they’ve had to be reshuffled in the last couple of years) mutually owned and very largely politically directed local and regional not-for-profits. As The Guardian said yesterday: http://www.guardian.co.uk/world/2012/jun/08/spain-savings-banks-corruption “Chairmen were often unqualified politicians, with academic investigators finding a close relationship between the size of a bank’s bad loan book and the inexperience, lack of qualifications and degree of politicisation of the chairman.” There most certainly are problems with market allocation of capital, debt, investment and so on.
It’s not entirely obvious that replacing those problems with allocation by politics is a good solution. 65 Antoni Jaume 06.10.12 at 11:16 am Tim Worstall 06.10.12 at 10:28 am «[...] I’d not entirely agree with that. The part of the Spanish financial system which is entirely and totally crocked is the cajas. These were (they’ve had to be reshuffled in the last couple of years) mutually owned and very largely politically directed local and regional not-for-profits.» While there is some truth in that, what sank most of them was the expansion of lending that accompanied the building boom. Mortgages were their main line of business; however, with the boom, local capital was no longer stored there, so they had to go abroad for bonds, as they could not issue shares. When the bubble burst they no longer had valid collateral to make their payments. One reason, I suspect, is that the limit on mortgages was removed, so people could mortgage for 100%. 66 Nick 06.10.12 at 1:59 pm ‘But in the one I dream of, EVERYONE will have equal say and equal power, and that constitution I spoke of will protect minority rights. Just as long as that minority doesn’t want to own businesses, produce crap or pollution, or despoil nature, their rights are protected. Ethnic, racial, religious, gender and sexual minorities will all have equal protection under the law and under the Constitution. The only people who will be “discriminated” against are those who want to be predators and accumulate riches unto themselves.’ Ok, could we consider an example, pornography? There exist allegedly liberal theorists who have called porn ‘pollution’ of various kinds, and a majority vote would probably restrict various kinds of it in a great many states. At the same time, it has made a huge amount of money for some people, so it is definitely something that comes under your economic regulation concerns. But on the other hand, it also plays a crucial role in forming and supporting some minority sexual orientations and identities. So how does your system deal with these different interests? Are there substantive defences for sexual minorities, or is it a case of the law being ‘equal’ because it will ban both straights and gays from watching, buying or selling gay porn? 67 geo 06.10.12 at 3:45 pm Sebastian: your dream is firmly at odds with how humans actually work. It might be better if that … weren’t true, but ignoring reality isn’t going to help us to a better world So what will help us to a better world, in which humans don’t work that way? 68 Plume 06.10.12 at 6:08 pm Sebastian H, Many people will, like you, fall back on the argument from Nature. Trouble with that is, the argument from Nature supports me, not you. Human beings lived communally for 200,000 years, and as recently as the 19th century in North America, Native Americans lived that way. They shared pretty much everything. It’s natural for us to do so. It’s natural for us to work together for the betterment of the family, the neighborhood, the tribe, cooperatively. We evolved in that way, knowing we needed each other to survive and then building from there. True, we do have alphas and predators in our midst who naturally seek to rule and crush the spirit of the gift, the spirit of the cooperative, the spirit of small “c” communism. And, of course, they’re not content with actually ruining horizontal systems. They want to make sure everyone believes those horizontal systems are, as you say, “unnatural.” They aren’t.
It’s the vertical systems which are unnatural — for the vast majority of us. The vast majority of us do not want to rule over others. We want to get along and live in harmony and cooperate with our fellow neighbors, etc. etc. David Graeber correctly notes that capitalism is the worst possible system to cram down the throats of people who are natural communists. Think of what families do, for instance. Parents bring home food and share it out equally. They strive to make sure their kids all have equal shares of clothing, gifts for their birthdays and Christmas, school choice, chores. When they invite friends over, they share their food and they don’t charge prices for it. They don’t ration their entertainment according to who can afford it. At work, throughout the day, we work cooperatively with our coworkers. We give of ourselves, our knowledge base, our skills, without asking for money in return. Beneath the vertical structure of the corporation, which is anti-democratic to the core, lies a kind of natural, unconscious communism, wherein workers share knowledge and skills with one another without expecting anything monetary in return. A “thank you” always helps. But no money is exchanged and it’s horizontal. As for economic choices: not sure how many more ways I can explain this. One of the most important things about Egalitaria would be ecological protection and sustainability. Our choices would all be under that umbrella. The environment is central. Ecology, Human Rights and Social Justice are the holy trinity for Egalitaria. Making sure that we live in a healthy, sustainable way, in harmony with nature, means vegans are going to be just fine. Making all choices with that harmony in mind will lead to great innovations, too. How can we grow the widest range of crops in a sustainable fashion? How can we have the widest range of foods in a sustainable fashion? How can we do all of this and treat animals in a humane, compassionate manner? How do we make sure our water supply is always safe, clean, etc. etc.? In short, I think you’re trying waaay too hard to find rather absurd scenarios regarding this utopian idea. I’m guessing you’re doing so because you think that’s enough to shoot it all down. Your imaginary situations, etc. etc. Well, under our current system, we already know we’re going to have to ration away choice in the not so distant future. Our unbridled desire to give “economic choices” to a small percentage of the population, while the rest of it starves, is both immoral and perverse. Our reckless endangerment of the environment will eventually make it so even that small percentage of rich folks won’t be able to have the “economic choices” you see as vital. There simply won’t be enough clean water, safe food, arable land or unpolluted ecosystems to do that anymore. 69 Sebastian h 06.10.12 at 6:28 pm Plume, Your history is horribly wrong and it is leading you in horrible directions. But how about this: if you really believe that communism is so ‘natural’, run a small city, say 250,000 people, along those lines. Demonstrate that it can work without crushing oppression of minority views. Show that it can avoid crippling shortages when you fix prices by vote. It shouldn’t be too hard; you can live off the capitalist innovation machine’s discoveries till now. The only time I’ve seen evidence of it working is with strong religious commitment (i.e. Amish or some Jewish communities), but if you believe it, don’t ask us for a complete revolution on everything without a demonstration.
You’re asking for more trust that your theories are right than even hyper-libertarians do. Either side of that utopian game should be able to show us a mid-scale demonstration city. Your theory is like trying to engineer a machine to operate on faster-than-light principles. *Maybe* it isn’t actually impossible. But in practical engineering I’ll trust machines operating on better understood principles until I see a demonstration of the other machines. The last three or four times your machine was attempted on country-sized scales it caused the murder of tens of millions. Let’s start smaller this time, ok? 70 Plume 06.10.12 at 6:28 pm Mattski, Yes, they would at least come close, and they all appear to be well to your left, judging from your belief that FDR and LBJ are your ideal left-wing champions. This may be why we’re not communicating very well, and why you insist on continuing your insults. Beyond that . . . Yes, most of the people I list would love the idea of free lifetime education for all. Free access to health care for all. Free access to museums, libraries, cultural venues for all. Free access to parks and beaches and recreation areas for all, and the omnipresence of all the above. They would love the idea of a vast Commons, which never discriminates against anyone based upon ability to pay. They would love the idea of no one ever having to worry about paying for cancer treatments, or dialysis, or long-term hospital stays which can bankrupt people in our system. I think they would all dig an egalitarian New Atlantis, where ecological sustainability takes precedence, and capitalism is no more. Pretty much everyone I listed is an anti-capitalist, though there are a few exceptions. That’s the starting point for this utopia. That, along with Ecology and Human Rights. Orwell and Camus, for example, were champions for Human Rights, though they both came along a bit before environmentalist thought really kicked in. Zinn was a champion for both ecology and human rights. Naomi Klein? Definitely. She’s a Democratic Socialist, which is close to where I sit, if I have to choose a formal designation or some sort of party affiliation . . . . Though my utopia goes further, obviously. I think it would require Democratic Socialism to begin with, as its first stage. As Marx correctly noted, it’s not wise to leap over transitional stages. I wish you better luck with “right speech.” 71 Sebastian h 06.10.12 at 6:33 pm Geo, I’m not sure trying to get to a better world where humans don’t work that way is a good goal, at least in the immediate future. It is like hoping for matter conversion or FTL travel. Far better to aim for a future where greedy humans don’t do as much damage to each other and the world. Even that won’t be easy, but it is likely much more possible than positing a huge change in basic human operations. (And it is certainly better than Plume’s method of just assuming away the problem.) 72 Sebastian h 06.10.12 at 6:38 pm Plume, in your utopia, what happens to people who don’t agree with the majority, or people who try to trade pot plants or cocaine amongst themselves? (Oops, I mean outlawed capitalist products.) 73 Plume 06.10.12 at 6:42 pm Sebastian H, The last three or four times your machine was attempted on country-sized scales it caused the murder of tens of millions. Let’s start smaller this time, ok? Oh, come on. We’ve been through this already. It’s never been tried. No one has ever done anything remotely like what I’ve suggested. Did the Soviet system put everything up for a vote?
Did they even attempt direct democracy? Obviously not. Were they concerned with ecological sustainability or human rights? Obviously not. Red Plenty and other works make it more than clear that the Soviet Union strove with all its might to outdo Western Capitalist nations, especially the U.S. It was, in essence, “managed capitalism” and had so little in common with the word “communism” it should have been sued for defamation. Again, it never even instituted real socialism, which is the precursor for communism. The means of production were never owned by the people. There was never one vast Commons. All of that remained in the hands of the ruling party/dictatorship, which simply replaced the ruling class of the Czar. If historical precedent truly is important to you, then you need to at least get your history right. The Soviet Union never implemented socialism, much less communism. There was never a shred of democracy there. There was no emphasis on ecology or human rights. Again, Egalitaria’s foundation is direct, participatory democracy. Its holy trinity: Ecology, Human Rights and Social Justice. The Soviet dictatorship failed on all of those grounds. 74 Plume 06.10.12 at 6:47 pm Nick, Haven’t given that subject any thought, when it comes to Egalitaria. But as this dream is a democratic dream, the people would have to decide. Again, there would be constitutional protections, and personal autonomy is paramount — as long as it does not cause harm to others or the environment, etc. So there is likely a lot of wiggle room for that. However, then you have to factor in the way the workers in that industry are treated, and we all have heard the horror stories. Rape, kidnapping, torture, etc. That would obviously mean human rights abuses, which would not be tolerated. 75 Sebastian h 06.10.12 at 6:49 pm I think you’re wrong about it never being tried. But even assuming you’re right, can we try it on a couple hundred thousand people before we risk it at bigger levels? If I’m wrong about the USSR analogy, no harm done. You get your proof of concept and it will probably convince people. But if you’re wrong and the USSR is a good analogy, we risk a reprise. 76 Plume 06.10.12 at 7:05 pm Sebastian H, Plume, in your utopia, what happens to people who don’t agree with the majority, or people who try to trade pot plants or cocaine amongst themselves? (Oops, I mean outlawed capitalist products.) People who don’t agree with the majority are free to participate in general assemblies, make their case, persuade their fellow citizens that things should change. They are unconstrained in this regard. As virtually everything is the Commons, outside one’s own home, they wouldn’t have to worry about being locked up like Occupy protestors. Police wouldn’t have the right to kick them off the Commons. They can protest and petition and rally people to their cause until the cows come home. This, in fact, would be encouraged. Our education system would encourage civic participation and free thinking. It would encourage critical thinking and help develop critical thinking skills. As for pot? It would be legal. It’s natural. It has immense health benefits. Trading it is logical in this system. In fact, it would be available over the counter at village outlets. Hand the clerk your debit card, they ring you up, and you walk out of the store with your pot. Strange that you would limit drugs to merely a “capitalist product”, as if they wouldn’t have medicinal or utilitarian uses beyond a market economy. They do.
Those that bring health and/or happiness to individual citizens would be on the shelves of your local village store. 77 Plume 06.10.12 at 7:14 pm Sebastian H, @75, That’s more than fair. Actually, I’ve always thought it best to do this on an island first. Call it Egalitaria. Best-case scenario: find a deserted island and ask for volunteers. As in, the people who left the mainland to live on the island would already know what they’re getting into. It would be their choice. They’d already be “on board”, so to speak, from the start. Obviously, things would develop, evolve, change with the times from that point on. But the initial establishment of the constitution, horizontal structure, organic farming, locking down sustainability, etc. . . . would be far easier if starting from scratch. People would be free to leave the island if they found they preferred other systems. No charge. No hassle. No problem. Have also thought out how it might interact with other systems/nations. Can flesh that out later. 78 Emily 06.10.12 at 7:21 pm Re: pricing and scarcity – my understanding is that markets are considered to be reasonably good at pricing relative scarcity but not absolute scarcity. I’ve been reading Ahmed Hussen’s Principles of Environmental Economics (tho the maths is rather beyond me), and it seems to me that throughout the freemarket-socialism trajectory you still retain problems of the potential conflict between efficient (present-day) use of resources and resources available for the use of “future generations”, as well as other environmental or long-term issues – such as the inability of “the market” to price ecosystem health and so on… 79 Robert 06.10.12 at 7:50 pm Actually, the mathematics of price does not imply what capitalist apologists say it does. This has been known for at least half a century. 80 Robert 06.10.12 at 7:51 pm Should be “the mathematics of price theory”. 81 Data Tutashkhia 06.10.12 at 7:54 pm Of course pricing may not work well even without any scarcity, if the supply chain is oligopolistic. Or even if it’s not oligopolistic, but suppliers are involved in price-fixing, and they usually are. And if they are not involved in price-fixing, then, chances are, most of them are going to fail quickly, and then the supply chain will become oligopolistic. Somebody has to monitor and take care of all that stuff anyway. 82 Plume 06.10.12 at 8:54 pm Sebastian H, A quick thought. Would like your response. The pot thing got me to thinking. In capitalism, the actual use or benefit of the product or service is generally secondary, at best. The purpose is not to improve quality of life or protect the environment; it’s to make money for the capitalist/shareholder. The thing sold is beside the point. It can be virtually anything. It’s merely a vehicle to gain riches — which, to me, is one of the biggest problems with capitalism in the first place. It’s like a salesperson. They don’t really care what they sell, just as long as they sell it, make their commission, etc. I did sales when I was younger, and routinely worked with people who had previously sold an incredible array of goods and services. Mostly, there was no logical connection between one sales job and the next. They were that different. Again, in the capitalist system, the object or the service is the vehicle, the means to an end, not the end itself. Its function, utility and overall impact are just not of much concern to the seller, owner, shareholder, etc. In our utopian society, OTOH, we’ve changed the way we look at this.
The object or service itself is what counts: how it impacts the consumer, society and the planet. What matters is the purpose and effect of the thing in question, not that someone can use it as a vehicle to obtain more of something else: money. We’ve stripped away everything but its utility and its impact on the individual who receives it, the society and the ecosystem. As in, the object or service syncs up with itself. There is no delay, no spatial or temporal gap between the thing and its purpose. This system is end-user-centric, worker-centric, society- and ecology-centric . . . as opposed to seller/owner/shareholder-centric. 83 geo 06.10.12 at 9:02 pm Sebastian @71: Far better to aim for a future where greedy humans don’t do as much damage to each other and the world Good, let’s aim for that. Any suggestions? 84 Bruce Wilder 06.10.12 at 10:23 pm In some ways, I think there’s much to say for a collapse of civilization. The few million people who would be left to live in scattered villages ringing the Arctic Circle might well be much happier than the 9 billion fouling their many-chambered nest. 85 mattski 06.11.12 at 2:15 am Plume, I don’t think you’re doing the left any favors. I really don’t. 86 Sebastian 06.11.12 at 4:01 am “In capitalism, the actual use or benefit of the product or service is generally secondary, at best.” The use or benefit of the product or service is generally primary; the problem is that STATUS can be something that people seek in products, which you probably don’t like to build into price (and wealth can be a form of status as well). One of your problems is that your system seems to assume that status seeking won’t exist, or that it will be wholly beneficial in your system. A bunch of the nasty stuff that comes with capitalism is expressed through status seeking. In answer to geo, I suspect that our best bet is to try to channel status seeking productively. Capitalism tries to do that, though obviously imperfectly. Most communist theories either ignore status seeking, or pretend that it can *easily* be turned to the ‘good of the community’. Again, most successful versions of well-channeled status seeking on large scales have tended to be in religious contexts (which I know isn’t exactly comforting). 87 Plume 06.11.12 at 5:18 am Mattski, Yes, yes. It’s a terrible thing to promote social justice, equality under the law, full, participatory democracy, ecological health and sustainability. It’s just an awful thing to promote equal access to education, health care, cultural venues and all the fruits of civilization, regardless of ability to pay. Just rotten stuff, that. Surely any promotion of an egalitarian system, which radically increases health and education levels across the board, while providing safe, environmentally friendly food and water systems for everyone, would be anathema to the left. And if you really believe that, you know nothing whatsoever about the left. Which I suspected after reading your responses, especially the one where you mention FDR and LBJ. Good liberals, of course. Good, mainstream liberals. Two of our best. But liberalism is merely the tip of the left-wing iceberg, and in 2012, center-left solutions are weak tea. We need dramatically stronger medicine to cure what ails us and prevent an ecological train wreck. Quick analogy: Capitalism is like cigarettes. It causes cancer and everyone on the left knows it. Liberals know it too, but they think we can tame it with filters. Put a filter on the cigarette, and that’s enough.
Conservatives and propertarians, OTOH, think capitalism needs to be unleashed even further, so they’re dead set against even filters. The conservative is against filters because he thinks it’s unmanly. The propertarian is against filters because he thinks it’s a conspiracy by the Fed to rob us of our freedoms. Leftists are different. They’ve studied and analyzed and observed how totally ineffectual those filters are. They see the great gaps of inequality in health outcomes, and they know that filters are less than bandaids. Unlike their liberal friends, they’ve moved beyond attachment to the failed system of cigarettes, filtered or unfiltered, and understand that it’s not worth saving. Again. For the thousandth time. It’s not worth bailing out, coddling, pampering, pleading with, begging it to be nice. Stronger medicine is needed, and the real left understands this. . . . . In short, liberals want to play nicey nicey with the cause of cancer. In effect, they’re sleeping with the enemy even as they realize it is the enemy. As a leftist, I disagree. We’re in a war with plutocracy and oligarchy, and capitalism is the number one generator of plutocrats and oligarchs. Time to choose sides, Mattski. The Dalai Lama has. He’s a Buddhist Marxist, which comes close to my own loosely defined camp. 88 Gareth Wilson 06.11.12 at 5:40 am The Dalai Lama believes he’s the reincarnation of a man who died in 1933, whose embalmed body turned its head toward where his new incarnation was living. He believes he was able to pick items belonging to his previous incarnation using memories of his past life, and that he can control whether he’s reincarnated. I think his economic beliefs have just as much relevance to modern society as his spiritual beliefs. 89 Hidari 06.11.12 at 6:20 am # 79.80 Robert, what does modern mathematics have to say about price theory? Genuine question. Before I got distracted into talking about the passenger pigeon, I was merely making the point that, whatever prices are, they can’t be what apologists for capitalism claim they are. Incidentally, the top post on your blog is dated December 31st, 2014, so I am hoping with your future wisdom you can give us an insight into how the Euro crisis ends as well. 90 Plume 06.11.12 at 6:29 am Gareth, So, of all the things you chose to respond to in my post, it’s the part about the Dalai Lama? Do you think that somehow reduces the credibility of what came before? Do you think you’re making a good point by dismissing Marxist thought based upon your bias against Buddhist beliefs? Sheeesh. Mattski and I are Buddhists. I mentioned the Dalai Lama to show wider connections, primarily because Mattski has tried to paint my thought as isolated from left-wing contexts. It is, of course, centrally located in those contexts, in the mainstream of Utopian Socialism, Marxism, Marxian economic theory, Left-Anarchism, Democratic Socialism and the like. It’s also centrally located in the Civil Rights and Human Rights movements, and walks in their shoes. The eradication of poverty. The eradication of discrimination based upon gender, sexuality, ethnic, racial or religious grounds. The eradication of inequality. All of this tied together. Wrapped up in environmentalism. And it’s doable. The funding problem has been solved. 91 Robert 06.11.12 at 8:21 am Hidari, I date the first post ahead as a hack to keep it first. Even abstracting from imperfections, market prices are not scarcity indices, and, for example, Marshall’s principle of substitution cannot be sustained.
It is not the case, in theory, that if there were less of some input, its price and the price of commodities in which it was used relatively more intensively would be higher, and consumers and producers would be encouraged to substitute other goods for those commodities. I like to make these points using Sraffa’s Production of Commodities by Means of Commodities, but one can make analogous arguments using general equilibrium theory. 92 Roger Gathman 06.11.12 at 8:37 am Since we are approaching a historic moment, when the capitalist system in the developed countries, for the first time since the thirties, is creating a less prosperous field for the vast majority of the next generation than existed for the past generation, we are obviously going to be re-fighting the economic system question. This is why the Red Plenty symposium has been cool! Central planning seems one entry into the question of what to do now, since understanding who benefits from planning, who makes the plans, and to what end is crucial in a period in which trillions are being given (or loaned at rates that amount to gifts) to the elite, while trillions are being sapped away from the assets and incomes of the majority (again, in the developed countries). The combination of planning and private enterprise that forms the financial system – with its central banks, commercial banks, investment banks, and parabanks – went bankrupt in 2008 and has become a massive global drain since. This is the cohort that is “owed”, and the majority is supposed to adjust itself to a system devoted mainly to paying them. Of course, the system is insane. Why a small group in a relatively unimportant sector of the economy should be owed to the point where the majority is sent into debt enslavement is a question that isn’t asked, because as soon as it is, one sees how nutty the system is. Surely here, the Hayekian and Keynesian compromises you are talking about, John, have failed. We have really no need of a commercial banking system that produces, expensively, a function that the state could take over, given information technologies, and do inexpensively. It wasn’t Lenin but Theodore Roosevelt who, in his 1908 State of the Union address, suggested setting up a bank in post offices and pretty much letting the state become the banker to the middle classes. It seems evident that the ability to inject money into that class would have ended the Great Recession. Instead, in a combination of the worst of central planning and the worst of Hayekian “spontaneity”, it was injected from the top down in the hope that low interest rates for the megabanks would mean low interest rates for the average consumer. Well, corporate borrowing went up, but it didn’t go into R&D, or hiring. Loans for the 80 percent went down. So I’d say banking is a good example of where compromise central planning could well be supplanted by full-throated planning by the state, at least in part, with a gain for all. Investment banking, which lends to real companies, could well continue to be private, but casino banking, the whole system of hedge funds and non-vanilla security instruments and options (which have grown up in the post-Bretton Woods culture), should simply be merged with casinos and taken out of the financial system entirely. This would allow the gamblers to gamble to their heart’s delight without affecting anything outside their sphere. If pension funds were hooked up with the poker prowess of some superbowl poker player, I think it would violate the rules in place.
But pension funds are hooked up with other, more dangerous poker players. So perhaps in a novel parallel to Red Plenty, we could contemplate one about “White” Plenty, charting the downfall of a system designed to reward predators while being touted as a necessary adjunct for the life of prey. And how that worked out. 93 Katherine 06.11.12 at 9:55 am As I tried to argue, I don’t think social democracy is just a mixture of central planning and free markets. I’d love to see an expansion of that, John. Or if someone can point me to a past post that does the job, I’d love to see that instead. 94 Plume 06.11.12 at 4:20 pm Roger, @92 So perhaps in a novel parallel to Red Plenty, we could contemplate one about “White” Plenty, charting the downfall of a system designed to reward predators while being touted as a necessary adjunct for the life of prey. And how that worked out. Well said. Predators have always tried to rationalize their actions by snowing the people they screw over. If they aren’t full-on sociopaths, who couldn’t care enough to attempt the snow jobs, they’ll try to justify their riches and the fact that others starve. The effort put into justification often seems to exceed the effort taken to extract the wealth in the first place. But I think in today’s America, we’ve reached a critical stage. We’ve been fed the lie that “capitalism is natural” for a long time. But Americans generally didn’t buy that until, oh, say, Reagan. The onslaught of advertising, propaganda in schools, and the virtual takeover of the media have done the trick. Even though the “natural” part only works for the predator class, not the rest of us, it has permeated our consciousness. But the real killer app for the predator class was when they enlisted the help of right-wing libertarians (propertarians), who have done great work for the predator class by linking, if not fusing, the concepts of “freedom” and “liberty” with the ability of predators to do as they please. Now, for conservatives, propertarians, centrists and even some liberals, the suggestion that we curb the activities of predators draws cries of “you’re taking away our freedoms!!!” The snow job is pretty much complete. They’ve won. They’ve won both on and off the field of battle, in the boardroom, in the classroom, in the backrooms of Congress, in the churches, and in our homes. If capitalism is “natural” and if what predators do means “freedom and liberty”, then we are locked in a world of white plenty for the 1% . . . with its growing separation and segregation from the rest of the populace. We won’t be able to reverse this process until we at least wake up enough to see that their freedom and liberty mean our enchainment, and their actions are not “natural” for the species as a whole. Natural for the 1%. But not the 99%. Actually, it’s probably more like .1%. 95 The Raven 06.11.12 at 5:00 pm I’m bringing this bit here because I think it’s relevant to the discussion of planning and because it is of some current import. We have here a new publication from Dr. Margaret R. Taylor of Berkeley, “Innovation under cap-and-trade programs.” Here’s part of a summary: “Policymakers rarely see with perfect foresight what the appropriate emissions targets are to protect the public health and environment—the history is that these targets usually need to get stricter,” said Taylor. “Yet policymakers also seldom set targets they don’t have evidence that industry can meet.
This is where R&D that can lead to the development of innovative technologies over the longer term is essential.” Link. The worst of both worlds: arbitrage and politicized goals. Policies incentivizing the private sector to reach its innovative potential in “clean” technologies are likely to play a key role in achieving climate stabilization. This article explores the relationship between innovation and cap-and-trade programs (CTPs)—the world’s most prominent climate policy instrument—through empirical evidence drawn from successful CTPs for sulfur dioxide and nitrogen oxide control. The article shows that before trading began for these CTPs, analysts overestimated the value of allowances in a pattern suggestive of the frequent a priori overestimation of the compliance costs of regulation. When lower-than-expected allowance prices were observed, in part because of the unexpected range of abatement approaches used in the lead-up to trading, emissions sources chose to bank allowances in significant numbers and reassess abatement approaches going forward. In addition, commercially oriented inventive activity declined for emissions-reducing technologies with a wide range of costs and technical characteristics, dropping from peaks before the establishment of CTPs to nadirs a few years into trading. This finding is consistent with innovators deciding during trading that their research and development investments should be reduced, based on assessments of future market conditions under the relevant CTPs. The article concludes with a discussion of the results and their implications for innovation and climate policy.

96 Hidari 06.11.12 at 5:30 pm

‘It is not the case, in theory, that if there were less of some input, its price and the price of commodities in which it was used relatively more intensively would be higher’. Yes, I suspected as much. Interesting to know that theory predicts it too. And yet it’s genuinely astounding how many people still believe that there is some ‘law’ of ‘supply and demand’ even though even the relevant Wikipedia article makes clear it is simply an idealisation with little or no real-world relevance.

97 ezra abrams 06.11.12 at 10:32 pm

In theory, theory and practice are the same; in practice, they ain’t (anon). As a scientist who works in the biotech industry, and who learns, painfully, every day how little he knows about economics, I’m skeptical of the claim that central planning won’t work; some thoughts that make me skeptical: There is a huge, enormous amount of waste in industry; most of it is hidden – even in public companies, we rarely hear of failures. When we do hear of failures, they are noticeable, as HP’s recent decision to can its tablet. Since you have a lot of waste, a central system has a lot of slack; if you look at the McMaster-Carr catalog (good PhD thesis for someone: the criticality of McMaster-Carr to US innovation), they might have several hundred types of screws. And you do need a lot of different types of screws. However, screws don’t cost much, so a central planner could simply have the screw factory make 10X more than we need; the cost to the economy of the “waste” (e.g., sending all the workers home for 9 months out of the year) is simply not that high. However, in many other instances, diversity of parts is driven by biz needs rather than engineering; e.g., if company X won’t sell part A to company Y, then Y has to go out and make a new version of part A. This is actually pretty significant.
Humans require diversity, and things to show off their status, and so forth; however, much of our technology is “only” conspicuous consumption; if, psychologically, power and status were conferred by other things, then some huge fraction of the stuff we make is unnecessary, and much of this is driven purely by commercial needs. If you have shopped for a laptop recently, you know that there are, within a single company like Dell or HP, 1,000s of variants, and most of the variation is related solely – solely – to selling; if price was not that important, you could reduce 90% of the complexity in inventory and no one would notice. On the other hand, lack of central planning generates a huge number of jobs, so maybe the real problem is that without all those sales and marketing people, we don’t have enough jobs to go around. As I’m sure many CT readers know, the US congress is debating “the farm bill”, with all the ritualistic denunciations of pork, and all the ritualistic incantations about the family farm. Has anyone else noticed that the most socialized sector of american society is the farm sector, and that the sector that produces the most for the least cost is…the farm sector?

98 Sebastian 06.11.12 at 11:48 pm

“Has anyone else noticed that the most socialized sector of american society is the farm sector, and that the sector that produces the most for the least cost is…the farm sector?” No, because the farm sector isn’t the most socialized; it is one of the sectors that has benefited the most from mechanization and the oil economy. In terms of staple crops it is low-labor, benefits from low transport costs and benefits from easy access to petroleum-based fertilizer. “There is a huge, enormous amount of waste in industry; most of it is hidden – even in public companies, we rarely hear of failures. When we do hear of failures, they are noticeable, as HP’s recent decision to can its tablet.” The problem is ‘compared to what’? There is a huge amount of waste in human systems, public, private, government or other. We’ve typically found that ignoring the price signal creates more waste rather than less. Geo et al, above, I think you’re getting sucked way into the edge cases. Yes, you need more than Newton’s laws and a basic theory of gravitation to get a precise description of interactions where the things have tiny mass and are really really close together. But guess what? Those four things can get you a great description for most of the large-scale movements in the universe. You are also seemingly confusing scarcity with usefulness. In order to have a high price something must appear useful and rare. Useful non-rare things are cheap. Useless rare things are often cheap too. Prices signal scarcity of things that people want because they signal intensity of the want, and the ease or lack of ease of substitution. In any case, I’d be super careful about equating “extracts huge amounts from the government” with “socialized”. I’m on the right, so maybe that really is the same thing, but if I were on the left I wouldn’t be happy using it that way. (I.e. I don’t think that having extracted huge bailouts from the government without effective oversight controls makes the banks ‘socialized’ in the way you’re saying.)

99 John Quiggin 06.12.12 at 2:43 am

@Katherine I realise I probably haven’t made the argument in a single place, and I can’t point to anyone else saying it the way I want to. I’ll put it on my ToDo list.
100 Tim Worstall 06.12.12 at 11:37 am

“And yet it’s genuinely astounding how many people still believe that there is some ‘law’ of ‘supply and demand’ even though even the relevant Wikipedia article makes clear it is simply an idealisation with little or no real-world relevance.” Is that so? Every commodities trader in the world is lined up outside to interview you. They’d just love to know how bad harvests, export bans, murrains and plagues of locusts do not increase food prices, how rising wealth doesn’t change diets and thus food prices, how China building in only 20 years the infrastructure for damn near an entire continent hasn’t pushed up iron ore or copper prices….and so on.

101 Adrian Kelleher 06.12.12 at 12:05 pm

Well, I think you put your finger right on it with “how rising wealth doesn’t change diets and thus food prices”. What about, say, spam? Wealth has risen yet demand for spam has unaccountably collapsed. It turns out the spam market does not possess the eternal or unalterable qualities of physical law, and there is no economic method to satisfactorily account for this because it is a cultural change. In fact trying to make sense of it using economics is a fools errand; it just can’t be done. So what use was all that work the titans of spam economics put into their discipline back in the 1930s and 40s? There is continuity in referral — “spam market” — but it is a pure illusion. The thing referred to possesses no definite or permanent form, so referring to a “spam market” is simply an abuse of language. It’s a wonderful convenience, but it breaks a fundamental protocol of human communication: that things referred to actually exist.

102 Adrian Kelleher 06.12.12 at 12:07 pm

The previous comment was @Tim Worstall. Also, corrections: “fool’s errand” and “that things referred to should actually exist”.

103 Sebastian h 06.12.12 at 2:55 pm

Adrian, you think preferences don’t lead to supply/demand issues?

104 Adrian Kelleher 06.12.12 at 3:47 pm

Of course they do. But there is no “law” of supply and demand. Some prices are sticky, others not; others still will be sticky now and become unstuck later. Cultural shifts and unexpected changes in circumstance will result in rapid and unforeseen market shifts. The right makes claims for markets that blend into what is often rightly characterised by detractors as magical thinking, a tendency that stretches all the way up to the Greenspans and the Trichets. They dress their simple prejudices up in technical jargon, but this apparent sophistication is really the empty mysticism of a neolithic shaman; rather than illuminating the truth, it conscripts the forms and rituals of science to the service of blind faith. Like Hayek in Keynes’ famous quip, Trichet and Greenspan were resolute logicians who started with a mistake and ended up in bedlam. But they were the herd leaders; they possessed the authority to convince the orthodox that they knew what they were doing. The orthodox then engaged in mutual congratulation to the exclusion of all other points of view. Note, btw, that these are the same people whose elaborate hindsight compensates for their abject failures in foresight. They are the rulers of the world to this day.

105 Plume 06.12.12 at 4:37 pm

Would you include gas consumption in your list? The recession knocked out a great deal of demand. People, at least in America, were actually cutting back on their driving. They used less gas, yet prices kept going up.
The recession had worldwide reach, so this should have impacted pretty much everyone, but oil per barrel jumped dramatically, and gas prices rose. I’ve also noticed over time that the price at the pump doesn’t always reflect rate changes per barrel. Which adds another twist to the supposed supply and demand “law”. Too many contradictions throughout the system, too many conflicts, for there to be a law. And whenever anything is a virtual monopoly, as oil is, supply and demand rules go out the window. They can charge what they want, because people need it and the supply is radically controlled. Of course, they have their own constraints and people to answer to. But when it comes to gas prices, there appears to be a major disconnect between consumer demand and producer supply. Throttled on purpose or not. Consumer electronics also seems problematic for those who say there is a rule. Prices generally keep dropping, regardless of demand. Most of that appears to be because of cheaper and cheaper labor, working in virtual slave-settings, round the clock, often with suicides as the form of escape. We in the developed world get to benefit from those who work on the verge of suicide. And economists write “laws” about how it all supposedly functions, generally without mentioning anything about the real costs.

106 Hidari 06.12.12 at 5:34 pm

‘Is that so? Every commodities trader in the world is lined up outside to interview you. They’d just love to know how bad harvests, export bans, murrains and plagues of locusts do not increase food prices, how rising wealth doesn’t change diets and thus food prices, how China building in only 20 years the infrastructure for damn near an entire continent hasn’t pushed up iron ore or copper prices….and so on.’ OK, a slight primer about the nature of scientific law. Perhaps when I said ‘there is no law of supply and demand’, I should have made clear the missing word: ‘scientific’.* My bad. In any case, there is no such thing as the scientific law of demand. It does not exist, in precisely the same (or the opposite) way that Boyle’s Law or The First Law of Thermodynamics clearly do exist. This is because (as Nancy Cartwright – http://en.wikipedia.org/wiki/Nancy_Cartwright_(philosopher) – has spent her career arguing), like all scientific laws, the ‘law’ of supply and demand only functions under ‘ceteris paribus’ conditions. But whereas Boyle’s Law or laws pertaining to thermodynamics produce pretty good predictions even when ceteris paribus conditions do not hold, the ‘law’ of supply and demand doesn’t. In other words, in ‘real world’ situations, there are simply too many variables, and the law of supply and demand doesn’t have the deterministic character we expect from a scientific law. Therefore it’s not one. QED. To put it in layman’s terms: you can use the laws of physics to make pretty accurate predictions (at the sub-atomic level, extremely accurate predictions); the laws of economics…..not so much. And this is important because prediction is not, as some economists seem to think, the icing on the cake of a good ‘law’. It is the whole purpose of creating the law in the first place. Without prediction all you have is a beautiful gleaming car, great bodywork, wonderful leather upholstery, but no engine. Now, economists (and, doubtless, commodity traders) try and weasel out of this. But they fail to see the point of Niels Bohr’s little joke: ‘Prediction is very hard, especially of the future.’
Sure, these commodity traders and economists are very good at predicting the past. They are very good at seeing the way prices go up or down, and then producing stories about how this ‘must have’ been because of this, that, and the other ‘supply and demand’ issue. So they are good at predicting the past. But the future? Not so much. *There is of course a heuristic of supply and demand, which has the same metaphysical status as ‘red sky at night, shepherd’s delight’ or ‘a stitch in time saves nine’. Which is not quite the same as saying it’s false.

107 ajay 06.12.12 at 5:42 pm

The question “can economists reliably pick which securities are going to go up in future” seems to me to be slightly different from “do supply and demand shocks reliably affect prices”.

108 Hidari 06.12.12 at 5:47 pm

‘Note, btw, that these are the same people whose elaborate hindsight compensates for their abject failures in foresight.’ This makes the point I was making above about prediction a lot more clearly than I was able to.

109 Hidari 06.12.12 at 5:50 pm

‘The question “can economists reliably pick which securities are going to go up in future” seems to me to be slightly different from “do supply and demand shocks reliably affect prices”.’ No, it is the same question; look how you have changed the tense. The question is not whether, in some Platonic world, ‘supply and demand….affects prices’. The question is: based on the ‘law’ of supply and demand, can I make real-world, quantitative, objective, falsifiable predictions about prices in the future? If the answer is ‘no’, it’s clearly not a law in the sense that Boyle’s Law is a law. It might be a useful heuristic, or way of clarifying your thoughts, but that’s all.

110 Sebastian h 06.12.12 at 6:25 pm

Ok, so preferences are the demand side of the supply/demand equation. The point on prices, that the pro-planners are having trouble with, is that prices are the best known way of getting good signals about how scarcity interacts with demand and with the possibility of substitution. That doesn’t say that prices are the perfect way of getting perfect information (the apparent geo complaint), just that they are the best known method of aggregating AND disseminating that information. If you come up with a better method, fantastic, though I’d counsel using it on a proof-of-concept scale before just throwing away prices. But at the moment you don’t have a better method. You can talk about social justice or whatever all you want, but you would be best trying to apply that after market, à la Sweden, rather than screwing with the price signal. And you really shouldn’t be screwing with it if you don’t understand how it enables substitution effects and changes to demand preferences (i.e. the spam confusion above).

111 ajay 06.12.12 at 6:29 pm

“The question is: based on the ‘law’ of supply and demand, can I make real-world, quantitative, objective, falsifiable predictions about prices in the future?” And you’re saying that the answer is “no” – that, to use everyone’s favourite example, “Trading Places”, even if you manage to get hold of the Department of Agriculture’s forecast for the orange crop in advance, you couldn’t use it to make a killing in the frozen orange juice futures market, because information about supply is useless in predicting future prices?
112 Hidari 06.12.12 at 7:12 pm

113 geo 06.12.12 at 7:19 pm

Sebastian @110: “the apparent geo complaint” Not sure what this refers to, but I did recently write a lengthy post in the Red Plenty seminar setting out one instance of the market/anti-market debate, giving a sympathetic exposition of the “market socialism” alternative (Schweickart’s and Nove’s models). The short version: markets in goods and services, yes; labor markets, yes with substantial modification; capital markets, no. Sebastian again: “You can talk about social justice or whatever all you want, but you would be best trying to apply that after market” This is a perfectly respectable position, except for the “you.” Why not “we”? Don’t you care about social justice?

114 Katherine 06.12.12 at 7:34 pm

@John – I appreciate it. But just so you know – don’t feel obliged to be my own personal economics teacher!

115 Sebastian h 06.12.12 at 7:43 pm

I don’t find ‘social’ to be a useful modifier, and it often obscures rather than reveals. Many things that are talked about as social justice can be covered by individual justice (i.e. discrimination), while many of the economic ones end up being ‘our system is imperfect and I want to throw it away but have no practical alternative’, while others are actively anti-individual justice, which I’m usually not ok with.

116 Alex K. 06.12.12 at 7:46 pm

Appealing to the claim that “prices are scarcity indexes” in the context of someone claiming that we should do away with money is overkill. It simply does not matter whether prices are scarcity indexes in the context of the calculation problem — it only matters that prices are an indispensable guide to what will work and what will not work in the current social context of given resources, given technology, and given tastes. Ironically, in an alternate universe where it is true that prices are scarcity indexes, it would actually be much _easier_ for a central planner to do away with markets. If in this alternative universe we also exclude the pesky problem of having adjustment processes unfold in real time, then in such a universe Lange would be correct to claim that you can simulate markets by trial and error (if you have shortages, increase the price; if you have overproduction, decrease the price). As it is, the fact that prices are not scarcity indexes in a reliable manner (they are influenced by technology, ownership patterns, tastes, expectations etc.) is just an extra nail in the coffin of the socialist side of the economic calculation debate.

117 geo 06.12.12 at 8:03 pm

That was a dreary and disappointing reply, Sebastian. The point of saying “social justice” is to point out that this is a class society, in which there are very large inequalities of power, resources, and life chances, and as a result much needless suffering. You can say this is simply a matter of individual injustice, to be dealt with on a case-by-case basis, but why on earth would you want to say that? It would simply suggest that you see no large, structural inequities, or else that you see them and don’t give much of a damn about addressing them, even “after market.”

118 Sebastian h 06.12.12 at 8:13 pm

No, I see structural problems and think steps should be taken to fix them. There are just too many side concepts in the term “social justice” for me to want to sign on to it as a catch-all term. Rather than trying to sort out the wheat from the chaff of the term, I prefer not to use it.
119 geo 06.12.12 at 8:25 pm

So one can’t either coax or browbeat you, as I’ve been trying to do for the last half of this thread, to offer some suggestions for relieving what you agree is a great deal of unnecessary suffering caused by large-scale inequalities? Why so reticent?

120 Sebastian h 06.12.12 at 8:39 pm

Well, for example, I would think that making Medicare available to all at actuarial cost, with subsidies for the poor, would have been a good fix to the problem of lack of access.

121 geo 06.12.12 at 8:56 pm

So you’re a flaming radical after all! Welcome to the political wilderness.

122 Tim Wilkinson 06.12.12 at 9:49 pm

“Prices signal scarcity of things that people want because they signal intensity of the want, and the ease or lack of ease of substitution.” Except when they don’t, because, among other factors:

* the whole of C20th neoclassical economics is based on the impossibility of any interpersonal comparison of utility, which is dealt with by introducing an obviously inadequate measure of ‘efficiency’ which favours the haves over the have-nots.
* guess what? Some people have more money than others. And in case this is not obvious, the quantity of purchasing power expended is therefore useless as a proxy for intensity of needs, wants, desires.
* amazingly, isolated consumers don’t actually have good information, so all the magic calculation that goes into price-setting is done by suppliers, who are powerful enough to inflate (or sometimes for strategic reasons, notably predatory pricing, deflate) prices.
* prices are not signals, they are numbers. They have to be calculated and communicated like any other numerical quantity. The ridiculous magical properties ascribed to them are largely the result of the fact that they are just about the only publicly visible numerical quantity produced by the unaccountable, profiteering, secretive planners who currently control the allocation of almost everything. Note how much control they wield by trickling back (with the option of withholding) sponsorship and advertising funds out of their unearned surplus. They are (in the UK) now busily getting their greedy hands on the last few publicly owned institutions like the NHS, police, schools, universities, roads, parks, etc. And all of this rather centralised allocation involves planning – to further corporate interests rather than anything else, of course.

Also, to ezra abrams’s #97, a non-exhaustive supplementary list of the peculiarly wasteful aspects of capitalist production:

* advertising: branding and ‘aspiration’-inculcation
* trade secrets and IP, including suppressed inventions
* the profit tithe
* duplication of infrastructure and other investments
* the waste and misery incurred by business failures, the feedback mechanism of capitalism
* calculated obsolescence: illusory ‘new improved’ versions, short product life, irreparability

123 Adrian Kelleher 06.12.12 at 11:44 pm

@Plume, 105. Well, the real issue is that “supply” and “demand” are very useful as shorthand that cuts through a lot of complexity, but at the expense of realism. Do all market participants really share the same information? Are their tastes fixed? Will changes in one market (e.g. artificial fibres) affect another (e.g. grain)? (Answer: yes, if it leads to a reduction in acreage of flax; similar logic relates fuel to grain, say, via biofuels. These relationships can be unobvious and paradoxical.) All these real-world difficulties are assumed away.
Worse, answering these questions is no easier than the measurement problems Hayek pointed out about state planning. Problems arise when market evangelists assume all these problems are solved whereas in reality they’ve simply been circumvented. And when an economist calculates a demand curve, he or she is supposed to bear in mind the enormous number of tendentious assumptions made in arriving at that conclusion. But in the real world, once a given set of findings is combined with lots of others, and that combination is itself combined with lots of other meta-findings, the idea that anyone is keeping track of all the assumptions and inter-dependencies is just a fantasy. This wouldn’t be so bad if economists, like most professions, didn’t possess the power to define competence as expertise in the convoluted and dubious edifice they have themselves created. To accept this is to become mired in logic of their own choosing. To reject it is to be entirely beyond the pale of civilisation as far as many are concerned. But to be frank, I could name more than a couple of economists whose grasp of the fundamentals of science and mathematics is anything but sound. Poll results and questionnaires are assumed to reveal details of the human soul. Games defined by economists are assumed to possess material reality, and results are assumed to reflect concepts like “success”, “failure” and “merit”. Cultural norms are assumed to be biological truths. And so on (and on and on). This process is all too easily abused. Assumptions often termed “arbitrary” are frequently, in economics, more aptly called “political” or simply just “dishonest”. By choosing appropriate assumptions it’s often possible to arrive at any desired conclusion and insert it into the literature. Once this result is in turn included in literature surveys and so on, it then becomes a sort of intellectual pollution — an odourless, colourless and untraceable mental poison that in principle could seep out through the literature without limit. I wouldn’t dismiss economics as useless. On the other hand, it’s far too removed from other sciences to be healthy. Economists exercise so much power it’s impossible to resist the idea that power is exactly what lures many into the discipline in the first place. What was witnessed prior to 2008 was simply a self-selecting, self-perpetuating, mutually-reaffirming caste using rite and ritual to exclude unbelievers from power. The circular logic this produces can be extraordinary. For example, issuing utterly misleading economic projections becomes understandable because other economists were similarly deluded. The same excuse applies to having had the most acute eye for the mote of market deregulation, say, while somehow failing to notice the giant beam of exploding credit. Yet somehow presidents of the ECB or chairpersons of the Federal Reserve are still drawn from a very narrow body of individuals — a group that I think could be fairly called a clique. It’s like watching a phrenologists’ guild take power over all education, training and HR over huge areas while dismissing challengers as insufficiently expert in phrenology. There’s some sort of wizardry involved, but it has nothing to do with economics.

124 mattski 06.13.12 at 12:12 am

I am baffled by Adrian & Hidari’s attacks on supply and demand. Economics is not a hard science. Of course supply and demand isn’t a law in the same sense as a physical law.
(Although we should be careful here: the closer we look at apparently inviolable laws, gravity for example, the less “reality” and the more “statistical approximation” we attribute to them.) Economics looks at people, not particles, so “rules” are not going to manifest cleanly. But a rule is merely a statistical claim, not an absolute claim. Such and such, other things equal, will tend to happen. What is so offensive about supply and demand? We can often see prices being bid up or down depending on changing circumstances. Economic systems are sometimes very complicated and opaque. We can’t always see all the moving parts. But other things equal, supply and demand doesn’t strike me as terribly controversial.

125 Adrian Kelleher 06.13.12 at 12:38 am

The problem is that further inferences are drawn from, e.g., demand curves or market linkages; further inferences are drawn from those in turn, and pretty soon everyone’s forgotten the assumptions underpinning the whole thing. Challenging such findings generally results in an invitation to peruse the extensive literature on the subject. Even in this thread entirely false claims are being made, e.g. that market prices are “the best” indicator. This would be more defensible if everybody had equal resources. In reality, some have vastly more resources, and this disparity obliterates much of the information that is claimed to be so valuable.

126 Peter T 06.13.12 at 1:01 am

If by “supply and demand” you mean something like “living things tend to take the path of least effort, and when it gets harder to do something they tend to do less of it or, if it’s essential, then less of something else”, then I would think that pretty uncontroversial. But that is a long way from money, prices and the supply of/demand for social goods. People deal with shortages by raising prices, or by rationing (formally or informally), or by queuing, or by allocating according to some social rule. Various studies have documented that most trading tends to blend all of these in various proportions, and it’s an empirical question as to how the mix will shift in any given case. No comfort for those who like to think that broad abstractions can be a reliable guide to human behaviour.

127 mattski 06.13.12 at 2:45 am

Adrian, I assume you mean the statement, “market prices are the best indicator of scarcity.” First of all, why is this “entirely” false? And what would be a better indicator? I don’t think anyone is pretending prices work perfectly, only that they work for the most part. Peter, what is the specific harm you are trying to ward off? I don’t know of any influential theory claiming that supply & demand constitutes a comprehensive description of human behavior. Neither do I see where the alternatives to supply and demand you mention seriously interfere with its explanatory power. The idea of s&d doesn’t claim there are no exceptions. But rather, IF a market is relatively free then s&d will for the most part determine prices.

128 mattski 06.13.12 at 3:00 am

Speaking of exceptions, the housing bubble is an interesting case. How do we account for the dramatic rise in house prices? Certainly the supply of houses did not shrink. But part of the reason demand for housing increased was an effective increase in the supply of credit for home buyers. And of course eventually the perception that housing was a risk-free investment sent demand through the roof.
129 Peter T 06.13.12 at 6:39 am

“IF a market is relatively free then s&d will for the most part determine prices.” “Relatively”, “free”, “for the most part” and “prices” mean this statement is a long way from actual, you know, trading. As in, free from what (friendship? considerations of status? power? fringe benefits?). “Prices” in money? Or in time? Future opportunities? There are, presumably, exchanges that meet the criteria, but they seem to be routine only in some very narrow domains. So the applicability of the “law” is correspondingly limited.

130 Data Tutashkhia 06.13.12 at 7:27 am

Suppose you’re dying to own an original Picasso. You can buy it at an auction, which seems like the quintessential implementation of the supply/demand concept. Of course you probably don’t have enough money, so it’s extremely likely that the person who gets the Picasso at the auction is nowhere near representing the real level of demand. So, the price here is a function of demand and the level of wealth inequality. If you were to redistribute the wealth equally and run the auction again, the price would’ve been much lower, and the person who values the Picasso most would’ve gotten it. On the other hand, looking around my apartment, all I see here is run-of-the-mill mass-produced stuff. In this case, supply is not a problem; it’s extremely elastic. If demand goes higher, they just produce more. In this case, the price of an item should be, more or less, equal to the cost. The cost includes the aggregate cost of labor, plus the rent paid to the owners of natural resources. If we were to nationalize the natural resources, then the price would be equal to the cost of labor. We could, then, use ‘hour of labor’ as a unit to form the price. Which, I admit, would be difficult, but probably not impossible. As for central planning, it doesn’t seem necessary: you could maintain some small excess inventory, and produce to restock. Which is, I imagine, how it’s done anyway. Now, investment is a whole different story. I think I need a couple more hours to figure it out. I hope I won’t get distracted; our species’ future depends on it.

131 mattski 06.13.12 at 12:19 pm

Peter, I don’t see you offering any better alternatives, but simply complaining this idea isn’t flawless. To me your comments don’t seem to point in any direction whatever. What are you offering? Data, if wealth was redistributed equally and then we auction the Picasso, the person who values it the most will get it, and at the “true” price…. To this I say, Oy. I think it was Carl who said, essentially, that so much energy is wasted because some people insist on starting from an assumption of perfect fairness. That is not a helpful standard to adopt. It’s absurdly unrealistic and gets in the way of incremental change. Some people climb mountains by putting one foot in front of the other. Other people do it by arguing the causes of why the top of the mountain is not currently underfoot.

132 Roger Gathman 06.13.12 at 12:19 pm

Hmm, both the index and object seem to share the same property, for I’ve noticed that the less money I have, the more scarce things seem to get for me. Funny how that works out. Bringing the index in line with its purpose, it would seem rational, then, that everybody should have the same amount, in the same way that other indexes – rulers, scales etc. – are the same for everybody. This way, we would have a real and efficient measure of scarcity.
133 Sebastian H 06.13.12 at 3:38 pm

“This way, we would have a real and efficient measure of scarcity.” On day one. On day two people would start trading on their preferences, and we would just end up where we are now again.

134 Plume 06.13.12 at 5:59 pm

Roger, excellent point. The price index has to account for the individual economic status of every consumer, and it can’t, obviously. And as wealth and income grow more and more unequal, that becomes even more problematic. Pretty soon, they’ll be forced to break the price index warning system into socio-economic spheres. And then sub-spheres. And then, when they wake up to reality, just give up on the idea altogether, because it’s ludicrous. Capitalists charge as much as they can get away with. There are a multitude of factors beyond actual “demand” dictating their pricing. As more and more of our economy becomes monopoly or cartel, “demand” becomes less and less important to the ruling class. The rest of us will just have to take what we’re given. And to slowly mitigate this takeover of the economy, and the economy’s takeover of our lives, they’ll feed us cheap goods from other countries or poor pockets in this one for as long as it takes. For as long as it takes for us to just give up and accept the inevitable bifurcation of life into haves and have-nots. The movie Metropolis, among others, was prescient. . . . . Oh, and if the price index is such a great warning for supply, why then do we produce so many unsold goods? Anyone who has ever worked at or managed a retail store knows this. A very large percentage of the goods in those stores is sent back to the warehouse unsold. And even when the warehouse then marks the price waaay down, most of it still remains unsold. For instance, remainder books, records, clothes and so on. If pricing were such a great warning and measure, that would never happen. The problem is rather obvious. Since anyone can push any new crap onto the market, we’re saturated with it. Basically, the best marketing wins, and humans are saturated with that as well. Duplication of products and duplication of marketing runs into dead ends, over and over again. The real issue boils down to need, rather than pricing. We actually don’t “need” 99% of what appears on our shelves, and marketing can make us feel a “need” for only so long, and can’t possibly make us feel that need for everything on our shelves. Our “unplanned” economy produces far too much junk, waste and duplication. Hence, the vast majority of unsold goods. Pricing warnings don’t work.

135 Plume 06.13.12 at 6:05 pm

Just in case it’s not obvious in the bit above: this keeps happening in the retail world. Meaning, they’re not learning from the massive amounts of unsold goods, the failure to dump them via “sales”, and the further failure to get rid of them via “remainder” sales. Even if the price index actually worked, it doesn’t look like anyone’s listening.

136 mattski 06.13.12 at 7:56 pm

Maybe someday we reach utopia. Couple thousand years from now. Meantime, let’s work to democratically implement and sustain progressive taxation to mitigate the tendency of wealth to accrue to itself. That’s good economics in the view of people like Joe Stiglitz, and I agree.

137 Plume 06.13.12 at 8:29 pm

mattski, First of all, that’s not nearly enough to fix what ails us, though it would be a good start. Second, we have about as much chance to do that as we do to achieve the utopia I mentioned in this climate.
Neither party is willing to raise taxes, and the Republicans and the right have won the message wars. Their “don’t tax the job creators” has won the powers that be over. The Dems are too afraid to counter that argument. And, yes, Stiglitz is right about taxes and a lot of other things. He’s one of the good guys, when it comes to economists. I put him up there with Dean Baker, Galbraith, Krugman, CT’s Mr. Quiggin and Simon Johnson, to name a few. But they don’t go far enough, either. David Harvey, the folks at the Monthly Review and Marxian economists in general are far better at showing why we need more radical change. As long as we have capitalism, capitalists will own the government and create massive gaps between the haves and the have-nots, and our chances of reform will diminish with each decade. Growing up when I did — I’m in my 50s — I assumed that the Keynesian era would last a lot longer. Now I just see it as a blip on the screen between the usual capitalist slavery/oppression eras. It was the best blip of this corrupt and destructive system, and looks more and more like an aberration daily. “Reform” is simply not enough at this point. We’ve lost too much ground for that. And “reform” today amounts to giving up more and more of the New Deal in exchange for empty promises that the entire thing won’t be taken down. Of course, the New Deal wasn’t adequate to begin with, and never grew enough to be adequate. We always needed far more radical changes. It was the compromise position between the real left and the right. Today, with both our parties being right-wing, it’s now seen as the furthest possible left we can go, which is totally unacceptable to the right — and the real left. Its days are numbered. That’s why we need much, much stronger medicine to counteract the reactionaries. We’re not very good at playing their game, because they set the terms of debate and struggle now and have the vast majority of money, power and media. We can only lose if we step on their field of play. Capitalism must die.

138 Sebastian H 06.13.12 at 8:45 pm

“Pretty soon, they’ll be forced to break the price index warning system into socio-economic spheres. And then sub-spheres. And then, when they wake up to reality, just give up on the idea altogether, because it’s ludicrous.” Why would that be needed? The poor haven’t been able to buy the same things as the rich, ummm, forever. If anything, that is one aspect that has gotten better in western societies – the crossover in goods is greater. Re unsold goods, you are failing to ask “compared to what” again. Command economies have unwanted/unneeded goods all the time when the planners fail. The incentive to get it right is stronger in capitalist societies. In planned societies such mistakes can and do continue indefinitely. What? No one needs this type of screw? It is on the schedule sheet. Oh well.

139 Plume 06.13.12 at 10:40 pm

The incentive may be greater to get it right, but because everyone is in it for themselves, their own wallet, there’s no way possible to get it right. Businesses don’t check with their competitors regarding their orders and unsold orders, and they have no control, obviously, over what their competitors do. They don’t get together with them to prevent saturation and waste — unless there’s a monopoly cartel, a cartel of monopolies. There is NO plan of coordination to prevent duplication and the like. Already have 100 sugary cereals on the shelves?
Well, let’s just add another one, cuz I think my marketing is better than your marketing. Already have enough deodorant to last us centuries? I got a great idea to repackage the same old same old, and again, I think my marketing is better than your marketing. No plans. As in, none. We don’t even have managed chaos. Everyone for themselves. If they think they can make a buck, they’ll flood the market with garbage. Doesn’t matter if umpteen other businesses are trying the same thing at the same time. Within their first four years, 44% of all businesses fail. That’s primarily due to the “free for all” nature of our economy. If we had coordination — even a little; it wouldn’t have to be the dreaded “central planning!!!” — we could reduce business bankruptcies and a ton of waste along the way. We could make the entire system far more efficient. I know my utopian ideas are a long shot, without a chance to come true any time soon, and maybe centuries after I’m dead and buried. But we should at least be able to work toward a sensible public/private synergy, and actually plan together to make this all work. Without that, we’re never going to make it. The issues of sustainability alone guarantee that. Short of the existential threat of ecological disaster, we’re simply being economically stupid and wasteful by leaving it all up to millions of competing interests, and that’s leading to incredible inequality. It doesn’t make sense.

140 Peter T 06.14.12 at 12:01 am

mattski, The first part of going in the right direction is getting the map right. If “supply and demand will balance at the right price” is your map, you will end up, for many of our most pressing problems, in the wrong place. A more sophisticated understanding would lead one to see that rationing of the essentials of life is feasible and commonplace, that direct regulation is often more effective, far faster and less subject to gaming than price incentives, and so on. Above all, it would lead to a considered choice of options, not a blind reliance on a single solution.

141 mattski 06.14.12 at 3:07 pm

Peter, “not a blind reliance on a single solution.” I don’t understand. What single solution? Our economy is mixed, and we can make it more mixed, if our arguments are persuasive. It’s tough, I know. Do you have a better idea? Plume, I’m sorry but your arguments do not impress me. I think you have a hard time distinguishing words/ideas from reality. Not only that, but you’re forgetting the left is never going to win a shooting war with the right, and that is where you’re leading us.

142 Plume 06.14.12 at 7:17 pm

Mattski, Talk about having a hard time distinguishing words and ideas from reality!! I’m leading the left into a shooting war? Really? First of all, everything I’ve written points to a democratic revolution, not a bloody one. Second, I’m just a guy on a bulletin board. If you think I have the power to do what you’re suggesting, you need to stop, take a deep breath, and reassess your connection with the real world. Beyond that, you strike me as an example of what is wrong with “liberalism” today. In your fear and trembling, you’ve basically agreed with the right that the real left, those to your left, have no place in this society. And, ya know what that means? You then become the furthest leftward wing of the possible. Which also means you can never get what you want, because your positions will be demonized as “far left” and “unrealistic”. Example: you brought up FDR in a previous post.
Well, FDR was able to get through a lot of liberal policy because his positions were seen as the compromise between left and right. He could point to a strong left to his left that could strike fear into the hearts of the Establishment. “Don’t want my compromise position? Then you’re going to have to deal with the far more aggressive forces to my left!!” And even within that kind of context, he always asked for more than he actually wanted. He asked for a 100% top rate in the early 40s. A maximum wage, so to speak. And he “settled” for 94%. Today’s so-called liberal would have pre-negotiated this with Republicans, kicked the real left to the curb, and “settled” for something much further to the right than they actually wanted. Like a 28% top rate. In short, you need us. We make your position the “centrist” position, which is often seen in Establishment circles as the automatically “sensible” position. But because there is no real left in America, and you and your fellow liberals aid the right in crippling that possibility, you will never get what you want in this system. You will always be considered the crazy far left, and your ideas will be dismissed just as you dismiss mine and those of my fellow leftists. Think about the success of the tea party. Mainstream Republicans have a much stronger negotiating position because of them. Because the tea party has moved the rightward wing of the possible much further to the right, moving the Overton Window, the rest of the Republican party seems “sensible” in comparison. The Dems try to make nice with the old guard as they fear the new one. This moves them further to the right as well. In short, if you want strong, liberal policies, you’re going to need to demand far more, and you’re going to need to point to a strong left to your left that demands much more than that. That moves the “center” to the left. As long as liberals in power keep pre-negotiating everything in sight, they’re never going to get anything but center-right policy — at best.

143 mattski 06.15.12 at 12:06 pm

People who think in such terms as “capitalism is cancer” or “capitalism must die” may have something to learn from the man who said this: “Generally speaking, such sort of expressions are childish. Those officials who use those words, I think they want to show the Chinese government that the Dalai Lama is so bad. And I think also that they are hoping to reach the Tibetans. They want 100 percent negative. So they use these words. They actually disgrace themselves. I mean, childish! Very foolish! Nobody believes them.”

144 Plume 06.15.12 at 4:04 pm

Mattski, If your little sensibilities are hurt by strong language, I feel very sorry for you. But I stand by my choice of words. They’re actually rather tame in the face of what capitalism has done to the world — and what it will do. Capitalism is cancer, and yes, it must die. It is a profoundly immoral system, spreading like cancer into every nook and cranny of our lives, and its own internal logic requires endless growth. This endless growth, this requirement, runs straight into the wall of sustainability, and will destroy the planet if we don’t destroy capitalism first. Any system that depends upon endlessly increased extraction/consumption on a finite planet should not be allowed to be. And the fact that it also requires trillions of dollars of public money to keep it from collapsing every few years is just further proof of the madness of those who defend it.
You can trot out all the little quotes you want by its defenders. They and you are nothing but shills for an evil, anti-democratic system. You either knowingly or witlessly shill for the 1% when you do, and for that, you should be ashamed.
VOL. 26 | 2000

Orbits on Homogeneous Spaces of Arithmetic Origin and Approximations

George Tomanov

Editor(s): Toshiyuki Kobayashi, Masaki Kashiwara, Toshihiko Matsuki, Kyo Nishiyama, Toshio Oshima

Abstract: We prove an $S$-arithmetic version, in the context of algebraic groups defined over number fields, of Ratner’s theorem for closures of orbits of subgroups generated by unipotent elements. We apply this result in order to obtain a generalization of results of Margulis and of Borel–Prasad about values of irrational quadratic forms at integral points to the general setting of hermitian forms over division algebras with involutions of first or second kind. As a byproduct of our considerations we obtain another proof of the strong approximation theorem for algebraic groups defined over number fields.

Published: 1 January 2000. First available in Project Euclid: 20 August 2018. zbMATH: 0960.22006. MathSciNet: MR1770724. Digital Object Identifier: 10.2969/aspm/02610265
12:08 AM Would a course in condensed matter physics be useful for stuff like neutron stars?

1 hour later…

1:08 AM Yeah, probably. I think principles of condensed matter are used in studying the internal structure of neutron stars.

5 hours later…

6:08 AM When I am anxious, time passes slower than I imagine. In the past two weeks, when it was Wednesday, I thought it would be Friday the next day. Then when I checked the calendar and found the next day was only Thursday, I felt gratified, because I felt like I had earned an extra day to do things.

2 hours later…

7:42 AM @CaptainBohemian Good thing that it doesn't pass faster than you imagine! ;)

7:58 AM "most of this material cannot be assimilated by reading books. You need to take courses following an appropriate Master program and pass exams." Well, so much for that

2 hours later…

9:47 AM The notion of oneness is so out of date, and these pseudoscientists sure never tire of pasting the word "quantum" onto anything they can find

2 hours later…

12:03 PM Which one is the good Lee on manifolds? There are like 4 books by a Lee on manifolds

12:20 PM I think it depends on what sort of manifolds you're interested in? E.g. one is mainly about smooth manifolds, iirc

$$f(t)=\frac{\text{sgn} (Z(t))\left|\sum\limits_{n=1}^{n=k} \frac{1}{n} \zeta(1/2+i \cdot t)\sum\limits_{d|n}\frac{\mu(d)}{d^{(1/2+i \cdot t-1)}}\right|}{g(t)+H_{\text{k}}}$$

$$g(t)=\frac{\partial \vartheta (t)}{\partial t}$$

@ACuriousMind Spacetimes, obviously

Which one had the tangent bundle in chapter 6? I remember that part

12:56 PM hello, there is a formula that I don't understand; here is the photo and here is the formula (photo). It is supposed to be the formula for the moment of the rotational joint, but I have no idea where it comes from. There are a bunch of 1/2s and negatives and positives that I don't get. The formula that I know for moment is F * (distance), which here is W*L

1:20 PM

2 hours later…

2:57 PM @Slereah yikes, odd stance considering he probably didn't learn string theory in school himself

does anyone know the difference between a thermopile and a thermoelectric generator? google returns a blank stare when I ask it the question

Were the SM and QFT on your transcript?

I guess it's no loss

The York thing looks more up your alley as well

The 19-20 list is already filling up edpif.org/en/recrutement/prop.php

nvm I got it, thanks

There are like 14 gravity-related ones in the 18-19 list, some with a green light, but you could check those names' personal pages to see if there are any comments on applying

'The first part of the thesis concerns theories alternative to general relativity, motivated by the wish to interpret the cosmological observations (which, within the standard model of cosmology, lead to assuming the existence of dark matter or dark energy) in a different way: via long-range modifications of the gravitational interaction. The second part of this thesis concerns the theory of gravitational waves, motivated by the recent LIGO/Virgo detections. The student will join our program of computing the gravitational-wave field emitted by a system of two compact objects (neutron stars or black holes) at the 4PN approximation of general relativity.'

'Alternative theories of gravitation and gravitational waves' (red light but still)
# Solution to the pie problem - with the help of Justin Bieber
# Solution - Calculate the Number of Atoms in 52 g of He - CBSE (Science) Class 11 - Chemistry

Concept: Mole Concept and Molar Masses

#### Question

Calculate the number of atoms in 52 g of He.

#### Solution

4 g of He = 6.022 × 10^23 atoms of He

∴ 52 g of He = (6.022 × 10^23 × 52)/4 atoms of He = 7.8286 × 10^24 atoms of He
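The same arithmetic is easy to check programmatically. Below is a minimal sketch in JavaScript (the constant names are my own, not from the source): divide the sample mass by helium's molar mass to get moles, then multiply by Avogadro's number.

```javascript
// Sketch: number of He atoms in a 52 g sample.
// AVOGADRO and MOLAR_MASS_HE are illustrative names chosen here.
const AVOGADRO = 6.022e23;  // atoms per mole
const MOLAR_MASS_HE = 4;    // g/mol (approximate molar mass of helium)

const grams = 52;
const moles = grams / MOLAR_MASS_HE; // 52 / 4 = 13 mol
const atoms = moles * AVOGADRO;

console.log(atoms); // 7.8286e+24, matching the worked solution above
```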
# Finding sum of multiples in JavaScript

We are required to write a JavaScript function that takes in a number as a limit (the only argument). The function should calculate the sum of all the natural numbers below the limit that are multiples of 3 or 5.

For example, if the limit is 10, then the sum should be 3+5+6+9 = 23.

## Example

Following is the code −

```javascript
const sumOfMultiple = (limit = 10) => {
   let sum = 0;
   for (let i = 3; i < limit; i += 1) {
      if (i % 3 === 0 || i % 5 === 0) {
         sum += i;
      }
   }
   return sum;
};
console.log(sumOfMultiple(1000));
console.log(sumOfMultiple(10));
console.log(sumOfMultiple(100));
```

## Output

Following is the output on console −

233168
23
2318
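For large limits, the loop can be replaced by a constant-time calculation. The following is a sketch, not part of the original tutorial (the helper name sumDivisibleBy is my own): it uses the arithmetic-series formula for the sum of multiples of k below the limit, with inclusion-exclusion so that multiples of both 3 and 5 are not counted twice.

```javascript
// Sketch: constant-time variant using the arithmetic series
// k + 2k + ... + mk = k * m * (m + 1) / 2, where m = floor((limit - 1) / k).
const sumDivisibleBy = (k, limit) => {
   const m = Math.floor((limit - 1) / k);
   return k * m * (m + 1) / 2;
};

const sumOfMultipleFast = (limit = 10) =>
   sumDivisibleBy(3, limit) + sumDivisibleBy(5, limit) - sumDivisibleBy(15, limit);

console.log(sumOfMultipleFast(1000)); // 233168
console.log(sumOfMultipleFast(10));   // 23
console.log(sumOfMultipleFast(100));  // 2318
```

Subtracting the multiples of 15 corrects for the numbers divisible by both 3 and 5, which the two series would otherwise count twice; the printed values match the loop version above.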
# All Questions

### What makes running so much less energy-efficient than bicycling?
7k views
Most people can ride 10 km on their bike. However, running 10 km is a lot harder to do. Why? According to the law of conservation of energy, bicycling should be more intensive because you have to ...

### What challenges needed to be overcome to create (blue) LEDs?
5k views
In light of today's announcement of the 2014 Nobel laureates, and because of a discussion among colleagues about the physical significance of these devices, let me ask: What is the physical ...

### How can I stand on the ground? EM or/and Pauli?
5k views
There is this famous example about the order difference between gravitational force and EM force. All the gravitational force of Earth is just countered by the electromagnetic force between the ...

### Why do chimneys have these spiral "wings"?
3k views
While walking around I noticed something very peculiar. Many chimneys had spiral "wings", while others didn't. I came up with two possibilities: The wind circles around the chimney upwards which ...

### What's inside a proton?
21k views
What constitutes protons? When I see pictures, I can't understand. Protons are made of quarks, but some say that they are made of 99% empty space. Also, in this illustration from Wikipedia, what's ...

### In the earth's crust, why is there far more uranium than gold?
6k views
In parts per million in the Earth's crust, Uranium is around 1.8 ppm and Gold 0.003 ppm. Given that it takes far more energy to create Uranium than Gold, why is this?

### The Role of Rigor
3k views
The purpose of this question is to ask about the role of mathematical rigor in physics. In order to formulate a question that can be answered, and not just discussed, I divided this large issue into ...

### Is Angular Momentum truly fundamental?
5k views
This may seem like a slightly trite question, but it is one that has long intrigued me. Since I formally learned classical (Newtonian) mechanics, it has often struck me that angular momentum (and ...

### Does the 4/3 problem of classical electromagnetism remain in quantum mechanics?
1k views
In Volume II Chapter 28 of the Feynman Lectures on Physics, Feynman discusses the infamous 4/3 problem of classical electromagnetism. Suppose you have a charged particle of radius $a$ and charge $q$ ...

### The speed of gravity?
5k views
Sorry for the layman question, but it's not my field. Suppose this thought experiment is performed. Light takes 8 minutes to go from the surface of the Sun to Earth. Imagine the Sun is suddenly ...

### If photons have no mass, how can they have momentum?
38k views
As an explanation of why a large gravitational field (such as a black hole) can bend light, I have heard that light has momentum. This is given as a solution to the problem of only massive objects ...

### Reading the Feynman lectures in 2012
9k views
The Feynman lectures are universally admired, it seems, but also a half-century old. Taking them as a source for self-study, what compensation for their age, if any, should today's reader undertake? ...

### Does someone falling into a black hole see the end of the universe?
5k views
This question was prompted by Can matter really fall through an event horizon?. Notoriously, if you calculate the Schwarzschild coordinate time for anything, matter or light, to reach the event ...

### Did the Big Bang happen at a point?
6k views
TV documentaries invariably show the Big Bang as an exploding ball of fire expanding outwards. Did the Big Bang really explode outwards from a point like this? If not, what did happen?

### Will a blanket warm you if you are underwater?
7k views
Suppose a man falls into very cold water and gets his foot stuck under a heavy rock. Fortunately, his head is above water and someone is able to call for help. The paramedics want to keep him warm ...

### Why can we see the dust particles in a narrow beam of light (and not in an all lighted area)?
7k views
Let us say that I am sitting in a room with all the drapes open. Bright sunlight is coming through the window. The whole room is brilliantly lighted. I will not be able to see the dust particles ...

### What causes insects to cast large shadows from where their feet are?
2k views
I recently stumbled upon this interesting image of a wasp, floating on water: Assuming this isn't photoshopped, I have a couple of questions: Why do you see its image like that (what's the ...

### Why does a remote car key work when held to your head/body?
22k views
I was trying to unlock my car, but I was out of range. A friend of mine said that I have to hold the transmitter next to my head. It worked, so I tried the following later that day: Walked away from ...

### Number theory in Physics
5k views
As a Graduate Mathematics student, my interest lies in Number theory. I am curious to know if Number theory has any connections or applications to physics. I have never even heard of any applications ...

### Moon's pull causes tides on far side of Earth: why?
7k views
I have always wondered and once I even got it, but then completely forgot. I understand that gravity causes high and low tides in oceans, but why does it occur on the other side of Earth?

### What is a field, really?
7k views
There was a reason why I constantly failed physics at school and university, and that reason was, apart from the fact I was immensely lazy, that I mentally refused to "believe" more advanced stuff ...

### Is time continuous or discrete?
5k views
While working on physics simulation software, I noticed that I had implemented discrete time (the only type possible on computers). By that I mean that I had an update mechanism that advanced the ...

### What is known about the topological structure of spacetime?
3k views
General relativity says that spacetime is a Lorentzian 4-manifold $M$ whose metric satisfies Einstein's field equations. I have two questions: What topological restrictions do Einstein's equations ...

### Why can Hiroshima be inhabited when Chernobyl cannot?
41k views
There was an atomic bomb dropped in Hiroshima, but today there are residents in Hiroshima. However, in Chernobyl, where there was a nuclear reactor meltdown, there are no residents living today (or ...

### Is it possible that there is a color our human eye can't see?
14k views
Is it possible that there's a color that our eye couldn't see? Like all of us are color blind to it. If there is, is it possible to detect/identify it?

### Why does the atmosphere rotate along with the earth?
23k views
I was reading somewhere about a really cheap way of travelling: using balloons to get ourselves away from the surface of the earth. The idea held that because the earth rotates, we should be ...

### On this infinite grid of resistors, what's the equivalent resistance?
30k views
I searched and couldn't find it on the site, so here it is (quoted to the letter): On this infinite grid of ideal one-ohm resistors, what's the equivalent resistance between the two marked nodes? ...

### Is it possible for information to be transmitted faster than light by using a rigid pole?
5k views
Is it possible for information (like 1 and 0s) to be transmitted faster than light? For instance, take a rigid pole of several AU in length. Now say you have a person on each end, and one of them ...

### How can ants carry items much heavier than themselves?
5k views
This morning I saw an ant and suddenly a question came to my mind: how do ants actually carry items much heavier than themselves? What's the difference (in physics) between us and them?

### Is there a small enough planet or asteroid you can orbit by jumping?
4k views
I just had this idea of orbiting a planet just by jumping and then flying upon it on its orbit kind of like superman. So, would it be theoretically possible, or is there a chance of that small body to ...

### Is it possible to start fire using moonlight?
5k views
You can start fire by focusing the sunlight using the magnifying glass. I searched the web whether you can do the same using moonlight. And found this and this - the first two in Google search ...

### Is temperature a Lorentz invariant in relativity?
2k views
If an observer starts moving at relativistic speeds will he observe the temperature of objects to change as compared to their rest temperatures? Suppose the rest temperature measured is $T$ and the ...

### Proof that the Earth rotates?
13k views
What is the proof, without leaving the Earth, and involving only basic physics, that the earth rotates around its axis? By basic physics I mean the physics that the early physicists must've used to ...

### How long can you survive 1 million degrees?
9k views
I asked my Dad this once when I was about 14, and he said that no matter how short the amount of time you were exposed to such a great temperature, you would surely die. The conversation went ...

### Best books for mathematical background?
11k views
What are the best textbooks to read for the mathematical background you need for modern physics, such as string theory? Some subjects off the top of my head that probably need covering: ...

### Why does dry spaghetti break into three pieces as opposed to only two?
3k views
You can try it with your own uncooked spaghetti if you want; it almost always breaks into three when you snap it. I am asking for a good physical theory on why this is along with evidence to back it ...

### Is there a symmetry associated to the conservation of information?
3k views
Conservation of information seems to be a deep physical principle. For instance, Unitarity is a key concept in Quantum Mechanics and Quantum Field Theory. We may wonder if there is an underlying ...

### Why does a cup with 100 g water float when placed on another cup with 50 g of water?
5k views
Imagine we have cup A with 50 g of water and cup B (smaller in width than A) with 100 g of water. Now put cup B into cup A. If the widths of both cups are of comparable size then the cup with ...

### Quantum Entanglement - What's the big deal?
4k views
Bearing in mind I am a layman - with no background in physics - please could someone explain what the "big deal" is with quantum entanglement? I used to think I understood it - that 2 particles, say ...

### Why did Feynman's thesis almost work?
4k views
A bit of background helps frame this question. The question itself is in the last sentence. For his PhD thesis, Richard Feynman and his thesis adviser John Archibald Wheeler devised an astonishingly ...

### Why does space expansion not expand matter?
5k views
REFORMULATED: I have looked at the other questions (ie "why does space expansion affect matter") but can't find the answer I am looking for. My question: There is always mention of space expanding ...

### What is the upper-limit on intrinsic heating due to dark matter?
1k views
Cold dark matter is thought to fill our galactic neighborhood with a density $\rho$ of about 0.3 GeV/cm${}^3$ and with a velocity $v$ of roughly 200 to 300 km/s. (The velocity dispersion is much ...

### Is there a way for an astronaut to rotate?
3k views
We know that if an imaginary astronaut is in intergalactic space (no external forces) and has an initial velocity of zero, then he has no way to change the position of his center of mass. The law of ...

### Does centrifugal force exist?
8k views
Currently in my last year of high school, and I have always been told that centrifugal force does not exist by my physics teachers. Today my girlfriend in the year below asked me what centrifugal ...

### Is the universe fundamentally deterministic?
21k views
I'm not sure if this is the right place to ask this question. I realise that this may be a borderline philosophical question at this point in time, therefore feel free to close this question if you ...

### How long a straw could Superman use?
9k views
To suck water through a straw, you create a partial vacuum in your lungs. Water rises through the straw until the pressure in the straw at the water level equals atmospheric pressure. This ...

### Quantum Field Theory from a mathematical point of view
6k views
I'm a student of mathematics with not much background in physics. I'm interested in learning Quantum field theory from a mathematical point of view. Are there any good books or other reference ...

### Why does a window become a mirror at night?
7k views
During the day, when you look out through the window from inside the room, you can clearly see what happens outside. At night, when it's dark outside but there's light inside, you look in the window but it becomes a ...
# Investigating the second derivative test

• November 12th 2008, 08:56 AM
chrisc
investigating the second derivative test
This is a question that is supposed to help my understanding of the second derivative test, but I'm not sure how it does, or how to do it. Given
$f(x,y) = ax^2 + bxy + cy^2,$
show by completing the square that if $a \neq 0$ then
$f(x,y) = ax^2 + bxy + cy^2 = a\left[\left(x + \frac{b}{2a}y\right)^2 + \frac{4ac-b^2}{4a^2}\,y^2\right].$
• November 12th 2008, 12:02 PM
Opalg
If you apply the first derivative test (putting partial derivatives equal to 0), you see that the only critical point of f(x,y) occurs at the origin. If you apply the second derivative test (calculate the second partial derivatives and look at $\tfrac{\partial^2f}{\partial x^2}\tfrac{\partial^2f}{\partial y^2} - \bigl(\tfrac{\partial^2f}{\partial x\partial y}\bigr)^2$), it tells you that there is a local max/min at the origin (depending on whether $\tfrac{\partial^2f}{\partial x^2}$ is negative or positive) if that expression is positive, and that there is a saddle point if it is negative. On the other hand, if you look at the formula obtained by completing the square, you get exactly the same information: if $4ac-b^2<0$ then one of the squared terms has a positive coefficient and the other a negative one, so there is a saddle point at the origin. If $4ac-b^2>0$ then both terms have the same sign, and the function will have a local max/min at the origin depending on whether a is negative or positive. So it looks as though the aim of this exercise is to persuade you that the second derivative test works, by seeing that it gives the right answer in this case.
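A quick check, added here for clarity: expanding the bracket recovers $f$, since
$$a\left[\left(x+\tfrac{b}{2a}y\right)^2+\tfrac{4ac-b^2}{4a^2}\,y^2\right] = ax^2+bxy+\left(\tfrac{b^2}{4a}+\tfrac{4ac-b^2}{4a}\right)y^2 = ax^2+bxy+cy^2,$$
and the discriminant inside the bracket is exactly the quantity the second derivative test examines, up to a factor of 4: with $f_{xx}=2a$, $f_{yy}=2c$, $f_{xy}=b$,
$$f_{xx}f_{yy}-f_{xy}^2 = (2a)(2c)-b^2 = 4ac-b^2.$$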
# Ecological Correlates of Carbon Isotope Composition of Leaves: A Comparative Analysis Testing for the Effects of Temperature, $\mathrm{CO}_2$ and $\mathrm{O}_2$ Partial Pressures and Taxonomic Relatedness on $\delta^{13}\mathrm{C}$

C. K. Kelly and F. I. Woodward
Journal of Ecology
Vol. 83, No. 3 (Jun., 1995), pp. 509-515
DOI: 10.2307/2261603
Stable URL: http://www.jstor.org/stable/2261603
Page Count: 7

## Abstract

1 In order to eliminate the possible effects of relatedness on associations between carbon isotope composition $(\delta^{13}\mathrm{C})$ and the altitudinal and latitudinal distributions of plant species, we have reanalysed previously published data sets using information on taxonomic relationships.
2 Although there was a significant positive effect of altitude on $\delta^{13}\mathrm{C}$, we did not find the effect of latitude reported in previous studies in either of the two comparisons testing the effect of this factor. When altitude was not controlled for statistically, plant life-form had a significant effect on $\delta^{13}\mathrm{C}$, with more negative $\delta^{13}\mathrm{C}$ values in plants of greater height (forbs < shrubs < trees), and for woody plants relative to forbs.
3 However, when $\delta^{13}\mathrm{C}$ was compared among life-forms within an altitude category, life-form had no effect on carbon isotope composition for any of the three altitude categories.
4 We conclude that differences in atmospheric composition ($\mathrm{CO}_2$ and $\mathrm{O}_2$ partial pressures), as represented by altitude, are sufficient to explain observed differences in carbon isotope composition.
5 We recommend that the evolutionary comparative techniques applied here should be adopted when multispecific data sets are analysed for the purpose of inferring functional relationships in plant ecology.
WIKISKY.ORG Home Getting Started To Survive in the Universe News@Sky Astro Photo The Collection Forum Blog New! FAQ Press Login # NGC 5806 Contents ### Images DSS Images   Other Images ### Related articles Dark and Baryonic Matter in Bright Spiral Galaxies. I. Near-Infrared and Optical Broadband Surface Photometry of 30 GalaxiesWe present photometrically calibrated images and surface photometry inthe B, V, R, J, H, and K bands of 25, and in the g, r, and K bands offive nearby bright (B0T<12.5 mag) spiralgalaxies with inclinations of 30°-65° spanning the Hubblesequence from Sa to Scd. Data are from The Ohio State University BrightSpiral Galaxy Survey, the Two Micron All Sky Survey, and the SloanDigital Sky Survey Second Data Release. Radial surface brightnessprofiles are extracted, and integrated magnitudes are measured from theprofiles. Axis ratios, position angles, and scale lengths are measuredfrom the near-infrared images. A one-dimensional bulge/diskdecomposition is performed on the near-infrared images of galaxies witha nonnegligible bulge component, and an exponential disk is fit to theradial surface brightness profiles of the remaining galaxies.Based in part on observations obtained at the Cerro TololoInter-American Observatory, operated by the Association of Universitiesfor Research in Astronomy, Inc., under a cooperative agreement with theNational Science Foundation. Dark and Baryonic Matter in Bright Spiral Galaxies. II. Radial Distributions for 34 GalaxiesWe decompose the rotation curves of 34 bright spiral galaxies intobaryonic and dark matter components. Stellar mass profiles are createdby applying color-M/L relations to near-infrared and optical photometry.We find that the radial profile of the baryonic-to-dark-matter ratio isself-similar for all galaxies, when scaled to the radius at which thecontribution of the baryonic mass to the rotation curve equals that ofthe dark matter (RX). We argue that this is due to thequasi-exponential nature of disks and rotation curves that are nearlyflat after an initial rise. The radius RX is found tocorrelate most strongly with baryonic rotation speed, such that galaxieswith RX measurements that lie further out in their disksrotate faster. This quantity also correlates very strongly with stellarmass, Hubble type, and observed rotation speed; B-band central surfacebrightness is less related to RX than these other galaxyproperties. Most of the galaxies in our sample appear to be close tomaximal disk. For these galaxies, we find that maximum observed rotationspeeds are tightly correlated with maximum rotation speeds predictedfrom the baryon distributions, such that one can create a Tully-Fisherrelation based on surface photometry and redshifts alone. Finally, wecompare our data to the NFW parameterization for dark matter profileswith and without including adiabatic contraction as it is most commonlyimplemented. Fits are generally poor, and all but two galaxies arebetter fit if adiabatic contraction is not performed. In order to havebetter fits, and especially to accommodate adiabatic contraction,baryons would need to contribute very little to the total mass in theinner parts of galaxies, seemingly in contrast with other observationalconstraints. Hubble Space Telescope STIS Spectra of Nuclear Star Clusters in Spiral Galaxies: Dependence of Age and Mass on Hubble TypeWe study the nuclear star clusters (NCs) in spiral galaxies of variousHubble types using spectra obtained with the STIS on board the HubbleSpace Telescope (HST). 
We observed the nuclear clusters in 40 galaxies,selected from two previous HST WFPC2 imaging surveys. At a spatialresolution of ~0.2" the spectra provide a better separation of clusterlight from underlying galaxy light than is possible with ground-basedspectra. Approximately half of the spectra have a sufficiently highsignal-to-noise ratio for detailed stellar population analysis. For theother half we only measure the continuum slope, as quantified by the B-Vcolor. To infer the star formation history, metallicity, and dustextinction, we fit weighted superpositions of single-age stellarpopulation templates to the high signal-to-noise ratio spectra. We usethe results to determine the luminosity-weighted age, mass-to-lightratio, and masses of the clusters. Approximately half of the sampleclusters contain a population younger than 1 Gyr. Theluminosity-weighted ages range from 10 Myr to 10 Gyr. The stellarpopulations of NCs are generally best fit as a mixture of populations ofdifferent ages. This indicates that NCs did not form in a single event,but that instead they had additional star formation long after theoldest stars formed. On average, the sample clusters in late-typespirals have a younger luminosity-weighted mean age than those inearly-type spirals (L=8.37+/-0.25 vs.9.23+/-0.21). The average mass-weighted ages are older by ~0.7 dex,indicating that there often is an underlying older population that doesnot contribute much light but does contain most of the mass. The averagecluster masses are smaller in late-type spirals than in early-typespirals (logM=6.25+/-0.21 vs. 7.63+/-0.24) and exceed the masses typicalof globular clusters. The cluster mass correlates loosely with totalgalaxy luminosity. It correlates more strongly with both the Hubble typeof the host galaxy and the luminosity of its bulge. The lattercorrelation has the same slope as the well-known correlation betweensupermassive black hole mass and bulge luminosity. The properties ofboth nuclear clusters and black holes in the centers of spiral galaxiesare therefore intimately connected to the properties of the host galaxy,and in particular its bulge component. Plausible formation scenarioshave to account for this. We discuss various possible selection biasesin our results, but conclude that none of them can explain thedifferences seen between clusters in early- and late-type spirals. Theinability to infer spectroscopically the populations of faint clustersdoes introduce a bias toward younger ages, but not necessarily towardhigher masses.Based on observations made with the NASA/ESA Hubble Space Telescope,obtained from the Data Archive at the Space Telescope Science Institute,which is operated by the Association of Universities for Research inAstronomy, Inc., under NASA contract NAS5-26555. These observations areassociated with proposals 9070 and 9783. The structure of galactic disks. Studying late-type spiral galaxies using SDSSUsing imaging data from the SDSS survey, we present the g' and r' radialstellar light distribution of a complete sample of ~90 face-on tointermediate inclined, nearby, late-type (Sb-Sdm) spiral galaxies. Thesurface brightness profiles are reliable (1 σ uncertainty lessthan 0.2 mag) down to μ˜27 mag/''. Only ~10% of all galaxies havea normal/standard purely exponential disk down to our noise limit. Thesurface brightness distribution of the rest of the galaxies is betterdescribed as a broken exponential. 
About 60% of the galaxies have abreak in the exponential profile between ˜ 1.5-4.5 times thescalelength followed by a downbending, steeper outer region. Another~30% shows also a clear break between ˜ 4.0-6.0 times thescalelength but followed by an upbending, shallower outer region. A fewgalaxies have even a more complex surface brightness distribution. Theshape of the profiles correlates with Hubble type. Downbending breaksare more frequent in later Hubble types while the fraction of upbendingbreaks rises towards earlier types. No clear relation is found betweenthe environment, as characterised by the number of neighbours, and theshape of the profiles of the galaxies. The Hα Galaxy Survey . III. Constraints on supernova progenitors from spatial correlations with Hα emissionAims.We attempt to constrain progenitors of the different types ofsupernovae from their spatial distributions relative to star formationregions in their host galaxies, as traced by Hα + [Nii] lineemission. Methods: .We analyse 63 supernovae which have occurredwithin galaxies from our Hα survey of the local Universe. Threestatistical tests are used, based on pixel statistics, Hα radialgrowth curves, and total galaxy emission-line fluxes. Results:.Many type II supernovae come from regions of low or zero emission lineflux, and more than would be expected if the latter accurately traceshigh-mass star formation. We interpret this excess as a 40% "Runaway"fraction in the progenitor stars. Supernovae of types Ib and Ic doappear to trace star formation activity, with a much higher fractioncoming from the centres of bright star formation regions than is thecase for the type II supernovae. Type Ia supernovae overall show a weakcorrelation with locations of current star formation, but there isevidence that a significant minority, up to about 40%, may be linked tothe young stellar population. The radial distribution of allcore-collapse supernovae (types Ib, Ic and II) closely follows that ofthe line emission and hence star formation in their host galaxies, apartfrom a central deficiency which is less marked for supernovae of typesIb and Ic than for those of type II. Core-collapse supernova ratesoverall are consistent with being proportional to galaxy totalluminosities and star formation rates; however, within this total thetype Ib and Ic supernovae show a moderate bias towards more luminoushost galaxies, and type II supernovae a slight bias towardslower-luminosity hosts. How large are the bars in barred galaxies?I present a study of the sizes (semimajor axes) of bars in discgalaxies, combining a detailed R-band study of 65 S0-Sb galaxies withthe B-band measurements of 70 Sb-Sd galaxies from Martin (1995). As hasbeen noted before with smaller samples, bars in early-type (S0-Sb)galaxies are clearly larger than bars in late-type (Sc-Sd) galaxies;this is true both for relative sizes (bar length as fraction ofisophotal radius R25 or exponential disc scalelength h) andabsolute sizes (kpc). S0-Sab bars extend to ~1-10 kpc (mean ~ 3.3 kpc),~0.2-0.8R25 (mean ~ 0.38R25) and ~0.5-2.5h (mean ~1.4h). Late-type bars extend to only ~0.5-3.5 kpc,~0.05-0.35R25 and 0.2-1.5h their mean sizes are ~1.5 kpc, ~0.14R25 and ~0.6h. Sb galaxies resemble earlier-type galaxiesin terms of bar size relative to h; their smallerR25-relative sizes may be a side effect of higher starformation, which increases R25 but not h. Sbc galaxies form atransition between the early- and late-type regimes. 
For S0-Sbcgalaxies, bar size correlates well with disc size (both R25and h); these correlations are stronger than the known correlation withMB. All correlations appear to be weaker or absent forlate-type galaxies; in particular, there seems to be no correlationbetween bar size and either h or MB for Sc-Sd galaxies.Because bar size scales with disc size and galaxy magnitude for mostHubble types, studies of bar evolution with redshift should selectsamples with similar distributions of disc size or magnitude(extrapolated to present-day values); otherwise, bar frequencies andsizes could be mis-estimated. Because early-type galaxies tend to havelarger bars, resolution-limited studies will preferentially find bars inearly-type galaxies (assuming no significant differential evolution inbar sizes). I show that the bars detected in Hubble Space Telescope(HST) near-infrared(IR) images at z~ 1 by Sheth et al. have absolutesizes consistent with those in bright, nearby S0-Sb galaxies. I alsocompare the sizes of real bars with those produced in simulations anddiscuss some possible implications for scenarios of secular evolutionalong the Hubble sequence. Simulations often produce bars as large as(or larger than) those seen in S0-Sb galaxies, but rarely any as smallas those in Sc-Sd galaxies. Antitruncation of Disks in Early-Type Barred GalaxiesThe disks of spiral galaxies are commonly thought to be truncated: theradial surface brightness profile steepens sharply beyond a certainradius (3-5 inner disk scale lengths). Here we present the radialbrightness profiles of a number of barred S0-Sb galaxies with theopposite behavior: their outer profiles are distinctly shallower inslope than the main disk profile. We term these antitruncations'' theyare found in at least 25% of a larger sample of barred S0-Sb galaxies.There are two distinct types of antitruncations. About one-third show afairly gradual transition and outer isophotes that are progressivelyrounder than the main disk isophotes, suggestive of a disk embeddedwithin a more spheroidal outer zone-either the outer extent of the bulgeor a separate stellar halo. But the majority of the profiles have rathersharp surface brightness transitions to the shallower, outer exponentialprofile and, crucially, outer isophotes that are not significantlyrounder than the main disk; in the Sab-Sb galaxies, the outer isophotesinclude visible spiral arms. This suggests that the outer light is stillpart of the disk. A subset of these profiles are in galaxies withasymmetric outer isophotes (lopsided or one-armed spirals), suggestingthat interactions may be responsible for at least some of the disklikeantitruncations. Rotational Widths for Use in the Tully-Fisher Relation. I. Long-Slit Spectroscopic DataWe present new long-slit Hα spectroscopy for 403 noninteractingspiral galaxies, obtained at the Palomar Observatory 5 m Hale telescope,which is used to derive well-sampled optical rotation curves. Becausemany of the galaxies show optical emission features that aresignificantly extended along the spectrograph slit, a technique wasdevised to separate and subtract the night sky lines from the galaxyemission. We exploit a functional fit to the rotation curve to identifyits center of symmetry; this method minimizes the asymmetry in thefinal, folded rotation curve. We derive rotational widths using bothvelocity histograms and the Polyex model fit. The final rotational widthis measured at a radius containing 83% of the total light as derivedfrom I-band images. 
In addition to presenting the new data, we use alarge sample of 742 galaxies for which both optical long-slit and radioH I line spectroscopy are available to investigate the relation betweenthe H I content of the disks and the extent of their rotation curves.Our results show that the correlation between those quantities, which iswell established in the case of H I-poor galaxies in clusters, ispresent also in H I-normal objects: for a given optical size, starformation can be traced farther out in the disks of galaxies with largerH I mass. Supernova 2004dg in NGC 5806.Not Available Supernova 2004dgIAUC 8375 available at Central Bureau for Astronomical Telegrams. Secular Evolution and the Formation of Pseudobulges in Disk GalaxiesThe Universe is in transition. At early times, galactic evolution wasdominated by hierarchical clustering and merging, processes that areviolent and rapid. In the far future, evolution will mostly be secularthe slow rearrangement of energy and mass that results from interactionsinvolving collective phenomena such as bars, oval disks, spiralstructure, and triaxial dark halos. Both processes are important now.This review discusses internal secular evolution, concentrating on oneimportant consequence, the buildup of dense central components in diskgalaxies that look like classical, merger-built bulges but that weremade slowly out of disk gas. We call these pseudobulges. Circumnuclear Structure and Black Hole Fueling: Hubble Space Telescope NICMOS Imaging of 250 Active and Normal GalaxiesWhy are the nuclei of some galaxies more active than others? If mostgalaxies harbor a central massive black hole, the main difference isprobably in how well it is fueled by its surroundings. We investigatethe hypothesis that such a difference can be seen in the detailedcircumnuclear morphologies of galaxies using several quantitativelydefined features, including bars, isophotal twists, boxy and diskyisophotes, and strong nonaxisymmetric features in unsharp-masked images.These diagnostics are applied to 250 high-resolution images of galaxycenters obtained in the near-infrared with NICMOS on the Hubble SpaceTelescope. To guard against the influence of possible biases andselection effects, we have carefully matched samples of Seyfert 1,Seyfert 2, LINER, starburst, and normal galaxies in their basicproperties, taking particular care to ensure that each was observed witha similar average scale (10-15 pc pixel-1). Severalmorphological differences among our five different spectroscopicclassifications emerge from the analysis. The H II/starburst galaxiesshow the strongest deviations from smooth elliptical isophotes, whilethe normal galaxies and LINERs have the least disturbed morphology. TheSeyfert 2s have significantly more twisted isophotes than any othercategory, and the early-type Seyfert 2s are significantly more disturbedthan the early-type Seyfert 1s. The morphological differences betweenSeyfert 1s and Seyfert 2s suggest that more is at work than simply theviewing angle of the central engine. They may correspond to differentevolutionary stages. Companions to Isolated Elliptical Galaxies: Revisiting the Bothun-Sullivan SampleWe investigate the number of physical companion galaxies for a sample ofrelatively isolated elliptical galaxies. 
The NASA/IPAC ExtragalacticDatabase (NED) has been used to reinvestigate the incidence of satellitegalaxies for a sample of 34 elliptical galaxies, first investigated byBothun & Sullivan using a visual inspection of Palomar Sky Surveyprints out to a projected search radius of 75 kpc. We have repeatedtheir original investigation using data cataloged in NED. Nine of theseelliptical galaxies appear to be members of galaxy clusters; theremaining sample of 25 galaxies reveals an average of +1.0+/-0.5apparent companions per galaxy within a projected search radius of 75kpc, in excess of two equal-area comparison regions displaced by 150-300kpc. This is significantly larger than the +0.12+/-0.42companions/galaxy found by Bothun & Sullivan for the identicalsample. Making use of published radial velocities, mostly availablesince the completion of the Bothun-Sullivan study, identifies thephysical companions and gives a somewhat lower estimate of +0.4companions per elliptical galaxy. This is still 3 times larger than theoriginal statistical study, but given the incomplete and heterogeneousnature of the survey redshifts in NED, it still yields a firm lowerlimit on the number (and identity) of physical companions. An expansionof the search radius out to 300 kpc, again restricted to sampling onlythose objects with known redshifts in NED, gives another lower limit of4.5 physical companions per galaxy. (Excluding five elliptical galaxiesin the Fornax Cluster, this average drops to 3.5 companions perelliptical.) These physical companions are individually identified andlisted, and the ensemble-averaged radial density distribution of theseassociated galaxies is presented. For the ensemble, the radial densitydistribution is found to have a falloff consistent withρ~R-0.5 out to approximately 150 kpc. For non-FornaxCluster companions the falloff continues out to the 300 kpc limit of thesurvey. The velocity dispersion of these companions is found to reach amaximum of 350 km s-1 at around 120 kpc, after which theyfall at a rate consistent with Keplerian falloff. This falloff may thenindicate the detection of a cut-off in the mass-density distribution inthe elliptical galaxies' dark matter halo at ~100 kpc. Inner-truncated Disks in GalaxiesWe present an analysis of the disk brightness profiles of 218 spiral andlenticular galaxies. At least 28% of disk galaxies exhibit innertruncations in these profiles. There are no significant trends oftruncation incidence with Hubble type, but the incidence among barredsystems is 49%, more than 4 times that for nonbarred galaxies. However,not all barred systems have inner truncations, and not allinner-truncated systems are currently barred. Truncations represent areal dearth of disk stars in the inner regions and are not an artifactof our selection or fitting procedures nor the result of obscuration bydust. Disk surface brightness profiles in the outer regions are wellrepresented by simple exponentials for both truncated and nontruncateddisks. However, truncated and nontruncated systems have systematicallydifferent slopes and central surface brightness parameters for theirdisk brightness distributions. Truncation radii do not appear tocorrelate well with the sizes or brightnesses of the bulges. 
Thissuggests that the low angular momentum material apparently missing fromthe inner disk was not simply consumed in forming the bulge population.Disk parameters and the statistics of bar orientations in our sampleindicate that the missing stars of the inner disk have not simply beenredistributed azimuthally into bar structures. The sharpness of thebrightness truncations and their locations with respect to othergalactic structures suggest that resonances associated with diskkinematics, or tidal interactions with the mass of bulge stars, might beresponsible for this phenomenon. Spiral galaxies observed in the near-infrared K band. I. Data analysis and structural parametersDeep surface photometry in the K band was obtained for 54 normal spiralgalaxies, with the aim of quantifying the percentage of faint bars andstudying the morphology of spiral arms. The sample was chosen to cover awider range of morphological types while inclination angles anddistances were limited to allow a detailed investigation of the internalstructure of their disks and future observations and studies of the diskkinematics. An additional constraint for a well defined subsample wasthat no bar structure was seen on images in the visual bands. Accuratesky projection parameters were determined from the K maps comparingseveral different methods. The surface brightness distribution wasdecomposed into axisymmetric components while bars and spiral structureswere analyzed using Fourier techniques.Bulges were best represented by a Sérsic r1/n law withan index in the typical range of 1-2. The central surface brightness ofthe exponential disk and bulge-to-disk ratio only showed weakcorrelation with Hubble type. Indications of a central point source werefound in many of the galaxies. An additional central, steep, exponentialdisk improved the fit for more than 80% of the galaxies suggesting thatmany of the bulges are oblate.Bars down to the detection level at a relative amplitude of 3% weredetected in 26 of 30 galaxies in a subsample classified as ordinary SAspirals. This would correspond to only 5% of all spiral galaxies beingnon-barred at this level. In several cases, bars are significantlyoffset compared to the starting points of the main spiral pattern whichindicates that bar and spiral have different pattern speeds. A smallfraction (˜10%) of the sample has complex central structuresconsisting of several sets of bars, arcs or spirals.A majority of the galaxies (˜60%) displays a two-armed, grand-designspiral pattern in their inner parts which often breaks up into multiplearms in their outer regions. Phase shifts between the inner and outerpatterns suggest in some cases that they belong to different spiralmodes. The pitch angles of the main two-armed symmetric spiral patternin the galaxies have a typical range of 5-30 °. The sample shows alack of strong, tight spirals which could indicate that such patternsare damped by non-linear, dynamical effects due to their high radialforce perturbations.Based on observations collected at the European Southern Observatory, LaSilla, Chile; programs: ESO 63.N-0343, 65.N-0287, 66.N-0257.Table 2 is only available in electronic form at the CDS via anonymousftp to cdsarc.u-strasbg.fr (130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/423/849Appendix A is only available in electronic form athttp://www.edpsciences.org The ISOPHOT 170 μm Serendipity Survey II. 
The catalog of optically identified galaxies%The ISOPHOT Serendipity Sky Survey strip-scanning measurements covering≈15% of the far-infrared (FIR) sky at 170 μm were searched forcompact sources associated with optically identified galaxies. CompactSerendipity Survey sources with a high signal-to-noise ratio in at leasttwo ISOPHOT C200 detector pixels were selected that have a positionalassociation with a galaxy identification in the NED and/or Simbaddatabases and a galaxy counterpart visible on the Digitized Sky Surveyplates. A catalog with 170 μm fluxes for more than 1900 galaxies hasbeen established, 200 of which were measured several times. The faintest170 μm fluxes reach values just below 0.5 Jy, while the brightest,already somewhat extended galaxies have fluxes up to ≈600 Jy. For thevast majority of listed galaxies, the 170 μm fluxes were measured forthe first time. While most of the galaxies are spirals, about 70 of thesources are classified as ellipticals or lenticulars. This is the onlycurrently available large-scale galaxy catalog containing a sufficientnumber of sources with 170 μm fluxes to allow further statisticalstudies of various FIR properties.Based on observations with ISO, an ESA project with instruments fundedby ESA Member States (especially the PI countries: France, Germany, TheNetherlands and the UK) and with the participation of ISAS and NASA.Members of the Consortium on the ISOPHOT Serendipity Survey (CISS) areMPIA Heidelberg, ESA ISO SOC Villafranca, AIP Potsdam, IPAC Pasadena,Imperial College London.Full Table 4 and Table 6 are only available in electronic form at theCDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/422/39 The Hα galaxy survey. I. The galaxy sample, Hα narrow-band observations and star formation parameters for 334 galaxiesWe discuss the selection and observations of a large sample of nearbygalaxies, which we are using to quantify the star formation activity inthe local Universe. The sample consists of 334 galaxies across allHubble types from S0/a to Im and with recession velocities of between 0and 3000 km s-1. The basic data for each galaxy are narrowband H\alpha +[NII] and R-band imaging, from which we derive starformation rates, H\alpha +[NII] equivalent widths and surfacebrightnesses, and R-band total magnitudes. A strong correlation is foundbetween total star formation rate and Hubble type, with the strongeststar formation in isolated galaxies occurring in Sc and Sbc types. Moresurprisingly, no significant trend is found between H\alpha +[NII]equivalent width and galaxy R-band luminosity. More detailed analyses ofthe data set presented here will be described in subsequent papers.Based on observations made with the Jacobus Kapteyn Telescope operatedon the island of La Palma by the Isaac Newton Group in the SpanishObservatorio del Roque de los Muchachos of the Instituto deAstrofísica de Canarias.The full version of Table \ref{tab3} is available in electronic form atthe CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/414/23 Reduced image datafor this survey can be downloaded fromhttp://www.astro.livjm.ac.uk/HaGS/ Dust emission in the far-infrared as a star formation tracer at z= 0: systematic trends with luminosityWe investigate whether dust emission in the far-infrared (far-IR)continuum provides a robust estimate of the star formation rate (SFR)for a nearby, normal late-type galaxy. 
We focus on the ratio of the40-1000 μm luminosity (Ldust) to the far-ultraviolet(far-UV) (0.165 μm) luminosity, which is connected to recent episodesof star formation. Available total photometry at 0.165, 60, 100 and 170μm limits the statistics to 30 galaxies, which, however, span a largerange in observed (and, thus, attenuated by dust) K-band (2.2 μm)luminosity, morphology and inclination (i). This sample shows that theratio of Ldust to the observed far-UV luminosity depends notonly on i, as expected, but also on morphology and, in a tighter way, onobserved K-band luminosity. We find thatLdust/LFUV~ e-τK (α+0.62)LK0.62, where LFUV andLK are the unattenuated stellar luminosities in far-UV and K,respectively, and α is the ratio of the attenuation optical depthsat 0.165 μm (τFUV) and 2.2 μm (τK).This relation is to zeroth order independent of i and morphology. It maybe further expressed asLdust/LFUV~LδK, whereδ= 0.61 - 0.02α, under the observationally motivatedassumption that, for an average inclination,e-τK~L-0.02K. We adoptcalculations of two different models of attenuation of stellar light byinternal dust to derive solid-angle-averaged values of α. We findthat δ is positive and decreases towards 0 from the more luminousto the less luminous galaxies. This means that there is no universalratio of far-IR luminosity to unattenuated far-UV luminosity for nearby,normal late-type galaxies. The far-IR luminosity systematicallyoverestimates SFR in more luminous, earlier-type spirals, owing to theincreased fractional contribution to dust heating of optical/near-IRphotons in these objects. Conversely, it systematically underestimatesSFR in fainter, later-type galaxies, the τFUV of which isreduced. The limited statistics and the uncertainty affecting theprevious scaling relations do not allow us to establish quantitativeconclusions, but an analogous analysis making use of larger data sets,available in the near future (e.g. from GALEX, ASTRO-F and SIRTF), andof more advanced models will allow a quantitative test of ourconclusions. The SAURON project - II. Sample and early resultsEarly results are reported from the SAURON survey of the kinematics andstellar populations of a representative sample of nearby E, S0 and Sagalaxies. The survey is aimed at determining the intrinsic shape of thegalaxies, their orbital structure, the mass-to-light ratio as a functionof radius, the age and metallicity of the stellar populations, and thefrequency of kinematically decoupled cores and nuclear black holes. Theconstruction of the representative sample is described, and itsproperties are illustrated. A comparison with long-slit spectroscopicdata establishes that the SAURON measurements are comparable to, orbetter than, the highest-quality determinations. Comparisons arepresented for NGC 3384 and 4365, where stellar velocities and velocitydispersions are determined to a precision of 6kms-1, and theh3 and h4 parameters of the line-of-sight velocitydistribution to a precision of better than 0.02. Extraction of accurategas emission-line intensities, velocities and linewidths from the datacubes is illustrated for NGC 5813. Comparisons with published linestrengths for NGC 3384 and 5813 reveal uncertainties of <~0.1Åon the measurements of the Hβ, Mg b and Fe5270 indices.Integral-field mapping uniquely connects measurements of the kinematicsand stellar populations to the galaxy morphology. The maps presentedhere illustrate the rich stellar kinematics, gaseous kinematics, andline-strength distributions of early-type galaxies. 
The results includethe discovery of a thin, edge-on, disc in NGC 3623, confirm theaxisymmetric shape of the central region of M32, illustrate the LINERnucleus and surrounding counter-rotating star-forming ring in NGC 7742,and suggest a uniform stellar population in the decoupled core galaxyNGC 5813. Bar Galaxies and Their EnvironmentsThe prints of the Palomar Sky Survey, luminosity classifications, andradial velocities were used to assign all northern Shapley-Ames galaxiesto either (1) field, (2) group, or (3) cluster environments. Thisinformation for 930 galaxies shows no evidence for a dependence of barfrequency on galaxy environment. This suggests that the formation of abar in a disk galaxy is mainly determined by the properties of theparent galaxy, rather than by the characteristics of its environment. The UZC-SSRS2 Group CatalogWe apply a friends-of-friends algorithm to the combined Updated ZwickyCatalog and Southern Sky Redshift Survey to construct a catalog of 1168groups of galaxies; 411 of these groups have five or more members withinthe redshift survey. The group catalog covers 4.69 sr, and all groupsexceed the number density contrast threshold, δρ/ρ=80. Wedemonstrate that the groups catalog is homogeneous across the twounderlying redshift surveys; the catalog of groups and their membersthus provides a basis for other statistical studies of the large-scaledistribution of groups and their physical properties. The medianphysical properties of the groups are similar to those for groupsderived from independent surveys, including the ESO Key Programme andthe Las Campanas Redshift Survey. We include tables of groups and theirmembers. Spiral Galaxies with HST/NICMOS. I. Nuclear Morphologies, Color Maps, and Distinct NucleiThis is the first of two papers where we present the analysis of anHST/NICMOS2 near-infrared (NIR) snapshot survey in the F160W (H) filterfor a sample of 78 spiral galaxies selected from the UGC and ESOLVcatalogs. For 69 of these objects we provide nuclear color informationderived by combining the H data either with additional NICMOS F110W (J)images or with V WFPC2/HST data. Here we present the NIR images and theoptical-NIR color maps. We focus our attention on the properties of thephotometrically distinct nuclei'' which are found embedded in most ofthe galaxies and provide measurements of their half-light radii andmagnitudes in the H (and when available in the J) band. We find that (1)in the NIR the nuclei embedded in the bright early- to intermediate-typegalaxies span a much larger range in brightness than the nuclei whichare typically found embedded in bulgeless late-type disks: the nucleiembedded in the early- to intermediate-type galaxies reach, on thebright end, values up to HAB~-17.7 mag; (2) nuclei are foundin both nonbarred and barred hosts, in large-scale (>~1 kpc) as wellas in nuclear (up to a few 100 pc) bars; (3) there is a significantincrease in half-light radius with increasing luminosity of the nucleusin the early/intermediate types (a decade in radius for ~8 magbrightening), a correlation which was found in the V band and which isalso seen in the NIR data; (4) the nuclei of early/intermediate-typespirals cover a large range of optical-NIR colors, from V-H~-0.5 to 3.Some nuclei are bluer and others redder than the surroundinggalaxy,indicating the presence of activity or reddening by dust in many ofthese systems; (5) someearly/intermediate nuclei are elongated and/orslightly offset from the isophotal center of the host galaxy. 
Onaverage, however, these nuclei appear as centered, star-cluster-likestructures similar to those whichare found in the late-type disks. Basedon observations with the NASA/ESA Hubble Space Telescope, obtained atthe Space Telescope Science Institute, which is operated by Associationof Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Local velocity field from sosie galaxies. I. The Peebles' modelPratton et al. (1997) showed that the velocity field around clusterscould generate an apparent distortion that appears as tangentialstructures or radial filaments. In the present paper we determine theparameters of the Peebles' model (1976) describing infall of galaxiesonto clusters with the aim of testing quantitatively the amplitude ofthis distortion. The distances are determined from the concept of sosiegalaxies (Paturel 1984) using 21 calibrators for which the distanceswere recently calculated from two independent Cepheid calibrations. Weuse both B and I-band magnitudes. The Spaenhauer diagram method is usedto correct for the Malmquist bias. We give the equations for theconstruction of this diagram. We analyze the apparent Hubble constant indifferent regions around Virgo and obtain simultaneously the Local Groupinfall and the unperturbed Hubble constant. We found:[VLG-infall = 208 ± 9 km s-1] [\log H =1.82 ± 0.04 (H ≈ 66 ± 6 km s-1Mpc-1).] The front side and backside infalls can be seenaround Virgo and Fornax. In the direction of Virgo the comparison ismade with the Peebles' model. We obtain: [vinfall} =CVirgo/r0.9 ± 0.2] withCVirgo=2800 for Virgo and CFornax=1350 for Fornax,with the adopted units (km s-1 and Mpc). We obtain thefollowing mean distance moduli: [μVirgo=31.3 ± 0.2(r=18 Mpc )] [μFornax=31.7 ± 0.3 (r=22 Mpc). ] Allthese quantities form an accurate and coherent system. Full Table 2 isonly available in electronic form at the CDS via anonymous ftp tocdsarc.u-strasbg.fr (130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/393/57 Multiwavelength Observations of Dusty Star Formation at Low and High RedshiftIf high-redshift galaxies resemble rapidly star-forming galaxies in thelocal universe, most of the luminosity produced by their massive starswill have been absorbed by dust and reradiated as far-infrared photonsthat cannot be detected with existing facilities. This paper examineswhat can be learned about high-redshift star formation from the smallfraction of high-redshift galaxies' luminosities that is emitted ataccessible wavelengths. We first consider the most basic ingredient inthe analysis of high-redshift surveys: the estimation of star formationrates for detected galaxies. Standard techniques require an estimate ofthe bolometric luminosity produced by their massive stars. We review andquantify empirical correlations between bolometric luminosities producedby star formation and the UV, mid-IR, sub-mm, and radio luminosities ofgalaxies in the local universe. These correlations suggest thatobservations of high-redshift galaxies at any of these wavelengthsshould constrain their star formation rates to within ~0.2-0.3 dex. Weassemble the limited evidence that high-redshift galaxies obey theselocally calibrated correlations. The second part of the paper assesseswhether existing surveys have found the galaxies that host the majorityof star formation at high redshift even though they directly detect onlya small fraction of the luminosities of individual galaxies. 
We describethe characteristic luminosities and dust obscurations of galaxies atz~0, z~1, and z~3. After discussing the relationship between thehigh-redshift populations selected in surveys at different wavelengths,we calculate the contribution to the 850 μm background from each andargue that these known galaxy populations can together have produced theentire observed background. The available data show that a correlationbetween star formation rate and dust obscurationLbol,dust/LUV exists at low and high redshiftalike. The existence of this correlation plays a central role in themajor conclusion of this paper: most star formation at high redshiftoccurred in galaxies with moderate dust obscurations1<~Lbol,dust/LUV<~100 similar to those thathost the majority of star formation in the local universe and to thosethat are detected in UV-selected surveys. Nearby Optical Galaxies: Selection of the Sample and Identification of GroupsIn this paper we describe the Nearby Optical Galaxy (NOG) sample, whichis a complete, distance-limited (cz<=6000 km s-1) andmagnitude-limited (B<=14) sample of ~7000 optical galaxies. Thesample covers 2/3 (8.27 sr) of the sky (|b|>20deg) andappears to have a good completeness in redshift (97%). We select thesample on the basis of homogenized corrected total blue magnitudes inorder to minimize systematic effects in galaxy sampling. We identify thegroups in this sample by means of both the hierarchical and thepercolation friends-of-friends'' methods. The resulting catalogs ofloose groups appear to be similar and are among the largest catalogs ofgroups currently available. Most of the NOG galaxies (~60%) are found tobe members of galaxy pairs (~580 pairs for a total of ~15% of objects)or groups with at least three members (~500 groups for a total of ~45%of objects). About 40% of galaxies are left ungrouped (field galaxies).We illustrate the main features of the NOG galaxy distribution. Comparedto previous optical and IRAS galaxy samples, the NOG provides a densersampling of the galaxy distribution in the nearby universe. Given itslarge sky coverage, the identification of groups, and its high-densitysampling, the NOG is suited to the analysis of the galaxy density fieldof the nearby universe, especially on small scales. A CCD Study of the Environment of Seyfert Galaxies. III. Host Galaxies and the Nearby EnvironmentsA technique is described that permits the robust decomposition of thebulge and disk components of a sample of Seyfert galaxies, as well as a(control) sample of nonactive galaxies matched to the Seyferts in thedistributions of redshift, luminosity, and morphological classification.The structural parameters of the host galaxies in both samples aremeasured. No statistically significant differences at greater than the95% level are found in these parameters according to aKolmogorov-Smirnov test. Companion galaxies''-defined as any galaxywithin a projected separation of 200 h-1 kpc from the centerof the host-are identified and their basic properties measured. Acomparison between the active and control samples in the distributionsof apparent R magnitude, absolute R magnitude (assuming the companionsare at the distance of the host), projected separation from the host,position angle relative to the host, magnitude difference between thecompanion and host, and strength of the tidal parameter shows nostatistically significant differences. 
Similarly, no statisticallysignificant differences are found between the control and active samplehost galaxies in terms of light asymmetries-bars, rings, isophotaltwisting, etc. The implications for a model in which interactions andmergers are responsible for inciting activity in galactic nuclei arediscussed briefly. The ISOPHOT 170 μ m serendipity survey. I. Compact sources with galaxy associationsThe first set of compact sources observed in the ISOPHOT 170 μmSerendipity Survey is presented. From the slew data with low(I100 μm <= 15 MJy/sr) cirrus background, 115well-observed sources with a high signal-to-noise ratio in all detectorpixels having a galaxy association were extracted. Of the galaxies withknown optical morphologies, the vast majority are classified as spirals,barred spirals, or irregulars. The 170 μm fluxes measured from theSerendipity slews have been put on an absolute flux level by usingcalibration sources observed additionally with the photometric mappingmode of ISOPHOT. For all but a few galaxies, the 170 μm fluxes aredetermined for the first time, which represents a significant increasein the number of galaxies with measured Far-Infrared (FIR) fluxes beyondthe IRAS 100 μm limit. The 170 μm fluxes cover the range 2 <~F170 μm la 100 Jy. Formulae for the integrated FIR fluxesF40-220μm and the total infrared fluxesF1-1000μm incorporating the new 170 μm fluxes areprovided. The large fraction of sources with a high F170μm / F100 μm flux ratio indicates that a cold(TDust la 20 K) dust component is present in many galaxies.The detection of such a cold dust component is crucial for thedetermination of the total dust mass in galaxies, and, in cases with alarge F170 μm / F100 μm flux ratio,increases the dust mass by a significant factor. The typical mass of thecoldest dust component is MDust = 107.5 +/- 0.5Msun , a factor 2-10 larger than that derived from IRASfluxes alone. As a consequence, the majority of the derived gas-to-dustratios are much closer to the canonical value of ~ 160 for the MilkyWay. By relaxing the selection criteria, it is expected that theSerendipity Survey will eventually lead to a catalog of 170 μm fluxesfor ~ 1000 galaxies. Based on observations with ISO, an ESA project withinstruments funded by ESA Member States (especially the PI countries:France, Germany, the Netherlands and the United Kingdom) and with theparticipation of ISAS and NASA. Members of the Consortium on the ISOPHOTSerendipity Survey (CISS) are MPIA Heidelberg, ESA ISO SOC Villafranca,AIP Potsdam, IPAC Pasadena, Imperial College London. Arcsecond Positions of UGC GalaxiesWe present accurate B1950 and J2000 positions for all confirmed galaxiesin the Uppsala General Catalog (UGC). The positions were measuredvisually from Digitized Sky Survey images with rms uncertaintiesσ<=[(1.2")2+(θ/100)2]1/2,where θ is the major-axis diameter. We compared each galaxymeasured with the original UGC description to ensure high reliability.The full position list is available in the electronic version only. The Centers of Early- to Intermediate-Type Spiral Galaxies: A Structural AnalysisA recent Hubble Space Telescope (HST)/WFPC2 visual survey of early- andintermediate-type spiral galaxies has unveiled a great complexity in theinner regions of these systems, which include a high fraction ofphotometrically distinct compact sources sitting at the galactic centers(nuclei''). 
The faint nuclei (M_V >~ -12) are typically hosted by rather amorphous, quiescent, bulgelike structures with an exponential (rather than the classical R^1/4) light profile. These "exponential bulges" are commonly found inside the intermediate-type disks, consistent with previous studies. Brighter nuclei (M_V <~ -12) are typically found instead in the centers of galaxies with circumnuclear rings/arms of star formation or dust and an active, i.e., H II- or AGN-type, central spectrum at ground-based resolution. On the structural plane of half-light radius (R_e) versus mean surface brightness within the half-light radius (mu_e), faint and bright nuclei overlap with, and fill the region of parameter space between, the old Milky Way globular clusters and the young star clusters, respectively, with typical R_e of about a few up to ~20 pc. On the same plane, the exponential bulges have significantly fainter mu_e than R^1/4 bulges for any given radius and follow a mu_e-R_e relation typical of disks, which strengthens the suggestion that the exponential bulges grow inside the disks as a result of the secular evolution of the latter. Under the likely assumption that the visual light from the faint nuclei embedded in the quiescent exponential bulges is of stellar origin and of a similar (>~1 Gyr) age for the central star clusters and their host bulges, the masses inferred for the former agree with those required to disrupt bars comparable in size to the latter. This offers support to scenarios in which the exponential bulges grow inside the disks owing to the orbital disruption of progenitor bars caused by the growth of a central concentration of mass and suggests that this specific mode of bulge formation is (still) active in the present-day universe. On the other hand, the presence of the massive clusters at the very center of the low-density exponential bulges should prevent any other "nuclear" bar from forming, thereby preventing further infall of dissipative fuel to the nuclear regions. This may argue against the possibility of evolving the exponential bulges into denser, R^1/4 bulges by a simple looping for several cycles of the bar formation/disruption mechanism.

A CCD Study of the Environment of Seyfert Galaxies. I. The Survey
Large-format, R-band CCD data are presented for a spectroscopically complete sample of 34 Seyfert galaxies and a control sample of 45 nonactive galaxies that are well matched to the Seyfert sample in redshift, luminosity, and morphological type. Gray-scale images of the local environment are included for all of the host galaxies, as well as figures showing the surface brightness, ellipticity, and position angle of the major axis as a function of radius. These data will be used to study the environments of these galaxies and hence to test the "interaction hypothesis" that, over the past two decades, has been implicated as the triggering mechanism for nuclear activity. While there are no dramatic differences in most parameters between the active and nonactive samples, the distributions of ellipticities and major-axis position-angle excursions of the Seyfert host galaxies and the control galaxies are marginally different. A higher proportion of Seyfert galaxies appear to be involved in late-stage mergers. A similar fraction of the control sample, however, displays significant light asymmetries that could be evidence for recent interactions. Moreover, a small but substantial number of the Seyfert galaxies show no evidence for recent interactions as judged by the absence of light asymmetries.
# Floor Ceiling Ceiling Floor
Algebra Level 3
If the range of positive $x$ satisfying the equation $\lceil x \lfloor x \rfloor \rceil + \lfloor x \lceil x \rceil \rfloor = 111$ is $\alpha \leq x \leq \beta$, what is the value of $8\alpha + 7 \beta$?
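One way to sanity-check an answer is an exact brute-force scan. Within each interval $[n, n+1)$ the left-hand side only jumps where $nx$ or $(n+1)x$ crosses an integer, so sampling rationals at half that breakpoint spacing catches every solution and both endpoints exactly. A minimal Python sketch (the search window $x < 16$ is an assumption; it comfortably covers where a sum of 111 can occur):

```python
# Exact scan using Fractions, so floor/ceil behave exactly at boundaries.
from fractions import Fraction
from math import ceil, floor

def lhs(x):
    # ceil(x*floor(x)) + floor(x*ceil(x)), exact for Fraction inputs
    return ceil(x * floor(x)) + floor(x * ceil(x))

solutions = []
for n in range(1, 16):                      # search positive x in [n, n+1)
    step = Fraction(1, 2 * n * (n + 1))     # half the breakpoint spacing
    x = Fraction(n)
    while x < n + 1:
        if lhs(x) == 111:
            solutions.append(x)
        x += step

alpha, beta = min(solutions), max(solutions)
print(alpha, beta, 8 * alpha + 7 * beta)    # 59/8 52/7 111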
Right Triangle: Hypotenuse and Side differ by 1
So I have searched to the best of my abilities but cannot find mathematically why this is true, or whether it is called something specific; the closest thing would be Pythagorean Triples, but this is not quite the same, although (3,4,5) is a triple. I will do my best to explain and apologize in advance, as math is not my strength. In any right triangle I could find, the following holds true: if the triple is (a,b,c) and b differs by 1 from c, then the other side is a = sqrt{b+c}. For example: (3,4,5): a = sqrt{4+5} = sqrt{9} = 3 = sqrt{9} = sqrt{25-16} = sqrt{5^2 - 4^2}. (a,17,18): a = sqrt{17+18} = sqrt{35} = sqrt{324-289} = sqrt{18^2 - 17^2}. The only way I understand it is that the difference between two squared numbers whose bases differ by 1 will always be the sum of both numbers. So, assuming b is always a+1, b^2 - a^2 = a + b. Any explanation, or a pointer to where I can read about this more, would be much appreciated.
Yes, that is true. Given $b = (a+1)$ $$b^2-a^2\\ =(a+1)^2 - a^2\\ = a^2+2a +1 - a^2\\ = 2a + 1 \\= a + (a + 1)\\= a+b$$
For all $a,b$ $$b^2 - a^2 = (b+a)(b-a)$$ Assuming as you do that $b=a+1$ $$b^2 - a^2 = (b+a)(b-a) = (2a+1)(1)=2a+1$$ and $$a+b=a+a+1=2a+1$$ so yes, when $b=a+1$, $b^2 - a^2 = a + b$
There are relatively few triples where $$A$$ and $$B$$ differ by only $$1$$. In triples with sides under 212 quadrillion, there are only $$19$$ of them, and one method of finding them is here. Here are the first few, shown as f(n,k), and they agree with the proof in notovny's answer: $$3,4,5\Rightarrow4^2-3^2=16-9=7=3+4$$ $$21,20,29\Rightarrow21^2-20^2=441-400=41=21+20$$ $$119,120,169\Rightarrow120^2-119^2=14400-14161=239=120+119$$ If you want $$C-B=1$$, these functions will generate them all for any $$k\in\mathbb{N}$$ $$A=2k+1\qquad B=2k^2+2k\qquad C=2k^2+2k+1$$
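The closed-form family at the end is easy to verify mechanically. A short Python sketch using the $A, B, C$ formulas quoted above, checking both the Pythagorean relation and the $a = \sqrt{b+c}$ pattern from the question:

```python
# Triples with C - B = 1, generated from A = 2k+1, B = 2k^2+2k, C = B+1.
for k in range(1, 6):
    A = 2 * k + 1
    B = 2 * k * k + 2 * k
    C = B + 1
    assert A * A + B * B == C * C   # a right triangle
    assert A * A == B + C           # the pattern asked about
    print(A, B, C)
# 3 4 5 / 5 12 13 / 7 24 25 / 9 40 41 / 11 60 61
```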
# How do you solve abs(5x + 10) - 15 = 20?
Aug 2, 2015
$x = - 9 , 5$
#### Explanation:
Rearrange the equation: $| 5 x + 10 | - 15 = 20$ => $| 5 x + 10 | = 35$
Because of the modulus there are two solutions, the first: $5 x + 10 = 35$ => $x = 5$
The second: $5 x + 10 = - 35$ => $x = - 9$
Aug 2, 2015
$x = - 9 , 5$
#### Explanation:
$\left\mid 5 x + 10 \right\mid - 15 = 20$
Add $15$ to both sides of the equation. $\left\mid 5 x + 10 \right\mid = 20 + 15$ => $\left\mid 5 x + 10 \right\mid = 35$
Rewrite the equation without the absolute value symbol, with one equation positive, and one negative. $5 x + 10 = 35$ and $- \left(5 x + 10\right) = 35$.
Positive Equation
$5 x + 10 = 35$
Subtract $10$ from both sides of the equation. $5 x = 35 - 10$ => $5 x = 25$
Divide both sides by $5$. $x = \frac{25}{5}$ => $x = 5$
Negative Equation
$- \left(5 x + 10\right) = 35$ => $- 5 x - 10 = 35$
Add $10$ to both sides. $- 5 x = 35 + 10$ => $- 5 x = 45$
Divide both sides by $- 5$. $x = \frac{45}{- 5}$ => $x = - 9$
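A computer algebra system reaches the same two roots. A minimal cross-check, assuming SymPy is available:

```python
from sympy import Abs, Eq, S, solveset, symbols

x = symbols('x', real=True)
# Solve |5x + 10| - 15 = 20 over the reals
print(solveset(Eq(Abs(5 * x + 10) - 15, 20), x, domain=S.Reals))  # {-9, 5}
```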
# What is the compound interest on a sum of Rs. 12,000 for $$2\frac{5}{8}$$ years at 8% p.a. when the interest is compounded annually (nearest to a rupee)?
## Options:
1. Rs. 2,654
2. Rs. 2,642
3. Rs. 2,712
4. Rs. 2,697
### Correct Answer: Option 4 (Solution Below)
This question was previously asked in SSC CGL Previous Paper 68 (Held On: 4 March 2020 Shift 1)
## Solution:
Interest rate for the first 2 years = 8% per year
Interest rate for the remaining 5/8 year = 5/8 × 8 = 5%
P = 12,000
As we know, A = P(1 + r₁/100)(1 + r₂/100)(1 + r₃/100), applying 8% twice and then 5% for the fractional year
⇒ A = 12,000 × (108/100) × (108/100) × (105/100)
⇒ A = 12,000 × (27/25) × (27/25) × (21/20)
⇒ A = 14,697
∴ CI = 14,697 – 12,000 = 2,697
Short Trick: Since A/P = (27/25)² × (21/20) = 15309/12500, take P = 12,500 units, so A = 15,309 units.
15,309 – 12,500 = 2,809 units
∴ 2,809 units = 12000/12500 × 2809 = 2,697
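The same computation in a few lines of Python, using the schedule from the solution above (two full years at 8%, then 5/8 of a year at the proportional 5%):

```python
from fractions import Fraction

P = 12000
# two annual compoundings at 8%, then the fractional year at 5%
A = P * Fraction(108, 100) ** 2 * Fraction(105, 100)
CI = A - P
print(float(A), round(float(CI)))   # 14696.64 2697
```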
# Quantifier packaging II: function limits and continuity (the fear of epsilon and delta)
In this post I want to discuss some possible approaches to teaching function limits and continuity in terms of epsilon and delta. The "fear of epsilon and delta" that I refer to here is not only that of the student, but also that of the teacher! Because of the difficulties students have with understanding and working with the multi-quantifier definitions of function limits, continuity and uniform continuity in terms of epsilon and delta, it is very tempting to find a way round the issue. For example, once the students have been taught about convergence of sequences, we can use sequence-based definitions of function limits and continuity. (See my earlier post, however, on teaching the material on convergence of sequences.) In my second-year mathematical analysis course I have emphasised the sequence approach for many years, while still exposing the students to the epsilon-delta definitions. However, I am beginning to change my views. I no longer see it as desirable to find ways round these problems. I prefer to seek ways to help the students to gain a solid understanding of the harder concepts. To begin with, let us restrict ourselves to the case of a function $f$ from $\mathbb{R}$ to $\mathbb{R}$. Obviously this should be modified in appropriate ways when dealing with function limits (where the function need not be defined at the point in question) or dealing with functions between more general sets (e.g. subsets of $\mathbb{R}^n$, or more general metric spaces). In this setting, in terms of sequences, the definition of the statement "$f$ is continuous" is certainly very clean. Here is one version: For every convergent sequence $(x_n) \subseteq \mathbb{R}$, we have $\lim_{n \to \infty} f(x_n) = f(\lim_{n\to\infty} x_n)\,.$ This is short and elegant, and it only requires an understanding (see above) of convergence of sequences. This definition is very easy to use to prove results about sums/products/composition of continuous functions (etc.), quoting earlier standard results about the algebra of limits for sequences of real numbers to help where necessary. This then leads to a very clear and clean theory. So why do I have reservations about its use? I have mentioned some of the issues already, but let me list them again here, with some other issues.
• In postponing our confrontation with epsilon and delta for as long as possible, we may not be acting in the best interests of our students.
• Our confidence that students are happy and confident with the notion of convergence of sequences may be misplaced.
• While this definition in terms of sequences is an easy one to use in developing the general theory, on its own it does not give a particularly good introductory, intuitive notion of continuity. Many students have difficulty seeing what this definition really means (in terms of function values being close to the right value) for specific functions such as $f(x)=x^2$, and have no idea how one could go about checking carefully that the condition is satisfied. (Obviously you can explain how to do this using, for example, the algebra of limits for this particular function.)
• Students find it difficult to negate this definition correctly when trying to understand what it means for a function to be discontinuous. This is, of course, a general problem.
Here it takes the form that students assume that if a sequence does not converge to a particular value, it is because it converges to some other value. The possibility that it does not converge at all may be overlooked. The function $\sin x$ might be a more convincing example than $x^2$ here, except that approaches to teaching the properties of $\sin x$ can sometimes be a little circular. Mostly we use the "standard fact" that the derivative of $\sin x$ is $\cos x$ and the Mean Value Theorem to show that $|\sin x - \sin y| \leq |x-y|$, etc., but obviously we have assumed far more than the continuity of $\sin x$ in the process. I am as guilty as anyone else of referring students to books for more details concerning the trigonometrical functions (etc.). So, what is the problem with the epsilon-delta definition of continuity? Well, if you jump straight in, you end up with a four-quantifier statement. In terms of $\varepsilon$ and delta, "$f$ is continuous" comes out as some variant of the following: $\forall x \in \mathbb{R} \, \forall \varepsilon>0\,\exists \delta>0$ such that $\forall x' \in (x-\delta,x+\delta)$ we have $|f(x')-f(x)|<\varepsilon\,.$ Now it is obvious that we could first attack the three-quantifier statement "$f$ is continuous at $x$", and then define continuity in terms of this. Still, a three-quantifier statement is already rather challenging for, say, a first-year student of analysis. We can also, if we wish, disguise one of the quantifiers by replacing $\forall x' \in (x-\delta,x+\delta)$ we have $|f(x')-f(x)|<\varepsilon\,$ with the somewhat less formal version $|x'-x|<\delta \Rightarrow |f(x')-f(x)|<\varepsilon\,$ (without openly specifying that $x'$ is in $\mathbb R$). This is perfectly fine once students have a good understanding of the concepts and can use quantifiers correctly, but I am not convinced that it is wise when students are still in the process of learning how to handle quantifiers. Rather than disguising this quantifier, I prefer to package it using images of sets, and to say $f((x-\delta,x+\delta)) \subseteq (f(x)-\varepsilon,f(x)+\varepsilon)\,.$ We can also define continuity of $f$ in terms of function limits (one-sided or two-sided). I have no objection to this, but of course it simply shifts the original problem to the alternative setting of function limits. So, having pointed out problems with these approaches, what is my recommendation? Well, I have some possible ideas involving quantifier packaging, but I haven't yet had the opportunity to try them out. Perhaps some discussion here would be wise before unleashing them on the students! I think that whatever approach we take should be applicable to function limits, so let us change our setting slightly. Let $a \in \mathbb R$, let $f$ be a function from $\mathbb R \setminus \{a\}$ to $\mathbb R$, and let $L \in \mathbb R$. In terms of sequences, the definition of "$\lim_{x\to a} f(x) = L$" is the following: For every sequence $(x_n) \subseteq \mathbb R \setminus \{a\}$ such that $(x_n)$ converges to $a$, we have $f(x_n) \to L$ as $n \to \infty\,.$ As before, this is clean, clear, easy to use, and allows you to sidestep the epsilon-delta definition if you wish.
The epsilon-delta definition comes out as some variant of the following: Three-quantifier version $\forall \varepsilon>0\,\exists \delta>0$ such that $\forall x \in (a-\delta,a+\delta) \setminus \{a\}$ we have $|f(x)-L|<\varepsilon$ or, using images of sets, we have the following Two-quantifier version $\forall \varepsilon>0\,\exists \delta>0$ such that $f( (a-\delta,a+\delta) \setminus \{a\}) \subseteq (L-\varepsilon,L+\varepsilon)\,.$ Perhaps some suitable notation for a punctured $\null\delta$-neighbourhood would make this look a little less unwieldy. These are standard, and look easy enough to the professional mathematician, but are still complicated enough to cause serious difficulties for a student trying to come to grips with analysis. Can we package the quantifiers further? I think that we can, but only at the expense of inventing new terminology. Here is a possible attempt, based on my method for teaching convergence of sequences. I am not at all sure that this is the definitive version. Suggestions are welcome! Let $\delta >0$ and let $B \subseteq \mathbb R$. Then let us say that the set $B$ absorbs the values of $f$ $\null\delta$-near $a$ if $f( (a-\delta,a+\delta) \setminus \{a\}) \subseteq B\,.$ We then say that the set $B$ absorbs the values of $f$ near $a$ if there exists a $\delta>0$ such that $B$ absorbs the values of $f$ $\null\delta$-near $a$. With this terminology, the definition of "$\lim_{x\to a} f(x) = L$" becomes: Every open interval centred on $L$ absorbs the values of $f$ near $a$. So, after all this work, we are back to a single-quantifier definition. Why not stick to the definition using sequences? Well, we are now in a position to give the students a thorough introduction to the use of epsilon and delta. Time permitting, we can look at lots of examples of sets which do or don't absorb the values of functions near or $\null\delta$-near various points. In the process we can reinforce intuitive notions of function limit and continuity, without sacrificing rigour. Note that, if we start with $f:\mathbb{R} \rightarrow \mathbb{R}$, the definition of "$f$ is continuous at $a$" can now be stated either as the standard $\lim_{x \to a} f(x) = f(a)$, or, with the new terminology, as follows: Every open interval centred on $f(a)$ absorbs the values of $f$ near $a$. Does anyone have any alternative suggestions for names for some of these concepts? Or do they already have names in the literature that I am simply unaware of? Joel Feinstein 15:00 Maybe there is no need to bring in the word "absorbs" here, though it is tempting to make some use of it. If we do use the notion of absorption, it should perhaps be more closely associated with some notion of movement in the domain. For example, we could say that the set $B$ absorbs the values $f(x)$ as $x \to a$. Alternatively, we could simply say that the set $B$ includes the values of $f(x)$ for $x$ near $a$. There is a second reason that I am not yet happy with the version above, "$B$ absorbs the values of $f$ near $a$". This version appears to be a little ambiguous, and potentially confusing: the values in question could be taken to be "those values of $f$ which are near $a$". Whether we use "absorbs", "includes" or even that dangerous word "contains", it may be safer to say "… the values of $f(x)$ for $x$ near $a$" rather than "… the values of $f$ near $a$". Maybe we could get away with "… the values of $f$ at points near $a$"? How about the following statement, formalized in terms of $\null\delta$ as above?
The set $B$ includes (all of) the values of $f$ at points near $a$.
• Do we need "all of" to make this as clear as possible?
• Is it a good idea to introduce a term such as absorption to make this statement less unwieldy?
• Should we try to introduce a notion of movement as in "as $x$ approaches $a$"?
16:00 Or maybe it would be best to eliminate the "values" and "points" altogether and go with statements such as $B$ absorbs $f$ near $a$ and the version with $\null\delta$, $B$ absorbs $f$ $\null\delta$-near $a$. What do people think? Joel Feinstein
### 6 responses to "Quantifier packaging II: function limits and continuity (the fear of epsilon and delta)"
1. mattheath When you put up the first post about unpacking quantifiers, I started thinking about how to do continuity. I thought it might be worth separating out a definition of a discontinuity of size $\epsilon$ at a point x. So we say f has a discontinuity of size $\epsilon$ at x if for all $\null\delta$ there is $y\in B(x,\delta)$ with $|f(x)-f(y)|>\epsilon$. (Possibly "discontinuity of size at least $\epsilon$" would be preferable.) We then define "f is continuous at x" to be "f has no discontinuity of any size at x" and so on. Part of my thinking was that students will typically have a naïve picture of a function that isn't continuous, and that picture has it go along smoothly until a point at which it "jumps" some distance. If you keep this picture in mind (perhaps together with a collection of pictures of functions which "climb very steeply" at x but are continuous) I think it is fairly easy to reconstruct the definition of a discontinuity. I thought of this sort of like remembering the offside rule in football; it's hard because it seems pretty much arbitrary, but if you remember what it is designed to avoid (goal-hanging) it becomes fairly easy.
• In reply to Matt: that is quite an interesting approach: defining continuous as "not discontinuous". I will have to think about it! I am currently writing a post about lim inf and lim sup: this gives another way to quantify the discontinuity at a point. Joel
2. From some conversations I have had, I think that I should clarify one thing. Quantifier packaging is not intended to prevent students from understanding the original multiple-quantifier statements, but to assist it. Once students can recognize parts of a complicated statement as meaning something that they understand, the whole statement should begin to make sense. Perhaps this is a bit like learning a language. At first, when you hear people speaking quickly, you don't even catch any of the words. After a while, you start to catch words, and then phrases that mean something, and then you start to be able to put the whole picture together. So, when a student sees the statement $\forall \varepsilon>0\,\exists \delta>0$ such that $f( (a-\delta,a+\delta) \setminus \{a\}) \subseteq (L-\varepsilon,L+\varepsilon)\,$ they will be able to parse this using concepts they understand. Using terminology suggested above: $f( (a-\delta,a+\delta) \setminus \{a\}) \subseteq (L-\varepsilon,L+\varepsilon)\,$ says that $(L-\varepsilon,L+\varepsilon)$ absorbs $f$ $\null\delta$-near $a$; $\exists \delta>0$ such that $f( (a-\delta,a+\delta) \setminus \{a\}) \subseteq (L-\varepsilon,L+\varepsilon)\,$ says that $(L-\varepsilon,L+\varepsilon)$ absorbs $f$ near $a$; the whole statement says that, for all $\varepsilon>0$, $(L-\varepsilon,L+\varepsilon)$ absorbs $f$ near $a$.
Understanding the simpler parts of the statement should hopefully assist the student to understand the whole of the multi-quantifier statement, even in its original form. Joel
3. If we introduce the term 'almost absorbs' (appropriately defined, as in our discussions of lim inf and lim sup), we can now achieve some new "zero-quantifier" definitions of our concepts! The definition of '$\lim_{x \to a} f(x) = L$' becomes: The single-point set $\{L\}$ almost absorbs $f$ near $a$. The definition of '$f$ is continuous at $a$' becomes: The single-point set $\{f(a)\}$ almost absorbs $f$ near $a$. We can also define upper semi-continuity and lower semi-continuity this way. Recall that $f$ is upper semi-continuous at $a$ if $\limsup_{x \to a} f(x) \leq f(a)$, and $f$ is lower semi-continuous at $a$ if $\liminf_{x \to a} f(x) \geq f(a)$. In the language of "almost absorption", $f$ is upper semi-continuous at $a$ if and only if $({-\infty},f(a)]$ almost absorbs $f$ near $a$. Similarly, $f$ is lower semi-continuous at $a$ if and only if $\null[f(a),\infty)$ almost absorbs $f$ near $a$. If we wish to work with extended-real-valued functions, we should use $\null[{-\infty},f(a)]$ and $\null[f(a),+\infty]$ instead, with the usual care needed in the definition of 'almost absorbs'. One possible problem here is the parsing of 'almost absorbs $f$ near $a$'. The student could think that this means that there is some $\delta>0$ such that the relevant set 'almost contains' the set $f(a-\delta,a+\delta)$ (with some inappropriate notion of 'almost contains' involving '$\forall \varepsilon$'). This is not what we want! Would students be confused by this? We may wish to avoid "almost absorption". In this case, single-quantifier versions of the above (in the real-valued case) are as follows: $f$ is upper semi-continuous at $a$ if and only if for all $\varepsilon>0$, $({-\infty},f(a)+\varepsilon]$ absorbs $f$ near $a$; $f$ is lower semi-continuous at $a$ if and only if for all $\varepsilon>0$, $\null[f(a)-\varepsilon,\infty)$ absorbs $f$ near $a$. In all cases, you can draw some nice diagrams illustrating the fact that parts of the curve near the point $a$ lie below/above appropriate horizontal lines. Joel Feinstein 5/2/09
4. I was discussing some of these issues with my colleague Sergey Utev, and he was telling me that he was able to make good use of "little o" and "big O" notation. This, of course, does not remove the need for epsilon and delta in order to make things rigorous, but it is another way to make some of the material more accessible to more of the students. Now, in this post, I said that students need to get to grips with epsilon and delta at some point, and I gave some approaches that might help. With "little o" notation in mind, perhaps a compromise could be to focus on convergence to zero? Null sequences and function limits being zero may be a bit easier than the general case. It also allows some more language to be (carefully) defined, such as $|f|$ is $\varepsilon$-small $\null\delta$-near $a$ and $|f|$ is $\varepsilon$-small near $a$. However, we do lose the grammatical advantage of absorption, as discussed previously. We end up with statements like for all $\varepsilon>0$, $|f|$ is $\varepsilon$-small near $a$ and, perhaps only informally, $|f|$ is small near $a$. There is a bit of a problem if $f$ is actually defined at $a$, because you can't get nearer to $a$ than $a$ itself! This is, however, always a problem when defining function limits.
Here, "near $a$" always means "at all those points which are sufficiently-near-but-not-equal-to $a$". Joel Feinstein 23/4/09
5. There appears to be a (temporary?) problem that I don't understand. Wherever the Greek letter delta occurs on its own within some latex above, it currently says "Formula does not parse". If necessary I will use Tim Gowers's \null trick to try to get around this, but has anyone else had problems with this? Here I will try a delta on its own, using dollar latex \delta dollar This gives the following at the moment: $\delta$ Here is a second attempt including a \null, as dollar latex \null \delta dollar This gives the following: $\null \delta$ What do you know! Another success for \null! Joel Feinstein 6/6/09 PS The temporary problem appears to have gone away again now anyway, and both of the versions above are working to give a delta. Maybe the best thing to do with these things is to check back later?
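The absorption definition lends itself to numerical experiment in the classroom. Here is a minimal Python sketch for testing whether $(L-\varepsilon, L+\varepsilon)$ absorbs $f$ $\null\delta$-near $a$; note that sampling finitely many points can refute absorption or lend it plausibility, but can never prove it:

```python
# Numerically test whether (L - eps, L + eps) "absorbs" f delta-near a,
# i.e. whether f maps (a - delta, a + delta) \ {a} into that interval.
# Finite sampling gives evidence only, not a proof.
def absorbs(f, a, L, eps, delta, samples=10_000):
    for i in range(1, samples + 1):
        t = -delta + 2 * delta * i / (samples + 1)   # t in (-delta, delta)
        if t != 0 and not (L - eps < f(a + t) < L + eps):
            return False
    return True

f = lambda x: x * x
# Does (4 - 0.1, 4 + 0.1) absorb x^2 near 2? Only once delta is small enough:
print(absorbs(f, a=2, L=4, eps=0.1, delta=0.1))    # False: delta too large
print(absorbs(f, a=2, L=4, eps=0.1, delta=0.02))   # True
```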
# Using the Concept of Force Between Two Infinitely Long Parallel Current Carrying Conductors, Define One Ampere of Current.
Concept: Force Between Two Parallel Currents, the Ampere
#### Question
Using the concept of force between two infinitely long parallel current carrying conductors, define one ampere of current.
#### Solution
One ampere of current can be defined as the amount of current which, when flowing (in the same direction) through two infinitely long parallel wires separated by one metre, produces an attractive force of 2 × 10^-7 newtons per metre of their length. The wires must have negligible circular cross-section and they must be placed in vacuum.
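Plugging the definition into the standard formula for the force per unit length between parallel currents, F/L = μ₀I₁I₂/(2πd), reproduces the number. A minimal Python check:

```python
import math

mu0 = 4 * math.pi * 1e-7        # T·m/A (the pre-2019 defined value)
I1 = I2 = 1.0                   # one ampere in each wire
d = 1.0                         # one metre of separation

force_per_metre = mu0 * I1 * I2 / (2 * math.pi * d)
print(force_per_metre)          # 2e-07 N/m
```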
# Perpendicular lines
Find the line perpendicular to y=3x-7 that goes through (3,5) Guest Nov 16, 2015
#1 +18956 +25
Find the line perpendicular to y=3x-7 that goes through (3,5) $$\begin{array}{rcl} \text{Formula } \boxed{~ \begin{array}{lrcl} y = mx+b \\\\ \dfrac{y-y_p}{x-x_p} = m_{\text{perpendicular}} \\ m_{\text{perpendicular}} = -\frac{1}{m} \end{array} ~}\\\\ \end{array}\\ \begin{array}{rcl} P(x_p,y_p) &=& (3,5) \\\\ y = 3x-7 \qquad m = 3 \\ m_{\text{perpendicular}} = -\frac{1}{m} &=& -\frac{1}{3} \\ \dfrac{y-y_p}{x-x_p} = \dfrac{y-5}{x-3} &=& -\frac{1}{3} \\ y-5 &=& -\frac{1}{3}\cdot (x-3) \\ y-5 &=& -\frac{x}{3}+1 \\ y &=& -\frac{x}{3}+1+5 \\ y &=& -\frac{x}{3}+6 \\ \end{array}\\$$ heureka Nov 16, 2015
#2 0
That Is Wrong Guest Nov 16, 2015 edited by Guest Nov 16, 2015
#3 +26493 +12
heureka has given the correct answer to the question that was asked. If you think it is wrong you need to say why. Alan Nov 16, 2015
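heureka's algebra generalises directly to any line and point. A small Python helper (the function name is illustrative, not from any poster):

```python
# Line perpendicular to y = m*x + b through the point (px, py):
# slope is -1/m, and the intercept is chosen so the line hits the point.
def perpendicular_through(m, px, py):
    m_perp = -1.0 / m
    b_perp = py - m_perp * px
    return m_perp, b_perp

m_perp, b_perp = perpendicular_through(3, 3, 5)
print(f"y = {m_perp:.4f}x + {b_perp:.4f}")   # y = -0.3333x + 6.0000
```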
# for a chen prime p, what is the size of factors of p+2
Suppose the twin prime conjecture fails. Then, by Chen's theorem, there are infinitely many primes $p$ such that $p+2$ is a product of exactly two primes. It would be nice to know that as $p$ grows, so do both factors of $p+2$. In other words, if $S = \{s \in \mathbb{N}; (\exists l > s)(l\cdot s -2 \text{ is a Chen prime}) \}$, can we prove that $S$ is infinite?
• Yes. In the proof of Chen's theorem, he actually shows something a bit stronger than "product of at most two primes": each of the prime factors is bounded below by a small power of $p$, i.e. $p^c$ for some fractional constant $c>0$. – Erick Wong May 24 '16 at 7:59
• This answer on MO cites a recent reference that one may take $c=3/11$. – Erick Wong May 24 '16 at 8:06
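The empirical side of the question is easy to explore. A sketch (assuming SymPy) that scans for primes $p$ with $p+2$ a semiprime and prints the factorisation; note how often the smaller factor is 3, since $3 \mid p+2$ whenever $p \equiv 1 \pmod 3$:

```python
from sympy import factorint, primerange

# For primes p where p + 2 has exactly two prime factors (counted with
# multiplicity), show the factorisation of p + 2.
for p in primerange(3, 200):
    f = factorint(p + 2)                 # {prime: exponent}
    if sum(f.values()) == 2:             # semiprime
        factors = '*'.join(str(q) for q in sorted(f) for _ in range(f[q]))
        print(p, '+ 2 =', factors)
```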
Examveda
# A train passes a station platform in 36 seconds and a man standing on the platform in 20 seconds. If the speed of the train is 54 km/hr, what is the length of the platform?
A. 120 m
B. 240 m
C. 300 m
D. None of these
### Solution (By Examveda Team)
Speed $= 54 \times \frac{5}{18}$ m/sec $= 15$ m/sec
Length of the train $= (15 \times 20)$ m $= 300$ m
Let the length of the platform be $x$ metres.
Then, $\frac{x + 300}{36} = 15$
$\Rightarrow x + 300 = 540$
$\Rightarrow x = 240$ m
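The same arithmetic as a quick Python check:

```python
speed = 54 * 5 / 18                      # km/hr -> m/s: 15.0
train_len = speed * 20                   # passes a standing man in 20 s: 300.0 m
platform_len = speed * 36 - train_len    # passes the whole platform in 36 s
print(platform_len)                      # 240.0 m
```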
Electric flux: Problems with Solutions for AP Physics
Electric flux problems with detailed solutions are presented for uniform and non-uniform electric fields. Each solution is written as a self-contained tutorial, so the definition of electric flux and its formula are restated where needed.
Electric flux of uniform electric fields:
Problem (1): A uniform electric field with a magnitude of $E=400\,{\rm N/C}$ is incident on a plane surface of area $A=10\,{\rm m^2}$ and makes an angle of $\theta=30^\circ$ with it. Find the electric flux through this surface.
Solution: electric flux is defined as the amount of electric field passing through a surface of area $A$ with formula $\Phi_e=\vec{E} \cdot \vec{A}=E\,A\,\cos\theta$ where dot ($\cdot$) is the dot product between electric field and area vector and $\theta$ is the angle between $\vec{E}$ and the normal vector to the plane. In this problem, the field makes $30^\circ$ with the surface, so the angle between the electric field and the normal to the plane is $60^\circ$, and we get \begin{align*} \Phi_e &=EA\,\cos \theta\\&=400\times 10\times \underbrace{\cos 60^\circ}_{1/2}\\&=2000\quad {\rm N.m^2/C}\end{align*}
Problem (2): What is the magnitude of electric flux of a constant E of $4\,{\rm N/C}$ in the z-direction through a rectangle with surface area $4\,{\rm m^2}$ in the xy-plane?
Solution: To compute electric flux, we need the magnitude of the electric field, the area of the surface, and the angle between E and the normal to the surface. Here, $\vec{E}=4\,\hat{k}\,{\rm N/C}$ and area $A=4\,{\rm m^2}$ are given explicitly, but the angle isn't. We know that the xy-plane is directed up or down with unit normal $\hat{n}=(0,0,\hat{k})$ or $\hat{n'}=(0,0,-\hat{k})$, respectively. Thus, by choosing $\hat{n}$ the angle between E and the area vector is $\theta=0$ and the flux is given as \begin{align*} \Phi_e&=EA\,\cos \theta\\&=4\times 4\times \underbrace{\cos 0^\circ}_{1}\\&=16\quad {\rm N.m^2/C}\end{align*}
Problem (3): A cube of side 1 cm is placed within an electric field of 100 N/C oriented in the x-direction. What is the electric flux through each surface of the cube, and what is the net flux?
Solution: In this problem, $E$ and the area of each surface $A=L^2=(0.01)^2=10^{-4} \,{\rm m^2}$ are constant. To find the electric flux through each surface of a cube, first determine the normal vector (a vector perpendicular to the surface and pointing outward) to each surface by inspection. For simplicity, we denote the area of each surface with $A_{ij}$, where $ij$ are the sides of the surface. With this introduction, we have \begin{align*} \Phi_{xy} &=EA_{xy}\cos 90^\circ \\&=100\times 10^{-4} \times 0\\&=0\\ \Phi_{yz-I} &=EA_{yz}\cos 180^\circ \\&=100\times 10^{-4} \times -1\\&=-10^{-2} \,{\rm N.m^2/C}\\ \Phi_{xz} &=EA_{xz}\cos 90^\circ \\&=100\times 10^{-4} \times 0\\&=0\\ \Phi'_{xy} &=EA_{xy}\cos 90^\circ \\&=100\times 10^{-4} \times 0\\&=0\\ \Phi_{yz-II} &=EA_{yz}\cos 0^\circ \\&=100\times 10^{-4} \times 1\\&=10^{-2} \,{\rm N.m^2/C}\\ \Phi'_{xz} &=EA_{xz}\cos 90^\circ \\&=100\times 10^{-4} \times 0\\&=0 \end{align*} The net flux is determined by summing all the above fluxes: $\Phi_{net}=0$
Problem (4): A $2\,{\rm cm} \times 2\,{\rm cm}$ square lies in the xy-plane. Find the electric flux through the square for each of the following electric field vectors:
(a) $\vec{E}=(50\,\hat{i}+20\,\hat{j})\,{\rm N/C}$
(b) $\vec{E}=(50\,\hat{k}+20\,\hat{j})\,{\rm N/C}$
Solution: the electric flux of a uniform electric field through a flat surface is defined as the scalar product of the electric field and the area vector $\vec{A}=A\hat{n}$, where $\hat{n}$ is a vector perpendicular to the surface and points outward. The normal vector to a surface on the xy-plane is parallel to the z-axis, i.e. $\hat{n}=(0,0,1)$. Recall that the scalar product of two vectors $\vec{A}=A_x \hat{i}+A_y \hat{j}$ and $\vec{B}=B_x \hat{i}+B_y \hat{j}$ is given as $\vec{A}\cdot \vec{B}=A_x B_x+A_y B_y$.
(a) Therefore, the electric flux through a flat surface on the xy-plane is \begin{align*} \Phi_e &=\vec{E}\cdot \vec{A} \\&=(50\,\hat{i}+20\,\hat{j})\cdot (A\,\hat{k})\\&=50\,A\,(\underbrace{\hat{i}\cdot\hat{k}}_{0})+20\,A\,(\underbrace{\hat{j}\cdot\hat{k}}_{0})\\&=0 \end{align*}
(b) Again, we have \begin{align*} \Phi_e &=\vec{E}\cdot \vec{A} \\&=(50\,\hat{k}+20\,\hat{j})\cdot (A\,\hat{k})\\&=50\,A\,(\underbrace{\hat{k}\cdot\hat{k}}_{1})+20\,A\,(\underbrace{\hat{j}\cdot\hat{k}}_{0})\\&=50\,A=50\times (0.02)^2\\&=0.02\quad {\rm N.m^2/C} \end{align*} Note that the area $A=(0.02)^2=4\times 10^{-4}\,{\rm m^2}$ must not be forgotten.
Problem (5): A circle of radius $3\,{\rm cm}$ lies in the yz-plane and in the path of an electric field of $\vec{E}=(100\hat{i}+200\hat{j}-50\hat{k})\,{\rm N/C}$. Find the electric flux through the circle.
Solution: A vector perpendicular to the yz-plane, pointing outward or inward, is parallel to the x-axis, i.e. $\hat{n}=(1,0,0)$ or $\hat{n'}=(-1,0,0)$, so we choose the area vector as $\vec{A}=A\,\hat{n}$ where $A=\pi r^2$ is the area of the circle. Note: the normal vector on an open surface can point in either direction. In this case, we must choose one of them. By definition of electric flux as the dot product of $\vec{E}$ and area vector $\vec{A}$, we obtain \begin{align*} \Phi_e &=\vec{E}\cdot \vec{A} \\&=(100\,\hat{i}+200\,\hat{j}-50\,\hat{k})\cdot (A\,\hat{i})\\&=100\,A\,(\underbrace{\hat{i}\cdot\hat{i}}_{1})+200\,A\,(\underbrace{\hat{j}\cdot\hat{i}}_{0})-50\,A\,(\underbrace{\hat{k} \cdot \hat{i}}_{0})\\ & =100\,A\\&=100\,\pi\,(0.03)^2\\&\approx 0.283\quad {\rm N.m^2/C} \end{align*} In the above, the area of the circle is found as $A=\pi r^2=\pi\times (0.03)^2\approx 2.83\times 10^{-3}\,{\rm m^2}$.
Problem (6): A flat surface with an area of 20 square meters lies in the xz-plane where a uniform electric field of $\vec{E}=5\hat{i}+4\hat{j}+5\hat{k}$ exists. Find the electric flux through this surface.
Solution: the vector perpendicular (normal vector) to the xz-plane is parallel to the y-axis, so we choose $\hat{n}=(0,1,0)$. The definition of electric flux says that the scalar product of electric field $\vec{E}$ and area vector $\vec{A}=A\hat{n}$ gives the amount of electric field lines passing through a surface; therefore we have \begin{align*} \Phi_e &=\vec{E}\cdot \vec{A}\\&=(5\,\hat{i}+4\,\hat{j}+5\hat{k})\cdot (A\,\hat{j})\\&=4A\\&=4(20)=80\quad {\rm N.m^2/C} \end{align*} In the above we used the definition of the scalar product.
Problem (7): Find the electric flux through the surface with sides of $15\,{\rm cm}\times 15\,{\rm cm}$ shown in the figure below.
Solution: first find the angle between the electric field and the vector perpendicular to the plane. The normal vector to the plane is shown as upward. In this problem, the electric field makes an angle of $30^\circ$ with the plane. To find the angle with the normal, first coincide the tails of the two vectors and then determine the required angle, which is $\theta=90^\circ+30^\circ=120^\circ$.
Next, using the definition of electric flux, $\Phi_e=EA\,\cos \theta$, we get \begin{align*} \Phi_e&=EA\,\cos \theta \\&=150\times (0.15)^{2}\times \cos 120^\circ\\&\approx -1.69\quad {\rm N.m^2/C}\end{align*}
Problem (8): A square surface with sides of $1\,{\rm m}\times 1\,{\rm m}$ is located over the xy-plane, where a constant electric field with a magnitude of $200\,{\rm N/C}$ is present. The direction of the electric field vector is depicted in the figure. What is the total electric flux through the open surface?
Solution: since the electric field is constant and the surface is flat, we can use the electric flux formula $\Phi_e=EA\cos \theta$. The angle between $E$ and $A$ is shown in the figure below, so the electric flux through the surface is \begin{align*} \Phi_e &=E\,A\,\cos \theta\\&=200\times (1)^2 \times \cos (180^\circ-72^\circ)\\&=-61.8\quad {\rm N.m^2/C}\end{align*} The negative electric flux indicates that $\vec{E}$ and the normal vector point in opposite directions.
Problem (9): The electric field intensity at all points in space is given by $\vec{E}=\sqrt{3}\hat{i}-\hat{j}\quad \Big({\rm \frac Vm}\Big)$. A square frame of side 1 meter is shown in the figure. The point N lies in the xy-plane. The initial angle between line ON and the x-axis is $60^\circ$. Find the magnitude of the electric flux through the area enclosed in the square frame LMNO.
Solution: first determine the normal vector to the surface LMNO. As shown in the figure, the perpendicular vector is $\hat{n}=\sin \theta (-\hat{i})+\cos \theta (+\hat{j})$ where $\theta$ is shown in the figure. Now using the definition of electric flux, $\Phi_e=\vec{E}\cdot\vec{A}$ where $\vec{A}=A\,\hat{n}$ is the area vector, we get \begin{align*} \Phi_e &=\vec{E}\cdot\vec{A}\\&=(\sqrt{3}\hat{i}-\hat{j})\cdot \Big(A\,\sin \theta (-\hat{i})+A\,\cos \theta (+\hat{j})\Big)\\&=-A\sqrt{3}\sin 60^\circ (\underbrace{\hat{i}\cdot \hat{i}}_{1})-A\cos 60^\circ\,(\underbrace{\hat{j}\cdot\hat{j}}_{1})\\&=-A\sqrt{3}\,\frac{\sqrt{3}}2  - A\,\frac 12\\&=-2A\\&=-2(1 \times 1)\\&=-2 \quad {\rm V.m}\end{align*} The negative value of electric flux indicates that the electric field and the normal vector are in opposite directions.
Electric flux of non-uniform electric fields:
Problem (10): A non-uniform electric field is given by the expression $\vec{E}=ay\hat{i}+bz\hat{j}+cx\hat{k}$ where $a,b,c$ are constants and $\hat{i},\hat{j},\hat{k}$ are unit vectors in the $x,y,z$ directions, respectively. Determine the electric flux through a rectangular surface in the xy-plane, extending from $x=0$ to $x=w$ and from $y=0$ to $y=h$.
Solution: The electric flux of a non-uniform electric field through any surface is defined in integral form as ${\Phi }_e=\int_S{\vec{E}\cdot\hat{n}dA}$ where $S$ stands for the surface we are integrating over, $\hat{n}$ is the unit vector normal to the surface and $\vec{E}$ is the electric field over the surface. In this case, since the surface lies in the xy-plane, the unit vector normal to it is $\hat{n}=\hat{k}$ \begin{align*} {\Phi }_e&=\int_S{\vec{E}\cdot \hat{n}\,dA}\\ &=\int{\Big(ay\hat{i}+bz\hat{j}+cx\hat{k}\Big) \cdot (dx\,dy \hat{k})} \\ &=c\,h\,{\Big(\frac{1}{2}x^2\Big)}^w_0\\ &=\frac{1}{2}chw^{2} \end{align*}
Problem (11): The cubical surface of side length $L=12\ {\rm cm}$ is shown in the electric field $\vec{E}=\left(950\,y\,\hat{i}+650\,z\,\hat{k}\right)\ {\rm V/m}$. Find the electric flux through the top face of the cube.
Solution: By definition, the electric flux passing through any surface $A$ is the number of field lines penetrating it. In mathematical form, $\Phi_e=\int_S{\vec{E}\cdot \hat{n}dA}$ where $\hat{n}$ is the unit vector normal to the surface $A$. In this problem, the normal vector is parallel to the z-axis, that is $\hat{n}=\hat{k}$. So \begin{align*} \Phi_e&=\int{\left(950\,y\, \hat{i}+650\, z\,\hat{k}\right)\cdot\hat{k}\ dA}\\ &=\int{\Big(950\, y\,(\underbrace{\hat{i}\cdot \hat{k}}_{0})+650\, z\, (\underbrace{\hat{k}\cdot \hat{k}}_{1})\Big)\,(\underbrace{dx\,dy}_{dA})}\\ &=650\,(0.12)\int{dxdy} \end{align*} where the last integral is the area of the surface being integrated over. Thus the total flux through the given surface is \begin{align*} \Phi_e &=650(0.12)L^2\\&=650\times (0.12)^3\\&=1.1232\quad {\rm N.m^2/C}\end{align*}
Problem (12): A hemispherical shell of radius R is placed in an electric field E which is parallel to its axis. What is the flux ${\Phi }_e$ of the electric field through the shell?
Solution: By definition, the electric flux passing through any surface with area element $dA$ is the integral of the scalar product of the normal component of the electric field and the area of the surface, that is ${\Phi }_e=\int_S{\vec{E}\cdot \hat{n}\,dA}$ where $\hat{n}$ is the unit vector normal to the surface and $S$ is the surface over which the integral is evaluated. In the case of a hemisphere or sphere, the unit normal is along the radius, i.e. $\hat{n}=\hat{r}$. Since $\vec{E}$ is perpendicular to the plane of the great circle of the hemisphere, the scalar product of $E$ and $\hat{r}$ is $E\cos\theta$. The polar angle ranges from $\theta=0$ to $\theta=\pi/2$. Therefore, \begin{align*} {\Phi }_e&=\int^{\frac{\pi}{2}}_0{\vec{E}\cdot \hat{r}dA} \\&= \int^{2\pi}_0{d\phi}\int^{\frac{\pi}{2}}_0{E\,\cos\theta\ R^2\,\sin\theta\,d\theta}\\ &= 2\pi ER^2\int^{\frac{\pi}{2}}_0{\underbrace{\sin\theta\,\cos\theta}_{\frac{1}{2}\sin 2\theta}d\theta}\\ &=2\pi ER^2\left(\frac{1}{2}\right){\left(-\frac{1}{2}\cos 2\theta\right)}^{\frac{\pi}{2}}_0\\ &=-\frac{1}{2}\pi ER^2\left(\cos\pi-\cos 0\right)\\&=\pi ER^2 \end{align*} In the above, $dA=R^2\,\sin\theta\,d\theta\, d\phi$ is the area element of the sphere.
Alternate Solution: All of the electric field lines through the circle at the bottom of the hemisphere pass through the area of the hemisphere. So by calculating the flux through the circle, one can find the flux through the hemisphere, but the important thing to remember is that, by convention, the flux through the circle is incoming and is the negative of the outgoing flux through the hemisphere, that is ${\Phi }_e\left(circle\right)=-{\Phi }_e(hemisphere)$. Therefore, by definition of the electric flux through a surface, we get \begin{align*} {\Phi }_e \left(circle\right)&=\vec{E}\cdot \hat{n}\,A\\&=E\,(+\hat{k})\cdot (-\hat{k})\,(\pi R^2)\\&=-\pi ER^2\end{align*} Note: the normal vector $\hat{n}$ is always defined in the outward direction of the surface.
Problem (13): A square of side $a$ lies in the xy-plane. The electric field crosses the square in the z-direction as $\vec{E}=\frac {E_0}{a}\,y\,\hat{k}$. What is the electric flux through the surface as shown in the figure?
Solution: remember that the electric flux is a surface integral. Take an infinitesimal strip of width $dy$ and length $a$ so that the electric field over it is constant. Consequently, the area vector becomes $d\vec{A}=a\,dy\,\hat{k}$.
Therefore, the integral form of the definition of electric flux gives \begin{align*} \Phi_e&=\int_S{\vec{E}\cdot\hat{n}\,dA}\\&=\int{(\frac {E_0}{a}\,y\,\hat{k})\cdot (a\,dy\,\hat{k})}\\&=\frac {E_0}{a}\,a\int_{0}^{a}{y\,dy}\\&=\frac {E_0}{a}\,a\,\Big(\frac 12 y^{2}\Big)_0^{a}\\&=\frac 12 E_0 a^2 \end{align*} Note: for a non-uniform electric field, the integral definition of electric flux must be used.
Problem (14): A rectangular flat surface is placed on the xy-plane. In this region of space, there is a non-uniform electric field $\vec{E}=E_0 x^2 \hat{k}$, where $E_0$ is a constant. What is the electric flux through the rectangular surface?
Solution: Since the electric field is not constant over the surface, the integral definition of electric flux must be used. To use this method, we must first construct an area element such that the electric field is constant across it. In this problem, $\vec{E}$ is constant along the y-direction, so we choose a strip with area $dA=a\,dx$. Thus we get the net electric flux through this open surface as below \begin{align*} \Phi_e&=\int_S{\vec{E}\cdot \hat{n}\,dA}\\&=\int_0^b{\Big(E_0\,x^2\hat{k}\Big)\cdot \hat{k}(a\,dx)}\\&=E_0\,a\int_0^b{x^2 dx}\\&=aE_0 \Big(\frac 13 x^3\Big)_0^b\\&=\frac 13 aE_0\,b^3 \end{align*}
All of the above electric flux problems are suitable for high school and college students. In all these problems, we used the direct definition of flux to compute it. There is also another method, Gauss's law, which often gives the electric flux more easily.
Date Created: 10/24/2020 Author: PhysicsExams
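The two non-uniform integrals above can be checked symbolically. A sketch for Problems 13 and 14, assuming SymPy is available:

```python
from sympy import integrate, simplify, symbols

E0, a, b, x, y = symbols('E0 a b x y', positive=True)

# Problem 13: E = (E0/a)*y in z, strips dA = a*dy, y from 0 to a
flux13 = integrate((E0 / a) * y * a, (y, 0, a))
# Problem 14: E = E0*x**2 in z, strips dA = a*dx, x from 0 to b
flux14 = integrate(E0 * x**2 * a, (x, 0, b))

print(simplify(flux13), simplify(flux14))   # E0*a**2/2  a*E0*b**3/3
```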
## The Annals of Probability
### Convergence Rates for a Class of Large Deviation Probabilities
Stephen A. Book
#### Abstract
For a sequence $\{X_n: 1 \leqq n < \infty\}$ of independent, identically distributed random variables with moment-generating functions, a 1952 theorem of Chernoff asserts that $n^{-1} \log P(S_n \geqq \lambda n) \rightarrow \log \rho$, where $S_n$ is the $n$th partial sum of the $X_k$'s, $\lambda > 0$, and $\rho$ is a constant depending on $\lambda$ and the distribution of $X_1$. A 1969 theorem of Sievers, as strengthened by Plachky in 1971, established the convergence of $n^{-1}\log P(W_n \geqq z_n)$ to a constant, where the $W_n$'s have moment-generating functions and belong to a class of random variables more general than partial sums, and the $z_n$'s are numbers such that $n^{-1}z_n \rightarrow \lambda > 0$. In a format related to that of Sievers, Bahadur in 1971 analyzed the behavior of $n^{-1}\log P(W_n \geqq z_n)$ in situations when it may not converge to a constant. The goal of the present article is to extend the theorems of Chernoff, Sievers, and Bahadur in the direction of obtaining convergence rates (to 0) of the large deviation probabilities $P(W_n \geqq z_n)$ where the $z_n$'s are numbers such that $n^{-\frac{1}{2}} z_n \rightarrow \infty$. The method of proof is based on the proof of Chernoff's theorem given, in passing, in a 1960 paper of Bahadur and Ranga Rao.
#### Article information
Source: Ann. Probab., Volume 3, Number 3 (1975), 516-525.
Dates: First available in Project Euclid: 19 April 2007
https://projecteuclid.org/euclid.aop/1176996357
Digital Object Identifier: doi:10.1214/aop/1176996357
Mathematical Reviews number (MathSciNet): MR378057
Zentralblatt MATH identifier: 0312.60015
# Plasma Current
## Main Question or Discussion Point
Hello all, Say we ignore plasma instabilities and Lawson's criterion, what is the minimum current required (or how do I calculate it) to create fusion/plasma? For a deuterium-deuterium reaction, the potential barrier to overcome is 7.6*10^(-14) Joules and temperature T=3.6*10^9 K.
mfb Mentor
Current? Do you mean temperature? There is no minimum anything, the fusion rate just grows gradually as conditions get better. There are tables of fusion rates as a function of temperature. The dependence on pressure is easier to handle in a formula.
I mean how are current and temperature related? I know the concepts of reaction rate, Lawson's criterion, instabilities etc. Let's say, theoretically, that I need 20 kA plasma current. How do I find to what temperature this current corresponds? Perhaps convert both quantities to energy as in joules and compare them and then turn the joules to keV to find my temperature?
mfb Mentor
I mean how are current and temperature related?
They don't have a specific relation.
Perhaps convert both quantities to energy as in joules and compare them and then turn the joules to keV to find my temperature?
That does not make sense.
Ok, can you explain to me how, when trying to achieve fusion, the Ohmic input heating via a current induced by a transformer is said to raise the plasma to temperatures of 4-5 keV?
mfb Mentor
It is done that way in some fusion reactors, and differently in others. That does not imply a special relation between temperature values and current values. In a plasma, ohmic resistance usually decreases when temperature rises - just the opposite of most metals. Therefore, at some point the maximum current is what the power source can deliver, but increasing the current will not heat the plasma much further. The exact quantities depend on a lot of things - plasma composition, dimensions, density. In a reactor this point is too low to create enough fusion for net energy gain, so additional heating methods are applied.
Yes I know that. And I am aware of other limitations as well. The only thing I don't understand is the current-plasma heating concept. The plasma works as a one-turn secondary winding for the transformer. So it is subjected to whatever current is coming from the system. But how is it actually heated? Any thoughts on the math-physics involved?
In a tokamak the plasma acts as the secondary winding of a transformer; in a stellarator it doesn't, so the other methods apply. On a low level: the current creates the plasma in a tokamak: the transformer creates a strong EM field, which strips some electrons from their nuclei. Those get accelerated, collide with other atoms and molecules, strip electrons from them too, which also get accelerated, etc. etc. This is just an electrical discharge, which is easy to describe in words but can be rather complicated to describe mathematically because of instabilities. Is this what you wanted to know or did I misunderstand the question?
Yes I know that. And I am aware of other limitations as well. The only thing I don't understand is the current-plasma heating concept. The plasma works as a one-turn secondary winding for the transformer. So it is subjected to whatever current is coming from the system. But how is it actually heated? Any thoughts on the math-physics involved?
If you drive a current through a plasma it will heat up due to the resistance of the plasma.
The Ohmic heating power density is $\eta J^2$. This is the same as what happens in a wire: if you drive current through a wire, the resistance of the wire will produce heat at the rate $P=I^2 R$. The resistivity of the plasma actually depends on the temperature. One common model of the resistivity is the Spitzer model, which states that $\eta = \eta_0 T^{-3/2}$. The Spitzer model is pretty good for colder, collisional plasmas, but it breaks down for hotter, collisionless plasmas. However, what you want to know is the temperature of the plasma. If you want to calculate the temperature of any object you have to perform a thermal transport analysis. Transport calculations require multiple pieces of information. You need to know the heating sources (in your case this is Ohmic power), you also need to know the boundary conditions (what is the temperature at the edge of your plasma), you need to know all the channels of thermal transport (in a plasma you have a collisional heat flux, a neoclassical heat flux, a turbulent heat flux, a radiative heat flux, etc.) and you need to know the geometry of the object you're studying (in a plasma this is defined by the equilibrium magnetic field).
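To put rough numbers on the $T^{-3/2}$ scaling discussed above, here is a Python sketch. The coefficient eta0 is an assumed illustrative value, not a vetted formulary constant; consult the NRL plasma formulary for real coefficients:

```python
# Ohmic heating power density P = eta * J^2 with Spitzer-like resistivity
# eta = eta0 * T**-1.5 (T in eV). eta0 below is an assumption of a
# plausible order of magnitude for a hydrogen plasma.
eta0 = 1.0e-4                      # Ohm*m*eV**1.5 (assumed)
J = 1.0e6                          # current density, A/m^2

for T_eV in (10, 100, 1000):
    eta = eta0 * T_eV ** -1.5      # Ohm*m
    P = eta * J ** 2               # W/m^3
    print(T_eV, eta, P)
# Heating per unit volume drops by ~1000x from 10 eV to 1 keV,
# which is why ohmic heating alone stalls at higher temperatures.
```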
# mathaccent skewchar with XeTeX
How can I get the dots in the correct position:
\font\test="XITS Math:script=math;mapping=italic" \skewchar\test=127 \XeTeXmathchardef\beta="0"1β \def\ddot{\XeTeXmathaccent"7"1"0308} \textfont1=\test $\ddot\beta$ \bye
vs. default CMI: $\ddot\beta$\bye:
- That's the 'correct' position. – Leo Liu Jul 2 '11 at 17:10
- Just to let you know, I finally managed to get to the root of this, and it should be fixed in the XeTeX version of TeX Live 2012. – Khaled Hosny Apr 13 '12 at 22:20
- @khaled Awesome! Thanks for letting me know! :-) – morbusg Apr 14 '12 at 8:25
Looks like a XeTeX bug; LuaTeX gives the correct result: \input ifxetex.sty \ifxetex \XeTeXmathchardef\beta="0"1"1D6FD \def\ddot{\XeTeXmathaccent"7"1"0308} \else \Umathchardef\beta="0"1"1D6FD \def\ddot{\Umathaccent"7"1"0308} \fi \font\test="XITS Math:script=math" \textfont1=\test $\ddot\beta$ \bye When run with luatex I get (the same with MS Office 2007): BTW, \skewchar has no use in OpenType math fonts, so setting it makes no difference. Also, the /I in your definition makes no sense either, since XITS Math comes only in regular style and there is no separate italic font; {Xe,Lua}TeX will waste time searching for a non-existent font and then fall back to the regular one. Update: The XeTeX bug has been fixed in the 0.9998 (TeX Live 2012) version.
- Thanks, I noticed the excess "/I" after posting, but was lazy updating the question. – morbusg Jul 2 '11 at 19:38
- I get wrong results with LuaTeX, Version beta-0.63.0-2010091123 (TeX Live 2010), correct with LuaTeX, Version beta-0.70.1-2011062107 (rev 4277) (TeX Live 2011 test). – egreg Jul 2 '11 at 19:40
- @egreg, IIRC 0.63 had several math accent related bugs that got fixed in some later version. – Khaled Hosny Jul 2 '11 at 19:50
- I see; it's quite surprising that Asana Math doesn't give problems with XeTeX, while XITS Math does. – egreg Jul 2 '11 at 20:00
- @egreg: not much of a surprise to me, it is not the first time I hit some obscure bug only triggered by my fonts that does not show anywhere else; if anyone knows a version of XITS that does not show this, I may try to see what change I did that might have caused it. – Khaled Hosny Jul 2 '11 at 20:05
It seems to be a bug in XITS Math, as with Asana Math or Latin Modern Math it comes out right (and with no setting of the \skewchar). Here is with Asana Math; I've added char U+1D5A0 for comparison.
- Thanks; I could've sworn it was working correctly with an earlier version. I'll wait a while before accepting, if you don't mind. – morbusg Jul 2 '11 at 17:44
- I don't remember having seen this before; however one should recall that "XITS Math" is just an interim release. – egreg Jul 2 '11 at 17:47
- Oh, almost forgot: Is there something I can do to correct this with TeX (because I have another font with the same problem)? – morbusg Jul 2 '11 at 18:03
- $\skew3\ddot\beta$ seems to work. – egreg Jul 2 '11 at 18:14
Perhaps you should also try this one: In the preamble put: \newcommand{\std}[1]{\overset{\:\mbox{\huge .\!.}}{#1}} in the text put \std{\beta} With this new command for the second time derivative you may easily control the location of the double-dot. For example, you may replace \: by \; or \, etc.
DOSBox 0.65 ... ftw
JustNick 07-14-2009, 03:35 PM
qwerty12 07-14-2009, 03:45 PM
I used a download manager (orbit) to get it. Took me an hour but at least I got it before ukki who had the download running for 8 hours. :p :D
JustNick 07-14-2009, 03:48 PM
8 hours??? :D Ok, they give away an old game... but did they really think people were still using analog < 33Kbps connections? :confused:
ukki 07-14-2009, 04:50 PM
That's Finland for you, I think there was a polar bear standing on my internet cable or the penguins managed to somehow break the cable :( EDIT: but it's not like I don't have the original box and cd so I didn't feel like going after the bear.
JustNick 07-14-2009, 05:18 PM
Dude, penguins live only in the southern hemisphere (and maybe in some zoo too...) :D This is going right to fail blog :D
jself 07-14-2009, 10:24 PM
So I've got Fallout copied over from a PC DOSBox install. I've got the 'Humongous' installation, so all I have to do is mount C and run Fallout.exe, no CD required. Copied over to the tablet and it says the master DAT couldn't be found, insert CD. Dunno why there'd be any difference? Also I know Fallout may not run well on the tablet but let me be curious :p
07-14-2009, 11:43 PM
You'll probably have to make a .bat file (I think) or add a command at the end of your dosbox.conf file [autoexec] and mount your game as a virtual CD. Something along the lines of this: mount d e:\ -t cdrom mount d c:\dosgames\fallout -t cdrom This is a wasted effort on your behalf trying to run this though... It's going to run so chunky that it will feel like you're playing Fallout, the board game. *lol* Good luck!
jself 07-14-2009, 11:56 PM
The thing is I can't type anything, N800 and I couldn't get the xkbd to install, something about libpurple... god I can't remember. I'm using Rubybox but of course I can't do CD-ROMs there
07-15-2009, 12:08 AM
You're not going to be able to do too much without xkbd. Ukki's Rubybox should be packaged with xkbd if you installed it correctly. Otherwise, you'll need to go to ArnminS' site and manually install it yourself with the files he has available for download. Xkbd (http://pupnik.de/xkbd.html)
jself 07-15-2009, 01:24 AM
*EDIT* Got libxpm4 installed and xkbd. Hate these damn hassles.. Here was my problem with xkbd http://talk.maemo.org/showthread.php?p=273582 can't get libxpm4 to install
jself 07-15-2009, 02:14 AM
Well going by your instructions but tweaked a bit... DOSBox says MOUNT C "/media/mmc1/DOS/FALLOUT/" when you run a setup through RubyBox... so I put in the bat file similar syntax for CD-ROM... mount d "/media/mmc1/DOS/FALLOUT" -t cdrom fallout.exe and it still says "Could not find the master datafile. Please make sure the FALLOUT CD is in the drive and that you are running FALLOUT from the directory you installed it to." aaargh.
07-15-2009, 03:31 AM
I was in your boat more than once.
07-15-2009, 03:31 AM
I was in your boat more than once. It's a turd sometimes for a newbie learning to install something that seems like it should be simple enough to do. If I remember correctly, it took me an hour the first time to get everything together for xkbd to be installed. Six months later, when I reflashed my Tablet, it took exactly the same darn time. :D
I think the solution is to have that file on your memory card and install it directly from Xterm. So download the libxpm4 file from the site mentioned previously (you can rename the file if it will make things easier for you), and then transfer it right onto your external memory card. In Xterm:
cd /media/mmc2
dpkg -i libxpm4.deb (or whatever the full filename is, then press Enter)
That should get you over the hump. Even though you'll never get Fallout working, at least you'll have a virtual keyboard. :) For some interesting keyboard designs, you can check out an old post I made. Best of luck.

jself 07-15-2009, 02:32 PM
I've got xkbd installed and it works, although there was some error about umm... something unsupported. But it works. Sorry, I don't feel like firing my tablet up right now! Anyway, I noticed that Daggerfall is slow enough that I haven't gotten into the 3D world yet, so I'm likely not going to bother with Fallout. Thanks though!

elschemm 07-21-2009, 08:57 PM
Well, I almost found a really good game to play with Dosbox. Wizardry 1 (off the archives CD) needs things to be slowed down. Running emulated hardware does just that. (I suppose I should mention that I am using a N810, I have .73-3, and got the nokia.sys and mapper file from page 41 of this thread.)
The game plays almost perfectly. One small, really annoying bug: for some reason the '1' key (Fn-q) causes Wizardry to exit (with a 'Thank You for playing Wizardry' message, so I don't think it's a crash per se). I do not have this problem with the same executable running on DosBox under Windows. I hooked up an external USB keyboard, and the '1' key works perfectly there. The rest of the numbers (at least 2-6) work fine in Wizardry. The '1' key works fine from the DOS command prompt. If I remove the 'keyb nokia.sys us' line from my autoexec.bat file, I lose the ability to type '1' with the Nokia HW keyboard, but Fn-q still causes the program to exit.
Any ideas? Is it possible that Dosbox is using that sequence for something else internally? [Like an extra Ctrl-C or ESC or something?]
Schemm

07-22-2009, 06:05 AM
CTRL-q would be "Quit", correct? Maybe you have set "1" as a hotkey within the game. Well, I've got some free time today and was going to restore Dosbox and Rubybox back up with the latest updates. By the way, I always liked Wizardry. In fact, maybe you should look at the translation patches I made for the Gameboy Color versions. Wizardry Trilogy (http://www.angelfire.com/goth/patches/wizardry.html) You can run them under the Gameboy Color emulator for the Tablet. Anyway, I'll look into it and hopefully post back something tomorrow.

javispedro 07-22-2009, 06:16 AM
This is known, and I'm working on it. I made the unfortunate decision of mapping the FN key to AltGr, which of course I'm now regretting because of the behavior you described. I assume pressing Alt+Q on the external keyboard makes the game quit too.
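A side note for anyone chasing similar key mysteries: you can watch what a hardware combination emits at the X level before DOSBox ever sees it. A hypothetical session (assumes the standard X utility xev is installed on the tablet):

xev | grep -A2 KeyPress
# press Fn-q in the xev window: if ISO_Level3_Shift (AltGr) shows up
# alongside q, the game is effectively receiving Alt+Q, matching
# javispedro's explanation above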
javispedro 07-23-2009, 07:39 AM
Ok, 0.73-6 is up on extras-devel; or download and install with dpkg (application manager won't install it since I've dropped it from user/): dosbox_0.73-6_armel.deb (http://repository.maemo.org/extras-devel/pool/diablo/free/d/dosbox/dosbox_0.73-6_armel.deb). You'll also need the newer nokia.sys (http://cosas.javispedro.com/dosbox/nokia.sys) if you're using it. For non-N810 users there should be no changes. If there are any, bug me. This only maps the N810 FN key to the "backquote" key instead of the right ALT key, so keyb can now handle it without interfering with applications.
I've been experimenting with a hildon-input-method compatible DOSBox now that I've got some minor experience tinkering with microb. The main problem is that DOS applications read keyboard input through two methods: directly through the keyboard controller (method a, which of course gives key scancodes only), and through the BIOS (method b, which gives both scancode and ASCII value). hildon-input-method sends utf8 strings instead, like "a", "á", "A", or even full words like "absolutely" (produced by the autocompleter).
In order to make applications using (a) happy, I'd have to convert each utf8 char back into a scancode. "á"-like chars would need to be converted into the appropriate dead key presses, etcetera. This would be codepage, layout, and locale dependent: an ugly mess. To make (b) applications work, it's just a matter of choosing the appropriate codepage and then converting utf8 chars to that codepage. Easier, but it may limit compatibility.
There's also the problem of the hildon-input-method window. There's not enough space to even show the hw keyboard one without hiding part of the dosbox window, and from what I've seen so far, scaling is SLOW. So it's either partially hide the dosbox window when the virtual keyboard is up, or make it slow. If you think I'm wrong, really want that feature, or have an idea, please don't hesitate to reply.

elschemm 07-30-2009, 12:22 PM
Thanks for the new version. Works good for Wizardry! Well, after I slow down the CPU! Would you be willing to post a link to your version of the source code for dosbox? Also to whatever you are using to set up the nokia.sys file? I'm curious how you handle the keyboard issues in there. Thanks.

javispedro 07-30-2009, 12:47 PM
Explanation: I rewrote the DOSBox "keyb" based on the FreeDOS keyb documentation and my own understanding of the specification, because the original DOSBox keyb was missing some features, like User Modifiers support. Then I used the KC compiler from FreeDOS (recompiled in Linux instead of DOS, easy since it is standard C) to build the nokia.sys from the following source key language files:
ptes.key (http://cosas.javispedro.com/dosbox/ptes.key) (Portuguese-Spanish layout)
us.key (http://cosas.javispedro.com/dosbox/us.key) (US English layout)
The FN key is hardcoded in dosbox to the backquote key, plus some other mappings I don't remember right now for the Chr and the Menu key. You can get the pristine DOSBox source & patches in the Debian source package, available at http://repository.maemo.org/extras-devel/pool/diablo/free/source/d/dosbox/ (if you don't have debian or dpkg-source, contact me and I'll upload the patches somewhere).
Initially I wanted to push the new keyb upstream, but noticed that many just let their "patches" float around in the dosbox world (especially those whose use does not seem clear), so be it. BTW, I am waiting for permission to upload to the DOSBox garage project svn repository (asked the current maintainer a week ago: must be on holiday :)), since if I create yet another repository on my website, DreamHost is going to kill me ;)
Lately I've been reading the DOSemu source code (which I personally prefer, but unfortunately it's totally unportable), and it seems to have solved the input problem in a totally different & maybe interesting way.
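To make the method (b) trade-off concrete: collapsing UTF-8 input down to a single DOS codepage is exactly what iconv does, and the compatibility limit javispedro mentions shows up as soon as a character has no slot in the target table. A hypothetical shell experiment (CP437 chosen arbitrarily):

printf 'á' | iconv -f UTF-8 -t CP437 | od -An -t x1   # á has a CP437 slot (0xa0)
printf '€' | iconv -f UTF-8 -t CP437                  # fails: no € in CP437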
07-30-2009, 09:10 PM
Many thanks again, Javis!

08-05-2009, 04:49 AM
Um, where do I find a rubybox installer? I've been trying for nearly 2 days now, and all the links either don't work or I get the 'incompatible application package'. Starting to get kinda frustrated now....

urnass 08-05-2009, 01:01 PM
I believe I installed it using the Application Manager and the Maemo Extras or Maemo Extras Developer (MED) catalogs/repositories. Sorry, I can't remember which one, but activate both & it should come up in the list of available applications. If you do enable the MED, be sure to deactivate it after installing RubyBox, as not all the applications/updates are ready for primetime.

st5150 08-07-2009, 07:36 PM
Some basic questions: where is the config file to specify the auto mount dir on startup? Where should the .key file be placed?

javispedro 08-08-2009, 06:31 AM
DOSBox is on normal extras now too.
The config file is at ~/apps/DOSBox 0.73 Prefs.txt ; it should be autocreated at program startup with sane values. The .key files do not need to be placed anywhere. The nokia.sys file can be placed wherever you want, as long as it is accessible under DOSBox. To load it, use "keyb C:\path\to\nokia.sys us".

st5150 08-08-2009, 12:47 PM
Thanks for your efforts and the info, javispedro. One more question for you or anyone else. What's the best way to get the xkbd below working? I've come up with a method, but it's sloppy.
New working double keyboard.
http://img87.imageshack.us/img87/9582/keyboardhe2.png
First, there is no Shift key, since it would break the Xkbd program. However, there is a Caps Lock button, just below the red Backspace key. Now, the Caps Lock button tends to carry its flag when you close out the keyboard. Meaning, if you're typing in caps but the keyboard is all lower case, just press the Caps Lock button and close out the keyboard. The next time you fire Xkbd up, everything should be all spiffy like, and you should only have to do this once.
Sorry for such an incredibly small font size. Is there a way to customize how big the font is for each key? I'd like to use three different font sizes in total. I'm not finding how to do that, so any help on this would be really welcomed. Also, anyone care to draw up up, down, left, and right arrow keys? Each button is 26 by 28 pixels, I believe, and the image needs to be in .xpm format. I have no clue how to even begin doing that.
#!/bin/sh
xkbd -geometry +65536+65536 -k /media/mmc2/x2.xkbd &
xkbd -geometry +722+65536 -k /media/mmc2/x.xkbd
Well, post any responses on the design, or you can just pepper my stomach with tender kisses if you're feeling that grateful to me. :D Cheers.
Here are the two files. (attached)
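The two xkbd invocations above can also be wrapped in a tiny launcher so the on-screen keyboards go away when DOSBox exits. A sketch, reusing the .xkbd paths from the post:

#!/bin/sh
xkbd -geometry +65536+65536 -k /media/mmc2/x2.xkbd &
xkbd -geometry +722+65536 -k /media/mmc2/x.xkbd &
dosbox
killall xkbd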
javispedro 08-08-2009, 04:13 PM
I don't know the first thing about xkbd (I'm using a N810), but just wanted to say that if you don't have a hardware keyboard you don't need nokia.sys at all (since it just does Fn keys).

08-08-2009, 04:24 PM
Install Ukki's excellent Rubybox. Rubybox (http://wiki.maemo.org/RubyBox) It sounds like a lot of work, but it's actually quite simple, and you should have your keyboard set up within a few minutes after reading his wiki page.

08-09-2009, 10:49 PM
Hey ukki and Javis! Now it's throwing fits at me. :( If I type rbox in xterm, I get the following message:
/usr/lib/ruby/1.8/arm-linux-eabi/hildon.rb:11:in require': no such file to load -- gtk2 (LoadError)
from /usr/lib/ruby/1.8/arm-linux-eabi/hildon.rb:11
from rbox.rb:2:in require'
from rbox.rb:2
Any ideas on this? Thanks!

elschemm 08-28-2009, 01:21 PM
I've been trying to get 'Jones in the Fast Lane' working under DosBox. It's a mixed-mode CD (data and an audio track). I had to update the drivers that the disk includes to get it to work, but I managed to get a working .cue and .bin file for it. Those files work perfectly under Windows DosBox and an Ubuntu Dosbox. However, I get an MSCDEX error (MSCDEX: Failure: Path not valid.) when I try to use it on my N810. (imgmount d /path/to/JONES.cue -t iso) Interestingly, if I try to imgmount the .bin file, it works, but I only get the data (which is as per the DosBox wiki site). So, my guess is that either:
a) Cue files are not supported in the Maemo version of DosBox?
b) Something is wrong with the internal path parsing in the Maemo version of DosBox.
c) I am missing something horribly obvious.
Yes, I have tried renaming the cue and iso files to all lower case (thinking that might be the problem). I also tried hard-coding a full pathname to the bin inside the cue file. No joy. Any ideas? Or can anyone confirm either of my hypotheses? Thanks

08-28-2009, 05:44 PM
I had this running in Dosbox over a year ago and it wasn't that hard at all, if I remember correctly. I believe I ended up installing the game from my CD onto my Windows desktop (you won't want to install any music drivers since Dosbox on the Tablet can't handle the processing speed) and then transferred the whole folder right onto my memory card. From there I used Rubybox and tweaked the settings for best performance. I'm not seeing any reason to mount the entire CD like you're doing directly on the Tablet. Post back if you have any luck or have more questions. I can always hunt down my CD somewhere and try it again with a better explanation next time.
Edit: Does anyone need a simple explanation of how to get Xkbd installed? I broke open a new tablet a few days ago and getting Xkbd installed was quite the pain (I thought it was packaged with Rubybox, oh well), but anyway, I have a pretty simple solution for it now.

javispedro 08-28-2009, 09:22 PM
You're right, CUE files with audio tracks are not supported in this version because of the missing SDL_audio library (which is not ported AFAIK). If you really want the feature I can have a look at it, but I'd suggest doing what Addison says.

08-29-2009, 12:52 PM
Heya Javis! I'm having a little trouble getting nokia.sys properly placed and loaded in Dosbox upon initial startup. It's currently on my internal flash at /home/user/apps/nokia.sys --> right next to the DOSBox preferences file. So what command would I type at the end of the [autoexec] portion of the DOSBox 0.73 Preferences.txt file to get this working? Thanks!
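Back to elschemm's mixed-mode CD for a second, his two attempts side by side; per javispedro's answer, the audio-track cue cannot work on this build, so the data-only .bin mount is the practical fallback:

imgmount d /path/to/JONES.cue -t iso   # fails: the cue references an audio track
imgmount d /path/to/JONES.BIN -t iso   # works, data track only (no music)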
ukki 08-29-2009, 01:03 PM
It used to be bundled, but now that RubyBox is in extras, I can't depend on it anymore, since xkbd has no source available and therefore can't enter extras.

javispedro 08-29-2009, 03:14 PM
If you can, do it by adding the following under the [dos] section:
keyboardlayoutfile=/home/user/apps/nokia.sys
keyboardlayout=us
codepagefile=auto
codepage=auto
This way you can use Linux paths, and there's no need to add "keyb" to autoexec.bat.

08-29-2009, 08:18 PM
Okay. I think I'm seeing what the slight problem is here with getting the N800's hardware keys working correctly. Hey ukki, when you have two minutes, can you upload the mapper.txt on your site to make it available for download next to Pushwall's mapper?

javispedro 08-29-2009, 08:54 PM
Eh? Stop confusing me ;) The nokia.sys file is only useful for the N810, with FN and Chr keys.

08-29-2009, 08:59 PM
I thought the nokia.sys was for all hardware keys on both tablet models. Well, if it makes you feel any better, I consider myself yelled at. :)

08-30-2009, 10:52 PM
For those of you who would like to have the N800 hardware keys active and functional, ukki just uploaded his mapper file for them. You can find it under his download feature in Rubybox, and it works perfectly! :)

08-31-2009, 11:45 AM
Hey javis! Just one simple request in case you ever update Dosbox again. When it's not in full screen mode, all I can see is "DOSBox - OE" in the upper left of the window; before, it used to show the cycle speed. It was a very nice feature to have. For example, Death Gate required an exact cycle speed of 875 with a frame skip of 4 to make the voice acting perfect with no stuttering. It's hard to know what speed I'm at while testing if I can't see this value. It's no biggie, and if you can't get to this, that's perfectly fine. :) Cheers.

stacia 09-14-2009, 03:32 AM
Hey there, does anyone know if dosbox is supposed to crash if I change core to dynamic? I can't tell if this was ever integrated. I found a link to here: http://vogons.zetafleet.com/viewtopic.php?t=19193 which seems to suggest some work was being done on the ARM dynamic core. Will this be integrated into dosbox ever?
Also, a quick search couldn't find this answer: what are the buttons I press for scaling CPU/frameskip up or down? I have the n810 with the keyboard and would rather not use some crazy keyboard overlay app to achieve this.
And last but not least, does anyone have just a very general dosbox conf lying around? I know basic stuff, but I always figured that part of the hard part of making a conf file is that everyone's box is different, which isn't the case with the Nokia. What's a good cpu cycle and frameskip speed? What's a good trick to keep the sound from skipping and slowing down the game? I can't seem to get rubybox to download any saved config files.
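Since stacia asked for a general-purpose conf: a sketch assembling values that come up in this thread (875 cycles / frameskip 4 were one poster's Death Gate numbers, not universal truths), in dosbox.conf syntax:

[dos]
keyboardlayoutfile=/home/user/apps/nokia.sys
keyboardlayout=us

[cpu]
core=auto
cycles=875        # fixed cycles; tune per game rather than using "max"

[render]
frameskip=4

[mixer]
prebuffer=400     # a larger prebuffer helps against sound skipping

[sblaster]
sbtype=none       # reported later in the thread as the biggest speed win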
javispedro 09-14-2009, 04:40 AM
An older version of it was integrated in 0.73; the build in extras uses risc_armv4le-thumb-niw. AFAIU, it's only enabled if either:
- "core" is set to "auto" and protected mode is enabled, or
- "core" is set to "dynamic".
The scaling buttons are fully programmable. You can launch the key mapper with "dosbox -startmapper" and map them to the zoom -/+ keys.

ukki 09-14-2009, 02:55 PM
Was the server down, or is this problem still present?

09-15-2009, 12:21 AM
I realized about two months ago that all of the settings I uploaded to your site, Ukki, are probably incorrect. I had changed the default dosbox.conf file so I didn't have to keep making the necessary tweaks on each and every game. Anyway, for some reason those changes didn't carry over when uploading them, so all of those game configurations are probably useless. I might go back over them in a week or two and try to fix them up a bit. :)

Stereoprism 09-24-2009, 05:07 PM
I'm a newbie with an n800, trying to play Ultima 4/5/6 on dosbox. I might be missing something obvious, but how do I move around? The cursor keys aren't moving the avatar, and none of the xkbd keys work, either. I'd rather not spring for a hw bluetooth keyboard, but I definitely like the idea of playing Ultima in transit. Oh, also, I've been trying to play Ultima Underworld, but the mouse pointer is waaay off from where I'm poking the stylus. Any ideas? Thanks in advance.

09-24-2009, 06:43 PM
As for cursor weirdness, go to the advanced settings in Rubybox and change "sdl autolock" to false.

Stereoprism 09-24-2009, 07:25 PM
Which mapper file? I've downloaded both in the download feature, and have tried mappercsa and mapper72 with Ultima IV, and neither seems to work. As for the sdl autolock w/ UW, I changed it to false, but the mouse pointer is far to the left and slightly up from where I'm touching the screen with the stylus, still off.
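The two settings in play here, as they would appear in a dosbox.conf (sketch; Rubybox edits these for you under its advanced settings):

[sdl]
autolock=false
mapperfile=/home/user/.rubybox/.mappers/mapper72.txt
# individual keys can also be rebound with: dosbox -startmapper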
09-24-2009, 10:44 PM
Yep. Very good. "mapper72" is the way to go for all games and applications, unless you're playing around with some of Pushwall's suggestions in the Games that work well with Dosbox thread... then you would be using his instead.
The first time you tap the stylus on the screen, I match it up with wherever the cursor is at that time. Doing this, I never have an issue with the pointer behavior being off. If you still find that the stylus and pointer aren't quite matching up, drag the stylus to one corner of the screen so that it "catches up" to the mouse pointer. That should do the trick. If this doesn't work, then you most likely have some funky Dosbox settings. If you've got the cycle speed cranked too high or some ridiculous frame skipping, it could very well be the cause of such behavior.
Not sure why you're not seeing any movement with your avatar. There's also a Rightdosbox2 or Leftdosbox2 keyboard that you can place on the other side of the display screen. It has a "numlock" key that might activate the keypad. It also has the four cursor keys, which are represented by four circles. Know what? I'll try looking into this a little later tonight if I can find the time to do so, unless one of these suggestions ends up working for you. :) Cheers.

Stereoprism 09-24-2009, 11:21 PM
Yeah, still nothing. I tried Rightdosbox2 and using the circle keys, as well as NL and the keypad, and still no movement. The select button in the middle of my directional keys is working as "enter", but for some reason the directional keys aren't working at all. Would it help if I told you I installed xkbd via rubybox? I know I have libxpm4 installed, so that shouldn't be an issue. It's kinda frustrating; I feel like I'm right on the edge of exploring Britannia again, but the avatar has his feet glued to the grass : P
Your method for the mouse pointer in UW works fantastically; I can't believe I hadn't thought to do that. Now I just need to figure out if I can get it to not run like a slug. I'd still appreciate any help, and thanks again.

09-24-2009, 11:27 PM
Okay. I'll find some time later tonight to help you problem-solve this. Give me about 3 hours or so after the wife passes out. :) In the meantime, maybe you should have a look at these:
Aklabeth - Ultima 0 (http://pupnik.de/aklabeth.html)
xu4 - Ultima 4 RPG Engine (http://pupnik.de/xu4770.html)
Exult Ultima 7 RPG remake (http://pupnik.de/exult770.html)
Pentagram - Ultima 8 Engine (http://pupnik.de/pentagram.html)
I haven't installed any of these myself because it seemed a little too complicated to do so. If you "Search" the forums, however, there have been a few discussions on these games.

Pushwall 09-25-2009, 12:33 PM
I just tried the Ultima IV that you can download through RubyBox, since that is the one with enhanced graphics and music. The directional keys work fine for me on the map. I'm using the mapper72 file like Addison suggested. Here's a reference for all the key commands too, if that helps:

ukki 09-25-2009, 01:07 PM
Did you try rebooting your tablet? xkbd can sometimes really mess up the input.

Stereoprism 09-25-2009, 01:44 PM
I tried rebooting; still no movement. For what it's worth, I do have the upgraded in-program download version of U4, and here are the options I have checked in the rubybox settings:
Basic:
cpu cycles max
render frameskip 4
sdl fullscreen true
dos keyboardlayout auto
dos keyboardlayoutfile auto
mixer prebuffer 400
sdl autolock false
sdl mapperfile /home/user/.rubybox/.mappers/mapper72.txt
I tried it with and without the keyboardlayout options enabled; neither made a difference. : /

Pushwall 09-25-2009, 03:05 PM
FWIW, here are my settings that I have checked...
Basic:
cpu cycles max
render frameskip 2
sdl fullscreen false
mixer prebuffer 400
sdl autolock true
sdl mapperfile /home/user/.rubybox/mappers/mapper72.txt
I'll check when I have time and see if I can see why you're having problems.
UPDATE: Try the setting sdl fullscreen false when starting the game. (You can go fullscreen once the game is launched.) See if that helps. I just tried launching the game fullscreen and ran into your problem. When I start the game in a window, I don't have the problem.
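Side by side, the working (Pushwall) and non-working (Stereoprism) lists above differ in three places; in dosbox.conf terms, Pushwall's values are (sketch):

[sdl]
fullscreen=false   # start windowed; toggle fullscreen after launch
autolock=true

[render]
frameskip=2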
Stereoprism 09-26-2009, 02:37 PM
I tried starting w/ windowed mode... still no luck. I should also add that I have this problem with all of the dosbox games. My directional keys work fine with everything else, though. Ironically, if I could just kill some skeletons or orcs, it might relieve some frustration! Is it to the point of a necessary uninstall/reinstall of dosbox?

09-26-2009, 03:37 PM
Setting CPU cycles to max is a really bad idea. :) It's much better to set that value to a number, somewhere between 800 and 1800, I believe. Stereoprism, open up Xterm while you have a Xkbd keyboard on the screen and start pressing the number pad. Post back on your results. Let's start from there to better problem-solve what's happening on your end.

Stereoprism 09-26-2009, 04:13 PM
Okay, opened up rubybox, ran U4 in windowed mode, started xterm, and all the keys in xkbd seem to be working in xterm, including the directional/dot keys. It's only the directional keys in dosbox that aren't working. I did get one of the directional keys to work in Duke Nukum I: the down key fires his gun : P None of the others do anything, though.

09-26-2009, 04:19 PM
Okay. That's one step closer to figuring out what's wrong on your end. And you're sure that you have "mapper72" active with each game you're testing?

Stereoprism 09-26-2009, 04:37 PM
Yes, mapper72. It's just weird that only the directional keys aren't working.

09-26-2009, 06:08 PM
Okay. Next test. Have Rubybox point to any file in a game folder that's not an .exe file, causing it to crash and go to the DOS command prompt. Try using the keypad at that prompt and post back. Stupid question, but you don't have the joystick setting enabled, do you?

Stereoprism 09-26-2009, 06:14 PM
Okay, I tried that; same thing. I can type anything on the xkbd, but the directional keys still don't work. And no, I don't have any of the joystick options enabled. Maybe I just need to consider a bluetooth keyboard or connecting a keyboard through usb... though it seems goofy to use a full size keyboard on a handheld device : ) Though it might not matter if I can't get what I have now to work.

09-26-2009, 06:28 PM
So numbers 1 through 9 work correctly? I personally have yet to find a game that requires the directional keys. You're talking about the up, down, left and right buttons on a standard keyboard, correct? Just to be sure on this, you're having problems with the four buttons that have a circle on them in either the Rightdosbox2 or Leftdosbox2 xkbd keyboard?

Stereoprism 09-26-2009, 06:37 PM
Yep, as well as the directional keys on the upper left on the face of my n800. And yes, number keys 1-9 on xkbd work fine...
I did try the NL key, figuring it was numlock, and it doesn't allow the keypad to move the avatar. Plus, I'd just prefer my hardware directional keys to work.

09-26-2009, 06:41 PM
Okay. Give me a few minutes and I'll look into it. What game are you using again, so I can test this?

Stereoprism 09-26-2009, 06:45 PM

09-26-2009, 06:46 PM
Okay. I'll post back in a little bit. :)

Pushwall 09-26-2009, 06:49 PM
Let me throw in one suggestion... run XTerm, and from the command line type in rbox to run RubyBox. Now see if the directional keys work in the games.

09-26-2009, 07:36 PM
Think I might have an even better suggestion than Pushwall's. Okay. Press the hardware "Home" key on the tablet during your game. This should pull up a Dosbox tab thingie on the left side. Next, tap the stylus anywhere on the screen except for that tab area. This will close the tab. See if everything works after that, and post back.

Stereoprism 09-27-2009, 03:56 PM
Sorry, tried both methods, and neither suggestion worked : /

09-27-2009, 04:45 PM
Well shoot! I know that there are times when xkbd doesn't respond at all in Dosbox and the above trick immediately resolves the situation. I tried running Ultima IV, the upgrade version, but it's been crashing on me. What are your settings for this?

Stereoprism 09-27-2009, 05:00 PM
Basic:
cpu cycles 1400
render frameskip 2
sdl fullscreen false
mixer prebuffer 400
sdl autolock true
sdl mapperfile /home/user/.rubybox/.mappers/mapper72.txt

09-27-2009, 05:45 PM
Sorry, but do you also have a save file as well? I can look it over while watching football. :)

Pushwall 09-27-2009, 06:08 PM
Stereoprism, just for kicks, try this... copy my old /usr/share/dosbox directory to your internal memory (a copy of this directory can be obtained here (http://intermag.magnode.com/pushwall/usr_share_dosbox.zip)). Now see if your directional pad works when you run games through RubyBox. This was an initial fix for me a while back. If it doesn't work, just delete the /usr/share/dosbox directory from your internal memory.

Stereoprism 09-27-2009, 08:21 PM
Pushwall: how do I copy it to my internal memory? I'm a complete newbie at this. I've read instances where one has to put files in directories like that, but all I've seen is what's in the file manager. I can't browse the internal memory except what's in my documents, audio clips, etc. directories. This is what showed up in xterm while I ran U4:
~ $ rbox
^[[A^[[B^[[Sorry - server Keyboard map doesn't contain either 2 or 4 KeySyms per Keycode - unsupported!
Sorry - server Keyboard map doesn't contain either 2 or 4 KeySyms per Keycode - unsupported!
Sorry - server Keyboard map doesn't contain either 2 or 4 KeySyms per Keycode - unsupported!
DOSBox version 0.73
Copyright 2002-2009 DOSBox Team, published under GNU GPL.
---
CONFIG:Loading primary settings from config file /home/user/apps/DOSBox 0.73 Preferences.txt
CONFIG:Loading additional settings from config file /home/user/.rubybox/Roleplaying_Games/ultima_iv_-_upgrade_version.conf
ALSA lib seq.c:935:(snd_seq_open_noupdate) Unknown SEQ hw
ALSA:Can't open sequencer
MIDI:Opened device:none
MAPPER: Loading mapper settings from /home/user/.rubybox/.mappers/mapper72.txt
killall: dosbox: no process killed
killall: xkbd: no process killed
killall: dosbox: no process killed
~$

09-28-2009, 12:23 AM
Hey Pushwall! If you ask me, we should just give up on Stereoprism; he's probably a freak anyway. :) Okay. So looking over all of those errors you posted, they actually look about right. Strange as it might seem, as far as I know, they're nothing to worry about. So you should be good. I'm thinking it's got to be the game you're running, and there's something simple we're missing. Try this and we'll know for sure. Fire it up, create a quick character, and see if you can move him with the number pad as well as your hardware D-pad. I'm guessing that it should work.

Stereoprism 09-28-2009, 12:29 PM
I tried nethack (which seems really fun, btw), and it's the same problem. No directional movement on the hardware D-pad or the xkbd directional keys. I'm trying to use xarchiver to place Pushwall's directory files, but it won't let me extract into the main file library, claiming that I don't have the right permissions. I believe I've gained root access via xterminal by downloading rootsh and typing "gain root" in xterm. I should be able to extract to these directories, right?

Pushwall 09-28-2009, 02:32 PM
In XTerminal, after you have gained root access, see if you can type in these commands:
cd /usr/share
mkdir dosbox
If that works, then try using xarchiver to place the files into the directory /usr/share/dosbox

Stereoprism 09-28-2009, 02:48 PM
Okay, I've tried it, and xarchiver is still saying I don't have permission to extract to usr/share/dosbox : [ Should I activate red pill mode, or does that matter at all?

Pushwall 09-28-2009, 03:01 PM
Before you do that... I just remembered that the files in my zip file have the directory structure /usr/share/dosbox as part of the zip file. Try using xarchiver to place the files in this new zip file (http://intermag.magnode.com/pushwall/dosboxx.zip) into /usr/share/dosbox (this zip file doesn't have the file structure associated with it).

Stereoprism 09-28-2009, 03:42 PM
Yeah, it's still saying I don't have the right permissions to extract to that directory.

Pushwall 09-28-2009, 03:54 PM
Okay, my next suggestion is to install the file manager EmelFM2 (http://talk.maemo.org/showthread.php?t=26068&highlight=emelfm2). Once that is installed, you can use a card reader (if you don't already have one, they are cheap and found most anywhere) to transfer the individual files in that zip file from Windows to a directory on your tablet's media card, and then use EmelFM2 to copy those files from the media card to the /usr/share/dosbox directory. (BTW, I personally use WinSCP to transfer files back and forth between Windows and my tablet, but that's too complicated to get into right now. I do suggest you learn about WinSCP and use it, since it makes file transfers like this so much faster and easier.)
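For the record, everything Pushwall is walking Stereoprism through can also be done from a root shell in one go. A sketch, assuming rootsh is installed and the zip was unpacked onto the media card first (the mmc path is a guess):

sudo gainroot
cd /usr/share
mkdir -p dosbox
cp /media/mmc1/dosboxfiles/* /usr/share/dosbox/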
Stereoprism 09-28-2009, 04:32 PM
I extracted the files into a dir on my external SD card, and tried to move them over with emelfm2, but it's not letting me again. It seems like there's something still denying access. I don't know if a card reader would make any difference. I have a usb cable I use to connect to my pc at home, but I don't think I can get access to the root internal memory from a pc, can I?

Pushwall 09-28-2009, 04:54 PM
Yes, you can access the tablet's internal memory from a PC using WinSCP. That's one of its great features. Since you have the files over on your media card, you should be able to copy them over to the /usr/share/dosbox directory. One question... did you run emelfm2 while in root mode in XTerm? If not, try running emelfm2 in root mode and then copy the files. If this doesn't work and you still don't have access permissions, you can use emelfm2 to read and change directory permissions. In emelfm2, go to the /usr/share directory and tap on the dosbox directory. Press the menu key (the middle one on the left, below the Escape key) and it will pull up a menu. Select Actions, and then select Change permissions. You'll see what the current permissions are for the directory, with boxes you can check and uncheck to change the permissions.

Stereoprism 09-28-2009, 05:14 PM
I tried running emelfm2 w/ root enabled in xterm (didn't work), and then I tried changing permissions. When I bring up the change permissions window, it says I don't have authority to make any changes. I wonder why Nokia didn't just make the entire file system open to begin with...

Pushwall 09-28-2009, 05:34 PM
Okay, I'm running out of ideas, but let's try this. Since this is an access permissions problem, let's double-check your root status. Go into xterm and enable root. Type in:
whoami
It should say 'root'. If it says something else like 'user', then you aren't logged in as root.

Stereoprism 09-28-2009, 05:59 PM
Okay, it does indeed say "root" when I type in "whoami". I'm still unable to change permissions on the directory. btw, I realize just how ridiculous this is getting. I promise I'm not just making up problems to solve : P

Pushwall 09-28-2009, 07:30 PM
Okay, let's try this. Enable root in xterm, type in and run emelfm2, and then try to copy any file from your media card to the /usr/share directory. What happens?

Stereoprism 09-28-2009, 07:48 PM
Ok, I'll try that later tonight after I've recharged it. I'm at work right now and it's gone dead :P Maybe I should also state that I bought my n800 used, if that makes a difference.

09-28-2009, 08:18 PM
Hey Stereo, would it kill you just to reflash the whole darn tablet? :D The only thing left that I can think of is that you either had or have "Maemo Extras-Devel" added as one of your catalogs at some point in time. If that's correct, then you basically would have been upgrading to an older, broken version. Happened to me about a month ago. Chatted with ukki over at #knots on freenode and he had me delete something, but I forgot what. And with that, I'm out of suggestions. Take luck and care! *lol*
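Summarizing the permission checks from this exchange as commands (a sketch; only reach for chmod if the listing shows the mode really is the problem):

whoami                        # must print "root", not "user"
ls -ld /usr/share/dosbox      # inspect owner and mode
chmod 755 /usr/share/dosbox   # loosen it if needed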
Stereoprism 09-29-2009, 09:36 AM
It worked! I've always wanted to say gadzooks. I ran the file manager from xterm after gaining root, and now I'm able to use the directional keys. Thanks for all the effort, both of you. Very much appreciated.

Pushwall 09-29-2009, 11:48 AM
Haha, that's great! :cool: So was copying my dosbox directory into the /usr/share directory the thing that worked? I'm curious, since that was how I got the directional keys to work initially with the new version of DosBox. Or was it something else?

Stereoprism 09-29-2009, 12:12 PM
No, that was it, copying your files into usr/share/dosbox/. I have no idea how or why it worked, but I'm happy it did!

09-29-2009, 05:18 PM
I liked my simple solution better... all 10 of them. :) It's great to hear you got it working. One last suggestion, though. The default dosbox.conf file is not at all optimized to run on the tablet. You basically need to go through numerous options, both basic and advanced, to boost up little bits of speed here and there. Well, nice to see another Dosbox fan. For a while there, I thought it was only Pushwall and myself left. :)

Stereoprism 09-29-2009, 05:23 PM
I love dosbox, and I'm quite glad it's on my n800. Any links to instructions on boosting dosbox speeds? It should all be pretty standard, since we're looking at the same hardware, no matter what n8x0 you're looking at.

javispedro 09-30-2009, 06:39 AM
Just for the record, I am a DOSBox fan too (of course), only I use it for old applications more than games (thus the reason I'd like DOSemu better, but that's not possible).

10-01-2009, 12:31 AM
Well Stereoprism, the list of what to do in boosting speed can be rather long and extensive. The main thing to realize is that removing Soundblaster in Dosbox and installing a game without music support is the strongest method to enhance performance. I'll try to post several little suggestions in a couple of days.
Sorry, Javis, for not mentioning that you like DOS as well. How could I have forgotten the dude who compiled this for us? :D

Thor 10-01-2009, 12:57 AM
Hi all, I've been reading this thread with interest and hoping the N900 will handle this and other emulators like MAME etc. Great stuff :)

10-02-2009, 03:29 PM
Maybe we should start a new thread on Dosbox performance tricks or something. Maybe with enough feedback from a few of us, we could get better final results by adding up everyone's compiled suggestions. I believe having the game folder located in your internal flash memory gives you about an extra 50 cycles of speed boost compared to using a memory card. Okay, who's found something else worth noting? :)

javispedro 10-02-2009, 03:54 PM
Do you have a benchmark or some game you can easily measure performance with? If so, may I ask you to try a modified build of DOSBox I just made?
Debian package: dosbox_0.73-7maemo4test1_armel.deb (http://depot.javispedro.com/dosbox/dosbox_0.73-7maemo4test1_armel.deb)
Gzipped binary: dosbox.gz (http://depot.javispedro.com/dosbox/dosbox.gz)
Use one of the above, not both (whichever is easiest for you).
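To install the test build over the repository version, a sketch (URLs from javispedro's post above; needs root):

wget http://depot.javispedro.com/dosbox/dosbox_0.73-7maemo4test1_armel.deb
dpkg -i dosbox_0.73-7maemo4test1_armel.deb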
10-02-2009, 05:25 PM
Oooooh! Okay, I think I just got a little too excited there, but any new release from you touches me in dirty places. :D Well, I've got tonight off, so I'll definitely give it a try! I can just download the .deb and then install it over the current version, correct? The "benchmark" game, if you will, was X-COM: UFO Defense (http://www.mobygames.com/game/x-com-ufo-defense/screenshots). That 50-cycle increase from putting things on internal flash was only subjective. Anyway, with every little tweak here and there, I actually got it running quite nicely. Big surprise on that! :) The thing is, I finally gave up since there was no way to get around the right-click function required in the game. Can't wait to test it out later tonight. Many thanks!

javispedro 10-02-2009, 05:27 PM
Yes, but this build may be even slower (it is for some test cases of mine). That's why I am asking :)

10-03-2009, 02:05 PM
Okay, I give up. Subjectively, I'm not seeing much of any difference in this new build. It does "feel" more rock steady and stable, though.

javispedro 10-03-2009, 02:10 PM
I only changed the way it's compiled (-O2 vs the previous -Os); if you don't think it's any faster, then I'm keeping -Os, since my djgpp test program works a bit faster too (contrary to common sense, but those things happen).

10-03-2009, 02:18 PM
I would have to say that the old version is a little faster, but the difference is pretty much negligible. Is there any way to incorporate a right-click feature? I know that it's not programmed directly into Dosbox to allow such a thing, so you would have to write the entire script yourself. I'm guessing that it wouldn't be worth the headache.

Stereoprism 10-03-2009, 03:26 PM
It'd be neat to be able to hold down one of the buttons to change the touchpad to right-click. Such an innovation would make it possible to play the Ultima Underworld games on the go, which would be absolutely fantastic.

10-03-2009, 04:13 PM
You're really strung up on Ultima, aren't you? :) I'd say use Javis' latest SNES emulator. I think there are three or four Ultima games that were released for that system, and you're going to get better performance than what Dosbox can bring.

Stereoprism 10-03-2009, 04:44 PM
Not necessarily strung up... I just grew up with the games and am delighted to be able to play them on a mobile device : )

10-04-2009, 12:48 AM
I remember playing the first Ultima on Nintendo for almost two months. It's amazing how I was willing to overlook graphics, music and gameplay. Runes of the Avatar was my favorite. I lent it out to a friend who decided not to give it back to me. He was shot in the shoulder once, so I had no problem letting him keep it.

ArnimS 12-10-2009, 05:31 AM
I've spent about 3 hours trying to tweak Underworld to playability on dosbox. Speed is almost there if you turn off all the tiles. But steering is next to impossible due to the mouse interface. There are projects to reverse-engineer Underworld out there. If you love the game, contribute to them. Otherwise, I doubt you will be playing Underworld on the N900. Ever ;)
ArnimS 12-18-2009, 10:44 AM
Playing with new dosbox builds atm; getting up to 2000 cycles with steady sound, with this bin and conf... If you want to help compare speed to the current dosbox build, cat the zips to a .tbz and uncompress. Be sure to have an ssh session open so you can kill the process. No luck getting SDL to talk straight to alsa yet.

javispedro 12-22-2009, 02:07 PM
pupnik, I've just pushed a new build today (no performance changes at all) but didn't notice this. What have you been playing with?

2disbetter 12-23-2009, 09:31 AM
I'm happy to report Dosbox running stable on my N900. Sound is a bit choppy being pushed through as soundblaster. I've got it running Wizardry 7. Works like a champ, although the missing escape key makes me watch the entire intro every time I load it up. One of these days I'll get around to remapping the keyboard. At any rate, there is certainly room for improvement performance-wise, but I'm content as is. Thanks for all the hard work! For the record, I'm a total dosbox fan. I grew up on DOS.

shapeshifter 12-23-2009, 10:05 AM
I'm looking forward to some Red Alert 1. I wonder how stylus input will be going...

LordJuanlo 12-23-2009, 12:54 PM
I just tried DOSBox. I installed the game Al-Qadim: The Genie's Curse; with no sound it works, a little slow, but it works. However, the N900 resets when I choose Gravis UltraSound (it's properly detected and enabled by the game's sound configurator). Will try Sound Blaster mode later. DOSBox is an emulator, and it shouldn't be able to reset the phone; everything that happens inside it should stay inside it.

12-23-2009, 08:47 PM
2,000 for cycle speed was the best you could get? Don't get me wrong, I'll totally buy into that from you. Qole promised a while back that 3,000 would be possible. What happened?! :D Well shoot, that's probably still not good enough to run music files for early-90s games then, is that correct?

ArnimS 12-27-2009, 12:59 PM
What is our wishlist for a dosbox that doesn't suck? Here is mine:
- screen as touchpad for relative mouse control: rocker -/+ buttons become left / right click (edit: harder than you would expect.. cursor keeps jumping)
- N900 accelerometer as mouse or joystick, with the screen sensor as fire button
- N800 and N900 hw scaling
- N900 pasuspender output for low cpu use in audio (400-700 cycle win)
- Suspend on loss of window focus
- Mapper convertor from nokia map to mapper.txt
edit javispedro: I'm not sure my build is faster. Sometimes things get weird. Lots of cflag experimenting. Here is the profile build: http://pupnik.de/dosbox_pup26.tgz

javispedro 12-27-2009, 01:18 PM
The problem with this [mapper convertor] is that your average PC keyboard doesn't have a ";" key. It has a "," key, with "Shift"+"," producing ";", which is handled by the BIOS/keyb. The DOSBox mapper acts at a scancode level. Thus it cannot map anything to ";" because there's no ";" scancode. At best, the mapper could allow you to create macros like "Press Shift -> Press , -> release , -> Release Shift" (which is something it doesn't allow currently). Keyb/Nokia.sys/Rover.sys acts at BIOS level. Thus it gets to decide which string the "Shift" + "," scancodes produce. The only problem with the keyb approach is that some games potentially may bypass it. But those games wouldn't work with international PC keyboard layouts either.
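One wishlist item above can be tried today: pasuspender ships with PulseAudio on the N900 and parks the daemon while a command runs. A sketch (the 400-700 cycle win is ArnimS's estimate, not a measurement of mine):

pasuspender -- dosbox   # PulseAudio releases the audio device for the duration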
12-27-2009, 02:19 PM
I just might have a surprise for you. :D I'll try and post this later tonight. As far as I know, it's an awesome solution that hasn't been done before outside of qole's easy debian project. Anyway, my wishlist? Honestly, it's not much. I still get goofy just being able to play something like Nobunaga's Ambition 2 without the need of a computer. :) I guess for me, though, it's being able to run Legend Entertainment games with full, unbroken music and sound without the need to drop the cycle speed down to 275. Sure, I can get something like Eric the Unready working great on my N800; it's just not the same without the excellent music to support the gameplay. I think we're still a good 10 years away from seeing a small handheld capable of 30,000 cycles. So for me, I'm happy enough to keep it old, old school for now. :)

forcer 12-28-2009, 07:48 AM
I have Command & Conquer: Tiberian Dawn on my n900; it's too slow. I tried to tweak dosbox.conf, but no luck squeezing more power from it, so I guess we will need to wait for a better dosbox build. And I would be very glad to see better mouse handling: when you get to the edge of the screen (top/bottom or left/right), the mouse cursor just loses calibration.

cddiede 12-28-2009, 08:40 AM
2000 cycles is great and all, but when can we expect a version of DOSbox that lets us type numbers?

javispedro 12-28-2009, 08:53 AM
How many threads are there where I have explained this? Search for rover.sys.

ArnimS 12-28-2009, 11:57 AM
Implemented, I think! I patched javispedro's maemo5 tree - build is here: http://pupnik.de/dosbox_pup53_bin.tgz
Before launching:
1) set the environment variable: export SDL_MOUSE_RELATIVE='0'
2) turn off autolock in your dosbox config /home/user/.dosbox/dosbox-0.73.conf
Tested with Wing Commander and mouse sensitivity set to 50 in the config.
[EDIT] Works with:
- Wing Commander
- Arena
- Ultima VI
- Rex Nebular
- Betrayal at Krondor
Broken with:
- Ultima Underworld 1

Addison 12-28-2009, 12:51 PM
Good job, dude! :D Sorry I didn't get a chance to post my solution for this last night; I'll try to do that later this evening. Basically, I've got a xbindkeys executable that works perfectly under Diablo OS. No issues with using it so far, the mouse has yet to jump even once, and it's highly customizable in that it allows great flexibility in how to implement a right and left mouse click. You can also launch apps and scripts by simply pressing a defined hardware key. Very nice! I just haven't gotten around to writing up a post on it. Will these newer builds of Dosbox be a N900 exclusive? I think you know what I'm trying to hint at.... :D
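ArnimS's two launch steps, rolled into a script for convenience (a sketch; it assumes autolock is already off in /home/user/.dosbox/dosbox-0.73.conf):

#!/bin/sh
export SDL_MOUSE_RELATIVE='0'
exec dosbox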
ArnimS 12-28-2009, 01:08 PM
I have no changes for N800 dosbox, and am not qualified to make packages for diablo or fremantle anything. My mousehack needs more testing. I'll test your solution, but I don't see how it could possibly work. But then again, I don't understand these layers of translation of mouse movement at all. I stripped all of dosbox and sdl's "interpretations" of what was happening away.
EDIT2: try just export SDL_MOUSE_RELATIVE='0' before running regular dosbox... use mouse scaling 100...
EDIT: maparn.txt attached - gives numbers (but no :). Just copy it somewhere and edit dosbox.conf to point to the maparn.txt file.

12-28-2009, 02:14 PM
Matan wrote me and said that xbindkeys required some dependency that Diablo never supported. He then emailed me back a little later and basically said, "here, try this." It works. No idea what he did, but all credit goes to him. (attached)
xbindkeys goes to /usr/bin/. Make it executable. .xbindkeysrc needs to be created and goes to /home/user/
Here's a really bad example of how to use it:
"xmodmap -e 'pointer = 3 2 1'"
m:0x0 + c:72
"xmodmap -e 'pointer = 1 2 3'"
m:0x0 + c:73
You also need to install xmodmap, found here: http://maemo.lancode.de/?path=./diablo/binary
To find the keycode that you want to enable: xbindkeys -k
You get it working with: xbindkeys
To end it: killall xbindkeys
For help: xbindkeys --help
Sorry for the poor documentation on this, but I figure you would know more about what to do with it than me. :)

stobbsc 01-17-2010, 02:30 PM
Does anyone know how I map keys, or have a key mapper file for me? I am playing HOMM: King's Bounty and I need the Esc key to move through screens. Thanks

01-20-2010, 07:36 PM
You forgot to mention what device you have.
Hey Arnim or Javis, what computer speed would you guess this equates to in comparison with a real x86 machine, for the N900? I'm guessing maybe a 12 or perhaps a 14 MHz computer. Just curious.

ArnimS 01-21-2010, 08:50 AM
I'd say that's spot-on for most games. A program that doesn't use sound or draw to the screen would run like a 66 MHz 486 or so.

Spotfist 02-07-2010, 01:25 PM
Does anyone know when a version of dosbox will be put into the downloads section?

javispedro 02-07-2010, 01:50 PM
Dosbox is on extras-testing. The only thing it's missing is votes.

yorg 02-07-2010, 02:05 PM
Did anyone figure out how to do right click? :o
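Re yorg's question: the closest thing this thread has to a right click is Addison's xbindkeys recipe above. A sketch of a full session, assuming the files are in place:

xbindkeys -k          # press a hardware key; note the keycode printed
xbindkeys             # start the daemon (reads ~/.xbindkeysrc)
# while it runs, the bound key swaps mouse buttons via xmodmap, so a
# stylus tap acts as a right click; the second binding swaps them back
killall xbindkeys     # stop it when finished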
Ronaldo 02-07-2010, 03:08 PM
Can this run DOS apps like a Megadrive emulator?

Spotfist 02-07-2010, 03:29 PM
Hehe, that's brilliant, using an emulator to run another emulator. A better step would be to use dosbox to install Windows 98 so that one could install Gens for Windows hehe ;) On a serious note, though, a Megadrive emulator would be good! Also, how come dosbox only has 5 votes?

Ronaldo 02-07-2010, 04:53 PM
If I upload the Megadrive emu, can someone who knows how to use dosbox properly try it and see if it will work?

Spotfist 02-07-2010, 05:04 PM
Would be interesting to know, but on the dosbox download page in extras it's mentioned that Wing Commander and Ultima Underworld work OK. Now, these are old games requiring low levels of CPU and memory; I seem to recall that back in the day I could not play roms on my 486 as it was too slow, so I doubt it would work in dosbox at the moment... I could be horribly wrong, though. I'm gagging for a game of Streets of Rage though! ;)

Ronaldo 02-07-2010, 05:05 PM
Sonic 1!, Shinobi 3, Shadow Dancer, Altered Beast, to name but a few lol

02-13-2010, 04:14 AM
Has anyone tried XCOM UFO Defense on their N900? Just wondering if the little extra boost with the CPU is enough to run this at an acceptable speed.

ArnimS 02-14-2010, 06:59 PM
I wouldn't get a N900 just for the modest dosbox speed boost, Addison. It is faster, though. Just did a Wing Commander side-by-side, and the N900 was solidly faster. Still not playable. The thing is, the IBM PC was a demon spawn from hell, full of evil crappy hardware, and emulating that is much slower than emulating an Amiga of the same era.

02-16-2010, 01:01 AM
Nice value judgements there, Arnim. :D I actually have great results playing XCOM right now (about 8 fps, which looks very impressive), but it requires VNC viewer. But yeah, I was curious if there was a notable difference between the two tablets where it would be worth dropping the extra money. Thanks for the honesty. :)

javispedro 02-20-2010, 11:22 PM
So in case you're wondering, this is what I'm working on... http://javispedro.com/maemo/captures/dosboxhwr.png MWAHAHAHAHAHA
"Proper" Hildon input method support, which brings support for all virtual keyboards (handwriting, finger, stylus), and also the usual hardware keyboard goodies, like proper automatic layout detection, sticky keys, etc., but also the N810's hardware keyboard status bar at the bottom of the screen :(

02-20-2010, 11:50 PM
Whoah, where did this come from, Javis? You're the first to understand how to launch the keyboard in an application that's not hildonized. That's so awesome! :D So hey, do you know enough about the keyboard that you could post a response to a thread I made a few days ago?

Ronaldo 02-22-2010, 10:50 AM
I got the SNES DOS emu to load Mario All-Stars; it showed the Mario coin and then closed lol. Trying to find a Megadrive emu that works.

Ronaldo 02-22-2010, 11:06 AM
Got Megadrive working, but at 1 fps or less lol, running Sonic 1. Emu name: Genecyst.

nax3000 02-22-2010, 06:45 PM
So are we going to get an update in which the N900's keyboard is actually useful?

taril 02-24-2010, 11:26 AM
Is Tomb Raider 1 playable via dosbox on the N900?
Spotfist 02-24-2010, 05:54 PM Nax the keyboard is usefull, a sys file has been made available giving acces to most of the keys required. Tomb Raider probably not as playable as you would like... vitasam 03-25-2010, 02:45 AM Hi, sorry if question is obvious. I have DosBox 0.73 on N810. How to configure DosBox? For example, in Debian I can do it like config -writeconf .dosboxrc and it creates a configuration file. In N810 there is an error message: Can't open file .dosboxrc haberc 03-25-2010, 08:11 AM has anyone here managed to play constructor on the dosbox? on the n900 Pushwall 03-25-2010, 08:42 AM Hi, sorry if question is obvious. I have DosBox 0.73 on N810. How to configure DosBox? For example, in Debian I can do it like config -writeconf .dosboxrc and it creates a configuration file. In N810 there is an error message: Can't open file .dosboxrc Many pages back, javispedro said this... "The config file is at ~/apps/DOSBox 0.73 Prefs.txt , should be autocreated at program startup with sane values." So if you need to configure DosBox, just edit the DOSBox 0.73 Prefs.txt file. The full path is... /home/user/apps/DOSBox 0.73 Prefs.txt I use the text editor in emelFM2. vitasam 03-26-2010, 02:37 AM Hi, yes it works. Thanks! javispedro 03-26-2010, 06:45 PM That's interesting. The config file in all N900 versions and -devel N8x0 ones has been .dosbox/dosbox-0.73.conf (like on Fedora, which is the DOSBox upstream default). HtheB 04-13-2010, 04:10 AM I have command & conquer: tiberian dawn on my n900, it's too slow. I tried to tweak dosbox.conf, but no luck squeezing more power from it, so I guess we will need to wait for better dosbox build. And I would be very glad to see better mouse handling, when you get to edge of the screen(top/down or left/right), mouse cursor just looses calibration. Care to share the DOS version of this game? I could only find the Windows 95 version of C&C Tiberian Dawn... :( forcer 04-13-2010, 04:15 AM Care to share the DOS version of this game? I could only find the Windows 95 version of C&C Tiberian Dawn... :( HtheB 04-13-2010, 04:21 AM Isn't this the Demo version? Edit: Thanks! :) it's the Full version indeed :) Spotfist 05-08-2010, 02:39 PM any chance of a faster version of dosbox for the n900? it's good for old games but not so good for the likes of heretic, cnc... I would love to be ab le to play Dark Forces on my n900. Surely these games will run on a 486DX60... so shouldn't that work well on a 600mhz? ie 10 times the power? ArnimS 05-19-2010, 09:35 AM any chance of a faster version of dosbox for the n900? it's good for old games but not so good for the likes of heretic, cnc... I would love to be ab le to play Dark Forces on my n900. Surely these games will run on a 486DX60... so shouldn't that work well on a 600mhz? ie 10 times the power? [EDIT] most dos games were designed for systems between 4.78mhz 8088 to approximately 133mhz 80586. A 486DX-60 is at the high-end of historic MS-DOS performance. Trololololo, lololo, lololo, lololo, lolololllll.... Spotfist 05-21-2010, 10:22 AM "Normal Dos computer is like 12-16mhz"??? Are you high??? Perhaps by "normal" you suggest the middle of the dos time scale, if so then yes I could with a sack of salt agree but to say a pc running at 60mhz is a fast dos pc, well that's crazy! quake would not be playable and I know from experience. 90mhz - 110mhz (pentium) that's fast, fast. 60mhz is fastish. Anyways I take the answer to my questions is "NO!" ;) ArnimS 05-21-2010, 10:58 AM "Normal Dos computer is like 12-16mhz"??? 
Are you high??? Perhaps by "normal" you suggest the middle of the dos time scale, Yeah that's exactly what i meant. I was just editing the post to indicate that... sorry. [EDIT again] going crazy with edits i know, but it's better than trailing posts... You can see the 'Javis1' (repository binary) performance in the latest graph, vs attempts by me to speed it up. JavisPedro was _right_ when he said 'don't expect big performance wins'. javispedro 05-21-2010, 11:30 AM One experiment would be to update to 0.74,and another experiment is to try the armv4 recompiler instead of the current THUMB one (whose performance on the N900 sucks). Spotfist 05-21-2010, 12:50 PM ye I just thought I read somewhere that as a "general rule of thumb" it's something like current processor speed devided by 10 gives u the speed u can emulate i.e the n900 has a 600 mhz so therefore 60 would be doable... I honestly dono much about dosbox so i may have it completely wrong ;) ArnimS 05-26-2010, 08:26 AM What we can see here is that - disregarding sound (which was generated @8khz but not streamed in those benchmarks) - current dosbox is topping out around 3200 cycles, running doom timedemo, on stock N900. However, setting dosbox cycles to "auto" gives a lot lower numbers, with glitchy sound. There are some things we can try doing to get sound more stable at higher cpu loads and cycles-settings. If anyone has time i can dig up some places to start tweaking. Spotfist 05-29-2010, 06:46 PM @Arnim Any response from anyone? Seriously it would be great to get some work on this emu, so many games that do work just aren't playable due to screen res issues, mouse etc. Some of these can be fixed with tweaks but would be nice to get a gui with maybe options ore something... I was thinking of trying to find an old dos text editor and maybe a menu type system. hehehe maybe a load of batch files ;) Echo on! javispedro 10-19-2010, 03:20 PM Hello, dosbox 0.74-0maemo2 is building at this moment and will come to the extras-devel repositories soon for diablo & fremantle. Basically, what has been changed is: - Enabled IPX tunneling & virtual modem (so you should be able to play some multiplayer between N900s) - There is a new .com in z:\, "mapper.com". The only thing this does is launch the mapper from within the dos shell, so no need to press a weird key combination nor drop to a xterm: just type "mapper" from the initial dosbox window. - Mouse autolock is now disabled by default, it didn't work properly either way and makes no sense. - I'm still using Thumb dynrec as it seems there's no gain switching to nonThumb one. - No changes to keyb files. Also, I'm considering adding an SDL_haa based scaler. 10-19-2010, 05:58 PM Wicked Javis! Yeah, I caught that there was an upgrade earlier and came her to take a peek at what all was done. I'm super punchy with joy right now! Thanks!!!! :D stooobs 10-19-2010, 06:03 PM so happy to see this is still making progress thatnks for all your work javispedro 10-20-2010, 04:31 AM btw, config file got renamed around, again :( (this time due to 0.73->0.74 bump). TiagoTiago 10-20-2010, 11:46 PM Where can i fins a step by step guide telling me how to setup everything i need to start playing games with DOSBOX on the N900? Rajjain 10-27-2010, 09:30 AM m having problems running .exe files on dosbox... suppose i have a file called abc.exe located in Nokia n900/documents/abc.exe.... wat r d commands in series m supposed to use to run this file? 
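The reply just below gives the commands to type interactively; the same recipe can also be wired into the [autoexec] section of the config file so the game starts by itself. A hedged sketch, assuming the 0.74-era config path, the abc.exe example name from the question, and that the file does not already contain an [autoexec] section:

# Sketch: append an [autoexec] section so DOSBox mounts the folder and starts
# the game at launch. The config path is an assumption; adjust to whatever
# file your build actually reads (see the config-file discussion above).
CONF="$HOME/.dosbox/dosbox-0.74.conf"
cat >> "$CONF" <<'EOF'
[autoexec]
mount c /home/user/MyDocs/documents
c:
abc.exe
EOF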
Magik 10-27-2010, 09:50 AM
Write this in the autoexec section:

mount c /home/user/MyDocs/documents/
c:
abc.exe

This should work.

TiagoTiago 10-27-2010, 12:39 PM
Is there a guide somewhere at least for setting up the keyboard?

Magik 12-09-2010, 01:23 PM
BT mouse + dosbox + settlers 2... lovin' it! Anybody maybe have a good config file for settlers 2? :)

pepe55 12-25-2010, 08:38 AM
Is there any possible way to show the actual cycles onscreen (ingame)? Thank you for reply =).

ArnimS 01-01-2011, 01:13 AM
@Arnim Any response from anyone? Seriously it would be great to get some work on this emu, so many games that do work just aren't playable due to screen res issues, mouse etc. Some of these can be fixed with tweaks but would be nice to get a gui with maybe options ore something... I was thinking of trying to find an old dos text editor and maybe a menu type system. hehehe maybe a load of batch files ;) Echo on!
Hey Spotfist! got me a new Archos A70S tablet and a little bit of time.
1) Menu: We do have a menu called 'rubybox'! Search the forums for it.
2) Mouse problems: will never be solved perfectly because IBM PCs weren't designed for mouse input. Here's my idea for dealing with them - allow the user to toggle between the existing "try to emulate absolute mouse coords" code and a new "touchpad" mode that lets you move the mouse by stroking the screen to get relative movement (like a trackpad). I don't know whether I'll be able to code this up or whether someone who understands the mouse code needs to do it though - right now my build environment isn't working yet.
Cheers! Happy New Year!

ArnimS 01-01-2011, 01:15 AM
So in case you're wondering, this is what I'm working on... http://javispedro.com/maemo/captures/dosboxhwr.png MWAHAHAHAHAHA "Proper" Hildon input method support, which brings support for all virtual keyboards (handwriting, finger, stylus), and also the usual hardware keyboard goodies, like proper automatic layout detection, sticky keys, etc. but also the n810's hardware keyboard status bar at the bottom of the screen :(
this is awesome btw...

ArnimS 01-13-2011, 08:18 AM
btw, config file got renamed around, again :( (this time due to 0.73->0.74 bump).
Just had a look-through your dosbox work - good job / keep it up man! I just compiled with M-HT's arm-dynrec (with a little help) and it got me from ~3000 to ~4000-5000 cycles! However it requires editing config.h and some files in src/cpu. This would break the maemo autobuilder, since I haven't figured out how to apply the changes to the ./configure script. Do you want to do this javispedro? It makes Ultima Underworld playable with the adv2xmame scaler (maybe 15 fps @ 1ghz). cheers!

javispedro 01-13-2011, 09:03 AM
Do you want to do this javispedro? It makes Ultima Underworld playable with the adv2xmame scaler (maybe 15 fps @ 1ghz).
I already did that long ago! You can see the configure script patch in debian/patches/arm-target. Is this a new version of M-HT's backend? Didn't find a newer one. I'm using the one that was added in 0.73. On the config file, core=auto (the default) will enable the recompiler only for protected mode applications. core=dynamic will enable it for both real mode and protected mode applications. Unfortunately in my tests I never got large speedups; but yesterday on IRC someone did tell me that at least one (real mode) game works faster in core=dynamic mode.

Oracle 02-20-2011, 12:45 PM
What key combination do I need to use to increase/decrease the number of cycles? I have the rover.sys file...
I looked around but didn't find the awnser thnx! javispedro 02-21-2011, 01:27 PM What key combination do I need to use to increase/decrease the number of cycles? It's currently not mapped to anything. You can map it to any combination you'd like using the DOSBox mapper (type "mapper" at the command prompt) _save_ for the volume keys.Those need more logic (I personally don't want to lose the volume keys functionality). 04-24-2011, 07:31 AM Hey Javis! N800 user who just upgraded to 0.74. Everything looks real good except that fullscreen=true crashes this. I have: /home/user/.dosbox/dosbox-0.74.conf and also /usr/share/dosbox/dosbox.conf (since I'm using Rubybox) I'm just a little confused if I'm doing something wrong on this. Thanks! 04-24-2011, 11:59 PM If someone has 0.74 running in full screen, could you please post your dosbox.conf file for me so I can take a look? I would be very super thankful. :) javispedro 04-25-2011, 09:48 AM Hey Javis! N800 user who just upgraded to 0.74. Everything looks real good except that fullscreen=true crashes this. I've reproduced it -- it also dies when you switch into fullscreen at runtime. No idea at this point why it does not happen on N900 :P Sadly, I do not think you can fix it via the config file.... you will have to wait until next version. 04-25-2011, 05:26 PM Ah poopers. But hey, thank you for at least rubbing it in that all of you special N900 users get to enjoy the full screen treatment of happiness. This makes me feel so much better. :) I really should upgrade my tablet here soon. 04-25-2011, 09:38 PM So is there any possible chance you would be willing to look at this within the following weeks perhaps? I'm sure your plate if full with other junk requests from the people here. :) Pushwall 05-03-2011, 11:44 AM Now that I've booted up my N800 again, I'm fearful to upgrade to 0.74 because of the fullscreen problem. I'll watch this thread optimistically! :) 05-03-2011, 12:17 PM Yeah, as of right now, full screen is definitely goosed for Diablo. Too bad since I was seeing significant speed gains in this version. :) Kevstacey 05-03-2011, 02:16 PM sorry for being too lazy to read through the thread but has the mouse problem been fixed? and is there a way to install .exe files now? thanks :) Pushwall 05-03-2011, 02:16 PM Oh that speed bump makes me want to upgrade to 0.74. Addison, was the upgrade an easy one? Does it work with RubyBox and your keyboard layouts too? 05-03-2011, 04:31 PM The Xkbd fix can be found here. Honestly though, I put everything together in better folders and made it so there is less clutter on your memory card which can be found here: That will also install ADOM though. It takes up very little extra on your internal flash plus you can play my favorite game on the tablet. Not quite sure how you feel about D & D games though. Anyway, Rubybox points to the dosbox.conf file differently from where Dosbox 0.74 does but that's really not the issue here. The only problem is that it now breaks if you try and go full screen. Also, the old keyboards that you can download from Rubybox is complete junk. So after you downloaded them from Ruby, you'll want to transfer and copy over the new ones from either of the two threads from above which will be stored on your memory card. They need to overwrite the files located here: /home/user/.rubybox/.xkbd_layouts And yeah, I saw quite the boost in speed when playing Scorched Earth. 
http://img859.imageshack.us/img859/5638/screenshot2011050207182.png I don't think running it in windowed mode would have caused that much of a difference in what I was seeing. Anyway, the upgrade is super easy, just install 0.74 from Extra Devel. I'm still crossing my fingers that Javier will catch what is happening with this though. If not, I'll just stick with 0.73-7 since it still gets me all bubbly. :) javispedro 05-03-2011, 05:35 PM A workaround seems to be setting "machine=vgaonly" in the config file. Pushwall 05-03-2011, 06:51 PM Addison, thanks for all that info. Now with the javispedro workaround and your instructions, I'm going to be updating tomorrow! I'm looking for some speedy games!! :) 05-03-2011, 06:59 PM It now supports a super ton of junk. :) javispedro 05-03-2011, 07:04 PM I pushed a new build that fixes it (0.74-0maemo4) -- without workaround. This should also slightly increase the fullscreen speed on the N900. This build also has the work I did long ago on calibration-less Win 3.11 mouse -- I extended the emulated PS2 mouse protocol so that it has a new "tablet" mode. Hopefully no games shall break, as the new protocol defaults to off. In order to use it on Win3.x however you need a specific mouse driver that I hope to publish soon. 05-03-2011, 07:07 PM Squee! :D Thank you so much chief!!! park 05-03-2011, 07:48 PM v.0.74 actually resolved the sound stutter issue I had in Ultima 6! Pure awesomeness. Thanks!! HtheB 05-03-2011, 07:52 PM Thanks for the update ! :) I tried AnDOSbox today on NITDroid, and it's REALLY amazing! :) I really hope that the DOSBox on Maemo can even get better then the one on Android.... (because I rather love to boot Maemo then Android :) ) javispedro 05-04-2011, 04:11 PM And here (http://depot.javispedro.com/dosbox/w3x-mouse/tablet.drv) is the win3.x modified mouse driver... to use, copy to \windows\system and change system.ini so that the mouse.drv line inside looks like mouse.drv=tablet.drv (changing system.ini.. ah, the memories...) If for some reason you regularly boot w3.x in Maemo's DOSBox (don't be ashamed, I do :D ) this will make it [B]way more accurate -- and a bit more usable if you can stand the speed. Let's say this is still under "testing" so it might break stuff. Note: requires the latest 0.74-0maemo4 or later from extras-devel. I tried AnDOSbox today on NITDroid, and it's REALLY amazing! :) Sigh -- I'm truly starting to severely dislike Android. In what sense is it better than the upstream DOSBox? 05-04-2011, 05:54 PM So just to know that I'm using this correctly, how does your latest version go together with Rubybox on Diablo? Do I overwrite /usr/share/dosbox/dosbox.conf with the file from /home/user/.dosbox/dosbox-0.74.conf ? EDIT: I have /usr/share/dosbox/ created from this post http://talk.maemo.org/showpost.php?p=335492&postcount=578 HtheB 05-04-2011, 07:10 PM Sigh -- I'm truly starting to severely dislike Android. In what sense is it better than the upstream DOSBox? From the website of AnDOSBox: GPL FAQ 1. Where is the source code? Source code will be available starting from 26th April, 2011. Any user obtained a copy of AnDOSBox from us can email us with proper subject title and a trackable evidence of being our AnDOSBox user (e.g. an Android Market purchase code) and we will provide the source code, usually on the next working day. 2. Why charge a fee when most work is done by DOSBox Team and they provided it free of charge? 
We only charged on our part of involvement, otherwise, the whole project worth far more than that price. Per our selling price, we get more incentive to continue, users glad to access their favour software for a little price, other existing ports get minimal impact, and that doesn't against the license. source: AnDOSBox: https://market.android.com/details?id=com.locnet.dosbox Well... I tried many other DOSBox ports, I just have to be honest that AnDOSBox is really so far the best of them all for the Android. It's the smoothest and it works great with the N900 Keyboard :) Unfortunately, the keyboard of the N900 doesn't work very well with the other ports of DOSBox... javispedro 05-04-2011, 07:22 PM Source code will be available starting from 26th April, 2011 Yes, that's exactly what the 64 guy did... Profit early, and when the threat of lawsuits appear... disappear. Well... I tried many other DOSBox ports, I just have to be honest that AnDOSBox is really so far the best of them all for the Android. Well, technically, you now have the right to use that source to improve the upstream DOSBox. However, I'm 99.99% sure the miracle DOSBox build that is "the smoothest" of all has no changes at all, and it's just a bit of dressing and a lot of marketing. javispedro 05-04-2011, 07:57 PM So just to know that I'm using this correctly, how does your latest version go together with Rubybox on Diablo? It's been a long time since I last used Rubybox :( Do I overwrite /usr/share/dosbox/dosbox.conf with the file from /home/user/.dosbox/dosbox-0.74.conf ? Not that much has changed between the config files. Probably won't matter much if you copy it or not. I don't think it would be bad though :) Oh, unless the one in /usr/share was customized. In which case it's not worth the hassle, just keep it as is. 2disbetter 06-06-2011, 04:29 AM Javispedro, I'm having a mouse pointer error when initalizing a game. This did not occur with .73, do you know what all was switched with regard to the mouse driver, and if it's possible to revert back to .73's implementation of the mouse within .74? Thanks for the awesome program, I've been having a blast the past few years playing all the greats on my n900. 2d javispedro 06-06-2011, 05:22 AM Javispedro, I'm having a mouse pointer error when initalizing a game. This did not occur with .73, do you know what all was switched with regard to the mouse driver, and if it's possible to revert back to .73's implementation of the mouse within .74? Can you try,,, - Desktop DOSBox 0.74? - Any Maemo DOSBox older than 0.74-0maemo4? If it works on any of those it is probably by the mouse changes I made, in which case I'm not going to be happy if I need to remove them (both win31 and many dos text mode apps work so much better...). VulcanRidr 06-06-2011, 12:25 PM Okay, here is [probably] a stupid newbie question. I just discovered DOSbox (and abandonia.com and all of those old, lost games I used to play back in the day). I've been playing on my pc, but then discovered that dosbox is available for my N900. Installed it, put a couple of games on the N900, mounted c, but then found that apparently, the keys are remapped in DOSbox. I can't type c:, because the : key on the N900 keyboard becomes the > in DOSbox. None of the <function> (blue) keys work, as the <function> key itself has been mapped to `. Even the intro blurb says to "to activate the keymapper, ctrl-F1". Nope. No F-keys. So how do you use special keys? Thanks, --b slender 06-06-2011, 12:35 PM @VulcanRidr 702 messages in this thread. 
Do you really think that you are the first one to ask that ;) Yes...you can find answer within this thread. Read couple of pages back for first. VulcanRidr 06-06-2011, 12:55 PM @VulcanRidr 702 messages in this thread. Do you really think that you are the first one to ask that ;) Yes...you can find answer within this thread. Read couple of pages back for first. Hi slender, Yeah, I figured it would be somewhere within this thread, but some threads gain a critical mass, where it becomes easier for to ask than to read the entire thread, and 700+ posts is way past that point. :eek: In any case, I'll start working backward. Thanks slender. 2disbetter 06-07-2011, 08:28 AM Can you try,,, - Desktop DOSBox 0.74? - Any Maemo DOSBox older than 0.74-0maemo4? If it works on any of those it is probably by the mouse changes I made, in which case I'm not going to be happy if I need to remove them (both win31 and many dos text mode apps work so much better...). Thank you for the quick reply, it is really appreciated. It is just one game. (Unfortunately the main game I enjoy in DOSbox. (Wizardry 7: Crusaders of the Dark Savant) I'll see if I can pull the older DOSbox build off of FAPMAN. I've never tried getting anything other than the current build. Is .74 pushed through to testing repo? If not I should be able to grab .73 from there right? Edit: I got .73 from extras-testing. Still same error. I just tried .74 on the laptop (win 7 64) same mouse pointer issue. So i don't believe it's anything you changed rather dosbox's change on the mouse. Would you by any chance have the old deb for an earlier build before what is currently available? Edit #2: So tried it on .65 desktop and still the same error, so now I'm thinking my files might have been corrupted when I mounted the drive a few days ago. I'll try that, in the mean time I've already reinstalled maemo's .74 version. Thanks for your help! Edit #3: Just to confirm. It was my game files. Re-added fresh files and it worked no problem. Now to finally tackle remapping the escape key. ;) 2d Flandry 06-10-2011, 03:49 AM Thanks for your work on this, javispedro. I got a hankering for a bit of MOO today, installed DOSBox and copied the directory from my GOG.com install of MOO1 to my N900, et viola! Works great! Couldn't find a wiki page so i created a basic one. http://wiki.maemo.org/DOSBox Feel free to flesh it out, you DOSBox experts! javispedro 06-10-2011, 05:56 AM Edit #3That is why you don't edit a post that many times after the initial one -- never got any mail notifications. Either way, glad to hear I won't have to remove mouse patches. 2disbetter 06-10-2011, 02:00 PM That is why you don't edit a post that many times after the initial one -- never got any mail notifications. Either way, glad to hear I won't have to remove mouse patches. por favor, perdóname... I just wanted to further comment that mapping the keyboard was really easy once I took the time to read the .74 user manual. For those wondering you can make changes without any hacking or extra comps. You do it on the N900. From xterm, type: dosbox -startmapper This will start the mapper immediately and from here you can map the keys you need. However this works on keys not requiring more than 1 key to trigger. I'm not sure if this would work for number keys. Also Javis thank you very very much for your efforts with DOSbox. I remember being 12 and starring at dos prompts for hours as I played games and messed with Borland C++ 2.0. Amazing I can re-experience all of it on my phone. 
2d javispedro 06-10-2011, 05:09 PM From xterm, type: dosbox -startmapper One can also just type "mapper" on the DOSBox command prompt.. this is a hidden trick :) I'm not sure if this would work for number keys. Unfortunately not, but the excellent wiki page Flandry made explains the alternative approach. stooobs 09-09-2011, 08:31 PM did anybody find a way to map a right click with this im just wondering if how they got it working for gemrb would be useful 09-09-2011, 11:01 PM Just use Xmodmap along with Xbindkeys. :) /home/user/.xbindkeysrc_dos "xmodmap -e "pointer = 3 2 1"" Right "xmodmap -e "pointer = 1 2 3"" Left Remap Left and Right to whatever key press you would like for your right and left clicks. stooobs 09-10-2011, 06:02 AM does it work on the n900 i tried before but couldnt get it to work, i tried folloing the guide on page 67 i think it was but when i run xbindkeys -k i just get a white screen that doent register any key presses, i then tried adding the 2 lines to the file in home/user/ and ran the program, it was running when i typed top in xterm but coulnt get a result in dosbox and gave up lol 09-10-2011, 10:46 AM I'm not sure. It seems like you should have a build of Xbindkeys somewhere for the N900, if not, the older version for Maemo 4 should work.. If xbindkeys -k does nothing on your end, that's super weird. stooobs 09-10-2011, 01:05 PM ok i found xmodmap and xbindkeys are available via apt get so have been packaged for the n900, xbindkeys still just opens a white screen and doesnt do anything im wondering if any body know what the command is i should put in the file for volume up and down for right click and left so i could just copy and paste and see if it works that way, im unsure if the key bindings will be the same for the n810 and dont have any idea what im doing lol just an educated guess Estel 09-10-2011, 01:18 PM Also try with xdotool. stooobs 09-10-2011, 01:37 PM ok i think ive seen that 1 on easy debian ll have a google about and see what i can learn but im no programmer i wish my mam would come and make it all better lol i found this link which is pretty intresting still dont know what im doing like im gona stare at it till it makes sense lol http://www.semicomplete.com/projects/xdotool/xdotool.xhtml 09-10-2011, 10:33 PM So does xbindkeys -k not register any input at all or did you just try with the volume rockers? stooobs 09-11-2011, 05:16 AM it doesnt register any imput and and xbindkeys -mk is the same no imput, i think thkis is to far outa my understanding to do myself. the xdotool has an option to run with the pid of an open window which seems usefull but ive not the skill to do anything with it lol jnep 09-16-2011, 12:48 PM hi how di i write : on maemo 5 n900? stooobs 09-16-2011, 08:34 PM search for rover.sys in this thread to get keyboard imput on the n900 robert37 11-03-2011, 01:23 AM I just found this post on DosBox, and I don't know where else to ask my question, which is: On the N900, and using the key mapper, is Dosbox now working with all of the keys accessible? I mean, is DosBox fairly stable and quick on the N900, and can I use any key from the N900's keyboard, including the numbers keys and + - / \ : ; etc. and will all keypresses be understood by any program running in DosBox? Thanks. [ dosbox now running on Nokia 770! :D Changes: Packaged to .deb, updated to 0.72+ cvs with new alignment fixes. Support for xkbd autostart. Mouse auto-calibration should work in some games - drag stylus slowly across all four corners of the game screen. 
Update Feb 16: New N800/N810 version up - should install without dependencies - fixed N810 keyboard problem - should install on OS2007 also (not tested). Update Feb 27: Thanks to ukki we now have Rubybox! a dosbox frontend launcher specifically for maemo/ITOS. Rubybox requires two ruby packages: ruby1.8 ruby1.8-maemo To install these on OS2008, click on the following link to the .install file for the GPL Systems Repository (repository containing ruby) then in application manager go to the section 'programming' and install the ruby packages. [ Update March 04: New dosbox bundled with vertical/left-hand-side keyboard map. You can now change the .xkbd map by editing /usr/bin/dosboxkbd to point to whatever .xkbd layoutfile and -geometry positioning you want.[/QUOTE] bingomion 11-03-2011, 05:02 AM looks stable... i have never used it tho... youtue it you'll see a bunch of ppl using to emulate win3.11/win95 etc. Estel 11-08-2011, 05:24 AM Recently, I got a strange problem with Dosbox on N900. When i try to use it with: output=overlay ...in config, Dosbox crash with segfault. Here is output I get, when running it from within terminal: DOSBox version 0.74 Copyright 2002-2010 DOSBox Team, published under GNU GPL. --- ALSA lib seq_hw.c:457:(snd_seq_hw_open) open /dev/snd/seq failed: No such file or directory ALSA:Can't open sequencer MIDI:Opened device:none Segmentation fault I thought it is something sound-related, but messing with sound emulation settings doesn't help, and just setting: output=surface ..."fixes" the problem. Ho ever, surface doesn't allow scalling, so I get original resolution (even in fullscreen) = most games are unplayable, or too small to be enjoyable. The most weird thing is, that it worked with same setting ~2 months ago. I haven't used dosbox in meantime, nor changed it's settings. Ho ever, for sure I've updated some packages - I got no idea, which one may be related. I'm using latest CSSU and kp48 (also was updated in time between last use of dosbox, and today failure) I tried to search this thread for similar issues, without any luck. Help *much* appreciated... /Estel Estel 11-11-2011, 03:14 PM Ok, some new research data about "problem": output=overlay still result in segmentation fail, no matter of settings. ho ever, I "discovered", that error lines just before segfault - about alsa, snd etc - are not related to problem. They appear also, when using output=surface. It seems now, that surface is only one working mode for N900. Ho ever, it's possible to use it with image stretched to full screen - using <any scaner>2x (3x ones result in segfault, probably for resolution reasons). It seems ok - still I think surface=overlay with scaler=none and fullscreen resolution set to 800x480 could be faster, but it isn't working. And, I'm not sure about any speed advantage, it's just guesswork (hardware scaling). It seems that I might be mistaken - maybe it was surface, that worked month or two ago. It's possible that I experimented, and by accident, left output=overlay for a long time, resulting in "me thinking" that such settings worked before. Estel 11-12-2011, 06:22 PM Shameless bump, + another small question - is there a way to achieve right mouse click in dosbox? I got it working in Maemo by using alt_gr(also called - wrong - FN, aka blue arrow) + tap, but dosbox ignores Maemo settings... Thanks for help in advance, if, by any chance, someone is still reading this. 
Which, I'm beginning to doubt ;) HtheB 11-12-2011, 07:05 PM Shameless bump, + another small question - is there a way to achieve right mouse click in dosbox? I got it working in Maemo by using alt_gr(also called - wrong - FN, aka blue arrow) + tap, but dosbox ignores Maemo settings... Thanks for help in advance, if, by any chance, someone is still reading this. Which, I'm beginning to doubt ;) What about the volume rockers? :) tbh, I use DosBox within NITDroid, works much smoother. Estel 11-12-2011, 08:58 PM To be honest, unless someone give rationale why it should run smoother on incomplete (for N900) OS, I count it as placebo effect. I remember, that you were advocating aDosbox as "faster" few months ago - then, after requests for source code (due to adosbox violating GPL), it turned out to be worth of *none* speedup changes, compared to real dosbox. So, it worked exactly the same way as normal dosbox. If this time You're using regular dosbox on Android, it is also placebo, cause code is exactly same. --- Anyway, volume rockers are F8 and F7, respectively. Finally, I found a way to assign F10 and other non-standard keys - without need for external keyboard. I was interpreting mapper wrong way - keyboard show on it is virtual, and pressed keys are "real", hardware ones. I thought it is the way other (displayed keys real, pressed is binded virtual). So, it was just matter of choosing F10 on screen, pressing key I like to assign as F10 (lets say x), then, deleting binding virtual x to real x (otherwise, pressing x would result in casting x and F10 at the same time). Still, I have no idea how to assign right mouse button in dosbox - it is crucial in some programs and games. /Estel 11-12-2011, 09:28 PM While it might not be sexy, I use xbindkeys with xmodmap. /usr/bin/left xmodmap -e "pointer = 1 2 3" /usr/bin/right xmodmap -e "pointer = 3 2 1" /home/user/.xbindkeysrc "usr/bin/left" F8 "usr/bin/right" F7 To find the correct key code to change the bindings xbindkeys -k To start the key bindings xbindkeys To end it killall xbindkeys Cheers. :) 11-12-2011, 09:39 PM I asked this awhile ago to make it easier but never got a response. HtheB 11-13-2011, 11:16 AM To be honest, unless someone give rationale why it should run smoother on incomplete (for N900) OS, I count it as placebo effect. I remember, that you were advocating aDosbox as "faster" few months ago - then, after requests for source code (due to adosbox violating GPL), it turned out to be worth of *none* speedup changes, compared to real dosbox. So, it worked exactly the same way as normal dosbox. If this time You're using regular dosbox on Android, it is also placebo, cause code is exactly same. I think you mean anDosbox, not adosbox. And yes, if you do compare them, you WILL see that it IS much smoother then the Maemo version. Also, the use of special keys are just working VERY well on anDosBox. I will make a video where you can see that it IS actually MUCH better then the original Maemo port. I'll be back with a video comparison. Estel 11-13-2011, 11:38 AM Hey, I also use xbindkeys (right click is for me "blue arrow" + tap), and it works on Maemo and ED. Still, it doesn't work on dosbox (due to keyb rover.sys, I suppose - it's using "blue arrow" other way). Have You achieved right click functionality in dosbox with Your method? If yes, what I'm missing? sorry for dumb question, but I'm quite lost in this case. Keep in mind, that I'm using N900, on Diablo it may look a little different. 
/Estel HtheB 11-13-2011, 02:21 PM Ok, here you go, a video comparison with DOSBox on Maemo 5, anDosBox on NITDroid, and anDOSBOX also on NITDroid. See for your self and stop wasting time using DOSBox on Maemo 5... It really is more usefull with anDOSBox Edit: Somehow, I typed the last video as "DOSBox on Maemo 5", that should be "anDOSBox on NITDroid". Corrected with an annotation Estel 11-13-2011, 02:44 PM Sorry, but this doesn't comply for "why it should work faster, from technical point of view". Show me parts of source code from dosbox for android or andosbox, that make it faster - if it is true, I'll immediately suggest including it in mainstream dosbox ;) To be honest, with correct settings in dosbox config, frequency range 500-900 and swap on microSD, performance on Maemo is outstanding - even System Shock (1) is *almost* playable. Try that on Nitroid ;) It always wonder me, how easily a little "marketing" ramble, with 0% meritocratic value, can make people "fanboys" of certain things. Really, is it *so* hard to understand, that any "miracle" speedup in source code inside dosbox for Android, would be immediately included in mainstream? Stop wasting Your time with GPL violations/adware/fail platforms - or, stay with it, if You want (it's free world) - but, please, don't advocate depreciated things, unless You can meritocrately argument rationale for it. --- On the other hand, my question about right mouse click is still actual... Any help - even explanation of methods from Diablo - would be *very* appreciated - I would love to use programs/games that include right mouse click. /Estel HtheB 11-13-2011, 02:47 PM Sorry, but this doesn't comply for "why it should work faster, from technical point of view". Show me parts of source code from dosbox for android or andosbox, that make it faster - if it is true, I'll immediately suggest including it in mainstream dosbox ;) To be honest, with correct settings in dosbox config, frequency range 500-900 and swap on microSD, performance on Maemo is outstanding - even System Shock (1) is *almost* playable. Try that on Nitroid ;) It always wonder me, how easily a little "marketing" ramble, with 0% meritocratic value, can make people "fanboys" of certain things. Really, is it *so* hard to understand, that any "miracle" speedup in source code inside dosbox for Android, would be immediately included in mainstream? Stop wasting Your time with GPL violations/adware/fail platforms - or, stay with it, if You want (it's free world) - but, please, don't advocate depreciated things, unless You can meritocrately argument rationale for it. --- On the other hand, my question about right mouse click is still actual... Any help - even explanation of methods from Diablo - would be *very* appreciated - I would love to use programs/games that include right mouse click. /Estel You have to know that it's just optimizing so it works better on the mobile device. I showed that a right mouseclick can be accomplished with a simple app :) I'm going to try out System Shock (1) to show you the differences ;) Edit: Found the (old) source of anDosBox: Estel 11-13-2011, 03:07 PM Ok, after briefly examining anDOSBox sources (checking for differences), I don't see *any* reason, why it should work faster. In fact, it may be even less reliable, in case of some games (it seems that is "dumped down", breaking support for games that would not run with playable speed on most android devices). So, my question "how it is better" is still open. 
We already know, how it is worse (GPL violation, changing code in non-documented way, that should be smashed with fire). /Estel HtheB 11-13-2011, 03:16 PM Ok, after briefly examining anDOSBox sources (checking for differences), I don't see *any* reason, why it should work faster. In fact, it may be even less reliable, in case of some games (it seems that is "dumped down", breaking support for games that would not run with playable speed on most android devices). So, my question "how it is better" is still open. We already know, how it is worse (GPL violation, changing code in non-documented way, that should be smashed with fire). /Estel As I already said, it is the old version. I should find the newest version for you. I don't see *any* reason, why it should work faster. As a "default user", I just look how usefull it is and if it's just working with my old dos games. I just cant play my old games with all the other ports... Why isn't the DosBox on Maemo and even aDosBox (the other opensource port for android) so much crappier than anDosBox? By the way, you can get the source of anDosBox by just mailing the guy Estel 11-13-2011, 03:45 PM I'm non-experienced as coder too, but even user should - IMO - take much care, to filter out placebo effect, from regular gain. Also, it's possible, that andosbox got some options "locked" (no matter of config) for maximum performance, hence dropping support for games requiring other settings. With dosbox, One can - like me - tweak settings for max performance, and, for games that have specific needs, just keep other customized config files too. My question "if it is so miraculous and Open Source at the same time, why You think magic code would not be included in mainstream by dosbox devs, working hard on achieving best performance?" still remains. That is why I ask for rationale, not subjective user experience. /Estel HtheB 11-13-2011, 03:49 PM To be honest, with correct settings in dosbox config, frequency range 500-900 and swap on microSD, performance on Maemo is outstanding - even System Shock (1) is *almost* playable. Try that on Nitroid ;) :) :) :) Estel 11-13-2011, 04:21 PM OK, I admit that SystemShock work on anDosbox with similar performance to dosbox on Maemo, with correct settings. By the way, andosbox is doing pernament framedrop (see sources) - normally, You can set this value in dosbox settings. This is how many frames dosbox skip (without processing) before drawing one. Setting 1-3 give quite a boost to performance, without much visible effects - higher start to being visible. So, basically, Andosbox is doing framedrop constantly = constantly lowering output graphic quality, no matter if it is needed or not. That's all the "mystery". Sure, You may not notice it on Android device, but, why to use it, if You can set exactly same things on normal dosbox (or tweak it exactly to Your likings)? /Estel HtheB 11-13-2011, 04:57 PM OK, I admit that SystemShock work on anDosbox with similarperformance to dosbox on Maemo, with correct settings. By the way, andosbox is doing pernament framedrop (see sources) - normally, You can set this value in dosbox settings. This is how many frames dosbox skip (without processing) before drawing one. Setting 1-3 give quite a boost to performance, without much visible effects - higher start to being visible. So, basically, Andosbox is doing framedrop constantly = constantly lowering output graphic quality, no matter if it is needed or not. That's all the "mystery". 
Sure, You may not notice it on Android device, but, why to use it, if You can set exactly same things on normal dosbox (or tweak it exactly to Your likings)? /Estel Be my guest and improve the DosBox on Maemo... Because all I want is just playing old dos games (without much tweaks.. anDosBox doesn't need any tweaks to run any game) That's the only reason why I just use anDosBox... It just works very well.. Estel 11-13-2011, 05:37 PM I see Your point. If You could point me to most-demanding (in terms of performance) dos game, that is at the same time fully playable in andosbox, I could prepare settings for it, to work the same (or better) on Maemo, and publish it here. Maybe it is going to need some after-tweaking or testing, so Your help - as tester pointing out where it could be better - would be also appreciated ;) I thought about dos-version C&C, but maybe You know more demanding (yet, playable) title? /Estel HtheB 11-13-2011, 06:22 PM I see Your point. If You could point me to most-demanding (in terms of performance) dos game, that is at the same time fully playable in andosbox, I could prepare settings for it, to work the same (or better) on Maemo, and publish it here. Maybe it is going to need some after-tweaking or testing, so Your help - as tester pointing out where it could be better - would be also appreciated ;) I thought about dos-version C&C, but maybe You know more demanding (yet, playable) title? /Estel I really love playing Skunny Kart (http://www.giantbomb.com/skunny-kart/61-11718/) on my N900 :) It's playable on andosbox. http://www.copysoft.com/skunny_kart.php 11-13-2011, 09:58 PM Hey, I also use xbindkeys (right click is for me "blue arrow" + tap), and it works on Maemo and ED. Still, it doesn't work on dosbox (due to keyb rover.sys, I suppose - it's using "blue arrow" other way). Have You achieved right click functionality in dosbox with Your method? If yes, what I'm missing? sorry for dumb question, but I'm quite lost in this case. Keep in mind, that I'm using N900, on Diablo it may look a little different. /Estel For us that still run Diablo, no complaints. :) It could be a faulty xmodmap. I just now remember reading about this awhile ago on this site. Try searching for xmodmap here. What behavior do you get after typing this in Xterm? xmodmap -e "pointer = 3 2 1" momcilosystem 11-19-2011, 09:33 AM Hi guys, I wanted to ask if anyone can help me configure my dosbox to run Mortal Kombat 3 smoothly? I managed to make game run and screen size is perfect but I couldn't make it smooth by editing my dosbox.conf. The game loads very slowly and it is very choppy, in fact it's unplayable. I disabled the sound to make it more playable but it didn't help. I attached my dosbox.conf (had to rename it to .txt to be able to upload, also removed commented lines to make it smaller). Got the game form Abandonia: http://www.abandonia.com/en/games/29060/Mortal+Kombat+3.html Any help would be appreciated :) P.S. If I posted in the wrong section, please move the post, or point me in the right direction. Thanks. 
Estel 11-19-2011, 06:00 PM
You may want to update dosbox for maemo - 0.74 is the latest version (it requires You to have extras-devel enabled, at least for this single update). After the update (remember, You will get a new config with 0.74 in its name - changes in the 0.73 one won't be used by the 0.74 version, unless specified as a startup argument!), change the following lines (I'm posting how they *should* look, not how they look now):

fulldouble=false
output=surface
priority=higher,pause
memsize=16

... by the way, are You insane, man?! Putting 64 MB into a dosbox emulated machine? It doesn't work like that; increasing it beyond sane limits actually slows things down. And, actually, never ever use more than 32 MB, even on a desktop computer. On N900, use max 16 MB; for most games 8 or 4 is sufficient. Games from that era won't (most of the time) use additional memory, but You're still taking it out of the Maemo reserve.

frameskip=3
aspect=true
scaler=normal2x
core=auto [You may try forcing dynamic here, if You're sure it works better for that particular game; auto is best for most cases]
cputype=auto
cycles=auto

[mixer]
nosound=false
rate=22050
blocksize=1024
prebuffer=20

...now, if You want to have sound, use:

[sblaster]
sbtype=sb16
sbbase=220
irq=7
dma=1
hdma=5
sbmixer=true
oplmode=auto
oplemu=default
oplrate=22050

...otherwise, use:

[sblaster]
sbtype=none

And report back about Your findings. Also, using an overclocked max frequency is recommended (900 upper limit; I don't recommend going higher than this). If You still have problems with fluidity, the first thing to do is disable sound. Then, You may try increasing frameskip (by 1 on every try). I don't recommend going higher than frameskip 5, because it causes a torn look no matter the performance. Also, keep in mind that in dosbox 0.74 output=overlay doesn't work, but output=surface does. That's why I changed it in the config. If You insist on using dosbox 0.73, You can leave it as overlay, but keep aspect=true, and change the scaler to scaler=none.

/Estel

Ghouli 03-15-2012, 04:18 AM
An Corp! As I posted on another thread, I just got my hands on an N900 and started to mess around with it. Got frustrated by the lack of right click in DOSBox, and decided to give it a swing. I set up the SDK, downloaded Javis Pedro's sources for 0.74 in extras-devel and added functionality for right click when you hold down shift. Cool, but it eats away one button from an already small keyboard, so I changed it so it works when you cover the proximity sensor, the same way as in JA2 (looked into its source code, so thanks go to Janne Mäkinen on this one :)). No buttons sacrificed. Packaged, and tested to be working, just need to do a little tidying. Anyone interested? Any way my changes could get implemented in the package in extras-devel? Where to go from here? :o

javispedro 03-15-2012, 04:37 AM
If you can make the feature optional (say configurable from the config file) I will add them to my patchset and publish a new version.

ivgalvez 03-15-2012, 05:04 AM
Hi Javispedro, have you seen the latest version of Dosbox Turbo for Android? It works really well; is there any improvement that might be worth porting to the Maemo version?

javispedro 03-15-2012, 06:29 AM
Hi Javispedro, have you seen the latest version of Dosbox Turbo for Android? It works really well; is there any improvement that might be worth porting to the Maemo version?
I do not have any love for Android *ware.
Considering my past experiences, that software probably is a payware, gpl-violating ripoff of someone else's port that either does nothing new at all -- apart from the necessary bells and whistles to be featured on the payware store -- or just has a different set of defaults that works better for a few games at the cost of several thousand others. So, no, I'm not touching that :). Ghouli 03-15-2012, 07:00 AM If you can make the feature optional (say configurable from the config file) I will add them to my patchset and publish a new version. Done. Currently reads useproxsensor boolean from dosbox.conf under SDL section and sends its value to sdl mouse struct. All changes I have made are in sdlmain.cpp.
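If Ghouli's proximity-sensor patch does ship in a future package, turning it on should amount to a one-line config change. A sketch under stated assumptions: the option keeps the name useproxsensor, sits in the [sdl] section as described above, and the config lives at the 0.74 path; none of this is guaranteed until a build is actually published.

# Hypothetical: enable the proximity-sensor right click from the patch above.
# Option name, section and path are assumptions taken from this thread.
CONF="$HOME/.dosbox/dosbox-0.74.conf"
if grep -q '^useproxsensor=' "$CONF"; then
    sed -i 's/^useproxsensor=.*/useproxsensor=true/' "$CONF"
else
    sed -i '/^\[sdl\]/a useproxsensor=true' "$CONF"   # insert under [sdl]
fi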
# Day 4

After much wrangling with the useful-but-crash-happy KDevelop, I hacked the latest stuff into my tactical shooter. It now has 5 characters on the map, each of which will follow its own path simultaneously. As always, I prototype things in the quickest way possible to get them working and then migrate all the globals into locals or members, etc. Regarding large designs and object models, I work on the principle of You Aren't Gonna Need It, which is a deliberately exaggerated way of saying you shouldn't design elaborate objects that are theoretically suitable for any purpose and reusable, when you may not end up using a lot of that functionality. Right now my Character class just looks like this:

class Character
{
public:
    Character();

public:
    int x, y;            // position
    Path currentPath;
    int currentPathNode; // index into currentPath
};

It's not elegant, and it is certainly going to grow as development continues. But there's little point in me enumerating all the future methods now, because they will still change after that point. Similarly, the second public block will eventually become private, but I am waiting to see what sort of access patterns the code requires before I decide on an accessor interface that will allow me to make the members private.

My next day of coding may feature a change from tile-based movement to pixel-based movement. Characters already use the finer-grained coordinates, so in theory this should just be a case of adding in an interpolation layer to move a couple of pixels at a time and a detection routine to see if the character is in the middle of the destination tile yet (with a little margin of error).
# Tag Info 8 So I'm not completely sure, but I think you're asking to count the number of strings of size $n$ (over the alphabet $\{a, b\}$) where the factor/substring $aa$ does not appear right? In this case, there are a few combinatorial approaches that you can take. Both Yuval and ADG have given simpler and more intuitive arguments, so I definitely suggest checking ... 7 Lee Gao's answer is excellent. Here is a different account. Consider the following automaton: This is an unambiguous finite automaton (UFA) without $\epsilon$ transitions: an NFA such that each word has exactly one accepting path. The number of words of length $n$ is thus the number of paths of length $n$ from the starting state to an accepting state (since ... 4 @Lee Gao's is too complex (I haven't even read the whole thing), here is a simplistic approach: Let f(n) be all desired strings out of which let a(n) be strings that end at a and b(n) be strings that end at b. Now for every string that ends at b we can directly add a to get ba in ending and a valid string: $$a(n)=b(n-1)\tag{1}$$ Note than we cannot add a ... 3 Your problem is known variously as the lost cow problem or the cow-path problem, and is a standard example in online algorithms. The algorithm you describe is 9-competitive, which is optimal for deterministic algorithms. For randomized algorithms the competitive ratio is roughly 4.6. See for example a writeup apparently by Rudolf Fleischer. 3 There are many functions that satisfy your condition. Here are a few. $a_i=1+c^{-i}$, for some constant $c\ge 2$. $a_i= 1+b_i/p_i$, where $p_i$ is the $i^{th}$ prime number and $b_i$ is any integer between 1 and $p_i-1$. $a_i =1+\alpha^i/\lceil\alpha^i\rceil$, for any positive transdental number $\alpha$. For example, you can take $\alpha = \pi$ or $\alpha=... 3 Finding a good analytic characterization of$n(N)$is tricky. Let's first consider the relaxation where$N = \frac{n}{\log n}$without the flooring restriction. Here's a somewhat nonintuitive approximation: let$m(z) = 1 + \frac{1}{z}$, let's see how$\frac{m(z)}{\log m(z)}$behaves as a function of$z$: $$\begin{array}{ccc} z = 1 & 10 & 100 & ... 3 You say that N=\lfloor\tfrac{n}{logn}\rfloor. But since you have a linear recurrence in N, not n, you really want n as a function of N. We have n ≈ N \log n. You substitute this for n and get n ≈ N \log (N \log n) = N \log N + N \log \log n. \log \log n is small compared to \log N, so we ignore it and get T(N)=T(N-1)+\mathcal{O}(N \log N) ... 3 Let us start by giving an alternative formula for \sum_{u,d} |L(u,d)|. For a node x, let \pi^{(d)}(x) be its d'th parent, and let h(x) be its height. Then$$ \sum_{u,d} |L(u,d)| = \sum_x |\{ L(\pi^{(d)}(x),d) : 0 \leq d \leq h(x)\}|. $$It suffices to show that |\{ L(\pi^{(d)}(x),d) : 0 \leq d \leq h(x)\}| = O(\sqrt{n}) for all nodes x. Let ... 3 The Risch algorithm is undecidable by Richardson Theorem if absolute value is allowed or semi-undecidable with \log_2, \pi, e^x, \sin x. Akitoshi Kawamura in his dissertation Computational Complexity in Analysis and Geometry has proved that integration is \#P{-}complete operation. There are multiple modifications of the Risch algorithm, but still nobody ... 2 Hint: Use a union bound. Note that for m \geq 2^k you cannot provide any non-trivial bound, since there are formulas with 2^k clauses of width k that are unsatisfiable. The same example shows that the bounded hinted above is tight (for all m \leq 2^k). 2 Let me write the definition in a little bit more detail. 
Suppose that f(n),g(n) are two functions from \mathbb{N} to \mathbb{R}_+, that is, they accept as input a natural number, and return as output a positive real number. We say that f(n) = O(g(n)) if there exist n_0 \in \mathbb{N} and C \in \mathbb{R}_+ such that for all n \geq n_0, ... 2 That guess is obviously wrong. If all items have weight 666 and value 1, then you can fit 3 items into a bin of size 2000, and only one into a bin of size 1000. If all weights are W < w_i ≤ 2W, then O = 0 and O' = highest value of a single item. There is very little that you can say except the obvious O ≤ O'. You might have "valuable" items with a ... 2 The trivial bounds are that |c| \le |X|=n and |c| \le |S|=p. Knowing bounds on the sizes of the sets s_i doesn't help much. For instance, consider the family of sets s_1,\dots,s_{n-k+1} given by s_i=\{i,n-k+2,\dots,n-1,n\}. These sets are all of size k, and yet the largest inclusion-minimal cover has size n-k+1, i.e., exactly equal to the ... 2 This kind of thing just doesn't work. For example, one of your intermediate terms is O(1)/O(\log n). However any function f can be written as g/h where g=O(1) and h=O(\log n). If f=O(1), then let g=f and h=1. If f=\Omega(1), then let g=1 and h=1/f. For a more formal treatment, try to rewrite each of your statements that involve ... 2 I will suggest a similar strategy. Note that your strategy yields a constant factor better than the one presented here but the proof is more technical. Let us divide the strategy in rounds 1, 2, \dots, where in round i we go 2^{i-1} steps to the right, then we go back to the center and we repeat the exact same to the left. This means we go one step to ... 2 It seems that you have overlooked the fact that f(n) \in \Theta(n^4) already implies both an upper bound of f(n)\in O(n^4) and a lower bound of f(n)\in \Omega(n^4). Intuitively, \Theta- notation says that a function grows "as fast as" another function, which means both "at most as fast" and "at least as fast". 2 Let us notice the following:$$ |\sin(\pi \cdot n/2)| = \begin{cases} 1 & \text{if$n$is odd}, \\ 0 & \text{if$n$is even}. \end{cases} $$This implies that your function f(n) alternates between n^2 (for odd n) and 0 (for even n). This means, for example, that f(n) = O(n^2) (since f(n) \leq n^2 for all n) but f(n) \neq \Omega(n^2) ... 1 Let S be a maximal set of vertices in which any two vertices are at distance at least 3. (Note maximal just means that S cannot be enlarged by adding new vertices.) By design, any other vertex is at distance at most 2 from some vertex in S, and therefore S is a 2-dominating set. On the other hand, if we define the B_1(v) to consist of all vertices ... 1 Here's what my search has come up with: There's a straightforward dynamic programming solution of the problem in this case, which requires \mathop{\Omega}\left( k \cdot n^3 \right) time; and with some thought one can avoid redundant computation of sums-of-squares by this algorithm, reducing the complexity to \mathop{\Omega}\left( k \cdot n^2 \right). ... 1 If you want to compute X (whatever X is), then you just need to compute X, and whatever intermediate results are required by your method for computing X. If your method doesn't need upper and lower bounds, there's no need to compute them. The point of upper and lower bounds is that they're usually easier to compute than the actual answer, and ... 1 There are lots of NP-complete problem where no algorithm will find an optimal solution - during your lifetime. 
Given the choice between a greedy algorithm that quickly finds a solution which is often good, and an algorithm that will find a guaranteed optimal solution, but not while you are waiting for it, which are you going to use?

1 The recurrence relation $T(n) = T(n-1) + T(n-2)$, with initial conditions $T(2) = T(1) = 1$, has as its solution the Fibonacci numbers, whose asymptotic growth is known to be $\Theta(\phi^n)$, where $\phi = \frac{1+\sqrt{5}}{2} < 2$. The same bound holds for arbitrary positive initial conditions.

1 Tricky. As an approximation, assume that the number $n$ is in the set with probability $p(n)$, and that the sum of $p(n)$ over all integers $n$ is finite, so we may guess that the set is finite. You'd have to check enough values $n$ to make some reasonable assumptions about $p(n)$, and then based on those assumptions make a reasonable guess about the upper bound. ...

1 Place $T$ triangles next to each other such that the line $l$ intersects them all. Observe that the number of intersections is $T+1$. Now substitute for $T$ the upper bound on the number of triangles for any triangulation that you mentioned, and we obtain $2n-5+1=2n-4$ as an upper bound on the number of intersections. The example provided in the question ...

1 Since $n \geq 1$ (because $n$ is the input length), we have $$20n^2 \leq 20n^2 + 11n + 1 \leq (20+11+1)n^2 = 32n^2.$$ Put differently, for all $n \geq 1$, the function $f(n) = 20n^2 + 11n + 1$ is bounded from below by $20n^2$ and from above by $32n^2$. This shows that $f(n) = \Theta(n^2)$. Note also that when $n$ is large, $f(n)$ gets very close to $20n^2$. ...

1 Remember that we can view a nondeterministic computation as a directed acyclic graph of configurations indexed by time. I have a guess that in your eyes it is $2^n$ for both: each state may direct to itself or to others, and each has at most $|\Sigma|$ chances to go to a next state, maybe itself, maybe another, so from state $1$ to state $n$ an NFA produces at ...
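The counting recurrence quoted in the excerpts above is easy to check numerically. The following is a minimal sketch (ours, not from any of the answers): it counts length-$n$ strings over $\{a,b\}$ with no $aa$ factor via $a(n)=b(n-1)$ and $b(n)=a(n-1)+b(n-1)$, and verifies the result against brute force. The totals are shifted Fibonacci numbers, matching the $\Theta(\phi^n)$ growth quoted above.

```python
from itertools import product

def count_no_aa(n: int) -> int:
    # a = number of valid strings ending in 'a', b = number ending in 'b'
    a, b = 1, 1  # length-1 strings: "a" and "b"
    for _ in range(n - 1):
        # a(n) = b(n-1): 'a' may only follow 'b'
        # b(n) = a(n-1) + b(n-1): 'b' may follow anything
        a, b = b, a + b
    return a + b

# Brute-force check for small n
for n in range(1, 12):
    brute = sum("aa" not in "".join(s) for s in product("ab", repeat=n))
    assert count_no_aa(n) == brute

print([count_no_aa(n) for n in range(1, 9)])  # [2, 3, 5, 8, 13, 21, 34, 55]
```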
# 116 (number)

116 (one hundred [and] sixteen) is the natural number following 115 and preceding 117.

- Cardinal: one hundred [and] sixteen
- Ordinal: 116th (one hundred [and] sixteenth)
- Factorization: $2^2 \cdot 29$
- Divisors: 1, 2, 4, 29, 58, 116
- Roman numeral: CXVI
- Binary: $1110100_2$
- Octal: $164_8$
- Duodecimal: $98_{12}$

## In mathematics

116 is a noncototient, meaning that there is no solution to the equation m − φ(m) = 116, where φ stands for Euler's totient function (a short computational check appears at the end of this entry).[1] 116! + 1 is prime.[2] There are 116 ternary Lyndon words of length six, and 116 irreducible polynomials of degree six over a three-element field, which form the basis of a free Lie algebra of dimension 116.[3] There are 116 different ways of partitioning the numbers from 1 through 5 into subsets in such a way that, for every k, the union of the first k subsets is a consecutive sequence of integers.[4] There are 116 different 6×6 Costas arrays.[5]

## In other fields

One hundred sixteen is also: ...
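The noncototient claim above is small enough to verify directly. A minimal sketch (ours, not part of the article): for odd m > 1, φ(m) is even, so m − φ(m) is odd and cannot equal 116; for even m, φ(m) ≤ m/2, so m − φ(m) ≥ m/2, and only m ≤ 232 need be checked.

```python
def phi(m: int) -> int:
    # Euler's totient via trial-division factorization
    result, n, p = m, m, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

# Even m: m - phi(m) >= m/2, so m <= 232 suffices; odd m > 1 gives an odd difference.
assert all(m - phi(m) != 116 for m in range(2, 233))
print("no m with m - phi(m) = 116, so 116 is a noncototient")
```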
The basic motivation to specify "yet another Client/Server and Sender/Receiver mechanism" instead of using an existing infrastructure/technology is the goal to have a technology that:

• Fulfills the hard requirements regarding resource consumption in an embedded world
• Is compatible across as many use-cases and communication partners as possible
• Is compatible with AUTOSAR at least on the wire-format level; i.e. can communicate with PDUs AUTOSAR can receive and send without modification to the AUTOSAR standard. The mappings within AUTOSAR shall be chosen according to the SOME/IP specification (see the sketch after this list).
• Provides the features required by automotive use-cases
• Is scalable from tiny to large platforms
• Can be implemented on different operating systems (i.e. AUTOSAR, GENIVI, and OSEK) and even on embedded devices without an operating system
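To make the wire-format compatibility goal concrete, here is a rough sketch (ours; the field sizes follow the published SOME/IP header layout, but the helper itself is illustrative, not part of any AUTOSAR code) of packing the fixed 16-byte SOME/IP header that precedes each payload:

```python
import struct

def someip_header(service_id: int, method_id: int, payload_len: int,
                  client_id: int, session_id: int,
                  msg_type: int = 0x00, return_code: int = 0x00,
                  proto_version: int = 0x01, iface_version: int = 0x01) -> bytes:
    # Message ID (32 bit) = Service ID (16 bit) + Method ID (16 bit)
    # Length (32 bit) counts everything after the Length field:
    #   8 header bytes (Request ID + version/type/return code) plus the payload
    # Request ID (32 bit) = Client ID (16 bit) + Session ID (16 bit)
    return struct.pack(">HHIHHBBBB",
                       service_id, method_id,
                       payload_len + 8,
                       client_id, session_id,
                       proto_version, iface_version,
                       msg_type, return_code)

hdr = someip_header(0x1234, 0x0001, payload_len=4,
                    client_id=0x0001, session_id=0x0001)
assert len(hdr) == 16  # 16-byte SOME/IP header, big-endian on the wire
```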
## Abstract

We performed successive H₂¹⁵O-PET scans on volunteers as they ate chocolate to beyond satiety. Thus, the sensory stimulus and act (eating) were held constant while the reward value of the chocolate and motivation of the subject to eat were manipulated by feeding. Non-specific effects of satiety (such as feelings of fullness and autonomic changes) were also present and probably contributed to the modulation of brain activity. After eating each piece of chocolate, subjects gave ratings of how pleasant/unpleasant the chocolate was and of how much they did or did not want another piece of chocolate. Regional cerebral blood flow was then regressed against subjects' ratings. Different groups of structures were recruited selectively depending on whether subjects were eating chocolate when they were highly motivated to eat and rated the chocolate as very pleasant [subcallosal region, caudomedial orbitofrontal cortex (OFC), insula/operculum, striatum and midbrain] or whether they ate chocolate despite being satiated (parahippocampal gyrus, caudolateral OFC and prefrontal regions). As predicted, modulation was observed in cortical chemosensory areas, including the insula and caudomedial and caudolateral OFC, suggesting that the reward value of food is represented here. Of particular interest, the medial and lateral caudal OFC showed opposite patterns of activity. This pattern of activity indicates that there may be a functional segregation of the neural representation of reward and punishment within this region. The only brain region that was active during both positive and negative compared with neutral conditions was the posterior cingulate cortex. Therefore, these results support the hypothesis that there are two separate motivational systems: one orchestrating approach and another avoidance behaviours.

## Introduction

Early cortical representations of visual, auditory and somatosensory information (e.g. 'primary' and 'secondary' areas) are in the unimodal neocortex. In contrast, the cortical representations of the chemical senses (taste and smell) are in the limbic and paralimbic cortex. This is true in primates (e.g. Tanabe et al., 1975a, b; Pritchard et al., 1986; Takagi, 1986; Price, 1990; Baylis et al., 1995; Rolls et al., 1996; Scott and Plata-Salaman, 1999) and in humans (Zatorre et al., 1992; Jones-Gotman and Zatorre, 1993; Petrides and Pandya, 1994; Faurion et al., 1999; Pritchard et al., 1999; Small et al., 1999). Thus, the representations of taste and smell are in regions of the brain that are thought to be important for processing the internal and motivational state as well as the affective significance of external objects. Stimulation with taste and smell has been shown in neuroimaging studies to be a potent elicitor of brain activity in limbic regions such as the amygdala, insula, orbitofrontal cortex, cingulate cortex and basal forebrain (e.g. Zatorre et al., 1992; Small et al., 1997b; Zald and Pardo, 1997; Sobel et al., 1998; Zald et al., 1998; Francis et al., 1999; Royet et al., 2000; Savic et al., 2000). Many neuroimaging studies have investigated brain activity evoked by affective stimuli, including chemosensory stimuli. For example, tastes (Zald et al., 1998; Francis et al., 1999), flavours (Small et al., 1997b), smells (Zald et al., 1997; Francis et al., 1999; Royet et al., 2000), music (Blood et al., 1999), sounds (Morris et al., 1999a), faces (e.g.
Morris et al., 1996; Lane et al., 1997; Phillips et al., 1997), photographs (Lane et al., 1997a; Paradiso et al., 1999), pain (e.g. Coghill et al., 1999; Tolle et al., 1999) and touch (Francis et al., 1999) have been investigated by functional neuroimaging in humans. However, in each case, different stimuli, or different stimulus intensities, had to be used to represent different hedonic valences. Recently, O'Doherty and colleagues modulated the reward value of banana odour by having subjects eat bananas to satiety (O'Doherty et al., 2000). Functional MRI was used to measure brain activity evoked by the same banana odour before and after feeding. Activity in the medial orbitofrontal cortex (OFC) decreased after satiety in response to the banana odour but not in response to a vanilla odour. This suggests that the medial OFC is preferentially activated to odours when they are rewarding. However, in this experiment the affective value was manipulated only from pleasant to neutral, leaving aversive unexplored. Furthermore, to our knowledge, no study has investigated the neuronal correlates of changes in the reward value of food produced by eating. Thus, despite the fact that much of the non-human animal literature on reward and stimulus–reward association learning is based upon feeding, the neural substrates of the affective and motivational components of feeding in humans remain relatively unexplored. To investigate brain activity related to affective changes associated with feeding, we performed successive H₂¹⁵O-PET scans on volunteers as they ate chocolate to beyond satiety. Thus the sensory stimulus was held constant, while its reward value, measured by affective ratings (Fig. 1; see also Methods), was manipulated by feeding. This paradigm is unique in that changes in regional cerebral blood flow (rCBF) can be attributed to changing reward value, independent of sensory input and behaviour (eating). Importantly, at the beginning of the experiment, eating the chocolate is consistent with the subjects' motivation, but as the chocolate is eaten to beyond satiety, behaviour comes to be inconsistent with subjects' motivation. Thus, the same act (eating) is both rewarding and punishing within this paradigm. However, because the change in reward value and motivational state necessarily occurs over time, the effects of order almost certainly contribute to neural activity. Additionally, non-specific effects, such as autonomic and visceromotor changes, which are intrinsic to both eating and modulation of the reward value of food, were not assessed and thus cannot be disentangled from the overall neural response. We predicted that rCBF would be modulated by the reward value of the stimulus in chemosensory regions, including the insula/operculum and orbitofrontal cortex, reflecting the involvement of these regions in both sensory and limbic aspects of the neural representation of food. Additionally, we expected that structures proposed to be involved in the initiation of feeding, such as the striatum and dopaminergic midbrain (Rolls, 1993), would be selectively active when subjects were motivated to eat chocolate and that the prefrontal cortex, which has been proposed to be involved in the decision to terminate eating (Tataranni et al., 1999), would become increasingly active as subjects became increasingly motivated not to eat.
We also predicted modulation of rCBF in limbic areas previously implicated in the positive and negative evaluation of sensory stimuli, and reward and punishment, including the subcallosal region, OFC, cingulate cortex, basal ganglia and anterior temporal lobe structures. Finally, we studied positive or negative linear changes in rCBF in specific brain regions as an indication of the extent to which that region is either preferentially activated by eating chocolate when it is 'pleasant and rewarding' versus 'unpleasant and punishing' or vice versa.

## Methods

### Pilot study

Pilot testing was conducted to determine what type of chocolate to use. Fifteen healthy subjects were asked to rank 20 kinds of chocolate from the most to the least pleasant. Lindt bittersweet (50% cocoa) and Lindt milk chocolate were consistently ranked as the most pleasant; however, subjects who preferred the bittersweet did not like the milk chocolate and subjects who preferred the milk chocolate did not like the bittersweet chocolate. In the PET experiment we therefore decided to give subjects the choice between Lindt bittersweet and milk chocolate. Two subjects chose bittersweet and seven subjects chose milk chocolate.

### Subjects

Nine healthy, right-handed volunteers who claimed to be chocolate-lovers participated in this study. Status as a chocolate-lover was determined by rating the subject on a scale from 0 to 10, where 10 referred to 'chocoholic' and zero was neutral; all subjects' ratings fell between 8 and 10. Five were women and four were men. All had eaten breakfast ~4.5 h prior to scanning, which took place in the early afternoon (12.30 hours). Hunger ratings, made on a scale of 0–10, where 10 corresponded to starving, 0 to very full and 5 to neither hungry nor full, indicated that subjects were in the range of not hungry to mildly hungry (ratings between 5 and 7) at the beginning of the experiment. All subjects gave informed consent to participate in the study, which was approved by the Ethics Committee of the Montreal Neurological Hospital.

### PET study

PET scans were performed with a Siemens HR+ scanner in 3D acquisition mode, using the H₂¹⁵O water bolus technique to measure rCBF (Raichle et al., 1983). Each subject also received an MRI scan for anatomical registration of PET data (Collins et al., 1994) and resampling into a standardized stereotaxic coordinate system (Talairach and Tournoux, 1988). Subjects underwent seven identical 'chocolate scans'. In each, ~10 s before scanning, subjects were given one square of chocolate and instructed to eat it by letting it melt in their mouth. Immediately after the scan was completed, subjects indicated how pleasant or unpleasant they found that piece of chocolate [question (i)] and how much they would like or not like to have another piece [question (ii)], using the rating scale shown in Fig. 1. Independent pilot testing with 20 subjects had suggested that these aspects of affective evaluation of chocolate were different. Specifically, subjects commonly reported that the chocolate still tasted pleasant but that they did not want to eat any more. Therefore, in the final version of the paradigm we included both ratings in order to capture as much information as possible about the subjects' subjective state. There were two main goals of these ratings. First, we wanted to have a measure of subjective state to determine how much chocolate to feed each subject.
Secondly, we wanted to be able to regress rCBF against a value other than scan number in case changes in subjective affective state did not decrease in a linear fashion. Prior to the next scan, subjects were fed chocolate, one square at a time, making both ratings after eating each piece, until the rating dropped by at least two points. There was a rest period of at least 5 min between the termination of eating and the beginning of the next scan, to reduce the possibility of habituation. In total, subjects ate between 16 and 74 squares of chocolate, corresponding to between half and two-and-a-half 85 g bars of chocolate.

### Data analysis

Regression maps (Paus et al., 1996) were calculated to assess the significance of the relationship between affective ratings and rCBF. Regression analysis involves correlating rCBF with incremental changes in a specific experimental variable. The data set for this analysis consisted of normalized CBF values obtained in each subject during each of the seven chocolate scan conditions, yielding a total of 63 image volumes. The effect of variation in affective rating (ii) was assessed by means of analysis of covariance, with subjects as a main effect and the affective rating as a covariate. The following model was fitted: $E(y_{ij}) = a_i + b_P\,s_{ij}$, where $y_{ij}$ is the normalized CBF of subject $i$ on scan $j$, and $s_{ij}$ is the motivation rating at scan $j$. The subject effect ($a_i$) is removed and the parameter of interest is the slope $b_P$ of the effect of the change in affective rating on CBF (see the sketch below). Values equal to or exceeding a criterion of t = 4.4 for unpredicted peaks were deemed significant (P < 0.05, two-tailed). This yielded a false-positive rate of 0.025 in 200 resolution elements (each of which has dimensions of 14 × 14 × 14 mm for the main analysis) if the volume of brain grey matter is 500 cm³ (Worsley et al., 1996). For predicted peaks, values equal to or exceeding a criterion of t = 3.2 were considered significant, yielding a false-positive rate of 0.025 in 1.5 resolution elements in 2 cm³ (Worsley et al., 1996). Since satiety is a phenomenon that unfolds over time, we introduced several measures to distinguish non-specific time effects from time effects associated with satiety. First, we employed covariation, a statistical technique designed to dissociate linearly related variables and then partial out the effect of the selected variable. Whereas this significantly reduces the likelihood of false-positive errors, it also increases the likelihood of false-negative errors due to the high degree of multicollinearity between scan order and affective ratings. Secondly, three control scans (water-pre, water-post and tongue movement) were also included in our study design to isolate scan order effects. Specifically, rCBF between the two water scans, which were identical except for their order in the experiment, could be compared in areas of interest (determined by the regression analysis and predictions); if there was no difference in activation in a region, we reasoned that linear increases or decreases due to the effect of scan order were unlikely. Finally, subtraction analysis (Worsley et al., 1992) was performed to identify regions that may have responded non-linearly to increasing satiety. Specifically, the fourth chocolate scan, with affective ratings near neutral, was subtracted from the first and last chocolate scans, in which affective ratings were high regardless of valence.
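As an aside, the covariance model described above is straightforward to sketch numerically. The following is a minimal illustration (simulated data and effect sizes are ours, not the study's) of fitting per-subject intercepts $a_i$ plus a common slope $b_P$ by ordinary least squares and computing a t statistic for the slope:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_scans = 9, 7
ratings = np.tile(np.linspace(5, -5, n_scans), n_subjects)   # s_ij, per scan
subject = np.repeat(np.arange(n_subjects), n_scans)          # subject index i
cbf = 50 + 0.5 * subject + 0.8 * ratings + rng.normal(0, 1, ratings.size)

# Design matrix: one indicator column per subject (the a_i) plus the rating (b_P)
X = np.column_stack([np.eye(n_subjects)[subject], ratings])
beta, *_ = np.linalg.lstsq(X, cbf, rcond=None)
slope = beta[-1]

# t statistic for the slope b_P
resid = cbf - X @ beta
df = cbf.size - X.shape[1]                    # 63 observations - 10 parameters
se = np.sqrt(resid @ resid / df * np.linalg.inv(X.T @ X)[-1, -1])
print(f"b_P = {slope:.2f}, t({df}) = {slope / se:.1f}")
```

Whether a peak of this kind would then count as significant depends on the t criterion described above (4.4 for unpredicted peaks, 3.2 for predicted peaks).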
These three chocolate scans (Choc 1, Choc 4 and Choc 7) were also each subtracted from the first scan (water-pre) and the ninth scan (water-post).

## Results

Regression of rCBF in each of seven chocolate scans (Choc 1–Choc 7) against the ratings to question (ii) ('How much would you like or not like another piece of chocolate?') revealed significant rCBF decreases (as ratings changed from positive to negative) in the midline subcallosal region and midbrain; bilaterally, in the inferior and middle temporal gyri, dorsal insula/frontal operculum, caudomedial OFC and caudate nucleus; on the right, in the occipitotemporal gyrus, dorsal insula/frontal operculum and ventral insula; and on the left, in the thalamus and putamen (Table 1) (Fig. 3A–C). When this analysis was performed with the ratings to question (i) ('How pleasant or unpleasant was the piece of chocolate you just ate?'), similar results were obtained. Therefore, we report only the results from the ratings given to question (ii). When the regression equation was performed with scan order covaried out, only the subcallosal area, inferior temporal gyri and left insula remained significant. This analysis suffers from a propensity to false-negative error due to the shared variance associated with the linear relationship between the psychophysical ratings and scan order. Therefore, the areas that remain after this more stringent analysis are robustly related to the psychophysical ratings but not to scan order. However, given the likelihood of false-positive errors, we also considered peaks in the context of the literature (see Discussion). A repeated measures ANOVA was also carried out to compare psychophysical data from nine pilot subjects, who performed the identical experiment but were sitting at a table as opposed to lying in a scanner, with the data from subjects who participated in the PET study. There was no difference in the affective ratings between the groups [F(1,15) = 0.007, P = 0.97]. This indicated that the pleasantness of chocolate and the motivation to eat declined at the same rate regardless of the experimental context. Consequently, it is unlikely that there was an interaction between the increasing unpleasantness of lying in the scanner and the increasing unpleasantness of the chocolate.

## Discussion

### The cortical chemosensory areas

As predicted, activity in regions that probably correspond to the cortical gustatory areas was modulated by changes in the reward value of the chocolate (Fig. 3A and D), suggesting overlapping representation of sensory and affective processing of taste in humans. Specifically, as the reward value of the chocolate decreased, rCBF decreased bilaterally in the insula in regions shown by previous neuroimaging studies to represent the primary gustatory area (Kinomura et al., 1994; Small et al., 1997a, b, 1999; Zald et al., 1998; Faurion et al., 1999; Francis et al., 1999). In contrast, rCBF increased with decreasing reward value in a region of the caudolateral OFC, which has been implicated in gustatory processing in humans (Small et al., 1997a, b, 1999) and has been suggested to represent a secondary gustatory area by Rolls and colleagues (Rolls et al., 1990). When scan order was covaried out of the regression equation, rCBF changes in the left anterior insula and bilaterally in the caudolateral OFC remained significant. Thus, even when scan order was covaried out, modulation was observed in the cortical gustatory areas.
Moreover, there was no difference in rCBF in these regions in the comparison of the two water scans, performed at the beginning and end of the session, indicating that rCBF changes were probably related to the changes in the affective and motivational value of the chocolate and not simply to an effect of scan order (see Results and Fig. 3). Electrical stimulation of the human insula elicits alterations in gastrointestinal motility, taste hallucinations and a variety of sensations associated with the digestive tract (Penfield and Faulk, 1955). Neuroimaging studies have shown this region to be sensitive to various processes related to feeding, including odour (Zatorre et al., 1992; Small et al., 1997b; Fulbright et al., 1998; Francis et al., 1999), taste (Small et al., 1999), tongue somatosensory stimulation (Pardo et al., 1997), swallowing (Hamdy et al., 1999), facial expressions of disgust (Phillips et al., 1997), thirst (Denton et al., 1999) and hunger (Tataranni et al., 1999). In accordance with these studies, O'Doherty and colleagues recently reported that odour-induced insular activation may be attenuated following satiation with the food related to that odour (in one of six subjects in their study) (O'Doherty et al., 2000). The anterior insula is also consistently activated in studies using emotionally salient tasks or sensory stimulation (e.g. Breiter et al., 1997; Kosslyn et al., 1996; Lane et al., 1997b; Thut et al., 1997; Coghill et al., 1999; Morris et al., 1999a). The overlapping representation of affective, sensory and autonomic functions in the insular region is consistent with our result in supporting a role for the insula in feeding behaviour and underscoring the fact that the primary gustatory area may be better conceptualized as the ingestive cortex as opposed to a strictly sensory area. However, the multimodal nature of this region also raises the possibility that the insular modulation we observed may be related to numerous aspects of feeding in addition to the changes in reward value of the chocolate. Taste-responsive cells in the monkey are not modulated by satiety until the OFC (Rolls et al., 1988, 1989); furthermore, OFC taste cells decrease firing with satiety (whereas we report increased blood flow in an analogous region of OFC). There are several explanations for the discrepancy between the single-cell recordings and our findings. First, there are many examples of interspecies differences in the gustatory system. For example, in rats, physiological state has been shown to modulate the taste response as early as the brainstem (Jacobs et al., 1988). Secondly, it is possible that our results are not contradictory but reflect a manifestation of the different scopes of the two methods (single-cell recording versus PET). For example, the electrophysiological studies could be biased in that cells must first display a response to taste in order to be investigated, and thus cells that increase response in the OFC with satiety could be missed. Or perhaps both response profiles exist in the human and are simply not discernible by PET. Finally, the representation of taste in the insula and OFC is sparse (~4% of the cells respond to taste) (Yaxley et al., 1990). Consequently, it is impossible to know if the CBF changes we observed reflect activity of taste neurones per se rather than the modulation of cells responding to reward value or some autonomic aspect associated with being fed chocolate. 
In any case, the differential engagement of the cortical gustatory areas suggests that in humans taste cells have access to information regarding the internal state and reward value of the stimulus. Such an organization represents a departure from classical notions of sensory organization, which are based mostly upon examination of the visual and auditory modalities, both of which have primary cortical representation in the unimodal neocortex. Differential activity was not observed in the primary olfactory region, thought to be located in the pyriform region of humans (Zatorre et al., 1992; Small et al., 1997b; Dade et al., 1998; Sobel et al., 1998; for review, see Zald and Pardo, 2000; Zatorre and Jones-Gotman, 2000). This area does not appear to be sensitive to olfactory sensory-specific satiety (O'Doherty et al., 2000), although it is sensitive to subtle cognitive manipulations (Dade et al., 1998) and different parameters associated with sniffing (Sobel et al., 1998). The pyriform region has also been characteristically difficult to image with PET because of rapid habituation of the odour-induced response (Sobel et al., 2000). Therefore, our failure to observe modulation may reflect the insensitivity of PET to temporal events as opposed to insensitivity of the region to changes in reward value or perceptual experience. Activity was observed in a region of the medial OFC that may correspond to the putative secondary olfactory area (Zatorre et al., 1992). Here activity decreased as motivation to eat decreased. This result is consistent with single-cell recording studies of olfactory sensory-specific satiety in this region (Critchley and Rolls, 1996) and with the functional MRI study by O'Doherty and colleagues reporting that the response of the medial OFC to an odour decreases after subjects eat a related food to satiety (O'Doherty et al., 2000). However, bimodal taste and smell-responsive cells and neurones that respond to the presence of fat in the mouth have been found throughout the caudal OFC (Rolls and Baylis, 1994; Rolls et al., 1999), suggesting that neural processing in this area may give rise to flavour perception in monkeys. Therefore, the changes observed in OFC activity may represent this region's involvement in flavour processing, as opposed, or in addition, to unimodal taste and smell processing. We speculate that one reason for this integrated relationship between sensory and limbic processing of taste is that the brain regions involved in the processing of sensory stimuli that are primarily reinforcing, such as tastes and pain, developed in tandem with the limbic structures for the common purpose of avoiding danger (i.e. toxins and bodily harm) and incorporating nutrients for survival. This integrated relationship may account for phenomena such as single-trial conditioned taste aversion learning, in which the insula has been implicated (Gutierrez et al., 1999) and for which there are clear adaptive advantages. It may also support addictive ingestive behaviour, including overeating and drug abuse, as well as the generation of drive and craving states (e.g. hunger and addiction). For example, Wise suggests that the brain circuitry underlying addiction originally developed to subserve feeding behaviour (Wise, 1997).
Our results support this hypothesis, as the structures selectively active when ratings indicated that the chocolate had a strong positive valence overlap considerably with areas where increases in rCBF were evoked by cocaine versus saline injection (subcallosal region, caudate, putamen, thalamus, hippocampus, insula and ventral tegmentum) (Breiter et al., 1997). Moreover, in addition to hunger (Tataranni et al., 1999) and thirst (Denton et al., 1999), both the insula and the OFC have been implicated in drug cravings (Wang et al., 1999). In the present study, both the insula and the caudomedial OFC were active only when subjects were highly motivated to eat the chocolate. Interestingly, chocolate has been identified as the single most craved food in studies of food cravings (Rozin et al., 1991), and chocolate addiction has been described (Hetherington and MacDiarmid, 1993).

### Feeding-related circuitry

The observation that the primary gustatory area and the chemosensory regions of the OFC are modulated by satiety suggests that these areas play a role in feeding behaviour in addition to sensory processing. The striatum, which has been proposed as a crucial structure for the initiation of feeding (Rolls, 1993), receives feeding-related projections from both the insula (Chikama et al., 1997) and the caudomedial OFC (Haber et al., 1995), which is itself connected to the laterally located secondary gustatory area (Carmichael and Price, 1996). In the present study, rCBF in the dorsal striatum and caudomedial OFC decreased as motivation to eat declined. This result is also in accordance with the results of Tataranni and colleagues, who reported activity in these regions in a comparison of eyes-closed resting while hungry with eyes-closed resting while satiated (Tataranni et al., 1999). These results suggest that the insula, striatum and caudomedial OFC are part of the neural network underlying the initiation of feeding. Interestingly, whereas we observed that activity in the midbrain (in the region of the ventral tegmental area) and subcallosal region correlated with eating chocolate when it is judged as pleasant, Tataranni and colleagues observed no change in these regions in the hungry state compared with the satiated state. One difference between the present study and the study by Tataranni and colleagues is that the subjects in our study received a rewarding stimulus during scanning, whereas their subjects were scanned after eating. This discrepancy is consistent with the proposal that the dopaminergic midbrain mediates the reward value of food (e.g. Mirenowisz and Schultz, 1996; Richardson and Gratton, 1998; Ahn and Phillips, 1999). In contrast, rCBF in several motor and premotor areas, the left lateral prefrontal cortex (left middle and inferior frontal gyri), the bilateral OFC, the right anterior cingulate and the right parahippocampal gyrus increased with satiety. The anterior cingulate and parahippocampal gyrus have been reported to be involved in the affective evaluation of sensory stimuli (discussed below), but to our knowledge have not been implicated directly in feeding. Tataranni and colleagues reported rCBF increases with satiety in the dorsolateral prefrontal cortex and speculated that, since the prefrontal cortex has been shown to participate in the inhibition of inappropriate behaviours, this region may be important in decisions to terminate feeding (Tataranni et al., 1999).
Our result supports this hypothesis, and also suggests that, as in non-human primates (Rolls, 1997), the caudolateral OFC may also be a part of the neural network underlying feeding termination. Regional CBF changes were not observed in the amygdala. The amygdala has been shown previously to be activated by aversive tastes (Zald et al., 1998), smells (Zald et al., 1997) and flavours (Small et al., 1997a), and single-cell recording studies in the monkey suggest that at least some taste cells are sensitive to satiety (Yan and Scott, 1996). There is also some evidence that the human amygdala is sensitive to sensory-specific satiety of odours (O'Doherty et al., 2000). There are several reasons why we may not have observed activity here. First is the potential for interspecies differences. Secondly, it is possible that excitatory and inhibitory activity cancelled each other out, rendering changes undetectable by PET (Yan and Scott, 1996). However, it is also possible that the amygdala is involved in evaluation of the affective valence of chemosensory stimuli when the association should be more permanent. Thus, amygdala activation is seen in response to aversive tastes or to odours such as intestinal gas (Zald et al., 1997), which will always be aversive, and in rats it is involved in conditioned taste aversion learning (e.g. Yamamoto et al., 1994). In contrast, affective evaluation corresponding to satiety must be flexible, as satiety is transient, varying with a changing internal state. Our results, specifically the activity observed in the caudolateral (secondary gustatory area) and caudomedial OFC, are in agreement with the suggestion of Rolls and colleagues, based on the results of electrophysiological experiments on stimulus reversal learning in monkeys, that the orbitofrontal cortex is more important for flexible stimulus–reward associations (e.g. Rolls, 1993).

### Affective evaluation

#### Subcallosal region

The largest area of rCBF change observed in our study was in the subcallosal region. This was also the most robustly activated region in a similarly designed study using musical dissonance to modulate the affective valence of a tune (Blood et al., 1999). In both studies, regression analysis revealed that rCBF decreased as pleasantness decreased, as a function of either increasing satiety or increasing dissonance. Clinical evidence indicates that damage to this region results in disruption of goal-directed actions, which are guided by motivational and emotional factors (for review, see Damasio, 1994), and in rats it has been demonstrated that damage to the medial OFC results in the inability of a cue to access representational information about the incentive value of associated reinforcement (Gallagher et al., 1999). Together, these results suggest that the subcallosal medial prefrontal region subserves a variety of behaviours guided by motivational and emotional factors, including feeding. rCBF in the caudomedial OFC also followed this pattern of activity (i.e. rCBF decreased with increasing satiety or dissonance), which is consistent with the postulated involvement of this area in stimulus–reward association learning (e.g. Rolls, 1996). A final similarity between our study and the musical study by Blood and colleagues was increasing rCBF in the right parahippocampal region as the stimuli became more unpleasant.
In the study of musical dissonance, the subcallosal region, caudomedial OFC and parahippocampal gyrus were activated even though scan order was counterbalanced, and in the present study all three regions were still significantly activated when scan order was covaried out of the regression.

#### Medial versus lateral OFC activity

The opposite pattern of rCBF was observed in the medial compared with the lateral OFC. As eating chocolate changed from rewarding to aversive, rCBF decreased in the medial OFC and increased in the immediately adjacent lateral OFC. Above, we have interpreted these results in relation to the chemosensory literature and our specific predictions regarding feeding. However, the same region of the OFC that is implicated in taste and smell is thought to be involved in stimulus–reward association learning (e.g. Iversen and Mishkin, 1970; Rolls, 1996) in monkeys, and human neuroimaging studies of emotional state, reward, punishment and the affective evaluation of non-chemosensory stimuli consistently report OFC activation (e.g. Thut et al., 1997; Blair et al., 1999; Blood et al., 1999; Morris et al., 1999; Rogers et al., 1999; Damasio et al., 2000; O'Doherty et al., 2001). Moreover, since food is often used as the primary reinforcer in stimulus–reward association learning paradigms, Carmichael and Price have suggested that higher-order processing of the sensory attributes of food in the OFC may provide the sort of hedonic representation that underlies much of what is meant by the term 'reward' (Carmichael and Price, 1996). A similar dissociation between medial and lateral OFC activity has been noted by O'Doherty and colleagues (O'Doherty et al., 2001). In their study, subjects performed an emotion-related visual reversal-learning task while undergoing functional MRI scanning. Lateral OFC activation was found in response to a punishing outcome, whereas medial OFC activation occurred in response to a rewarding outcome. These results suggest that the neural representations of reward and aversion are separated within these regions. Elliott and colleagues have also described a dissociation between medial and lateral OFC function based on a review of functional neuroimaging studies conducted in their laboratory (Elliott et al., 2000). These authors suggest that the medial OFC is involved in monitoring and holding in mind reward values, whereas the lateral OFC is recruited when a response previously associated with a reward has to be suppressed. Our results partially support this notion. Here, the medial OFC is active when subjects report that eating chocolate is rewarding. During this time, their behaviour is in accordance with their will. As their desire to eat decreases and their behaviour (eating) comes to be inconsistent with their will (indicated by their affective ratings), the medial OFC activity decreases and the lateral OFC activates. Thus, in the present study, lateral OFC activity occurs when the desire to stop eating must be suppressed in order to conform to the demands of the experiment.

#### Posterior cingulate cortex

Subtraction analysis (see Results) revealed only one significant non-linear effect in this experiment. Specifically, rCBF in the posterior cingulate cortex was higher when subjects rated chocolate as highly pleasant or highly unpleasant than when they rated it as neutral (Table 3 and Fig. 3E).
As further verification of this effect, we compared the pleasant and unpleasant chocolate scans individually with the two neutral water baseline scans (water-pre and water-post) (Table 3). The effect held: both chocolate scans activated the posterior cingulate to a greater extent than did the neutral water scans. This result is in accordance with Maddock, who, in a recent review of the neuroimaging literature, concluded that this is the brain area that is most consistently activated by emotionally salient stimuli, regardless of valence (Maddock, 1999). Thus, our finding suggests at least some overlap between the brain regions involved in processing positively and negatively valenced stimuli. However, since this is the only region we observed with such an rCBF response profile, our study supports the notion (LeDoux, 1996) that different neural substrates underlie two motivation systems, one dealing with positive/appetitive stimuli and a second dealing with negative/aversive stimuli. Finally, in the present study, the subjects' ratings in response to question (ii) ('How much would you like or not like to have another piece of chocolate?') decreased faster and to a greater extent than their ratings to question (i) ('How pleasant or unpleasant was the chocolate that you just ate?') (Fig. 4). In other words, there was a point in time at which subjects reported that the chocolate tasted pleasant, yet they did not want to eat any more. Although question (i) required the subject to think about the pleasantness of the chocolate eaten immediately prior to the question, whereas question (ii) required the subject to report how motivated they were to eat a piece of chocolate in the immediate future, we feel the ratings we have collected capture two different subjective states that cannot be accounted for by time alone. Berridge and colleagues have described a dissociation between taste reactivity measures (characteristic responses to pleasant or aversive tastes) and acceptance or rejection behaviours in rats (for review, see Berridge, 1996). These two states are described as 'liking' and 'wanting', respectively. It is possible that the ratings we have gathered depict these two dimensions of affective evaluation, question (i) addressing liking and question (ii) addressing wanting. We therefore decided to pursue our psychophysical result. When scan order was covaried out of the regression equation, a region of the retrosplenial cortex, probably corresponding to limbic area 30 (Morris et al., 1999b), was identified that correlated to a greater extent with ratings given to question (ii) compared with ratings given to question (i). While this finding is certainly preliminary, we speculate that this region of the brain may form part of a neural substrate for the subjective difference between finding the chocolate pleasant but not wanting to eat any more (Fig. 3F). This interpretation is consistent with Berridge's proposal that there are separate neural systems underlying wanting and liking (Berridge, 1996).

### Strengths and weaknesses of the paradigm

The paradigm employed here to evaluate affective changes associated with feeding is unique because the same stimulus was used to evoke the entire affective spectrum (positive and appetitive to negative and aversive). At the beginning of the experiment, eating the chocolate was consistent with subjects' motivation, but as the chocolate was eaten to beyond satiety, behaviour came to be inconsistent with the subjects' motivation.
Thus, the same act (eating) is both rewarding and punishing within this paradigm, and the corresponding neural activity can be assessed. However, since our subjects were instructed to eat beyond satiety (in order to make the act of eating chocolate punishing), the paradigm did not mimic the natural satiation associated with the normal termination of a meal. The major limitation of this paradigm is that, since affective changes associated with feeding occur over time, order effects almost certainly contributed to our results. However, time (and thus order) effects are inherent to the process of affective change associated with eating to satiety (and beyond). We attempted to address order effects by including 'control' water and tongue-movement scans at the beginning and at the end of the experiment and by employing covariation, a statistical technique designed to dissociate linearly related variables and then partial out the effect of the selected variable. Nevertheless, we acknowledge that neither of these techniques controls for order completely. Additionally, non-specific effects, such as autonomic and visceromotor changes, which are intrinsic to both eating and the modulation of the reward value of food, were not assessed and thus cannot be disentangled from the overall neural response. Finally, we chose to collect affective ratings immediately after each scan, as opposed to during each scan. This choice was based on our decision to avoid confounding reward-related processing with decision-making-related activity in the OFC. The disadvantage of this decision is that ratings reflected the subjective state immediately after eating the chocolate as opposed to the subjective state during the scan. However, given that changes in ratings occurred steadily and gradually, it is unlikely that these two states differed significantly.

### Conclusion

We observed differential recruitment of brain regions depending on whether subjects ate chocolate when they were highly motivated to eat and rated the chocolate as pleasant, or whether they were highly motivated not to eat and rated the chocolate as unpleasant. The only brain region that was active during both positive and negative compared with neutral conditions was the posterior cingulate cortex. Thus, the present study supports the hypothesis that different neural substrates underlie two motivation systems, one dealing with positive/appetitive stimuli and a second with negative/aversive stimuli. This functional dissociation was particularly apparent in the OFC, where rCBF decreased in the medial OFC and increased in the lateral OFC as the reward value of chocolate changed from pleasant to aversive. This pattern of activity indicates that there may be a functional segregation of the neural representation of reward and punishment within these regions. As predicted, modulation was seen in cortical chemosensory areas including the insula and OFC, suggesting that these chemosensory regions contribute to feeding behaviour by encoding changes in the value of food reward in addition to sensory processing. This result is important because it demands a reconceptualization of these regions as heteromodal ingestive cortices with overlapping representations of sensory and affective processing of taste and smell, which departs from classical notions of primary and secondary sensory areas.
Additionally, we observed activity in several brain regions previously implicated in feeding by studies with non-human animals, including the striatum, midbrain and OFC. These results provide a reference against which future studies of eating disorders and obesity in humans may be compared.

Table 1 rCBF decreases with decreasing reward value

| Area | t (Choc 1–7) | t (water) | Coordinates (x, y, z) |
|---|---|---|---|
| Subcallosal region | 11.4 | 9.78 | –1, 25, –19 |
| Left inferior temporal gyrus | 6.1 | 2.7 | –49, –42, –21 |
| Right inferior temporal gyrus | 4.9 | 5.4 | 52, –37, –19 |
| Right dorsal insula/operculum* | 6.1 | 1.3 | 36, 15 |
| Left dorsal insula/operculum* | 4.3 | –1.0 | –43, –4, 13 |
| Right lateral occipitotemporal gyrus/cerebellum | 6.0 | 3.4 | 41, –59, –23 |
| Left lateral occipitotemporal gyrus/cerebellum | 5.2 | 3.6 | –46, –64, –24 |
| Right middle temporal gyrus | 5.8 | 2.5 | 55, –56, 13 |
| Left middle temporal gyrus | 4.9 | 2.0 | –56, –62 |
| Left thalamus* | 5.3 | 1.0 | –7, –26 |
| Right caudomedial OFC | 4.8 | 4.7 | 16, 27, –19 |
| Left caudomedial OFC | 5.3 | 5.3 | –18, 25, –18 |
| Midbrain* | 4.4 | 1.4 | –28, –13 |
| Left putamen | 4.2 | 1.5 | –29 |
| Right ventral insula | 4.1 | 1.3 | 37, –7 |
| Left caudate nucleus | 3.9 | 1.0 | –12, 10 |
| Right caudate nucleus | 3.2 | 2.9 | 15, 20 |
| Left hippocampus | 3.3 | 2.0 | –27, –28, –7 |

*Peaks shown in Fig. 3. t (water) values are from the subtraction water-pre minus water-post. The value of t remained significant when scan order was covaried out of the regression equation.

Table 2 rCBF increases with decreasing reward value

| Area | t (Choc 1–7) | t (water) | Coordinates (x, y, z) |
|---|---|---|---|
| Left precentral gyrus | –5.9 | –3.9 | –29, –26, 57 |
| Right precentral gyrus | –5.7 | –3.4 | 21, –23, 65 |
| Right supplementary motor area | –5.8 | –2.2 | 11, 30, 55 |
| Left middle frontal gyrus | –5.4 | –3.9 | –28, 17, 50 |
| Medial frontal gyrus | –4.9 | –3.1 | –1, –25, 66 |
| Right caudolateral orbitofrontal cortex* | –4.3 | –3 | 41, 34, –19 |
| Right caudolateral orbitofrontal cortex* | –4.2 |  | 44, 27, –5 |
| Left inferior frontal gyrus | –4.0 | –1.4 | –48, 30 |
| Right cingulate cortex | –3.1 | –2 | 15, 24, 31 |
| Cingulate cortex | –3.0 | –1 | –7, 33 |
| Right parahippocampal gyrus | –2.8 |  | 31, –21, –21 |

*Peaks shown in Fig. 3. t (water) values are from the subtraction water-pre minus water-post. The value of t remained significant when scan order was covaried out of the regression equation.

Table 3 t-Values of activity in the posterior cingulate cortex in affective compared with neutral scans

| Area | Ch1–Ch4 | Ch7–Ch4 | Ch1–wpre | Ch1–wpost | Ch4–wpre | Ch4–wpost | Ch7–wpre | Ch7–wpost |
|---|---|---|---|---|---|---|---|---|
| Posterior cingulate gyrus | 3.2 | 2.8 | 3.3 | 3.9* | – | – | 2.5 | 3.8 |

Ch = Choc; wpre = water-pre; wpost = water-post. *Peak shown in Fig. 3.

Fig. 1 Rating scale. Subjects used the rating scale to respond to two questions following ingestion of each piece of chocolate: (i) How pleasant or unpleasant was the piece of chocolate you just ate? (ii) How much would you like or not like another piece of chocolate?

Fig. 2 Pictorial representation of protocol. Purple bars represent water PET scans and turquoise bars represent chocolate PET scans. The tongue movement scan is represented by a blue bar. Interscan intervals consist of a feeding period and a rest period. The amount of chocolate eaten to produce the required decreases in affective ratings obtained from questions (i) and (ii) decreased gradually as the experiment progressed. This is depicted by the shrinking feed periods. Colour-coded bar graphs in Fig. 3 correspond to the colour scheme used here.

Fig. 3 Cortical regions demonstrating significant rCBF correlations with affective rating for question (ii). Regression analyses were used to correlate rCBF from averaged PET data (Choc 1–Choc 7) with affective ratings taken immediately after these scans (see Methods). Correlations are shown as t statistic images superimposed on corresponding averaged MRI scans. The t statistic ranges for each set of images are coded by colour bars, one in each box. Bar graphs represent normalized CBF in an 8 mm radius surrounding the peak. The y-axis corresponds to normalized activity and the bars along the x-axis represent scans. The three colours represent scan type and correspond to the coloured bars in Fig. 2. Each bar graph corresponds to activations indicated by a turquoise line.
(A) Coronal section taken at y = 1 showing the decrease in rCBF in the primary gustatory area (bilaterally in the anterior insula/frontal operculum and in the right ventral insula). (B) Coronal section taken at y = –26 showing decreases in rCBF in the left thalamus and medial midbrain (possibly corresponding to the ventral tegmental area). (C) Sagittal section taken at x = –1 showing decreases in rCBF in the subcallosal region, thalamus and midbrain. (D) Sagittal section taken at x = 42 showing the increase in rCBF in the right caudolateral orbitofrontal cortex. Activation is also evident in the motor and premotor areas. (E) Sagittal section taken at x = 8 showing an increase in rCBF in the posterior cingulate gyrus (peak at 8, –30, 45) in the subtraction analysis Choc 1 – water-post (see Table 3 and Results section). This was the only region where CBF was consistently greater in affective scans, regardless of valence, compared with the neutral chocolate scan (Choc 4) and the two water baseline scans (water-pre and water-post). (F) Horizontal section at z = 12 showing an increase in rCBF in the retrosplenial cortex (area 30) that correlated with affective rating (ii) but not affective rating (i) when scan order was covaried out of the regression analysis (see Results and Methods).

Fig. 4 Average affective rating to questions (i) and (ii) across the seven chocolate conditions.
Dotted line depicts ratings to question (i) (Pleasantness) and the solid line represents ratings to question (ii) (Motivation). Error bars represent the standard error of the mean. Ratings correspond to the rating scale shown in Fig. 1. Repeated measures ANOVA revealed a significant interaction, indicating that the slopes of the two lines are different (see Results).

We wish to thank the technical staff of the McConnell Brain Imaging Unit and the Medical Cyclotron for their invaluable assistance and the Neurophotography staff at the MNI. Funding was provided by grant MT-14991 awarded to M.J.-G. and R.J.Z. by the Medical Research Council of Canada.
Multivariate Behrens–Fisher problem

In statistics, the multivariate Behrens–Fisher problem is the problem of testing for the equality of means from two multivariate normal distributions when the covariance matrices are unknown and possibly not equal. Since this is a generalization of the univariate Behrens–Fisher problem, it inherits all of the difficulties that arise in the univariate problem.

Notation and problem formulation

Let $X_{ij} \sim \mathcal{N}_p(\mu_i,\, \Sigma_i) \ \ (j=1,\dots,n_i; \ \ i=1,2)$ be independent random samples from two $p$-variate normal distributions with unknown mean vectors $\mu_i$ and unknown dispersion matrices $\Sigma_i$. The index $i$ refers to the first or second population, and the $j$th observation from the $i$th population is $X_{ij}$.

The multivariate Behrens–Fisher problem is to test the null hypothesis $H_0$ that the means are equal versus the alternative $H_1$ of non-equality:

$H_0 : \mu_1 = \mu_2 \ \ \text{vs} \ \ H_1 : \mu_1 \neq \mu_2.$

Define some statistics, which are used in the various attempts to solve the multivariate Behrens–Fisher problem, by

\begin{align} \bar{X}_i &= \frac{1}{n_i} \sum_{j=1}^{n_i} X_{ij}, \\ A_i &= \sum_{j=1}^{n_i} (X_{ij} - \bar{X}_i)(X_{ij} - \bar{X}_i)', \\ S_i &= \frac{1}{n_i - 1} A_i, \\ \tilde{S}_i &= \frac{1}{n_i}S_i, \\ \tilde{S} &= \tilde{S}_1 + \tilde{S}_2, \quad \text{and} \\ T^2 &= (\bar{X}_1 - \bar{X}_2)'\tilde{S}^{-1}(\bar{X}_1 - \bar{X}_2). \end{align}

The sample means $\bar{X}_i$ and sum-of-squares matrices $A_i$ are sufficient for the multivariate normal parameters $\mu_i, \Sigma_i\ (i=1,2)$, so it suffices to base inference on these statistics alone. The distributions of $\bar{X}_i$ and $A_i$ are independent and are, respectively, multivariate normal and Wishart:[1]

\begin{align} \bar{X}_i &\sim \mathcal{N}_p \left(\mu_i, \Sigma_i/n_i \right), \\ A_i &\sim W_p(\Sigma_i, n_i - 1). \end{align}

Background

In the case where the dispersion matrices are equal, the distribution of the $T^2$ statistic is known to be an F distribution under the null and a noncentral F-distribution under the alternative.[1]

The main problem is that when the true values of the dispersion matrices are unknown, then under the null hypothesis the probability of rejecting $H_0$ via a $T^2$ test depends on those unknown dispersion matrices.[1] In practice, this dependency harms inference when the dispersion matrices are far from each other or when the sample size is not large enough to estimate them accurately.[1]

Now, the mean vectors are independently and normally distributed,

$\bar{X}_i \sim \mathcal{N}_p \left(\mu_i, \Sigma_i/n_i \right),$

but the sum $A_1 + A_2$ does not follow the Wishart distribution,[1] which makes inference more difficult.

Proposed solutions

Proposed solutions are based on a few main strategies:[2][3]

Approaches using the $T^2$ statistic with approximate degrees of freedom

Below, $\mathrm{tr}$ denotes the trace operator.

Yao (1965) (as cited by [6])

$T^2 \sim \frac{\nu p}{\nu-p+1}F_{p,\nu-p+1},$

where

\begin{align} \nu &= \left[ \frac{1}{n_1} \left( \frac{\bar{X}_d'\tilde{S}^{-1}\tilde{S}_1 \tilde{S}^{-1}\bar{X}_d} {\bar{X}_d'\tilde{S}^{-1}\bar{X}_d} \right)^2 + \frac{1}{n_2} \left( \frac{\bar{X}_d'\tilde{S}^{-1}\tilde{S}_2 \tilde{S}^{-1}\bar{X}_d} {\bar{X}_d'\tilde{S}^{-1} \bar{X}_d} \right)^{2} \right]^{-1}, \\ \bar{X}_d &= \bar{X}_1-\bar{X}_2.
\end{align}

Johansen (1980) (as cited by [6])

$T^2 \sim q F_{p,\nu},$

where

\begin{align} q &= p + 2D - \frac{6D}{p(p-1)+2}, \\ \nu &= \frac{p(p+2)}{3D}, \end{align}

and

\begin{align} D = \frac{1}{2}\sum_{i=1}^2 \frac{1}{n_i} \Bigg\{ \ & \mathrm{tr}\left[{\left( I - (\tilde{S}_1^{-1} + \tilde{S}_2^{-1})^{-1} \tilde{S}_i^{-1}\right)}^2\right] \\ & {}+ {\left[\mathrm{tr}\left(I -(\tilde{S}_1^{-1} + \tilde{S}_2^{-1})^{-1} \tilde{S}_i^{-1}\right)\right]}^2 \ \Bigg\}. \end{align}

Nel and Van der Merwe (1986) (as cited by [6])

$T^2 \sim \frac{\nu p}{\nu-p+1}F_{p,\nu-p+1},$

where

$\nu = \frac{ \mathrm{tr}(\tilde{S}^2) + [\mathrm{tr}(\tilde{S})]^2} { \frac{1}{n_1} \left\{ \mathrm{tr}(\tilde{S}_1^2) + [\mathrm{tr}(\tilde{S}_1)]^2\right\} + \frac{1}{n_2} \left\{ \mathrm{tr}(\tilde{S}_2^2) + [\mathrm{tr}(\tilde{S}_2)]^2 \right\} }.$

Kim (1992) proposed a solution that is based on a variant of $T^2$. Although its power is high, the fact that it is not invariant makes it less attractive. Simulation studies by Subramaniam and Subramaniam (1973) show that the size of Yao's test is closer to the nominal level than that of James's. Christensen and Rencher (1997) performed numerical studies comparing several of these testing procedures and concluded that Kim's test and Nel and Van der Merwe's test had the highest power. However, these two procedures are not invariant.

Krishnamoorthy and Yu (2004)

Krishnamoorthy and Yu (2004) proposed a procedure that adjusts the approximate degrees of freedom of Nel and Van der Merwe (1986) for the denominator of $T^2$ under the null distribution, so as to make the test invariant. They show that the approximate degrees of freedom lie in the interval $\left[\min\{n_1,n_2\},\, n_1+n_2\right]$, which ensures that the degrees of freedom cannot be negative. They report numerical studies indicating that their procedure is as powerful as Nel and Van der Merwe's test for smaller dimensions, and more powerful for larger dimensions. Overall, they claim that their procedure is better than the invariant procedures of Yao (1965) and Johansen (1980). Therefore, Krishnamoorthy and Yu's (2004) procedure has the best known size and power as of 2004. The test statistic $T^2$ in Krishnamoorthy and Yu's procedure follows the distribution

$T^2 \sim \frac{\nu p}{\nu-p+1} F_{p,\nu-p+1},$

where

$\nu = \frac{p+p^2} { \frac{1}{n_1}\{\mathrm{tr}[(\tilde{S}_1 \tilde{S}^{-1})^2]+[\mathrm{tr}(\tilde{S}_1 \tilde{S}^{-1})]^2\} + \frac{1}{n_2} \{\mathrm{tr}[(\tilde{S}_2 \tilde{S}^{-1})^2]+[\mathrm{tr}(\tilde{S}_2 \tilde{S}^{-1})]^{2}\} }.$

References

1. Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis (3rd ed.). Hoboken, NJ: Wiley Interscience. p. 259. ISBN 0-471-36091-0.
2. ^ Christensen, W. F.; Rencher, A. C. (1997). "A comparison of Type I error rates and power levels for seven solutions to the multivariate Behrens–Fisher problem". Communications in Statistics – Simulation and Computation 26: 1251–1273. doi:10.1080/03610919708813439.
3. ^ a b Park, Junyong; Sinha, Bimal (2007). Some aspects of the multivariate Behrens–Fisher problem (PDF) (Technical report).
4. ^ Olkin, Ingram; Tomsky, Jack L. (1981). "A New Class of Multivariate Tests Based on the Union-Intersection Principle". Ann. Statist. 9 (4): 792–802. doi:10.1214/aos/1176345519.
5. ^ Gamage, J.; Mathew, T.; Weerahandi, S. (2004). "Generalized p-values and generalized confidence regions for the multivariate Behrens–Fisher problem and MANOVA". Journal of Multivariate Analysis 88: 177–189. doi:10.1016/s0047-259x(03)00065-4.
6. ^ a b c Krishnamoorthy, K.; J.
Yu (2004). "Modified Nel and Van der Merwe test for the multivariate Behrens–Fisher problem". Statistics and Probability Letters 66: 161–169. doi:10.1016/j.spl.2003.10.012.

• Rodríguez-Cortés, F. J. and Nagar, D. K. (2007). Percentage points for testing equality of mean vectors. Journal of the Nigerian Mathematical Society, 26: 85–95.
• Gupta, A. K., Nagar, D. K., Mateu, J. and Rodríguez-Cortés, F. J. (2013). Percentage points of a test statistic useful in MANOVA with structured covariance matrices. Journal of Applied Statistical Science, 20: 29–41.
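As a concrete illustration of the statistics defined above, here is a minimal NumPy/SciPy sketch of the $T^2$ statistic together with Krishnamoorthy and Yu's approximate degrees of freedom. This is my own illustration rather than code from any of the cited papers; the function name and the synthetic data are invented for the example.

```python
import numpy as np
from scipy import stats

def krishnamoorthy_yu_test(X1, X2):
    """T^2 test with the Krishnamoorthy-Yu (2004) approximate df.
    X1, X2: arrays of shape (n_i, p), one observation per row."""
    n1, p = X1.shape
    n2 = X2.shape[0]

    d = X1.mean(axis=0) - X2.mean(axis=0)
    S1t = np.cov(X1, rowvar=False) / n1        # \tilde{S}_1 = S_1 / n_1
    S2t = np.cov(X2, rowvar=False) / n2        # \tilde{S}_2 = S_2 / n_2
    St_inv = np.linalg.inv(S1t + S2t)          # \tilde{S}^{-1}

    T2 = d @ St_inv @ d

    def term(Si_t, ni):
        # (1/n_i) { tr[(S~_i S~^-1)^2] + [tr(S~_i S~^-1)]^2 }
        M = Si_t @ St_inv
        return (np.trace(M @ M) + np.trace(M) ** 2) / ni

    nu = (p + p**2) / (term(S1t, n1) + term(S2t, n2))

    # Under H0, T^2 ~ nu*p/(nu - p + 1) * F_{p, nu - p + 1}.
    F = T2 * (nu - p + 1) / (nu * p)
    return T2, nu, stats.f.sf(F, p, nu - p + 1)   # statistic, df, p-value

# Synthetic example with unequal dispersion matrices.
rng = np.random.default_rng(1)
X1 = rng.normal(size=(25, 3))
X2 = rng.normal(size=(40, 3)) * 2.0
print(krishnamoorthy_yu_test(X1, X2))
```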
# HCF LCM Questions for NMAT PDF

Download HCF LCM Questions for NMAT PDF. Top 10 important HCF and LCM questions for NMAT, based on questions asked in previous exam papers.

Question 1: One-eighth of a number is 17.25. What will 73% of the number be?
a) 100.74 b) 138.00 c) 96.42 d) 82.66 e) None of these

Question 2: 'A', 'B' and 'C' are three consecutive even integers such that four times 'A' is equal to three times 'C'. What is the value of 'B'?
a) 12 b) 10 c) 16 d) 14 e) None of these

Question 3: What is the LCM of the following fractions? 3/11, 2/5, 1/9
a) 0 b) 1 c) 2 d) 3 e) 6

Question 4: What is the LCM of the following fractions? 2/3, 4/7, 1/4
a) 1 b) 2 c) 4 d) 7 e) 8

Question 5: The LCM of two numbers is 360, and their HCF is 15. If one of the numbers is 45, what is the value of the remainder when the other number is divided by 7?
a) 1 b) 2 c) 3 d) 4 e) 0

Question 6: What is the greatest number which divides 1070 and 1265 and leaves remainders 3 and 4 respectively?
a) 91 b) 93 c) 95 d) 97 e) 101

Question 7: The sum of two numbers is 65 and the HCF of the two numbers is 5. If the LCM of the two numbers is 180, what is the sum of the reciprocals of the two numbers?
a) 11/13 b) 11/625 c) 13/625 d) 11/180 e) 13/180

Question 8: The ratio of two numbers is 5:6. The product of the HCF and LCM of the two numbers is 120. What is the HCF of the two numbers?
a) 2 b) 4 c) 6 d) 8 e) 1

Question 9: What is the smallest number which when multiplied by 10 is exactly divisible by 12, 18, 24 and 32?
a) 288 b) 144 c) 1440 d) 552 e) None of the above

Question 10: In a church, the first bell rings at intervals of 2 seconds, the second bell rings at intervals of 4 seconds and so on till the fifth bell, which rings at intervals of 10 seconds. How many times do all the bells ring together in a half-an-hour period?
a) 60 b) 45 c) 120 d) 15 e) 30

Solution 1: Let the number be $8x$. According to the question, $\frac{1}{8} \cdot 8x = 17.25$, so $x = 17.25$. Therefore 73% of the number $= \frac{73}{100} \cdot 8x = 0.73 \cdot 8 \cdot 17.25 = 100.74$.

Solution 2: Let A be $2x$, B be $2x + 2$ and C be $2x + 4$. Given that $4A = 3C$: $4(2x) = 3(2x + 4)$, so $8x = 6x + 12$, giving $x = 6$. Hence $B = 2 \cdot 6 + 2 = 14$.

Solution 3: The LCM of fractions = LCM of the numerators / HCF of the denominators. LCM of 3, 2 and 1 is 6; HCF of 11, 5 and 9 is 1. So the required LCM = 6/1 = 6.

Solution 4: LCM of fractions = LCM of the numerators / HCF of the denominators. LCM of 2, 4 and 1 is 4; HCF of 3, 7 and 4 is 1. So the LCM of the given fractions is 4/1 = 4.

Solution 5: Let the second number be $x$. Since LCM × HCF = product of the two numbers, $360 \cdot 15 = 45x$, so $x = \frac{360 \cdot 15}{45} = 120$. Now 120, on being divided by 7, leaves a remainder of 1.

Solution 6: The required number is the HCF of (1070 – 3, 1265 – 4) = HCF of (1067, 1261). Since 1067 = 11 × 97 and 1261 = 13 × 97, the HCF is 97.

Solution 7: Let the numbers be $ha$ and $hb$, where $h$ is their HCF; the LCM is then $hab$. The sum of reciprocals is $(a+b)/(hab)$. From $h(a+b) = 65$ and $h = 5$ we get $a + b = 13$, and $hab = \text{LCM} = 180$. So the sum of reciprocals is 13/180.

Solution 8: Let the two numbers be $5h$ and $6h$, where $h$ is the HCF. Product of the numbers = product of LCM and HCF, so $30h^2 = 120$ and $h = 2$. The HCF of the two numbers is 2.

Solution 9: The LCM of 12, 18, 24 and 32 is 288. We need the smallest $k$ such that $10k$ is a multiple of 288; since HCF(288, 10) = 2, $k = 288/2 = 144$.

Solution 10: The bells ring together every LCM(2, 4, 6, 8, 10) = 120 seconds. In half an hour (1800 seconds) they ring together $1800/120 = 15$ times.
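Since all of these solutions reduce to gcd/lcm arithmetic, several of the answers can be sanity-checked mechanically. A small Python script (my own addition, using only the standard library) verifying Questions 5, 6, 9 and 10:

```python
from math import gcd

def lcm(*xs):
    # lcm of several integers, folded pairwise via lcm(a, b) = a*b // gcd(a, b)
    out = 1
    for x in xs:
        out = out * x // gcd(out, x)
    return out

# Q5: LCM 360, HCF 15, one number 45 -> other number is 360*15/45 = 120
other = 360 * 15 // 45
assert other == 120 and other % 7 == 1          # remainder 1

# Q6: greatest number dividing 1070 and 1265 with remainders 3 and 4
assert gcd(1070 - 3, 1265 - 4) == 97

# Q9: smallest k such that 10*k is divisible by 12, 18, 24 and 32
L = lcm(12, 18, 24, 32)                          # 288
k = L // gcd(L, 10)                              # 144
assert 10 * k % L == 0 and k == 144

# Q10: bells at 2, 4, 6, 8, 10 s coincide every lcm = 120 s
assert 1800 // lcm(2, 4, 6, 8, 10) == 15         # 15 times in half an hour
```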
# How to graph tides using sine

Sinusoidal functions: this is a sine graph with a leading negative; the sine function "starts" at the midline, but the leading negative reflects the graph. The tides at a beach follow the same kind of curve. More about how to graph tides using sine curves.

The graph of g(x) is the graph of f(x) compressed; a sine function models the initial behavior of the first high and low tides of the day for a certain beach.

Section 5-7, graphing y = k + a sin x: the graph of y = a sin x can be obtained from the graph of y = sin x by multiplying by a, and the sine curve is stretched or compressed by changing the period of the function.

In mathematics, a periodic function is a function that repeats its values in regular intervals or periods; a graph of the sine function shows two complete periods.

Word problems, modeling with sinusoids II: a graph is shown below using -80,000; this is an approximation, since the actual time between high and low tides varies.

The purpose of using the sine curve is to show whether the market is in a cycle mode or a trend mode. The cycle mode occurs when the sine plot is used to track whether the outcome of the market follows a "sine-like" curve or has a consistent pattern.

"Is it too much to ask of you to help me work it out and graph it, using the equation we find?" – Megan Wong, Sep 2 '13 at 19:25. If (and I am not sure whether the "if" is correct) it were an exact sine wave, then your high should happen at $\pi/2$ and low at $-\pi/2$.

Express sine in terms of cotangent. You can use the graph of a trigonometric function to show average daily temperatures at a specific location.

Unit 6: Sinusoidal data. Applied Math 30, page 134. Copyright Gabriel Tang B.Ed., B.Sc.

On a particular Labor Day, the high tide in southern California occurs at 7:12 am. At that time you measure the water at the end of Santa Monica pier to be 11 ft deep.

Use your knowledge of trigonometric functions and ratios to solve word problems dealing with tides and water depth. Sine graph: y = sin x.

The high tides and low tides follow a periodic pattern that you can model with the sine function. For example, on a particular winter day, the high tide in Boston, Massachusetts, occurred at midnight.

Graphs of the circular functions: periodic functions, the graph of the sine function, tides, and hours of daylight.

Modeling tides with trigonometry: $y = a\cos(b(t - c)) + d$. I'll explain later why I used cosine instead of sine; here, a is the amplitude from the middle line to the peak, and b sets the period.

Since the period of the sine function is $2\pi$, and the period of the temperature data is 52 weeks, we set $b = 2\pi/52$. Now we plot the function $31\sin(2\pi x/52) + 42$ on the original graph. We have accomplished the task.

## How to graph tides using sine

Tides and temperature are periodic, so we can model these scenarios using trigonometric functions.

Assignment 1: Exploring sine curves, by Kristina Dunbar, UGA. In this assignment, we will be investigating the graph of the equation y = a sin(bx + c).

Posts about real-life applications written by Ollie James: the sine graph is created by plotting the angle of the radius; you can also map tides using a sine curve.

Making a best sine-curve fit to a set of sparse data from observation of the tides in the Bay of Fundy. Mathematics content: curve fitting, simple statistics, least-squares optimization.
• Basic trigonometric functions are explained in this module and applied to describe wave behavior. The module presents Cartesian coordinate (x, y) graphing, and shows how the sine function is used to plot a wave on a graph.
• Application problems, number one: if we look to see where the sine graph starts, we see that it was moved to the right by 15. If you look for a cosine graph, you see the same curve with a shifted starting point.

The sine and cosine functions can be used to model fluctuations in temperature data throughout the year. Graph this data to model a given situation.

The graph over a two-year period is shown below. Modelling using sine: the average time difference between high tides is about 12.4 hours; find a sine function to fit.

First of all, the graph is no longer a sine curve, but there's definitely a pattern to it. Moreover, the pattern repeats, so this is still a periodic function. Whenever you see an oscilloscope, for example when you play music using certain programs on a computer, you're really seeing a whole bunch of sine waves added together.
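The fragments above all describe the same model: water depth as a sinusoid of time with amplitude a, mean level d, and period equal to the average time between successive high tides (about 12.4 hours). Here is a minimal Python sketch of that model; the mean depth of 8 ft and amplitude of 3 ft are assumptions chosen only so the curve reproduces the 11 ft high-tide reading at 7:12 am quoted earlier:

```python
import math

# Assumed values: mean depth d = 8 ft, amplitude a = 3 ft, period 12.4 h,
# high tide at t0 = 7.2 h (7:12 am), so that depth(7.2) = 11 ft.
a, d, period, t0 = 3.0, 8.0, 12.4, 7.2
b = 2 * math.pi / period          # b follows from the period

def depth(t):
    # cosine rather than sine, so the curve peaks exactly at t = t0
    return a * math.cos(b * (t - t0)) + d

for t in (7.2, 10.3, 13.4):       # high tide, mid-fall, low tide
    print(f"t = {t:4.1f} h   depth = {depth(t):4.1f} ft")
```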
Use of the sample standard deviation implies that these 14 fulmars are a sample from a larger population of fulmars. If these 14 fulmars comprised the whole population (perhaps the last 14 surviving fulmars), then instead of the sample standard deviation, the calculation would use the population standard deviation. In the population standard deviation formula, the denominator is N rather than N − 1. Measurements can rarely be taken for an entire population, so by default statistical computer programs calculate the sample standard deviation, and journal articles report the sample standard deviation unless otherwise specified.

Population standard deviation of the marks of eight students

Suppose that the entire population of interest was eight students in a particular class. For a finite set of numbers, the population standard deviation is found by taking the square root of the average of the squared deviations of the values from their mean. The marks of a class of eight students (that is, a statistical population) are the following eight values:

$2,\ 4,\ 4,\ 4,\ 5,\ 5,\ 7,\ 9.$

These eight data points have mean (average) 5:

$\mu = \frac{2+4+4+4+5+5+7+9}{8} = 5.$

First, calculate the deviation of each data point from the mean, and square the result of each:

$\begin{array}{ll} (2-5)^2 = (-3)^2 = 9 & (5-5)^2 = 0^2 = 0 \\ (4-5)^2 = (-1)^2 = 1 & (5-5)^2 = 0^2 = 0 \\ (4-5)^2 = (-1)^2 = 1 & (7-5)^2 = 2^2 = 4 \\ (4-5)^2 = (-1)^2 = 1 & (9-5)^2 = 4^2 = 16. \end{array}$

The variance is the mean of these values:

$\sigma^2 = \frac{9+1+1+1+0+0+4+16}{8} = 4,$

and the population standard deviation is the square root of the variance:

$\sigma = \sqrt{4} = 2.$

This formula is valid only if the eight values we started with form the complete population. If they were instead a random sample drawn from some large parent population (for example, eight students randomly and independently chosen from a class of 2 million), then one divides by 7 (which is n − 1) rather than 8 (which is n) in the denominator of the last formula.
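The same computation, in both conventions, can be done with Python's standard statistics module (a quick illustration of the example above; pstdev divides by n, stdev by n − 1):

```python
import statistics

marks = [2, 4, 4, 4, 5, 5, 7, 9]

print(statistics.mean(marks))     # 5       the mean mu
print(statistics.pstdev(marks))   # 2.0     population SD: divide by N = 8
print(statistics.stdev(marks))    # ~2.14   sample SD: divide by N - 1 = 7
```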
# Agnostic cosmology in the CAMEL framework

Abstract: Cosmological parameter estimation is traditionally performed in the Bayesian context. By adopting an "agnostic" statistical point of view, we show the interest of confronting the Bayesian results with a frequentist approach based on profile likelihoods. For this purpose, we have developed the Cosmological Analysis with a Minuit Exploration of the Likelihood ("CAMEL") software. Written from scratch in pure C++, emphasis was put on building a clean and carefully designed project where new data and/or cosmological computations can be easily included. CAMEL incorporates the latest cosmological likelihoods and gives access from the very same input file to several estimation methods: (i) a high-quality Maximum Likelihood Estimate (a.k.a. "best fit") using MINUIT; (ii) profile likelihoods; (iii) a new implementation of an Adaptive Metropolis MCMC algorithm that relieves the burden of reconstructing the proposal distribution. We present these various statistical techniques and roll out a full use case that can then be used as a tutorial. We revisit the $\Lambda$CDM parameter determination with the latest Planck data and give results with both methodologies. Furthermore, by comparing the Bayesian and frequentist approaches, we discuss a "likelihood volume effect" that affects the optical reionization depth when analysing the high-multipole part of the Planck data. The software, used in several Planck data analyses, is available from http://camel.in2p3.fr. Using it does not require advanced C++ skills.

Document type: preprint. LAL 16-183. Typeset in Authorea; online version available at https://www.authorea.com/users/90225/articles/1.. (2016).

Identifiers: HAL Id: in2p3-01344478, version 1; arXiv: 1607.02964

Citation: S. Henrot-Versillé, O. Perdereau, Stéphane Plaszczynski, B. Rouillé d'Orfeuil, M. Spinelli, et al. Agnostic cosmology in the CAMEL framework. 2016. ⟨in2p3-01344478⟩
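For readers unfamiliar with the third estimation method named in the abstract, here is a minimal sketch of a Haario-style Adaptive Metropolis sampler, in which the Gaussian proposal covariance is re-estimated from the chain's own history. This is a generic illustration of the idea, not CAMEL's actual implementation; the function name, tuning constants and toy target are all invented:

```python
import numpy as np

def adaptive_metropolis(log_like, x0, n_iter=20_000, adapt_start=1_000):
    """Adaptive Metropolis: after a burn-in, the proposal covariance is
    estimated from the samples drawn so far, so no hand-tuned proposal
    distribution is needed."""
    rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    d = x.size
    scale = 2.38**2 / d                         # classical Haario scaling
    cov = np.eye(d)                             # initial proposal covariance
    chain = np.empty((n_iter, d))
    lx = log_like(x)
    for i in range(n_iter):
        if i >= adapt_start and i % 100 == 0:   # refresh the estimate periodically
            cov = np.cov(chain[:i], rowvar=False) + 1e-8 * np.eye(d)
        prop = rng.multivariate_normal(x, scale * cov)
        lp = log_like(prop)
        if np.log(rng.random()) < lp - lx:      # Metropolis accept/reject
            x, lx = prop, lp
        chain[i] = x
    return chain

# Toy target: a correlated 2-d Gaussian log-likelihood.
prec = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
samples = adaptive_metropolis(lambda t: -0.5 * t @ prec @ t, np.zeros(2))
print(samples[1000:].mean(axis=0))
print(np.cov(samples[1000:], rowvar=False))
```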
# Working capital and current ratio

On June 30, 2008, Thorpe Company's total current assets were $250,000 and its total current liabilities were $125,000. On July 1, 2008, Thorpe issued a long-term note to a bank for $25,000 cash.

Required
a. Compute Thorpe's working capital before and after issuing the note.
b. Compute Thorpe's current ratio before and after issuing the note.
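A quick worked computation of both requirements (my own addition, not part of the original posting): issuing a long-term note adds $25,000 of cash to current assets, while the note itself is a long-term liability, so current liabilities are unchanged.

```python
ca, cl = 250_000, 125_000      # June 30, 2008: current assets / current liabilities

# July 1, 2008: $25,000 cash received; the note is long-term,
# so current liabilities do not change.
ca2, cl2 = ca + 25_000, cl

print("working capital before:", ca - cl)      # 125000
print("working capital after: ", ca2 - cl2)    # 150000
print("current ratio before:  ", ca / cl)      # 2.0
print("current ratio after:   ", ca2 / cl2)    # 2.2
```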
# The Sinister Sine Function

• January 26th 2009, 12:28 AM

The Sinister Sine Function

Hello, I have to complete an assignment with the help of TI-Interactive (which I'm not very good at using!). I know that the general formula for the sine function is f(x) = a*sin(bx + c) + d, where
A = amplitude
D = baseline (vertical shift)
C = horizontal shift along the x-axis
But what does B stand for? And how can I find it?
Thanks, Amine :)

• January 26th 2009, 01:43 AM

mr fantastic

Quote:

Hello, I have to complete an assignment with the help of TI-Interactive (which I'm not very good at using!). I know that the general formula for the sine function is f(x) = a*sin(bx + c) + d, where A = amplitude, D = baseline (vertical shift), C = horizontal shift along the x-axis. But what does B stand for? And how can I find it? Thanks, Amine :)

It's related to the period of the function. If you're working in degrees, period $= \frac{360}{b}$. If you're working in radians, period $= \frac{2 \pi}{b}$.
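A quick numeric illustration of the reply (my own addition; the period and parameter values below are arbitrary choices): pick the period you want, compute b from it, and check that the function really repeats.

```python
import math

# f(x) = a*sin(b*x + c) + d, working in radians.
period = 12.0                     # choose any desired period...
b = 2 * math.pi / period          # ...then b follows from it

a, c, d = 3.0, 0.0, 5.0
f = lambda x: a * math.sin(b * x + c) + d

assert abs(f(0.0) - f(period)) < 1e-9   # the curve repeats every `period` units
```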
# DIS 2015 - XXIII. International Workshop on Deep-Inelastic Scattering and Related Subjects

27 April 2015 to 1 May 2015, US/Central timezone

## Impact of global direct photon data on the gluon distribution function

WG1 Structure Functions and Parton Densities

### Speaker

Nobuo Sato (JLab)

### Description

In this talk we will discuss the phenomenology of direct photon production using theoretical predictions at next-to-leading order with threshold resummation up to next-to-leading-logarithmic accuracy. By analyzing the global data sets of direct photons, we found good agreement between the theory and the data for a wide range of energies ($\sqrt{s}=23$ GeV up to $7$ TeV) if we exclude the data from the E706 experiment. We have studied the potential impact of direct photon data on parton distribution functions using a Bayesian reweighting approach, finding a reduction of about $10\%$ in the gluon uncertainty around $x \sim 0.3$.

### Co-author

Joseph Owens (Florida State University)
#StackBounty: #math-mode #polynom #polynomials How to visualize (polynomial) long division with infinite fractions only for the first n…

Bounty: 50

I have the fraction `(1+2z^{-1})/(1+2z^{-1}+4z^{-3})`, which I want to calculate and visualize with the `polynom` package. However, if I execute the following code:

```latex
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{polynom}

\title{st-7}
\author{abc}
\date{June 2019}

\begin{document}
\maketitle

\section{Polynomial Long Division}
\polyset{style=C, div=:,vars=z}
\polylongdiv{1+2z^{-1}}{1+2z^{-1}+4z^{-3}}

\end{document}
```

I will only get the exact solution:

How can I output the first n terms, like `1-4z^{-3}+8z^{-4}-16z^{-5}+...` (in this case), with the intermediate steps that are made in order to obtain these terms? (If this doesn't work with the `polynom` package, I would be glad if you could mention packages which will do the job!)
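One possible lead, offered as an untested assumption rather than a verified answer: the `polynom` documentation describes a `stage` key for `\polylongdiv` that renders the computation only up to a given step, which might emulate a truncated expansion of an otherwise endless division. Both the key and its exact behaviour here should be checked against the package manual:

```latex
% Assumed interface: stage=<n> stops the displayed long division after n steps.
\polyset{style=C, div=:, vars=z}
\polylongdiv[stage=8]{1+2z^{-1}}{1+2z^{-1}+4z^{-3}}
```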
# Affix of a complex number

Let $z=a+ib$ be a complex number in its algebraic form. Its affix is the point in the complex plane corresponding to this number, i.e. the point with Cartesian coordinates $(a,b)$. The affix is sometimes identified with the complex number itself.
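For example (a small illustration of the definition, in Python):

```python
z = 3 + 4j                     # the complex number z = 3 + 4i
affix = (z.real, z.imag)       # its affix: the point (3, 4) in the plane
print(affix)                   # (3.0, 4.0)
```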
# How do you find the value of 16^(1/4)? May 16, 2018 2 #### Explanation: ${16}^{\frac{1}{4}} =$ $\sqrt[4]{16} =$ $\sqrt[4]{2 \cdot 2 \cdot 2 \cdot 2} =$ $2$
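The same check in Python (not part of the original answer):

```python
print(16 ** (1 / 4))   # 2.0, the fourth root of 16, since 2*2*2*2 == 16
```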
# Find the locus of the middle points of the chords of the parabola ${y^2} = 4x$ which touch the parabola ${y^2} + 4bx = 0$; $\left( {b > 0} \right)$

Hint: Use the slope-point form $\left( {y - y_1} \right) = m\left( {x - x_1} \right)$ to find the equation of the tangent and the chord of contact. Then use the condition that the discriminant is zero at the point of tangency to find the required locus.

The given equation of the parabola is ${y^2} = 4x$ …(1)

Comparing with ${y^2} = 4ax$, we find that $a = 1$.

Let P and Q be points on the parabola, so that PQ is a chord of ${y^2} = 4x$, and let $M\left( {h,k} \right)$ be the midpoint of the chord PQ.

Evaluating ${y^2} - 4ax$ at $M\left( {h,k} \right)$ gives the quantity

$S_1 = {k^2} - 4ah$ …(2)

The slope-point form of a line through $\left( {x_1,y_1} \right)$ with slope $m$ is $\left( {y - y_1} \right) = m\left( {x - x_1} \right)$.

We get the slope by differentiating ${y^2} = 4ax$ with respect to $x$:

$2y\dfrac{{dy}}{{dx}} = 4a$, so $\dfrac{{dy}}{{dx}} = \dfrac{{2a}}{y}$ and $m = \dfrac{{2a}}{{y_1}}$.

The equation of the tangent to the parabola ${y^2} = 4ax$ at any point $A\left( {x_1,y_1} \right)$ is therefore

$y - y_1 = \dfrac{{2a}}{{y_1}}\left( {x - x_1} \right)$, i.e. $yy_1 - y_1^2 = 2ax - 2ax_1$ …(3)

Since $A\left( {x_1,y_1} \right)$ lies on ${y^2} = 4ax$,

$y_1^2 = 4ax_1$ …(4)

Substituting (4) in (3):

$yy_1 - 4ax_1 = 2ax - 2ax_1$, so $yy_1 = 2a\left( {x + x_1} \right)$, i.e. $yy_1 - 2a\left( {x + x_1} \right) = 0$ …(5)

This is the form $T = 0$; at $M\left( {h,k} \right)$ it reads

$ky - 2a\left( {x + h} \right) = 0$ …(6)

The chord of the parabola with midpoint $M\left( {h,k} \right)$ has the equation $T = S_1$, i.e. the left-hand sides of (6) and (2) are equal:

$ky - 2ax - 2ah = {k^2} - 4ah$

Putting $a = 1$:

$ky = 2x + {k^2} - 2h$, so $y = \dfrac{{2x}}{k} + k - \dfrac{{2h}}{k}$ …(7)

The chord touches the parabola

${y^2} + 4bx = 0$ …(8)

Putting (7) in (8) and multiplying through by ${k^2}$:

${\left( {2x + {k^2} - 2h} \right)^2} + 4b{k^2}x = 0$

$4{x^2} + 4x\left( {{k^2} - 2h} \right) + {\left( {{k^2} - 2h} \right)^2} + 4b{k^2}x = 0$

$4{x^2} + 4x\left( {{k^2} - 2h + b{k^2}} \right) + {\left( {{k^2} - 2h} \right)^2} = 0$ …(9)

Equation (9) is of the form $A{x^2} + Bx + C = 0$, with

$A = 4,\quad B = 4\left( {{k^2} - 2h + b{k^2}} \right),\quad C = {\left( {{k^2} - 2h} \right)^2}$

The condition for tangency is that the discriminant ${B^2} - 4AC = 0$. Since the locus of the middle points of the chords of ${y^2} = 4x$ which touch ${y^2} + 4bx = 0$ is required, we use this condition for tangency.
${B^2} = 4AC$ gives

$16{\left( {{k^2} - 2h + b{k^2}} \right)^2} = 16{\left( {{k^2} - 2h} \right)^2}$

${\left[ {{k^2}\left( {1 + b} \right) - 2h} \right]^2} - {\left( {{k^2} - 2h} \right)^2} = 0$

Factorising as a difference of squares:

$\left[ {{k^2}\left( {1 + b} \right) - 2h + {k^2} - 2h} \right]\left[ {{k^2}\left( {1 + b} \right) - 2h - {k^2} + 2h} \right] = 0$

$\left[ {\left( {2 + b} \right){k^2} - 4h} \right] \cdot b{k^2} = 0$

Since $b > 0$ and $k \ne 0$ for a genuine chord, this gives

$\left( {2 + b} \right){k^2} = 4h$ …(10)

Replacing the point $\left( {h,k} \right)$ by $\left( {x,y} \right)$ in equation (10), we get

$\left( {2 + b} \right){y^2} = 4x$ as the required locus.

Note: The equation of the chord is found for the first parabola in terms of its midpoint $\left( {h,k} \right)$, and since this chord touches the other parabola, substituting one equation into the other gives a quadratic in $x$. Applying the condition for tangency ($D = 0$) to that quadratic yields the required locus, because the chord meets the other parabola in exactly one point.
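A quick numeric check of the locus (my own addition, using only the standard library): for slope $m$, the line $y = mx - b/m$ is tangent to ${y^2} = -4bx$; cutting ${y^2} = 4x$ with it and taking the chord's midpoint $(h, k)$ should satisfy $(2 + b){k^2} = 4h$.

```python
import math

for b in (0.5, 1.0, 3.0):
    for m in (0.5, 1.0, 2.0):
        # (m*x - b/m)^2 = 4x  =>  m^2 x^2 - (2b + 4) x + b^2/m^2 = 0
        A, B, C = m**2, -(2*b + 4), b**2 / m**2
        root = math.sqrt(B*B - 4*A*C)
        x1, x2 = (-B + root) / (2*A), (-B - root) / (2*A)
        h = (x1 + x2) / 2
        k = m * h - b / m            # the midpoint lies on the tangent line
        assert abs((2 + b) * k**2 - 4 * h) < 1e-9
print("locus (2 + b) y^2 = 4x verified")
```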
# Chapter 1 - Section 1.2 - Symbols and Sets of Numbers - Exercise Set: 53

Every rational number is also an integer.

False

#### Work Step by Step

Rational numbers include fractions such as $\frac{1}{2}$ that are not integers, since an integer has no fractional part. Hence the statement is false.
Sapphire

Sapphire is a single-crystal aluminum oxide (Al2O3). It is one of the hardest materials. Sapphire has good transmission characteristics over the visible and near-IR spectrum. It exhibits high mechanical strength, chemical resistance, thermal conductivity and thermal stability. It is often used as a window material in fields such as space technology, where scratch or high-temperature resistance is required.

Physical Properties

| Property | Value |
| --- | --- |
| Crystal symmetry | Hexagonal system |
| Lattice constants (Å) | a = 4.75; c = 12.97 |
| Transparency range (μm) | 0.18–4.5 |
| Density (g/cm³) | 3.98 |
| Mohs hardness | 9 |
| Melting point (°C) | 2030 |
| Thermal conductivity (W/m/K) | 0.04 |
| Expansion coefficient (10⁻⁶/K) | 8.4 |
| Refractive index (at 1.0 μm) | 1.755 |
### Model Checking Markov Chains Against Unambiguous Automata: The Qualitative Case This is my first blog post. The purpose of this blog is to describe some neat ideas from my research. Ideally only one idea per post, which explains the title of this blog. Today I start with a classical topic from probabilistic verification: model checking Markov chains against $\omega$-regular specifications. I will focus on specifications given by Büchi automata, which are automata like this one: This Büchi automaton accepts all infinite words over $\{a,b\}$ that start with $a$. The automaton is nondeterministic. The automaton is also unambiguous, which means that every infinite word has at most one accepting run. A Markov chain, on the other hand, generates infinite words. A Markov chain looks like this: For instance, with probability $\frac12 \cdot \frac12 \cdot 1 = \frac14$ it generates an infinite word that starts with $a b a$. For simplicity, we consider only a very simple Markov chain in this post: This Markov chain generates an infinite word over $\{a,b\}$ uniformly at random. Having fixed this random word generator, we consider the following problem: Given a nondeterministic Büchi automaton. Is the probability that it accepts a random (infinite) word positive? Let's make two further assumptions: The given automaton is strongly connected (every state is reachable from every other state) and all states are accepting: These assumptions don't take much away from the key challenges. In fact, the problem is PSPACE-complete, with or without all mentioned assumptions. The only thing that will make the problem easier is if the given automaton is unambiguous. And that is what this post is about. For perspective, let's first solve the problem for possibly ambiguous automata. We determinize the automaton (starting from state 1) with the standard subset construction: Remembering that the letters have probability 1/2 each, we see a Markov chain shine through: That Markov chain, call it $\mathcal{M}_{\mathit{det}}$, may have two kinds of bottom SCCs: a rejecting one, which is the one with the empty set $\emptyset$, and accepting ones, which are the other bottom SCCs. In the example there is one rejecting and one accepting bottom SCC. The probability that the given automaton accepts a random word is equal to the probability that $\mathcal{M}_{\mathit{det}}$ reaches an accepting bottom SCC, here $\frac12$. The probability of accepting a random word is positive if and only if $\, \mathcal{M}_{\mathit{det}}$ has an accepting bottom SCC. The determinized automaton is of exponential size, so the proposition does not directly lead to a polynomial-time algorithm. In the unambiguous case we can do better: there is a polynomial-time algorithm. Consider again our unambiguous automaton and its transition matrices \begin{aligned} T(a) &= \begin{pmatrix} 1 & 0 & 1 \\ 1 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} \\[1em] T(b) &= \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 0 \end{pmatrix} \end{aligned} and the transition matrix of an "average" letter $T \ = \ \frac12 (T(a) + T(b)) \ = \ \begin{pmatrix} \frac12 & 0 & \frac12 \\ \frac12 & \frac12 & \frac12 \\ 0 & \frac12 & 0 \end{pmatrix}\,.$ More precisely, the $(i,j)$-entry of $T$ is the probability that a random letter labels a transition from state $i$ to state $j$. This generalizes smoothly to random finite words: The $(i,j)$-entry of $\ T^n$ is the probability that a random word of length $n$ labels a path from state $i$ to state $j$. 
For this property, unambiguousness of the automaton is crucial: any word labels at most 1 path from $i$ to $j$. Let's analyze the limit behaviour of the matrix powers $T, T^2, T^3, \ldots$ By the proposition, all entries of $T^n$ are at most 1. There are two cases:
• $T, T^2, T^3, \ldots$ converges to the zero matrix.
• $T, T^2, T^3, \ldots$ does not converge to the zero matrix.
By strong connectedness, either all or no entries of $T^n$ converge to 0 for $n \to \infty$.
• If the Markov chain $\mathcal{M}_{\mathit{det}}$ does not have an accepting bottom SCC then an infinite random word will almost surely lead to the rejecting bottom SCC. So the probability that a random infinite word labels an automaton path from a start state is zero. By the proposition, we are in the first case.
• If the Markov chain $\mathcal{M}_{\mathit{det}}$ has an accepting bottom SCC then some finite word $w$ (which has positive probability) leads to this accepting bottom SCC, and all infinite extensions of $w$ label infinite paths of the automaton. So the probability that a random infinite word labels an automaton path is positive. By the proposition, we are in the second case.
It follows that the probability of accepting a random word is positive if and only if $T, T^2, T^3, \ldots$ does not converge to the zero matrix. Since all entries of $T^n$ are at most $1$, standard matrix theory says that the spectral radius of $T$ (i.e., the largest absolute value of the eigenvalues of $T$) is at most $1$. We also have that $T, T^2, T^3, \ldots$ converges to the zero matrix if and only if the spectral radius of $T$ is less than $1$. So:

The probability of accepting a random word is positive if and only if the spectral radius of $T$ is $1$.

The spectral radius of $T$ is $1$ if and only if $T$ has an eigenvector with eigenvalue $1$, i.e., a nonzero vector $x$ with $T x = x$. This amounts to checking the satisfiability of a linear system of equations. In our example we have:

$T \begin{pmatrix}1 \\ 2 \\ 1 \end{pmatrix} \ = \ \begin{pmatrix} \frac12 & 0 & \frac12 \\ \frac12 & \frac12 & \frac12 \\ 0 & \frac12 & 0 \end{pmatrix} \begin{pmatrix}1 \\ 2 \\ 1 \end{pmatrix} \ = \ \begin{pmatrix}1 \\ 2 \\ 1 \end{pmatrix}$

We get:

Given an unambiguous Büchi automaton, one can decide in polynomial time whether the probability that it accepts a random word is positive.

A Markov chain may also be input:

Given a Markov chain and an unambiguous Büchi automaton, one can decide in polynomial time whether the probability is positive that the automaton accepts a word randomly generated by the Markov chain.

This result may be sharpened by replacing "polynomial time" by the complexity class "NC", which captures problems that can be efficiently solved in parallel. This combines nicely with a translation from LTL to unambiguous Büchi automata: the translation is exponential but can be computed by a PSPACE-transducer. Altogether, we obtain a PSPACE algorithm for model checking Markov chains against LTL. The problem is PSPACE-complete, so this automata-theoretic algorithm is optimal.

In a forthcoming post I'll show that in addition to comparing the probability with zero one can compute the probability equally efficiently.

The material from this post is from a CAV'16 paper by Christel Baier, Joachim Klein, Sascha Klüppelholz, David Müller, James Worrell, and myself.
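As a small postscript (my own check, not from the CAV'16 paper), the worked example can be verified numerically; the matrices below are the $T(a)$, $T(b)$ and $T$ displayed earlier in the post:

```python
import numpy as np

# Transition matrices of the unambiguous automaton from the post.
Ta = np.array([[1., 0., 1.],
               [1., 0., 1.],
               [0., 0., 0.]])
Tb = np.array([[0., 0., 0.],
               [0., 1., 0.],
               [0., 1., 0.]])
T = (Ta + Tb) / 2                      # transition matrix of an "average" letter

# Spectral radius: exactly 1 when the acceptance probability is positive.
print(max(abs(np.linalg.eigvals(T))))  # 1.0

# An eigenvector for eigenvalue 1, normalized to match (1, 2, 1).
w, V = np.linalg.eig(T)
x = V[:, np.argmin(abs(w - 1))].real
print(x / x[0])                         # [1. 2. 1.]
```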
# Two-photon absorption

Two-photon absorption (TPA) is the simultaneous absorption of two photons of identical or different frequencies in order to excite a molecule from one state (usually the ground state) to a higher-energy electronic state. The energy difference between the involved lower and upper states of the molecule is equal to the sum of the energies of the two photons. Two-photon absorption is many orders of magnitude weaker than linear absorption and is therefore not an everyday phenomenon. It differs from linear absorption in that the strength of absorption depends on the square of the light intensity; it is thus a nonlinear optical process.

Background

The phenomenon was originally predicted by Maria Göppert-Mayer in 1931 in her doctoral dissertation [Göppert-Mayer M, "Über Elementarakte mit zwei Quantensprüngen", Ann. Phys. 1931, 9, 273–95, doi:10.1002/andp.19314010303]. Thirty years later, the invention of the laser permitted the first experimental verification of TPA, when two-photon-excited fluorescence was detected in a europium-doped crystal ["Two-Photon Excitation in CaF2:Eu2+", W. Kaiser and C. G. B. Garrett, Physical Review Letters 1961, 7, 229–232]; it was then observed in a vapor (cesium) in 1962 by Isaac Abella [I. D. Abella (1962), "Optical Double-Quantum Absorption in Cesium Vapor", Physical Review Letters, 9, 453].

TPA is a third-order nonlinear optical process. In particular, the imaginary part of the third-order nonlinear susceptibility is related to the extent of TPA in a given molecule. The selection rules for TPA are therefore different from those for one-photon absorption (OPA), which depends on the first-order susceptibility. For example, in a centrosymmetric molecule, one- and two-photon allowed transitions are mutually exclusive. In quantum mechanical terms, this difference results from the need to conserve angular momentum. Since photons carry an angular momentum of ±1, one-photon absorption requires the excitation to involve an electron changing its molecular orbital to one with angular momentum different by ±1; two-photon absorption requires a change of +2, 0, or −2.

The third order can be rationalized by considering that a second-order process creates a polarization at the doubled frequency. In the third order, the original frequency can be generated again by difference-frequency generation. Depending on the phase between the generated polarization and the original electric field, this leads either to the Kerr effect or to two-photon absorption. In second-harmonic generation this difference-frequency generation is a separate process in a cascade, so that the energy at the fundamental frequency can also be absorbed; this may be better called three-photon absorption. In the next paragraph, resonant two-photon absorption via separate one-photon transitions is mentioned; there, the absorption alone is a first-order process, and any fluorescence from the final state of the second transition will be of second order, meaning it rises as the square of the incoming intensity.

The virtual-state argument is quite orthogonal to the anharmonic-oscillator argument. It states, for example, that in a semiconductor absorption at high energies is impossible if two photons cannot bridge the band gap. So many materials can be used for the Kerr effect that do not show any absorption and thus have a high damage threshold.
Two-photon absorption can be measured by several techniques, two of which are two-photon excited fluorescence (TPEF) and nonlinear transmission (NLT). Pulsed lasers are most often used, because TPA is a third-order nonlinear optical process and is therefore most efficient at very high intensities.

Phenomenologically, this can be thought of as the third term in a conventional anharmonic-oscillator model for depicting the vibrational behavior of molecules. Another view is to think of light as photons. In nonresonant TPA two photons combine to bridge an energy gap larger than the energy of each photon individually. If there were an intermediate state in the gap, this could happen via two separate one-photon transitions in a process described as "resonant TPA", "sequential TPA", or "1+1 absorption". In nonresonant TPA the transition occurs without the presence of an intermediate state. This can be viewed as being due to a "virtual" state created by the interaction of the photons with the molecule.

The "nonlinear" in the description of this process means that the strength of the interaction increases faster than linearly with the electric field of the light. In fact, under ideal conditions the rate of TPA is proportional to the square of the field intensity. This dependence can be derived quantum mechanically, but is intuitively obvious when one considers that two photons must coincide in time and space. This requirement for high light intensity means that lasers are required to study TPA phenomena. Further, in order to understand the TPA spectrum, monochromatic light is also desired, so that the TPA cross section can be measured at different wavelengths. Hence, tunable pulsed lasers (such as frequency-doubled Nd:YAG-pumped OPOs and OPAs) are the excitation sources of choice.

Measurements

Absorption rate

Beer's law for OPA,

$I(x) = I_0 e^{-\alpha c x},$

changes for TPA to

$I(x) = \frac{I_0}{1 + \beta c x I_0},$

with light intensity $I$ as a function of path length $x$, concentration $c$, and initial light intensity $I_0$. The absorption coefficient $\alpha$ is replaced by the TPA cross section $\beta$. (Note that there is some confusion over the term $\beta$ in nonlinear optics, since it is sometimes used to describe the second-order polarizability, and occasionally for the molecular two-photon cross-section. More often, however, it is used to describe the bulk two-photon optical density of a sample. The letter $\delta$ or $\sigma$ is more often used to denote the molecular two-photon cross-section.)

Units of cross-section

The molecular two-photon cross-section is usually quoted in units of GM (after its discoverer, Göppert-Mayer), where 1 GM is $10^{-50}\ \mathrm{cm^4\,s\,photon^{-1}\,molecule^{-1}}$. Considering the reason for these units, one can see that they result from the product of two areas (one for each photon, each in cm²) and a time (within which the two photons must arrive to be able to act together). The large scaling factor is introduced so that two-photon absorption cross-sections of common dyes have convenient values.

Development of the field and potential applications

Until the early 1980s, TPA was used as a spectroscopic tool. Scientists compared the OPA and TPA spectra of different organic molecules and obtained several fundamental structure–property relationships. However, in the late 1980s, applications began to be developed. Peter Rentzepis suggested applications in 3D optical data storage. Watt Webb suggested microscopy and imaging.
Other applications, such as 3D microfabrication and optical power limiting, were also suggested.

Microfabrication and lithography

One of the most distinguishing features of TPA is that the rate of absorption of light by a molecule depends on the square of the light's intensity. This is different from OPA, where the rate of absorption is linear with respect to input intensity. As a result of this dependence, if material is cut with a high-power laser beam, the rate of material removal decreases very sharply from the center of the beam to its periphery. Because of this, the "pit" created is sharper and better resolved than if the same size pit were created using normal absorption.

3D photopolymerization

In 3D microfabrication, a block of gel containing monomers and a 2-photon active photoinitiator is prepared as a raw material. Application of a focused laser to the block results in polymerization only at the focal spot of the laser, where the intensity of the absorbed light is highest. The shape of an object can therefore be traced out by the laser, and the excess gel can then be washed away to leave the traced solid.

Imaging

The human body is not transparent to visible wavelengths. Hence, one-photon imaging using fluorescent dyes is not very efficient. If the same dye had good two-photon absorption, then the corresponding excitation would occur at approximately twice the wavelength at which one-photon excitation would have occurred. As a result, it is possible to use excitation in the far infrared region, where the human body shows good transparency.

It is sometimes said, incorrectly, that Rayleigh scattering is relevant to imaging techniques such as two-photon imaging. According to Rayleigh's scattering law, the amount of scattering is proportional to $1/\lambda^4$, where $\lambda$ is the wavelength. As a result, if the wavelength is increased by a factor of 2, the Rayleigh scattering is reduced by a factor of 16. However, Rayleigh scattering only takes place when the scattering particles are much smaller than the wavelength of light (the sky is blue because air molecules scatter blue light much more than red light). When particles are larger, scattering increases approximately linearly with wavelength: hence clouds are white, since they contain water droplets. This form of scatter is known as Mie scattering and is what occurs in biological tissues. So, although longer wavelengths do scatter less in biological tissues, the difference is not as dramatic as Rayleigh's law would predict.

Optical power limiting

Another area of research is optical power limiting. In a material with a strong nonlinear effect, the absorption of light increases with intensity such that beyond a certain input intensity the output intensity approaches a constant value. Such a material can be used to limit the amount of optical power entering a system. It can protect expensive or sensitive equipment such as sensors, can be used in protective goggles, or can be used to control noise in laser beams.

Photodynamic therapy

Photodynamic therapy (PDT) is a method for treating cancer. In this technique, an organic molecule with a good triplet quantum yield is excited so that the triplet state of this molecule interacts with oxygen. The ground state of oxygen has triplet character. This leads to triplet-triplet annihilation, which gives rise to singlet oxygen, which in turn attacks cancerous cells.
However, using TPA materials, the window for excitation can be extended into the infrared region, thereby making the process more viable for use on the human body.

Optical data storage

The ability of two-photon excitation to address molecules deep within a sample without affecting other areas makes it possible to store and retrieve information in the volume of a substance rather than only on a surface, as is done on the DVD. Therefore, 3D optical data storage could provide media that contain terabyte-level data capacities on a single disc.

TPA compounds

To some extent, linear and two-photon absorption strengths are linked. Therefore, the first compounds to be studied (and many that are still studied and used in, e.g., two-photon microscopy) were standard dyes. In particular, laser dyes were used, since these have good photostability characteristics. However, these dyes tend to have two-photon cross-sections of the order of 0.1–10 GM, much less than is required for simple experiments.

It was not until the 1990s that rational design principles for the construction of two-photon-absorbing molecules began to be developed, in response to a need from imaging and data-storage technologies, and aided by the rapid increases in computer power that allowed quantum calculations to be made. The accurate quantum mechanical analysis of two-photon absorbance is orders of magnitude more computationally intensive than that of one-photon absorbance, requiring highly correlated calculations at very high levels of theory.

The most important features of strongly TPA molecules were found to be a long conjugation system (analogous to a large antenna) and substitution by strong donor and acceptor groups (which can be thought of as inducing nonlinearity in the system and increasing the potential for charge transfer). Therefore, many push-pull olefins exhibit high TPA transitions, up to several thousand GM. It is also found that compounds with a real intermediate energy level close to the "virtual" energy level can have large two-photon cross-sections as a result of resonance enhancement. Compounds with interesting TPA properties also include various porphyrin derivatives, conjugated polymers and even dendrimers. In one study ["Strong Two-Photon Absorption of Singlet Diradical Hydrocarbons", Kenji Kamada, Koji Ohta, Takashi Kubo, Akihiro Shimizu, Yasushi Morita, Kazuhiro Nakasuji, Ryohei Kishi, Suguru Ohta, Shin-ichi Furukawa, Hideaki Takahashi, and Masayoshi Nakano, Angew. Chem. Int. Ed. 2007, 46, 3544–3546, doi:10.1002/anie.200605061] a diradical resonance contribution for the compound studied was also linked to efficient TPA. The TPA wavelength for this compound is 1425 nm, with an observed TPA cross section of 424 GM.

See also

* Virtual particle: virtual states, in which the probability amplitude is not conserved
* Nonlinear optics
* Two-photon excitation microscopy

External links

* [http://www.calctool.org/CALC/chem/photochemistry/2pa Web-based calculator for the rate of 2-photon absorption]
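To make the intensity dependence from the Measurements section concrete, here is a minimal Python sketch (with illustrative parameter values chosen only for the comparison, not taken from any measurement) contrasting the exponential one-photon (Beer–Lambert) attenuation with the algebraic two-photon attenuation:

```python
import numpy as np

alpha_c = 0.5   # effective one-photon attenuation (1/cm), illustrative
beta_c = 0.5    # effective two-photon coefficient, illustrative units
I0 = 2.0        # initial intensity, arbitrary units

x = np.linspace(0.0, 10.0, 6)           # path length in cm
I_opa = I0 * np.exp(-alpha_c * x)       # Beer's law: exponential decay
I_tpa = I0 / (1.0 + beta_c * x * I0)    # solution of dI/dx = -beta*c*I^2

for xi, a, b in zip(x, I_opa, I_tpa):
    print(f"x = {xi:4.1f}   OPA: {a:6.3f}   TPA: {b:6.3f}")
```

Note how the two-photon channel depletes the beam strongly only where the intensity is high, which is the origin of the sharper features discussed in the microfabrication section above.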
Time limit: 1 second · Memory limit: 128 MB

## Problem

A student at the Lutjebroek University of Technology wants to cover all buildings of the University with an enormous translucent plastic cover. This will make the use of umbrellas in this region unnecessary, significantly cutting costs. The costs of the cover are proportional to its area. With the purpose of the cover in mind, the student wants to reduce the costs of the cover as much as possible. You are to write a program that will help him with this by calculating the minimal area of a cover.

The whole campus terrain of the University is flat and has a rectangular shape. All buildings on it have the shape of the union of a set of boxes, each of which stands on the ground. The cover must cover all buildings and will be attached to the four sides of the campus at ground level.

## Input

The first line of the input file contains a single number: the number of test cases to follow. Each test case has the following format:

* One line with four integers x1, y1, x2, y2, separated by spaces, describing the campus terrain [x1, x2] × [y1, y2]. The numbers satisfy −10^4 ≤ x1 < x2 ≤ 10^4 and −10^4 ≤ y1 < y2 ≤ 10^4.
* One line with an integer n, 0 ≤ n ≤ 400, the number of boxes that form the buildings on the campus.
* n lines, the i-th of which contains five integers ai, bi, ci, di, hi, separated by spaces, describing a box with footprint [ai, ci] × [bi, di] and height hi above the ground. The numbers satisfy x1 ≤ ai < ci ≤ x2, y1 ≤ bi < di ≤ y2 and 0 < hi ≤ 10^4.

Note: [a, c] × [b, d] is a so-called Cartesian product and denotes the rectangular area {(x, y) : a ≤ x ≤ c, b ≤ y ≤ d}.

## Output

For every test case in the input file, the output should contain a single number, on a single line: the area of the smallest cover, with a precision of four decimals behind the decimal point. Rounding should occur as usual: a digit is rounded up if the next digit is ≥ 5, otherwise it is rounded down.

## Sample Input 1

3
0 0 12 10
0
0 0 12 10
1
2 2 8 8 3
0 0 12 10
2
2 4 10 8 3
4 2 8 6 5

## Sample Output 1

120.0000
169.7443
203.7598
# Seriesly!

Sum problems these days.

Let $x_1, x_2, \ldots, x_{n - 1}$ be the zeroes different from 1 of the polynomial $P(x) = x^n - 1$, $n \geq 2$. Prove that

$\frac {1}{1 - x_1} + \frac {1}{1 - x_2} + \ldots + \frac {1}{1 - x_{n - 1}} = \frac {n - 1}{2}.$

Note by Sharky Kesa

Comments:

"The key is the roots of unity. Once you do that, it's super simple. More later."

"Later? Finn, you have some unfinished work. :P"
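For completeness, here is one standard way to carry out the roots-of-unity hint (a short sketch). Write

$Q(x) = \frac{x^n - 1}{x - 1} = 1 + x + \cdots + x^{n-1} = \prod_{i=1}^{n-1} (x - x_i).$

Taking the logarithmic derivative gives

$\frac{Q'(x)}{Q(x)} = \sum_{i=1}^{n-1} \frac{1}{x - x_i},$

and evaluating at $x = 1$, where $Q(1) = n$ and $Q'(1) = 1 + 2 + \cdots + (n-1) = \frac{n(n-1)}{2}$, yields

$\sum_{i=1}^{n-1} \frac{1}{1 - x_i} = \frac{Q'(1)}{Q(1)} = \frac{n-1}{2}.$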
# Root Mean Square Error

The mean squared error (MSE) of an estimator is a risk function, corresponding to the expected value of the squared error by which the value implied by the estimator differs from the quantity to be estimated. The difference occurs because of randomness or because the estimator does not account for information that could produce a more accurate estimate. Like the variance, MSE has the same units of measurement as the square of the quantity being estimated. Among unbiased estimators, minimizing the MSE is equivalent to minimizing the variance. The root mean square error is its square root, RMSE = sqrt(MSE), so the r.m.s. error is measured on the same scale as the data.

The RMSD (root-mean-square deviation) serves to aggregate the magnitudes of the errors in predictions for various times into a single measure of predictive power. The individual differences between predicted and observed values are called residuals when the calculations are performed over the data sample used for estimation; they can be positive or negative, as the predicted value may under- or over-estimate the actual value, and the RMSD represents the sample standard deviation of these differences. If you plot the residuals against the x variable, you expect to see no pattern; if all the points of a scatter plot lie on a line with positive slope, then r will be 1 and the r.m.s. error of the regression line will be zero (which you already knew, since they all lie on a line). Dividing the RMSD by the range of the observed data gives the normalized root-mean-square deviation (NRMSD), which loses the units of the data and becomes a relative measure, useful for comparing models fitted to the same dataset.

The root mean square also appears outside statistics. The standard deviation is the root mean square of a signal's variation about its mean. In electrical engineering, RMS quantities such as electric current are used, for instance, when one wants to know the power P dissipated by an electrical resistance R. For a zero-mean sine wave, the relationship between RMS and peak-to-peak amplitude is: peak-to-peak = 2√2 × RMS ≈ 2.8 × RMS; for a 120 V RMS mains voltage this is about 340 volts. In bioinformatics, the RMSD is a measure of the average distance between the atoms of superimposed proteins, and in structure-based drug design it measures how far a docked pose of a ligand lies from a reference conformation.
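A minimal Python sketch of the quantities above (the sample arrays are made up for illustration):

```python
import numpy as np

observed = np.array([2.0, 3.5, 4.0, 5.5, 7.0])
predicted = np.array([2.2, 3.1, 4.4, 5.2, 6.8])

residuals = predicted - observed                  # positive or negative
rmse = np.sqrt(np.mean(residuals ** 2))           # RMSE = sqrt(MSE)
nrmse = rmse / (observed.max() - observed.min())  # normalized by the data range

print(f"RMSE  = {rmse:.4f}")
print(f"NRMSE = {nrmse:.4f}")
```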
## Shawn and Mark want to save $225.00 together

Question

Shawn and Mark want to save $225.00 together. Shawn has $16.00 and saves $10.00 each week. Mark has $22.00 and saves $7.00 each week. Which equation can be used to find the number of weeks it will take them?

A. 26x + 29x = 225
B. 38x + 17 = 225
C. 55x = 225
D. 38 + 17x = 225

Answers

1. After x weeks they will have:

→ Shawn: 16 + 10x
→ Mark: 22 + 7x

Together they reach their target when

→ 16 + 10x + 22 + 7x = 225
→ 38 + 17x = 225

Hence, option (D) is the answer.

2. • D. 38 + 17x = 225

Step-by-step explanation:

Shawn will have:
• 16 + 10x

Mark will have:
• 22 + 7x

Together their target is 225:
• 16 + 10x + 22 + 7x = 225
• 38 + 17x = 225

Correct choice is D.
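For completeness, solving the equation from choice D gives the actual number of weeks:

38 + 17x = 225 ⟹ 17x = 187 ⟹ x = 11 weeks

Check: after 11 weeks Shawn has 16 + 10·11 = 126, Mark has 22 + 7·11 = 99, and 126 + 99 = 225.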
Data-driven Receding Horizon Control with the Koopman Operator

Project Description

This project demonstrates an algorithm for learning a system model with a data-driven approach called the Koopman operator and controlling it in a receding horizon manner. This approach learns the system model in a highly efficient way. For example, it is able to capture the system model and apply correct control to keep the cartpole upright with only 300 timesteps in a continuous environment. Using the same pipeline, it can be generalized to other dynamical systems as well.

I implemented the receding horizon controller with conjugate gradient descent, as well as the Koopman operator, from scratch. The code has both a Python version for demo purposes and a C++ version for real-time control purposes. I keep library usage to a minimum so that the code is more straightforward to understand. A demo is shown in a continuous cartpole environment.

The Koopman Operator

The Koopman operator is an infinite-dimensional linear operator that can describe the propagation of a nonlinear system [1]. Consider a discrete-time dynamical system with a nonlinear transformation:

$x_{t+1} = f(x_{t})$

where $x_{t} \in S$ is the state of the system, and $f: S \rightarrow S$ is the transformation that propagates the state. The Koopman operator $\mathcal{K}$ is defined by

$\mathcal{K}g=g \circ f$

where $g\in \mathbb{G}: S \rightarrow \mathbb{C}$ is an observable of the system. The Koopman operator is infinite-dimensional, but it can be approximated by a finite-dimensional linear operator $\hat{K} \in \mathbb{C}^{N\times N}$. The method used to calculate this approximation is Extended Dynamic Mode Decomposition (EDMD) [2].

Using the Koopman operator to predict the dynamics is then

$\Psi(x_{k+1}) \approx \hat{K}^{\mathrm{T}} \Psi\left(x_{k}\right)$

where $\Psi(x)$ is a vector of basis functions (observables) defined by the user.

Receding Horizon Control with Conjugate Gradient Descent

Based on the Koopman operator obtained, for demo purposes, we formulate and solve an optimal control problem with a quadratic cost. Other costs or constraints can also be used, depending on the problem type:

$f\left(x_{t}, u_{t}\right)=x_{t}^{\top} Q x_{t}+u_{t}^{\top} R u_{t}$

with $Q$ positive semidefinite and $R$ positive definite as design tuning parameters. The optimal control problem is solved in a conjugate gradient descent manner, which usually converges faster than the steepest descent method.

Demo

The demo gym environment is a cartpole system with continuous states and continuous actions. In the training phase, I first let the agent explore the system with random actions for 3 episodes and 100 time steps per episode. Then, I tested the environment for 200 timesteps with random actions to see if the model could predict the behavior of the cartpole system. In the plot, the blue curves are the predictions and the red curves are the ground truths. They are identical. Therefore, we can conclude that the Koopman operator has successfully captured the model.

We are now ready to apply control with the model. The prediction horizon will be 50 steps, and the demo will run for 1500 steps in total. The goal is to move the cartpole from position x = -1 to the origin x = 0 while keeping the pole upright.
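As a rough illustration of the EDMD step described above (not the project's exact code; the basis functions here are hypothetical ones for a cartpole-like state vector), the finite-dimensional Koopman approximation can be fit by ordinary least squares on snapshot pairs:

```python
import numpy as np

def basis(x):
    # Hypothetical observables: the raw state, two features of the
    # pole angle x[2], and a constant term.
    return np.concatenate([x, [np.sin(x[2]), np.cos(x[2]), 1.0]])

def edmd(X, Y):
    """Fit K so that Psi(x_{k+1}) ~ K^T Psi(x_k), given snapshot
    pairs (X[i], Y[i]) with Y[i] = f(X[i])."""
    Psi_X = np.array([basis(x) for x in X])   # shape (M, N)
    Psi_Y = np.array([basis(y) for y in Y])
    K, *_ = np.linalg.lstsq(Psi_X, Psi_Y, rcond=None)  # N x N
    return K

# One-step prediction with the learned operator:
#   psi_next = edmd(X, Y).T @ basis(x_now)
```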
Here are the resulting state plots and the control action plot. From the plots, we can conclude that the pipeline successfully learns the cartpole system model and applies correct control that moves the system to the desired final state.

Source Code

Github repo

It is worth mentioning that the Python version is relatively slow. If real-time control is needed, I recommend using the C++ version, which can run the cartpole system at about 100 Hz.

Reference

[1] Mauroy, Alexandre, Igor Mezić, and Yoshihiko Susuki, eds. The Koopman Operator in Systems and Control: Concepts, Methodologies, and Applications. Vol. 484. Springer Nature, 2020.

[2] Williams, Matthew O., Ioannis G. Kevrekidis, and Clarence W. Rowley. "A data-driven approximation of the Koopman operator: Extending dynamic mode decomposition." Journal of Nonlinear Science 25.6 (2015): 1307-1346.
This second post of the series will focus on quadratic forms over some particular fields, like the rational numbers, the real numbers, and the $p$-adic numbers.

First of all, we give some introductory material on $p$-adic fields; here $p$ is a prime integer in $\mathbb{N}$. We define $\mathbb{Q}_p=\{\sum_{n>N}a_np^n \mid a_n=0,1,...,p-1,\ N\in\mathbb{Z}\}$. For any two of its elements $a=\sum a_np^n$, $b=\sum b_np^n$, we define a similarity function $s(a,b)=\sup\{n\in\mathbb{Z} \mid a_m=b_m,\forall m\leq n\}$ (by convention, for the zero element $0=\sum 0\cdot p^n$, we set $s(0,0)=+\infty$). It is a symmetric function $s:\mathbb{Q}_p\times \mathbb{Q}_p\rightarrow \mathbb{Z}\cup\{+\infty\}$. What is more, it is easy to verify that for any $a,b,c\in\mathbb{Q}_p$ we have $s(a,b)\geq \min\{s(a,c),s(c,b)\}$, which means that $-s$ is a distance function from $\mathbb{Q}_p$ to the tropical world $-\mathbb{Z}\cup\{-\infty\}$. So we can define a distance into the real world, namely $d(a,b)=e^{-s(a,b)}$, which then satisfies $d(a,b)\leq \max\{d(a,c),d(c,b)\}\leq d(a,c)+d(c,b)$, showing that this distance defines an ultra-metric on $\mathbb{Q}_p$.

Now we say something about the topology defined by this metric. First we consider a subspace of $\mathbb{Q}_p$, the ring of integers $\mathbb{Z}_p=\{a\in\mathbb{Q}_p \mid s(a,0)\geq0\}$. It is easy to see that $\mathbb{Z}_p=\{0,1,2,...,p-1\}^{\mathbb{N}}=F_p^{\mathbb{N}}$. What is more, for every inclusion $i_k:F_p\rightarrow\mathbb{Z}_p$, $x\mapsto xp^k$, the image $i_k(F_p)$ is a finite set, thus compact, so $\mathbb{Z}_p$ is compact (using Tychonoff's theorem). Moreover, for any non-negative rational number $0\leq x\in\mathbb{Z}[\frac{1}{p}]$, there is a unique $p$-expansion $x=\sum x_np^n$, so we have an injection $i:\mathbb{Z}[\frac{1}{p}]_{\geq0}\rightarrow \mathbb{Q}_p$. If we write $0\neq x=p^us\in\mathbb{Z}[\frac{1}{p}]_{\geq0}$ ($u\in\mathbb{Z}$, $s\in\mathbb{Z}_{>0}$, $\gcd(s,p)=1$), then $d(x,0)=e^{-u}$.

Viewed as a subspace of $\mathbb{Q}_p$, the set $\mathbb{Z}[\frac{1}{p}]_{\geq0}$ is dense in $\mathbb{Q}_p$. This is not hard to see. Indeed, for any $a=\sum_{n\geq N}a_np^n\in\mathbb{Q}_p$, define $A_M=\sum_{N\leq n\leq M}a_np^n\in \mathbb{Z}[\frac{1}{p}]_{\geq0}$. We find that $s(a,A_M)\geq M$, thus $d(a,A_M)\leq e^{-M}\rightarrow 0$ as $M\rightarrow +\infty$. What is more, suppose that $a(n)\in\mathbb{Q}_p$ is a Cauchy sequence. Then for any $e^{-M}$ there exists an integer $N$ such that for all $n,m\geq N$ we have $d(a(n),a(m))\leq e^{-M}$, which is the same as $s(a(n),a(m))\geq M$; that is to say, the terms $a(n)_k$, $a(m)_k$ ($k\leq M$) are all equal when $n,m\geq N$. This means that for any fixed $k$, the sequence $a(n)_k$ is eventually stationary. Hence we can define $a_k=\lim_n a(n)_k\in F_p$ (for $k$ too negative, $a_k=0$, because the sequence $a(n)$ is Cauchy, thus bounded). The element $a=\sum_{n}a_np^n$ lies in $\mathbb{Q}_p$, which shows that $\mathbb{Q}_p$ is complete with respect to this metric. Thus we can view $\mathbb{Q}_p$ as a completion of $\mathbb{Z}[\frac{1}{p}]_{\geq0}$ under this metric.

Now we want to define some operations on $\mathbb{Q}_p$: for example addition, negation, and multiplication. We define $Add:\mathbb{Z}[\frac{1}{p}]_{\geq0}\times \mathbb{Z}[\frac{1}{p}]_{\geq0}\rightarrow\mathbb{Z}[\frac{1}{p}]_{\geq0}$, $(x,y)\mapsto x+y$.
Equipping $\mathbb{Z}[\frac{1}{p}]_{\geq0}^2$ with the product metric $D=d\times d$, we have that $d(x+y,x'+y')\leq\max\{d(x+y,y+x'),d(y+x',x'+y')\}=\max\{d(x,x'),d(y,y')\}\leq D((x,y),(x',y'))=d(x,x')+d(y,y')$. So the map $Add$ is a continuous function on a dense subset of $\mathbb{Q}_p^2$. Thus we can extend $Add$ to all of $\mathbb{Q}_p^2$, and we call this $Add$ the addition operation on $\mathbb{Q}_p$.

Now for positive rational numbers $x=p^us$, $x'=p^{u'}s'$, $y=p^vt$ (with $s,s',t$ positive integers prime to $p$), we have that $d(xy,x'y)=d(p^{u+v}st,p^{u'+v}s't)=e^{-\min\{u+v,u'+v\}}=d(y,0)d(x,x')$. So if we define $Mul:\mathbb{Z}[\frac{1}{p}]_{\geq0}\times\mathbb{Z}[\frac{1}{p}]_{\geq0}\rightarrow\mathbb{Z}[\frac{1}{p}]_{\geq0}$, $(x,y)\mapsto xy$, then we see that $d(xy,x'y')\leq \max\{d(xy,x'y),d(x'y,x'y')\}=\max\{d(y,0)d(x,x'),d(x',0)d(y,y')\}\leq (d(y,0)+d(x',0))D((x,y),(x',y'))$, showing that $Mul$ is continuous on a dense subset of $\mathbb{Q}_p^2$. Hence we can extend $Mul$ to all of $\mathbb{Q}_p^2$, where it is again continuous, and we call this operation the multiplication on $\mathbb{Q}_p$.

For the present, we see that $(\mathbb{Q}_p,Add)$ is a semi-group. One way to find the negative of an element $a\in\mathbb{Q}_p$ would be to go back to $\mathbb{Z}[\frac{1}{p}]_{\geq0}$; but that is also only a semi-group with respect to addition. Still, let's have a try. We know that the number $1\in\mathbb{Q}$ is still $1$ in $\mathbb{Q}_p$. Then which element of $\mathbb{Q}_p$ corresponds to the negation of $1$? Suppose this element is written $a=\sum_{n\geq N}a_np^n$, with a sequence of rational numbers approximating it, $A_M=\sum_{N\leq n\leq M}a_np^n$. We have to guarantee that $d(A_M+1,0)\rightarrow d(0,0)=0$ as $M\rightarrow+\infty$. A simple calculation shows that $s(a,0)=0$ and $A_M=\sum_{0\leq n\leq M}(p-1)p^n$. So we obtain $a=\sum_{n\geq 0}(p-1)p^n$, which is the negation of $1$, and we can write $-1=\sum_{n\geq0}(p-1)p^n$.

The semi-group structure of $(\mathbb{Q}_p-\{0\},Mul)$ is inherited from that of $(\mathbb{Z}[\frac{1}{p}]_{\geq0},Mul)$, and we want to find an inverse for each element in it. A non-zero element can be written $a=p^ux$ with $x_0\in F_p-\{0\}$. So if we can find an inverse for $x$, say $y$ such that $xy=1$, then it is easy to see that $p^{-u}y$ is the inverse of $a=p^ux$. So we may just suppose that $a\in\mathbb{Z}_p$ with $a_0\neq0$.

The method is to go back to the $\mathbb{Z}_{\geq0}$ world and then take limits. Specifically, suppose that $a\in\mathbb{Z}_p$ with $s(a,0)=0$, and use again the approximating sequence $A_M=\sum_{0\leq n\leq M}a_np^n$. If we can find the inverse $B_M$ of $A_M$ for each $M$, then since $A_M$ is a Cauchy sequence, so is $B_M$ (because $d(A_M,0)d(B_M,0)=d(A_MB_M,0)=d(1,0)=1$, and $d(A_M,0)$ converges to a non-zero number). So things are reduced to finding these $B_M$, or more generally, finding the inverse of any positive integer $A$ prime to $p$.

Suppose the inverse of $A$ is $b=\sum_{n\geq0}b_np^n$ (if it exists), and consider the approximating sequence $B_N$ of $b$: we want $d(AB_N-1,0)\rightarrow 0$. For $N=0$: since $\gcd(A,p)=1$, there is an integer $b_0\in F_p$ such that $Ab_0=1\ (\text{mod}\ p)$, so $d(Ab_0-1,0)\leq e^{-1}$. For $N=1$: since again $\gcd(A,p^2)=1$, there is $b_1\in F_p$ such that $A(b_0+b_1p)=1\ (\text{mod}\ p^2)$, and similarly $d(A(b_0+b_1p)-1,0)\leq e^{-2}$. In fact, $(b_0,b_1)$ is the only element in $F_p^2$ such that $Ab_0=1\ (\text{mod}\ p)$ and $A(b_0+b_1p)=1\ (\text{mod}\ p^2)$.
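This digit-by-digit lifting is easy to carry out concretely. Here is a minimal Python sketch of the process (a toy illustration of the argument above; it assumes Python 3.8+ for `pow(A, -1, p)`):

```python
def inv_digits(A, p, N):
    """First N digits b_0, ..., b_{N-1} of the p-adic inverse of an
    integer A prime to p, found by the successive lifting above."""
    b, digits = 0, []
    for n in range(N):
        # choose b_n so that A*(b + b_n*p^n) = 1 (mod p^{n+1})
        r = (1 - A * b) // p ** n % p      # current defect, one digit
        b_n = r * pow(A, -1, p) % p        # solve A*b_n = r (mod p)
        digits.append(b_n)
        b += b_n * p ** n
    return digits

print(inv_digits(3, 7, 3))  # [5, 4, 4]: 1/3 = 5 + 4*7 + 4*7^2 + ... in Z_7
```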
We can continue this process to any $N$, and on to infinity. In this way we find an element $b=\sum b_np^n\in\mathbb{Z}_p$ such that $d(Ab-1,0)=0$, which means that $Ab=1$. Thus we find an inverse for each non-zero element in $\mathbb{Q}_p$.

Now, combining the above results: any positive rational number can be written as $a=p^u\frac{A}{B}$, where $A,B$ are coprime positive integers prime to $p$. Since $B$ is invertible in $\mathbb{Q}_p$, we can define $i(a)=p^uAi(B)^{-1}$; for a negative number $-a\in\mathbb{Q}$, $i(-a)=(-1)i(a)=\sum_{n\geq0}(p-1)p^n\times i(a)$. So we get a larger inclusion $i: \mathbb{Q}\rightarrow\mathbb{Q}_p$. From now on, we will use the two notations $-1$ and $\sum_{n\geq0}(p-1)p^n$ interchangeably, and for any element $a\in\mathbb{Q}_p$ we set $-a=(-1)\times a$. It is easy to verify that the addition and multiplication thus defined are compatible, that is, multiplication is distributive over addition. Hence we give $\mathbb{Q}_p$ a field structure.

Now some words on $\mathbb{Z}_p$. Using the fact that $d(xy,0)=d(x,0)d(y,0)$, we see that $\mathbb{Z}_p$ is closed under multiplication. What is more, $-1$ clearly belongs to $\mathbb{Z}_p$. In addition, it is not hard to see that $\mathbb{Z}_p$ is closed under addition and subtraction, thus this space has a ring structure. One may then wonder: what are the units of this ring? Clearly, if $a\in\mathbb{Z}_p$ has $s(a,0)>0$, then $s(ab,0)>0$ for all $b\in\mathbb{Z}_p$, thus $ab\neq 1$, so $a\not\in \mathbb{Z}_p^*$. So we have to consider those $a$ with $s(a,0)=0$. In fact, these are exactly the units. That is to say:

The set of invertible elements of $\mathbb{Z}_p$ is $\{a\in\mathbb{Z}_p \mid s(a,0)=0\}$.

The proof uses the same method as above, and we omit it. We can say a bit more about this result. Note that an element in $\mathbb{Z}_p^*$ is always of the form $a=a_0+pa'$ ($a'\in\mathbb{Z}_p$) with $a_0\in F_p-\{0\}$. So we have $\mathbb{Z}_p^*=(F_p-\{0\})\times \mathbb{Z}_p$ as sets. Conversely, any $a\in\mathbb{Q}_p^*$ can be written $a=p^{s(a,0)}s$ with $s\in\mathbb{Z}_p^*$, so there is a group isomorphism $\mathbb{Q}_p^*\cong\mathbb{Z}\times \mathbb{Z}_p^*$.

The above serves as a simple introduction to the $p$-adic fields. Quadratic forms arise naturally from Euclidean spaces: in Euclidean spaces we have inner products, which are closely related to quadratic forms. In this post, we will consider these forms over the $p$-adic fields. For that, we need the Hilbert symbol.

Suppose that $k$ is a field (here $k$ will be $\mathbb{R}$, $\mathbb{Q}_p$, or $\mathbb{Q}$), and take two non-zero elements $a,b\in k^*$. The Hilbert symbol records whether the equation $ax^2+by^2=z^2$ has non-trivial solutions (different from $(x,y,z)=(0,0,0)$) in $k^3$. If this homogeneous equation has non-trivial solutions, we define $[a,b]=1$; otherwise we set $[a,b]=-1$. Why do we consider only non-zero $a,b$? For example, when $a=0$, the existence of non-trivial solutions is completely characterized by whether $b$ is a square or not.

It is clear that the Hilbert symbol $[\ ,\ ]:k^*\times k^*\rightarrow \{1,-1\}\cong\mathbb{Z}/2\mathbb{Z}$ is symmetric. We will show that it is in fact multiplicative in the first variable (hence also in the second variable), that is, $[aa',b]=[a,b][a',b]$, and that it is non-degenerate, that is, for any $a\in k^*$ that is not a square, we can find $b\in k^*$ such that $[a,b]=-1$. To prove these properties, we need some preliminaries.
First we want to characterize the Hilbert symbol $[a,b]$ using the field $k(\sqrt{b})$. Observe the equation rewritten as $ax^2=z^2-by^2$. If $b'=\sqrt{b}\in \overline{k}$ and $x\neq 0$, we have $a=(z/x-b'y/x)(z/x+b'y/x)$, i.e., $a$ is a norm from $k(\sqrt{b})$. So, if $b$ is not a square in $k$, then $[a,b]=1$ implies that $a$ is a norm in $k(\sqrt{b})$, and $[a,b]=-1$ implies that $a$ is not a norm in $k(\sqrt{b})$. In the case where $b$ is a square, we always have $[a,b]=1$ and $k(\sqrt{b})=k$, so $a$ is trivially a norm in $k(\sqrt{b})$. Hence: $[a,b]=1$ if and only if $a$ is a norm in $k(\sqrt{b})$.

Moreover, it is not hard to see that for $a,b,c\in k^*$ we have $[a,b^2]=1$, $[a,-a]=1$, $[a,1-a]=1$, and $[a,c^2b]=[a,b]$. If $[c,b]=1$, then $c$ is a norm in $k(\sqrt{b})$; using the multiplicativity of the norm map, $ac$ is a norm of $k(\sqrt{b})$ if and only if $a$ is. So $[ac,b]=[a,b]$ whenever $[c,b]=1$.

Now we are ready to prove the following result:

If $k=\mathbb{R}$, then $[a,b]=1$ if and only if at least one of $a,b$ is positive. If $k=\mathbb{Q}_p$, write $a=p^ux$, $b=p^vy$ ($x,y\in\mathbb{Z}_p^*$). For $p>2$, $[a,b]=(-1)^{uv\frac{p-1}{2}}(\frac{x}{p})^v(\frac{y}{p})^u$; for $p=2$, $[a,b]=(-1)^{\frac{x-1}{2}\frac{y-1}{2}+v\frac{x^2-1}{8}+u\frac{y^2-1}{8}}$. (Here $(\frac{x}{p})$ is the Legendre symbol extended to $\mathbb{Z}_p^*$ by $(\frac{x}{p})=(\frac{x_0}{p})$; the exponent $\frac{y-1}{2}$ is taken mod $2$, where it equals $y_1$; and $\frac{y^2-1}{8}$ mod $2$ equals $y_1(y_1+1)/2+y_2$.)

The case $k=\mathbb{R}$ is trivial, and we omit the proof. As for the case $k=\mathbb{Q}_p$ ($p>2$), we proceed as follows. Note that the claimed identity depends only on the values of $u,v$ mod $2$ and on the units $x,y$, so we consider the cases separately. Note also that the extended Legendre symbol is still multiplicative, and $(\frac{-1}{p})=(-1)^{(p-1)/2}$ even for $-1$ viewed as an element of $\mathbb{Z}_p$.

(1) The case $u=0$, $v=0$. The right-hand side is just $1$. Moreover, since $u,v$ are even, $[a,b]=[x,y]$. So we have to show that $[x,y]=1$, i.e., that the equation $xr^2+ys^2=t^2$ in the variables $r,s,t$ always has a non-trivial solution. We use again the method of going back to the world of $\mathbb{Z}$, or rather $\mathbb{Z}/p^n\mathbb{Z}$. Define the reduction of $a\in\mathbb{Z}_p$ modulo $p^n$ to be the number $P_n(a)=\sum_{0\leq m\leq n}a_mp^m$; clearly the sequence $P_n(a)$ approximates $a$. For $n=0$, consider $P_0(x)r^2+P_0(y)s^2=t^2\ (\text{mod}\ p)$ (for simplicity, we will write this as $xr^2+ys^2=t^2\ (p)$ when there is no confusion). Using the Legendre symbol, one easily shows that this equation always has non-trivial solutions as long as $P_0(x),P_0(y)\neq 0\ (p)$, which is the case. So we have a first-order approximate non-trivial solution $(r_0,s_0,t_0)\in F_p^3$. Now suppose that we have found an $(n-1)$-th order approximate non-trivial solution $(r_{n-1},s_{n-1},t_{n-1})$, i.e., $P_n(x)r_{n-1}^2+P_n(y)s_{n-1}^2-t_{n-1}^2=zp^n$. We set $x(r_{n-1}+rp^n)^2+y(s_{n-1}+sp^n)^2=(t_{n-1}+tp^n)^2\ (p^{n+1})$. After expansion, this becomes $zp^n+2p^n(xr_{n-1}r+ys_{n-1}s-t_{n-1}t)=0\ (p^{n+1})$, which is the same as $z+2(xr_{n-1}r+ys_{n-1}s-t_{n-1}t)=0\ (p)$. Note that $(r_{n-1},s_{n-1},t_{n-1})\neq(0,0,0)\ (p)$, hence so is $(xr_{n-1},ys_{n-1},-t_{n-1})$ (as $x,y$ are units), which means that this linear equation always has a solution $(r,s,t)\in F_p^3$.
So the triple $(r_{n-1}+rp^n, s_{n-1}+sp^n, t_{n-1}+tp^n)$ is an $n$-th order approximate non-trivial solution to the original equation. We can continue in this way to infinity; moreover, each sequence $r_n$ (respectively $s_n$, $t_n$) converges to some element $r$ (respectively $s$, $t$) in $\mathbb{Z}_p$. Thus the non-zero triple $(r,s,t)$ solves the equation $xr^2+ys^2=t^2$.

(2) The case $u=1$, $v=0$. For the same reason, we have to show that $[px,y]=(\frac{y}{p})$. Suppose that $(\frac{y}{p})=1$; then $y_0$ is a non-zero square in $F_p$, say $y_0=z_0^2\ (\text{mod}\ p)$. As above, the equation $y=z^2$ (in $\mathbb{Z}_p$) can be solved by lifting the first-order approximation $z_0$ (here we use the fact that $p\neq 2$). Thus we get $0\neq z\in\mathbb{Z}_p$ with $y=z^2$, and a non-trivial solution to the equation $pxr^2+ys^2=t^2$, namely $(0,1,z)$. Conversely, if $(r,s,t)$ is such a non-trivial solution, we can divide out all common $p$-factors, so that at least one of $r,s,t$ is a unit. Reducing modulo $p$, we get $pxr^2+ys^2=t^2\ (p)$, that is, $y_0s_0^2=t_0^2\ (p)$. If $s_0=0$, then $t_0=0$ too. Reducing then modulo $p^2$, we get $pxr^2=0\ (p^2)$; since $x$ is a unit, $r_0=0$. But this contradicts the assumption that at least one of $r,s,t$ is a unit. Hence $s_0\neq0$, and $y_0=(t_0/s_0)^2\ (p)$. So $y$ is a square in $F_p$ and $(\frac{y}{p})=1$.

(3) The case $u=1$, $v=1$. The same reasoning reduces this case to the identity $[px,py]=(-1)^{\frac{p-1}{2}}(\frac{x}{p})(\frac{y}{p})$. But note that $[px,-px]=1$, thus $[px,py]=[px,(-px)(py)]=[px,-p^2xy]=[px,-xy]$. According to the preceding case, $[px,-xy]=(\frac{-xy}{p})$, which is exactly $(-1)^{(p-1)/2}(\frac{x}{p})(\frac{y}{p})$.

For the case $p=2$, the proof is not exactly the same, but it is very similar and not very hard, and we will not give it here. The only difficulty is that the above lifting strategy has to be handled carefully: a unit $x\in\mathbb{Z}_2^*$ is a square if and only if $x$ is a square modulo $8$, not merely modulo $2$ or $4$. This is due to the factor $2$ appearing when expanding a square. One simple example: $5$ is not a square modulo $8$, even though it is a square modulo $2$ and modulo $4$.

The above result clearly shows that the Hilbert symbol is multiplicative in each variable, since the right-hand side is multiplicative in each variable. What is more, the Hilbert symbol is non-degenerate. That is:

For any $a\in k^*$ that is not a square, there is a $b\in k^*$ such that $[a,b]=-1$.

In $k=\mathbb{R}$, $a$ not a square means $a<0$; so take any $b<0$, which gives $[a,b]=-1$. In the case $k=\mathbb{Q}_p$ ($p>2$): when is $a=p^ux$ ($x\in\mathbb{Z}_p^*$) not a square? Well, when is it a square? If $a=(p^vy)^2$ ($y\in\mathbb{Z}_p^*$), then we must have $u=2v$ and $x=y^2$; conversely, if these two conditions are satisfied, then $a$ is a square. So there are three cases where $a$ is not a square: $a=p^{2u+1}x$ with $(\frac{x}{p})=1$, or $a=p^{2u}x$ with $(\frac{x}{p})=-1$, or $a=p^{2u+1}x$ with $(\frac{x}{p})=-1$. For the first case we can take $b=y$ with $(\frac{y}{p})=-1$; for the second case we take $b=p$; and for the last case we take $b=y$, a unit which is not a square. In the case $k=\mathbb{Q}_2$, the situation is similar, just a little more complex. Write again $a=2^ux$; then $a$ is a square if and only if $u$ is an even number and $x$ is a square modulo $8$ (which is the same as $x=1\ (8)$).
So if $a=2^{2u}x$ with $x=3\ (8)$, we can take $b=7$; with $x=5\ (8)$, take $b=2$; with $x=7\ (8)$, take $b=7$. If $a=2^{2u+1}x$, we can take $b=5$. These choices are easily verified, and we omit the details.

The next paragraphs concern the product formula for the Hilbert symbol. As we have seen above, there is an inclusion $\mathbb{Q}\subset\mathbb{Q}_p$ for every prime $p$. Let $P=\{p : p \text{ a prime number}\}\cup\{\infty\}$, and set $\mathbb{Q}_{\infty}=\mathbb{R}$. The fields $\mathbb{Q}_p$ ($p\in P$) are called local fields, the topological completions of the global field $\mathbb{Q}$. For any two non-zero $a,b\in\mathbb{Q}$, we write $[a,b]_p$ for the Hilbert symbol of $a,b$ in the field $\mathbb{Q}_p$ ($p\in P$). An interesting result is:

If $a,b\in\mathbb{Q}^*$, then $[a,b]_p=1$ for almost all $p\in P$, and we have the product formula for the Hilbert symbol: $\prod_{p\in P}[a,b]_p=1$.

The product makes sense since only finitely many factors differ from $1$, by the first part of the statement. A simple observation first: since $[a,b]_p$ is multiplicative in each variable, it suffices to prove the result for $a,b\in\{-1,q\}$ with $q$ a prime.

If $a=-1$, $b=-1$: $[-1,-1]_{\infty}=-1$; for $p>2$, $[-1,-1]_p=1$; and $[-1,-1]_2=(-1)^{\frac{-1-1}{2}\cdot\frac{-1-1}{2}}=-1$. So $\prod_p [-1,-1]_p=1$.

If $a=-1$, $b=q$ with $q$ a prime number: $[-1,q]_{\infty}=1$. If $q=2$: for $p>2$, $[-1,2]_p=1$, while $[-1,2]_2=(-1)^{\frac{-1-1}{2}\cdot\frac{1-1}{2}+1\cdot\frac{(-1)^2-1}{8}}=1$, so $\prod_p[-1,2]_p=1$. If $q>2$: $[-1,q]_2=(-1)^{\frac{-1-1}{2}\frac{q-1}{2}}=(-1)^{\frac{q-1}{2}}$; for $p\neq 2,q$, $[-1,q]_p=1$; and $[-1,q]_q=(\frac{-1}{q})=(-1)^{\frac{q-1}{2}}$. So $\prod_p [-1,q]_p=1$.

If $a=q$, $b=q'$ are two distinct primes: $[q,q']_{\infty}=1$. Suppose neither of them is $2$; then $[q,q']_2=(-1)^{\frac{q-1}{2}\frac{q'-1}{2}}$, $[q,q']_q=(\frac{q'}{q})$, $[q,q']_{q'}=(\frac{q}{q'})$, and $[q,q']_p=1$ for the other $p$. So $\prod_p[q,q']_p=1$ by the law of quadratic reciprocity. If $q'=2$: $[q,2]_2=(-1)^{0+\frac{q^2-1}{8}}$, $[q,2]_q=(\frac{2}{q})=(-1)^{\frac{q^2-1}{8}}$, and $[q,2]_p=1$ for the other $p$. So $\prod_p[q,2]_p=1$.

If $a=q$, $b=q$: clearly $[q,q]_{\infty}=1$. If $q=2$: $[2,2]_2=1$ and $[2,2]_p=1$, so $\prod_p[2,2]_p=1$. If $q>2$: $[q,q]_2=(-1)^{\frac{q-1}{2}\frac{q-1}{2}}$, $[q,q]_q=(-1)^{\frac{q-1}{2}}$, and $[q,q]_p=1$ for the other $p$; thus $\prod_p[q,q]_p=1$.

Thus we have proved the product formula for Hilbert symbols. Note that the proof used the law of quadratic reciprocity in an essential way; indeed, it shows that quadratic reciprocity implies the product formula. Conversely, applying the product formula to two distinct odd primes gives an immediate proof of the law of quadratic reciprocity, and the product formula is therefore sometimes called Hilbert's reciprocity law. The importance of Hilbert's reciprocity law is that it generalizes naturally to other algebraic number fields. We will talk about this later.
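To make these formulas concrete, here is a small Python sketch (my own illustration, for nonzero integers $a,b$) that evaluates $[a,b]_p$ directly from the formulas in the theorem above, and then checks the product formula on an example. For the infinite place, $[a,b]_\infty=-1$ exactly when both $a<0$ and $b<0$.

```python
def pval(n, p):
    """Write n = p^v * x with x prime to p; return (v, x)."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v, n

def legendre(x, p):
    """Legendre symbol (x/p) for an odd prime p, by Euler's criterion."""
    r = pow(x % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def hilbert(a, b, p):
    """Hilbert symbol [a, b]_p for nonzero integers a, b and a prime p."""
    u, x = pval(a, p)
    v, y = pval(b, p)
    if p != 2:
        sign = (-1) ** (u * v * ((p - 1) // 2))
        return sign * legendre(x, p) ** v * legendre(y, p) ** u
    e = ((x - 1) // 2) * ((y - 1) // 2) + v * ((x * x - 1) // 8) + u * ((y * y - 1) // 8)
    return (-1) ** (e % 2)

# Product formula check for a = 3, b = 5: the only contributing places
# are 2, 3, 5 and infinity (here [3, 5]_oo = 1 since both are positive).
symbols = [hilbert(3, 5, p) for p in (2, 3, 5)] + [1]
print(symbols, "product =", symbols[0] * symbols[1] * symbols[2] * symbols[3])
# -> [1, -1, -1, 1] product = 1
```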
Lots of companies need to analyze conversion rates. Maybe you want to understand how many people purchased a widget out of the people that landed on your website. Or how many people upgraded to a subscription out of the people that created an account. Computing a conversion rate is often fairly straightforward and involves nothing more than dividing two numbers. So what else is there to say about it? There is one major catch we had to deal with at Better. When there is a substantial delay until the conversion event, this analysis suddenly gets vastly more complex.

To illustrate what I am talking about, we can look at the conversion rate for borrowers coming to Better.com to get a mortgage, defined in the most simplistic way: dividing the number of converted users by the total cohort size. This looks really bad: is the conversion rate truly going down over time? But that's not right: it only looks like it is going down because we have given the later users less time to "bake". Another chart shows how confusing the definition of conversion rate is. Let's look at time until conversion (right y-axis) as a function of the user cohort. Ok, so the conversion rate is going down over time, but users are converting much faster? Clearly, this is a bogus conclusion, and yet again we are looking at it the wrong way. (Side note: throughout this blog post, the y scale is intentionally removed so that we don't share important business metrics.)

## The basic way: conversion at time T

There are a few ways we can resolve this. One way is to look at the conversion rate at T = 35 days, or some similar cutoff. That way we can compare cohorts and see if conversion rates are going up or down. Sadly, this also has a pretty severe issue: we can't compute conversion rates for anything more recent than 35 days ago. Back to the drawing board!

## Why does any of this matter?

It might be worth taking a step back and considering what types of issues this is causing. At Better, we spend a significant amount of money (millions of dollars) on various types of paid user acquisition. This means that we buy leads/clicks from some source and drive traffic to our website. Some of those are high intent, some of them are low intent. Some of them can take many months to convert. This makes it challenging to answer a seemingly simple question: what's the cost of user acquisition per channel? If we put ourselves in a position where we have to wait many months to measure the efficacy of an acquisition channel, it takes forever to iterate and improve our acquisition, and a lot of money is thrown out the window on bad channels. So, let's consider a few better options, culminating in a somewhat complex statistical model we built.

## Introducing cohort models

A much better way is to look at conversion on a cohorted basis. There are a number of different ways to do this, and I've written a whole blog post about this. I'm going to skip a lot of the intermediate steps and jump straight to what I consider the next best option: using a Kaplan-Meier estimator. This is a technique developed over 60 years ago in the field of survival analysis. Computing a Kaplan-Meier estimator for each weekly cohort generates curves like this. The insight here is to switch from using the x-axis for the time, and instead let each cohort be its own line. These curves help us with a few things:

✅ We can compare curves for cohorts that have been "baking" for a long time and curves that just started.
✅ We don’t have to throw away information by picking an arbitrary cutoff (such as “conversion at 30 days”). ✅ We can see some early behavior much quicker, by looking at the trajectory of a recent cohort. For a wide variety of survival analysis methods in Python, I recommend the excellent lifelines package. As a side note, survival analysis is typically concerned with mortality/failure rates, so if you use any off-the-shelf survival analysis tools, your plots are going to be “upside down” from the plots in this post. Kaplan-Meier also lets us estimate the uncertainty for each cohort, which I think is always best practice when you plot things! The nice thing about Kaplan-Meier is that it lets us operate on censored data. This means that for a given cohort, we’re not going to have observations beyond a certain point for certain members of that cohort. Some users may not have converted yet, but may very well convert in the future. This is most clear if we segment the users by some other property. In the case below I’ve arbitrarily segmented users by the first letter of their email address. These two groups contain users on a spectrum between: • Some users that just came to our site and have essentially no time to convert • Some users that have had plenty of time to convert Dealing with censoring is a huge focus for survival analysis and Kaplan-Meier does that in a formalized way. ## So far, so good Ok, so this is great: we have are now checking lots of the boxes, but IMO not quite all: ✅ Can deal with censored data ✅ Can give us uncertainty estimates ❌ Can extrapolate: it would be amazing if we could look at the early shape of a cohort curve and make some statements about what it’s going to converge towards. So, let’s switch to something slightly more complex: parametric survival models! Take a deep breath, I’m going to walk you through this somewhat technical topic: ## Parametric survival models I was working on a slightly simpler cohort chart initially, and my first attempt was to fit an exponential distribution. The inspiration came from continuous-time Markov chains where you can model the conversions as a very simple transition chart: In the chart above, we can only observe transitions to the converted state. A lack of observation does not necessarily mean no conversion, it means they are either dead, or will convert, but have not converted yet. This transition diagram actually describes a very simple differential equation that we can solve to get the closed form. I will spare you the details in this blog post, but the form of the curve that we are trying to fit is: $F(t) = c\left(1 - e^{-\lambda t}\right)$ This gives us two unknown parameters for each cohort: $c$ and $\lambda$. The former explains the conversion rate that the cohort converges towards, the latter explains the speed at which it converges. See below for a few examples of hypothetical curves: Note that the introduction of the parameter $c$ departs a bit from most of traditional survival analysis literature. Exponential distributions (as well as Weibull and gamma, which we will introduce in a second) are commonplace when you look at failure rates and other phenomena, but in all cases that I encountered so far, there is an assumption that everyone converts eventually (or rather, that everyone dies in the end). This assumption is no longer true when we consider conversions: not everyone converts in the end! 
That’s why we have to add the $0 \leq c \leq 1$ parameter ## Weibull distributions It turns out that exponential distributions fit certain types of conversion charts well, but most of the time, the fit is poor. This excellent blog post introduced me to the world of Weibull distributions, which are often used to model time to failure or similar phenomena. The Weibull distribution adds one more parameter $p > 0$ to the exponential distribution: $F(t) = c\left(1 - e^{-(t\lambda)^p}\right)$ Fitting a Weibull distribution seems to work really well for a lot of cohort curves that we work with at Better. Let’s fit one to the dataset we had earlier: The solid lines are the models we fit, and the dotted lines the Kaplan-Meier estimates. As you can see, these lines coincide very closely. The nice thing about the extrapolated lines is that we can use them to forecast their expected final conversion rate. We can also fit uncertainty estimates to the Weibull distribution just like earlier: The ability to extrapolate isn’t just a “nice to have”, but it makes it possible to make assumptions about final conversion rates much earlier, which in turn means our feedback cycle gets tighter and we can learn faster and iterate quicker. Instead of having to wait months to see how a new acquisition channel is performing, we can get an early signal very quickly, and make business decisions faster. This is extremely valuable! ## Gamma and generalized gamma distributions For certain types of cohort behavior, it turns out that a gamma distributions makes more sense. This distribution can be used to model a type of behavior where there is an initial time lag until conversion starts. The generalized gamma distribution combines the best of Weibull and gamma distributions into one single distribution that turns out to model almost any conversion process at Better. Here is one example: The generalized gamma conversion model has just four parameters that we need to fit (three coming from the distribution itself, one describing the final conversion rate). Yet, it seems to be an excellent model that fits almost any conversion behavior at Better. See below for a gif where I fit a generalized gamma model to a diverse set of database queries comprising different groups, different milestones, and different time spans: ## Introducing convoys Convoys is a small Python package to help you fit these models. It implements everything shown above, as well as something which we didn’t talk about so far: regression models. The point of regression models is to fit more powerful models that can predict conversion based on a set of features and learn that from historical data. We use these models for a wide range of applications at Better. Convoys came out of a few different attempt of building the math to fit these models. The basic math is quite straightforward: fit a probability distribution times a “final conversion rate” using maximum likelihood estimation. We rely on the excellent autograd package to avoid taking derivatives ourselves (very tedious!) and scipy.optimize for the actual curve fitting. On top of that, convoys supports estimating uncertainty using emcee. You can head over the the documentation if you want to read more about the package. Just to mention a few of the more interesting points of developing convoys: • For a while, convoys relied on Tensorflow, but it turned out it made the code more complex and wasn’t worth it. • To fit gamma distributions, we rely a lot on the lower regularized incomplete gamma function. 
## Gamma and generalized gamma distributions

For certain types of cohort behavior, it turns out that a gamma distribution makes more sense. This distribution can be used to model behavior where there is an initial time lag until conversion starts. The generalized gamma distribution combines the best of the Weibull and gamma distributions into one single distribution. Here is one example:

The generalized gamma conversion model has just four parameters that we need to fit (three coming from the distribution itself, one describing the final conversion rate). Yet, it seems to be an excellent model that fits almost any conversion behavior at Better. See below for a gif where I fit a generalized gamma model to a diverse set of database queries comprising different groups, different milestones, and different time spans:

## Introducing convoys

Convoys is a small Python package to help you fit these models. It implements everything shown above, as well as something we haven’t talked about so far: regression models. The point of regression models is to fit more powerful models that can predict conversion based on a set of features, learned from historical data. We use these models for a wide range of applications at Better.

Convoys came out of a few different attempts at building the math to fit these models. The basic math is quite straightforward: fit a probability distribution times a “final conversion rate” using maximum likelihood estimation. We rely on the excellent autograd package to avoid taking derivatives ourselves (very tedious!) and on scipy.optimize for the actual curve fitting. On top of that, convoys supports estimating uncertainty using emcee. You can head over to the documentation if you want to read more about the package.

Just to mention a few of the more interesting points of developing convoys:

• For a while, convoys relied on Tensorflow, but it turned out it made the code more complex and wasn’t worth it.
• To fit gamma distributions, we rely a lot on the lower regularized incomplete gamma function. This function has a bug in Tensorflow where the derivative is incorrect, and it’s not supported in autograd. After a lot of banging my head against the wall, I added a simple numerical approximation. Cam Davidson-Pilon (author of the lifelines package mentioned earlier) later ran into the exact same issue and made a small Python package that we’re now using.
• In order to regularize the models, I have found it useful to put very mild priors on the variance of some of the parameters using an inverse gamma distribution. This ends up stabilizing many of the curves fit in practice, while introducing a very mild bias.
• When fitting a regression model, we have separate parameters $c_i$ and $\lambda_i$ for each feature, but shared $k$ and $p$ parameters for the generalized gamma distribution. This is a fairly mild assumption in real-world cases and reduces the number of parameters by a lot.
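To make the “probability distribution times a final conversion rate, fit by maximum likelihood” idea concrete, here is a minimal censored-likelihood sketch for the simplest (exponential) case, using autograd and scipy.optimize as mentioned above. The parameterization and data are my own illustration, not convoys’ actual implementation:

```python
# Minimal censored MLE for F(t) = c * (1 - exp(-lam * t)); an illustration
# of the approach described above, not convoys' actual code.
import autograd.numpy as np
from autograd import value_and_grad
from scipy.optimize import minimize

# Synthetic data: observation durations, and whether conversion was observed.
t = np.array([1.0, 2.0, 3.0, 5.0, 8.0, 13.0, 21.0, 34.0])
converted = np.array([1, 0, 1, 1, 0, 0, 1, 0])

def negative_log_likelihood(params):
    a, b = params
    c = 1.0 / (1.0 + np.exp(-a))   # squash to 0 < c < 1
    lam = np.exp(b)                # keep lam > 0
    # Converted at t: density c * lam * exp(-lam * t).
    # Still censored at t: probability 1 - c * (1 - exp(-lam * t)).
    log_lik = np.where(
        converted == 1,
        np.log(c) + np.log(lam) - lam * t,
        np.log(1 - c * (1 - np.exp(-lam * t))),
    )
    return -np.sum(log_lik)

# autograd supplies the gradient; scipy does the actual optimization.
result = minimize(value_and_grad(negative_log_likelihood), x0=[0.0, -2.0], jac=True)
a, b = result.x
print("c =", 1 / (1 + np.exp(-a)), "lambda =", np.exp(b))
```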
Convoys is semi-experimental and the SDK might change quickly in the future, but we believe it has quite a wide range of applications, so definitely check it out if you are working on similar problems!

## Finally…

We are hiring! If you’re interested in these types of problems, definitely let us know! We have a small but quickly growing team of data engineers/scientists in New York City who are working on many of these types of problems on a daily basis.

# Surface plasmon resonance

The excitation of surface plasmons by light is denoted as surface plasmon resonance (SPR) for planar surfaces, or localized surface plasmon resonance (LSPR) for nanometer-sized metallic structures. This phenomenon is the basis of many standard tools for measuring the adsorption of material onto planar metal (typically gold and silver) surfaces or onto the surface of metal nanoparticles. It underlies many color-based biosensor applications and various lab-on-a-chip sensors.

## Explanation

Surface plasmons, also known as surface plasmon polaritons, are surface electromagnetic waves that propagate along a metal/dielectric (or metal/vacuum) interface. Since the wave is on the boundary of the metal and the external medium (air or water, for example), these oscillations are very sensitive to any change of this boundary, such as the adsorption of molecules to the metal surface.

To describe the existence and properties of surface plasmons, one can choose from various models (quantum theory, the Drude model, etc.). The simplest way to approach the problem is to treat each material as a homogeneous continuum described by a dielectric constant. In the terms of this description, for electronic surface plasmons to exist, the real part of the dielectric constant of the metal must be negative and its magnitude must be greater than that of the dielectric. This condition is met in the IR-visible wavelength region for air/metal and water/metal interfaces (where the real dielectric constant of a metal is negative and that of air or water is positive).

## Realisation

[Figure: Otto configuration]
[Figure: Kretschmann configuration]

In order to excite surface plasmons in a resonant manner, one can use an electron beam or a light beam (visible and infrared are typical). The incoming beam has to match its momentum to that of the plasmon. In the case of p-polarized light, this is possible by passing the light through a block of glass to increase the wavenumber (and the momentum), achieving the resonance at a given wavelength and angle. S-polarized light cannot excite electronic surface plasmons. Electronic and magnetic surface plasmons obey the following dispersion relation:

$K(\omega) = \frac{\omega}{c}\sqrt{\frac{\epsilon_1 \epsilon_2 \mu_1 \mu_2}{\epsilon_1 \mu_1 + \epsilon_2 \mu_2}}$

Typical metals that support surface plasmons are silver and gold, but metals such as copper, titanium, or chromium can also support surface plasmon generation.

Using light to excite SP waves, there are two well-known configurations. In the Otto setup, the light is shone on the wall of a glass block, typically a prism, and totally internally reflected. A thin metal (for example gold) film is positioned close enough that the evanescent waves can interact with the plasma waves on the surface and excite the plasmons. In the Kretschmann configuration, the metal film is evaporated onto the glass block. The light again illuminates from the glass side, and an evanescent wave penetrates through the metal film. The plasmons are excited at the outer side of the film. This configuration is used in most practical applications.
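The dispersion relation above is easy to evaluate numerically. The following is a small sketch; the gold permittivity used is a rough illustrative value near 633 nm, not a measured constant:

```python
# Evaluate K(w) = (w / c) * sqrt(eps1*eps2*mu1*mu2 / (eps1*mu1 + eps2*mu2))
# for a dielectric/metal interface. Optical constants are illustrative only.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def plasmon_wavevector(wavelength_m, eps1, eps2, mu1=1.0, mu2=1.0):
    w = 2 * np.pi * C / wavelength_m
    # Cast to complex so the square root handles lossy (complex) permittivity.
    return (w / C) * np.sqrt(
        (eps1 * eps2 * mu1 * mu2) / (eps1 * mu1 + eps2 * mu2 + 0j)
    )

# Air (eps ~ 1) over gold (roughly -11.6 + 1.2j around 633 nm; illustrative).
k_sp = plasmon_wavevector(633e-9, 1.0, -11.6 + 1.2j)
print("Re K =", k_sp.real, "1/m; Im K =", k_sp.imag, "1/m")
```

The real part exceeds the free-space wavenumber, which is exactly why the glass block (Otto or Kretschmann) is needed to match momentum.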
## Applications

Surface plasmons have been used to enhance the surface sensitivity of several spectroscopic measurements, including fluorescence, Raman scattering, and second harmonic generation. However, in their simplest form, SPR reflectivity measurements can be used to detect DNA or proteins by the changes in the local index of refraction upon adsorption of the target molecule to the metal surface. If the surface is patterned with different biopolymers, the technique is called Surface Plasmon Resonance Imaging (SPRI).

For nanoparticles, localized surface plasmon oscillations can give rise to the intense colors of solutions of plasmon-resonant nanoparticles and/or very intense scattering. Nanoparticles of noble metals exhibit strong ultraviolet-visible absorption bands that are not present in the bulk metal. Shifts in this resonance due to changes in the local index of refraction upon adsorption of biopolymers to the nanoparticles can also be used to detect biopolymers such as DNA or proteins. Related complementary techniques include plasmon waveguide resonance, QCM, and dual polarisation interferometry.

## Examples

### Layer-by-layer self-assembly

[Figure: SPR curves measured during the adsorption of a polyelectrolyte and then a clay mineral self-assembled film onto a thin (ca. 38 nanometers) gold sensor.]

One of the first common applications of surface plasmon resonance spectroscopy was the measurement of the thickness of adsorbed self-assembled nanofilms on gold substrates. An example is presented in the figure above. The resonance curves shift to higher angles as the thickness of the adsorbed film increases. This example is a 'static SPR' measurement. When higher-speed observation is desired, one can select an angle right below the resonance point (the angle of minimum reflectance) and measure the reflectivity changes at that point. This is the so-called 'dynamic SPR' measurement. The interpretation of the data assumes that the structure of the film does not change significantly during the measurement.

### Binding constant determination

[Figure: Association and dissociation signal]
[Figure: Example of output from Biacore]

When the affinity of two ligands has to be quantified, the binding constant must be determined. It is the equilibrium constant of the binding reaction and, as in any chemical reaction, it is the association rate divided by the dissociation rate. For this, a so-called bait ligand is coated onto the gold surface of the SPR crystal. Through a microflow system, a solution with the prey ligand flows over the bait layer and binds. Binding makes the SPR signal change towards an equilibrium. After some time, a solution without the prey is applied, and a new equilibrium is reached. From the association speed ('on rate', $v_{\text{on}}$) and dissociation speed ('off rate', $v_{\text{off}}$), the binding constant can be calculated:

${\displaystyle K={\frac {v_{\text{on}}}{v_{\text{off}}}}}$

The actual SPR signal can be explained by the electromagnetic 'coupling' of the incident light with the surface plasmon of the gold layer. This plasmon can be influenced by the layer just a few nanometers across the gold-solution interface, i.e. the bait protein and possibly the prey protein. Binding makes the reflection angle change.
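To make the on-rate/off-rate picture concrete, here is a toy 1:1 Langmuir binding-kinetics sketch of the kind of sensorgram described above; all rate constants and concentrations are invented for illustration:

```python
# Toy 1:1 binding kinetics behind an SPR sensorgram; all numbers invented.
import numpy as np

k_on = 1e5     # association rate constant, 1/(M*s)  (hypothetical)
k_off = 1e-3   # dissociation rate constant, 1/s     (hypothetical)
conc = 1e-7    # prey concentration, M               (hypothetical)
r_max = 100.0  # response at full bait occupancy, arbitrary units

t = np.linspace(0, 600, 7)  # seconds

# Association phase: the signal rises towards equilibrium.
k_obs = k_on * conc + k_off
r_eq = r_max * (k_on * conc) / k_obs
assoc = r_eq * (1 - np.exp(-k_obs * t))

# Dissociation phase after the prey solution is replaced by buffer.
dissoc = assoc[-1] * np.exp(-k_off * t)

K_A = k_on / k_off  # binding (association) constant, 1/M
print("equilibrium response:", r_eq, "  K_A =", K_A, "M^-1")
```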
## Magnetic Plasmon Resonance

Recently, there has been interest in magnetic surface plasmons. These require materials with a large negative magnetic permeability, a property that has only recently been made available with the construction of metamaterials.
# Asymptotic of grazing collisions and particle approximation for the Kac equation without cutoff

Speaker: David GODINHO
Type: PhD students' seminar (Séminaire des doctorants)
Site: UGE
Room: 4B 05R
Start: 02/11/2011 - 14:00
End: 02/11/2011 - 14:00

The subject of this presentation is the Kac equation without cutoff. We first show that in the asymptotic of grazing collisions, the Kac equation can be approximated by a Fokker-Planck equation. The convergence is uniform in time, and we give an explicit rate of convergence. Next, we replace the small collisions by a small diffusion term in order to approximate the solution of the Kac equation, and we study the resulting error. We finally build a system of stochastic particles undergoing collisions and diffusion, which is easy to simulate and which approximates the solution of the Kac equation without cutoff. We give some estimates on the rate of convergence.
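As a purely illustrative aside on the particle system the abstract mentions: Kac's classical N-particle model is easy to simulate once a cutoff is imposed (here, uniform collision angles). The genuinely non-cutoff dynamics analyzed in the talk requires more care with the singular angular measure; this sketch only shows the basic collision mechanism:

```python
# Toy Kac particle system (with cutoff): N one-dimensional velocities,
# random pairwise "collisions" that rotate each pair and conserve energy.
import numpy as np

rng = np.random.default_rng(42)
N = 1000
v = rng.uniform(-1.0, 1.0, size=N)  # deliberately non-Gaussian initial data
v *= np.sqrt(N / np.sum(v**2))      # normalize energy per particle to 1

for _ in range(20 * N):             # roughly 20 collisions per particle
    i, j = rng.choice(N, size=2, replace=False)
    theta = rng.uniform(0, 2 * np.pi)  # cutoff: no angular singularity
    vi, vj = v[i], v[j]
    v[i] = vi * np.cos(theta) + vj * np.sin(theta)
    v[j] = -vi * np.sin(theta) + vj * np.cos(theta)

# The rotation conserves vi^2 + vj^2, so total energy is preserved, and the
# empirical velocity law relaxes towards a Gaussian (kurtosis -> 3).
print("energy per particle:", np.mean(v**2))
print("sample kurtosis:", np.mean(v**4) / np.mean(v**2) ** 2)
```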
## Climb Up The IT Ladder With A DevOps Certification

The development and advancement of IT has been phenomenal, but IT education has always been a matter of concern among IT professionals. Things change rapidly in the IT world, and there has always been a dearth of institutions offering education on the latest technological developments. By the time a course has been designed and books have been written, edited, and published, the course becomes outdated as some new technology replaces it. But this is the era of DevOps certification, and DevOps is here to stay. A recent article mentions the struggle various companies are going through looking for DevOps professionals—professionals who can deal with DevOps software programs. Let’s delve deeper and look at this issue in detail.

What Is DevOps?

DevOps, or Development and Operations, is a culture that involves the collaboration and communication of software developers with the other IT departments, and it was born out of the need to augment the famous Agile framework. It can also be defined as the practice of development and operations engineers working together throughout the service lifecycle—starting from design and development through to delivery and customer support.

Importance of DevOps

The benefits of DevOps are many, and aligning all the departments ensures a faster delivery rate and reduces the risks associated with production changes too. Here are some of the benefits of DevOps in organisations:

● Problems become less complex and can be solved easily
● Software delivery becomes an easy and continuous process
● Features get delivered much more quickly
● The operating environment becomes stable
● More time becomes available to add value, as less time is required for problem solving

Some measurable benefits DevOps offers to organisations are:

A short development cycle

Since the DevOps culture promotes increased communication and collaboration between the development and operations teams, this translates to a shorter development timeframe, as there is a quicker shift from engineering code to executable production code.

Improved collaboration between teams

DevOps improves the transparency between the various teams, which aids effective decision making. It paves the way for better business agility and creates an atmosphere where mutual collaboration, communication, and integration become the norm among the various teams of an IT organisation placed in different parts of the world.

Better Defect Detection

DevOps is built on the same framework as Agile. With DevOps, it becomes significantly easier to detect defects, as the bigger codebases get broken into smaller and more manageable features.

Improved ROI

DevOps is based on Agile principles, and that ensures faster development of software programs. With faster defect detection and solutions to problems, you can ensure frequent delivery to your client, which in turn will improve your return on investment (ROI).

In short, organisations that have embraced DevOps have more time to focus on the innovation and improvement that contribute to the overall development of the organisation. They have tools to automate routine tasks, allowing employees to focus on things that will help grow the business. In just a matter of a few years, DevOps has come a long way, and many companies are taking the DevOps route to effective production.

Why DevOps Certification?
Organisations are already looking for DevOps-certified professionals, who are among the highest-paid professionals in the IT industry. The concept of DevOps is still new, and the demand for DevOps professionals is quite high. As more and more organisations embrace this framework, the demand for DevOps-skilled professionals will grow, so earning this certification will help you climb the career ladder.
### KnowledgeHut Author

KnowledgeHut is an outcome-focused global ed-tech company. We help organizations and professionals unlock excellence through skills development. We offer training solutions under the people and process, data science, full-stack development, cybersecurity, future technologies and digital transformation verticals.

Website: https://www.knowledgehut.com
## Cardinality of a set

Subject: Compulsory Mathematics

#### Overview

The number of distinct elements in a given set A is called the cardinal number of A. It is denoted by n(A). If A = {1, 2, 3, 4, 5}, then the cardinality of set A is n(A) = 5.

##### Cardinality of a set

The cardinality of set A is defined as the number of elements in the set A and is denoted by n(A). For example, if A = {a, b, c, d, e}, then the cardinality of set A is n(A) = 5.

Let A and B be two subsets of a universal set U. Their relation can be shown in a Venn diagram as:

$$n(A) = n_o(A) + n(A \cap B) \quad \text{or,}\; n(A) - n(A \cap B) = n_o(A)$$
$$n(B) = n_o(B) + n(A \cap B) \quad \text{or,}\; n(B) - n(A \cap B) = n_o(B)$$

Also,
\begin{align*} n(A \cup B) &= n_o(A) + n(A \cap B) + n_o(B)\\ &= n(A) - n(A \cap B) + n(A \cap B) + n(B) - n(A \cap B)\\ \therefore n(A \cup B) &= n(A) + n(B) - n(A \cap B) \end{align*}

If A and B are disjoint sets, then $n(A \cap B) = 0$ and $n(A \cup B) = n(A) + n(B)$.

Again, $n(U) = n(A \cup B) + n(\overline{A \cup B})$. If $n(\overline{A \cup B}) = 0$, then $n(U) = n(A \cup B)$.

#### Problems involving three sets

Let A, B and C be three non-empty, intersecting sets. Then:

$n(A \cup B \cup C) = n(A) + n(B) + n(C) - n(A \cap B) - n(B \cap C) - n(C \cap A) + n(A \cap B \cap C).$

In a Venn diagram:

$n(A)$ = number of elements in set A.
$n(B)$ = number of elements in set B.
$n(C)$ = number of elements in set C.
$n_o(A)$ = number of elements in set A only.
$n_o(B)$ = number of elements in set B only.
$n_o(C)$ = number of elements in set C only.
$n_o(A \cap B)$ = number of elements in sets A and B only.
$n_o(B \cap C)$ = number of elements in sets B and C only.
$n_o(C \cap A)$ = number of elements in sets C and A only.
$n(A \cap B \cap C)$ = number of elements in sets A, B and C.

#### From the Venn-diagram

\begin{align*} n(A \cup B \cup C) &= n_o(A) + n_o(B) + n_o(C) + n_o(A \cap B) + n_o(B \cap C) + n_o(C \cap A) + n(A \cap B \cap C)\\ &= [n(A) - n_o(A \cap B) - n_o(C \cap A) - n(A \cap B \cap C)] + [n(B) - n_o(A \cap B) - n_o(B \cap C) - n(A \cap B \cap C)] + [n(C) - n_o(B \cap C) - n_o(C \cap A) - n(A \cap B \cap C)] + n_o(A \cap B) + n_o(B \cap C) + n_o(C \cap A) + n(A \cap B \cap C)\\ &= n(A) + n(B) + n(C) - [n_o(A \cap B) + n(A \cap B \cap C)] - [n_o(B \cap C) + n(A \cap B \cap C)] - [n_o(C \cap A) + n(A \cap B \cap C)] + n(A \cap B \cap C)\\ &= n(A) + n(B) + n(C) - n(A \cap B) - n(B \cap C) - n(C \cap A) + n(A \cap B \cap C) \end{align*}

$$\boxed{\therefore n(A \cup B \cup C) = n(A) + n(B) + n(C) - n(A \cap B) - n(B \cap C) - n(C \cap A) + n(A \cap B \cap C)}$$

If A, B and C are disjoint sets, $n(A \cup B \cup C) = n(A) + n(B) + n(C)$.

##### Things to remember

• The cardinality of a set is a non-negative integer, never a decimal or a percentage; for example, n(A) = 5, not 50%, since 50% = 0.5.
• If A, B and C are disjoint sets, $n(A \cup B \cup C) = n(A) + n(B) + n(C)$.
• $n(A \cup B \cup C) = n(A) + n(B) + n(C) - n(A \cap B) - n(B \cap C) - n(C \cap A) + n(A \cap B \cap C)$
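These identities are easy to sanity-check with a short script. Here is a minimal sketch using Python's built-in set type, with arbitrarily chosen example sets:

```python
# Verify the inclusion-exclusion identities with Python sets.
A = {1, 2, 3, 4, 5}
B = {4, 5, 6, 7}
C = {1, 5, 7, 8, 9}

# Two sets: n(A U B) = n(A) + n(B) - n(A ∩ B)
assert len(A | B) == len(A) + len(B) - len(A & B)

# Three sets: n(A U B U C)
#   = n(A) + n(B) + n(C) - n(A∩B) - n(B∩C) - n(C∩A) + n(A∩B∩C)
lhs = len(A | B | C)
rhs = (len(A) + len(B) + len(C)
       - len(A & B) - len(B & C) - len(C & A)
       + len(A & B & C))
assert lhs == rhs
print("n(A U B U C) =", lhs)
```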
##### Venn Diagrams and Sets

Solution:
$n(U) = 43$, $n(A) = 25$, $n(B) = 18$, $n(A \cap B) = 7$, $n(\overline{A \cup B}) = ?$
The Venn diagram of the above information is as follows:
\begin{align*} n(A \cup B) &= n(A) + n(B) - n(A \cap B) \\ &= 25 + 18 - 7 \\ &= 43 - 7 \\ &= 36\\ \text{Again,}\; n(\overline{A \cup B}) &= n(U) - n(A \cup B) \\ &= 43 - 36\\ &= 7\; \text{Ans.} \end{align*}

Solution:
Let N and H be the sets of people who like to see Nepali movies and Hindi movies respectively. From the question, $n(U) = 80$, $n(N) = 47$, $n(H) = 31$, $n(\overline{N \cup H}) = 21$.
(i) \begin{align*} n(N \cup H) &= n(U) - n(\overline{N \cup H}) \\ &= 80 - 21 \\ &= 59 \\ \text{Again,}\; n(N \cup H) &= n(N) + n(H) - n(N \cap H)\\ \text{or,}\; 59 &= 47 + 31 - n(N \cap H) \\ \text{or,}\; 59 &= 78 - n(N \cap H) \\ \text{or,}\; n(N \cap H) &= 78 - 59 \\ &= 19 \\ n_o(N) &= n(N) - n(N \cap H) \\ &= 47 - 19 \\ &= 28\; \text{Ans.} \end{align*}
(ii) The number of people who like to see Hindi movies only:
\begin{align*} n_o(H) &= n(H) - n(N \cap H) \\ &= 31 - 19 \\ &= 12 \end{align*}
(iii) The above information is shown in the Venn diagram.

Solution:
Let F and V represent the sets of people who like football and volleyball respectively, and let U be the universal set. From the question, $n(F) = 40, n(F \cap V) = 10, n(U) = n(F \cup V) = 65, n(V) = ?$
(i) We know that
\begin{align*} n(F \cup V) &= n(F) + n(V) - n(F \cap V) \\ \text{or,}\; 65 &= 40 + n(V) - 10 \\ \text{or,}\; 65 &= 30 + n(V)\\ \text{or,}\; n(V) &= 65 - 30 \\ \therefore n(V) &= 35\; \text{Ans.} \end{align*}
(ii) $n_o(F) = n(F) - n(F \cap V) = 40 - 10 = 30$ Ans.
(iii) $n_o(V) = n(V) - n(F \cap V) = 35 - 10 = 25$ Ans.

Solution:
Let F and M denote the sets of people who like folk and modern songs respectively, and let U be the universal set. From the question, $n(U) = 100$, $n(F) = 65$, $n(M) = 55$, $n(F \cap M) = 35$.
(i) The above information is represented in a Venn diagram as follows. From the Venn diagram,
\begin{align*} n(F \cup M) &= n(F) + n(M) - n(F \cap M)\\ &= 65 + 55 - 35 \\ &= 85\; \text{Ans.} \end{align*}
(ii) \begin{align*} n(\overline{F \cup M}) &= 100 - 85 \\ &= 15\; \text{Ans.} \end{align*}

Solution:
Let C and V represent the sets of students who like to play cricket and volleyball, and let U be the universal set. From the question, $n(C) = 20, n(V) = 15, n(U) = n(C \cup V) = 30, n(C \cap V) = ?$
We know that
\begin{align*} n(C \cup V) &= n(C) + n(V) - n(C \cap V) \\ \text{or,}\; 30 &= 20 + 15 - n(C \cap V) \\ \text{or,}\; n(C \cap V) &= 35 - 30\\ \therefore n(C \cap V) &= 5\; \text{Ans.} \end{align*}
The above information is shown in the following Venn diagram.

Solution:
Let T and C represent the sets of students who like to drink tea and coffee respectively, and let U be the universal set. From the question, $n(U) = 120, n(T) = 88, n(C) = 26, n(\overline{T \cup C}) = 17$. Let $n(T \cap C) = x$. The above information is shown in the Venn diagram. We know that
\begin{align*} n(T \cup C) &= n(U) - n(\overline{T \cup C})\\ &= 120 - 17 \\ &= 103 \\ \text{Again,}\; n(T \cup C) &= n(T) + n(C) - n(T \cap C)\\ \text{or,}\; 103 &= 88 + 26 - n(T \cap C) \\ \text{or,}\; 103 &= 114 - n(T \cap C)\\ \therefore n(T \cap C) &= 114 - 103 \\ &= 11\; \text{Ans.} \end{align*}

Solution:
Let T and C denote the sets of students who like tea and coffee respectively, and let U be the universal set. From the question, $n(U) = 130, n(T) = 70, n(C) = 40, n(\overline{T \cup C}) = 30$.
We know that
\begin{align*} n(T \cup C) &= n(U) - n(\overline{T \cup C})\\ &= 130 - 30 \\ &= 100\; \text{Ans.} \\ \text{Again,}\; n(T \cup C) &= n(T) + n(C) - n(T \cap C)\\ \text{or,}\; 100 &= 70 + 40 - n(T \cap C)\\ \text{or,}\; 100 &= 110 - n(T \cap C) \\ \therefore n(T \cap C) &= 10\; \text{Ans.} \end{align*}
The Venn diagram showing the given information is as follows.

Solution:
Let A and B denote the sets of students who used the autorickshaw and the bus respectively.
Let U be the universal set. From the question, $n(U) = 131, n(A) = 56, n(B) = 103, n_o(B) = 65$. Let $n(A \cap B) = x$ and $n(\overline{A \cup B}) = y$. The above information is shown in the Venn diagram as follows. From the Venn diagram,
\begin{align*} x + 65 &= 103 \\ \text{or,}\; x &= 103 - 65 \\ \therefore x &= 38 \end{align*}
(i) The number of students who used the autorickshaw only:
\begin{align*} n_o(A) &= 56 - x \\ &= 56 - 38 \\ &= 18\; \text{Ans.} \end{align*}
(ii) \begin{align*} n(\overline{A \cup B}) &= n(U) - n(A \cup B)\\ &= 131 - (65 + 38 + 18) \\ &= 131 - 121\\ &= 10\; \text{Ans.} \end{align*}

Solution:
Let B and L denote the sets of tourists who like to visit Bhaktapur and Lalitpur respectively, and let U be the universal set. From the question, $n(U) = 2400, n(B) = 1650, n(L) = 850, n(\overline{B \cup L}) = 150$.
(i) The above information is shown in the Venn diagram as follows.
(ii) \begin{align*} n(B \cup L) &= n(U) - n(\overline{B \cup L})\\ &= 2400 - 150 \\ &= 2250 \\ \text{Again,}\; n(B \cup L) &= n(B) + n(L) - n(B \cap L) \\ \text{or,}\; 2250 &= 1650 + 850 - n(B \cap L)\\ \text{or,}\; n(B \cap L) &= 2500 - 2250 \\ \therefore n(B \cap L) &= 250\; \text{Ans.} \end{align*}
(iii) \begin{align*} n_o(L) &= n(L) - n(B \cap L)\\ &= 850 - 250 \\ &= 600\; \text{Ans.} \end{align*}

Solution:
Let M and S denote the sets of students who passed in Mathematics and Science respectively, and let U be the universal set. From the question, $n(M) = 60, n(S) = 45, n(M \cap S) = 30$.
(i) \begin{align*} n_o(M) &= n(M) - n(M \cap S)\\ &= 60 - 30 \\ &= 30\; \text{Ans.} \end{align*}
(ii) \begin{align*} n_o(S) &= n(S) - n(M \cap S)\\ &= 45 - 30 \\ &= 15\; \text{Ans.} \end{align*}
(iii) \begin{align*} n(M \cup S) &= n(M) + n(S) - n(M \cap S)\\ &= 60 + 45 - 30 \\ &= 75\; \text{Ans.} \end{align*}
(iv) The above information is shown in the Venn diagram as follows.

Solution:
Let M and E denote the sets of students who like Maths and English respectively, and let U be the universal set. From the question, $n(U) = 55, n_o(M) = 15, n_o(E) = 18, n(\overline{M \cup E}) = 5$. Let $n(M \cap E) = x$. Now,
\begin{align*} n(M \cup E) &= n(U) - n(\overline{M \cup E})\\ &= 55 - 5 \\ &= 50 \end{align*}
The above information is shown in the Venn diagram as follows. From the Venn diagram,
\begin{align*} 15 + x + 18 &= 50\\ \text{or,}\; x + 33 &= 50\\ \text{or,}\; x &= 50 - 33 \\ &= 17 \end{align*}
$\therefore$ The number of students who like both subjects is 17.

Solution:
Let M and S denote the sets of students who like Mathematics and Science respectively, and let U be the universal set. From the question, $n(U) = 50 = n(M \cup S)$ and $n(M \cap S) = 20$. Let $n(M) = 3x$ and $n(S) = 2x$.
We know that
\begin{align*} n(M \cup S) &= n(M) + n(S) - n(M \cap S)\\ \text{or,}\; 50 &= 3x + 2x - 20 \\ \text{or,}\; 5x &= 50 + 20\\ \text{or,}\; x &= \frac{70}{5} = 14 \end{align*}
(i) The number of students who like Mathematics: $n(M) = 3x = 3 \times 14 = 42$ Ans.
(ii) The number of students who like Science: $n(S) = 2x = 2 \times 14 = 28$, so $n_o(S) = n(S) - n(M \cap S) = 28 - 20 = 8$ Ans.
(iii) The above information is shown in the Venn diagram as follows.

Solution:
Let M and B denote the sets of students who have taken Mathematics and Biology respectively, and let U be the universal set. From the question, $n(M \cup B) = 25 = n(U), n(M) = 12, n_o(M) = 8$.
Now,
\begin{align*} n(M \cap B) &= n(M) - n_o(M) \\ &= 12 - 8 \\ &= 4 \end{align*}
\begin{align*} n_o(B) &= n(M \cup B) - n(M) \\ &= 25 - 12 \\ &= 13 \end{align*}
The number of students who have taken both Maths and Biology is 4. The number of students who have taken Biology but not Maths is 13. The above information is shown in the Venn diagram as follows:
Solution:
Let M and S denote the sets of students who passed in Mathematics and Science respectively, and let U be the universal set. From the question, $n_o(M) = 40\%, n_o(S) = 30\%, n(\overline{M \cup S}) = 10\%, n(U) = 100\%$.
\begin{align*} n(M \cup S) &= n(U) - n(\overline{M \cup S}) \\ &= 100\% - 10\% \\ &= 90\% \end{align*}
(i) \begin{align*} n(M \cup S) &= n_o(M) + n_o(S) + n(M \cap S)\\ \text{or,}\; 90\% &= 40\% + 30\% + n(M \cap S)\\ \therefore n(M \cap S) &= 90\% - 70\% = 20\% \end{align*}
(ii) \begin{align*} n(M) &= n_o(M) + n(M \cap S)\\ &= 40\% + 20\% \\ &= 60\% \end{align*}
(iii) The above information is shown in the Venn diagram as follows.

Solution:
Let F and M denote the sets of people who liked folk and modern songs respectively, and let U be the universal set. From the question, $n(F) = 70\%, n(M) = 60\%, n(\overline{F \cup M}) = 10\%, n(F \cap M) = 4000, n(U) = 100\%$. The above information is represented in a Venn diagram as follows:
\begin{align*} n(F \cup M) &= n(U) - n(\overline{F \cup M}) \\ &= 100\% - 10\% \\ &= 90\% \end{align*}
From the Venn diagram, with $n(F \cap M) = x\%$,
\begin{align*} (70 - x)\% + x\% + (60 - x)\% &= 90\% \\ \text{or,}\; x &= (130 - 90)\% \\ \therefore x &= 40\% \end{align*}
Let the total number of people be y. Then
\begin{align*} 40\% \text{ of } y &= 4000 \\ \text{or,}\; y \times \frac{40}{100} &= 4000\\ \text{or,}\; y &= \frac{4000 \times 100}{40} \\ \therefore y &= 10000\; \text{Ans.} \end{align*}

Solution:
From the question, $n(U) = 30, n(A) = 20, n(B) = 10$.
$3x + y = n(A)$, so $3x + y = 20$ .............(1)
$x + y = n(B)$, so $x + y = 10$ ..............(2)
Subtracting equation (2) from equation (1):
\begin{align*} (3x + y) - (x + y) &= 20 - 10 \\ \text{or,}\; 2x &= 10 \\ \therefore x &= 5 \end{align*}
Putting the value of x in equation (1):
\begin{align*} 3 \times 5 + y &= 20 \\ \text{or,}\; y &= 20 - 15 \\ \therefore y &= 5 \end{align*}
(i) $n(A \cap B) = y = 5$ Ans.
(ii) \begin{align*} n(A \cup B) &= 3x + y + x \\ &= 15 + 5 + 5\\ &= 25\; \text{Ans.} \end{align*}
(iii) \begin{align*} n(\overline{A \cup B}) &= n(U) - n(A \cup B) \\ &= 30 - 25 \\ &= 5 \end{align*}

Solution:
Let F and M be the sets of people who liked folk and modern songs respectively, and let U be the universal set. From the question, $n(U) = 100\%, n(F) = 77\%, n(M) = 63\%, n(\overline{F \cup M}) = 5\%$, and $n(F \cap M) = 135$ people. Let $n(F \cap M) = x\%$.
We know that
\begin{align*} n(F \cup M) &= n(U) - n(\overline{F \cup M})\\ &= 100\% - 5\% \\ &= 95\% \\ \text{Again,}\; n(F \cup M) &= n(F) + n(M) - n(F \cap M)\\ \text{or,}\; 95\% &= 77\% + 63\% - x\%\\ \text{or,}\; x\% &= 140\% - 95\%\\ \therefore x &= 45\% \end{align*}
(i) From the question,
\begin{align*} 45\% \text{ of } n(U) &= 135\\ \text{or,}\; n(U) \times \frac{45}{100} &= 135\\ \text{or,}\; n(U) &= \frac{135 \times 100}{45}\\ \therefore n(U) &= 300\; \text{Ans.} \end{align*}
(ii) \begin{align*} n(F) &= 77\% \text{ of } 300\\ &= 300 \times \frac{77}{100}\\ &= 231\\ \text{Now,}\; n_o(F) &= n(F) - n(F \cap M)\\ &= 231 - 135 \\ &= 96\; \text{Ans.} \end{align*}
(iii) The above information is represented in a Venn diagram as follows:

Solution:
Let M and S denote the sets of students who liked Maths and Science respectively.
Let U be the universal set. From the question, $n(U) = 95$. Let $n(M) = 4x$ and $n(S) = 5x$, with $n(M \cap S) = 10$ and $n(\overline{M \cup S}) = 15$. The above information is shown in the Venn diagram.
\begin{align*} n(M \cup S) &= n(U) - n(\overline{M \cup S})\\ &= 95 - 15 \\ &= 80 \end{align*}
\begin{align*} n(M \cup S) &= n_o(M) + n_o(S) + n(M \cap S)\\ \text{or,}\; 80 &= (4x - 10) + 10 + (5x - 10)\\ \text{or,}\; 80 &= 9x - 10\\ \therefore x &= \frac{90}{9} = 10 \end{align*}
(a) The number of students who liked Mathematics only:
\begin{align*} 4x - 10 &= 4 \times 10 - 10 \\ &= 40 - 10 \\ &= 30\; \text{Ans.} \end{align*}
(b) The number of students who liked Science only:
\begin{align*} 5x - 10 &= 5 \times 10 - 10 \\ &= 50 - 10 \\ &= 40\; \text{Ans.} \end{align*}

Solution:
Let E, M and N represent the sets of students who passed in English, Mathematics and Nepali respectively, and let U be the universal set.
$n(U) = 200, \; n(E) = 70, \; n(M) = 80, \; n(N) = 60,$
$n(E \cap M) = 35, \; n(E \cap N) = 25, \; n(M \cap N) = 35, \; n(E \cap M \cap N) = 10$
(a) The above information is shown in the Venn diagram as follows.
(b) \begin{align*} n(E \cup M \cup N) &= n(E) + n(M) + n(N) - n(E \cap M) - n(M \cap N) - n(N \cap E) + n(E \cap M \cap N)\\ &= 70 + 80 + 60 - 35 - 35 - 25 + 10 \\ &= 125 \end{align*}
\begin{align*} n(\overline{E \cup M \cup N}) &= n(U) - n(E \cup M \cup N) \\ &= 200 - 125 \\ &= 75\; \text{Ans.} \end{align*}

Solution:
Let E, A and S denote the sets of students who failed in English, Accounts and Statistics respectively, and let U be the universal set. From the question, $n(U) = 100\%, n(E) = 58\%, n(A) = 39\%, n(S) = 25\%, n(E \cap A) = 32\%, n(A \cap S) = 17\%, n(E \cap S) = 19\%, n(E \cap A \cap S) = 13\%$.
(a) The above information is shown in the Venn diagram as follows.
\begin{align*} n(E \cup A \cup S) &= n(E) + n(A) + n(S) - n(E \cap A) - n(A \cap S) - n(S \cap E) + n(E \cap A \cap S)\\ &= (58 + 39 + 25 - 32 - 17 - 19 + 13)\% \\ &= 67\% \end{align*}
(b) \begin{align*} n(\overline{E \cup A \cup S}) &= n(U) - n(E \cup A \cup S)\\ &= 100\% - 67\% \\ &= 33\%\; \text{Ans.} \end{align*}
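As a final sanity check, the last worked example can be reproduced in a few lines of plain arithmetic (the percentages are those given in the problem):

```python
# Check the three-set inclusion-exclusion from the last worked example.
n_E, n_A, n_S = 58, 39, 25       # % failed in each subject
n_EA, n_AS, n_ES = 32, 17, 19    # % failed in each pair of subjects
n_EAS = 13                       # % failed in all three subjects

union = n_E + n_A + n_S - n_EA - n_AS - n_ES + n_EAS
print("failed in at least one subject:", union, "%")  # 67 %
print("passed in all subjects:", 100 - union, "%")    # 33 %
```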