big integers
adrin wrote:
> i have to write a simple class to represent big integer numbers and
> implement basic arithmetical operations such as + - * / modulo, etc.
> all will be written in c++ under linux OS on x86 architecture (however
> that's not that important)
> could you tell me what is an idea behind such construction?
The idea is to store numbers that your architecture doesn't allow you to
store in the normal way.
> i think about creating a dynamic array in which single digits in base 255
> will be stored (one byte). it is a common solution when dealing with
> BigIntegers, isn't it?
To know about what's common, you need to set the sampling parameters.
> adding would be implemented by adding digits, bit by bit,
Why bit by bit? Couldn't you simply add two 255-base digits using normal
[unsigned] octet rules? Especially if your platform has 8-bit bytes, it
would be really simple, no?
> but how can i
> solve the problem with a carry?
You keep the carry flag and add it to the next pair. Just like any other
long addition/subtraction/multiplication/division...
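In Python, that long-addition loop is only a few lines. The sketch below is illustrative (not code from the thread) and stores digits little-endian, one byte per digit, so effectively base 256:

```python
def big_add(a, b):
    """Add two big integers stored as little-endian lists of base-256 digits."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(s & 0xFF)  # low byte is the digit at this position
        carry = s >> 8           # anything above one byte carries over
    if carry:
        result.append(carry)
    return result

print(big_add([255], [1]))  # 255 + 1 = 256, i.e. digits [0, 1]
```

No assembly needed: the carry is just the high bits of an ordinary integer sum.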
> maybe writing accessing carry flag using
> asm code is a good solution? is there a better way?
You don't need asm to write C++. If I were your instructor, I would
fail you for using asm, actually.
> and what about multiplication? is it made by adding the number repeatedly?
Well, sort of. For any two multi-digit numbers A and B, you can define the
pairs of single-digit multiplications which need to be added to each other.
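Those pairs of single-digit multiplications are schoolbook long multiplication: the product of digit i of A and digit j of B contributes at position i + j, with carries propagated as in addition. A Python sketch over little-endian base-256 digit lists (illustrative only, not code from the thread):

```python
def big_mul(a, b):
    """Schoolbook multiplication of little-endian base-256 digit lists."""
    result = [0] * (len(a) + len(b))  # product has at most len(a)+len(b) digits
    for i, da in enumerate(a):
        carry = 0
        for j, db in enumerate(b):
            s = result[i + j] + da * db + carry
            result[i + j] = s & 0xFF
            carry = s >> 8
        result[i + len(b)] += carry  # final carry of this row
    while len(result) > 1 and result[-1] == 0:  # strip leading zero digits
        result.pop()
    return result

print(big_mul([255], [255]))  # 255 * 255 = 65025, i.e. digits [1, 254]
```

This is O(n^2) in the number of digits, which is fine for a class exercise; faster schemes (Karatsuba, FFT-based) exist but are not needed here.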
> maybe you have a relevant tutorial or howto
And come back when you have a C++ language question.
Old Greenwich Statistics Tutor
Find an Old Greenwich Statistics Tutor
...I work with students to develop a custom study plan that attacks their weaknesses and enhances their strengths. Students that hone these techniques over considerable practice have had great
success! I developed techniques that helped a recent student raise his score from the low 20s to 30!
34 Subjects: including statistics, calculus, writing, GRE
...True, but you need to learn how to *solve* problems first, *before* you can do it fast. The first time I looked at an SAT test, I did it (perfectly) in just over half the time--but I had never
trained for speed. It may sound counter-intuitive, but speed comes naturally to those who (1) are good at solving problems and (2) have a complete command of the subject matter.
27 Subjects: including statistics, calculus, physics, geometry
...As you can see from my profile I had learning issues growing up. I was able to find my way around those and have great sympathy for students and their parents who are dealing with learning
problems. I have tutored several students with learning issues and am currently working with two.
24 Subjects: including statistics, reading, English, biology
...I have a long background in Finance, mathematics and statistics, including Fellowship status as a Chartered Certified Accountant. I qualified with KPMG, then worked as an Audit Manager in the
Caribbean. During my time as a mathematics and test preparation tutor, I have worked extensively with ADD students of all ages.
55 Subjects: including statistics, English, reading, writing
...I am an engineering graduate from Cornell University with a Masters in Math Education. I have been a math teacher for the past six years, teaching a wide range of courses from basic math to
more advanced levels. I developed the curricula for and taught college level courses, including Advanced Placement (AP) Statistics, AP Microeconomics and AP Macroeconomics.
9 Subjects: including statistics, geometry, algebra 2, SAT math
Madrid duo fire up quantum contender to Google search
Comparison of the hierarchies that result from the Classical and the Quantum PageRank. For more details, please see the paper at arXiv:1112.2079v1 [quant-ph]
(PhysOrg.com) -- Two Madrid scientists from The Complutense University think they have an algorithm that may impact the nature of the world's leading search engine. In essence, they are saying Hey,
world, Google This. "We have found an instance of this class of quantum protocols that outperforms its classical counterpart and may break the classical hierarchy of web pages depending on the
topology of the web," say the researchers.
Google's PageRank algorithm represents the idea that the importance of a web page is measured by the number of important pages that point towards it. PageRank not only measures a web
page's popularity by how many sites link to it, but also the authority of the sites linking to the page.
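For context, the classical PageRank the article refers to can be computed by simple power iteration on the link graph. The sketch below is only illustrative; the damping factor of 0.85 is the conventional textbook value, not one taken from the paper:

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=100):
    """Power-iteration PageRank. adj[i, j] = 1 means page j links to page i."""
    n = adj.shape[0]
    out = adj.sum(axis=0)          # out-degree of each page
    out[out == 0] = 1              # avoid dividing by zero for dangling pages
    m = adj / out                  # column-stochastic link matrix
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * (m @ rank)
    return rank

# Tiny 3-page web: pages 0 and 1 link to page 2; page 2 links back to page 0.
adj = np.array([[0, 0, 1],
                [0, 0, 0],
                [1, 1, 0]], dtype=float)
print(pagerank(adj))  # page 2 ranks highest; page 1, with no in-links, lowest
```

The quantum version discussed in the paper replaces the classical random walk underlying this iteration with a quantum walk on the same graph.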
Giuseppe Paparo and Miguel Martín-Delgado at The Complutense University in Madrid are taking the Google approach a step further. They have revealed a quantum version of the algorithm, and they have
presented their findings in a paper dated December 9, Google in a Quantum Network.
The distinguishing feature is speed. Quantum algorithms produce results extremely rapidly, reports note, faster than a so-called classical algorithm.
In their research using a tree graph, the quantum algorithm outperformed the classical algorithm in ranking the root page. They achieved similar results using a directed graph. The quantum algorithm
identified the highest ranking page faster than a classical algorithm.
In quantum networks, information is routed as quantum bits, or qubits, rather than classical bits.
Technologists familiar with quantum computing believe that the classical web will be replaced, or enhanced, by a network of quantum nodes. Nonetheless, the recent research can be seen as an early step in that direction.
The possibility of establishing a quantum network of practical use is still under active investigation, say the authors, and they acknowledge they are not the first to explore these waters. Some early
versions, modest as they may be, have been designed and realized in the real world in recent years, or in some instances are under way.
The authors say that what they are introducing is not something opposed to Google's approach, but rather a class of quantum PageRank algorithms for a scenario in which some kind of quantum network is
realizable out of the current classical internet web, but no quantum computer is yet available.
They say their work is a quantization of the PageRank protocol that is used to list web pages according to their importance.
We have found an instance of this class of quantum protocols that outperforms its classical counterpart and may break the classical hierarchy of web pages depending on the topology of the web.
The authors also recognize their study has limitations and that more research is needed. Their ultimate goal is devising a quantum algorithm that can overcome these difficulties, but their explorations were
only on small networks. They said it would be interesting to perform computations with the quantum PageRank applied to very large networks with the properties exhibited by the complex structure of
the real web.
More information: Google in a Quantum Network, arXiv:1112.2079v1 [quant-ph] arxiv.org/abs/1112.2079
We introduce the characterization of a class of quantum PageRank algorithms in a scenario in which some kind of quantum network is realizable out of the current classical internet web, but no quantum
computer is yet available. This class represents a quantization of the PageRank protocol currently employed to list web pages according to their importance. We have found an instance of this class of
quantum protocols that outperforms its classical counterpart and may break the classical hierarchy of web pages depending on the topology of the web.
via Arxiv Blog
4 / 5 (8) Dec 14, 2011
Every time they change the damn search engine I have to learn how to search again!!!
Anyone prefer google the way it was 5 years ago? It was a little quirky to learn how to search but once you got it down you could find ANYTHING you wanted.
Now its like if im searching the same keywords or similar keywords over and over its not because i want the SAME EXACT results as last time right? If im searching for information I want whats out
there not what google THINKS I want and usually not the most "popular" pages.......
4.2 / 5 (5) Dec 14, 2011
I actually agree.
The "most popular" pages are often not even relevant, or full of bad information.
I often find myself having to manually pore over stuff on Google now.
And you are right, 5 years ago that wasn't the case.
not rated yet Dec 14, 2011
Then just use bing... oh wait bing copies google results.
yahoo? wait, thats just bing...
2 / 5 (3) Dec 14, 2011
Google should let people choose. Some people actually know what they are looking for, some others even know about boolean operators, and of course then there's this majority who can't even spell.
5 / 5 (3) Dec 14, 2011
you can choose. Go into the advanced search. You can even do a "tool w/p rawa1" and probably find a few posts down here.
:-) sorry rawa... kinda.
although with the general mood on this site you could probably search for any poster "w/p idiot" and find some hits.
5 / 5 (3) Dec 14, 2011
I actually agree.
The "most popular" pages are often not even relevant, or full of bad information.
I often find myself having to manually pore over stuff on Google now.
And you are right, 5 years ago that wasn't the case.
I don't suppose this could be that the size of the web has grown a little in the last 5 years? The article is talking about a faster implementation of the Google algorithm, not a new one. In other
words, you'll still be inundated with noise, it will just happen faster.
----> just what I need. sheesh
not rated yet Dec 14, 2011
If you hit the back button from any google search link, google will give you the option to block that site from all searches you do in the future. But you may need to be signed in to a google account
for that to work, not sure if it will work with just cookies.
so if you really hate techcrunch for instance, you can never see their content in a google search again. Or more likely you will ban stupid spam sites.
Anyone not have a google account that can attest to that feature working or not?
3 / 5 (2) Dec 14, 2011
Anyone prefer google the way it was 5 years ago? It was a little quirky to learn how to search but once you got it down you could find ANYTHING you wanted.
I doubt that the actual search engine has changed much at all (just minor tweaks). The most visible changes have been to the UI, and with that, I have to agree that the changes have not been for the better.
They keep changing the left column options etc, live search (hate it, but at least you can turn it off), but my biggest dislike is the image search where you get a messy mosaic of variously sized
thumbnails which you have to hover over to see any details like pic size and url - argh).
Also, was this quantum algorithm run on a classical computer?
4.2 / 5 (5) Dec 14, 2011
Great I always wanted to shop online in parallel universes. But how will they calculate the shipping?
not rated yet Dec 15, 2011
Also, was this quantum algorithm run on a classical computer?
No, but if one existed, they'd like to, according to the first sentence of the abstract at the provided link:
"We introduce the characterization of a class of quantum PageRank algorithms in a scenario in which some kind of quantum network is realizable out of the current classical internet web, but no
quantum computer is yet available."
Don't you feel better just knowing that the algorithm class is characterized?
not rated yet Dec 15, 2011
Great I always wanted to shop online in parallel universes. But how will they calculate the shipping?
Lol this one made me smile. And yep five years ago was better for the stated reasons. Just as well we are decades from a home Quantum computer i guess.
not rated yet Dec 15, 2011
Don't you feel better just knowing that the algorithm class is characterized?
You should. Much like the algorithms for large number factorization via quantum computing have already been characterized.
It never hurts to know what you can use something for before it's been built. Makes investors more likely to fund the research rather than just having the funding 100% tax-based
5 / 5 (1) Dec 18, 2011
Every time they change the damn search engine I have to learn how to search again!!!
Anyone prefer google the way it was 5 years ago? It was a little quirky to learn how to search but once you got it down you could find ANYTHING you wanted.
Now its like if im searching the same keywords or similar keywords over and over its not because i want the SAME EXACT results as last time right? If im searching for information I want whats out
there not what google THINKS I want and usually not the most "popular" pages.......
Hear, hear!
I miss being able to search for the following:
And it would only find instances of either "wordphrase", "word-phrase", or "word phrase".
They changed it not long ago and removed that feature, diluting all the search results to anything where "word" and "phrase" were simply near each other. I loved that old feature and hate the new one.
...colors make the world beautiful...
Flying colours
This is my latest appearance today after a long time absent from this blog. I'd like to write about metamorphosis.
According to Wikipedia, the free encyclopedia, metamorphosis is a biological process by which an animal physically develops after birth or hatching, involving a conspicuous and relatively abrupt
change in the animal's form or structure through cell growth and differentiation.
Biology question:
What is metamorphosis?
When an animal changes form, like when a tadpole changes into a frog, or a caterpillar into a butterfly.
A dragonfly in its final moult, undergoing metamorphosis from its nymph form.
Stages in the metamorphosis of a frog
Pictures courtesy of Wikipedia
I don't know why I chose this subject to write about today. However, hi again to my beloved readers.
Tu sais je t'aime.
What do they do?
In comics, a colorist is responsible for adding color to black
and white line art. Originally, this was done by cutting out
films of various densities in the appropriate shapes to be
used in producing color-separated printing plates. More
recently, colorists have worked in transparent media such as
watercolors or airbrush, which is then photographed,
allowing more subtle and painterly effects.
In fashion, a colorist is a hairstylist who specializes
in coloring hair.
Who needs colors ?
A color scheme, webmasters need this.
A color wheel, showing tertiary colors
What is flying colors?
Wait for my next post, Insya Allah (God willing).
Which part of the body that you colored it?
Finger nails?
Actually, you need not color any part of your body. Let's say that if you're born in
the South East Asian or Middle East region, usually your hair would be black. But if
you're born in Europe or America, you may have black, red, blonde or other colors
of hair. It's what Allah bestows upon us. Unless there is something very important:
let's say someone forces you to color your hair red, and if you don't color
your hair red, he will kill you. But that sounds funny, doesn't it?
But that's my opinion.
In the Sun (Malaysia) tabloid today
IGP: Police must remain colour blind when handling crime.
What is COLOUR BLIND ?
Color blindness, or color vision deficiency, in humans is the inability
to perceive differences between some or all colors that other
people can distinguish. It is most often of genetic nature,
but may also occur because of eye, nerve, or brain damage .....
But what the IGP means is...
Police cannot be moved by ethnic or any other bias, and have to remain neutral in enforcing law and order.
WHAT ARE THEY?
Plasma fractals are perhaps the most useful fractals there are. Unlike most other fractals, they have a random element in them, which gives them Brownian self-similarity. To create a plasma fractal
on a rectangular piece of the plane, you do the following:
1. Randomly pick values for the corners of the rectangle.
2. Calculate the value for the center of the rectangle by taking the average of the corners and adding a random number multiplied by a preset roughness parameter.
3. Calculate the midpoints of the rectangle's sides by taking averages of the two nearest corners and adding a random number multiplied by the roughness parameter.
4. You now have four smaller rectangles. Do steps 2-4 for each of them.
Here is the basic algorithm of drawing plasmas:
Constants used:
number_of_colors: number of colors available in the computer language you're using
size: size of the fractal; must be a power of 2
roughness: higher values make the fractal more fragmented
DIM array(size, size)  ' allocate the grid of color values
' randomly choose the rectangle's corners
array(0, 0) = RND * number_of_colors
array(size, 0) = RND * number_of_colors
array(0, size) = RND * number_of_colors
array(size, size) = RND * number_of_colors
' go through the array, decreasing the interval size every time
FOR p = LOG(size) / LOG(2) TO 0 STEP -1
FOR x = 0 TO size STEP 2 ^ p
FOR y = 0 TO size STEP 2 ^ p
IF x MOD 2 ^ (p + 1) = 0 AND y MOD 2 ^ (p + 1) = 0 GOTO nxt
IF x MOD 2 ^ (p + 1) = 0 THEN
average = (array(x, y + 2 ^ p) + array(x, y - 2 ^ p)) / 2
color = average + roughness * (RND - .5)
array(x, y) = color: GOTO nxt
END IF
IF y MOD 2 ^ (p + 1) = 0 THEN
average = (array(x + 2 ^ p, y) + array(x - 2 ^ p, y)) / 2
color = average + roughness * (RND - .5)
array(x, y) = color: GOTO nxt
END IF
IF x MOD 2 ^ (p + 1) > 0 AND y MOD 2 ^ (p + 1) > 0 THEN
v1 = array(x + 2 ^ p, y + 2 ^ p)
v2 = array(x + 2 ^ p, y - 2 ^ p)
v3 = array(x - 2 ^ p, y + 2 ^ p)
v4 = array(x - 2 ^ p, y - 2 ^ p)
average = (v1 + v2 + v3 + v4) / 4
color = average + roughness * (RND - .5)
array(x, y) = color: GOTO nxt
END IF
nxt:
NEXT y
NEXT x
NEXT p
' go through the array, plotting the points
FOR x = 0 TO size
FOR y = 0 TO size
PSET (x, y), array(x, y)
NEXT y
NEXT x
I don't know what it is.
Are you really sensitive to colors around you? Are you really sensitive, like a thermometer?
Or are you just like a clock, never sensitive to changes around you?
OK. Let's test your sensitivity to colors.
Try to count how many BLACK dots are there in the picture below.
All the best !
If you get the answer --- how many BLACK dots are there---Please,please,please leave it in the Comment
Thank you
We always hear beautiful songs. Songs are as beautiful as colors. Just take a look at the titles of songs below:-
Black Magic Woman - Santana
Yellow River - The Beatles
Green, Green Grass Of Home - Elvis Presley
on and on.
Just look at the lyrics below:-
Love Is Blue
-Artist: Andy Williams
-Instrumental version by Paul Mauriat was the # 12 song of the 1960-1969 rock
era and was # 1 for 5 weeks in 1968.
-Vocals that same year were charted by Al Martino (# 57) and Andy's wife,
-Claudine Longet (# 71).
--Music by Andre Popp. Lyrics by Pierre Cour. (French) Brian Blackburn (English)
Blue, blue, my world is blue
Blue is my world now I'm without you
Gray, gray, my life is gray
Cold is my heart since you went away
Red, red, my eyes are red
Crying for you alone in my bed
Green, green, my jealous heart
I doubted you and now we're apart
When we met how the bright sun shone
Then love died, now the rainbow is gone
Black, black, the nights I've known
Longing for you so lost and alone
Transcribed by Ronald E. Hontz
Roses Are Red (My Love)
Bobby Vinton
Roses are red, my love ... doo doo da doooo ...
A long, long time ago
On graduation day
You handed me your book
I signed this way:
"Roses are red, my love.
Violets are blue.
Sugar is sweet, my love.
But not as sweet as you."
Talking about red roses, I would like to share what's described below:
An image taken by the Hubble telescope of the "Cat's Eye Nebula",
an exploding star 3,000 light years away, the image was published
by NASA on Oct. 31 1999. original image at
About 1400 years ago,
Allah Almighty said: "And when the heaven splitteth asunder and becometh
ROSY LIKE RED HIDE - (The Noble Quran, 55:37)"
Have a nice day!
Why eat vegetables?
Because health experts say they contain fibre.
Then there are vegetarians.
Who are these people?
Are they aliens? Nope!
There are several categories of vegetarians, all of whom avoid meat and/or animal products.
The vegan or total vegetarian diet includes only foods from plants: fruits,
vegetables, legumes (dried beans and peas), grains, seeds, and nuts.
The lactovegetarian diet includes plant foods plus cheese and other
dairy products. The ovo-lactovegetarian
(or lacto-ovovegetarian) diet also includes eggs.
So let's look at their types of food and observe their COLORS.
Can colors increase your appetite ?
Yes, being HUNGRY can increase your appetite.
, 1997
Cited by 43 (8 self)
In recent years, there has been a considerable amount of work on using continuous domains in real analysis. Most notably are the development of the generalized Riemann integral with applications in
fractal geometry, several extensions of the programming language PCF with a real number data type, and a framework and an implementation of a package for exact real number arithmetic. Based on
recursion theory we present here a precise and direct formulation of effective representation of real numbers by continuous domains, which is equivalent to the representation of real numbers by
algebraic domains as in the work of Stoltenberg-Hansen and Tucker. We use basic ingredients of an effective theory of continuous domains to spell out notions of computability for the reals and for
functions on the real line. We prove directly that our approach is equivalent to the established Turing-machine based approach which dates back to Grzegorczyk and Lacombe, is used by Pour-El &
Richards in their found...
- Notices of the AMS
Cited by 32 (3 self)
We give a detailed treatment of the “bit-model ” of computability and complexity of real functions and subsets of R n, and argue that this is a good way to formalize many problems of scientific
computation. In Section 1 we also discuss the alternative Blum-Shub-Smale model. In the final section we discuss the issue of whether physical systems could defeat the Church-Turing Thesis. 1
- Comm. ACM , 1993
Cited by 25 (2 self)
We describe computational science as an interdisciplinary approach to doing science on computers. Our purpose is to introduce computational science as a legitimate interest of computer scientists. We
present a foundation for computational science based on the need to incorporate computation at the scientific level; i.e., computational aspects must be considered when a model is formulated. We next
present some obstacles to computer scientists' participation in computational science, including a cultural bias in computer science that inhibits participation. Finally, we look at some areas of
conventional computer science and indicate areas of mutual interest between computational science and computer science. Keywords: education, computational science. 1 What is Computational Science ?
In December, 1991, the U. S. Congress passed the High Performance Computing and Communications Act, commonly known as the HPCC . This act focuses on several aspects of computing technology, but two
- Trans. Amer. Math. Soc
Cited by 14 (13 self)
Abstract. Let (α, β) ⊆ R denote the maximal interval of existence of solution for the initial-value problem { dx = f(t, x) dt x(t0) = x0, where E is an open subset of R m+1, f is continuous in E and
(t0, x0) ∈ E. We show that, under the natural definition of computability from the point of view of applications, there exist initial-value problems with computable f and (t0, x0) whose maximal
interval of existence (α, β) is noncomputable. The fact that f may be taken to be analytic shows that this is not a lack of regularity phenomenon. Moreover, we get upper bounds for the “degree of
noncomputability” by showing that (α, β) is r.e. (recursively enumerable) open under very mild hypotheses. We also show that the problem of determining whether the maximal interval is bounded or
unbounded is in general undecidable. 1.
- In Logic Colloquium 2000 , 2005
Cited by 12 (5 self)
We discuss the conceptual problem of identifying the natural notions of computability at higher types (over the natural numbers). We argue for an eclectic approach, in which one considers a wide
range of possible approaches to defining higher type computability and then looks for regularities. As a first step in this programme, we give an extended survey of the di#erent strands of research
on higher type computability to date, bringing together material from recursion theory, constructive logic and computer science. The paper thus serves as a reasonably complete overview of the
literature on higher type computability. Two sequel papers will be devoted to developing a more systematic account of the material reviewed here.
, 1997
Cited by 11 (6 self)
We use the category of presheaves over PTIME-functions in order to show that Cook and Urquhart's higher-order function algebra PV ! defines exactly the PTIME-functions. As a byproduct we obtain a
syntax-free generalisation of PTIME-computability to higher types. By restricting to sheaves for a suitable topology we obtain a model for intuitionistic predicate logic with \Sigma b 1 -induction
over PV ! and use this to reestablish that the provably total functions in this system are in polynomial time computable. Finally, we apply the category-theoretic approach to a new higher-order
extension of Bellantoni-Cook's system BC of safe recursion. 1 Introduction Cook and Urquhart's system PV ! [3] is a simply-typed lambda calculus providing constants to denote natural numbers and an
operator for bounded recursion on notation like in Cobham's characterisation of polynomial-time computability. 1 Although functionals of arbitrary type can be defined in this system one can show that
, 2009
Cited by 6 (6 self)
Let S be the domain of attraction of a computable and asymptotically stable hyperbolic equilibrium point of the non-linear system ˙x = f(x). We show that the problem of determining S is
computationally unsolvable. We also present an upper bound of the degree of unsolvability of this problem.
, 1999
: In the present work the notion of the computable (primitive recursive, polynomially time computable) p--adic number is introduced and studied. Basic properties of these numbers and the set of
indices representing them are established and it is proved that the above defined fields are p--adically closed. Using the notion of a notation system introduced by Y. Moschovakis an abstract
characterization of the indices representing the field of computable p--adic numbers is established. Keywords: Computable numbers, computable (primitive recursive, polynomially time computable
p--adic numbers, p--adically closed fields, notation systems. 1 Introduction The present work brings together two ideas, namely type two computability, and p--adic fields. We start with a brief
review of the notion a computable number and of a p--adic field as a background. The basic idea for computable real numbers is contained in Turing's fundamental paper [20], where he introduced the
notion of a computable (or...
[Numpy-discussion] Vector magnitude?
Zachary Pincus zpincus@stanford....
Wed Sep 5 13:49:45 CDT 2007
'len' is a (pretty basic) python builtin function for getting the
length of anything with a list-like interface. (Or more generally,
getting the size of anything that is sized, e.g. a set or dictionary.)
Numpy arrays offer a list-like interface allowing you to iterate
along their first dimension, etc. (*) Thus, len(numpy_array) is
equivalent to numpy_array.shape[0], which is the number of elements
along the first dimension of the array.
(*) For example, this is useful if you've got numerous data vectors
packed into an array along the first dimension, and want to iterate
across the different vectors.
a = numpy.ones((number_of_data_elements, size_of_data_element))
for element in a:
    # element is a 1-D array with a length of 'size_of_data_element'
Note further that this works even if your data elements are multi-
dimensional; i.e. the above works the same if:
element_shape = (x,y,z)
a = numpy.ones((number_of_data_elements,)+element_shape)
for element in a:
    # element is a 3-D array with a shape of (x,y,z)
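The two questions in the thread are easy to conflate. A minimal sketch of the distinction, assuming the original poster wanted the Euclidean magnitude (which numpy.linalg.norm gives, and len() does not):

```python
import numpy as np

v = np.array([3.0, 4.0])

print(len(v))             # 2   -- count of entries along the first dimension
print(np.linalg.norm(v))  # 5.0 -- Euclidean magnitude (length) of the vector
```

So len(a) answers "how many elements", while the norm answers "how long is the vector".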
On Sep 5, 2007, at 2:40 PM, Robert Dailey wrote:
> Thanks for your response.
> I was not able to find len() in the numpy documentation at the
> following link:
> http://www.scipy.org/doc/numpy_api_docs/namespace_index.html
> Perhaps I'm looking in the wrong location?
> On 9/5/07, Matthieu Brucher <matthieu.brucher@gmail.com > wrote:
> 2007/9/5, Robert Dailey < rcdailey@gmail.com>: Hi,
> I have two questions:
> 1) Is there any way in numpy to represent vectors? Currently I'm
> using 'array' for vectors.
> A vector is an array with one dimension, it's OK. You could use a
> matrix of dimension 1xn or nx1 as well.
> 2) Is there a way to calculate the magnitude (length) of a vector
> in numpy?
> Yes, len(a)
> Matthieu
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
More information about the Numpy-discussion mailing list
Allston Prealgebra Tutor
Find an Allston Prealgebra Tutor
...I also periodically helped some of the other people in the class. I found the material very intuitive and still remember almost all of it. I've also performed very well in several math
competitions in which the problems were primarily of a combinatorial/discrete variety.
14 Subjects: including prealgebra, calculus, geometry, GRE
...Smart studying: Involving multiple senses in memorization work; optimizing the study environment (location, timing, stimulation, etc.); breaking material up and focusing on smaller, more
manageable pieces at a time. All of this is helped by heavy doses of encouragement; by identifying, tracking,...
26 Subjects: including prealgebra, English, reading, ESL/ESOL
I have three years of experience tutoring in physics, math, biology, and chemistry. I have worked mostly with college level introductory courses and high school students. I have worked with many
students of different academic levels from elementary to college students.
19 Subjects: including prealgebra, Spanish, chemistry, calculus
...I have been a semi-professional poker player for the last 4 years playing No Limit Texas Hold'Em; people enjoy my poker lessons. I used to play online at Full Tilt and Pokerstars.com. I play
every week at a casino where I am a gold card carrying member.
67 Subjects: including prealgebra, reading, English, geometry
...I've tutored nearly all the students I've worked with for many years, and I've also frequently tutored their brothers and sisters - also for many years. I enjoy helping my students to
understand and realize that they can not only do the work - they can do it well and they can understand what they're doing. My references will gladly provide details about their own experiences.
11 Subjects: including prealgebra, geometry, algebra 1, precalculus
Vega stocks: a case study
In this back-to-basics series for beginning options traders, TradeKing's Brian Overby explains vega, the Greek that measures how implied volatility changes impact your options position. Today's post
explores the factors contributing to large vega, using a case study of GOOG and IBM.
Vega measures an option's sensitivity to swings in implied volatility (IV). Watching vega can give you crucial clues as to why your option's price is changing and where it may move next. Vega is just one of several options "Greeks", which help you measure the impact of variables on the price of your options – like interest rates, time passing, and so on – that will likely affect the price of your options contract. (While implied volatility and vega represent the consensus of the marketplace as to how a theoretical option's price will change, there is no guarantee that this forecast will be correct.)

So far we've introduced delta and explained how delta is dynamic. We then discussed the push-vs-pull relationship between those two, and calculating position theta.
A little trivia first about Vega, before we dive into its use. Vega is not actually a Greek letter -- but since it begins with "V" and measures changes in volatility, the name stuck as a useful
mnemonic. Some of the actual Greek letters that are frequently used for changes in volatility are Omega , Kappa, or Tau. As it happens, vega is actually most commonly known as the brightest star in
the constellation Lyra.
But enough trivia! In my last post we reviewed the usual characteristics of options with large Vega. Here's a quick summary of those characteristics:
Options with Large Vega:
1) at-the-money vs in or out-of-the-money
2) longer-term vs near-term
3) expensive underlyings
4) large implied volatility
To give you a more concrete feeling for what this all means, let's look at two very different stocks in regard to price, Google (GOOG) and IBM (IBM). (These stocks and symbols are for educational and illustrative purposes only and do not imply a recommendation or solicitation to buy or sell a particular security.) Google definitely has all the hallmarks of a high-vega stock: it's an expensive stock (trading around $571) with an ATM implied volatility of 25%.
First, let's compare the ATM strike to the in- and out-of-the-money strikes. True to our rules of thumb above, the vega is larger for the 570 strike: 0.66, or 66 cents. This means if the implied
volatility of this option moves one percentage point up or down, the option value in theory will either increase or decrease by 66 cents accordingly.
You'll notice the vega on the 600 OTM strike is smaller (0.56, or 56 cents), but it represents a larger percentage of the option's premium. We're talking 56 cents on a $7.60 option (7.3%) vs 66 cents
on a $19.10 option (3.4%).
Now let's turn to the IBM call:
IBM stock is trading at a much lower per-share price than Google - around $125 - and with a little lower ATM implied volatility of 17%. The Vega for the ATM Strike (125) is 0.16 or 16 cents - which
is smaller than Google's 570 ATM call (vega 0.66).
At the same time, on a percentage basis 16 cents is still a major factor in the price; 16 cents on a $3.10 option equates to 5% of the option price. When looking at vega on a percentage of price
basis it shows why I think vega is the Rodney Dangerfield of the Greeks: it just doesn't get the respect it deserves.
If the IV of the option contract moves just one point in the wrong direction, this option will lose 5% of its value.
This situation may cause the most annoying outcome for ATM option buyers: sometimes you're right about the direction, but you still lose on the trade because of an implied volatility crunch.
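Where does a number like 0.66 come from? A rough sketch using the standard Black-Scholes vega formula; the strike, expiry, rate, and IV below are illustrative assumptions chosen to resemble the GOOG example, not TradeKing's actual model inputs:

```python
import math

def bs_vega(S, K, T, r, sigma):
    """Black-Scholes vega: sensitivity of option value to volatility (per 1.00 of sigma)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    pdf = math.exp(-0.5 * d1 ** 2) / math.sqrt(2.0 * math.pi)  # standard normal density
    return S * pdf * math.sqrt(T)

# Divide by 100 to get the move per one percentage point of implied volatility.
per_point = bs_vega(S=571, K=570, T=30 / 365, r=0.01, sigma=0.25) / 100
print(round(per_point, 2))  # roughly 0.65 -- the same ballpark as the quoted 0.66
```

Note how the S * sqrt(T) factor in the formula explains two of the rules of thumb above: expensive underlyings and longer-dated options carry larger vega.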
If you're buying an option, and you notice your susceptibility to vega looks high (for example, due to major news events), one way to counteract this is to buy a spread (buy one option while selling
another). This way, if a drop in implied volatility is hurting the option you bought, it should be helping the option that you sold -- helping curtail the effects of the implied volatility.
(If you're not familiar with spreads, check out long call spreads for more info, including max potential losses and profits on the many types of spreads. Spreads are multiple-leg options strategies involving additional risks and multiple commissions and may result in complex tax treatments. Consult with your tax advisor as to how taxes may affect the outcome of these strategies.)
We'll talk more about vega, spread trading and the importance of checking position vega in my next post. Stay tuned!
Brian Overby
TradeKing's Options Guy
[Fabulous Ash Vegas on Flickr]
Options involve risk and are not suitable for all investors. Please read Characteristics and Risks of Standardized Options available at http://www.tradeking.com/ODD.
Content, research, tools, and stock or option symbols are for educational and illustrative purposes only and do not imply a recommendation or solicitation to buy or sell a particular security or to
engage in any particular investment strategy. The projections or other information regarding the likelihood of various investment outcomes are hypothetical in nature, are not guaranteed for accuracy
or completeness, do not reflect actual investment results and are not guarantees of future results.
Even though the Greeks represent the consensus of the marketplace as to how the option will react to changes in certain variables associated with the pricing of an option contract, there is no guarantee that these forecasts will be correct.
Supporting documentation for any claims made in this post will be supplied upon request. Send a private message to All-Stars using the link below the profile image.
TradeKing provides self-directed investors with discount brokerage services, and does not make recommendations or offer investment, financial, legal or tax advice.
(c) TradeKing, Member
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Computation of the matrix exponential
Replies: 6 Last Post: Feb 10, 2013 5:00 AM
Messages: [ Previous | Next ]
Re: Computation of the matrix exponential
Posted: May 24, 2010 10:53 PM
On May 24, 8:37 pm, luca <luca.frammol...@gmail.com> wrote:
> Hi,
> i have the following problem: given a 3x3 real matrix, compute exp(A).
> I need a really fast way to do this. I have searched a bit with
> google, but it seems to me that
> computing the matrix exponential is not so simple, at least if your
> matrix does not have a special
> structure (for example A=diagonal matrix).
> I have found a simple method that use the diagonalization of A. If A
> has 3 distinct eigenvalues, than compute
> A=PDP^-1, where P is the matrix of the eigenvectors, D is a diagonal
> matrix (whose diagonal elements are
> the eigenvalues of A). Than, exp(A) = P exp(D) P^-1. Since P^-1 is
> fast enough and exp(D) is simple
> to compute, this should be a fast method.
> But, the problem is: i am not sure that the matrix A will always have
> 3 distinct eigenvalues...what happens
> if this does not happen? Can i use that formula even if 2 (or all
> three) eigenvalues are equal?
> Are there any other ways to compute exp(A) in a fast way?
> Thank you,
> Luca
If you have time to do the diagonalization
(by similarity), then of course that is a
way to get the matrix exponential.
If a matrix is not diagonalizable, then it
can be written in Jordan canonical form,
i.e. diagonal matrix plus some nilpotent
blocks corresponding to eigenvalues with
algebraic multiplicity greater than their
geometric multiplicity.
The application of power series to such
diagonal + nilpotent terms gives (esp.
in the 3x3 case) just a slight modification
to what you were already looking at for the
exponential of a purely diagonal matrix.
Let's consider an illustrative case, where
a eigenvalue r has geometric multiplicity
1 but algebraic multiplicity 3. Instead of
being diagonalizable, such a matrix is then
similar to J =
[ r 1 0 ]
[ 0 r 1 ]
[ 0 0 r ]
Since rI and the nilpotent part N of this
commute, exp(J) = exp(r)I * exp(N). The
nice thing about the power series exp(N)
is that it has only a (small) finite number
of terms:
exp(N) = I + N + (1/2)N^2 =
[ 1 1 0.5 ]
[ 0 1 1 ]
[ 0 0 1 ]
All the other cases are actually simpler
than this one.
regards, chip
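Chip's Jordan-block example is easy to verify numerically. A small sketch (numpy only; the truncated power series is a naive check for a small, well-scaled matrix, not a production-quality matrix exponential):

```python
import numpy as np

def expm_series(A, terms=30):
    """Matrix exponential by truncated power series; adequate for small, well-scaled A."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

r = 0.5
J = np.array([[r, 1.0, 0.0],
              [0.0, r, 1.0],
              [0.0, 0.0, r]])
N = J - r * np.eye(3)                               # nilpotent part, N^3 = 0
closed = np.exp(r) * (np.eye(3) + N + 0.5 * N @ N)  # exp(rI) * exp(N), as in the post
print(np.allclose(expm_series(J), closed))  # True
```

The agreement confirms that for a Jordan block, the power series for exp(N) terminates after finitely many terms, which is the whole point of the construction.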
Date Subject Author
5/24/10 Computation of the matrix exponential luca
5/24/10 Re: Computation of the matrix exponential Chip Eastham
5/25/10 Re: Computation of the matrix exponential Chip Eastham
5/25/10 Re: Computation of the matrix exponential luca
5/25/10 Re: Computation of the matrix exponential Chip Eastham
5/29/10 Re: Computation of the matrix exponential robin
2/10/13 Re: Computation of the matrix exponential Mircea GURBINA
Homework Help
Posted by Davisha on Tuesday, February 3, 2009 at 11:57pm.
• algebra - PsyDAG, Wednesday, February 4, 2009 at 11:00am
First, do not use all caps, since this is like SHOUTING on the Internet. It is rude and will not get you faster or better service.
Second, your question was answered in your previous post.
Third, repeating your same question on multiple posts will not get you better or faster help. It just wastes our time looking at these repeated posts, when we could be answering other posts.
It might help if you realize that we are not always available. Volunteers search the message boards at least daily, if not more often. However, it might be a while before someone with expertise
in your area reads your post(s). Typically your post will be answered within 24 hours.
I hope this helps you to understand the message boards better.
Modeling Extragalactic Jets - A. Ferrari
Annu. Rev. Astron. Astrophys. 1998. 36: 539-598
Copyright © 1998. All rights reserved
5.5. Understanding Jet Microphysics
The above numerical experiments qualify the classical continuous jet model to explain the basic observational characteristics of extended radio sources. On the other hand, our quantitative
understanding of the interlacing physical processes is still preliminary. For instance, the activity of the working surface and the formation and expansion of the cocoon are typical examples of
boundary layer physics with instabilities, mixing, turbulence, etc. So far, these conditions have been simulated in a hydrodynamic or MHD approximation, while most of the important processes are
plasma effects. On the other hand, plasma simulations are at present prohibitive over time scales of a few proton gyroradii for any available supercomputer. A possible approach that should be tested
soon is a two-fluid model in which currents are included explicitly and the interaction between electrons and ions is taken into account for an explicit calculation of the transport coefficients,
diffusion, resistivity, thermal conduction, etc.
Along the same lines, hydrodynamic or MHD models cannot properly represent crucial physical elements as currents and electromagnetic fields. The only attempt to deal with this problem is by Clarke et
al (1986). However, those authors assumed an unmagnetized external medium so that the jet magnetic field cannot diffuse outside its boundaries and, although it insures collimation, does not
participate in the formation of the boundary layer.
Benford (1978) pointed out that current-carrying jets may be naturally self-confined by azimuthal magnetic fields closing inside an extended cocoon that is transporting a return current. In such a
scheme, Jafelice & Opher (1992) examined the radial evolution of the plasma discharge generated in the ambient plasma by a charged jet, assuming that the return current is diffused over an extended
region defined by a balance between the gravitational pull and the effect of the Lorentz forces.
When a current-carrying jet is injected into a plasma, electron currents are induced to flow in such a way that they oppose the self-magnetic field (Miller 1982). Depending on the plasma conductivity, the jet current can be written as [equation lost in extraction], with y(...) as the step function; the response plasma current is then calculated to be [equation lost in extraction], with t_* = z/v_j and a diffusion time τ_d = 4R^2/c^2. The consistent magnetic field is [equation lost in extraction].
If the jet contracts under the self-pinching action of the azimuthal magnetic field consistent with the carried current, then dR/dt < 0 and J_2 < 0, and the return current is less, which focuses the jet more. In the reverse case, which might be expected for a jet that is small at the head and grows larger behind it, J_2 > 0 and the return current is preferentially driven in the region outside the radius R(t). This case may correspond to a return current sheath around the jet. The radial distribution of J_2 is peaked at the edge of the jet because it is driven by the expansion/contraction of the radius.
Further behind the working surface, the plasma current is dissipated by ohmic losses. Following Lovelace & Sudan (1971), the energetics can be evaluated in terms of the Poynting theorem. If a sharp front of considerable energy γ_0 (in units of mc^2) is considered, the jet velocity changes little in the setting up of the jet current system [equation lost in extraction], and the beam current itself is almost constant (∂J_j/∂t ≈ ∂B_j/∂t ≈ 0): [equation lost in extraction], where E is the induction electric field and the integral is over the jet cross section A. Therefore, the plasma magnetic field decays behind the front and does not exactly counter the jet field because of resistive decay or other dissipative effects, so that [equation lost in extraction], with g [definition lost in extraction]. We can then express the quick equilibrium setup (corresponding to the current or magnetic field decay) using the Poynting theorem, obtaining the total energy dissipated by the interaction of the current with the induced field E: [equation lost in extraction]. Integrating over the setup time t with B_j constant gives [equation lost in extraction].
A fraction (1 + g) / 2 of the total dissipated energy goes into the plasma through inductive ohmic heating, independent of the conductivity, as the return current decays; this occurs as g evolves
from 1 to nearly zero as time approaches the magnetic diffusion time for the relevant distance scale. Also, a fraction (1 - g) / 2 goes into the creation of magnetic fields. All this occurs over the
relevant distance for magnetic diffusion, and so such estimates are qualitatively correct if decay occurs in small filaments built up by filamentary instability very near the jet head. Energy
deposition in the plasma appears as raw heating and as electrostatic fields driven by plasma instabilities, which eventually will decay into further heating. This is the final state of the
electrodynamic braking of the jet. Such qualitative considerations apply to "sudden" jets, which induce return currents across their cross sections, setting up the return current path within or very
near the jet area. This is because the "skin effect" of rapid induction confines currents to within a short distance of the jet radius. Such a picture applies best to very fast (perhaps relativistic)
jet heads, which then are slowed quickly by inductive braking and suffer filamentary instability as well.
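The energy partition stated above can be written compactly (a restatement of the text, not one of the article's own lost equations): of the total dissipated energy $W$,

$$W_{\mathrm{plasma}} = \frac{1+g}{2}\,W, \qquad W_{\mathrm{field}} = \frac{1-g}{2}\,W,$$

with $g$ evolving from 1 toward 0 as time approaches the magnetic diffusion time.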
A different situation envisions a jet that builds gradually, so that inductive energy does not have to drive return currents within the narrow channel of the jet radius but rather does so over the
expanded jet head. This requires much less energy investment, since the return current is carried by a far larger number of electrons across a broad cocoon that has a typical radius comparable to the
jet head. The energy invested in setting up the return current system over a large cocoon radius is given by the ratio of the total number of charge carriers (electrons) carrying the jet current (N_j) to the number within the cocoon radius carrying the return current (N_p): [equation lost in extraction].
Since g = 1 at the beginning of the return current setup, large cocoon radii are energetically preferred, and as g falls, more energy must be invested by the entire jet system to maintain itself
against diffusion of return current outward while maintaining self-confinement at the jet core. Such scenarios suggest that jets may begin as narrow, fast (perhaps relativistic) flows, but inductive
braking slows them until their heads are broad enough to develop larger inductive return current systems. Then they acquire cocoons of backflowing plasma. This links the inner jet, which is
self-confined by the greater self-field near the center, to the outer cocoon, where the eventual return currents primarily flow. Such cocoons have substantial stabilizing powers, as they inhibit
lateral instability by adding the cocoon mass to the jet, increasing its inertia.
Topic: matrix differentials
Replies: 3 Last Post: Apr 4, 2013 2:28 PM
Messages: [ Previous | Next ]
Re: matrix differentials
Posted: Apr 4, 2013 2:28 PM
> Hello people!
> I also have one problem, not like this, but I'm using
> your question to ask something hoping that someone
> will answear to both of us.
> Can anyone tell me how can I calculate a maxim
> residual od matrix in MATLAB? When I enter a matrix,
> what command do I need to use to get maxim residual
> of that matrix?
> Thanks!
I Googled 'residual of a matrix' and found
"Minimum residual method - MATLAB minres".
This may be what you are looking for.
If not, use Google to search further.
Regards, Peter Scales.
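If by "maximum residual" the poster means the largest entry of the residual vector b - Ax of a linear system (an assumption; the thread never pins the definition down), the MATLAB one-liner is max(abs(b - A*x)). A numpy sketch of the same idea:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.linalg.solve(A, b)
max_residual = np.max(np.abs(b - A @ x))   # infinity-norm of the residual
print(max_residual < 1e-12)  # True: a direct solve leaves essentially no residual
```

For an approximate solution (e.g. from an iterative method), the same expression measures how far Ax falls short of b in the worst component.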
Math Forum Discussions - Re: Integral test
Date: Dec 10, 2012 5:10 PM
Author: Jose Carlos Santos
Subject: Re: Integral test
On 10-12-2012 21:34, Virgil wrote:
>> One of my students asked me today a question that I was unable to
>> answer. Let _f_ be an analytical function from (0,+oo) into [1,+oo) and
>> suppose that the integral of _f_ from 1 to +oo converges. Does it follow
>> that the series sum_n f(n) converges? I don't think so, but I was unable
>> to find a counter-example. Any ideas?
> One can imagine an analytic function which is equal to 1 at every
> natural number but such that the sequence of its integrals from n-1/2 to
> n+1/2 converges.
> I do not have a concrete example in mind but I'm certain that it is
> possible.
> It could easily be derived from an analytic function with value 0
> outside [-0.5 , .5] and value 1 at 0.
Yes, but no such function exists.
Best regards,
Jose Carlos Santos
NA Digest Sunday, August 16, 1992 Volume 92 : Issue 32
Today's Editor:
Cleve Moler
The MathWorks, Inc.
Submissions for NA Digest:
Mail to na.digest@na-net.ornl.gov.
Information about NA-NET:
Mail to na.help@na-net.ornl.gov.
From: Eric Grosse 908-582-5828 <ehg@research.att.com>
Date: Sun Aug 9 22:52:50 EDT 1992
Subject: Encrypted SIAM Membership List Available
The SIAM membership list, a useful source of up-to-date addresses and
phone numbers, has long been searchable via netlib. Now you can also
download it to your own machine for faster searching.
To preserve privacy, i.e. to keep the list from being used by mass mailers
and telemarketers, the database is encrypted. Given a person's last name
or phone number, you can decrypt that one database entry. But there is
no feasible way to crack the entire list. To learn how we do this, read
J. Feigenbaum, E. Grosse and J. Reeds (1992) "Cryptographic Protection of
Membership Lists", Newsletter of the International Association for
Cryptologic Research, 9:1,16-20. (This paper is available from netlib by
"send 91-12 from research/nam".)
You can use the system without understanding the mechanism. First, get
the decryption program and (1.2 megabyte) database by
ftp research.att.com
login: netlib
password: <your email address>
cd research
get decryptdb.c
get siamdb
then follow the instructions at the start of decryptdb.c to install.
For now, you must have ftp access and a C compiler; if demand
warrants, SIAM headquarters may make the system available on other
media at a later time.
The database, which is updated quarterly, will continue to be
searchable via netlib's "whois" command. But fast local access allows
new uses; for example, my computer is connected to my phone and, when
caller-ID is functioning, automatically translates the calling number
into a name.
From: Steve Stevenson <steve@hubcap.clemson.edu>
Date: Mon, 10 Aug 92 08:44:59 -0400
Subject: Looking for Tricks of the Trade
In the past several months, I have read a couple of texts which have made
a big deal about Horner's rule, like it was a new can for beer.
Other texts seem to be totally oblivious to certain computational facts of
life, like using extrapolation. Yet another trick is converting
Taylor series from iterative (natural) to recursive. Most of this
stuff goes back to the pre-computer days when things had to be
done by hand.
I would like to compile a list of all the old and new computational techniques
which people use to accelerate computations. If you would please send me
just the name and a reference, I'll summarize.
Steve (really "D. E.") Stevenson steve@hubcap.clemson.edu
Department of Computer Science, (803)656-5880.mabell
Clemson University, Clemson, SC 29634-1906
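Since Horner's rule leads the list above: it evaluates a degree-n polynomial with n multiplications and n additions, instead of recomputing powers of x. A minimal sketch:

```python
def horner(coeffs, x):
    """Evaluate c0 + c1*x + ... + cn*x**n, with coeffs given as [c0, c1, ..., cn]."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c   # fold in one coefficient per step
    return result

print(horner([1.0, 2.0, 3.0], 2.0))  # 1 + 2*2 + 3*4 = 17.0
```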
From: Ching-ju Ashraf Lee <leec2@rpi.edu>
Date: Thu, 13 Aug 92 12:26:24 -0400
Subject: Huge Sparse Eigenvalue Problem
Dear Sir/Madame:
I have a huge sparse eigensystem Ax=lambda*Bx to solve. A and
B are real, symmetric and even positive definite. I only need
the smallest eigenvalue of this system and such eigenvalue in my
system is always simple and believed to be well-isolated. An
immediate numerical method for such problem is the inverse iteration
method: (A-mu*B)x(k+1) = Bx(k). Unfortunately, the best method I
come up with in solving the above system is the Cholesky decomposition (I am able to get a lower bound of lambda, hence mu may be
taken to be the lower bound). But since A and B are 70,000 by
70,000, the half bandwidth of A and B is usually around 30,000.
Even though I only have at most 24 nonzero entries per row in the
matrices, Cholesky decomposition will fill nonzero entries inside
the band. So the virtual memory required by the method is way
beyond the limit on my local machine(700 megabytes). Note also
that if mu is close to lambda (therefore a good initial approximation of the system), A-mu*B is quite singular. So the ordinary
iterative methods will not work well in solving the system
(A-mu*B)x(k+1)=Bx(k) for x(k+1) (correct me if my impression is
false). I would like to utilize the special sparseness I have in
this system, if possible, before jump on a larger machine. So any
thoughtful suggestions or reference of the related literature will
be greatly appreciated.
Ching-ju Ashraf
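A toy-scale sketch of the inverse iteration described in the letter; a dense numpy solve stands in for the sparse Cholesky factorization, and the 3x3 matrices are illustrative, nothing like the 70,000-dimensional problem:

```python
import numpy as np

def inverse_iteration(A, B, mu, iters=50):
    """Approximate the eigenvalue of A x = lambda B x nearest the shift mu."""
    x = np.ones(A.shape[0])
    M = A - mu * B
    for _ in range(iters):
        x = np.linalg.solve(M, B @ x)   # (A - mu*B) x_{k+1} = B x_k
        x /= np.linalg.norm(x)
    # generalized Rayleigh quotient of the converged vector
    return (x @ (A @ x)) / (x @ (B @ x))

A = np.diag([2.0, 5.0, 9.0])
B = np.eye(3)
print(round(inverse_iteration(A, B, mu=0.5), 6))  # 2.0, the smallest eigenvalue
```

In practice one factors A - mu*B once and reuses the factorization every iteration; the difficulty raised in the letter is precisely that the factorization fills in the band for such a large problem.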
From: Luciano Molinari <molinari@cumuli.ethz.ch>
Date: Fri, 14 Aug 1992 11:33:23 +0200
Subject: Unix "tar" on DOS.
I am looking for an MS-DOS utility to dearchive and decompress Unix tar.Z
files. Can anybody help me?
L. Molinari
The Children's Hospital
CH-8032 Zurich
From: Richard Brualdi <brualdi@math.wisc.edu>
Date: Wed, 12 Aug 92 07:28:17 CDT
Subject: Contents: Linear Algebra and its Applications
Contents of LAA Volume 174, September 1992
Joel V. Brawley (Clemson, South Carolina) and Gary L. Mullen
(University Park, Pennsylvania)
Scalar Polynomial Functions on the Nonsingular Matrices Over a Finite Field 1
Robert Brawer and Magnus Pirovino (Zurich, Switzerland)
The Linear Algebra of the Pascal Matrix 13
A. A. Chernyak and Z. A. Chernyak (Minsk, U.S.S.R.)
Joint Realization of (0, 1) Matrices Revisited 25
Lifeng Ding (Atlanta, Georgia)
Separating Vectors and Reflexivity 37
Massoud Malek (Hayward, California)
Notes on Permanental and Subpermanental Inequalities 53
Keith Bourque and Steve Ligh (Lafayette, Louisiana)
On GCD and LCM Matrices 65
K. H. Kim and F. W. Roush (Montgomery, Alabama)
Automorphisms of gl-Matrices 75
Charles Lanski (Los Angeles, California)
An Identity for Matrix Rings With Involution 91
P. J. Maher (London, England)
Some Norm Inequalities Concerning Generalized Inverses 99
Desmond J. Higham (Dundee, Scotland) and Nicholas J. Higham (Manchester,
Componentwise Perturbation Theory for Linear Systems With Multiple Right-Hand
Sides 111
John A. Holbrook (Guelph, Canada)
Spectral Variation of Normal Matrices 131
Lei Wu (Dalian, People's Republic of China)
The Re-Positive Definite Solutions to the Matrix Inverse Problem AX=B 145
O. L. Mangasarian (Madison, Wisconsin)
Global Error Bounds for Monotone Affine Variational Inequality Problems 153
Barbu C. Kestenband (Old Westbury, New York)
Quadrics as Hyperplanes in Finite Affine Geometries 165
Max Bauer (Rennes, France)
Dilatations and Continued Fractions 183
Bernhard A. Schmitt (Marburg, Germany)
Perturbation Bounds for Matrix Square Roots and Pythagorean Sums 215
Vlad Ionescu and Martin Weiss (Bucharest, Romania)
On Computing the Stabilizing Solution of the Discrete-Time Riccati Equation
Daniel B. Szyld (Philadelphia, Pennsylvania)
A Sequence of Lower Bounds for the Spectral Radius of Nonnegative Matrices
Frank Uhlig (Auburn, Alabama)
Review of Topics in Matrix Analysis by Roger A. Horn and Charles R. Johnson
Author Index 247
From: Beth Gallagher <gallaghe@siam.org>
Date: Wed, 12 Aug 92 11:03:54 EST
Subject: Contents: SIAM Optimization
SIAM Journal on Optimization
November 1992 Volume 2, Number 4
On the Behavior of Broyden's Class of Quasi-Newton Methods
Richard H. Byrd, Dong C. Liu, and Jorge Nocedal
New Results on a Continuously Differentiable Exact Penalty Function
Stefano Lucidi
On the Implementation of a Primal-Dual Interior Point Method
Sanjay Mehrotra
On Regularized Least Norm Problems
Achiya Dax
On the Continuity of the Solution Map in Linear Complementarity
M. Seetharma Gowda
Linear Inequality Scaling Problems
Uriel G. Rothblum
New Proximal Point Algorithms for Convex Minimization
Osman Guler
A Necessary and Sufficient Condition for a Constrained Minimum
J. Warga
Diagonal Matrix Scaling and Linear Programming
Leonid Khachiyan and Bahman Kalantari
Author Index
End of NA Digest | {"url":"http://www.netlib.org/na-digest-html/92/v92n32.html","timestamp":"2014-04-16T19:02:39Z","content_type":null,"content_length":"10193","record_id":"<urn:uuid:bf04f43f-99c1-4872-bede-d6935c2e25d6>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00257-ip-10-147-4-33.ec2.internal.warc.gz"} |
Noun: multiplication ,múl-ti-pli'key-shun
1. The act of producing offspring or multiplying by such production
- generation, propagation
2. A multiplicative increase
"repeated copying leads to a multiplication of errors"; "this multiplication of cells is a natural correlate of growth"
3. An arithmetic operation that is the inverse of division; the product of two numbers is computed
"the multiplication of four by three gives twelve";
- times
Derived forms: multiplications
Type of: arithmetic operation, breeding, facts of life, growth, increase, increment, procreation, reproduction
Encyclopedia: Multiplication | {"url":"http://www.wordwebonline.com/en/MULTIPLICATION","timestamp":"2014-04-18T16:23:09Z","content_type":null,"content_length":"8971","record_id":"<urn:uuid:4d0dcaa9-0c31-4770-8f10-e1d910067fab>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00509-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - Energy of a particle
Total energy is always potential plus kinetic. From general central motion techniques, how do you find the total energy of an orbit when the central potential goes as r^4?
I don't have a clue. The force can be shown as [itex]\vec F = -\vec \nabla V[/itex]. The work done will be: [itex]W = \int \vec F \cdot dr[/itex]. Will this equal the energy? | {"url":"http://www.physicsforums.com/showpost.php?p=920434&postcount=3","timestamp":"2014-04-17T00:57:20Z","content_type":null,"content_length":"7921","record_id":"<urn:uuid:d3398264-ae8e-4201-8276-b6c2c46c6246>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00455-ip-10-147-4-33.ec2.internal.warc.gz"} |
Question that has me stumped.
August 5th 2009, 05:39 PM #1
Aug 2009
Question that has me stumped.
Carlos leaves Los Angeles on a cross-country car trip at 8:00 AM. He averages 50 miles per hour.
Juanita plans to take exactly the same route, but does not leave until 9:00 AM. She averages 60 miles per hour.
Develop a diagram or table to determine at what time Juanita will pass Carlos.
I've figured out that in between 1 and 2PM the cars will have both traveled 300 miles - but I am finding difficulty in determining when the vehicles will be physically passing.
Thanks for reading...
How about plotting a graph from the values you have? Where the lines intersect is where the cars pass, that is, if you plot a distance-time graph.
Worked like a charm!
If you need to use a table, how about:

Hour   Carlos   Juanita
1      50       0        (9 am)
2      100      60
3      150      120
4      200      180
5      250      240
6      300      300      (2 pm)
Also, consider letting x equal the number of hours Carlos has been driving when Juanita catches him.
Because Juanita starts 1 hour later, she drives x - 1 hours, so
50x = 60(x - 1), that is, 50x = 60x - 60, which gives x = 6.
Six hours after 8:00 AM is 2 PM.
You can do this one in your head, Sarlow:
When J leaves, she's 50 miles behind C (C travelled 50 miles from 8 to 9)
Since J travels at 10 mph faster than C (60 mph - 50 mph), then it will
take 50 / 10 = 5 hours to catch up.
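Both lines of reasoning above (the hour-by-hour table and the algebra) can be checked with a short Python sketch; the function names are mine:

```python
# Carlos leaves at 8:00 AM averaging 50 mph; Juanita leaves at 9:00 AM at 60 mph.
def carlos(t):                 # t = hours since 8:00 AM
    return 50 * t

def juanita(t):
    return 60 * max(t - 1, 0)  # she is parked until t = 1 (9:00 AM)

# Table approach: step through whole hours until Juanita has caught up.
t = 1
while juanita(t) < carlos(t):
    t += 1
print(t, carlos(t), juanita(t))   # 6 300 300, i.e. 2:00 PM

# Algebra approach: 50t = 60(t - 1)  =>  10t = 60  =>  t = 6 hours after 8:00 AM.
assert t == 60 / (60 - 50)
```

Both agree with the closing-speed argument in the last reply: a 50-mile head start closed at 10 mph takes 5 hours from 9:00 AM, which is 2:00 PM.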
{"url":"http://mathhelpforum.com/algebra/97129-question-has-me-stumped.html","timestamp":"2014-04-18T06:00:51Z","content_type":null,"content_length":"41142","record_id":"<urn:uuid:e11b19b2-0bf1-4a42-8761-dd0239028cca>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00045-ip-10-147-4-33.ec2.internal.warc.gz"}
The current classification system, 2000 Mathematics Subject Classification (MSC2000), is a revision of the 1991 Mathematics Subject Classification, which is the classification that has been used by
MR and Zbl since the beginning of 1991. MSC2000 is the result of a collaborative effort by the editors of MR and Zbl to update the classification. The editors acknowledge the many helpful suggestions
from the mathematical community during the revision process.
00-xx General
01-xx History and biography [See also the classification number -03 in the other sections]
03-xx Mathematical logic and foundations
04-xx This section has been deleted {For set theory see 03Exx}
05-xx Combinatorics {For finite fields, see 11Txx}
06-xx Order, lattices, ordered algebraic structures [See also 18B35]
08-xx General algebraic systems
11-xx Number theory
12-xx Field theory and polynomials
13-xx Commutative rings and algebras
14-xx Algebraic geometry
15-xx Linear and multilinear algebra; matrix theory
16-xx Associative rings and algebras {For the commutative case, see 13-xx}
17-xx Nonassociative rings and algebras
18-xx Category theory; homological algebra {For commutative rings see 13Dxx, for associative rings 16Exx, for groups 20Jxx, for topological groups and related structures 57Txx; see also 55Nxx and
55Uxx for algebraic topology}
19-xx $K$-theory [See also 16E20, 18F25]
20-xx Group theory and generalizations
22-xx Topological groups, Lie groups {For transformation groups, see 54H15, 57Sxx, 58-xx. For abstract harmonic analysis, see 43-xx}
26-xx Real functions [See also 54C30]
28-xx Measure and integration {For analysis on manifolds, see 58-xx}
30-xx Functions of a complex variable {For analysis on manifolds, see 58-xx}
31-xx Potential theory {For probabilistic potential theory, see 60J45}
32-xx Several complex variables and analytic spaces {For infinite-dimensional holomorphy, see 46G20, 58B12}
33-xx Special functions (33-xx deals with the properties of functions as functions) {For orthogonal functions, see 42Cxx; for aspects of combinatorics, see 05Axx; for number-theoretic aspects, see
11-xx; for representation theory, see 22Exx}
34-xx Ordinary differential equations
35-xx Partial differential equations
37-xx Dynamical systems and ergodic theory [See also 26A18, 28Dxx, 34Cxx, 34Dxx, 35Bxx, 46Lxx, 58Jxx, 70-xx]
39-xx Difference and functional equations
40-xx Sequences, series, summability
41-xx Approximations and expansions {For all approximation theory in the complex domain, see 30Exx, 30E05 and 30E10; for all trigonometric approximation and interpolation, see 42Axx, 42A10 and 42A15;
for numerical approximation, see 65Dxx}
42-xx Fourier analysis
43-xx Abstract harmonic analysis {For other analysis on topological and Lie groups, see 22Exx}
44-xx Integral transforms, operational calculus {For fractional derivatives and integrals, see 26A33. For Fourier transforms, see 42A38, 42B10. For integral transforms in distribution spaces, see
46F12. For numerical methods, see 65R10}
45-xx Integral equations
46-xx Functional analysis {For manifolds modeled on topological linear spaces, see 57N20, 58Bxx}
47-xx Operator theory
49-xx Calculus of variations and optimal control; optimization [See also 34H05, 34K35, 65Kxx, 90Cxx, 93-xx]
51-xx Geometry {For algebraic geometry, see 14-xx}
52-xx Convex and discrete geometry
53-xx Differential geometry {For differential topology, see 57Rxx. For foundational questions of differentiable manifolds, see 58Axx}
54-xx General topology {For the topology of manifolds of all dimensions, see 57Nxx}
55-xx Algebraic topology
57-xx Manifolds and cell complexes {For complex manifolds, see 32Qxx}
58-xx Global analysis, analysis on manifolds [See also 32Cxx, 32Fxx, 32Wxx, 46-xx, 47Hxx, 53Cxx] {For geometric integration theory, see 49Q15}
60-xx Probability theory and stochastic processes {For additional applications, see 11Kxx, 62-xx, 90-xx, 91-xx, 92-xx, 93-xx, 94-xx}
62-xx Statistics
65-xx Numerical analysis
68-xx Computer science {For papers involving machine computations and programs in a specific mathematical area, see Section -04 in that area}
70-xx Mechanics of particles and systems {For relativistic mechanics, see 83A05 and 83C10; for statistical mechanics, see 82-xx}
73-xx This section has been deleted {For mechanics of solids, see 74-xx}
74-xx Mechanics of deformable solids
76-xx Fluid mechanics {For general continuum mechanics, see 74Axx, or other parts of 74-xx}
78-xx Optics, electromagnetic theory {For quantum optics, see 81V80}
80-xx Classical thermodynamics, heat transfer {For thermodynamics of solids, see 74A15}
81-xx Quantum theory
82-xx Statistical mechanics, structure of matter
83-xx Relativity and gravitational theory
85-xx Astronomy and astrophysics {For celestial mechanics, see 70F15}
86-xx Geophysics [See also 76U05, 76V05]
90-xx Operations research, mathematical programming
91-xx Game theory, economics, social and behavioral sciences
92-xx Biology and other natural sciences
93-xx Systems theory; control {For optimal control, see 49-xx}
94-xx Information and communication, circuits
97-xx Mathematics education | {"url":"http://www.lib.tsinghua.edu.cn/database/guide/MathSciNet/Mathematical%20Subject%20Classification.htm","timestamp":"2014-04-17T15:29:07Z","content_type":null,"content_length":"21472","record_id":"<urn:uuid:06dc07d2-154d-48cd-855d-24d704720b23>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00487-ip-10-147-4-33.ec2.internal.warc.gz"} |
Another day full of interesting and challenging—in the sense they generated new questions for me—talks at the SuSTain workshop. After another (dry and fast) run around the Downs, Leo Held started
the talks with one of my favourite topics, namely the theory of g-priors in generalized linear models. He did bring a new perspective on
Simulation: The modeller’s laboratory
In his 2004 paper in Trends in Ecology and Evolution, Steven Peck argues: Simulation models can be used to mimic complex systems, but unlike nature, can be manipulated in ways that would be
impossible, too costly or unethical to do in natural systems. Simulation can add to theory development and testing, can offer hypotheses about the
R script to calculate QIC for Generalized Estimating Equation (GEE) Model Selection
Generalized Estimating Equations (GEE) can be used to analyze longitudinal count data; that is, repeated counts taken from the same subject or site. This is often referred to as repeated measures
data, but longitudinal data often has more repeated observations. … Continue reading →
Montreal R Workshop: Likelihood Methods and Model Selection
Monday, March 19, 2012 14h-16h, Stewart Biology N4/17 Corey Chivers, McGill University Department of Biology This workshop will introduce participants to the likelihood principal and its utility in
statistical inference. By learning how to formalize models through their likelihood function, participants will learn how to confront these models with data in order to make statistical | {"url":"http://www.r-bloggers.com/tag/model-selection/","timestamp":"2014-04-17T04:14:57Z","content_type":null,"content_length":"33283","record_id":"<urn:uuid:659fb2f3-b739-4706-b118-ab231d1e80ea>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00364-ip-10-147-4-33.ec2.internal.warc.gz"} |
Student Scholars Day
Event Title
T-Test for Proportions? Making Do When Your Software Can't Do Confidence Intervals for Proportions
Presentation Type
Presenter Major(s)
Biostatistics, Statistics
Mentor Information
Sango Otieno, otienos@gvsu.edu; Gerald Shoultz, shoultzg@gvsu.edu
Kirkhof Center KC23
Start Date
13-4-2011 12:00 PM
End Date
13-4-2011 1:00 PM
Mathematical Science
In Introductory Statistics courses, students are taught how to calculate confidence intervals for the population proportion and the difference between two population proportions. However, statistical
software packages often lack syntax for computing such intervals. We assume that if proportion data is recoded in a binary fashion, the resulting t-confidence intervals for one and two sample
problems are equivalent to the corresponding z-intervals. We determine mathematically and by simulations, at what sample size, the t and z intervals can be considered equivalent for various
confidence levels.
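The recoding idea in the abstract is easy to sketch in Python (stdlib only; the figures 40 successes out of n = 100 are my own illustration, not from the study). Recode the proportion data as 0/1 and compare the standard error a one-sample t-interval would use with the one the z-interval for a proportion uses:

```python
import math

n, successes = 100, 40                           # hypothetical data, for illustration
data = [1] * successes + [0] * (n - successes)   # proportion data recoded as binary

p_hat = sum(data) / n                            # sample proportion (= sample mean)

# Sample standard deviation of the 0/1 data (n - 1 denominator), as a t-interval uses.
s = math.sqrt(sum((x - p_hat) ** 2 for x in data) / (n - 1))

se_t = s / math.sqrt(n)                          # standard error behind the t-interval
se_z = math.sqrt(p_hat * (1 - p_hat) / n)        # standard error behind the z-interval

# The two differ only by a factor of sqrt(n / (n - 1)), which tends to 1; with
# t quantiles also approaching z quantiles, the intervals converge as n grows.
assert math.isclose(se_t, se_z * math.sqrt(n / (n - 1)))
print(se_t, se_z)
```

For n = 100 the two standard errors already agree to about three decimal places, which is the equivalence the poster investigates.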
{"url":"http://scholarworks.gvsu.edu/ssd/2011/Posters/143/","timestamp":"2014-04-20T00:41:17Z","content_type":null,"content_length":"22341","record_id":"<urn:uuid:e7ecd2eb-65f1-4aa2-9554-a326e7438265>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00001-ip-10-147-4-33.ec2.internal.warc.gz"}
spheres volume calculator help
Hi there,
What kind of errors are you getting? Compilation errors or runtime errors? Also, could you please copy the errors you are getting into your post?
MiiNiPaa, you are right I had too many arguments. Looking at the code you wrote made it much more simple. I was trying to complicate things by parting out the equation. I forgot to make it 4.0/3.0
instead of 4/3.
Thanks a lot for your help.
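The 4/3 pitfall is easy to demonstrate. In C++, `4/3` is integer division and evaluates to 1; Python 3's `/` is true division, but `//` reproduces the C++ behavior. A sketch (function names are mine):

```python
import math

def sphere_volume(radius):
    """V = (4/3) * pi * r**3, with 4.0/3.0 kept in floating point."""
    return (4.0 / 3.0) * math.pi * radius ** 3

def sphere_volume_buggy(radius):
    # The classic mistake: 4 // 3 truncates to 1, just like C++'s integer 4/3.
    return (4 // 3) * math.pi * radius ** 3

print(sphere_volume(2.0))        # ~33.510
print(sphere_volume_buggy(2.0))  # ~25.133, i.e. the correct answer times 3/4
```

This is exactly the fix described above: writing 4.0/3.0 keeps the arithmetic in floating point.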
{"url":"http://www.cplusplus.com/forum/beginner/112446/","timestamp":"2014-04-20T21:04:59Z","content_type":null,"content_length":"10652","record_id":"<urn:uuid:3f69abd0-665c-4b45-97f8-49ff9a6c70ab>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
DNA on Trial
Grades 9-12
This lesson is used to introduce issues that arise when DNA fingerprinting is used as evidence.
Green Means #201: Crimes Against Nature
NOVA: Murder, Rape and DNA
College Algebra #26: Probability
Students will understand:
· That DNA fingerprinting can be used to eliminate a suspect from a crime.
· The controversy around using DNA as legal grounds in the American court system.
· How probability of an event is determined.
ACTIVITY #1
Explain to students that each individual organism has it's own specific combination of DNA. In fact, the information is so specific that these "fingerprints" are used in courts of law to prove or
disprove the guilt of certain suspects.Direct students to brainstorm how DNA fingerprinting might be used in a variety of fields: example - protection of wildlife, determining hereditary disease,
guilt or innocence in a court of law. Use the GREEN MEANS video to introduce applications of DNA fingerprinting.
ACTIVITY #2
Ask students if it is possible to prove the innocence of a suspect based solely on the DNA evidence.
PLAY the 4 minute video "Crimes Against Nature" from the beginning. Instruct students to focus on how DNA is used by the Federal Fish and Game Department. They should record this information in
their journals. (DNA fingerprints are used to prove that a certain animal was poached from Clint Eastwood's land.)
SHOW the first 20 minutes of NOVA: Murder, Rape and DNA which involves a father who is accused of the rape of his daughter. DNA samples from him and evidence (semen) do not match.Therefore, he is
eliminated as the suspect.
ACTIVITY #3
SHOW the video College Algebra #26: Probability. Segmented viewing: Tape should be cued to the segment about finding somebody with the same birth date. (Eight minutes into tape.) This is a good
explanation of probability. PAUSE the tape. Ask students to predict the results in their class. Then survey the class for shared birth dates.
ACTIVITY #1
Review students' responses. Student challenge: Can a single fingerprint prove that there is an exact match between two samples? (The answer is NO, because each sample is simply a minuscule part
of an organism's genome. A human's is 60 feet long when extended. Scientists look at a portion less than .25 inch at the most.)
Discuss any other instances or cases in which DNA has been used as evidence.
ACTIVITY #2
Compare the results in both videos. Discuss how it takes ONLY ONE sample location that does not match to eliminate a suspect.
Ask students if they could guess how many matching samples it might take to prove guilt. Lead a discussion of the probability of an event.
ACTIVITY #3
Review the fundamentals of probability, giving examples of coin tossing. For example, what is the probability that two coins tossed simultaneously will be TT? (1/2 times 1/2 = 1/4)
Ask again, how one would determine if matching DNA samples would prove somebody's guilt? How many samples would have to match?
Explain that this is the exact question that many legal courts are wrestling with right now. The type of testing (PCR or RFLP) makes a big difference. You may want to have students read "How DNA
Fingerprinting Works," an excerpt from an article in the November, 1994 issue of Popular Science, to help explain the difference.
Unfortunately, there is disagreement between probability of matches. For example, the probability of two matches between two blacks is greater than that between a Black and an Asian.
MATH CALCULATIONS
Given the probability of an event occurring is a product of each individual event, what would the probability be that there would be a match between two samples if the probability of each of five
matches for RFLP testing was:
1. 1 in 2 (1/2 to the 5th power, or 1/32)
2. 1 in 5 (1/5 to the 5th power, or 1/3125)
3. 1 in 10 (1/10 to the 5th power, or 1/100,000)
4. 1/15 (1/15 to the 5th power, or 1/759,375)
5. 1/25 (1/25 to the 5th power, or 1/9,765,625)
The cited range of probabilities for RFLP analyses is one in tens of thousands to one in hundreds of thousands, or even millions, for five locations to match. Ask students to calculate the
probability of each individual match for the following:
6. five matches = 1 in 1,000 (about 1/4 each event)
7. five matches = 1 in 33,000 (about 1/8 each event)
8. five matches = 1 in 250,000 (about 1/12 each event)
9. five matches = 1 in 760,000 (about 1/15 each event)
10. five matches = 1 in 1,800,000 (about 1/18 each event)
11. five matches = 1 in 3,200,000 (about 1/20 each event)
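The two lists above can be verified mechanically. A Python sketch (assuming, as the lesson does, five independent loci):

```python
# Forward check: if each locus matches with probability 1/d, five independent
# loci all match with probability (1/d) ** 5, i.e. 1 in d ** 5.
for d in (2, 5, 10, 15, 25):
    print(f"1/{d} per locus -> 1 in {d ** 5:,}")
# 1 in 32; 1 in 3,125; 1 in 100,000; 1 in 759,375; 1 in 9,765,625

# Reverse check: a combined probability of 1 in N over five independent loci
# corresponds to a per-locus probability of roughly (1/N) ** (1/5).
for n in (1_000, 33_000, 250_000, 760_000, 1_800_000, 3_200_000):
    print(f"1 in {n:,} -> about 1/{n ** (1 / 5):.0f} per locus")
# about 1/4, 1/8, 1/12, 1/15, 1/18, 1/20
```

Students could run this to confirm their hand calculations for items 1-11.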
FAST FORWARD tape where explanations are given as to how to determine the probability of an event. (Number of positive outcomes/number of total outcomes.) PAUSE tape and relate this discussion to
DNA use as evidence.
Next, SHOW the segment where the probability of an event consisting of numerous events = probability of each event times each other.
DISCUSSION QUESTIONS
1. Discuss which probabilities would be acceptable in a court of law to alone prove guilt. (Remember, other evidence frequently enters into the trial. DNA alone may not be necessary to prove
2. Discuss how other circumstances, such as the source of the samples (general population, Blacks, Whites, Asians, etc.) will change these probabilities.
3. Discuss what needs to be done in order to determine a number where probability can be used to prove guilt (standardized tests of each ethnic group, or standardizing the gene pool sample by
location, for example.)
4. Discuss how much of the human genome is actually being sampled (very small portion) and why more samples cannot be taken. (It is a difficult procedure finding these sites within the whole
5. Discuss the highly technical and complicated nature of this science, and how universal agreement as to interpretation of results is not determined.
· Connect with an on-line service to investigate current DNA technology. Use keywords such as DNA and biotechnology. Is it possible to test more sample locations? Genentech has a service on
America OnLine.
· Using the on-line service, investigate how our legal system uses DNA fingerprinting in courts. AOL keywords would include Court TV, CNN, forensic science, reference desks and DNA. Under "Court"
there are current rulings on DNA standards that various courts are presently following.
· Students can experience an interview with a real-life Forensic Scientist by visiting NewsMaker on the World Wide Web. Dr. Bruce Weir, noted DNA expert from the OJ Simpson trial, is interviewed
on some of the most asked questions stemming from "The Trial of the Century". Questions which students can answer from the interview include:
1. Where does DNA evidence come from?
2. What is the difference between PCR and RFLP methods of DNA profiling?
3. What statistical techniques are used to confirm the reliability of DNA evidence?
4. What role does race and ethnicity play in the accuracy of DNA evidence?
5. What is the effect of human error in the preparation of DNA evidence?
6. Why is the validity of DNA forensics and the statistical accuracy of DNA profiling in question?
Master Teachers: Stan Hitomi and Randall Lam
Lesson Plan Database
Thirteen Ed Online | {"url":"http://www.thirteen.org/edonline/nttidb/lessons/sf/dnasf.html","timestamp":"2014-04-16T19:01:51Z","content_type":null,"content_length":"11324","record_id":"<urn:uuid:5113f4db-eac9-46e5-ab63-d61c230945f3>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00084-ip-10-147-4-33.ec2.internal.warc.gz"} |
Good Math/Bad Math
A bit of logic
The other day, I was at work, hacking away. Now, most days, I wind up working pretty much without interruptions, because I'm on assignment to a product development group, and I'm the only guy in New
York working on this product.
But, once in a while, someone bangs on my door to talk. So I'm chatting away, actually talking about Battlestar Galactica, and the guy I'm talking with responds to something with "But that's not
logical". I
what Star Trek did to the common understanding of "logical".
All of this is a roundabout way of saying that I want to talk about logic.
What is Logic?
There's no way to look at a statement in isolation and say that it's not logical. In fact, it's pretty darned close to impossible to say whether something is logical
at all
without first agreeing on just what logic you're talking about!
Logic is not a single set of rules. It's a structure - or more correctly, a way of studying a particular kind of structured system of rules. Logic is a way of building up a structure that lets you
decide whether or not a particular set of statements is meaningful; and if they are, what they mean, and what kind of reasoning you can do using them.
What we usually think of as logic is actually one particular logic,
called first order predicate logic (FOPL). It's the logic of basic "and", "or", and "not" statements, with the ability to talk about "all" of some group of objects having a property, or the existence
of an item with a property. (I'll define it in more detail later.)
FOPL is far from the only useful logic out there. Just to give a quick idea of a few different logics, here's a couple with brief descriptions.
Intuitionistic Logic
Something close to FOPL, but it's got the very interesting property that "A or (not A)" is not always true.
Temporal Logic
A logic that is focused on reasoning about how things happen as time passes. Quantifiers in temporal logic aren't things like "For all X, P(x) is true", but things like "Eventually there will be
an X where P(X) is true", or "For all time, P(x) is true". Temporal logic is widely used in something called model checking, which is a technique use for (among other things) verifying the
correctness of operations in a CPU design.
Fuzzy Logic
A logic where statements aren't specifically true or false, but have degrees of truth associated with them. (corrected from "probablity of truth" based on comment -markcc)
Logic is crucial to mathematics, because math is nothing but the formal systems that you can build using logics.
So - what is a logic?
There are three components to a formal logic:
1. Syntax: what does a valid statement in the logic look like?
2. Semantics: what does a valid statement in the logic mean?
3. Inference Rules: how can you use meaningful statements in the logic to
infer other meaningful statements?
An Example Logic: Basic Propositional Logic
For a simple example, we can look at basic propositional logic. Propositional logic is a very basic logic: it has a set of primitive statements, and just four operators: AND, OR, NOT, and IMPLIES. So
it's easy to explain.
Syntax of Valid Statements in Propositional Logic
So, let's start with syntax.
In propositional logic, we have a set of basic, atomic propositions. (Atomic means that they can't be subdivided into smaller propositions.) We write those with capital letters. Any single basic
proposition is a valid statement in the logic.
We also have the operators: AND, OR, NOT, and IMPLIES. If f and g are valid statements, then not f, not g, f and g, f or g, and f implies g are all valid statements.
Finally, we have parenthesis for grouping: if f is a valid statement, then ( f ) is a valid statement.
So, each of the following is a valid statement:
A or B
A and B and C or (D and E) or (E implies F)
not A
not A implies (C and D or E)
That's it for syntax.
Semantics: The meaning of propositional logic
For semantics, we start by saying _which_ of the basic propositions is true. So, for example, if every upper-case letter were a proposition, we could say that A through M were true, and N through Z were
false.
┃ x AND y    │ x is TRUE │ x is FALSE ┃
┃ y is TRUE  │ TRUE      │ FALSE      ┃
┃ y is FALSE │ FALSE     │ FALSE      ┃

┃ x OR y     │ x is TRUE │ x is FALSE ┃
┃ y is TRUE  │ TRUE      │ TRUE       ┃
┃ y is FALSE │ TRUE      │ FALSE      ┃

┃ x IMPLIES y │ x is TRUE │ x is FALSE ┃
┃ y is TRUE   │ TRUE      │ TRUE       ┃
┃ y is FALSE  │ FALSE     │ TRUE       ┃

┃ NOT x      │       ┃
┃ x is TRUE  │ FALSE ┃
┃ x is FALSE │ TRUE  ┃
Given those tables, and a truth binding for all of the atomic propositions, and we can work out whether a given statement is true or false.
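Those tables translate directly into executable form. A minimal Python sketch of the semantics (my own formulation, not code from the post):

```python
# Truth bindings for the atomic propositions (A and B true, C false).
bindings = {"A": True, "B": True, "C": False}

# The four operators, transcribed from the truth tables above.
def NOT(x):        return not x
def AND(x, y):     return x and y
def OR(x, y):      return x or y
def IMPLIES(x, y): return (not x) or y

# Evaluate "(A AND B) IMPLIES C" under those bindings.
a, b, c = bindings["A"], bindings["B"], bindings["C"]
print(IMPLIES(AND(a, b), c))   # False: A AND B holds, but C does not
```

Any statement built from the four operators can be evaluated this way once every atomic proposition has a binding.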
Inference rules: reasoning using propositional logic
Now, finally, inference rules. Suppose that we aren't given an exhaustive list of all of the atomic propositions and whether they're true or false. We'd like to be able to use statements that we know
to decide whether or not other statements are true. There are a set of rules we can use to do this:
• If we know: x is TRUE, and x IMPLIES y is TRUE;
then we can conclude: B is TRUE.
• If we know: y is FALSE and x IMPLIES y is TRUE;
then we can conclude that x is FALSE.
• If we know: x is TRUE and x AND y is TRUE;
then we can conclude that y is TRUE.
• If we know: x is TRUE and NOT (x AND y) is TRUE;
then we can conclude that y is FALSE
• If we know: x is FALSE and x OR y is TRUE;
then we can conclude that y is TRUE
• If we know: x OR y is FALSE;
then we can conclude: X is FALSE and Y is FALSE
And so on. To be complete, we'd need to add some more, but the above are the basics. (We would need a rule for NOT; for AND and OR we would need some rules for commutativity, etc.)
So, finally, let's take an example of propositional logic.
Let's take some basic propositions (note that I'm not yet saying whether any of these are true or false!):
Proposition A = I am an annoying person.
Proposition B = I am talking to you.
Proposition C = You are annoyed at me.
We assert that the following two things are true:
1. (A AND B) IMPLIES C (In English, "IF I am an annoying person AND I am talking to you, THEN you are annoyed.")
2. NOT C. (In English, "You are not annoyed at me." Remember, this is just for illustration's sake; by now, you may well be annoyed at me in reality!)
Now, what can we figure out by knowing that those two statements are true?
By one of the inference rules we gave above, "If we know: y is FALSE and x IMPLIES y is TRUE; then we can conclude that x is FALSE." We can use this rule by substituting (A AND B) for x, and C for y: if we
know that C is false, and (A AND B) implies C, then we know that (A AND B) is false. So we can conclude NOT (A AND B), which means "It is not the case that I am an annoying person and I am talking to you";
which actually can be switched (through a sequence of changes) to "Either I am not an annoying person, or I am not talking to you (or both)."
16 Comments:
• A great post overall! A good intro to logic.I have only a very small quible. The definition of fuzzy logic is not quite right. Fuzzy logic attaches degree of truth to proposition not probability
of truth. For example in fuzzy logic one could say it is "somewhat true that 70 degrees is warm" Of course one can jump up one level of abstraction and say "the previous statement is quite true"
Keep up the good work, I enjoy reading this blog!
By RandomDNA, at 3:11 PM
• Thanks for the correction. I haven't studied fuzzy logic, just read a couple of references to it in other texts; I misinterpreted what they said about it. I've corrected it in the post.
By MarkCC, at 3:37 PM
• A couple of nits:
"If we know: x is TRUE, and x IMPLIES y is TRUE;
then we can conclude: B is TRUE."
I think you mean: "then we can conclude: y is TRUE"
"If we know: x is TRUE and x AND y is TRUE;
then we can conclude that y is TRUE."
Actually, "x AND y is TRUE" is sufficient to conclude that y is true.
By bcarson, at 4:12 PM
• Don't forget the pioneers, Boole and DeMorgan.
By Broadside, at 5:19 PM
• quick correction on the bullet list
If we know: x is TRUE, and x IMPLIES y is TRUE;
then we can conclude: B is TRUE.
I think you meant "y is TRUE", not "B".
By Joe Shelby, at 5:31 PM
• oh, oops, someone got it already.
gotta remember to refresh before posting. :)
By Joe Shelby, at 5:32 PM
• The most enlightening moment while learning to program is the absolute comprehension that everything is really ones and zeros. The average user never realizes this underlying fact and in my
experience believes that programming is some form of black magic brewed from a witch's cauldron. Once a journeyman programmer comprehends the digital nature of all code, it is a transcendent
event from ignorance to reality.
It really is a shame that more people cannot fathom how computers work. Abstraction has the downside of creating mystery where there is none. The real mystery is how a complicated list of logical
statements can ever work. Of course, this is the cultural question society is haggling over right now.
By Broadside, at 5:44 PM
• broadside:
I actually disagree; to me, the specific reduction of programming to "It's all zeros and ones" is pretty uninteresting - and for many people, I think it's actually misleading. I remember a
conversation with my brother when I was in college, where he said something along the lines of "can you imagine what computers could do if they had *three* values instead of two?" (The answer is, of
course, absolutely nothing that they can't do right now.)
What I think is interesting is that you can reduce almost any problem down to a really small set of symbols. They could be zero and one, or they could be on and off, or they could be -1, 0, and
1, or they could be "a", "b", "c", and "d".
The way that I think about it, the fact that it's binary under the covers doesn't affect a lot of normal day to day programming (if you think about it, the only places where the underlying binary
nature comes into play are at numeric overflow points, and in floating point numbers, where there's a distinctive binary floating point roundoff). But the reduction to discrete symbolic values -
that's *everything* we do as programmers.
I don't mean to say that you're wrong - there is something profound about the whole thing, the way that we can encode almost anything we want in a program. I mean, there's a reason that I'm a
computer scientist, not a pure mathematician: there's something so deeply cool about how you can solve problems with programs; once I saw my first computer, I knew that that's what I was going to do.
By MarkCC, at 7:26 PM
• The transcendent part for me was breaking through the mystery and realizing that all computational programming is simply the use of a cipher much like the magic decoder ring. Beyond that initial
realization is the additional wonder that these abstract concepts are essentially translated into human language for human consumption. A user working in a program designed in a 4GL language has
no concept that the underlying layers of logic end up translating into the humble one and zero. This is why the uninitiated computer user sees magic. I see only binary, yet remain amazed in the
complexity of the whole assemblage and thankful for the intelligence and creativity of others.
By Broadside, at 8:57 PM
• Think of the Class. The power comes from the ability to turn it into a representation of anything. Almost no one writes in binary, instead we use a human compatible language. Once the Class is
compiled, it is translated into another language, binary, and resides as a lump of ones and zeros in a bank of transistors with the same properties as the conceptual object.
Humanity engineers technologies capable of comprehending, responding and interacting with these binary languages and our real world. Entire universes are simulated consisting entirely out of ones
and zeros. And we bridge between the real universe and these virtual universes. That to me is absolutely amazing and has taught me the adage "the truth is stranger than fiction" is sage advice.
This internet we use to communicate is really ones and zeros. You and I are communicating in binary. Just under this veneer of English words is the seedy binary underbelly doing all the grunt work.
Yes, I do know that I am weird.
By Broadside, at 9:42 PM
• markcc said:
"But the reduction to discrete symbolic values - that's *everything* we do as a programmers."
You are absolutely correct, but it is easy to lose sight of the underlying logic driving it all. Programmers are users too. We become dependent on the languages we code in. Each of these
languages is speaking some form of binary. Whenever I need to peek at unknown data, I use an ascii/hex editor.
By Broadside, at 10:26 PM
• Good post. I think there needs to be more logic blogging. As a guy who has TA'd more than his fair share of intro logic classes, I can safely say that people are far too ignorant regarding formal
deductive systems.
By CK, at 12:59 AM
• nice webpage! i'm going to do my dissertation soon and my field consist of programming and fuzzy maths. do you have any suggestions what i can do with these 2 subjects for my dissertation?:)
p/s:you can contact me at haw_alexis@yahoo.com!
By , at 7:42 AM
• actually, all these logics are just different aspects of the same logic... either something is true or false (0 or 1). I also hate it when people equate "logical" with not taking feelings into
consideration; emotions exist, therefore they can be logical. If my boyfriend is flirting with some hot cocktail waitress at my department's Christmas party, then logically I'm going to be
jealous because I'm insecure.
By , at 8:59 AM
• I'm really enjoying this blog a great deal (and just sent the URL to my two mathematician uncles).
You did omit the simpler version of the definition:
Logic is little tweeting bird, chirping in meadow.
Logic is a bunch of pretty flowers that smell bad...[/trek_geek]
Sorry, but that was one I couldn't let pass.
By usagi, at 2:28 PM
• Doesn't the fact that such complex and adaptive systems are composed entirely of ones and zeros make the theories underlying genetics, chemistry and quantum mechanics entirely plausible in your
mind? Sure, the details are hard to comprehend without the appropriate educational background, but conceptually, dna, atomic composition and quarks are analogs to the binary language. It is
entirely reasonable to agree that our complex universe is composed entirely out of simple bits.
By Broadside, at 2:52 PM
<< Home | {"url":"http://goodmath.blogspot.com/2006/03/bit-of-logic.html","timestamp":"2014-04-16T10:09:43Z","content_type":null,"content_length":"48225","record_id":"<urn:uuid:9d4918c4-dcb0-4de1-a942-608ecbc10d79>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00392-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: What databases have taught me
Oracle FAQ Your Portal to the Oracle Knowledge Grid
Home -> Community -> Usenet -> comp.databases.theory -> Re: What databases have taught me
From: Aloha Kakuikanu <aloha.kakuikanu_at_yahoo.com> Date: 23 Jun 2006 19:01:56 -0700 Message-ID: <1151114516.896285.158530@b68g2000cwa.googlegroups.com>
Bob Badour wrote:
> Consider:
> Ra = Rb AND x = 1 AND y = 1
> Where Ra and Rb are two relvars. Is that still a join? To what do x and
> y refer?
There is ambiguity caused by multiple equality signs. Did you mean

(Ra = Rb) AND x = 1 AND y = 1

or

Ra := (Rb AND x = 1 AND y = 1)
The first interpretation is absurd since there is no equality operator in relational algebra. In the second case there is no longer equality, but assignment := (and I still keep the redundant
brackets for clarity).
> Consider:
> R AND x = 1 AND y = 1
> where R has no x attribute. Will that cause a compile-time error? Why or
> why not?
No. The cartesian product of

R AND y = 1

with

x = 1

is the EXTEND operator.
I realize that this kind of thinking would steer quite away from exact Tutorial D semantics. BTW, only now did I notice that Tutorial D is a bastardised version of D&D's "A" algebra. Of course, it may be
argued that the connection Tutorial D <--> "A" is better designed than SQL <--> RA, but this parallel is kind of ironic. Received on Fri Jun 23 2006 - 21:01:56 CDT
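To make this reading concrete, here is a small Python sketch (my own illustration, not Tutorial D or the "A" algebra itself) that treats a relation as a set of rows and AND as natural join. A condition such as x = 1 is modeled as a one-row relation over the single attribute x; joining it against R gives a restriction when R has an x attribute, and a cartesian product (i.e., EXTEND) when it does not:

```python
def rel(*rows):
    # a relation: a frozenset of rows, each row a frozenset of (attr, value) pairs
    return frozenset(frozenset(r.items()) for r in rows)

def natural_join(r1, r2):
    # "A"-algebra-style AND: pair up rows that agree on every shared
    # attribute and merge them into one wider row
    out = set()
    for t1 in r1:
        for t2 in r2:
            d1, d2 = dict(t1), dict(t2)
            if all(d1[k] == d2[k] for k in d1.keys() & d2.keys()):
                out.add(frozenset({**d1, **d2}.items()))
    return frozenset(out)

R = rel({'x': 1, 'z': 7}, {'x': 2, 'z': 9})
X1 = rel({'x': 1})   # the condition "x = 1" as a relation
Y1 = rel({'y': 1})   # "y = 1"; R has no y attribute

print(len(natural_join(R, X1)))  # 1: restriction to the rows with x = 1
print(len(natural_join(R, Y1)))  # 2: no shared attributes, so a cartesian
                                 #    product that EXTENDs every row with y = 1
```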
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US | {"url":"http://www.orafaq.com/usenet/comp.databases.theory/2006/06/23/1724.htm","timestamp":"2014-04-17T22:10:01Z","content_type":null,"content_length":"7962","record_id":"<urn:uuid:d8e93d1e-96e5-4d62-afe8-5f086a45278e>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00169-ip-10-147-4-33.ec2.internal.warc.gz"} |
Molecular Expressions Microscopy Primer: Anatomy of the Microscope - Airy Patterns and the Rayleigh Criterion: Interactive Java Tutorial
Image Formation
Interactive Java Tutorials
Airy Patterns and the Rayleigh Criterion
Airy diffraction pattern sizes and their corresponding radial intensity distribution functions are sensitive to the combination of objective and condenser numerical apertures as well as the
wavelength of illuminating light (when monochromatic light is used to illuminate the specimen). For a well-corrected objective with a uniform circular aperture, two adjacent points are just resolved
when the centers of their Airy patterns are separated by a minimum distance (D) equal to the radius (r) of the central disk in the Airy pattern.
This tutorial explores how Airy disk sizes, at the limit of optical resolution, vary with changes in objective numerical aperture (NA) and illumination wavelength, and how these changes affect the
resolution (r) of the objective (lower values for r indicate increasing resolution). The tutorial initializes with the Wavelength slider set to 528 nanometers in the green color region of the visible
light spectrum. Also upon initialization, the virtual Numerical Aperture slider is set to a value of 0.16 and the two Airy patterns are separated by a distance (D) of 4.3 micrometers. Beneath the
sliders is real-time calculation of the resolution according to Abbe diffraction theory that changes with adjustments to the sliders. Above the sliders is a simulated illumination cone that becomes
wider or narrower as the numerical aperture is changed. The light cone size will increase with numerical aperture (at fixed wavelength) and produce a corresponding decrease in the size of the Airy
disks (and an increase in resolution). A decrease in wavelength at fixed numerical aperture will result in a decrease in Airy pattern size.
To operate the tutorial, use the Wavelength slider on the left to adjust the wavelength of illumination. As this value is decreased, note how the Airy disk size decreases (and resolution (r)
increases). The Numerical Aperture slider can also be used to adjust Airy pattern size by changing the numerical aperture of a virtual microscope objective. As the Airy patterns approach each other,
by adjustment of the Separation Distance slider, they eventually reach the Rayleigh Criterion limit of separation (discussed below), followed by the Sparrow limit. This tutorial assumes that virtual
objective lenses are completely free of aberrations, that the Airy patterns are of identical brightness, and that the unit diffraction pattern generated by the specimen through a circular aperture is
in fact an Airy disk. Airy patterns are modified to alternative diffraction patterns by lens aberrations or non-standard aperture conditions.
Objects in the optical microscope that are either self-luminous or illuminated by a large-angle cone of light form Airy patterns at the intermediate image plane that are incoherent and do not
interfere with each other. This allows the determination of the minimum separation distance between adjacent Airy patterns by examination of the total intensity distribution (the sum of intensities)
when these patterns are closely spaced or overlapping. In the case of Airy patterns produced by coherent illumination, the minimum separation distance must be ascertained by adding the pattern
amplitudes rather than their intensities.
When the separation distance (D) between adjacent Airy patterns is greater than the central disk radius (r), the sum of the intensities yields two individual peaks. As the disks approach each other,
the separation distance will reach a value equal to the central disk radius, a condition known as the Rayleigh criterion. At even closer approach, the separation distance is less than the central
disk radius and the sum of the two peaks merges into a single peak. In the latter instance, the two Airy patterns are said not to be resolved.
In the ideal case, when the objective is aberration-free and provides a uniform circular aperture, two adjacent points are just resolved when the centers of their Airy disks are separated by r (the
central Airy disk radius). When the objective numerical aperture matches that of the substage condenser, r is determined from the equation:
r = 1.22λ/(2NA(obj))
where λ is the wavelength of light with air as the immersion medium and NA(obj) is the numerical aperture of the objective (and condenser). According to Abbe diffraction theory, the objective
numerical aperture is determined by:
NA(obj) = n·sin(θ)
where n is the refractive index of the medium separating the objective front lens and the specimen, and θ is the half-angle of the cone of light captured by the objective. If the specimen is not
self-luminous or when the objective and condenser numerical apertures do not match, the equation for r is given by:
r = 1.22λ/(NA(obj) + NA(cond))
where the working numerical aperture of the condenser (NA(cond)) is determined from:
NA(cond) = n·sin(θ)
Figure 1 illustrates the calculated impact of condenser numerical aperture on resolution and includes values where the objective numerical aperture is less than that of the condenser. This figure
plots the ratio of the numerical apertures of the condenser and objective versus the minimum resolved distance between Airy patterns (Rayleigh criterion) in units of wavelength divided by the
objective aperture.
When two adjacent Airy patterns reach the Rayleigh criterion, the resolution equation reduces to:
D = r = 0.61λ/NA
In cases where two adjacent Airy patterns become closer to each other than in the Rayleigh criterion, the separation is known as the Sparrow limit, which is given by the equation:
D = 0.47λ/NA
As the Separation Distance slider is adjusted to move the Airy patterns closer together in the tutorial, they eventually reach the Rayleigh criteria distance, which is noted by a pop-up in the
tutorial window. Upon even closer approach, the Sparrow limit is reached, also noted by a pop-up.
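As a quick numerical check, a short Python sketch (mine, using the tutorial's stated initial settings of 528 nm illumination and NA = 0.16 with a matched condenser) evaluates the Rayleigh and Sparrow separations from the two formulas above:

```python
wavelength_nm = 528.0   # green illumination, as in the tutorial's initial state
na = 0.16               # objective NA, matched by the condenser

rayleigh_nm = 0.61 * wavelength_nm / na   # D = r = 0.61*lambda/NA
sparrow_nm = 0.47 * wavelength_nm / na    # D = 0.47*lambda/NA

print(f"Rayleigh limit: {rayleigh_nm / 1000:.2f} micrometers")  # 2.01
print(f"Sparrow limit:  {sparrow_nm / 1000:.2f} micrometers")   # 1.55
```

So at these settings two points about 2 micrometers apart are just resolved, comfortably closer than the 4.3 micrometer separation the tutorial starts with.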
Contributing Authors
Kenneth R. Spring - Scientific Consultant, Lusby, Maryland, 20657.
Matthew J. Parry-Hill and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.
© 1998-2013 by Michael W. Davidson and The Florida State University. All Rights Reserved. No images, graphics, scripts, or applets may be reproduced or used in any manner without permission from the
copyright holders. Use of this website means you agree to all of the Legal Terms and Conditions set forth by the owners.
Last modification: Wednesday, Mar 26, 2014 at 02:23 PM
For more information on microscope manufacturers,
use the buttons below to navigate to their websites: | {"url":"http://micro.magnet.fsu.edu/primer/java/imageformation/rayleighdisks/index.html","timestamp":"2014-04-18T18:58:09Z","content_type":null,"content_length":"46944","record_id":"<urn:uuid:00aab0b9-b666-4c14-8948-a65b0dc4bd9d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00231-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by Kate on Saturday, July 3, 2010 at 2:55pm.
I have realized I don't understand Avogadro's number at all.
My book shows me this diagram instead of just providing a simple formula, which makes no sense at all.
here is what I have made sense of so far
(6.022 x 10^23 atoms/mol)(mass of element) = molecular mass
were molecular mass is just the mass number off the periodic table with units g/mol
solving for mass of element will provide the units of g/atoms which makes perfect sense so I think I have this part down; if you want to be grammatically correct, g/atom
Here's a sample problem in which I would use this equation: Calculate the mass in grams of a single carbon-12 atom
(12.0 g/mol)/(6.022 x 10^23 atoms/mol) = 1.993 x 10^-23 g/atom
(molecular mass)n = mass of element
were mass is the mass of the element given in g and molecular mass is the mass number from periodic table with units g/mol
n is the number of moles
solving for n provides moles for the final unit so I think I get this equation down
Example probelm: How many moles of He atoms are in 6.46 g?
(6.46 g)/(4.003 g/mol) = 1.61 mol
I'm trying to solve this problem:
How many molecules of ethane C2H6 are present in .334 g of C2H6?
Now I don't know how to solve the problem I posted above were its asking me for molecules
Can someone just show me the appropriate formulas as I'm just reading this crazy diagram and trying to make sense of it and deriving the formulas from it myself... thanks
• Chemistry - DrBob222, Saturday, July 3, 2010 at 4:11pm
You are correct that
moles = grams/molar mass
atomic mass in grams/6.022 x 10^23 = mass of one atom.
# molecules in 0.334 g ethane.
Convert to moles.
moles = grams/molar mass.
0.334/30 = ?? I estimated the 30; you need to confirm it and change if appropriate.
We know 1 mole of anything contains 6.022 x 10^23 molecules; therefore,
# molecules = 6.022 x 10^23 molecules/mole x ??moles from above = # molecules.
By the way, I notice throughout the post that you are omitting an h in WHERE which made it a little difficult to understand when I first read it as WERE.
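Putting the whole chain together in code may help. This sketch (mine, not from the thread) uses the standard atomic masses; the 30.07 g/mol it computes for ethane confirms DrBob222's estimate of 30:

```python
AVOGADRO = 6.022e23  # particles per mole

# Worked example from the thread: mass of a single carbon-12 atom
mass_c12_atom = 12.0 / AVOGADRO          # g/atom
print(f"{mass_c12_atom:.3e} g/atom")     # 1.993e-23

# The ethane question: molecules in 0.334 g of C2H6
molar_mass_c2h6 = 2 * 12.011 + 6 * 1.008   # = 30.07 g/mol
moles = 0.334 / molar_mass_c2h6            # moles = grams / molar mass
molecules = moles * AVOGADRO
print(f"{moles:.5f} mol -> {molecules:.2e} molecules")  # 0.01111 mol -> 6.69e+21
```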
Related Questions
programming - Design a class named Book that holds a stock number, author, title... | {"url":"http://www.jiskha.com/display.cgi?id=1278183341","timestamp":"2014-04-20T05:29:19Z","content_type":null,"content_length":"10311","record_id":"<urn:uuid:24edcbcd-13b9-49a1-8353-a488a02e4234>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00342-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Deep Thought on number theory
Gabriel Stolzenberg gstolzen at math.bu.edu
Thu Apr 13 12:52:57 EDT 2006
This morning I talked with a number theorist, who will be known
here as "Deep Thought," who disagrees with the idea that bounds
are intrinsically interesting and said that number theorists are
sometimes able---often enough to make it interesting---to go from
an "effective" bound to a realistic one.
I find this fascinating and wonder why number theorists don't
advertise it. E.g., with simple examples in number theory courses.
(I know about bootstrap and speedup methods in analysis but I have
no idea how to go from "effective" to realistic in number theory.)
As for the role of the classical existence proof with which number
theorists allegedly start, Deep Thought said that his colleagues go
directly for a constructive proof---except that (1) they really don't
know what this means and (2) as we know, they sometimes end up with
a classical proof.
Finally, Deep Thought agrees that "effective" algorithm is an
oxymoron. Maybe this is a sign that he thinks constructively rather
than classically. There is other evidence of this. Yet he claims
to be a Platonist, as in "Aristotle no, Plato yes."
Gabriel Stolzenberg
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2006-April/010399.html","timestamp":"2014-04-19T01:53:51Z","content_type":null,"content_length":"3552","record_id":"<urn:uuid:1eda987e-432f-433d-b7bd-3e6e0a3137ff>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00301-ip-10-147-4-33.ec2.internal.warc.gz"} |
Selected Publications of Marie A. Vitulli
Weierstrass points and monomial curves, J. of Alg., 48 (1977), 454-476.
(with D.S. Rim)
The obstruction of the formal moduli space in the negatively-graded
case,Pacific J. Math., 82 (1979), 281-288.
Complete intersections in C^n and R^2n, Proc. of the A.M.S., 78 (1980),
Seminormal rings and weakly normal varieties, Nagoya Math. J., 82
(1981), 27-56. (with J.V. Leahy)
Weakly normal varieties: the multicross singularity and some vanishing
theorems on local cohomology, Nagoya Math. J., 83 (1981), 137-152.
(with J.V. Leahy).
On grade and formal connectivity for weakly normal varieties, J. of
Alg., 80 (1983), 23-28.
The hyperplane sections of weakly normal varieties, Amer. J. Math., 105
(1983), 1357-1368.
Seminormal graded rings and weakly normal projective varieties,
Internat. J. of Math. Math. Sci., 8 (1985), 231-240. (with J. V. Leahy).
Corrections to "Seminormal rings and weakly normal varieties", Nagoya
Math. J., 107 (1987), 147-157.
Homeomorphism versus isomorphism for varieties, J. Pure and Appl.
Algebra, 56 (1989), 313-318.
V-Valuations of a commutative ring I., J. of Alg., 126 (1989), 264-292
(with D.K. Harrison).
Complex-Valued places and CMC subsets of a field, Comm. in Alg. 17
(1989), 2529-2538 (with D.K. Harrison).
A Categorical approach to the theory of equations, J. Pure and Appl.
Algebra 67 (1991), 15-31(with D.K. Harrison).
Complex-valued preplaces and the nonring CMC subsets of a ring, Comm.
in Alg. 18 (1990) 3743-3757 (with K.G. Valente).
Orderings and prime-like subsets of a ring, J. of Pure and Applied
Alg. 81 (1992) 197-218 (with K.G. Valente)
Prime-like subsets of a commutative ring, in "p-Adic Methods in Number
Theory And Algebraic Geometry", Contemporary Mathematics Vol. 133
The weak subintegral closure of an ideal, J. of Pure and Applied Alg.
141 (1999) 185-200 (with J.V. Leahy) dvi file
The weak subintegral closure of a monomial ideal, Comm. in Alg.
27(11) (1999), 5649-5667 (with L. Reid) dvi file
Weak normalization and weak subintegral closure, in “Singularities in
Algebraic and Analytic Geometry,” C.G. Melles and R.I. Michler, Ed.,
Contemporary Mathematics Vol. 266 (2000), 11 - 22. ps file
Some results on normal homogeneous ideals, Comm. in Alg. 31(9) (2003),
4485-4506. (with L. Reid and L. G. Roberts) dvi file
On normal monomial ideals, "Topics in Algebraic and Noncommutative
Geometry," and C.G. Melles, J-P Brasselet, G. Kennedy, K. Lauter,
L. McEwan, Eds., Contemporary Mathematics Vol. 324 (2003), 205 - 218.
dvi file
The core of monomial ideals, Advances in Math. 211(1) (2007) 72 - 93.
Download paper from ArXiV (with C. Polini and B. Ulrich)
Serre’s condition R[k] for affine semigroup rings, Comm. in Algebra,
Vol. 37, Issue 3 (2009), 743 - 756. Download paper from ArXiV.
Weak subintegral closure of ideals, Advances in Math. 226(3) (2011) 2089 - 2117. (with T. Gaffney)
Download paper from ArXiV
Weak normality and seminormality, invited paper in “Commutative algebra: Noetherian and
Non-Noetherian Perspectives,” M. Fontana, S.-E. Kabbaj, B. Olberding, I. Swanson, Eds.,
Springer Verlag (2011) 441 - 480. pdf file Also available on the ArXiV.
This page updated on April 3, 2011 by Marie A. Vitulli | {"url":"http://pages.uoregon.edu/vitulli/research.html","timestamp":"2014-04-19T07:04:30Z","content_type":null,"content_length":"4827","record_id":"<urn:uuid:fe38550f-52fe-43af-8bdf-af61e8c0b94b>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00643-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculus Tutors
Fort Lee, NJ 07024
Math, Science, Technology, and Test Prep Tutor
...I've been teaching and tutoring for over 15 years in several subjects including pre-algebra, algebra I & II, geometry, trigonometry, statistics, math analysis, pre-
(AP), number theory, SAT Math/Critical Reading/Writing, programming, web design,...
Offering 10+ subjects including calculus | {"url":"http://www.wyzant.com/Whitestone_calculus_tutors.aspx","timestamp":"2014-04-21T01:09:26Z","content_type":null,"content_length":"59280","record_id":"<urn:uuid:9dfd7cea-f04a-4ecf-9bc9-29b0de1a286a>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00380-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hoboken, NJ Precalculus Tutor
Find a Hoboken, NJ Precalculus Tutor
...From my time teaching, I have accumulated many resources from various websites, books, and articles to holistically address these problems. I believe I can present seemingly difficult math
concepts in a very tangible and understandable manner. The key is to put it into real-world situations, which are actually meaningful and tangible to us.
26 Subjects: including precalculus, calculus, writing, statistics
...I think mistakes are great as I see them as learning opportunities, so I always try to create an environment in which my students feel comfortable answering questions, even when they are
unsure of the correct response. I pride myself on being able to deliver results. If you are not completely satisfied with my tutoring on any occasion, I will not bill for that lesson.
18 Subjects: including precalculus, geometry, GRE, algebra 1
Hello, my goal in tutoring is to develop your skills and provide tools to achieve your goals. My teaching experience includes varied levels of students (high school, undergraduate and graduate
students). For students whose goal is to achieve high scores on standardized tests, I focus mostly on tips a...
15 Subjects: including precalculus, chemistry, calculus, geometry
...One on one tutoring and homework drills (if needed) for all levels of math and statistics (Kids to PhD) students. *I offer completely individualized tutoring according to the student's
strengths, weaknesses, time constraints, personality traits, motivation and my instincts. * Please give me ...
30 Subjects: including precalculus, calculus, statistics, geometry
...Even some certified math teachers are not fluent in this subject. I spent 36 years as a mathematics teacher and 22 of those years supervising a department of approximately 30 mathematics
teachers. Teaching students how to study was always a priority in our professional development meetings.
9 Subjects: including precalculus, geometry, algebra 1, algebra 2
Related Hoboken, NJ Tutors
Hoboken, NJ Trigonometry Tutors | {"url":"http://www.purplemath.com/Hoboken_NJ_precalculus_tutors.php","timestamp":"2014-04-20T13:42:15Z","content_type":null,"content_length":"24352","record_id":"<urn:uuid:6ba3b327-55a5-4c17-9c33-d58afe4c938d>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00173-ip-10-147-4-33.ec2.internal.warc.gz"} |
Need help setting up this triple integral for the volume of the solid.
The solid bounded above by the cylinder z=4-x^2 and below by the paraboloid z=x^2+3y^2.
For these problems, you need to know what the volume looks like - I find it helps to draw a diagram. The limits for the first or innermost integral (with respect to z) are, of course, $x^2+3y^2$ to
$4-x^2$. Next, you need to know the intersection of the two surfaces, which is given by $4-x^2=x^2+3y^2$, or $2x^2+3y^2=4$, an ellipse. This is the area you need to cover with the other two
integrals. Let's integrate with respect to y next. For a given x, what is the range of y? All you have to do is solve the ellipse equation for y as a function of x, which gives you two points - the
bottom and the top of the ellipse. Those are your limits of integration with respect to y. For the outermost integral, you just need the range of x that you're dealing with, which is just where the
ellipse crosses the x axis - setting y=0 gives $2x^2=4$, so $x=\pm\sqrt{2}$, which are your limits of integration for x. I like to think of these problems as adding up tiny cubes of volume. Each cube
is labeled with its coordinates (x,y,z). First you add up the cubes from the bottom $x^2+3y^2$ to the top $4-x^2$ to make "sticks" of different lengths, labeled with coordinates (x,y). Then you add
up the sticks, and for each x, the sticks have a range of y coordinates. Adding up the sticks makes "plates", which are now labeled with x coordinates only. So the final step is to add up all the
plates. The plates have different shapes, but the limits of integration just tell you which plates to choose. I suppose that might not help you, but that's how I think of it. Post again in this
thread if you're still having trouble. - Hollywood | {"url":"http://mathhelpforum.com/calculus/140764-need-help-setting-up-triple-intergral-volume-solid.html","timestamp":"2014-04-19T17:26:54Z","content_type":null,"content_length":"36351","record_id":"<urn:uuid:f5c3425c-8644-4676-8e1b-7989b3c4a1b0>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00642-ip-10-147-4-33.ec2.internal.warc.gz"} |
Intensional Logic
First published Thu Jul 6, 2006; substantive revision Thu Jan 27, 2011
There is an obvious difference between what a term designates and what it means. At least it is obvious that there is a difference. In some way, meaning determines designation, but is not synonymous
with it. After all, “the morning star” and “the evening star” both designate the planet Venus, but don't have the same meaning. Intensional logic attempts to study both designation and meaning and
investigate the relationships between them.
If you are not skilled in colloquial astronomy, and I tell you that the morning star is the evening star, I have given you information—your knowledge has changed. If I tell you the morning star is
the morning star, you might feel I was wasting your time. Yet in both cases I have told you the planet Venus was self-identical. There must be more to it than this. Naively, we might say the morning
star and the evening star are the same in one way, and not the same in another. The two phrases, “morning star” and “evening star” may designate the same object, but they do not have the same
meaning. Meanings, in this sense, are often called intensions, and things designated, extensions. Contexts in which extension is all that matters are, naturally, called extensional, while contexts in
which extension is not enough are intensional. Mathematics is typically extensional throughout—we happily write “3 + 2 = 2 + 3” even though the two terms involved may differ in meaning (more about
this later). “It is known that…” is a typical intensional context—“it is known that 3 + 2 = 2 + 3” may not be correct when the knowledge of small children is involved. Thus mathematical pedagogy
differs from mathematics proper. Other examples of intensional contexts are “it is believed that…”, “it is necessary that…”, “it is informative that…”, “it is said that…”, “it is astonishing that…”,
and so on. Typically a context that is intensional can be recognized by a failure of the substitutivity of equality when naively applied. Thus, the morning star equals the evening star; you know the
morning star equals the morning star; then on substituting equals for equals, you know the morning star equals the evening star. Note that this knowledge arises from purely logical reasoning, and
does not involve any investigation of the sky, which should arouse some suspicion. Substitution of co-referring terms in a knowledge context is the problematic move—such a context is intensional,
after all. Admittedly this is somewhat circular. We should not make use of equality of extensions in an intensional context, and an intensional context is one in which such substitutivity does not hold.
The examples used above involve complex terms, disguised definite descriptions. But the same issues come up elsewhere as well, often in ways that are harder to deal with formally. Proper names
constitute one well-known area of difficulties. The name “Cicero” and the name “Tully” denote the same person, so “Cicero is Tully” is true. Proper names are generally considered to be rigid, once a
designation has been specified it does not change. This, in effect, makes “Cicero is Tully” into a necessary truth. How, then, could someone not know it? “Superman is Clark Kent” is even more
difficult to deal with, since there is no actual person the names refer to. Thus while the sentence is true, not only might one not know it, but one might perfectly well believe Clark Kent exists,
that is “Clark Kent” designates something, while not believing Superman exists. Existence issues are intertwined, in complex ways, with intensional matters. Further, the problems just sketched at the
ground level continue up the type hierarchy. The property of being an equilateral triangle is coextensive with the property of being an equiangular triangle, though clearly meanings differ. Then one
might say, “it is trivial that an equilateral triangle is an equilateral triangle,” yet one might deny that “it is trivial that an equilateral triangle is an equiangular triangle”.
In classical first-order logic intension plays no role. It is extensional by design since primarily it evolved to model the reasoning needed in mathematics. Formalizing aspects of natural language or
everyday reasoning needs something richer. Formal systems in which intensional features can be represented are generally referred to as intensional logics. This article discusses something of the
history and evolution of intensional logics. The aim is to find logics that can formally represent the issues sketched above. This is not simple and probably no proposed logic has been entirely
successful. A relatively simple intensional logic that can be used to illustrate several major points will be discussed in some detail, difficulties will be pointed out, and pointers to other, more
complex, approaches will be given.
Recognition that designating terms have a dual nature is far from recent. The Port-Royal Logic used terminology that translates as “comprehension” and “denotation” for this. John Stuart Mill used
“connotation” and “denotation.” Frege famously used “Sinn” and “Bedeutung,” often left untranslated, but when translated, these usually become “sense” and “reference.” Carnap settled on “intension”
and “extension.” However expressed, and with variation from author to author, the essential dichotomy is that between what a term means, and what it denotes. “The number of the planets” denotes the
number 9 (ignoring recent disputes about the status of bodies at the outer edges of the solar system), but it does not have the number 9 as its meaning, or else in earlier times scientists might have
determined that the number of planets was 9 through a process of linguistic analysis, and not through astronomical observation. Of the many people who have contributed to the analysis of intensional
problems, several stand out. At the head of the list is Gottlob Frege.
The modern understanding of intensional issues and problems begins with a fundamental paper of Gottlob Frege, (Frege 1892). This paper opens with a recital of the difficulties posed by the notion of
equality. In his earlier work, Frege notes, he had taken equality to relate names, or signs, of objects, and not objects themselves. For otherwise, if a and b designate the same object, there would
be no cognitive difference between a = a and a = b, yet the first is analytic while the second generally is not. Thus, he once supposed, equality relates signs that designate the same thing. But, he
now realizes, this cannot be quite correct either. The use of signs is entirely arbitrary, anything can be a sign for anything, so in considering a = b we would also need to take into account the
mode of presentation of the two signs—what it is that associates them with the things they designate. Following this line of thought, equality becomes a relation between signs, relative to their
modes of presentation. Of course the notion of a mode of presentation is somewhat obscure, and Frege quickly shifts attention elsewhere.
A sign has both a reference, and what Frege calls a sense —we can think of the sense as being some kind of embodiment of the mode of presentation. From here on in his paper, sense is under
discussion, and modes of presentation fade into the background. A name expresses its sense, and designates its reference. Thus, “morning star” and “evening star” have the same designation, but
express different senses, representing different modes of presentation—one is a celestial body last seen in the morning before the sun obscures it, the other is a celestial body first seen in the
evening after the sun no longer obscures it. Frege goes on to complicate matters by introducing the idea associated with a sign, which is distinct from its sense and its reference. But the idea is
subjective, varying from person to person, while both sense and denotation are said by Frege to be not dependent in this way. Consequently the idea also fades into the background, while sense and
reference remain central.
Generally when a sign appears as part of a declarative sentence, it is the reference of the sign that is important. Both “Venus” and “the morning star” designate the same object. The sentence “The
morning star is seen in the sky near sunrise” is true, and remains true when “the morning star” is replaced with “Venus”. Substitution of equi-designating signs preserves truth. But not always; there
are contexts in which this does not happen, indirect reference contexts. As a typical example, “George knows that the morning star is seen in the sky near sunrise” may be true while “George knows
that Venus is seen in the sky near sunrise” may be false. Besides knowledge contexts, indirect reference arises when a sentence involves “I believe that…”, “I think that…”, “It seems to me that…”,
“It is surprising that…”, “It is trivial that…”, and so on. In such contexts, Frege concludes, not designation but sense is central. Then, since “George knows that…” is an indirect reference context,
senses are significant. The signs “the morning star” and “Venus” have different senses, we are not replacing a sense by a sense equal to it, and so should not expect truth to be preserved.
Frege notes that an expression might have a sense, but not a reference. An example he gives is, “the least rapidly convergent series.” Of course an object might have several signs that designate it,
but with different senses. Frege extends the sense/reference dichotomy rather far. In particular, declarative sentences are said to have both a sense and a reference. The sense is the proposition it
expresses, while the reference is its truth value. Then logically equivalent sentences have the same designation, but may have different senses. In indirect contexts sense, and not designation,
matters and so we may know the well-ordering principle for natural numbers, but not know the principle of mathematical induction because, while they are equivalent in truth value, they have different senses.
No formal machinery for dealing with sense, as opposed to reference, is proposed in Frege 1892. But Frege defined the terms under which further discussion took place. There are two distinct but
related notions, sense and reference. Equality plays a fundamental role, and a central issue is the substitutivity of equals for equals. Names, signs, expressions, can be equal in designation, but
not equal in sense. There are both direct or extensional, and indirect or intensional contexts, and reference matters for the first while sense is fundamental for the second.
Frege gave the outline of a theory of intensionality, but no intensional logic in any formal sense. There have been attempts to fill in his outline. Alonzo Church (1951) went at it quite directly. In
this paper there is a formal logic in which terms have both senses and denotations. These are simply taken to be different sorts, and minimal requirements are placed on them. Nonetheless the logic is
quite complex. The formal logic that Frege had created for his work on the foundations of mathematics was type free. Russell showed his famous paradox applied to Frege's system, so it was
inconsistent. As a way out of this problem, Russell developed the type theory that was embodied in Principia Mathematica. Church had given an elegant and precise formulation of the simple theory of
types (Church 1940), and that was incorporated into his work on intensionality, which is one of the reasons for its formal complexity.
Church uses a notion he calls a concept, where anything that is the sense of a name for something can serve as a concept of that something. There is no attempt to make this more precise—indeed it is
not really clear how that might be done. It is explicit that concepts are language independent, and might even be uncountable. There is a type ο[0] of the two truth values. Then, there is a type ο[1]
of concepts of members of ο[0], which are called propositional concepts. There is a type ο[2] of concepts of members of ο[1], and so on. There is a type ι[0] of individuals, a type ι[1] of concepts of
members of ι[0], a type ι[2] of concepts of members of ι[1], and so on. And finally, for any two types α and β there is a type (αβ) of functions from items of type β to items of type α. Church makes
a simplifying assumption concerning functional types. In order to state it easily he introduces some special notation: if α is a type symbol, for example ((ι[3]ο[2])(ο[5]ι[4])), then α[1] is the
result of increasing each subscript by 1, in our example we get ((ι[4]ο[3])(ο[6]ι[5])). (There is a similar definition for α[n] for each positive integer n, but we will not need it here.) Church's
assumption is that the concepts of members of the functional type (αβ) are the members of the type (α[1]β[1]). With this assumption, uniformly the concepts of members of any type α are the members
of type α[1].
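Church's subscript-raising operation is purely syntactic, so it can be captured directly in code. The following is an illustrative sketch using an ad hoc tuple representation of type symbols (the representation is invented here, not Church's own notation).

```python
# Church's type-raising operation: alpha -> alpha_1 increases every
# subscript by one; the concepts of members of type alpha are the
# members of type alpha_1.
# Base types o_k, i_k are tuples ('o', k), ('i', k); a functional
# type (alpha beta) is ('fn', alpha, beta).

def raise_type(t, n=1):
    """Increase every subscript occurring in a type symbol by n."""
    if t[0] in ('o', 'i'):                 # base type o_k or i_k
        return (t[0], t[1] + n)
    if t[0] == 'fn':                       # functional type (alpha beta)
        return ('fn', raise_type(t[1], n), raise_type(t[2], n))
    raise ValueError(t)

# Church's example: ((i3 o2)(o5 i4)) becomes ((i4 o3)(o6 i5)).
alpha = ('fn', ('fn', ('i', 3), ('o', 2)), ('fn', ('o', 5), ('i', 4)))
print(raise_type(alpha))
# ('fn', ('fn', ('i', 4), ('o', 3)), ('fn', ('o', 6), ('i', 5)))
```

The recursion mirrors the structure of Church's definition: subscripts are raised at the base types, and the operation distributes over functional types.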
Quantification and implication are introduced, or rather versions appropriate for the various types are introduced. λ abstraction notation is present. And finally, for each type α it is assumed there
is a relation that holds between a concept of something of type α and the thing itself; this is a relation between members of type α[1] and members of type α. This is denoted Δ, with appropriate
type-identifying subscripts.
A fundamental issue for Church is when two names, lambda terms, have the same sense. Three alternatives are considered. Common to all three alternatives are the assumptions that sense is unchanged
under the renaming of bound variables (with the usual conditions of freeness), and under β reduction. Beyond these, Alternative 0 is somewhat technical and is only briefly mentioned, Alternative 1 is
fine grained, making senses distinct as far as possible, while Alternative 2 makes two terms have the same sense whenever equality between them is a logical validity. The proper definition of the
alternatives is axiomatic, and altogether various combinations of some 53 axiom schemes are introduced, with none examined in detail. Clearly Church was proposing an investigation, rather than
presenting full results.
As noted, the primary reference for this work is Church 1951, but there are several other significant papers including Church 1973, Church 1974, and the Introduction to Church 1944, which contains an
informal discussion of some of the ideas. In addition, the expository papers of Anderson are enlightening (Anderson 1984, 1998). It should be noted that there are relationships between Church's work
and that of Carnap, discussed below. Church's ideas first appeared in an abstract (Church 1946), then Carnap's book appeared (Carnap 1947). A few years later Church's paper expanded his abstract in
Church 1951. The second edition of Carnap's book appeared in 1956. Each man had an influence on the other, and the references between the two authors are thoroughly intertwined.
Church simply(!) formalized something of how intensions behaved, without saying what they were. Rudolf Carnap took things further with his method of intension and extension, and provided a semantics
in which quite specific model-theoretic entities are identified with intensions (Carnap 1947). Indeed, the goal was to supply intensions and extensions for every meaningful expression, and this was
done in a way that has heavily influenced much subsequent work.
Although Carnap attended courses of Frege, his main ideas are based on Wittgenstein 1921. In the Tractatus, Wittgenstein introduced a precursor of possible world semantics. There are states of
affairs, which can be identified with the set of all their truths, “(1.13) The facts in logical space are the world.” Presumably these facts are atomic, and can be varied independently, “(1.21) Each
item can be the case or not the case while everything else remains the same.” Thus there are many possible states of affairs, among them the actual one, the real world. Objects, in some way, involve
not only the actual state of affairs, but all possible ones, “(2.0123) If I know an object I also know all its possible occurrences in states of affairs. (Every one of these possibilities must be
part of the nature of the object.) A new possibility cannot be discovered later.” It is from these ideas that Carnap developed his treatment.
Carnap begins with a fixed formal language whose details need not concern us now. A class of atomic sentences in this language, containing exactly one of A or ¬A for each atomic sentence, is a
state-description. In each state-description the truth or falsity of every sentence of the language is determined following the usual truth-functional rules—quantifiers are treated substitutionally,
and the language is assumed to have ‘enough’ constants. Thus truth is relative to a state-description. Now Carnap introduces a stronger notion than truth, L-truth, intended to be “an explicatum for
what philosophers call logical or necessary or analytic truth.” Initially he presents this somewhat informally, “a sentence is L-true in a state description S if it is true in S in such a way that
its truth can be established on the basis of the semantical rules of the system S alone, without any reference to (extra-linguistic) facts.” But this is quickly replaced by a more precise semantic
version, “A sentence is L-true if it holds in every state-description.”
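For a propositional language, Carnap's semantic definition of L-truth is easy to make computational: enumerate every state-description over the atoms and check the sentence in each. A minimal sketch (the formula encoding is my own, not Carnap's):

```python
# A state-description over a set of atoms assigns each atom a truth
# value; a sentence is L-true if it holds in every state-description.

from itertools import product

def evaluate(formula, sd):
    """Evaluate a formula in a state-description sd (dict: atom -> bool).
    Formulas are atoms (strings) or nested tuples like ('or', A, B)."""
    if isinstance(formula, str):
        return sd[formula]
    op = formula[0]
    if op == 'not':
        return not evaluate(formula[1], sd)
    if op == 'and':
        return evaluate(formula[1], sd) and evaluate(formula[2], sd)
    if op == 'or':
        return evaluate(formula[1], sd) or evaluate(formula[2], sd)
    raise ValueError(op)

def L_true(formula, atoms):
    """True iff the formula holds in every state-description over atoms."""
    return all(evaluate(formula, dict(zip(atoms, vals)))
               for vals in product([True, False], repeat=len(atoms)))

print(L_true(('or', 'P', ('not', 'P')), ['P']))   # True: holds everywhere
print(L_true('P', ['P']))                          # False: fails where P is false
```

This is just classical tautology checking, which is the point: with no accessibility relation, L-truth over all state-descriptions coincides with truth-functional validity for purely propositional sentences.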
One can recognize in L-truth a version of necessary truth using possible world semantics. There is no accessibility relation, so what is being captured is more like S5 than like other modal logics.
But it is not S5 semantics either, since there is a fixed set of state-descriptions determined by the language itself. (If P is any propositional atom, some state-description will contain P, and so ◊
P will be validated.) Nonetheless, it is a clear anticipation of possible world semantics. But what concerns us here is how Carnap treats designating terms in such a setting. Consider predicates P
and Q. For Carnap these are intensionally equivalent if ∀x(Px ≡ Qx) is an L-truth, that is, in each state-description P and Q have the same extension. Without being quite explicit about it, Carnap is
proposing that the intension of a predicate is an assignment of an extension for it to each state-description—intensional identity means identity of extension across all state-descriptions and not
just at the actual one. Thus the predicate ‘H’, human, and the predicate ‘FB’, featherless biped, have the same extension—in the actual state-description they apply to the same beings—but they do not
have the same intension since there are other state-descriptions in which their extensions can differ. In a similar way one can model individual expressions, “The extension of an individual
expression is the individual to which it refers.” Thus, ‘Scott’ and ‘the author of Waverley’ have the same extension (in the actual state-description). Carnap proposes calling the intension of an individual expression an individual concept, and such a thing picks out, in each state-description, the individual to which it refers in that state description. Then ‘Scott’ and ‘the author of Waverley’ have different intensions because, as most of us would happily say, they could have been different, that is, there are state-descriptions in which they are different. (I am ignoring the
problems of non-designation in this example.)
Carnap's fundamental idea is that intensions, for whatever entities are being considered, can be given a precise mathematical embodiment as functions on states, while extensions are relative to a
single state. This has been further developed by subsequent researchers, of course with modern possible world semantics added to the mix. The Carnap approach is not the only one around, but it does
take us quite a bit of the way into the intensional thicket. Even though it does not get us all the way through, it will be the primary version considered here, since it is concrete, intuitive, and
natural when it works.
Carnap's work was primarily semantic, and resulted in a logic that did not correspond to any of the formal systems that had been studied up to this point. Axiomatically presented propositional modal
logics were well-established, so it was important to see how (or if) they could be extended to include quantifiers and equality. At issue were decisions about what sorts of things quantifiers range
over, and substitutivity of equals for equals. Quine's modal objections needed to be addressed. Ruth Barcan Marcus began a line of development in (Marcus 1946) by formally extending the propositional
system S2 of C. I. Lewis to include quantification, and developing it axiomatically in the style of Principia Mathematica. It was clear that other standard modal logics besides S2 could have been
used, and S4 was explicitly discussed. The Barcan formula, in the form ◊(∃α)A ⊃ (∃α)◊A, made its first appearance in (Marcus 1946),^[1] though a full understanding of its significance would have to
wait for the development of a possible-world semantics. Especially significant for the present article, her system was further extended in (Marcus 1947) to allow for abstraction and identity. Two
versions of identity were considered, depending on whether things had the same properties (abstracts) or necessarily had them. In the S2 system the two versions were shown to be equivalent, and in
the S4 system, necessarily equivalent. In a later paper (Marcus 1953) the fundamental role of the deduction theorem was fully explored as well.
Marcus proved that in her system identity was necessary if true, and the same for distinctness. She argued forcefully in subsequent works, primarily (Marcus 1961), that morning star/evening star
problems were nonetheless avoided. Names were understood as tags. They might have their designation specified through an initial use of a definite description, or by some other means, but otherwise
names had no meaning, only a designation. Thus they did not behave like definite descriptions, which were more than mere tags. Well, the object tagged by “morning star” and that tagged by “evening star” are the same, and identity between objects is never contingent.
The essential point had been made. One could develop formal modal systems with quantifiers and equality. The ideas had coherence. Still missing was a semantics which would help with the understanding
of the formalism, but this was around the corner.
Carnap's ideas were extended and formalized by Richard Montague, Pavel Tichý, and Aldo Bressan, independently. All made use of some version of Kripke/Hintikka possible world semantics, instead of the
more specialized structure of Carnap. All treated intensions functionally.
In part, Bressan wanted to provide a logical foundation for physics. The connection with physics is this. When we say something has such-and-such a mass, for instance, we mean that if we had
conducted certain experiments, we would have gotten certain results. This does not assume we did conduct those experiments, and thus alternate states (or cases, as Bressan calls them) arise. Hence
there is a need for a rich modal language, with an ontology that includes numbers as well as physical objects. In Bressan 1972, an elaborate modal system was developed, with a full type hierarchy
including numbers as in Principia Mathematica.
Montague's work is primarily in Montague 1960 and 1970, and has natural language as its primary motivation. The treatment is semantic, but in Gallin 1975 an axiom system is presented. The logic
Gallin axiomatized is a full type-theoretic system, with intensional objects of each type. Completeness is proved relative to an analog of Henkin models, familiar for higher type classical logics.
Tichý created a system of intensional logic very similar to that of Montague, beginning, in English, in Tichý 1971, with a detailed presentation in Tichý 1988. Unfortunately his work did not become
widely known. Like Montague's semantics, Tichý's formal work is based on a type hierarchy with intensions mapping worlds to extensions at each type level, but it goes beyond Montague in certain
respects. For one thing, intensions depend not only on worlds, but also on times. For another, in addition to intensions and extensions Tichý also considers constructions, which will be discussed
further here in Section 3.6.1.
As has been noted several times earlier, formal intensional logics have been developed with a full hierarchy of higher types, Church, Montague, Bressan, Tichý for instance. Such logics can be rather
formidable, but Carnap's ideas are often (certainly not always) at the heart of such logics, these ideas are simple, and are sufficient to allow discussion of several common intensional problems.
Somehow, based on its sense (intension, meaning) a designating phrase may designate different things under different conditions—in different states. For instance, “the number of the planets” was
believed to designate 6 in ancient times (counting Earth). Immediately after the discovery of Uranus in 1781 “the number of the planets” was believed to designate 7. If we take as epistemic states of
affairs the universe as conceived by the ancients, and the universe as conceived just after 1781, in one state “the number of the planets” designates 6 and in the other it designates 7. In neither
state were people wrong about the concept of planet, but about the state of affairs constituting the universe. If we suppress all issues of how meanings are determined, how meanings in turn pick out
references, and all issues of what counts as a possible state of affairs, that is, if we abstract all this away, the common feature of every designating term is that designation may change from state
to state—thus it can be formalized by a function from states to objects. This bare-bones approach is quite enough to deal with many otherwise intractable problems.
In order to keep things simple, we do not consider a full type hierarchy—first-order is enough to get the basics across. The first-order fragment of the logic of Gallin 1975 would be sufficient, for
instance. The particular formulation presented here comes from Fitting 2004, extending Fitting and Mendelsohn 1998. Predicate letters are intensional, as they are in every version of Kripke-style
semantics, with interpretations that depend on possible worlds. The only other intensional item considered here is that of individual concept—formally represented by constants and variables that can
designate different objects in different possible worlds. The same ideas can be extended to higher types, but what the ideas contribute can already be seen at this relatively simple level.
Intensional logics often have nothing but intensions—extensions are inferred but are not explicit. However, an approach that is too minimal can make life hard, so consequently here we explicitly
allow both objects and individual concepts which range over objects. There are two kinds of quantification, over each of these sorts. Both extensional and intensional objects are first-class citizens.
Basic ideas are presented semantically rather than proof-theoretically, though both axiom systems and tableau systems exist. Even so, technical details can become baroque, so as far as possible, we
will separate informal presentation, which is enough to get the general idea, from its formal counterpart, which is of more specialized interest. A general acquaintance with modal logic is assumed
(though there is a very brief discussion to establish notation, which varies some from author to author). It should be noted that modal semantics is used here, and generally, in two different ways.
Often one has a particular Kripke model in mind, though it may be specified informally. For instance, we might consider a Kripke model in which the states are the present instant and all past ones,
with later states accessible from earlier ones. Such a model is enlightening when discussing “the King of France” for instance, even though the notion of instant is somewhat vaguely determined. But
besides this use of informally specified concrete models, there is formal Kripke semantics which is a mathematically precise thing. If it is established that something, say □(X ⊃ Y) ⊃ (□X ⊃ □Y), is
valid in all formal Kripke models, we can assume it will be so in our vaguely specified, intuitive models, no matter how we attempt to make them more precise. Informal models pervade our
discussions—their fundamental properties come from the formal semantics.
A propositional language is built up from propositional letters, P, Q, …, using ∧, ∨, ⊃, ¬ and other propositional connectives, and □ (necessary) and ◊ (possible) as modal operators. These operators
can be thought of as alethic, deontic, temporal, epistemic—it will matter which eventually, but it does not at the moment. Likewise there could be more than one version of □, as in a logic of
knowledge with multiple knowers—this too doesn't make for any essential differences.
Kripke semantics for propositional modal logic is, by now, a very familiar thing. Here is a quick presentation to establish notation, and to point out how one of Frege's proposals fits in. A more
detailed presentation can be found in the article on modal logic in this encyclopedia.
3.1.1 The Informal Version
A model consists of a collection of states, some determination of which states are relevant to which, and also some specification of which propositional letters hold at which of these states. States
could be states of the real world at different times, or states of knowledge, or of belief, or of the real world as it might have been had circumstances been different. We have a mathematical
abstraction here. We are not trying to define what all these states might ‘mean,’ we simply assume we have them. Then more complex formulas are evaluated as true or false, relative to a state. At
each state the propositional connectives have their customary classical behavior. For the modal operators. □X, that is, necessarily X, is true at a state if X itself is true at every state that is
relevant to that state (at all accessible states). Likewise ◊X, possibly X, is true at a state if X is true at some accessible state. If we think of things epistemically, accessibility represents
compatibility, and so X is known in a state if X is the case in all states that are compatible with that state. If we think of things alethically, an accessible state can be considered an alternate
reality, and so X is necessary in a state if X is the case in all possible alternative states. These are, by now, very familiar ideas.
3.1.2 The Formal Version
A frame is a structure <G, R>, where G is a non-empty set and R is a binary relation on G. Members of G are states (or possible worlds). R is an accessibility relation. For Γ, Δ ∈ G, Γ R Δ is read “Δ
is accessible from Γ.” A (propositional) valuation on a frame is a mapping, V, that assigns to each propositional letter a mapping from states of the frame to truth values, true or false. For
simplicity, we will abbreviate V(P)(Γ) by V(P, Γ). A propositional model is a structure M = <G, R, V>, where <G, R> is a frame and V is a propositional valuation on that frame.
Given a propositional model M = <G, R, V>, the notion of formula X being true at state Γ will be denoted M, Γ ⊨ X, and is characterized by the following standard rules, where P is atomic.
M, Γ ⊨ P ⇔ V(P, Γ) = true
M, Γ ⊨ X ∧ Y ⇔ M, Γ ⊨ X and M, Γ ⊨ Y
… ⇔ …
M, Γ ⊨ □X ⇔ M, Δ ⊨ X for every Δ ∈ G with Γ R Δ
M, Γ ⊨ ◊X ⇔ M, Δ ⊨ X for some Δ ∈ G with Γ R Δ
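These clauses translate directly into a recursive evaluator. The following sketch implements them for a finite model; the representation of models and formulas is invented here for illustration.

```python
# A minimal propositional Kripke model evaluator, following the
# truth clauses above: box X holds at a state when X holds at every
# accessible state; dia X when X holds at some accessible state.

def holds(model, state, formula):
    """Evaluate a formula at a state.
    model: dict with 'states' (set), 'R' (set of pairs),
           'V' (dict mapping (atom, state) -> bool).
    formula: an atom (string) or nested tuple, e.g. ('box', 'P')."""
    if isinstance(formula, str):                      # atomic case
        return model['V'].get((formula, state), False)
    op = formula[0]
    if op == 'not':
        return not holds(model, state, formula[1])
    if op == 'and':
        return holds(model, state, formula[1]) and holds(model, state, formula[2])
    if op == 'or':
        return holds(model, state, formula[1]) or holds(model, state, formula[2])
    if op == 'imp':
        return (not holds(model, state, formula[1])) or holds(model, state, formula[2])
    if op == 'box':                                   # all accessible states
        return all(holds(model, d, formula[1])
                   for d in model['states'] if (state, d) in model['R'])
    if op == 'dia':                                   # some accessible state
        return any(holds(model, d, formula[1])
                   for d in model['states'] if (state, d) in model['R'])
    raise ValueError(op)

# A two-state model: G = {1, 2}, with 1 R 2, and P true only at state 2.
M = {'states': {1, 2}, 'R': {(1, 2)}, 'V': {('P', 2): True}}
print(holds(M, 1, ('dia', 'P')))   # True: P holds at an accessible state
print(holds(M, 2, ('box', 'P')))   # True vacuously: nothing is accessible from 2
```

The vacuous truth of □P at a dead-end state is a standard feature of the semantics, not an artifact of the code.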
Suppose we think about formulas using an intensional/extensional distinction. Given a model M, to each formula X we can associate a function, call it f[X], mapping states to truth values, where we
set f[X](Γ) = true just in case M, Γ ⊨ X.
Think of the function f[X] as the intensional meaning of the formula X —indeed, think of it as the proposition expressed by the formula (relative to a particular model, of course). At a state Γ, f[X]
(Γ) is a truth value—think of this as the extensional meaning of X at that state. This is a way of thinking that goes back to Frege, who concluded that the denotation of a sentence should be a truth
value, but the sense should be a proposition. He was a little vague about what constituted a proposition—the formalization just presented provides a natural mathematical entity to serve the purpose,
and was explicitly proposed for this purpose by Carnap. It should be clear that the mathematical structure does, in a general way, capture some part of Frege's idea. Incidentally we could, with no
loss, replace the function f[X] on states with the set { Γ ∈ G | f[X](Γ) = true }. The function f[X] is simply the characteristic function of this set. Sets like these are commonly referred to as
propositions in the modal logic community. In a technical sense, then, Frege's ideas on this particular topic have become common currency.
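The passage from the function f[X] to the corresponding set of states is a one-line computation. A tiny self-contained sketch (state names and the sample truth values are invented):

```python
# The intension f_X of a formula, relative to a model, determines the
# proposition it expresses: the set of states at which it is true.

states = {'G1', 'G2', 'G3'}

# Suppose, for some fixed formula X in some fixed model, we already
# know its truth value at each state:
f_X = {'G1': True, 'G2': False, 'G3': True}

# The proposition expressed by X is the characteristic set of f_X.
proposition_X = {s for s in states if f_X[s]}
print(proposition_X)   # {'G1', 'G3'}
```

Going the other way, the function is recovered from the set by membership testing, which is why the two representations are interchangeable.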
First we discuss some background intuitions, then introduce a formal semantics. Intensions will be introduced formally in Section 3.3. The material discussed here can be found more fully developed in
(Fitting and Mendelsohn 1998, Hughes and Cresswell 1996), among other places.
3.2.1 Actualist and Possibilist
If we are to think of an intension as designating different things under different circumstances, we need things. At the propositional level truth values play the role of things, but at the first
order level something more is needed. In classical logic each model has a domain, the things of that model, and quantifiers are understood as ranging over the members of that domain. It is, of
course, left open what constitutes a thing—any collection of any sort can serve as a domain. That way, if someone has special restrictions in mind because of philosophical or mathematical
considerations, they can be accommodated. It follows that the validities of classical logic are, by design, as general as possible—they are true no matter what we might choose as our domain, no
matter what our things are.
A similar approach was introduced for modal logics in Kripke 1963. Domains are present, but it is left open what they might consist of. But there is a complication that has no classical counterpart:
in a Kripke model there are multiple states. Should there be a single domain for the entire model, or separate domains for each state? Both have natural intuitions.
Consider a version of Kripke models in which a separate domain is associated with each state of the model. At each state, quantifiers are thought of as ranging over the domain associated with that
state. This has come to be known as an actualist semantics. Think of the domain associated with a state as the things that actually exist at that state. Thus, for example, in the so-called real world
the Great Pyramid of Khufu is in the domain, but the Lighthouse of Alexandria is not. If we were considering the world of, say, 1300, both would be in the domain. In an actualist approach, we need to
come to some decision on what to do with formulas containing references to things that exist in other states but not in the state we are considering. Several approaches are plausible; we could take
such formulas to be false, or we could take them to be meaningless, for instance, but this seems to be unnecessarily restrictive. After all, we do say things like “the Lighthouse of Alexandria no
longer exists,” and we think of it as true. So, the formal version that seems most useful takes quantifiers as ranging over domains state by state, but otherwise allows terms to reference members of
any domain. The resulting semantics is often called varying domain as well as actualist.
Suppose we use the actualist semantics, so each state has an associated domain of actually existing things, but suppose we allow quantifiers to range over the members of any domain, without
distinction, which means quantifiers are ranging over the same set, at every state. What are the members of that set? They are the things that exist at some state, and so at every state they are the
possible existents—things that might exist. Lumping these separate domains into a single domain of quantification, in effect, means we are quantifying over possibilia. Thus, a semantics in which
there is a single domain over which quantifiers range, the same for every state, is often called possibilist semantics or, of course, constant domain semantics.
Possibilist semantics is simpler to deal with than the actualist version—we have one domain instead of many for quantifiers to range over. And it turns out that if we adopt a possibilist approach,
the actualist semantics can be simulated. Suppose we have a single domain of quantification, possibilia, and a special predicate, E, which we think of as true, at each state, of the things that
actually exist at that state. If ∀ is a quantifier over the domain of possibilia, we can think of the relativized quantifier, ∀x(E(x) ⊃ …) as corresponding to actualist quantification. (We need to
assume that, at each state, E is true of something—this corresponds to assuming domains are non-empty.) This gives an embedding of the actualist semantics into the possibilist one, a result that can
be formally stated and proved. Here possibilist semantics will be used, and we assume we have an existence predicate E available.
3.2.2 Possibilist Semantics, Formally
The language to be used is a straightforward first order extension of the propositional modal language. There is an infinite list of object variables, x, y, x[1], x[2], …, and a list of relation
symbols, R, P, P[1], P[2], …, of all arities. Among these is the one-place symbol E and the two-place symbol =. Constant and function symbols could be added, but let's keep things relatively simple,
along with simply relative. If x[1], …, x[n] are object variables and P is an n-place relation symbol, P(x[1], …, x[n]) is an atomic formula. We'll write x = y in place of = (x,y). More complex
formulas are built up using propositional connectives, modal operators, and quantifiers, ∀ and ∃, in the usual way. Free and bound occurrences of variables have the standard characterization.
A first order model is a structure <G, R, D[O], I> where <G, R> is a frame, as in Section 3.1, D[O] is a non-empty object domain, and I is an interpretation that assigns to each n-place relation
symbol P a mapping, I(P) from G to subsets of D[O]^n. We'll write I(P, Γ) as an easier-to-read version of I(P)(Γ). It is required that I(=, Γ) is the equality relation on D[O], for every state Γ, and
I(E, Γ) is non-empty, for every Γ. A first-order valuation in a model is a mapping v that assigns a member of D[O] to each variable. Note that first order valuations are not state-dependent in the
way that interpretations are. A first order valuation w is an x-variant of valuation v if v and w agree on all variables except possibly for x. Truth, at a state Γ of a model M = <G, R, D[O], I>,
with respect to a first order valuation v, is characterized as follows, where P(x[1], …, x[n]) is an atomic formula:
M, Γ ⊨[v] P(x[1], …, x[n]) ⇔ <v(x[1]), …, v(x[n])> ∈ I(P, Γ)
M, Γ ⊨[v] X ∧ Y ⇔ M, Γ ⊨[v] X and M, Γ ⊨[v] Y
… ⇔ …
M, Γ ⊨[v] □X ⇔ M, Δ ⊨[v] X for every Δ ∈ G with Γ R Δ
M, Γ ⊨[v] ◊X ⇔ M, Δ ⊨[v] X for some Δ ∈ G with Γ R Δ
M, Γ ⊨[v] ∀xX ⇔ M, Γ ⊨[w] X for every x-variant w of v
M, Γ ⊨[v] ∃xX ⇔ M, Γ ⊨[w] X for some x-variant w of v
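The clauses above are mechanical enough to run. Here is a minimal Python sketch of the possibilist semantics over a two-state model; the model, the tuple encoding of formulas, and every name in it are invented for illustration, and only a few connectives are included.

```python
# A hypothetical constant-domain (possibilist) Kripke model.  Nothing
# here is from the formal text; it is an illustrative encoding only.
G = {'G1', 'G2'}                   # states
R = {('G1', 'G1'), ('G1', 'G2')}   # accessibility relation
D = {'a', 'b'}                     # one object domain for every state

# I(P, state): the extension of P at each state
I = {'P': {'G1': {('a',)}, 'G2': {('a',), ('b',)}}}

def holds(state, formula, v):
    """Truth at a state with respect to a first order valuation v."""
    op = formula[0]
    if op == 'atom':               # ('atom', P, x1, ..., xn)
        _, P, *xs = formula
        return tuple(v[x] for x in xs) in I[P][state]
    if op == 'not':
        return not holds(state, formula[1], v)
    if op == 'and':
        return holds(state, formula[1], v) and holds(state, formula[2], v)
    if op == 'box':                # true at every accessible state
        return all(holds(t, formula[1], v) for (s, t) in R if s == state)
    if op == 'dia':                # true at some accessible state
        return any(holds(t, formula[1], v) for (s, t) in R if s == state)
    if op == 'all':                # every x-variant of v
        _, x, body = formula
        return all(holds(state, body, {**v, x: d}) for d in D)
    if op == 'ex':                 # some x-variant of v
        _, x, body = formula
        return any(holds(state, body, {**v, x: d}) for d in D)
    raise ValueError(op)

# The two sides of the Barcan equivalence ∀x□P(x) ≡ □∀xP(x)
lhs = ('all', 'x', ('box', ('atom', 'P', 'x')))
rhs = ('box', ('all', 'x', ('atom', 'P', 'x')))
```

With the domain held constant, the two sides agree at each state of this model (both false at G1, where b lacks P, and both vacuously true at G2, which has no successors).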
Call a formula valid if it is true at every state of every first order model with respect to every first-order valuation, as defined above. Among the validities are the usual modal candidates, such
as □(X ⊃ Y) ⊃ (□X ⊃□Y), and the usual quantificational candidates, such as ∀xX ⊃ ∃xX. We also have mixed cases such as the Barcan and converse Barcan formulas: ∀x□X ≡ □∀xX, which are characteristic
of constant domain models, as was shown in Kripke 1963. Because of the way equality is treated, we have the validity of both ∀x∀y(x = y ⊃ □x = y) and ∀x∀y(x ≠ y ⊃ □x ≠ y). Much has been made about
the identity of the number of the planets and 9 (Quine 1963), or the identity of the morning star and the evening star (Frege 1892), and how these identities might behave in modal contexts. But that
is not really a relevant issue here. Phrases like “the morning star” have an intensional aspect, and the semantics outlined so far does not take intensional issues into account. As a matter of fact,
the morning star and the evening star are the same object and, as Gertrude Stein might have said, “an object is an object is an object.” The necessary identity of a thing and itself should not come
as a surprise. Intensional issues will be dealt with shortly.
Quantification is possibilist—domains are constant. But, as was discussed in Section 3.2.1, varying domains can be brought in indirectly by using the existence predicate, E, and this allows us to
introduce actualist quantification definitionally. Let ∀^ExX abbreviate ∀x(E(x) ⊃ X), and let ∃^ExX abbreviate ∃x(E(x) ∧ X). Then, while ∀xφ(x) ⊃ φ(y) is valid, assuming y is free for x in φ(x), we
do not have the validity of ∀^Exφ(x) ⊃ φ(y). What we have instead is the validity of [∀^Exφ(x) ∧ E(y)] ⊃ φ(y).
As another example of possibilist/actualist difference, consider ∃x□P(x) ⊃ □∃xP(x). With possibilist quantifiers, this is valid and reasonable. It asserts that if some possible object has the P property in all alternative states, then in every alternative state some possible object has the P property. But when possibilist quantification is replaced with actualist, ∃^Ex□P(x) ⊃ □∃^ExP(x), the result is no
longer valid. As a blatant (but somewhat informal) example, say the actual state is Γ and P is the property of existing in state Γ. Then, at Γ, ∃^Ex□P(x) says something that actually exists has, in
all alternative states, the property of existing in the state Γ. This is true; in fact it is true of everything that exists in the state Γ. But □∃^ExP(x) says that in every alternative state there
will be an actually existent object that also exists in the state Γ, which need not be the case.
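The failure just described can be checked on a concrete two-state model. The following Python fragment is a hypothetical illustration, with actualist quantifiers written out directly as relativizations to the existence sets.

```python
# Invented countermodel to ∃ᴱx□P(x) ⊃ □∃ᴱxP(x): P(x) is "x exists
# at G", and what exists differs from state to state.
D = {'a', 'b'}                      # possibilist (constant) domain
states = {'G', 'H'}
R = {('G', 'H')}
E = {'G': {'a'}, 'H': {'b'}}        # what actually exists at each state
P = {s: E['G'] for s in states}     # P holds, everywhere, of G's existents

def succ(s):
    return {t for (g, t) in R if g == s}

# ∃ᴱx□P(x) at G: some x existing at G has P at every successor of G
antecedent = any(all(x in P[t] for t in succ('G')) for x in E['G'])

# □∃ᴱxP(x) at G: at every successor, something existing there has P
consequent = all(any(x in P[t] for x in E[t]) for t in succ('G'))
```

Here the antecedent is true (a exists at G and has P everywhere) while the consequent is false (at H only b exists, and b lacks P), exactly as the informal example predicts.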
In Section 3.2 a first order modal logic was sketched, in which quantification was over objects. Now a second kind of quantification is added, over intensions. As has been noted several times, an
intensional object, or individual concept, will be modeled by a function from states to objects, but now we get into the question of what functions should be allowed. Intensions are supposed to be
related to meanings. If we consider meaning to be a human construct, what constitutes an intension should probably be restricted. There should not, for instance, be more intensional objects than
there are sentences that can specify meanings, and this limits intensions to a countable set. Or we might consider intensions as ‘being there,’ and we pick out the ones that we want to think about,
in which case cardinality considerations don't apply. This is an issue that probably cannot be settled once and for all. Instead, the semantics about to be presented allows for different choices in
different models—it is not required that all functions from states to objects be present. It should be noted that, while this semantical gambit does have philosophical justification, it also makes an
axiomatization possible. The fundamental point is the same as in the move from first to second order logic. If we insist that second order quantifiers range over all sets and relations, an
axiomatization is not possible. If we use Henkin models, in which the range of second order quantifiers has more freedom, an axiomatization becomes available.
Formulas are constructed more-or-less in the obvious way, with two kinds of quantified variables instead of one: extensional and intensional. But there is one really important addition to the
syntactic machinery, and it requires some discussion. Suppose we have an intension, f, that picks out an object in each state. For example, the states might be various ways the universe could have
been constituted, and at each state f picks out the number of the planets, which could, of course, be 0. Suppose P is a one-place relation symbol—what should be meant by P(f)? On the one hand, it
could mean that the intension f has the property P, on the other hand it could mean that the object designated by f has the property P. Both versions are useful and correspond to things we say every
day. We will allow for both, but the second version requires some cleaning up. Suppose P(f) is intended to mean that the object designated by f (at a state) has property P. Then how do we read ◊P(f)?
Under what circumstances should we take it to be true at state Γ? It could be understood as asserting the thing designated by f at Γ (call it f[Γ]) has the ‘possible-P’ property, and so at some
alternative state Δ we have that f[Γ] has property P. This is the de re reading, in which a possible property is ascribed to a thing. Another way of understanding ◊P(f) takes the possibility operator
as primary: to say the formula is true at Γ means that at some alternative state, Δ, we have P(f), and so at Δ the object designated by f (call it f[Δ]) has property P. This is the de dicto reading,
possibility applies to a sentence. Of course there is no particular reason why f[Γ] and f[Δ] should be identical. The de re and de dicto readings are different, both need representation, and we
cannot manage this with the customary syntax.
An abstraction mechanism will be used to disambiguate our syntax. The de re reading will be symbolized [λx◊P(x)](f) and the de dicto will be symbolized ◊[λxP(x)](f). The (incomplete) expression [λxX] is often called a predicate abstraction; one can think of it as the predicate abstracted from the formula X. In [λx◊P(x)](f) we are asserting that f has the possible-P property, while in ◊[λxP(x)](f) we are asserting the possibility that f has the P property. Abstraction disambiguates. What we have said about ◊ applies equally well to □, of course. It should be noted that one could simply
think of abstraction as a scope-specifying device, in a tradition that goes back to Russell, who made use of such a mechanism in his treatment of definite descriptions. Abstraction in modal logic
goes back to Carnap 1947, but in a way that ignores the issues discussed above. The present usage comes from Stalnaker & Thomason 1968 and Thomason & Stalnaker 1968.
Now the more technical part begins. There are two kinds of variables, object variables as before, and intension variables, or individual concept variables, f, g, g[1], g[2], …. With two kinds of
variables present, the formation of atomic formulas becomes a little more complex. From now on, instead of just being n-place for some n, a relation symbol will have a type associated with it, where
a type is an n-tuple whose entries are members of {O, I}. An atomic formula is an expression of the form P(α[1], …, α[n]) where P is a relation symbol whose type is <t[1], …, t[n]> and, for each i,
if t[i] = O then α[i] is an object variable, and if t[i] = I then α[i] is an intension variable. Among the relation symbols we still have E, which now is of type <O>, and we have =, of type <O,O>.
Formulas are built up from atomic formulas in the usual way, using propositional connectives, modal operators, and two kinds of quantifiers: over object variables and over intension variables. In
addition to the usual formula-creating machinery, we have the following. If X is a formula, x is an object variable, and f is an intension variable, then [λxX](f) is a formula, in which the free
variable occurrences are those of X except for x, together with the displayed occurrence of f.
To distinguish the models described here from those in Section 3.2.2, these will be referred to as FOIL models, standing for first order intensional logic. They are discussed more fully in (Fitting
2004). A FOIL model is a structure M = <G, R, D[O], D[i], I> where <G, R, D[O], I> meets the conditions of Section 3.2.2, and in addition, D[i] is a non-empty set of functions from G to D[O]; it is
the intension domain.
A first-order valuation in FOIL model M is a mapping that assigns to each object variable a member of D[O], as before, and to each intension variable a member of D[i]. If f is an intension variable,
we'll write v(f, Γ) for v(f)(Γ). Now, the definition of truth, at a state Γ of a model M, with respect to a valuation v, meets the conditions set forth in Section 3.2.2 and, in addition, the following:
M, Γ ⊨[v] ∀fX ⇔ M, Γ ⊨[w] X, for every f-variant w of v
M, Γ ⊨[v] ∃fX ⇔ M, Γ ⊨[w] X, for some f-variant w of v
(1) M, Γ ⊨[v] [λxX](f) ⇔ M, Γ ⊨[w] X, where w is like v except that w(x) = v(f, Γ)
Let us agree to abbreviate [λx[λyX](g)](f) by [λxyX](f, g), when convenient. Suppose f is intended to be the intension of “the morning star,” and g is intended to be the intension of “the evening
star.” Presumably f and g are distinct intensions. Even so, [λxyx = y](f, g) is true at the real world—both f and g do designate the same object.
Here is another example that might help make the de re / de dicto distinction clearer. Suppose f is the intension of “the tallest person,” and g is the intension of “the oldest person,” and suppose
it happens that, at the moment, these are the same people. Also, let us read □ epistemically. It is unlikely we would say that □[λxyx = y](f, g) is the case. We can read □[λxyx = y](f, g) as saying
we know that f and g are the same. It asserts that under all epistemic alternatives—all the various ways the world could be that are compatible with what we know—f and g designate the same object,
and this most decidedly does not seem to be the case. However, we do have [λxy□(x = y)](f, g), which we can read as saying we know of f and g, that is, of their denotations, that they are the same,
and this could be the case. It asserts that in all epistemic alternative states, what f and g designate in this one will be the same. In the setup described, f and g do designate the same object, and
identity of objects carries over across states.
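The contrast can also be made computational. Below is a small Python sketch, on an invented two-state model with a non-rigid intension f, of the two readings of ◊P(f) that abstraction separates: the de re reading fixes f's designation at the current state, while the de dicto reading looks it up at the alternative state.

```python
# A hypothetical two-state model; all names here are illustrative.
R = {('G', 'H')}
P = {'G': set(), 'H': {'b'}}        # extension of P at each state
f = {'G': 'a', 'H': 'b'}            # a non-rigid intension

def succ(s):
    return {t for (g, t) in R if g == s}

# de re, [λx ◊P(x)](f) at G: fix the designation f(G) first, then
# ask whether that object has P at some alternative state
de_re = any(f['G'] in P[t] for t in succ('G'))

# de dicto, ◊[λx P(x)](f) at G: move to an alternative state first,
# then look up f's designation there
de_dicto = any(f[t] in P[t] for t in succ('G'))
```

Since f designates a at G but b at H, the de re reading fails while the de dicto reading holds: the two readings genuinely come apart for non-rigid intensions.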
It should be noted that the examples of designating terms just given are all definite descriptions. These pick out different objects in different possible worlds quite naturally. The situation with
proper names and with mathematics is different, and will be discussed later in section 3.6.
3.4.1 A Formal Example
Here's an example to show how the semantics works in a technical way. An intension is rigid if it is constant, the same in every state. We might think of a rigid intension as a disguised object,
identifying it with its constant value. It should not be a surprise, then, that for rigid intensions, the distinction between de re and de dicto disappears. Indeed, something a bit stronger can be
shown. Instead of rigidity, consider the weaker notion called local rigidity in Fitting and Mendelsohn 1998: an intension is locally rigid at a state if it has the same designation at that state that
it has at all accessible ones. To say f is locally rigid at a state, then, amounts to asserting the truth of [λx□[λyx = y](f)](f) at that state. Local rigidity at a state implies the de re /de
dicto distinction vanishes at that state. To show how the formal semantics works, here is a verification of the validity of
(2) [λx□[λyx = y](f)](f) ⊃ ([λx◊X](f) ⊃ ◊[λxX](f))
In a similar way one can establish the validity of
(3) [λx□[λyx = y](f)](f) ⊃ [◊[λxX](f) ⊃ [λx◊X](f)]
and from these two follows the validity of
(4) [λx□[λyx = y](f)](f) ⊃ ([λx◊X](f) ≡ ◊[λxX](f))
which directly says local rigidity implies the de re /de dicto distinction vanishes.
Suppose (2) were not valid. Then there would be a model M = <G, R, D[O], D[i], I>, a state Γ of it, and a valuation v in it, such that
(5) M, Γ ⊨[v] [λx□[λyx = y](f)](f)
(6) M, Γ ⊨[v] [λx◊X](f)
(7) not M, Γ ⊨[v] ◊[λxX](f)
From (6) we have the following, where w is the x-variant of v such that w(x) = v(f, Γ).
(8) M, Γ ⊨[w] ◊X
By (8) there is some Δ ∈ G with Γ R Δ such that we have the following.
(9) M, Δ ⊨[w] X
Then, as a consequence of (7)
(10) not M, Δ ⊨[v] [λxX](f)
and hence we have the following, where w′ is the x-variant of v such that w′(x) = v(f, Δ).
(11) not M, Δ ⊨[w′] X
Now from (5), since w(x) = v(f, Γ), we have
(12) M, Γ ⊨[w] □[λyx=y](f)
and so
(13) M, Δ ⊨[w] [λyx=y](f)
and hence
(14) M, Δ ⊨[w″] x=y
where w″ is the y-variant of w such that w″(y) = w(f, Δ); note that w(f, Δ) = v(f, Δ), since w, as an x-variant of v, agrees with v on f.
We claim that valuations w and w′ are the same, which means that (9) and (11) are contradictory. Since both are x-variants of v, it is enough to show that w(x) = w′(x), that is, v(f, Γ) = v(f, Δ),
which is intuitively what local rigidity says. Proceeding formally, v(f, Γ) = w(x) = w″(x) since w″ is a y-variant of w and so they agree on x. We also have v(f, Δ) = w″(y). And finally, w″(x) = w″(y
) by (14).
Having reached a contradiction, we conclude that (2) must be valid.
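For readers who prefer machine-checked evidence, here is a brute-force Python verification of the implication in (2), taking X to be the atomic P(x), over one small invented model: every locally rigid case passes, while a non-rigid intension at G shows that the rigidity hypothesis cannot be dropped.

```python
# A hypothetical three-state model; the states, extension of P, and
# intension domain D_i are all invented for the check.
states = ['G', 'H', 'K']
R = {('G', 'H'), ('G', 'K'), ('H', 'K')}
P = {'G': {'a'}, 'H': {'b'}, 'K': {'a'}}
D_i = [{'G': 'a', 'H': 'a', 'K': 'b'},    # non-rigid
       {'G': 'b', 'H': 'b', 'K': 'b'}]    # rigid

def succ(s):
    return {t for (g, t) in R if g == s}

def locally_rigid(f, s):                  # [λx□[λy x = y](f)](f) at s
    return all(f[t] == f[s] for t in succ(s))

def de_re(f, s):                          # [λx ◊P(x)](f) at s
    return any(f[s] in P[t] for t in succ(s))

def de_dicto(f, s):                       # ◊[λx P(x)](f) at s
    return any(f[t] in P[t] for t in succ(s))

# Counterexamples to (2): local rigidity plus de re, without de dicto
violations = [(f, s) for f in D_i for s in states
              if locally_rigid(f, s) and de_re(f, s) and not de_dicto(f, s)]
```

In this model `violations` is empty, while the non-rigid intension satisfies the de re reading at G without the de dicto one, so the antecedent of (2) is doing real work.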
3.4.2 Possible Extra Requirements
In models the domain of intensions is to be some non-empty set of functions from states to objects. We have deliberately left vague the question of which ones we must have. There are some conditions
we might want to require. Here are some considerations along these lines, beginning with a handy abbreviation.
(15) D(f, x) abbreviates [λyy=x](f) (where x and y are distinct object variables).
Working through the FOIL semantics, M, Γ ⊨[v] D(f,x) is true just in case v(f, Γ) = v(x). Thus D(f, x) says the intension f designates the object x.
The formula ∀f∃xD(f, x) is valid in FOIL models as described so far. It simply says intensions always designate. On the other hand, there is no a priori reason to believe that every object is
designated by some intension, but under special circumstances we might want to require this. We can do it by restricting ourselves to models in which we have the validity of
(16) ∀x∃fD(f, x)
If we require (16), quantification over objects is reducible to intensional quantification:
(17) ∀xΦ ≡ ∀f[λxΦ](f).
More precisely, the implication (16) ⊃ (17) is valid in FOIL semantics.
We also might want to require the existence of choice functions. Suppose that we have somehow associated an object d[Γ] with each state Γ of a model. If our way of choosing d[Γ] can be specified by a
formula of the language, we might want to say we have specified an intension. Requiring the validity of the following formula seems as close as we can come to imposing such an existence condition on
FOIL models. For each formula Φ:
□∃xΦ ⊃ ∃f□[λxΦ](f).
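That this is a genuine further requirement can be seen concretely. In the invented model sketched below in Python, □∃xΦ holds at G, but the (deliberately impoverished) intension domain contains no function tracking the Φ-witnesses, so ∃f□[λxΦ](f) fails.

```python
# A hypothetical model in which the choice-function condition fails.
states = {'G', 'H1', 'H2'}
R = {('G', 'H1'), ('G', 'H2')}
D = {'a', 'b'}
Phi = {'G': set(), 'H1': {'a'}, 'H2': {'b'}}    # extension of Φ(x)
D_i = [{'G': 'a', 'H1': 'b', 'H2': 'a'}]        # one intension, the "wrong" one

def succ(s):
    return {t for (g, t) in R if g == s}

# □∃xΦ at G: every successor has a Φ-witness
box_exists = all(any(x in Phi[t] for x in D) for t in succ('G'))

# ∃f□[λxΦ](f) at G: some available intension designates a
# Φ-witness at every successor
exists_box = any(all(f[t] in Phi[t] for t in succ('G')) for f in D_i)

# Adding the right choice function to D_i would restore the implication:
good_f = {'G': 'a', 'H1': 'a', 'H2': 'b'}
```

So the schema above is not automatically valid; requiring it amounts to requiring that D[i] be closed under the relevant choices.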
“The King of France in 1700” denotes an object, Louis XIV, who does not exist, but did. “The present King of France” does not denote at all. To handle such things, the representation of an intension
can be generalized from being a total function from states to objects, to being a partial function. We routinely talk about non-existent objects—we have no problem talking about the King of France in
1700. But there is nothing to be said about the present King of France—there is no such thing. This will be our guide for truth conditions in our semantics.
3.5.1 Modifying the Semantics
The language stays the same, but intension variables are now interpreted by partial functions on the set of states—functions whose domains may be proper subsets of the set of states. Thus M = <G, R,
D[O], D[i], I> is a partial FOIL model if it is as in Section 3.4 except that members of D[i] are partial functions from G to D[O]. Given a partial FOIL model M and a valuation v in it, an intension
variable f designates at state Γ of this model with respect to v if Γ is in the domain of v(f).
Following the idea that nothing can be said about the present King of France, we break condition (1) from Section 3.4 into two parts. Given a partial FOIL model M and a valuation v in it:
1. If f does not designate at Γ with respect to v,
it is not the case that M, Γ ⊨[v] [λxX](f)
2. If f designates at Γ with respect to v,
M, Γ ⊨[v] [λxX](f) ⇔ M, Γ ⊨[w] X
where w is like v except that w(x) = v(f, Γ)
Thus designating terms behave as they did before, but nothing can be truly asserted about non-designating terms.
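Here is a small Python sketch of the two clauses, with intensions modeled as partial functions, that is, dicts defined only on the states where they designate; the states, predicates, and names are invented.

```python
# A hypothetical temporal model with two states and two objects.
D = {'louis', 'george'}
Bald = {'Now': {'george'}, 'Then': {'louis'}}   # extension of Bald
not_bald = {s: D - Bald[s] for s in Bald}       # extension of ¬Bald
king = {'Then': 'louis'}    # "the King of France": designates only at Then

def holds_abs(state, pred, f):
    """[λx pred(x)](f) at a state, following clauses 1 and 2."""
    if state not in f:          # clause 1: f fails to designate here,
        return False            # so the abstraction is simply false
    return f[state] in pred[state]   # clause 2: the usual condition
```

At Now the term fails to designate, so both [λx Bald(x)](king) and [λx ¬Bald(x)](king) come out false, while at Then the abstraction behaves classically.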
Recall, we introduced a formula (15) abbreviated by D(f,x) to say f designates x. Using that, we introduce a further abbreviation.
(18) D(f) abbreviates ∃xD(f, x)
This says f designates. Incidentally, we could have used [λxx = x](f) just as well, thus avoiding quantification.
It is important to differentiate between existence and designation. As things have been set up here, existence is a property of objects, but designation really applies to terms of the formal
language, in a context. To use a temporal example from Fitting and Mendelsohn 1998, in the usual sense “George Washington” designates a person who does not exist, though he once did, while “George
Washington's eldest son,” does not designate at all. That an intensional variable f designates an existent object is expressed by an abstract, [λxE(x)](f). We have to be a bit careful about
non-existence though. That f designates a non-existent is not simply the denial of the previous expression, ¬[λxE(x)](f). After all, [λxE(x)](f) expresses that f designates an existent, so its
denial says either f does not designate, or it does, but designates a non-existent. To express that f designates, but designates a non-existent, we need [λx¬E(x)](f). The formula ∀f([λxE(x)](f) ∨ ¬
[λxE(x)](f)) is valid, but ∀f([λxE(x)](f) ∨ [λx¬E(x)](f)) is not—one can easily construct partial FOIL models that invalidate it. What we do have is the following important item.
(19) ∀f[D(f) ≡ ([λxE(x)](f) ∨ [λx¬E(x)](f))]
In words, an intension term designates just in case it designates either an existent or a non-existent.
3.5.2 Definite Descriptions
In earlier parts of this article, among the examples of intensions and partial intensions have been “the present King of France,” “the tallest person,” and “the oldest person.” One could add to these
“the number of people,” and “the positive solution of x^2 − 9 = 0.” All have been specified using definite descriptions. In a temporal model, the first three determine partial intensions (there have
been instants of time with no people); the fourth determines an intension that is not partial; the fifth determines an intension that is rigid.
So far we have been speaking informally, but there are two equivalent ways of developing definite descriptions ideas formally. The approach introduced by Bertrand Russell (Russell 1905, Whitehead and
Russell 1925) is widely familiar and probably needs little explication here. Suffice it to say, it extends to the intensional setting without difficulty. In this approach, a term-like expression, ιyφ(y), is introduced, where φ(y) is a formula and y is an object variable. It is read, “the y such that φ(y).” This expression is given no independent meaning, but there is a device to translate it away in an appropriate context. Thus, [λxψ(x)](ιyφ(y)) is taken to abbreviate the formula ∃y[∀z(φ(z) ≡ z = y) ∧ ψ(y)]. (The standard device has been used of writing φ(z) to represent the substitution
instance of φ(y) resulting from replacing free occurrences of y with occurrences of z, and modifying bound variables if necessary to avoid incidental capture of z.) The present abstraction notation,
using λ, is not that of Russell, but he used an equivalent scoping device. As he famously pointed out, Russell's method allows us to distinguish between “The present King of France is not bald,”
which is false because there is no present King of France, and “It is not the case that the present King of France is bald,” which is true because “The present King of France is bald” is false. It
becomes the distinction between [λx¬Bald(x)](ιyKing(y)) and ¬[λxBald(x)](ιyKing(y)).
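Russell's contextual elimination is easy to mechanize. The Python sketch below evaluates ∃y[∀z(φ(z) ≡ z = y) ∧ ψ(y)] over an invented domain with no present King of France, reproducing the scope distinction just described.

```python
# A hypothetical domain and predicates; the names are illustrative.
D = {'pierre', 'marie'}
is_king = lambda y: False            # no present King of France
is_bald = lambda y: y == 'pierre'

def the(phi, psi):
    """Russellian reading of 'the phi is psi' over the domain D."""
    return any(all(phi(z) == (z == y) for z in D) and psi(y)
               for y in D)

# Narrow description scope: "The present King of France is not bald"
not_bald_of_king = the(is_king, lambda y: not is_bald(y))

# Wide negation scope: "It is not the case that the King is bald"
not_king_is_bald = not the(is_king, is_bald)
```

The first reading is false because the uniqueness clause fails, and the second is true for exactly the same reason, just as Russell's analysis requires; when the description does designate, as with "the bald one" here, the predicate transfers as expected.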
As an attractive alternative, one could make definite descriptions into first-class things. Enlarge the language so that if φ(y) is a formula where y is an object variable, then ιyφ(y) is an intension
term whose free variables are those of φ(y) except for y. Then modify the definition of formula, to allow these new intension terms to appear in places we previously allowed intension variables to
appear. That leads to a complication, since intension terms involve formulas, and formulas can contain intension terms. In fact, formula and term must be defined simultaneously, but this is no real problem. Semantically we can model definite descriptions by partial intensions. We say the term ιyφ(y) designates at state Γ of a partial FOIL model M with respect to valuation v if M, Γ ⊨[w] φ(y) for exactly
one y-variant w of v. Then the conditions from section 3.5.1 are extended as follows.
3. If ιyφ(y) does not designate at Γ with respect to v,
it is not the case that M, Γ ⊨[v] [λxX](ιyφ(y))
4. If ιyφ(y) designates at Γ with respect to v,
M, Γ ⊨[v] [λxX](ιyφ(y)) ⇔ M, Γ ⊨[w] X
where w is like v except that w(x) = w′(y), for the unique y-variant w′ of v such that M, Γ ⊨[w′] φ(y)
One can show that the Russell approach and the approach just sketched amount to more-or-less the same thing. But with definite descriptions available as formal parts of the language, instead of just
as removable abbreviations in context, one can see they determine intensions (possibly partial) that are specified by formulas.
A property need not hold of the corresponding definite description, that is, [λxφ(x)](ιxφ(x)) need not be valid. This is simply because the definite description might not designate. However, if it
does designate, it must have its defining property. Indeed, we have the validity of the following.
D(ιxφ(x)) ≡ [λxφ(x)](ιxφ(x))
One must be careful about the interaction between definite descriptions and modal operators, just as between them and negation. For instance, D(ιx◊φ(x)) ⊃ ◊D(ιxφ(x)) is valid, but its converse is not.
For a more concrete example of modal/description interaction, suppose K(x) is a formula expressing that x is King of France. In the present state, [λx◊E(x)](ιxK(x)) is false, because the definite description has no designation, but ◊[λxE(x)](ιxK(x)) is true, because there is an alternative (earlier) state in which the definite description designates an existent object.
It was noted that for rigid terms the de re/de dicto distinction collapses. Indeed, if f and g are rigid, [λxyx=y](f, g), □[λxyx=y](f, g) and [λxy□x=y](f, g) are all equivalent. This is a problem
that sets a limit on what can be handled by the Carnap-style logic as presented so far. Two well-known areas of difficulty are mathematics and proper names, especially in an epistemic setting.
3.6.1 Mathematical Problems
How could someone not know that 3 + 2 = 2 + 3? Yet it happens for small children, and for us bigger children there are similar, but more complex, mathematical truths that we do not know.
Obviously the designations of “3 + 2” and “2 + 3” are the same, so their senses must be different. But if we model sense by a function from states to designations, the functions would be the same,
mapping each state to 5. If it is necessary truth that is at issue, there is no problem; we certainly want that 3 + 2 = 2 + 3 is a necessary truth. But if epistemic issues are under consideration,
since we cannot have a possible world in which “3 + 2” and “2 + 3” designate different things, “3 + 2 = 2 + 3” must be a known truth. So again, how could one not know this, or any other mathematical truth?
One possible solution is to say that for mathematical terms, intension is a different thing than it is for definite descriptions like “the King of France.” The expression “3 + 2” is a kind of
miniature computing program. Exactly what program depends on how we were taught to add, but let us standardize on: x + y instructs us to start at the number x and count off the next y numbers. Then
obviously, “3 + 2” and “2 + 3” correspond to different programs with the same output. We might identify the program with the sense, and the output with the denotation. Then we might account for not
knowing that 3 + 2 = 2 + 3 by saying we have not executed the two programs, and so can't conclude anything about the output.
Identifying the intension of a mathematical term with its computational content is a plausible thing to do. It does, however, clash with what came earlier in this article. Expressions like “the King
of France” get treated one way, expressions like “3 + 2” another. For any given expression, how do we decide which way to treat it? It is possible to unify all this. Here is one somewhat
simple-minded way. If we think of the sense of “3 + 2” as a small program, there are certainly states, possible worlds, in which we have not executed the program, and others in which we have. We
might, then, think of the intension of “3 + 2” as a partial function on states, whose domain is the set of states in which the instructions inherent in “3 + 2” have been executed, and mapping those
states to 5. Then, clearly, we can have states of an epistemic possible world model in which we do not know that “3 + 2” and “2 + 3” have the same outputs.
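The counting picture of "3 + 2" can be made literal. In the hypothetical sketch below the term is a small program, and its intension is a partial function defined only at states where that program has been executed.

```python
def add_by_counting(x, y):
    """x + y as: start at the number x and count off the next y numbers."""
    for _ in range(y):
        x += 1
    return x

# Suppose the program for "3 + 2" has been run only at state s1.
executed = {'s1'}
intension_3_plus_2 = {s: add_by_counting(3, 2) for s in executed}
```

The programs for "3 + 2" and "2 + 3" differ but have the same output, and at a state outside `executed` the partial intension is simply undefined, matching the epistemic picture sketched above.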
This can be pushed only so far. We might be convinced by some general argument that addition is a total function—always defined. Then it is conceivable that we might know “3 + 2” designates some
number, but not know what it is. But this cannot be captured using the semantics outlined thus far, assuming arithmetic terms behave correctly. If at some state we know ∃x([λyx = y](3 + 2)), that
is, we know “3 + 2” designates, then at all compatible states, “3 + 2” designates, and since arithmetic terms behave correctly, at all compatible states “3 + 2” must designate 5, and hence we must
know [λy5 = y](3 + 2) at the original state. We cannot know “3 + 2” designates without knowing what.
The discussion so far was simply meant to show what kind of difficulties can arise, and the limitations of simple solutions. More complex and sophisticated solutions have been developed. Tichy 1988
presents a three-level approach. There are intensions and extensions, as discussed above, and also a category of constructions. The idea is that expressions determine intensions and extensions, and
this itself is a formal process in which compound expressions act using the simpler expressions that go into their making; compositionality at the level of constructions, in other words. Using this
formal machinery, “3+2” and “2+3” prescribe different constructions; their meaning is not simply captured by their intensional representation as discussed here. Another more complex approach that
fits well with the Carnap-style treatment is in Moschovakis 2006. In this the logic of Gallin 1975 is extended to allow formally for an algorithmic approach to intensions of mathematical terms. For
both Tichý and Moschovakis, the resulting systems are complicated, and details are not presented here.
It is also possible to address the problem from quite a different direction. One does not question the necessity of mathematical truths—the issue is an epistemic one. And for this, it has long been
noted that a Hintikka-style treatment of knowledge does not deal with actual knowledge, but with potential knowledge—not what we know, but what we are entitled to know. Then familiar logical
omniscience problems arise, and we have just seen yet another instance of them. A way out of this was introduced in Fagin and Halpern 1988, called awareness logic. The idea was to enrich Hintikka's
epistemic models with an awareness function, mapping each state to the set of formulas we are aware of at that state. The idea was that an awareness function reflects some bound on the resources we
can bring to bear. With such semantical machinery, we might know simple mathematical truths but not more complex ones, simply because they are too complex for us.
Awareness, in this technical sense, is a blunt instrument. A refinement was suggested in van Benthem 1991: use explicit knowledge terms. As part of a project to provide a constructive semantics for
intuitionistic logic, a formal logic of explicit proof terms was presented in Artemov 2001. Later a possible world semantics for it was created in Fitting 2005. In this logic truths are known for
explicit reasons, and these explicit reasons provide a measure of complexity. The work was subsequently extended to a more general family of justification logics, which are logics of knowledge in which
reasons are made explicit.
In justification logics, instead of the familiar KX of epistemic logic we have t:X, where t is an explicit justification term. The formula t:X is read, “X is known for reason t.” Justification terms
have structure which varies depending on the particular justification logic being investigated. Common to all justification logics is the following minimal machinery. First there are justification
constants, intended to be unanalyzed justifications of accepted logical truths. Second, there are justification variables, standing for arbitrary justifications. And finally there are binary
operations, minimally ⋅ and +. The intention is that if s justifies X ⊃ Y and t justifies X, then s⋅t justifies Y, and also s+t justifies anything that s justifies and also anything that t justifies.
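In standard axiomatic presentations of justification logic (following Artemov 2001), these two intentions are captured by the application and sum axioms:

```latex
% Application: s justifies an implication, t justifies its antecedent,
% so the compound term s.t justifies the consequent
\text{Application:}\quad s{:}(X \supset Y) \supset (t{:}X \supset (s \cdot t){:}Y)

% Sum: s + t justifies whatever either s or t justifies
\text{Sum:}\quad s{:}X \supset (s+t){:}X \qquad t{:}X \supset (s+t){:}X
```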
There are very close connections between justification logics and epistemic logics, embodied in Realization Theorems. This is not the appropriate place to go into details; a thorough discussion of
justification logics can be found in the entry on justification logic.
If one follows the justification logic approach one might say, of 3 + 2 = 2 + 3 or some more complicated mathematical truth, that it is knowable but too hard for us to actually know. That is, the
justification terms embodying our reasons for this knowledge are too complex for us. This follows the general idea of awareness logic, but with a specific and mathematically useful measure of the
complexity of our awareness.
3.6.2 Proper Names
Proper names are even more of a problem than mathematical expressions. These days proper names are generally understood to be rigid designators but, unlike mathematical terms, they have no structure
that we can make use of. Here is a very standard example. Suppose “Hesperus” is used as a name for the evening star, and “Phosphorus” for the morning star. It should be understood that “the evening
star” is conventional shorthand for a definite description, “the first heavenly body seen after sunset” and similarly for “the morning star”. Definite descriptions have structure, they pick out
objects and in different possible worlds they may pick out different objects. But proper names are not like that. Once the designations of “Hesperus” and “Phosphorus” are fixed—as it happens they both
name the planet Venus—that designation is fixed across possible worlds and so they are rigid designators. It follows that while the morning star is the evening star, that identity is not necessary
because definite descriptions are not rigid, but Hesperus is Phosphorus and that identity is a necessary one. How, then, could the identity of Hesperus and Phosphorus not be a known truth, known
without doing any astronomical research?
There is more than one solution of the dilemma just mentioned. One way is very simple indeed. Possible world models can be used to represent various kinds of modalities. They provide mathematical
machinery, but they do not say what the machinery is for. That is up to the user. So, we might want to have such a model to represent necessary truth, or we might want to have such a model to
represent epistemic issues. The argument that proper names are rigid designators applies to models representing necessary truth. It does not follow that this is the case for epistemic models too.
Here is a quote from (Kripke 1980) that sheds some light on the issue.
But being put in a situation where we have exactly the same evidence, qualitatively speaking, it could have turned out that Hesperus was not Phosphorus; that is, in a counterfactual world in
which ‘Hesperus’ and ‘Phosphorus’ were not used in the way that we use them, as names of this planet, but as names of some other objects, one could have had qualitatively identical evidence and
concluded that ‘Hesperus’ and ‘Phosphorus’ named two different objects. But we, using the names as we do right now, can say in advance, that if Hesperus and Phosphorus are one and the same, then
in no other possible world can they be different. We use ‘Hesperus’ as the name of a certain body and ‘Phosphorus’ as the name of a certain body. We use them as names of these bodies in all
possible worlds. If, in fact, they are the same body, then in any other possible world we have to use them as a name of that object. And so in any other possible world it will be true that
Hesperus is Phosphorus. So two things are true: first, that we do not know a priori that Hesperus is Phosphorus, and are in no position to find out the answer except empirically. Second, this is
so because we could have evidence qualitatively indistinguishable from the evidence we have and determine the reference of the two names by the positions of two planets in the sky, without the
planets being the same.
In short, proper names are rigid designators in models where the possible worlds represent logically alternative states. They need not be rigid designators in models where the possible worlds
represent epistemically alternative states. Hesperus and Phosphorus are the same, necessarily so, but we could have used the names “Hesperus” and “Phosphorus” differently without being able to tell
we were doing so—a state in which we did this might be epistemically indistinguishable from the actual one. There can be necessary identities that we do not know because necessary truth and known
truth do not follow the same rules.
That proper names are an issue at all is in part because of the Carnapian modeling of senses that has been followed here, as functions on possible worlds. One need not do things this way. If the
Church approach is followed, one can simply say that “Hesperus” and “Phosphorus” have the same designation rigidly, hence necessarily, but even so they do not have the same sense. This is possible
because senses are, in effect, independent and not derived things. Senses can determine the same extension across possible worlds without being identical.
Another logic breaking the Carnapian mold, that is thorough and fully developed, can be found in Zalta 1988. In this a class of abstract objects is postulated, some among them being ordinary. A
distinction is made between an object exemplifying a property and encoding it. For instance, an abstract object might perfectly well encode the property of being a round square, but could not
exemplify it. A general comprehension principle is assumed, in the form that conditions determine abstract individuals that encode (not exemplify) the condition. Identity is taken to hold between
objects if they are both abstract and encode the same properties, or they are both ordinary and exemplify the same properties. In effect, this deals with problems of substitutivity. The formal theory
(more properly, theories) is quite general and includes both logical necessity and temporal operators. It is assumed that encoding is not contingent, though exemplifying may be, and thus properties
have both an exemplification extension that can vary across worlds, and an encoding extension that is rigid. With all this machinery available, a detailed treatment of proper names can be developed,
along with much else.
• Anderson, C. A. (1984). “General intensional logic,” in D. Gabbay and F. Guenthner (Eds.), Handbook of Philosophical Logic, Volume II, Chapter II.7, pp. 355–385, Dordrecht: D. Reidel.
• Anderson, C. A. (1998). “Alonzo Church's contributions to philosophy and intensional logic,” The Bulletin of Symbolic Logic, 4: 129–171.
• Beaney, M. (1997). The Frege Reader, Oxford: Blackwell.
• Bressan, A. (1972). A General Interpreted Modal Calculus, New Haven: Yale University Press.
• Carnap, R. (1947). Meaning and Necessity, Chicago: University of Chicago Press. Enlarged edition 1956.
• Church, A. (1940). “A formulation of the simple theory of types,” The Journal of Symbolic Logic, 5: 56–58.
• Church, A. (1944). Introduction to Mathematical Logic, Part I, Princeton University Press. Revised and enlarged, 1956.
• Church, A. (1946). “A formulation of the logic of sense and denotation (abstract),” The Journal of Symbolic Logic, XI: 31.
• Church, A. (1951). “A formulation of the logic of sense and denotation,” in P. Henle (Ed.), Structure, Method and Meaning, New York: The Liberal Arts Press. pp 3–24.
• Church, A. (1973). “Outline of a revised formulation of the logic of sense and denotation (part I),” Noûs, 7: 24–33.
• Church, A. (1974). “Outline of a revised formulation of the logic of sense and denotation (part II),” Noûs, 8: 135–156.
• Fagin, R. F. and J. Y. Halpern (1988). “Belief, awareness, and limited reasoning,” Artificial Intelligence, 34: 39–76.
• Fitting, M. C. (2004). “First-order intensional logic,” Annals of Pure and Applied Logic, 127: 171–193.
• Fitting, M. C. (2005a). “The logic of proofs, semantically,” Annals of Pure and Applied Logic, 132 (1): 1–25.
• Fitting, M. C. and R. Mendelsohn (1998). First-Order Modal Logic, Dordrecht: Kluwer. Errata at http://comet.lehman.cuny.edu/fitting/errata/errata.html.
• Frege, G. (1892). Über Sinn und Bedeutung. Zeitschrift für Philosophie und philosophische Kritik, 100: 25–50. English translation as ‘On Sinn and Bedeutung,’ in (Beaney 1997).
• Gallin, D. (1975). Intensional and Higher-Order Modal Logic, Amsterdam: North-Holland.
• Hughes, G. E. and M. J. Cresswell (1996). A New Introduction to Modal Logic, London: Routledge.
• Kripke, S. (1963). “Semantical considerations on modal logics,” in Acta Philosophica Fennica, 16: 83–94.
• Kripke, S. (1980). Naming and Necessity (Second edition), Cambridge, MA: Harvard University Press.
• Marcus, R. (1946). “A Functional Calculus of First Order Based on Strict Implication,” The Journal of Symbolic Logic, 11: 1–16.
• Marcus, R. (1947). “The Identity of Individuals in a Strict Functional Calculus of Second Order,” The Journal of Symbolic Logic, 12: 12–15.
• Marcus, R. (1953). “Strict Implication, Deducibility and the Deduction Theorem,” The Journal of Symbolic Logic, 18: 234–236.
• Marcus, R. (1961) “Modalities and intensional Languages,” Synthese, XIII: 303–322. Reprinted in Modalities, Philosophical Essays, Ruth Barcan Marcus, Oxford University Press, 1993.
• Montague, R. (1960). “On the nature of certain philosophical entities,” The Monist, 53: 159–194. Reprinted in (Thomason 1974), 148–187.
• Montague, R. (1970). “Pragmatics and intensional logic,” Synthèse, 22: 68–94. Reprinted in (Thomason 1974), 119–147.
• Moschovakis, Y. (2006). “A logical calculus of meaning and synonymy,” Linguistics and Philosophy, 29: 27–89.
• Quine, W. V. (1963). “Reference and modality,” in From a Logical Point of View (second ed.), Chapter VIII, pp. 139–159. New York: Harper Torchbooks.
• Russell, B. (1905). “On denoting,” Mind, 14: 479–493. Reprinted in Robert C. Marsh, ed., Logic and Knowledge: Essays 1901-1950, by Bertrand Russell, London: Allen & Unwin, 1956.
• Stalnaker, R. and R. Thomason (1968). “Abstraction in first-order modal logic,” Theoria, 34: 203–207.
• Thomason, R. and R. Stalnaker (1968). “Modality and reference,” Noûs, 2: 359–372.
• Thomason, R. H. (Ed.) (1974). Formal Philosophy, Selected Papers of Richard Montague, New Haven and London: Yale University Press.
• Tichý, P. (1971). “An Approach to Intensional Analysis,” Noûs, 5: 273–297.
• Tichý, P. (1988). The foundations of Frege's logic, Berlin and New York: De Gruyter.
• van Benthem, J. (1991). “Reflections on epistemic logic,” Logique & Analyse, 133/134: 5–14.
• Whitehead, A. N. and B. Russell (1925). Principia Mathematica (second ed.), Cambridge: Cambridge University Press (three volumes).
• Wittgenstein, L. (1921). Tractatus Logico-Philosophicus, London: Routledge and Kegan Paul.
• Zalta, E. (1988). Intensional Logic and the Metaphysics of Intentionality, Cambridge, MA: MIT Press.
• Aczel, P. (1989). “Algebraic semantics for intensional logics,” I. In G. Chierchia, B. Partee, and R. Turner (Eds.), Properties, Types and Meaning. Volume I: Foundational Issues, pp. 17–45,
Dordrecht: Kluwer.
• Bealer, G. (1998). “Intensional entities,” in E. Craig (ed.), Routledge Encyclopedia of Philosophy, London: Routledge.
• Fitting, M. (2002). Types, Tableaus, and Gödel's God, Dordrecht: Kluwer.
• Kripke, S. (2008). “Frege's Theory of Sense and Reference: Some Exegetical Notes,” Theoria, 74: 181–218.
• Menzel, C. (1986). “A complete, type-free second order logic and its philosophical foundations,” Technical Report CSLI-86-40, Stanford: Center for the Study of Language and Information
• Searle, R. (1983). An Essay in the Philosophy of Mind, Cambridge: Cambridge University Press.
• Svoboda, V., Jespersen, B., Cheyne, C. (Eds.) (2004): Pavel Tichý's Collected Papers in Logic and Philosophy, Prague: Filosofia, and Dunedin: Otago University Press.
• Thomason, R. (1980). “A model theory for propositional attitudes,” Linguistics and Philosophy, 4: 47–70.
• van Benthem, J. (1988). A Manual of Intensional Logic, Stanford: CSLI Publications.
logic: classical | logic: modal | logic: temporal | type theory: Church's type theory
Santa Western, CA Statistics Tutor
Find a Santa Western, CA Statistics Tutor
...I then help them create a timeline for completing applications and organizing extra materials. Since I am certified in writing and English, I also provide assistance in brainstorming and
editing college essays. Additionally, since I am certified in SAT prep, if their SAT score is not competitive enough for the desired school, I offer SAT tutoring to help them raise their score.
43 Subjects: including statistics, English, reading, writing
...The problem a lot of students have with SAT math is that they look at a problem and they don't know where to begin. Rather than ask questions in a straight forward manner, the test is
constantly asking students to dive in and use what they know -- and figure out the problem as they go. I help t...
49 Subjects: including statistics, reading, writing, English
...I have over 5 years of tutoring experience in all math subjects, including Algebra, Geometry, Trigonometry, Pre-Calculus, Calculus, Probability and Statistics. I have also helped students out
with their coursework in Physics and Finance. When I am tutoring, I enjoy getting to know the student and understanding the way the student learns.
14 Subjects: including statistics, calculus, physics, algebra 1
...Earned Ph.D. degree in physical Chemistry. B.Sc. degree in chem. eng. Professorship in Chemistry in the United States.
10 Subjects: including statistics, chemistry, calculus, algebra 1
...Best of luck in your search! As a child of two math teachers, I was raised to not only understand math, but also to understand how to explain the concepts to others. My love for science came
later, and I’ve spent my career in the health sciences field.
4 Subjects: including statistics, algebra 1, genetics, biostatistics
Related Santa Western, CA Tutors
Santa Western, CA Accounting Tutors
Santa Western, CA ACT Tutors
Santa Western, CA Algebra Tutors
Santa Western, CA Algebra 2 Tutors
Santa Western, CA Calculus Tutors
Santa Western, CA Geometry Tutors
Santa Western, CA Math Tutors
Santa Western, CA Prealgebra Tutors
Santa Western, CA Precalculus Tutors
Santa Western, CA SAT Tutors
Santa Western, CA SAT Math Tutors
Santa Western, CA Science Tutors
Santa Western, CA Statistics Tutors
Santa Western, CA Trigonometry Tutors
Nearby Cities With statistics Tutor
Century City, CA statistics Tutors
Cimarron, CA statistics Tutors
Dowtown Carrier Annex, CA statistics Tutors
Highland Park, LA statistics Tutors
La Canada, CA statistics Tutors
La Tuna Canyon, CA statistics Tutors
Magnolia Park, CA statistics Tutors
Playa, CA statistics Tutors
Rancho La Tuna Canyon, CA statistics Tutors
Sherman Village, CA statistics Tutors
Toluca Terrace, CA statistics Tutors
Vermont, CA statistics Tutors
West Toluca Lake, CA statistics Tutors
Westwood, LA statistics Tutors
Wilcox, CA statistics Tutors
One Of The Many Stupid Ideas To Come Out of Ed Schools
There are so many, to which one, you might ask, am I referring?
To differentiate instruction means to provide a different learning experience for every individual student in the class. Perhaps there is a student who is just learning English in your class. And
perhaps that student sits next to another who wants to have an in-depth discussion about Shakespeare. Should these two students prove difficult to teach at once, a normal person might consider
what the root problem is -- that they shouldn't be in the same class. But the wise education bureaucrat knows that any problem here must be the teacher's -- he must not have differentiated his
instruction enough.
Not to worry, it's entirely possible to teach "algebra" (however a non-math person wants to define "algebra") to someone who doesn't understand fractions.
Hat tip to
for the link above.
15 comments:
Don't worry -- this phenomenon will disappear in the next 3-5 years. As soon as the grant money runs out.
That's just the problem, it doesn't require grant money, it saves it! Ed School types say this crap to try to sound important, and school districts pick up on it because it saves them money by
being able to just dump kids in any class, prepared or not, and make their learning solely the responsibility of the teacher.
Can you elaborate on teaching algebra to someone who doesn't understand fractions? Or were the quotes supposed to indicate irony?
-Mark Roulo
So THAT's where it came from. Last year my district went through this whole big rigamarole over "differentiated instruction". Just the paperwork was horrendous, but more so was the website where
we had to periodically report strategies, notes home and other actions taken to "encourage success" in the student. Bless her heart, the data coach was a nice lady but I honestly dreaded seeing
her because it was always more work for me than it was worth in comparison to the results. What is sad is that most good teachers do adapt material for students that are struggling, but somehow
putting it into another program which probably cost my district a few million, it just all has a sour taste.
My comment was the very definition of sarcasm. Seriously, go look it up!
Thank you, Darren.
I've been working on the belief that mastery of rational numbers is the gateway to polynomial algebra, which is the gateway to calculus.
I will proceed as planned :-)
-Mark Roulo
Run that by me in a bit more detail?
The claim is a two-parter:
1) Many of the failures in Calculus (possibly the majority) can be traced to poor algebra skills.
2) Many/most of the failures in Algebra can be traced to poor mastery of rationals/fractions/mixed-numbers.
In support of (1), there is the Euler quote from here (http://www.math.uconn.edu/~troby/allquotes.txt):
"Most of the difficulties students have learning calculus can be traced to weak algebra skills." -- Leonhard Euler
The quote might even be a legitimate quote! In any event, I have seen this position many times in discussion threads on teaching calculus and why calculus seems to be so difficult to so many.
There *ARE* other issues that make calculus difficult, but there does seem to be some consensus that for many students who crash and burn in calculus poor algebra is the primary cause.
I don't have an equivalent quote for (2), but the general sense seems to be the same: Much of algebra crash-and-burn can be laid at the foot of poor understanding of using and manipulating
rational numbers.
-Mark Roulo
I find problems with both rational expressions and negative numbers.
Thanks for the heads up on negative numbers, Darren.
Interestingly, I had decided that the next few weeks (starting yesterday...) are the time for my son to get very good/comfortable with negative numbers :-) He already knows what they are, but
hasn't had much experience in manipulating them formally.
I'm not anticipating too many troubles as adding/subtracting go as a unit and build very naturally off of adding/subtracting positive numbers.
Multiplying/dividing will probably take longer as I'm planning on introducing the unit circle as a framework here, rather than just "cancelling negative pairs." The thought is to lay the
groundwork for roots greater than 2 that solve out to complex numbers.
We'll see how it goes. I never did teach him "invert and multiply" for dividing fractions as I consider it a Very Bad Idea(tm). Dividing fractions took longer for him to learn because of this.
Multiplying negative numbers may take longer, too!
-Mark Roulo
There's nothing wrong with "Invert and Multiply" if you show, using concrete examples, how and why it works. Example:
10 divided by 5: How many $5 bills would you get for $10?
10 divided by 2: How many $2 bills would you get for $10?
10 divided by 1: How many $1 bills would you get for $10?
10 divided by 1/2: How many half-dollars would you get for $10?
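The dollar-bill pattern can also be checked mechanically. Here is a small sketch using Python's fractions module (the function name is just for illustration, not from any textbook):

```python
from fractions import Fraction

def invert_and_multiply(a, b):
    """Divide a by b by multiplying a by the reciprocal of b."""
    return a * Fraction(b.denominator, b.numerator)

# How many half-dollars would you get for $10?  10 / (1/2) = 10 * 2
print(invert_and_multiply(Fraction(10), Fraction(1, 2)))  # 20

# It agrees with ordinary fraction division:
print(invert_and_multiply(Fraction(3, 4), Fraction(2, 5)) == Fraction(3, 4) / Fraction(2, 5))  # True
```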
As for teaching multiplying negatives, here was what was shown to me to make it "concrete". Imagine a pool being filled with water at 1 gallon per minute. The pool currently (time t=0) has 100
gallons in it. How many gallons will it have in 3 min? 2 min? 1 min? Now? 1 minute ago (which is -1)? etc.
What I do not like about "invert and multiply" is that even if the students are shown why/how it works, I expect that it rapidly becomes a magic "trick." I fear that this contributes to the sense
that math is a bunch of unconnected/arbitrary tricks rather than an internally consistent logical system.
Multiplying a negative by a positive is not the problem. The problem will be multiplying a negative by a negative. Or raising a negative to the 7th power.
Interestingly, I often don't need to make things "concrete" other than to tie them in to word problems. My son is perfectly happy working with math in the abstract. I think that this is very odd
as he is NOT a "math brain" :-)
-Mark Roulo
For negative times a negative, make the pool draining. How much water will be in it in 3 min, 2 min, 1 min, now, 1 minute ago, etc.
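To make the sign rule concrete, here's a tiny sketch of the draining-pool picture (the numbers are arbitrary, and the function is just illustrative):

```python
def pool_level(t, rate=-1.0, level_now=100.0):
    """Water level t minutes from now; a draining pool has a negative rate."""
    return level_now + rate * t

print(pool_level(3))   # 97.0  -- in 3 minutes the draining pool holds less
print(pool_level(-3))  # 103.0 -- 3 minutes AGO it held MORE: (-1) * (-3) = +3
```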
Another way to teach division of fractions is to get a common denominator, then "divide straight across" like we multiply. The answer's denominator will always be 1, since we had a common
denominator when we divided, so the answer will be the quotient of the 2 numerators. Try it out, it's fun :)
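A quick sketch of that method in Python (needs 3.9+ for math.lcm; the function name is made up for illustration):

```python
from fractions import Fraction
from math import lcm

def divide_via_common_denominator(a, b):
    """Rewrite both fractions over a common denominator, then divide the numerators."""
    d = lcm(a.denominator, b.denominator)
    return Fraction(a.numerator * (d // a.denominator),
                    b.numerator * (d // b.denominator))

# 3/4 divided by 1/2  ->  3/4 divided by 2/4  ->  3/2
print(divide_via_common_denominator(Fraction(3, 4), Fraction(1, 2)))  # 3/2
```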
Thanks for the negative × negative with water suggestion. I'll try to work it in.
I think I'm going to skip the division of fractions via common denominator approach for now. My son does division like this:
It works, he seems to understand the pieces, and he remembers it. Right now I don't want to create any unnecessary confusion.
Mark Roulo
CBMS Regional Conference Series in Mathematics
2008; 107 pp; softcover
Number: 108
ISBN-10: 0-8218-4456-3
ISBN-13: 978-0-8218-4456-4
List Price: US$30
Member Price: US$24
All Individuals: US$24
Order Code: CBMS/108
The study of convex bodies is a central part of geometry, and is particularly useful in applications to other areas of mathematics and the sciences. Recently, methods from Fourier analysis have been
developed that greatly improve our understanding of the geometry of sections and projections of convex bodies. The idea of this approach is to express certain properties of bodies in terms of the
Fourier transform and then to use methods of Fourier analysis to solve geometric problems. The results covered in the book include an analytic solution to the Busemann-Petty problem, which asks
whether bodies with smaller areas of central hyperplane sections necessarily have smaller volume, characterizations of intersection bodies, extremal sections of certain classes of bodies, and a
Fourier analytic solution to Shephard's problem on projections of convex bodies.
The book is written in the form of lectures accessible to graduate students. This approach allows the reader to clearly see the main ideas behind the method, rather than to dwell on technical
difficulties. The book also contains discussions of the most recent advances in the subject. The first section of each lecture is a snapshot of that lecture. By reading each of these sections first,
novices can gain an overview of the subject, then return to the full text for more details.
Graduate students and research mathematicians interested in convex geometry, emphasizing methods from harmonic analysis.
Lilburn SAT Math Tutor
Find a Lilburn SAT Math Tutor
...I do still study the topics to keep the information fresh in my head. Took the actual test when I considered joining the military. Made a 92 or 93 on the actual test.
29 Subjects: including SAT math, chemistry, reading, physics
...I have taken many AP and Honors courses so I know exactly how challenging these classes can be and know how much teachers/professors expect from their students. Since 2008 I have been
volunteering at two different elementary schools tutoring ESL students (K-5) in math, English/reading/writing, and biology. I am very patient and enjoy working with kids.
29 Subjects: including SAT math, chemistry, reading, Spanish
...Currently I am tutoring regularly via Skype which is allowing students to get last minute help with homework when needed - even in 15 minute increments - and allowing for longer scheduled
tutoring sessions to prepare for quizzes and tests all from the convenience of one's home. The key to your s...
10 Subjects: including SAT math, geometry, algebra 1, algebra 2
...Over the years, I have been able to assist first time students obtain high initial scores and dramatically improve scores for students re-taking the exams. On another note, one of my strongest
attributes is that I have an infinite amount of empathy, "patience", and determination with my students. I will not give up on a trying student.
42 Subjects: including SAT math, reading, English, ASVAB
...I have a track record of consistent improvement in students' scores. I offer strategies that students can use immediately to test more effectively. I have worked with grades K-6th over five
years, during high school, college and intermittently throughout my professional years.
42 Subjects: including SAT math, English, geometry, reading
Nearby Cities With SAT math Tutor
Avondale Estates SAT math Tutors
Berkeley Lake, GA SAT math Tutors
Chamblee, GA SAT math Tutors
Clarkston, GA SAT math Tutors
Covington, GA SAT math Tutors
Doraville, GA SAT math Tutors
Duluth, GA SAT math Tutors
Grayson, GA SAT math Tutors
Norcross, GA SAT math Tutors
Pine Lake SAT math Tutors
Scottdale, GA SAT math Tutors
Snellville SAT math Tutors
Stone Mountain SAT math Tutors
Sugar Hill, GA SAT math Tutors
Tucker, GA SAT math Tutors
Model-checking of causality properties
Results 1 - 10 of 31
, 1996
"... . Monadic second order logic (MSOL) over transition systems is considered. It is shown that every formula of MSOL which does not distinguish between bisimilar models is equivalent to a formula
of the propositional μ-calculus. This expressive completeness result implies that every logic over tran ..."
Cited by 65 (3 self)
. Monadic second order logic (MSOL) over transition systems is considered. It is shown that every formula of MSOL which does not distinguish between bisimilar models is equivalent to a formula of the
propositional μ-calculus. This expressive completeness result implies that every logic over transition systems invariant under bisimulation and translatable into MSOL can be also translated into the
μ-calculus. This gives a precise meaning to the statement that most propositional logics of programs can be translated into the μ-calculus. 1 Introduction Transition systems are structures consisting
of a nonempty set of states, a set of unary relations describing properties of states and a set of binary relations describing transitions between states. It was advocated by many authors [26, 3]
that this kind of structures provide a good framework for describing behaviour of programs (or program schemes), or even more generally, engineering systems, provided their evolution in time is
, 1998
"... Message sequence charts (MSC) are commonly used in designing communication systems. They allow describing the communication skeleton of a system and can be used for finding design errors. First,
a specification formalism that is based on MSC graphs, combining finite message sequence charts, is p ..."
Cited by 52 (9 self)
Message sequence charts (MSC) are commonly used in designing communication systems. They allow describing the communication skeleton of a system and can be used for finding design errors. First, a
specification formalism that is based on MSC graphs, combining finite message sequence charts, is presented. We present then an automatic validation algorithm for systems described using the message
sequence charts notation. The validation problem is tightly related to a natural language-theoretic problem over semi-traces (a generalization of Mazurkiewicz traces, which represent partially
ordered executions). We show that a similar and natural decision problem is undecidable. 1
- IN PROC. 7 TH INTL. CONFERENCE ON TOOLS AND ALGORITHMS FOR THE CONSTRUCTION AND ANALYSIS OF SYSTEMS (TACAS’01), VOLUME 2031 OF LECT. NOTES IN COMP. SCI , 2001
"... Message sequence charts (MSCs) is a standard notation for describing the interaction between communicating objects. It is popular among the designers of communication protocols. MSCs enjoy both
a visual and a textual representation. High level MSCs (HMSCs) allow specifying infinite scenarios and di ..."
Cited by 42 (8 self)
Message sequence charts (MSCs) is a standard notation for describing the interaction between communicating objects. It is popular among the designers of communication protocols. MSCs enjoy both a
visual and a textual representation. High level MSCs (HMSCs) allow specifying infinite scenarios and different choices. Specifically, an HMSC consists of a graph, where each node is a finite MSC with
matched send and receive events, and vice versa. In this paper we demonstrate a weakness of HMSCs, which disallows one to model certain interactions. We will show, by means of an example, that some
simple finite state and simple communication protocol cannot be represented using HMSCs. We then propose an extension to the MSC standard, which allows HMSC nodes to include unmatched messages. The
corresponding graph notation will be called HCMSC, which stands for High level Compositional Message Sequence Charts. With the extended framework, we provide an algorithm for automatically
constructing an MSC representation for finite state asynchronous message passing protocols.
, 1997
"... A basic result concerning LTL, the propositional temporal logic of linear time, is that it is expressively complete; it is equal in expressive power to the first order theory of sequences. We
present here a smooth extension of this result to the class of partial orders known as Mazurkiewicz traces. ..."
Cited by 42 (5 self)
A basic result concerning LTL, the propositional temporal logic of linear time, is that it is expressively complete; it is equal in expressive power to the first order theory of sequences. We present
here a smooth extension of this result to the class of partial orders known as Mazurkiewicz traces. These partial orders arise in a variety of contexts in concurrency theory and they provide the
conceptual basis for many of the partial order reduction methods that have been developed in connection with LTL-specifications. We show that LTrL, our linear time temporal logic, is equal in
expressive power to the first order theory of traces when interpreted over (finite and) infinite traces. This result fills a prominent gap in the existing logical theory of infinite traces. LTrL also
constitutes a characterisation of the so called trace consistent (robust) LTL-specifications. These are specifications expressed as LTL formulas that do not distinguish between different
linearisations of the same trace and hence are amenable to partial order reduction methods.
- In LICS '96 , 1996
"... We study linear time temporal logics of multiple agents, where the temporal modalities are local. These modalities not only refer to local next-instants and local eventuality, but also global
views of agents at any local instant, which are updated due to communication from other agents. Thus agentsa ..."
Cited by 29 (6 self)
We study linear time temporal logics of multiple agents, where the temporal modalities are local. These modalities not only refer to local next-instants and local eventuality, but also global views
of agents at any local instant, which are updated due to communication from other agents. Thus agentsalso reason about the future, present and past of other agents in the system. The models for these
logics are simple : runs of networks of synchronizing automata. Problems like gossipping in interconnection networks are naturally described in the logics proposed here. We present solutions to the
satisfiability and model checking problems for these logics. Further, since formulas are insensitive to different interleavings of runs, partial order based verification methods become applicable for
properties described in these logics. 1. Introduction 1 The Propositional Temporal Logic of Linear Time (PTL) has proved to be a successful logical tool for specifying and reasoning about the
- In Proceedings of CONCUR '96: 7th International Conference on Concurrency Theory , 1995
"... In concurrency theory, there are several examples where the interleaved model of concurrency can distinguish between execution sequences which are not significantly different. One such example
is sequences that differ from each other by stuttering, i. e., the number of times a state can adjacent ..."
Cited by 25 (3 self)
In concurrency theory, there are several examples where the interleaved model of concurrency can distinguish between execution sequences which are not significantly different. One such example is
sequences that differ from each other by stuttering, i. e., the number of times a state can adjacently repeat. Another example is executions that differ only by the ordering of independently executed
events. Considering these sequences as different is semantically rather meaningless. Nevertheless, specification languages that are based on interleaving semantics, such as linear temporal logic
(LTL), can distinguish between them. This situation has led to several attempts to define languages that cannot distinguish between such equivalent sequences. In this paper, we take a different
approach to this problem: we develop algorithms for deciding if a property cannot distinguish between equivalent sequences, i. e., is closed under the equivalence relation. We focus on properties
represented by regular languages, !-regular languages, or propositional LTL formulae and show that for such properties there is a wide class of equivalence relations for which determining closure is
decidable, in fact in PSPACE.
, 1998
"... The complexity of LTrL, a global linear time temporal logic over traces is investigated. The logic is global because the truth of a formula is evaluated in a global state, also called
configuration. The logic is shown to be non-elementary with the main reason for this complexity being the nesting of u ..."
Cited by 21 (3 self)
The complexity of LTrL, a global linear time temporal logic over traces is investigated. The logic is global because the truth of a formula is evaluated in a global state, also called configuration.
The logic is shown to be non-elementary with the main reason for this complexity being the nesting of until operators in formulas. The fragment of the logic without the until operator is shown to be
EXPSPACE-complete. 1 Introduction Infinite words, which are linear orders on events, are often used to model executions of systems. Infinite traces, which are partial orders on events, are often used to
model concurrent systems when we do not want to put some arbitrary ordering on actions occurring concurrently. A state of a system in the linear model is just a prefix of an infinite word; it represents
the actions that have already happened. A state of a system in the trace model is a configuration, i.e., a finite downwards closed set of events that already happened. Temporal logics over traces come in
, 2004
"... We describe an efficient decentralized monitoring algorithm that monitors a distributed program's execution to check for violations of safety properties. The monitoring is based on formulae
written in PT-DTL, a variant of past time linear temporal logic that we define. PT-DTL is suitable for express ..."
Cited by 20 (3 self)
We describe an efficient decentralized monitoring algorithm that monitors a distributed program's execution to check for violations of safety properties. The monitoring is based on formulae written
in PT-DTL, a variant of past time linear temporal logic that we define. PT-DTL is suitable for expressing temporal properties of distributed systems. Specifically, the formulae of PT-DTL are relative
to a particular process and are interpreted over a projection of the trace of global states that represents what that process is aware of. A formula relative to one process may refer to other
processes' local states through remote expressions and remote formulae. In order to correctly evaluate remote expressions, we introduce the notion of KNOWLEDGEVECTOR and provide an algorithm which
keeps a process aware of other processes' local states that can affect the validity of a monitored PT-DTL formula. Both the logic and the monitoring algorithm are illustrated through a number of
examples. Finally, we describe our implementation of the algorithm in a tool called DIANA.
, 2000
"... A long standing open problem in the theory of (Mazurkiewicz) traces has been the question whether LTL (Linear Time Logic) is expressively complete with respect to the first order theory. We solve
this problem positively for finite and infinite traces and for the simplest temporal logic, which is b ..."
Cited by 19 (8 self)
A long standing open problem in the theory of (Mazurkiewicz) traces has been the question whether LTL (Linear Time Logic) is expressively complete with respect to the first order theory. We solve this
problem positively for finite and infinite traces and for the simplest temporal logic, which is based only on next and until modalities. Similar results were established previously, but they were all
weaker, since they used additional past or future modalities. Another feature of our work is that our proof is direct and does not use any reduction to the word case.
- Conference version in LATIN 2004, LNCS 2976
"... Mazurkiewicz traces ⋆ ..."
Fall 2007
Computer Science 433
Cryptography Fall 2007
Boaz Barak
Lecture notes | Reading | Admin
Course Summary
Cryptography or "secret writing" has been around for about 4000 years, but was revolutionized in the last few decades. The first aspect of this revolution involved placing cryptography on more solid
mathematical grounds, thus transforming it from an art to a science and showing a way to break out of the "invent-break-tweak" cycle that characterized crypto throughout history. The second aspect
was extending cryptography to applications far beyond simple codes, including some paradoxical impossible-looking creatures such as public key cryptography , zero knowledge proofs, and playing poker
over the phone.
This course will be an introduction to modern "post-revolutionary" cryptography with an emphasis on the fundamental ideas (as opposed to an emphasis on practical implementations). Among the topics
covered will be private key and public key encryption schemes (including DES/AES and RSA), digital signatures, one-way functions, pseudo-random generators, zero-knowledge proofs, and security against
active attacks (e.g., chosen ciphertext (CCA) security). As time permits, we may also cover more advanced topics such as the Secure Socket Layer (SSL/TLS) protocol and the attacks on it (Goldberg and
Wagner, Bleichenbacher), secret sharing, two-party and multi-party secure computation, and quantum cryptography.
The main prerequisite for this course is the ability to read, write (and perhaps enjoy!) mathematical proofs. In addition, familiarity with algorithms and basic probability theory will be helpful. No
programming knowledge is needed. If you're interested in the course but are not sure you have sufficient background, or you have any other questions about the course, please contact me at
Lecture Notes and Handouts.
Lecture 1: Tuesday, September 18, 2007
Overview of course, crypto history
Overview of crypto goals and history. Some classical ciphers and how they were broken. Outline of crypto 1970's "revolution". Admin info about the course.
Powerpoint Slides
Handout 1 - Mathematical Background
Excerpt from Katz-Lindell Book on Principles of Modern Cryptography (pages 18-27)
KL Book: Chapter 1 - introduction.
Additional reading: You can find more information about historical ciphers on the web page Alex Biryukov's wonderful Course on Cryptanalysis.
Mathematical proofs: Some links on reading, writing and coming up with mathematical proofs. Chapters 1 and 3 of the Lehman-Leighton notes of an MIT course can be useful. Some tips on mathematical
writing in general and proofs in particular can be found in these few pages from Knuth, Larrabee, and Roberts. On a lighter and more general note, you might be interested to read Keith Devlin's
musing about mathematical proofs.
Reading for next class: We'll start to use probability a lot (although only very basic things). The handout contains some references.
We'll start also thinking about defining security for encryption schemes. Throughout this course the theme of such definitions will be rigor - mathematical precision and being conservative - making
very strong demands on the security. The Katz-Lindell excerpt explains some of the motivation behind this. See also pages 20-25 of Goldreich's book (Volume 1).
Lecture 2: Thursday, September 20, 2007
Perfect secrecy and its limitations
Perfect Secrecy, the one-time pad. Limitations of perfect secrecy.
Lecture Outline
Homework 1 - due September 27th (LaTeX source).
I prefer you use LaTeX to write your solution. Here is a short guide for LaTeX by Dave Xiao (you might want to look at the source files for the guide: latex-guide.tex and macros.tex).
KL Book: Chapter 2 - Perfectly secret encryption
Additional reading: Lecture 2 of Bellare's course discusses the issues in defining security for encryption schemes and perfect security. See also Section 6.4 in the Golwasser-Bellare lecture notes.
The definition of perfect secrecy was first given by Shannon in this 1949 paper, but our discussion followed more closely the approach of Goldwasser and Micali who, when referring to the
indistinguishability definition for encryption schemes, said: "A good disguise should not allow a mother to distinguish between her own children".
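To make the one-time pad concrete, here is a minimal Python sketch (not part of the course materials); encryption and decryption are the same XOR operation, and perfect secrecy holds only if the key is uniformly random, as long as the message, and never reused:

```python
import secrets

def otp_encrypt(key: bytes, msg: bytes) -> bytes:
    # Perfect secrecy requires a uniformly random key as long as the
    # message, used for one message only.
    assert len(key) == len(msg)
    return bytes(k ^ m for k, m in zip(key, msg))

otp_decrypt = otp_encrypt  # decryption is the same XOR operation

key = secrets.token_bytes(5)
ct = otp_encrypt(key, b"hello")
assert otp_decrypt(key, ct) == b"hello"
```

Note that reusing the key for two messages leaks their XOR, which is one way to see the limitations discussed in class.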
Reading for next class: Next class we'll discuss computational models such as the Turing machine. You might want to take a look at this chapter from my upcoming book with Sanjeev Arora.
Lecture 3: Tuesday, September 25, 2007
Computational model, computational security
Computational models and computational security.
Lecture Outline
Additional reading: You might want to look at the Boolean circuits and NP completeness chapter from my upcoming book with Sanjeev Arora. Some handouts from the last offering of this course: Powerpoint
Slides (did not use these in class) computational models handout and figure. Computational complexity is covered in many places and in particular in Sipser's book. If you prefer PowerPoint slides you
can look at Muli Safra's complexity course. In particular the first 5 presentations there (Introduction, Preliminaries, Reductions, Cook Theorem, and NP-complete Problems) roughly cover the material
we discussed (and of course also some things we did not discuss). As I already mentioned, once we have an impossibility result, the right thing to do is to try to bypass it. This holds also for NP
-completeness results where once a problem is NP-complete, and hence is probably not efficiently solvable, people try to approximately solve it (for example, if we can't color a graph in the minimum
number of colors, try to color it within a factor of at most k times the minimum, for some k). This web site tracks the current approximation status of many problems. In many cases we can prove that
it is NP-hard to even approximate some problems. For a good exposition of this, see Sanjeev Arora's thesis.
Lecture 4: Thursday, September 27, 2007
Computational indistinguishability, pseudorandom generators
Computational indistinguishability, Pseudorandom generators, length extension.
Lecture Outline
Handout: Pages 215-220 from KL book
KL Book: Chapter 3 - Private key encryption and pseudorandomness
Additional reading: Please take a look at Section 6.4.2 of the Katz-Lindell book. You might also want to look at Goldreich's Treatment of pseudorandom generators (Volume I, pages 101-117).
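The length-extension construction can be sketched in a few lines: from a PRG stretching by a single bit, iterate and output one bit per step while chaining the state. SHA-256 below is only an illustrative stand-in for the one-bit-stretch PRG, not a proven one:

```python
import hashlib

N = 16  # state length in bytes (toy parameter)

def toy_G(s: bytes):
    """Toy stand-in for a PRG stretching N bytes to N bytes plus one bit.
    SHA-256 here is an illustration, not a proven PRG."""
    h = hashlib.sha256(s).digest()
    return h[:N], h[N] & 1  # (next state, one output bit)

def extend(seed: bytes, m: int):
    """Length extension: iterate the one-bit-stretch PRG m times,
    outputting one bit per step and feeding the state forward."""
    s, out = seed, []
    for _ in range(m):
        s, b = toy_G(s)
        out.append(b)
    return out

bits = extend(b"\x00" * N, 32)
assert len(bits) == 32 and set(bits) <= {0, 1}
assert bits == extend(b"\x00" * N, 32)  # deterministic given the seed
```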
Lecture 5: Tuesday, October 2, 2007
CPA, Pseudorandom functions
Encryption schemes secure against Chosen-Plaintext Attack (CPA), Pseudorandom functions.
Lecture outline
KL book: Pages 82-93 and 221-225. (Sections 3.5, 3.6.1, 3.6.2 and 6.5). See also the following excerpt from Goldreich's book on the construction of PRF's from PRG's. (This excerpt is from a draft -
see Goldreich Vol I for the updated version.)
Additional reading:Goldreich Volume II (Chapter 5) contains an extensive discussion of the definitions of encryption schemes.
Pseudorandom functions were defined and constructed by Goldreich, Goldwasser, and Micali - (see this page for the paper, containing also the full proof). As mentioned, there are other more efficient
candidate constructions, including HMAC by Bellare, Canetti and Krawczyk and a factoring-based PRF by Naor, Reingold and Rosen.
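A sketch of the PRF-based CPA-secure scheme Enc_k(m) = (r, F_k(r) XOR m); HMAC-SHA256 stands in for the pseudorandom function here (an assumed PRF, rather than one constructed from a PRG as in the proof from class):

```python
import hmac, hashlib, secrets

def F(k: bytes, r: bytes) -> bytes:
    # HMAC-SHA256 as a stand-in pseudorandom function (an assumption).
    return hmac.new(k, r, hashlib.sha256).digest()

def enc(k: bytes, m: bytes):
    assert len(m) <= 32
    r = secrets.token_bytes(16)          # fresh randomness per message
    pad = F(k, r)[:len(m)]
    return r, bytes(a ^ b for a, b in zip(pad, m))

def dec(k: bytes, ct) -> bytes:
    r, c = ct
    pad = F(k, r)[:len(c)]
    return bytes(a ^ b for a, b in zip(pad, c))

k = secrets.token_bytes(16)
assert dec(k, enc(k, b"attack at dawn")) == b"attack at dawn"
```

The fresh r is what makes two encryptions of the same message look different, which is exactly what the CPA definition demands.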
Lecture 6: Thursday, October 4, 2007
Pseudorandom permutations, block-ciphers, AES
Pseudorandom permutations, block ciphers and the AES.
Powerpoint slides.
Additional reading: You should take a look at the Bellare-Rogaway chapter on pseudorandom functions and permutations. It does not cover exactly the same material we do (which is why it would be
especially worth your while to look at it). Pseudorandom permutations, and their construction based on pseudorandom functions is covered in Goldreich Vol I (link is for the older web version, see
there section 3.7 page 114).
A lot more information on the AES and other block ciphers can be found on the web page for Eli Biham's modern cryptology course. In particular, this lecture covers block ciphers. See The AES Lounge
for more information about the AES, its security and implementations.
If you are interested in the principles behind the design and attack of block ciphers, see this tutorial by Howard Heys and this course by David Wagner.
Finally if the skipjack/clipper story whetted your appetite for crypto-conspiracies you might want to look at this site.
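The Feistel structure underlying the Luby-Rackoff construction of a pseudorandom permutation from a pseudorandom function can be sketched as follows; the round function is HMAC-SHA256 truncated to half a block, used here as a stand-in PRF:

```python
import hmac, hashlib, secrets

HALF = 16  # half-block size in bytes

def f(k: bytes, x: bytes) -> bytes:
    # HMAC-SHA256 truncated to a half block, standing in for a PRF.
    return hmac.new(k, x, hashlib.sha256).digest()[:HALF]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel_enc(keys, block: bytes) -> bytes:
    L, R = block[:HALF], block[HALF:]
    for k in keys:                       # one PRF key per round
        L, R = R, xor(L, f(k, R))
    return L + R

def feistel_dec(keys, block: bytes) -> bytes:
    L, R = block[:HALF], block[HALF:]
    for k in reversed(keys):             # undo the rounds in reverse order
        L, R = xor(R, f(k, L)), L
    return L + R

keys = [secrets.token_bytes(16) for _ in range(3)]  # 3 rounds (Luby-Rackoff)
pt = secrets.token_bytes(2 * HALF)
assert feistel_dec(keys, feistel_enc(keys, pt)) == pt
```

Notice that the network is invertible no matter what the round function is; the Luby-Rackoff theorem says three such rounds with independent PRF keys give a pseudorandom permutation.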
For next week: Next week we'll begin to talk about the goal of integrity. There's nothing in particular you should read but try to think of the following questions:
• Does the definition of CPA-security imply that it is impossible for an efficient adversary to take an encryption of plaintext x and convert into an encryption of comp(x) (comp(x) means that all
the bits of x are flipped from 0 to 1 and vice versa)?
• If the definition does not imply that this is impossible, what about our constructions? Is it possible to implement efficiently such an attack on the constructions for CPA-secure encryption
schemes we saw in class and homework (e.g., the PRF-based one and the PRP- based one).
• Should we be worried about such an attack? Can you think of a scenario where the adversary will gain something from it?
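As a hint toward the first two questions, here is a toy demonstration (using the PRF-based scheme, with HMAC-SHA256 as a stand-in PRF) that CPA security does not rule out an adversary modifying a plaintext predictably by XORing a mask into the ciphertext:

```python
import hmac, hashlib, secrets

def F(k: bytes, r: bytes) -> bytes:
    # HMAC-SHA256 as a stand-in PRF (an assumption).
    return hmac.new(k, r, hashlib.sha256).digest()

def enc(k: bytes, m: bytes):
    r = secrets.token_bytes(16)
    return r, bytes(a ^ b for a, b in zip(F(k, r), m))

def dec(k: bytes, ct) -> bytes:
    r, c = ct
    return bytes(a ^ b for a, b in zip(F(k, r), c))

k = secrets.token_bytes(16)
r, c = enc(k, b"pay 100")
# The adversary, who never sees the key, XORs a chosen mask into c...
mask = bytes(a ^ b for a, b in zip(b"pay 100", b"pay 999"))
forged = (r, bytes(a ^ b for a, b in zip(c, mask)))
assert dec(k, forged) == b"pay 999"  # ...and the plaintext changes predictably
```

This kind of malleability is exactly what integrity mechanisms (the topic of next week) are meant to rule out.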
Lecture 7: Tuesday, October 9, 2007
The authentication problem, CCA security
Short recap of past lectures (powerpoint)
Lecture outline
KL book: Section 3.7 (pages 103-104),
Additional reading: See this expository paper by Victor Shoup for more on the motivation behind chosen ciphertext security. You can find here the CRYPTO 98 paper of Daniel Bleichenbacher that
attacked the SSL protocol, mainly using the fact that the underlying encryption scheme was not CCA-secure. (These sources talk about public key encryption, but the underlying message is the same.)
Lecture 8: Thursday, October 11, 2007
Message authentication codes
Lecture outline
Exercise 4
Additional reading: Lectures 9 and 10 in David Wagner's cryptography course discuss MACs, including examples of real-world protocols that can be attacked due to lack of MACs. The material about the
order of encryption vs. authentication is from this CRYPTO 2001 paper by Hugo Krawczyk.
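A minimal MAC sketch using HMAC-SHA256 (taken here on assumption as a secure MAC); note the constant-time comparison in verification:

```python
import hmac, hashlib, secrets

def mac(k: bytes, m: bytes) -> bytes:
    # HMAC-SHA256 as the MAC (a standard, believed-secure choice).
    return hmac.new(k, m, hashlib.sha256).digest()

def verify(k: bytes, m: bytes, tag: bytes) -> bool:
    # Constant-time comparison, so timing does not leak tag bytes.
    return hmac.compare_digest(mac(k, m), tag)

k = secrets.token_bytes(16)
t = mac(k, b"transfer $10")
assert verify(k, b"transfer $10", t)
assert not verify(k, b"transfer $1000", t)  # any modification is rejected
```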
Lecture 9: Tuesday, October 16, 2007
One way permutations and pseudorandom generators
One-way permutations and hard-core bits, with applications to pseudorandom generators and commitment schemes.
Lecture outline
Additional reading: One-way permutations and the hard-core bits are covered extensively in Goldreich Volume 1 (although it is perhaps too extensive for our purposes). Vadhan's lecture notes cover
one-way functions, commitment schemes and hardcore bits (see also lectures 10 and 11 there). Luca Trevisan's lecture notes on pseudorandomness have a nice presentation of the proof of the
Goldreich-Levin theorem. Note: You might want to look at these sources after you have tried to tackle Exercise 1 on your own.
In many places there is an emphasis not so much on one way permutations but on one way functions. One-way functions are a generalization of one-way permutations in the sense that every one-way
permutation is a one-way function but not necessarily vice versa. The definition is the natural way you'd generalize the definition of one-way permutations to the case where the function f() may not be
one-to-one: since for a given y there may be many x's such that y=f(x), the adversary is successful if it manages to find one of them.
Lecture 10: Thursday, October 18, 2007
One way permutations, pseudorandom generators, commitment schemes (cont)
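One concrete commitment scheme that can be built from a PRG is Naor's bit commitment (possibly not the exact construction given in class). The receiver first sends a random string crs of three times the seed length; the sender commits to b by sending G(s), XORed with crs iff b = 1. The toy PRG below is a SHA-256-based stand-in:

```python
import hashlib, secrets

N = 16          # seed length in bytes (toy parameter)
OUT = 3 * N     # PRG output: three times the seed, as in Naor's scheme

def G(s: bytes) -> bytes:
    # Toy PRG stretching N to 3N bytes via SHA-256 in counter mode;
    # an illustrative stand-in, not a proven PRG.
    return b"".join(hashlib.sha256(s + bytes([i])).digest()
                    for i in range(3))[:OUT]

def commit(crs: bytes, b: int, s: bytes) -> bytes:
    c = G(s)
    return bytes(x ^ y for x, y in zip(c, crs)) if b else c

def open_ok(crs: bytes, com: bytes, b: int, s: bytes) -> bool:
    # Reveal phase: the sender sends (b, s); the receiver recomputes.
    return com == commit(crs, b, s)

crs = secrets.token_bytes(OUT)   # receiver's random first message
s = secrets.token_bytes(N)
com = commit(crs, 1, s)
assert open_ok(crs, com, 1, s)
assert not open_ok(crs, com, 0, s)
```

Hiding comes from the pseudorandomness of G; binding holds because for a random crs there is (almost surely) no pair of seeds s0, s1 with G(s0) XOR G(s1) = crs.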
Reading for next time: Next week we'll start discussing number theory, in preparation for public key encryption schemes. Some resources for number theory include Chapter 7 and Appendix B of the KL
book. There's also an excellent book of Victor Shoup (available freely on the web). The more you read of this book the better, however, I recommend you look in Chapter 1 (pages 1-10) and the first 5
pages of Chapter 8 (pages 180-184, not including exercise 8.1). Other particularly relevant parts of the rest of the book are: Chapter 2 (up to and including Section 2.5), first 2 pages of Chapter 7,
Chapter 10 (up to and including 10.4.1), first two pages of Chapter 11, Chapter 12 and Chapter 13. See also the mathematical background appendix from my upcoming book with Sanjeev Arora.
Lecture 11: Tuesday, October 23, 2007
Some basic number theory - Benny Applebaum
Some basic number theory. Guest lecturer: Benny Applebaum
slides (taken from Eli Biham's Technion course)
Additional reading: As mentioned above, Shoup's book is an excellent resource for computational number theory.
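Two workhorses of the computational number theory above are the extended Euclidean algorithm and the modular inverses it yields; a short sketch:

```python
def ext_gcd(a: int, b: int):
    """Return (g, x, y) with g = gcd(a, b) and g = a*x + b*y."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def inv_mod(a: int, n: int) -> int:
    """Inverse of a modulo n, when gcd(a, n) = 1."""
    g, x, _ = ext_gcd(a % n, n)
    if g != 1:
        raise ValueError("not invertible")
    return x % n

assert inv_mod(3, 7) == 5                    # 3*5 = 15 = 1 (mod 7)
g, x, y = ext_gcd(240, 46)
assert g == 2 and 240 * x + 46 * y == 2
```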
Lecture 12: Thursday, October 25, 2007
Public key cryptography and Rabin encryption
Public Key Cryptography and the Rabin Encryption Scheme
Lecture outline
Homework 6 (LaTeX source)
Additional reading: KL Book Sections 10.1 , 10.2, 10.4, 10.7, 11.2
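A toy sketch of Rabin encryption with tiny primes p, q = 3 (mod 4), where decryption recovers the four square roots of the ciphertext via the CRT. Real use requires cryptographic-size primes and a way to disambiguate the roots; this is only the bare mathematics:

```python
def ext_gcd(a: int, b: int):
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

# Toy parameters only: real Rabin needs primes of cryptographic size.
p, q = 23, 43          # both 3 mod 4, so square roots are easy to compute
N = p * q

def rabin_enc(m: int) -> int:
    return (m * m) % N

def rabin_dec(c: int):
    """All four square roots of c modulo N = p*q, via the CRT."""
    mp = pow(c, (p + 1) // 4, p)   # sqrt mod p when p = 3 (mod 4)
    mq = pow(c, (q + 1) // 4, q)
    _, yp, yq = ext_gcd(p, q)      # CRT coefficients: yp*p + yq*q = 1
    roots = set()
    for a in (mp, p - mp):
        for b in (mq, q - mq):
            roots.add((a * yq * q + b * yp * p) % N)
    return roots

m = 77
assert m in rabin_dec(rabin_enc(m))  # the plaintext is among the four roots
```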
Lecture 13: Tuesday, November 6, 2007
Signature schemes
Signature schemes
Lecture outline
Additional reading: Fuller exposition and proofs for this material can be found in Chapter 6 of Goldreich's book (Vol II) or in the fragments on the web.The Goldwasser-Micali-Rivest factoring based
hash function is obtained through the intermediate notion of claw-free permutations. This construction is described somewhat tersely in the Goldwasser-Bellare notes and with a bit more detail in
Goldreich's sections 2.4.5 (Vol I) and 6.2.3.1 (Vol II). The paper of Bellare and Rogaway suggesting the random oracle model can be found here. One of the most powerful critiques of this model is in
this paper by Canetti, Goldreich and Halevi (be sure to look at the authors' opinions at the end). Helger Lipmaa collected some links related to the random oracle model.
As mentioned in class, Bellare and Rogaway built on an earlier work of Fiat and Shamir that gave a different construction for signature and identification schemes using a "crazy" hash function based
on zero knowledge proofs. The Bellare-Rogaway signature scheme with a tighter security proof can be found here.
Lecture 14: Thursday, November 8, 2007
CCA security for public key encryption
Security against Chosen Ciphertext Attack (CCA) for public key encryptions, random-oracle based constructions.
Lecture outline
Homework 7 (LaTeX source) -- due November 15th, 2007.
Additional reading: Chosen ciphertext security was defined by Rackoff and Simon. As mentioned in the notes for lecture 9, Victor Shoup has a very nice expository paper about this concept. The first
construction that was proven to be chosen-ciphertext secure was given in this paper of Dolev, Dwork and Naor. However, this construction is quite complicated and inefficient. Perhaps the simplest
(although rather inefficient) construction and analysis of a "random-oracle free" CCA-secure encryption is given in this paper by Lindell. Both constructions use an ingredient that we did not talk
about in class (non-interactive zero knowledge proofs) but is described in Goldreich's book (Vol I).
The first scheme we presented in class was taken from the original random-oracle paper by Bellare and Rogaway. An efficient encryption scheme with a proof of "almost CCA security" in the random
oracle model is the OAEP scheme of Bellare and Rogaway. However, in this paper by Shoup he shows some "holes" in that proof and gives a different random-oracle based construction. Perhaps the
simplest and most efficient encryption that has a proof of CCA security in the random oracle model is this one by Dan Boneh.
In this wonderful paper of Cramer and Shoup they present an efficient encryption scheme that has a "real" (no random oracles) proof of CCA security based on the DDH assumption.
Even if a scheme is proven to be CCA secure, this only implies real-world security if the real-world adversary does not have access to additional information from observing, say, the time it takes
servers to answer queries or other such things - see this paper for a demonstration of this issue.
Additional reading: See Bleichenbacher's paper for more information about his attack on RSA as used in the SSL protocol. Some related attacks can be found here. As mentioned in the reading for the
previous lecture, even when using assumed CCA secure schemes, an adversary may use timing and/or error message information to attack a scheme, as demonstrated in this CRYPTO 2001 paper by Manger.
The SSL protocol is also described in these notes by Lindell. Some attacks on SSL V3.0 are described in this paper by Schneier and Wagner (although it is pre-Bleichenbacher, and so does not include
many strong attacks, to quote from a summary of a talk by Wagner on this work: "In general, this analysis was informal, not formal, meaning that it can only illustrate flaws in the protocol, not
prove that it's correct."). The attack on the pseudorandom generator used by Netscape was given in this paper by Goldberg and Wagner.
Lecture 15: Tuesday, November 13, 2007
Zero knowledge proofs
Zero Knowledge Proofs.
Lecture outline
HW8 (LaTeX source) -- due November 29
Additional readings: One of the best non-technical explanation of zero knowledge appears in this paper by Naor, Naor and Reingold published in the prestigious Journal of Craptology.
The most extensive treatment of Zero Knowledge is in Goldreich, Vol I, Chapter 3 (see also fragments on the web). For a shorter version, see chapter 4 of foundations of cryptography -- a primer).
Goldreich also has a tutorial on zero knowledge covering the basic notions as well as more recent developments. The QR protocol I described in class is also described in these UCB computer security
lecture notes.
You can find online the original paper of Goldwasser, Micali and Rackoff presenting zero knowledge. Zero knowledge is also the topic of many dissertations (including my own); I particularly
recommend Uri Feige's thesis.
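The honest execution of the QR protocol can be sketched as follows (toy modulus; each round has soundness error 1/2, so the protocol must be repeated). The prover commits to a = r^2, receives a challenge bit b, and answers z = r * w^b, which the verifier checks against a * y^b:

```python
import secrets

N = 23 * 43                       # toy modulus; a real N must be much larger
w = 77                            # prover's witness
y = (w * w) % N                   # public input: y is a quadratic residue mod N

def honest_round(b: int) -> bool:
    r = 0
    while r % 23 == 0 or r % 43 == 0:        # pick r invertible mod N
        r = secrets.randbelow(N)
    a = (r * r) % N                          # prover's commitment
    z = (r * pow(w, b, N)) % N               # response to challenge b
    return (z * z) % N == (a * pow(y, b, N)) % N   # verifier's check

assert all(honest_round(secrets.randbelow(2)) for _ in range(20))
```

The b = 0 answers reveal nothing (just a random square root of a), and the b = 1 answers look the same to anyone who can't distinguish residues, which is the intuition behind the simulator.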
Lecture 16: Thursday, November 15, 2007
Zero knowledge proofs (cont)
Zero Knowledge Proofs (continued).
Lecture outline
Additional reading: I strongly recommend you look at the following lecture notes of an MIT course by Silvio Micali (one of the inventors of zero knowledge). Lectures 1 to 5 cover the material we
talked about in class. If you are interested in more about zero knowledge then the rest of the lectures are a good place to start. A good overview of the material is in pages 1 to 17 of Goldreich's
tutorial on zero knowledge. For full proofs see Goldreich's book or the fragments on the web. In particular, reduction of error by sequential composition is covered in section 4.3.4 of the fragments,
and the protocol for 3-coloring is covered in section 4.4.
If you like slides, you can see some of this material in PowerPoint format from Muli Safra's course (see also these slides from Ely Porat).
Lecture 17: Tuesday, November 20, 2007
Forward security, identity-based encryption
Forward security, identity based encryption.
Lecture outline
Additional reading: Dan Boneh's group has a web page on Identity Based Encryption where you can find the original Boneh-Franklin paper and also download encrypted email software based on IBE. The
forward-secure encryption scheme given in class is from this note by Canetti and Halevi which is a simplification of the construction of this paper by Canetti, Halevi and Katz (the latter however is
better in the sense that it does not need a large storage by the sender and also does not use the random oracle model). See this paper by Bellare and Yee for a treatment of forward security for
private key primitives.
Lecture 18: Tuesday, November 27, 2007
Secret sharing, visual cryptography, threshold signatures
Secret sharing, visual cryptography, threshold signatures.
Lecture outline
Additional reading: You can find more about PGP key reconstruction on the PGP user guide. Here's a nice puzzle about visual cryptography. You can find the relevant papers from Moni Naor's web page.
See also Doug Stinson's page on visual cryptography. You can find a nice and relatively simple threshold RSA signature scheme in this paper by Shoup. See this paper of Tal Rabin on how to convert the
scheme we saw in class to a general threshold, robust, and proactive scheme. I was actually once involved in writing a Java implementation of proactively secure protocols.
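Shamir's t-out-of-n secret sharing can be sketched over a prime field with Lagrange interpolation at zero (the modular inverse via pow needs Python 3.8+):

```python
import secrets

P = 2**127 - 1      # a Mersenne prime; all arithmetic is in this field

def share(secret: int, t: int, n: int):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from t shares."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = share(12345, t=3, n=5)
assert reconstruct(shares[:3]) == 12345
assert reconstruct(shares[2:]) == 12345    # any 3 of the 5 shares suffice
```

Fewer than t shares reveal nothing: any t-1 points are consistent with every possible secret, one polynomial each.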
Lecture 19: Thursday, November 29, 2007
Oblivious transfer, Private information retrieval
Oblivious transfer, private information retrieval.
Lecture outline
HW9 (LaTeX source) -- due December 6
Additional reading: Some web resources on oblivious transfer are this page by Ben Lynn and this page by Helger Lipmaa. A full exposition with constructions and proofs of oblivious transfer and
secure function evaluation can be found in Goldreich's book (Vol II). This paper of Kushilevitz and Ostrovsky gave the first computationally secure PIR with single server and sublinear communication.
It also discusses some possible applications for PIR. Amos Beimel has a webpage on private information retrieval. This project is about obtaining practical PIR protocols. See this paper and the
announcement for this workshop for some information on the connections between PIR protocols and other objects. See also this survey on private information retrieval by Gasarch.
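The simplest two-server information-theoretic PIR (linear communication, unlike the sublinear schemes cited above) can be sketched with XOR queries to two non-colluding servers holding the same database:

```python
import secrets

db = [secrets.randbelow(2) for _ in range(32)]   # the servers' shared database

def server_answer(query) -> int:
    # Each server XORs together the database bits its query selects.
    ans = 0
    for j in query:
        ans ^= db[j]
    return ans

def retrieve(i: int) -> int:
    n = len(db)
    S = {j for j in range(n) if secrets.randbelow(2)}   # random subset of [n]
    T = S ^ {i}                                         # S with index i flipped
    # Each query is a uniformly random subset on its own, so neither
    # server learns i; XORing the two answers isolates bit i.
    return server_answer(S) ^ server_answer(T)

assert all(retrieve(i) == db[i] for i in range(len(db)))
```

Correctness is immediate: the two queries differ only at index i, so every other bit cancels in the XOR.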
Lecture 20: Tuesday, December 4, 2007
Secure 2-party and multiparty computation
The slides shown in class were taken from Joe Kilian's tutorial and Vitaly Shmatikov's lecture.
For a proof of security of Yao's garbled circuit protocol in the honest-but-curious model see this paper by Lindell and Pinkas.
Lecture 21: Thursday, December 6, 2007
Electronic voting
Lecture outline
Slides (mostly taken from Tal Moran).
HW10 (LaTeX source) December 17
Additional reading: I took much of the material from the excellent MIT course by Canetti and Rivest. I also highly recommend the following slides by Tal Moran. Here are some additional links (courtesy
of Tal Moran). The main contenders for practical voter-verifiable e-voting schemes are Punchscan and Pret-a-voter. Scantegrity is another, slightly newer, idea of Chaum's. All these systems are based
on the notion of a 2-part receipt that the voter makes some checks on in the booth and then gets to keep only one part. The voting system presented in class was based on the secrecy of the order of
events in the booth; this idea was suggested in this paper by Neff. Another interesting suggestion is Rivest's "Three Ballot" scheme that does not use cryptography in the voter verification step at
all, but it has some serious problems such as reliance on very strong assumptions about the voting hardware (the paper mentions a vote buying attack also but this seems to be less serious if the
number of candidates is very small compared to the number of voters). Finally, there is a lot of literature about "internet" voting protocols (that ignore the human component). Until 2004, all the
cryptographic voting schemes were of this type. Helger Lipmaa's list (http://www.adastral.ucl.ac.uk/~helger/crypto/link/protocols/voting.php) has many of the notable ones.
Lecture 22: Tuesday, December 11, 2007
Cramer-Shoup Cryptosystem
Lecture outline
Lecture 23: Thursday, December 13, 2007
Quantum computing and cryptography
Slides (PowerPoint format)
For a good starting point on quantum computing, you can do far worse than explore Scott Aaronson's home page.
Lecture 24: Monday, December 17, 2007
Course summary
Summary of the course.
Some reading: An excellent discussion of provable security appears in this survey/position paper of Ivan Damgard. It refers also to issues raised in these 2004 and 2006 papers of Koblitz and Menezes,
where they criticize proofs of security. In my view, some of the issues they raise are valid, though not novel, but the entire discussion is incomplete and rather misleading. See also this AMS
Notices article by Koblitz and these responses (including one by me). Perhaps what bothers me most about these discussions is that they tend to focus too much on proofs of security, as opposed to
precise definitions of security, whereas in my opinion the latter are even more fundamental to both the theory and practice of cryptography (although proofs are of course necessary to show that
"crazy-sounding" definitions such as chosen-message secure signatures or chosen-ciphertext secure encryption can actually be satisfied).
Additional reading:
• This course was somewhat biased towards the theory (as opposed to the practice) of cryptography. To see some of the more practical stuff we missed, take a look at the syllabus for this course of
Dan Boneh. See also bottom of this page for some of the lectures in that course. More practical issues are also often tackled in security courses such as cos 432 here and this MIT course.
• If you are interested in what's going on right now in cryptographic research, take a look at the Cryptology ePrint archive. (This is an unrefereed archive and not all papers there are necessarily
of high quality, but many are.)
• We talked about how authentication is often needed even if you just want to have secrecy. This point is yet again demonstrated in this paper attacking encryption-only configurations of the IPSec protocol.
• We talked about various forms of the login problem, but we did not talk about the most "bare-bones" variant where both parties do not share even a single public key, but only have a shared
password. It turns out that even in this case one may achieve some security, and this is called password-based key exchange. The first protocol achieving this in the standard model is in this
paper by Goldreich and Lindell. More efficient protocols but in the random-oracle model were given here and here. An efficient protocol in the standard model, assuming that the parties have
access to some public parameter that is fixed once and for all by a third trusted party, was given in this paper.
• There are many subtleties that arise when dealing with concurrent scheduling of protocols such as zero knowledge, secure function evaluation and more - an issue we did not talk about at all. One
place to learn more about this is Yehuda Lindell's thesis (which has also been expanded into a more complete book). See also the first 17 lectures in this course by Canetti and Rivest.
• We did not discuss much the issue of common constructions for hash functions. Some exciting results about their (in)security were given in this paper by Wang, Feng, Lai and Yu.
• If you were interested in the notions of private information retrieval, oblivious transfer and their friends, you might be interested also in group signatures and e-voting protocols. For voting
protocols, see lectures 18 and onward in this course of Canetti and Rivest. For group signatures, see this paper by Bellare, Micciancio and Warinschi, and you can find many other related papers
by searching for papers with "group signature" in their title on the eprint archive.
Administrative Information
Lectures: Tuesday and Thursday 1:30pm-2:50pm, Room: Friend 108
Professor: Boaz Barak - 405 CS Building. Phone: 609-981-4982 (I prefer email)
Undergraduate Coordinator: Donna O'Leary - 410 CS Building - 258-1746 doleary@cs.princeton.edu
Teaching Assistant: Rajsekar Manokaran ( rajsekar@cs )
course mailing list.
Grading: 50% homework, 50% take-home final. See syllabus for more details.
│Homework assignment │Due │
│HW1 (LaTeX source) │September 27 │
│HW2 (LaTeX source) │October 4 │
│HW3 (LaTeX source) │October 11 │
│HW4 (LaTeX source) │October 18 │
│HW5 (LaTeX source) │October 25 │
│HW6 (LaTeX source) │November 8 │
│HW7 (LaTeX source) │November 15 │
│HW8 (LaTeX source) │November 29 │
│HW9 (LaTeX source) │December 6 │
│HW10 (LaTeX source) │December 13 │
The textbook for this course is Introduction to Modern Cryptography by Katz and Lindell. However, we will not completely follow the book in this course. Some of the lectures will follow Oded
Goldreich's book Foundations of Cryptography Volumes 1 and 2. Note that some of this material is online in the form of "fragments" of the book.
There are several lecture notes for cryptography courses on the web. In particular the notes of Vadhan, Bellare and Rogaway, Goldwasser and Bellare and Malkin will be useful.
Some good sources for the probability and complexity/algorithms backgrounds are:
A good source for computational number theory is A Computational Introduction to Number Theory and Algebra by Victor Shoup. Note that this book is freely available on-line under the creative commons
license. Another good book on this topic is A Concrete Introduction to Higher Algebra by Lindsay Childs.
Some other more application-oriented crypto books (note that these books take a much less careful approach to definitions and security proofs than we do in the course):
Alfred J. Menezes, Paul C. van Oorschot, and Scott A. Vanstone. Handbook of Applied Cryptography.
Douglas R. Stinson. Cryptography: Theory and Practice.
Bruce Schneier. Applied Cryptography.
Ross Anderson Security Engineering
Honor Code for this class
Collaborating with your classmates on assignments is OK and even encouraged. You must, however, list your collaborators for each problem. The assignment questions have been carefully selected for
their pedagogical value and may be similar to questions on problem sets from past offerings of this course or courses at other universities. Using any preexisting solutions from these other sources
is strictly prohibited. | {"url":"http://www.cs.princeton.edu/courses/archive/fall07/cos433/","timestamp":"2014-04-21T12:11:25Z","content_type":null,"content_length":"50084","record_id":"<urn:uuid:194ed68b-637f-430f-b458-828a459202ae>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00452-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SOLVED] Congruent Modulo
March 27th 2006, 03:57 PM
[SOLVED] Congruent Modulo
I have an assignment due tomorrow and I am completely lost; I was wondering if anyone could give me a hand hehe.
The two silly questions I am stuck on are:
Find all numbers between 10 and 20 which are congruent to 173 modulo 3
Find all numbers between 10 and 20 which are congruent to -173 modulo 4
March 27th 2006, 06:25 PM
Originally Posted by rexsi
I have an assignment due tomorrow and I am completely lost; I was wondering if anyone could give me a hand hehe.
The two silly questions I am stuck on are:
Find all numbers between 10 and 20 which are congruent to 173 modulo 3
Find all numbers between 10 and 20 which are congruent to -173 modulo 4
You need,
$x\equiv 173 \mod 3$
$173\equiv 2\mod 3$
Thus, (transitive property of congruences)
$x\equiv 2\mod 3$
But, $10\leq x\leq 20$.
Thus, $x=11,14,17,20$
You need,
$x\equiv -173 \mod 4$
$-173\equiv 3\mod 4$
Thus, (transitive property of congruences)
$x\equiv 3\mod 4$
But, $10\leq x\leq 20$
Thus, $x=11,15,19$ | {"url":"http://mathhelpforum.com/number-theory/2363-solved-congruent-modulo-print.html","timestamp":"2014-04-16T16:51:31Z","content_type":null,"content_length":"6653","record_id":"<urn:uuid:dabba3cf-9ea6-4a80-acfb-534d8527b16c>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00501-ip-10-147-4-33.ec2.internal.warc.gz"} |
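The two answers above are easy to double-check mechanically. Here is a small Python sketch (assuming, as the answers do, that "between 10 and 20" is inclusive); note that Python's `%` operator already returns a nonnegative remainder for negative operands, which handles the -173 case directly:

```python
# Find all x in a range congruent to a given value modulo m.
def congruent_in_range(value, modulus, lo, hi):
    """Return every x with lo <= x <= hi and x ≡ value (mod modulus)."""
    target = value % modulus          # reduce first, e.g. 173 % 3 == 2, -173 % 4 == 3
    return [x for x in range(lo, hi + 1) if x % modulus == target]

print(congruent_in_range(173, 3, 10, 20))   # [11, 14, 17, 20]
print(congruent_in_range(-173, 4, 10, 20))  # [11, 15, 19]
```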
Poiseuille's law
The Poiseuille's law (or the Hagen-Poiseuille law, also named after Gotthilf Heinrich Ludwig Hagen (1797-1884) for his experiments in 1839) is the physical law concerning the voluminal laminar stationary flow Φ_V of an incompressible uniform viscous liquid (a so-called Newtonian fluid) through a cylindrical tube with a constant circular cross-section. It was experimentally derived in 1838, and formulated and published in 1840 and 1846, by Jean Louis Marie Poiseuille (1797-1869), and is defined by:

Φ_V = π r⁴ Δp* / (8 η l),

where Φ_V is the volume of the liquid poured in the time unit, v the median fluid velocity along the axial cylindrical coordinate z, r the internal radius of the tube, Δp* the pressure drop at the two ends, η the dynamic fluid viscosity, and l the characteristic length along z (in a non-cylindrical tube, a linear dimension in a cross-section). The law can be derived from the Darcy-Weisbach equation, developed in the field of hydraulics and otherwise valid for all types of flow, and can also be expressed in the form:

Λ = 64 / Re,

where Re = ρvd/η is the Reynolds number and ρ the fluid density. In this form the law approximates the friction factor (also called the energy (head) loss factor, friction loss factor, or Darcy (friction) factor) Λ in laminar flow at very low velocities in a cylindrical tube. The theoretical derivation of a slightly different form of Poiseuille's original law was made independently by Wiedman and by Neumann and E. Hagenbach. Hagenbach was the first who called this law the Poiseuille's law.
The law is also very important, especially in hemorheology and hemodynamics, both fields of physiology.
The Poiseuilles' law was later in 1891 extended to turbulent flow by L. R. Wilberforce, based on Hagenbach's work.
The law itself shows what an interesting field this is, because the Darcy-Weisbach equation should properly be named in full as the
Chézy-Weisbach-Darcy-Poiseuille-Hagen-Reynolds-Fanning-Prandtl-Blasius-von Kármán-Nikuradse-Colebrook-White-Rouse-Moody equation or CWDPHRFPBKNCWRM equation in 'short'.
Relation to electrical circuit
Poiseuille's law corresponds to Ohm's law for electrical circuits, where the pressure drop Δp* plays the role of the voltage V and the voluminal flow rate Φ_V that of the current I. Accordingly, the term 8ηl/πr⁴ is the analogue of the electrical resistance R.
SampleSizeMeans: Sample size calculations for normal means
A set of R functions for calculating sample size requirements using three different Bayesian criteria in the context of designing an experiment to estimate a normal mean or the difference between two
normal means. Functions for calculation of required sample sizes for the Average Length Criterion, the Average Coverage Criterion and the Worst Outcome Criterion in the context of normal means are
provided. Functions for both the fully Bayesian and the mixed Bayesian/likelihood approaches are provided.
Version: 1.1
Published: 2012-12-10
Author: Lawrence Joseph and Patrick Belisle
Maintainer: Lawrence Joseph <lawrence.joseph at mcgill.ca>
License: GPL-2 | GPL-3 [expanded from: GPL (≥ 2)]
NeedsCompilation: no
In views: Bayesian
CRAN checks: SampleSizeMeans results
Reference manual: SampleSizeMeans.pdf
Package source: SampleSizeMeans_1.1.tar.gz
OS X binary: SampleSizeMeans_1.1.tgz
Windows binary: SampleSizeMeans_1.1.zip
Old sources: SampleSizeMeans archive | {"url":"http://cran.r-project.org/web/packages/SampleSizeMeans/index.html","timestamp":"2014-04-21T10:33:10Z","content_type":null,"content_length":"3075","record_id":"<urn:uuid:1736d0fe-fadf-4f00-bc50-6524d8726d67>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00382-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: Thomas Jefferson High School Letter from Algebra 2/Trigonometry Professional Learning Community
The Thomas Jefferson High School of Science and Technology is having problems admitting students who are not prepared for math. This has created a scandal being followed by Emma Brown at the
Washington Post and Lisa Gartner at the Washington Examiner.
An interested party sent the letter to this blog, where it was posted in a world first, ahead of WaPo, the Washington Examiner, and the De Morgan Journal.
Letter from Algebra 2/Trigonometry Professional Learning Community. | {"url":"http://newmathdoneright.com/2012/06/04/re-thomas-jefferson-high-school-letter-from-algebra-2trigonometry-professional-learning-community/","timestamp":"2014-04-21T10:00:06Z","content_type":null,"content_length":"159684","record_id":"<urn:uuid:2df0cab5-7539-4084-a910-1a297f7a1758>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00437-ip-10-147-4-33.ec2.internal.warc.gz"} |
As the compounding rate becomes lower and lower, the future value... - (93661) | Transtutors
1. As the compounding rate becomes lower and lower, the future value of inflows approaches
A. 0
B. infinity
C. need more information
D. the present value of the inflows
2. As the interest rate increases, the present value of an amount to be received at the end of a fixed period
A. increases
B. remains the same
C. Not enough information to tell
D. decreases
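A quick numerical sketch of the two limits being asked about (the cash-flow values are made up for illustration): the future value of a series of inflows tends to their plain, undiscounted sum (their present value at a zero rate) as the compounding rate falls to zero, and the present value of a fixed future amount falls as the interest rate rises.

```python
def future_value(inflows, rate):
    """FV, at the end of period n, of end-of-period inflows CF_1..CF_n."""
    n = len(inflows)
    return sum(cf * (1 + rate) ** (n - t) for t, cf in enumerate(inflows, start=1))

def present_value(amount, rate, periods):
    """PV of a single amount received `periods` periods from now."""
    return amount / (1 + rate) ** periods

inflows = [100.0, 100.0, 100.0]
for r in (0.10, 0.01, 0.001, 0.0):
    print(r, round(future_value(inflows, r), 4))
# As r falls toward 0, the FV falls toward the plain sum of the inflows
# (300.0 here), which is also their present value when discounting at 0%.

for r in (0.01, 0.05, 0.10):
    print(r, round(present_value(1000.0, r, 5), 2))
# The PV of a fixed future amount shrinks as the rate rises.
```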
Posted On: Feb 17 2012 05:04 AM
Tags: Accounting, Financial Accounting, Cash Flow Statements, Junior & Senior (Grade 11&12)
Journal of Applied Mathematics
Volume 2012 (2012), Article ID 912545, 11 pages
Research Article
Strong Convergence Theorems for Modifying Halpern Iterations for Quasi-ϕ-Asymptotically Nonexpansive Multivalued Mapping in Banach Spaces with Applications
School of Science, Southwest University of Science and Technology, Mianyang, Sichuan 621010, China
Received 20 August 2012; Accepted 21 November 2012
Academic Editor: Nan-Jing Huang
Copyright © 2012 Li Yi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly cited.
An iterative sequence for quasi-ϕ-asymptotically nonexpansive multivalued mappings, modifying Halpern's iterations, is introduced. Under suitable conditions, some strong convergence theorems are proved. The results presented in the paper improve and extend the corresponding results in the work by Chang et al. 2011.
1. Introduction
Throughout this paper, we denote by ℕ and ℝ the sets of positive integers and real numbers, respectively. Let C be a nonempty closed subset of a real Banach space X. A mapping T: C → C is said to be nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖ for all x, y ∈ C. Let N(C) and CB(C) denote the family of nonempty subsets and of nonempty closed bounded subsets of C, respectively. The Hausdorff metric on CB(C) is defined by H(A, B) = max{sup_{x∈A} d(x, B), sup_{y∈B} d(y, A)} for A, B ∈ CB(C), where d(x, B) = inf_{y∈B} ‖x − y‖. The multivalued mapping T: C → CB(C) is called nonexpansive if H(Tx, Ty) ≤ ‖x − y‖ for all x, y ∈ C. An element p ∈ C is called a fixed point of T if p ∈ Tp. The set of fixed points of T is represented by F(T).
Let X be a real Banach space with dual X*. We denote by J the normalized duality mapping from X to 2^{X*}, which is defined by J(x) = {f ∈ X* : ⟨x, f⟩ = ‖x‖² = ‖f‖²}, where ⟨·, ·⟩ denotes the generalized duality pairing.
A Banach space X is said to be strictly convex if ‖(x + y)/2‖ < 1 for all x, y ∈ X with ‖x‖ = ‖y‖ = 1 and x ≠ y. A Banach space is said to be uniformly convex if ‖x_n − y_n‖ → 0 for any two sequences {x_n}, {y_n} ⊂ X with ‖x_n‖ = ‖y_n‖ = 1 and ‖(x_n + y_n)/2‖ → 1.
The norm of a Banach space X is said to be Gâteaux differentiable if, for each x, y ∈ U = {z ∈ X : ‖z‖ = 1}, the limit

lim_{t→0} (‖x + ty‖ − ‖x‖)/t    (1.3)

exists. In this case, X is said to be smooth. The norm of X is said to be Fréchet differentiable if, for each x ∈ U, the limit (1.3) is attained uniformly for y ∈ U, and the norm is uniformly Fréchet differentiable if the limit (1.3) is attained uniformly for x, y ∈ U. In this case, X is said to be uniformly smooth.
Remark 1.1. The following basic properties of a Banach space X and of the normalized duality mapping J can be found in Cioranescu [1]. (1) X (respectively, X*) is uniformly convex if and only if X* (respectively, X) is uniformly smooth. (2) If X is smooth, then J is single-valued and norm-to-weak* continuous. (3) If X is reflexive, then J is onto. (4) If X is strictly convex, then Jx ∩ Jy = ∅ for all x ≠ y. (5) If X has a Fréchet differentiable norm, then J is norm-to-norm continuous. (6) If X is uniformly smooth, then J is uniformly norm-to-norm continuous on each bounded subset of X. (7) Each uniformly convex Banach space X has the Kadec-Klee property; that is, for any sequence {x_n} ⊂ X, if x_n ⇀ x ∈ X and ‖x_n‖ → ‖x‖, then x_n → x. (8) If X is a reflexive and strictly convex Banach space with a strictly convex dual X* and J*: X* → X is the normalized duality mapping in X*, then J⁻¹ = J*, JJ* = I_{X*}, and J*J = I_X.
Next, we assume that X is a smooth, strictly convex, and reflexive Banach space and C is a nonempty, closed and convex subset of X. In the sequel, we always use φ: X × X → ℝ⁺ to denote the Lyapunov functional defined by

φ(x, y) = ‖x‖² − 2⟨x, Jy⟩ + ‖y‖², for all x, y ∈ X. (1.4)

It is obvious from the definition of the function φ that

(‖x‖ − ‖y‖)² ≤ φ(x, y) ≤ (‖x‖ + ‖y‖)², for all x, y ∈ X. (1.5)

Following Alber [2], the generalized projection Π_C: X → C is defined by

Π_C(x) = arg min_{y∈C} φ(y, x), for all x ∈ X. (1.6)

Many problems in nonlinear analysis can be reformulated as a problem of finding a fixed point of a nonexpansive mapping.
Example 1.2 (see [3]). Let Π_C be the generalized projection from a smooth, reflexive and strictly convex Banach space X onto a nonempty closed convex subset C of X; then Π_C is a closed and quasi-ϕ-nonexpansive mapping from X onto C.
In 1953, Mann [4] introduced the following iterative sequence {x_n}:

x_{n+1} = α_n x_n + (1 − α_n) T x_n,

where the initial guess x_1 ∈ C is arbitrary and {α_n} is a real sequence in [0, 1]. It is known that under appropriate settings the sequence {x_n} converges weakly to a fixed point of T. However, even in a Hilbert space, Mann iteration may fail to converge strongly [5]. Some attempts to construct an iteration method guaranteeing strong convergence have been made. For example, Halpern [6] proposed the following so-called Halpern iteration:

x_{n+1} = α_n u + (1 − α_n) T x_n,

where u, x_1 ∈ C are arbitrarily given and {α_n} is a real sequence in [0, 1]. Another approach was proposed by Nakajo and Takahashi [7]. They generated a sequence {x_n} as follows:

x_1 ∈ C,
y_n = α_n x_n + (1 − α_n) T x_n,
C_n = {z ∈ C : ‖y_n − z‖ ≤ ‖x_n − z‖},
Q_n = {z ∈ C : ⟨x_n − z, x_1 − x_n⟩ ≥ 0},
x_{n+1} = P_{C_n ∩ Q_n}(x_1), n ∈ ℕ,

where {α_n} is a real sequence in [0, 1] and P_K denotes the metric projection from a Hilbert space H onto a closed and convex subset K of H. It should be noted here that the iteration above works only in the Hilbert space setting. To extend this iteration to a Banach space, the concept of relatively nonexpansive mappings was introduced (see [8–12]).
Inspired by Matsushita and Takahashi, in this paper we introduce a modified Halpern-Mann iteration sequence for finding a fixed point of a quasi-ϕ-asymptotically nonexpansive multivalued mapping.
2. Preliminaries
In the sequel, we denote the strong convergence and weak convergence of the sequence {x_n} to x by x_n → x and x_n ⇀ x, respectively.
Lemma 2.1 (see [2]). Let X be a smooth, strictly convex, and reflexive Banach space, and let C be a nonempty closed and convex subset of X. Then the following conclusions hold: (a) z = Π_C(x) if and only if ⟨y − z, Jx − Jz⟩ ≤ 0, for all y ∈ C; (b) φ(x, Π_C(y)) + φ(Π_C(y), y) ≤ φ(x, y), for all x ∈ C and y ∈ X; (c) if x ∈ X and z ∈ C, then φ(z, x) = 0 if and only if z = x.
Remark 2.2. If X is a real Hilbert space, then φ(x, y) = ‖x − y‖² and Π_C is the metric projection P_C of X onto C.
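As a quick numerical sanity check of the two-sided bound (‖x‖ − ‖y‖)² ≤ φ(x, y) ≤ (‖x‖ + ‖y‖)² in the Hilbert-space case of Remark 2.2 (where the duality mapping J is the identity and φ(x, y) = ‖x − y‖²), here is a small Python sketch over random vectors in ℝ³; the sampling ranges are arbitrary.

```python
import math
import random

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def phi(x, y):
    # In a Hilbert space J is the identity, so the Lyapunov functional
    # phi(x, y) = ||x||^2 - 2<x, Jy> + ||y||^2 reduces to ||x - y||^2.
    inner = sum(a * b for a, b in zip(x, y))
    return norm(x) ** 2 - 2 * inner + norm(y) ** 2

rng = random.Random(0)
for _ in range(1000):
    x = [rng.uniform(-5, 5) for _ in range(3)]
    y = [rng.uniform(-5, 5) for _ in range(3)]
    lower = (norm(x) - norm(y)) ** 2
    upper = (norm(x) + norm(y)) ** 2
    assert lower - 1e-9 <= phi(x, y) <= upper + 1e-9
print("two-sided bound holds on 1000 random pairs")
```

The bound follows from Cauchy-Schwarz applied to the cross term, and the check above merely illustrates it in the finite-dimensional Hilbert case; it says nothing, of course, about the general Banach-space setting of the paper.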
Definition 2.3. A point p ∈ C is said to be an asymptotic fixed point of T if there exists a sequence {x_n} ⊂ C such that x_n ⇀ p and d(x_n, T x_n) → 0. Denote the set of all asymptotic fixed points of T by F̂(T).
Definition 2.4. (1) A multivalued mapping T: C → CB(C) is said to be relatively nonexpansive if F(T) ≠ ∅, F̂(T) = F(T), and φ(p, z) ≤ φ(p, x), for all x ∈ C, p ∈ F(T), z ∈ T x.
(2) A multivalued mapping T: C → CB(C) is said to be closed if, for any sequence {x_n} ⊂ C with x_n → x and y_n ∈ T x_n with y_n → y, we have y ∈ T x.
Next, we present an example of relatively nonexpansive multivalued mapping.
Example 2.5 (see [13]). Let X be a smooth, strictly convex, and reflexive Banach space, let C be a nonempty closed and convex subset of X, and let f: C × C → ℝ be a bifunction satisfying the conditions: (A1) f(x, x) = 0, for all x ∈ C; (A2) f(x, y) + f(y, x) ≤ 0, for all x, y ∈ C; (A3) lim sup_{t↓0} f(x + t(z − x), y) ≤ f(x, y), for each x, y, z ∈ C; (A4) the function y ↦ f(x, y) is convex and lower semicontinuous, for each x ∈ C. The "so-called" equilibrium problem for f is to find an x* ∈ C such that f(x*, y) ≥ 0, for all y ∈ C. The set of its solutions is denoted by EP(f).
Let r > 0, x ∈ X, and define the mapping T_r: X → C as follows:

T_r(x) = {z ∈ C : f(z, y) + (1/r)⟨y − z, Jz − Jx⟩ ≥ 0, for all y ∈ C}, (2.1)

then (1) T_r is single-valued, and so T_r x = {T_r x}; (2) T_r is a relatively nonexpansive mapping, therefore it is a closed and quasi-ϕ-nonexpansive mapping; (3) F(T_r) = EP(f).
Definition 2.6. (1) A multivalued mapping T: C → CB(C) is said to be quasi-ϕ-nonexpansive if F(T) ≠ ∅ and φ(p, z) ≤ φ(p, x), for all x ∈ C, p ∈ F(T), z ∈ T x.
(2) A multivalued mapping T: C → CB(C) is said to be quasi-ϕ-asymptotically nonexpansive if F(T) ≠ ∅ and there exists a real sequence {k_n} ⊂ [1, ∞), k_n → 1, such that

φ(p, z_n) ≤ k_n φ(p, x), for all x ∈ C, p ∈ F(T), z_n ∈ T^n x. (2.2)

(3) A multivalued mapping T: C → CB(C) is said to be totally quasi-ϕ-asymptotically nonexpansive if F(T) ≠ ∅ and there exist nonnegative real sequences {ν_n}, {μ_n} with ν_n → 0, μ_n → 0 (as n → ∞) and a strictly increasing continuous function ζ: ℝ⁺ → ℝ⁺ with ζ(0) = 0 such that

φ(p, z_n) ≤ φ(p, x) + ν_n ζ(φ(p, x)) + μ_n, for all x ∈ C, p ∈ F(T), z_n ∈ T^n x. (2.3)
Remark 2.7. From the definitions, it is obvious that a relatively nonexpansive multivalued mapping is a quasi-ϕ-nonexpansive multivalued mapping, and a quasi-ϕ-nonexpansive multivalued mapping is a quasi-ϕ-asymptotically nonexpansive multivalued mapping, but the converse is not true.
Lemma 2.8. Let X be a real uniformly smooth and strictly convex Banach space with the Kadec-Klee property, and let C be a nonempty closed and convex subset of X. Let {x_n} and {y_n} be two sequences in C such that x_n → p and φ(x_n, y_n) → 0, where φ is the function defined by (1.4); then y_n → p.
Proof. For , we have . This implies that and so . Since is uniformly smooth, is reflexive and , therefore, there exist a subsequence and a point such that . Because the norm is weakly lower semi
continuous, we have By Lemma 2.1(a), we have . Hence we have . Since and has the Kadec-Klee property, we have . By Remark 1.1, it follows that . Since , by using the Kadec-Klee property of , we get .
If there exists another subsequence such that , then we have This implies that . So . The conclusion of Lemma 2.8 is proved.
Lemma 2.9. Let X and C be as in Lemma 2.8. Let T: C → CB(C) be a closed and quasi-ϕ-asymptotically nonexpansive multivalued mapping with nonnegative real sequences {k_n} ⊂ [1, ∞), k_n → 1. If F(T) ≠ ∅, then the fixed point set F(T) of T is a closed and convex subset of C.
Proof. Let be a sequence in , such that . Since is quasi--asymptotically nonexpansive multivalued mapping, we have for all and for all . Therefore, By Lemma 2.1, we obtain , Hence, . So, we have .
This implies that is closed.
Let , and , and put . we prove that . Indeed, in view of the definition of , let , we have Since Substituting (2.8) into (2.9) and simplifying it, we have Hence, we have . This implies that . Since
is closed, we have , that is, . This completes the proof of Lemma 2.9.
Definition 2.10. A mapping T: C → CB(C) is said to be uniformly L-Lipschitz continuous if there exists a constant L > 0 such that ‖x_n − y_n‖ ≤ L‖x − y‖, where x, y ∈ C, x_n ∈ T^n x, y_n ∈ T^n y.
3. Main Results
Theorem 3.1. Let X be a real uniformly smooth and strictly convex Banach space with the Kadec-Klee property, let C be a nonempty, closed and convex subset of X, and let T: C → CB(C) be a closed and uniformly L-Lipschitz continuous quasi-ϕ-asymptotically nonexpansive multivalued mapping with nonnegative real sequences {k_n} ⊂ [1, ∞), k_n → 1, satisfying condition (2.2). Let {α_n} be a sequence in [0, 1] with α_n → 0. Let {x_n} be the sequence generated by (3.1), where F(T) is the fixed point set of T and Π is the generalized projection of X onto C_{n+1}. If F(T) is nonempty, then {x_n} converges strongly to Π_{F(T)} x_1.
Proof. (I) First, we prove that are closed and convex subsets in . By the assumption that is closed and convex. Suppose that is closed and convex for some . In view of the definition of , we have
This shows that is closed and convex. The conclusions are proved.
(II) Next, we prove that , for all . In fact, it is obvious that . Suppose , for some . Hence, for any , by (1.6), we have This shows that and so .
(III) Now, we prove that converges strongly to some point . In fact, since , from Lemma 2.1(c), we have Again since , we have It follows from Lemma 2.1(b) that for each and for each , Therefore, is
bounded, and so is . Since and , we have . This implies that is nondecreasing. Hence exists. Since is reflexive, there exists a subsequence such that (some point in ). Since is closed and convex and
. This implies that is weakly closed and for each . In view of , we have Since the norm is weakly lower semicontinuous, we have and so This shows that , and we have . Since , by the virtue of
Kadec-Klee property of , we obtain that . Since is convergent, this together with shows that . If there exists some subsequence such that , then from Lemma 2.1, we have that is, and hence By the way,
from (3.11), it is easy to see that
(IV) Now, we prove that . In fact, since , from (3.1), (3.11), and (3.12), we have Since , it follows from (3.13) and Lemma 2.8 that Since is bounded and is quasi--asymptotically nonexpansive
multivalued mapping, is bounded. In view of . Hence from (3.1), we have that Since , this implies . From Remark 1.1, it yields that Again since this together with (3.16) and the Kadec-Klee-property
of shows that On the other hand, by the assumptions that is -Lipschitz continuous, thus we have From (3.18) and , we have that . In view of the closeness of , it yields that , this implies that .
(V) Finally, we prove that and so . Let . Since , we have . This implies that which yields that . Therefore, . This completes the proof of Theorem 3.1.
By Remark 2.7, the following corollaries are obtained.
Corollary 3.2. Let X and C be as in Theorem 3.1, and let T: C → CB(C) be a closed and uniformly L-Lipschitz continuous relatively nonexpansive multivalued mapping. Let {α_n} ⊂ [0, 1] with α_n → 0. Let {x_n} be the sequence generated by (3.21), where F(T) is the set of fixed points of T and Π is the generalized projection of X onto C_{n+1}; then {x_n} converges strongly to Π_{F(T)} x_1.
Corollary 3.3. Let X and C be as in Theorem 3.1, and let T: C → CB(C) be a closed and uniformly L-Lipschitz continuous quasi-ϕ-nonexpansive multivalued mapping. Let {α_n} be a sequence of real numbers such that α_n ∈ [0, 1] for all n, satisfying α_n → 0. Let {x_n} be the sequence generated by (3.21). Then {x_n} converges strongly to Π_{F(T)} x_1.
4. Application
We utilize Corollary 3.3 to study a modified Halpern iterative algorithm for a system of equilibrium problems.
Theorem 4.1. Let X, C, and {α_n} be the same as in Theorem 3.1. Let f: C × C → ℝ be a bifunction satisfying conditions (A1)-(A4) as given in Example 2.5. Let T_r be the mapping defined by (2.1). Let {x_n} be the sequence generated by (4.2). If F(T_r) = EP(f) ≠ ∅, then {x_n} converges strongly to Π_{EP(f)} x_1, which is a common solution of the system of equilibrium problems for f.
Proof. In Example 2.5 we pointed out that F(T_r) = EP(f) and that T_r is a closed and quasi-ϕ-nonexpansive mapping. Hence (4.2) can be rewritten in the form (4.3). Therefore the conclusion of Theorem 4.1 can be obtained from Corollary 3.3.
1. I. Cioranescu, Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems, vol. 62 of Mathematics and its Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1990.
2. Y. I. Alber, "Metric and generalized projection operators in Banach spaces: properties and applications," in Theory and Applications of Nonlinear Operators of Accretive and Monotone Type, A. G. Kartsatos, Ed., vol. 178 of Lecture Notes in Pure and Applied Mathematics, pp. 15–50, Marcel Dekker, New York, NY, USA, 1996.
3. S. S. Chang, C. K. Chan, and H. W. J. Lee, "Modified block iterative algorithm for quasi-ϕ-asymptotically nonexpansive mappings and equilibrium problem in Banach spaces," Applied Mathematics and Computation, vol. 217, no. 18, pp. 7520–7530, 2011.
4. W. R. Mann, "Mean value methods in iteration," Proceedings of the American Mathematical Society, vol. 4, pp. 506–510, 1953.
5. A. Genel and J. Lindenstrauss, "An example concerning fixed points," Israel Journal of Mathematics, vol. 22, no. 1, pp. 81–86, 1975.
6. B. Halpern, "Fixed points of nonexpansive maps," Bulletin of the American Mathematical Society, vol. 73, pp. 957–961, 1967.
7. K. Nakajo and W. Takahashi, "Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups," Journal of Mathematical Analysis and Applications, vol. 279, no. 2, pp. 372–379, 2003.
8. S. Y. Matsushita and W. Takahashi, "Weak and strong convergence theorems for relatively nonexpansive mappings in Banach spaces," Fixed Point Theory and Applications, vol. 2004, no. 1, pp. 37–47, 2004.
9. S. Matsushita and W. Takahashi, "An iterative algorithm for relatively nonexpansive mappings by hybrid method and applications," in Proceedings of the 3rd International Conference on Nonlinear Analysis and Convex Analysis, pp. 305–313, 2004.
10. S.-y. Matsushita and W. Takahashi, "A strong convergence theorem for relatively nonexpansive mappings in a Banach space," Journal of Approximation Theory, vol. 134, no. 2, pp. 257–266, 2005.
11. X. Qin, Y. J. Cho, S. M. Kang, and H. Zhou, "Convergence of a modified Halpern-type iteration algorithm for quasi-ϕ-nonexpansive mappings," Applied Mathematics Letters, vol. 22, no. 7, pp. 1051–1055, 2009.
12. Z. Wang, Y. Su, D. Wang, and Y. Dong, "A modified Halpern-type iteration algorithm for a family of hemi-relatively nonexpansive mappings and systems of equilibrium problems in Banach spaces," Journal of Computational and Applied Mathematics, vol. 235, no. 8, pp. 2364–2371, 2011.
13. E. Blum and W. Oettli, "From optimization and variational inequalities to equilibrium problems," The Mathematics Student, vol. 63, no. 1–4, pp. 123–145, 1994.
Thermodynamics - ideal gas properties
Enthalpy change:
h2 - h1 = ∫ c_p dT + ∫ [v - T(∂v/∂T)_P] dP
For an ideal gas, the pressure integral goes to zero.
I am not sure how you get this. If H = U + PV, then dH = dU + PdV + VdP and ΔH = ΔU + ∫PdV + ∫VdP = ΔQ + ∫VdP.
If pressure is constant, ∫VdP = 0, so ΔH = ΔQ = ∫C_p dT.
So we have just the integral of C_p with respect to T.
All correct so far?
Not in general. This is true only if P is constant.
It's with the entropy and internal energy where I get confused.
ds = (C_p/T) dT - (∂v/∂T)_P dP
Since U and PV are state functions, ΔU and Δ(PV) are independent of the path between two states so they are the same whether the path is reversible or irreversible. For a reversible path, ΔH = ΔQ +
∫VdP = ∫TdS + ∫VdP. Consequently, this must be true for all paths.
So you can use: dH = TdS + VdP | {"url":"http://www.physicsforums.com/showthread.php?t=552224","timestamp":"2014-04-19T02:20:28Z","content_type":null,"content_length":"24813","record_id":"<urn:uuid:76d517a1-8f5d-4b57-b953-cd1694f652a7>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00614-ip-10-147-4-33.ec2.internal.warc.gz"} |
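For an ideal gas these pieces fit together explicitly. A short check, using v = RT/P (so (∂v/∂T)_P = R/P):

```latex
ds = \frac{C_p}{T}\,dT - \left(\frac{\partial v}{\partial T}\right)_{\!P} dP
   = \frac{C_p}{T}\,dT - \frac{R}{P}\,dP,
\qquad
dh = T\,ds + v\,dP
   = C_p\,dT + \left(v - \frac{RT}{P}\right)dP
   = C_p\,dT.
```

The bracketed pressure term vanishes identically, which is exactly why the enthalpy change of an ideal gas reduces to the integral of C_p over T alone.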
Nothing to Buy
During the reign of Communism in the Soviet Union, there was no unemployment. People had good income regardless of the quality of their work. However, there were very few goods available for purchase.
The Soviet Union decided to distribute goods equally. Therefore, all produce grown and goods manufactured within the Soviet Union were sent to a central location to be distributed to each of the
cities and villages throughout the country. (See lcweb2.loc.gov/frd/cs/sutoc.htm or www.cia.gov/library/publications/the-world-factbook/index.html.)
Because economic needs differed from location to location, villages and cities frequently found themselves with many goods they didn't need, and few goods that they wanted. Therefore, many consumers
had income, but were unable to purchase what they wanted or needed.
Since the fall of Communism and the Soviet Union, a different economic situation has developed. Since employment and product manufacturing and distribution are no longer overseen by the government,
the unemployment rate has risen dramatically. At the same time, the value of the ruble dropped so those who had saved money during the Soviet era no longer had much money. Many companies have come
from the west (United States, Europe) and are selling their products in Russia. Therefore, Russian consumers now have plenty to buy but no money to spend.
In this lesson you will compare Soviet-era marketplace with present-day Russian marketplace, and demonstrate consumer decision-making in both Soviet-era marketplace and present-day Russian
Activity 1
After explaining the differences between the marketplaces in the Soviet Union and present-day Russia, divide the students into two groups: Soviet-era marketplace and present-day Russia marketplace.
After the students are divided into the two groups, give each group of students the appropriate "wallet" and list of goods to buy. Explain to each group that they need to purchase enough goods to
support their family of five for one week. Then, explain to each group that they have only one Saturday to do their shopping (8 hours). In the case of the Soviet-era group, there were many long lines
that consumers had to stand in before they could buy the food or goods.
They will use the tables below to determine how they will best spend their time and money to acquire the goods needed for their families.
Soviet-era Wallet: One week's salary is 50 Rubles (there are 100 kopeks in a ruble)
Soviet-era Prices:
│Foods │Price │Time needed to buy │
│loaf of bread │25 kopeks │1 minute │
│Milk │45 kopeks a gallon │1½ hours │
│Meat │2 Rubles for 2 pounds │If available,3 hours │
│Potatoes │3 kopeks for 2 pounds │1 minute │
│Fruits │25 kopeks for 2 pounds│If available, 3 hours│
│Rice │78 kopeks for 1 pound │1 minute │
│Cereal │ │Not Available │
│Soda │ │Not Available │
│Snacks like Potato Chips │ │Not Available │
│Cheese │1 Ruble for 2 Pounds │If Available, 2 hours│
│Macaroni │20 kopeks for 1 pound │1 minute │
│Flour │82 kopeks for 4 pounds│2 hours │
│Sugar │1 Ruble for 1 pound │2 hours │
│Juice │ │Not Available │
│Frozen Prepared Foods │ │Not Available │
│McDonalds │ │Not Available │
│Goods │Price │Time Needed to Buy │
│Toilet Paper │2 kopeks per roll │1 minute │
│Toothpaste │12 kopeks per tube │2 hours │
│Soap │10 kopeks per bar │2 hours │
│Shampoo │1 Ruble per bottle │If available, 3 hours │
│Conditioner │ │Not Available │
│Matches │1 kopeks per box │1 minute │
│Laundry Detergent │2 Rubles per box │2 hours │
│Disposable Diapers│ │Not Available │
│Paper Towels │ │Not Available │
│Gasoline for car │60 kopeks per Gallon│If Available, 6 hours │
│Pet Food │ │Not Available │
Present-day Wallet: 500 Rubles
Present-day Russia Prices:
│Foods │Price │Time needed to buy│
│loaf of bread │10 Rubles │1 minute │
│Milk │45 Rubles per Gallon │1 minute │
│Meat │60 Rubles for 2 pounds │1 minute │
│Potatoes │20 Rubles for 2 pounds │1 minute │
│Fruits │40 Rubles for 2 pounds │1 minute │
│Rice │10 Rubles for 1 pound │1 minute │
│Cereal │15 Rubles for 1 box │1 minute │
│Soda │40 Rubles for 2 Liters │1 minute │
│Snacks like Potato Chips│25 Rubles for 1 large bag │1 minute │
│Cheese │60 Rubles for 2 pounds │1 minute │
│Macaroni │5 Rubles for 1 pound │1 minute │
│Flour │40 Rubles for 4 pounds │1 minute │
│Sugar │20 Rubles for 1 pound │1 minute │
│Juice │5 Rubles for 1 gallon │1 minute │
│Frozen Prepared Foods │150 Rubles for 1 frozen pizza │1 minute │
│McDonalds │40 Rubles for a Cheeseburger │1 minute │
│Goods │Price │Time Needed to Buy│
│Toilet Paper │5 Rubles per roll │1 minute │
│Toothpaste │20 Rubles per tube │1 minute │
│Soap │5 Rubles per bar │1 minute │
│Shampoo │40 Rubles per bottle │1 minute │
│Conditioner │40 Rubles per bottle │1 minute │
│Matches │1 Ruble per box │1 minute │
│Laundry Detergent │20 Rubles per box │1 minute │
│Disposable Diapers│80 Rubles for 28 │1 minute │
│Paper Towels │15 Rubles per roll │1 minute │
│Gasoline for car │40 Rubles per gallon │1 minute │
│Pet Food │20 Rubles per bag │1 minute │
Students must decide how to spend their income and their time. They should generate a list on chart-paper of all of the food and goods that they bought and how much money they have left over. They
should also calculate the amount of time they needed to purchase all of those goods.
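A short script can total a basket for students or the teacher. This is only a sketch with a hypothetical sample basket; the prices and queue times come from the Soviet-era tables above (100 kopeks = 1 ruble; the budgets are 50 rubles and 8 hours = 480 minutes):

```python
# Hypothetical sample basket for a Soviet-era shopper.
# Each entry: (item, cost in kopeks, time in line in minutes) -- from the tables above.
basket = [
    ("bread x4 loaves", 4 * 25, 4 * 1),
    ("milk x2 gallons", 2 * 45, 2 * 90),
    ("meat 2 lb",       200,    180),
    ("potatoes 6 lb",   3 * 3,  3 * 1),
    ("soap 1 bar",      10,     120),
]

total_kopeks = sum(cost for _, cost, _ in basket)
total_minutes = sum(minutes for _, _, minutes in basket)

print(f"cost: {total_kopeks / 100:.2f} rubles of a 50-ruble wage")
print(f"time: {total_minutes / 60:.1f} hours of an 8-hour Saturday")
```

For this basket the money is barely touched (about 4 rubles of 50), yet the queues already overrun the 8-hour Saturday. Time, not income, is the binding constraint, which is the point of the activity.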
Discussion Questions
After students have shared their shopping lists, discuss the following:
1. Would you rather shop in the Soviet-era or present-day Russia? Why?
2. Which group had more money to spend compared to the cost?
3. Which group had more choices of goods to buy?
4. By how much did the price of milk increase between the Soviet Era and present-day Russia?
5. How much did the average wage increase?
6. What do the two calculations above suggest about the state of the present-day Russian economy compared to the Soviet Era economy?
7. How much money did the Soviet Era consumers have after shopping?
8. What does that suggest about the economy in the Soviet Union?
9. Go here to read reviews of restaurants in St. Petersburg. Remembering that $1 = 27 Rubles:
• In dollars, how much would a meal at Count Suvorov cost?
• In Rubles, how much is the same meal?
• What is the average Russian consumer's wage? Could he or she afford to eat a meal at Count Suvorov?
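For the teacher, the arithmetic behind questions 4, 5, and 9 can be checked with a few lines. The values come from the tables and text above; the 540-ruble meal is a hypothetical example bill, not a figure from the restaurant reviews:

```python
# Price and wage ratios between the Soviet era and present-day Russia.
# Soviet-era milk was 45 kopeks (0.45 rubles) per gallon.
milk_increase = 45.0 / 0.45      # gallon of milk: 45 kopeks -> 45 rubles
wage_increase = 500.0 / 50.0     # weekly wage: 50 rubles -> 500 rubles

# Question 9: converting a restaurant bill at $1 = 27 rubles.
meal_rubles = 540.0              # hypothetical bill
meal_dollars = meal_rubles / 27.0

print(milk_increase, wage_increase, meal_dollars)  # 100.0 10.0 20.0
```

One price rose roughly 100-fold while the weekly wage rose only 10-fold, which makes the comparison asked for in question 6 concrete.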
Answer the following questions. Remember to print them out and hand them in when you have finished.
1. Where would you rather live: in the Soviet Union or in present-day Russia? Why?
2. How do you think the economy in the Soviet Union affected relationships between people? Why?
3. How do you think the economy in the present-day Russia affects relationships between people? Why?
Is the Ruble Becoming Rubbish?
In this partner-approved (Illuminations) lesson plan for grades 8-12, students "analyze the effects of economic turmoil on various segments of the Russian economy and relate them to the local economy
and their own lives. " Although this plan is written for older students, it can be easily adapted to a fifth-grade classroom. There are also some interesting Extension Activities at the end of the
lesson. www.nytimes.com/learning/teachers/lessons/19980909wednesday.html?searchpv=learning_lessons | {"url":"http://www.econedlink.org/lessons/EconEdLink-print-lesson.php?lid=177&type=student","timestamp":"2014-04-21T07:23:33Z","content_type":null,"content_length":"15860","record_id":"<urn:uuid:a18d71c5-86f6-4e83-a669-593ba038467e>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00288-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here, a, b are the semi-major/semi-minor axes of the ellipse. The circle (d) is called the director circle of the ellipse.
The proof follows from these remarks (from the monograph of Demetrios Bounakis on conic sections, p. 163):
- The foci project on the tangents, to points G, G', H, ... lying on the auxiliary circle c = (K, KI).
- The projections of the foci on the tangent: EG, FH, have |EG||FH| = |FG'||FH| = |FO||FP| = (a+c)(a-c) = b², for c = |EF|/2.
- |AI|² = |AM||AN| = |GE||FH| = b². Hence |AK|² = |AI|²+|IK|² = a²+b² .
For the meaning of the constants, as well as other basic facts on the ellipse, look at Ellipse.html .
Problem: find the locus of points P viewing the ellipse under a fixed angle (phi).
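The φ = 90° case of this locus is the director circle itself, and that can be checked numerically. A sketch with hypothetical values a = 3, b = 2: a line y = mx + c is tangent to x²/a² + y²/b² = 1 exactly when c² = a²m² + b², so the slopes of the two tangents from (x0, y0) are the roots of (x0² − a²)m² − 2x0·y0·m + (y0² − b²) = 0:

```python
import math

a, b = 3.0, 2.0
r = math.hypot(a, b)                       # director-circle radius sqrt(a^2 + b^2)
t = 0.7                                    # arbitrary parameter along the circle
x0, y0 = r * math.cos(t), r * math.sin(t)  # a point on the director circle

# Product of the roots of (x0^2 - a^2) m^2 - 2 x0 y0 m + (y0^2 - b^2) = 0:
slope_product = (y0**2 - b**2) / (x0**2 - a**2)
print(slope_product)  # ≈ -1.0: the two tangents are perpendicular
```

Since x0² + y0² = a² + b², the numerator y0² − b² equals −(x0² − a²), so the product is exactly −1 (up to floating point) at every point of the circle.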
Jacob Steiner (Werke Bd. II, p. 341) gives an interesting application of the director circle to the problem of the determination of the osculating circle at a point. This is discussed in the file EllipseEvolute.html .
Produced with EucliDraw© http://www.euclidraw.com | {"url":"http://www.math.uoc.gr/~pamfilos/eGallery/problems/Director.html","timestamp":"2014-04-18T02:58:46Z","content_type":null,"content_length":"2815","record_id":"<urn:uuid:a53feff1-632d-45a6-a887-98c22bfbdc4c>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00643-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is there a master list of the Big-O notation for everything?
Is there a master list of the Big-O notation for everything? Data structures, algorithms, operations performed on each, average-case, worst-case, etc.
Dictionary of Algorithms and Data Structures is a fairly comprehensive list, and includes complexity (Big-O) in the algorithms' descriptions. If you need more information, it'll be in one of the
linked references, and there's always Wikipedia as a fallback.
The Cormen book is more about teaching you how to prove what Big-O would be for a given algorithm, rather than rote memorization of algorithm to its Big-O performance. The former is far more valuable
than the latter, and requires an investment on your part.
Try "Introduction to Algorithms" by Cormen, Leiserson, and Rivest. If it's not in there, it's probably not worth knowing.
In C++ the STL standard is defined by the Big-O characteristics of the algorithms as well as the space requirements. That way you could switch between competing implementations of STL and still know that your program had the same-ish runtime characteristics. Particularly good STL implementations could even special-case lists of particular types to be better than the standard requirements.
It made it easy to pick the correct iterator or list type for a particular problem, because you could easily weigh between space consumption and speed.
Of course, Big-O is only a guideline, as all constants are removed. If an algorithm runs in k*O(n), it would be classified as O(n), but if k is sufficiently high it could be worse than O(n^2) for small values of n.
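The last point — that a large hidden constant can make an O(n) routine slower than an O(n²) one for small inputs — is easy to demonstrate with hypothetical cost functions:

```python
K = 1000  # hypothetical hidden constant of the "fast" linear algorithm

def cost_linear(n):     # k * O(n)
    return K * n

def cost_quadratic(n):  # O(n^2), no big constant
    return n * n

# Find the first input size at which the linear algorithm actually wins.
crossover = next(n for n in range(1, 10**6) if cost_linear(n) <= cost_quadratic(n))
print(crossover)  # 1000: below this size, the "worse" O(n^2) algorithm is cheaper
```

The crossover lands exactly at n = K, so for inputs smaller than the constant, asymptotic superiority tells you nothing about actual cost.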
Introduction to Algorithms, Second Edition, aka CLRS (Cormen, Leiserson, Rivest, Stein), is the closest thing I can think of.
If that fails, then try The Art of Computer Programming, by Knuth. If it's not in those, you probably need to do some real research. | {"url":"http://stackoverflow.com/questions/180510/is-there-a-master-list-of-the-big-o-notation-for-everything","timestamp":"2014-04-24T21:38:41Z","content_type":null,"content_length":"77641","record_id":"<urn:uuid:516c19b5-6254-46fc-a6cc-23595a46691c>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00194-ip-10-147-4-33.ec2.internal.warc.gz"} |
Novato Calculus Tutor
Find a Novato Calculus Tutor
...After tutoring with Andreas, my test scores went up by two whole letter grades in college-level Linear Algebra, which led to an overall B grade in the class. Andreas is excellent and I highly
recommend his services. I learned formal logic as part of my mathematical education and I additionally studied the use of logic as part of set theory and computer science.
41 Subjects: including calculus, geometry, statistics, algebra 1
...Students' natural approach to this is often a fearful one, but with the right guidance and instruction, this can turn to interest and curiosity. At this stage, I relate algebra back to our
everyday lives as much as possible. I also keep encouragement strong since understanding at this level is so critical to student's success in future math courses.
50 Subjects: including calculus, English, reading, GRE
...I am an expert on math standardized testing, as stated in my reviews from previous students. I have worked on thousands of these types of problems and can show your student how to do every
single one, which will dramatically increase their test scores! I can help your student ace the following standardized math tests: SAT, ACT, GED, SSAT, PSAT, ASVAB, TEAS, and more.
59 Subjects: including calculus, chemistry, reading, physics
...Many of the concepts in it apply to real life and are more intuitive than a lot of students believe. That being said, it can be challenging and I work hard to make it straightforward, provided
extra material for extra practice as needed. I have taken all 4 semesters of Calculus and also a partial differential equations class which deals especially with 4th semester Calculus.
28 Subjects: including calculus, chemistry, English, reading
...I teach Business Marketing at the Graduate level to students from all over the world. I have been a consultant in the marketing field to top companies in the Bay Area. I have multiple degrees
in electrical engineering.
39 Subjects: including calculus, English, chemistry, reading | {"url":"http://www.purplemath.com/Novato_calculus_tutors.php","timestamp":"2014-04-16T22:37:47Z","content_type":null,"content_length":"23993","record_id":"<urn:uuid:92fe3336-3bd4-4851-ad97-901657b1a383>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00228-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: ELSEVIER Theoretical Computer Science 178 (1997) 103–118
Permutations generated by token passing in graphs
M.D. Atkinson*, M.J. Livesey, D. Tulley
School of Mathematical and Computational Sciences, North Haugh, St. Andrews KY16 9SS, UK
Received May 1995; revised February 1996
Communicated by MS. Paterson
A transportation graph is a directed graph with a designated input node and a designated
output node. Initially, the input node contains an ordered set of tokens 1, 2, 3, …. The tokens
are removed from the input node in this order and transferred through the graph to the output
node in a series of moves; each move transfers a token from a node to an adjacent node. Two or
more tokens cannot reside on an internal node simultaneously. When the tokens arrive at the
output node they will appear in a permutation of their original order. The main result is
a description of the possible arrival permutations in terms of regular sets. This description
allows the number of arrival permutations of each length to be computed. The theory is then
applied to packet-switching networks and has implications for the resequencing problem. It is
also applied to some complex data structures and extends previously known results to the case
that the data structures are of bounded capacity. A by-product of this investigation is a new | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/798/1274151.html","timestamp":"2014-04-19T02:27:25Z","content_type":null,"content_length":"8537","record_id":"<urn:uuid:d23110f6-8b0f-4965-a9dd-7b7261da0fbc>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00610-ip-10-147-4-33.ec2.internal.warc.gz"} |
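The model in the abstract is easy to experiment with. A sketch over my own toy graph (not one from the paper): an input node feeding two parallel internal nodes A and B, each holding at most one token, both feeding the output node. Brute-force search enumerates every arrival permutation:

```python
def arrival_permutations(n):
    """Enumerate arrival orders for the toy graph input -> {A, B} -> output,
    where internal nodes A and B each hold at most one token at a time."""
    results = set()

    def search(nxt, a, b, out):
        if len(out) == n:
            results.add(out)
            return
        if nxt <= n:        # move the next token from input to an empty internal node
            if a is None:
                search(nxt + 1, nxt, b, out)
            if b is None:
                search(nxt + 1, a, nxt, out)
        if a is not None:   # move a waiting token on to the output node
            search(nxt, None, b, out + (a,))
        if b is not None:
            search(nxt, a, None, out + (b,))

    search(1, None, None, ())
    return results

print(sorted(arrival_permutations(3)))
```

For n = 3, token 3 can never arrive first: tokens 1 and 2 must leave the input before 3, filling both buffers, so one of them reaches the output ahead of 3. Exactly the four permutations not starting with 3 appear — and counting |arrival_permutations(n)| for successive n is the kind of enumeration the regular-set description in the paper makes possible in general.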
Fujishige HomePage
Satoru FUJISHIGE (Professor Emeritus, Dr. Eng.)
Research Institute for Mathematical Sciences
Kyoto University
Research Fields
Mathematical Engineering, Mathematical Programming
● Combinatorial Optimization
● Discrete Algorithms
● Graphs, Networks, and Matroids
● Submodular Functions
● Location and Scheduling Problems
● The LP-Newton Method for Linear Programming
● 2003 Fulkerson Prize (The American Mathematical Society and The Mathematical Programming Society (the name of the latter was changed to The Mathematical Optimization Society in 2011))
Editorial Board Members
● 1987--Present: Discrete Applied Mathematics
● 2004--Present: Discrete Optimization
● 2005--Present: Pacific Journal of Optimization
● 2008--2010 (Editor): Journal of the Operations Research Society of Japan
● S. Fujishige: "Submodular Functions and Optimization" (North-Holland, 1991) (2nd ed., Elsevier, 2005)
● M. Iri, S. Fujishige, and T. Oyama: "Graphs, Networks, and Matroids" (Sangyo-Tosho, 1986, 2005) (in Japanese)
● S. Fujishige: "Discrete Mathematics " (Iwanami, 1993) (in Japanese)
Recent noteworthy results
● S. Fujishige and J. Massberg: Dual consistent systems of linear inequalities and cardinality constrained polytopes. Mathematical Programming, Ser.B, online published, 29 January 2014. DOI 10.1007
● S. Fujishige, S. Tanigawa, and Y. Yoshida: Generalized skew bisubmodularity: A characterization and a min-max theorem. Discrete Optimization, published online on 17 December 2013.
● B. Chen and S. Fujishige: On the feasible payoff set of two-player repeated games with unequal discounting. International Journal of Game Theory, Vol. 42 (2013), pp. 295--303.
● S. Fujishige: A note on polylinking flow networks. Mathematical Programming, Ser. A, Vol. 137 (2013), pp. 601--607.
● A. Frank, S. Fujishige, N. Kamiyama, and N. Katoh: Independent arborescences in directed graphs. Discrete Mathematics, Vol. 313 (2013), pp. 453--459.
● S. Fujishige and Z. Yang: On revealed preference and indivisibilities. Modern Economy, Vol. 3 (2012), pp. 752--758 (paper); DOI: 10.4236/me.2012.36096.
● S. Fujishige and N. Kamiyama: "The root location problem for arc-disjoint arborescences. " Discrete Applied Mathematics, Vol. 160 (2012), pp. 1964--1970.
● S. Fujishige and S. Isotani: "A submodular function minimization algorithm based on the minimum-norm base " Pacific Journal of Optimization, Vol. 7 (2011), pp. 3--17 (preprint).
● S. Fujishige: "A note on disjoint arborescences " Combinatorica, Vol. 30(2) (2010), pp. 247--252 (preprint).
● S. T. McCormick and S. Fujishige: "Strongly polynomial and fully combinatorial algorithms for bisubmodular function minimization " Mathematical Programming, Vol. 122 (2010), pp. 87--120.
● S. Fujishige: "Theory of principal partitions revisited. " In: W. Cook, L. Lov\'asz, and J. Vygen (Editors): Research Trends in Combinatorial Optimization (Springer, Berlin, 2009), pp. 127--162
● S. Fujishige, T. Hayashi, and K. Nagano: "Minimizing continuous extensions of discrete convex functions with linear inequality constraints " SIAM Journal on Optimization, Vol. 20 (2009), pp.
● S. Fujishige and K. Nagano: "A structure theory for the parametric submodular intersection problem " Mathematics of Operations Research, Vol. 34 (2009), pp. 513--521.
● S. Fujishige, T. Hayashi, K. Yamashita, and U. Zimmermann: "Zonotopes and the LP-Newton method " Optimization and Engineering, Vol. 10 (2009), pp. 193--205.
● M. Sakashita, K. Makino, H. Nagamochi, and S. Fujishige: "Minimum transversals in posi-modular systems " SIAM Journal on Discrete Mathematics, Vol. 23 (2009), pp. 858--871.
● U. Faigle and S. Fujishige: "A general model for matroids and the greedy algorithm" Mathematical Programming, Ser. A, Vol. 119 (2009), pp. 353--369.
● M. Sakashita, K. Makino, and S. Fujishige: "Minimizing a monotone concave function with laminar covering constraints" Discrete Applied Mathematics, Vol. 156 (2008), pp. 204--219.
● M. Sakashita, K. Makino, and S. Fujishige: "Minimum cost source location problems with flow requirements" Algorithmica, Vol. 50 (2008), pp. 555--583.
● S. Fujishige, G. A. Koshevoy, and Y. Sano: "Matroids on convex geometries" Discrete Mathematics, Vol. 307 (2007), pp. 1936--1950.
● S. Fujishige and A. Tamura: "A two-sided discrete-concave market with possibly bounded side payments: an approach by discrete convex analysis" Mathematics of Operations Research, Vol. 32 (2007),
pp. 136--155.
● S. Fujishige: "A maximum flow algorithm using MA ordering" Operations Research Letters, Vol. 31 (2003), pp. 176--178. Also see: S. Fujishige and S. Isotani: "New maximum flow algorithms by MA
orderings and scaling " Journal of the Operations Research Society of Japan, Vol. 46 (2003), pp. 243-250; and Y. Matsuoka and S. Fujishige: "Practical efficiency of maximum flow algorithms using
MA orderings and preflows " Journal of the Operations Research Society of Japan, Vol. 48 (2005), pp. 297-307.
● S. Iwata, L. Fleischer, and S. Fujishige: "A combinatorial strongly polynomial algorithm for minimizing submodular functions" Journal of ACM, Vol. 48 (2001), pp. 761-777 [2003 Fulkerson
Prize-winning paper].
Some old but noteworthy results
● S. Fujishige: "Algorithms for solving the independent-flow problems " Journal of the Operations Research Society of Japan, Vol. 21, No. 2 (June 1978), pp. 189-203 (pdf).
A code in C for submodular function minimization is available upon request by e-mail.
NIPS Workshop on Discrete Optimization in Machine Learning (DISCML) 2012 (plenary talk): Submodularity and Discrete Convexity (video)
Office : RIMS Annex, Room 312, RIMS
E-mail : fujishig (at) kurims.kyoto-u.ac.jp | {"url":"http://www.kurims.kyoto-u.ac.jp/~fujishig/index.html","timestamp":"2014-04-19T14:42:32Z","content_type":null,"content_length":"7917","record_id":"<urn:uuid:1253207b-37e6-4c77-a4ef-129bddf47e5a>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00389-ip-10-147-4-33.ec2.internal.warc.gz"} |
On the distortion of turbulence by a progressive surface wave
Teixeira, M. A. C. and Belcher, S. E. (2002) On the distortion of turbulence by a progressive surface wave. Journal of Fluid Mechanics, 458. pp. 229-267. ISSN 0022-1120
Full text not archived in this repository.
To link to this article DOI: 10.1017/S0022112002007838
A rapid-distortion model is developed to investigate the interaction of weak turbulence with a monochromatic irrotational surface water wave. The model is applicable when the orbital velocity of the
wave is larger than the turbulence intensity, and when the slope of the wave is sufficiently high that the straining of the turbulence by the wave dominates over the straining of the turbulence by
itself. The turbulence suffers two distortions. Firstly, vorticity in the turbulence is modulated by the wave orbital motions, which leads to the streamwise Reynolds stress attaining maxima at the
wave crests and minima at the wave troughs; the Reynolds stress normal to the free surface develops minima at the wave crests and maxima at the troughs. Secondly, over several wave cycles the Stokes
drift associated with the wave tilts vertical vorticity into the horizontal direction, subsequently stretching it into elongated streamwise vortices, which come to dominate the flow. These results
are shown to be strikingly different from turbulence distorted by a mean shear flow, when `streaky structures' of high and low streamwise velocity fluctuations develop. It is shown that, in the case
of distortion by a mean shear flow, the tendency for the mean shear to produce streamwise vortices by distortion of the turbulent vorticity is largely cancelled by a distortion of the mean vorticity
by the turbulent fluctuations. This latter process is absent in distortion by Stokes drift, since there is then no mean vorticity. The components of the Reynolds stress and the integral length scales
computed from turbulence distorted by Stokes drift show the same behaviour as in the simulations of Langmuir turbulence reported by McWilliams, Sullivan & Moeng (1997). Hence we suggest that
turbulent vorticity in the upper ocean, such as produced by breaking waves, may help to provide the initial seeds for Langmuir circulations, thereby complementing the shear-flow instability mechanism
developed by Craik & Leibovich (1976). The tilting of the vertical vorticity into the horizontal by the Stokes drift tends also to produce a shear stress that does work against the mean straining
associated with the wave orbital motions. The turbulent kinetic energy then increases at the expense of energy in the wave. Hence the wave decays. An expression for the wave attenuation rate is
obtained by scaling the equation for the wave energy, and is found to be broadly consistent with available laboratory data.
How Could You Possibly Love/Hate Math?
Growing up, I never really liked math. I saw it as one of those necessary evils of school. People always told me that if I wanted to do well and get into college, I needed to do well in math. So I
took the courses required of a high school student, but I remember feeling utter confusion from being in those classes. My key problem was my inquisitive nature. I really didn’t like being “told”
that certain things were true in math (I felt this way in most classes). I hated just memorizing stuff, or memorizing it incorrectly, and getting poor grades because I couldn’t regurgitate
information precise enough. If this stuff was in fact “true”, I wanted to understand why. It seemed like so much was told to us without any explanation, that its hard to expect anybody to just buy
into it. But that’s what teachers expected. And I was sent to the principal’s office a number of times for what they called “disturbing class”, but I’d just call it asking questions.
At the same time, I was taking a debate class. This class was quite the opposite of my math classes, or really any other class I’d ever had. We were introduced to philosophers like Immanuel Kant,
John Stuart Mill, Thomas Hobbes, John Rawls, etc. The list goes on and on. We discussed theories, and spoke of how these concepts could be used to support or reject various propositions. Although
these philosophies were quite complex, what I loved was the inquiries we were allowed to make into understanding the various positions. Several classmates and I would sit and point out apparent
paradoxes in the theories. We’d ask about them and sometimes find that others (more famous than us) had pointed out the same paradoxes and other things that seemed like paradoxes could be resolved
with a deeper understanding of the philosophy.
Hate is a strong word, but I remember feeling that mathematicians were inferior to computer programmers because “all math could be programmed”. This was based on the number of formulas I had learned
through high school and I remember having a similar feeling through my early years of college. But things changed when I took a course called Set Theory. Last year, I wrote a piece that somewhat
describes this change:
They Do Exist!
Let me tell you a story about when I was a kid
See, I was confused and here's what I did.
I said "irrational number, what’s that supposed to mean?
Infinite decimal, no pattern? Nah, can't be what it seems."
So I dismissed them and called the teacher wrong.
Said they can't exist, so let’s move along.
The sad thing is that nobody seemed to mind.
Or maybe they thought showing me was a waste of time.
Then one teacher said "I can prove they exist to you.
Let me tell you about my friend, the square root of two."
I figured it'd be the same ol' same ol', so I said,
"Trying to show me infinity is like making gold from lead"
So he replies, "Suppose you're right, what would that imply?"
And immediately I thought of calling all my teachers lies.
"What if it can be written in lowest terms, say p over q.
Then if we square both sides we get a fraction for two."
He did a little math and showed that p must be even.
Then he asked, "if q is even, will you start believing?"
I stood, amazed by what he was about to do.
But I responded, "but we don't know anything about q"
He says, "but we do know that p squared is a factor of 4.
And that is equal to 2 q squared, like we said before."
Then he divided by two and suddenly we knew something about q.
He had just shown that q must be even too.
Knowing now that the fraction couldn't be in lowest terms
a rational expression for this number cannot be confirmed.
So I shook his hand and called him a good man.
Because for once I could finally understand
a concept that I had denied all my life,
a concept that had caused me such strife.
And as I walked away from the teacher's midst,
Excited, I called him an alchemist and exhaled "THEY DO EXIST!"
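The argument the poem walks through is the classical proof by contradiction; stated compactly:

```latex
\textbf{Claim.} $\sqrt{2}$ is irrational.

\textbf{Proof.} Suppose $\sqrt{2} = p/q$ with the fraction in lowest terms.
Squaring gives $p^2 = 2q^2$, so $p^2$ is even, which forces $p$ to be even,
say $p = 2k$. Substituting, $4k^2 = 2q^2$, hence $q^2 = 2k^2$ and $q$ is
even as well. Then $p$ and $q$ share the factor $2$, contradicting the
assumption of lowest terms. $\blacksquare$
```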
Aside from its lack of poetic content, I think that many mathematicians can relate to this poem, particularly the ones who go into the field for its theoretic principles. For many of us, Set Theory
is somewhat of a “back to the basics” course where we learn what math is really about. The focus is no longer on how well you can memorize a formula. Instead, it's more of a philosophy course on
mathematics – like an introduction to the theory of mathematics, hence the name Set Theory.
The poem above focuses on a particular frustration of mine, irrational numbers. Early on, we’re asked to believe that these numbers exist, but we’re not given any answers as to why they should exist.
The same could be said for a number of similar concepts though – basically, whenever a new concept is introduced, there is a reasonable question of how we know it is true. This is not just a matter of practicality, but a necessity of mathematics. I mean, I could say "let's now consider the set of all numbers for which X + 1 = X + 2", but if this were true for any X, then it would mean that 1 equals 2, which we know is not true. So the set I'd be referring to is the empty set. We can still talk about it, but that's all it is.
So why is this concept of answering the why’s of mathematics ignored, sometimes until a student’s college years? This gives students a false impression of what math really is, which leads to people
making statements like “I hate math”, not really knowing what math is about.
4 thoughts on “How Could You Possibly Love/Hate Math?”
1. broccoli
2. I wish someone would’ve given me the same advice when I was in school. Unfortunately, I “hated” math and escaped to my imagination when I should’ve been listening. smh I tell the young ones to
focus on Math and Science, so they can have choices when they go off to college. Great poem and blog.
3. I love and hate math. I think that’s partly due to not finding any practical use for math from algebra all the way to trigonometry. It wasn’t till discrete math that I renewed my interest in
1. This is a common statement. In fact, I probably fall into that same line of reasoning. Discrete math and set theory are beautiful subjects. Unfortunately most people don’t even know that
they’re a part of what we consider mathematics. But these are the maths that are most important to computer scientists, and I’d wager that they are also more related to real world concepts
that people encounter on a daily basis. | {"url":"http://learninglover.com/blog/?p=92","timestamp":"2014-04-20T08:15:42Z","content_type":null,"content_length":"44870","record_id":"<urn:uuid:ae2cb7df-b8eb-4a3d-8da6-affc2515e3eb>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00147-ip-10-147-4-33.ec2.internal.warc.gz"} |
module without using %
I remember there is a way to calculate the remainder without using the % operator but can't seem to find it. Anyone know what it was?
In what situation do you need any other means of getting the remainder than using the "%" ?
When you are asked to implement modulus without using %?
I just read an implementation of the standard function div() and remembered there was a way without division and % to get the remainder. Was just curious what it was again
Without division? I don't see how that's possible, but if it is I'd like to see :P.
This page shows a way for powers of 2. In addition, div() seems to calculate both a/b and a%b. If you have done the first, the modulo can be found without doing the same division again. That is,
div() is probably meant for optimizing cases where you need both results.
int a = 10;
int b = 3;
int remainder = a - ((a/b)*b);
You can always emulate integer division with subtraction and counting. By doing so, you'll get the modulus automatically.
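That subtract-and-count approach can be sketched in a few lines — slow when the quotient is large, but it never touches / or %:

```cpp
#include <utility>

// Emulate integer division by repeated subtraction. Whatever is left
// over when we can no longer subtract b is exactly the remainder, so
// the modulus falls out of the division for free.
std::pair<unsigned, unsigned> divmod_sub(unsigned a, unsigned b) {
    unsigned q = 0;
    while (a >= b) {
        a -= b;
        ++q;
    }
    return {q, a};  // {quotient, remainder}
}
```

For example, divmod_sub(10, 3) gives {3, 1}, matching 10/3 and 10%3.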
If the number you're dividing by is small enough and is known at compile time, and the number being divided is no larger than half your register size, then to do fast division you can multiply by the fixed-point inverse of the divisor and then convert back to an integer. That's a multiply and a shift instead of a division, making it definitely faster. You wouldn't do this for powers of two, though, as then you can just use a single shift. e.g.

Code:
template <unsigned char n>
unsigned short div(unsigned short x) { return (int)x * ((0x10000+n-1)/n) >> 16; }

Mod can then be trivially based on this, though there may be a faster way also.

Code:
template <unsigned char n>
unsigned short mod(unsigned short x) { return x - div<n>(x) * n; }

Note that I'm also assuming that x is positive, as I can't be bothered checking the behaviour of negatives, hence the use of unsigned to make sure. Repeated subtraction until the number is less than the divisor will also work, not that you'd want to do that of course.
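One caveat on the fixed-point trick above, worth checking before relying on it: with a 16-bit multiplier and shift it is only exact over part of the unsigned short range — for example, with n = 3 it returns 21845 for x = 65534, while 65534/3 = 21844. Widening the intermediate product to 64 bits and shifting by 32 removes the error for every 16-bit x and 8-bit n; a sketch:

```cpp
#include <cstdint>

// Division by a compile-time constant via a fixed-point inverse:
// multiply by ceil(2^32 / n) in 64-bit arithmetic, then shift right 32.
// With x < 2^16 and n < 2^8 the rounding error in the inverse can never
// reach the integer part, so the result matches x / n for every input.
template <unsigned char n>
std::uint16_t div32(std::uint16_t x) {
    return static_cast<std::uint16_t>(
        (static_cast<std::uint64_t>(x) * ((0x100000000ULL + n - 1) / n)) >> 32);
}

template <unsigned char n>
std::uint16_t mod32(std::uint16_t x) {
    return static_cast<std::uint16_t>(x - div32<n>(x) * n);
}
```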
Introduction to multivariate data analysis in chemical engineering
Multivariate data analysis methods are increasingly being used beyond chemical engineering and have practical uses for process control, though there are challenges to applying multivariate methods.
been around for decades, but until recently, have primarily been used in laboratories and specialist technical groups, rarely being applied to production processes.
Most chemical manufacturing processes are highly multivariate in nature due to the complex reactions involved i.e. there are a large number of variables which are typically very interactive. Complex
systems require multiple measurements to fully understand them.
However, the Statistical Process Control (SPC) tools used in chemical engineering still rely largely on univariate (i.e. one variable at a time) methods, which do not show the full picture of complex
chemical processes despite collecting masses of data through instruments and control systems. These SPC tools use traditional statistical approaches such as mean, standard deviation and Student’s-t,
which only look at single variables individually.
While univariate statistics can be useful for investigating simple systems, they tend to fail when more complex systems are analyzed. This is because they cannot detect relationships that may exist
between the variables being studied, as they treat all such variables as being independent of each other.
This relationship among variables is known as covariance or correlation, and is a central theme in MVA. Covariance describes the influence that one variable has on others, and process upsets will
typically be caused by several variables acting together.
A common example is the relationship between Temperature and pH, as shown below. Suppose I am the Plant Manager at a chemical plant. My process has been running smoothly until suddenly, the product quality starts to deteriorate and I have to decide what to do.
I have two control charts at my disposal for the measurements performed on the system. These are supposed to be related to product quality and are meant to serve as an indicator that the process is
in control.
The ‘pH’ and ‘Temperature’ control charts indicate that nothing is wrong, but in fact something is. The point marked with a red dot in both charts is an abnormal situation, but how can I detect this? Obviously univariate statistics have failed me! What if I plot the points of both control charts against each other to form a simple multivariate control chart? The plot labeled ‘Multivariate view’ shows this.
The square region formed by dotted red lines in the ‘Multivariate view’ plot shows the univariate limits my process is allowed to operate in, however, on this simple multivariate graph it can be seen
that variable 1 and variable 2 are related to each other, i.e. higher temperature corresponds to higher pH. Accordingly, the multivariate limits, indicated by the ellipse in the figure, are very
different from the two univariate limits, indicated by the dashed lines in the figure. The red highlighted point in the figure is well inside the two univariate limits but is outside the multivariate
control limits. In this case, if multivariate control charts were applied the process operator would be able to detect the process deviation.
This is a simple example of how multivariate methods enable superior Early Event Detection capabilities compared to univariate control charts, especially when systems are complex and the number of
input variables becomes large i.e. greater than 10.
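The ellipse-versus-box idea above can be made concrete with a squared Mahalanobis distance, which is what multivariate control charts effectively monitor. The numbers below are illustrative only (not taken from the article): temperature with mean 50 and standard deviation 5, pH with mean 7 and standard deviation 0.5, correlation 0.9, and a chi-squared (2 d.f., 99%) limit of about 9.21:

```cpp
// 2x2 covariance: s11 = var(T), s12 = cov(T, pH), s22 = var(pH).
struct Cov2 { double s11, s12, s22; };

// Squared Mahalanobis distance of a deviation (d1, d2) from the mean,
// using the closed-form inverse of the 2x2 covariance matrix.
double mahalanobis2(double d1, double d2, const Cov2& c) {
    double det = c.s11 * c.s22 - c.s12 * c.s12;
    return (c.s22 * d1 * d1 - 2.0 * c.s12 * d1 * d2 + c.s11 * d2 * d2) / det;
}
```

With Cov2 c{25.0, 2.25, 0.25}, the point (T, pH) = (55, 6.2) — inside both univariate 2-sigma limits — has a squared distance of about 33.9, far outside the 9.21 multivariate limit, because high temperature with low pH runs against the correlation. The correlated point (55, 7.9) gives about 5.3 and stays in control.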
Typical multivariate techniques
The main multivariate techniques are Exploratory Data Analysis, Regression/Prediction methods, and Classification methods.
Exploratory data analysis (EDA) attempts to find the hidden structure or underlying patterns in large, complex data sets. This gives a better understanding of the process and can lead to insights
that would not have been observed otherwise. EDA methods include Cluster analysis and Principal Component Analysis (PCA). An example application of exploratory data analysis is checking for
contaminants in a process or feedstock, or identifying by-products caused by incorrect process settings.
Regression analysis involves developing a model from available data to predict a desired response or responses for future measurements. Multivariate regression is an extension of the simple straight
line model case, where there are many independent variables and at least one dependent variable. Regression methods include Multiple Linear Regression (MLR), Principal Component Regression (PCR) and
Partial Least Squares Regression (PLSR). Common applications include predicting purity, yield or end product quality from input raw material quality.
Classification is the separation (or sorting) of a group of objects into one or more classes based on distinctive features in the objects. Classification methods include Linear Discriminant Analysis
(LDA), SIMCA, and Support Vector Machine Classification (SVM-C). Example applications include grouping products according to similar characteristics or quality grades. | {"url":"http://www.controleng.com/industry-news/more-news/single-article/introduction-to-multivariate-data-analysis-in-chemical-engineering/5a6f25ebeeafa92271e51eac5b796ee9.html","timestamp":"2014-04-17T19:55:59Z","content_type":null,"content_length":"62216","record_id":"<urn:uuid:dcf00a0d-7c42-4963-a71e-43776b16d2e6>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00132-ip-10-147-4-33.ec2.internal.warc.gz"} |
Interpretation of organic components from Positive Matrix Factorization of aerosol mass spectrometric data

^1Cooperative Institute for Research in the Environmental Sciences (CIRES), Boulder, CO, USA
^2Department of Chemistry and Biochemistry, University of Colorado, Boulder, CO, USA
^3Aerodyne Research, Inc., Billerica, MA, USA
^4Atmos. Sci. Res. Center, University at Albany, State University of New York, Albany, NY, USA
Abstract. The organic aerosol (OA) dataset from an Aerodyne Aerosol Mass Spectrometer (Q-AMS) collected at the Pittsburgh Air Quality Study (PAQS) in September 2002 was analyzed with Positive Matrix
Factorization (PMF). Three components – hydrocarbon-like organic aerosol (HOA), a highly-oxygenated OA (OOA-1) that correlates well with sulfate, and a less-oxygenated, semi-volatile OA (OOA-2)
that correlates well with nitrate and chloride – are identified and interpreted as primary combustion emissions, aged SOA, and semivolatile, less aged SOA, respectively. The complexity of
interpreting the PMF solutions of unit mass resolution (UMR) AMS data is illustrated by a detailed analysis of the solutions as a function of number of components and rotational forcing. A public
web-based database of AMS spectra has been created to aid this type of analysis. Realistic synthetic data is also used to characterize the behavior of PMF for choosing the best number of factors, and
evaluating the rotations of non-unique solutions. The ambient and synthetic data indicate that the variation of the PMF quality of fit parameter (Q, a normalized chi-squared metric) vs. number of
factors in the solution is useful to identify the minimum number of factors, but more detailed analysis and interpretation are needed to choose the best number of factors. The maximum value of the
rotational matrix is not useful for determining the best number of factors. In synthetic datasets, factors are "split" into two or more components when solving for more factors than were used in the
input. Elements of the "splitting" behavior are observed in solutions of real datasets with several factors. Significant structure remains in the residual of the real dataset after
physically-meaningful factors have been assigned and an unrealistic number of factors would be required to explain the remaining variance. This residual structure appears to be due to variability in
the spectra of the components (especially OOA-2 in this case), which is likely to be a key limit of the retrievability of components from AMS datasets using PMF and similar methods that need to
assume constant component mass spectra. Methods for characterizing and dealing with this variability are needed. Interpretation of PMF factors must be done carefully. Synthetic data indicate that PMF
internal diagnostics and similarity to available source component spectra together are not sufficient for identifying factors. It is critical to use correlations between factor and external
measurement time series and other criteria to support factor interpretations. True components with <5% of the mass are unlikely to be retrieved accurately. Results from this study may be useful for
interpreting the PMF analysis of data from other aerosol mass spectrometers. Researchers are urged to analyze future datasets carefully, including synthetic analyses, and to evaluate whether the
conclusions made here apply to their datasets.
Citation: Ulbrich, I. M., Canagaratna, M. R., Zhang, Q., Worsnop, D. R., and Jimenez, J. L.: Interpretation of organic components from Positive Matrix Factorization of aerosol mass spectrometric
data, Atmos. Chem. Phys., 9, 2891-2918, doi:10.5194/acp-9-2891-2009, 2009. | {"url":"http://www.atmos-chem-phys.net/9/2891/2009/acp-9-2891-2009.html","timestamp":"2014-04-16T20:15:55Z","content_type":null,"content_length":"28184","record_id":"<urn:uuid:692a0e94-6092-422e-b912-278711e6b17a>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00616-ip-10-147-4-33.ec2.internal.warc.gz"} |
Vernon Hills Math Tutor
Find a Vernon Hills Math Tutor
I have been teaching Spanish at the college and high school levels at schools such as DePaul University, North Park College and Loyola Academy for 16 years and have a PhD in Hispanic Literature
from the University of Texas at Austin. Moreover, I hold a Bilingual Type 29 teaching certificate from th...
17 Subjects: including statistics, algebra 1, geometry, prealgebra
...MY RECENT TEST SCORES: GMAT: 760; GRE: Quantitative 168/170 (perfect 800 on earlier version), Verbal 168/170. I am a certified teacher in Illinois. Last year I worked at a K-8 magnet school in Chicago, where I served as the librarian and computer teacher. As part of my obtaining my teaching certificate, my coursework covered intervention techniques when working with students one on one.
38 Subjects: including algebra 1, reading, trigonometry, statistics
...I've worked for several big box test prep companies, and I have a 99% score. In the past 5 years, I've written proprietary guides on ACT strategy for local companies. These guides have been
used to improve scores all over the midwest.
24 Subjects: including differential equations, discrete math, linear algebra, algebra 1
...Constant failure is discouraging. I have taught for three years at the middle school level. I also have experience teaching ACT math prep.
12 Subjects: including calculus, algebra 1, algebra 2, geometry
...I am able to tutor all Math subjects from Pre-Algebra to Calculus and Basic Microsoft Excel. I am available in the Roselle Area after 7pm on weekdays and weekends as needed. I am very
passionate about mathematics and connecting it with students learning.
10 Subjects: including statistics, algebra 1, algebra 2, calculus
Nearby Cities With Math Tutors
Buffalo Grove Math Tutors
Green Oaks, IL Math Tutors
Hawthorn Woods, IL Math Tutors
Indian Creek, IL Math Tutors
Kildeer, IL Math Tutors
Lake Forest, IL Math Tutors
Libertyville, IL Math Tutors
Lincolnshire Math Tutors
Long Grove, IL Math Tutors
Mettawa, IL Math Tutors
Mundelein Math Tutors
North Chicago Math Tutors
Northbrook Math Tutors
Riverwoods, IL Math Tutors
Wheeling, IL Math Tutors | {"url":"http://www.purplemath.com/Vernon_Hills_Math_tutors.php","timestamp":"2014-04-18T15:55:38Z","content_type":null,"content_length":"23784","record_id":"<urn:uuid:0169936c-bf81-45ca-8143-7d8a8adccf64>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00228-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to calculate Weighted Average for price increase/decrease
May 12th 2008, 05:11 AM
How to calculate Weighted Average for price increase/decrease
I have been asked recently by my manager to provide some analysis to the average price increase/decrease for our products. He mentioned that I need to use weighted average.
I am comparing the percentage of a price increase/decrease for 5 products during Jan 2007 & Jan 2008. My aim is to find the weighted average for the % of the price increase/decrease.
I have been told that weighted average is calculated using a different method and I am not sure how to do that.
I hope you can explain to me how weighted average is calculated. Thank you
I have simply used the following method : Sum(13%,12.5%,-18.18%,0%,-3.85%)/5 = 0.69% (Thi is the % increase/decrease of the price when comparing 2007 with 2008 prices)
Please refer to the attached file for the example. Also note that my real life file consist of about 1000 products.
May 12th 2008, 06:32 AM
Your sum is very bad. Never do that again.
Your percentages are calculated with two pieces, right?
Something / (Something Else)
It is likely these "Something Else"s are the ones you want.
It is also possble you want the "Something"s.
It is also possible you need other stuff entirely.
In any case, here's a simple example.
1.00 ==> 1.05 ==> 1.05/1.00 - 1 = +5%
2.00 ==> 2.20 ==> 2.20/2.00 - 1 = +10%
Weighted Average (Something Else Version)
((+5%)*1.00 + (+10%)*2.00)/(1.00+2.00) = +8.3333%
Weighted Average (Something Version)
((+5%)*1.05 + (+10%)*2.20)/(1.05+2.20) = +8.3846%
Weighted Average (Other Stuff Entirely Version)
In this case, we need a little additional information.
Suppose we sell 1000 of product 1 and 670 of product 2
This may be a way to do it - using counts.
((+5%)*1000 + (+10%)*670)/(1000+670) = +7.005988%
This may be a way to do it - using total original revenue before price changes.
((+5%)*1000(1.00) + (+10%)*670(2.00))/(1000(1.00)+670(2.00)) = +7.863248%
See if you can think up some other ways. Present everything that makes sense to your employer and see which are preferred. There may be more than one or yet another may be suggested. | {"url":"http://mathhelpforum.com/advanced-statistics/38080-how-calculate-weighted-average-price-increase-decrease-print.html","timestamp":"2014-04-20T18:10:35Z","content_type":null,"content_length":"6001","record_id":"<urn:uuid:37016f9c-8351-4803-9edf-4eae77b0ad83>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00630-ip-10-147-4-33.ec2.internal.warc.gz"} |
Exeter Math
I recently (re)discovered the Exeter Math books. I had seen them once before but apparently didn't think twice about them. Now I can't seem to get enough of them. I've been obsessed all week.

The books are simply problem sets, designed to be done in order and more or less completely. No mindless context-free pictures. No chapters. No definitions. No glossy pages with theorems. The questions are not even broken up by topic. The books are just filled with math problems.

Exeter runs them under/with their Harkness Philosophy. They describe it better than I can, but it's highly student-centered, which in my book makes it worthy of more research if not emulation.

Some of the problems are not unlike word problems in a typical text, some are very tough, and some are gems that can be solved half a dozen different ways. My favorite so far is:

This is a modified version I used in an exam, but it's the same idea.

Rich Beveridge also describes his solution to the problems below that involved Fibonacci numbers and Phi.

The problem sets appear to be a potential backbone for a true continuum of math classes. No more starting off on Chapter 1 of a new book just because it's September. Topics truly spiral through the problem sets. No more boring the bejeebers out of the kids by doing 20 problems that are exactly the same. I see so much potential...

Take a look at the books, see what you think.

Glenn Waddell posted a great series on Exeter math. He's got great insight into Exeter's pedagogy and has posted more resources than are available on Exeter's homepage.
Math Help
November 11th 2011, 05:40 AM #1
Nov 2011
40+40*0+1 = ?
Hi All,
I'm currently having a 'discussion' with the majority of my friends regarding the outcome of the following equation.
Now, using the Order of Operations I get the answer as 41 by the following steps
40+40*0+1 becomes
40+0+1 this results in
The rest of my friends that I am discussing this with think the answer is 1, as in
40+40*0+1 becomes
80*0+1 becomes
0+1 which results in
1
I'm certain that I am correct as multiplication has to be done first but as I'm the only one saying this way is correct, I'm now starting to doubt myself. If I am wrong, can you explain why?
Kind Regards
Last edited by mr fantastic; November 11th 2011 at 12:25 PM. Reason: Re-titled.
Re: Please help me, I think I'm loosing my mind
You are absolutely 100% correct. Multiplications are done before additions/subtractions, always, everywhere in the world.
The order of operations that your friends are talking about occurs when you enter this equation into a cheap pocket calculator. However that's because those low-tech calculators can only process
a single operation at a time, so they get the order 'wrong'.
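For what it's worth, any conforming compiler agrees with the 41 answer, because the language grammar itself encodes the precedence:

```cpp
// C++ parses this as 40 + (40 * 0) + 1 because * binds tighter
// than +; the compile-time check below fails on any other reading.
constexpr int disputed = 40 + 40 * 0 + 1;
static_assert(disputed == 41, "multiplication is applied before addition");
```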
Re: Please help me, I think I'm loosing my mind
Thanks Corsica, really appreciate that. Not wanting to offend you as I appreciate your quick reply, but I would also like a couple of other to confirm this as well, not that I don't believe you,
but just one reply doesn’t add much weight to me convincing them they are wrong, it’s currently me against 4 with one of them describing the Order of Operations as 'rubbish'.
Last edited by mr fantastic; November 11th 2011 at 12:27 PM.
Re: Please help me, I think I'm loosing my mind
Thanks Corsica, really appreciate that. Not wanting to offend you as I appreciate your quick reply, but I would also like a couple of other to confirm this as well, not that I don't believe you,
but just one reply doesn’t add much weight to me convincing them they are wrong, it’s currently me against 4 with one of them describing the Order of Operations as 'B******s'.
Your answer is correct (:
Re: Please help me, I think I'm loosing my mind
: )) Cute.
The answer is 41. Uhm, you know you could use a math manual or something of that sort to prove them you're right, don't you?
Re: Please help me, I think I'm loosing my mind
You should see the number of web pages I've linked for them and they still don't believe me. I sure if I brought a math text book out that at somepoint during reading it someone would shout
'Blasphemy!' Haha
Re: Please help me, I think I'm loosing my mind
Your friends are mathematically inept,
and/or this thread is trolling.
cf 48÷2(9+3) = ? | Know Your Meme
Re: Please help me, I think I'm loosing my mind
Re: Please help me, I think I'm loosing my mind
At this point the thread is exhibiting troll-like characteristics.
Thread closed.
Homework Help
Posted by Ali on Saturday, October 22, 2011 at 12:30am.
A 4kg block is sitting on the floor.
(a) How much potential energy does it have? (b) How much kinetic energy does it have?
(c) The block is raised to 2m high. How much potential energy does it have? (d) The block is raised to 4m high. How much potential energy does it have?
(e) The block is raised to 45m high. How much potential energy does it have? (f) The block is dropped to the ground. How fast is it traveling when it hits the ground? (remember chapter 2?)
(g) How much kinetic energy does it have when it hits the ground?
• science - Jai, Saturday, October 22, 2011 at 2:40am
recall that potential energy is stored energy, and is given by the formula:
PE = mgh (units in Joules)
m = mass (in kg)
g = acceleration due to gravity = 9.8 m/s^2
h = height (in m)
if the reference position is at the floor (that is, h = 0), the PE is equal to
PE = 4*9.8*0
PE = 0
recall that kinetic energy is energy in motion and is given by the formula:
KE = (1/2)*m*v^2
v = velocity (in m/s)
since it's not moving (v = 0),
KE = 0
at 2 m high,
PE = 4*9.8*2
PE = 78.4 J
at 4 m high,
PE = 4*9.8*4
PE = 156.8 J
at 45 m high,
PE = 4*9.8*45
PE = 1764 J
recall that the motion of the block is uniformly accelerated motion, and we can therefore use the formula:
(v,f)^2 - (v,o)^2 = 2gh
v,f = final velocity (in m/s)
v,o = initial velocity (in m/s)
since it is dropped from rest, v,o = 0:
v,f^2 = 2gh
v,f = sqrt(2gh)
v,f = sqrt(2*9.8*45)
v,f = 29.7 m/s
KE = (1/2)mv^2
KE = (1/2)*4*29.7^2
KE = 1764.18 J
this actually shows the law of conservation of energy: ΔPE = -ΔKE
hope this helps~ :)
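The numbers above can be double-checked in a few lines; note that the kinetic energy at impact equals mgh exactly, since ½m(2gh) = mgh — that is the conservation point:

```cpp
#include <cmath>

// Values from the problem: m = 4 kg, g = 9.8 m/s^2.
constexpr double kMass = 4.0;  // kg
constexpr double kG = 9.8;     // m/s^2

double pe(double h) { return kMass * kG * h; }            // potential energy, J
double ke(double v) { return 0.5 * kMass * v * v; }       // kinetic energy, J
double impact_speed(double h) { return std::sqrt(2.0 * kG * h); }  // dropped from rest, m/s
```

This gives pe(2) = 78.4 J, pe(4) = 156.8 J, pe(45) = 1764 J, impact_speed(45) ≈ 29.7 m/s, and ke(impact_speed(45)) = 1764 J = pe(45).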
to show there are irrational numbers
March 14th 2010, 06:25 AM #1
Mar 2010
to show there are irrational numbers
hi everyone.
if we have to prove there are irrational number x and y are the element of Real numbers,
for example 2x-y is rational (by contradiction).
so how would that be? just getting two random numbers and substitute them into the equation? cant work out what it is so please, can anyone help me? thankyou so much!
Hello yanyannn
Welcome to Math Help Forum!
hi everyone.
if we have to prove there are irrational number x and y are the element of Real numbers,
for example 2x-y is rational (by contradiction).
so how would that be? just getting two random numbers and substitute them into the equation? cant work out what it is so please, can anyone help me? thankyou so much!
I'm sorry, but you must write in intelligible English if we are to understand what you mean. This just doesn't make sense.
Please set out what you mean as carefully and as precisely as you can.
Granddad is much better at both explaining and understanding/answering math questions than me.
So explaining to him what you mean is a good idea.
I understand your question to be:
Proving that there are irrational numbers by contradiction. Then I assume that the example you made (2x-y) is your own.
I might be wrong, but substituting random numbers as x, y doesn't give you anything. You have to substitute an irrational number and then prove that it cannot be written as rational (that is, in the form $\frac{a}{b}$).
A classical example is $\sqrt{2}$. That is an irrational number, and the proof is very old; checking that proof could give you an idea (it is done by contradiction).
Hope it was helpful...
March 14th 2010, 08:45 AM #2
March 15th 2010, 04:35 PM #3
Dec 2009 | {"url":"http://mathhelpforum.com/pre-calculus/133731-show-there-irrational-numbers.html","timestamp":"2014-04-18T01:21:15Z","content_type":null,"content_length":"37293","record_id":"<urn:uuid:39b81c07-3d56-4a39-b5af-f4c24b7b46db>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00159-ip-10-147-4-33.ec2.internal.warc.gz"} |
Asset Retirement Obligation - AccountingTools
Asset Retirement Obligation
An asset retirement obligation (ARO) is a liability associated with the eventual retirement of a fixed asset, such as a legal requirement to return a site to its previous condition. A business should
recognize the fair value of an ARO when it incurs the liability and if it can make a reasonable estimate of the fair value of the ARO. If a fair value is not initially obtainable, then recognize the
ARO at a later date, when the fair value becomes available. If a company acquires a fixed asset to which an ARO is attached, then recognize a liability for the ARO as of the fixed asset acquisition
date. Recognizing this liability as soon as possible gives the readers of a company's financial statements a better grasp of the true state of its obligations.
Initial Accounting for an Asset Retirement Obligation
In most cases, the only way to determine the fair value of an ARO is to use an expected present value technique. When constructing an expected present value of future cash flows, you should
incorporate the following points into the calculation:
• Discount rate. Use a credit-adjusted risk-free rate to discount cash flows to their present value. Thus, the credit standing of a business may impact the discount rate used.
• Probability distribution. When calculating the expected present value of an ARO, and there are only two possible outcomes, assign a 50 percent probability to each one until you have additional
information that alters the initial probability distribution. Otherwise, spread the probability across the full set of possible scenarios.
Follow these steps in calculating the expected present value of an ARO:
1. Estimate the timing and amount of the cash flows associated with the retirement activities.
2. Determine the credit-adjusted risk-free rate.
3. Recognize any period-to-period increase in the carrying amount of the ARO liability as accretion expense. To do so, multiply the beginning liability by the credit-adjusted risk-free rate derived
when the liability was first measured.
4. Recognize upward liability revisions as a new liability layer, and discount them at the current credit-adjusted risk-free rate.
5. Recognize downward liability revisions by reducing the appropriate liability layer, and discount the reduction at the rate used for the initial recognition of the related liability layer.
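As a rough numeric sketch of these steps (the scenario amounts, the 10-year horizon, and the 6% rate below are made-up assumptions for illustration, not guidance):

```python
# Hypothetical ARO: two retirement-cost scenarios weighted 50/50,
# settlement expected in 10 years, 6% credit-adjusted risk-free rate.
scenarios = [(100_000, 0.5), (140_000, 0.5)]  # (estimated cash outflow, probability)
years = 10
rate = 0.06  # credit-adjusted risk-free rate

# Steps 1-2: probability-weighted cash flow, then discount to present value.
expected_cash_flow = sum(cost * p for cost, p in scenarios)
aro_liability = expected_cash_flow / (1 + rate) ** years  # initial fair value

# Step 3: year-one accretion expense = beginning liability x original rate.
accretion_year_1 = aro_liability * rate

print(f"initial ARO liability: {aro_liability:,.2f}")
print(f"year-1 accretion:      {accretion_year_1:,.2f}")
```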
When you initially recognize an ARO liability, also capitalize the related asset retirement cost by adding it to the carrying amount of the related fixed asset.
Subsequent Measurement of an Asset Retirement Obligation
It is possible that an ARO liability may change over time. If the liability increases, consider the incremental increase in each period to be an additional layer of liability, in addition to any
previous liability layers. The following points will assist in your recognition of these additional layers:
1. Initially recognize each layer at its fair value.
2. Systematically allocate the ARO liability to expense over the useful life of the underlying asset.
3. Measure changes in the liability due to the passage of time, using the credit-adjusted risk-free rate when each layer of liability was first recognized. You should recognize this cost as an
increase in the liability. When charged to expense, this is classified as accretion expense (which is not the same as interest expense).
4. As the time period shortens before an ARO is realized, your assessment of the timing, amount, and probabilities associated with cash flows will improve. You will likely need to alter the ARO
liability based on these changes in estimate. If you make an upward revision in the ARO liability, then discount it using the current credit-adjusted risk-free rate. If you make a downward
revision in the ARO liability, then discount it using the original credit-adjusted risk-free rate when the liability layer was first recognized. If you cannot identify the liability layer to
which the downward adjustment relates, then use a weighted-average credit-adjusted risk-free rate to discount it.
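The layer mechanics can be sketched the same way (again with invented amounts and rates; the point is only which discount rate attaches to which layer):

```python
# Each liability layer keeps the credit-adjusted risk-free rate that was
# in effect when the layer was first recognized (hypothetical figures).
layers = [{"cash_flow": 120_000, "rate": 0.06, "years_left": 9}]  # original layer

def present_value(layer):
    return layer["cash_flow"] / (1 + layer["rate"]) ** layer["years_left"]

# Upward revision: a NEW layer, discounted at the CURRENT rate (say 7%).
layers.append({"cash_flow": 20_000, "rate": 0.07, "years_left": 9})

# Downward revision: reduce the identified layer, still at its ORIGINAL 6%.
layers[0]["cash_flow"] -= 10_000

total_aro = sum(present_value(layer) for layer in layers)
print(f"total ARO liability: {total_aro:,.2f}")
```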
You normally settle an ARO only when the underlying fixed asset is retired, though it is possible that some portion of an ARO will be settled prior to asset retirement. If it becomes apparent that no
expenses will be required as part of the retirement of an asset, then reverse any remaining unamortized ARO to zero. | {"url":"http://www.accountingtools.com/asset-retirement-obligation","timestamp":"2014-04-19T07:29:49Z","content_type":null,"content_length":"50778","record_id":"<urn:uuid:52fbabaf-e753-4af5-b876-ae5e0419b21a>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00607-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: Re: RE: Re: Using backslash in macros
st: Re: RE: Re: Using backslash in macros
From "Martin Weiss" <martin.weiss1@gmx.de>
To <statalist@hsphsun2.harvard.edu>
Subject st: Re: RE: Re: Using backslash in macros
Date Thu, 7 May 2009 19:41:31 +0200
If Steinar's WTP is low, he can also take a look at http://www.stata.com/statalist/archive/2008-04/msg01106.html
----- Original Message ----- From: "Nick Cox" <n.j.cox@durham.ac.uk>
To: <statalist@hsphsun2.harvard.edu>
Sent: Thursday, May 07, 2009 7:36 PM
Subject: st: RE: Re: Using backslash in macros
Martin referred you to an article by me, which is certainly pertinent. Note that this issue is thoroughly ventilated in the manuals at [U] 18.3.11.
The bottom line for you is to use forward slashes for Windows filepath delimiters. Stata will happily translate.
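For example (illustrative paths), the forward-slash version expands both macros as intended, since it is the backslash immediately before the left quote that suppresses the expansion of the second macro:

```stata
local indatadir e:/data/stata10/
local table mytable
di "`indatadir'`table'"
```

This displays e:/data/stata10/mytable, and Stata translates the forward slashes when opening files on Windows.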
N.B. what you call macrovariables are in Stata called local macros.
Martin Weiss
Steinar Fossedal
I'm having problems combining macrovariables when the first macro ends
with a backslash. Apparently, the end backslash is not included, and
the following macro is not unpacked. This is not a problem if the two
macros do not follow each other directly. The example below
illustrates the problem:
local indatadir e:\data\stata10\
local table mytable
di "indatadir <`indatadir'>"
di "table <`table'>"
// Backslash missing, macro `table' not unpacked:
di "`indatadir'`table'"
// Backslash ok and `table' unpacked when adding a sign (any letter)
behind the first macro:
di "`indatadir'_`table'"
The result is as follows:
local indatadir e:\data\stata10\
. local table mytable
. di "indatadir <`indatadir'>"
indatadir <e:\data\stata10\>
. di "table <`table'>"
table <mytable>
. // Backslash missing, macro `table' not unpacked:
. di "`indatadir'`table'"
. // Backslash ok and `table' unpacked when adding a sign (any letter)
behind the first macro:
. di "`indatadir'_`table'"
What is the reason behind this, and how can I work around it?
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2009-05/msg00265.html","timestamp":"2014-04-20T14:04:41Z","content_type":null,"content_length":"8776","record_id":"<urn:uuid:bbc566ce-7e76-4402-ae9c-c2c3243c65fc>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00661-ip-10-147-4-33.ec2.internal.warc.gz"} |
PDL::Filter::LinPred - Linear predictive filtering
$a = new PDL::Filter::LinPred(
{NLags => 10,
LagInterval => 2,
LagsBehind => 2,
Data => $dat});
($pd,$corrslic) = $a->predict($dat);
A filter by doing linear prediction: tries to predict the next value in a data stream as accurately as possible. The filtered data is the predicted value. The parameters are
Number of time lags used for prediction
How many points each lag should be
If, for some strange reason, you wish to predict not the next but the one after that (i.e. usually f(t) is predicted from f(t-1) and f(t-2) etc., but with LagsBehind => 2, f(t) is predicted from
f(t-2) and f(t-3)).
The input data, which may contain other dimensions past the first (time). The extraneous dimensions are assumed to represent epochs so the data is just concatenated.
As an alternative to Data, you can just give the temporal autocorrelation function.
Don't do prediction or filtering but smoothing.
The method predict gives a prediction for some data plus a corresponding slice of the data, if evaluated in list context. This slice is given so that you may, if you wish, easily plot them atop each other.
The rest of the documentation is under lazy evaluation.
Copyright (C) Tuomas J. Lukka 1997. All rights reserved. There is no warranty. You are allowed to redistribute this software / documentation under certain conditions. For details, see the file
COPYING in the PDL distribution. If this file is separated from the PDL distribution, the copyright notice should be included in the file. | {"url":"http://pdl.perl.org/PDLdocs/Filter/LinPred.html","timestamp":"2014-04-16T19:24:32Z","content_type":null,"content_length":"3589","record_id":"<urn:uuid:8cdd6ce0-2ee6-4a4b-97ef-db27ca1e7c63>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00055-ip-10-147-4-33.ec2.internal.warc.gz"} |
I need help/advice with a Dirac delta integral. I computed the integral seen in the attachment and got 1. However, I am not quite sure if I did it right. Also, I don't know how to throw this kind of integral with vectors into Wolfram Alpha.
So, does this integral equal 1?
Isn't the area under the curve of the dirac delta function always equal to 1?
Yes, but it is a little bit different if you multiply it with another function, like here.
Well it's beyond my knowledge, that was literally all I knew about the dirac delta function lol.
is that triple integral??
@experimentX No but it is a vector integral in R2.
what is your surface??
as for the Dirac delta, \[ \int_p^q \delta(t - a)\, f(t)\, dt = f(a) \; \text{where } a \in (p, q)\]
@experimentX My integral looks a bit harder than your example. You can see it in the attachment.
I couldn't make heads or tails out of it \[ \int_{\mathfrak{R}^3} \delta(x-y)\,x^2\, dx \; \text{where} \; y = (0,1,2)^T \] Usually vector surface integrals are like \[\iint_s \vec F(x,y,z) \cdot d\vec s \] where 's' is some region of surface like a plane or cylinder and F is a vector-valued function. http://tutorial.math.lamar.edu/Classes/CalcIII/SurfIntVectorField.aspx
Got it. In this case the delta function must be y and therefore the integral is 0^2+1^2+2^2=5.
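That sifting-property answer can be sanity-checked numerically by standing in a narrow Gaussian for the delta; a rough sketch (the widths and step counts are arbitrary choices):

```python
import math

def nascent_delta(t, eps):
    # narrow normalized Gaussian approximating delta(t) as eps -> 0
    return math.exp(-((t / eps) ** 2) / 2) / (eps * math.sqrt(2 * math.pi))

def sift(f, y, eps=1e-3, half_width=0.05, n=20001):
    # trapezoid-rule integral of f(x) * delta_eps(x - y) over a window around x = y
    a = y - half_width
    h = 2 * half_width / (n - 1)
    total = 0.0
    for i in range(n):
        x = a + i * h
        w = h if 0 < i < n - 1 else h / 2  # trapezoid endpoint weights
        total += w * f(x) * nascent_delta(x - y, eps)
    return total

# delta^3(x - y) factorizes into one delta per coordinate, so integrating
# |x|^2 = x1^2 + x2^2 + x3^2 against it gives y1^2 + y2^2 + y3^2.
y = (0.0, 1.0, 2.0)
approx = sum(sift(lambda t: t * t, yi) for yi in y)
print(round(approx, 3))  # approximately 0 + 1 + 4 = 5
```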
i see you have a point
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50d18ff4e4b097f9a772a804","timestamp":"2014-04-16T08:01:21Z","content_type":null,"content_length":"60650","record_id":"<urn:uuid:8ae23fcf-d150-4c59-927a-f5d2eed2a425>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00513-ip-10-147-4-33.ec2.internal.warc.gz"} |
SLINK: an optimally efficient algorithm for the single-link cluster method
Results 1 - 10 of 77
, 2002
"... Accrue Software, Inc. Clustering is a division of data into groups of similar objects. Representing the data by fewer clusters necessarily loses certain fine details, but achieves
simplification. It models data by its clusters. Data modeling puts clustering in a historical perspective rooted in math ..."
Cited by 247 (0 self)
Accrue Software, Inc. Clustering is a division of data into groups of similar objects. Representing the data by fewer clusters necessarily loses certain fine details, but achieves simplification. It
models data by its clusters. Data modeling puts clustering in a historical perspective rooted in mathematics, statistics, and numerical analysis. From a machine learning perspective clusters
correspond to hidden patterns, the search for clusters is unsupervised learning, and the resulting system represents a data concept. From a practical perspective clustering plays an outstanding role
in data mining applications such as scientific data exploration, information retrieval and text mining, spatial database applications, Web analysis, CRM, marketing, medical diagnostics, computational
biology, and many others. Clustering is the subject of active research in several fields such as statistics, pattern recognition, and machine learning. This survey focuses on clustering in data
mining. Data mining adds to clustering the complications of very large datasets with very many attributes of different types. This imposes unique
, 1993
"... The Scatter/Gather document browsing method uses fast document clustering to produce table-of-contentslike outlines of large document collections. Previous work [1] developed linear-time
document clustering algorithms to establish the feasibility of this method over moderately large collections. How ..."
Cited by 124 (5 self)
The Scatter/Gather document browsing method uses fast document clustering to produce table-of-contentslike outlines of large document collections. Previous work [1] developed linear-time document
clustering algorithms to establish the feasibility of this method over moderately large collections. However, even linear-time algorithms are too slow to support interactive browsing of very large
collections such as Tipster, the DARPA standard text retrieval evaluation collection. We present a scheme that supports constant interaction-time Scatter /Gather of arbitrarily large collections
after nearlinear time preprocessing. This involves the construction of a cluster hierarchy. A modification of Scatter /Gather employing this scheme, and an example of its use over the Tipster
collection are presented. 1 Background Our previous work on Scatter/Gather [1] has shown that document clustering can be used as a first-class tool for browsing large text collections. Browsing is
distinguished from sea...
- In: Proc. of 1st International Workshop on Ontology Learning (OL 2000). Held in Conjunction with the 14th European Conference on Artificial Intelligence (ECAI , 2000
"... Abstract. This paper explores the possibility to exploit text on the world wide web in order to enrich the concepts in existing ontologies. First, a method to retrieve documents from the WWW
related to a concept is described. These document collections are used 1) to construct topic signatures (list ..."
Cited by 104 (5 self)
Abstract. This paper explores the possibility to exploit text on the world wide web in order to enrich the concepts in existing ontologies. First, a method to retrieve documents from the WWW related
to a concept is described. These document collections are used 1) to construct topic signatures (lists of topically related words) for each concept in WordNet, and 2) to build hierarchical clusters
of the concepts (the word senses) that lexicalize a given word. The overall goal is to overcome two shortcomings of WordNet: the lack of topical links among concepts, and the proliferation of senses.
Topic signatures are validated on a word sense disambiguation task with good results, which are improved when the hierarchical clusters are used. 1
- In Proceedings of the 3rd International Conference on Knowledge Discovery and Data Mining , 1997
"... Conventional document retrieval systems (e.g., Alta Vista) return long lists of ranked documents in response to user queries. Recently, document clustering has been put forth as an alternative
method of organizing retrieval results (Cutting et al. 1992). A person browsing the clusters can discover ..."
Cited by 101 (2 self)
Conventional document retrieval systems (e.g., Alta Vista) return long lists of ranked documents in response to user queries. Recently, document clustering has been put forth as an alternative method
of organizing retrieval results (Cutting et al. 1992). A person browsing the clusters can discover patterns that could be overlooked in the traditional presentation. This paper describes two novel
clustering methods that intersect the documents in a cluster to determine the set of words (or phrases) shared by all the documents in the cluster. We report on experiments that evaluate these
intersectionbased clustering methods on collections of snippets returned from Web search engines. First, we show that word-intersection clustering produces superior clusters and does so faster than
standard techniques. Second, we show that our O(n log n) time phrase-intersection clustering method produces comparable clusters and does so more than two orders of magnitude faster than all methods
tested. I...
- In TCC , 2005
"... Abstract. We initiate a theoretical study of the census problem. Informally, in a census individual respondents give private information to a trusted party (the census bureau), who publishes a
sanitized version of the data. There are two fundamentally conflicting requirements: privacy for the respon ..."
Cited by 89 (12 self)
Abstract. We initiate a theoretical study of the census problem. Informally, in a census individual respondents give private information to a trusted party (the census bureau), who publishes a
sanitized version of the data. There are two fundamentally conflicting requirements: privacy for the respondents and utility of the sanitized data. Unlike in the study of secure function evaluation,
in which privacy is preserved to the extent possible given a specific functionality goal, in the census problem privacy is paramount; intuitively, things that cannot be learned “safely ” should not
be learned at all. An important contribution of this work is a definition of privacy (and privacy compromise) for statistical databases, together with a method for describing and comparing the
privacy offered by specific sanitization techniques. We obtain several privacy results using two different sanitization techniques, and then show how to combine them via cross training. We also
obtain two utility results involving clustering. 1
- Parallel Computing , 1995
"... Hierarchical clustering is a common method used to determine clusters of similar data points in multidimensional spaces. O(n 2 ) algorithms are known for this problem [3, 4, 10, 18]. This paper
reviews important results for sequential algorithms and describes previous work on parallel algorithms f ..."
Cited by 80 (1 self)
Hierarchical clustering is a common method used to determine clusters of similar data points in multidimensional spaces. O(n 2 ) algorithms are known for this problem [3, 4, 10, 18]. This paper
reviews important results for sequential algorithms and describes previous work on parallel algorithms for hierarchical clustering. Parallel algorithms to perform hierarchical clustering using
several distance metrics are then described. Optimal PRAM algorithms using n log n processors are given for the average link, complete link, centroid, median, and minimum variance metrics. Optimal
butterfly and tree algorithms using n log n processors are given for the centroid, median, and minimum variance metrics. Optimal asymptotic speedups are achieved for the best practical algorithm to
perform clustering using the single link metric on a n log n processor PRAM, butterfly, or tree. Keywords. Hierarchical clustering, pattern analysis, parallel algorithm, butterfly network, PRAM
algorithm. 1 In...
, 1999
"... Clustering techniques are used in database mining for finding interesting patterns in high dimensional data. These are useful in various applications of knowledge discovery in databases. Some
challenges in clustering for large data sets in terms of scalability, data distribution, understanding en ..."
Cited by 64 (0 self)
Clustering techniques are used in database mining for finding interesting patterns in high dimensional data. These are useful in various applications of knowledge discovery in databases. Some
challenges in clustering for large data sets in terms of scalability, data distribution, understanding end-results, and sensitivity to input order, have received attention in the recent past. Recent
approaches attempt to find clusters embedded in subspaces of high dimensional data. In this paper we propose the use of adaptive grids for efficient and scalable computation of clusters in subspaces
for large data sets and large number of dimensions. The bottom-up algorithm for subspace clustering computes the dense units in all dimensions and combines these to generate the dense units in higher
dimensions. Computation is heavily dependent on the choice of the partitioning parameter chosen to partition each dimension into intervals (bins) to be tested for density. The number of bins
- Proceedings of KDD-99, 5th ACM International Conference on Knowledge Discovery and Data Mining , 1999
"... This paper investigates the use of supervised clustering in order to create sets of categories for classification of documents. We use information from a pre-existing taxonomy in order to
supervise the creation of a set of related clusters, though with some freedom in defining and creating the class ..."
Cited by 46 (1 self)
This paper investigates the use of supervised clustering in order to create sets of categories for classification of documents. We use information from a pre-existing taxonomy in order to supervise
the creation of a set of related clusters, though with some freedom in defining and creating the classes. We show that the advantage of using supervised clustering is that it is possible to have some
control over the range of subjects that one would like the categorization system to address, but with a precise mathematical definition of each category. We then categorize documents using this a
priori knowledge of the definition of each category. We also discuss a new technique to help the classifier distinguish better among closely related clusters. Finally, we show empirically that this
categorization system utilizing a machine-derived taxonomy performs as well as a manual categorization process, but at a far lower cost. 1
, 1999
"... . This paper presents the Collective Hierarchical Clustering (CHC) algorithm for analyzing distributed, heterogeneous data. This algorithm first generates local cluster models and then combines
them to generate the global cluster model of the data. The proposed algorithm runs in O(jSjn 2 ) tim ..."
Cited by 44 (8 self)
. This paper presents the Collective Hierarchical Clustering (CHC) algorithm for analyzing distributed, heterogeneous data. This algorithm first generates local cluster models and then combines them
to generate the global cluster model of the data. The proposed algorithm runs in O(jSjn 2 ) time, with a O(jSjn) space requirement and O(n) communication requirement, where n is the number of
elements in the data set and jSj is the number of data sites. This approach shows significant improvement over naive methods with O(n 2 ) communication costs in the case that the entire distance
matrix is transmitted and O(nm) communication costs to centralize the data, where m is the total number of features. A specific implementation based on the single link clustering and results
comparing its performance with that of a centralized clustering algorithm are presented. An analysis of the algorithm complexity, in terms of overall computation time and communication requirements,
is pres...
- IEEE Transactions on Circuits and Systems for Video Technology , 2003
"... The proliferation of video content on the web makes similarity detection an indispensable tool in web data management, searching, and navigation. In this paper, we propose a number of algorithms
to efficiently measure video similarity. We define video as a set of frames, which are represented as hig ..."
Cited by 40 (5 self)
The proliferation of video content on the web makes similarity detection an indispensable tool in web data management, searching, and navigation. In this paper, we propose a number of algorithms to
efficiently measure video similarity. We define video as a set of frames, which are represented as high dimensional vectors in a feature space. Our goal is to measure Ideal Video Similarity (IVS),
defined as the percentage of clusters of similar frames shared between two video sequences. Since IVS is too complex to be deployed in large database applications, we approximate it with Voronoi
Video Similarity (VVS), defined as the volume of the intersection between Voronoi Cells of similar clusters. We propose a class of randomized algorithms to estimate VVS by first summarizing each
video with a small set of its sampled frames, called the Video Signature (ViSig), and then calculating the distances between corresponding frames from the two ViSig's. By generating samples with a
probability distribution that describes the video statistics, and ranking them based upon their likelihood of making an error in the estimation, we show analytically that ViSig can provide an
unbiased estimate of IVS. Experimental results on a large dataset of web video and a set of MPEG-7 test sequences with artificially generated similar versions are provided to demonstrate the
retrieval performance of our proposed techniques. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=275938","timestamp":"2014-04-24T08:39:21Z","content_type":null,"content_length":"41340","record_id":"<urn:uuid:36845981-1354-4513-bb42-586c373a517f>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00237-ip-10-147-4-33.ec2.internal.warc.gz"} |
On the non-extendibility of quantum theory
I will present a recent theorem that asserts that there cannot exist an "extension of quantum theory" that allows us to make more informative predictions about future measurable events (e.g., whether
a horizontally polarized photon passes a polarization filter with a given orientation) than standard quantum theory. The theorem is based on two assumptions about the extended theory: (i) the theory
should be compatible with quantum theory (this means, in practice, that the theory is not falsified by current experimental data); (ii) the theory should not preclude experimenters from freely
choosing the measurement settings. More precisely, the latter assumption corresponds to the requirement that measurement settings can be chosen at random such that they are independent of any other
events, except of course those that lie in the causal future of the choice (i.e., in its future light cone). In addition, I will discuss a corollary of the non-extendibility theorem which leads to
the same conclusions as a recent theorem by Pusey, Barrett, and Rudolph on the "reality of the quantum state", based only on the above assumptions. | {"url":"https://www.perimeterinstitute.ca/videos/non-extendibility-quantum-theory","timestamp":"2014-04-16T04:12:17Z","content_type":null,"content_length":"27899","record_id":"<urn:uuid:66ae1998-fac0-49b1-817f-e72887d1ba0e>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00169-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is the gradient of u(Ox) when O is orthogonal.
Suppose we know $\nabla u(x)$. Let $O$ be an orthogonal matrix. What is $\nabla u(Ox)$? Thank you!
Yea, I don't understand exactly what that means though. Is it [LaTeX ERROR: Convert failed] ?
I don't think there's a dot product there (except implicitly, in the definition of the gradient.) Looking at the equation applied to your case, we'd have $\nabla(u(O\mathbf{x}))=O^{T}(\nabla u)(O\mathbf{x}).$ As I see it, the way to interpret the LHS of this equation is to take the vector $\mathbf{x}$, rotate by left-multiplying by $O$, then you stuff the result of the rotation into the function
$u$, and finally, you take the gradient. I interpret the RHS of this equation as follows: first, you take the gradient of $u$, and then you evaluate that function at the point $Ox$, and finally you
left-multiply by $O^{T}.$ If I were you, I'd work out a simple example in, say, 2 dimensions. See how it works out. | {"url":"http://mathhelpforum.com/advanced-algebra/156748-what-gradient-u-ox-when-o-orthogonal.html","timestamp":"2014-04-17T07:24:24Z","content_type":null,"content_length":"40354","record_id":"<urn:uuid:72005e6d-a162-4bf5-a705-52ecacf33fa5>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00632-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: The Model-Theoretic Ordinal Analysis of
Theories of Predicative Strength
Jeremy Avigad and Richard Sommer
October 14, 1997
We use model-theoretic methods described in [3] to obtain ordinal
analyses of a number of theories of first- and second-order arithmetic,
whose proof-theoretic ordinals are less than or equal to Γ0.
1 Introduction
In [3] we introduced a model-theoretic approach to ordinal analysis
as an interesting alternative to cut elimination. Here we extend these
methods to the analysis of stronger theories of first- and second-order
arithmetic which are nonetheless predicatively justifiable.
When used in this sense, the word "predicative" refers to a foundational
stance under which one is willing to accept the set of natural
numbers as a completed totality, but not the set of all subsets of the
natural numbers. In this spirit, predicative theories bar definitions that
require quantification over the full power set of N, depicting instead
a universe of sets of numbers that is constructed "from the bottom
up." Work of Feferman and Sch¨utte has established that the ordinal | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/444/2552875.html","timestamp":"2014-04-17T18:54:11Z","content_type":null,"content_length":"8349","record_id":"<urn:uuid:363ace2b-6da1-4af9-827d-077faa3f849a>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00445-ip-10-147-4-33.ec2.internal.warc.gz"} |
1986 Explanation-Based Learning: An Alternative View - Gerald Dejong and Raymond Mooney
Inductive Inference Hierarchies: Probabilistic vs Pluralistic - R. P. Daley
Probability and Measure - Patrick Billingsley
Chemical Discovery as Belief Revision - Donald Rose and Pat Langley
Editorial: Human and Machine Learning - Pat Langley
News and Notes first of 86 - Thomas G. Dietterich
Induction of Decision Trees - J. R. Quinlan
Integrating Quantitative and Qualitative Discovery: The ABACUS System - Brian C. Falkenhainer and Ryszard S. Michalski
Systems That Learn - D. Osherson, M. Stob and S. Weinstein
Parallel Distributed Processing Volume I: Foundations - D. E. Rumelhart and J. L. McClelland
Editorial: The Terminology of Machine Learning - Pat Langley
Learning Distributed Representations of Concepts - G. E. Hinton
Understanding the Nature of Learning: Issues and Research Directions - R. M. Michalski
On the Inference of Programs Approximately Computing the Desired Function - C. Smith and M. Velauthapillai
Studies on Inductive Inference from Positive Data - T. Shinohara
Determining Arguments of Invariant Functional Descriptions - Mieczyslaw M. Kokar
Learning at the Knowledge Level - Thomas G. Dietterich
Stochastic Complexity and Modeling - J. Rissanen
Chunking in Soar: The Anatomy of a General Learning Mechanism - John E. Laird, Paul S. Rosenbloom and Allen Newell
An Algebraic Framework for Inductive Program Synthesis - K. P. Jantke
The disjunctive Learning Problem - M. Fulk
Machine Learning of Nearly Minimal Size Grammars - J. Case and H. Chi
On Barzdin’s Conjecture - T. Zeugmann
A General Framework for Parallel Distributed Processing - D. E. Rumelhart, G. E. Hinton and J. L. McClelland
Machine Learning: An Artificial Intelligence Approach 2 - R. S. Michalski and J. G. Carbonell and T. M. Mitchell
On the Complexity of Inductive Inference - R. Daley and C. Smith
Using telltales in developing program test sets - J. Cherniavsky and C. Smith
A General Framework for Induction and a Study of Selective Induction - Larry Rendell
On The Inference of Sequences of Functions - W. Gasarch and C. Smith
Inductive inference by refinement - P. Laird
Distributional Expectations and the Induction of Category Structure - M. J. Flannagan, L. S. Fried and K. J. Holyoak
Inductive Inference of Functions From Noised Observations - J. Grabowski
Learning Concepts by Asking Questions - C. Sammut and R. Banerji
News and Notes - Thomas G. Dietterich, Nicholas S. Flann and David C. Wilkins
Experimental Goal Regression: A Method for Learning Problem-Solving Heuristics - Bruce W. Porter and Dennis F. Kibler
Machine Learning and Discovery - Pat Langley and Ryszard S. Michalski
Incremental Learning from Noisy Data - Jeffrey C. Schlimmer and Jr. Richard H. Granger
Linear Function Neurons: Structure and Training - S. E. Hampson and D. J. Volper
A Theory of Historical Discovery: The Construction of Componential Models - Jan M. Zytkow and Herbert A. Simon
Explanation-Based Generalization: A Unifying View - Tom M. Mitchell, Richard M. Keller and Smadar T. Kedar-Cabelli
Aggregating Inductive Expertise - D. Osherson, M. Stob and S. Weinstein
A Framework for Empirical Discovery - P. Langley and B. Nordhausen
Inductive Inference of approximations - J. Royer
A General Theory of Discrimination Learning - P. Langley
On the Inductive Inference of Programs with Anomalies - M. Velauthapillai
News and Notes - Yves Kodratoff, Gheorghes Tecuci and Thomas G. Dietterich
On Machine Learning - Pat Langley
Learning Representations By Back-Propagating Errors - D. E. Rumelhart, G. E. Hinton and R. J. Williams
Some Problems on Inductive Inference from Positive Data - T. Shinohara
Learning Internal Representations by Error Propagation - D. E. Rumelhart, G. E. Hinton and R. J. Williams
A Theory and Methodology of Inductive Inference - R. S. Michalski
Learning Machines - J. Case
Identification in the Limit of First Order Structures - D. N. Osherson and S. Weinstein
Parallel Distributed Processing - Explorations in the Microstructure of Cognition - J. L. McClelland, D. E. Rumelhart and t. P. R. Group
Stochastic Complexity and Sufficient Statistics - J. Rissanen
How Fast is Program Synthesis from Examples - R. Wiehagen
Some Results in the Theory of Effective Program Synthesis - Learning by Defective Information - G. Schäfer-Richter
An Analysis of a Learning Paradigm - D. Osherson, M. Stob and S. Weinstein
On the Complexity of Effective Program Synthesis - R. Wiehagen
Systems that Learn: An Introduction to Learning Theory for Cognitive and Computer Scientists - D. N. Osherson, M. Stob and S. Weinstein
Stratified Inductive Hypothesis Generation - Z. S. Szabo
Stochastic Relaxation Methods for Image Restoration and Expert Systems - S. Geman
Machine Learning of Inductive Bias - P. E. Utgoff
On the Complexity of Program Synthesis from Examples - R. Wiehagen
Learning from positive-only examples - R. Berwick
Towards the Development of an Analysis of Learning Algorithms - R. Daley
The Effect of Noise on Concept Learning - J. R. Quinlan
January: An Introduction to Hidden Markov Models - L. R. Rabiner and B. H. Juang
February: Genetic AI-Translating Piaget into Lisp - G. L. Drescher
May: On the Logic of Representing Dependencies by Graphs - J. Pearl and A. Paz
CONSENSUS: A Statistical Learning Procedure in a Connectionist Network - G. J. Goetsch
June: Types of queries for concept learning - D. Angluin
A Lemma on the Multiarmed Bandit Problem - J. N. Tsitsiklis
October: Analogical and Inductive Inference, International Workshop AII ’86. Wendisch-Rietz, GDR - K. P. Jantke
Discrete Structures and Their Interactions
The book is a collection of examples, each of which shows either how discrete structures interact with each other, or how discrete structures interact with other parts of mathematics. An example of
the former is a discussion of how hypergraphs connect with graph colorings, designs, Ramsey theory and partially ordered sets. An example of the latter is a series of theorems that relate graphs to
linear algebra, topology, analysis, logic, and probability. A particularly interesting example is a proof of the Cayley-Hamilton theorem stating that every square matrix satisfies its own
characteristic polynomial. The proof is interesting because it uses graphs to verify this algebraic result, and it does not contain any computations in the field over which the matrix is defined.
Hence, the author says, the Cayley-Hamilton theorem holds not only algebraically, but also “formally”.
The book is meant for readers who have taken a course in discrete mathematics and so are familiar with most basic discrete structures, though not necessarily with all of them. Some of the structures
that are less well-known to students but are covered in this book include simplicial complexes and multicomplexes. There are probably a few researchers in discrete mathematics who are not familiar with the latter.
Given the book’s nature, it is not surprising that the text is broken up into short sections and subsections, often not longer than two pages. So the book will not be used for deep topical coverage,
but it will provide a few good examples for many instructors of a discrete mathematics or combinatorics course. There are plenty of exercises, but only very few of them have a short answer included
at the end. At the end of the book, we find six appendices, each about one part of mathematics whose connections with discrete math was covered in the book. There is also a list of eleven “Research
Problems”. Perhaps “Directions of Research” would have been a better name for these because they are not specific questions, but indications of areas where it could be possible to use the methods
discussed in the book to prove some stronger statements.
On the whole, I am certain I will use some examples I found in this book when I teach Combinatorics in the upcoming semester.
Miklós Bóna is Professor of Mathematics at the University of Florida.
Comparison of Cooling Tower Characteristics Using Several Types of Pipe-Bank Arrangements as the Liquid Distributor
The aim of the experiments is to characterize a cooling tower that uses a bank of pipes as the fluid distributor. The cooling tower is constructed from rectangular glass panels, with a 9 cm x 9 cm cross section and a height of 100 cm. The fluid distributors are built from pipes of 5/8 inch nominal diameter. Five types of packing were tested: a fluid distributor without baffles, and fluid distributors with one, two, three and four baffles.

The experiments were carried out by letting hot water flow from the top of the cooling tower through the fluid distributor, while the cooling air is blown upward through the fluid distributor, counter to the direction of the water flow. In this process heat is exchanged between the air and the water on the fluid distributor. The variables varied in the experiments are the inlet water temperature, Twi (45, 50, 55 and 60 °C), the height of the fluid distributor, Z (30.5, 61 and 91.5 mm), the water/air mass flow ratio, mw/ma (3.624, 5.888 and 7.700), and the number of baffles (zero, one, two, three or four).

The experimental results show that at lower water/air mass flow ratios, increasing the number of baffles decreases the tower characteristic, whereas at higher ratios, increasing the number of baffles increases the characteristic. The cooling tower without the fluid distributor has an average characteristic of 13.4% and an average effectiveness of 16.2%. With the fluid distributor, the average characteristic is 31.5% and the average effectiveness is 25.0%.

Keywords: cooling tower, characteristic, fluid distributor.
What units are used after eV, MeV, GeV, etc, are divided by c^2?
> I know that the mass of subatomic particles is usually given in electronvolts
This is sloppy language which is unfortunately commonly used by physicists.
The electron-volt (eV) is a unit of energy, equal to 1.602e-19 joule.
When someone says "the mass of an electron is 511 keV" he really means, "the energy-equivalent of the mass of an electron is 511 keV" or "the rest-energy of an electron is 511 keV" or "the mass of an
electron is 511 keV/c^2." Mathematically,
[tex]m_e c^2 = 511 \rm{ keV}[/tex]
Dividing through by c^2 we get
[tex]m_e = 511 \rm{ keV}/c^2[/tex]
so the eV/c^2 is a unit of mass, equal to (1.602e-19 J)/(2.998e8 m/s)^2 = 1.782e-36 kg.
To check this, 511 keV = 511 x 1000 x 1.782e-36 kg = 9.108e-31 kg which is indeed the mass of an electron in kg.
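To make the arithmetic concrete, the conversion can be scripted in a few lines (a short illustrative sketch, using the exact SI values rather than the rounded ones above):

```python
eV = 1.602176634e-19        # joule per electron-volt (exact SI value)
c = 2.99792458e8            # speed of light in m/s (exact)

kg_per_eV_c2 = eV / c**2    # 1 eV/c^2 expressed in kilograms
m_e = 511e3 * kg_per_eV_c2  # 511 keV/c^2, the electron rest mass

print(f"1 eV/c^2 = {kg_per_eV_c2:.4e} kg")  # ~1.7827e-36 kg
print(f"m_e      = {m_e:.4e} kg")           # ~9.109e-31 kg
```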
Let $(X,*)$ be a connected and locally connected based space and $p\colon\thinspace E\to X$ a covering map. We will denote $p^{{-1}}(*)$, the fiber over the basepoint, by $F$, and the fundamental
group $\pi_{1}(X,*)$ by $\pi$. Given a loop $\gamma\colon\thinspace I\to X$ with $\gamma(0)=\gamma(1)=*$ and a point $e\in F$ there exists a unique $\tilde{\gamma}\colon\thinspace I\to E,$ with $\
tilde{\gamma}(0)=e$ such that $p\circ\tilde{\gamma}=\gamma$, that is, a lifting of $\gamma$ starting at $e$. Clearly, the endpoint $\tilde{\gamma}(1)$ is also a point of the fiber, which we will
denote by $e\cdot\gamma$.
Theorem 1.
With notation as above we have:
1.
If $\gamma_{1}$ and $\gamma_{2}$ are homotopic relative $\partial I$ then
$\forall e\in F\quad e\cdot\gamma_{1}=e\cdot\gamma_{2}.$
2.
The map
$F\times\pi\to F,\quad(e,\gamma)\mapsto e\cdot\gamma$
defines a right action of $\pi$ on $F$.
3.
The stabilizer of a point $e$ is the image of the fundamental group $\pi_{1}(E,e)$ under the map induced by $p$:
$\mathrm{Stab}(e)=p_{*}\left(\pi_{1}(E,e)\right)\,.$
Proof.
1.
Let $e\in F$, $\gamma_{1},\gamma_{2}\colon\thinspace I\to X$ two loops homotopic relative $\partial I$ and $\tilde{\gamma}_{1},\tilde{\gamma}_{2}\colon\thinspace I\to E$ their liftings starting
at $e$. Then there is a homotopy $H\colon\thinspace I\times I\to X$ with the following properties:
□ $H(\bullet,0)=\gamma_{1}$,
□ $H(\bullet,1)=\gamma_{2}$,
□ $H(0,t)=H(1,t)=*,\quad\forall t\in I$.
According to the lifting theorem, $H$ lifts to a homotopy $\tilde{H}\colon\thinspace I\times I\to E$ with $\tilde{H}(0,0)=e$. Notice that $\tilde{H}(\bullet,0)=\tilde{\gamma}_{1}$ (respectively $\tilde{H}(\bullet,1)=\tilde{\gamma}_{2}$) since they both are liftings of $\gamma_{1}$ (respectively $\gamma_{2}$) starting at $e$. Also notice that $\tilde{H}(1,\bullet)$ is a path that lies entirely in the fiber (since it lifts the constant path $*$). Since the fiber is discrete this means that $\tilde{H}(1,\bullet)$ is a constant path. In particular $\tilde{H}(1,0)=\tilde{H}(1,1)$, or equivalently $\tilde{\gamma}_{1}(1)=\tilde{\gamma}_{2}(1)$.
2.
By (1) the map is well defined. To prove that it is an action notice that firstly the constant path $*$ lifts to constant paths and therefore
$\forall e\in F,\quad e\cdot 1=e\,.$
Secondly the concatenation of two paths lifts to the concatenation of their liftings (as is easily verified by projecting). In other words, the lifting of $\gamma_{1}\gamma_{2}$ that starts at $e$ is the concatenation of $\tilde{\gamma}_{1}$, the lifting of $\gamma_{1}$ that starts at $e$, and $\tilde{\gamma}_{2}$, the lifting of $\gamma_{2}$ that starts at $\tilde{\gamma}_{1}(1)$. Therefore
$e\cdot(\gamma_{1}\gamma_{2})=(e\cdot\gamma_{1})\cdot\gamma_{2}\,.$
3.
This is a tautology: $\gamma$ fixes $e$ if and only if its lifting starting at $e$ is a loop.
Definition 2.
The action described in the above theorem is called the monodromy action and the corresponding homomorphism
$\rho\colon\thinspace\pi\to{\rm Sym}(F)$
is called the monodromy of $p$.
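A standard example: take the $n$-fold covering of the circle by itself,

```latex
p\colon S^{1}\to S^{1},\qquad p(z)=z^{n}.
```

Here $F=p^{-1}(*)$ is the set of $n$-th roots of unity and $\pi=\pi_{1}(S^{1},*)\cong\mathbb{Z}$. The loop winding once around the base lifts to an arc joining each root of unity to an adjacent one, so the generator of $\pi$ acts on $F$ by multiplication by $e^{2\pi i/n}$. The monodromy homomorphism $\rho\colon\mathbb{Z}\to{\rm Sym}(F)$ therefore has image a cyclic group of order $n$, and the stabilizer of every point is $n\mathbb{Z}=p_{*}\left(\pi_{1}(S^{1},e)\right)$, matching part 3 of the theorem.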
How to solve for x: (x^a)((1-x)^b)-2c = 0
October 3rd 2013, 02:52 PM #1
Oct 2013
Champaign IL
How to solve for x: (x^a)((1-x)^b)-2c = 0
Hello helpful math whizzes,
I'm working through a game theory problem and am trying to solve this equation for x:
(x^a)((1-x)^b)-2c = 0
I've tried using natural logs to manage the exponents, but I keep running into dead ends. Is it possible to reduce this equation down such that x is all by its lonesome on one side?
Also, this is my first post here, so I apologize if I've placed this in the wrong subforum.
Re: How to solve for x: (x^a)((1-x)^b)-2c = 0
Possibly. Do you have any additional conditions on $x$? For instance, is $x$ a probability? Is $a$ the number of successes and $b$ the number of failures (or replace the words successes and
failures with whatever game terms make sense)? Knowing a little more of the specifics might help.
Anyway, we can start with the Binomial Theorem.
\begin{align*}x^a(1-x)^b & = 2c \\ x^a\sum_{i=0}^b \left( (-1)^i \binom{b}{i} x^i \right) & = 2c\end{align*}
This gives you a polynomial of degree $a+b$. There are many techniques for finding real solutions of polynomials. Knowing if $x$ is a probability would help generate bounds when looking for
solutions. Additionally, knowing more about the problem may help generate another formula that could lead to the same solution (potentially).
Please let me know if I used any notation you don't understand.
Re: How to solve for x: (x^a)((1-x)^b)-2c = 0
Oh, and if you don't mind rational exponents, $x^{\tfrac{a+b}{b}} - x^{\tfrac{a}{b}} + (2c)^{\tfrac{1}{b}} = 0$
Last edited by SlipEternal; October 3rd 2013 at 03:52 PM.
Re: How to solve for x: (x^a)((1-x)^b)-2c = 0
Last bit of advice. If $a+b>4$ then there is no general formula. The best you can do is attempt to estimate the roots of the polynomial. Here is a link that offers a lot of information about
estimating eigenvalues of a matrix: Gershgorin’s Theorem for Estimating Eigenvalues. And this gives an idea of how to set up the matrix to evaluate: Wolfram.com.
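If a numerical value of x is all that is needed, bisection also works directly on the original equation and sidesteps the expansion entirely (useful when a and b are not integers). A minimal sketch, assuming a, b > 0 and searching the rising branch on [0, a/(a+b)], where the left-hand side climbs from 0 to its maximum; the parameter values below are made up for illustration:

```python
def solve_rising_branch(a, b, c, tol=1e-12):
    """Solve x**a * (1-x)**b = 2*c for the root on [0, a/(a+b)],
    where the left-hand side increases from 0 to its maximum."""
    f = lambda x: x**a * (1 - x)**b - 2 * c
    lo, hi = 0.0, a / (a + b)           # f(lo) < 0; we need f(hi) >= 0
    if f(hi) < 0:
        raise ValueError("2c exceeds the maximum of x^a (1-x)^b on [0, 1]")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

x = solve_rising_branch(2, 3, 0.01)     # solves x^2 (1-x)^3 = 0.02
print(x)                                # root is near 0.2 for these values
```

There is a mirror-image root on the falling branch [a/(a+b), 1], found the same way with the inequality reversed.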
By selling 33 meters of cloth, a man gains the cost price of 11 meters. Find the profit %
March 1st 2013, 04:39 AM #1
Sep 2012
This is a common sum with small change. Normally, this type of sum says that the man gains SELLING PRICE of 11 meters. Then the profit % will be 50 %. But what about this one, where he gains the
COST PRICE of 11 meters? I couldn't even understand what it meant! Can someone please explain and answer this question??
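For reference, under the usual reading of this wording — the gain equals the cost price of 11 m, while the cost incurred is the cost price of the 33 m sold — the computation is one line (a sketch, not a reply from the original thread):

```python
cp_per_m = 1.0                 # arbitrary unit cost; it cancels out
cost = 33 * cp_per_m           # cost price of the 33 m sold
gain = 11 * cp_per_m           # "gains the COST PRICE of 11 meters"
profit_pct = 100 * gain / cost
print(f"{profit_pct:.2f}%")    # 33.33%

# Contrast with the SELLING-price version mentioned above:
# 33*(sp - cp) = 11*sp  =>  sp = 1.5*cp  =>  50% profit.
```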
U Combinator
U Combinator is a research group in the School of Computing at the University of Utah. We research and develop advanced languages, compilers and tools to improve the performance, parallelism,
security and correctness of software.
Academic members
• Matt Might (Faculty)
• Christopher Earl (Postdoc)
• William Byrd (Postdoc)
• Doaa Hassan (Postdoc)
• Shuying Liang (Ph.D. student)
• Steven Lyde (Ph.D. student)
• Thomas Gilray (Ph.D. student)
• Petey Aldous (Ph.D. student)
• Kimball Germane (Ph.D. student)
• Michael Ballantyne (Ph.D. student)
• Leif Andersen (B.S.M.S. student)
• J.T. Olds (Ph.D. Student, on leave)
Interact with U Combinator
Point your IRC client at the channel #ucombinator on freenode.
Also, if you're a student, sign up for the Google group to get lab announcements:
Please don't sign up if you're not a student.
Public access
We have a public code repository hosted on Google Code.
Member access
U Combinator members, check out the research repository:
svn checkout https://www.ucombinator.org/svn/ucombinator/
Why the name U Combinator?
In the theory of programming languages, the U combinator, U, is the mathematical function that applies its argument to its argument; that is U(f) = f(f), or equivalently, U = λ f . f(f).
Self-application permits the simulation of recursion in the λ-calculus, which means that the U combinator enables universal computation. (The U combinator is actually more primitive than the more
well-known fixed-point Y combinator.)
The expression U(U), read U of U, is the smallest non-terminating program, and U of U is also the local short-hand for the University of Utah.
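The self-application trick is easy to see in code. A small sketch in Python (illustrative only, not code from the group):

```python
U = lambda f: f(f)   # the U combinator: apply the argument to itself

# Recursion without any self-reference: the function receives itself as
# `self` and re-creates its own recursive call via self(self).
fact = U(lambda self: lambda n: 1 if n == 0 else n * self(self)(n - 1))
print(fact(5))  # 120

# U(U) would apply U to itself forever -- the smallest non-terminating program.
```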
Leray spectral sequence
From Encyclopedia of Mathematics
spectral sequence of a continuous mapping
A spectral sequence connecting the cohomology of a space with values in a sheaf of Abelian groups with the cohomology of its direct images under a continuous mapping: for a continuous mapping $f\colon X\to Y$ and a sheaf of Abelian groups $\mathcal{F}$ on $X$, the second term is $E_{2}^{p,q}=H^{p}(Y,R^{q}f_{*}\mathcal{F})$, and its limit is $H^{p+q}(X,\mathcal{F})$ [1], [2].

The condition of local contractibility can be replaced by other topological conditions.
Using singular cohomology, for any Serre fibration with path-connected fibres one can construct an analogue of the Leray spectral sequence that has all the properties listed above of the Leray
spectral sequence of a locally trivial fibre bundle (the Serre spectral sequence). There is an analogous spectral sequence in singular homology.
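For instance, for a locally trivial fibre bundle $p\colon E\to B$ with fibre $F$ over a simply connected base, the coefficient system is constant and the second term takes the familiar form

```latex
E_{2}^{p,q}\cong H^{p}\bigl(B;\,H^{q}(F)\bigr)\ \Longrightarrow\ H^{p+q}(E).
```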
[1] J. Leray, "L'anneau spectral et l'anneau fibré d'homologie d'un espace localement compact et d'une application continue" J. Math. Pures Appl. , 29 (1950) pp. 1–139
[2] J. Leray, "L'homologie d'un espace fibré dont la fibre est connexe" J. Math. Pures Appl. , 29 (1950) pp. 169–213
[3] R. Godement, "Topologie algébrique et théorie des faisceaux" , Hermann (1958)
[4] S.-T. Hu, "Homotopy theory" , Acad. Press (1959)
[a1] G.W. Whitehead, "Elements of homotopy theory" , Springer (1978) pp. 228
[a2] J.P. Serre, "Homologie singulière des espaces fibrés. Applications" Ann. of Math. (2) , 54 (1951) pp. 425–505
How to Cite This Entry:
Leray spectral sequence. D.A. Ponomarev (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Leray_spectral_sequence&oldid=17934
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
ASA 127th Meeting M.I.T. 1994 June 6-10
2pSA8. Reflection of axisymmetric waves from an arbitrary termination of a fluid-loaded semi-infinite cylindrical shell.
J. Gregory McDaniel
P. W. Smith, Jr.
Kevin D. LePage
Bolt Beranek and Newman, Inc., 70 Fawcett St., Cambridge, MA 02138
This work proposes an approximate theory for the reflection of axisymmetric waves on a semi-infinite fluid-loaded shell from an arbitrary termination. The reflected and incident waves are described
by wave numbers and wave shapes derived from the standard thin-shell equations for an infinite shell with fluid loading. The termination is characterized by an admittance matrix that relates stress
resultants within the shell wall to axial, radial, and rotational velocities of the midsurface. The admittance matrix is determined for arbitrary terminations by finite element experiments on a
fluid-loaded finite shell. An example focusing on variations in the axial admittance shows the effect of added mass at the termination. Results include the reflection and coupling between surface,
longitudinal, and evanescent waves. [Work supported by ONR.]
Total distance traveled by particle
If s(t) = 2t^3 - 21t^2 + 60t is the position function of a particle moving in a straight line, would you be able to find its total distance traveled in, say 3 seconds, by finding s(0), s(1), s(2), s
(3), and calculating the absolute value between each of them and then summing those values, as opposed to differentiating the function first, setting the derivative to 0, and solving for t?
Would you get the same answer?
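A quick numerical check of the idea (a sketch, not a reply from the thread). Sampling at the integers happens to work for this particular s(t) because the only turning point in [0, 3] — where v(t) = s'(t) = 6t^2 - 42t + 60 = 6(t - 2)(t - 5) changes sign — is t = 2, itself an integer; in general one must split the interval at the roots of s'(t), or the checkpoint method undercounts:

```python
def s(t):
    return 2 * t**3 - 21 * t**2 + 60 * t

# Method from the question: sum |s(k+1) - s(k)| over integer checkpoints
checkpoint_sum = sum(abs(s(k + 1) - s(k)) for k in range(3))

# Correct method: split [0, 3] at the critical point t = 2 (root of s')
total_distance = abs(s(2) - s(0)) + abs(s(3) - s(2))

print(checkpoint_sum, total_distance)  # 59 59 -- equal here only by luck
```

On an interval like [1.5, 2.5], by contrast, |s(2.5) - s(1.5)| = 0.5 while the true distance traveled is |s(2) - s(1.5)| + |s(2.5) - s(2)| = 4.5.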
Winter Hill, MA Math Tutor
Find a Winter Hill, MA Math Tutor
I specialize in mechanical engineering subjects such as structural analysis, dynamics, vibrations, fluid mechanics, and the mathematics needed to understand them (linear algebra, calculus,
differential equations). My tutoring style varies depending on what is effective and comfortable for each pers...
8 Subjects: including differential equations, calculus, physics, SAT math
...I took a discrete math class as an undergrad at Umass Boston and got an A. I also periodically helped some of the other people in the class. I found the material very intuitive and still
remember almost all of it.
14 Subjects: including differential equations, algebra 1, algebra 2, calculus
...I will review your proficiency, needs, objectives, study habits, test preparation, homework and test-taking skills, put together a plan and work with you to achieve your specific goals. I have
tutored students at all levels for over a decade from Lexington, Newton, Lincoln, Weston, Belmont, Bedford, Wellesley and many other communities. I tutor both during the summer and the school
34 Subjects: including prealgebra, ACT Math, SAT math, English
...I'm familiar with a wide range of curricula but always take care to review a teacher or professor's specific approach prior to tutorial sessions so as to make the most of our time. If
interested in scheduling sessions in a T-accessible location, please contact me no less than 48 hours in advance...
15 Subjects: including precalculus, nutrition, fitness, algebra 1
...I believe that any person can find the joy in learning, so that school becomes a passion and not just a chore. As an educator, I build excitement for learning by motivating, inspiring, and
sparking curiosity by meeting students at their OWN level, demonstrating respect for students as people wit...
16 Subjects: including SAT math, algebra 1, elementary (k-6th), grammar
A name for a claw-graph with paths attached to it
I wanted to know if the following family of graphs has a name in graph theory: A claw with paths of any length attached to the three free vertices of the claw. More formally, a connected acyclic
graph, with 1 vertex of degree 3 and the rest of degree 2 or less.
They're interesting because they arise in the study of graph minors. (In particular, if a graph of this type is a minor of another graph G, then it is also a subgraph of G.)
graph-theory terminology
3 Answers
I don't see any reason not to call them "subdivisions of claws," since that's exactly what they are; people working in subfactors apparently call them "star-shaped," or I guess in this case "claw-shaped." I don't know of any other name for them, though.

Now that I think about it, aren't these exactly the trees with exactly three leaves? Do trees with a specified number of leaves have a name?
"Subdivisions of claws" probably seems best, since it's accurate and not confusing at all. One could call it a tree with three leaves only if you take a non-leaf as the root. If
you consider one of the leaves as the root, then it has only 2 leaves. Since this is not unique, I don't think trees with a specified number of leaves have a name. – Rune Nov 1 '09
at 15:57
You could consider non-rooted trees, in which case the problem doesn't arise. But they don't seem to have a name in any case. – Harrison Brown Nov 1 '09 at 16:20
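The characterization discussed above — a connected acyclic graph with exactly one vertex of degree 3 and every other vertex of degree at most 2, equivalently an unrooted tree with exactly three leaves — is easy to test by machine. A small illustrative sketch:

```python
def graph(edges):
    """Build an undirected adjacency dict from an edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

def is_claw_subdivision(adj):
    """True iff adj is a tree with exactly one degree-3 vertex and
    every other vertex of degree <= 2 (a subdivision of K_{1,3})."""
    n = len(adj)
    m = sum(len(nbrs) for nbrs in adj.values()) // 2
    seen, stack = set(), [next(iter(adj))]
    while stack:                          # depth-first traversal
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v])
    if len(seen) != n or m != n - 1:      # connected and |E| = |V| - 1  <=>  tree
        return False
    degrees = [len(nbrs) for nbrs in adj.values()]
    return max(degrees) == 3 and degrees.count(3) == 1

spider = graph([(0, 1), (0, 2), (2, 3), (0, 4), (4, 5)])  # legs of length 1, 2, 2
path = graph([(0, 1), (1, 2), (2, 3)])                    # no degree-3 vertex
print(is_claw_subdivision(spider), is_claw_subdivision(path))  # True False
```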
See also the question A name for star-graph with long “laces”.
In this paper such graphs are referred to as "spiders" and "subdivisions of stars":
http://doi.wiley.com/10.1002/jgt.20244
Computing the index of a Lie algebra: what is known beyond the reductive case?
Recall that an index of a Lie algebra $\mathfrak{g}$ is $\mathrm{ind}\ \mathfrak{g} := \min\limits_{\xi \in \mathfrak{g}^*} \dim \mathrm{Ann}_{\xi}$ where $\mathrm{Ann}_{\xi}=\{h\in\mathfrak{g}| \
mathrm{ad}_h^*(\xi)=0\}$ is the annihilator (also known as the stabilizer) of $\xi$ with respect to the co-adjoint representation. The relevant Wikipedia article just says that if $\mathfrak{g}$ is
reductive then $\mathrm{ind}\ \mathfrak{g}=\mathrm{rank}\ \mathfrak{g}$ but I would like to build some intuition for the non-reductive case, and my googling hasn't brought about any relevant
references so far. In particular, I would very much like to know:
1. Can one say anything about the index of a solvable Lie algebra?
2. What about the index of a semidirect sum (rather than the direct sum which occurs in the reductive case) $\mathfrak{g}=\mathfrak{h}\triangleright\mathfrak{a}$, where $\mathfrak{a}$ is abelian and
$\mathfrak{h}$ is arbitrary? If something is known for semisimple $\mathfrak{h}$, that would be of interest too.
Many thanks in advance!
lie-algebras reference-request
Index is a very crude invariant, it is only the first step towards a description of the coadjoint orbits, whose behavior under standard constructions such as semidirect product is more susceptible to analysis. You can find a lot of information about orbits in the solvable case in the papers on the orbit method. – Victor Protsak Aug 30 '10 at 4:37
Thanks, Victor, I'll try to look them up. – mathphysicist Aug 30 '10 at 14:08
3 Answers
There is quite a bit of literature by now, in the classical characteristic 0 setting of finite dimensional Lie algebras. Looking up some of the papers listed below on arXiv (usually under
math.RT) and others they refer to would be a good way to get into the recent work, including some on nilpotent Lie algebras. Beyond this, I can't answer your specific questions in detail.
But as Victor points out, study of the index is only one step. Even in the reductive case, the rank is just one piece of information.
Dmitri I. Panyushev http://front.math.ucdavis.edu/0107.5031
A.N. Panov http://front.math.ucdavis.edu/0801.3025
Jean-Yves Charbonnel and Anne Moreau http://front.math.ucdavis.edu/1005.0831
Celine Righi and Rupert W. T. Yu http://front.math.ucdavis.edu/0908.4201
Thanks a lot for the references! – mathphysicist Aug 30 '10 at 15:21
P.S. As other answers indicate, there is widespread literature with many items available on arXiv. Sometimes this quantity just means that the whole subject has been poorly understood, so
don't give up too easily. – Jim Humphreys Jan 5 '11 at 23:17
The following paper of M. Rais has a formula for index of semi-direct products that you mentioned.
M. Rais, "L'indice des produits semi-directs $g\ltimes E$", C. R. Acad. Sci. Paris Sér. A-B 287 (1978), no. 4, A195–A197.
Many thanks for the reference. – mathphysicist Aug 31 '10 at 11:50
This article by Rais probably requires the help of a library. In any case, the recent literature citing it includes relevant papers by A. Joseph and D.G. Panyushev which might be more
accessible online. (It helps to have MathSciNet access, but the arXiv is also a good resource.) – Jim Humphreys Aug 31 '10 at 15:59
This year of Comptes Rendus seems to be available at gallica.bnf.fr/ark:/12148/cb34484666t/date , but this site is very messy with many gaps, so not sure about this exact article.
Also, Rais has two more accessible and more recent arXiv papers with a suggestive title "Notes sur l'indice des algèbres de Lie": arXiv:math/0605499 and arXiv:math/0605500. – Pasha Zusmanovich Jan 5 '11 at 22:40
Additional references:
V. Dergachev, Some properties of index of Lie algebras, arXiv:math/0001042
O. Yakimova, The index of centralisers of elements in classical Lie algebras, Funct. Anal. Appl. 40 (2006), N1, 42-51, arXiv:math/0407065
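To build intuition for the definition in the question: since $\mathrm{Ann}_{\xi}$ is the kernel of the matrix $B(\xi)_{ij}=\xi([e_i,e_j])$, one has $\mathrm{ind}\ \mathfrak{g}=\dim\mathfrak{g}-\max_{\xi}\operatorname{rank}B(\xi)$, and a randomly chosen $\xi$ attains the maximum with probability 1. A small numerical sketch (illustrative, not from the answers above):

```python
import numpy as np

def lie_index(c, trials=20, seed=0):
    """ind(g) = dim g - max rank of B(xi)[i, j] = sum_k c[i, j, k] * xi[k]
    over random functionals xi; c holds the structure constants c[i][j][k]."""
    c = np.asarray(c, dtype=float)
    n = c.shape[0]
    rng = np.random.default_rng(seed)
    best = 0
    for _ in range(trials):
        xi = rng.standard_normal(n)
        B = c @ xi                        # B[i, j] = sum_k c[i, j, k] * xi[k]
        best = max(best, np.linalg.matrix_rank(B))
    return n - best

# Heisenberg algebra: [x, y] = z  (basis x, y, z)
heis = np.zeros((3, 3, 3))
heis[0, 1, 2], heis[1, 0, 2] = 1, -1
print(lie_index(heis))  # 1: the centre is the generic stabilizer

# sl2: [h, e] = 2e, [h, f] = -2f, [e, f] = h  (basis h, e, f)
sl2 = np.zeros((3, 3, 3))
sl2[0, 1, 1], sl2[1, 0, 1] = 2, -2
sl2[0, 2, 2], sl2[2, 0, 2] = -2, 2
sl2[1, 2, 0], sl2[2, 1, 0] = 1, -1
print(lie_index(sl2))   # 1 = rank of sl2, as expected in the reductive case
```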
Matlab Numerical Integration - Syntax/Style Query
December 4th 2009, 02:51 AM #1
Dec 2009
Matlab Numerical Integration - Syntax/Style Query
Pretext: I have no formal background in matlab or maths in general, so apologies if any of the following doesn't make sense. I am also new to this forum, so apologies if this post is incorrect in
any way. [n.b. I also posted this thread on physicsforums.com]
Context: Ok, so I wanted to find the response value for various parameter values, using the following equation:
[If you can't see the image, it is an integration with infinite bounds over an equation consisting of a Gaussian probability density function multiplied by the the square of a Gaussian cumulative
density function plus one minus the square of another Gaussian cumulative density function]
I wasn't sure whether it was possible to do this using the matlab symbolic toolkit (syms), so I thought I'd take a crack at it using numerical integration, using the quad command. After much
effort and confusion, I ended up with the following code.
mu=sprintf('%f', mu);
sigma=sprintf('%f', sigma);
s=sprintf('%f', s);
alpha=sprintf('%f', alpha);
f=strcat('normpdf(x+',s,',',mu,',sqrt(2)*',sigma,').*(normcdf(x,',mu,',sqrt(2)*',alpha,'*',sigma,').^2 + (1 - normcdf(x,',mu,',sqrt(2)*',alpha,'*',sigma,')).^2)');
This code *DOES* appear to work (i.e. yields the expected answers). But it strikes me as being more than a little crude. My questions therefore are as follows:
1. From a mathematical point of view, was using numerical integration with suitably wide bounds the right way to go about this problem?
2. From a programming point of view, is there a more elegant way to execute the above numerical integration? (i.e. a better approach than wrapping everything up as one big string which is then
passed to the quad function)
Many thanks for your time,
You should define the integrand as a function in a .m file or as an inline function definition and then pass that function to the quadrature routine (check how to handle the extra function
arguments you will have)
The other thing you can do is transform variable of integration so that the range is finite.
so long and thanks for all fish
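The suggested transformation (making the range of integration finite) can be sketched in Python, stdlib only; the Gaussian test integrand and the midpoint rule here are my illustrative choices, not from the thread:

```python
import math

def gauss(x):
    # normalized Gaussian: integrates to 1 over the whole real line
    return math.exp(-x * x) / math.sqrt(math.pi)

def integrate_real_line(f, n=200000):
    # substitute x = tan(u), dx = sec^2(u) du, mapping (-inf, inf) onto (-pi/2, pi/2);
    # the midpoint rule avoids evaluating tan at the endpoints themselves
    a, b = -math.pi / 2, math.pi / 2
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        u = a + (i + 0.5) * h
        total += f(math.tan(u)) / math.cos(u) ** 2
    return total * h

val = integrate_real_line(gauss)  # close to 1
```

For integrands that decay rapidly, the transformed integrand vanishes smoothly at both endpoints, so even this simple rule converges quickly.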
Update for the record:
In terms of Matlab code - Thanks, I have done as suggested (code below). Much less cluttered, thumbs up.
In terms of the mathematics - Shall have to plead ignorance here and say that I'm not sure how to go about transforming the variable of integration. However, the current (numerical) method
appears to be adequate for both my current and foreseeable purposes, so on this occasion I'm inclined to let sleeping dogs lie.
Thanks for taking the time,
%anon function handle method
dpc = @(x, mu, sigma, s, alpha)normpdf(x+s,mu,sqrt(2)*sigma).*(normcdf(x,mu,sqrt(2)*alpha*sigma).^2 + (1 - normcdf(x,mu,sqrt(2)*alpha*sigma)).^2);
[ans,nfe]=quad(@(x)dpc(x,mu, sigma, s, alpha),-1000,1000)
%separate .m file method
[ans,nfe]=quad(@(x)dpc(x,mu, sigma, s, alpha),-1000,1000)
function y=dpc(x, mu, sigma, s, alpha)
y=normpdf(x+s,mu,sqrt(2)*sigma).*(normcdf(x,mu,sqrt(2)*alpha*sigma).^2 + (1 - normcdf(x,mu,sqrt(2)*alpha*sigma)).^2);
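For comparison, here is a stdlib-only Python port of the same integrand; the trapezoid rule over wide finite bounds plays the role of the quad(...,-1000,1000) call, and the parameter values in the check (mu=0, sigma=1, s=0, alpha=1) are illustrative, not from the thread:

```python
import math

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def dpc(x, mu, sigma, s, alpha):
    # same expression as the MATLAB dpc above
    p = norm_pdf(x + s, mu, math.sqrt(2) * sigma)
    c = norm_cdf(x, mu, math.sqrt(2) * alpha * sigma)
    return p * (c ** 2 + (1 - c) ** 2)

def trapz(f, a, b, n=20000):
    # plain trapezoid rule over suitably wide finite bounds
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# with mu=0, s=0, alpha=1 the cdf value is uniform on (0,1), so the exact answer is 2/3
val = trapz(lambda x: dpc(x, 0.0, 1.0, 0.0, 1.0), -20.0, 20.0)
```

Having a closed-form special case like this is a handy sanity check on any numerical-integration setup.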
December 4th 2009, 03:10 AM #2
Grand Panjandrum
Nov 2005
March 9th 2010, 06:35 AM #3
Dec 2009 | {"url":"http://mathhelpforum.com/math-software/118446-matlab-numerical-integration-syntax-style-query.html","timestamp":"2014-04-16T05:47:14Z","content_type":null,"content_length":"40928","record_id":"<urn:uuid:b9043b3e-e40e-4784-8505-6c64a3d82052>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00328-ip-10-147-4-33.ec2.internal.warc.gz"} |
9 dimensional grid
"Matt J" wrote in message <j26mj7$7nm$1@newscl01ah.mathworks.com>...
> "Saad " <saad.badaoui07@imperial.ac.uk> wrote in message <j26c1m$8dn$1@newscl01ah.mathworks.com>...
> >
> > Thank you for your reply. I would like to minimize a function say "H". In each grid point, I would like to initialize 9 parameters, these parameters are inside the function. each grid point will
give me one value.
> > First I would like to evaluate the function at one grid point then move to the next grid point, change the parameters and evaluate the function again, then keep doing that across "grid 1" and then
at the end I will pick the function with the minimum value at grid 1. The parameters that give me the minimum value of function will be the chosen ones.
> ==================
> Saad - this was a little hard to understand, but here's what I was able to get from it. You have a function of 9 parameters
> H( x(1) ,...., x(9) )
> You're trying to minimize H by testing every combination of values for the x(i) on some discretized grid.
> If that's what you're trying to do, it's a hopeless approach. The number of values to be explored is just way too big. As dbp was saying, you should be using a minimization algorithm, e.g. from the
Optimization Toolbox. Avoiding exhaustive searches of the parameter space is what optimization algorithms are meant for.
> > Then when I finish with grid 1 I would like to move to grid 2 and do the same analysis. if the minimum value of grid 1 is not much different from the minimum value of grid 2 then I will stop the
iteration. So I don't necessarily need to create a 9D grid from the beginning because I might stop half way.
> =========================
> You're basically describing the coordinate descent algorithm, where you minimize with respect to 1 parameter at a time while leaving the others fixed. Normally, though, you'd have to cycle through
all 9 parameters many times before the value of the function stops changing. You'd be very lucky if this weren't the case.
> If you want to use this approach you don't need NDGRID. Just compute
> grid1D= -10:.1:10;
> [Hval,idx] = min( H(grid1D, x(2),x(3),...,x(9)))
> x(1)=grid1D(idx);
> then
> [Hval,idx] = min( H(x(1), grid1D, x(3),...,x(9)))
> x(2)=grid1D(idx);
> then
> [Hval,idx] = min( H(x(1),x(2), grid1D, x(4),...,x(9)))
> x(3)=grid1D(idx);
> and so on in a loop until Hval stops changing. It's hard to know how fast it will converge, however. That will depend very much on the form of H.
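The quoted scheme can be sketched end-to-end in Python as a stand-in for the MATLAB pseudocode; H, the grids, and the starting point are illustrative choices, and restricting one coordinate to a nonnegative grid shows one way to handle the sign constraint raised later in the thread:

```python
# Minimal coordinate descent over per-coordinate grids (illustrative sketch).
def coordinate_descent(H, x, grids, max_sweeps=100, tol=1e-12):
    best = H(x)
    for _ in range(max_sweeps):
        prev = best
        for i, grid in enumerate(grids):
            # minimize over coordinate i with the other coordinates held fixed
            vals = [H(x[:i] + [g] + x[i + 1:]) for g in grid]
            k = min(range(len(vals)), key=vals.__getitem__)
            x[i], best = grid[k], vals[k]
        if abs(prev - best) < tol:  # stop when a full sweep no longer helps
            break
    return x, best

# toy quadratic in 3 variables; constrain x[0] >= 0 simply by using a nonnegative grid
H = lambda x: (x[0] + 1) ** 2 + (x[1] - 2) ** 2 + (x[2] + 3) ** 2
grids = [[v / 10 for v in range(0, 101)],       # x[0]: 0..10 (nonnegative only)
         [v / 10 for v in range(-100, 101)],    # x[1]: -10..10
         [v / 10 for v in range(-100, 101)]]    # x[2]: -10..10
x, best = coordinate_descent(H, [0.0, 0.0, 0.0], grids)
# x -> [0.0, 2.0, -3.0]; the unconstrained minimizer x[0] = -1 is clipped to 0
```

Restricting the search grid is the simplest way to impose a box constraint here; a bound-constrained solver (e.g. fmincon in MATLAB's Optimization Toolbox) is the more robust route.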
Hi Matt
Thanks a lot for your reply. What you wrote "You're basically describing the coordinate descent algorithm, where you minimize with respect to 1 parameter at a time while leaving the others fixed" is
exactly what I would like to do. The only issue is that one of the parameters can not be negative but the others could be negative. Is it still possible to run the code above with this constraint?
How do you think I can adapt the code? Thanks a lot for your advice; your help is much appreciated.
Best Regards | {"url":"http://www.mathworks.com/matlabcentral/newsreader/view_thread/311558","timestamp":"2014-04-18T03:26:47Z","content_type":null,"content_length":"74783","record_id":"<urn:uuid:4a3d2b0a-cafb-4799-bdc4-25f926cad09f>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00120-ip-10-147-4-33.ec2.internal.warc.gz"} |
element of order n such that $\pi(n)=\pi(G)$, where $\pi(n)$ denote the prime divisors of $n$
Hello. Thank you in advance for your answer.
Let $G$ be a finite group and $G$ has an element of order $n$ such that $\pi(n)=\pi(G)$ where $\pi(n)$ denote the set of prime divisors of $n$ and $\pi(G)$ denote the set of prime divisor of $|G|$.
What can be said about the structure of the group? I know that in a nilpotent group such an element exists.
I generalize my question: what is the relation between $|\pi(G)|$ and the maximum of $|\pi(n)|$, where $n$ ranges over the orders of all elements of the finite group $G$?
gr.group-theory finite-groups nilpotent-groups
I'd like to suggest a rewording to state and focus the question a little better: For a finite group $G$, let $P(G)$ be the product of the prime divisors of $|G|$. It is known [or "I have proved"]
that if $G$ is a nilpotent group, then there is an element $g$ for which $P(G)$ divides the order of $g$. Are there any non-nilpotent groups with this property? If so, what other properties do
they share with the nilpotent groups (but not all groups)? (This suggestion, of course, is only appropriate if nilpotent groups are the only examples you know of.) – Barry Cipra Apr 28 '13
You can easily construct non-nilpotent examples by taking a direct product of an arbitrary group with a cyclic group of the required order. But I would be surprised if there were any nonabelian
simple groups with this property. – Derek Holt Apr 28 '13 at 16:46
2 Answers
There is a pathological example that pretty much demonstrates that the existence of such an element gives no significant information about the group:
Example 1. Let $H$ be any finite group, and let $\pi(H)=\{p_1,\dots, p_k\}$. Now let $C$ be a cyclic group of order $p_1\cdot p_2\cdots p_k$ with generator $c$. Then $\pi(H\
times C)=\pi(H)$ and $H\times C$ has an element $g=(1,c)$ of order $n=p_1\cdot p_2 \cdots p_k$, i.e $\pi(n)=\pi(H)$.
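Example 1 can be sanity-checked numerically. A sketch with $H=S_3$ and $C=\mathbb{Z}_6$ (my choice of the smallest non-nilpotent $H$; not part of the answer), working only with element orders:

```python
from itertools import product
from math import gcd

def primes(n):
    # set of prime divisors of n, by trial division
    out, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            out.add(d)
            n //= d
        d += 1
    if n > 1:
        out.add(n)
    return out

def lcm(a, b):
    return a * b // gcd(a, b)

# In a direct product, the order of (a, b) is lcm(|a|, |b|).
orders_H = [1, 2, 2, 2, 3, 3]   # element orders of S_3
orders_C = [1, 6, 3, 2, 3, 6]   # element orders of 0..5 in Z_6
element_orders = {lcm(a, b) for a, b in product(orders_H, orders_C)}
group_order = 36                # |S_3 x Z_6|
hit = any(primes(n) == primes(group_order) for n in element_orders)
# hit is True: the element order 6 already covers every prime of |G| = 36
```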
@Someone has suggested a second example which allows one to construct perfect groups with the required property (and thereby deals with my earlier remark "My hunch is that if you prescribe that $G$ is perfect, i.e. $G=G'$, then there will never exist an element of the kind you seek... But even then I'm not sure"):
Example 2. Let $S$ be any simple group and let $\pi(S)=\{p_1,\dots, p_k\}$. Now let $G=\underbrace{S\times \cdots \times S}_k$ and observe that $\pi(G)=\pi(S)$. Now let $s_i$ be
an element of order $p_i$ in $S$ and observe that $g=(s_1,\dots, s_k)\in G$ has order $p_1\cdot p_2\cdots p_k$, so $\pi(|g|)=\pi(S)$.
Both examples are also relevant to your generalized question.
Perfectness doesn't help: Just take the direct product of $k$ copies of $G$ where $k$ is the number of prime divisors of $G$. – Someone Apr 30 '13 at 9:44
Good point, shall edit. – Nick Gill Apr 30 '13 at 10:35
Thanks for your answers. – mousavi Jun 29 '13 at 7:23
In the case of solvable groups, this may not say much about the structure of the group. For example, if $G$ is a finite $\{p,q\}$-group, where $p,q$ are distinct primes (hence $G$ is solvable by Burnside's $p^{a}q^{b}$-theorem), then it is quite unusual (though certainly not impossible) for $G$ not to have an element of order $pq.$ If $G$ contains an elementary Abelian $p$-group of order $p^{2}$ and an elementary Abelian $q$-group of order $q^{2}$, then $G$ will contain an element of order $pq.$ If, for example, $G$ contains no elementary Abelian $q$-subgroup of order $q^{2},$ then the Sylow $q$-subgroups of $G$ are cyclic or generalized quaternion.
Also, there is a result of Lucido that if $p$, $q$ and $r$ are distinct prime divisors of $|G|$ (with $G$ solvable), then $G$ contains an element of at least one of the orders $pq$, $pr$,
$qr$. – Tobias Kildetoft Jul 25 '13 at 6:44
{"url":"http://mathoverflow.net/questions/129007/element-of-order-n-such-that-pin-pig-where-pin-denote-the-prime-di","timestamp":"2014-04-20T13:24:10Z","content_type":null,"content_length":"64223","record_id":"<urn:uuid:99868d72-fd42-4fad-9113-a8187e5d5e81>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
Math - Possible or Potential rational zeros
Number of results: 229,544
Math - Possible or Potential rational zeros
List all possible (or potential) rational zeros for the polynomial below. Find all real zeros of the polynomial below and factor completely over the real numbers. Please show all of your work. f(x) =
x^4 - 7x^3 - 3x^2 + 19x + 14
Wednesday, June 1, 2011 at 9:06pm by CheezyReezy
Perhaps you don't understand the meaning of potential. It means that it is possible. It's possible for every person to be great. If it still isn't clear, please ask your teacher to explain this
Wednesday, January 12, 2011 at 8:00pm by Ms. Sue
Explain with reference to the Nernst equation, how is it possible for a cell to have a positive cell potential under standard conditions but a negative cell potential under other conditions.
Thursday, December 2, 2010 at 10:41pm by Liliy
Math - potential zero
List all possible (or potential) rational zeros for the polynomial below. Find all real zeros of the polynomial below and factor completely over the real numbers. Please show all of your work. f(x) =
x^4 - 7x^3 -3x^2 + 19x + 14 Help
Wednesday, June 1, 2011 at 10:01pm by CheezyReezy
Math - Possible or Potential rational zeros
Potential rational zeros: ±1, ±2, ±7, ±14.
f(-1) = 1 + 7 - 3 - 19 + 14 = 0, so f(x) = (x+1)g(x) with g(x) = x^3 - 8x^2 + 5x + 14.
g(-1) = -1 - 8 - 5 + 14 = 0, so g(x) = (x+1)h(x), i.e. f(x) = (x+1)^2*h(x), with h(x) = x^2 - 9x + 14.
h(2) = 4 - 18 + 14 = 0, so h(x) = (x-2)(x-7).
f(x) = (x-2)(x-7)(x+1)^2
Wednesday, June 1, 2011 at 9:06pm by Mgraph
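This factorization can be checked mechanically; a small stdlib-Python sketch, with the candidate list coming from the rational root theorem (divisors of the constant term 14 over divisors of the leading coefficient 1):

```python
def f(x):
    return x**4 - 7 * x**3 - 3 * x**2 + 19 * x + 14

# rational root theorem candidates: +-1, +-2, +-7, +-14
candidates = [s * d for d in (1, 2, 7, 14) for s in (1, -1)]
roots = sorted(r for r in candidates if f(r) == 0)
# roots -> [-1, 2, 7]; -1 is a double root, matching f(x) = (x+1)^2 (x-2)(x-7)
```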
ap bio
15. The threshold potential is of great significance in the physiology of neurons because if the threshold potential is not reached, __________. A.the neuron cannot regain its resting potential B.the
action potential will be "inversed," with a flux of sodium out of the cell ...
Thursday, November 6, 2008 at 4:41pm by carol
How can I solve the following ecercise: Use perturbation theory to calculate the energy levels of a particle in a box with a cosine function botto. Let the box extend from x=a to x=a and let the
perturbing potential be H' = V0[1+cos(2*Pi*m*x/a)]. This potential oscilates ...
Wednesday, June 15, 2011 at 11:15am by Juan Manuek
Quantum physics
How can I solve the following ecercise: Use perturbation theory to calculate the energy levels of a particle in a box with a cosine function botto. Let the box extend from x=a to x=a and let the
perturbing potential be H' = V0[1+cos(2*Pi*m*x/a)]. This potential oscilates ...
Wednesday, June 15, 2011 at 12:07pm by Juan Manuek
Quantum physics
How can I solve the following ecercise: Use perturbation theory to calculate the energy levels of a particle in a box with a cosine function bottom. Let the box extend from x=a to x=a and let the
perturbing potential be H' = V0[1+cos(2*Pi*m*x/a)]. This potential oscilates ...
Wednesday, June 15, 2011 at 12:07pm by Juan Manuek
The voltage at a receptor site has just changed from -70 millivolts to -75 millivolts as a result of an _______ and will ________. a). inhibitory postsynaptic potential;increase the likelihood of an
action potential. b). excitatory postsynaptic potential;decrease the ...
Thursday, October 3, 2013 at 1:31am by Jesse
A common mistake when writing this sort of paper for the first time is to muddle up hazard and accident. Try to be clear in your writing what is the hazard each time. It is possible to get distracted
by the many possible accidents. Hazard is the potential to cause harm to a ...
Tuesday, July 15, 2008 at 6:36pm by Dr Russ
Why dont you find the potential at x=0? Then add the potential contributions from each of the charge. Then work= potential*e
Saturday, October 2, 2010 at 10:58am by bobpursley
Potenial at each point: kQ/r work= differencein potential* distance y to X. difference in potential= PotentialY-PotentialX The closer is the higher potential
Wednesday, May 16, 2012 at 9:37am by bobpursley
science - physics
A potential charge of magnitude 2.0uC is moved between two points X and Y Point X is at a potential of 6.0V and Point Y is at a potential of 9.0 V Find the gain in potential energy of Point charge
Saturday, May 4, 2013 at 5:38am by BINA
If you call the negative terminal zero potential, the other terminal will be at +12. THus the six volt potential will be half the distance. The potential is a plane, ignoring edges. THe potential
gradient? 12/.06 volts/meter. The equitpotental surface is indeed perpendicular ...
Monday, January 28, 2008 at 10:34pm by bobpursley
quantum chemistry or physics
How can I solve the following exercise: Use perturbation theory to calculate the energy levels of a particle in a box with a cosine function botto. Let the box extend from x=0 to x=a and let the
perturbing potential be H' = V0[1+cos(2*Pi*m*x/a)]. This potential oscilates ...
Thursday, June 16, 2011 at 1:43pm by Juan Manuek
Physics electric potential energy difference
A current loop with radius 20 cm and current 2 A is in a uniform magnetic field of 0.5 T. Considering all possible orientations of the loop relative to the field, what is the largest potential energy
difference (in Joules) you can find between two orientations?
Friday, November 22, 2013 at 10:03am by Carl
What does it mean to say the Coulomb Force is a conservative force? This is what I have compiled, but I still feel like I'm missing something. A conservative force is one in which energy cannot be
created nor destroyed, and with the property that the work done in moving a ...
Monday, July 23, 2012 at 11:56am by Coty
What pitfalls or potential problems do you imagine are possible? How might we work together to avoid them?
Wednesday, September 16, 2009 at 5:27pm by Danny
AP Bio Lab
the water potential of a solution is equal to the osmotic potential plus the pressure potential. Since there is no differential pressure acting on teh solution, the pressure potential is equal to 0
making teh water potential equal to the osmotic potential. If the equilibrium ...
Thursday, October 2, 2008 at 11:28pm by Simone
1. If you used E = k q/R^2, the field strength should be in Newtons per Coulomb or Volts per meter, which are the same thing 2. That is the correct formula 3. It's a different question, and location.
The potential (in volts) is k q1/ R , regardless of the value of q2 4. ...
Friday, April 17, 2009 at 10:52pm by drwls
5)a. Electrons accelerated by a potential difference of 12.23 V pass through a gas of hydrogen atoms at room temperature. Calculate the wavelength of light emitted with the longest possible
wavelength. b. Calculate the wavelength of light emitted with the shortest possible ...
Sunday, April 13, 2008 at 10:58am by Sandhya
A charge of 0.25 C moved from a position where the electric potential is 20 V to a position where the potential is 50 V. What is the change in potential energy?
Monday, April 18, 2011 at 3:42pm by Mikie
current and magnetism
A current loop with radius 20 cm and current 2 A is in a uniform magnetic field of 0.5 T. Considering all possible orientations of the loop relative to the field, what is the largest potential energy
difference (in Joules) you can find between two orientations? I tried the ...
Sunday, November 24, 2013 at 1:27am by James
Find the perimeter of the triangle with vertices at A (0, 0), B (-4, -3) and C (-5, 0). If it is not possible, write “not possible” and explain why it is not possible.
Thursday, June 23, 2011 at 1:07pm by Adrian
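The perimeter in the entry above follows from the distance formula; a quick sketch:

```python
import math

# Distance formula applied to A(0,0), B(-4,-3), C(-5,0)
def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

A, B, C = (0, 0), (-4, -3), (-5, 0)
perimeter = dist(A, B) + dist(B, C) + dist(C, A)
# dist(A,B) = 5 and dist(C,A) = 5 exactly; dist(B,C) = sqrt(10), so perimeter ~ 13.16
```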
incarceration = time spent in jail or prison maximum = the most that can be applied potential penalty = possible punishment
Sunday, September 2, 2012 at 9:25pm by Writeacher
Easy. Potential is a scalar, so add the voltage from each of the other charges. Upper left Potential V=kqLL/sidelength + kQLR/diagonal length Then the electric potentialenergy of the third charge is
V*q Do the same technique for the last charge, add the voltage potential from ...
Tuesday, January 26, 2010 at 10:28pm by bobpursley
To find out as much as you can about the market for your potential business, be sure to talk to A. potential employees. B. the local bookstore owner. C. potential customers. D. anyone who may compete
with you locally. I have no idea what to chose. I know its not C. Is it D or B?
Monday, February 10, 2014 at 5:46pm by Beatrice
A cell has a resting membrane potential of -66 mV and a membrane Cl ion equilibrium potential of -24.0 mV If a neurotransmitter acts on the cell membrane to open channels that are specifically
permeable to Cl, will the membrane potential of the cells change? In which direction...
Monday, December 7, 2009 at 3:36pm by Sean
Multiply the potential difference (in volts) by the electron charge (in Coulombs). The answer will be in Joules. Put a minus sign in front of it since the potential energy of the electron drops when
it moves to higher positive potential.
Tuesday, March 9, 2010 at 10:17pm by drwls
Relative to a situation where the masses are widely separated, the potential energy of the equilateral configuration will be negative. That is because it will require work to separate them. A single
mass m has no potential associated with it. When you add a second mass m at ...
Friday, March 11, 2011 at 7:35pm by drwls
Electric potential exists if work must be done to bring a "test" charge there, no matter what that test charge is. Electric potential (V) depends upon the amount and proximity of other charges.
Electric potential energy is q*V. V (the electric potential) does not depend upon ...
Thursday, July 15, 2010 at 5:30pm by drwls
You'd divide 24 by possible factors, as follows: 24/1 = 24 (so groups of 1 and 24 are possible) 24/2 =12 (so groups of 2 and 12 are possible) 24/3 = 8 (so groups of 3 and 8 are possible) 24/4 = 6 (so
groups of 4 and 6 are possible) 24/5 =??? not an integer, next 24/6 =4, but ...
Thursday, December 15, 2011 at 1:38am by MathMate
Physics *work shown ~ pls help!
You have to change the potential difference to potential energy difference, first. Multiply the potential difference by the charge.
Friday, April 15, 2011 at 11:06am by Austin
Say you have an electron at the origin. The electron has potential energy. The space between the electron and infinity has no potential energy if it has no charge because potential energy is charge
TIMES potential. The empty space does have potential though because there is an...
Thursday, July 15, 2010 at 5:30pm by Damon
If a negative charge is initially at rest in an electric field, will it move towards a region of higher potential or lower potential? What about a positive charge? How does the potential energy of
the charge change in each case?
Sunday, May 27, 2012 at 5:57pm by myra
An electron iwth an initial speed of 500,000 m/s is brought to rest by an electric field. (a) Did the electron move into a region of higher potential or lower potential? (b) What was the potential
difference that stopped the electron?
Thursday, February 25, 2010 at 10:24am by Lucas
college physics,electric potential
electric potential is a scalar, that is, you can just add it. So figure the potential from one corner, multiply it by 4. V=kQ/distance distance=edge*sqrt2
Saturday, October 2, 2010 at 11:32am by bobpursley
The potential energy of a particle on the x-axis is given by U= 8xe ^-x ^2/19 where x is 1 meter and U is in Joules. Can you explain for me how to find the point on the x-axis for which the potential
is a maximum or minimum and is this answer a point of maximum potential ...
Saturday, November 21, 2009 at 12:52pm by jon
Electric charge charge is distributed uniformly along a thin rod of length "a", with total charge Q. Take the potential to be zero at infinity find the potential at P. If z>>a, what would be the
potential at p? From potential find the electric field at point p for z>&...
Wednesday, May 22, 2013 at 12:39am by Anne
the formula for potential energy is p=mgh, where p is potential energy, m is mass, g is gravity, and h is height. what expression can be used to represent g?
Friday, May 30, 2008 at 5:27pm by derek
Thank you. :) Though I was thinking, should it be "full potential"? or just the potential.. I mean, urban life is still moving forward so there's no way of telling that it has reached its full
potential, right? :) Or maybe I'm just overthinking.
Saturday, February 18, 2012 at 1:30pm by annie
kinetic energy is moving energy, or the energy possessed by an object in motion. potential energy is the the energy possessed by an object during rest. Similarity Think of a roller coaster. The chain
pulls the rollercoaster full of people up the hill to the top. On the way up ...
Monday, January 9, 2012 at 9:32pm by Jordan
You don't say whether A or B has the higher potential. You only say that the difference "between" them is 40 V. If a positive charge moves to a higher potential, the potential energy increases. The E
field and force is opposite to the direction of motion in that situation.
Saturday, December 31, 2011 at 9:36pm by drwls
College Physics
A 22.9-kg child is on a swing attached to ropes that are L = 1.33 m long. Take the zero of the gravitational potential energy to be at the position of the child when the ropes are horizontal. Part A)
The child's gravitational potential energy when the child is at the lowest ...
Sunday, February 20, 2011 at 5:21pm by Emma
Preap Physics!!
There are several types of potential energy - spring potential energy, chemical potential energy, etc.
Tuesday, January 11, 2011 at 8:04pm by Marth
The initial potential energy of the watermelon is mass*acceleration*height or mgh for Earth. For practical purposes assume that total energy of the watermelon is constant. So the potential energy +
the kinetic energy is constant. The kinetic energy is given at the point of ...
Monday, January 28, 2008 at 5:53pm by Quidditch
How much work (as measured in joules) is required to push an electron through a region where the potential change is +73 V? Assume the electron moves from a region of low potential to a region of
higher potential. 1 J
Wednesday, September 29, 2010 at 3:15pm by melody
I'd say that while it's strictly not possible, it could make sense if you define 100% as the maximum possible effort under normal circumstances. For example, if you went to a strength test facility,
and found you could not lift more than 300 lbs, it's still quite possible that...
Monday, December 9, 2013 at 2:20pm by Steve
Use the formulas for kinetic energy: (1/2) M V^2, and potential energy: (M g H) They should give you the same answer, except for possible differences in rounding.
Thursday, October 21, 2010 at 9:06am by drwls
There is a beaker with 0.4% sucrose solution and the solute potential is -4. Inside the beaker, there is a dialysis bag with 01% sucrose solution, solute potential -1 and pressure potential 0. When
the beaker is open to the atmosphere, what is the pressure potential of the ...
Sunday, March 8, 2009 at 11:11pm by izzy
Electrical potential is the force experienced by a unit test charge at a particular point. In this case that potential on the test charge is generated by the positive and the negative charges placed
in the two corners of the square. There are two possible configurations for ...
Tuesday, July 8, 2008 at 9:45pm by GK
a) mass*gravity*height= gravitational potential energy. 17*103*9.8= x Joules b) as the object falls the potential energy is converted to kinetic. so just before it hits the ground all of the
potential energy has become kinetic.
Friday, May 11, 2012 at 7:53pm by Jay
An electron enters a region of space where the potential difference between two equipotential surfaces are 12 cm apart. One surface has a potential of 120 volts and the other potential of 75 volts.
What is the magnitude and direction of the electric field in the region?
Tuesday, September 7, 2010 at 4:46pm by Tommy
Which of the followning equations shows the relationship between the potential energy and height? Explain how you know. potential energy=m(height)+b potential energy=a(height)^2 +b(height)+c
potential energy=a/height
Monday, November 2, 2009 at 8:07pm by Willie
what forms of energy are involved in the following situations? (choices: kinetic energy, potential energy - gravitational potential energy, elastic potential energy) a. a bicycle coasting along a
level road b. heating water c. throwing a football d. winding the mainspring of a...
Sunday, January 25, 2009 at 2:32pm by PB and apples
Physics - Electric Potential
A charge of - 0.15 C is moved from a position where the electric potential is 13 V to a position where the electric potential is 60 V. What is the change in potential energy of the charge associated
with this change in position?
Monday, April 1, 2013 at 3:33pm by Danielle
Preap Physics!!
Is gravitational potential energy different from potential energy?? Because I know gravitational potential energy's equation is PE=mass(gravity)(height)
Tuesday, January 11, 2011 at 8:04pm by Jaeka
when something is dropped, its potential energy is turned into kinetic energy. as the object stops [as if it is bouncing back up and starting to slow to fall back down] kinetic energy decreases and
becomes potential energy again. the energy of a system is the total kinetic ...
Sunday, May 2, 2010 at 9:44pm by Kake
Write the oxidation and reduction half-reactions of a corroding galvanized nail exposed to rain. The question says "galvanized" so its zinc. Zn(s) -> Zn2+(aq) + 2e- potential: +0.76V rain is slightly
acidic, so i thought it would be O2(g) + 4H+ + 4e- -> 2H2O potential: +...
Friday, February 15, 2008 at 5:28pm by william
This is quite straight forward if taken stepwise. There are only 5 possible values which are given in the question. Start with D and work through the values. D+D=M If D=1 M can't = 2 (2 is not a
possible value) If D=3 M can't = 6 if D=4 M=8 if D=7 M=4 if D=8 M can't =6 So we ...
Monday, March 10, 2008 at 7:30pm by Dr Russ
the formula for potential energy is p= mgh, where Pis potential energy, m is mass, g is gravity, and h is height. which expression can be used to represent g? 1 p-m-h 2 p-mh 3 p/m0-h 4 p/(mh)
Thursday, May 7, 2009 at 9:23pm by Anonymous
ap physics
When the man stopped at the bottom he had minimal potential energy and no kinetic energy. He had gone down 9.5 meters so the total energy he lost was m g h = 650*9.8*9.5 The net is also not moving
then so all that potential the man had went into potential energy of the net. ...
Sunday, March 25, 2012 at 2:26pm by Damon
Wil kinetic energy increase or decrease based on the amount of potential energy it starts with?? If potential energy is lost, KEnergy increases. Potential Energy + kinetic energy= constant.
Monday, February 5, 2007 at 7:52pm by Linda
Us History
The Lousiana Purchase removed a major point of contention between the United States and a European power but left many others. Which areas of possible conflict remained, and what were the sources of
these potential conflicts?
Wednesday, October 15, 2008 at 8:39pm by Cesar
Us History
The Lousiana Purchase removed a major point of contention between the United States and a European power but left many others. Which areas of possible conflict remained, and what were the sources of
these potential conflicts?
Wednesday, October 15, 2008 at 8:39pm by vania
us history
The Lousiana Purchase removed a major point of contention between the United States and a European power but left many others. Which areas of possible conflict remained, and what were the sources of
these potential conflicts?
Wednesday, October 15, 2008 at 11:57pm by vania
Us History
The Lousiana Purchase removed a major point of contention between the United States and a European power but left many others. Which areas of possible conflict remained, and what were the sources of
these potential conflicts?
Thursday, October 16, 2008 at 8:38pm by Raul
During a particular thunderstorm, the electric potential between a cloud and the ground is Vcloud - Vground = 2.9 x 108 V, with the cloud being at the higher potential. What is the change in an
electron's potential energy when the electron moves from the ground to the cloud?
Wednesday, June 30, 2010 at 8:24am by Debi
For a given mass, the potential energy varies directly with the vertical ht. If the ver. ht is reduced by one-half, the potential energy will be reduced by 1/2. When the bottom of the incline is
reached, the potential energy will be zero. PE = mgh.
Thursday, April 5, 2012 at 12:53am by Henry
Calculate the cell potential for the following reaction as written at 79C given that [Zn^+2] = 0.819 M and [Ni^+2] = 0.014 M. Zn + Ni^+2 = Zn^+2 + Ni I know: E=E -RT/nF ln Q reduction potential for
Ni= -0.26 reduction potential for Zn = -0.76 so E standard potential = 0.5 E= 0...
Sunday, April 29, 2012 at 1:20pm by Mima
(a) Find the potential at a distance of 0.920 cm from a proton. (Note: Assume a reference level of potential V = 0 at r = ∞.) V (b) What is the potential difference between two points that are 0.920
cm and 2.00 cm from a proton? V (c) Repeat parts (a) and (b) for an ...
Sunday, October 13, 2013 at 2:23am by oxygen
An access ramp enters a building 1 meter above ground level and starts 4 meters from the building. How long is the ramp? If it is not possible, write “not possible” and explain why it is not
Thursday, June 23, 2011 at 12:52pm by Jordan
in a exothermic reaction ________ energy is converted to ________ energy. A. potential, kenetic B. kinetic, potential C. thermal, potential D. heat, thermal
Saturday, March 5, 2011 at 2:53pm by emily
Is it possible for two right rectangular prisms to have the same volume yet not have the exact same dimensions? If it is not possible, explain why. If it is possible, give an example that proves the
Thursday, April 10, 2008 at 3:49pm by Tiffany
6th grade math
It is way more. Reasoning is as such: a*b*c a can be any number from 1-9 b can be any number from 2-9 c can be any number from 2-9 (because 1 can only be used once) therefore, you are just
multiplying the total possibilities available. 9 possible in a 8 possible in b 8 ...
Thursday, September 29, 2011 at 1:50am by Whang
Electric potential is a scalar, so the position of the charges on the circle wont matter, just the distance from the center to the charges. Find each potential from each charge, then add them to the
one given. Vgiven=kq/r=k/r (.5) 4.5E4=k/r (.5 so k/r= 9.0E4 Total potential V=...
Friday, January 29, 2010 at 8:45am by bobpursley
hca 230
You probably would want your physician to make you aware of all possible options for treatment/tests with potential positive and negative effects of these options. I hope this helps a little more.
Thanks for asking.
Monday, April 13, 2009 at 6:58pm by PsyDAG
Physics Please help !!
A single conservative force = (6.0x - 12) N where x is in meters, acts on a particle moving along an x axis. The potential energy U associated with this force is assigned a value of 23 J at x = 0.
(a) Write an expression for U as a function of x, with U in joules and x in ...
Tuesday, October 23, 2012 at 12:05am by member of physics
In general, the frequency of APs in an afferent neuron is related to the amplitude of the receptor potential in the following way: a. exponentially. b. there is no general relationship. c.
logarithmically. d. linearly. Is it exponentially? The other options make no sense. ...
Wednesday, January 6, 2010 at 2:11pm by Alex
Is this correct? The Energy Transformation of a Slingshot: elastic Potential-Kinetic-Potential
Thursday, May 20, 2010 at 6:49pm by Chris
What does the density potential (also known as the electrostatic potential) measure? What do the colors on the images indicate?
Monday, April 14, 2008 at 10:40pm by David
Is this correct? The Change of energy of a Slingshot: Elastic Potention-Gravitational potential-potential?
Thursday, May 20, 2010 at 8:26pm by Chris
You simply add potential and kinetic energies since it has not reached its full potential height. E=(1/2)*m*v^2+m*g*h
Saturday, July 9, 2011 at 11:51pm by ZB
I think you're right. Potential, not full potential, is better.
Saturday, February 18, 2012 at 1:30pm by Ms. Sue
Chemistry Check Please!!!
I think Eox = -0.771 and Ered = +1.36 Then 1.36 + (-0.771) = ? Your answer is right; I don't agree with what you called Eox. The anode is oxidized, yes, and that is an oxidation, yes. But the
potential for that electrode is -0.771. The reduction potential is 0.771; the oxidn ...
Monday, March 17, 2014 at 11:39pm by DrBob222
physics - DAMON
It should be done with integrals, that is the basic method. Now there is another method: What is the potential difference between the mid point and the final point? V= kq/r. Find the POtential from
the left side, add it to the right side. q is the same, of course, 35E-6. Then ...
Sunday, February 24, 2008 at 4:59pm by bobpursley
height = x sin theta potential energy = m g h = m g x sin theta assuming x is zero at the bottom and positive up ramp potential energy + kinetic energy = total energy = constant if ignoring friction
potential = m g x sin theta kinetic = (1/2) m v^2 sum is constant so as x ...
Thursday, March 13, 2014 at 6:19pm by Damon
Potential is a scalar, so you can add the potential at the center from each of the corners. Vcenter=Vc1+Vc2+...=kq1/d+kq2/d + ...
Tuesday, March 15, 2011 at 8:41am by bobpursley
please check my answers what are the SI units for momentum? kg*m/s which of the following is not involved in hitting a tennis ball? a. kinetic energy b. chemical potential energy c. gravitational
potential energy d. elastic potential energy i picked B How much power is ...
Monday, February 16, 2009 at 8:35pm by y912f
Potential is a scalar. Figure one potential, multiply it by 4: V1 = kq/x where x is half the diagonal. Now V = 4V1
Thursday, April 8, 2010 at 7:42am by bobpursley
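The recipe in the snippet above (and the superposition answer a few snippets earlier) is easy to verify in code; the charge and square side below are invented purely for illustration:

```python
import math

k = 8.99e9                   # Coulomb constant, N*m^2/C^2
q = 2.0e-6                   # each corner charge in coulombs (illustrative)
side = 0.10                  # square side in meters (illustrative)
x = side * math.sqrt(2) / 2  # half the diagonal: corner-to-center distance

V1 = k * q / x               # potential from one corner charge
V_center = 4 * V1            # potential is a scalar, so contributions just add

# The same answer by summing the four contributions explicitly:
V_sum = sum(k * q / x for _ in range(4))
print(V_center, V_sum)
```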
The resting potential is increased, and the change must be to positive to promote an action potential. With this info, what would you pick?
Thursday, October 3, 2013 at 1:31am by PsyDAG
what does the density potential [electrostatic potential] measure?
Saturday, April 12, 2008 at 7:04pm by peter
During a particular thunderstorm, the electric potential difference between a cloud and the ground is Vcloud-Vground= 1.3x10^8, with the cloud being at the higher potential. What is the change in an
electron's electric potential energy when the electron moves from ground to ...
Wednesday, July 9, 2008 at 2:49pm by H
We don't do homework but we can help you understand. What about the problem do you not understand? Do you know to look up the cell potential for the Mn to Mn^+2 and add it to the cell potential for
the Ce^+4 ==>Ce^+3? That should give you the cell potential for the reaction...
Thursday, October 16, 2008 at 7:53pm by DrBob222
During a particular thunderstorm, the electric potential difference between a cloud and the ground is Vcloud - Vground = 4.3 108 V, with the cloud being at the higher potential. What is the change in
an electron's electric potential energy when the electron moves from the ...
Tuesday, March 9, 2010 at 10:17pm by Erin
During a particular thunderstorm, the electric potential difference between a cloud and the ground is Vcloud - Vground = 1.3 × 108 V, with the cloud being at the higher potential. What is the change
in an electron's electric potential energy when the electron moves from the ...
Friday, August 27, 2010 at 2:49pm by ash
MATHEMATICIAN ranked #1 Occupation in Latest Study
Les Krantz, author of The Jobs Rated Almanac, has recently completed a study for JobsRated.com ranking 200 jobs from best to worst. The criteria used were stress, physical demands, hiring outlook,
compensation, and work environment. The top 20 jobs are as follows:
1. Mathematician
2. Actuary
3. Statistician
4. Biologist
5. Software Engineer
6. Computer Systems Analyst
7. Historian
8. Sociologist
9. Industrial Engineer
10. Accountant
11. Economist
12. Philosopher
13. Physicist
14. Parole Officer
15. Meteorologist
16. Medical Laboratory Technician
17. Paralegal Assistant
18. Computer Programmer
19. Motion Picture Editor
20. Astronomer
Also see the article "Doing the Math to Find Good Jobs" in the Wall Street Journal, January 6, 2009.
Check out the article "Why It Pays to Be a Math Geek" on www.careerbuilder.com. Also read about the attributes math majors have for which they are frequently hired.
Here are various links to nonacademic job opportunities for mathematics majors.
Check out a website maintained by Luchsinger Mathematics listing all sorts of job opportunities for mathematicians all over the world (including the US)!!
Don't forget to check out our own alumni questionnaires!!
You might want to look at the following books published by the Mathematical Association of America:
101 Careers in Mathematics, Andrew Sterrett, Editor
She Does Math! (Real-Life Problems from Women on the Job), Marla Parker, Editor
On a related matter, see Agnes Scott College's wonderful history of women in mathematics.
What are Some Properties of Zero?
I agree with anon and would like to see it presented mathematically. The only difference is: I see 6 x 0 as six of something being multiplied by nothing 6 x 0 = 6 and 0 x 6 as 0 of something being
multiplied by 6. Since there is nothing to begin with then there is not anything to multiply therefore 0 x 6 = 0. Somehow, I think this benefits a capitalist economy.
HLM Example
HLM Example-Simple Example
Example from Woltman, Feldstain, MacKay, & Rocchi (2012). An introduction to hierarchical linear modeling. Tutorials in Quantitative Methods for Psychology, 8(1), 52-69.
30 basketball teams, 10 players per team
Three variables = Life satisfaction (outcome), shots, coach years of experience
Software (free student version):
SPSS Data:
Level 1
Level 2
Data Condition
Unconstrained (null) Model
One-way ANOVA
Confirms that variability in outcome, by level 2 group, is different from 0.0.
ICC = 14.96/(14.96+14.61) = .51 (51% of the variance in outcome is at the group level, which is quite large)
Random Intercepts Model
Note: Group center Shots_on & click on u1
Effect size (r2) = (14.61-4.6)/14.61 = .715 [note that 14.61 came from the null model]
Confirms that variability in outcome and slope, by level 2 group, is different from 0.0.
Means as Outcomes Model
Note: COACH_EX is grand-mean centered
Effect size (r2) = (14.61-1.68)/14.61 = .888 [note that 14.61 came from the null model]
Coaching experience explains 88.8% of the between-team variance in life satisfaction
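The variance-component arithmetic above is easy to script. A sketch using the rounded values quoted on this page (14.96 between-team and 14.61 within-team variance from the null model; residuals of 4.6 and 1.68 from the later models); note that with these rounded inputs the first effect size lands a little below the .715 reported above:

```python
# Variance components read off the null (unconstrained) model
between_null = 14.96   # level-2 (between-team) variance
within_null = 14.61    # level-1 (within-team) variance

# Intraclass correlation: share of outcome variance at the team level
icc = between_null / (between_null + within_null)
print(round(icc, 2))                      # -> 0.51

def r2(null_var, residual_var):
    """Proportional reduction in variance relative to the null model."""
    return (null_var - residual_var) / null_var

print(round(r2(within_null, 4.60), 3))    # random intercepts (page reports .715)
print(round(r2(within_null, 1.68), 3))    # means as outcomes (page reports .888)
```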
Random Intercepts and Slopes Model
The importance of the null hypothesis.
When researchers carry out studies it is the null hypothesis they are testing, not the research hypothesis. The null hypothesis states that there will be no difference between the groups being tested, so the means of the groups will be exactly the same. In comparison, the research hypothesis states that there is a difference between the groups. Since the null hypothesis declares that there is no difference between samples, there are only two possible outcomes: there is or there is not a difference. However, if you were to test the research hypothesis there would be so many possible outcomes that it would be near impossible to test them all.
In order to see if there actually is a difference we need to examine how different the two groups actually are. We do this by looking at the variability within groups and between groups. The more different the individuals are within a group, the more difference there needs to be between the groups in order for there to be a difference overall. To work out the difference between groups you divide the between-group variance by the within-group variance. Ideally you want large between-group variance and small within-group variance, so that scores cluster tightly around each group mean (a small standard deviation). Most commonly researchers find that they have small between-group variance and small within-group variance, meaning a smaller effect size. The less overlap there is between the distributions, the more likely you are to find a significant difference.
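The between-versus-within comparison described above is exactly what a one-way ANOVA F ratio formalises. A small hand-rolled sketch on invented numbers (any statistics package would report the same F):

```python
# Two illustrative samples (made-up scores)
a = [4.0, 5.0, 6.0, 5.0]
b = [8.0, 9.0, 10.0, 9.0]

def mean(xs):
    return sum(xs) / len(xs)

grand = mean(a + b)

# Between-group variance: how far each group mean sits from the grand mean
ss_between = len(a) * (mean(a) - grand) ** 2 + len(b) * (mean(b) - grand) ** 2
ms_between = ss_between / (2 - 1)                  # df = groups - 1

# Within-group variance: spread of individuals around their own group mean
ss_within = (sum((x - mean(a)) ** 2 for x in a)
             + sum((x - mean(b)) ** 2 for x in b))
ms_within = ss_within / (len(a) + len(b) - 2)      # df = N - groups

F = ms_between / ms_within   # a large F favours rejecting the null
print(F)                     # -> 48.0 for these numbers
```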
In conclusion, we need the null hypothesis to determine whether or not there is a difference between the groups being tested. Without it we would be swamped with possibilities, making it almost impossible to test.
5 Responses
I agree that the null hypothesis is needed and important when conducting research. Without using the null hypothesis in research, you could not do the statistical hypothesis test. When you do the statistical hypothesis test you reject or fail to reject the hypothesis (Howitt & Cramer, 2011). Despite there being a lot of evidence for a theory (such as the theory of gravity), there is no guarantee that contradictory evidence will not come along in the future, and findings cannot simply be generalised to the population; therefore we aim to reject the hypothesis rather than prove it. Therefore the null hypothesis is important.
Howitt, D. & Cramer, D. (2011). Introduction to Research Methods in Psychology (3rd ed.). Pearson Education.
There is also one more important property of the null hypothesis, and that is its falsifiability. The example that everyone uses is that you only need to find one black swan to disprove the theory that all swans are white. That is much easier than going around and staring at every single swan. That's the main use of the null as part of the scientific method; without the null it would be virtually impossible to progress in science!
http://en.wikipedia.org/wiki/Null_hypothesis, http://en.wikipedia.org/wiki/Falsifiability
This series covers all areas of research at Perimeter Institute, as well as those outside of PI's scope.
The gravitational observatory LISA will detect radiation from massive black hole sources at cosmological distances, accurately measure their luminosity distance and help identify the electromagnetic
counterparts that such sources may generate. I will describe various astrophysical scenarios for the generation of electromagnetic counterparts and discuss observational strategies aimed at
identifying them. Successful identifications will enable novel studies of black hole astrophysics and cosmological physics.
Much work on quantum gravity has focused on short-distance problems such as non-renormalizability and singularities. However, quantization of gravity raises important long-distance issues, which may
be more important guides to the conceptual advances required. These include the problems of black hole information and gauge invariant observables, and those of inflationary cosmology. An overview of
aspects of these problems, and apparent connections, will be given.
A system of spins with complicated interactions between them can have many possible configurations. Many configurations will be local minima of the energy, and to get from one local minimum to
another requires changing the state of very many spins. A system like this is called a spin glass, and at low temperatures tends to get caught for very long times at a local minimum of energy, rather
than reaching its true ground state.
The AdS/CFT correspondence relates large-N, planar quantum gauge theories to string theory on the Anti-de-Sitter background. I will discuss exact results in field theories with AdS duals, which can
be obtained with the help of diagram resummations, mapping to quantum spin chains and two-dimensional sigma-models.
"Conventional" superconductivity is one of the most dramatic phenomena in condensed matter physics, and yet by the 1970's it was fully understood - a solved problem much like quantum electrodynamics.
The discovery of high temperature superconductivity changed all that and opened the door, not only to higher Tc's, but also to a wealth of even more exotic phenomena, including things like
topologically ordered superconductors with factional vortices and non-Abelian statistics.
A Majorana fermion is a particle that is its own antiparticle. It has been studied in high energy physics for decades, but has not been definitely observed. In condensed matter physics, Majorana
fermions appear as low energy fractionalized quasi-particles with non-Abelian statistics and inherent nonlocality. In this talk I will first discuss recent theoretical proposals of realizing Majorana
fermions in solid-state systems, including topological insulators and nanowires. I will next propose experimental setups to detect the existence of Majorana fermions and their striking properties.
In this talk we will explore a "toy model" of quantum theory that is similar to actual quantum theory, but uses scalars drawn from a finite field. The set of possible states of a system is discrete
and finite. Our theory does not have a quantitative notion of probability, but only makes the "modal" distinction between possible and impossible measurement results. Despite its very simple
structure, our toy model nevertheless includes many of the key phenomena of actual quantum systems: interference, complementarity, entanglement, nonlocality, and the impossibility of cloning.
Black holes play a central role in astrophysics and in physics more generally. Candidate black holes are nearly ubiquitous in nature. They are found in the cores of nearly all galaxies, and appear to
have resided there since the earliest cosmic times. They are also found throughout the galactic disk as companions to massive stars. Though these objects are almost certainly black holes, their
properties are not very well constrained. We know their masses (often with errors that are factors of a few), and we know that they are dense.
I will describe recent work by Cutler&Holz and Hirata, Holz, & Cutler showing that a highly sensitive, deci-Hz gravitational-wave mission like BBO or Decigo could measure cosmological parameters,
such as the Hubble constant H_0 and the dark energy parameters w_0 and w_a, far more accurately than other proposed dark-energy missions.
Haddon Heights Statistics Tutor
Find a Haddon Heights Statistics Tutor
I have been a part time college instructor for over 10 years at a local university. While I have mostly taught all levels of calculus and statistics, I can also teach college algebra and
pre-calculus as well as contemporary math. My background is in engineering and business, so I use an applied math approach to teaching.
13 Subjects: including statistics, calculus, geometry, algebra 1
...I am comfortable and experienced with all levels of students. Because many of my students have achieved dramatic increases in their scores, some people get the impression that I am mainly an
SAT coach. Such is NOT the case.
23 Subjects: including statistics, English, calculus, algebra 1
...Courses were also taken in the fields of probability and statistics. Finally, as part of my research, I was required to learn and utilize a variety of computer programming languages, including
Matlab, C, and C++. I feel comfortable tutoring students in most branches of advanced mathematics and m...
20 Subjects: including statistics, chemistry, physics, calculus
...I am highly committed to students' performances and to improve their comprehension of all areas of mathematics.I have excelled in courses in Ordinary Differential Equations in both
undergraduate and graduate school, as well as partial differential equations at the graduate level. Also, I have tu...
19 Subjects: including statistics, calculus, geometry, algebra 1
I have been working as a statistician at the University of Pennsylvania since 1991, providing assistance to researchers in various areas of health behavior. I am proficient in several statistical
packages, including SPSS, STATA, and SAS. One of my particular strengths is the ability to explain sta...
1 Subject: statistics
Exact real arithmetic
From HaskellWiki
1 Introduction
Exact real arithmetic is an interesting area: it is a deep connection between
• numerical methods
• and the deep theoretical foundations of algorithms (and mathematics).
Its topic, computable real numbers, raises a lot of interesting questions rooted in mathematical analysis and arithmetic, but also in computability theory (see numbers-as-programs approaches).
Computable reals can be achieved by many approaches -- it is not one single theory.
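As a tiny taste of the flavour (a Python sketch of the scaled-integer idea, not one of the methods from the article cited below): a computable real can be represented by a procedure that, for any requested precision, returns a provably close integer approximation. For √2:

```python
from math import isqrt    # exact integer square root (Python 3.8+)

def sqrt2_scaled(n):
    """sqrt(2) scaled by 10**n, computed exactly with integers:
    floor(sqrt(2 * 10**(2n))).  The result is correct to within one
    unit in the last place, at any requested precision n."""
    return isqrt(2 * 10 ** (2 * n))

print(sqrt2_scaled(10))   # -> 14142135623, i.e. 1.4142135623...
```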
2 Theory
Jean Vuillemin's Exact real computer arithmetic with continued fractions is a very good article on the topic itself. It can also serve as a good introductory article, because it presents the connections to both mathematical analysis and computability theory. It discusses several methods, and it describes some of them in more detail.
3 Portal-like homepages
There are functional programming materials too, even with downloadable Haskell source.
Chapter 13 - Inverse Functions
In the second part of this book on Calculus, we shall be devoting our study to another type of function, the exponential function and its close relative the Sine function. Before we immerse ourselves
in this complex and analytical study, we first need to understand something about inverse functions.
The inverse function is by definition a function whose output becomes the input, or the dependent variable becomes an independent variable. For example, given the function

F = ma

which is Newton's second law: the force acting on a body of mass, m, is a function of the acceleration given to it. We are free to input any a and what we get out is the force. The inverse of this force function, according to the definition, will give us the acceleration as a function of force. This is done by simply solving for the independent variable, a:

a = F/m
Now I can let F be anything and then find the acceleration as a function of it.
The inverse of a function, f(x), is commonly written as f^-1(x).
What does this do to the inverse function? It essentially flips the graph of f(x) about the line y = x, such that for every point (x, y) there is a corresponding point (y, x) on the graph of the inverse function. Now both functions can be graphed in the same x-y plane.
Remember that if we just solve for the independent variable, we are not changing the equation but merely re-writing it. For this reason its graph is the same. By flipping the x and y, we get another function of x, whose relation to f(x) is that it has been graphed as though the x-axis were the y-axis and vice versa. It is best we look at the two graphs:
Notice how every point (x, y) has a corresponding point (y,x) on the inverse function. The graph of the inverse function is therefore exactly the same as the original function except that the x and
y-axis have been switched:
Since every point (x, y) has a corresponding point (y, x), any point y from the inverse function, when inputted into the original function, should yield x:

f(f^-1(x)) = x
Remember that a function and its inverse are both functions of x. The way they are related is that the inverse function represents the original function by just having its dependent and independent variables switched around. As you can see from the first graph, when the two functions are graphed together, the inverse function contains all the points (x, y) of the first function, plotted as (y, x), with the exception that y is given as a function of x. For this reason, f^-1(f(x)) = x as well.
What is important to understand about the inverse function is that it is obtained by solving for the independent variable, then replacing it with y, to create a function that is also a function of x
and can be graphed along with the original function.
Now that we know how a function and its inverse function are closely related, it brings us to the question: how are the derivatives related? Logic would tell us that instead of

dy/dx = f '(x)

the derivative of the inverse function might simply be the reciprocal:

dx/dy = 1/f '(x)
Let us examine the graph of f(x) and its inverse function to see what exactly is going on.
Note that at x = 2 the slopes are not reciprocals; they are reciprocals only at corresponding y-values on the inverse function, that is, through the points (x, f(x)) and (f^-1(f(x)), x). For example, for f(x) = x^2 the point (3, 9) has a reciprocal slope at (9, 3): since at this point x and y are reversed, the slope 2x becomes the reciprocal, 1/(2x). By replacing x with y and y with x in this last expression we get 1/(2y).
What we have just done is calculate the derivative of the inverse function only by looking at the original function and its derivative. The reason the derivative was not just the reciprocal of y = 2x was because we forgot to do the following two steps:
1) Replace x with its equivalent expression in terms of y.
2) After taking the reciprocal, replace every y with x and every x with y.
The slope of the graph of f(x) = x^2 is 2x. By replacing x with its equivalent expression in terms of y (x = √y), the slope can be found not with an x-value but instead with a y-value: the slope is 2√y. At y = 4 the slope is 2√4 = 4. Taking the reciprocal gives 1/(2√y). Since the inverse function is graphed in the same xy plane as f(x), we replace y with x to get 1/(2√x). This last expression is the derivative of the function's inverse with respect to the x-axis.
To summarize we can state the following theorem:
To find the derivative of the inverse function,
1) Remember, an inverse function is related to the main function in that if you reflect it over the line y=x, you will land on the main function.
2) First find the derivative of f(x)
3) Replace any x in the derivative with its y-equivalent, so as to be able to find the derivative with any given y-value.
4) Take the reciprocal of the derivative to get dx/dy.
5) Since the inverse function is graphed with respect to x, replace every y with x and x with y to find the derivative of the inverse function.
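The five steps can be checked numerically for f(x) = x^2, the function behind the (3, 9)/(9, 3) example above; carrying them out gives 1/(2√x) for the derivative of the inverse, and a finite-difference estimate agrees:

```python
import math

def f(x):      return x * x          # the original function (x >= 0)
def f_inv(x):  return math.sqrt(x)   # its inverse

def f_inv_prime(x):
    # Result of the five steps: f'(x) = 2x -> 2*sqrt(y) (step 3)
    # -> 1/(2*sqrt(y)) (step 4) -> swap y and x (step 5):
    return 1.0 / (2.0 * math.sqrt(x))

# Reciprocal slopes through the corresponding points (3, 9) and (9, 3):
print(2 * 3.0, f_inv_prime(9.0))     # -> 6.0 and 1/6

# Central-difference check that this really is the slope of the inverse:
x, h = 9.0, 1e-5
numeric = (f_inv(x + h) - f_inv(x - h)) / (2 * h)
print(abs(numeric - f_inv_prime(x)) < 1e-8)   # -> True
```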
To summarize further:

d/dx[ f^-1(x) ] = 1/f '(f^-1(x))
For example given
Start the Cruise Control Case Study with Physics
This case study begins with model construction, starting from basic physics. Special consideration is given to incorporate wind resistance and include as a system disturbance, the onset of a hill.
The laws of motion say that given a vehicle mass m and engine-supplied force cw(t), where c is a proportionality constant and 0 ≤ w(t) ≤ 1 represents the engine throttle, F[tot](t) = m·dv(t)/dt = c·w(t), t ≥ 0.
Credit: Illustration by Mark Wickert, PhD
Air resistance, proportional to the velocity squared times constant ρ, produces drag on the vehicle. Additionally when the vehicle encounters a hill (the system disturbance described in the
introduction) of angle θ, gravity creates a second counter force, mg sin(θ), where g is the gravitational constant of 9.8 m/s^2. A small angle is assumed in this case, so the approximation sin(θ) ≈ θ is valid.
The equation of motion is now

m·dv(t)/dt = c·w(t) - ρ·v^2(t) - m·g·sin(θ)

dv(t)/dt = (c/m)·w(t) - (ρ/m)·v^2(t) - g·sin(θ)

where the last line is divided through by the mass m. This is a nonlinear differential equation in terms of the vehicle velocity v(t) because of the appearance of v^2(t), the air resistance term.
Observe the nonlinear differential equation
Although the nonlinear differential equation is not in a form suitable for design of a cruise control, there are some interesting observations that can be made. This understanding also helps with the
linearizing process.
When the vehicle is on the flat (θ = 0) and at the maximum throttle value of 1.0, the nonlinear differential equation reduces to

dv(t)/dt = (c/m) - (ρ/m)·v^2(t)

The maximum velocity will be reached as t becomes large. At maximum velocity the acceleration must be zero, so

0 = (c/m) - (ρ/m)·v[max]^2

The equation further simplifies to

v[max] = sqrt(c/ρ)
Don’t try this at home with your car!
A second observation is that the vehicle stalls when climbing a hill at full throttle for some critical angle θ[s]. You first need to know that stall means the vehicle velocity is zero and the
acceleration is zero. From the full nonlinear differential equation, at full throttle this means

0 = (c/m) - g·sin(θ[s])
You now solve for θ[s] as sin^–1[c/(mg)]. Note this analysis assumes that the vehicle stays in a fixed gear. In driving your own car you probably downshift to the lowest gear to avoid stalling.
The third observation is the solution to the nonlinear differential equation at maximum throttle and θ = 0. Begin by defining the constant

T = m·v[max]/c = m/sqrt(ρ·c)

the differential equation with this T inserted becomes:

dv(t)/dt = (v[max]/T)·[1 - v^2(t)/v[max]^2]
The exact solution to this simplified form is v(t) = v[max]tanh(t / T). You can verify this is a valid solution by inserting it back into the differential equation and see that the equation is
satisfied. This solution gives you the velocity profile versus time with the accelerator held to the floor.
Note this doesn’t quite apply to a real car because the model does not take into account gear shifting with a transmission. By plotting v(t) for a given v[max] and T you can see approximately how
long it takes to accelerate to a particular cruising velocity, for example, 0 to 60 mph in 10 seconds.
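This acceleration profile is easy to check numerically. The following sketch (not from the article; all parameter values are made up, and T = m·v[max]/c is the time constant defined above) verifies by finite differences that v(t) = v[max]tanh(t/T) satisfies dv/dt = c/m – (ρ/m)v^2:

```python
# Sketch (not from the article): check numerically that
#   v(t) = v_max * tanh(t / T)
# satisfies dv/dt = c/m - (rho/m) v^2.  Parameter values are made up.
import math

m, c, rho = 1000.0, 4000.0, 1.0      # hypothetical mass, throttle gain, drag
v_max = math.sqrt(c / rho)           # top speed on the flat
T = m * v_max / c                    # time constant (equivalently m / sqrt(rho * c))

def v(t):
    return v_max * math.tanh(t / T)

h = 1e-6
for t in (0.5, 5.0, 20.0):
    lhs = (v(t + h) - v(t - h)) / (2 * h)   # centered difference for dv/dt
    rhs = c / m - (rho / m) * v(t) ** 2
    assert abs(lhs - rhs) < 1e-6
```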
Linearize to a nominal cruising velocity
To linearize the differential equation for a nominal cruise velocity of v[0] < v[max] and corresponding throttle setting of 0 < w[0] < 1, use a one-term Taylor series expansion. In calculus, you
learn that any function, say y = f(x), can be approximated near the point x[0] using f(x[0]) and its derivatives evaluated at x[0]. A one-term approximation is linear in x – x[0], that is

f(x) ≈ f(x[0]) + f′(x[0])(x – x[0])
If f(x) = x^2 the expansion becomes

x^2 ≈ x[0]^2 + 2x[0](x – x[0])

because f′(x) = 2x evaluated at x[0] is 2x[0].
For the problem at hand, you want to expand with respect to the nominal velocity v[0] and throttle setting w[0]. Make substitutions into the original differential equation as follows:

v(t) = v[0] + Δv(t) and w(t) = w[0] + Δw(t)

where Δv(t) and Δw(t) represent the deviation of velocity and throttle settings away from the nominal values. These are the new modeling parameters. In the original differential equation:

dΔv(t)/dt = (c/m)w[0] – (ρ/m)v[0]^2 + (c/m)Δw(t) – (2ρv[0]/m)Δv(t) – gθu(t)
where a step function u(t) is included on the hill gravity term to model the hill onset at t = 0. Note in the last line the term (c/m)w[0] – (ρ/m)v[0]^2 is zero because this is the nominal constant operating point, which also corresponds to the throttle setting w[0]. If you define the time constant

τ = m/(2ρv[0]),
the now linearized differential equation becomes

dΔv(t)/dt + Δv(t)/τ = (c/m)Δw(t) – gθu(t)
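As a sanity check on this linearization, the following sketch (illustrative parameter values, simple forward-Euler integration) compares the nonlinear model against the deviation model dΔv/dt = (c/m)Δw – Δv/τ for a small throttle step on flat road:

```python
# Sketch (illustrative values only): integrate the nonlinear model and the
# linearized deviation model for a small throttle step on flat road (theta = 0)
# and compare.  Simple forward-Euler integration.
m, c, rho = 1000.0, 4000.0, 1.0

w0 = 0.25                            # nominal throttle
v0 = (c * w0 / rho) ** 0.5           # nominal speed: (c/m) w0 = (rho/m) v0^2
tau = m / (2 * rho * v0)             # linear-model time constant
dw = 0.01                            # small throttle deviation

dt, steps = 0.01, 20000
v_nl, dv = v0, 0.0
for _ in range(steps):
    v_nl += dt * ((c / m) * (w0 + dw) - (rho / m) * v_nl ** 2)   # nonlinear
    dv += dt * ((c / m) * dw - dv / tau)                         # linearized

err = abs((v_nl - v0) - dv)
assert err < 0.05 * abs(dv)          # deviation model tracks within a few percent
```

For small Δw the two responses agree closely; larger throttle steps expose the quadratic drag term the linear model drops.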
[Haskell-cafe] handling rank 2 types
Andrew Pimlott andrew at pimlott.net
Fri Nov 4 14:26:37 EST 2005
On Thu, Nov 03, 2005 at 11:57:49PM -0800, Ralf Lammel wrote:
> Let's do an experiment.
> We replace the IO monad by the Id(entity) monad.
> We actually replace the Id newtype also to become true type-level
> identity.
> So we get:
> --
> -- This is like your 2nd say unchecked... sample
> --
> fooBar :: [v] -> Empty
> fooBar l = Empty (map no l)
> where
> no :: a -> a'
> no x = error "element found"
> But the problem is not about "a higher-rank type occurring with a type
> constructor", as you guess.
I thought my explanation sounded fishy. Someone else explained this to
me off-list as well.
> It is rather about functions with classic rank-1 types.
> What you could do is define a suitably typed application operator
> (likewise, a suitably typed liftM).
> In the non-monadic example, we need:
> -- Use app instead of ($)
> app :: ((forall a. [a]) -> c) -> (forall b. [b]) -> c
> app f x = f x
Ok, but this would have to be rewritten for different kinds:
app1 :: ((forall a. c1 a) -> c) -> (forall b. c1 b) -> c
app1 f x = f x
app2 :: ((forall a. c1 c2 a) -> c) -> (forall b. c1 c2 b) -> c
app2 f x = f x
Furthermore, I don't believe it works for me. I would need something like
liftM' :: Monad m => (forall a. [a] -> c) -> m (forall b. [b]) -> m c
liftM' f x = ???
First, the type is illegal: "you cannot make a forall-type the argument
of a type constructor". Second, even if it were, I would need versions
of the monad operators with custom types to implement it, eg
bind :: Monad m => m (forall a. [a]) -> ((forall b. [b]) -> m c) -> m c
Or am I wrong? An alternate (and legal) signature for liftM' is
liftM' :: Monad m => (forall a. [a] -> c) -> (forall b. m [b]) -> m c
but to implement this, I think I would need
bind :: Monad m => (forall a. m [a]) -> ((forall b. [b]) -> m c) -> m c
I tried implementing this just for Maybe:
bindMaybe :: (forall a. Maybe [a]) -> ((forall b. [b]) -> Maybe c) ->
Maybe c
bindMaybe x f = case x of
Just x' -> f x'
Nothing -> Nothing
But as expected, it doesn't typecheck, and AFAICT it shouldn't.
So my conclusion is that writing my own higher-ranked library wouldn't
work; I would still need newtype wrappers.
> BTW, what operations are we supposed to perform on the content of Empty;
> what's the real purpose for introducing this independent type variable
> "a'"?
Suppose instead
newtype NoRight = NoRight (forall a. [Either Int a])
An alternative might be
data No
type NoRight = [Either Int No]
But the first version I can use directly (after unwrapping NoRight) as
an [Either Int Int], [Either Int String], etc.
My real purpose is to enforce that a logical statement has no variables.
More information about the Haskell-Cafe mailing list | {"url":"http://www.haskell.org/pipermail/haskell-cafe/2005-November/011975.html","timestamp":"2014-04-18T00:18:24Z","content_type":null,"content_length":"5779","record_id":"<urn:uuid:fdf5d997-d870-4a69-9e8a-8918b3327fbe>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00087-ip-10-147-4-33.ec2.internal.warc.gz"} |
Paired single
From WiiBrew
Paired singles are a unique part of the Gekko/Broadway processors used in the Gamecube and Wii. They provide fast vector math by keeping two single-precision floating point numbers in a single
floating point register, and doing math across registers. This page will demonstrate how these instructions work.
Quantization and Dequantization
All numbers must be quantized before being put into Paired Singles. For conversion from non-floats, in order to allow for greater flexibility, there is a form of scaling implemented. All quantization
is controlled by the GQRs (Graphics Quantization Registers). The GQRs are 32bit registers containing the conversion types and scaling factors for storing and loading. (During loading, it dequantizes.
During storing, it quantizes.)
GQR (32 bits):
  Bits 31-30: Unused
  Bits 29-24: L_Scale (R/W)
  Bits 23-19: Unused
  Bits 18-16: L_Type (R/W)
  Bits 15-14: Unused
  Bits 13-8:  S_Scale (R/W)
  Bits 7-3:   Unused
  Bits 2-0:   S_Type (R/W)
Field Description
L_* Values used during dequantization (load).
S_* Values used during quantization (store).
Scale Signed. During dequantization, divide the number by 2^scale. During quantization, multiply the number by 2^scale.
Type 0: Float (no scaling during de/quantization), 4: Unsigned 8-bit, 5: Unsigned 16-bit, 6: Signed 8-bit, 7: Signed 16-bit.
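The scale rules above can be sketched as follows (an illustrative Python model, not hardware-exact; it ignores the float type and rounding-mode details, and adds saturation to the integer range on store):

```python
# Illustrative model (not hardware-exact) of the integer-type scaling:
# quantization multiplies by 2^scale and saturates; dequantization divides.
def quantize(x, scale, lo, hi):
    n = int(round(x * 2.0 ** scale))
    return max(lo, min(hi, n))       # clamp to the integer type's range

def dequantize(n, scale):
    return n / 2.0 ** scale

# Round-trip a float through a signed 16-bit slot with scale = 8
# (8 fractional bits): the error is at most half a quantization step.
x = 3.14159
y = dequantize(quantize(x, 8, -32768, 32767), 8)
assert abs(y - x) <= 0.5 / 2 ** 8
```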
Loading and Storing
To load and store Paired-singles, one must use the psq_l and psq_st instructions respectively, or one of their variants.
psq_l frD, d(rA), W, I
This instruction dequantizes values from the memory address in d+(rA|0) and puts them into PS0 and PS1 in frD. If W is 1, however, it only dequantizes one number, and places that into PS0; PS1 is always loaded with 1.0 when W is 1. I specifies the GQR to use for dequantization parameters. The two numbers read from memory are directly after each other, regardless of size (for example, if the GQR specifies loading as a u16, you would have d+(rA|0) point to a two-element array of u16s).
psq_lx frD, rA, rB, W, I
This instruction acts exactly like psq_l, except instead of (rA) being offset by d, it is offset by (rB).
psq_lu frD, d(rA), W, I
This instruction acts exactly like psq_l, except rA cannot be 0, and d+(rA) is placed back into rA.
psq_lux frD, rA, rB, W, I
This instruction acts exactly like psq_lx, except rA cannot be 0, and rB+(rA) is placed back into rA.
psq_st frD, d(rA), W, I
This instruction quantizes values from the Paired Singles in frD and places them in the memory address in d+(rA|0). If W is 1, however, it only quantizes PS0. I specifies the GQR to use for
dequantization parameters. The two numbers written to memory are directly after each other, regardless of size (for example, if the GQR specified to store as a u16, d+(rA|0) would be treated as a
two-element array of u16s)
psq_stx frD, rA, rB, W, I
This instruction acts exactly like psq_st, except instead of (rA) being offset by d, it is offset by (rB).
psq_stu frD, d(rA), W, I
This instruction acts exactly like psq_st, except rA cannot be 0, and d+(rA) is placed back into rA.
psq_stux frD, rA, rB, W, I
This instruction acts exactly like psq_stx, except rA cannot be 0, and rB+(rA) is placed back into rA.
Single Parameter Operations
These functions operate on one FPR.
Single floating-point absolute value on both ps0 and ps1.
ps_abs frD, frB
frD(ps0) = abs(frB(ps0))
frD(ps1) = abs(frB(ps1))
Move both ps0 and ps1 from one fpr to another.
ps_mr frD, frB
frD(ps0) = frB(ps0)
frD(ps1) = frB(ps1)
Single floating-point negative abs value on both ps0 and ps1.
ps_nabs frD, frB
frD(ps0) = -abs(frB(ps0))
frD(ps1) = -abs(frB(ps1))
Single floating-point negate on both ps0 and ps1.
ps_neg frD, frB
frD(ps0) = -frB(ps0)
frD(ps1) = -frB(ps1)
Single floating-point reciprocal estimate of ps0 and ps1.
ps_res frD, frB
frD(ps0) = 1/frB(ps0)
frD(ps1) = 1/frB(ps1)
Accurate to a precision of 1/4096.
Single floating-point reciprocal sqrt estimate.
ps_rsqrte frD, frB
frD(ps0) = 1/sqrt(frB(ps0))
frD(ps1) = 1/sqrt(frB(ps1))
Accurate to a precision of 1/4096.
Basic Math
Simple everyday math.
Single floating-point add on both ps0 and ps1.
ps_add frD, frA, frB
frD(ps0) = frA(ps0) + frB(ps0)
frD(ps1) = frA(ps1) + frB(ps1)
Single floating-point subtract on both ps0 and ps1.
ps_sub frD, frA, frB
frD(ps0) = frA(ps0) - frB(ps0)
frD(ps1) = frA(ps1) - frB(ps1)
Single floating-point multiply on both ps0 and ps1.
ps_mul frD, frA, frC
frD(ps0) = frA(ps0) * frC(ps0)
frD(ps1) = frA(ps1) * frC(ps1)
Single floating-point divide on both ps0 and ps1.
ps_div frD, frA, frB
frD(ps0) = frA(ps0) / frB(ps0)
frD(ps1) = frA(ps1) / frB(ps1)
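The slot-wise behavior of these basic operations is easy to model. A minimal illustrative sketch in Python (not hardware behavior), representing each FPR as a (ps0, ps1) tuple:

```python
# Minimal slot-wise model of the basic math instructions; each FPR is a
# (ps0, ps1) tuple.  Illustrative Python, not hardware behavior.
def ps_add(a, b): return (a[0] + b[0], a[1] + b[1])
def ps_sub(a, b): return (a[0] - b[0], a[1] - b[1])
def ps_mul(a, c): return (a[0] * c[0], a[1] * c[1])
def ps_div(a, b): return (a[0] / b[0], a[1] / b[1])

fr1, fr2 = (1.0, 2.0), (4.0, 8.0)
assert ps_add(fr1, fr2) == (5.0, 10.0)
assert ps_mul(fr1, fr2) == (4.0, 16.0)
```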
Ordered (ps_cmpo0) or unordered (ps_cmpu0) compare of ps0 values.
ps_cmpo0 crfD, frA, frB
ps_cmpu0 crfD, frA, frB
crfD = frA(ps0) compare frB(ps0)
Ordered (ps_cmpo1) or unordered (ps_cmpu1) compare of ps1 values.
ps_cmpo1 crfD, frA, frB
ps_cmpu1 crfD, frA, frB
crfD = frA(ps1) compare frB(ps1)
Complex Multiply
These instructions multiply in complex ways
Single floating-point madd on both ps0 and ps1.
ps_madd frD, frA, frC, frB
frD(ps0) = frA(ps0) * frC(ps0) + frB(ps0)
frD(ps1) = frA(ps1) * frC(ps1) + frB(ps1)
Scalar-vector multiply-add using ps0 for scalar.
ps_madds0 frD, frA, frC, frB
frD(ps0) = frA(ps0) * frC(ps0) + frB(ps0)
frD(ps1) = frA(ps1) * frC(ps0) + frB(ps1)
Scalar-vector multiply-add using ps1 for scalar.
ps_madds1 frD, frA, frC, frB
frD(ps0) = frA(ps0) * frC(ps1) + frB(ps0)
frD(ps1) = frA(ps1) * frC(ps1) + frB(ps1)
Single floating-point msub on both ps0 and ps1.
ps_msub frD, frA, frC, frB
frD(ps0) = frA(ps0) * frC(ps0) - frB(ps0)
frD(ps1) = frA(ps1) * frC(ps1) - frB(ps1)
Scalar-vector multiply using ps0 for scalar.
ps_muls0 frD, frA, frC
frD(ps0) = frA(ps0) * frC(ps0)
frD(ps1) = frA(ps1) * frC(ps0)
Scalar-vector multiply using ps1 for scalar.
ps_muls1 frD, frA, frC
frD(ps0) = frA(ps0) * frC(ps1)
frD(ps1) = frA(ps1) * frC(ps1)
Single floating-point nmadd on both ps0 and ps1.
ps_nmadd frD, frA, frC, frB
frD(ps0) = -(frA(ps0) * frC(ps0) + frB(ps0))
frD(ps1) = -(frA(ps1) * frC(ps1) + frB(ps1))
Single floating-point nmsub on both ps0 and ps1.
ps_nmsub frD, frA, frC, frB
frD(ps0) = -(frA(ps0) * frC(ps0) - frB(ps0))
frD(ps1) = -(frA(ps1) * frC(ps1) - frB(ps1))
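The scalar-vector forms broadcast one slot of frC across both slots of frA. A small illustrative Python model of two of them (not hardware code; it ignores rounding and the negated fused forms):

```python
# Illustrative Python model of two of the broadcast forms: ps_madds0 uses
# frC's ps0 slot for both lanes, ps_muls1 uses frC's ps1 slot.
def ps_madds0(frA, frC, frB):
    return (frA[0] * frC[0] + frB[0], frA[1] * frC[0] + frB[1])

def ps_muls1(frA, frC):
    return (frA[0] * frC[1], frA[1] * frC[1])

frA, frB, frC = (1.0, 2.0), (10.0, 20.0), (3.0, 5.0)
assert ps_madds0(frA, frC, frB) == (13.0, 26.0)   # (1*3+10, 2*3+20)
assert ps_muls1(frA, frC) == (5.0, 10.0)          # (1*5, 2*5)
```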
Miscellaneous
Whatever doesn't fit into the other categories.
Register move allowing swap/merge of ps0 values.
ps_merge00 frD, frA, frB
frD(ps0) = frA(ps0)
frD(ps1) = frB(ps0)
Register move allowing swap/merge of ps0 and ps1 values.
ps_merge01 frD, frA, frB
frD(ps0) = frA(ps0)
frD(ps1) = frB(ps1)
Register move allowing swap/merge of ps1 and ps0 values.
ps_merge10 frD, frA, frB
frD(ps0) = frA(ps1)
frD(ps1) = frB(ps0)
Register move allowing swap/merge of ps1 values.
ps_merge11 frD, frA, frB
frD(ps0) = frA(ps1)
frD(ps1) = frB(ps1)
Single floating-point select on both ps0 and ps1.
ps_sel frD, frA, frC, frB
if(frA(ps0) >= 0)
frD(ps0) = frC(ps0)
else
frD(ps0) = frB(ps0)
if(frA(ps1) >= 0)
frD(ps1) = frC(ps1)
else
frD(ps1) = frB(ps1)
Add a ps0 value to a ps1 value, result in ps0.
ps_sum0 frD, frA, frC, frB
frD(ps0) = frA(ps0) + frB(ps1)
frD(ps1) = frC(ps1)
Add a ps0 value to a ps1 value, result in ps1.
ps_sum1 frD, frA, frC, frB
frD(ps0) = frC(ps0)
frD(ps1) = frA(ps0) + frB(ps1) | {"url":"http://wiibrew.org/wiki/Paired_single","timestamp":"2014-04-19T20:26:04Z","content_type":null,"content_length":"35013","record_id":"<urn:uuid:f37aef1c-37fb-4f5a-9cd9-7ad95db1a83a>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00608-ip-10-147-4-33.ec2.internal.warc.gz"} |
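The merge and sum forms can be modeled the same way; ps_mul followed by ps_sum0 is the usual two-element dot-product idiom. An illustrative Python sketch (not hardware behavior):

```python
# Illustrative Python model of a merge and a cross-slot sum, plus the usual
# two-element dot-product idiom (ps_mul then ps_sum0).
def ps_mul(a, c):
    return (a[0] * c[0], a[1] * c[1])

def ps_merge10(a, b):
    return (a[1], b[0])

def ps_sum0(a, c, b):
    return (a[0] + b[1], c[1])

frA, frB = (1.0, 2.0), (3.0, 9.0)
assert ps_merge10(frA, frB) == (2.0, 3.0)

p = ps_mul(frA, frB)                 # (1*3, 2*9)
dot = ps_sum0(p, p, p)[0]            # ps0 + ps1 of the product
assert dot == 21.0
```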
An optimal control problem for flows with discontinuities
Results 1 - 10 of 11
- SIAM J. CONTROL OPTIM , 1997
"... In this paper a family of trust-region interior-point SQP algorithms for the solution of a class of minimization problems with nonlinear equality constraints and simple bounds on some of the
variables is described and analyzed. Such nonlinear programs arise e.g. from the discretization of optimal co ..."
Cited by 35 (8 self)
In this paper a family of trust-region interior-point SQP algorithms for the solution of a class of minimization problems with nonlinear equality constraints and simple bounds on some of the
variables is described and analyzed. Such nonlinear programs arise e.g. from the discretization of optimal control problems. The algorithms treat states and controls as independent variables. They
are designed to take advantage of the structure of the problem. In particular they do not rely on matrix factorizations of the linearized constraints, but use solutions of the linearized state
equation and the adjoint equation. They are well suited for large scale problems arising from optimal control problems governed by partial differential equations. The algorithms keep strict
feasibility with respect to the bound constraints by using an affine scaling method proposed for a different class of problems by Coleman and Li and they exploit trust--region techniques for
equality-constrained optimizatio...
- Journal of Computational Physics , 2003
"... Optimal control of the 1-D Riemann problem of Euler equations whose solution is characterized by discontinuities is carried out by both nonsmooth and smooth op- timization methods. By matching a
desired flow to the numerical model for a given time window we effectively change the location of discont ..."
Cited by 15 (1 self)
Optimal control of the 1-D Riemann problem of Euler equations whose solution is characterized by discontinuities is carried out by both nonsmooth and smooth op- timization methods. By matching a
desired flow to the numerical model for a given time window we effectively change the location of discontinuities. The control pa- rameters are chosen to be the initial values for pressure and
density fields. Existence of solutions for the optimal control problem is proven. A high resolution model and a model with artificial viscosity, implementing two different numerical methods, are used
to solve the forward model. The cost functional is the weighted difference be- tween the numerical values and the observations for pressure, density and velocity. The observations are constructed
from the analytical solution. We consider either distributed observations in time or observations calculated at the end of the assimi- lation window. We consider two different time horizons and two
sets of observations. The gradient (respectively a subgradient) of the cost functional, obtained from the adjoint of the discrete forward model, are employed for the smooth minimization (respectively
for the nonsmooth minimization) algorithm. Discontinuity detection improves the performance of the minimizer for the model with artificial viscosity by selecting the points where the shock occurs
(and these points are then removed from the cost functional and its gradient). The numerical flow obtained with the optimal initial
obtained from the nonsmooth minimization matches very well the observations. The algorithm for smooth minimization converges for the shorter time horizon but fails to perform satisfactorily for the
longer time horizon.
- ACM Transactions on Mathematical Software , 1998
"... This paper is concerned with the implementation of optimization algorithms for the solution of smooth discretized optimal control problems. The problems under consideration can be written as min
f(y; u) ..."
Cited by 13 (7 self)
This paper is concerned with the implementation of optimization algorithms for the solution of smooth discretized optimal control problems. The problems under consideration can be written as min f(y;
, 1995
"... In this paper we analyze inexact trust-region interior-point (TRIP) sequential quadratic programming (SQP) algorithms for the solution of optimization problems with nonlinear equality
constraints and simple bound constraints on some of the variables. Such problems arise in many engineering applicati ..."
Cited by 11 (7 self)
In this paper we analyze inexact trust-region interior-point (TRIP) sequential quadratic programming (SQP) algorithms for the solution of optimization problems with nonlinear equality constraints and
simple bound constraints on some of the variables. Such problems arise in many engineering applications, in particular in optimal control problems with bounds on the control. The nonlinear
constraints often come from the discretization of partial differential equations. In such cases the calculation of derivative information and the solution of linearized equations is expensive. Often,
the solution of linear systems and derivatives are computed inexactly yielding nonzero residuals. This paper analyzes the effect of the inexactness onto the convergence of TRIP SQP and gives
practical rules to control the size of the residuals of these inexact calculations. It is shown that if the size of the residuals is of the order of both the size of the constraints and the
trust-region radius, t...
, 1997
"... The all-at-once approach is implemented to solve an optimum airfoil design problem. The airfoil design problem is formulated as a constrained optimization problem in which flow variables and
design variables are viewed as independent and the coupling steady state Euler equation is included as a cons ..."
Cited by 9 (0 self)
The all-at-once approach is implemented to solve an optimum airfoil design problem. The airfoil design problem is formulated as a constrained optimization problem in which flow variables and design
variables are viewed as independent and the coupling steady state Euler equation is included as a constraint, along with geometry and other constraints. In this formulation, the optimizer computes a
sequence of points which tend toward feasiblility and optimality at the same time (all--at--once). This decoupling of variables typically makes the problem less nonlinear and can lead to more
efficient solutions. In this paper an existing optimization algorithm is combined with an existing flow code. The problem formulation, its discretization, and the underlying solvers are described.
Implementation issues are presented and numerical results are given which indicate that the cost of solving the design problem is approximately six times the cost of solving a single analysis
, 1995
"... In this paper we analyze inexact trust-region interior-point (TRIP) sequential quadratic programming (SQP) algorithms for the solution of optimization problems with nonlinear equality constraints and simple bound constraints on some of the variables. Such problems arise in many engineering applicati ..."
Cited by 8 (0 self)
In this paper we analyze inexact trust-region interior-point (TRIP) sequential quadratic programming (SQP) algorithms for the solution of optimization problems with nonlinear equality constraints and simple bound constraints on some of the variables. Such problems arise in many engineering applications, in particular in optimal control problems with bounds on the control. The nonlinear constraints often come from the discretization of partial differential equations. In such cases the calculation of derivative information and the solution of linearized equations is expensive. Often, the solution of linear systems and derivatives are computed inexactly yielding nonzero residuals. This paper
- OPTIM. METHODS SOFTW , 1998
"... In this paper we consider a class of nonlinear programming problems that arise from the discretization of optimal control problems with bounds on both the state and the control variables. For
this class of problems, we analyze constraint qualifications and optimality conditions in detail. We derive ..."
Cited by 7 (2 self)
In this paper we consider a class of nonlinear programming problems that arise from the discretization of optimal control problems with bounds on both the state and the control variables. For this
class of problems, we analyze constraint qualifications and optimality conditions in detail. We derive an affine-scaling and two primal-dual interior-point Newton algorithms by applying, in an
interior-point way, Newton's method to equivalent forms of the first-order optimality conditions. Under appropriate assumptions, the interior-point Newton algorithms are shown to be locally
well-defined with a q-quadratic rate of local convergence. By using the structure of the problem, the linear algebra of these algorithms can be reduced to the null space of the Jacobian of the
equality constraints. The similarities between the three algorithms are pointed out, and their corresponding versions for the general nonlinear programming problem are discussed.
"... . In this paper we approximate large sets of univariate data by piecewise linear functions which interpolate subsets of the data, using adaptive thinning strategies. Rather than minimize the
global error at each removal (AT0), we propose a much cheaper thinning strategy (AT1) which only minimize ..."
. In this paper we approximate large sets of univariate data by piecewise linear functions which interpolate subsets of the data, using adaptive thinning strategies. Rather than minimize the global
error at each removal (AT0), we propose a much cheaper thinning strategy (AT1) which only minimizes errors locally. Interestingly, the two strategies are equivalent in all our numerical tests and we
prove this to be true for convex data. We also compare with non-adaptive thinning strategies. x1. Introduction In applications such as visualization, it is often desirable to generate a hierarchy of
coarser and coarser representations of a given discrete data set. Though we are primarily interested in hierarchies of scattered data sets, and in particular piecewise linear approximations over
triangulations in the plane [1], we focus in this paper on univariate data sets and propose several adaptive thinning strategies. Thinning algorithms generate hierarchies of subsets by removing
points ...
, 2000
"... We present a sensitivity and adjoint calculus for the control of entropy solutions of scalar conservation laws with controlled initial data and source term. The sensitivity analysis is based on
shift-variations which are the sum of a standard variation and suitable corrections by weighted indicator ..."
We present a sensitivity and adjoint calculus for the control of entropy solutions of scalar conservation laws with controlled initial data and source term. The sensitivity analysis is based on
shift-variations which are the sum of a standard variation and suitable corrections by weighted indicator functions approximating the movement of the shock locations. Based on a first order
approximation by shift-variations in L¹ we introduce the concept of shift-differentiability which is applicable to operators having functions with moving discontinuities as images and implies
differentiability for a large class of tracking-type functionals. In the main part of the paper we show that entropy solutions are generically shift-differentiable at almost all times t ? 0 with
respect to the control. Hereby we admit shift-variations for the initial data which allows to use the shift-differentiability result repeatedly over time slabs. This is useful for the design of
optimization methods with time...
, 2003
"... Optimal control of the 1-D Riemann problem of Euler equations is studied, with the initial values for pressure and density as control parameters. The least-squares type cost functional employs either distributed observations in time or observations calculated at the end of the assimilation win ..."
Optimal control of the 1-D Riemann problem of Euler equations is studied, with the initial values for pressure and density as control parameters. The least-squares type cost functional employs either distributed observations in time or observations calculated at the end of the assimilation window. Existence of solutions for the optimal control problem is proven. Smooth and nonsmooth optimization methods employ the numerical gradient (respectively, a subgradient) of the cost functional, obtained from the adjoint of the discrete forward model. The numerical flow obtained with the optimal initial conditions obtained from the nonsmooth minimization matches very well with the observations. The algorithm for smooth minimization converges for the shorter time horizon but fails to perform satisfactorily for the longer time horizon, except when the observations corresponding to shocks are detected and removed.